id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
1007.0603 | Toby Walsh | Christian Bessiere and George Katsirelos and Nina Narodytska and
Claude-Guy Quimper and Toby Walsh | Decomposition of the NVALUE constraint | To appear in the Proceedings of the 16th International Conference on
Principles and Practice of Constraint Programming 2010 (CP 2010). An earlier
version appeared in the Proceedings of the Eighth International Workshop on
Constraint Modelling and Reformulation, held alongside the 15th International
Conference on Principles and Practice of Constraint Programming (CP 2009) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study decompositions of the global NVALUE constraint. Our main
contribution is theoretical: we show that there are propagators for global
constraints like NVALUE which decomposition can simulate with the same time
complexity but with a much greater space complexity. This suggests that the
benefit of a global propagator may often not be in saving time but in saving
space. Our other theoretical contribution is to show for the first time that
range consistency can be enforced on NVALUE with the same worst-case time
complexity as bound consistency. Finally, the decompositions we study are
readily encoded as linear inequalities. We are therefore able to use them in
integer linear programs.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2010 02:27:09 GMT"
}
] | 1,278,374,400,000 | [
[
"Bessiere",
"Christian",
""
],
[
"Katsirelos",
"George",
""
],
[
"Narodytska",
"Nina",
""
],
[
"Quimper",
"Claude-Guy",
""
],
[
"Walsh",
"Toby",
""
]
] |
1007.0604 | Toby Walsh | Toby Walsh | Symmetry within and between solutions | Keynote talk to appear in the Proceedings of the Eleventh Pacific Rim
International Conference on Artificial Intelligence (PRICAI-10) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetry can be used to help solve many problems. For instance, Einstein's
famous 1905 paper ("On the Electrodynamics of Moving Bodies") uses symmetry to
help derive the laws of special relativity. In artificial intelligence,
symmetry has played an important role in both problem representation and
reasoning. I describe recent work on using symmetry to help solve constraint
satisfaction problems. Symmetries occur within individual solutions of problems
as well as between different solutions of the same problem. Symmetry can also
be applied to the constraints in a problem to give new symmetric constraints.
Reasoning about symmetry can speed up problem solving, and has led to the
discovery of new results in both graph and number theory.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2010 02:36:35 GMT"
}
] | 1,278,374,400,000 | [
[
"Walsh",
"Toby",
""
]
] |
1007.0614 | Toby Walsh | Toby Walsh | Online Cake Cutting | To appear in Proceedings of the Third International Workshop on
Computational Social Choice (COMSOC-2010) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an online form of the cake cutting problem. This models situations
where players arrive and depart during the process of dividing a resource. We
show that well known fair division procedures like cut-and-choose and the
Dubins-Spanier moving knife procedure can be adapted to apply to such online
problems. We propose some desirable properties that online cake cutting
procedures might possess like online forms of proportionality and
envy-freeness, and identify which properties are in fact possessed by the
different online cake cutting procedures.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2010 04:27:03 GMT"
}
] | 1,278,374,400,000 | [
[
"Walsh",
"Toby",
""
]
] |
1007.0637 | Mirco Gelain | Mirco Gelain, Maria Silvia Pini, Francesca Rossi, Kristen Brent
Venable, Toby Walsh | Local search for stable marriage problems with ties and incomplete lists | 12 pages, Proc. PRICAI 2010 (11th Pacific Rim International
Conference on Artificial Intelligence), Byoung-Tak Zhang and Mehmet A. Orgun
eds., Springer LNAI | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The stable marriage problem has a wide variety of practical applications,
ranging from matching resident doctors to hospitals, to matching students to
schools, or more generally to any two-sided market. We consider a useful
variation of the stable marriage problem, where the men and women express their
preferences using a preference list with ties over a subset of the members of
the other sex. Matchings are permitted only with people who appear in these
preference lists. In this setting, we study the problem of finding a stable
matching that marries as many people as possible. Stability is an envy-free
notion: no man and woman who are not married to each other would both prefer
each other to their partners or to being single. This problem is NP-hard. We
tackle this problem using local search, exploiting properties of the problem to
reduce the size of the neighborhood and to make local moves efficiently.
Experimental results show that this approach is able to solve large problems,
quickly returning stable matchings of large and often optimal size.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2010 08:08:00 GMT"
}
] | 1,278,374,400,000 | [
[
"Gelain",
"Mirco",
""
],
[
"Pini",
"Maria Silvia",
""
],
[
"RossI",
"Francesca",
""
],
[
"Venable",
"Kristen Brent",
""
],
[
"Walsh",
"Toby",
""
]
] |
1007.0690 | Avinash Achar | Avinash Achar, Srivatsan Laxman and P. S. Sastry | A unified view of Automata-based algorithms for Frequent Episode
Discovery | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Frequent Episode Discovery framework is a popular framework in Temporal Data
Mining with many applications. Over the years many different notions of
frequencies of episodes have been proposed along with different algorithms for
episode discovery. In this paper we present a unified view of all such
frequency counting algorithms. We present a generic algorithm such that all
current algorithms are special cases of it. This unified view allows one to
gain insights into different frequencies and we present quantitative
relationships among different frequencies. Our unified view also helps in
obtaining correctness proofs for various algorithms as we show here. We also
point out how this unified view helps us to consider generalizations of the
algorithm so that it can discover episodes with general partial orders.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2010 14:35:20 GMT"
}
] | 1,278,374,400,000 | [
[
"Achar",
"Avinash",
""
],
[
"Laxman",
"Srivatsan",
""
],
[
"Sastry",
"P. S.",
""
]
] |
1007.0859 | Mirco Gelain | M. Gelain and M. S. Pini and F. Rossi and K. B. Venable and T. Walsh | Local search for stable marriage problems | 12 pages, Proc. COMSOC 2010 (Third International Workshop on
Computational Social Choice) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The stable marriage (SM) problem has a wide variety of practical
applications, ranging from matching resident doctors to hospitals, to matching
students to schools, or more generally to any two-sided market. In the
classical formulation, n men and n women express their preferences (via a
strict total order) over the members of the other sex. Solving a SM problem
means finding a stable marriage where stability is an envy-free notion: no man
and woman who are not married to each other would both prefer each other to
their partners or to being single. We consider both the classical stable
marriage problem and one of its useful variations (denoted SMTI) where the men
and women express their preferences in the form of an incomplete preference
list with ties over a subset of the members of the other sex. Matchings are
permitted only with people who appear in these lists, and we try to find a
stable matching that marries as many people as possible. Whilst the SM problem
is polynomial to solve, the SMTI problem is NP-hard. We propose to tackle both
problems via a local search approach, which exploits properties of the problems
to reduce the size of the neighborhood and to make local moves efficiently. We
evaluate empirically our algorithm for SM problems by measuring its runtime
behaviour and its ability to sample the lattice of all possible stable
marriages. We evaluate our algorithm for SMTI problems in terms of both its
runtime behaviour and its ability to find a maximum cardinality stable
marriage. For SM problems, the number of steps of our algorithm grows only as
O(n log(n)), and it samples the set of all stable marriages very well. It is
thus a fair and efficient approach to generating stable marriages. Furthermore,
our approach for SMTI problems is able to solve large problems, quickly
returning stable matchings of large and often optimal size despite the
NP-hardness of this problem.
| [
{
"version": "v1",
"created": "Tue, 6 Jul 2010 10:52:44 GMT"
}
] | 1,278,460,800,000 | [
[
"Gelain",
"M.",
""
],
[
"Pini",
"M. S.",
""
],
[
"Rossi",
"F.",
""
],
[
"Venable",
"K. B.",
""
],
[
"Walsh",
"T.",
""
]
] |
1007.1766 | Tshilidzi Marwala | Gidudu Anthony, Hulley Gregg, and Marwala Tshilidzi | An SVM multiclassifier approach to land cover mapping | ASPRS 2008 Annual Conference Portland, Oregon. April 28 - May 2, 2008 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the advent of the application of satellite imagery to land cover
mapping, one of the growing areas of research interest has been
image classification. Image classifiers are algorithms used to extract land
cover information from satellite imagery. Most of the initial research has
focussed on the development and application of algorithms to better existing
and emerging classifiers. In this paper, a paradigm shift is proposed whereby a
committee of classifiers is used to determine the final classification output.
Two of the key components of an ensemble system are that there should be
diversity among the classifiers and that there should be a mechanism through
which the results are combined. In this paper, the members of the ensemble
system include: Linear SVM, Gaussian SVM and Quadratic SVM. The final output
was determined through a simple majority vote of the individual classifiers.
From the results obtained it was observed that the final derived map generated
by an ensemble system can potentially improve on the results derived from the
individual classifiers making up the ensemble system. The ensemble system
classification accuracy was, in this case, better than the linear and quadratic
SVM result. It was however less than that of the RBF SVM. Areas for further
research could focus on improving the diversity of the ensemble system used in
this research.
| [
{
"version": "v1",
"created": "Sun, 11 Jul 2010 09:36:07 GMT"
}
] | 1,278,979,200,000 | [
[
"Anthony",
"Gidudu",
""
],
[
"Gregg",
"Hulley",
""
],
[
"Tshilidzi",
"Marwala",
""
]
] |
1007.2534 | Xavier Mora | Rosa Camps, Xavier Mora, Laia Saumell | A general method for deciding about logically constrained issues | Several substantial improvements have been included. The outline
structure of the article has also undergone some changes | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A general method is given for revising degrees of belief and arriving at
consistent decisions about a system of logically constrained issues. In
contrast to other works about belief revision, here the constraints are assumed
to be fixed. The method has two variants, dual of each other, whose revised
degrees of belief are respectively above and below the original ones. The upper
[resp. lower] revised degrees of belief are uniquely characterized as the
lowest [resp. highest] ones that are invariant by a certain max-min [resp.
min-max] operation determined by the logical constraints. In both variants,
striking a balance between the revised degree of belief of a proposition and that
of its negation leads to decisions that are ensured to be consistent with the
logical constraints. These decisions are ensured to agree with the majority
criterion as applied to the original degrees of belief whenever this gives a
consistent result. They are also ensured to satisfy a property of respect
for unanimity about any particular issue, as well as a property of monotonicity
with respect to the original degrees of belief. The application of the method
to certain special domains comes down to well established or increasingly
accepted methods, such as the single-link method of cluster analysis and the
method of paths in preferential voting.
| [
{
"version": "v1",
"created": "Thu, 15 Jul 2010 11:55:38 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Oct 2010 10:24:34 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Jul 2011 13:23:07 GMT"
},
{
"version": "v4",
"created": "Thu, 8 Mar 2012 17:12:15 GMT"
}
] | 1,331,251,200,000 | [
[
"Camps",
"Rosa",
""
],
[
"Mora",
"Xavier",
""
],
[
"Saumell",
"Laia",
""
]
] |
1007.3159 | Fabrizio Riguzzi PhD | Marco Gavanelli and Fabrizio Riguzzi and Michela Milano and Paolo
Cagnoli | Logic-Based Decision Support for Strategic Environmental Assessment | 17 pages, 1 figure, 26th Int'l. Conference on Logic Programming
(ICLP'10) | Theory and Practice of Logic Programming, 26th Int'l. Conference
on Logic Programming (ICLP'10) Special Issue, 10(4-6), 643-658, 2010 | 10.1017/S1471068410000335 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Strategic Environmental Assessment is a procedure aimed at introducing
systematic assessment of the environmental effects of plans and programs. This
procedure is based on the so-called coaxial matrices that define dependencies
between plan activities (infrastructures, plants, resource extractions,
buildings, etc.) and positive and negative environmental impacts, and
dependencies between these impacts and environmental receptors. Up to now, this
procedure has been manually implemented by environmental experts for checking the
environmental effects of a given plan or program, but it is never applied
during the plan/program construction. A decision support system, based on a
clear logic semantics, would be an invaluable tool not only in assessing a
single, already defined plan, but also during the planning process in order to
produce an optimized, environmentally assessed plan and to study possible
alternative scenarios. We propose two logic-based approaches to the problem,
one based on Constraint Logic Programming and one on Probabilistic Logic
Programming that could be, in the future, conveniently merged to exploit the
advantages of both. We test the proposed approaches on a real energy plan and
we discuss their limitations and advantages.
| [
{
"version": "v1",
"created": "Mon, 19 Jul 2010 14:36:54 GMT"
}
] | 1,279,584,000,000 | [
[
"Gavanelli",
"Marco",
""
],
[
"Riguzzi",
"Fabrizio",
""
],
[
"Milano",
"Michela",
""
],
[
"Cagnoli",
"Paolo",
""
]
] |
1007.3515 | Matthias Knorr | Jos\'e J\'ulio Alferes, Matthias Knorr, Terrance Swift | Query-driven Procedures for Hybrid MKNF Knowledge Bases | 48 pages with 1 figures, submitted to ACM TOCL | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hybrid MKNF knowledge bases are one of the most prominent tightly integrated
combinations of open-world ontology languages with closed-world (non-monotonic)
rule paradigms. The definition of Hybrid MKNF is parametric on the description
logic (DL) underlying the ontology language, in the sense that non-monotonic
rules can extend any decidable DL language. Two related semantics have been
defined for Hybrid MKNF: one that is based on the Stable Model Semantics for
logic programs and one on the Well-Founded Semantics (WFS). Under WFS, the
definition of Hybrid MKNF relies on a bottom-up computation that has polynomial
data complexity whenever the DL language is tractable. Here we define a general
query-driven procedure for Hybrid MKNF that is sound with respect to the stable
model-based semantics, and sound and complete with respect to its WFS variant.
This procedure is able to answer a slightly restricted form of conjunctive
queries, and is based on tabled rule evaluation extended with an external
oracle that captures reasoning within the ontology. Such an (abstract) oracle
receives as input a query along with knowledge already derived, and replies
with a (possibly empty) set of atoms, defined in the rules, whose truth would
suffice to prove the initial query. With appropriate assumptions on the
complexity of the abstract oracle, the general procedure maintains the data
complexity of the WFS for Hybrid MKNF knowledge bases.
To illustrate this approach, we provide a concrete oracle for EL+, a fragment
of the light-weight DL EL++. Such an oracle has practical use, as EL++ is the
language underlying OWL 2 EL, which is part of the W3C recommendations for the
Semantic Web, and is tractable for reasoning tasks such as subsumption. We show
that query-driven Hybrid MKNF preserves polynomial data complexity when using
the EL+ oracle and WFS.
| [
{
"version": "v1",
"created": "Tue, 20 Jul 2010 21:20:39 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2011 17:36:35 GMT"
}
] | 1,323,648,000,000 | [
[
"Alferes",
"José Júlio",
""
],
[
"Knorr",
"Matthias",
""
],
[
"Swift",
"Terrance",
""
]
] |
1007.3663 | Piero Bonatti | Sabrina Baselice and Piero A. Bonatti | A decidable subclass of finitary programs | null | Theory and Practice of Logic Programming (2010), 10:481-496
Cambridge University Press | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer set programming - the most popular problem solving paradigm based on
logic programs - has recently been extended to support uninterpreted function
symbols. All of these approaches have some limitations. In this paper we propose
a class of programs called FP2 that enjoys a different trade-off between
expressiveness and complexity. FP2 programs enjoy the following unique
combination of properties: (i) the ability of expressing predicates with
infinite extensions; (ii) full support for predicates with arbitrary arity;
(iii) decidability of FP2 membership checking; (iv) decidability of skeptical
and credulous stable model reasoning for call-safe queries. Odd cycles are
supported by composing FP2 programs with argument restricted programs.
| [
{
"version": "v1",
"created": "Wed, 21 Jul 2010 14:00:14 GMT"
}
] | 1,279,756,800,000 | [
[
"Baselice",
"Sabrina",
""
],
[
"Bonatti",
"Piero A.",
""
]
] |
1007.4868 | Athar Kharal | Athar Kharal | Predicting Suicide Attacks: A Fuzzy Soft Set Approach | Submitted manuscript | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper models a decision support system to predict the occurrence of a
suicide attack in a given collection of cities. The system comprises two parts.
The first part analyzes and identifies the factors which affect the prediction.
Admitting incomplete information and the use of linguistic terms by experts as
two characteristic features of this peculiar prediction problem, we exploit the
Theory of Fuzzy Soft Sets. Hence Part 2 of the model is an algorithm, viz. FSP,
which takes the assessment of factors given in Part 1 as its input and
produces a possibility profile of cities likely to suffer the attack. The
algorithm is of O(2^n) complexity. It has been illustrated by an example solved
in detail. Simulation results for the algorithm have been presented which give
insight into the strengths and weaknesses of FSP. Three different decision
making measures have been simulated and compared in our discussion.
| [
{
"version": "v1",
"created": "Wed, 28 Jul 2010 04:15:17 GMT"
}
] | 1,280,361,600,000 | [
[
"Kharal",
"Athar",
""
]
] |
1007.5024 | James P. Delgrande | James P. Delgrande | A Program-Level Approach to Revising Logic Programs under the Answer Set
Semantics | null | Theory and Practice of Logic Programming, 10, 4--6, 2010, pp.
565-580 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An approach to the revision of logic programs under the answer set semantics
is presented. For programs P and Q, the goal is to determine the answer sets
that correspond to the revision of P by Q, denoted P * Q. A fundamental
principle of classical (AGM) revision, and the one that guides the approach
here, is the success postulate. In AGM revision, this stipulates that A is in K
* A. By analogy with the success postulate, for programs P and Q, this means
that the answer sets of Q will in some sense be contained in those of P * Q.
The essential idea is that for P * Q, a three-valued answer set for Q,
consisting of positive and negative literals, is first determined. The positive
literals constitute a regular answer set, while the negated literals make up a
minimal set of naf literals required to produce the answer set from Q. These
literals are propagated to the program P, along with those rules of Q that are
not decided by these literals. The approach differs from work in update logic
programs in two main respects. First, we ensure that the revising logic program
has higher priority, and so we satisfy the success postulate; second, for the
preference implicit in a revision P * Q, the program Q as a whole takes
precedence over P, unlike update logic programs, since answer sets of Q are
propagated to P. We show that a core group of the AGM postulates are satisfied,
as are the postulates that have been proposed for update logic programs.
| [
{
"version": "v1",
"created": "Wed, 28 Jul 2010 16:14:17 GMT"
}
] | 1,280,361,600,000 | [
[
"Delgrande",
"James P.",
""
]
] |
1007.5104 | Toby Walsh | Jessica Davies and George Katsirelos and Nina Narodytska and Toby
Walsh | An Empirical Study of Borda Manipulation | To appear in Proceedings of the Third International Workshop on
Computational Social Choice | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of coalitional manipulation in elections using the
unweighted Borda rule. We provide empirical evidence of the manipulability of
Borda elections in the form of two new greedy manipulation algorithms based on
intuitions from the bin-packing and multiprocessor scheduling domains. Although
we have not been able to show that these algorithms beat existing methods in
the worst-case, our empirical evaluation shows that they significantly
outperform the existing method and are able to find optimal manipulations in
the vast majority of the randomly generated elections that we tested. These
empirical results provide further evidence that the Borda rule provides little
defense against coalitional manipulation.
| [
{
"version": "v1",
"created": "Thu, 29 Jul 2010 03:21:57 GMT"
}
] | 1,280,448,000,000 | [
[
"Davies",
"Jessica",
""
],
[
"Katsirelos",
"George",
""
],
[
"Narodystka",
"Nina",
""
],
[
"Walsh",
"Toby",
""
]
] |
1007.5130 | Secretary Ijaia | Giuseppe Della Penna (1), Benedetto Intrigila (2), Daniele Magazzeni
(3) and Fabio Mercorio (1) ((1) University of L'Aquila, Italy, (2) University
of Rome, Italy and (3) University of Chieti, Italy) | Resource-Optimal Planning For An Autonomous Planetary Vehicle | 15 pages, 4 figures | International Journal of Artificial Intelligence & Applications
1.3 (2010) 15-29 | 10.5121/ijaia.2010.1302 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Autonomous planetary vehicles, also known as rovers, are small autonomous
vehicles equipped with a variety of sensors used to perform exploration and
experiments on a planet's surface. Rovers work in a partially unknown
environment, with narrow energy/time/movement constraints and, typically, small
computational resources that limit the complexity of on-line planning and
scheduling, thus they represent a great challenge in the field of autonomous
vehicles. Indeed, formal models for such vehicles usually involve hybrid
systems with nonlinear dynamics, which are difficult to handle by most of the
current planning algorithms and tools. Therefore, when offline planning of the
vehicle activities is required, for example for rovers that operate without a
continuous Earth supervision, such planning is often performed on simplified
models that are not completely realistic. In this paper we show how the
UPMurphi model checking based planning tool can be used to generate
resource-optimal plans to control the engine of an autonomous planetary
vehicle, working directly on its hybrid model and taking into account several
safety constraints, thus achieving very accurate results.
| [
{
"version": "v1",
"created": "Thu, 29 Jul 2010 07:27:25 GMT"
}
] | 1,280,448,000,000 | [
[
"Della Penna",
"Giuseppe",
""
],
[
"Intrigila",
"Benedetto",
""
],
[
"Magazzeni",
"Daniele",
""
],
[
"Mercorio",
"Fabio",
""
]
] |
1008.0273 | Jean Dezert | Jean Dezert (ONERA), Florentin Smarandache (E3I2) | Threat assessment of a possible Vehicle-Born Improvised Explosive Device
using DSmT | 26 pages | Fusion 2010, Edinburgh : United Kingdom (2010) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the solution to the threat assessment of a VBIED (Vehicle-Born
Improvised Explosive Device) obtained with the DSmT (Dezert-Smarandache
Theory). This problem has been proposed recently to the authors by Simon
Maskell and John Lavery as a typical illustrative example to try to compare the
different approaches for dealing with uncertainty for decision-making support.
The purpose of this paper is to show in detail how a solid, justified solution
can be obtained from DSmT approach and its fusion rules thanks to a proper
modeling of the belief functions involved in this problem.
| [
{
"version": "v1",
"created": "Mon, 2 Aug 2010 10:18:24 GMT"
}
] | 1,280,793,600,000 | [
[
"Dezert",
"Jean",
"",
"ONERA"
],
[
"Smarandache",
"Florentin",
"",
"E3I2"
]
] |
1008.0659 | Thanasis Balafoutis | Thanasis Balafoutis and Kostas Stergiou | Evaluating and Improving Modern Variable and Revision Ordering
Strategies in CSPs | To appear in the Journal Fundamenta Informaticae (FI) IOS Press | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key factor that can dramatically reduce the search space during constraint
solving is the criterion under which the variable to be instantiated next is
selected. For this purpose numerous heuristics have been proposed. Some of the
best of such heuristics exploit information about failures gathered throughout
search and recorded in the form of constraint weights, while others measure the
importance of variable assignments in reducing the search space. In this work
we experimentally evaluate the most recent and powerful variable ordering
heuristics, and new variants of them, over a wide range of benchmarks. Results
demonstrate that heuristics based on failures are in general more efficient.
Based on this, we then derive new revision ordering heuristics that exploit
recorded failures to efficiently order the propagation list when arc
consistency is maintained during search. Interestingly, in addition to reducing
the number of constraint checks and list operations, these heuristics are also
able to cut down the size of the explored search tree.
| [
{
"version": "v1",
"created": "Tue, 3 Aug 2010 21:09:43 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Aug 2010 09:48:42 GMT"
}
] | 1,281,398,400,000 | [
[
"Balafoutis",
"Thanasis",
""
],
[
"Stergiou",
"Kostas",
""
]
] |
1008.0660 | Thanasis Balafoutis | Thanasis Balafoutis and Kostas Stergiou | Adaptive Branching for Constraint Satisfaction Problems | To appear in Proceedings of the 19th European Conference on
Artificial Intelligence - ECAI 2010 | In Proceedings of the 19th European Conference on Artificial
Intelligence - ECAI 2010 | 10.3233/978-1-60750-606-5-855 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The two standard branching schemes for CSPs are d-way and 2-way branching.
Although it has been shown that in theory the latter can be exponentially more
effective than the former, there is a lack of empirical evidence showing such
differences. To investigate this, we initially make an experimental comparison
of the two branching schemes over a wide range of benchmarks. Experimental
results verify the theoretical gap between d-way and 2-way branching as we move
from a simple variable ordering heuristic like dom to more sophisticated ones
like dom/ddeg. However, perhaps surprisingly, experiments also show that when
state-of-the-art variable ordering heuristics like dom/wdeg are used then d-way
can be clearly more efficient than 2-way branching in many cases. Motivated by
this observation, we develop two generic heuristics that can be applied at
certain points during search to decide whether 2-way branching or a restricted
version of 2-way branching, which is close to d-way branching, will be
followed. The application of these heuristics results in an adaptive branching
scheme. Experiments with instantiations of the two generic heuristics confirm
that search with adaptive branching outperforms search with a fixed branching
scheme on a wide range of problems.
| [
{
"version": "v1",
"created": "Tue, 3 Aug 2010 21:10:15 GMT"
}
] | 1,283,731,200,000 | [
[
"Balafoutis",
"Thanasis",
""
],
[
"Stergiou",
"Kostas",
""
]
] |
1008.0823 | Adrian Paschke | Adrian Paschke, Alexander Kozlenkov, Harold Boley | A Homogeneous Reaction Rule Language for Complex Event Processing | In Proc. 2nd International Workshop on Event Drive Architecture and
Event Processing Systems (EDA-PS 2007) at VLDB 2007 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event-driven automation of reactive functionalities for complex event
processing is an urgent need in today's distributed service-oriented
architectures and Web-based event-driven environments. An important problem to
be addressed is how to correctly and efficiently capture and process the
event-based behavioral, reactive logic embodied in reaction rules, and
combine this with other conditional decision logic embodied, e.g., in
derivation rules. This paper elaborates a homogeneous integration approach that
combines derivation rules, reaction rules and other rule types such as
integrity constraints into the general framework of logic programming, the
industrial-strength version of declarative programming. We describe syntax and
semantics of the language, implement a distributed web-based middleware using
enterprise service technologies and illustrate its adequacy in terms of
expressiveness, efficiency and scalability through examples extracted from
industrial use cases. The developed reaction rule language provides expressive
features such as modular ID-based updates with support for external imports and
self-updates of the intensional and extensional knowledge bases, transactions
including integrity testing and roll-backs of update transition paths. It also
supports distributed complex event processing, event messaging and event
querying via efficient and scalable enterprise middleware technologies and
event/action reasoning based on an event/action algebra implemented by an
interval-based event calculus variant as a logic inference formalism.
| [
{
"version": "v1",
"created": "Wed, 4 Aug 2010 17:05:33 GMT"
}
] | 1,280,966,400,000 | [
[
"Paschke",
"Adrian",
""
],
[
"Kozlenkov",
"Alexander",
""
],
[
"Boley",
"Harold",
""
]
] |
1008.1328 | Zeeshan Ahmed Mr. | Zeeshan Ahmed and Detlef Gerhard | Semantic Oriented Agent based Approach towards Engineering Data
Management, Web Information Retrieval and User System Communication Problems | In the proceedings of 3rd International Conference for Internet
Technology and Secured Transactions, Dublin Institute of Technology, ICITST
08, June 23-28, pp 19-22 Dublin Ireland, 2008 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The four intensive problems raised by the software industry,
i.e., User System Communication / Human Machine Interface, Meta Data
extraction, Information processing & management, and Data representation, are
discussed in this research paper. To contribute to the field we have proposed
and described an intelligent semantic oriented agent based search engine
including the concepts of intelligent graphical user interface, natural
language based information processing, data management and data reconstruction
for the final user end information representation.
| [
{
"version": "v1",
"created": "Sat, 7 Aug 2010 12:08:43 GMT"
}
] | 1,281,398,400,000 | [
[
"Ahmed",
"Zeeshan",
""
],
[
"Gerhard",
"Detlef",
""
]
] |
1008.1333 | Zeeshan Ahmed Mr. | Zeeshan Ahmed and Detlef Gerhard | An Agent based Approach towards Metadata Extraction, Modelling and
Information Retrieval over the Web | In the proceedings of First International Workshop on Cultural
Heritage on the Semantic Web in conjunction with the 6th International
Semantic Web Conference and the 2nd Asian Semantic Web Conference 2007, (ISWC
+ ASWC 2007), P 117, 12-15 November 2007 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Web development is a challenging research area for its creativity and
complexity. The key challenge raised in current web technology development is
the presentation of data in a machine-readable and processable format so that
it can be exploited for knowledge-based information extraction and maintenance.
Currently it is not possible to search and extract optimized results using full
text queries because no mechanism exists which can fully extract the semantics
from full text queries and then look for particular knowledge based
information.
| [
{
"version": "v1",
"created": "Sat, 7 Aug 2010 12:29:02 GMT"
}
] | 1,281,398,400,000 | [
[
"Ahmed",
"Zeeshan",
""
],
[
"Gerhard",
"Detlef",
""
]
] |
1008.1484 | Ping Zhu | Ping Zhu and Qiaoyan Wen | A note on communicating between information systems based on including
degrees | 4 pages | International Journal of General Systems, 40(8): 837-840, 2011 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to study the communication between information systems, Gong and
Xiao [Z. Gong and Z. Xiao, Communicating between information systems based on
including degrees, International Journal of General Systems 39 (2010) 189--206]
proposed the concept of general relation mappings based on including degrees.
Some properties and the extension for fuzzy information systems of the general
relation mappings have been investigated there. In this paper, we point out by
counterexamples that several assertions (Lemma 3.1, Lemma 3.2, Theorem 4.1, and
Theorem 4.3) in the aforementioned work are not true in general.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2010 10:54:54 GMT"
}
] | 1,320,710,400,000 | [
[
"Zhu",
"Ping",
""
],
[
"Wen",
"Qiaoyan",
""
]
] |
1008.1723 | Zeeshan Ahmed Mr. | Zeeshan Ahmed and Detlef Gerhard | Role of Ontology in Semantic Web Development | In the proceedings of First International Workshop on Cultural
Heritage on the Semantic Web in conjunction with the 6th International
Semantic Web Conference and the 2nd Asian Semantic Web Conference 2007, (ISWC
+ ASWC 2007), P 119, 12-15 November 2007 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | World Wide Web (WWW) is the most popular global information sharing and
communication system consisting of three standards, i.e., Uniform Resource
Identifier (URI), Hypertext Transfer Protocol (HTTP) and Hypertext Mark-up
Language (HTML). Information is provided in text, image, audio and video
formats over the web by using HTML which is considered to be unconventional in
defining and formalizing the meaning of the context...
| [
{
"version": "v1",
"created": "Sat, 7 Aug 2010 12:32:22 GMT"
}
] | 1,281,484,800,000 | [
[
"Ahmed",
"Zeeshan",
""
],
[
"Gerhard",
"Detlef",
""
]
] |
1008.3314 | Tijl De Bie | Tijl De Bie | Maximum entropy models and subjective interestingness: an application to
tiles in binary databases | 43 pages, submitted | null | null | University of Bristol Tech. Rep. 125861 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research has highlighted the practical benefits of subjective
interestingness measures, which quantify the novelty or unexpectedness of a
pattern when contrasted with any prior information of the data miner
(Silberschatz and Tuzhilin, 1995; Geng and Hamilton, 2006). A key challenge
here is the formalization of this prior information in a way that lends itself
to the definition of a subjective interestingness measure that is both
meaningful and practical.
In this paper, we outline a general strategy of how this could be achieved,
before working out the details for a use case that is important in its own
right.
Our general strategy is based on considering prior information as constraints
on a probabilistic model representing the uncertainty about the data. More
specifically, we represent the prior information by the maximum entropy
(MaxEnt) distribution subject to these constraints. We briefly outline various
measures that could subsequently be used to contrast patterns with this MaxEnt
model, thus quantifying their subjective interestingness.
| [
{
"version": "v1",
"created": "Thu, 19 Aug 2010 14:41:55 GMT"
}
] | 1,282,262,400,000 | [
[
"De Bie",
"Tijl",
""
]
] |
1008.3879 | Yves Moinard | Yves Moinard (INRIA - IRISA) | A formalism for causal explanations with an Answer Set Programming
translation | null | 4th International Conference on Knowledge Science, Engineering &
Management (KSEM 2010), Belfast : United Kingdom (2010) | 10.1007/978-3-642-15280-1_56 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine the practicality for a user of using Answer Set Programming (ASP)
for representing logical formalisms. Our example is a formalism aiming at
capturing causal explanations from causal information. We show the naturalness
and relative efficiency of this translation job. We are interested in the ease
for writing an ASP program. Limitations of the earlier systems made that in
practice, the ``declarative aspect'' was more theoretical than practical. We
show how recent improvements in working ASP systems facilitate the translation.
| [
{
"version": "v1",
"created": "Mon, 23 Aug 2010 18:38:23 GMT"
}
] | 1,594,944,000,000 | [
[
"Moinard",
"Yves",
"",
"INRIA - IRISA"
]
] |
1008.4257 | Nada Matta | Nada Matta (UTT), Oswaldo Castillo (UTT) | Learning from Profession Knowledge: Application on Knitting | null | 5th International Conference on Signal-Image Technology and
Internet based Systems, Marakesh : Morocco (2009) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Management is a global process in companies. It includes all the
processes that allow capitalization, sharing and evolution of the Knowledge
Capital of the firm, generally recognized as a critical resource of the
organization. Several approaches have been defined to capitalize knowledge but
few of them study how to learn from this knowledge. We present in this paper an
approach that helps to enhance learning from profession knowledge in an
organisation. We apply our approach to the knitting industry.
| [
{
"version": "v1",
"created": "Wed, 25 Aug 2010 11:41:28 GMT"
}
] | 1,282,780,800,000 | [
[
"Matta",
"Nada",
"",
"UTT"
],
[
"Castillo",
"Oswaldo",
"",
"UTT"
]
] |
1008.4326 | Lars Kotthoff | Ian Gent and Lars Kotthoff and Ian Miguel and Peter Nightingale | Machine learning for constraint solver design -- A case study for the
alldifferent constraint | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constraint solvers are complex pieces of software which require many design
decisions to be made by the implementer based on limited information. These
decisions affect the performance of the finished solver significantly. Once a
design decision has been made, it cannot easily be reversed, although a
different decision may be more appropriate for a particular problem.
We investigate using machine learning to make these decisions automatically
depending on the problem to solve. We use the alldifferent constraint as a case
study. Our system is capable of making non-trivial, multi-level decisions that
improve over always making a default choice and can be implemented as part of a
general-purpose constraint solver.
| [
{
"version": "v1",
"created": "Wed, 25 Aug 2010 18:04:03 GMT"
}
] | 1,282,780,800,000 | [
[
"Gent",
"Ian",
""
],
[
"Kotthoff",
"Lars",
""
],
[
"Miguel",
"Ian",
""
],
[
"Nightingale",
"Peter",
""
]
] |
1008.4328 | Lars Kotthoff | Lars Kotthoff and Neil C.A. Moore | Distributed solving through model splitting | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constraint problems can be trivially solved in parallel by exploring
different branches of the search tree concurrently. Previous approaches have
focused on implementing this functionality in the solver, more or less
transparently to the user. We propose a new approach, which modifies the
constraint model of the problem. An existing model is split into new models
with added constraints that partition the search space. Optionally, additional
constraints are imposed that rule out the search already done. The advantages
of our approach are that it can be implemented easily, computations can be
stopped and restarted, moved to different machines and indeed solved on
machines which are not able to communicate with each other at all.
| [
{
"version": "v1",
"created": "Wed, 25 Aug 2010 18:07:40 GMT"
}
] | 1,282,780,800,000 | [
[
"Kotthoff",
"Lars",
""
],
[
"Moore",
"Neil C. A.",
""
]
] |
1008.5163 | Brian McFee | Brian McFee and Gert Lanckriet | Learning Multi-modal Similarity | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many applications involving multi-media data, the definition of similarity
between items is integral to several key tasks, e.g., nearest-neighbor
retrieval, classification, and recommendation. Data in such regimes typically
exhibits multiple modalities, such as acoustic and visual content of video.
Integrating such heterogeneous data to form a holistic similarity space is
therefore a key challenge to be overcome in many real-world applications.
We present a novel multiple kernel learning technique for integrating
heterogeneous data into a single, unified similarity space. Our algorithm
learns an optimal ensemble of kernel transformations which conform to
measurements of human perceptual similarity, as expressed by relative
comparisons. To cope with the ubiquitous problems of subjectivity and
inconsistency in multimedia similarity, we develop graph-based techniques to
filter similarity measurements, resulting in a simplified and robust training
procedure.
| [
{
"version": "v1",
"created": "Mon, 30 Aug 2010 20:51:26 GMT"
}
] | 1,283,299,200,000 | [
[
"McFee",
"Brian",
""
],
[
"Lanckriet",
"Gert",
""
]
] |
1008.5188 | Chunhua Shen | Chunhua Shen, Hanxi Li, Nick Barnes | Totally Corrective Boosting for Regularized Risk Minimization | This paper has been withdrawn by the author | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consideration of the primal and dual problems together leads to important new
insights into the characteristics of boosting algorithms. In this work, we
propose a general framework that can be used to design new boosting algorithms.
A wide variety of machine learning problems essentially minimize a regularized
risk functional. We show that the proposed boosting framework, termed CGBoost,
can accommodate various loss functions and different regularizers in a
totally-corrective optimization fashion. We show that, by solving the primal
rather than the dual, a large body of totally-corrective boosting algorithms
can actually be efficiently solved and no sophisticated convex optimization
solvers are needed. We also demonstrate that some boosting algorithms like
AdaBoost can be interpreted in our framework--even though their optimization is
not totally corrective. We empirically show that various boosting algorithms
based on the proposed framework perform similarly on the UC Irvine machine learning
datasets [1] that we have used in the experiments.
| [
{
"version": "v1",
"created": "Mon, 30 Aug 2010 23:40:51 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2011 04:42:17 GMT"
}
] | 1,323,734,400,000 | [
[
"Shen",
"Chunhua",
""
],
[
"Li",
"Hanxi",
""
],
[
"Barnes",
"Nick",
""
]
] |
1008.5189 | Anastasia Paparrizou Ms | Thanasis Balafoutis, Anastasia Paparrizou, Kostas Stergiou and Toby
Walsh | Improving the Performance of maxRPC | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Max Restricted Path Consistency (maxRPC) is a local consistency for binary
constraints that can achieve considerably stronger pruning than arc
consistency. However, existing maxRPC algorithms suffer from overheads and
redundancies as they can repeatedly perform many constraint checks without
triggering any value deletions. In this paper we propose techniques that can
boost the performance of maxRPC algorithms. These include the combined use of
two data structures to avoid many redundant constraint checks, and heuristics
for the efficient ordering and execution of certain operations. Based on these,
we propose two closely related algorithms. The first one which is a maxRPC
algorithm with optimal O(end^3) time complexity, displays good performance when
used stand-alone, but is expensive to apply during search. The second one
approximates maxRPC and has O(en^2d^4) time complexity, but a restricted
version with O(end^4) complexity can be very efficient when used during search.
Both algorithms have O(ed) space complexity. Experimental results demonstrate
that the resulting methods constantly outperform previous algorithms for
maxRPC, often by large margins, and constitute a more than viable alternative
to arc consistency on many problems.
| [
{
"version": "v1",
"created": "Mon, 30 Aug 2010 23:50:33 GMT"
}
] | 1,283,299,200,000 | [
[
"Balafoutis",
"Thanasis",
""
],
[
"Paparrizou",
"Anastasia",
""
],
[
"Stergiou",
"Kostas",
""
],
[
"Walsh",
"Toby",
""
]
] |
1009.0347 | Peter J. Stuckey | Andreas Schutt, Thibaut Feydy, Peter J. Stuckey, Mark G. Wallace | Solving the Resource Constrained Project Scheduling Problem with
Generalized Precedences by Lazy Clause Generation | 37 pages, 3 figures, 16 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The technical report presents a generic exact solution approach for
minimizing the project duration of the resource-constrained project scheduling
problem with generalized precedences (Rcpsp/max). The approach uses lazy clause
generation, i.e., a hybrid of finite domain and Boolean satisfiability solving,
in order to apply nogood learning and conflict-driven search on the solution
generation. Our experiments show the benefit of lazy clause generation for
finding an optimal solution and proving its optimality in comparison to other
state-of-the-art exact and non-exact methods. The method is highly robust: it
matched or bettered the best known results on all of the 2340 instances we
examined except 3, according to the currently available data on the PSPLib. Of
the 631 open instances in this set it closed 573 and improved the bounds of 51
of the remaining 58 instances.
| [
{
"version": "v1",
"created": "Thu, 2 Sep 2010 08:03:47 GMT"
}
] | 1,283,472,000,000 | [
[
"Schutt",
"Andreas",
""
],
[
"Feydy",
"Thibaut",
""
],
[
"Stuckey",
"Peter J.",
""
],
[
"Wallace",
"Mark G.",
""
]
] |
1009.0407 | Thanasis Balafoutis | Thanasis Balafoutis, Anastasia Paparrizou and Kostas Stergiou | Experimental Evaluation of Branching Schemes for the CSP | To appear in the 3rd workshop on techniques for implementing
constraint programming systems (TRICS workshop at the 16th CP Conference),
St. Andrews, Scotland 2010 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The search strategy of a CP solver is determined by the variable and value
ordering heuristics it employs and by the branching scheme it follows. Although
the effects of variable and value ordering heuristics on search effort have
been widely studied, the effects of different branching schemes have received
less attention. In this paper we study this effect through an experimental
evaluation that includes standard branching schemes such as 2-way, d-way, and
dichotomic domain splitting, as well as variations of set branching where
branching is performed on sets of values. We also propose and evaluate a
generic approach to set branching where the partition of a domain into sets is
created using the scores assigned to values by a value ordering heuristic, and
a clustering algorithm from machine learning. Experimental results demonstrate
that although exponential differences between branching schemes, as predicted
in theory between 2-way and d-way branching, are not very common, still the
choice of branching scheme can make quite a difference on certain classes of
problems. Set branching methods are very competitive with 2-way branching and
outperform it on some problem classes. A statistical analysis of the results
reveals that our generic clustering-based set branching method is the best
among the methods compared.
| [
{
"version": "v1",
"created": "Thu, 2 Sep 2010 12:24:32 GMT"
}
] | 1,283,472,000,000 | [
[
"Balafoutis",
"Thanasis",
""
],
[
"Paparrizou",
"Anastasia",
""
],
[
"Stergiou",
"Kostas",
""
]
] |
1009.0451 | Fabien Tence | Fabien Tenc\'e (LISYC), C\'edric Buche (LISYC), Pierre De Loor
(LISYC), Olivier Marc (LISYC) | The Challenge of Believability in Video Games: Definitions, Agents
Models and Imitation Learning | null | GAMEON-ASIA'2010, France (2010) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of creating believable agents (virtual
characters) in video games. We consider only one meaning of believability,
``giving the feeling of being controlled by a player'', and outline the problem
of its evaluation. We present several models for agents in games which can
produce believable behaviours, both from industry and research. For high level
of believability, learning and especially imitation learning seems to be the
way to go. We give a quick overview of different approaches to make video
games' agents learn from players. To conclude we propose a two-step method to
develop new models for believable agents. First we must find the criteria for
believability for our application and define an evaluation method. Then the
model and the learning algorithm can be designed.
| [
{
"version": "v1",
"created": "Thu, 2 Sep 2010 15:25:06 GMT"
}
] | 1,283,472,000,000 | [
[
"Tencé",
"Fabien",
"",
"LISYC"
],
[
"Buche",
"Cédric",
"",
"LISYC"
],
[
"De Loor",
"Pierre",
"",
"LISYC"
],
[
"Marc",
"Olivier",
"",
"LISYC"
]
] |
1009.0501 | Fabien Tence | Fabien Tenc\'e (LISYC), C\'edric Buche (LISYC) | Automatable Evaluation Method Oriented toward Behaviour Believability
for Video Games | GAME-ON 2008, France (2008) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classic evaluation methods of believable agents are time-consuming because
they involve many humans to judge agents. They are well suited to validating work
on new believable behaviour models. However, during the implementation,
numerous experiments can help to improve agents' believability. We propose a
method which aims at assessing how much an agent's behaviour looks like humans'
behaviours. By representing behaviours with vectors, we can store data computed
for humans and then evaluate as many agents as needed without further need of
humans. We present a test experiment which shows that even a simple evaluation
following our method can reveal differences between quite believable agents and
humans. This method seems promising although, as shown in our experiment,
results' analysis can be difficult.
| [
{
"version": "v1",
"created": "Thu, 2 Sep 2010 18:36:44 GMT"
}
] | 1,283,472,000,000 | [
[
"Tencé",
"Fabien",
"",
"LISYC"
],
[
"Buche",
"Cédric",
"",
"LISYC"
]
] |
1009.2003 | Zeeshan Ahmed Mr. | Zeeshan Ahmed | AI 3D Cybug Gaming | In the proceedings of 9th National Research Conference on Management
and Computer Sciences, SZABIST Institute of Science and Technology, Pakistan | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this short paper I briefly discuss a 3D war game based on artificial
intelligence concepts, called AI WAR. Going into the details, I present the
importance of the CAICL language and how this language is used in AI WAR. Moreover,
I also present a designed and implemented 3D War Cybug for AI WAR using CAICL
and discuss the implemented strategy to defeat its enemies during the game life.
| [
{
"version": "v1",
"created": "Fri, 10 Sep 2010 12:58:30 GMT"
}
] | 1,284,336,000,000 | [
[
"Ahmed",
"Zeeshan",
""
]
] |
1009.2041 | Vaishak Belle | Vaishak Belle and Gerhard Lakemeyer | Multi-Agent Only-Knowing Revisited | Appears in Principles of Knowledge Representation and Reasoning 2010 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Levesque introduced the notion of only-knowing to precisely capture the
beliefs of a knowledge base. He also showed how only-knowing can be used to
formalize non-monotonic behavior within a monotonic logic. Despite its appeal,
all attempts to extend only-knowing to the many agent case have undesirable
properties. A belief model by Halpern and Lakemeyer, for instance, appeals to
proof-theoretic constructs in the semantics and needs to axiomatize validity as
part of the logic. It is also not clear how to generalize their ideas to a
first-order case. In this paper, we propose a new account of multi-agent
only-knowing which, for the first time, has a natural possible-world semantics
for a quantified language with equality. We then provide, for the propositional
fragment, a sound and complete axiomatization that faithfully lifts Levesque's
proof theory to the many agent case. We also discuss comparisons to the earlier
approach by Halpern and Lakemeyer.
| [
{
"version": "v1",
"created": "Fri, 10 Sep 2010 16:10:26 GMT"
}
] | 1,285,891,200,000 | [
[
"Belle",
"Vaishak",
""
],
[
"Lakemeyer",
"Gerhard",
""
]
] |
1009.4586 | S. M. Kamruzzaman | Md. Hijbul Alam, Abdul Kadar Muhammad Masum, Mohammad Mahadi Hassan,
and S. M. Kamruzzaman | Optimal Bangla Keyboard Layout using Association Rule of Data Mining | 3 Pages, International Conference | Proc. 7th International Conference on Computer and Information
Technology (ICCIT 2004), Dhaka, Bangladesh, pp. 679-681, Dec. 2004 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present an optimal Bangla Keyboard Layout, which distributes
the load equally on both hands so as to maximize ease and minimize effort. The
Bangla alphabet has a large number of letters, which makes it difficult to type
fast using a Bangla keyboard. Our proposed keyboard will maximize the speed of
the operator as they can type with both hands in parallel. Here we use the
association rule of data mining to distribute the Bangla characters in the
keyboard. First, we analyze the frequencies of data consisting of monographs,
digraphs and trigraphs, which are derived from a data warehouse, and then use the
association rule of data mining to distribute the Bangla characters in the
layout. Finally, we propose a Bangla Keyboard Layout. Experimental results on
several keyboard layouts show the effectiveness of the proposed approach with
better performance.
| [
{
"version": "v1",
"created": "Thu, 23 Sep 2010 11:42:41 GMT"
}
] | 1,285,545,600,000 | [
[
"Alam",
"Md. Hijbul",
""
],
[
"Masum",
"Abdul Kadar Muhammad",
""
],
[
"Hassan",
"Mohammad Mahadi",
""
],
[
"Kamruzzaman",
"S. M.",
""
]
] |
1009.4982 | S. M. Kamruzzaman | S. M. Kamruzzaman, Md. Hijbul Alam, Abdul Kadar Muhammad Masum, and
Md. Mahadi Hassan | Optimal Bangla Keyboard Layout using Data Mining Technique | 9 Pages, International Conference | Proc. International Conference on Information and Communication
Technology in Management (ICTM 2005), Multimedia University, Malaysia, May
2005 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an optimal Bangla Keyboard Layout, which distributes the
typing load equally on both hands so as to maximize ease and minimize effort.
The Bangla alphabet has a large number of letters, which makes it difficult to
type quickly on a Bangla keyboard. Our proposed layout maximizes the operator's
speed, since both hands can type in parallel. We use association rules from
data mining to distribute the Bangla characters on the keyboard. First, we
analyze the frequencies of monographs, digraphs and trigraphs derived from a
data warehouse, and then apply association rule mining to place the Bangla
characters in the layout. Experimental results on several data sets show the
effectiveness and better performance of the proposed approach.
| [
{
"version": "v1",
"created": "Sat, 25 Sep 2010 06:55:27 GMT"
}
] | 1,285,632,000,000 | [
[
"Kamruzzaman",
"S. M.",
""
],
[
"Alam",
"Md. Hijbul",
""
],
[
"Masum",
"Abdul Kadar Muhammad",
""
],
[
"Hassan",
"Md. Mahadi",
""
]
] |
1009.5048 | S. M. Kamruzzaman | Abdul Kadar Muhammad Masum, Mohammad Mahadi Hassan, and S. M.
Kamruzzaman | The Most Advantageous Bangla Keyboard Layout Using Data Mining Technique | 10 Pages, International Journal | Journal of Computer Science, IBAIS University, Dkhaka, Bangladesh,
Vol. 1, No. 2, Dec. 2007 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Bangla alphabet has a large number of letters, which makes it complicated
to type quickly on a Bangla keyboard. The proposed keyboard maximizes the
operator's speed, since both hands can type in parallel. Association rules from
data mining are used here to distribute the Bangla characters on the keyboard.
The frequencies of monographs, digraphs and trigraphs, derived from a data
warehouse, are analyzed, and association rule mining is then used to place the
Bangla characters in the layout. Experimental results on several data sets show
the effectiveness and better performance of the proposed approach. This paper
presents an optimal Bangla keyboard layout, which distributes the typing load
equally on both hands so as to maximize ease and minimize effort.
| [
{
"version": "v1",
"created": "Sun, 26 Sep 2010 02:09:41 GMT"
}
] | 1,285,632,000,000 | [
[
"Masum",
"Abdul Kadar Muhammad",
""
],
[
"Hassan",
"Mohammad Mahadi",
""
],
[
"Kamruzzaman",
"S. M.",
""
]
] |
1009.5268 | Forrest Sheng Bao | Xin Liu, Ying Ding, Forrest Sheng Bao | General Scaled Support Vector Machines | 5 pages, 4 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Support Vector Machines (SVMs) are popular tools for data mining tasks such
as classification, regression, and density estimation. However, original SVM
(C-SVM) only considers local information of data points on or over the margin.
As a result, C-SVM loses robustness. To address this problem, one approach is to
translate (i.e., to move without rotation or change of shape) the hyperplane
according to the distribution of the entire data set. However, existing work
applies only to the 1-D case. In this paper, we propose a simple and efficient
method called General Scaled SVM (GS-SVM) that extends the existing approach to
the multi-dimensional case. Our method translates the hyperplane according to the
distribution of data projected on the normal vector of the hyperplane. Compared
with C-SVM, GS-SVM has better performance on several data sets.
| [
{
"version": "v1",
"created": "Mon, 27 Sep 2010 14:27:49 GMT"
}
] | 1,285,632,000,000 | [
[
"Liu",
"Xin",
""
],
[
"Ding",
"Ying",
""
],
[
"Bao",
"Forrest Sheng",
""
]
] |
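
As a rough illustration of the idea in the preceding abstract (translating a trained hyperplane along its normal vector according to the distribution of the projected data), the sketch below trains a linear SVM and then recomputes the intercept from the class means of the projections. The shifting rule shown is an assumption for illustration, not the GS-SVM formula from the paper.

import numpy as np
from sklearn.svm import SVC

def translated_hyperplane(X, y, C=1.0):
    """Train a linear C-SVM, then translate the hyperplane along its normal so
    that the threshold lies midway between the means of the projected classes
    (an illustrative rule, not the paper's scaling)."""
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_.ravel()                  # normal vector of the hyperplane
    proj = X @ w                           # project all data onto the normal
    classes = np.unique(y)
    m_neg = proj[y == classes[0]].mean()
    m_pos = proj[y == classes[1]].mean()
    b_new = -(m_neg + m_pos) / 2.0         # new intercept from the projections
    return w, b_new                        # decision rule: sign(X @ w + b_new)
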
1009.5290 | Mladen Nikolic | Mladen Nikolic | Measuring Similarity of Graphs and their Nodes by Neighbor Matching | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of measuring similarity of graphs and their nodes is important in
a range of practical problems. A number of measures have been proposed, some of
them based on the iterative calculation of similarity between two graphs and
the principle that two nodes are as similar as their neighbors are. In our
work, we propose one novel method of that sort, with a refined concept of
similarity of two nodes that involves matching of their neighbors. We prove
convergence of the proposed method and show that it has some additional
desirable properties that, to our knowledge, the existing methods lack. We
illustrate the method on two specific problems and empirically compare it to
other methods.
| [
{
"version": "v1",
"created": "Mon, 27 Sep 2010 15:31:54 GMT"
}
] | 1,285,632,000,000 | [
[
"Nikolic",
"Mladen",
""
]
] |
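
The preceding abstract describes an iterative node-similarity measure in which the similarity of two nodes is refined by optimally matching their neighbors. Below is a minimal sketch of that general scheme using the Hungarian algorithm for the matching step; the update rule, normalization and termination test are assumptions, not the exact method of the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment

def node_similarity(adj_a, adj_b, iters=50, eps=1e-4):
    """Iteratively refine similarities between nodes of graphs A and B.
    adj_a, adj_b: lists of neighbor index lists. Returns an |A| x |B| matrix."""
    na, nb = len(adj_a), len(adj_b)
    sim = np.ones((na, nb))
    for _ in range(iters):
        new = np.zeros_like(sim)
        for i in range(na):
            for j in range(nb):
                ni, nj = adj_a[i], adj_b[j]
                if not ni or not nj:
                    new[i, j] = 1.0 if not (ni or nj) else 0.0
                    continue
                sub = sim[np.ix_(ni, nj)]
                rows, cols = linear_sum_assignment(-sub)   # best neighbor matching
                new[i, j] = sub[rows, cols].sum() / max(len(ni), len(nj))
        done = np.abs(new - sim).max() < eps
        sim = new
        if done:
            break
    return sim
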
1010.0298 | Sugata Sanyal | Siby Abraham, Imre Kiss, Sugata Sanyal, Mukund Sanglikar | Steepest Ascent Hill Climbing For A Mathematical Problem | 8 Pages, 3 Figures, 2 Tables, International Symposium on Advanced
Engineering and Applied Management 40th Anniversary in Higher Education -
Informatics & Computer Science, University Politehnica, Timisoara, 4-5
November, 2010, Hunedoara, ROMANIA | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper proposes an artificial intelligence technique called hill climbing to
find numerical solutions of Diophantine equations. Such equations are important
as they have many applications in fields like public key cryptography, integer
factorization, algebraic curves, projective curves and data dependency in
supercomputers. Importantly, it has been proved that there is no general method
to find solutions of such equations. This paper is an attempt to find numerical
solutions of Diophantine equations using the steepest ascent version of hill
climbing. The method, which uses a tree representation to depict possible
solutions of Diophantine equations, adopts a novel methodology to generate
successors. The heuristic function used helps to cast the search for a solution
as a minimization process. The work illustrates the effectiveness of the
proposed methodology using the class of Diophantine equations given by
$a_1 x_1^{p_1} + a_2 x_2^{p_2} + \cdots + a_n x_n^{p_n} = N$, where the $a_i$
and $N$ are integers. The experimental results validate that the proposed
procedure succeeds in finding solutions of Diophantine equations with
sufficiently large powers and a large number of variables.
| [
{
"version": "v1",
"created": "Sat, 2 Oct 2010 07:27:43 GMT"
}
] | 1,286,236,800,000 | [
[
"Abraham",
"Siby",
""
],
[
"Kiss",
"Imre",
""
],
[
"Sanyal",
"Sugata",
""
],
[
"Sanglikar",
"Mukund",
""
]
] |
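
The steepest ascent hill climbing procedure summarized above can be illustrated for equations of the form $a_1 x_1^{p_1} + \cdots + a_n x_n^{p_n} = N$, using |N - value| as the heuristic to be minimized. The successor generation (changing one variable by one) and the random-restart policy below are simplifying assumptions; the paper uses its own tree-based successor methodology.

import random

def steepest_ascent(a, p, N, max_steps=100000):
    """Search positive integers x with sum(a[i] * x[i]**p[i]) == N by steepest
    ascent hill climbing on the heuristic h(x) = |N - value(x)|."""
    n = len(a)
    value = lambda x: sum(ai * xi ** pi for ai, xi, pi in zip(a, x, p))
    x = [random.randint(1, 10) for _ in range(n)]
    for _ in range(max_steps):
        h = abs(N - value(x))
        if h == 0:
            return x
        # successors: change one variable by +/- 1, keeping x_i >= 1
        succs = [x[:i] + [x[i] + d] + x[i+1:]
                 for i in range(n) for d in (+1, -1) if x[i] + d >= 1]
        best = min(succs, key=lambda y: abs(N - value(y)))
        if abs(N - value(best)) >= h:
            x = [random.randint(1, 10) for _ in range(n)]   # restart at a local optimum
        else:
            x = best
    return None

print(steepest_ascent([1, 1], [2, 2], 625))   # e.g. 7^2 + 24^2 = 625
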
1010.2102 | Ran El-Yaniv | Ran El-Yaniv and Noam Etzion-Rosenberg | Hierarchical Multiclass Decompositions with Application to Authorship
Determination | null | null | null | Technical report CS-200415, Technion | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is mainly concerned with the question of how to decompose
multiclass classification problems into binary subproblems. We extend known
Jensen-Shannon bounds on the Bayes risk of binary problems to hierarchical
multiclass problems and use these bounds to develop a heuristic procedure for
constructing hierarchical multiclass decomposition for multinomials. We test
our method and compare it to the well known "all-pairs" decomposition. Our
tests are performed using a new authorship determination benchmark test of
machine learning authors. The new method consistently outperforms the all-pairs
decomposition when the number of classes is small and breaks even on larger
multiclass problems. Using both methods, the classification accuracy we
achieve, using an SVM over a feature set consisting of both high frequency
single tokens and high frequency token-pairs, appears to be exceptionally high
compared to known results in authorship determination.
| [
{
"version": "v1",
"created": "Mon, 11 Oct 2010 13:41:21 GMT"
}
] | 1,286,841,600,000 | [
[
"El-Yaniv",
"Ran",
""
],
[
"Etzion-Rosenberg",
"Noam",
""
]
] |
1010.3177 | Xin Rong | Xin Rong | Introduction to the iDian | 4 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The iDian (previously named as the Operation Agent System) is a framework
designed to enable computer users to operate software in natural language.
Distinct from current speech-recognition systems, our solution supports
format-free combinations of orders, and is open to both developers and
customers. We used a multi-layer structure to build the entire framework, took
a rule-based natural language processing approach, and implemented demos
targeting Windows, text editing and a few other applications. This essay first
gives an overview of the entire system, then examines its functions and
structure, and finally discusses prospective development, especially on-line
interaction functions.
| [
{
"version": "v1",
"created": "Fri, 15 Oct 2010 14:18:25 GMT"
}
] | 1,287,360,000,000 | [
[
"Rong",
"Xin",
""
]
] |
1010.4385 | Christian Blum | Hugo Hern\'andez and Tobias Baumgartner and Maria J. Blesa and
Christian Blum and Alexander Kr\"oller and Sandor P. Fekete | A Protocol for Self-Synchronized Duty-Cycling in Sensor Networks:
Generic Implementation in Wiselib | Accepted for the proceedings of MSN 2010 (The 6th International
Conference on Mobile Ad-hoc and Sensor Networks) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we present a protocol for self-synchronized duty-cycling in
wireless sensor networks with energy harvesting capabilities. The protocol is
implemented in Wiselib, a library of generic algorithms for sensor networks.
Simulations are conducted with the sensor network simulator Shawn. They are
based on the specifications of real hardware known as iSense sensor nodes. The
experimental results show that the proposed mechanism is able to adapt to
changing energy availabilities. Moreover, it is shown that the system is very
robust against packet loss.
| [
{
"version": "v1",
"created": "Thu, 21 Oct 2010 07:54:11 GMT"
}
] | 1,287,705,600,000 | [
[
"Hernández",
"Hugo",
""
],
[
"Baumgartner",
"Tobias",
""
],
[
"Blesa",
"Maria J.",
""
],
[
"Blum",
"Christian",
""
],
[
"Kröller",
"Alexander",
""
],
[
"Fekete",
"Sandor P.",
""
]
] |
1010.4561 | Ali Akbar Kiaei Khoshroudbari | Ali Akbar Kiaei, Saeed Bagheri Shouraki, Seyed Hossein Khasteh,
Mahmoud Khademi, and Ali Reza Ghatreh Samani | New S-norm and T-norm Operators for Active Learning Method | 11 pages, 20 figures, under review of SPRINGER (Fuzzy Optimization
and Decision Making) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Active Learning Method (ALM) is a soft computing method used for modeling and
control based on fuzzy logic. All operators defined for fuzzy sets must serve
as either a fuzzy S-norm or a fuzzy T-norm. Despite being a powerful modeling
method, ALM does not possess operators which serve as S-norms and T-norms,
which deprives it of a profound analytical expression. This paper introduces
two new operators based on morphology which satisfy the following conditions:
first, they serve as a fuzzy S-norm and T-norm; second, they satisfy De
Morgan's law, so they complement each other perfectly. These operators are
investigated from three viewpoints: mathematics, geometry and fuzzy logic.
| [
{
"version": "v1",
"created": "Thu, 21 Oct 2010 19:48:22 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Feb 2011 00:59:29 GMT"
}
] | 1,297,123,200,000 | [
[
"Kiaei",
"Ali Akbar",
""
],
[
"Shouraki",
"Saeed Bagheri",
""
],
[
"Khasteh",
"Seyed Hossein",
""
],
[
"Khademi",
"Mahmoud",
""
],
[
"Samani",
"Ali Reza Ghatreh",
""
]
] |
1010.4609 | Berthe Y. Choueiry | Shant Karakashian, Robert Woodward, Berthe Y. Choueiry, Steven
Prestwhich, and Eugene C. Freuder | A Partial Taxonomy of Substitutability and Interchangeability | 18 pages, The 10th International Workshop on Symmetry in Constraint
Satisfaction Problems (SymCon'10) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Substitutability, interchangeability and related concepts in Constraint
Programming were introduced approximately twenty years ago and have given rise
to considerable subsequent research. We survey this work, classify, and relate
the different concepts, and indicate directions for future work, in particular
with respect to making connections with research into symmetry breaking. This
paper is a condensed version of a larger work in progress.
| [
{
"version": "v1",
"created": "Fri, 22 Oct 2010 04:00:00 GMT"
}
] | 1,287,964,800,000 | [
[
"Karakashian",
"Shant",
""
],
[
"Woodward",
"Robert",
""
],
[
"Choueiry",
"Berthe Y.",
""
],
[
"Prestwhich",
"Steven",
""
],
[
"Freuder",
"Eugene C.",
""
]
] |
1010.4784 | Indre Zliobaite | Indr\.e \v{Z}liobait\.e | Learning under Concept Drift: an Overview | Technical report, Vilnius University, 2009 techniques, related areas,
applications | null | null | 2009 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Concept drift refers to a non stationary learning problem over time. The
training and the application data often mismatch in real-life problems. In this
report we present the context of the concept drift problem. We focus on the
issues relevant to adaptive training set formation. We present the framework
and terminology, and formulate a global picture of the design of concept drift
learners. We start by formalizing the framework for concept drifting data in
Section 1. In Section 2 we discuss the adaptivity mechanisms of concept drift
learners. In Section 3 we overview the principal mechanisms of concept drift
learners; there we give a general picture of the available algorithms and
categorize them based on their properties. Section 4 discusses the related
research fields and Section 5 groups and presents major concept drift
applications. This report is intended to give a bird's eye view of the concept
drift research field, provide a context for the research and position it within
a broad spectrum of research fields and applications.
| [
{
"version": "v1",
"created": "Fri, 22 Oct 2010 19:31:23 GMT"
}
] | 1,287,964,800,000 | [
[
"Žliobaitė",
"Indrė",
""
]
] |
1010.4830 | Neil Lawrence | Neil D. Lawrence | A Unifying Probabilistic Perspective for Spectral Dimensionality
Reduction: Insights and New Models | 26 pages,11 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new perspective on spectral dimensionality reduction which
views these methods as Gaussian Markov random fields (GRFs). Our unifying
perspective is based on the maximum entropy principle which is in turn inspired
by maximum variance unfolding. The resulting model, which we call maximum
entropy unfolding (MEU) is a nonlinear generalization of principal component
analysis. We relate the model to Laplacian eigenmaps and isomap. We show that
parameter fitting in the locally linear embedding (LLE) is approximate maximum
likelihood MEU. We introduce a variant of LLE that performs maximum likelihood
exactly: Acyclic LLE (ALLE). We show that MEU and ALLE are competitive with the
leading spectral approaches on a robot navigation visualization and a human
motion capture data set. Finally the maximum likelihood perspective allows us
to introduce a new approach to dimensionality reduction based on L1
regularization of the Gaussian random field via the graphical lasso.
| [
{
"version": "v1",
"created": "Fri, 22 Oct 2010 23:16:04 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jan 2012 01:09:37 GMT"
}
] | 1,325,721,600,000 | [
[
"Lawrence",
"Neil D.",
""
]
] |
1010.5426 | Shuai Zheng | Shuai Zheng and Kaiqi Huang and Tieniu Tan | Translation-Invariant Representation for Cumulative Foot Pressure Images | 6 pages | Shuai Zheng, Kaiqi Huang and Tieniu Tan. Translation Invariant
Representation for Cumulative foot pressure Image, The second CJK Joint
Workshop on Pattern Recognition(CJKPR), 2010 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans can be distinguished by their different limb movements and unique ground
reaction forces. A cumulative foot pressure image is a 2-D cumulative ground
reaction force recorded during one gait cycle. Although it contains both
spatial and temporal pressure distribution information, it suffers from several
problems, including different shoes and noise, when put into practice as a new
biometric for pedestrian identification. In this paper, we propose a
hierarchical translation-invariant representation for cumulative foot pressure
images, inspired by the success of convolutional deep belief networks for digit
classification. The key contribution of our approach is a discriminative
hierarchical sparse coding scheme which helps to learn useful discriminative
high-level visual features. Based on the feature representation
of cumulative foot pressure images, we develop a pedestrian recognition system
which is invariant to three different shoes and slight local shape change.
Experiments are conducted on a proposed open dataset that contains more than
2800 cumulative foot pressure images from 118 subjects. Evaluations suggest the
effectiveness of the proposed method and the potential of cumulative foot
pressure images as a biometric.
| [
{
"version": "v1",
"created": "Tue, 26 Oct 2010 15:16:50 GMT"
}
] | 1,288,137,600,000 | [
[
"Zheng",
"Shuai",
""
],
[
"Huang",
"Kaiqi",
""
],
[
"Tan",
"Tieniu",
""
]
] |
1011.0098 | Reinhard Moratz | Till Mossakowski, Reinhard Moratz | Qualitative Reasoning about Relative Direction on Adjustable Levels of
Granularity | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important issue in Qualitative Spatial Reasoning is the representation of
relative direction. In this paper we present simple geometric rules that enable
reasoning about relative direction between oriented points. This framework, the
Oriented Point Algebra OPRA_m, has a scalable granularity m. We develop a
simple algorithm for computing the OPRA_m composition tables and prove its
correctness. Using a composition table, algebraic closure for a set of OPRA
statements is sufficient to solve spatial navigation tasks. And it turns out
that scalable granularity is useful in these navigation tasks.
| [
{
"version": "v1",
"created": "Sat, 30 Oct 2010 19:06:44 GMT"
}
] | 1,288,656,000,000 | [
[
"Mossakowski",
"Till",
""
],
[
"Moratz",
"Reinhard",
""
]
] |
1011.0187 | Sahin Emrah Amrahov | \c{S}ahin Emrah Amrahov, Orhan A. Nooraden | A Distributed AI Aided 3D Domino Game | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the article a turn-based game played on four computers connected via
network is investigated. Three of the computers are controlled by natural
intelligence and one by artificial intelligence. The game table is seen from
each player's own viewpoint on that player's monitor. The domino pieces are
three-dimensional. The TCP/IP protocol is used for the distributed system, and
Microsoft XNA technology is applied to obtain the 3D image. Domino 101 is a
nondeterministic game: the result of the game depends on the initial random
distribution of the pieces. The number of distributions is equal to the product
of the following combinations: . Moreover, in this four-player game, the
players are divided into two pairs. Accordingly, we cannot predict how a player
will use the dominoes, that is, whether according to the dominoes of his/her
partner or according to his/her own dominoes. The fact that the natural
intelligence can play at any level also affects the outcome. These reasons make
it difficult to develop an AI. In the article four levels of AI are developed.
The AI at the first level is equivalent to the intelligence of a child who
knows the rules of the game and recognizes the numbers. The AI at this level
plays if it has a suitable domino and otherwise says pass. In most of the games
that can be played on the internet, the AI does the same. The AI at the last
level, however, is a master player, and it can adapt itself to its competitors'
levels.
| [
{
"version": "v1",
"created": "Sun, 31 Oct 2010 18:14:44 GMT"
}
] | 1,426,550,400,000 | [
[
"Amrahov",
"Şahin Emrah",
""
],
[
"Nooraden",
"Orhan A.",
""
]
] |
1011.0190 | Sahin Emrah Amrahov | \c{S}ahin Emrah Amrahov, Fatih Aybar and Serhat Do\u{g}an | Prunnig Algorithm of Generation a Minimal Set of Rule Reducts Based on
Rough Set Theory | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper the rule reduct generation problem, based on Rough Set Theory, is
considered. The Rule Reduct Generation (RG) and Modified Rule Generation (MRG)
algorithms are well known. As an alternative to these algorithms, a Pruning
Algorithm for Generating a Minimal Set of Rule Reducts, briefly the Pruning
Rule Generation (PRG) algorithm, is developed. The PRG algorithm uses a
tree-structured data type and is compared with the RG and MRG algorithms.
| [
{
"version": "v1",
"created": "Sun, 31 Oct 2010 18:46:50 GMT"
}
] | 1,426,550,400,000 | [
[
"Amrahov",
"Şahin Emrah",
""
],
[
"Aybar",
"Fatih",
""
],
[
"Doğan",
"Serhat",
""
]
] |
1011.0233 | Weiming Liu | Weiming Liu, Sanjiang Li | Reasoning about Cardinal Directions between Extended Objects: The
Hardness Result | 24 pages, 24 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cardinal direction calculus (CDC) proposed by Goyal and Egenhofer is a
very expressive qualitative calculus for directional information of extended
objects. Early work has shown that consistency checking of complete networks of
basic CDC constraints is tractable while reasoning with the CDC in general is
NP-hard. This paper shows, however, that if some constraints are allowed to remain unspecified,
then consistency checking of possibly incomplete networks of basic CDC
constraints is already intractable. This draws a sharp boundary between the
tractable and intractable subclasses of the CDC. The result is achieved by a
reduction from the well-known 3-SAT problem.
| [
{
"version": "v1",
"created": "Mon, 1 Nov 2010 01:02:39 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Nov 2010 01:26:45 GMT"
}
] | 1,288,915,200,000 | [
[
"Liu",
"Weiming",
""
],
[
"Li",
"Sanjiang",
""
]
] |
1011.0330 | Thomas Cederborg Mr | Thomas Cederborg and Pierre-Yves Oudeyer | Imitation learning of motor primitives and language bootstrapping in
robots | This paper has been withdrawn by the author due to several issues
regarding the clarity of presentation | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imitation learning in robots, also called programming by demonstration, has
made important advances in recent years, allowing humans to teach
context-dependent motor skills/tasks to robots. We propose to extend the usual contexts
investigated to also include acoustic linguistic expressions that might denote
a given motor skill, and thus we target joint learning of the motor skills and
their potential acoustic linguistic name. In addition to this, a modification
of a class of existing algorithms within the imitation learning framework is
made so that they can handle the unlabeled demonstration of several tasks/motor
primitives without having to inform the imitator of what task is being
demonstrated or how many tasks there are, which is a necessity for language
learning, i.e., if one wants to teach naturally an open number of new motor
skills together with their acoustic names. Finally, a mechanism for detecting
whether or not linguistic input is relevant to the task is also proposed, and
our architecture also allows the robot to find the right framing for a given
identified motor primitive. With these additions it becomes possible to build
an imitator that bridges the gap between imitation learning and language
learning by being able to learn linguistic expressions using methods from the
imitation learning community. In this sense the imitator can learn a word by
guessing whether a certain speech pattern present in the context means that a
specific task is to be executed. The imitator is however not assumed to know
that speech is relevant and has to figure this out on its own by looking at the
demonstrations: indeed, the architecture allows the robot to transparently also
learn tasks which should not be triggered by an acoustic word, but for example
by the color or position of an object or a gesture made by someone in the
environment. To demonstrate this ability to find the ...
| [
{
"version": "v1",
"created": "Mon, 1 Nov 2010 14:26:09 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Nov 2010 11:08:50 GMT"
},
{
"version": "v3",
"created": "Sun, 11 Mar 2012 12:11:11 GMT"
}
] | 1,331,596,800,000 | [
[
"Cederborg",
"Thomas",
""
],
[
"Oudeyer",
"Pierre-Yves",
""
]
] |
1011.0628 | Julie M David | Julie M. David And Kannan Balakrishnan | Significance of Classification Techniques in Prediction of Learning
Disabilities | 10 pages, 3 tables and 2 figures | International Journal of Artificial Intelligence&Applications, Vol
1, No.4, Oct. 2010, pp 111-120 | 10.5121/ijaia.2010.1409 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this study is to show the importance of two classification
techniques, viz. decision tree and clustering, in prediction of learning
disabilities (LD) of school-age children. LDs affect about 10 percent of all
children enrolled in schools. The problems of children with specific learning
disabilities have been a cause of concern to parents and teachers for some
time. Decision trees and clustering are powerful and popular tools used for
classification and prediction in data mining. Different rules extracted from
the decision tree are used for prediction of learning disabilities. Clustering
is the assignment of a set of observations into subsets, called clusters, which
are useful in finding the different signs and symptoms (attributes) present in
the LD affected child. In this paper, the J48 algorithm is used for
constructing the decision tree and the K-means algorithm is used for creating
the clusters. By
applying these classification techniques, LD in any child can be identified.
| [
{
"version": "v1",
"created": "Tue, 2 Nov 2010 14:37:51 GMT"
}
] | 1,288,742,400,000 | [
[
"Balakrishnan",
"Julie M. David And Kannan",
""
]
] |
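
As a hedged sketch of the pipeline described in the preceding abstract (a decision tree whose extracted rules predict LD, plus clustering of symptom attributes), the snippet below uses scikit-learn. The paper uses Weka's J48 (C4.5) and K-means; scikit-learn's DecisionTreeClassifier is CART-based, so this is only an approximation, and the data here are random placeholders.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.cluster import KMeans

# Placeholder data: rows are children, columns are binary signs/symptoms,
# y indicates whether the child is LD-affected (1) or not (0).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 8))
y = rng.integers(0, 2, size=100)

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)   # stands in for J48/C4.5
print(export_text(tree))                               # human-readable rules

kmeans = KMeans(n_clusters=3, n_init=10).fit(X)        # group symptom profiles
print(kmeans.labels_[:10])
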
1011.0950 | Priyankar Ghosh | Priyankar Ghosh and Pallab Dasgupta | Detecting Ontological Conflicts in Protocols between Semantic Web
Services | null | International Journal of Web & Semantic Technology (IJWest) Vol.1,
Num.4, October 2010 | 10.5121/ijwest.2010.1403 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of verifying the compatibility between interacting web services has
traditionally been limited to checking the compatibility of the interaction
protocol in terms of message sequences and the type of data being exchanged.
Since web services are developed largely in an uncoordinated way, different
services often use independently developed ontologies for the same domain
instead of adhering to a single ontology as standard. In this work we
investigate the approaches that can be taken by the server to verify the
possibility to reach a state with semantically inconsistent results during the
execution of a protocol with a client, if the client ontology is published.
Often a database is used to store the actual data along with the ontologies
instead of storing the actual data as a part of the ontology description. It is
important to observe that at the current state of the database the semantic
conflict state may not be reached even if the verification done by the server
indicates the possibility of reaching a conflict state. A relational algebra
based decision procedure is also developed to incorporate the current state of
the client and the server databases in the overall verification procedure.
| [
{
"version": "v1",
"created": "Wed, 3 Nov 2010 17:33:23 GMT"
}
] | 1,594,944,000,000 | [
[
"Ghosh",
"Priyankar",
""
],
[
"Dasgupta",
"Pallab",
""
]
] |
1011.1478 | Velimir Ilic | Velimir M. Ilic, Dejan I. Mancev, Branimir T. Todorovic, Miomir S.
Stankovic | Gradient Computation In Linear-Chain Conditional Random Fields Using The
Entropy Message Passing Algorithm | 11 pages, 2 tables, 3 figures, 2 algorithms | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper proposes a numerically stable recursive algorithm for the exact
computation of the linear-chain conditional random field gradient. It operates
as a forward algorithm over the log-domain expectation semiring and has the
purpose of enhancing memory efficiency when applied to long observation
sequences. Unlike the traditional algorithm based on the forward-backward
recursions, the memory complexity of our algorithm does not depend on the
sequence length. The experiments on real data show that it can be useful for
the problems which deal with long sequences.
| [
{
"version": "v1",
"created": "Fri, 5 Nov 2010 18:41:03 GMT"
},
{
"version": "v2",
"created": "Wed, 30 May 2012 13:46:56 GMT"
}
] | 1,338,422,400,000 | [
[
"Ilic",
"Velimir M.",
""
],
[
"Mancev",
"Dejan I.",
""
],
[
"Todorovic",
"Branimir T.",
""
],
[
"Stankovic",
"Miomir S.",
""
]
] |
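
The preceding abstract describes a forward-only recursion whose memory does not depend on the sequence length. The sketch below illustrates that idea for the simpler task of accumulating expected transition counts of a linear chain with shared transition scores; it is not the paper's full CRF-gradient algorithm, and the per-step renormalization used for numerical stability here stands in for the log-domain expectation semiring.

import numpy as np

def forward_expected_transitions(unary, trans):
    """unary: (T, K) log unary scores; trans: (K, K) log transition scores.
    Returns the (K, K) expected transition counts under the chain distribution,
    computed in a single forward pass with memory independent of T."""
    T, K = unary.shape
    alpha = np.exp(unary[0])          # total weight of prefixes ending in each state
    E = np.zeros((K, K, K))           # E[s] = weighted transition counts of those prefixes
    for t in range(1, T):
        step = np.exp(trans + unary[t][None, :])        # step[i, j]
        new_alpha = alpha @ step
        new_E = np.einsum('ikl,ij->jkl', E, step)       # carry old counts forward
        for i in range(K):                              # add the newly taken edge (i -> j)
            for j in range(K):
                new_E[j, i, j] += alpha[i] * step[i, j]
        z = new_alpha.sum()                             # rescale for stability
        alpha, E = new_alpha / z, new_E / z
    return E.sum(axis=0) / alpha.sum()
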
1011.1660 | Ali Akbar Kiaei Khoshroudbari | Hesam Sagha, Saeed Bagheri Shouraki, Hosein Khasteh, and Ali Akbar
Kiaei | Reinforcement Learning Based on Active Learning Method | 5 pages, 11 figures, 1 table | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a new reinforcement learning approach is proposed which is
based on a powerful concept named Active Learning Method (ALM) in modeling. ALM
expresses any multi-input-single-output system as a fuzzy combination of some
single-input-single-output systems. The proposed method is an actor-critic
system similar to Generalized Approximate Reasoning based Intelligent Control
(GARIC) structure to adapt the ALM by delayed reinforcement signals. Our system
uses Temporal Difference (TD) learning to model the behavior of useful actions
of a control system. The goodness of an action is modeled on the
Reward-Penalty Plane. IDS planes are updated according to this plane. It is shown
that the system can learn with a predefined fuzzy system or without it (through
random actions).
| [
{
"version": "v1",
"created": "Sun, 7 Nov 2010 17:45:57 GMT"
}
] | 1,289,260,800,000 | [
[
"Sagha",
"Hesam",
""
],
[
"Shouraki",
"Saeed Bagheri",
""
],
[
"Khasteh",
"Hosein",
""
],
[
"Kiaei",
"Ali Akbar",
""
]
] |
1011.1662 | Ali Akbar Kiaei Khoshroudbari | Seyed Hossein Khasteh, Saeid Bagheri Shouraki, and Ali Akbar Kiaei | A New Sufficient Condition for 1-Coverage to Imply Connectivity | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An effective approach for energy conservation in wireless sensor networks is
scheduling sleep intervals for extraneous nodes while the remaining nodes stay
active to provide continuous service. For the sensor network to operate
successfully, the active nodes must maintain both sensing coverage and network
connectivity. It was proved before that if the communication range of nodes is
at least twice the sensing range, complete coverage of a convex area implies
connectivity among the working set of nodes. In this paper we consider a
rectangular region $A = a \times b$ such that $R_s \le a$ and $R_s \le b$,
where $R_s$ is the sensing range of the nodes, and put a constraint on the
minimum allowed distance between nodes (s). According to this constraint, we
present a new lower bound (s 2 + 3 *R) for the communication range relative to
the sensing range of the sensors such that complete coverage of the considered
area implies connectivity among the working set of nodes; we also present a new
distribution method that satisfies our constraint.
| [
{
"version": "v1",
"created": "Sun, 7 Nov 2010 17:49:38 GMT"
}
] | 1,289,260,800,000 | [
[
"Khasteh",
"Seyed Hossein",
""
],
[
"Shouraki",
"Saeid Bagheri",
""
],
[
"Kiaei",
"Ali Akbar",
""
]
] |
1011.2304 | Cedric Bernier | Samuel Nowakowski (LORIA), C\'edric Bernier (LORIA), Anne Boyer
(LORIA) | Target tracking in the recommender space: Toward a new recommender
system based on Kalman filtering | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new approach for recommender systems based on
target tracking by Kalman filtering. We assume that users and their seen
resources are vectors in the multidimensional space of the categories of the
resources. Knowing this space, we propose an algorithm based on a Kalman filter
to track users and to predict their future position in the recommendation
space.
| [
{
"version": "v1",
"created": "Wed, 10 Nov 2010 08:26:56 GMT"
}
] | 1,289,433,600,000 | [
[
"Nowakowski",
"Samuel",
"",
"LORIA"
],
[
"Bernier",
"Cédric",
"",
"LORIA"
],
[
"Boyer",
"Anne",
"",
"LORIA"
]
] |
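
A minimal sketch of the tracking idea above: each user is a point in the space of resource categories, and a Kalman filter with a constant-velocity model predicts the user's next position from the category vectors of consumed resources. The state model and noise levels are illustrative assumptions, not the paper's exact design.

import numpy as np

class UserTracker:
    """Constant-velocity Kalman filter over a d-dimensional category space."""
    def __init__(self, d, q=1e-3, r=1e-1):
        self.d = d
        self.x = np.zeros(2 * d)                  # state: [position, velocity]
        self.P = np.eye(2 * d)
        self.F = np.eye(2 * d)
        self.F[:d, d:] = np.eye(d)                # position += velocity
        self.H = np.hstack([np.eye(d), np.zeros((d, d))])   # positions are observed
        self.Q = q * np.eye(2 * d)
        self.R = r * np.eye(d)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the category vector z of the newly consumed resource
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(2 * self.d) - K @ self.H) @ self.P
        return self.x[:self.d]    # predicted position: recommend nearby resources
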
1011.4362 | Bruno Scherrer | Bruno Scherrer (INRIA Lorraine - LORIA) | Should one compute the Temporal Difference fix point or minimize the
Bellman Residual? The unified oblique projection view | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate projection methods, for evaluating a linear approximation of
the value function of a policy in a Markov Decision Process context. We
consider two popular approaches, the one-step Temporal Difference fix-point
computation (TD(0)) and the Bellman Residual (BR) minimization. We describe
examples, where each method outperforms the other. We highlight a simple
relation between the objective function they minimize, and show that while BR
enjoys a performance guarantee, TD(0) does not in general. We then propose a
unified view in terms of oblique projections of the Bellman equation, which
substantially simplifies and extends the characterization of (Schoknecht, 2002)
and the recent analysis of (Yu & Bertsekas, 2008). Eventually, we describe some
simulations that suggest that if the TD(0) solution is usually slightly better
than the BR solution, its inherent numerical instability makes it very bad in
some cases, and thus worse on average.
| [
{
"version": "v1",
"created": "Fri, 19 Nov 2010 08:20:30 GMT"
}
] | 1,290,384,000,000 | [
[
"Scherrer",
"Bruno",
"",
"INRIA Lorraine - LORIA"
]
] |
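
For concreteness, the two projection methods compared above admit closed-form solutions for a known finite MDP with feature matrix Phi, transition matrix P, reward vector r, discount gamma and state-weighting d. The sketch below computes both; it is illustrative only and ignores sampling issues.

import numpy as np

def td0_and_br_solutions(Phi, P, r, gamma, d):
    """Linear value-function approximation V ~ Phi @ theta.
    TD(0) fixed point:  Phi' D (Phi - gamma P Phi) theta = Phi' D r
    Bellman residual:   minimize the D-weighted norm of (Phi - gamma P Phi) theta - r"""
    D = np.diag(d)
    A = Phi - gamma * (P @ Phi)
    theta_td = np.linalg.solve(Phi.T @ D @ A, Phi.T @ D @ r)
    theta_br = np.linalg.solve(A.T @ D @ A, A.T @ D @ r)
    return theta_td, theta_br

# Tiny example: 3 states, 2 features.
P = np.array([[0.5, 0.5, 0.0], [0.1, 0.6, 0.3], [0.2, 0.2, 0.6]])
r = np.array([0.0, 1.0, 2.0])
Phi = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
d = np.ones(3) / 3
print(td0_and_br_solutions(Phi, P, r, 0.9, d))
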
1011.5349 | Christian Blum | Hugo Hern\'andez and Christian Blum | Distributed Graph Coloring: An Approach Based on the Calling Behavior of
Japanese Tree Frogs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph coloring, also known as vertex coloring, considers the problem of
assigning colors to the nodes of a graph such that adjacent nodes do not share
the same color. The optimization version of the problem concerns the
minimization of the number of used colors. In this paper we deal with the
problem of finding valid colorings of graphs in a distributed way, that is, by
means of an algorithm that only uses local information for deciding the color
of the nodes. Such algorithms prescind from any central control. Due to the
fact that quite a few practical applications require to find colorings in a
distributed way, the interest in distributed algorithms for graph coloring has
been growing during the last decade. As an example consider wireless ad-hoc and
sensor networks, where tasks such as the assignment of frequencies or the
assignment of TDMA slots are strongly related to graph coloring.
The algorithm proposed in this paper is inspired by the calling behavior of
Japanese tree frogs. Male frogs use their calls to attract females.
Interestingly, groups of males that are located nearby each other desynchronize
their calls. This is because female frogs are only able to correctly localize
the male frogs when their calls are not too close in time. We experimentally
show that our algorithm is very competitive with the current state of the art,
using different sets of problem instances and comparing to one of the most
competitive algorithms from the literature.
| [
{
"version": "v1",
"created": "Wed, 24 Nov 2010 11:47:59 GMT"
}
] | 1,290,643,200,000 | [
[
"Hernández",
"Hugo",
""
],
[
"Blum",
"Christian",
""
]
] |
1011.5480 | Gabriel Synnaeve | Gabriel Synnaeve (LIG), Pierre Bessiere (LPPA) | Bayesian Modeling of a Human MMORPG Player | 30th international workshop on Bayesian Inference and Maximum
Entropy, Chamonix : France (2010) | null | 10.1063/1.3573658 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes an application of Bayesian programming to the control of
an autonomous avatar in a multiplayer role-playing game (the example is based
on World of Warcraft). We model a particular task, which consists of choosing
what to do and to select which target in a situation where allies and foes are
present. We explain the model in Bayesian programming and show how we could
learn the conditional probabilities from data gathered during human-played
sessions.
| [
{
"version": "v1",
"created": "Wed, 24 Nov 2010 20:07:49 GMT"
}
] | 1,432,080,000,000 | [
[
"Synnaeve",
"Gabriel",
"",
"LIG"
],
[
"Bessiere",
"Pierre",
"",
"LPPA"
]
] |
1011.5951 | Emad Saad | Emad Saad | Reinforcement Learning in Partially Observable Markov Decision Processes
using Hybrid Probabilistic Logic Programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a probabilistic logic programming framework to reinforcement
learning, by integrating reinforcement learning, in POMDP environments, with
normal hybrid probabilistic logic programs with probabilistic answer set
semantics, that is capable of representing domain-specific knowledge. We
formally prove the correctness of our approach. We show that the complexity of
finding a policy for a reinforcement learning problem in our approach is
NP-complete. In addition, we show that any reinforcement learning problem can
be encoded as a classical logic program with answer set semantics. We also show
that a reinforcement learning problem can be encoded as a SAT problem. We
present a new high level action description language that allows the factored
representation of POMDP. Moreover, we modify the original model of POMDP so
that it is able to distinguish between knowledge-producing actions and actions
that change the environment.
| [
{
"version": "v1",
"created": "Sat, 27 Nov 2010 07:48:08 GMT"
}
] | 1,291,075,200,000 | [
[
"Saad",
"Emad",
""
]
] |
1011.6220 | Ramakrishna Kolikipogu | K.Sasidhar, Vijaya L Kakulapati, Kolikipogu Ramakrishna and
K.KailasaRao | Multimodal Biometric Systems - Study to Improve Accuracy and Performance | 8 pages,5 figures, published in International Journal of Computer
Science & Engineering Survey (IJCSES) Vol.1, No.2, November 2010 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biometrics is the science and technology of measuring and analyzing
biological data of human body, extracting a feature set from the acquired data,
and comparing this set against to the template set in the database.
Experimental studies show that unimodal biometric systems have many
disadvantages regarding performance and accuracy. Multimodal biometric systems
perform better than unimodal biometric systems and are popular even though they
are more complex. We examine the accuracy and performance of multimodal
biometric authentication systems using state-of-the-art Commercial
Off-The-Shelf (COTS)
products. Here we discuss fingerprint and face biometric systems, decision and
fusion techniques used in these systems. We also discuss their advantage over
unimodal biometric systems.
| [
{
"version": "v1",
"created": "Mon, 29 Nov 2010 13:10:27 GMT"
}
] | 1,291,075,200,000 | [
[
"Sasidhar",
"K.",
""
],
[
"Kakulapati",
"Vijaya L",
""
],
[
"Ramakrishna",
"Kolikipogu",
""
],
[
"KailasaRao",
"K.",
""
]
] |
1012.0322 | Vitaly Schetinin | Vitaly Schetinin, Jonathan Fieldsend, Derek Partridge, Wojtek
Krzanowski, Richard Everson, Trevor Bailey and Adolfo Hernandez | A Bayesian Methodology for Estimating Uncertainty of Decisions in
Safety-Critical Systems | null | Frontiers in Artificial Intelligence and Applications. Volume 149,
IOS Press Book, 2006. Integrated Intelligent Systems for Engineering Design.
Edited by Xuan F. Zha, R.J. Howlett. ISBN 978-1-58603-675-1, pp. 82-96 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Uncertainty of decisions in safety-critical engineering applications can be
estimated on the basis of the Bayesian Markov Chain Monte Carlo (MCMC)
technique of averaging over decision models. The use of decision tree (DT)
models assists experts to interpret causal relations and find factors of the
uncertainty. Bayesian averaging also allows experts to estimate the uncertainty
accurately when a priori information on the favored structure of DTs is
available. Then an expert can select a single DT model, typically the Maximum a
Posteriori model, for interpretation purposes. Unfortunately, a priori
information on favored structure of DTs is not always available. For this
reason, we suggest a new prior on DTs for the Bayesian MCMC technique. We also
suggest a new procedure of selecting a single DT and describe an application
scenario. In our experiments on the Short-Term Conflict Alert data our
technique outperforms the existing Bayesian techniques in predictive accuracy
of the selected single DTs.
| [
{
"version": "v1",
"created": "Wed, 1 Dec 2010 21:08:04 GMT"
}
] | 1,291,334,400,000 | [
[
"Schetinin",
"Vitaly",
""
],
[
"Fieldsend",
"Jonathan",
""
],
[
"Partridge",
"Derek",
""
],
[
"Krzanowski",
"Wojtek",
""
],
[
"Everson",
"Richard",
""
],
[
"Bailey",
"Trevor",
""
],
[
"Hernandez",
"Adolfo",
""
]
] |
1012.0830 | Yves Moinard | Yves Moinard (INRIA - IRISA) | Using ASP with recent extensions for causal explanations | null | ASPOCP10, Answer Set Programming and Other Computing Paradigms
Workshop, associated with ICLP, Edinburgh : United Kingdom (2010) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine the practicality for a user of using Answer Set Programming (ASP)
for representing logical formalisms. We choose as an example a formalism aiming
at capturing causal explanations from causal information. We provide an
implementation, showing the naturalness and relative efficiency of this
translation job. We are interested in the ease for writing an ASP program, in
accordance with the claimed ``declarative'' aspect of ASP. Limitations of the
earlier systems (poor data structure and difficulty in reusing pieces of
programs) meant that, in practice, the ``declarative aspect'' was more
theoretical than practical. We show how recent improvements in working ASP
systems greatly facilitate the translation, even if a few improvements could
still be useful.
| [
{
"version": "v1",
"created": "Fri, 3 Dec 2010 20:07:21 GMT"
}
] | 1,291,593,600,000 | [
[
"Moinard",
"Yves",
"",
"INRIA - IRISA"
]
] |
1012.1255 | Predrag Janicic | Predrag Janicic (University of Belgrade) | URSA: A System for Uniform Reduction to SAT | 39 pages, uses tikz.sty | Logical Methods in Computer Science, Volume 8, Issue 3 (September
30, 2012) lmcs:1171 | 10.2168/LMCS-8(3:30)2012 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are a huge number of problems, from various areas, being solved by
reducing them to SAT. However, for many applications, translation into SAT is
performed by specialized, problem-specific tools. In this paper we describe a
new system for uniform solving of a wide class of problems by reducing them to
SAT. The system uses a new specification language URSA that combines imperative
and declarative programming paradigms. The reduction to SAT is defined
precisely by the semantics of the specification language. The domain of the
approach is wide (e.g., many NP-complete problems can be simply specified and
then solved by the system) and there are problems easily solvable by the
proposed system, while they can be hardly solved by using other programming
languages or constraint programming systems. So, the system can be seen not
only as a tool for solving problems by reducing them to SAT, but also as a
general-purpose constraint solving system (for finite domains). In this paper,
we also describe an open-source implementation of the described approach. The
performed experiments suggest that the system is competitive to
state-of-the-art related modelling systems.
| [
{
"version": "v1",
"created": "Mon, 6 Dec 2010 17:40:33 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Aug 2012 10:30:58 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Sep 2012 21:30:27 GMT"
}
] | 1,435,708,800,000 | [
[
"Janicic",
"Predrag",
"",
"University of Belgrade"
]
] |
1012.1619 | Adrian Paschke | Pablo Lopez-Garcia | Are SNOMED CT Browsers Ready for Institutions? Introducing MySNOM | in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott
Marshall, Paolo Romano: Proceedings of the 3rd International Workshop on
Semantic Web Applications and Tools for the Life Sciences, Berlin,Germany,
December 8-10, 2010 | null | null | SWAT4LS 2010 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | SNOMED Clinical Terms (SNOMED CT) is one of the most widespread ontologies in
the life sciences, with more than 300,000 concepts and relationships, but is
distributed with no associated software tools. In this paper we present MySNOM,
a web-based SNOMED CT browser. MySNOM allows organizations to browse their own
distribution of SNOMED CT under a controlled environment, focuses on navigating
using the structure of SNOMED CT, and has diagramming capabilities.
| [
{
"version": "v1",
"created": "Tue, 7 Dec 2010 21:45:50 GMT"
}
] | 1,291,852,800,000 | [
[
"Lopez-Garcia",
"Pablo",
""
]
] |
1012.1635 | Adrian Paschke | He Tan | A study on the relation between linguistics-oriented and domain-specific
semantics | in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott
Marshall, Paolo Romano: Proceedings of the 3rd International Workshop on
Semantic Web Applications and Tools for the Life Sciences, Berlin,Germany,
December 8-10, 2010 | null | null | SWAT4LS 2010 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we dealt with the comparison and linking between lexical
resources and domain knowledge provided by ontologies. It is one of the issues
for the combination of the Semantic Web Ontologies and Text Mining. We
investigated the relations between the linguistics-oriented and domain-specific
semantics, by associating the GO biological process concepts to the FrameNet
semantic frames. The result shows the gaps between the linguistics-oriented and
domain-specific semantics on the classification of events and the grouping of
target words. The result provides valuable information for the improvement of
domain ontologies that support text mining systems, and it will also benefit
language understanding technology.
| [
{
"version": "v1",
"created": "Tue, 7 Dec 2010 23:03:42 GMT"
}
] | 1,426,550,400,000 | [
[
"Tan",
"He",
""
]
] |
1012.1643 | Adrian Paschke | Adrian Paschke, Zhili Zhao | Process Makna - A Semantic Wiki for Scientific Workflows | in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott
Marshall, Paolo Romano: Proceedings of the 3rd International Workshop on
Semantic Web Applications and Tools for the Life Sciences, Berlin,Germany,
December 8-10, 2010 | null | null | SWAT4LS 2010 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virtual e-Science infrastructures supporting Web-based scientific workflows
are an example for knowledge-intensive collaborative and weakly-structured
processes where the interaction with the human scientists during process
execution plays a central role. In this paper we propose the lightweight
dynamic user-friendly interaction with humans during execution of scientific
workflows via the low-barrier approach of Semantic Wikis as an intuitive
interface for non-technical scientists. Our Process Makna Semantic Wiki system
is a novel combination of a business process management system adapted for
scientific workflows with a Corporate Semantic Web Wiki user interface
supporting knowledge intensive human interaction tasks during scientific
workflow execution.
| [
{
"version": "v1",
"created": "Tue, 7 Dec 2010 23:49:29 GMT"
}
] | 1,291,852,800,000 | [
[
"Paschke",
"Adrian",
""
],
[
"Zhao",
"Zhili",
""
]
] |
1012.1646 | Adrian Paschke | Richard Huber, Kirsten Hantelmann, Alexandru Todor, Sebastian Krebs,
Ralf Heese and Adrian Paschke | Use of semantic technologies for the development of a dynamic
trajectories generator in a Semantic Chemistry eLearning platform | in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott
Marshall, Paolo Romano: Proceedings of the 3rd International Workshop on
Semantic Web Applications and Tools for the Life Sciences, Berlin,Germany,
December 8-10, 2010 | null | null | SWAT4LS 2010 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ChemgaPedia is a multimedia, webbased eLearning service platform that
currently contains about 18.000 pages organized in 1.700 chapters covering the
complete bachelor studies in chemistry and related topics of chemistry,
pharmacy, and life sciences. The eLearning encyclopedia contains some 25.000
media objects and the eLearning platform provides services such as virtual and
remote labs for experiments. With up to 350.000 users per month the platform is
the most frequently used scientific educational service in the German spoken
Internet. In this demo we show the benefit of mapping the static eLearning
contents of ChemgaPedia to a Linked Data representation for Semantic Chemistry
which allows for generating dynamic eLearning paths tailored to the semantic
profiles of the users.
| [
{
"version": "v1",
"created": "Tue, 7 Dec 2010 23:55:47 GMT"
}
] | 1,291,852,800,000 | [
[
"Huber",
"Richard",
""
],
[
"Hantelmann",
"Kirsten",
""
],
[
"Todor",
"Alexandru",
""
],
[
"Krebs",
"Sebastian",
""
],
[
"Heese",
"Ralf",
""
],
[
"Paschke",
"Adrian",
""
]
] |
1012.1654 | Adrian Paschke | Adrian Groza, Radu Balaj | Using Semantic Wikis for Structured Argument in Medical Domain | in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott
Marshall, Paolo Romano: Proceedings of the 3rd International Workshop on
Semantic Web Applications and Tools for the Life Sciences, Berlin,Germany,
December 8-10, 2010 | null | null | SWAT4LS 2010 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This research applies ideas from argumentation theory in the context of
semantic wikis, aiming to provide support for structured-large scale
argumentation between human agents. The implemented prototype is exemplified by
modelling the MMR vaccine controversy.
| [
{
"version": "v1",
"created": "Wed, 8 Dec 2010 00:34:17 GMT"
}
] | 1,426,550,400,000 | [
[
"Groza",
"Adrian",
""
],
[
"Balaj",
"Radu",
""
]
] |
1012.1658 | Adrian Paschke | Julia Dmitrieva, Fons J. Verbeek | Creating a new Ontology: a Modular Approach | in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott
Marshall, Paolo Romano: Proceedings of the 3rd International Workshop on
Semantic Web Applications and Tools for the Life Sciences, Berlin,Germany,
December 8-10, 2010 | null | null | SWAT4LS 2010 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Creating a new Ontology: a Modular Approach
| [
{
"version": "v1",
"created": "Wed, 8 Dec 2010 00:41:20 GMT"
}
] | 1,426,550,400,000 | [
[
"Dmitrieva",
"Julia",
""
],
[
"Verbeek",
"Fons J.",
""
]
] |
1012.1667 | Adrian Paschke | Maria Perez, Rafael Berlanga, Ismael Sanz | A semantic approach for the requirement-driven discovery of web services
in the Life Sciences | in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott
Marshall, Paolo Romano: Proceedings of the 3rd International Workshop on
Semantic Web Applications and Tools for the Life Sciences, Berlin,Germany,
December 8-10, 2010 | null | null | SWAT4LS 2010 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research in the Life Sciences depends on the integration of large,
distributed and heterogeneous data sources and web services. The discovery of
which of these resources are the most appropriate to solve a given task is a
complex research question, since there is a large amount of plausible
candidates and there is little, mostly unstructured, metadata to be able to
decide among them. We contribute a semi-automatic approach, based on semantic
techniques, to assist researchers in the discovery of the most appropriate web
services to fulfil a set of given requirements.
| [
{
"version": "v1",
"created": "Wed, 8 Dec 2010 01:12:57 GMT"
}
] | 1,291,852,800,000 | [
[
"Perez",
"Maria",
""
],
[
"Berlanga",
"Rafael",
""
],
[
"Sanz",
"Ismael",
""
]
] |
1012.1743 | Adrian Paschke | Eric Leclercq and Marinette Savonnet | Scientific Collaborations: principles of WikiBridge Design | in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott Marshall, Paolo
Romano: Proceedings of the 3rd International Workshop on Semantic Web
Applications and Tools for the Life Sciences, Berlin,Germany, December 8-10,
2010 | null | null | SWAT4LS 2010 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic wikis, wikis enhanced with Semantic Web technologies, are
appropriate systems for community-authored knowledge models. They are
particularly suitable for scientific collaboration. This paper details the
design principles of WikiBridge, a semantic wiki.
| [
{
"version": "v1",
"created": "Wed, 8 Dec 2010 11:43:37 GMT"
}
] | 1,291,852,800,000 | [
[
"Leclercq",
"Eric",
""
],
[
"Savonnet",
"Marinette",
""
]
] |
1012.1745 | Adrian Paschke | Simon Jupp, Matthew Horridge, Luigi Iannone, Julie Klein, Stuart Owen,
Joost Schanstra, Robert Stevens, Katy Wolstencroft | Populous: A tool for populating ontology templates | in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott Marshall, Paolo
Romano: Proceedings of the 3rd International Workshop on Semantic Web
Applications and Tools for the Life Sciences, Berlin,Germany, December 8-10,
2010 | null | null | SWAT4LS 2010 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Populous, a tool for gathering content with which to populate an
ontology. Domain experts need to add content, that is often repetitive in its
form, but without having to tackle the underlying ontological representation.
Populous presents users with a table based form in which columns are
constrained to take values from particular ontologies; the user can select a
concept from an ontology via its meaningful label to give a value for a given
entity attribute. Populated tables are mapped to patterns that can then be used
to automatically generate the ontology's content. Populous's contribution is in
the knowledge gathering stage of ontology development. It separates knowledge
gathering from the conceptualisation and also separates the user from the
standard ontology authoring environments. As a result, Populous can allow
knowledge to be gathered in a straight-forward manner that can then be used to
do mass production of ontology content.
| [
{
"version": "v1",
"created": "Wed, 8 Dec 2010 11:55:06 GMT"
}
] | 1,291,852,800,000 | [
[
"Jupp",
"Simon",
""
],
[
"Horridge",
"Matthew",
""
],
[
"Iannone",
"Luigi",
""
],
[
"Klein",
"Julie",
""
],
[
"Owen",
"Stuart",
""
],
[
"Schanstra",
"Joost",
""
],
[
"Stevens",
"Robert",
""
],
[
"Wolstencroft",
"Katy",
""
]
] |
1012.1899 | Adrian Paschke | Halit Erdogan, Umut Oztok, Yelda Erdem, Esra Erdem | Querying Biomedical Ontologies in Natural Language using Answer Set | in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott
Marshall, Paolo Romano: Proceedings of the 3rd International Workshop on
Semantic Web Applications and Tools for the Life Sciences, Berlin,Germany,
December 8-10, 2010 | null | null | SWAT4LS 2010 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we develop an intelligent user interface that allows users to
enter biomedical queries in a natural language, and that presents the answers
(possibly with explanations if requested) in a natural language. We develop a
rule layer over biomedical ontologies and databases, and use automated
reasoners to answer queries considering relevant parts of the rule layer.
| [
{
"version": "v1",
"created": "Thu, 9 Dec 2010 00:12:24 GMT"
}
] | 1,291,939,200,000 | [
[
"Erdogan",
"Halit",
""
],
[
"Oztok",
"Umut",
""
],
[
"Erdem",
"Yelda",
""
],
[
"Erdem",
"Esra",
""
]
] |
1012.2148 | Yongzhi Cao | Yongzhi Cao, Guoqing Chen, and Etienne Kerre | Bisimulations for fuzzy transition systems | 13 double column pages | IEEE Trans. Fuzzy Syst., vol. 19, no. 3, pp. 540-552, 2011 | 10.1109/TFUZZ.2011.2117431 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | There has been a long history of using fuzzy language equivalence to compare
the behavior of fuzzy systems, but the comparison at this level is too coarse.
Recently, a finer behavioral measure, bisimulation, has been introduced to
fuzzy finite automata. However, the results obtained are applicable only to
finite-state systems. In this paper, we consider bisimulation for general fuzzy
systems which may be infinite-state or infinite-event, by modeling them as
fuzzy transition systems. To help understand and check bisimulation, we
characterize it in three ways by enumerating whole transitions, comparing
individual transitions, and using a monotonic function. In addition, we address
composition operations, subsystems, quotients, and homomorphisms of fuzzy
transition systems and discuss their properties connected with bisimulation.
The results presented here are useful for comparing the behavior of general
fuzzy systems. In particular, this makes it possible to relate an infinite
fuzzy system to a finite one, which is easier to analyze, with the same
behavior.
| [
{
"version": "v1",
"created": "Fri, 10 Dec 2010 00:24:42 GMT"
}
] | 1,479,168,000,000 | [
[
"Cao",
"Yongzhi",
""
],
[
"Chen",
"Guoqing",
""
],
[
"Kerre",
"Etienne",
""
]
] |
1012.2162 | Yongzhi Cao | Yongzhi Cao and Yoshinori Ezawa | Nondeterministic fuzzy automata | 14 pages, 3 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Fuzzy automata have long been accepted as a generalization of
nondeterministic finite automata. A closer examination, however, shows that the
fundamental property---nondeterminism---in nondeterministic finite automata has
not been well embodied in the generalization. In this paper, we introduce
nondeterministic fuzzy automata with or without $\epsilon$-moves and fuzzy languages
recognized by them. Furthermore, we prove that (deterministic) fuzzy automata,
nondeterministic fuzzy automata, and nondeterministic fuzzy automata with
$\epsilon$-moves are all equivalent in the sense that they recognize the same class
of fuzzy languages.
| [
{
"version": "v1",
"created": "Fri, 10 Dec 2010 02:42:30 GMT"
}
] | 1,426,550,400,000 | [
[
"Cao",
"Yongzhi",
""
],
[
"Ezawa",
"Yoshinori",
""
]
] |
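To make the fuzzy-language view in the record above concrete: a nondeterministic fuzzy automaton assigns each word a degree of acceptance rather than a yes/no answer. The sketch below computes that degree under the usual max-min composition over all runs; the state names, membership degrees and the choice of max-min composition are illustrative assumptions, not the paper's formal definitions.

```python
# Hypothetical illustration: degree to which a nondeterministic fuzzy automaton
# accepts a word, taking the max over runs of the min of the degrees met along
# the run. Structures and values here are assumptions for illustration only.

def accept_degree(initial, final, delta, word):
    """initial/final: dict state -> degree in [0,1];
    delta: dict (state, symbol) -> dict next_state -> degree."""
    # current[q] = strongest degree (min along the path so far) of reaching q
    current = dict(initial)
    for symbol in word:
        nxt = {}
        for q, d in current.items():
            for q2, td in delta.get((q, symbol), {}).items():
                cand = min(d, td)            # weakest link along this run
                if cand > nxt.get(q2, 0.0):  # keep the strongest run (max)
                    nxt[q2] = cand
        current = nxt
    return max((min(d, final.get(q, 0.0)) for q, d in current.items()), default=0.0)

if __name__ == "__main__":
    initial = {"s": 1.0}
    final = {"t": 0.9}
    delta = {("s", "a"): {"s": 0.8, "t": 0.5}, ("s", "b"): {"t": 0.7}}
    print(accept_degree(initial, final, delta, "ab"))  # min(0.8, 0.7, 0.9) = 0.7
```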
1012.2789 | Xiaoyue Wang Dr | Xiaoyue Wang and Hui Ding and Goce Trajcevski and Peter Scheuermann
and Eamonn Keogh | Experimental Comparison of Representation Methods and Distance Measures
for Time Series Data | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The previous decade has brought a remarkable increase of the interest in
applications that deal with querying and mining of time series data. Many of
the research efforts in this context have focused on introducing new
representation methods for dimensionality reduction or novel similarity
measures for the underlying data. In the vast majority of cases, each
individual work introducing a particular method has made specific claims and,
aside from the occasional theoretical justifications, provided quantitative
experimental observations. However, for the most part, the comparative aspects
of these experiments were too narrowly focused on demonstrating the benefits of
the proposed methods over some of the previously introduced ones. In order to
provide a comprehensive validation, we conducted an extensive experimental
study re-implementing eight different time series representations and nine
similarity measures and their variants, and testing their effectiveness on
thirty-eight time series data sets from a wide variety of application domains.
In this paper, we give an overview of these different techniques and present
our comparative experimental findings regarding their effectiveness. In
addition to providing a unified validation of some of the existing
achievements, our experiments also indicate that, in some cases, certain claims
in the literature may be unduly optimistic.
| [
{
"version": "v1",
"created": "Thu, 9 Dec 2010 19:43:53 GMT"
}
] | 1,426,550,400,000 | [
[
"Wang",
"Xiaoyue",
""
],
[
"Ding",
"Hui",
""
],
[
"Trajcevski",
"Goce",
""
],
[
"Scheuermann",
"Peter",
""
],
[
"Keogh",
"Eamonn",
""
]
] |
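Euclidean distance and dynamic time warping are two of the similarity measures that routinely appear in comparisons of the kind surveyed above. The sketch below contrasts them on a shifted pair of series; it illustrates the measures themselves and is not the authors' benchmark code.

```python
# Two classic time-series distance measures: Euclidean distance and
# dynamic time warping (DTW). Illustrative sketch only.
import math

def euclidean(a, b):
    assert len(a) == len(b), "Euclidean distance needs equal-length series"
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dtw(a, b):
    """Unconstrained DTW with squared point cost; returns the warped distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return math.sqrt(D[n][m])

if __name__ == "__main__":
    x = [0.0, 1.0, 2.0, 1.0, 0.0]
    y = [0.0, 0.0, 1.0, 2.0, 1.0]
    print(euclidean(x, y), dtw(x, y))  # DTW tolerates the shift, Euclidean does not
```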
1012.3280 | Cedric Bernier | Samuel Nowakowski (LORIA), C\'edric Bernier (LORIA), Anne Boyer
(LORIA) | A new Recommender system based on target tracking: a Kalman Filter
approach | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new approach for recommender systems based on
target tracking by Kalman filtering. We assume that users and their seen
resources are vectors in the multidimensional space of the categories of the
resources. Knowing this space, we propose an algorithm based on a Kalman filter
to track users and to predict their future positions in the recommendation
space.
| [
{
"version": "v1",
"created": "Wed, 15 Dec 2010 11:07:09 GMT"
}
] | 1,292,457,600,000 | [
[
"Nowakowski",
"Samuel",
"",
"LORIA"
],
[
"Bernier",
"Cédric",
"",
"LORIA"
],
[
"Boyer",
"Anne",
"",
"LORIA"
]
] |
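The recommendation idea in the record above rests on the standard Kalman predict/update cycle applied to a user's position in the space of resource categories. The following sketch shows that cycle with a constant-position motion model; the dimensionality and noise settings are assumptions, not the tuned model of the paper.

```python
# Generic Kalman filter predict/update step for tracking a user's position in a
# low-dimensional "category space". Motion model and noise values are assumed.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # Predict the next position of the user in category space
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the observation z (e.g. the category vector of a consulted item)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    d = 3                                     # three resource categories (assumed)
    F = H = np.eye(d)                         # static user, fully observed state
    Q, R = 0.01 * np.eye(d), 0.1 * np.eye(d)
    x, P = np.zeros(d), np.eye(d)
    for z in [np.array([1.0, 0.0, 0.0]), np.array([0.8, 0.2, 0.0])]:
        x, P = kalman_step(x, P, z, F, H, Q, R)
    print(F @ x)                              # predicted next position, used for ranking
```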
1012.3312 | Victor Odumuyiwa | Bolanle Oladejo (LORIA), Victor Odumuyiwa (LORIA), Amos David (LORIA) | Dynamic Capitalization and Visualization Strategy in Collaborative
Knowledge Management System for EI Process | null | International Conference in Knowledge Management and Knowledge
Economy ICKMKE 2010, Paris : France (2010) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge is attributed to humans, whose problem-solving behavior is subjective
and complex. In today's knowledge economy, the need to manage knowledge
produced by a community of actors cannot be overemphasized. This is due to the
fact that actors possess some level of tacit knowledge which is generally
difficult to articulate. Problem-solving requires searching and sharing of
knowledge among a group of actors in a particular context. Knowledge expressed
within the context of a problem resolution must be capitalized for future
reuse. In this paper, an approach that permits dynamic capitalization of
relevant and reliable actors' knowledge in solving decision problem following
Economic Intelligence process is proposed. Knowledge annotation method and
temporal attributes are used for handling the complexity in the communication
among actors and in contextualizing expressed knowledge. A prototype is built
to demonstrate the functionalities of a collaborative Knowledge Management
system based on this approach. It is tested with sample cases and the result
showed that dynamic capitalization leads to knowledge validation hence
increasing reliability of captured knowledge for reuse. The system can be
adapted to various domains.
| [
{
"version": "v1",
"created": "Wed, 15 Dec 2010 12:45:56 GMT"
}
] | 1,292,457,600,000 | [
[
"Oladejo",
"Bolanle",
"",
"LORIA"
],
[
"Odumuyiwa",
"Victor",
"",
"LORIA"
],
[
"David",
"Amos",
"",
"LORIA"
]
] |
1012.3336 | Victor Odumuyiwa | Olusoji Okunoye (LORIA), Bolanle Oladejo (LORIA), Victor Odumuyiwa
(LORIA) | Dynamic Knowledge Capitalization through Annotation among Economic
Intelligence Actors in a Collaborative Environment | null | Veille strat\'egique et scientifique VSST 2010, Toulouse : France
(2010) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The shift from industrial economy to knowledge economy in today's world has
revolutionized strategic planning in organizations as well as their problem
solving approaches. The point of focus today is knowledge and service
production, with more emphasis being laid on knowledge capital. Many
organizations are investing in tools that facilitate knowledge sharing among
their employees and they are as well promoting and encouraging collaboration
among their staff in order to build the organization's knowledge capital with
the ultimate goal of creating a lasting competitive advantage for their
organizations. One of the current leading approaches used for solving
organization's decision problem is the Economic Intelligence (EI) approach
which involves interactions among various actors called EI actors. These actors
collaborate to ensure the overall success of the decision problem solving
process. In the course of the collaboration, the actors express knowledge which
could be capitalized for future reuse. In this paper, we propose in the first
place, an annotation model for knowledge elicitation among EI actors. Because
of the need to build a knowledge capital, we also propose a dynamic knowledge
capitalisation approach for managing knowledge produced by the actors. Finally,
the need to manage the interactions and the interdependencies among
collaborating EI actors led to our third proposition, which constitutes an
awareness mechanism for group work management.
| [
{
"version": "v1",
"created": "Wed, 15 Dec 2010 13:56:05 GMT"
}
] | 1,292,457,600,000 | [
[
"Okunoye",
"Olusoji",
"",
"LORIA"
],
[
"Oladejo",
"Bolanle",
"",
"LORIA"
],
[
"Odumuyiwa",
"Victor",
"",
"LORIA"
]
] |
1012.3410 | Joel Ratsaby | Laszlo Kovacs and Joel Ratsaby | Descriptive-complexity based distance for fuzzy sets | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new distance function dist(A,B) for fuzzy sets A and B is introduced. It is
based on the descriptive complexity, i.e., the number of bits (on average) that
are needed to describe an element in the symmetric difference of the two sets.
The distance gives the amount of additional information needed to describe any
one of the two sets given the other. We prove its mathematical properties and
perform pattern clustering on data based on this distance.
| [
{
"version": "v1",
"created": "Wed, 15 Dec 2010 18:02:27 GMT"
}
] | 1,292,457,600,000 | [
[
"Kovacs",
"Laszlo",
""
],
[
"Ratsaby",
"Joel",
""
]
] |
1012.4046 | Tshilidzi Marwala | Bo Xing, Wen-Jing Gao, Kimberly Battle, Tshilidzi Marwala and Fulufhelo
V. Nelwamondo | Artificial Intelligence in Reverse Supply Chain Management: The State of
the Art | Proceedings of the Twenty-First Annual Symposium of the Pattern
Recognition Association of South Africa 22-23 November 2010 Stellenbosch,
South Africa, pp. 305-310 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Product take-back legislation forces manufacturers to bear the costs of
collection and disposal of products that have reached the end of their useful
lives. In order to reduce these costs, manufacturers can consider reuse,
remanufacturing and/or recycling of components as an alternative to disposal.
The implementation of such alternatives usually requires an appropriate reverse
supply chain management. With the concepts of reverse supply chain are gaining
popularity in practice, the use of artificial intelligence approaches in these
areas is also becoming popular. As a result, the purpose of this paper is to
give an overview of the recent publications concerning the application of
artificial intelligence techniques to reverse supply chain with emphasis on
certain types of product returns.
| [
{
"version": "v1",
"created": "Sat, 18 Dec 2010 01:12:14 GMT"
}
] | 1,292,889,600,000 | [
[
"Xing",
"Bo",
""
],
[
"Gao",
"Wen-Jing",
""
],
[
"Battle",
"Kimberly",
""
],
[
"Marwala",
"Tshildzi",
""
],
[
"Nelwamondo",
"Fulufhelo V.",
""
]
] |
1012.4776 | Nicolas Saunier | Nicolas Saunier and Sophie Midenet | Automatic Estimation of the Exposure to Lateral Collision in Signalized
Intersections using Video Sensors | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intersections constitute one of the most dangerous elements in road systems.
Traffic signals remain the most common way to control traffic at high-volume
intersections and offer many opportunities to apply intelligent transportation
systems to make traffic more efficient and safe. This paper describes an
automated method to estimate the temporal exposure of road users crossing the
conflict zone to lateral collision with road users originating from a different
approach. This component is part of a larger system relying on video sensors to
provide queue lengths and spatial occupancy that are used for real time traffic
control and monitoring. The method is evaluated on data collected during a real
world experiment.
| [
{
"version": "v1",
"created": "Tue, 21 Dec 2010 19:46:21 GMT"
}
] | 1,292,976,000,000 | [
[
"Saunier",
"Nicolas",
""
],
[
"Midenet",
"Sophie",
""
]
] |
1012.5585 | Tim Januschowski | Tim januschowski and Barbara M. Smith and M. R. C. van Dongen | Symmetry Breaking with Polynomial Delay | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A conservative class of constraint satisfaction problems CSPs is a class for
which membership is preserved under arbitrary domain reductions. Many
well-known tractable classes of CSPs are conservative. It is well known that
lexleader constraints may significantly reduce the number of solutions by
excluding symmetric solutions of CSPs. We show that adding certain lexleader
constraints to any instance of any conservative class of CSPs still allows us
to find all solutions with a time which is polynomial between successive
solutions. The time is polynomial in the total size of the instance and the
additional lexleader constraints. It is well known that for complete symmetry
breaking one may need an exponential number of lexleader constraints. However,
in practice, the number of additional lexleader constraints is typically
polynomial number in the size of the instance. For polynomially many lexleader
constraints, we may in general not have complete symmetry breaking but
polynomially many lexleader constraints may provide practically useful symmetry
breaking -- and they sometimes exclude super-exponentially many solutions. We
prove that for any instance from a conservative class, the time between finding
successive solutions of the instance with polynomially many additional
lexleader constraints is polynomial even in the size of the instance without
lexleader constraints.
| [
{
"version": "v1",
"created": "Mon, 27 Dec 2010 09:58:16 GMT"
}
] | 1,426,550,400,000 | [
[
"januschowski",
"Tim",
""
],
[
"Smith",
"Barbara M.",
""
],
[
"van Dongen",
"M. R. C.",
""
]
] |
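A lexleader constraint keeps, among the symmetric images of an assignment, only the lexicographically smallest one. The sketch below filters complete assignments against a set of variable symmetries given as permutations; it illustrates the constraint's semantics, not the propagation scheme analysed in the paper, and the permutations used are illustrative.

```python
# Sketch of lex-leader symmetry breaking for variable symmetries: a complete
# assignment X is kept only if it is lexicographically no greater than each of
# its symmetric images under the given permutations.

def lex_leq(a, b):
    return list(a) <= list(b)  # Python compares sequences lexicographically

def satisfies_lexleader(assignment, symmetries):
    """assignment: tuple of values for variables x0..x(n-1);
    symmetries: list of permutations, each a tuple p with p[i] = image of i."""
    for p in symmetries:
        image = tuple(assignment[p[i]] for i in range(len(assignment)))
        if not lex_leq(assignment, image):
            return False
    return True

if __name__ == "__main__":
    # Two interchangeable variables x0, x1: the symmetry swaps them.
    swap01 = (1, 0, 2)
    sols = [(1, 2, 3), (2, 1, 3)]        # symmetric solutions of some CSP
    kept = [s for s in sols if satisfies_lexleader(s, [swap01])]
    print(kept)                          # only the lex-least representative remains
```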
1012.5705 | Wan Ahmad Tajuddin Wan Abdullah | Wan Ahmad Tajuddin Wan Abdullah | Looking for plausibility | 6 pages, invited paper presented at the International Conference on
Advanced Computer Science and Information Systems 2010 (ICACSIS2010), Bali,
Indonesia, 20-22 November 2010 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the interpretation of experimental data, one is actually looking for
plausible explanations. We look for a measure of plausibility, with which we
can compare different possible explanations, and which can be combined when
there are different sets of data. This is contrasted to the conventional
measure for probabilities as well as to the proposed measure of possibilities.
We define what characteristics this measure of plausibility should have.
In getting to the conception of this measure, we explore the relation of
plausibility to abductive reasoning, and to Bayesian probabilities. We also
compare with the Dempster-Shafer theory of evidence, which also has its own
definition for plausibility. Abduction can be associated with biconditionality
in inference rules, and this provides a platform to relate to the
Collins-Michalski theory of plausibility. Finally, using a formalism for wiring
logic onto Hopfield neural networks, we ask if this is relevant in obtaining
this measure.
| [
{
"version": "v1",
"created": "Tue, 28 Dec 2010 07:14:32 GMT"
}
] | 1,293,667,200,000 | [
[
"Abdullah",
"Wan Ahmad Tajuddin Wan",
""
]
] |
1012.5815 | Tamal Ghosh | Tamal Ghosh, Mousumi Modak and Pranab K Dan | SAPFOCS: a metaheuristic based approach to part family formation
problems in group technology | 10 pages; 6 figures; 12 tables | International Journal
of Management Science and Engineering Management, 6(3): 231-240, 2011 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article deals with the part family formation problem, which is believed to be
moderately complicated to solve in polynomial time in the context of Group
Technology (GT). In the past literature, researchers observed that part
family formation techniques are principally based on production flow analysis
(PFA) which usually considers operational requirements, sequences and time.
Part Coding Analysis (PCA) is merely considered in GT which is believed to be
the proficient method to identify the part families. PCA classifies parts by
allotting them to different families based on their resemblances in: (1) design
characteristics such as shape and size, and/or (2) manufacturing
characteristics (machining requirements). A novel approach based on simulated
annealing namely SAPFOCS is adopted in this study to develop effective part
families exploiting the PCA technique. Thereafter Taguchi's orthogonal design
method is employed to solve the critical issues on the subject of parameters
selection for the proposed metaheuristic algorithm. The adopted technique is
therefore tested on 5 different datasets of size $5 \times 9$ to $27 \times 9$
and the obtained results are compared with C-Linkage clustering technique. The
experimental results reported that the proposed metaheuristic algorithm is
extremely effective in terms of the quality of the solution obtained and has
outperformed C-Linkage algorithm in most instances.
| [
{
"version": "v1",
"created": "Tue, 28 Dec 2010 18:57:04 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2011 07:18:26 GMT"
}
] | 1,305,158,400,000 | [
[
"Ghosh",
"Tamal",
""
],
[
"Modak",
"Mousumi",
""
],
[
"Dan",
"Pranab K",
""
]
] |
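The core of such an approach is a simulated-annealing loop over assignments of parts to families, scored by the dissimilarity of their code vectors. The skeleton below uses a Hamming-style dissimilarity, a single-part move and a geometric cooling schedule; these choices are illustrative assumptions, not the SAPFOCS or Taguchi-tuned settings.

```python
# Generic simulated-annealing skeleton for grouping parts into families based
# on the dissimilarity of their code vectors. Cost, move and cooling assumed.
import math
import random

def dissimilarity(a, b):
    return sum(x != y for x, y in zip(a, b))  # Hamming distance on code digits

def cost(assignment, parts):
    # total within-family dissimilarity
    total = 0
    for fam in set(assignment):
        members = [p for p, f in zip(parts, assignment) if f == fam]
        total += sum(dissimilarity(a, b) for i, a in enumerate(members)
                     for b in members[i + 1:])
    return total

def anneal(parts, n_families=2, t0=5.0, cooling=0.95, steps=2000, seed=0):
    rng = random.Random(seed)
    current = [rng.randrange(n_families) for _ in parts]
    best, best_cost, t = list(current), cost(current, parts), t0
    for _ in range(steps):
        cand = list(current)
        cand[rng.randrange(len(parts))] = rng.randrange(n_families)  # move one part
        delta = cost(cand, parts) - cost(current, parts)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if cost(current, parts) < best_cost:
                best, best_cost = list(current), cost(current, parts)
        t *= cooling
    return best, best_cost

if __name__ == "__main__":
    parts = [(1, 2, 3), (1, 2, 4), (7, 8, 9), (7, 8, 8)]  # toy 4 x 3 code matrix
    print(anneal(parts))
```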
1012.5847 | Joohyung Lee | Martin Gebser, Joohyung Lee and Yuliya Lierler | On Elementary Loops of Logic Programs | 36 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using the notion of an elementary loop, Gebser and Schaub refined the theorem
on loop formulas due to Lin and Zhao by considering loop formulas of elementary
loops only. In this article, we reformulate their definition of an elementary
loop, extend it to disjunctive programs, and study several properties of
elementary loops, including how maximal elementary loops are related to minimal
unfounded sets. The results provide useful insights into the stable model
semantics in terms of elementary loops. For a nondisjunctive program, using a
graph-theoretic characterization of an elementary loop, we show that the
problem of recognizing an elementary loop is tractable. On the other hand, we
show that the corresponding problem is {\sf coNP}-complete for a disjunctive
program. Based on the notion of an elementary loop, we present the class of
Head-Elementary-loop-Free (HEF) programs, which strictly generalizes the class
of Head-Cycle-Free (HCF) programs due to Ben-Eliyahu and Dechter. Like an HCF
program, an HEF program can be turned into an equivalent nondisjunctive program
in polynomial time by shifting head atoms into the body.
| [
{
"version": "v1",
"created": "Tue, 28 Dec 2010 21:49:11 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jan 2011 15:34:01 GMT"
}
] | 1,294,099,200,000 | [
[
"Gebser",
"Martin",
""
],
[
"Lee",
"Joohyung",
""
],
[
"Lierler",
"Yuliya",
""
]
] |
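Loops of a nondisjunctive program live in its positive dependency graph, and the maximal ones coincide with the non-singleton strongly connected components. The sketch below computes that graph and its components; testing elementarity requires the finer graph-theoretic check developed in the article and is not reproduced here.

```python
# Maximal loops of a nondisjunctive program as non-singleton strongly connected
# components of its positive dependency graph (first step only; elementarity
# needs a finer check).

def positive_dependency_graph(rules):
    """rules: list of (head_atom, positive_body_atoms). Edge head -> body atom."""
    graph = {}
    for head, body in rules:
        graph.setdefault(head, set()).update(body)
        for b in body:
            graph.setdefault(b, set())
    return graph

def sccs(graph):
    """Strongly connected components via Kosaraju's algorithm (iterative DFS)."""
    order, seen = [], set()
    def dfs(g, v, out):
        stack = [(v, iter(g[v]))]
        seen.add(v)
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(g[w])))
                    break
            else:
                stack.pop()
                out.append(node)       # postorder (finish time)
    for v in graph:
        if v not in seen:
            dfs(graph, v, order)
    rev = {v: set() for v in graph}
    for v, ws in graph.items():
        for w in ws:
            rev[w].add(v)
    seen, comps = set(), []
    for v in reversed(order):          # decreasing finish time on reverse graph
        if v not in seen:
            comp = []
            dfs(rev, v, comp)
            comps.append(set(comp))
    return comps

if __name__ == "__main__":
    # p :- q.   q :- p.   r :- p.
    rules = [("p", ["q"]), ("q", ["p"]), ("r", ["p"])]
    g = positive_dependency_graph(rules)
    print([c for c in sccs(g) if len(c) > 1])  # {'p', 'q'} is a maximal loop
```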
1012.5960 | Reinhard Moratz | Reinhard Moratz | Extending Binary Qualitative Direction Calculi with a Granular Distance
Concept: Hidden Feature Attachment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we introduce a method for extending binary qualitative
direction calculi with adjustable granularity like OPRAm or the star calculus
with a granular distance concept. This method is similar to the concept of
extending points with an internal reference direction to get oriented points
which are the basic entities in the OPRAm calculus. Even if the spatial objects
are, from a geometrical point of view, infinitesimally small points, locally
available reference measures are attached. In the case of OPRAm, a reference
direction is attached. The same principle works also with local reference
distances which are called elevations. The principle of attaching reference
features to a point is called hidden feature attachment.
| [
{
"version": "v1",
"created": "Wed, 29 Dec 2010 15:29:33 GMT"
}
] | 1,293,667,200,000 | [
[
"Moratz",
"Reinhard",
""
]
] |
1012.6018 | Fabien Tence | Fabien Tenc\'e (LISYC), C\'edric Buche (LISYC), Pierre De Loor
(LISYC), Olivier Marc (LISYC) | Learning a Representation of a Believable Virtual Character's
Environment with an Imitation Algorithm | null | GAMEON-ARABIA'10, Egypt (2010) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In video games, virtual characters' decision systems often use a simplified
representation of the world. To increase both their autonomy and believability
we want those characters to be able to learn this representation from human
players. We propose to use a model called growing neural gas to learn by
imitation the topology of the environment. The implementation of the model, the
modifications and the parameters we used are detailed. Then, the quality of the
learned representations and their evolution during the learning are studied
using different measures. Improvements for the growing neural gas to give more
information to the character's model are given in the conclusion.
| [
{
"version": "v1",
"created": "Wed, 29 Dec 2010 19:58:44 GMT"
}
] | 1,293,667,200,000 | [
[
"Tencé",
"Fabien",
"",
"LISYC"
],
[
"Buche",
"Cédric",
"",
"LISYC"
],
[
"De Loor",
"Pierre",
"",
"LISYC"
],
[
"Marc",
"Olivier",
"",
"LISYC"
]
] |
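A growing neural gas learns a topological map of the space of observed positions by moving a small set of units towards samples, ageing edges, and inserting units where accumulated error is largest. The condensed 2-D sketch below follows common GNG defaults; the parameter values and the two-room sampling are assumptions, not the game-environment setup of the paper.

```python
# Condensed growing neural gas (GNG) sketch: nodes track 2-D positions, edges
# carry ages, and new nodes are inserted where accumulated error is largest.
import random

class GNG:
    def __init__(self, eps_w=0.05, eps_n=0.006, age_max=50, lam=100,
                 alpha=0.5, d=0.995, seed=0):
        self.rng = random.Random(seed)
        self.pos = {0: [0.0, 0.0], 1: [1.0, 1.0]}   # two initial units
        self.err = {0: 0.0, 1: 0.0}
        self.edges = {}                              # frozenset({a, b}) -> age
        self.eps_w, self.eps_n = eps_w, eps_n
        self.age_max, self.lam, self.alpha, self.d = age_max, lam, alpha, d
        self.next_id, self.t = 2, 0

    def _d2(self, a, x):
        return sum((p - q) ** 2 for p, q in zip(self.pos[a], x))

    def step(self, x):
        self.t += 1
        s1, s2 = sorted(self.pos, key=lambda a: self._d2(a, x))[:2]
        self.err[s1] += self._d2(s1, x)
        for i in range(2):                           # move winner towards sample
            self.pos[s1][i] += self.eps_w * (x[i] - self.pos[s1][i])
        for e in list(self.edges):                   # move neighbours, age edges
            if s1 in e:
                n = next(iter(e - {s1}))
                for i in range(2):
                    self.pos[n][i] += self.eps_n * (x[i] - self.pos[n][i])
                self.edges[e] += 1
        self.edges[frozenset({s1, s2})] = 0          # refresh winner-pair edge
        for e, age in list(self.edges.items()):      # prune stale edges
            if age > self.age_max:
                del self.edges[e]
        used = {a for e in self.edges for a in e}
        for a in list(self.pos):                     # drop isolated units
            if a not in used:
                del self.pos[a]; del self.err[a]
        if self.t % self.lam == 0:
            self._insert()
        for a in self.err:                           # decay accumulated errors
            self.err[a] *= self.d

    def _insert(self):
        q = max(self.err, key=self.err.get)
        nbrs = [next(iter(e - {q})) for e in self.edges if q in e]
        if not nbrs:
            return
        f = max(nbrs, key=self.err.get)
        r = self.next_id; self.next_id += 1
        self.pos[r] = [(self.pos[q][i] + self.pos[f][i]) / 2 for i in range(2)]
        del self.edges[frozenset({q, f})]
        self.edges[frozenset({q, r})] = 0
        self.edges[frozenset({f, r})] = 0
        self.err[q] *= self.alpha; self.err[f] *= self.alpha
        self.err[r] = self.err[q]

if __name__ == "__main__":
    gng = GNG()
    for _ in range(3000):                            # samples from two "rooms"
        room = gng.rng.choice([(0.0, 0.0), (5.0, 5.0)])
        gng.step([room[0] + gng.rng.random(), room[1] + gng.rng.random()])
    print(len(gng.pos), "units learned")
```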
1101.2279 | Tuan Nguyen | Tuan Nguyen, Minh Do, Alfonso Gerevini, Ivan Serina, Biplav
Srivastava, Subbarao Kambhampati | Planning with Partial Preference Models | 38 pages, submitted to Artificial Intelligence Journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current work in planning with preferences assumes that the user's preference
models are completely specified and aims to search for a single solution plan.
In many real-world planning scenarios, however, the user probably cannot
provide any information about her desired plans, or in some cases can only
express partial preferences. In such situations, the planner has to present not
only one but a set of plans to the user, with the hope that some of them are
similar to the plan she prefers. We first propose the usage of different
measures to capture quality of plan sets that are suitable for such scenarios:
domain-independent distance measures defined based on plan elements (actions,
states, causal links) if no knowledge of the user's preferences is given, and
the Integrated Convex Preference measure in case the user's partial preference
is provided. We then investigate various heuristic approaches to find sets of
plans according to these measures, and present empirical results demonstrating
the promise of our approach.
| [
{
"version": "v1",
"created": "Wed, 12 Jan 2011 06:40:42 GMT"
}
] | 1,426,550,400,000 | [
[
"Nguyen",
"Tuan",
""
],
[
"Do",
"Minh",
""
],
[
"Gerevini",
"Alfonso",
""
],
[
"Serina",
"Ivan",
""
],
[
"Srivastava",
"Biplav",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
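One family of domain-independent measures mentioned above compares plans through their elements, for instance their action sets. The sketch below uses a Jaccard-style action distance and averages it over all pairs to score the diversity of a plan set; it is an illustration of the idea, not the paper's exact formulation or its Integrated Convex Preference measure.

```python
# Action-based (Jaccard-style) distance between plans and the average pairwise
# diversity of a plan set. Illustrative only.
from itertools import combinations

def action_distance(plan_a, plan_b):
    A, B = set(plan_a), set(plan_b)
    if not A | B:
        return 0.0
    return 1.0 - len(A & B) / len(A | B)

def plan_set_diversity(plans):
    pairs = list(combinations(plans, 2))
    return sum(action_distance(a, b) for a, b in pairs) / len(pairs)

if __name__ == "__main__":
    plans = [["load", "drive", "unload"],
             ["load", "fly", "unload"],
             ["walk", "carry"]]
    print(round(plan_set_diversity(plans), 3))  # higher means a more diverse set
```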
1101.2378 | Joachim Selke | Joachim Selke and Wolf-Tilo Balke | Extracting Features from Ratings: The Role of Factor Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Performing effective preference-based data retrieval requires detailed and
preferentially meaningful structured information about the current user as
well as the items under consideration. A common problem is that representations
of items often only consist of mere technical attributes, which do not resemble
human perception. This is particularly true for integral items such as movies
or songs. It is often claimed that meaningful item features could be extracted
from collaborative rating data, which is becoming available through social
networking services. However, there is only anecdotal evidence supporting this
claim; but if it is true, the extracted information could be very valuable for
preference-based data retrieval. In this paper, we propose a methodology to
systematically check this common claim. We performed a preliminary
investigation on a large collection of movie ratings and present initial
evidence.
| [
{
"version": "v1",
"created": "Wed, 12 Jan 2011 14:56:01 GMT"
}
] | 1,294,876,800,000 | [
[
"Selke",
"Joachim",
""
],
[
"Balke",
"Wolf-Tilo",
""
]
] |
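The claim under test above is that latent factors fitted to rating data behave like meaningful item features. A minimal latent-factor model fitted by stochastic gradient descent is sketched below; the rank, learning rate and regularisation are illustrative assumptions, and the learned item vectors play the role of the extracted features.

```python
# Minimal latent-factor model fitted by stochastic gradient descent on a toy
# rating matrix; the learned item vectors act as the "extracted features".
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.01, reg=0.05,
              epochs=200, seed=0):
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            e = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (e * qi - reg * pu)
                Q[i][f] += lr * (e * pu - reg * qi)
    return P, Q

if __name__ == "__main__":
    # (user, item, rating) triples for 3 users and 4 movies
    ratings = [(0, 0, 5), (0, 1, 4), (1, 0, 4), (1, 2, 1), (2, 2, 2), (2, 3, 1)]
    P, Q = factorize(ratings, n_users=3, n_items=4)
    print([[round(x, 2) for x in q] for q in Q])   # item feature vectors
```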
1101.3465 | Emanuel Gluskin | Emanuel Gluskin | The "psychological map of the brain", as a personal information card
(file), - a project for the student of the 21st century | This is an unusual work, not easy for classification. I beg the
readers' pardon for the excessive topical originality, but I tried to close
the gap between the "accelerating" specialization causing one to forget the
true Educational Side/Meaning that still can be found behind the modern
science and technology. There are a lot of points to be developed, - one more
disadvantage. 4 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We suggest a procedure that is relevant both to electronic performance and
human psychology, so that the creative logic and the respect for human nature
appear in a good agreement. The idea is to create an electronic card containing
basic information about a person's psychological behavior in order to make it
possible to quickly decide about the suitability of one for another. This
"psychological electronics" approach could be tested via student projects.
| [
{
"version": "v1",
"created": "Mon, 17 Jan 2011 17:30:31 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Jan 2011 06:21:26 GMT"
}
] | 1,295,481,600,000 | [
[
"Gluskin",
"Emanuel",
""
]
] |
1101.4356 | Matteo Cristani PhD | Elisa Burato and Matteo Cristani and Luca Vigan\`o | Meaning Negotiation as Inference | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Meaning negotiation (MN) is the general process with which agents reach an
agreement about the meaning of a set of terms. Artificial Intelligence scholars
have dealt with the problem of MN by means of argumentation schemes, belief
merging and information fusion operators, and ontology alignment, but the
proposed approaches depend upon the number of participants. In this paper, we
give a general model of MN for an arbitrary number of agents, in which each
participant discusses with the others her viewpoint by exhibiting it in an
actual set of constraints on the meaning of the negotiated terms. We call this
presentation of individual viewpoints an angle. The agents do not aim at
forming a common viewpoint but, instead, at agreeing about an acceptable common
angle. We analyze separately the process of MN by two agents (\emph{bilateral}
or \emph{pairwise} MN) and by more than two agents (\emph{multiparty} MN), and
we use game theoretic models to understand how the process develops in both
cases: the models are Bargaining Game for bilateral MN and English Auction for
multiparty MN. We formalize the process of reaching such an agreement by giving
a deduction system that comprises rules that are consistent and adequate for
representing MN.
| [
{
"version": "v1",
"created": "Sun, 23 Jan 2011 09:49:55 GMT"
}
] | 1,426,550,400,000 | [
[
"Burato",
"Elisa",
""
],
[
"Cristani",
"Matteo",
""
],
[
"Viganò",
"Luca",
""
]
] |
1102.0079 | Ping Zhu | Ping Zhu and Qiaoyan Wen | Information-theoretic measures associated with rough set approximations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although some information-theoretic measures of uncertainty or granularity
have been proposed in rough set theory, these measures are only dependent on
the underlying partition and the cardinality of the universe, independent of
the lower and upper approximations. It seems somewhat unreasonable since the
basic idea of rough set theory aims at describing vague concepts by the lower
and upper approximations. In this paper, we thus define new
information-theoretic entropy and co-entropy functions associated with the
partition and the approximations to measure the uncertainty and granularity of
an approximation space. After introducing the novel notions of entropy and
co-entropy, we then examine their properties. In particular, we discuss the
relationship of co-entropies between different universes. The theoretical
development is accompanied by illustrative numerical examples.
| [
{
"version": "v1",
"created": "Tue, 1 Feb 2011 05:38:33 GMT"
}
] | 1,296,604,800,000 | [
[
"Zhu",
"Ping",
""
],
[
"Wen",
"Qiaoyan",
""
]
] |
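The objects the proposed measures are built from are the partition of the universe and the lower and upper approximations of a target concept. The sketch below computes those ingredients together with the classical entropy of the partition; the paper's new entropy and co-entropy functions additionally depend on the approximations themselves and are not reproduced here.

```python
# Rough-set lower/upper approximations for a target concept X under a partition
# of the universe, plus the classical entropy of the partition (ingredients
# only; not the paper's proposed measures).
import math

def approximations(partition, X):
    lower = set().union(*[B for B in partition if B <= X])
    upper = set().union(*[B for B in partition if B & X])
    return lower, upper

def partition_entropy(partition):
    n = sum(len(B) for B in partition)
    return -sum((len(B) / n) * math.log2(len(B) / n) for B in partition)

if __name__ == "__main__":
    partition = [{1, 2}, {3, 4}, {5}]
    X = {2, 3, 4}
    lower, upper = approximations(partition, X)
    print(lower, upper, round(partition_entropy(partition), 3))
    # lower = {3, 4}; upper = {1, 2, 3, 4}; H = 1.522 bits
```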