id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1209.0997 | Kostyantyn Shchekotykhin | Kostyantyn Shchekotykhin, Philipp Fleiss, Patrick Rodler, Gerhard
Friedrich | Direct computation of diagnoses for ontology debugging | 16 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern ontology debugging methods allow efficient identification and
localization of faulty axioms defined by a user while developing an ontology.
The ontology development process in this case is characterized by rather
frequent and regular calls to a reasoner resulting in an early user awareness
of modeling errors. In such a scenario an ontology usually includes only a
small number of conflict sets, i.e. sets of axioms preserving the faults. This
property allows efficient use of standard model-based diagnosis techniques
based on the application of hitting set algorithms to a number of given
conflict sets. However, in many use cases such as ontology alignment the
ontologies might include many more conflict sets than in usual ontology
development settings, thus making precomputation of conflict sets and
consequently ontology diagnosis infeasible. In this paper we suggest a
debugging approach based on a direct computation of diagnoses that omits
calculation of conflict sets. Embedded in an ontology debugger, the proposed
algorithm is able to identify diagnoses for an ontology which includes a large
number of faults and for which application of standard diagnosis methods fails.
The evaluation results show that the approach is practicable and is able to
identify a fault in adequate time.
| [
{
"version": "v1",
"created": "Wed, 5 Sep 2012 14:41:57 GMT"
}
] | 1,346,889,600,000 | [
[
"Shchekotykhin",
"Kostyantyn",
""
],
[
"Fleiss",
"Philipp",
""
],
[
"Rodler",
"Patrick",
""
],
[
"Friedrich",
"Gerhard",
""
]
] |
1209.1899 | Yuming Xu | Xu Yuming | A matrix approach for computing extensions of argumentation frameworks | arXiv admin note: substantial text overlap with arXiv:1110.1416 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | The matrices and their sub-blocks are introduced into the study of
determining various extensions in the sense of Dung's theory of argumentation
frameworks. It is shown that each argumentation framework has a matrix
representation, and that the core semantics defined by Dung can be
characterized by specific sub-blocks of the matrix. Furthermore, elementary
permutations of the matrix are employed to obtain an efficient matrix approach
for finding all extensions under a given semantics. Unlike several established
approaches, such as graph labelling algorithms and Constraint Satisfaction
Problem algorithms, the matrix approach not only brings a mathematical
perspective to the search for the various extensions, but also fully achieves
the goal of computing all the required extensions.
| [
{
"version": "v1",
"created": "Mon, 10 Sep 2012 08:09:05 GMT"
}
] | 1,347,321,600,000 | [
[
"Yuming",
"Xu",
""
]
] |
1209.2322 | Fernando Gascon | Javier Puente, David de la Fuente, Jesus Lozano and Fernando Gascon | On firm specific characteristics of pharmaceutical generics and
incentives to permanence under fuzzy conditions | null | International Journal of Applications of Fuzzy Sets(ISSN
2241-1240) Vol. 1 (2011), 19-37 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to develop a methodology that is useful for
analysing from a microeconomic perspective the incentives to entry, permanence
and exit in the market for pharmaceutical generics under fuzzy conditions. In
an empirical application of our proposed methodology, the potential towards
permanence of labs with different characteristics has been estimated. The case
we deal with is set in an open market where global players diversify into
different national markets of pharmaceutical generics. Risk issues are
significantly important in deterring decision makers from expanding in the
generic pharmaceutical business. However, not all players are affected in the
same way and/or to the same extent. Small, non-diversified generics labs are in
the worst position. We have highlighted that the expected NPV and the number of
generics in the portfolio of a pharmaceutical lab are important variables, but
that it is also important to consider the degree of diversification. Labs with
a higher potential for diversification across markets have an advantage over
smaller labs. We have described a fuzzy decision support system based on the
Mamdani model in order to determine the incentives for a laboratory to remain
in the market both when it is stable and when it is growing.
| [
{
"version": "v1",
"created": "Tue, 11 Sep 2012 14:03:13 GMT"
}
] | 1,347,408,000,000 | [
[
"Puente",
"Javier",
""
],
[
"de la Fuente",
"David",
""
],
[
"Lozano",
"Jesus",
""
],
[
"Gascon",
"Fernando",
""
]
] |
1209.3419 | Francesco Scarcello | Georg Gottlob and Gianluigi Greco and Francesco Scarcello | Tractable Optimization Problems through Hypergraph-Based Structural
Restrictions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | Several variants of the Constraint Satisfaction Problem have been proposed
and investigated in the literature for modelling those scenarios where
solutions are associated with some given costs. Within these frameworks
computing an optimal solution is an NP-hard problem in general; yet, when
restricted over classes of instances whose constraint interactions can be
modelled via (nearly-)acyclic graphs, this problem is known to be solvable in
polynomial time. In this paper, larger classes of tractable instances are
singled out, by discussing solution approaches based on exploiting hypergraph
acyclicity and, more generally, structural decomposition methods, such as
(hyper)tree decompositions.
| [
{
"version": "v1",
"created": "Sat, 15 Sep 2012 16:40:19 GMT"
}
] | 1,347,926,400,000 | [
[
"Gottlob",
"Georg",
""
],
[
"Greco",
"Gianluigi",
""
],
[
"Scarcello",
"Francesco",
""
]
] |
1209.3734 | Patrick Rodler | Patrick Rodler and Kostyantyn Shchekotykhin and Philipp Fleiss and
Gerhard Friedrich | RIO: Minimizing User Interaction in Ontology Debugging | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Efficient ontology debugging is a cornerstone for many activities in the
context of the Semantic Web, especially when automatic tools produce (parts of)
ontologies such as in the field of ontology matching. The best currently known
interactive debugging systems rely upon some meta information in terms of fault
probabilities, which can speed up the debugging procedure in the good case, but
can also have a negative impact on the performance in the bad case. The problem
is that assessment of the meta information is only possible a-posteriori.
Consequently, as long as the actual fault is unknown, there is always some risk
of suboptimal interactive diagnoses discrimination. As an alternative, one
might prefer to rely on a tool which pursues a no-risk strategy. In this case,
however, possibly well-chosen meta information cannot be exploited, resulting
again in inefficient debugging actions. In this work we present a reinforcement
learning strategy that continuously adapts its behavior depending on the
performance achieved and minimizes the risk of using low-quality meta
information. Therefore, this method is suitable for application scenarios where
reliable a-priori fault estimates are difficult to obtain. Using problematic
ontologies in the field of ontology matching, we show that the proposed
risk-aware query strategy outperforms both active learning approaches and
no-risk strategies on average in terms of required amount of user interaction.
| [
{
"version": "v1",
"created": "Mon, 17 Sep 2012 18:02:50 GMT"
}
] | 1,347,926,400,000 | [
[
"Rodler",
"Patrick",
""
],
[
"Shchekotykhin",
"Kostyantyn",
""
],
[
"Fleiss",
"Philipp",
""
],
[
"Friedrich",
"Gerhard",
""
]
] |
1209.3811 | Aditya Menon | Aditya Krishna Menon, Omer Tamuz, Sumit Gulwani, Butler Lampson, Adam
Tauman Kalai | Textual Features for Programming by Example | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Programming by Example, a system attempts to infer a program from input
and output examples, generally by searching for a composition of certain base
functions. Performing a naive brute force search is infeasible for even mildly
involved tasks. We note that the examples themselves often present clues as to
which functions to compose, and how to rank the resulting programs. In text
processing, which is our domain of interest, clues arise from simple textual
features: for example, if parts of the input and output strings are
permutations of one another, this suggests that sorting may be useful. We
describe a system that learns the reliability of such clues, allowing for
faster search and a principled ranking over programs. Experiments on a
prototype of this system show that this learning scheme facilitates efficient
inference on a range of text processing tasks.
| [
{
"version": "v1",
"created": "Mon, 17 Sep 2012 22:56:19 GMT"
}
] | 1,348,012,800,000 | [
[
"Menon",
"Aditya Krishna",
""
],
[
"Tamuz",
"Omer",
""
],
[
"Gulwani",
"Sumit",
""
],
[
"Lampson",
"Butler",
""
],
[
"Kalai",
"Adam Tauman",
""
]
] |
1209.3869 | Poonam Tanwar | Poonam Tanwar, T. V. Prasad, Dr. Kamlesh Datta | Hybrid technique for effective knowledge representation & a comparative
study | 15 pages, 9 figures, 1 table, Published in IJCSES, International
Journal of Computer Science & Engineering Survey Vol.3, No.4, August 2012 | Published in IJCSES, International Journal of Computer Science &
Engineering Survey Vol.3, No.4, August 2012 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A knowledge representation (KR) scheme and an inference mechanism are
essential for making a system intelligent. A system is considered intelligent
if its intelligence is equivalent to that of a human being, either for a
particular domain or in general. Because of incomplete, ambiguous and uncertain
information, the task of building an intelligent system is very difficult. The
objective of this paper is to present a hybrid KR technique for making the
system effective and optimistic. Effectiveness and optimism are required
because the system must be able to return an answer with some confidence
factor. This paper also presents a comparison of various hybrid KR techniques
with the proposed one.
| [
{
"version": "v1",
"created": "Tue, 18 Sep 2012 08:19:37 GMT"
}
] | 1,352,764,800,000 | [
[
"Tanwar",
"Poonam",
""
],
[
"Prasad",
"T. V.",
""
],
[
"Datta",
"Dr. Kamlesh",
""
]
] |
1209.4290 | Sergey Rodionov | Alexey Potapov, Sergey Rodionov, Andrew Myasnikov, Galymzhan Begimov | Cognitive Bias for Universal Algorithmic Intelligence | 10 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing theoretical universal algorithmic intelligence models are not
practically realizable. A more pragmatic approach to artificial general
intelligence is based on cognitive architectures, which are, however,
non-universal in the sense that they can construct and use models of the
environment only from Turing-incomplete model spaces. We believe that the way
to real AGI consists in bridging the gap between these two approaches. This
is possible if one considers cognitive functions as a "cognitive bias" (priors
and search heuristics) that should be incorporated into the models of universal
algorithmic intelligence without violating their universality. Previously
reported results supporting this approach and its overall feasibility are
discussed using the examples of perception, planning, knowledge representation,
attention, theory of mind, language, and some others.
| [
{
"version": "v1",
"created": "Wed, 19 Sep 2012 16:01:31 GMT"
}
] | 1,348,099,200,000 | [
[
"Potapov",
"Alexey",
""
],
[
"Rodionov",
"Sergey",
""
],
[
"Myasnikov",
"Andrew",
""
],
[
"Begimov",
"Galymzhan",
""
]
] |
1209.4445 | Sachin Lakra | Sachin Lakra, T.V. Prasad and G. Ramakrishna | Speech Signal Filters based on Soft Computing Techniques: A Comparison | 5 pages | The 2010 International Congress on Computer Applications and
Computational Science (CACS 2010), 4-6 December, 2010, Singapore | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper presents a comparison of various soft computing techniques used for
filtering and enhancing speech signals. The three major techniques that fall
under soft computing are neural networks, fuzzy systems and genetic algorithms.
Other hybrid techniques such as neuro-fuzzy systems are also available. In
general, soft computing techniques have been experimentally observed to give
far superior performance as compared to non-soft computing techniques in terms
of robustness and accuracy.
| [
{
"version": "v1",
"created": "Thu, 20 Sep 2012 08:10:07 GMT"
}
] | 1,348,185,600,000 | [
[
"Lakra",
"Sachin",
""
],
[
"Prasad",
"T. V.",
""
],
[
"Ramakrishna",
"G.",
""
]
] |
1209.4532 | Sachin Lakra | T.V. Prasad, Sachin Lakra, G. Ramakrishna | Applicability of Crisp and Fuzzy Logic in Intelligent Response
Generation | 4 pages, 1 table | Published in proceedings of National Conference on Information,
Computational Technologies and e-Governance 2010, Alwar, Rajasthan, India,
19-20 November, 2010, pp. 137-139 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses the merits and demerits of crisp logic and fuzzy logic
with respect to their applicability in intelligent response generation by a
human being and by a robot. Intelligent systems must have the capability of
taking decisions that are wise and handle situations intelligently. A direct
relationship exists between the level of perfection in handling a situation and
the level of completeness of the available knowledge or information or data
required to handle the situation. The paper concludes that the use of crisp
logic with complete knowledge leads to perfection in handling situations
whereas fuzzy logic can only handle situations imperfectly. However, when only
incomplete knowledge is available, fuzzy theory is more effective, though it
may be disadvantageous compared to crisp logic.
| [
{
"version": "v1",
"created": "Thu, 20 Sep 2012 14:00:06 GMT"
}
] | 1,348,185,600,000 | [
[
"Prasad",
"T. V.",
""
],
[
"Lakra",
"Sachin",
""
],
[
"Ramakrishna",
"G.",
""
]
] |
1209.4535 | Sachin Lakra | Sachin Lakra, T.V. Prasad, Deepak Kumar Sharma, Shree Harsh Atrey,
Anubhav Kumar Sharma | Application of Fuzzy Mathematics to Speech-to-Text Conversion by
Elimination of Paralinguistic Content | 6 pages, 3 figures, 1 table. arXiv admin note: text overlap with
arXiv:1001.2267 by other authors | Published in proceedings of National Conference on Soft Computing
and Artificial Intelligence 2009, Faridabad, Haryana, India, Jan 2009 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For the past few decades, man has been trying to create an intelligent
computer which can talk and respond like he can. The task of creating a system
that can talk like a human being is the primary objective of Automatic Speech
Recognition. Various Speech Recognition techniques have been developed in
theory and have been applied in practice. This paper discusses the problems
that have been encountered in developing Speech Recognition, the techniques
that have been applied to automate the task, and a representation of the core
problems of present day Speech Recognition by using Fuzzy Mathematics.
| [
{
"version": "v1",
"created": "Thu, 20 Sep 2012 14:06:32 GMT"
}
] | 1,348,185,600,000 | [
[
"Lakra",
"Sachin",
""
],
[
"Prasad",
"T. V.",
""
],
[
"Sharma",
"Deepak Kumar",
""
],
[
"Atrey",
"Shree Harsh",
""
],
[
"Sharma",
"Anubhav Kumar",
""
]
] |
1209.4838 | Dimiter Dobrev | Dimiter Dobrev | Formal Definition of AI | null | International Journal "Information Theories & Applications",
vol.12, Number 3, 2005, pp.277-285 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A definition of Artificial Intelligence was proposed in [1] but this
definition was not absolutely formal at least because the word "Human" was
used. In this paper we will formalize the definition from [1]. The biggest
problem in this definition was that the level of intelligence of AI is compared
to the intelligence of a human being. In order to change this we will introduce
some parameters on which AI will depend. One of these parameters will be the
level of intelligence, and we will define one AI for each level of intelligence.
We assume that for some level of intelligence the respective AI will be more
intelligent than a human being. Nevertheless, we cannot say which level this is,
because we cannot calculate its exact value.
| [
{
"version": "v1",
"created": "Fri, 21 Sep 2012 14:58:33 GMT"
}
] | 1,348,444,800,000 | [
[
"Dobrev",
"Dimiter",
""
]
] |
1209.4976 | Yanfang Liu | Yanfang Liu and William Zhu | Matroidal structure of rough sets based on serial and transitive
relations | 16 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The theory of rough sets is concerned with the lower and upper approximations
of objects through a binary relation on a universe. It has been applied to
machine learning, knowledge discovery and data mining. The theory of matroids
is a generalization of linear independence in vector spaces. It has been used
in combinatorial optimization and algorithm design. In order to take advantage
of both rough sets and matroids, in this paper we propose a matroidal structure
of rough sets based on a serial and transitive relation on a universe. We
define the family of all minimal neighborhoods of a relation on a universe, and
prove that it satisfies the circuit axioms of matroids when the relation is
serial and transitive. In order to further study this matroidal structure, we
investigate
the inverse of this construction: inducing a relation by a matroid. The
relationships between the upper approximation operators of rough sets based on
relations and the closure operators of matroids in the above two constructions
are studied. Moreover, we investigate the connections between the above two
constructions.
| [
{
"version": "v1",
"created": "Sat, 22 Sep 2012 09:25:50 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Nov 2012 10:39:19 GMT"
}
] | 1,354,233,600,000 | [
[
"Liu",
"Yanfang",
""
],
[
"Zhu",
"William",
""
]
] |
1209.4978 | Yanfang Liu | Yanfang Liu and William Zhu | Covering matroid | 15 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new type of matroids, namely covering matroids,
and investigate the connections with the second type of covering-based rough
sets and some existing special matroids. Firstly, as an extension of
partitions, coverings are more natural combinatorial objects and can sometimes
be more efficient for dealing with real-world problems. Through extending
partitions to coverings, we propose a new type of matroids called covering
matroids and prove them to be an extension of partition matroids. Secondly,
since some researchers have successfully applied partition matroids to
classical rough sets, we study the relationships between covering matroids and
covering-based rough sets which are an extension of classical rough sets.
Thirdly, in matroid theory, there are many special matroids, such as
transversal matroids, partition matroids, 2-circuit matroid and
partition-circuit matroids. The relationships among several special matroids
and covering matroids are studied.
| [
{
"version": "v1",
"created": "Sat, 22 Sep 2012 09:34:10 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Nov 2012 02:42:55 GMT"
}
] | 1,354,492,800,000 | [
[
"Liu",
"Yanfang",
""
],
[
"Zhu",
"William",
""
]
] |
1209.5456 | Yanfang Liu | Yanfang Liu and William Zhu | Relation matroid and its relationship with generalized rough set based
on relation | 15 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the relationship between matroids and generalized rough sets based
on relations has been studied from the viewpoint of linear independence of
matrices. In this paper, we reveal more relationships by the predecessor and
successor neighborhoods from relations. First, through these two neighborhoods,
we propose a pair of matroids, namely predecessor relation matroid and
successor relation matroid, respectively. Basic characteristics of this pair of
matroids, such as dependent sets, circuits, the rank function and the closure
operator, are described by the predecessor and successor neighborhoods from
relations. Second, we induce a relation from a matroid through the circuits of
the matroid. We prove that the induced relation is always an equivalence
relation. With these two inductions, a relation induces a relation matroid, and
the relation matroid induces an equivalence relation; the connection between
the original relation and the induced equivalence relation is then studied.
Moreover, the relationships between the upper approximation operator in
generalized rough sets and the closure operator in matroids are investigated.
| [
{
"version": "v1",
"created": "Mon, 24 Sep 2012 23:42:09 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Nov 2012 10:43:02 GMT"
}
] | 1,354,233,600,000 | [
[
"Liu",
"Yanfang",
""
],
[
"Zhu",
"William",
""
]
] |
1209.5470 | Bin Yang | Bin Yang and William Zhu | Matroidal structure of generalized rough sets based on symmetric and
transitive relations | 5 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rough sets are efficient for data pre-processing in data mining. Lower and upper
approximations are two core concepts of rough sets. This paper studies
generalized rough sets based on symmetric and transitive relations from the
operator-oriented view by matroidal approaches. We firstly construct a
matroidal structure of generalized rough sets based on symmetric and transitive
relations, and provide an approach to study the matroid induced by a symmetric
and transitive relation. Secondly, this paper establishes a close relationship
between matroids and generalized rough sets. Approximation quality and
roughness of generalized rough sets can be computed using the circuits of
matroid theory. Finally, a symmetric and transitive relation can be constructed
from a
matroid with some special properties.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 02:14:19 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Dec 2012 02:30:43 GMT"
}
] | 1,355,788,800,000 | [
[
"Yang",
"Bin",
""
],
[
"Zhu",
"William",
""
]
] |
1209.5473 | Lirun Su | Lirun Su and William Zhu | Some characteristics of matroids through rough sets | 13 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | At present, practical application and theoretical discussion of rough sets
are two hot problems in computer science. The core concepts of rough set theory
are upper and lower approximation operators based on equivalence relations.
Matroid, as a branch of mathematics, is a structure that generalizes linear
independence in vector spaces. Further, matroid theory borrows extensively from
the terminology of linear algebra and graph theory. We can combine rough set
theory with matroid theory through using rough sets to study some
characteristics of matroids. In this paper, we apply rough sets to matroids
through defining a family of sets which are constructed from the upper
approximation operator with respect to an equivalence relation. First, we prove
the family of sets satisfies the support set axioms of matroids, and then we
obtain a matroid. We say that this matroid is induced by the equivalence
relation, and in this way a type of matroid, namely the support matroid, is
obtained. Second, through rough
sets, some characteristics of matroids such as independent sets, support sets,
bases, hyperplanes and closed sets are investigated.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 02:35:13 GMT"
}
] | 1,348,617,600,000 | [
[
"Su",
"Lirun",
""
],
[
"Zhu",
"William",
""
]
] |
1209.5480 | Hua Yao | Hua Yao and William Zhu | Condition for neighborhoods in covering based rough sets to form a
partition | 12 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neighborhood is an important concept in covering-based rough sets. Under what
condition neighborhoods form a partition is a meaningful issue induced by this
concept. Many scholars have paid attention to this issue and presented some
necessary and sufficient conditions. However, these conditions share one common
trait: they are established on the basis that all neighborhoods have already
been obtained. In this paper, we provide a necessary and sufficient condition
directly based on the covering itself. First, we investigate the influence of
reducible elements in the covering on neighborhoods. Second, we propose the
definition of uniform block and obtain a sufficient condition from it. Third,
we propose the definitions of repeat degree and excluded number. By means of
these two concepts, we obtain a necessary and sufficient condition for
neighborhoods to form a partition. In a word, we have gained a deeper and more
direct understanding of the conditions under which neighborhoods form a
partition.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 03:03:41 GMT"
}
] | 1,348,617,600,000 | [
[
"Yao",
"Hua",
""
],
[
"Zhu",
"William",
""
]
] |
1209.5482 | Jingqian Wang | Jingqian Wang and William Zhu | Rough sets and matroidal contraction | 11 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rough sets are efficient for data pre-processing in data mining. As a
generalization of the linear independence in vector spaces, matroids provide
well-established platforms for greedy algorithms. In this paper, we apply rough
sets to matroids and study the contraction of the dual of the corresponding
matroid. First, for an equivalence relation on a universe, a matroidal
structure of the rough set is established through the lower approximation
operator. Second, the dual of the matroid and its properties such as
independent sets, bases and rank function are investigated. Finally, the
relationships between the contraction of the dual matroid to the complement of
a single point set and the contraction of the dual matroid to the complement of
the equivalence class of this point are studied.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 03:07:31 GMT"
}
] | 1,348,617,600,000 | [
[
"Wang",
"Jingqian",
""
],
[
"Zhu",
"William",
""
]
] |
1209.5484 | Hua Yao | Hua Yao and William Zhu | Condition for neighborhoods induced by a covering to be equal to the
covering itself | 11 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Under what condition the neighborhoods induced by a covering are equal to the
covering itself is a meaningful issue. A necessary and sufficient condition
for this issue has been provided by some scholars. In this paper, through a
counter-example, we first point out that this necessary and sufficient
condition is false. Second, we present a correct necessary and sufficient
condition for this issue. Third, we concentrate on the inverse issue of
computing neighborhoods by a covering, namely, given an arbitrary covering,
whether or not there exists another covering such that the neighborhoods
induced by it are just the former covering. We present a necessary and
sufficient condition for this issue as well. In a word, through the study of
these two fundamental issues induced by neighborhoods, we have gained a deeper
understanding of the relationship between neighborhoods and the covering which
induces them.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 03:14:39 GMT"
}
] | 1,348,617,600,000 | [
[
"Yao",
"Hua",
""
],
[
"Zhu",
"William",
""
]
] |
1209.5567 | Qingyin Li | Qingyin Li and William Zhu | Closed-set lattice of regular sets based on a serial and transitive
relation through matroids | 12 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rough sets are efficient for data pre-processing in data mining. Matroids are
based on linear algebra and graph theory, and have a variety of applications in
many fields. Both rough sets and matroids are closely related to lattices. For
a serial and transitive relation on a universe, the collection of all the
regular sets of the generalized rough set is a lattice. In this paper, we use
the lattice to construct a matroid and then study relationships between the
lattice and the closed-set lattice of the matroid. First, the collection of all
the regular sets based on a serial and transitive relation is proved to be a
semimodular lattice. Then, a matroid is constructed through the height function
of the semimodular lattice. Finally, we propose an approach to obtain all the
closed sets of the matroid from the semimodular lattice. Borrowing from
matroids, results show that lattice theory provides an interesting view to
investigate rough sets.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 10:36:27 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Dec 2013 14:53:48 GMT"
}
] | 1,387,238,400,000 | [
[
"Li",
"Qingyin",
""
],
[
"Zhu",
"William",
""
]
] |
1209.5569 | Qingyin Li | Qingyin Li and William Zhu | Lattice structures of fixed points of the lower approximations of two
types of covering-based rough sets | 17 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Covering is a common type of data structure and covering-based rough set
theory is an efficient tool to process this data. Lattice is an important
algebraic structure and is used extensively in investigating some types of
generalized rough sets. In this paper, we propose two families of sets and
study the conditions under which these two sets become certain lattice
structures. These two sets consist of the fixed points of the lower
approximations of the first type and the sixth type of covering-based rough
sets, respectively, and are called the fixed point set of neighborhoods and the
fixed point set of covering. First, for any covering, the fixed point set of
neighborhoods is a complete and distributive lattice and, at the same time, a
double p-algebra. Especially, when the neighborhoods form a partition of the
universe, the fixed point set of neighborhoods is both a Boolean lattice and a
double Stone algebra. Second, for any covering, the fixed point set of covering
is a complete lattice. When the covering is unary, the fixed point set of
covering becomes a distributive lattice and a double p-algebra. Especially,
when the reduction of the covering forms a partition of the universe, the fixed
point set of covering is both a Boolean lattice and a double Stone algebra.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 10:41:45 GMT"
}
] | 1,348,617,600,000 | [
[
"Li",
"Qingyin",
""
],
[
"Zhu",
"William",
""
]
] |
1209.5663 | Valmi Dufour-Lussier | Valmi Dufour-Lussier (INRIA Lorraine - LORIA), Florence Le Ber (INRIA
Lorraine - LORIA, LHyGeS), Jean Lieber (INRIA Lorraine - LORIA), Thomas
Meilender (INRIA Lorraine - LORIA), Emmanuel Nauer (INRIA Lorraine - LORIA) | Semi-automatic annotation process for procedural texts: An application
on cooking recipes | null | Cooking with Computers workshop (ECAI 2012) (2012) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Taaable is a case-based reasoning system that adapts cooking recipes to user
constraints. Within it, the preparation part of recipes is formalised as a
graph. This graph is a semantic representation of the sequence of instructions
composing the cooking process and is used to compute the procedure adaptation,
conjointly with the textual adaptation. It is composed of cooking actions and
ingredients, among others, represented as vertices, and semantic relations
between those, shown as arcs, and is built automatically thanks to natural
language processing. The result of the automatic annotation process is often a
disconnected graph, representing an incomplete annotation, or may contain
errors. Therefore, a validating and correcting step is required. In this paper,
we present an existing graphic tool named \kcatos, conceived for representing
and editing decision trees, and show how it has been adapted and integrated in
WikiTaaable, the semantic wiki in which the knowledge used by Taaable is
stored. This interface provides the wiki users with a way to correct the case
representation of the cooking process, improving at the same time the quality
of the knowledge about cooking procedures stored in WikiTaaable.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2012 16:13:14 GMT"
}
] | 1,348,617,600,000 | [
[
"Dufour-Lussier",
"Valmi",
"",
"INRIA Lorraine - LORIA"
],
[
"Ber",
"Florence Le",
"",
"INRIA\n Lorraine - LORIA, LHyGeS"
],
[
"Lieber",
"Jean",
"",
"INRIA Lorraine - LORIA"
],
[
"Meilender",
"Thomas",
"",
"INRIA Lorraine - LORIA"
],
[
"Nauer",
"Emmanuel",
"",
"INRIA Lorraine - LORIA"
]
] |
1209.5853 | Yi Sun | Yi Sun and Daan Wierstra and Tom Schaul and Juergen Schmidhuber | Efficient Natural Evolution Strategies | Published in GECCO'2009 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient Natural Evolution Strategies (eNES) is a novel alternative to
conventional evolutionary algorithms, using the natural gradient to adapt the
mutation distribution. Unlike previous methods based on natural gradients, eNES
uses a fast algorithm to calculate the inverse of the exact Fisher information
matrix, thus increasing both robustness and performance of its evolution
gradient estimation, even in higher dimensions. Additional novel aspects of
eNES include optimal fitness baselines and importance mixing (a procedure for
updating the population with very few fitness evaluations). The algorithm
yields competitive results on both unimodal and multimodal benchmarks.
| [
{
"version": "v1",
"created": "Wed, 26 Sep 2012 07:42:06 GMT"
}
] | 1,348,704,000,000 | [
[
"Sun",
"Yi",
""
],
[
"Wierstra",
"Daan",
""
],
[
"Schaul",
"Tom",
""
],
[
"Schmidhuber",
"Juergen",
""
]
] |
1209.6195 | Nicolaie Popescu-Bodorin | Cristina M. Noaica, Robert Badea, Iulia M. Motoc, Claudiu G. Ghica,
Alin C. Rosoiu, Nicolaie Popescu-Bodorin | Examples of Artificial Perceptions in Optical Character Recognition and
Iris Recognition | 5th Int. Conf. on Soft Computing and Applications (Szeged, HU), 22-24
Aug 2012 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper assumes the hypothesis that human learning is perception based,
and consequently, the learning process and perceptions should not be
represented and investigated independently or modeled in different simulation
spaces. In order to keep the analogy between the artificial and human learning,
the former is assumed here as being based on the artificial perception. Hence,
instead of choosing to apply or develop a Computational Theory of (human)
Perceptions, we choose to mirror the human perceptions in a numeric
(computational) space as artificial perceptions and to analyze the
interdependence between artificial learning and artificial perception in the
same numeric space, using one of the simplest tools of Artificial Intelligence
and Soft Computing, namely the perceptrons. As practical applications, we
choose to work around two examples: Optical Character Recognition and Iris
Recognition. In both cases a simple Turing test shows that artificial
perceptions of the difference between two characters and between two irides are
fuzzy, whereas the corresponding human perceptions are, in fact, crisp.
| [
{
"version": "v1",
"created": "Thu, 27 Sep 2012 11:39:58 GMT"
}
] | 1,348,790,400,000 | [
[
"Noaica",
"Cristina M.",
""
],
[
"Badea",
"Robert",
""
],
[
"Motoc",
"Iulia M.",
""
],
[
"Ghica",
"Claudiu G.",
""
],
[
"Rosoiu",
"Alin C.",
""
],
[
"Popescu-Bodorin",
"Nicolaie",
""
]
] |
1209.6395 | Zouhair Abdelhamid | Abdelhamid Zouhair, El Mokhtar En-Naimi, Benaissa Amami, Hadhoum
Boukachour, Patrick Person, Cyrille Bertelle | Multi-Agents Dynamic Case Based Reasoning and The Inverse Longest Common
Sub-Sequence And Individualized Follow-up of Learners in The CEHL | International Journal of Computer Science Issues, Volume 9, Issue 4,
No 2, July 2012 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In E-learning, there is still the problem of knowing how to ensure an
individualized and continuous learner follow-up during the learning process;
indeed, among the numerous tools proposed, very few systems concentrate on
real-time learner follow-up. Our work in this field develops the design and
implementation of a Multi-Agents System Based on Dynamic Case Based Reasoning
which can initiate learning and provide an individualized follow-up of the
learner. When interacting with the platform, every learner leaves his/her
traces in the machine. These traces are stored in a base in the form of
scenarios which enrich the collective past experience. The system monitors,
compares and analyses these traces to keep a constant intelligent watch and
thereby detect difficulties hindering progress and/or avoid possible dropping
out. The system can support any learning subject. The success of a case-based
reasoning system depends critically on the performance of the retrieval step
used and, more specifically, on the similarity measure used to retrieve
scenarios that are similar
to the course of the learner (traces in progress). We propose a complementary
similarity measure, named Inverse Longest Common Sub-Sequence (ILCSS). To help
and guide the learner, the system is equipped with combined virtual and human
tutors.
| [
{
"version": "v1",
"created": "Thu, 27 Sep 2012 23:22:48 GMT"
}
] | 1,349,049,600,000 | [
[
"Zouhair",
"Abdelhamid",
""
],
[
"En-Naimi",
"El Mokhtar",
""
],
[
"Amami",
"Benaissa",
""
],
[
"Boukachour",
"Hadhoum",
""
],
[
"Person",
"Patrick",
""
],
[
"Bertelle",
"Cyrille",
""
]
] |
1210.0074 | Aiping Huang | Aiping Huang, William Zhu | Topological characterizations to three types of covering approximation
operators | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Covering-based rough set theory is a useful tool to deal with inexact,
uncertain or vague knowledge in information systems. Topology, one of the most
important subjects in mathematics, provides mathematical tools and interesting
topics in studying information systems and rough sets. In this paper, we
present the topological characterizations to three types of covering
approximation operators. First, we study the properties of topology induced by
the sixth type of covering lower approximation operator. Second, some
topological characterizations to the covering lower approximation operator to
be an interior operator are established. We find that the topologies induced by
this operator and by the sixth type of covering lower approximation operator
are the same. Third, we study the conditions which make the first type of
covering upper approximation operator be a closure operator, and find that the
topology induced by the operator is the same as the topology induced by the
fifth type of covering upper approximation operator. Fourth, the conditions for
the second type of covering upper approximation operator to be a closure
operator and the properties of the topology induced by it are established.
Finally, these three topological spaces are compared. In a word, topology
provides a useful method to study covering-based rough sets.
| [
{
"version": "v1",
"created": "Sat, 29 Sep 2012 03:16:49 GMT"
}
] | 1,349,136,000,000 | [
[
"Huang",
"Aiping",
""
],
[
"Zhu",
"William",
""
]
] |
1210.0075 | Aiping Huang | Aiping Huang, William Zhu | Geometric lattice structure of covering-based rough sets through
matroids | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Covering-based rough set theory is a useful tool to deal with inexact,
uncertain or vague knowledge in information systems. Geometric lattices have
been widely used in diverse fields, especially search algorithm design, which
plays an important role in covering reductions. In this paper, we construct
four
geometric lattice structures of covering-based rough sets through matroids, and
compare their relationships. First, a geometric lattice structure of
covering-based rough sets is established through the transversal matroid
induced by the covering, and its characteristics including atoms, modular
elements and modular pairs are studied. We also construct a one-to-one
correspondence between this type of geometric lattices and transversal matroids
in the context of covering-based rough sets. Second, sufficient and necessary
conditions for three types of covering upper approximation operators to be
closure operators of matroids are presented. We exhibit three types of matroids
through closure axioms, and then obtain three geometric lattice structures of
covering-based rough sets. Third, these four geometric lattice structures are
compared. Some core concepts such as reducible elements in covering-based rough
sets are investigated with geometric lattices. In a word, this work points out
an interesting view, namely geometric lattice, to study covering-based rough
sets.
| [
{
"version": "v1",
"created": "Sat, 29 Sep 2012 03:26:18 GMT"
}
] | 1,349,136,000,000 | [
[
"Huang",
"Aiping",
""
],
[
"Zhu",
"William",
""
]
] |
1210.0091 | Hong Zhao | Hong Zhao, Fan Min, William Zhu | Test-cost-sensitive attribute reduction of data with normal distribution
measurement errors | This paper has been withdrawn by the author due to the error of the
title | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The measurement error with normal distribution is universal in applications.
Generally, smaller measurement error requires better instrument and higher test
cost. In decision making based on attribute values of objects, we shall select
an attribute subset with appropriate measurement error to minimize the total
test cost. Recently, error-range-based covering rough set with uniform
distribution error was proposed to investigate this issue. However, the
measurement errors in most applications follow a normal distribution rather
than a uniform distribution, which is rather simplistic. In this paper, we
introduce normal distribution measurement errors into the covering-based rough
set model, and deal with the test-cost-sensitive attribute reduction problem in
this new model.
The major contributions of this paper are four-fold. First, we build a new data
model based on normal distribution measurement errors. With the new data model,
the error range is an ellipse in a two-dimension space. Second, the
covering-based rough set with normal distribution measurement errors is
constructed through the "3-sigma" rule. Third, the test-cost-sensitive
attribute reduction problem is redefined on this covering-based rough set.
Fourth, a heuristic algorithm is proposed to deal with this problem. The
algorithm is tested on ten UCI (University of California - Irvine) datasets.
The experimental results show that the algorithm is more effective and
efficient than the existing one. This study is a step toward realistic
applications of cost-sensitive learning.
| [
{
"version": "v1",
"created": "Sat, 29 Sep 2012 10:22:55 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jun 2013 03:15:51 GMT"
}
] | 1,370,304,000,000 | [
[
"Zhao",
"Hong",
""
],
[
"Min",
"Fan",
""
],
[
"Zhu",
"William",
""
]
] |
1210.0772 | Yanfang Liu | Yanfang Liu and William Zhu | Relationship between the second type of covering-based rough set and
matroid via closure operator | 10 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, in order to broaden the application and theoretical areas of rough
sets and matroids, some authors have combined them from many different
viewpoints, such as circuits, rank function, spanning sets and so on. In this
paper, we connect the second type of covering-based rough sets and matroids
from the view of closure operators. On one hand, we establish a closure system
through the fixed point family of the second type of covering lower
approximation operator, and then construct a closure operator. For a covering
of a universe, the closure operator is a closure one of a matroid if and only
if the reduct of the covering is a partition of the universe. On the other
hand, we investigate the sufficient and necessary condition under which the
second type of covering upper approximation operator is a closure operator of a
matroid.
| [
{
"version": "v1",
"created": "Tue, 2 Oct 2012 13:40:23 GMT"
}
] | 1,349,222,400,000 | [
[
"Liu",
"Yanfang",
""
],
[
"Zhu",
"William",
""
]
] |
1210.0887 | Dimiter Dobrev | Dimiter Dobrev | The Definition of AI in Terms of Multi Agent Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The questions which we will consider here are "What is AI?" and "How can we
make AI?". Here we will present the definition of AI in terms of multi-agent
systems. This means that here you will not find a new answer to the question
"What is AI?", but an old answer in a new form.
This new form of the definition of AI is of interest for the theory of
multi-agent systems because it gives us better understanding of this theory.
More important is that this work will help us answer the second question. We
want to make a program which is capable of constructing a model of its
environment. Every multi-agent model is equivalent to a single-agent model but
multi-agent models are more natural and accordingly more easily discoverable.
| [
{
"version": "v1",
"created": "Tue, 2 Oct 2012 19:28:42 GMT"
}
] | 1,349,222,400,000 | [
[
"Dobrev",
"Dimiter",
""
]
] |
1210.1568 | Dimiter Dobrev | Dimiter Dobrev | A Definition of Artificial Intelligence | null | Dobrev D. A Definition of Artificial Intelligence. In: Mathematica
Balkanica, New Series, Vol. 19, 2005, Fasc. 1-2, pp.67-74 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we offer a formal definition of Artificial Intelligence and
this directly gives us an algorithm for the construction of this object. In
practice, this algorithm is useless due to the combinatorial explosion.
The main innovation in our definition is that it does not include knowledge as
a part of the intelligence. So, according to our definition, a newly born baby
is also an Intellect. Here we differ from Turing's definition, which suggests
that an Intellect is a person with knowledge gained through the years.
| [
{
"version": "v1",
"created": "Wed, 3 Oct 2012 20:46:10 GMT"
}
] | 1,349,654,400,000 | [
[
"Dobrev",
"Dimiter",
""
]
] |
1210.1649 | Thomas Krennwallner | Thomas Eiter, Michael Fink, Thomas Krennwallner, Christoph Redl | Conflict-driven ASP Solving with External Sources | To appear in Theory and Practice of Logic Programming | Theor. Pract. Log. Prog. 12:4-5 (2012) 659-679 | 10.1017/S1471068412000233 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer Set Programming (ASP) is a well-known problem solving approach based
on nonmonotonic logic programs and efficient solvers. To enable access to
external information, HEX-programs extend programs with external atoms, which
allow for a bidirectional communication between the logic program and external
sources of computation (e.g., description logic reasoners and Web resources).
Current solvers evaluate HEX-programs by a translation to ASP itself, in which
values of external atoms are guessed and verified after the ordinary answer set
computation. This elegant approach does not scale with the number of external
accesses in general, in particular in the presence of nondeterminism (which is
instrumental for ASP). In this paper, we present a novel, native algorithm for
evaluating HEX-programs which uses learning techniques. In particular, we
extend conflict-driven ASP solving techniques, which prevent the solver from
running into the same conflict again, from ordinary to HEX-programs. We show
how to gain additional knowledge from external source evaluations and how to
use it in a conflict-driven algorithm. We first target the uninformed case,
i.e., when we have no extra information on external sources, and then extend
our approach to the case where additional meta-information is available.
Experiments show that learning from external sources can significantly decrease
both the runtime and the number of considered candidate compatible sets.
| [
{
"version": "v1",
"created": "Fri, 5 Oct 2012 06:12:59 GMT"
}
] | 1,349,654,400,000 | [
[
"Eiter",
"Thomas",
""
],
[
"Fink",
"Michael",
""
],
[
"Krennwallner",
"Thomas",
""
],
[
"Redl",
"Christoph",
""
]
] |
1210.2715 | Dimiter Dobrev | Dimiter Dobrev | AI in arbitrary world | null | 5th Panhellenic Logic Symposium, July 2005, University of Athens,
Athens, Greece, pp. 62-67 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to build AI we have to create a program which copes well in an
arbitrary world. In this paper we will restrict our attention to one concrete
world, which represents the game Tick-Tack-Toe. This world is a very simple one,
but it is sufficiently complicated for our task because most people cannot
master it. The main difficulty in this world is that the player cannot see the
entire internal state of the world, so he has to build a model in order to
understand the world. The model which we will offer will consist of finite
automata and first-order formulas.
| [
{
"version": "v1",
"created": "Tue, 9 Oct 2012 08:58:12 GMT"
}
] | 1,349,913,600,000 | [
[
"Dobrev",
"Dimiter",
""
]
] |
1210.3375 | Ben Aissa Ezzeddine | Benaissa Ezzeddine and Benabdelhafid Abdellatif and Benaissa Mounir | An Agent-based framework for cooperation in Supply Chain | IJCSI International Journal of Computer Science Issues, Vol. 9, Issue
5, No 3, September 2012 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supply Chain coordination has become a critical success factor for Supply
Chain management (SCM) and for effectively improving the performance of
organizations in various industries. Companies are increasingly located at the
intersection of one or more corporate networks designated as "Supply Chains".
Managing such a chain is mainly based on information sharing and the
redeployment of activities between the various links that comprise it. Several
attempts have been made by industrialists and researchers to educate
policymakers about the gains to be made by implementing cooperative
relationships. The approach presented in this paper is among the works that
aim to propose solutions related to distributed Supply Chain information
systems, enabling the different actors of the chain to improve their
performance. We propose in particular solutions that focus on cooperation
between actors in the Supply Chain.
| [
{
"version": "v1",
"created": "Thu, 11 Oct 2012 21:10:41 GMT"
}
] | 1,350,259,200,000 | [
[
"Ezzeddine",
"Benaissa",
""
],
[
"Abdellatif",
"Benabdelhafid",
""
],
[
"Mounir",
"Benaissa",
""
]
] |
1210.3946 | Sebastien Verel | Fabio Daolio (ISI), S\'ebastien Verel (INRIA Lille - Nord Europe),
Gabriela Ochoa, Marco Tomassini (ISI) | Local optima networks and the performance of iterated local search | Proceedings of the fourteenth international conference on Genetic and
evolutionary computation conference, Philadelphia : United States (2012) | null | 10.1145/2330163.2330217 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Local Optima Networks (LONs) have been recently proposed as an alternative
model of combinatorial fitness landscapes. The model compresses the information
given by the whole search space into a smaller mathematical object that is the
graph having as vertices the local optima and as edges the possible weighted
transitions between them. A new set of metrics can be derived from this model
that capture the distribution and connectivity of the local optima in the
underlying configuration space. This paper departs from the descriptive
analysis of local optima networks, and actively studies the correlation between
network features and the performance of a local search heuristic. The NK family
of landscapes and the Iterated Local Search metaheuristic are considered. With
a statistically-sound approach based on multiple linear regression, it is shown
that some LONs' features strongly influence and can even partly predict the
performance of a heuristic search algorithm. This study validates the
expressive power of LONs as a model of combinatorial fitness landscapes.
| [
{
"version": "v1",
"created": "Mon, 15 Oct 2012 09:11:57 GMT"
}
] | 1,350,345,600,000 | [
[
"Daolio",
"Fabio",
"",
"ISI"
],
[
"Verel",
"Sébastien",
"",
"INRIA Lille - Nord Europe"
],
[
"Ochoa",
"Gabriela",
"",
"ISI"
],
[
"Tomassini",
"Marco",
"",
"ISI"
]
] |
1210.4840 | Guy Van den Broeck | Guy Van den Broeck, Arthur Choi, Adnan Darwiche | Lifted Relax, Compensate and then Recover: From Approximate to Exact
Lifted Probabilistic Inference | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-131-141 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an approach to lifted approximate inference for first-order
probabilistic models, such as Markov logic networks. It is based on performing
exact lifted inference in a simplified first-order model, which is found by
relaxing first-order constraints, and then compensating for the relaxation.
These simplified models can be incrementally improved by carefully recovering
constraints that have been relaxed, also at the first-order level. This leads
to a spectrum of approximations, with lifted belief propagation on one end, and
exact lifted inference on the other. We discuss how relaxation, compensation,
and recovery can be performed, all at the first-order level, and show
empirically that our approach substantially improves on the approximations of
both propositional solvers and lifted belief propagation.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:32:23 GMT"
}
] | 1,350,604,800,000 | [
[
"Broeck",
"Guy Van den",
""
],
[
"Choi",
"Arthur",
""
],
[
"Darwiche",
"Adnan",
""
]
] |
1210.4845 | Udi Apsel | Udi Apsel, Ronen I. Brafman | Exploiting Uniform Assignments in First-Order MPE | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-74-83 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The MPE (Most Probable Explanation) query plays an important role in
probabilistic inference. MPE solution algorithms for probabilistic relational
models essentially adapt existing belief assessment methods, replacing summation
with maximization. But the rich structure and symmetries captured by relational
models together with the properties of the maximization operator offer an
opportunity for additional simplification with potentially significant
computational ramifications. Specifically, these models often have groups of
variables that define symmetric distributions over some population of formulas.
The maximizing choice for different elements of this group is the same. If we
can realize this ahead of time, we can significantly reduce the size of the
model by eliminating a potentially significant portion of random variables.
This paper defines the notion of uniformly assigned and partially uniformly
assigned sets of variables, shows how one can recognize these sets efficiently,
and how the model can be greatly simplified once we recognize them, with little
computational effort. We demonstrate the effectiveness of these ideas
empirically on a number of models.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:34:35 GMT"
}
] | 1,350,604,800,000 | [
[
"Apsel",
"Udi",
""
],
[
"Brafman",
"Ronen I.",
""
]
] |
1210.4857 | Andrew E. Gelfand | Andrew E. Gelfand, Max Welling | Generalized Belief Propagation on Tree Robust Structured Region Graphs | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-296-305 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper provides some new guidance in the construction of region graphs
for Generalized Belief Propagation (GBP). We connect the problem of choosing
the outer regions of a Loop-Structured Region Graph (SRG) to that of finding a
fundamental cycle basis of the corresponding Markov network. We also define a
new class of tree-robust Loop-SRG for which GBP on any induced (spanning) tree
of the Markov network, obtained by setting to zero the off-tree interactions,
is exact. This class of SRG is then mapped to an equivalent class of
tree-robust cycle bases on the Markov network. We show that a tree-robust cycle
basis can be identified by proving that, for every subset of cycles, the graph
obtained from the edges that participate in only a single cycle is multiply
connected. Using this we identify two classes of tree-robust cycle bases:
planar cycle bases and "star" cycle bases. In experiments we show that
tree-robustness can be successfully exploited as a design principle to improve
the accuracy and convergence of GBP.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:37:52 GMT"
}
] | 1,350,604,800,000 | [
[
"Gelfand",
"Andrew E.",
""
],
[
"Welling",
"Max",
""
]
] |
1210.4861 | Stefano Ermon | Stefano Ermon, Carla P. Gomes, Bart Selman | Uniform Solution Sampling Using a Constraint Solver As an Oracle | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-255-264 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of sampling from solutions defined by a set of hard
constraints on a combinatorial space. We propose a new sampling technique that,
while enforcing a uniform exploration of the search space, leverages the
reasoning power of a systematic constraint solver in a black-box scheme. We
present a series of challenging domains, such as energy barriers and highly
asymmetric spaces, that reveal the difficulties introduced by hard constraints.
We demonstrate that standard approaches such as Simulated Annealing and Gibbs
Sampling are greatly affected, while our new technique can overcome many of
these difficulties. Finally, we show that our sampling scheme naturally defines
a new approximate model counting technique, which we empirically show to be
very accurate on a range of benchmark problems.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:38:34 GMT"
}
] | 1,350,604,800,000 | [
[
"Ermon",
"Stefano",
""
],
[
"Gomes",
"Carla P.",
""
],
[
"Selman",
"Bart",
""
]
] |
1210.4875 | Andrey Kolobov | Andrey Kolobov, Mausam, Daniel Weld | A Theory of Goal-Oriented MDPs with Dead Ends | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-438-447 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic Shortest Path (SSP) MDPs are a problem class widely studied in AI,
especially in probabilistic planning. They describe a wide range of scenarios
but make the restrictive assumption that the goal is reachable from any state,
i.e., that dead-end states do not exist. Because of this, SSPs are unable to
model various scenarios that may have catastrophic events (e.g., an airplane
possibly crashing if it flies into a storm). Even though MDP algorithms have
been used for solving problems with dead ends, a principled theory of SSP
extensions that would allow dead ends, including theoretically sound algorithms
for solving such MDPs, has been lacking. In this paper, we propose three new
MDP classes that admit dead ends under increasingly weaker assumptions. We
present Value Iteration-based as well as the more efficient heuristic search
algorithms for optimally solving each class, and explore theoretical
relationships between these classes. We also conduct a preliminary empirical
study comparing the performance of our algorithms on different MDP classes,
especially on scenarios with unavoidable dead ends.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:42:41 GMT"
}
] | 1,350,604,800,000 | [
[
"Kolobov",
"Andrey",
""
],
[
"Mausam",
"",
""
],
[
"Weld",
"Daniel",
""
]
] |
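As a concrete illustration of goal-oriented MDPs with dead ends (Kolobov, Mausam and Weld, 1210.4875 above), the following minimal Python sketch runs value iteration on a tiny invented MDP in which a dead-end state is handled by charging a finite penalty. The states, costs and penalty value are assumptions made for the example, and the finite-penalty treatment is only one simple option, not the specific algorithms proposed in the paper.

```python
# Toy goal-oriented MDP with a dead end, solved by value iteration.
# A finite dead-end penalty keeps values bounded; it is an illustrative
# choice, not the paper's formulation.

GOAL, DEAD = "goal", "dead"
STATES = ["s0", "s1", "s2", GOAL, DEAD]
ACTIONS = {
    # state -> action -> list of (probability, next_state)
    "s0": {"safe": [(1.0, "s1")], "risky": [(0.6, "s2"), (0.4, DEAD)]},
    "s1": {"go": [(1.0, "s2")]},
    "s2": {"go": [(0.9, GOAL), (0.1, "s0")]},
}
COST = 1.0           # every action costs 1
DEAD_PENALTY = 50.0  # finite price charged for reaching the dead end

def value_iteration(eps=1e-6):
    V = {s: 0.0 for s in STATES}
    V[DEAD] = DEAD_PENALTY          # dead end: fixed penalty, no escape
    while True:
        delta = 0.0
        for s, acts in ACTIONS.items():
            # Bellman backup: expected cost-to-go of the best action.
            best = min(COST + sum(p * V[t] for p, t in outcomes)
                       for outcomes in acts.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

def greedy_policy(V):
    # Extract the action minimizing expected cost under V.
    return {s: min(acts, key=lambda a: COST + sum(p * V[t] for p, t in acts[a]))
            for s, acts in ACTIONS.items()}

if __name__ == "__main__":
    V = value_iteration()
    print({s: round(v, 3) for s, v in V.items()})
    print(greedy_policy(V))
```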
1210.4878 | Alexander T. Ihler | Alexander T. Ihler, Natalia Flerova, Rina Dechter, Lars Otten | Join-graph based cost-shifting schemes | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-397-406 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop several algorithms taking advantage of two common approaches for
bounding MPE queries in graphical models: mini-bucket elimination and
message-passing updates for linear programming relaxations. The two methods are
quite similar, and each offers a useful perspective on the other; our hybrid
approaches attempt to balance the advantages of each. We demonstrate the power
of our hybrid algorithms through extensive empirical evaluation. Most notably,
a Branch and Bound search guided by the heuristic function calculated by one of
our new algorithms has recently won first place in the PASCAL2 inference
challenge.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:43:24 GMT"
}
] | 1,350,604,800,000 | [
[
"Ihler",
"Alexander T.",
""
],
[
"Flerova",
"Natalia",
""
],
[
"Dechter",
"Rina",
""
],
[
"Otten",
"Lars",
""
]
] |
1210.4882 | Ariel D. Procaccia | Ariel D. Procaccia, Sashank J. Reddi, Nisarg Shah | A Maximum Likelihood Approach For Selecting Sets of Alternatives | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-695-704 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of selecting a subset of alternatives given noisy
evaluations of the relative strength of different alternatives. We wish to
select a k-subset (for a given k) that provides a maximum likelihood estimate
for one of several objectives, e.g., containing the strongest alternative.
Although this problem is NP-hard, we show that when the noise level is
sufficiently high, intuitive methods provide the optimal solution. We thus
generalize classical results about singling out one alternative and identifying
the hidden ranking of alternatives by strength. Extensive experiments show that
our methods perform well in practical settings.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:44:59 GMT"
}
] | 1,350,604,800,000 | [
[
"Procaccia",
"Ariel D.",
""
],
[
"Reddi",
"Sashank J.",
""
],
[
"Shah",
"Nisarg",
""
]
] |
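To make the kind of intuitive selection rule discussed by Procaccia, Reddi and Shah (1210.4882 above) concrete, here is a small sketch that scores each alternative by its number of wins in noisy pairwise comparisons and returns the k highest-scoring alternatives. The data and the particular scoring rule are invented for illustration and are not claimed to be the maximum likelihood estimator analyzed in the paper.

```python
from collections import Counter

def select_k(comparisons, k):
    """Pick a k-subset of alternatives from noisy pairwise comparisons.

    comparisons: iterable of (winner, loser) pairs.
    Returns the k alternatives with the most wins (ties broken arbitrarily).
    """
    wins = Counter()
    alternatives = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        alternatives.update((winner, loser))
    # Alternatives that never won still need a zero score.
    scores = {a: wins[a] for a in alternatives}
    return sorted(scores, key=scores.get, reverse=True)[:k]

if __name__ == "__main__":
    # Hypothetical noisy comparisons among four alternatives.
    data = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "d"),
            ("d", "b"), ("c", "d"), ("a", "c"), ("b", "d")]
    print(select_k(data, k=2))   # e.g. ['a', 'b']
```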
1210.4885 | Lars Otten | Lars Otten, Rina Dechter | A Case Study in Complexity Estimation: Towards Parallel Branch-and-Bound
over Graphical Models | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-665-674 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of complexity estimation in the context of parallelizing
an advanced Branch and Bound-type algorithm over graphical models. The
algorithm's pruning power makes load balancing, one crucial element of every
distributed system, very challenging. We propose using a statistical regression
model to identify and tackle disproportionally complex parallel subproblems,
the cause of load imbalance, ahead of time. The proposed model is evaluated and
analyzed on various levels and shown to yield robust predictions. We then
demonstrate its effectiveness for load balancing in practice.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:45:42 GMT"
}
] | 1,350,604,800,000 | [
[
"Otten",
"Lars",
""
],
[
"Dechter",
"Rina",
""
]
] |
1210.4890 | Denis D. Maua | Denis D. Maua, Cassio Polpo de Campos, Marco Zaffalon | The Complexity of Approximately Solving Influence Diagrams | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-604-613 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Influence diagrams allow for intuitive and yet precise description of complex
situations involving decision making under uncertainty. Unfortunately, most of
the problems described by influence diagrams are hard to solve. In this paper
we discuss the complexity of approximately solving influence diagrams. We do
not assume no-forgetting or regularity, which makes the class of problems we
address very broad. Remarkably, we show that when both the tree-width and the
cardinality of the variables are bounded the problem admits a fully
polynomial-time approximation scheme.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:46:41 GMT"
}
] | 1,350,604,800,000 | [
[
"Maua",
"Denis D.",
""
],
[
"de Campos",
"Cassio Polpo",
""
],
[
"Zaffalon",
"Marco",
""
]
] |
1210.4897 | Qiang Liu | Qiang Liu, Alexander T. Ihler | Belief Propagation for Structured Decision Making | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-523-532 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Variational inference algorithms such as belief propagation have had
tremendous impact on our ability to learn and use graphical models, and give
many insights for developing or understanding exact and approximate inference.
However, variational approaches have not been widely adopted for decision making
in graphical models, often formulated through influence diagrams and including
both centralized and decentralized (or multi-agent) decisions. In this work, we
present a general variational framework for solving structured cooperative
decision-making problems, use it to propose several belief propagation-like
algorithms, and analyze them both theoretically and empirically.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:48:18 GMT"
}
] | 1,350,604,800,000 | [
[
"Liu",
"Qiang",
""
],
[
"Ihler",
"Alexander T.",
""
]
] |
1210.4911 | Radu Marinescu | Radu Marinescu, Abdul Razak, Nic Wilson | Multi-objective Influence Diagrams | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-574-583 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe multi-objective influence diagrams, based on a set of p
objectives, where utility values are vectors in R^p, and are typically only
partially ordered. These can still be solved by a variable elimination
algorithm, leading to a set of maximal values of expected utility. If the
Pareto ordering is used this set can often be prohibitively large. We consider
approximate representations of the Pareto set based on epsilon-coverings, allowing
much larger problems to be solved. In addition, we define a method for
incorporating user tradeoffs, which also greatly improves the efficiency.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:55:38 GMT"
}
] | 1,350,604,800,000 | [
[
"Marinescu",
"Radu",
""
],
[
"Razak",
"Abdul",
""
],
[
"Wilson",
"Nic",
""
]
] |
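The Marinescu, Razak and Wilson abstract (1210.4911 above) approximates a Pareto set of expected-utility vectors by an epsilon-covering. The sketch below computes a Pareto-maximal set of two-dimensional utility vectors and then greedily thins it so that every maximal point is within a multiplicative (1 + epsilon) factor of some retained point; this covering construction is a generic one chosen for illustration and is not necessarily the one used in the paper.

```python
def pareto_maximal(points):
    """Return the points not dominated by any other point (maximization, 2-D)."""
    maximal = []
    for p in points:
        if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points):
            maximal.append(p)
    return maximal

def epsilon_cover(frontier, eps):
    """Greedy multiplicative epsilon-cover of a 2-D Pareto frontier.

    A frontier point (u1, u2) is covered by a kept point (v1, v2) when
    v1 * (1 + eps) >= u1 and v2 * (1 + eps) >= u2.
    """
    kept = []

    def covered(p):
        return any(v[0] * (1 + eps) >= p[0] and v[1] * (1 + eps) >= p[1]
                   for v in kept)

    # Sweep in decreasing order of the first objective.
    for p in sorted(frontier, reverse=True):
        if not covered(p):
            kept.append(p)
    return kept

if __name__ == "__main__":
    # Hypothetical expected-utility vectors (two objectives, both maximized).
    pts = [(1.0, 9.0), (2.0, 8.5), (2.1, 8.4), (5.0, 5.0),
           (5.2, 4.9), (9.0, 1.0), (3.0, 3.0)]
    front = pareto_maximal(pts)
    print("Pareto frontier:", sorted(front))
    print("0.1-cover:      ", sorted(epsilon_cover(front, 0.1)))
```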
1210.4912 | Zhongzhang Zhang | Zhongzhang Zhang, Xiaoping Chen | FHHOP: A Factored Hybrid Heuristic Online Planning Algorithm for Large
POMDPs | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-934-943 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning in partially observable Markov decision processes (POMDPs) remains a
challenging topic in the artificial intelligence community, in spite of recent
impressive progress in approximation techniques. Previous research has
indicated that online planning approaches are promising in handling large-scale
POMDP domains efficiently as they make decisions "on demand" instead of
proactively for the entire state space. We present a Factored Hybrid Heuristic
Online Planning (FHHOP) algorithm for large POMDPs. FHHOP gets its power by
combining a novel hybrid heuristic search strategy with a recently developed
factored state representation. On several benchmark problems, FHHOP
substantially outperformed state-of-the-art online heuristic search approaches
in terms of both scalability and quality.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:55:47 GMT"
}
] | 1,350,604,800,000 | [
[
"Zhang",
"Zhongzhang",
""
],
[
"Chen",
"Xiaoping",
""
]
] |
1210.4916 | Max Welling | Max Welling, Andrew E. Gelfand, Alexander T. Ihler | A Cluster-Cumulant Expansion at the Fixed Points of Belief Propagation | Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012) | null | null | UAI-P-2012-PG-883-892 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new cluster-cumulant expansion (CCE) based on the fixed points
of iterative belief propagation (IBP). This expansion is similar in spirit to
the loop-series (LS) recently introduced in [1]. However, in contrast to the
latter, the CCE enjoys the following important qualities: 1) it is defined for
arbitrary state spaces, 2) it is easily extended to fixed points of generalized
belief propagation (GBP), 3) disconnected groups of variables will not
contribute to the CCE and 4) the accuracy of the expansion empirically improves
upon that of the LS. The CCE is based on the same M\"obius transform as the
Kikuchi approximation, but unlike GBP does not require storing the beliefs of
the GBP-clusters nor does it suffer from convergence issues during belief
updating.
| [
{
"version": "v1",
"created": "Tue, 16 Oct 2012 17:56:32 GMT"
}
] | 1,350,604,800,000 | [
[
"Welling",
"Max",
""
],
[
"Gelfand",
"Andrew E.",
""
],
[
"Ihler",
"Alexander T.",
""
]
] |
1210.6209 | Yanfang Liu | Yanfang Liu and William Zhu | Characteristic of partition-circuit matroid through approximation number | 12 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rough set theory is a useful tool to deal with uncertain, granular and
incomplete knowledge in information systems, and it is based on equivalence
relations or partitions. Matroid theory is a structure that generalizes linear
independence in vector spaces and has a variety of applications in many
fields. In this paper, we propose a new type of matroid, namely the
partition-circuit matroid, which is induced by a partition. Firstly, we show
that a partition satisfies the circuit axioms of matroid theory and therefore
induces a matroid, called a partition-circuit matroid. Since a partition and an
equivalence relation on the same universe correspond one-to-one,
some characteristics of partition-circuit matroids are studied through rough
sets. Secondly, similar to the upper approximation number proposed by
Wang and Zhu, we define the lower approximation number. Some characteristics of
partition-circuit matroids and their dual matroids are investigated
through the lower approximation number and the upper approximation number.
| [
{
"version": "v1",
"created": "Tue, 23 Oct 2012 11:50:42 GMT"
}
] | 1,351,036,800,000 | [
[
"Liu",
"Yanfang",
""
],
[
"Zhu",
"William",
""
]
] |
1210.6275 | Jo\~ao Eugenio Marynowski | Jo\~ao Eugenio Marynowski | Ambiente de Planejamento Ip\^e | MSc dissertation involving Artificial Intelligence, Planning, Petri
Net, Plangraph, Intelig\^encia Artificial, Planejamento, Redes de Petri e
Grafo de Planos | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we investigate systems that implement algorithms for the
planning problem in Artificial Intelligence, called planners, with special
attention to planners based on the plan graph. We analyze the problem of
comparing the performance of the different algorithms and we propose an
environment for the development and analysis of planners.
| [
{
"version": "v1",
"created": "Tue, 23 Oct 2012 15:54:00 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Oct 2012 20:03:00 GMT"
}
] | 1,351,209,600,000 | [
[
"Marynowski",
"João Eugenio",
""
]
] |
1210.6415 | EPTCS | Stefan Edelkamp, Peter Kissmann, \'Alvaro Torralba | Lex-Partitioning: A New Option for BDD Search | In Proceedings GRAPHITE 2012, arXiv:1210.6118 | EPTCS 99, 2012, pp. 66-82 | 10.4204/EPTCS.99.8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For the exploration of large state spaces, symbolic search using binary
decision diagrams (BDDs) can save huge amounts of memory and computation time.
State sets are represented and modified by accessing and manipulating their
characteristic functions. BDD partitioning is used to compute the image as the
disjunction of smaller subimages.
In this paper, we propose a novel BDD partitioning option. The partitioning
is lexicographical in the binary representation of the states contained in the
set that is represented by a BDD and uniform with respect to the number of
states represented. The motivation of controlling the state set sizes in the
partitioning is to eventually bridge the gap between explicit and symbolic
search.
Let n be the size of the binary state vector. We propose an O(n) ranking and
unranking scheme that supports negated edges and operates on top of precomputed
satcount values. For the uniform split of a BDD, we then use unranking to
provide paths along which we partition the BDDs. In a shared BDD representation
the efforts are O(n). The algorithms are fully integrated in the CUDD library
and evaluated in strongly solving general game playing benchmarks.
| [
{
"version": "v1",
"created": "Wed, 24 Oct 2012 00:33:28 GMT"
}
] | 1,351,123,200,000 | [
[
"Edelkamp",
"Stefan",
""
],
[
"Kissmann",
"Peter",
""
],
[
"Torralba",
"Álvaro",
""
]
] |
1210.7002 | Abdelmalek Amine | Mohamed Hamou, Abdelmalek Amine and Ahmed Chaouki Lokbani | A Biomimetic Approach Based on Immune Systems for Classification of
Unstructured Data | 10 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present the results of clustering unstructured data, in
this case textual data from the Reuters 21578 corpus, with a new biomimetic
approach based on immune systems. Before applying our immune system, we
digitized the textual data using the n-gram approach. The novelty lies in the
hybridization of n-grams and immune systems for clustering. The experimental results show that
the recommended ideas are promising and prove that this method can solve the
text clustering problem.
| [
{
"version": "v1",
"created": "Thu, 25 Oct 2012 21:24:06 GMT"
}
] | 1,351,468,800,000 | [
[
"Hamou",
"Mohamed",
""
],
[
"Amine",
"Abdelmalek",
""
],
[
"Lokbani",
"Ahmed Chaouki",
""
]
] |
1210.7154 | Patrick Lambrix | Patrick Lambrix, Zlatan Dragisic, Valentina Ivanova | Get my pizza right: Repairing missing is-a relations in ALC ontologies
(extended version) | null | null | 10.1007/978-3-642-37996-3_2 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increased use of ontologies in semantically-enabled applications,
the issue of debugging defects in ontologies has become increasingly important.
These defects can lead to wrong or incomplete results for the applications.
Debugging consists of the phases of detection and repairing. In this paper we
focus on the repairing phase of a particular kind of defects, i.e. the missing
relations in the is-a hierarchy. Previous work has dealt with the case of
taxonomies. In this work we extend the scope to deal with ALC ontologies that
can be represented using acyclic terminologies. We present algorithms and
discuss a system.
| [
{
"version": "v1",
"created": "Fri, 26 Oct 2012 14:27:01 GMT"
}
] | 1,699,833,600,000 | [
[
"Lambrix",
"Patrick",
""
],
[
"Dragisic",
"Zlatan",
""
],
[
"Ivanova",
"Valentina",
""
]
] |
1210.7959 | Lars Kotthoff | Lars Kotthoff | Algorithm Selection for Combinatorial Search Problems: A Survey | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Algorithm Selection Problem is concerned with selecting the best
algorithm to solve a given problem on a case-by-case basis. It has become
especially relevant in the last decade, as researchers are increasingly
investigating how to identify the most suitable existing algorithm for solving
a problem instead of developing new algorithms. This survey presents an
overview of this work focusing on the contributions made in the area of
combinatorial search problems, where Algorithm Selection techniques have
achieved significant performance improvements. We unify and organise the vast
literature according to criteria that determine Algorithm Selection systems in
practice. The comprehensive classification of approaches identifies and
analyses the different directions from which Algorithm Selection has been
approached. This paper contrasts and compares different methods for solving the
problem as well as ways of using these solutions. It closes by identifying
directions of current and future research.
| [
{
"version": "v1",
"created": "Tue, 30 Oct 2012 10:48:21 GMT"
}
] | 1,351,641,600,000 | [
[
"Kotthoff",
"Lars",
""
]
] |
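As a concrete instance of the Algorithm Selection Problem surveyed by Kotthoff (1210.7959 above), the sketch below trains one runtime-regression model per algorithm on instance features and, for a new instance, picks the algorithm with the smallest predicted runtime. The synthetic features, runtimes and the choice of random-forest regressors are illustrative assumptions, not recommendations taken from the survey.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: instance features and measured runtimes
# (seconds) of two solvers on the same instances.
rng = np.random.default_rng(0)
X = rng.random((200, 3))                          # 3 instance features
runtimes = {
    "solver_A": 1.0 + 5.0 * X[:, 0] + rng.normal(0, 0.1, 200),
    "solver_B": 3.0 + 5.0 * X[:, 1] + rng.normal(0, 0.1, 200),
}

# One regression model per algorithm, predicting runtime from features.
models = {name: RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
          for name, y in runtimes.items()}

def select_algorithm(features):
    """Return the algorithm with the smallest predicted runtime."""
    features = np.asarray(features, dtype=float).reshape(1, -1)
    predictions = {name: float(m.predict(features)[0]) for name, m in models.items()}
    return min(predictions, key=predictions.get), predictions

if __name__ == "__main__":
    choice, preds = select_algorithm([0.9, 0.1, 0.5])  # feature 0 high -> solver_A slow
    print(choice, {k: round(v, 2) for k, v in preds.items()})
```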
1211.0611 | Aiping Huang | Aiping Huang, William Zhu | Matrix approach to rough sets through vector matroids over a field | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rough sets were proposed to deal with the vagueness and incompleteness of
knowledge in information systems. There are many optimization issues in this
field, such as attribute reduction. Matroids, generalized from matrices, are
widely used in optimization. Therefore, it is necessary to connect matroids
with rough sets. In this paper, we take fields into consideration and introduce
matrices to study rough sets through vector matroids. First, a matrix
representation of an equivalence relation is proposed, and then a matroidal
structure of rough sets over a field is presented through the matrix. Second, the
properties of the matroidal structure, including circuits, bases and so on, are
studied through two special matrix solution spaces, especially the null space.
Third, over a binary field, we construct an equivalence relation from the matrix
null space, and establish an algebra isomorphism from the collection of
equivalence relations to the collection of families of minimal non-empty sets
that are supports of members of the null space of a binary dependence matrix.
In a word, matrices provide a new viewpoint for studying
rough sets.
| [
{
"version": "v1",
"created": "Sat, 3 Nov 2012 13:19:34 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Feb 2013 02:16:40 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Mar 2013 02:03:21 GMT"
}
] | 1,426,204,800,000 | [
[
"Huang",
"Aiping",
""
],
[
"Zhu",
"William",
""
]
] |
1211.2736 | Venkateshwara Prasad Tangirala | Rajeswari P. V. N. and T. V. Prasad | Hybrid Systems for Knowledge Representation in Artificial Intelligence | 6 pages | International Journal of Advanced Research in Artificial
Intelligence, 1 (8), 2012, 31-36 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are few knowledge representation (KR) techniques available for
efficiently representing knowledge. However, with the increase in complexity,
better methods are needed. Some researchers came up with hybrid mechanisms by
combining two or more methods. In an effort to construct an intelligent
computer system, a primary consideration is to represent large amounts of
knowledge in a way that allows effective use and to organize information
efficiently so as to facilitate making the recommended inferences. There are merits
and demerits to such combinations, and a standardized method of KR is needed. In this
paper, various hybrid schemes of KR are explored at length and details are
presented.
| [
{
"version": "v1",
"created": "Mon, 12 Nov 2012 19:09:08 GMT"
}
] | 1,352,764,800,000 | [
[
"N.",
"Rajeswari P. V.",
""
],
[
"Prasad",
"T. V.",
""
]
] |
1211.2972 | Dan Stowell | Dan Stowell and Mark D. Plumbley | Segregating event streams and noise with a Markov renewal process model | null | Journal of Machine Learning Research, 14(Aug):2213-2238, 2013 | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | We describe an inference task in which a set of timestamped event
observations must be clustered into an unknown number of temporal sequences
with independent and varying rates of observations. Various existing approaches
to multi-object tracking assume a fixed number of sources and/or a fixed
observation rate; we develop an approach to inferring structure in timestamped
data produced by a mixture of an unknown and varying number of similar Markov
renewal processes, plus independent clutter noise. The inference simultaneously
distinguishes signal from noise as well as clustering signal observations into
separate source streams. We illustrate the technique via a synthetic experiment
as well as an experiment to track a mixture of singing birds.
| [
{
"version": "v1",
"created": "Tue, 13 Nov 2012 12:43:45 GMT"
}
] | 1,379,635,200,000 | [
[
"Stowell",
"Dan",
""
],
[
"Plumbley",
"Mark D.",
""
]
] |
1211.4122 | Zilong Xu | Zilong Xu, Fan Min, William Zhu | Cost-sensitive C4.5 with post-pruning and competition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision tree is an effective classification approach in data mining and
machine learning. In applications, test costs and misclassification costs
should be considered while inducing decision trees. Recently, some
cost-sensitive learning algorithms based on ID3 such as CS-ID3, IDX,
\lambda-ID3 have been proposed to deal with the issue. These algorithms deal
with only symbolic data. In this paper, we develop a decision tree algorithm
inspired by C4.5 for numeric data. There are two major issues for our
algorithm. First, we develop the test cost weighted information gain ratio as
the heuristic information. According to this heuristic information, at each
selection our algorithm picks the attribute that provides a higher gain ratio at
a lower cost. Second, we design a post-pruning strategy through
considering the tradeoff between test costs and misclassification costs of the
generated decision tree. In this way, the total cost is reduced. Experimental
results indicate that (1) our algorithm is stable and effective; (2) the
post-pruning technique reduces the total cost significantly; (3) the
competition strategy is effective in obtaining a cost-sensitive decision tree with
low cost.
| [
{
"version": "v1",
"created": "Sat, 17 Nov 2012 13:23:41 GMT"
}
] | 1,353,369,600,000 | [
[
"Xu",
"Zilong",
""
],
[
"Min",
"Fan",
""
],
[
"Zhu",
"William",
""
]
] |
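The Xu, Min and Zhu abstract (1211.4122 above) selects attributes with a test-cost-weighted information gain ratio. The sketch below shows one plausible form of such a heuristic, the gain ratio divided by a power of the test cost, on a tiny symbolic dataset; the exact weighting scheme and the data are assumptions made for illustration rather than the authors' definition.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    """Information gain ratio of splitting `rows` on attribute index `attr`."""
    n = len(rows)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - remainder
    split_info = entropy([row[attr] for row in rows])  # penalizes many-valued attributes
    return gain / split_info if split_info > 0 else 0.0

def best_attribute(rows, labels, test_costs, lam=1.0):
    """Pick the attribute maximizing gain_ratio / (test_cost ** lam)."""
    scores = {a: gain_ratio(rows, labels, a) / (test_costs[a] ** lam)
              for a in range(len(test_costs))}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    # Tiny invented dataset: two symbolic attributes, binary class.
    rows = [("hi", "x"), ("hi", "y"), ("lo", "x"), ("lo", "y"),
            ("hi", "x"), ("lo", "y")]
    labels = ["+", "+", "-", "-", "+", "-"]
    costs = [5.0, 1.0]   # attribute 0 is informative but expensive to test
    print(best_attribute(rows, labels, costs))
```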
1211.4133 | Ibrahim El Bitar | Ibrahim El Bitar, Fatima-Zahra Belouadha, Ounsa Roudies | A Logic and Adaptive Approach for Efficient Diagnosis Systems using CBR | 5 pages,3 figures, 1 table | http://www.ijcaonline.org/archives/volume39/number15/4893-7393
year: 2012 | 10.5120/4893-7393 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Case Based Reasoning (CBR) is an intelligent way of thinking based on
experience and capitalization of already solved cases (source cases) to find a
solution to a new problem (target case). The retrieval phase consists of
identifying source cases that are similar to the target case. This phase may
lead to erroneous results if the existing knowledge imperfections are not taken
into account. This work presents a novel solution based on Fuzzy logic
techniques and adaptation measures which aggregate weighted similarities to
improve the retrieval results. To confirm the efficiency of our solution, we
have applied it to the industrial diagnosis domain. The obtained results are
more efficient than those obtained by applying typical measures.
| [
{
"version": "v1",
"created": "Sat, 17 Nov 2012 15:52:55 GMT"
}
] | 1,353,369,600,000 | [
[
"Bitar",
"Ibrahim El",
""
],
[
"Belouadha",
"Fatima-Zahra",
""
],
[
"Roudies",
"Ounsa",
""
]
] |
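To illustrate the retrieval step described by El Bitar, Belouadha and Roudies (1211.4133 above), the sketch below aggregates weighted per-attribute fuzzy similarities between a target case and each source case and returns the best matches. The membership function, weights and cases are invented for the example and do not reproduce the paper's measures.

```python
def triangular_similarity(a, b, spread):
    """Fuzzy similarity in [0, 1]: 1 when equal, 0 beyond `spread`."""
    return max(0.0, 1.0 - abs(a - b) / spread)

def aggregate_similarity(target, source, weights, spreads):
    """Weighted average of per-attribute fuzzy similarities."""
    total_w = sum(weights.values())
    return sum(w * triangular_similarity(target[attr], source[attr], spreads[attr])
               for attr, w in weights.items()) / total_w

def retrieve(target, case_base, weights, spreads, top=3):
    scored = [(aggregate_similarity(target, case, weights, spreads), name)
              for name, case in case_base.items()]
    return sorted(scored, reverse=True)[:top]

if __name__ == "__main__":
    # Hypothetical diagnosis cases described by two numeric symptoms.
    case_base = {
        "case1": {"temperature": 70.0, "vibration": 0.2},
        "case2": {"temperature": 90.0, "vibration": 0.8},
        "case3": {"temperature": 75.0, "vibration": 0.6},
    }
    weights = {"temperature": 0.7, "vibration": 0.3}   # expert-chosen importance
    spreads = {"temperature": 30.0, "vibration": 1.0}  # fuzziness per attribute
    target = {"temperature": 78.0, "vibration": 0.5}
    print(retrieve(target, case_base, weights, spreads, top=2))
```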
1211.4552 | Gabriel Synnaeve | Gabriel Synnaeve (LIG, LPPA), Pierre Bessiere (LPPA) | A Dataset for StarCraft AI \& an Example of Armies Clustering | Artificial Intelligence in Adversarial Real-Time Games 2012, Palo
Alto : United States (2012) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper advocates the exploration of the full state of recorded real-time
strategy (RTS) games, by human or robotic players, to discover how to reason
about tactics and strategy. We present a dataset of StarCraft games
encompassing most of the game state (not only players' orders). We
explain one possible usage of this dataset by clustering armies on
their compositions. This reduction of army compositions to mixtures of
Gaussians allows for strategic reasoning at the level of the components. We
evaluated this clustering method by predicting the outcomes of battles based on
the mixture components of the armies' compositions.
| [
{
"version": "v1",
"created": "Mon, 19 Nov 2012 20:18:43 GMT"
}
] | 1,353,369,600,000 | [
[
"Synnaeve",
"Gabriel",
"",
"LIG, LPPA"
],
[
"Bessiere",
"Pierre",
"",
"LPPA"
]
] |
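The Synnaeve and Bessiere abstract (1211.4552 above) reduces army compositions to Gaussian mixture components. As a hedged sketch of that idea rather than the authors' exact pipeline, the code below fits a Gaussian mixture to unit-count vectors normalized to proportions and reports the mixture component of each army; the unit types and counts are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical armies described by counts of three unit types.
unit_types = ["marine", "tank", "medic"]
armies = np.array([
    [40,  2,  8],   # bio-heavy
    [35,  1, 10],
    [10, 12,  2],   # mech-heavy
    [ 8, 15,  1],
    [20,  8,  5],   # mixed
    [22,  7,  6],
], dtype=float)

# Work on unit proportions so that army size does not dominate composition.
compositions = armies / armies.sum(axis=1, keepdims=True)

# Fit a small Gaussian mixture over composition space.
gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(compositions)

for army, label in zip(armies, labels):
    print(dict(zip(unit_types, army.astype(int))), "-> component", label)
# The learned component means can then serve as a compact description of
# army "styles" for higher-level strategic reasoning or battle prediction.
```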
1211.5643 | Ladislau B\"ol\"oni | Ladislau Boloni | Shadows and headless shadows: a worlds-based, autobiographical approach
to reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many cognitive systems deploy multiple, closed, individually consistent
models which can represent interpretations of the present state of the world,
moments in the past, possible futures or alternate versions of reality. While
they appear under different names, these structures can be grouped under the
general term of worlds. The Xapagy architecture is a story-oriented cognitive
system which relies exclusively on the autobiographical memory implemented as a
raw collection of events organized into world-type structures called {\em
scenes}. The system performs reasoning by shadowing current events with events
from the autobiography. The shadows are then extrapolated into headless shadows
corresponding to predictions, hidden events or inferred relations.
| [
{
"version": "v1",
"created": "Sat, 24 Nov 2012 04:11:37 GMT"
}
] | 1,353,974,400,000 | [
[
"Boloni",
"Ladislau",
""
]
] |
1211.5644 | Ladislau B\"ol\"oni | Ladislau Boloni | Modeling problems of identity in Little Red Riding Hood | arXiv admin note: text overlap with arXiv:1105.3486 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper argues that the problem of identity is a critical challenge in
agents which are able to reason about stories. The Xapagy architecture has been
built from scratch to perform narrative reasoning and relies on a somewhat
unusual approach to represent instances and identity. We illustrate the
approach by a representation of the story of Little Red Riding Hood in the
architecture, with a focus on the problem of identity raised by the narrative.
| [
{
"version": "v1",
"created": "Sat, 24 Nov 2012 04:21:42 GMT"
}
] | 1,353,974,400,000 | [
[
"Boloni",
"Ladislau",
""
]
] |
1211.6097 | Ladislau B\"ol\"oni | Ladislau Boloni | Shadows and Headless Shadows: an Autobiographical Approach to Narrative
Reasoning | arXiv admin note: substantial text overlap with arXiv:1211.5643 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Xapagy architecture is a story-oriented cognitive system which relies
exclusively on the autobiographical memory implemented as a raw collection of
events. Reasoning is performed by shadowing current events with events from the
autobiography. The shadows are then extrapolated into headless shadows (HLSs).
In a story following mood, HLSs can be used to track the level of surprise of
the agent, to infer hidden actions or relations between the participants, and
to summarize ongoing events. In recall mood, the HLSs can be used to create new
stories ranging from exact recall to free-form confabulation.
| [
{
"version": "v1",
"created": "Sat, 24 Nov 2012 04:35:45 GMT"
}
] | 1,354,060,800,000 | [
[
"Boloni",
"Ladislau",
""
]
] |
1212.0750 | Michael Gr. Voskoglou Prof. Dr. | Michael Gr. Voskoglou, Sheryl Buckley | Problem Solving and Computational Thinking in a Learning Environment | 19 pages, 2 figures | Egyptian Computer Science Journal, Vol. 36 (4), 28-46, 2012 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational thinking is a new problem solving method named for its extensive
use of computer science techniques. It synthesizes critical thinking and
existing knowledge and applies them in solving complex technological problems.
The term was coined by J. Wing, but the relationship between computational and
critical thinking, the two modes of thinking in solving problems, has not yet
been clearly established. This paper aims at shedding some light on this
relationship. We also present two classroom experiments performed recently at
the Graduate Technological Educational Institute of Patras in Greece. The
results of these experiments give a strong indication that the use of computers
as a tool for problem solving enhances the students' abilities in solving
real-world problems involving mathematical modelling. This is also corroborated by
earlier findings of other researchers on the problem solving process in
general (not only for mathematical problems).
| [
{
"version": "v1",
"created": "Sun, 2 Dec 2012 17:34:36 GMT"
}
] | 1,354,665,600,000 | [
[
"Voskoglou",
"Michael Gr.",
""
],
[
"Buckley",
"Sheryl",
""
]
] |
1212.0768 | Philippe Morignot | Philippe Morignot (INRIA Rocquencourt), Fawzi Nashashibi (INRIA
Rocquencourt) | An ontology-based approach to relax traffic regulation for autonomous
vehicle assistance | null | 12th IASTED International Conference on Artificial Intelligence
and Applications (AIA'13), Austria (2013) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic regulation must be respected by all vehicles, either human- or
computer- driven. However, extreme traffic situations might exhibit practical
cases in which a vehicle should safely and reasonably relax traffic regulation,
e.g., in order not to be indefinitely blocked and to keep circulating. In this
paper, we propose a high-level representation of an automated vehicle, other
vehicles and their environment, which can assist drivers in taking such
"illegal" but practical relaxation decisions. This high-level representation
(an ontology) includes topological knowledge and inference rules, in order to
compute the next high-level motion an automated vehicle should take, as
assistance to a driver. Results on practical cases are presented.
| [
{
"version": "v1",
"created": "Tue, 4 Dec 2012 15:34:10 GMT"
}
] | 1,354,665,600,000 | [
[
"Morignot",
"Philippe",
"",
"INRIA Rocquencourt"
],
[
"Nashashibi",
"Fawzi",
"",
"INRIA\n Rocquencourt"
]
] |
1212.2056 | Giacoma Monreale | Giacoma Valentina Monreale, Ugo Montanari and Nicklas Hoch | Soft Constraint Logic Programming for Electric Vehicle Travel
Optimization | 17 pages; 26th Workshop on Logic Programming - 2012 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Soft Constraint Logic Programming is a natural and flexible declarative
programming formalism, which allows to model and solve real-life problems
involving constraints of different types.
In this paper, after providing a slightly more general and elegant
presentation of the framework, we show how we can apply it to the e-mobility
problem of coordinating electric vehicles in order to overcome both energetic
and temporal constraints and so to reduce their running cost. In particular, we
focus on the journey optimization sub-problem, considering sequences of trips
from one of a user's appointments to another. Solutions provide the best
alternatives in terms of time and energy consumption, including route sequences
and possible charging events.
| [
{
"version": "v1",
"created": "Mon, 10 Dec 2012 13:30:23 GMT"
}
] | 1,355,184,000,000 | [
[
"Monreale",
"Giacoma Valentina",
""
],
[
"Montanari",
"Ugo",
""
],
[
"Hoch",
"Nicklas",
""
]
] |
1212.2444 | Richard Booth | Richard Booth, Eva Richter | On revising fuzzy belief bases | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-81-88 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We look at the problem of revising fuzzy belief bases, i.e., belief base
revision in which both formulas in the base as well as revision-input formulas
can come attached with varying truth-degrees. Working within a very general
framework for fuzzy logic which is able to capture a variety of types of
inference under uncertainty, such as truth-functional fuzzy logics and certain
types of probabilistic inference, we show how the idea of rational change from
'crisp' base revision, as embodied by the idea of partial meet revision, can be
faithfully extended to revising fuzzy belief bases. We present and axiomatise
an operation of partial meet fuzzy revision and illustrate how the operation
works in several important special instances of the framework.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:04:03 GMT"
}
] | 1,355,270,400,000 | [
[
"Booth",
"Richard",
""
],
[
"Richter",
"Eva",
""
]
] |
1212.2445 | Janneke H. Bolt | Janneke H. Bolt, Silja Renooij, Linda C. van der Gaag | Upgrading Ambiguous Signs in QPNs | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-73-80 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A qualitative probabilistic network models the probabilistic relationships
between its variables by means of signs. Non-monotonic influences have an
associated ambiguous sign. These ambiguous signs typically lead to
uninformative results upon inference. A non-monotonic influence can, however,
be associated with a, more informative, sign that indicates its effect in the
current state of the network. To capture this effect, we introduce the concept
of situational sign. Furthermore, if the network converts to a state in which
all variables that provoke the non-monotonicity have been observed, a
non-monotonic influence reduces to a monotonic influence. We study the
persistence and propagation of situational signs upon inference and give a
method to establish the sign of a reduced influence.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:03:58 GMT"
}
] | 1,355,270,400,000 | [
[
"Bolt",
"Janneke H.",
""
],
[
"Renooij",
"Silja",
""
],
[
"van der Gaag",
"Linda C.",
""
]
] |
1212.2446 | Andrea Bobbio | Andrea Bobbio, Stefania Montani, Luigi Portinale | Parametric Dependability Analysis through Probabilistic Horn Abduction | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-65-72 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dependability modeling and evaluation is aimed at investigating whether a system
performs its function correctly over time. A usual way to achieve high
reliability is to design redundant systems that contain several replicas of
the same subsystem or component. State space methods for dependability analysis
may suffer from the state space explosion problem in such situations.
Combinatorial models, on the other hand, require the simplified assumption of
statistical independence; however, in case of redundant systems, this does not
guarantee a reduced number of modeled elements. In order to provide a more
compact system representation, parametric system modeling has been investigated
in the literature, in such a way that a set of replicas of a given subsystem is
parameterized so that only one representative instance is explicitly included.
While modeling aspects can be suitably addressed by these approaches,
analytical tools working on parametric characterizations are often more
difficult to be defined and the standard approach is to 'unfold' the parametric
model, in order to exploit standard analysis algorithms working at the unfolded
'ground' level. Moreover, parameterized combinatorial methods still require the
statistical independence assumption. In the present paper we consider the
formalism of Parametric Fault Tree (PFT) and we show how it can be related to
Probabilistic Horn Abduction (PHA). Since PHA is a framework where both
modeling and analysis can be performed in a restricted first-order language, we
aim at showing that converting a PFT into a PHA knowledge base will allow an
approach to dependability analysis directly exploiting parametric
representation. We will show that classical qualitative and quantitative
dependability measures can be characterized within PHA. Furthermore, additional
modeling aspects (such as noisy gates and local dependencies) as well as
additional reliability measures (such as posterior probability analysis) can be
naturally addressed by this conversion. A simple example of a multi-processor
system with several replicated units is used to illustrate the approach.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:03:55 GMT"
}
] | 1,355,270,400,000 | [
[
"Bobbio",
"Andrea",
""
],
[
"Montani",
"Stefania",
""
],
[
"Portinale",
"Luigi",
""
]
] |
1212.2448 | Jeff A. Bilmes | Jeff A. Bilmes, Chris Bartels | On Triangulating Dynamic Graphical Models | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-47-56 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces new methodology to triangulate dynamic Bayesian
networks (DBNs) and dynamic graphical models (DGMs). While most methods to
triangulate such networks use some form of constrained elimination scheme based
on properties of the underlying directed graph, we find it useful to view
triangulation and elimination using properties only of the resulting undirected
graph, obtained after the moralization step. We first briefly introduce the
Graphical Models Toolkit (GMTK) and its notion of dynamic graphical models, one
that slightly extends the standard notion of a DBN. We next introduce the
'boundary algorithm', a method to find the best boundary between partitions in
a dynamic model. We find that using this algorithm, the notions of forward- and
backward-interface become moot - namely, the size and fill-in of the best
forward- and backward-interface are identical. Moreover, we observe that
finding a good partition boundary allows for constrained elimination orders
(and therefore graph triangulations) that are not possible using standard
slice-by-slice constrained eliminations. More interestingly, with certain
boundaries it is possible to obtain constrained elimination schemes that lie
outside the space of possible triangulations using only unconstrained
elimination. Lastly, we report triangulation results on invented graphs,
standard DBNs from the literature, novel DBNs used in speech recognition
research systems, and also random graphs. Using a number of different
triangulation quality measures (max clique size, state-space, etc.), we find
that with our boundary algorithm the triangulation quality can dramatically
improve.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:03:47 GMT"
}
] | 1,355,270,400,000 | [
[
"Bilmes",
"Jeff A.",
""
],
[
"Bartels",
"Chris",
""
]
] |
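The Bilmes and Bartels abstract (1212.2448 above) concerns triangulating the moralized graph of a dynamic model through (constrained) elimination orders. The sketch below implements the standard min-fill elimination heuristic on an undirected graph, a common baseline for such triangulations; it is not the boundary algorithm described in the paper.

```python
def min_fill_triangulation(adjacency):
    """Triangulate an undirected graph by min-fill elimination.

    adjacency: dict mapping node -> set of neighbour nodes.
    Returns (elimination_order, fill_edges).
    """
    adj = {v: set(nbrs) for v, nbrs in adjacency.items()}
    order, fill_edges = [], []

    def fill_needed(v):
        # Edges that must be added to make v's neighbourhood a clique.
        nbrs = list(adj[v])
        return [(a, b) for i, a in enumerate(nbrs) for b in nbrs[i + 1:]
                if b not in adj[a]]

    while adj:
        # Greedily eliminate the node introducing the fewest fill edges.
        v = min(adj, key=lambda u: len(fill_needed(u)))
        new_edges = fill_needed(v)
        for a, b in new_edges:
            adj[a].add(b)
            adj[b].add(a)
        fill_edges.extend(new_edges)
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order, fill_edges

if __name__ == "__main__":
    # A 4-cycle a-b-c-d needs one chord to become triangulated.
    graph = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
    print(min_fill_triangulation(graph))
```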
1212.2449 | Bozhena Bidyuk | Bozhena Bidyuk, Rina Dechter | An Empirical Study of w-Cutset Sampling for Bayesian Networks | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-37-46 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper studies empirically the time-space trade-off between sampling and
inference in a cutset sampling algorithm. The algorithm samples over a
subset of nodes in a Bayesian network and applies exact inference over the
rest. Consequently, while the size of the sampling space decreases, requiring
less samples for convergence, the time for generating each single sample
increases. The w-cutset sampling selects a sampling set such that the
induced-width of the network when the sampling set is observed is bounded by w,
thus requiring inference whose complexity is exponential in w. In this paper,
we investigate performance of w-cutset sampling over a range of w values and
measure the accuracy of w-cutset sampling as a function of w. Our experiments
demonstrate that the cutset sampling idea is quite powerful showing that an
optimal balance between inference and sampling benefits substantially from
restricting the cutset size, even at the cost of more complex inference.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:03:43 GMT"
}
] | 1,355,270,400,000 | [
[
"Bidyuk",
"Bozhena",
""
],
[
"Dechter",
"Rina",
""
]
] |
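To make the cutset-sampling idea of Bidyuk and Dechter (1212.2449 above) concrete, here is a toy sketch that Gibbs-samples only a chosen cutset of binary variables of a small, fully enumerated joint distribution and handles the remaining variables by exact summation. The tiny factorized model is invented, and a real implementation would use a Bayesian network with bounded-width exact inference rather than brute-force enumeration.

```python
import itertools
import random

random.seed(0)

VARS = ["a", "b", "c", "d"]          # all binary
CUTSET = ["a", "b"]                  # sampled variables
REST = [v for v in VARS if v not in CUTSET]

def unnormalized_p(assign):
    """A small invented positive joint over four binary variables."""
    a, b, c, d = (assign[v] for v in VARS)
    return (1.0 + 2 * a) * (1.5 if a == b else 0.5) * \
           (2.0 if b == c else 1.0) * (1.8 if c == d else 0.7)

def conditional_weight(cutset_assign, var, value):
    """Weight of var=value given the other cutset values, summing out REST."""
    assign = dict(cutset_assign, **{var: value})
    total = 0.0
    for rest_vals in itertools.product([0, 1], repeat=len(REST)):
        assign.update(zip(REST, rest_vals))
        total += unnormalized_p(assign)
    return total

def cutset_gibbs(n_samples=5000):
    state = {v: 0 for v in CUTSET}
    counts = {v: 0 for v in CUTSET}
    for _ in range(n_samples):
        for v in CUTSET:            # Gibbs sweep over cutset variables only
            w1 = conditional_weight(state, v, 1)
            w0 = conditional_weight(state, v, 0)
            state[v] = 1 if random.random() < w1 / (w0 + w1) else 0
        for v in CUTSET:
            counts[v] += state[v]
    return {v: counts[v] / n_samples for v in CUTSET}

if __name__ == "__main__":
    print("estimated P(a=1), P(b=1):", cutset_gibbs())
```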
1212.2450 | Salem Benferhat | Salem Benferhat, Sylvain Lagrue, Odile Papini | A possibilistic handling of partially ordered information | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-29-36 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a standard possibilistic logic, prioritized information is encoded by
means of a weighted knowledge base. This paper proposes an extension of
possibilistic logic for dealing with partially ordered information. We show
that all basic notions of standard possibilistic logic (subsumption, syntactic
and semantic inference, etc.) have natural counterparts when dealing with
partially ordered information. We also propose an algorithm which computes the
possibilistic conclusions of a partially ordered knowledge base.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:03:38 GMT"
}
] | 1,355,270,400,000 | [
[
"Benferhat",
"Salem",
""
],
[
"Lagrue",
"Sylvain",
""
],
[
"Papini",
"Odile",
""
]
] |
1212.2452 | Fahiem Bacchus | Fahiem Bacchus, Shannon Dalmao, Toniann Pitassi | Value Elimination: Bayesian Inference via Backtracking Search | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-20-28 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Backtracking search is a powerful algorithmic paradigm that can be used to
solve many problems. It is in a certain sense the dual of variable elimination;
but on many problems, e.g., SAT, it is vastly superior to variable elimination
in practice. Motivated by this we investigate the application of backtracking
search to the problem of Bayesian inference (Bayes). We show that natural
generalizations of known techniques allow backtracking search to achieve
performance guarantees similar to standard algorithms for Bayes, and that there
exist problems on which backtracking can in fact do much better. We also
demonstrate that these ideas can be applied to implement a Bayesian inference
engine whose performance is competitive with standard algorithms. Since
backtracking search can very naturally take advantage of context specific
structure, the potential exists for performance superior to standard algorithms
on many problems.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:03:31 GMT"
}
] | 1,355,270,400,000 | [
[
"Bacchus",
"Fahiem",
""
],
[
"Dalmao",
"Shannon",
""
],
[
"Pitassi",
"Toniann",
""
]
] |
1212.2455 | David Allen | David Allen, Adnan Darwiche | New Advances in Inference by Recursive Conditioning | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-2-10 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recursive Conditioning (RC) was introduced recently as the first any-space
algorithm for inference in Bayesian networks which can trade time for space by
varying the size of its cache at the increment needed to store a floating point
number. Under full caching, RC has an asymptotic time and space complexity
which is comparable to mainstream algorithms based on variable elimination and
clustering (exponential in the network treewidth and linear in its size). We
show two main results about RC in this paper. First, we show that its actual
space requirements under full caching are much more modest than those needed by
mainstream methods and study the implications of this finding. Second, we show
that RC can effectively deal with determinism in Bayesian networks by employing
standard logical techniques, such as unit resolution, allowing a significant
reduction in its time requirements in certain cases. We illustrate our results
using a number of benchmark networks, including the very challenging ones that
arise in genetic linkage analysis.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:03:22 GMT"
}
] | 1,355,270,400,000 | [
[
"Allen",
"David",
""
],
[
"Darwiche",
"Adnan",
""
]
] |
1212.2456 | Julia M Flores | Julia M. Flores, Jose A. Gamez, Kristian G. Olesen | Incremental Compilation of Bayesian networks | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-233-240 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most methods of exact probability propagation in Bayesian networks do not
carry out the inference directly over the network, but over a secondary
structure known as a junction tree or a join tree (JT). The process of
obtaining a JT is usually termed compilation. As compilation is usually
viewed as a whole process, each time the network is modified a new compilation
process has to be carried out. The possibility of reusing an already existing
JT, in order to obtain the new one by considering only the modifications in the
network, has received little attention in the literature. In this paper we
present a method for incremental compilation of a Bayesian network, following
the classical scheme in which triangulation plays the key role. In order to
perform incremental compilation we propose to recompile only those parts of the
JT which may have been affected by the network's modifications. To do so, we
exploit the technique of maximal prime subgraph decomposition in determining
the minimal subgraph(s) that have to be recompiled, and thereby the minimal
subtree(s) of the JT that should be replaced by new subtree(s). We focus on
structural modifications: addition and deletion of links and variables.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:05:20 GMT"
}
] | 1,355,270,400,000 | [
[
"Flores",
"Julia M.",
""
],
[
"Gamez",
"Jose A.",
""
],
[
"Olesen",
"Kristian G.",
""
]
] |
1212.2457 | Alberto Finzi | Alberto Finzi, Thomas Lukasiewicz | Structure-Based Causes and Explanations in the Independent Choice Logic | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-225-232 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is directed towards combining Pearl's structural-model approach to
causal reasoning with high-level formalisms for reasoning about actions. More
precisely, we present a combination of Pearl's structural-model approach with
Poole's independent choice logic. We show how probabilistic theories in the
independent choice logic can be mapped to probabilistic causal models. This
mapping provides the independent choice logic with appealing concepts of
causality and explanation from the structural-model approach. We illustrate
this along Halpern and Pearl's sophisticated notions of actual cause,
explanation, and partial explanation. This mapping also adds first-order
modeling capabilities and explicit actions to the structural-model approach.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:05:16 GMT"
}
] | 1,355,270,400,000 | [
[
"Finzi",
"Alberto",
""
],
[
"Lukasiewicz",
"Thomas",
""
]
] |
1212.2459 | Zhengzhu Feng | Zhengzhu Feng, Eric A. Hansen, Shlomo Zilberstein | Symbolic Generalization for On-line Planning | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-209-216 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symbolic representations have been used successfully in off-line planning
algorithms for Markov decision processes. We show that they can also improve
the performance of on-line planners. In addition to reducing computation time,
symbolic generalization can reduce the amount of costly real-world interactions
required for convergence. We introduce Symbolic Real-Time Dynamic Programming
(or sRTDP), an extension of RTDP. After each step of on-line interaction with
an environment, sRTDP uses symbolic model-checking techniques to generalize
its experience by updating a group of states rather than a single state. We
examine two heuristic approaches to dynamic grouping of states and show that
they accelerate the planning process significantly in terms of both CPU time
and the number of steps of interaction with the environment.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:05:06 GMT"
}
] | 1,355,270,400,000 | [
[
"Feng",
"Zhengzhu",
""
],
[
"Hansen",
"Eric A.",
""
],
[
"Zilberstein",
"Shlomo",
""
]
] |
1212.2461 | Thomas Eiter | Thomas Eiter, Thomas Lukasiewicz | Probabilistic Reasoning about Actions in Nonmonotonic Causal Theories | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-192-199 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the language PC+ for probabilistic reasoning about
actions, which is a generalization of the action language C+ that allows
us to deal with probabilistic as well as nondeterministic effects of actions. We
define a formal semantics of PC+ in terms of probabilistic
transitions between sets of states. Using a concept of a history and its belief
state, we then show how several important problems in reasoning about actions
can be concisely formulated in our formalism.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:04:57 GMT"
}
] | 1,355,270,400,000 | [
[
"Eiter",
"Thomas",
""
],
[
"Lukasiewicz",
"Thomas",
""
]
] |
1212.2463 | Rina Dechter | Rina Dechter, Robert Mateescu | A Simple Insight into Iterative Belief Propagation's Success | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-175-183 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In non-ergodic belief networks the posterior belief of many queries given
evidence may become zero. The paper shows that when belief propagation is
applied iteratively over arbitrary networks (the so-called iterative or loopy
belief propagation (IBP)) it is identical to an arc-consistency algorithm
relative to zero-belief queries (namely assessing zero posterior
probabilities). This implies that zero-belief conclusions derived by belief
propagation converge and are sound. More importantly, it suggests that the
inference power of IBP is as strong and as weak as that of arc-consistency.
This allows the synthesis of belief networks for which belief propagation is
useless on one hand, and focuses the investigation of classes of belief
networks for which belief propagation may be zero-complete. Finally, all
the above conclusions apply also to generalized belief propagation algorithms
that extend loopy belief propagation and allow a crisper understanding of their
power.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:04:48 GMT"
}
] | 1,355,270,400,000 | [
[
"Dechter",
"Rina",
""
],
[
"Mateescu",
"Robert",
""
]
] |
1212.2469 | Sanjay Chaudhari | Sanjay Chaudhari, Thomas S. Richardson | Using the structure of d-connecting paths as a qualitative measure of
the strength of dependence | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-116-123 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pearl's concept of a d-connecting path is one of the foundations of the
modern theory of graphical models: the absence of a d-connecting path in a DAG
indicates that conditional independence will hold in any distribution
factorising according to that graph. In this paper we show that in
singly-connected Gaussian DAGs it is possible to use the form of a d-connection
to obtain qualitative information about the strength of conditional
dependence. More precisely, the squared partial correlations between two given
variables, conditioned on different subsets, may be partially ordered by
examining the relationship between the d-connecting path and the set of
variables conditioned upon.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:04:23 GMT"
}
] | 1,355,270,400,000 | [
[
"Chaudhari",
"Sanjay",
""
],
[
"Richardson",
"Thomas S.",
""
]
] |
1212.2476 | David Ephraim Larkin | David Ephraim Larkin | Approximate Decomposition: A Method for Bounding and Estimating
Probabilistic and Deterministic Queries | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-346-353 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a method for approximating the solution to
inference and optimization tasks in uncertain and deterministic reasoning. Such
tasks are in general intractable for exact algorithms because of the large
number of dependency relationships in their structure. Our method effectively
maps such a dense problem to a sparser one which is in some sense "closest".
Exact methods can be run on the sparser problem to derive bounds on the
original answer, which can be quite sharp. We present empirical results
demonstrating that our method works well on the tasks of belief inference and
finding the probability of the most probable explanation in belief networks,
and finding the cost of the solution that violates the smallest number of
constraints in constraint satisfaction problems. On one large CPCS network, for
example, we were able to calculate upper and lower bounds on the conditional
probability of a variable, given evidence, that were almost identical in the
average case.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:06:19 GMT"
}
] | 1,355,270,400,000 | [
[
"Larkin",
"David Ephraim",
""
]
] |
1212.2481 | Milos Hauskrecht | Milos Hauskrecht, Tomas Singliar | Monte-Carlo optimizations for resource allocation problems in stochastic
network systems | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-305-312 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world distributed systems and networks are often unreliable and subject
to random failures of their components. Such stochastic behavior adversely
affects the complexity of optimization tasks routinely performed upon such
systems, in particular, various resource allocation tasks. In this work we
investigate and develop Monte Carlo solutions for a class of two-stage
optimization problems in stochastic networks in which the expected value of
resource allocations before and after stochastic failures needs to be
optimized. The limitation of these problems is that their exact solutions are
exponential in the number of unreliable network components: thus, exact methods
do not scale up well to the large networks often seen in practice. We first prove
that Monte Carlo optimization methods can overcome the exponential bottleneck
of exact methods. Next, we support our theoretical findings with resource
allocation experiments and show a very good scale-up potential of the new
methods to large stochastic networks.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:05:55 GMT"
}
] | 1,355,270,400,000 | [
[
"Hauskrecht",
"Milos",
""
],
[
"Singliar",
"Tomas",
""
]
] |
1212.2482 | Charles Gretton | Charles Gretton, David Price, Sylvie Thiebaux | Implementation and Comparison of Solution Methods for Decision Processes
with Non-Markovian Rewards | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-289-296 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines a number of solution methods for decision processes with
non-Markovian rewards (NMRDPs). They all exploit a temporal logic specification
of the reward function to automatically translate the NMRDP into an equivalent
Markov decision process (MDP) amenable to well-known MDP solution methods. They
differ however in the representation of the target MDP and the class of MDP
solution methods to which they are suited. As a result, they adopt different
temporal logics and different translations. Unfortunately, no implementation of
these methods, nor any experimental (let alone comparative) results, have ever been
reported. This paper is the first step towards filling this gap. We describe an
integrated system for solving NMRDPs which implements these methods and several
variants under a common interface; we use it to compare the various approaches
and identify the problem features favoring one over the other.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:05:51 GMT"
}
] | 1,355,270,400,000 | [
[
"Gretton",
"Charles",
""
],
[
"Price",
"David",
""
],
[
"Thiebaux",
"Sylvie",
""
]
] |
1212.2484 | Phan H. Giang | Phan H. Giang, Prakash P. Shenoy | Decision Making with Partially Consonant Belief Functions | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-272-280 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies decision making for Walley's partially consonant belief
functions (pcb). In a pcb, the set of foci is partitioned. Within each
partition, the foci are nested. The pcb class includes probability functions
and possibility functions as extreme cases. Unlike earlier proposals for a
decision theory with belief functions, we employ an axiomatic approach. We
adopt an axiom system similar in spirit to von Neumann-Morgenstern's linear
utility theory for a preference relation on pcb lotteries. We prove a
representation theorem for this relation. Utility for a pcb lottery is a
combination of linear utility for probabilistic lottery and binary utility for
possibilistic lottery.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:05:42 GMT"
}
] | 1,355,270,400,000 | [
[
"Giang",
"Phan H.",
""
],
[
"Shenoy",
"Prakash P.",
""
]
] |
1212.2486 | Brendan J. Frey | Brendan J. Frey | Extending Factor Graphs so as to Unify Directed and Undirected Graphical
Models | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-257-264 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The two most popular types of graphical model are directed models (Bayesian
networks) and undirected models (Markov random fields, or MRFs). Directed and
undirected models offer complementary properties in model construction,
expressing conditional independencies, expressing arbitrary factorizations of
joint distributions, and formulating message-passing inference algorithms. We
show that the strengths of these two representations can be combined in a
single type of graphical model called a 'factor graph'. Every Bayesian network
or MRF can be easily converted to a factor graph that expresses the same
conditional independencies, expresses the same factorization of the joint
distribution, and can be used for probabilistic inference through application
of a single, simple message-passing algorithm. In contrast to chain graphs,
where message-passing is implemented on a hypergraph, message-passing can be
directly implemented on the factor graph. We describe a modified 'Bayes-ball'
algorithm for establishing conditional independence in factor graphs, and we
show that factor graphs form a strict superset of Bayesian networks and MRFs.
In particular, we give an example of a commonly-used 'mixture of experts' model
fragment, whose independencies cannot be represented in a Bayesian network or
an MRF, but can be represented in a factor graph. We finish by giving examples
of real-world problems that are not well suited to representation in Bayesian
networks and MRFs, but are well-suited to representation in factor graphs.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:05:33 GMT"
}
] | 1,355,270,400,000 | [
[
"Frey",
"Brendan J.",
""
]
] |
1212.2496 | Patrice Perny | Patrice Perny, Olivier Spanjaard | An Axiomatic Approach to Robustness in Search Problems with Multiple
Scenarios | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-469-476 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is devoted to the search of robust solutions in state space graphs
when costs depend on scenarios. We first present axiomatic requirements for
preference compatibility with the intuitive idea of robustness. This leads us to
propose the Lorenz dominance rule as a basis for robustness analysis. Then,
after presenting complexity results about the determination of robust
solutions, we propose a new sophistication of A* specially designed to
determine the set of robust paths in a state space graph. The behavior of the
algorithm is illustrated on a small example. Finally, an axiomatic
justification of the refinement of robustness by an OWA criterion is provided.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:07:32 GMT"
}
] | 1,355,270,400,000 | [
[
"Perny",
"Patrice",
""
],
[
"Spanjaard",
"Olivier",
""
]
] |
1212.2497 | James D. Park | James D. Park, Adnan Darwiche | Solving MAP Exactly using Systematic Search | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-459-468 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MAP is the problem of finding a most probable instantiation of a set of
variables in a Bayesian network given some evidence. Unlike computing posterior
probabilities, or MPE (a special case of MAP), the time and space complexity of
structural solutions for MAP are not only exponential in the network treewidth,
but in a larger parameter known as the "constrained" treewidth. In practice,
this means that computing MAP can be orders of magnitude more expensive than
computing posterior probabilities or MPE. This paper introduces a new, simple
upper bound on the probability of a MAP solution, which admits a tradeoff
between the bound quality and the time needed to compute it. The bound is shown
to be generally much tighter than those of other methods of comparable
complexity. We use this proposed upper bound to develop a branch-and-bound
search algorithm for solving MAP exactly. Experimental results demonstrate that
the search algorithm is able to solve many problems that are far beyond the
reach of any structure-based method for MAP. For example, we show that the
proposed algorithm can compute MAP exactly and efficiently for some networks
whose constrained treewidth is more than 40.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:07:27 GMT"
}
] | 1,355,270,400,000 | [
[
"Park",
"James D.",
""
],
[
"Darwiche",
"Adnan",
""
]
] |
1212.2501 | Francisco Mugica | Francisco Mugica, Angela Nebot, Pilar Gomez | Dealing with uncertainty in fuzzy inductive reasoning methodology | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-427-434 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this research is to develop a reasoning under uncertainty strategy
in the context of the Fuzzy Inductive Reasoning (FIR) methodology. FIR emerged
from the General Systems Problem Solving developed by G. Klir. It is a
data-driven methodology based on systems behavior rather than on structural
knowledge. It is a very useful tool for both the modeling and the prediction of
those systems for which no previous structural knowledge is available. FIR
reasoning is based on pattern rules synthesized from the available data. The
size of the pattern rule base can be very large making the prediction process
quite difficult. In order to reduce the size of the pattern rule base, it is
possible to automatically extract classical Sugeno fuzzy rules starting from
the set of pattern rules. The Sugeno rule base preserves pattern rules
knowledge as much as possible. In this process some information is lost but
robustness is considerably increased. In the forecasting process either the
pattern rule base or the Sugeno fuzzy rule base can be used. The first option
is desirable when the computational resources make it possible to deal with the
overall pattern rule base or when the extracted fuzzy rules are not accurate
enough due to the uncertainty associated with the original data. In the second
option, the prediction process is done by means of the classical Sugeno
inference system. If the amount of uncertainty associated with the data is small,
the predictions obtained using the Sugeno fuzzy rule base will be very
accurate. In this paper a mixed pattern/fuzzy rules strategy is proposed to
deal with uncertainty in such a way that the best of both perspectives is used.
Areas in the data space with a higher level of uncertainty are identified by
means of the so-called error models. The prediction process in these areas
makes use of a mixed pattern/fuzzy rules scheme, whereas areas identified with
a lower level of uncertainty only use the Sugeno fuzzy rule base. The proposed
strategy is applied to a real biomedical system, i.e., the central nervous
system control of the cardiovascular system.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:07:07 GMT"
}
] | 1,355,270,400,000 | [
[
"Mugica",
"Francisco",
""
],
[
"Nebot",
"Angela",
""
],
[
"Gomez",
"Pilar",
""
]
] |
1212.2502 | Nicolas Meuleau | Nicolas Meuleau, David Smith | Optimal Limited Contingency Planning | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-417-426 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a given problem, the optimal Markov policy can be considered as a
conditional or contingent plan containing a (potentially large) number of
branches. Unfortunately, there are applications where it is desirable to
strictly limit the number of decision points and branches in a plan. For
example, it may be that plans must later undergo more detailed simulation to
verify correctness and safety, or that they must be simple enough to be
understood and analyzed by humans. As a result, it may be necessary to limit
consideration to plans with only a small number of branches. This raises the
question of how one goes about finding optimal plans containing only a limited
number of branches. In this paper, we present an any-time algorithm for optimal
k-contingency planning (OKP). It is the first optimal algorithm for limited
contingency planning that is not an explicit enumeration of possible contingent
plans. By modelling the problem as a Partially Observable Markov Decision
Process, it implements the Bellman optimality principle and prunes the solution
space. We present experimental results of applying this algorithm to some
simple test cases.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:07:02 GMT"
}
] | 1,355,270,400,000 | [
[
"Meuleau",
"Nicolas",
""
],
[
"Smith",
"David",
""
]
] |
1212.2505 | Radu Marinescu | Radu Marinescu, Kalev Kask, Rina Dechter | Systematic vs. Non-systematic Algorithms for Solving the MPE Task | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-394-402 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper continues the study of partitioning based inference of heuristics
for search in the context of solving the Most Probable Explanation task in
Bayesian Networks. We compare two systematic Branch and Bound search
algorithms, BBBT (for which the heuristic information is constructed during
search and allows dynamic variable/value ordering) and its predecessor BBMB
(for which the heuristic information is pre-compiled), against a number of
popular local search algorithms for the MPE problem. We show empirically that,
when viewed as approximation schemes, BBBT/BBMB are superior to all of these
best known SLS algorithms, especially when the domain sizes increase beyond 2.
This is in contrast with the performance of SLS vs. systematic search on
CSP/SAT problems, where SLS often significantly outperforms systematic
algorithms. As far as we know, BBBT/BBMB are currently the best performing
algorithms for solving the MPE task.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:06:47 GMT"
}
] | 1,355,270,400,000 | [
[
"Marinescu",
"Radu",
""
],
[
"Kask",
"Kalev",
""
],
[
"Dechter",
"Rina",
""
]
] |
1212.2507 | Changhe Yuan | Changhe Yuan, Marek J. Druzdzel | An Importance Sampling Algorithm Based on Evidence Pre-propagation | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-624-631 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Precision achieved by stochastic sampling algorithms for Bayesian networks
typically deteriorates in the face of extremely unlikely evidence. To address this
problem, we propose the Evidence Pre-propagation Importance Sampling algorithm
(EPIS-BN), an importance sampling algorithm that computes an approximate
importance function by two heuristic methods: loopy belief propagation and
e-cutoff. We tested the performance of e-cutoff on three large real Bayesian
networks: ANDES, CPCS, and PATHFINDER. We observed that on each of these
networks the EPIS-BN algorithm gives us a considerable improvement over the
current state-of-the-art algorithm, the AIS-BN algorithm. In addition, it
avoids the costly learning stage of the AIS-BN algorithm.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:08:55 GMT"
}
] | 1,355,270,400,000 | [
[
"Yuan",
"Changhe",
""
],
[
"Druzdzel",
"Marek J.",
""
]
] |
1212.2518 | Rita Sharma | Rita Sharma, David L Poole | Efficient Inference in Large Discrete Domains | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-535-542 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we examine the problem of inference in Bayesian Networks with
discrete random variables that have very large or even unbounded domains. For
example, in a domain where we are trying to identify a person, we may have
variables that have as domains the set of all names, the set of all postal
codes, or the set of all credit card numbers. We cannot just have big tables of
the conditional probabilities, but need compact representations. We provide an
inference algorithm, based on variable elimination, for belief networks
containing both large domain and normal discrete random variables. We use
intensional (i.e., in terms of procedures) and extensional (in terms of listing
the elements) representations of conditional probabilities and of the
intermediate factors.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:08:10 GMT"
}
] | 1,355,270,400,000 | [
[
"Sharma",
"Rita",
""
],
[
"Poole",
"David L",
""
]
] |
1212.2519 | Vitor Santos Costa | Vitor Santos Costa, David Page, Maleeha Qazi, James Cussens | CLP(BN): Constraint Logic Programming for Probabilistic Knowledge | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-517-524 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present CLP(BN), a novel approach that aims at expressing Bayesian
networks through the constraint logic programming framework. Arguably, an
important limitation of traditional Bayesian networks is that they are
propositional, and thus cannot represent relations between multiple similar
objects in multiple contexts. Several researchers have thus proposed
first-order languages to describe such networks. Notably, one very successful
example of this approach is Probabilistic Relational Models (PRMs), which
combine Bayesian networks with relational database technology. The key
difficulty that we had to address when designing CLP(BN) is that logic-based
representations use ground terms to denote objects. With probabilistic
data, we need to be able to uniquely represent an object whose value we are not
sure about. We use Skolem functions as unique new symbols that uniquely
represent objects with unknown value. The semantics of CLP(BN) programs
then naturally follow from the general framework of constraint logic
programming, as applied to a specific domain where we have probabilistic data.
This paper introduces and defines CLP(BN), and it describes an
implementation and initial experiments. The paper also shows how CLP(BN)
relates to Probabilistic Relational Models (PRMs), Ngo and Haddawy's
Probabilistic Logic Programs, and Kersting and De Raedt's Bayesian Logic
Programs.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:08:01 GMT"
}
] | 1,355,270,400,000 | [
[
"Costa",
"Vitor Santos",
""
],
[
"Page",
"David",
""
],
[
"Qazi",
"Maleeha",
""
],
[
"Cussens",
"James",
""
]
] |
1212.2614 | Michael Gr. Voskoglou Prof. Dr. | Michael Gr. Voskoglou | A Study on Fuzzy Systems | 9 pages, 3 figures, 1 table | American Journal of Computational and Applied Mathematics, 2(5),
232-240, 2012 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use principles of fuzzy logic to develop a general model representing
several processes in a system's operation characterized by a degree of
vagueness and/or uncertainty. Further, we introduce three alternative measures of
a fuzzy system's effectiveness connected to the above model. An application is
also developed for the Mathematical Modelling process, illustrating our results.
| [
{
"version": "v1",
"created": "Tue, 11 Dec 2012 20:31:01 GMT"
}
] | 1,355,270,400,000 | [
[
"Voskoglou",
"Michael Gr.",
""
]
] |
1212.2657 | Anna Ryabokon | Anna Ryabokon | Study: Symmetry breaking for ASP | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By their nature, configuration problems are combinatorial (optimization)
problems. In order to find a configuration, a solver has to instantiate a number
of components of some type, and each of these components can be used in a
relation defined for that type. Therefore, many solutions of a configuration
problem have symmetric counterparts which can be obtained by replacing some
component of a solution with another one of the same type. These symmetric
solutions decrease the performance of optimization algorithms for two reasons:
a) they satisfy all requirements and cannot be pruned from the search space;
and b) the existence of symmetric optimal solutions does not allow proving
optimality in feasible time.
| [
{
"version": "v1",
"created": "Tue, 11 Dec 2012 21:47:26 GMT"
}
] | 1,355,356,800,000 | [
[
"Ryabokon",
"Anna",
""
]
] |
1212.2671 | Ignacio Algredo-Badillo Dr. | Ernesto Cort\'es P\'erez, Ignacio Algredo-Badillo, V\'ictor Hugo
Garc\'ia Rodr\'iguez | Performance Analysis of ANFIS in short term Wind Speed Prediction | 9 pages, 11 figures, 1 table; IJCSI International Journal of Computer
Science Issues, Vol. 9, Issue 5, No 3, September 2012. ISSN (Online):
1694-0814. www.IJCSI.org | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Results are presented on the performance of Adaptive Neuro-Fuzzy Inference
system (ANFIS) for wind velocity forecasts in the Isthmus of Tehuantepec region
in the state of Oaxaca, Mexico. The data bank was provided by the
meteorological station located at the University of Isthmus, Tehuantepec
campus, and it covers the period from 2008 to 2011. Three data
models were constructed to carry out 16-, 24- and 48-hour forecasts using the
following variables: wind velocity, temperature, barometric pressure, and date.
The performance measure for the three models is the mean standard error (MSE).
In this work, a performance analysis for short-term prediction is presented,
because it is essential in order to define an adequate wind speed model for
wind farms, where proper planning provides economic benefits.
| [
{
"version": "v1",
"created": "Tue, 11 Dec 2012 22:48:36 GMT"
}
] | 1,355,356,800,000 | [
[
"Pérez",
"Ernesto Cortés",
""
],
[
"Algredo-Badillo",
"Ignacio",
""
],
[
"Rodríguez",
"Víctor Hugo García",
""
]
] |
1212.2857 | Francesco Santini | Stefano Bistarelli, Francesco Santini | ConArg: a Tool to Solve (Weighted) Abstract Argumentation Frameworks
with (Soft) Constraints | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ConArg is a Constraint Programming-based tool that can be used to model and
solve different problems related to Abstract Argumentation Frameworks (AFs). To
implement this tool we have used JaCoP, a Java library that provides the user
with a Finite Domain Constraint Programming paradigm. ConArg is able to
randomly generate networks with small-world properties in order to find
conflict-free, admissible, complete, stable, grounded, preferred, semi-stable,
stage and ideal extensions on such interaction graphs. We present the main
features of ConArg and report its performance in terms of time, also showing a
comparison with ASPARTIX [1], a similar tool using Answer Set Programming. The
use of constraint-solving techniques can tackle the complexity of the
problems presented in [2]. Moreover, we suggest semiring-based soft constraints
as a means to parametrically represent and solve Weighted Argumentation
Frameworks: different kinds of preference levels related to attacks, e.g., a
score representing a "fuzziness", a "cost" or a probability, can be represented
by choosing different instantiations of the semiring algebraic structure. The
basic idea is to provide a common computational and quantitative framework.
| [
{
"version": "v1",
"created": "Wed, 12 Dec 2012 16:06:28 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Jan 2013 16:33:47 GMT"
}
] | 1,358,380,800,000 | [
[
"Bistarelli",
"Stefano",
""
],
[
"Santini",
"Francesco",
""
]
] |
1212.2902 | Michael Schneider | Michael Schneider, Sebastian Rudolph, Geoff Sutcliffe | Modeling in OWL 2 without Restrictions | Technical Report | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Semantic Web ontology language OWL 2 DL comes with a variety of language
features that enable sophisticated and practically useful modeling. However,
the use of these features has been severely restricted in order to retain
decidability of the language. For example, OWL 2 DL does not allow a property
to be both transitive and asymmetric, which would be desirable, e.g., for
representing an ancestor relation. In this paper, we argue that the so-called
global restrictions of OWL 2 DL preclude many useful forms of modeling, by
providing a catalog of basic modeling patterns that would be available in OWL 2
DL if the global restrictions were discarded. We then report on the results of
evaluating several state-of-the-art OWL 2 DL reasoners on problems that use
combinations of features in a way that violates the global restrictions.
The systems turn out to rely heavily on the global restrictions and are thus
largely incapable of coping with the modeling patterns. Next we show how
off-the-shelf first-order logic theorem proving technology can be used to
perform reasoning in the OWL 2 direct semantics, the semantics that underlies
OWL 2 DL, but without requiring the global restrictions. Applying a naive
proof-of-concept implementation of this approach to the test problems was
successful in all cases. Based on our observations, we make suggestions for
future lines of research on expressive description logic-style OWL reasoning.
| [
{
"version": "v1",
"created": "Wed, 12 Dec 2012 17:58:01 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Dec 2012 06:41:41 GMT"
},
{
"version": "v3",
"created": "Sun, 28 Apr 2013 22:30:09 GMT"
}
] | 1,367,280,000,000 | [
[
"Schneider",
"Michael",
""
],
[
"Rudolph",
"Sebastian",
""
],
[
"Sutcliffe",
"Geoff",
""
]
] |
1212.5276 | Marc Schoenauer | Mostepha Redouane Khouadjia (INRIA Saclay - Ile de France), Marc
Schoenauer (INRIA Saclay - Ile de France, LRI), Vincent Vidal (DCSD), Johann
Dr\'eo (TRT), Pierre Sav\'eant (TRT) | Multi-Objective AI Planning: Evaluating DAE-YAHSP on a Tunable Benchmark | 7th International Conference on Evolutionary Multi-Criterion
Optimization (2013). To appear. arXiv admin note: text overlap with
arXiv:0804.3965 by other authors | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | All standard AI planners to-date can only handle a single objective, and the
only way for them to take into account multiple objectives is by aggregation of
the objectives. Furthermore, and in deep contrast with the single objective
case, there exist no benchmark problems on which to test the algorithms for
multi-objective planning. Divide and Evolve (DAE) is an evolutionary planner
that won the (single-objective) deterministic temporal satisficing track in the
last International Planning Competition. Even though it intensively uses the
classical (and hence single-objective) planner YAHSP, it is possible to turn
DAE-YAHSP into a multi-objective evolutionary planner. A tunable benchmark
suite for multi-objective planning is first proposed, and the performances of
several variants of multi-objective DAE-YAHSP are compared on different
instances of this benchmark, hopefully paving the road to further
multi-objective competitions in AI planning.
| [
{
"version": "v1",
"created": "Thu, 20 Dec 2012 21:26:17 GMT"
}
] | 1,356,307,200,000 | [
[
"Khouadjia",
"Mostepha Redouane",
"",
"INRIA Saclay - Ile de France"
],
[
"Schoenauer",
"Marc",
"",
"INRIA Saclay - Ile de France, LRI"
],
[
"Vidal",
"Vincent",
"",
"DCSD"
],
[
"Dréo",
"Johann",
"",
"TRT"
],
[
"Savéant",
"Pierre",
"",
"TRT"
]
] |