Dataset columns:
id: stringlengths (9-10)
submitter: stringlengths (5-47)
authors: stringlengths (5-1.72k)
title: stringlengths (11-234)
comments: stringlengths (1-491)
journal-ref: stringlengths (4-396)
doi: stringlengths (13-97)
report-no: stringlengths (4-138)
categories: stringclasses (1 value)
license: stringclasses (9 values)
abstract: stringlengths (29-3.66k)
versions: listlengths (1-21)
update_date: int64 (1,180B-1,718B)
authors_parsed: sequencelengths (1-98)
1301.3869
Daphne Koller
Daphne Koller, Ron Parr
Policy Iteration for Factored MDPs
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-326-334
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many large MDPs can be represented compactly using a dynamic Bayesian network. Although the structure of the value function does not retain the structure of the process, recent work has shown that value functions in factored MDPs can often be approximated well using a decomposed value function: a linear combination of restricted basis functions, each of which refers only to a small subset of variables. An approximate value function for a particular policy can be computed using approximate dynamic programming, but this approach (and others) can only produce an approximation relative to a distance metric which is weighted by the stationary distribution of the current policy. This type of weighted projection is ill-suited to policy improvement. We present a new approach to value determination that uses a simple closed-form computation to directly compute a least-squares decomposed approximation to the value function for any weights. We then use this value determination algorithm as a subroutine in a policy iteration process. We show that, under reasonable restrictions, the policies induced by a factored value function are compactly represented, and can be manipulated efficiently in a policy iteration process. We also present a method for computing error bounds for decomposed value functions using a variable-elimination algorithm for function optimization. The complexity of all of our algorithms depends on the factorization of the system dynamics and of the approximate value function.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:51:06 GMT" } ]
1,358,467,200,000
[ [ "Koller", "Daphne", "" ], [ "Parr", "Ron", "" ] ]
1301.3872
Tsai-Ching Lu
Tsai-Ching Lu, Marek J. Druzdzel, Tze-Yun Leong
Causal Mechanism-based Model Construction
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-353-362
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a framework for building graphical causal models that is based on the concept of causal mechanisms. Causal models are intuitive for human users and, more importantly, support the prediction of the effect of manipulation. We describe an implementation of the proposed framework as an interactive model construction module, ImaGeNIe, in SMILE (Structural Modeling, Inference, and Learning Engine) and in GeNIe (SMILE's Windows user interface).
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:51:18 GMT" } ]
1,358,467,200,000
[ [ "Lu", "Tsai-Ching", "" ], [ "Druzdzel", "Marek J.", "" ], [ "Leong", "Tze-Yun", "" ] ]
1301.3873
Thomas Lukasiewicz
Thomas Lukasiewicz
Credal Networks under Maximum Entropy
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-363-370
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy model, which is computed sequentially. We then show that for all general Bayesian networks, the sequential maximum entropy model coincides with the unique joint distribution. Moreover, we apply the new principle of sequential maximum entropy to interval Bayesian networks and more generally to credal networks. We especially show that this application is equivalent to a number of small local entropy maximizations.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:51:22 GMT" } ]
1,358,467,200,000
[ [ "Lukasiewicz", "Thomas", "" ] ]
1301.3874
Peter McBurney
Peter McBurney, Simon Parsons
Risk Agoras: Dialectical Argumentation for Scientific Reasoning
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-371-379
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a formal framework for intelligent systems which can reason about scientific domains, in particular about the carcinogenicity of chemicals, and we study its properties. Our framework is grounded in a philosophy of scientific enquiry and discourse, and uses a model of dialectical argumentation. The formalism enables representation of scientific uncertainty and conflict in a manner suitable for qualitative reasoning about the domain.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:51:26 GMT" } ]
1,358,467,200,000
[ [ "McBurney", "Peter", "" ], [ "Parsons", "Simon", "" ] ]
1301.3876
Brian Milch
Brian Milch, Daphne Koller
Probabilistic Models for Agents' Beliefs and Decisions
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-389-396
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many applications of intelligent systems require reasoning about the mental states of agents in the domain. We may want to reason about an agent's beliefs, including beliefs about other agents; we may also want to reason about an agent's preferences, and how his beliefs and preferences relate to his behavior. We define a probabilistic epistemic logic (PEL) in which belief statements are given a formal semantics, and provide an algorithm for asserting and querying PEL formulas in Bayesian networks. We then show how to reason about an agent's behavior by modeling his decision process as an influence diagram and assuming that he behaves rationally. PEL can then be used for reasoning from an agent's observed actions to conclusions about other aspects of the domain, including unobserved domain variables and the agent's mental states.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:51:34 GMT" } ]
1,358,467,200,000
[ [ "Milch", "Brian", "" ], [ "Koller", "Daphne", "" ] ]
1301.3879
Thomas D. Nielsen
Thomas D. Nielsen, Finn Verner Jensen
Representing and Solving Asymmetric Bayesian Decision Problems
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-416-425
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper deals with the representation and solution of asymmetric Bayesian decision problems. We present a formal framework, termed asymmetric influence diagrams, that is based on the influence diagram and allows an efficient representation of asymmetric decision problems. As opposed to existing frameworks, the asymmetric influence diagram primarily encodes asymmetry at the qualitative level and it can therefore be read directly from the model. We give an algorithm for solving asymmetric influence diagrams. The algorithm initially decomposes the asymmetric decision problem into a structure of symmetric subproblems organized as a tree. A solution to the decision problem can then be found by propagating from the leaves toward the root using existing evaluation methods to solve the subproblems.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:51:46 GMT" } ]
1,358,467,200,000
[ [ "Nielsen", "Thomas D.", "" ], [ "Jensen", "Finn Verner", "" ] ]
1301.3880
Thomas D. Nielsen
Thomas D. Nielsen, Pierre-Henri Wuillemin, Finn Verner Jensen, Uffe Kj{\ae}rulff
Using ROBDDs for Inference in Bayesian Networks with Troubleshooting as an Example
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-426-435
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When using Bayesian networks for modelling the behavior of man-made machinery, it usually happens that a large part of the model is deterministic. For such Bayesian networks, the deterministic part of the model can be represented as a Boolean function, and a central part of belief updating reduces to the task of calculating the number of satisfying configurations of a Boolean function. In this paper we explore how advances in the calculation of Boolean functions can be adopted for belief updating, in particular within the context of troubleshooting. We present experimental results indicating a substantial speed-up compared to traditional junction tree propagation.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:51:50 GMT" } ]
1,358,467,200,000
[ [ "Nielsen", "Thomas D.", "" ], [ "Wuillemin", "Pierre-Henri", "" ], [ "Jensen", "Finn Verner", "" ], [ "Kjærulff", "Uffe", "" ] ]
1301.3881
Dennis Nilsson
Dennis Nilsson, Steffen L. Lauritzen
Evaluating Influence Diagrams using LIMIDs
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-436-445
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new approach to the solution of decision problems formulated as influence diagrams. The approach converts the influence diagram into a simpler structure, the LImited Memory Influence Diagram (LIMID), where only the requisite information for the computation of optimal policies is depicted. Because the requisite information is explicitly represented in the diagram, the evaluation procedure can take advantage of it. In this paper we show how to convert an influence diagram to a LIMID and describe the procedure for finding an optimal strategy. Our approach can yield significant savings of memory and computational time when compared to traditional methods.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:51:54 GMT" } ]
1,358,467,200,000
[ [ "Nilsson", "Dennis", "" ], [ "Lauritzen", "Steffen L.", "" ] ]
1301.3883
Tim Paek
Tim Paek, Eric J. Horvitz
Conversation as Action Under Uncertainty
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-455-464
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conversations abound with uncertainties of various kinds. Treating conversation as inference and decision making under uncertainty, we propose a task-independent, multimodal architecture for supporting robust continuous spoken dialog called Quartet. We introduce four interdependent levels of analysis, and describe representations, inference procedures, and decision strategies for managing uncertainties within and between the levels. We highlight the approach by reviewing interactions between a user and two spoken dialog systems developed using the Quartet architecture: Presenter, a prototype system for navigating Microsoft PowerPoint presentations, and the Bayesian Receptionist, a prototype system for dealing with tasks typically handled by front desk receptionists at the Microsoft corporate campus.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:52:02 GMT" } ]
1,358,467,200,000
[ [ "Paek", "Tim", "" ], [ "Horvitz", "Eric J.", "" ] ]
1301.3887
Pascal Poupart
Pascal Poupart, Craig Boutilier
Value-Directed Belief State Approximation for POMDPs
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-497-506
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of belief-state monitoring for the purposes of implementing a policy for a partially-observable Markov decision process (POMDP), specifically how one might approximate the belief state. Other schemes for belief-state approximation (e.g., based on minimizing a measure such as the KL-divergence between the true and estimated state) are not necessarily appropriate for POMDPs. Instead we propose a framework for analyzing value-directed approximation schemes, where approximation quality is determined by the expected error in utility rather than by the error in the belief state itself. We propose heuristic methods for finding good projection schemes for belief state estimation - exhibiting anytime characteristics - given a POMDP value function. We also describe several algorithms for constructing bounds on the error in decision quality (expected utility) associated with acting in accordance with a given belief state approximation.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:52:18 GMT" } ]
1,358,467,200,000
[ [ "Poupart", "Pascal", "" ], [ "Boutilier", "Craig", "" ] ]
1301.3888
David V. Pynadath
David V. Pynadath, Michael P. Wellman
Probabilistic State-Dependent Grammars for Plan Recognition
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-507-514
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Techniques for plan recognition under uncertainty require a stochastic model of the plan-generation process. We introduce Probabilistic State-Dependent Grammars (PSDGs) to represent an agent's plan-generation process. The PSDG language model extends probabilistic context-free grammars (PCFGs) by allowing production probabilities to depend on an explicit model of the planning agent's internal and external state. Given a PSDG description of the plan-generation process, we can then use inference algorithms that exploit the particular independence properties of the PSDG language to efficiently answer plan-recognition queries. The combination of the PSDG language model and inference algorithms extends the range of plan-recognition domains for which practical probabilistic inference is possible, as illustrated by applications in traffic monitoring and air combat.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:52:22 GMT" } ]
1,358,467,200,000
[ [ "Pynadath", "David V.", "" ], [ "Wellman", "Michael P.", "" ] ]
1301.3889
Silja Renooij
Silja Renooij, Linda C. van der Gaag, Simon Parsons, Shaw Green
Pivotal Pruning of Trade-offs in QPNs
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-515-522
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Qualitative probabilistic networks have been designed for probabilistic reasoning in a qualitative way. Due to their coarse level of representation detail, qualitative probabilistic networks do not provide for resolving trade-offs and typically yield ambiguous results upon inference. We present an algorithm for computing more insightful results for unresolved trade-offs. The algorithm builds upon the idea of using pivots to zoom in on the trade-offs and to identify the information that would serve to resolve them.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:52:25 GMT" } ]
1,358,467,200,000
[ [ "Renooij", "Silja", "" ], [ "van der Gaag", "Linda C.", "" ], [ "Parsons", "Simon", "" ], [ "Green", "Shaw", "" ] ]
1301.3893
Claus Skaanning
Claus Skaanning
A Knowledge Acquisition Tool for Bayesian-Network Troubleshooters
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-549-557
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a domain-specific knowledge acquisition tool for intelligent automated troubleshooters based on Bayesian networks. No Bayesian network knowledge is required to use the tool, and troubleshooting information can be specified as naturally and intuitively as possible. Probabilities can be specified in the direction that is most natural to the domain expert. Thus, the knowledge acquisition tool efficiently removes the traditional knowledge acquisition bottleneck of Bayesian networks.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:52:41 GMT" } ]
1,358,467,200,000
[ [ "Skaanning", "Claus", "" ] ]
1301.3894
Harald Steck
Harald Steck
On the Use of Skeletons when Learning in Bayesian Networks
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-558-565
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a heuristic operator which aims at simultaneously optimizing the orientations of all the edges in an intermediate Bayesian network structure during the search process. This is done by alternating between the space of directed acyclic graphs (DAGs) and the space of skeletons. The found orientations of the edges are based on a scoring function rather than on induced conditional independences. This operator can be used as an extension to commonly employed search strategies. It is evaluated in experiments with artificial and real-world data.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:52:45 GMT" } ]
1,358,467,200,000
[ [ "Steck", "Harald", "" ] ]
1301.3898
Jin Tian
Jin Tian, Judea Pearl
Probabilities of Causation: Bounds and Identification
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-589-598
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper deals with the problem of estimating the probability that one event was a cause of another in a given scenario. Using structural-semantical definitions of the probabilities of necessary or sufficient causation (or both), we show how to optimally bound these quantities from data obtained in experimental and observational studies, making minimal assumptions concerning the data-generating process. In particular, we strengthen the results of Pearl (1999) by weakening the data-generation assumptions and deriving theoretically sharp bounds on the probabilities of causation. These results delineate precisely how empirical data can be used both in settling questions of attribution and in solving attribution-related problems of decision making.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:53:00 GMT" } ]
1,358,467,200,000
[ [ "Tian", "Jin", "" ], [ "Pearl", "Judea", "" ] ]
1301.3900
Jirina Vejnarova
Jirina Vejnarova
Conditional Independence and Markov Properties in Possibility Theory
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-609-616
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conditional independence and Markov properties are powerful tools allowing expression of multidimensional probability distributions by means of low-dimensional ones. As multidimensional possibilistic models have been studied for several years, the demand for analogous tools in possibility theory seems to be quite natural. This paper is intended to be a promotion of de Cooman's measure-theoretic approach to possibility theory, as this approach allows us to find analogies to many important results obtained in the probabilistic framework. First, we recall semi-graphoid properties of conditional possibilistic independence, parameterized by a continuous t-norm, and find sufficient conditions for a class of Archimedean t-norms to have the graphoid property. Then we introduce Markov properties and factorization of possibility distributions (again parameterized by a continuous t-norm) and find the relationships between them. These results are accompanied by a number of counterexamples, which show that the assumptions of specific theorems are substantial.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:53:09 GMT" } ]
1,358,467,200,000
[ [ "Vejnarova", "Jirina", "" ] ]
1301.3903
Frank Wittig
Frank Wittig, Anthony Jameson
Exploiting Qualitative Knowledge in the Learning of Conditional Probabilities of Bayesian Networks
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-644-652
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithms for learning the conditional probabilities of Bayesian networks with hidden variables typically operate within a high-dimensional search space and yield only locally optimal solutions. One way of limiting the search space and avoiding local optima is to impose qualitative constraints that are based on background knowledge concerning the domain. We present a method for integrating formal statements of qualitative constraints into two learning algorithms, APN and EM. In our experiments with synthetic data, this method yielded networks that satisfied the constraints almost perfectly. The accuracy of the learned networks was consistently superior to that of corresponding networks learned without constraints. The exploitation of qualitative constraints therefore appears to be a promising way to increase both the interpretability and the accuracy of learned Bayesian networks with known structure.
[ { "version": "v1", "created": "Wed, 16 Jan 2013 15:53:24 GMT" } ]
1,358,467,200,000
[ [ "Wittig", "Frank", "" ], [ "Jameson", "Anthony", "" ] ]
1301.4272
Marco Correia
Marco Correia and Pedro Barahona
View-based propagation of decomposable constraints
The final publication is available at link.springer.com
null
10.1007/s10601-013-9140-8
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Constraints that may be obtained by composition from simpler constraints are present, in some way or another, in almost every constraint program. The decomposition of such constraints is a standard technique for obtaining an adequate propagation algorithm from a combination of propagators designed for simpler constraints. The decomposition approach is appealing in several ways. Firstly, because creating a specific propagator for every constraint is clearly infeasible, since the number of constraints is infinite. Secondly, because designing a propagation algorithm for complex constraints can be very challenging. Finally, because reusing existing propagators reduces the size of the code to be developed and maintained. Traditionally, constraint solvers automatically decompose constraints into simpler ones using additional auxiliary variables and propagators, or expect the users to perform such decomposition themselves, eventually leading to the same propagation model. In this paper we explore views, an alternative way to create efficient propagators for such constraints in a modular, simple and correct way, which avoids the introduction of auxiliary variables and propagators.
[ { "version": "v1", "created": "Thu, 17 Jan 2013 23:37:47 GMT" } ]
1,358,726,400,000
[ [ "Correia", "Marco", "" ], [ "Barahona", "Pedro", "" ] ]
1301.4430
Haiqin Wang
Haiqin Wang, Marek J. Druzdzel
User Interface Tools for Navigation in Conditional Probability Tables and Elicitation of Probabilities in Bayesian Networks
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-617-625
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Elicitation of probabilities is one of the most laborious tasks in building decision-theoretic models, and one that has so far received only moderate attention in decision-theoretic systems. We propose a set of user interface tools for graphical probabilistic models, focusing on two aspects of probability elicitation: (1) navigation through conditional probability tables and (2) interactive graphical assessment of discrete probability distributions. We propose two new graphical views that aid navigation in very large conditional probability tables: the CPTree (Conditional Probability Tree) and the SCPT (Shrinkable Conditional Probability Table). Based on what is known about graphical presentation of quantitative data to humans, we offer several useful enhancements to the probability wheel and bar graph, including different chart styles and options that can be adapted to user preferences and needs. We present the results of a simple usability study that proves the value of the proposed tools.
[ { "version": "v1", "created": "Fri, 18 Jan 2013 16:50:44 GMT" } ]
1,358,726,400,000
[ [ "Wang", "Haiqin", "" ], [ "Druzdzel", "Marek J.", "" ] ]
1301.4604
Nando de Freitas
Nando de Freitas and Kevin Murphy
Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (2012)
null
null
null
UAI2012
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is the Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, which was held on Catalina Island, CA, August 14-18, 2012.
[ { "version": "v1", "created": "Sat, 19 Jan 2013 22:32:52 GMT" }, { "version": "v2", "created": "Thu, 28 Aug 2014 04:31:38 GMT" } ]
1,409,270,400,000
[ [ "de Freitas", "Nando", "" ], [ "Murphy", "Kevin", "" ] ]
1301.4606
Christopher Meek
Christopher Meek and Uffe Kjaerulff
Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (2003)
null
null
null
UAI2003
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is the Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, which was held in Acapulco, Mexico, August 7-10, 2003.
[ { "version": "v1", "created": "Sat, 19 Jan 2013 23:12:33 GMT" }, { "version": "v2", "created": "Thu, 28 Aug 2014 04:18:59 GMT" } ]
1,409,270,400,000
[ [ "Meek", "Christopher", "" ], [ "Kjaerulff", "Uffe", "" ] ]
1301.4607
John Breese
John Breese and Daphne Koller
Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (2001)
null
null
null
UAI2001
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is the Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, which was held in Seattle, WA, August 2-5, 2001.
[ { "version": "v1", "created": "Sat, 19 Jan 2013 23:16:59 GMT" }, { "version": "v2", "created": "Thu, 28 Aug 2014 04:16:28 GMT" } ]
1,409,270,400,000
[ [ "Breese", "John", "" ], [ "Koller", "Daphne", "" ] ]
1301.4608
Adnan Darwiche
Adnan Darwiche and Nir Friedman
Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (2002)
null
null
null
UAI2002
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is the Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, which was held in Alberta, Canada, August 1-4, 2002.
[ { "version": "v1", "created": "Sat, 19 Jan 2013 23:17:26 GMT" }, { "version": "v2", "created": "Thu, 28 Aug 2014 04:17:50 GMT" } ]
1,409,270,400,000
[ [ "Darwiche", "Adnan", "" ], [ "Friedman", "Nir", "" ] ]
1301.4659
Firoj Parwej Dr.
Firoj Parwej
English Sentence Recognition using Artificial Neural Network through Mouse-based Gestures
6 Pages, 7 Figures. arXiv admin note: text overlap with arXiv:1007.0627 by other authors without attribution
International Journal of Computer Applications (IJCA)USA, Volume 61, No.17, January 2013 ISSN 0975 - 8887, http://www.ijcaonline.org, http://www.ijcaonline.org/archives/volume61/number17/10023-4998
10.5120/10023-4998
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Handwriting is one of the most important means of daily communication. Although the problem of handwriting recognition has been considered for more than 60 years, there are still many open issues, especially in the task of unconstrained handwritten sentence recognition. This paper focuses on an automatic system that recognizes continuous English sentences through mouse-based gestures in real time, based on an Artificial Neural Network. The proposed Artificial Neural Network is trained using the traditional backpropagation algorithm for a self-supervised neural network, which provides the system with great learning ability and has proven highly successful in training feed-forward Artificial Neural Networks. The designed algorithm is capable of translating not only discrete gesture moves, but also continuous gestures through the mouse. In this paper we use an efficient neural network approach for recognizing English sentences drawn with the mouse. This approach shows an efficient way of extracting the boundary of the English sentence, specifies the area of the image where the English sentence has been drawn, and then uses an Artificial Neural Network to recognize the English sentence. The proposed English Sentence Recognition (ESR) system was designed and tested successfully. Experimental results show high recognition speed and accuracy.
[ { "version": "v1", "created": "Sun, 20 Jan 2013 14:13:22 GMT" } ]
1,358,812,800,000
[ [ "Parwej", "Firoj", "" ] ]
1301.4991
Christophe Cruz
Helmi Ben Hmida (i3mainz), Christophe Cruz (Le2i), Frank Boochs (i3mainz), Christophe Nicolle (Le2i)
Knowledge Base Approach for 3D Objects Detection in Point Clouds Using 3D Processing and Specialists Knowledge
ISSN: 1942-2679. arXiv admin note: text overlap with arXiv:1301.4783
International Journal On Advances in Intelligent Systems 5, 1 et 2 (2012) 1-14
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a knowledge-based object detection approach using the OWL ontology language, the Semantic Web Rule Language, and 3D processing built-ins, aiming at combining geometrical analysis of 3D point clouds with specialists' knowledge. Here, we share our experience regarding the creation of a 3D semantic facility model out of unorganized 3D point clouds. Thus, a knowledge-based object detection approach using the OWL ontology language is presented. This knowledge is used to define SWRL detection rules. In addition, the combination of 3D processing built-ins and topological built-ins in SWRL rules allows a more flexible and intelligent detection, and the annotation of objects contained in 3D point clouds. The created WiDOP prototype takes a set of 3D point clouds as input, and produces as output a populated ontology corresponding to an indexed scene visualized within the VRML language. The context of the study is the detection of railway objects materialized within the Deutsche Bahn scene, such as signals, technical cupboards, electric poles, etc. Thus, the resulting enriched and populated ontology, which contains the annotations of objects in the point clouds, is used to feed a GIS system or an IFC file for architectural purposes.
[ { "version": "v1", "created": "Mon, 21 Jan 2013 12:42:17 GMT" } ]
1,358,899,200,000
[ [ "Hmida", "Helmi Ben", "", "i3mainz" ], [ "Cruz", "Christophe", "", "Le2i" ], [ "Boochs", "Frank", "", "i3mainz" ], [ "Nicolle", "Christophe", "", "Le2i" ] ]
1301.4992
Christophe Cruz
Helmi Ben Hmida (i3mainz), Christophe Cruz (Le2i), Frank Boochs (i3mainz), Christophe Nicolle (Le2i)
From 9-IM Topological Operators to Qualitative Spatial Relations using 3D Selective Nef Complexes and Logic Rules for bodies
arXiv admin note: substantial text overlap with arXiv:1301.4780
International Conference on Knowledge Engineering and Ontology Development, Barcelone : Spain (2012)
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a method to compute topological relations automatically using SWRL rules. The calculation of these rules is based on the definition of a Selective Nef Complexes Nef Polyhedra structure generated from standard Polyhedra. The Selective Nef Complexes is a data model providing a set of binary Boolean operators such as Union, Difference, Intersection and Symmetric difference, and unary operators such as Interior, Closure and Boundary. In this work, these operators are used to compute topological relations between objects defined by the constraints of the 9 Intersection Model (9-IM) from Egenhofer. With the help of these constraints, we defined a procedure to compute the topological relations on Nef polyhedra. These topological relationships are Disjoint, Meets, Contains, Inside, Covers, CoveredBy, Equals and Overlaps, and are defined in a top-level ontology with specific semantic definitions on relations such as Transitive, Symmetric, Asymmetric, Functional, Reflexive, and Irreflexive. The results of the computation of topological relationships are stored in an OWL-DL ontology, which then makes it possible to infer on these new relationships between objects. In addition, logic rules based on the Semantic Web Rule Language allow the definition of logic programs that define which topological relationships have to be computed on which kinds of objects with specific attributes. For instance, a "Building" that overlaps a "Railway" is a "RailStation".
[ { "version": "v1", "created": "Mon, 21 Jan 2013 12:43:38 GMT" } ]
1,358,899,200,000
[ [ "Hmida", "Helmi Ben", "", "i3mainz" ], [ "Cruz", "Christophe", "", "Le2i" ], [ "Boochs", "Frank", "", "i3mainz" ], [ "Nicolle", "Christophe", "", "Le2i" ] ]
1301.5946
Lu\'is Filipe Te\'ofilo
Lu\'is Filipe Te\'ofilo, Lu\'is Paulo Reis, Henrique Lopes Cardoso, Dinis F\'elix, Rui S\^eca, Jo\~ao Ferreira, Pedro Mendes, Nuno Cruz, Vitor Pereira, Nuno Passos
Computer Poker Research at LIACC
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computer Poker's unique characteristics present a well-suited challenge for research in artificial intelligence. For that reason, and due to Poker's increase in popularity in Portugal since 2008, several members of LIACC have carried out research in this field. Several works were published as papers and master's theses, and more recently a member of LIACC engaged in research in this area as a Ph.D. thesis in order to develop a more extensive and in-depth work. This paper describes the existing research at LIACC on Computer Poker, with special emphasis on the completed master's theses and plans for future work. This paper presents a summary of the lab's work to the research community in order to encourage the exchange of ideas with other labs / individuals. LIACC hopes this will improve research in this area so as to reach the goal of creating an agent that surpasses the best human players.
[ { "version": "v1", "created": "Fri, 25 Jan 2013 01:56:03 GMT" } ]
1,359,331,200,000
[ [ "Teófilo", "Luís Filipe", "" ], [ "Reis", "Luís Paulo", "" ], [ "Cardoso", "Henrique Lopes", "" ], [ "Félix", "Dinis", "" ], [ "Sêca", "Rui", "" ], [ "Ferreira", "João", "" ], [ "Mendes", "Pedro", "" ], [ "Cruz", "Nuno", "" ], [ "Pereira", "Vitor", "" ], [ "Passos", "Nuno", "" ] ]
1301.6011
D P Acharjya Ph.D
B.K.Tripathy, D.P.Acharjya and V.Cynthya
A Framework for Intelligent Medical Diagnosis using Rough Set with Formal Concept Analysis
22 pages
International Journal of Artificial Intelligence & Applications (IJAIA), Vol.2, No.2, April 2011
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical diagnosis processes vary in the degree to which they attempt to deal with different complicating aspects of diagnosis, such as the relative importance of symptoms, varied symptom patterns, and the relations between diseases themselves. Based on decision theory, many mathematical models such as crisp sets, probability distributions, fuzzy sets, and intuitionistic fuzzy sets were developed in the past to deal with these complicating aspects of diagnosis. But many such models fail to include important aspects of expert decisions. Therefore, an effort was made by Pawlak to handle inconsistencies in data with the introduction of rough set theory. Though rough set theory has major advantages over the other methods, it generates too many rules, which creates difficulties in decision making. Therefore, it is essential to minimize the decision rules. In this paper, we use two processes, pre-processing and post-processing, to mine suitable rules and to explore the relationships among the attributes. In pre-processing we use rough set theory to mine suitable rules, whereas in post-processing we use formal concept analysis on these rules to explore better knowledge and the most important factors affecting decision making.
[ { "version": "v1", "created": "Fri, 25 Jan 2013 11:24:05 GMT" } ]
1,359,331,200,000
[ [ "Tripathy", "B. K.", "" ], [ "Acharjya", "D. P.", "" ], [ "Cynthya", "V.", "" ] ]
1301.6262
Sim-Hui Tee
Sim-Hui Tee
Developing Parallel Dependency Graph In Improving Game Balancing
5 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dependency graph is a data architecture that models all the dependencies between the different types of assets in the game. It depicts the dependency-based relationships between the assets of a game. For example, a player must construct an arsenal before he can build weapons. It is vital that the dependency graph of a game is designed logically to ensure a logical sequence of game play. However, a merely logical dependency graph is not sufficient to sustain the players' enduring interest in a game, which brings the problem of game balancing into the picture. The issue of game balancing arises when the players do not feel they have a chance of winning the game against their AI opponents, who are more skillful in game play. At the current state of research, the architecture of the dependency graph is monolithic for the players. The sequence of asset possession is always foreseeable because there is only a single dependency graph. Game balancing is impossible when the assets of AI players overwhelmingly outnumber those of human players. This paper proposes a parallel architecture of the dependency graph for AI players and human players. Instead of having a single dependency graph, a parallel architecture is proposed in which the dependency graph of the AI player can be adjusted against that of the human player using a support dependency as a game balancing mechanism. This paper shows that the parallel dependency graph helps to improve game balancing.
[ { "version": "v1", "created": "Sat, 26 Jan 2013 14:41:03 GMT" } ]
1,359,417,600,000
[ [ "Tee", "Sim-Hui", "" ] ]
1301.6359
Alexander Serov
Alexander Serov
Subjective Reality and Strong Artificial Intelligence
10 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main prospective aim of modern research related to Artificial Intelligence is the creation of technical systems that implement the idea of Strong Intelligence. In our view, the path to the development of such systems runs through research in the field of perception. Here we formulate a model of the perception of the external world which may be used to describe the perceptual activity of intelligent beings. We consider a number of issues related to the development of the set of patterns which will be used by the intelligent system when interacting with the environment. The key idea of the presented perception model is the idea of subjective reality. The principle of the relativity of the perceived world is formulated. It is shown that this principle is an immediate consequence of the idea of subjective reality. In this paper we show how the methodology of subjective reality may be used for the creation of different types of Strong AI systems.
[ { "version": "v1", "created": "Sun, 27 Jan 2013 14:29:04 GMT" }, { "version": "v2", "created": "Tue, 29 Jan 2013 17:32:23 GMT" } ]
1,359,504,000,000
[ [ "Serov", "Alexander", "" ] ]
1301.6675
Gustavo Arroyo-Figueroa
Gustavo Arroyo-Figueroa, Luis Enrique Sucar
A Temporal Bayesian Network for Diagnosis and Prediction
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-13-20
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diagnosis and prediction in some domains, like medical and industrial diagnosis, require a representation that combines uncertainty management and temporal reasoning. Based on the fact that in many cases there are few state changes in the temporal range of interest, we propose a novel representation called Temporal Nodes Bayesian Networks (TNBN). In a TNBN each node represents an event or state change of a variable, and an arc corresponds to a causal-temporal relationship. The temporal intervals can differ in number and size for each temporal node, so this allows multiple granularity. Our approach is contrasted with a dynamic Bayesian network for a simple medical example. An empirical evaluation is presented for a more complex problem, a subsystem of a fossil power plant, in which this approach is used for fault diagnosis and prediction with good results.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:56:38 GMT" } ]
1,359,504,000,000
[ [ "Arroyo-Figueroa", "Gustavo", "" ], [ "Sucar", "Luis Enrique", "" ] ]
1301.6679
Salem Benferhat
Salem Benferhat, Didier Dubois, Laurent Garcia, Henri Prade
Possibilistic logic bases and possibilistic graphs
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-57-64
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Possibilistic logic bases and possibilistic graphs are two different frameworks of interest for representing knowledge. The former stratifies the pieces of knowledge (expressed by logical formulas) according to their level of certainty, while the latter exhibits relationships between variables. The two types of representations are semantically equivalent when they lead to the same possibility distribution (which rank-orders the possible interpretations). A possibility distribution can be decomposed using a chain rule which may be based on two different kinds of conditioning which exist in possibility theory (one based on product in a numerical setting, one based on minimum operation in a qualitative setting). These two types of conditioning induce two kinds of possibilistic graphs. In both cases, a translation of these graphs into possibilistic bases is provided. The converse translation from a possibilistic knowledge base into a min-based graph is also described.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:56:55 GMT" } ]
1,359,504,000,000
[ [ "Benferhat", "Salem", "" ], [ "Dubois", "Didier", "" ], [ "Garcia", "Laurent", "" ], [ "Prade", "Henri", "" ] ]
1301.6680
Magnus Boman
Magnus Boman, Paul Davidsson, Hakan L. Younes
Artificial Decision Making Under Uncertainty in Intelligent Buildings
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-65-70
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our hypothesis is that by equipping certain agents in a multi-agent system controlling an intelligent building with automated decision support, two important factors will be increased. The first is energy saving in the building. The second is customer value---how the people in the building experience the effects of the actions of the agents. We give evidence for the truth of this hypothesis through experimental findings related to tools for artificial decision making. A number of assumptions related to agent control of rooms at a test site, through monitoring and delegation of tasks to other kinds of agents, are relaxed. Each assumption controls at least one uncertainty that considerably complicates the action-selection procedures of each such agent. We show that in realistic decision situations, room-controlling agents can make bounded rational decisions even under dynamic real-time constraints. This result can be, and has been, generalized to other domains with even harsher time constraints.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:56:59 GMT" } ]
1,359,504,000,000
[ [ "Boman", "Magnus", "" ], [ "Davidsson", "Paul", "" ], [ "Younes", "Hakan L.", "" ] ]
1301.6681
Craig Boutilier
Craig Boutilier, Ronen I. Brafman, Holger H. Hoos, David L. Poole
Reasoning With Conditional Ceteris Paribus Preference Statements
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-71-80
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many domains it is desirable to assess the preferences of users in a qualitative rather than quantitative way. Such representations of qualitative preference orderings form an important component of automated decision tools. We propose a graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably natural. We describe several search algorithms for dominance testing based on this representation; these algorithms are quite effective, especially in specific network topologies, such as chain- and tree-structured networks, as well as polytrees.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:57:03 GMT" } ]
1,359,504,000,000
[ [ "Boutilier", "Craig", "" ], [ "Brafman", "Ronen I.", "" ], [ "Hoos", "Holger H.", "" ], [ "Poole", "David L.", "" ] ]
1301.6686
Gregory F. Cooper
Gregory F. Cooper, Changwon Yoo
Causal Discovery from a Mixture of Experimental and Observational Data
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-116-125
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a Bayesian method for combining an arbitrary mixture of observational and experimental data in order to learn causal Bayesian networks. Observational data are passively observed. Experimental data, such as that produced by randomized controlled trials, result from the experimenter manipulating one or more variables (typically randomly) and observing the states of other variables. The paper presents a Bayesian method for learning the causal structure and parameters of the underlying causal process that is generating the data, given that (1) the data contains a mixture of observational and experimental case records, and (2) the causal process is modeled as a causal Bayesian network. This learning method was applied using as input various mixtures of experimental and observational data that were generated from the ALARM causal Bayesian network. In these experiments, the absolute and relative quantities of experimental and observational data were varied systematically. For each of these training datasets, the learning method was applied to predict the causal structure and to estimate the causal parameters that exist among randomly selected pairs of nodes in ALARM that are not confounded. The paper reports how these structure predictions and parameter estimates compare with the true causal structures and parameters as given by the ALARM network.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:57:22 GMT" } ]
1,359,504,000,000
[ [ "Cooper", "Gregory F.", "" ], [ "Yoo", "Changwon", "" ] ]
1301.6687
James Cussens
James Cussens
Loglinear models for first-order probabilistic reasoning
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-126-133
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work on loglinear models in probabilistic constraint logic programming is applied to first-order probabilistic reasoning. Probabilities are defined directly on the proofs of atomic formulae, and by marginalisation on the atomic formulae themselves. We use Stochastic Logic Programs (SLPs) composed of labelled and unlabelled definite clauses to define the proof probabilities. We have a conservative extension of first-order reasoning, so that, for example, there is a one-one mapping between logical and random variables. We show how, in this framework, Inductive Logic Programming (ILP) can be used to induce the features of a loglinear model from data. We also compare the presented framework with other approaches to first-order probabilistic reasoning.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:57:26 GMT" } ]
1,359,504,000,000
[ [ "Cussens", "James", "" ] ]
1301.6689
Denver Dash
Denver Dash, Marek J. Druzdzel
A Hybrid Anytime Algorithm for the Construction of Causal Models From Sparse Data
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-142-149
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a hybrid constraint-based/Bayesian algorithm for learning causal networks in the presence of sparse data. The algorithm searches the space of equivalence classes of models (essential graphs) using a heuristic based on conventional constraint-based techniques. Each essential graph is then converted into a directed acyclic graph and scored using a Bayesian scoring metric. Two variants of the algorithm are developed and tested using data from randomly generated networks of sizes from 15 to 45 nodes with data sizes ranging from 250 to 2000 records. Both variations are compared to, and found to consistently outperform, two variations of greedy search with restarts.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:57:34 GMT" } ]
1,359,504,000,000
[ [ "Dash", "Denver", "" ], [ "Druzdzel", "Marek J.", "" ] ]
1301.6691
Michael I. Dekhtyar
Michael I. Dekhtyar, Alex Dekhtyar, V. S. Subrahmanian
Hybrid Probabilistic Programs: Algorithms and Complexity
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-160-169
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hybrid Probabilistic Programs (HPPs) are logic programs that allow the programmer to explicitly encode his knowledge of the dependencies between events being described in the program. In this paper, we classify HPPs into three classes called HPP_1, HPP_2 and HPP_r, r >= 3. For these classes, we provide three types of results for HPPs. First, we develop algorithms to compute the set of all ground consequences of an HPP. Then we provide algorithms and complexity results for the problems of entailment ("Given an HPP P and a query Q as input, is Q a logical consequence of P?") and consistency ("Given an HPP P as input, is P consistent?"). Our results provide a fine characterization of when polynomial algorithms exist for the above problems, and when these problems become intractable.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:57:43 GMT" } ]
1,359,504,000,000
[ [ "Dekhtyar", "Michael I.", "" ], [ "Dekhtyar", "Alex", "" ], [ "Subrahmanian", "V. S.", "" ] ]
1301.6692
Didier Dubois
Didier Dubois, Michel Grabisch, Henri Prade, Philippe Smets
Assessing the value of a candidate. Comparing belief function and possibility theories
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-170-177
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of assessing the value of a candidate is viewed here as a multiple combination problem. On the one hand a candidate can be evaluated according to different criteria, and on the other hand several experts are supposed to assess the value of candidates according to each criterion. Criteria are not equally important, experts are not equally competent or reliable. Moreover levels of satisfaction of criteria, or levels of confidence are only assumed to take their values in qualitative scales which are just linearly ordered. The problem is discussed within two frameworks, the transferable belief model and the qualitative possibility theory. They respectively offer a quantitative and a qualitative setting for handling the problem, providing thus a way to compare the nature of the underlying assumptions.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:57:47 GMT" } ]
1,359,504,000,000
[ [ "Dubois", "Didier", "" ], [ "Grabisch", "Michel", "" ], [ "Prade", "Henri", "" ], [ "Smets", "Philippe", "" ] ]
1301.6694
Helene Fargier
Helene Fargier, Patrice Perny
Qualitative Models for Decision Under Uncertainty without the Commensurability Assumption
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-188-195
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates a purely qualitative version of Savage's theory for decision making under uncertainty. Until now, most representation theorems for preference over acts rely on a numerical representation of utility and uncertainty where utility and uncertainty are commensurate. Disrupting the tradition, we relax this assumption and introduce a purely ordinal axiom requiring that the Decision Maker (DM) preference between two acts only depends on the relative position of their consequences for each state. Within this qualitative framework, we determine the only possible form of the decision rule and investigate some instances compatible with the transitivity of the strict preference. Finally we propose a mild relaxation of our ordinality axiom, leaving room for a new family of qualitative decision rules compatible with transitivity.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:57:55 GMT" } ]
1,359,504,000,000
[ [ "Fargier", "Helene", "" ], [ "Perny", "Patrice", "" ] ]
1301.6699
Phan H. Giang
Phan H. Giang, Prakash P. Shenoy
On Transformations between Probability and Spohnian Disbelief Functions
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-236-244
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we analyze the relationship between probability and Spohn's theory for representation of uncertain beliefs. Using the intuitive idea that the more probable a proposition is, the more believable it is, we study transformations from probability to Spohnian disbelief and vice-versa. The transformations described in this paper are different from those described in the literature. In particular, the former satisfies the principles of ordinal congruence while the latter does not. Such transformations between probability and Spohn's calculi can contribute to (1) a clarification of the semantics of nonprobabilistic degree of uncertain belief, and (2) to a construction of a decision theory for such calculi. In practice, the transformations will allow a meaningful combination of more than one calculus in different stages of using an expert system such as knowledge acquisition, inference, and interpretation of results.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:58:18 GMT" } ]
1,359,504,000,000
[ [ "Giang", "Phan H.", "" ], [ "Shenoy", "Prakash P.", "" ] ]
1301.6700
Robert P. Goldman
Robert P. Goldman, Christopher W. Geib, Christopher A. Miller
A New Model of Plan Recognition
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-245-254
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new abductive, probabilistic theory of plan recognition. This model differs from previous plan recognition theories in being centered around a model of plan execution: most previous methods have been based on plans as formal objects or on rules describing the recognition process. We show that our new model accounts for phenomena omitted from most previous plan recognition theories: notably the cumulative effect of a sequence of observations of partially-ordered, interleaved plans and the effect of context on plan adoption. The model also supports inferences about the evolution of plan execution in situations where another agent intervenes in plan execution. This facility provides support for using plan recognition to build systems that will intelligently assist a user.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:58:22 GMT" } ]
1,359,504,000,000
[ [ "Goldman", "Robert P.", "" ], [ "Geib", "Christopher W.", "" ], [ "Miller", "Christopher A.", "" ] ]
1301.6702
Vu A. Ha
Vu A. Ha, Peter Haddawy
A Hybrid Approach to Reasoning with Partially Elicited Preference Models
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-263-270
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classical Decision Theory provides a normative framework for representing and reasoning about complex preferences. Straightforward application of this theory to automate decision making is difficult due to high elicitation cost. In response to this problem, researchers have recently developed a number of qualitative, logic-oriented approaches for representing and reasoning about preferences. While effectively addressing some expressiveness issues, these logics have not proven powerful enough for building practical automated decision making systems. In this paper we present a hybrid approach to preference elicitation and decision making that is grounded in classical multi-attribute utility theory, but can make effective use of the expressive power of qualitative approaches. Specifically, assuming a partially specified multilinear utility function, we show how comparative statements about classes of decision alternatives can be used to further constrain the utility function and thus identify sub-optimal alternatives. This work demonstrates that quantitative and qualitative approaches can be synergistically integrated to provide effective and flexible decision support.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:58:30 GMT" } ]
1,359,504,000,000
[ [ "Ha", "Vu A.", "" ], [ "Haddawy", "Peter", "" ] ]
1301.6703
David Harmanec
David Harmanec
Faithful Approximations of Belief Functions
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-271-278
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A conceptual foundation for approximation of belief functions is proposed and investigated. It is based on the requirements of consistency and closeness. An optimal approximation is studied. Unfortunately, the computation of the optimal approximation turns out to be intractable. Hence, various heuristic methods are proposed and experimentally evaluated, both in terms of their accuracy and in terms of the speed of computation. These methods are compared to the earlier proposed approximations of belief functions.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:58:34 GMT" } ]
1,359,504,000,000
[ [ "Harmanec", "David", "" ] ]
1301.6704
Jesse Hoey
Jesse Hoey, Robert St-Aubin, Alan Hu, Craig Boutilier
SPUDD: Stochastic Planning using Decision Diagrams
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-279-288
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Markov decision processes (MDPs) are becoming increasingly popular as models of decision theoretic planning. While traditional dynamic programming methods perform well for problems with small state spaces, structured methods are needed for large problems. We propose and examine a value iteration algorithm for MDPs that uses algebraic decision diagrams (ADDs) to represent value functions and policies. An MDP is represented using Bayesian networks and ADDs, and dynamic programming is applied directly to these ADDs. We demonstrate our method on large MDPs (up to 63 million states) and show that significant gains can be had when compared to tree-structured representations (with up to a thirty-fold reduction in the number of nodes required to represent optimal value functions).
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:58:38 GMT" } ]
1,359,504,000,000
[ [ "Hoey", "Jesse", "" ], [ "St-Aubin", "Robert", "" ], [ "Hu", "Alan", "" ], [ "Boutilier", "Craig", "" ] ]
1301.6706
Michael C. Horsch
Michael C. Horsch, David L. Poole
Estimating the Value of Computation in Flexible Information Refinement
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-297-304
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We outline a method to estimate the value of computation for a flexible algorithm using empirical data. To determine a reasonable trade-off between cost and value, we build an empirical model of the value obtained through computation, and apply this model to estimate the value of computation for quite different problems. In particular, we investigate this trade-off for the problem of constructing policies for decision problems represented as influence diagrams. We show how two features of our anytime algorithm provide reasonable estimates of the value of computation in this domain.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:58:47 GMT" } ]
1,359,504,000,000
[ [ "Horsch", "Michael C.", "" ], [ "Poole", "David L.", "" ] ]
1301.6708
Kalev Kask
Kalev Kask, Rina Dechter
Mini-Bucket Heuristics for Improved Search
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-314-323
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is the second in a series of two papers evaluating the power of a new scheme that generates search heuristics mechanically. The heuristics are extracted from an approximation scheme called mini-bucket elimination that was recently introduced. The first paper introduced the idea and evaluated it within Branch-and-Bound search. In the current paper the idea is further extended and evaluated within Best-First search. The resulting algorithms are compared on coding and medical diagnosis problems, using varying strengths of the mini-bucket heuristics. Our results demonstrate an effective search scheme that permits a controlled tradeoff between preprocessing (for heuristic generation) and search. Best-First search is shown to outperform Branch-and-Bound when supplied with good heuristics and sufficient memory space.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:58:54 GMT" } ]
1,359,504,000,000
[ [ "Kask", "Kalev", "" ], [ "Dechter", "Rina", "" ] ]
1301.6709
Daphne Koller
Daphne Koller, Uri Lerner, Dragomir Anguelov
A General Algorithm for Approximate Inference and its Application to Hybrid Bayes Nets
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-324-333
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The clique tree algorithm is the standard method for doing inference in Bayesian networks. It works by manipulating clique potentials - distributions over the variables in a clique. While this approach works well for many networks, it is limited by the need to maintain an exact representation of the clique potentials. This paper presents a new unified approach that combines approximate inference and the clique tree algorithm, thereby circumventing this limitation. Many known approximate inference algorithms can be viewed as instances of this approach. The algorithm essentially does clique tree propagation, using approximate inference to estimate the densities in each clique. In many settings, the computation of the approximate clique potential can be done easily using statistical importance sampling. Iterations are used to gradually improve the quality of the estimation.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:58:59 GMT" } ]
1,359,504,000,000
[ [ "Koller", "Daphne", "" ], [ "Lerner", "Uri", "" ], [ "Anguelov", "Dragomir", "" ] ]
1301.6712
Ryszard Kowalczyk
Ryszard Kowalczyk
On Quantified Linguistic Approximation
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-351-358
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most fuzzy systems, including fuzzy decision support and fuzzy control systems, provide outputs in the form of fuzzy sets that represent the inferred conclusions. Linguistic interpretation of such outputs often involves the use of linguistic approximation that assigns a linguistic label to a fuzzy set based on the predefined primary terms, linguistic modifiers and linguistic connectives. More generally, linguistic approximation can be formalized in terms of the re-translation rules that correspond to the translation rules in explicitation (e.g. simple, modifier, composite, quantification and qualification rules) in computing with words [Zadeh 1996]. However, most existing methods of linguistic approximation use the simple, modifier and composite re-translation rules only. Although these methods can provide a sufficient approximation of simple fuzzy sets, the approximation of more complex ones that are typical in many practical applications of fuzzy systems may be less satisfactory. Therefore the question arises why the other re-translation rules, corresponding to the translation rules in explicitation, should not also be used to advantage in linguistic approximation. In particular, linguistic quantification may be desirable in situations where the conclusions interpreted as quantified linguistic propositions can be more informative and natural. This paper presents some aspects of linguistic approximation in the context of the re-translation rules and proposes an approach to linguistic approximation with the use of quantification rules, i.e. quantified linguistic approximation. Two methods of the quantified linguistic approximation are considered with the use of linguistic quantifiers based on the concepts of the non-fuzzy and fuzzy cardinalities of fuzzy sets. A number of examples are provided to illustrate the proposed approach.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:59:10 GMT" } ]
1,359,504,000,000
[ [ "Kowalczyk", "Ryszard", "" ] ]
1301.6713
Henry E. Kyburg Jr.
Henry E. Kyburg Jr., Choh Man Teng
Choosing Among Interpretations of Probability
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-359-365
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is available an ever-increasing variety of procedures for managing uncertainty. These methods are discussed in the literature of artificial intelligence, as well as in the literature of philosophy of science. Heretofore these methods have been evaluated by intuition, discussion, and the general philosophical method of argument and counterexample. Almost any method of uncertainty management will have the property that in the long run it will deliver numbers approaching the relative frequency of the kinds of events at issue. To find a measure that will provide a meaningful evaluation of these treatments of uncertainty, we must look, not at the long run, but at the short or intermediate run. Our project attempts to develop such a measure in terms of short or intermediate length performance. We represent the effects of practical choices by the outcomes of bets offered to agents characterized by two uncertainty management approaches: the subjective Bayesian approach and the Classical confidence interval approach. Experimental evaluation suggests that the confidence interval approach can outperform the subjective approach in the relatively short run.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:59:14 GMT" } ]
1,359,504,000,000
[ [ "Kyburg", "Henry E.", "Jr." ], [ "Teng", "Choh Man", "" ] ]
1301.6715
Christopher Lusena
Christopher Lusena, Tong Li, Shelia Sittinger, Chris Wells, Judy Goldsmith
My Brain is Full: When More Memory Helps
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-374-381
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of finding good finite-horizon policies for POMDPs under the expected reward metric. The policies considered are free finite-memory policies with limited memory; a policy is a mapping from the space of observation-memory pairs to the space of action-memory pairs (the policy updates the memory as it goes), and the number of possible memory states is a parameter of the input to the policy-finding algorithms. The algorithms considered here are preliminary implementations of three search heuristics: local search, simulated annealing, and genetic algorithms. We compare their outcomes to each other and to the optimal policies for each instance. We compare run times of each policy and of a dynamic programming algorithm for POMDPs developed by Hansen that iteratively improves a finite-state controller, the previous state of the art for finite-memory policies. The value of the best policy can only improve as the amount of memory increases, up to the amount needed for an optimal finite-memory policy. Our most surprising finding is that more memory helps in another way: given more memory than is needed for an optimal policy, the algorithms are more likely to converge to optimal-valued policies.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:59:22 GMT" } ]
1,359,504,000,000
[ [ "Lusena", "Christopher", "" ], [ "Li", "Tong", "" ], [ "Sittinger", "Shelia", "" ], [ "Wells", "Chris", "" ], [ "Goldsmith", "Judy", "" ] ]
1301.6716
Anders L. Madsen
Anders L. Madsen, Finn Verner Jensen
Lazy Evaluation of Symmetric Bayesian Decision Problems
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-382-390
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Solving symmetric Bayesian decision problems is a computationally intensive task to perform regardless of the algorithm used. In this paper we propose a method for improving the efficiency of algorithms for solving Bayesian decision problems. The method is based on the principle of lazy evaluation - a principle recently shown to improve the efficiency of inference in Bayesian networks. The basic idea is to maintain decompositions of potentials and to postpone computations for as long as possible. The efficiency improvements obtained with the lazy evaluation based method are emphasized through examples. Finally, the lazy evaluation based method is compared with the HUGIN and valuation-based systems architectures for solving symmetric Bayesian decision problems.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:59:26 GMT" } ]
1,359,504,000,000
[ [ "Madsen", "Anders L.", "" ], [ "Jensen", "Finn Verner", "" ] ]
1301.6717
Suzanne M. Mahoney
Suzanne M. Mahoney, Kathryn Blackmond Laskey
Representing and Combining Partially Specified CPTs
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-391-400
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper extends previous work with network fragments and situation-specific network construction. We formally define the asymmetry network, an alternative representation for a conditional probability table. We also present an object-oriented representation for partially specified asymmetry networks. We show that the representation is parsimonious. We define an algebra for the elements of the representation that allows us to 'factor' any CPT and to soundly combine the partially specified asymmetry networks.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:59:31 GMT" } ]
1,359,504,000,000
[ [ "Mahoney", "Suzanne M.", "" ], [ "Laskey", "Kathryn Blackmond", "" ] ]
1301.6718
Yishay Mansour
Yishay Mansour, Satinder Singh
On the Complexity of Policy Iteration
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-401-408
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decision-making problems in uncertain or stochastic domains are often formulated as Markov decision processes (MDPs). Policy iteration (PI) is a popular algorithm for searching over policy-space, the size of which is exponential in the number of states. We are interested in bounds on the complexity of PI that do not depend on the value of the discount factor. In this paper we prove the first such non-trivial, worst-case, upper bounds on the number of iterations required by PI to converge to the optimal policy. Our analysis also sheds new light on the manner in which PI progresses through the space of policies.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:59:34 GMT" } ]
1,359,504,000,000
[ [ "Mansour", "Yishay", "" ], [ "Singh", "Satinder", "" ] ]
1301.6719
David A. McAllester
David A. McAllester, Satinder Singh
Approximate Planning for Factored POMDPs using Belief State Simplification
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-409-416
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are interested in the problem of planning for factored POMDPs. Building on the recent results of Kearns, Mansour and Ng, we provide a planning algorithm for factored POMDPs that exploits the accuracy-efficiency tradeoff in the belief state simplification introduced by Boyen and Koller.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:59:38 GMT" } ]
1,359,504,000,000
[ [ "McAllester", "David A.", "" ], [ "Singh", "Satinder", "" ] ]
1301.6720
Nicolas Meuleau
Nicolas Meuleau, Kee-Eung Kim, Leslie Pack Kaelbling, Anthony R. Cassandra
Solving POMDPs by Searching the Space of Finite Policies
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-417-426
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Solving partially observable Markov decision processes (POMDPs) is highly intractable in general, at least in part because the optimal policy may be infinitely large. In this paper, we explore the problem of finding the optimal policy from a restricted set of policies, represented as finite state automata of a given size. This problem is also intractable, but we show that the complexity can be greatly reduced when the POMDP and/or policy are further constrained. We demonstrate good empirical results with a branch-and-bound method for finding globally optimal deterministic policies, and a gradient-ascent method for finding locally optimal stochastic policies.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 15:59:42 GMT" } ]
1,359,504,000,000
[ [ "Meuleau", "Nicolas", "" ], [ "Kim", "Kee-Eung", "" ], [ "Kaelbling", "Leslie Pack", "" ], [ "Cassandra", "Anthony R.", "" ] ]
1301.6729
Thomas D. Nielsen
Thomas D. Nielsen, Finn Verner Jensen
Welldefined Decision Scenarios
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-502-511
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Influence diagrams serve as a powerful tool for modelling symmetric decision problems. When solving an influence diagram we determine a set of strategies for the decisions involved. A strategy for a decision variable is in principle a function over its past. However, some of the past may be irrelevant for the decision, and for computational reasons it is important not to deal with redundant variables in the strategies. We show that current methods (e.g. the "Decision Bayes-ball" algorithm by Shachter UAI98) do not determine the relevant past, and we present a complete algorithm. Actually, this paper takes a more general starting point: when formulating a decision scenario as an influence diagram, a linear temporal ordering of the decision variables is required. This constraint ensures that the decision scenario is welldefined. However, the structure of a decision scenario often renders certain decisions conditionally independent, and it is therefore unnecessary to impose a linear temporal ordering on the decisions. In this paper we deal with partial influence diagrams, i.e. influence diagrams with only a partial temporal ordering specified. We present a set of conditions which are necessary and sufficient to ensure that a partial influence diagram is welldefined. These conditions are used as a basis for the construction of an algorithm for determining whether or not a partial influence diagram is welldefined.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:00:18 GMT" } ]
1,359,504,000,000
[ [ "Nielsen", "Thomas D.", "" ], [ "Jensen", "Finn Verner", "" ] ]
1301.6732
David M Pennock
David M. Pennock, Michael P. Wellman
Graphical Representations of Consensus Belief
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-531-540
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graphical models based on conditional independence support concise encodings of the subjective belief of a single agent. A natural question is whether the consensus belief of a group of agents can be represented with equal parsimony. We prove, under relatively mild assumptions, that even if everyone agrees on a common graph topology, no method of combining beliefs can maintain that structure. Even weaker conditions rule out local aggregation within conditional probability tables. On a more positive note, we show that if probabilities are combined with the logarithmic opinion pool (LogOP), then commonly held Markov independencies are maintained. This suggests a straightforward procedure for constructing a consensus Markov network. We describe an algorithm for computing the LogOP with time complexity comparable to that of exact Bayesian inference.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:00:29 GMT" } ]
1,359,504,000,000
[ [ "Pennock", "David M.", "" ], [ "Wellman", "Michael P.", "" ] ]
1301.6733
Avi Pfeffer
Avi Pfeffer, Daphne Koller, Brian Milch, Ken T. Takusagawa
SPOOK: A System for Probabilistic Object-Oriented Knowledge Representation
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-541-550
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In previous work, we pointed out the limitations of standard Bayesian networks as a modeling framework for large, complex domains. We proposed a new, richly structured modeling language, {em Object-oriented Bayesian Netorks}, that we argued would be able to deal with such domains. However, it turns out that OOBNs are not expressive enough to model many interesting aspects of complex domains: the existence of specific named objects, arbitrary relations between objects, and uncertainty over domain structure. These aspects are crucial in real-world domains such as battlefield awareness. In this paper, we present SPOOK, an implemented system that addresses these limitations. SPOOK implements a more expressive language that allows it to represent the battlespace domain naturally and compactly. We present a new inference algorithm that utilizes the model structure in a fundamental way, and show empirically that it achieves orders of magnitude speedup over existing approaches.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:00:32 GMT" } ]
1,359,504,000,000
[ [ "Pfeffer", "Avi", "" ], [ "Koller", "Daphne", "" ], [ "Milch", "Brian", "" ], [ "Takusagawa", "Ken T.", "" ] ]
1301.6734
Luigi Portinale
Luigi Portinale, Andrea Bobbio
Bayesian Networks for Dependability Analysis: an Application to Digital Control Reliability
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-551-558
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian Networks (BN) provide robust probabilistic methods of reasoning under uncertainty, but despite their formal grounds are strictly based on the notion of conditional dependence, not much attention has been paid so far to their use in dependability analysis. The aim of this paper is to propose BN as a suitable tool for dependability analysis, by challenging the formalism with basic issues arising in dependability tasks. We will discuss how both modeling and analysis issues can be naturally dealt with by BN. Moreover, we will show how some limitations intrinsic to combinatorial dependability methods such as Fault Trees can be overcome using BN. This will be pursued through the study of a real-world example concerning the reliability analysis of a redundant digital Programmable Logic Controller (PLC) with majority voting 2:3
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:00:36 GMT" } ]
1,359,504,000,000
[ [ "Portinale", "Luigi", "" ], [ "Bobbio", "Andrea", "" ] ]
1301.6735
Silja Renooij
Silja Renooij, Linda C. van der Gaag
Enhancing QPNs for Trade-off Resolution
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-559-566
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Qualitative probabilistic networks have been introduced as qualitative abstractions of Bayesian belief networks. One of the major drawbacks of these qualitative networks is their coarse level of detail, which may lead to unresolved trade-offs during inference. We present an enhanced formalism for qualitative networks with a finer level of detail. An enhanced qualitative probabilistic network differs from a regular qualitative network in that it distinguishes between strong and weak influences. Enhanced qualitative probabilistic networks are purely qualitative in nature, as regular qualitative networks are, yet allow for efficiently resolving trade-offs during inference.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:00:40 GMT" } ]
1,359,504,000,000
[ [ "Renooij", "Silja", "" ], [ "van der Gaag", "Linda C.", "" ] ]
1301.6736
Regis Sabbadin
Regis Sabbadin
A Possibilistic Model for Qualitative Sequential Decision Problems under Uncertainty in Partially Observable Environments
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-567-574
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article we propose a qualitative (ordinal) counterpart for the Partially Observable Markov Decision Processes model (POMDP) in which the uncertainty, as well as the preferences of the agent, are modeled by possibility distributions. This qualitative counterpart of the POMDP model relies on a possibilistic theory of decision under uncertainty, recently developed. One advantage of such a qualitative framework is its ability to escape from the classical obstacle of stochastic POMDPs, in which even with a finite state space, the obtained belief state space of the POMDP is infinite. Instead, in the possibilistic framework even if exponentially larger than the state space, the belief state space remains finite.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:00:44 GMT" } ]
1,359,504,000,000
[ [ "Sabbadin", "Regis", "" ] ]
1301.6739
Ross D. Shachter
Ross D. Shachter
Efficient Value of Information Computation
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-594-601
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most useful sensitivity analysis techniques of decision analysis is the computation of value of information (or clairvoyance), the difference in value obtained by changing the decisions by which some of the uncertainties are observed. In this paper, some simple but powerful extensions to previous algorithms are introduced which allow an efficient value of information calculation on the rooted cluster tree (or strong junction tree) used to solve the original decision problem.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:00:56 GMT" } ]
1,359,504,000,000
[ [ "Shachter", "Ross D.", "" ] ]
1301.6740
Hagit Shatkay
Hagit Shatkay
Learning Hidden Markov Models with Geometrical Constraints
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-602-611
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hidden Markov models (HMMs) and partially observable Markov decision processes (POMDPs) form a useful tool for modeling dynamical systems. They are particularly useful for representing environments such as road networks and office buildings, which are typical for robot navigation and planning. The work presented here is concerned with acquiring such models. We demonstrate how domain-specific information and constraints can be incorporated into the statistical estimation process, greatly improving the learned models in terms of the model quality, the number of iterations required for convergence and robustness to reduction in the amount of available data. We present new initialization heuristics which can be used even when the data suffers from cumulative rotational error, new update rules for the model parameters, as an instance of generalized EM, and a strategy for enforcing complete geometrical consistency in the model. Experimental results demonstrate the effectiveness of our approach for both simulated and real robot data, in traditionally hard-to-learn environments.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:01:01 GMT" } ]
1,359,504,000,000
[ [ "Shatkay", "Hagit", "" ] ]
1301.6741
Philippe Smets
Philippe Smets
Practical Uses of Belief Functions
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-612-621
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present examples where the use of belief functions provided sound and elegant solutions to real life problems. These are essentially characterized by 'missing' information. The examples deal with 1) discriminant analysis using a learning set where classes are only partially known; 2) an information retrieval system handling inter-document relationships; 3) the combination of data from sensors competent on partially overlapping frames; 4) the determination of the number of sources in a multi-sensor environment by studying the inter-sensor contradiction. The purpose of the paper is to report on such applications where the use of belief functions provides a convenient tool to handle 'messy' data problems.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:01:05 GMT" } ]
1,359,504,000,000
[ [ "Smets", "Philippe", "" ] ]
1301.6742
Masami Takikawa
Masami Takikawa, Bruce D'Ambrosio
Multiplicative Factorization of Noisy-Max
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-622-630
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The noisy-or and its generalization noisy-max have been utilized to reduce the complexity of knowledge acquisition. In this paper, we present a new representation of noisy-max that allows for efficient inference in general Bayesian networks. Empirical studies show that our method is capable of computing queries in well-known large medical networks, QMR-DT and CPCS, for which no previous exact inference method has been shown to perform well.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:01:09 GMT" } ]
1,359,504,000,000
[ [ "Takikawa", "Masami", "" ], [ "D'Ambrosio", "Bruce", "" ] ]
1301.6744
Volker Tresp
Volker Tresp, Michael Haft, Reimar Hofmann
Mixture Approximations to Bayesian Networks
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-639-646
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structure and parameters in a Bayesian network uniquely specify the probability distribution of the modeled domain. The locality of both structure and probabilistic information are the great benefits of Bayesian networks and require the modeler to only specify local information. On the other hand this locality of information might prevent the modeler - and even more any other person - from obtaining a general overview of the important relationships within the domain. The goal of the work presented in this paper is to provide an "alternative" view on the knowledge encoded in a Bayesian network which might sometimes be very helpful for providing insights into the underlying domain. The basic idea is to calculate a mixture approximation to the probability distribution represented by the Bayesian network. The mixture component densities can be thought of as representing typical scenarios implied by the Bayesian model, providing intuition about the basic relationships. As an additional benefit, performing inference in the approximate model is very simple and intuitive and can provide additional insights. The computational complexity for the calculation of the mixture approximations critically depends on the measure which defines the distance between the probability distribution represented by the Bayesian network and the approximate distribution. Both the KL-divergence and the backward KL-divergence lead to inefficient algorithms. Incidentally, the latter is used in recent work on mixtures of mean field solutions to which the work presented here is closely related. We show, however, that using a mean squared error cost function leads to update equations which can be solved using the junction tree algorithm. We conclude that the mean squared error cost function can be used for Bayesian networks in which inference based on the junction tree is tractable. For large networks, however, one may have to rely on mean field approximations.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:01:18 GMT" } ]
1,359,504,000,000
[ [ "Tresp", "Volker", "" ], [ "Haft", "Michael", "" ], [ "Hofmann", "Reimar", "" ] ]
1301.6745
Linda C. van der Gaag
Linda C. van der Gaag, Silja Renooij, Cilia L. M. Witteman, Berthe M. P. Aleman, Babs G. Taal
How to Elicit Many Probabilities
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-647-654
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In building Bayesian belief networks, the elicitation of all probabilities required can be a major obstacle. We learned the extent of this often-cited observation in the construction of the probabilistic part of a complex influence diagram in the field of cancer treatment. Based upon our negative experiences with existing methods, we designed a new method for probability elicitation from domain experts. The method combines various ideas, among which are the ideas of transcribing probabilities and of using a scale with both numerical and verbal anchors for marking assessments. In the construction of the probabilistic part of our influence diagram, the method proved to allow for the elicitation of many probabilities in little time.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:01:22 GMT" } ]
1,359,504,000,000
[ [ "van der Gaag", "Linda C.", "" ], [ "Renooij", "Silja", "" ], [ "Witteman", "Cilia L. M.", "" ], [ "Aleman", "Berthe M. P.", "" ], [ "Taal", "Babs G.", "" ] ]
1301.6746
Frans Voorbraak
Frans Voorbraak
Probabilistic Belief Change: Expansion, Conditioning and Constraining
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-655-662
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The AGM theory of belief revision has become an important paradigm for investigating rational belief changes. Unfortunately, researchers working in this paradigm have restricted much of their attention to rather simple representations of belief states, namely logically closed sets of propositional sentences. In our opinion, this has resulted in a too abstract categorisation of belief change operations: expansion, revision, or contraction. Occasionally, probabilistic belief changes have also been considered in the AGM paradigm, and it is widely accepted that the probabilistic version of expansion is conditioning. However, we argue that it may be more correct to view conditioning and expansion as two essentially different kinds of belief change, and that what we call constraining is a better candidate for being considered probabilistic expansion.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:01:26 GMT" } ]
1,359,504,000,000
[ [ "Voorbraak", "Frans", "" ] ]
1301.6748
Michael S. K. M. Wong
Michael S. K. M. Wong, C. J. Butz
Contextual Weak Independence in Bayesian Networks
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-670-679
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well-known that the notion of (strong) conditional independence (CI) is too restrictive to capture independencies that only hold in certain contexts. This kind of contextual independency, called context-specific independence (CSI), can be used to facilitate the acquisition, representation, and inference of probabilistic knowledge. In this paper, we suggest the use of contextual weak independence (CWI) in Bayesian networks. It should be emphasized that the notion of CWI is a more general form of contextual independence than CSI. Furthermore, if the contextual strong independence holds for all contexts, then the notion of CSI becomes strong CI. On the other hand, if the weak contextual independence holds for all contexts, then the notion of CWI becomes weak independence (WI), which is a more general noncontextual independency than strong CI. More importantly, complete axiomatizations are studied for both the class of WI and the class of CI and WI together. Finally, the interesting property of WI being a necessary and sufficient condition for ensuring consistency in granular probabilistic networks is shown.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:01:33 GMT" } ]
1,359,504,000,000
[ [ "Wong", "Michael S. K. M.", "" ], [ "Butz", "C. J.", "" ] ]
1301.6749
Yanping Xiang
Yanping Xiang, Finn Verner Jensen
Inference in Multiply Sectioned Bayesian Networks with Extended Shafer-Shenoy and Lazy Propagation
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-680-687
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As Bayesian networks are applied to larger and more complex problem domains, the search for flexible modeling and more efficient inference methods is an ongoing effort. Multiply sectioned Bayesian networks (MSBNs) extend the HUGIN inference for Bayesian networks into a coherent framework for flexible modeling and distributed inference. Lazy propagation extends the Shafer-Shenoy and HUGIN inference methods with reduced space complexity. We apply the Shafer-Shenoy and lazy propagation to inference in MSBNs. The combination of the MSBN framework and lazy propagation provides a better framework for modeling and inference in very large domains. It retains the modeling flexibility of MSBNs and reduces the runtime space complexity, allowing exact inference in much larger domains given the same computational resources.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:01:37 GMT" } ]
1,359,504,000,000
[ [ "Xiang", "Yanping", "" ], [ "Jensen", "Finn Verner", "" ] ]
1301.6750
Yanping Xiang
Yanping Xiang, Kim-Leng Poh
Time-Critical Dynamic Decision Making
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-688-695
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent interest in dynamic decision modeling has led to the development of several representation and inference methods. These methods, however, have limited application under time-critical conditions where a trade-off between model quality and computational tractability is essential. This paper presents an approach to time-critical dynamic decision modeling. A knowledge representation and modeling method called the time-critical dynamic influence diagram is proposed. The formalism has two forms. The condensed form is used for modeling and model abstraction, while the deployed form, which can be converted from the condensed form, is used for inference purposes. The proposed approach has the ability to represent space-temporal abstraction within the model. A knowledge-based meta-reasoning approach is proposed for the purpose of selecting the best abstracted model that provides the optimal trade-off between model quality and model tractability. An outline of the knowledge-based model construction algorithm is also provided.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:01:41 GMT" } ]
1,359,504,000,000
[ [ "Xiang", "Yanping", "" ], [ "Poh", "Kim-Leng", "" ] ]
1301.6751
Nevin Lianwen Zhang
Nevin Lianwen Zhang, Stephen S. Lee, Weihong Zhang
A Method for Speeding Up Value Iteration in Partially Observable Markov Decision Processes
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-696-703
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a technique for speeding up the convergence of value iteration for partially observable Markov decision processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can be easily incorporated into any existing POMDP value iteration algorithms. Experiments have been conducted on several test problems with one POMDP value iteration algorithm called incremental pruning. We find that the technique can make incremental pruning run several orders of magnitude faster.
[ { "version": "v1", "created": "Wed, 23 Jan 2013 16:01:45 GMT" } ]
1,359,504,000,000
[ [ "Zhang", "Nevin Lianwen", "" ], [ "Lee", "Stephen S.", "" ], [ "Zhang", "Weihong", "" ] ]
1301.6789
D P Acharjya Ph.D
B.K.Tripathy and D.P.Acharjya
Approximation of Classification and Measures of Uncertainty in Rough Set on Two Universal Sets
14 pages, International Journal of Advanced Science and Technology Vol. 40, March, 2012
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The notion of rough set captures indiscernibility of elements in a set. But, in many real life situations, an information system establishes the relation between different universes. This led to the extension of rough sets on a single universal set to rough sets on two universal sets. In this paper, we introduce approximations of classifications and measures of uncertainty based upon rough sets on two universal sets, employing the knowledge induced by binary relations.
[ { "version": "v1", "created": "Fri, 25 Jan 2013 11:58:23 GMT" } ]
1,359,504,000,000
[ [ "Tripathy", "B. K.", "" ], [ "Acharjya", "D. P.", "" ] ]
1301.7251
Teresa Alsinet
Teresa Alsinet, Lluis Godo, Sandra Sandri
On the Semantics and Automated Deduction for PLFC, a Logic of Possibilistic Uncertainty and Fuzziness
Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
null
null
UAI-P-1999-PG-3-12
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Possibilistic logic is a well-known graded logic of uncertainty suitable to reason under incomplete information and partially inconsistent knowledge, which is built upon classical first order logic. There exists for Possibilistic logic a proof procedure based on a refutation complete resolution-style calculus. Recently, a syntactical extension of first order Possibilistic logic (called PLFC) dealing with fuzzy constants and fuzzily restricted quantifiers has been proposed. Our aim is to present steps towards both the formalization of PLFC itself and an automated deduction system for it by (i) providing a formal semantics; (ii) defining a sound resolution-style calculus by refutation; and (iii) describing a first-order proof procedure for PLFC clauses based on (ii) and on a novel notion of most general substitution of two literals in a resolution step. In contrast to standard Possibilistic logic semantics, truth-evaluation of formulas with fuzzy constants are many-valued instead of boolean, and consequently an extended notion of possibilistic uncertainty is also needed.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 14:58:40 GMT" } ]
1,359,590,400,000
[ [ "Alsinet", "Teresa", "" ], [ "Godo", "Lluis", "" ], [ "Sandri", "Sandra", "" ] ]
1301.7358
Leila Amgoud
Leila Amgoud, Claudette Cayrol
On the Acceptability of Arguments in Preference-Based Argumentation
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-1-7
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Argumentation is a promising model for reasoning with uncertain knowledge. The key concept of acceptability enables us to differentiate arguments and counterarguments: the certainty of a proposition can then be evaluated through the most acceptable arguments for that proposition. In this paper, we investigate different complementary points of view: - an acceptability based on the existence of direct counterarguments, - an acceptability based on the existence of defenders. Pursuing previous work on preference-based argumentation principles, we enforce both points of view by taking into account preference orderings for comparing arguments. Our approach is illustrated in the context of reasoning with stratified knowledge bases.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:02:19 GMT" } ]
1,359,676,800,000
[ [ "Amgoud", "Leila", "" ], [ "Cayrol", "Claudette", "" ] ]
1301.7359
Salem Benferhat
Salem Benferhat, Claudio Sossai
Merging Uncertain Knowledge Bases in a Possibilistic Logic Framework
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-8-15
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of merging uncertain information in the framework of possibilistic logic. It presents several syntactic combination rules to merge possibilistic knowledge bases, provided by different sources, into a new possibilistic knowledge base. These combination rules are first described at the meta-level outside the language of possibilistic logic. Next, an extension of possibilistic logic, where the combination rules are inside the language, is proposed. A proof system in a sequent form, which is sound and complete with respect to the possibilistic logic semantics, is given.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:02:23 GMT" } ]
1,359,676,800,000
[ [ "Benferhat", "Salem", "" ], [ "Sossai", "Claudio", "" ] ]
1301.7360
Mark Bloemeke
Mark Bloemeke, Marco Valtorta
A Hybrid Algorithm to Compute Marginal and Joint Beliefs in Bayesian Networks and Its Complexity
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-16-23
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There exist two general forms of exact algorithms for updating probabilities in Bayesian Networks. The first approach involves using a structure, usually a clique tree, and performing local message based calculation to extract the belief in each variable. The second general class of algorithm involves the use of non-serial dynamic programming techniques to extract the belief in some desired group of variables. In this paper we present a hybrid algorithm based on the latter approach yet possessing the ability to retrieve the belief in all single variables. The technique is advantageous in that it saves an NP-hard computation step over using one algorithm of each type. Furthermore, this technique reinforces a conjecture of Jensen and Jensen [JJ94] in that it still requires a single NP-hard step to set up the structure on which inference is performed, as we show by confirming Li and D'Ambrosio's [LD94] conjectured NP-hardness of OFP.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:02:29 GMT" } ]
1,359,676,800,000
[ [ "Bloemeke", "Mark", "" ], [ "Valtorta", "Marco", "" ] ]
1301.7361
Craig Boutilier
Craig Boutilier, Ronen I. Brafman, Christopher W. Geib
Structured Reachability Analysis for Markov Decision Processes
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-24-32
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent research in decision theoretic planning has focussed on making the solution of Markov decision processes (MDPs) more feasible. We develop a family of algorithms for structured reachability analysis of MDPs that are suitable when an initial state (or set of states) is known. Using compact, structured representations of MDPs (e.g., Bayesian networks), our methods, which vary in the tradeoff between complexity and accuracy, produce structured descriptions of (estimated) reachable states that can be used to eliminate variables or variable values from the problem description, reducing the size of the MDP and making it easier to solve. One contribution of our work is the extension of ideas from GRAPHPLAN to deal with the distributed nature of action representations typically embodied within Bayes nets and the problem of correlated action effects. We also demonstrate that our algorithm can be made more complete by using k-ary constraints instead of binary constraints. Another contribution is the illustration of how the compact representation of reachability constraints can be exploited by several existing (exact and approximate) abstraction algorithms for MDPs.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:02:33 GMT" }, { "version": "v2", "created": "Tue, 23 Apr 2013 15:58:57 GMT" } ]
1,366,761,600,000
[ [ "Boutilier", "Craig", "" ], [ "Brafman", "Ronen I.", "" ], [ "Geib", "Christopher W.", "" ] ]
1301.7362
Xavier Boyen
Xavier Boyen, Daphne Koller
Tractable Inference for Complex Stochastic Processes
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-33-42
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The monitoring and control of any dynamic system depends crucially on the ability to reason about its current status and its future trajectory. In the case of a stochastic system, these tasks typically involve the use of a belief state- a probability distribution over the state of the process at a given point in time. Unfortunately, the state spaces of complex processes are very large, making an explicit representation of a belief state intractable. Even in dynamic Bayesian networks (DBNs), where the process itself can be represented compactly, the representation of the belief state is intractable. We investigate the idea of maintaining a compact approximation to the true belief state, and analyze the conditions under which the errors due to the approximations taken over the lifetime of the process do not accumulate to make our answers completely irrelevant. We show that the error in a belief state contracts exponentially as the process evolves. Thus, even with multiple approximations, the error in our process remains bounded indefinitely. We show how the additional structure of a DBN can be used to design our approximation scheme, improving its performance significantly. We demonstrate the applicability of our ideas in the context of a monitoring task, showing that orders of magnitude faster inference can be achieved with only a small degradation in accuracy.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:02:39 GMT" } ]
1,359,676,800,000
[ [ "Boyen", "Xavier", "" ], [ "Koller", "Daphne", "" ] ]
1301.7365
Charles Castel
Charles Castel, Corine Cossart, Catherine Tessier
Dealing with Uncertainty in Situation Assessment: towards a Symbolic Approach
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-61-68
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The situation assessment problem is considered, in terms of object, condition, activity, and plan recognition, based on data coming from the real world via various sensors. It is shown that uncertainty issues are linked both to the models and to the matching algorithm. Three different types of uncertainties are identified, and within each one, the numerical and the symbolic cases are distinguished. The emphasis is then put on purely symbolic uncertainties: it is shown that they can be dealt with within a purely symbolic framework resulting from a transposition of classical numerical estimation tools.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:02:54 GMT" } ]
1,359,676,800,000
[ [ "Castel", "Charles", "" ], [ "Cossart", "Corine", "" ], [ "Tessier", "Catherine", "" ] ]
1301.7366
Enrique F. Castillo
Enrique F. Castillo, Juan Ferr\'andiz, Pilar Sanmartin
Marginalizing in Undirected Graph and Hypergraph Models
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-69-78
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an undirected graph G or hypergraph H model for a given set of variables V, we introduce two marginalization operators for obtaining the undirected graph GA or hypergraph HA associated with a given subset A of V such that the marginal distribution of A factorizes according to GA or HA, respectively. Finally, we illustrate the method by its application to some practical examples. With them we show that hypergraph models allow defining a finer factorization or performing a more precise conditional independence analysis than undirected graph models.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:02:59 GMT" } ]
1,359,676,800,000
[ [ "Castillo", "Enrique F.", "" ], [ "Ferrándiz", "Juan", "" ], [ "Sanmartin", "Pilar", "" ] ]
1301.7367
Urszula Chajewska
Urszula Chajewska, Lise Getoor, Joseph Norman, Yuval Shahar
Utility Elicitation as a Classification Problem
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-79-88
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the application of classification techniques to utility elicitation. In a decision problem, two sets of parameters must generally be elicited: the probabilities and the utilities. While the prior and conditional probabilities in the model do not change from user to user, the utility models do. Thus it is necessary to elicit a utility model separately for each new user. Elicitation is long and tedious, particularly if the outcome space is large and not decomposable. There are two common approaches to utility function elicitation. The first is to base the determination of the user's utility function solely on elicitation of qualitative preferences. The second makes assumptions about the form and decomposability of the utility function. Here we take a different approach: we attempt to identify the new user's utility function based on classification relative to a database of previously collected utility functions. We do this by identifying clusters of utility functions that minimize an appropriate distance measure. Having identified the clusters, we develop a classification scheme that requires many fewer and simpler assessments than full utility elicitation and is more robust than utility elicitation based solely on preferences. We have tested our algorithm on a small database of utility functions in a prenatal diagnosis domain and the results are quite promising.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:03:05 GMT" } ]
1,359,676,800,000
[ [ "Chajewska", "Urszula", "" ], [ "Getoor", "Lise", "" ], [ "Norman", "Joseph", "" ], [ "Shahar", "Yuval", "" ] ]
1301.7368
Fabio Gagliardi Cozman
Fabio Gagliardi Cozman
Irrelevance and Independence Relations in Quasi-Bayesian Networks
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-89-96
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper analyzes irrelevance and independence relations in graphical models associated with convex sets of probability distributions (called Quasi-Bayesian networks). The basic question in Quasi-Bayesian networks is, How can irrelevance/independence relations in Quasi-Bayesian networks be detected, enforced and exploited? This paper addresses these questions through Walley's definitions of irrelevance and independence. Novel algorithms and results are presented for inferences with the so-called natural extensions using fractional linear programming, and the properties of the so-called type-1 extensions are clarified through a new generalization of d-separation.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:03:11 GMT" } ]
1,359,676,800,000
[ [ "Cozman", "Fabio Gagliardi", "" ] ]
1301.7369
Adnan Darwiche
Adnan Darwiche
Dynamic Jointrees
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-97-104
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well known that one can ignore parts of a belief network when computing answers to certain probabilistic queries. It is also well known that the ignorable parts (if any) depend on the specific query of interest and, therefore, may change as the query changes. Algorithms based on jointrees, however, do not seem to take computational advantage of these facts given that they typically construct jointrees for worst-case queries; that is, queries for which every part of the belief network is considered relevant. To address this limitation, we propose in this paper a method for reconfiguring jointrees dynamically as the query changes. The reconfiguration process aims at maintaining a jointree which corresponds to the underlying belief network after it has been pruned given the current query. Our reconfiguration method is marked by three characteristics: (a) it is based on a non-classical definition of jointrees; (b) it is relatively efficient; and (c) it can reuse some of the computations performed before a jointree is reconfigured. We present preliminary experimental results which demonstrate significant savings over using static jointrees when query changes are considerable.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:03:15 GMT" } ]
1,359,676,800,000
[ [ "Darwiche", "Adnan", "" ] ]
1301.7370
Benoit Desjardins
Benoit Desjardins
On the Semi-Markov Equivalence of Causal Models
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-105-112
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The variability of structure in a finite Markov equivalence class of causally sufficient models represented by directed acyclic graphs has been fully characterized. Without causal sufficiency, an infinite semi-Markov equivalence class of models has only been characterized by the fact that each model in the equivalence class entails the same marginal statistical dependencies. In this paper, we study the variability of structure of causal models within a semi-Markov equivalence class and propose a systematic approach to construct models entailing any specific marginal statistical dependencies.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:03:20 GMT" } ]
1,359,676,800,000
[ [ "Desjardins", "Benoit", "" ] ]
1301.7371
Didier Dubois
Didier Dubois, Helene Fargier, Henri Prade
Comparative Uncertainty, Belief Functions and Accepted Beliefs
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-113-120
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper relates comparative belief structures and a general view of belief management in the setting of deductively closed logical representations of accepted beliefs. We show that the range of compatibility between the classical deductive closure and uncertain reasoning covers precisely the nonmonotonic 'preferential' inference system of Kraus, Lehmann and Magidor and nothing else. In terms of uncertain reasoning any possibility or necessity measure gives birth to a structure of accepted beliefs. The classes of probability functions and of Shafer's belief functions which yield belief sets prove to be very special ones.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:03:26 GMT" } ]
1,359,676,800,000
[ [ "Dubois", "Didier", "" ], [ "Fargier", "Helene", "" ], [ "Prade", "Henri", "" ] ]
1301.7372
Didier Dubois
Didier Dubois, Henri Prade, Regis Sabbadin
Qualitative Decision Theory with Sugeno Integrals
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-121-128
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an axiomatic framework for qualitative decision under uncertainty in a finite setting. The corresponding utility is expressed by a sup-min expression, called Sugeno (or fuzzy) integral. Technically speaking, Sugeno integral is a median, which is indeed a qualitative counterpart to the averaging operation underlying expected utility. The axiomatic justification of Sugeno integral-based utility is expressed in terms of preference between acts as in Savage decision theory. Pessimistic and optimistic qualitative utilities, based on necessity and possibility measures, previously introduced by two of the authors, can be retrieved in this setting by adding appropriate axioms.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:03:31 GMT" } ]
1,359,676,800,000
[ [ "Dubois", "Didier", "" ], [ "Prade", "Henri", "" ], [ "Sabbadin", "Regis", "" ] ]
1301.7379
Vu A. Ha
Vu A. Ha, Peter Haddawy
Towards Case-Based Preference Elicitation: Similarity Measures on Preference Structures
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-193-201
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While decision theory provides an appealing normative framework for representing rich preference structures, eliciting utility or value functions typically incurs a large cost. For many applications involving interactive systems this overhead precludes the use of formal decision-theoretic models of preference. Instead of performing elicitation in a vacuum, it would be useful if we could augment directly elicited preferences with some appropriate default information. In this paper we propose a case-based approach to alleviating the preference elicitation bottleneck. Assuming the existence of a population of users from whom we have elicited complete or incomplete preference structures, we propose eliciting the preferences of a new user interactively and incrementally, using the closest existing preference structures as potential defaults. Since a notion of closeness demands a measure of distance among preference structures, this paper takes the first step of studying various distance measures over fully and partially specified preference structures. We explore the use of Euclidean distance and Spearman's footrule, and define a new measure, the probabilistic distance. We provide computational techniques for all three measures.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:04:06 GMT" } ]
1,359,676,800,000
[ [ "Ha", "Vu A.", "" ], [ "Haddawy", "Peter", "" ] ]
1301.7380
Eric A. Hansen
Eric A. Hansen
Solving POMDPs by Searching in Policy Space
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-211-219
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most algorithms for solving POMDPs iteratively improve a value function that implicitly represents a policy and are said to search in value function space. This paper presents an approach to solving POMDPs that represents a policy explicitly as a finite-state controller and iteratively improves the controller by search in policy space. Two related algorithms illustrate this approach. The first is a policy iteration algorithm that can outperform value iteration in solving infinite-horizon POMDPs. It provides the foundation for a new heuristic search algorithm that promises further speedup by focusing computational effort on regions of the problem space that are reachable, or likely to be reached, from a start state.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:04:11 GMT" } ]
1,359,676,800,000
[ [ "Hansen", "Eric A.", "" ] ]
1301.7381
Milos Hauskrecht
Milos Hauskrecht, Nicolas Meuleau, Leslie Pack Kaelbling, Thomas L. Dean, Craig Boutilier
Hierarchical Solution of Markov Decision Processes using Macro-actions
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-220-229
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the use of temporally abstract actions, or macro-actions, in the solution of Markov decision processes. Unlike current models that combine both primitive actions and macro-actions and leave the state space unchanged, we propose a hierarchical model (using an abstract MDP) that works with macro-actions only, and that significantly reduces the size of the state space. This is achieved by treating macro-actions as local policies that act in certain regions of state space, and by restricting states in the abstract MDP to those at the boundaries of regions. The abstract MDP approximates the original and can be solved more efficiently. We discuss several ways in which macro-actions can be generated to ensure good solution quality. Finally, we consider ways in which macro-actions can be reused to solve multiple, related MDPs, and we show that this can justify the computational overhead of macro-action generation.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:04:16 GMT" } ]
1,359,676,800,000
[ [ "Hauskrecht", "Milos", "" ], [ "Meuleau", "Nicolas", "" ], [ "Kaelbling", "Leslie Pack", "" ], [ "Dean", "Thomas L.", "" ], [ "Boutilier", "Craig", "" ] ]
1301.7383
Holger H. Hoos
Holger H. Hoos, Thomas Stutzle
Evaluating Las Vegas Algorithms - Pitfalls and Remedies
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-238-245
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic search algorithms are among the most successful approaches for solving hard combinatorial problems. A large class of stochastic search approaches can be cast into the framework of Las Vegas Algorithms (LVAs). As the run-time behavior of LVAs is characterized by random variables, detailed knowledge of run-time distributions provides important information for the analysis of these algorithms. In this paper we propose a novel methodology for evaluating the performance of LVAs, based on the identification of empirical run-time distributions. We exemplify our approach by applying it to Stochastic Local Search (SLS) algorithms for the satisfiability problem (SAT) in propositional logic. We point out pitfalls arising from the use of improper empirical methods and discuss the benefits of the proposed methodology for evaluating and comparing LVAs.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:04:26 GMT" } ]
1,359,676,800,000
[ [ "Hoos", "Holger H.", "" ], [ "Stutzle", "Thomas", "" ] ]
1301.7384
Michael C. Horsch
Michael C. Horsch, David L. Poole
An Anytime Algorithm for Decision Making under Uncertainty
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-246-255
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an anytime algorithm which computes policies for decision problems represented as multi-stage influence diagrams. Our algorithm constructs policies incrementally, starting from a policy which makes no use of the available information. The incremental process constructs policies which include more of the information available to the decision maker at each step. While the process converges to the optimal policy, our approach is designed for situations in which computing the optimal policy is infeasible. We provide examples of the process on several large decision problems, showing that, for these examples, the process constructs valuable (but sub-optimal) policies before the optimal policy would be available by traditional methods.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:04:31 GMT" } ]
1,359,676,800,000
[ [ "Horsch", "Michael C.", "" ], [ "Poole", "David L.", "" ] ]
1301.7386
Pablo H. Ibarguengoytia
Pablo H. Ibarguengoytia, Luis Enrique Sucar, Sunil Vadera
Any Time Probabilistic Reasoning for Sensor Validation
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-266-273
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For many real-time applications, it is important to validate the information received from the sensors before entering higher levels of reasoning. This paper presents an any time probabilistic algorithm for validating the information provided by sensors. The system consists of two Bayesian network models. The first one is a model of the dependencies between sensors and it is used to validate each sensor. It provides a list of potentially faulty sensors. To isolate the real faults, a second Bayesian network is used, which relates the potential faults with the real faults. This second model is also used to make the validation algorithm any time, by first validating the sensors that provide the most information. To select the next sensor to validate, and to measure the quality of the results at each stage, an entropy function is used. This function captures in a single quantity both the certainty and specificity measures of any time algorithms. Together, both models constitute a mechanism for validating sensors in an any time fashion, providing at each step the probability of correct/faulty for each sensor, and the total quality of the results. The algorithm has been tested in the validation of temperature sensors of a power plant.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:04:41 GMT" } ]
1,359,676,800,000
[ [ "Ibarguengoytia", "Pablo H.", "" ], [ "Sucar", "Luis Enrique", "" ], [ "Vadera", "Sunil", "" ] ]
1301.7387
Manfred Jaeger
Manfred Jaeger
Measure Selection: Notions of Rationality and Representation Independence
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-274-281
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We take another look at the general problem of selecting a preferred probability measure among those that comply with some given constraints. The dominant role that entropy maximization has obtained in this context is questioned by arguing that the minimum information principle on which it is based could be supplanted by an at least as plausible "likelihood of evidence" principle. We then review a method for turning given selection functions into representation independent variants, and discuss the tradeoffs involved in this transformation.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:04:45 GMT" } ]
1,359,676,800,000
[ [ "Jaeger", "Manfred", "" ] ]
1301.7391
Michael Kearns
Michael Kearns, Yishay Mansour
Exact Inference of Hidden Structure from Sample Data in Noisy-OR Networks
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-304-310
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the literature on graphical models, there has been increased attention paid to the problems of learning hidden structure (see Heckerman [H96] for a survey) and causal mechanisms from sample data [H96, P88, S93, P95, F98]. In most settings we should expect the former to be difficult, and the latter potentially impossible without experimental intervention. In this work, we examine some restricted settings in which the hidden structure can be perfectly reconstructed solely on the basis of observed sample data.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:05:04 GMT" } ]
1,359,676,800,000
[ [ "Kearns", "Michael", "" ], [ "Mansour", "Yishay", "" ] ]
1301.7394
Vasilica Lepar
Vasilica Lepar, Prakash P. Shenoy
A Comparison of Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer Architectures for Computing Marginals of Probability Distributions
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-328-337
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last decade, several architectures have been proposed for exact computation of marginals using local computation. In this paper, we compare three architectures - Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer - from the perspective of graphical structure for message propagation, message-passing scheme, computational efficiency, and storage efficiency.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:05:20 GMT" } ]
1,359,676,800,000
[ [ "Lepar", "Vasilica", "" ], [ "Shenoy", "Prakash P.", "" ] ]
1301.7395
Chao-Lin Liu
Chao-Lin Liu, Michael P. Wellman
Incremental Tradeoff Resolution in Qualitative Probabilistic Networks
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-338-345
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Qualitative probabilistic reasoning in a Bayesian network often reveals tradeoffs: relationships that are ambiguous due to competing qualitative influences. We present two techniques that combine qualitative and numeric probabilistic reasoning to resolve such tradeoffs, inferring the qualitative relationship between nodes in a Bayesian network. The first approach incrementally marginalizes nodes that contribute to the ambiguous qualitative relationships. The second approach evaluates approximate Bayesian networks for bounds of probability distributions, and uses these bounds to determine the qualitative relationships in question. This approach is also incremental in that the algorithm refines the state spaces of random variables for tighter bounds until the qualitative relationships are resolved. Both approaches provide systematic methods for tradeoff resolution at potentially lower computational cost than application of purely numeric methods.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:05:25 GMT" } ]
1,359,676,800,000
[ [ "Liu", "Chao-Lin", "" ], [ "Wellman", "Michael P.", "" ] ]
1301.7396
Chao-Lin Liu
Chao-Lin Liu, Michael P. Wellman
Using Qualitative Relationships for Bounding Probability Distributions
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-346-353
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:05:30 GMT" } ]
1,359,676,800,000
[ [ "Liu", "Chao-Lin", "" ], [ "Wellman", "Michael P.", "" ] ]
1301.7397
Thomas Lukasiewicz
Thomas Lukasiewicz
Magic Inference Rules for Probabilistic Deduction under Taxonomic Knowledge
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-354-361
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present locally complete inference rules for probabilistic deduction from taxonomic and probabilistic knowledge-bases over conjunctive events. Crucially, in contrast to similar inference rules in the literature, our inference rules are locally complete for conjunctive events and under additional taxonomic knowledge. We discover that our inference rules are extremely complex and that it is at first glance not clear at all where the deduced tightest bounds come from. Moreover, analyzing the global completeness of our inference rules, we find examples of globally very incomplete probabilistic deductions. More generally, we even show that all systems of inference rules for taxonomic and probabilistic knowledge-bases over conjunctive events are globally incomplete. We conclude that probabilistic deduction by the iterative application of inference rules on interval restrictions for conditional probabilities, even though considered very promising in the literature so far, seems very limited in its field of application.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:05:34 GMT" } ]
1,359,676,800,000
[ [ "Lukasiewicz", "Thomas", "" ] ]