Dataset schema (column: type, observed range):
id: string, length 9-10
submitter: string, length 5-47
authors: string, length 5-1.72k
title: string, length 11-234
comments: string, length 1-491
journal-ref: string, length 4-396
doi: string, length 13-97
report-no: string, length 4-138
categories: string, 1 class
license: string, 9 classes
abstract: string, length 29-3.66k
versions: list, length 1-21
update_date: int64, range 1,180B-1,718B
authors_parsed: sequence, length 1-98
1301.7398
Anders L. Madsen
Anders L. Madsen, Finn Verner Jensen
Lazy Propagation in Junction Trees
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-362-369
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The efficiency of algorithms using secondary structures for probabilistic inference in Bayesian networks can be improved by exploiting independence relations induced by evidence and the direction of the links in the original network. In this paper we present an algorithm that exploits these independence relations on-line to reduce both time and space costs. Instead of multiplying the conditional probability distributions for the various cliques, we determine on-line which potentials to multiply when a message is to be produced. The performance improvement of the algorithm is demonstrated through empirical evaluations involving large real-world Bayesian networks, and we compare the method with the HUGIN and Shafer-Shenoy inference algorithms.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:05:39 GMT" } ]
1,359,676,800,000
[ [ "Madsen", "Anders L.", "" ], [ "Jensen", "Finn Verner", "" ] ]
1301.7399
Suzanne M. Mahoney
Suzanne M. Mahoney, Kathryn Blackmond Laskey
Constructing Situation Specific Belief Networks
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-370-378
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a process for constructing situation-specific belief networks from a knowledge base of network fragments. A situation-specific network is a minimal query complete network constructed from a knowledge base in response to a query for the probability distribution on a set of target variables given evidence and context variables. We present definitions of query completeness and situation-specific networks. We describe conditions on the knowledge base that guarantee query completeness. The relationship of our work to earlier work on KBMC is also discussed.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:05:44 GMT" } ]
1,359,676,800,000
[ [ "Mahoney", "Suzanne M.", "" ], [ "Laskey", "Kathryn Blackmond", "" ] ]
1301.7402
Paul-Andre Monney
Paul-Andre Monney
From Likelihood to Plausibility
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-396-403
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several authors have explained that the likelihood ratio measures the strength of the evidence represented by observations in statistical problems. This idea works fine when the goal is to evaluate the strength of the available evidence for a simple hypothesis versus another simple hypothesis. However, the applicability of this idea is limited to simple hypotheses because the likelihood function is primarily defined on points (simple hypotheses) of the parameter space. In this paper we define a general weight of evidence that is applicable to both simple and composite hypotheses. It is based on the Dempster-Shafer concept of plausibility and is shown to be a generalization of the likelihood ratio. Functional models are of fundamental importance for the general weight of evidence proposed in this paper. The relevant concepts and ideas are explained by means of a familiar urn problem, and the general analysis of a real-world medical problem is presented.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:06:00 GMT" } ]
1,359,676,800,000
[ [ "Monney", "Paul-Andre", "" ] ]
1301.7404
Benson Hin Kwong Ng
Benson Hin Kwong Ng, Kam-Fai Wong, Boon-Toh Low
Resolving Conflicting Arguments under Uncertainties
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-414-421
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed knowledge-based applications in open domains rely on common-sense information, which is bound to be uncertain and incomplete. To draw useful conclusions from ambiguous data, one must address the uncertainties and conflicts incurred in a holistic view. No integrated framework is viable without an in-depth analysis of the conflicts incurred by uncertainties. In this paper, we give such an analysis and, based on the result, propose an integrated framework. Our framework extends definite argumentation theory to model uncertainty. It supports three views over conflicting and uncertain knowledge. Thus, knowledge engineers can draw different conclusions depending on the application context (i.e. view). We also give an illustrative example of strategic decision support to show the practical usefulness of our framework.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:06:09 GMT" } ]
1,359,676,800,000
[ [ "Ng", "Benson Hin Kwong", "" ], [ "Wong", "Kam-Fai", "" ], [ "Low", "Boon-Toh", "" ] ]
1301.7405
Ron Parr
Ron Parr
Flexible Decomposition Algorithms for Weakly Coupled Markov Decision Problems
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-422-430
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents two new approaches to decomposing and solving large Markov decision problems (MDPs), a partial decoupling method and a complete decoupling method. In these approaches, a large, stochastic decision problem is divided into smaller pieces. The first approach builds a cache of policies for each part of the problem independently, and then combines the pieces in a separate, light-weight step. A second approach also divides the problem into smaller pieces, but information is communicated between the different problem pieces, allowing intelligent decisions to be made about which piece requires the most attention. Both approaches can be used to find optimal policies or approximately optimal policies with provable bounds. These algorithms also provide a framework for the efficient transfer of knowledge across problems that share similar structure.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:06:15 GMT" } ]
1,359,676,800,000
[ [ "Parr", "Ron", "" ] ]
1301.7406
David M Pennock
David M. Pennock
Logarithmic Time Parallel Bayesian Inference
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-431-438
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I present a parallel algorithm for exact probabilistic inference in Bayesian networks. For polytree networks with n variables, the worst-case time complexity is O(log n) on a CREW PRAM (concurrent-read, exclusive-write parallel random-access machine) with n processors, for any constant number of evidence variables. For arbitrary networks, the time complexity is O(r^{3w}*log n) for n processors, or O(w*log n) for r^{3w}*n processors, where r is the maximum range of any variable, and w is the induced width (the maximum clique size), after moralizing and triangulating the network.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:06:20 GMT" } ]
1,359,676,800,000
[ [ "Pennock", "David M.", "" ] ]
1301.7407
Mark Alan Peot
Mark Alan Peot, Ross D. Shachter
Learning From What You Don't Observe
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-439-446
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The process of diagnosis involves learning about the state of a system from various observations of symptoms or findings about the system. Sophisticated Bayesian (and other) algorithms have been developed to revise and maintain beliefs about the system as observations are made. Nonetheless, diagnostic models have tended to ignore some common-sense reasoning exploited by human diagnosticians; in particular, one can learn from which observations have not been made, in the spirit of conversational implicature. There are two concepts that we describe to extract information from the observations not made. First, some symptoms, if present, are more likely to be reported before others. Second, most human diagnosticians and expert systems are economical in their data-gathering, searching first where they are more likely to find symptoms present. Thus, there is a desirable bias toward reporting symptoms that are present. We develop a simple model for these concepts that can significantly improve diagnostic inference.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:06:25 GMT" } ]
1,359,676,800,000
[ [ "Peot", "Mark Alan", "" ], [ "Shachter", "Ross D.", "" ] ]
1301.7408
David L Poole
David L. Poole
Context-Specific Approximation in Probabilistic Inference
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-447-454
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is evidence that the numbers in probabilistic inference don't really matter. This paper considers the idea that we can make a probabilistic model simpler by making fewer distinctions. Unfortunately, the level of a Bayesian network seems too coarse; it is unlikely that a parent will make little difference for all values of the other parents. In this paper we consider an approximation scheme where distinctions can be ignored in some contexts, but not in other contexts. We elaborate on a notion of a parent context that allows a structured context-specific decomposition of a probability distribution and the associated probabilistic inference scheme called probabilistic partial evaluation (Poole 1997). This paper shows a way to simplify a probabilistic model by ignoring distinctions which have similar probabilities, a method to exploit the simpler model, a bound on the resulting errors, and some preliminary empirical results on simple networks.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:06:30 GMT" } ]
1,359,676,800,000
[ [ "Poole", "David L.", "" ] ]
1301.7409
Irina Rish
Irina Rish, Kalev Kask, Rina Dechter
Empirical Evaluation of Approximation Algorithms for Probabilistic Decoding
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-455-463
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It was recently shown that the problem of decoding messages transmitted through a noisy channel can be formulated as a belief updating task over a probabilistic network [McEliece]. Moreover, it was observed that iterative application of the (linear time) Pearl's belief propagation algorithm designed for polytrees outperformed state of the art decoding algorithms, even though the corresponding networks may have many cycles. This paper demonstrates empirically that an approximation algorithm approx-mpe for solving the most probable explanation (MPE) problem, developed within the recently proposed mini-bucket elimination framework [Dechter96], outperforms iterative belief propagation on classes of coding networks that have bounded induced width. Our experiments suggest that approximate MPE decoders can be good competitors to the approximate belief updating decoders.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:06:34 GMT" } ]
1,359,676,800,000
[ [ "Rish", "Irina", "" ], [ "Kask", "Kalev", "" ], [ "Dechter", "Rina", "" ] ]
1301.7410
Paola Sebastiani
Paola Sebastiani, Marco Ramoni
Decision Theoretic Foundations of Graphical Model Selection
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-464-471
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a decision theoretic formulation of learning the graphical structure of a Bayesian Belief Network from data. This framework subsumes the standard Bayesian approach of choosing the model with the largest posterior probability as the solution of a decision problem with a 0-1 loss function and allows the use of more general loss functions able to trade-off the complexity of the selected model and the error of choosing an oversimplified model. A new class of loss functions, called disintegrable, is introduced, to allow the decision problem to match the decomposability of the graphical model. With this class of loss functions, the optimal solution to the decision problem can be found using an efficient bottom-up search strategy.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:06:39 GMT" } ]
1,359,676,800,000
[ [ "Sebastiani", "Paola", "" ], [ "Ramoni", "Marco", "" ] ]
1301.7412
Ross D. Shachter
Ross D. Shachter
Bayes-Ball: The Rational Pastime (for Determining Irrelevance and Requisite Information in Belief Networks and Influence Diagrams)
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-480-487
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the benefits of belief networks and influence diagrams is that so much knowledge is captured in the graphical structure. In particular, statements of conditional irrelevance (or independence) can be verified in time linear in the size of the graph. To resolve a particular inference query or decision problem, only some of the possible states and probability distributions must be specified, the "requisite information." This paper presents a new, simple, and efficient "Bayes-ball" algorithm which is well-suited to both new students of belief networks and state of the art implementations. The Bayes-ball algorithm determines irrelevant sets and requisite information more efficiently than existing methods, and is linear in the size of the graph for belief networks and influence diagrams.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:06:48 GMT" } ]
1,359,676,800,000
[ [ "Shachter", "Ross D.", "" ] ]
1301.7414
Milan Studeny
Milan Studeny
Bayesian Networks from the Point of View of Chain Graphs
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-496-503
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper gives a few arguments in favour of the use of chain graphs for describing probabilistic conditional independence structures. Every Bayesian network model can be equivalently introduced by means of a factorization formula with respect to a chain graph which is Markov equivalent to the Bayesian network. A graphical characterization of such graphs is given. The class of equivalent graphs can be represented by a distinguished graph which is called the largest chain graph. The factorization formula with respect to the largest chain graph is the basis of a proposal for how to represent the corresponding (discrete) probability distribution in a computer (i.e. parametrize it). This representation does not depend on the choice of a particular Bayesian network from the class of equivalent networks and seems to be the most efficient from the point of view of memory demands. A separation criterion for reading independence statements from a chain graph is formulated in a simpler way. It resembles the well-known d-separation criterion for Bayesian networks and can be implemented locally.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:06:56 GMT" } ]
1,359,676,800,000
[ [ "Studeny", "Milan", "" ] ]
1301.7416
Nevin Lianwen Zhang
Nevin Lianwen Zhang
Probabilistic Inference in Influence Diagrams
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-514-522
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is about reducing influence diagram (ID) evaluation to Bayesian network (BN) inference problems. Such reduction is interesting because it enables one to readily use one's favorite BN inference algorithm to efficiently evaluate IDs. Two such reduction methods have been proposed previously (Cooper 1988, Shachter and Peot 1992). This paper proposes a new method. The BN inference problems induced by the new method are much easier to solve than those induced by the two previous methods.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:07:07 GMT" } ]
1,359,676,800,000
[ [ "Zhang", "Nevin Lianwen", "" ] ]
1301.7417
Nevin Lianwen Zhang
Nevin Lianwen Zhang, Stephen S. Lee
Planning with Partially Observable Markov Decision Processes: Advances in Exact Solution Method
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-523-530
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is much interest in using partially observable Markov decision processes (POMDPs) as a formal model for planning in stochastic domains. This paper is concerned with finding optimal policies for POMDPs. We propose several improvements to incremental pruning, presently the most efficient exact algorithm for solving POMDPs.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:07:12 GMT" } ]
1,359,676,800,000
[ [ "Zhang", "Nevin Lianwen", "" ], [ "Lee", "Stephen S.", "" ] ]
1301.7418
Weixiong Zhang
Weixiong Zhang
Flexible and Approximate Computation through State-Space Reduction
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-531-538
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the real world, insufficient information, limited computation resources, and complex problem structures often force an autonomous agent to make a decision in time less than that required to solve the problem at hand completely. Flexible and approximate computations are two approaches to decision making under limited computation resources. Flexible computation helps an agent to flexibly allocate limited computation resources so that the overall system utility is maximized. Approximate computation enables an agent to find the best satisfactory solution within a deadline. In this paper, we present two state-space reduction methods for flexible and approximate computation: quantitative reduction to deal with inaccurate heuristic information, and structural reduction to handle complex problem structures. These two methods can be applied successively to continuously improve solution quality if more computation is available. Our results show that these reduction methods are effective and efficient, finding better solutions with less computation than some existing well-known methods.
[ { "version": "v1", "created": "Wed, 30 Jan 2013 15:07:19 GMT" } ]
1,359,676,800,000
[ [ "Zhang", "Weixiong", "" ] ]
1302.0216
Dimiter Dobrev
Dimiter Dobrev
Comparison between the two definitions of AI
added four new sections
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two different definitions of the Artificial Intelligence concept have been proposed in papers [1] and [2]. The first definition is informal. It says that any program that is cleverer than a human being is acknowledged as Artificial Intelligence. The second definition is formal because it avoids reference to the concept of a human being. The readers of papers [1] and [2] might be left with the impression that both definitions are equivalent and that the definition in [2] is simply a formal version of that in [1]. This paper will compare both definitions of Artificial Intelligence and, hopefully, will bring a better understanding of the concept.
[ { "version": "v1", "created": "Thu, 31 Jan 2013 15:15:40 GMT" }, { "version": "v2", "created": "Thu, 22 Aug 2013 22:56:04 GMT" } ]
1,377,475,200,000
[ [ "Dobrev", "Dimiter", "" ] ]
1302.0334
Daniel Buehrer
Daniel Buehrer and Chee-Hwa Lee
Class Algebra for Ontology Reasoning
pp.2-13
Proc. of TOOLS Asia 99 (Technology of Object-Oriented Languages and Systems, 1999 International Conference), IEEE Press
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Class algebra provides a natural framework for sharing of ISA hierarchies between users that may be unaware of each other's definitions. This permits data from relational databases, object-oriented databases, and tagged XML documents to be unioned into one distributed ontology, sharable by all users without the need for prior negotiation or the development of a "standard" ontology for each field. Moreover, class algebra produces a functional correspondence between a class's class algebraic definition (i.e. its "intent") and the set of all instances which satisfy the expression (i.e. its "extent"). The framework thus provides assistance in quickly locating examples and counterexamples of various definitions. This kind of information is very valuable when developing models of the real world, and serves as an invaluable tool assisting in the proof of theorems concerning these class algebra expressions. Finally, the relative frequencies of objects in the ISA hierarchy can produce a useful Boolean algebra of probabilities. The probabilities can be used by traditional information-theoretic classification methodologies to obtain optimal ways of classifying objects in the database.
[ { "version": "v1", "created": "Sat, 2 Feb 2013 02:18:00 GMT" } ]
1,360,022,400,000
[ [ "Buehrer", "Daniel", "" ], [ "Lee", "Chee-Hwa", "" ] ]
1302.1155
Kurt Ammon
Kurt Ammon
An Effective Procedure for Computing "Uncomputable" Functions
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give an effective procedure that produces a natural number in its output from any natural number in its input, that is, it computes a total function. The elementary operations of the procedure are Turing-computable. The procedure has a second input which can contain the Goedel number of any Turing-computable total function whose range is a subset of the set of the Goedel numbers of all Turing-computable total functions. We prove that the second input cannot be set to the Goedel number of any Turing-computable function that computes the output from any natural number in its first input. In this sense, there is no Turing program that computes the output from its first input. The procedure is used to define creative procedures which compute functions that are not Turing-computable. We argue that creative procedures model an aspect of reasoning that cannot be modeled by Turing machines.
[ { "version": "v1", "created": "Tue, 5 Feb 2013 19:11:59 GMT" } ]
1,360,108,800,000
[ [ "Ammon", "Kurt", "" ] ]
1302.1334
Yuriy Parzhin
Yuri Parzhin
Principles of modal and vector theory of formal intelligence systems
34 pages, 8 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/3.0/
The paper considers the class of information systems capable of solving heuristic problems on the basis of a formal theory, termed the modal and vector theory of formal intelligent systems (FIS). The paper justifies the construction of the FIS resolution algorithm, defines the main features of these systems, and proves the theorems that underlie the theory. The principle of representation diversity in FIS construction is formulated. The paper discusses the main principles of constructing and operating a formal intelligent system on the basis of the modal and vector theory, covering the modular architecture of the FIS presentation subsystem and the algorithms for data processing at every step of the stage of creating presentations. In addition, the paper proposes a structure of neural elements, i.e. zone detectors and processors, which form the basis of FIS construction.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 12:16:33 GMT" } ]
1,360,195,200,000
[ [ "Parzhin", "Yuri", "" ] ]
1302.1520
Ami Berler
Ami Berler, Solomon Eyal Shimony
Bayes Networks for Sonar Sensor Fusion
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-14-21
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wide-angle sonar mapping of the environment by mobile robot is nontrivial due to several sources of uncertainty: dropouts due to "specular" reflections, obstacle location uncertainty due to the wide beam, and distance measurement error. Earlier papers address the latter problems, but dropouts remain a problem in many environments. We present an approach that lifts the overoptimistic independence assumption used in earlier work, and use Bayes nets to represent the dependencies between objects of the model. Objects of the model consist of readings, and of regions in which "quasi location invariance" of the (possible) obstacles exists, with respect to the readings. Simulation supports the method's feasibility. The model is readily extensible to allow for prior distributions, as well as other types of sensing operations.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:53:39 GMT" } ]
1,360,281,600,000
[ [ "Berler", "Ami", "" ], [ "Shimony", "Solomon Eyal", "" ] ]
1302.1521
John Bigham
John Bigham
Exploiting Uncertain and Temporal Information in Correlation
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-22-29
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A modelling language is described which is suitable for the correlation of information when the underlying functional model of the system is incomplete or uncertain and the temporal dependencies are imprecise. An efficient and incremental implementation is outlined which depends on cost functions satisfying certain criteria. Possibilistic logic and probability theory (as used in the applications targeted) satisfy these criteria.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:53:45 GMT" } ]
1,360,281,600,000
[ [ "Bigham", "John", "" ] ]
1302.1522
Craig Boutilier
Craig Boutilier
Correlated Action Effects in Decision Theoretic Regression
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-30-37
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much recent research in decision theoretic planning has adopted Markov decision processes (MDPs) as the model of choice, and has attempted to make their solution more tractable by exploiting problem structure. One particular algorithm, structured policy construction achieves this by means of a decision theoretic analog of goal regression using action descriptions based on Bayesian networks with tree-structured conditional probability tables. The algorithm as presented is not able to deal with actions with correlated effects. We describe a new decision theoretic regression operator that corrects this weakness. While conceptually straightforward, this extension requires a somewhat more complicated technical approach.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:53:51 GMT" } ]
1,360,281,600,000
[ [ "Boutilier", "Craig", "" ] ]
1302.1523
Alex G. Buchner
Alex G. Buchner, Werner Dubitzky, Alfons Schuster, Philippe Lopes, Peter G. O'Donoghue, John G. Hughes, David A. Bell, Kenny Adamson, John A. White, John M.C.C. Anderson, Maurice D. Mulvenna
Corporate Evidential Decision Making in Performance Prediction Domains
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-38-45
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Performance prediction or forecasting sporting outcomes involves a great deal of insight into the particular area one is dealing with, and a considerable amount of intuition about the factors that bear on such outcomes and performances. The mathematical Theory of Evidence offers representation formalisms which grant experts a high degree of freedom when expressing their subjective beliefs in the context of decision-making situations like performance prediction. Furthermore, this reasoning framework incorporates a powerful mechanism to systematically pool the decisions made by individual subject matter experts. The idea behind such a combination of knowledge is to improve the competence (quality) of the overall decision-making process. This paper reports on a performance prediction experiment carried out during the European Football Championship in 1996. Relying on the knowledge of four predictors, Evidence Theory was used to forecast the final scores of all 31 matches. The results of this empirical study are very encouraging.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:53:56 GMT" } ]
1,360,281,600,000
[ [ "Buchner", "Alex G.", "" ], [ "Dubitzky", "Werner", "" ], [ "Schuster", "Alfons", "" ], [ "Lopes", "Philippe", "" ], [ "O'Donoghue", "Peter G.", "" ], [ "Hughes", "John G.", "" ], [ "Bell", "David A.", "" ], [ "Adamson", "Kenny", "" ], [ "White", "John A.", "" ], [ "Anderson", "John M. C. C.", "" ], [ "Mulvenna", "Maurice D.", "" ] ]
1302.1524
Luis M. de Campos
Luis M. de Campos, Juan F. Huete
Algorithms for Learning Decomposable Models and Chordal Graphs
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-46-53
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decomposable dependency models and their graphical counterparts, i.e., chordal graphs, possess a number of interesting and useful properties. On the basis of two characterizations of decomposable models in terms of independence relationships, we develop an exact algorithm for recovering the chordal graphical representation of any given decomposable model. We also propose an algorithm for learning chordal approximations of dependency models isomorphic to general undirected graphs.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:54:02 GMT" } ]
1,360,281,600,000
[ [ "de Campos", "Luis M.", "" ], [ "Huete", "Juan F.", "" ] ]
1302.1525
Anthony R. Cassandra
Anthony R. Cassandra, Michael L. Littman, Nevin Lianwen Zhang
Incremental Pruning: A Simple, Fast, Exact Method for Partially Observable Markov Decision Processes
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-54-61
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most exact algorithms for general partially observable Markov decision processes (POMDPs) use a form of dynamic programming in which a piecewise-linear and convex representation of one value function is transformed into another. We examine variations of the "incremental pruning" method for solving this problem and compare them to earlier algorithms from theoretical and empirical perspectives. We find that incremental pruning is presently the most efficient exact method for solving POMDPs.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:54:07 GMT" } ]
1,360,281,600,000
[ [ "Cassandra", "Anthony R.", "" ], [ "Littman", "Michael L.", "" ], [ "Zhang", "Nevin Lianwen", "" ] ]
1302.1526
Urszula Chajewska
Urszula Chajewska, Joseph Y. Halpern
Defining Explanation in Probabilistic Systems
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-62-71
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As probabilistic systems gain popularity and are coming into wider use, the need for a mechanism that explains the system's findings and recommendations becomes more critical. The system will also need a mechanism for ordering competing explanations. We examine two representative approaches to explanation in the literature - one due to G\"ardenfors and one due to Pearl - and show that both suffer from significant problems. We propose an approach to defining a notion of "better explanation" that combines some of the features of both together with more recent work by Pearl and others on causality.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:54:13 GMT" } ]
1,360,281,600,000
[ [ "Chajewska", "Urszula", "" ], [ "Halpern", "Joseph Y.", "" ] ]
1302.1527
Adrian Y. W. Cheuk
Adrian Y. W. Cheuk, Craig Boutilier
Structured Arc Reversal and Simulation of Dynamic Probabilistic Networks
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-72-79
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an algorithm for arc reversal in Bayesian networks with tree-structured conditional probability tables, and consider some of its advantages, especially for the simulation of dynamic probabilistic networks. In particular, the method allows one to produce CPTs for nodes involved in the reversal that exploit regularities in the conditional distributions. We argue that this approach alleviates some of the overhead associated with arc reversal, plays an important role in evidence integration and can be used to restrict sampling of variables in DPNs. We also provide an algorithm that detects the dynamic irrelevance of state variables in forward simulation. This algorithm exploits the structured CPTs in a reversed network to determine, in a time-independent fashion, the conditions under which a variable does or does not need to be sampled.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:54:19 GMT" } ]
1,360,281,600,000
[ [ "Cheuk", "Adrian Y. W.", "" ], [ "Boutilier", "Craig", "" ] ]
1302.1531
Fabio Gagliardi Cozman
Fabio Gagliardi Cozman
Robustness Analysis of Bayesian Networks with Local Convex Sets of Distributions
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-108-115
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust Bayesian inference is the calculation of posterior probability bounds given perturbations in a probabilistic model. This paper focuses on perturbations that can be expressed locally in Bayesian networks through convex sets of distributions. Two approaches for combination of local models are considered. The first approach takes the largest set of joint distributions that is compatible with the local sets of distributions; we show how to reduce this type of robust inference to a linear programming problem. The second approach takes the convex hull of joint distributions generated from the local sets of distributions; we demonstrate how to apply interior-point optimization methods to generate posterior bounds and how to generate approximations that are guaranteed to converge to correct posterior bounds. We also discuss calculation of bounds for expected utilities and variances, and global perturbation models.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:54:41 GMT" } ]
1,360,281,600,000
[ [ "Cozman", "Fabio Gagliardi", "" ] ]
1302.1532
Adnan Darwiche
Adnan Darwiche, Gregory M. Provan
A Standard Approach for Optimizing Belief Network Inference using Query DAGs
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-116-123
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a novel, algorithm-independent approach to optimizing belief network inference. Rather than designing optimizations on an algorithm-by-algorithm basis, we argue that one should use an unoptimized algorithm to generate a Q-DAG, a compiled graphical representation of the belief network, and then optimize the Q-DAG and its evaluator instead. We present a set of Q-DAG optimizations that supplant optimizations designed for traditional inference algorithms, including zero compression, network pruning and caching. We show that our Q-DAG optimizations require time linear in the Q-DAG size, and significantly simplify the process of designing algorithms for optimizing belief network inference.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:54:47 GMT" } ]
1,360,281,600,000
[ [ "Darwiche", "Adnan", "" ], [ "Provan", "Gregory M.", "" ] ]
1302.1533
Thomas L. Dean
Thomas L. Dean, Robert Givan, Sonia Leach
Model Reduction Techniques for Computing Approximately Optimal Solutions for Markov Decision Processes
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-124-131
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method for solving implicit (factored) Markov decision processes (MDPs) with very large state spaces. We introduce a property of state space partitions which we call epsilon-homogeneity. Intuitively, an epsilon-homogeneous partition groups together states that behave approximately the same under all or some subset of policies. Borrowing from recent work on model minimization in computer-aided software verification, we present an algorithm that takes a factored representation of an MDP and a value 0 <= epsilon <= 1, and computes a factored epsilon-homogeneous partition of the state space. This partition defines a family of related MDPs - those MDPs with state space equal to the blocks of the partition, and transition probabilities "approximately" like those of any (original MDP) state in the source block. To formally study such families of MDPs, we introduce the new notion of a "bounded parameter MDP" (BMDP), which is a family of (traditional) MDPs defined by specifying upper and lower bounds on the transition probabilities and rewards. We describe algorithms that operate on BMDPs to find policies that are approximately optimal with respect to the original MDP. In combination, our method for reducing a large implicit MDP to a possibly much smaller BMDP using an epsilon-homogeneous partition, and our methods for selecting actions in BMDPs, constitute a new approach for analyzing large implicit MDPs. Among its advantages, this new approach provides insight into existing algorithms for solving implicit MDPs, provides useful connections to work in automata theory and model minimization, and suggests methods, which involve varying epsilon, to trade time and space (specifically in terms of the size of the corresponding state space) for solution quality.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:54:52 GMT" } ]
1,360,281,600,000
[ [ "Dean", "Thomas L.", "" ], [ "Givan", "Robert", "" ], [ "Leach", "Sonia", "" ] ]
1302.1534
Rina Dechter
Rina Dechter, Irina Rish
A Scheme for Approximating Probabilistic Inference
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-132-141
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a class of probabilistic approximation algorithms based on bucket elimination which offer adjustable levels of accuracy and efficiency. We analyze the approximation for several tasks: finding the most probable explanation, belief updating and finding the maximum a posteriori hypothesis. We identify regions of completeness and provide preliminary empirical evaluation on randomly generated networks.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:54:58 GMT" } ]
1,360,281,600,000
[ [ "Dechter", "Rina", "" ], [ "Rish", "Irina", "" ] ]
1302.1535
Soren L. Dittmer
Soren L. Dittmer, Finn Verner Jensen
Myopic Value of Information in Influence Diagrams
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-142-149
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method for calculation of myopic value of information in influence diagrams (Howard & Matheson, 1981) based on the strong junction tree framework (Jensen, Jensen & Dittmer, 1994). The difference in instantiation order in the influence diagrams is reflected in the corresponding junction trees by the order in which the chance nodes are marginalized. This order of marginalization can be changed by table expansion and in effect the same junction tree with expanded tables may be used for calculating the expected utility for scenarios with different instantiation order. We also compare our method to the classic method of modeling different instantiation orders in the same influence diagram.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:55:04 GMT" } ]
1,360,281,600,000
[ [ "Dittmer", "Soren L.", "" ], [ "Jensen", "Finn Verner", "" ] ]
1302.1536
Jens Doerpmund
Jens Doerpmund
Limitations of Skeptical Default Reasoning
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-150-156
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Poole has shown that nonmonotonic logics do not handle the lottery paradox correctly. In this paper we will show that Pollock's theory of defeasible reasoning fails for the same reason: defeasible reasoning is incompatible with the skeptical notion of derivability.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:55:09 GMT" } ]
1,360,281,600,000
[ [ "Doerpmund", "Jens", "" ] ]
1302.1537
Didier Dubois
Didier Dubois, Helene Fargier, Henri Prade
Decision-making Under Ordinal Preferences and Comparative Uncertainty
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-157-164
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the problem of finding a preference relation on a set of acts from the knowledge of an ordering on events (subsets of states of the world) describing the decision-maker's (DM's) uncertainty and an ordering on the consequences of acts describing the DM's preferences. However, contrary to classical approaches to decision theory, we try to do this without resorting to any numerical representation of utility or uncertainty, and without even using any qualitative scale on which both uncertainty and preference could be mapped. It is shown that although many axioms of Savage's theory can be preserved, and despite the intuitive appeal of the method for constructing a preference over acts, the approach is inconsistent with a probabilistic representation of uncertainty, but leads to the kind of uncertainty theory encountered in non-monotonic reasoning (especially preferential and rational inference), closely related to possibility theory. Moreover, the method turns out either to be very indecisive or to lead to very risky decisions, although its basic principles look sound. This paper raises the question of the very possibility of purely symbolic approaches to Savage-like decision-making under uncertainty and obtains preliminary negative results.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:55:15 GMT" } ]
1,360,281,600,000
[ [ "Dubois", "Didier", "" ], [ "Fargier", "Helene", "" ], [ "Prade", "Henri", "" ] ]
1302.1540
Judy Goldsmith
Judy Goldsmith, Michael L. Littman, Martin Mundhenk
The Complexity of Plan Existence and Evaluation in Probabilistic Domains
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-182-189
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine the computational complexity of testing and finding small plans in probabilistic planning domains with succinct representations. We find that many problems of interest are complete for a variety of complexity classes: NP, co-NP, PP, NP^PP, co-NP^PP, and PSPACE. Of these, the probabilistic classes PP and NP^PP are likely to be of special interest in the field of uncertainty in artificial intelligence and are deserving of additional study. These results suggest a fruitful direction of future algorithmic development.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:55:32 GMT" } ]
1,360,281,600,000
[ [ "Goldsmith", "Judy", "" ], [ "Littman", "Michael L.", "" ], [ "Mundhenk", "Martin", "" ] ]
1302.1541
Carla P. Gomes
Carla P. Gomes, Bart Selman
Algorithm Portfolio Design: Theory vs. Practice
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-190-197
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic algorithms are among the best for solving computationally hard search and reasoning problems. The runtime of such procedures is characterized by a random variable. Different algorithms give rise to different probability distributions. One can take advantage of such differences by combining several algorithms into a portfolio, and running them in parallel or interleaving them on a single processor. We provide a detailed evaluation of the portfolio approach on distributions of hard combinatorial search problems. We show under what conditions the portfolio approach can have a dramatic computational advantage over the best traditional methods.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:55:37 GMT" } ]
1,360,281,600,000
[ [ "Gomes", "Carla P.", "" ], [ "Selman", "Bart", "" ] ]
1302.1543
Adam J. Grove
Adam J. Grove, Joseph Y. Halpern
Probability Update: Conditioning vs. Cross-Entropy
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-208-214
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conditioning is the generally agreed-upon method for updating probability distributions when one learns that an event is certainly true. But it has been argued that we need other rules, in particular the rule of cross-entropy minimization, to handle updates that involve uncertain information. In this paper we re-examine such a case: van Fraassen's Judy Benjamin problem, which in essence asks how one might update given the value of a conditional probability. We argue that -- contrary to the suggestions in the literature -- it is possible to use simple conditionalization in this case, and thereby obtain answers that agree fully with intuition. This contrasts with proposals such as cross-entropy, which are easier to apply but can give unsatisfactory answers. Based on the lessons from this example, we speculate on some general philosophical issues concerning probability update.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:55:49 GMT" } ]
1,360,281,600,000
[ [ "Grove", "Adam J.", "" ], [ "Halpern", "Joseph Y.", "" ] ]
1302.1546
Luis D. Hernandez
Luis D. Hernandez, Serafin Moral
Inference with Idempotent Valuations
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-229-237
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Valuation-based systems satisfying an idempotent property are studied. A partial order is defined between the valuations, giving them a lattice structure. Then, two different strategies are introduced to represent valuations: as the infimum of the most informative valuations or as the supremum of the least informative ones. We study how to carry out computations efficiently under both representations. The particular cases of finite sets and convex polytopes are considered.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:56:12 GMT" } ]
1,360,281,600,000
[ [ "Hernandez", "Luis D.", "" ], [ "Moral", "Serafin", "" ] ]
1302.1548
Eric J. Horvitz
Eric J. Horvitz, Adam Seiver
Time-Critical Reasoning: Representations and Application
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-250-257
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We review the problem of time-critical action and discuss a reformulation that shifts knowledge acquisition from the assessment of complex temporal probabilistic dependencies to the direct assessment of time-dependent utilities over key outcomes of interest. We dwell on a class of decision problems characterized by the centrality of diagnosing and reacting in a timely manner to pathological processes. We motivate key ideas in the context of trauma-care triage and transportation decisions.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:56:24 GMT" } ]
1,360,281,600,000
[ [ "Horvitz", "Eric J.", "" ], [ "Seiver", "Adam", "" ] ]
1302.1550
Manfred Jaeger
Manfred Jaeger
Relational Bayesian Networks
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-266-273
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new method is developed to represent probabilistic relations on multiple random events. Where previously knowledge bases containing probabilistic rules were used for this purpose, here a probability distribution over the relations is directly represented by a Bayesian network. By using a powerful way of specifying conditional probability distributions in these networks, the resulting formalism is more expressive than the previous ones. In particular, it provides for constraints on equalities of events, and it allows one to define complex, nested combination functions.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:57:05 GMT" } ]
1,360,281,600,000
[ [ "Jaeger", "Manfred", "" ] ]
1302.1551
Radim Jirousek
Radim Jirousek
Composition of Probability Measures on Finite Spaces
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-274-281
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decomposable models and Bayesian networks can be defined as sequences of oligo-dimensional probability measures connected by operators of composition. Preliminary results suggest that the probabilistic models allowing for effective computational procedures are represented by sequences possessing a special property; we shall call them perfect sequences. The paper lays down the elementary foundation necessary for further study of the iterative application of operators of composition. We aim to develop a technique that describes several graph models in a unifying way. We are convinced that practically all theoretical results and procedures connected with decomposable models and Bayesian networks can be translated into the terminology introduced in this paper. For example, the complexity of computational procedures in these models depends closely on the possibility of changing the ordering of the oligo-dimensional measures defining the model. Therefore, in this paper, much attention is paid to the possibility of changing the ordering of the operators of composition.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:57:13 GMT" } ]
1,360,281,600,000
[ [ "Jirousek", "Radim", "" ] ]
1302.1553
Uffe Kj{\ae}rulff
Uffe Kj{\ae}rulff
Nested Junction Trees
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-294-301
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The efficiency of inference in both the Hugin and, most notably, the Shafer-Shenoy architectures can be improved by exploiting the independence relations induced by the incoming messages of a clique. That is, the message to be sent from a clique can be computed via a factorization of the clique potential in the form of a junction tree. In this paper we show that by exploiting such nested junction trees in the computation of messages both space and time costs of the conventional propagation methods may be reduced. The paper presents a structured way of exploiting the nested junction trees technique to achieve such reductions. The usefulness of the method is emphasized through a thorough empirical evaluation involving ten large real-world Bayesian networks and the Hugin inference algorithm.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:57:28 GMT" } ]
1,360,281,600,000
[ [ "Kjærulff", "Uffe", "" ] ]
1302.1554
Daphne Koller
Daphne Koller, Avi Pfeffer
Object-Oriented Bayesian Networks
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-302-313
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian networks provide a modeling language and associated inference algorithm for stochastic domains. They have been successfully applied in a variety of medium-scale applications. However, when faced with a large complex domain, the task of modeling using Bayesian networks begins to resemble the task of programming using logical circuits. In this paper, we describe an object-oriented Bayesian network (OOBN) language, which allows complex domains to be described in terms of inter-related objects. We use a Bayesian network fragment to describe the probabilistic relations between the attributes of an object. These attributes can themselves be objects, providing a natural framework for encoding part-of hierarchies. Classes are used to provide a reusable probabilistic model which can be applied to multiple similar objects. Classes also support inheritance of model fragments from a class to a subclass, allowing the common aspects of related classes to be defined only once. Our language has clear declarative semantics: an OOBN can be interpreted as a stochastic functional program, so that it uniquely specifies a probabilistic model. We provide an inference algorithm for OOBNs, and show that much of the structural information encoded by an OOBN--particularly the encapsulation of variables within an object and the reuse of model fragments in different contexts--can also be used to speed up the inference process.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:57:36 GMT" } ]
1,360,281,600,000
[ [ "Koller", "Daphne", "" ], [ "Pfeffer", "Avi", "" ] ]
1302.1555
Alexander V. Kozlov
Alexander V. Kozlov, Daphne Koller
Nonuniform Dynamic Discretization in Hybrid Networks
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-314-325
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider probabilistic inference in general hybrid networks, which include continuous and discrete variables in an arbitrary topology. We reexamine the question of variable discretization in a hybrid network aiming at minimizing the information loss induced by the discretization. We show that a nonuniform partition across all variables as opposed to uniform partition of each variable separately reduces the size of the data structures needed to represent a continuous function. We also provide a simple but efficient procedure for nonuniform partition. To represent a nonuniform discretization in the computer memory, we introduce a new data structure, which we call a Binary Split Partition (BSP) tree. We show that BSP trees can be an exponential factor smaller than the data structures in the standard uniform discretization in multiple dimensions and show how the BSP trees can be used in the standard join tree algorithm. We show that the accuracy of the inference process can be significantly improved by adjusting discretization with evidence. We construct an iterative anytime algorithm that gradually improves the quality of the discretization and the accuracy of the answer on a query. We provide empirical evidence that the algorithm converges.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:57:46 GMT" } ]
1,360,281,600,000
[ [ "Kozlov", "Alexander V.", "" ], [ "Koller", "Daphne", "" ] ]
1302.1556
Henry E. Kyburg Jr.
Henry E. Kyburg Jr
Probabilistic Acceptance
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-326-333
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The idea of fully accepting statements when the evidence has rendered them probable enough faces a number of difficulties. We leave the interpretation of probability largely open, but attempt to suggest a contextual approach to full belief. We show that the difficulties of probabilistic acceptance are not as severe as they are sometimes painted, and that though there are oddities associated with probabilistic acceptance they are in some instances less awkward than the difficulties associated with other nonmonotonic formalisms. We show that the structure at which we arrive provides a natural home for statistical inference.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:57:52 GMT" } ]
1,360,281,600,000
[ [ "Kyburg", "Henry E.", "Jr" ] ]
1302.1557
Kathryn Blackmond Laskey
Kathryn Blackmond Laskey, Suzanne M. Mahoney
Network Fragments: Representing Knowledge for Constructing Probabilistic Models
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-334-341
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In most current applications of belief networks, domain knowledge is represented by a single belief network that applies to all problem instances in the domain. In more complex domains, problem-specific models must be constructed from a knowledge base encoding probabilistic relationships in the domain. Most work in knowledge-based model construction takes the rule as the basic unit of knowledge. We present a knowledge representation framework that permits the knowledge base designer to specify knowledge in larger semantically meaningful units which we call network fragments. Our framework provides for representation of asymmetric independence and canonical intercausal interaction. We discuss the combination of network fragments to form problem-specific models to reason about particular problem instances. The framework is illustrated using examples from the domain of military situation awareness.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:57:59 GMT" } ]
1,360,281,600,000
[ [ "Laskey", "Kathryn Blackmond", "" ], [ "Mahoney", "Suzanne M.", "" ] ]
1302.1558
Yan Lin
Yan Lin, Marek J. Druzdzel
Computational Advantages of Relevance Reasoning in Bayesian Belief Networks
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-342-350
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a computational framework for reasoning in Bayesian belief networks that derives significant advantages from focused inference and relevance reasoning. This framework is based on d-separation and other simple and computationally efficient techniques for pruning irrelevant parts of a network. Our main contribution is a technique that we call relevance-based decomposition. Relevance-based decomposition approaches belief updating in large networks by focusing on their parts and decomposing them into partially overlapping subnetworks. This makes reasoning in some intractable networks possible and, in addition, often results in significant speedup, as the total time taken to update all subnetworks is in practice often considerably less than the time taken to update the network as a whole. We report results of empirical tests that demonstrate the practical significance of our approach.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:58:06 GMT" } ]
1,360,281,600,000
[ [ "Lin", "Yan", "" ], [ "Druzdzel", "Marek J.", "" ] ]
1302.1560
Todd Michael Mansell
Todd Michael Mansell
A Target Classification Decision Aid
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-358-365
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A submarine's sonar team is responsible for detecting, localising and classifying targets using information provided by the platform's sensor suite. The information used to make these assessments is typically uncertain and/or incomplete and is likely to require a measure of confidence in its reliability. Moreover, improvements in sensor and communication technology are resulting in increased amounts of on-platform and off-platform information available for evaluation. This proliferation of imprecise information increases the risk of overwhelming the operator. To assist the task of localisation and classification, a concept demonstration decision aid (Horizon), based on evidential reasoning, has been developed. Horizon is an information fusion software package for representing and fusing imprecise information about the state of the world, expressed across suitable frames of reference. The Horizon software is currently at prototype stage.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:58:18 GMT" } ]
1,360,281,600,000
[ [ "Mansell", "Todd Michael", "" ] ]
1302.1562
Paul-Andre Monney
Paul-Andre Monney
Support and Plausibility Degrees in Generalized Functional Models
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-376-383
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By discussing several examples, the theory of generalized functional models is shown to be very natural for modeling some situations of reasoning under uncertainty. A generalized functional model is a pair (f, P) where f is a function describing the interactions between a parameter variable, an observation variable and a random source, and P is a probability distribution for the random source. Unlike traditional functional models, generalized functional models do not require that there is only one value of the parameter variable that is compatible with an observation and a realization of the random source. As a consequence, the results of the analysis of a generalized functional model are not expressed in terms of probability distributions but rather by support and plausibility functions. The analysis of a generalized functional model is very logical and is inspired from ideas already put forward by R.A. Fisher in his theory of fiducial probability.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:58:30 GMT" } ]
1,360,281,600,000
[ [ "Monney", "Paul-Andre", "" ] ]
1302.1563
Scott B. Morris
Scott B. Morris, Doug Cork, Richard E. Neapolitan
The Cognitive Processing of Causal Knowledge
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-384-391
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a brief description of the probabilistic causal graph model for representing, reasoning with, and learning causal structure using Bayesian networks. It is then argued that this model is closely related to how humans reason with and learn causal structure. It is shown that studies in psychology on discounting (reasoning concerning how the presence of one cause of an effect makes another cause less probable) support the hypothesis that humans reach the same judgments as algorithms for doing inference in Bayesian networks. Next, it is shown how studies by Piaget indicate that humans learn causal structure by observing the same independencies and dependencies as those used by certain algorithms for learning the structure of a Bayesian network. Based on this indication, a subjective definition of causality is forwarded. Finally, methods for further testing the accuracy of these claims are discussed.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:58:35 GMT" } ]
1,360,281,600,000
[ [ "Morris", "Scott B.", "" ], [ "Cork", "Doug", "" ], [ "Neapolitan", "Richard E.", "" ] ]
1302.1567
Solomon Eyal Shimony
Solomon Eyal Shimony, Carmel Domshlak, Eugene Santos Jr
Cost-Sharing in Bayesian Knowledge Bases
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-421-428
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian knowledge bases (BKBs) are a generalization of Bayes networks and weighted proof graphs (WAODAGs) that allow cycles in the causal graph. Reasoning in BKBs requires finding the most probable inferences consistent with the evidence. The cost-sharing heuristic for finding least-cost explanations in WAODAGs was presented and shown to be effective by Charniak and Husain. However, the cycles in BKBs would make the definition of cost-sharing cyclic as well, if applied directly to BKBs. By treating the defining equations of cost-sharing as a system of equations, one can properly define an admissible cost-sharing heuristic for BKBs. Empirical evaluation shows that cost-sharing improves performance significantly when applied to BKBs.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:58:57 GMT" } ]
1,360,281,600,000
[ [ "Shimony", "Solomon Eyal", "" ], [ "Domshlak", "Carmel", "" ], [ "Santos", "Eugene", "Jr" ] ]
1302.1569
Choh Man Teng
Choh Man Teng
Sequential Thresholds: Context Sensitive Default Extensions
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-437-444
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Default logic encounters some conceptual difficulties in representing common sense reasoning tasks. We argue that we should not try to formulate modular default rules that are presumed to work in all or most circumstances. We need to take into account the importance of the context which is continuously evolving during the reasoning process. Sequential thresholding is a quantitative counterpart of default logic which makes explicit the role context plays in the construction of a non-monotonic extension. We present a semantic characterization of generic non-monotonic reasoning, as well as the instantiations pertaining to default logic and sequential thresholding. This provides a link between the two mechanisms as well as a way to integrate the two that can be beneficial to both.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:59:08 GMT" } ]
1,360,281,600,000
[ [ "Teng", "Choh Man", "" ] ]
1302.1570
Moshe Tennenholtz
Moshe Tennenholtz
On Stable Multi-Agent Behavior in Face of Uncertainty
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-445-452
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A stable joint plan should guarantee the achievement of a designer's goal in a multi-agent environment, while ensuring that deviations from the prescribed plan would be detected. We present a computational framework where stable joint plans can be studied, as well as several basic results about the representation, verification and synthesis of stable joint plans.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:59:13 GMT" } ]
1,360,281,600,000
[ [ "Tennenholtz", "Moshe", "" ] ]
1302.1573
Nevin Lianwen Zhang
Nevin Lianwen Zhang, Wenju Liu
Region-Based Approximations for Planning in Stochastic Domains
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-472-480
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is concerned with planning in stochastic domains by means of partially observable Markov decision processes (POMDPs). POMDPs are difficult to solve. This paper identifies a subclass of POMDPs called region observable POMDPs, which are easier to solve and can be used to approximate general POMDPs to arbitrary accuracy.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:59:30 GMT" } ]
1,360,281,600,000
[ [ "Zhang", "Nevin Lianwen", "" ], [ "Liu", "Wenju", "" ] ]
1302.1574
Nevin Lianwen Zhang
Nevin Lianwen Zhang, Li Yan
Independence of Causal Influence and Clique Tree Propagation
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-481-488
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores the role of independence of causal influence (ICI) in Bayesian network inference. ICI allows one to factorize a conditional probability table into smaller pieces. We describe a method for exploiting the factorization in clique tree propagation (CTP) - the state-of-the-art exact inference algorithm for Bayesian networks. We also present empirical results showing that the resulting algorithm is significantly more efficient than the combination of CTP and previous techniques for exploiting ICI.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:59:36 GMT" } ]
1,360,281,600,000
[ [ "Zhang", "Nevin Lianwen", "" ], [ "Yan", "Li", "" ] ]
1302.1575
Nevin Lianwen Zhang
Nevin Lianwen Zhang, Weihong Zhang
Fast Value Iteration for Goal-Directed Markov Decision Processes
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-489-494
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Planning problems where effects of actions are non-deterministic can be modeled as Markov decision processes. Planning problems are usually goal-directed. This paper proposes several techniques for exploiting the goal-directedness to accelerate value iteration, a standard algorithm for solving Markov decision processes. Empirical studies have shown that the techniques can bring about significant speedups.
[ { "version": "v1", "created": "Wed, 6 Feb 2013 15:59:41 GMT" } ]
1,360,281,600,000
[ [ "Zhang", "Nevin Lianwen", "" ], [ "Zhang", "Weihong", "" ] ]
1302.2056
Jose Hernandez-Orallo
Jose Hernandez-Orallo
Complexity distribution of agent policies
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyse the complexity of environments according to the policies that need to be used to achieve high performance. The performance results for a population of policies lead to a distribution that is examined in terms of policy complexity and analysed through several diagrams and indicators. The notion of environment response curve is also introduced, by inverting the performance results into an ability scale. We apply all these concepts, diagrams and indicators to a minimalistic environment class, agent-populated elementary cellular automata, showing how the difficulty, discriminating power and ranges (previous to normalisation) may vary for several environments.
[ { "version": "v1", "created": "Fri, 8 Feb 2013 15:01:20 GMT" } ]
1,360,540,800,000
[ [ "Hernandez-Orallo", "Jose", "" ] ]
1302.2465
Patrick Rodler
Patrick Rodler and Kostyantyn Shchekotykhin and Philipp Fleiss and Gerhard Friedrich
RIO: Minimizing User Interaction in Debugging of Knowledge Bases
arXiv admin note: substantial text overlap with arXiv:1209.3734
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The best currently known interactive debugging systems rely upon some meta-information in terms of fault probabilities in order to improve their efficiency. However, misleading meta-information might result in a dramatic decrease in performance, and its assessment is only possible a-posteriori. Consequently, as long as the actual fault is unknown, there is always some risk of suboptimal interactions. In this work we present a reinforcement learning strategy that continuously adapts its behavior depending on the performance achieved and minimizes the risk of using low-quality meta-information. Therefore, this method is suitable for application scenarios where reliable prior fault estimates are difficult to obtain. Using diverse real-world knowledge bases, we show that the proposed interactive query strategy is scalable, features decent reaction time, and outperforms both entropy-based and no-risk strategies on average w.r.t. the required amount of user interaction.
[ { "version": "v1", "created": "Mon, 11 Feb 2013 12:53:47 GMT" }, { "version": "v2", "created": "Wed, 6 Mar 2013 14:46:03 GMT" } ]
1,362,614,400,000
[ [ "Rodler", "Patrick", "" ], [ "Shchekotykhin", "Kostyantyn", "" ], [ "Fleiss", "Philipp", "" ], [ "Friedrich", "Gerhard", "" ] ]
1302.3549
Silvia Acid
Silvia Acid, Luis M. de Campos
An Algorithm for Finding Minimum d-Separating Sets in Belief Networks
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-3-10
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The criterion commonly used in directed acyclic graphs (dags) for testing graphical independence is the well-known d-separation criterion. It allows us to build graphical representations of dependency models (usually probabilistic dependency models) in the form of belief networks, which make it possible to interpret and manage independence relationships easily, without reference to numerical parameters (conditional probabilities). In this paper, we study the following combinatorial problem: finding the minimum d-separating set for two nodes in a dag. This set would represent the minimum information (in the sense of minimum number of variables) necessary to prevent these two nodes from influencing each other. The solution to this basic problem and some of its extensions can be useful in several ways, as we shall see later. Our solution is based on a two-step process: first, we reduce the original problem to the simpler one of finding a minimum separating set in an undirected graph, and second, we develop an algorithm for solving it.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:11:19 GMT" } ]
1,361,145,600,000
[ [ "Acid", "Silvia", "" ], [ "de Campos", "Luis M.", "" ] ]
1302.3550
John Mark Agosta
John Mark Agosta
Constraining Influence Diagram Structure by Generative Planning: An Application to the Optimization of Oil Spill Response
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-11-19
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper works through the optimization of a real world planning problem, with a combination of a generative planning tool and an influence diagram solver. The problem is taken from an existing application in the domain of oil spill emergency response. The planning agent manages constraints that order sets of feasible equipment employment actions. This is mapped at an intermediate level of abstraction onto an influence diagram. In addition, the planner can apply a surveillance operator that determines observability of the state---the unknown trajectory of the oil. The uncertain world state and the objective function properties are part of the influence diagram structure, but not represented in the planning agent domain. By exploiting this structure under the constraints generated by the planning agent, the influence diagram solution complexity simplifies considerably, and an optimum solution to the employment problem based on the objective function is found. Finding this optimum is equivalent to the simultaneous evaluation of a range of plans. This result is an example of bounded optimality, within the limitations of this hybrid generative planner and influence diagram architecture.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:11:25 GMT" } ]
1,361,145,600,000
[ [ "Agosta", "John Mark", "" ] ]
1302.3551
Satnam Alag
Satnam Alag, Alice M. Agogino
Inference Using Message Propagation and Topology Transformation in Vector Gaussian Continuous Networks
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-20-27
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We extend Gaussian networks - directed acyclic graphs that encode probabilistic relationships between variables - to their vector form. Vector Gaussian continuous networks consist of composite nodes representing multivariates that take continuous values. These vector or composite nodes can represent correlations between parents, as opposed to conventional univariate nodes. We derive rules for inference in these networks based on two methods: message propagation and topology transformation. These two approaches lead to the development of algorithms that can be implemented in either a centralized or a decentralized manner. The domain of application of these networks is monitoring and estimation problems. This new representation, along with the rules for inference developed here, can be used to derive current Bayesian algorithms such as the Kalman filter, and provides a rich foundation to develop new algorithms. We illustrate this process by deriving the decentralized form of the Kalman filter. This work unifies concepts from artificial intelligence and modern control theory.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:11:31 GMT" } ]
1,361,145,600,000
[ [ "Alag", "Satnam", "" ], [ "Agogino", "Alice M.", "" ] ]
1302.3552
Constantin F. Aliferis
Constantin F. Aliferis, Gregory F. Cooper
A Structurally and Temporally Extended Bayesian Belief Network Model: Definitions, Properties, and Modeling Techniques
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-28-39
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We developed the language of Modifiable Temporal Belief Networks (MTBNs) as a structural and temporal extension of Bayesian Belief Networks (BNs) to facilitate normative temporal and causal modeling under uncertainty. In this paper we present definitions of the model, its components, and its fundamental properties. We also discuss how to represent various types of temporal knowledge, with an emphasis on hybrid temporal-explicit time modeling, dynamic structures, avoiding causal temporal inconsistencies, and dealing with models that simultaneously involve actions (decisions) and causal and non-causal associations. We examine the relationships among BNs, Modifiable Belief Networks, and MTBNs with a single temporal granularity, and suggest areas of application suitable to each one of them.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:11:37 GMT" } ]
1,361,145,600,000
[ [ "Aliferis", "Constantin F.", "" ], [ "Cooper", "Gregory F.", "" ] ]
1302.3553
Steen A. Andersson
Steen A. Andersson, David Madigan, Michael D. Perlman
An Alternative Markov Property for Chain Graphs
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-40-48
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graphical Markov models use graphs, either undirected, directed, or mixed, to represent possible dependences among statistical variables. Applications of undirected graphs (UDGs) include models for spatial dependence and image analysis, while acyclic directed graphs (ADGs), which are especially convenient for statistical analysis, arise in such fields as genetics and psychometrics and as models for expert systems and Bayesian belief networks. Lauritzen, Wermuth and Frydenberg (LWF) introduced a Markov property for chain graphs, which are mixed graphs that can be used to represent simultaneously both causal and associative dependencies and which include both UDGs and ADGs as special cases. In this paper an alternative Markov property (AMP) for chain graphs is introduced, which in some ways is a more direct extension of the ADG Markov property than is the LWF property for chain graphs.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:11:42 GMT" } ]
1,361,145,600,000
[ [ "Andersson", "Steen A.", "" ], [ "Madigan", "David", "" ], [ "Perlman", "Michael D.", "" ] ]
1302.3554
Ella M. Atkins
Ella M. Atkins, Edmund H. Durfee, Kang G. Shin
Plan Development using Local Probabilistic Models
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-49-56
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Approximate models of world state transitions are necessary when building plans for complex systems operating in dynamic environments. External event probabilities can depend on state feature values as well as time spent in that particular state. We assign temporally-dependent probability functions to state transitions. These functions are used to locally compute state probabilities, which are then used to select highly probable goal paths and eliminate improbable states. This probabilistic model has been implemented in the Cooperative Intelligent Real-time Control Architecture (CIRCA), which combines an AI planner with a separate real-time system such that plans are developed, scheduled, and executed with real-time guarantees. We present flight simulation tests that demonstrate how our probabilistic model may improve CIRCA performance.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:11:48 GMT" } ]
1,361,145,600,000
[ [ "Atkins", "Ella M.", "" ], [ "Durfee", "Edmund H.", "" ], [ "Shin", "Kang G.", "" ] ]
1302.3555
Donald Bamber
Donald Bamber
Entailment in Probability of Thresholded Generalizations
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-57-64
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A nonmonotonic logic of thresholded generalizations is presented. Given propositions A and B from a language L and a positive integer k, the thresholded generalization A=>B{k} means that the conditional probability P(B|A) falls short of one by no more than c*d^k. A two-level probability structure is defined. At the lower level, a model is defined to be a probability function on L. At the upper level, there is a probability distribution over models. A definition is given of what it means for a collection of thresholded generalizations to entail another thresholded generalization. This nonmonotonic entailment relation, called "entailment in probability", has the feature that its conclusions are "probabilistically trustworthy" meaning that, given true premises, it is improbable that an entailed conclusion would be false. A procedure is presented for ascertaining whether any given collection of premises entails any given conclusion. It is shown that entailment in probability is closely related to Goldszmidt and Pearl's System-Z^+, thereby demonstrating that the conclusions of System-Z^+ are probabilistically trustworthy.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:11:54 GMT" } ]
1,361,145,600,000
[ [ "Bamber", "Donald", "" ] ]
1302.3557
Mathias Bauer
Mathias Bauer
Approximations for Decision Making in the Dempster-Shafer Theory of Evidence
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-73-80
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The computational complexity of reasoning within the Dempster-Shafer theory of evidence is one of the main points of criticism this formalism has to face. To overcome this difficulty various approximation algorithms have been suggested that aim at reducing the number of focal elements in the belief functions involved. Besides introducing a new algorithm using this method, this paper describes an empirical study that examines the appropriateness of these approximation procedures in decision making situations. It presents the empirical findings and discusses the various tradeoffs that have to be taken into account when actually applying one of these methods.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:12:06 GMT" } ]
1,361,145,600,000
[ [ "Bauer", "Mathias", "" ] ]
1302.3559
Salem Benferhat
Salem Benferhat, Didier Dubois, Henri Prade
Coping with the Limitations of Rational Inference in the Framework of Possibility Theory
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-90-97
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Possibility theory offers a framework where both Lehmann's "preferential inference" and the more productive (but less cautious) "rational closure inference" can be represented. However, there are situations where the second inference does not provide the expected results, either because it cannot produce them or because it even provides counter-intuitive conclusions. This state of affairs is not due to the principle of selecting a unique ordering of interpretations (which can be encoded by one possibility distribution), but rather to the absence of constraints expressing pieces of knowledge we have implicitly in mind. It is advocated in this paper that constraints induced by independence information can help find the right ordering of interpretations. In particular, independence constraints can be systematically assumed with respect to formulas composed of literals which do not appear in the conditional knowledge base, or for default rules with respect to situations which are "normal" according to the other default rules in the base. The notion of independence which is used can be easily expressed in the qualitative setting of possibility theory. Moreover, when a counter-intuitive plausible conclusion of a set of defaults is in its rational closure but not in its preferential closure, it is always possible to repair the set of defaults so as to produce the desired conclusion.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:12:18 GMT" } ]
1,361,145,600,000
[ [ "Benferhat", "Salem", "" ], [ "Dubois", "Didier", "" ], [ "Prade", "Henri", "" ] ]
1302.3560
Blai Bonet
Blai Bonet, Hector Geffner
Arguing for Decisions: A Qualitative Model of Decision Making
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-98-105
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a qualitative model of decision making with two aims: to describe how people make simple decisions and to enable computer programs to do the same. Current approaches based on Planning or Decision Theory either ignore uncertainty and tradeoffs, or provide languages and algorithms that are too complex for this task. The proposed model provides a language based on rules, a semantics based on high probabilities and lexicographical preferences, and a transparent decision procedure where reasons for and against decisions interact. The model is no substitute for Decision Theory, yet for decisions that people find easy to explain it may provide an appealing alternative.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:12:23 GMT" } ]
1,361,145,600,000
[ [ "Bonet", "Blai", "" ], [ "Geffner", "Hector", "" ] ]
1302.3562
Craig Boutilier
Craig Boutilier, Nir Friedman, Moises Goldszmidt, Daphne Koller
Context-Specific Independence in Bayesian Networks
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-115-123
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian networks provide a language for qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well-known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation scheme - tree-structured CPTs - for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:12:34 GMT" } ]
1,361,145,600,000
[ [ "Boutilier", "Craig", "" ], [ "Friedman", "Nir", "" ], [ "Goldszmidt", "Moises", "" ], [ "Koller", "Daphne", "" ] ]
1302.3563
John Breese
John S. Breese, David Heckerman
Decision-Theoretic Troubleshooting: A Framework for Repair and Experiment
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-124-132
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop and extend existing decision-theoretic methods for troubleshooting a nonfunctioning device. Traditionally, diagnosis with Bayesian networks has focused on belief updating---determining the probabilities of various faults given current observations. In this paper, we extend this paradigm to include taking actions. In particular, we consider three classes of actions: (1) we can make observations regarding the behavior of a device and infer likely faults as in traditional diagnosis, (2) we can repair a component and then observe the behavior of the device to infer likely faults, and (3) we can change the configuration of the device, observe its new behavior, and infer the likelihood of faults. Analysis of the latter two classes of troubleshooting actions requires incorporating notions of persistence into the belief-network formalism used for probabilistic inference.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:12:40 GMT" }, { "version": "v2", "created": "Sun, 17 May 2015 23:18:21 GMT" } ]
1,431,993,600,000
[ [ "Breese", "John S.", "" ], [ "Heckerman", "David", "" ] ]
1302.3568
Lonnie Chrisman
Lonnie Chrisman
Independence with Lower and Upper Probabilities
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-169-177
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is shown that the ability of the interval probability representation to capture epistemological independence is severely limited. Two events are epistemologically independent if knowledge of the first event does not alter belief (i.e., probability bounds) about the second. However, independence in this form can only exist in a 2-monotone probability function in degenerate cases, i.e., if the prior bounds are either point probabilities or entirely vacuous. Additional limitations are characterized for other classes of lower probabilities as well. It is argued that these phenomena are simply a matter of interpretation. They appear to be limitations when one interprets probability bounds as a measure of epistemological indeterminacy (i.e., uncertainty arising from a lack of knowledge), but are exactly as one would expect when probability intervals are interpreted as representations of ontological indeterminacy (indeterminacy introduced by structural approximations). The ontological interpretation is introduced and discussed.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:13:09 GMT" } ]
1,361,145,600,000
[ [ "Chrisman", "Lonnie", "" ] ]
1302.3569
Lonnie Chrisman
Lonnie Chrisman
Propagation of 2-Monotone Lower Probabilities on an Undirected Graph
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-178-185
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lower and upper probabilities, also known as Choquet capacities, are widely used as a convenient representation for sets of probability distributions. This paper presents a graphical decomposition and exact propagation algorithm for computing marginal posteriors of 2-monotone lower probabilities (equivalently, 2-alternating upper probabilities).
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:13:15 GMT" } ]
1,361,145,600,000
[ [ "Chrisman", "Lonnie", "" ] ]
1302.3570
Fabio Gagliardi Cozman
Fabio Gagliardi Cozman, Eric Krotkov
Quasi-Bayesian Strategies for Efficient Plan Generation: Application to the Planning to Observe Problem
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-186-193
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quasi-Bayesian theory uses convex sets of probability distributions and expected loss to represent preferences about plans. The theory focuses on decision robustness, i.e., the extent to which plans are affected by deviations in subjective assessments of probability. The present work presents solutions for plan generation when robustness of probability assessments must be included: plans contain information about the robustness of certain actions. The surprising result is that some problems can be solved faster in the Quasi-Bayesian framework than within usual Bayesian theory. We investigate this on the planning to observe problem, i.e., an agent must decide whether to take new observations or not. The fundamental question is: How, and how much, to search for a "best" plan, based on the robustness of probability assessments? Plan generation algorithms are derived in the context of material classification with an acoustic robotic probe. A package that constructs Quasi-Bayesian plans is available through anonymous ftp.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:13:21 GMT" } ]
1,478,217,600,000
[ [ "Cozman", "Fabio Gagliardi", "" ], [ "Krotkov", "Eric", "" ] ]
1302.3571
Bruce D'Ambrosio
Bruce D'Ambrosio, Scott Burgess
Some Experiments with Real-Time Decision Algorithms
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-194-202
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time Decision algorithms are a class of incremental resource-bounded [Horvitz, 89] or anytime [Dean, 93] algorithms for evaluating influence diagrams. We present a test domain for real-time decision algorithms, and the results of experiments with several Real-time Decision Algorithms in this domain. The results demonstrate high performance for two algorithms, a decision-evaluation variant of Incremental Probabilistic Inference [D'Ambrosio 93] and a variant of an algorithm suggested by Goldszmidt, [Goldszmidt, 95], PK-reduced. We discuss the implications of these experimental results and explore the broader applicability of these algorithms.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:13:27 GMT" } ]
1,361,145,600,000
[ [ "D'Ambrosio", "Bruce", "" ], [ "Burgess", "Scott", "" ] ]
1302.3572
Rina Dechter
Rina Dechter
Bucket Elimination: A Unifying Framework for Several Probabilistic Inference Algorithms
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-211-219
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic inference algorithms for finding the most probable explanation, the maximum a posteriori hypothesis, and the maximum expected utility, and for updating belief are reformulated as an elimination-type algorithm called bucket elimination. This emphasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining conditioning and elimination within this framework. Bounds on complexity are given for all the algorithms as a function of the problem's structure.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:13:33 GMT" } ]
1,361,145,600,000
[ [ "Dechter", "Rina", "" ] ]
1302.3573
Rina Dechter
Rina Dechter
Topological Parameters for Time-Space Tradeoff
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-220-227
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a family of algorithms combining tree-clustering with conditioning that trade space for time. Such algorithms are useful for reasoning in probabilistic and deterministic networks as well as for accomplishing optimization tasks. By analyzing the problem structure it will be possible to select from a spectrum the algorithm that best meets a given time-space specification.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:13:38 GMT" } ]
1,361,145,600,000
[ [ "Dechter", "Rina", "" ] ]
1302.3574
AnHai Doan
AnHai Doan, Peter Haddawy
Sound Abstraction of Probabilistic Actions in The Constraint Mass Assignment Framework
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-228-235
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides a formal and practical framework for sound abstraction of probabilistic actions. We start by precisely defining the concept of sound abstraction within the context of finite-horizon planning (where each plan is a finite sequence of actions). Next we show that such abstraction cannot be performed within the traditional probabilistic action representation, which models a world with a single probability distribution over the state space. We then present the constraint mass assignment representation, which models the world with a set of probability distributions and is a generalization of mass assignment representations. Within this framework, we present sound abstraction procedures for three types of action abstraction. We end the paper with discussions and related work on sound and approximate abstraction. We give pointers to papers in which we discuss other sound abstraction-related issues, including applications, estimating loss due to abstraction, and automatically generating abstraction hierarchies.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:13:44 GMT" } ]
1,361,145,600,000
[ [ "Doan", "AnHai", "" ], [ "Haddawy", "Peter", "" ] ]
1302.3575
Didier Dubois
Didier Dubois, Henri Prade
Belief Revision with Uncertain Inputs in the Possibilistic Setting
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-236-243
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discusses belief revision under uncertain inputs in the framework of possibility theory. Revision can be based on two possible definitions of the conditioning operation: one based on the min operator, which requires only a purely ordinal scale, and another based on the product, for which a richer structure is needed and which is a particular case of Dempster's rule of conditioning. Besides, revision under uncertain inputs can be understood in two different ways depending on whether the input is viewed, or not, as a constraint to enforce. Moreover, it is shown that M.A. Williams' transmutations, originally defined in the setting of Spohn's functions, can be captured in this framework, as well as Boutilier's natural revision.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:13:49 GMT" } ]
1,361,145,600,000
[ [ "Dubois", "Didier", "" ], [ "Prade", "Henri", "" ] ]
1302.3576
Yousri El Fattah
Yousri El Fattah, Rina Dechter
An Evaluation of Structural Parameters for Probabilistic Reasoning: Results on Benchmark Circuits
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-244-251
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many algorithms for processing probabilistic networks are dependent on the topological properties of the problem's structure. Such algorithms (e.g., clustering, conditioning) are effective only if the problem has a sparse graph captured by parameters such as tree width and cycle-cutset size. In this paper we initiate a study to determine the potential of structure-based algorithms in real-life applications. We analyze empirically the structural properties of problems coming from the circuit diagnosis domain. Specifically, we locate those properties that capture the effectiveness of clustering and conditioning as well as of a family of conditioning+clustering algorithms designed to gradually trade space for time. We perform our analysis on 11 benchmark circuits widely used in the testing community. We also report on the effect of ordering heuristics on tree-clustering and show that, on our benchmarks, the well-known max-cardinality ordering is substantially inferior to an ordering called min-degree.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:13:55 GMT" } ]
1,361,145,600,000
[ [ "Fattah", "Yousri El", "" ], [ "Dechter", "Rina", "" ] ]
1302.3578
Nir Friedman
Nir Friedman, Joseph Y. Halpern
A Qualitative Markov Assumption and its Implications for Belief Change
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-263-273
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. Roughly, revision treats a surprising observation as a sign that previous beliefs were wrong, while update treats a surprising observation as an indication that the world has changed. In general, we would expect that an agent making an observation may both want to revise some earlier beliefs and assume that some change has occurred in the world. We define a novel approach to belief change that allows us to do this, by applying ideas from probability theory in a qualitative setting. The key idea is to use a qualitative Markov assumption, which says that state transitions are independent. We show that a recent approach to modeling qualitative uncertainty using plausibility measures allows us to make such a qualitative Markov assumption in a relatively straightforward way, and show how the Markov assumption can be used to provide an attractive belief-change model.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:14:08 GMT" } ]
1,361,145,600,000
[ [ "Friedman", "Nir", "" ], [ "Halpern", "Joseph Y.", "" ] ]
1302.3581
Vu A. Ha
Vu A. Ha, Peter Haddawy
Theoretical Foundations for Abstraction-Based Probabilistic Planning
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-291-298
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling worlds and actions under uncertainty is one of the central problems in the framework of decision-theoretic planning. The representation must be general enough to capture real-world problems but at the same time it must provide a basis upon which theoretical results can be derived. The central notion in the framework we propose here is that of the affine-operator, which serves as a tool for constructing (convex) sets of probability distributions, and which can be considered as a generalization of belief functions and interval mass assignments. Uncertainty in the state of the worlds is modeled with sets of probability distributions, represented by affine-trees while actions are defined as tree-manipulators. A small set of key properties of the affine-operator is presented, forming the basis for most existing operator-based definitions of probabilistic action projection and action abstraction. We derive and prove correct three projection rules, which vividly illustrate the precision-complexity tradeoff in plan projection. Finally, we show how the three types of action abstraction identified by Haddawy and Doan are manifested in the present framework.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:14:25 GMT" } ]
1,361,145,600,000
[ [ "Ha", "Vu A.", "" ], [ "Haddawy", "Peter", "" ] ]
1302.3582
Max Henrion
Max Henrion, Malcolm Pradhan, Brendan del Favero, Kurt Huang, Gregory M. Provan, Paul O'Rorke
Why Is Diagnosis Using Belief Networks Insensitive to Imprecision In Probabilities?
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-307-314
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent research has found that diagnostic performance with Bayesian belief networks is often surprisingly insensitive to imprecision in the numerical probabilities. For example, the authors have recently completed an extensive study in which they applied random noise to the numerical probabilities in a set of belief networks for medical diagnosis, subsets of the CPCS network, a subset of the QMR (Quick Medical Reference) focused on liver and bile diseases. The diagnostic performance in terms of the average probabilities assigned to the actual diseases showed small sensitivity even to large amounts of noise. In this paper, we summarize the findings of this study and discuss possible explanations of this low sensitivity. One reason is that the criterion for performance is average probability of the true hypotheses, rather than average error in probability, which is insensitive to symmetric noise distributions. But, we show that even asymmetric, logodds-normal noise has modest effects. A second reason is that the gold-standard posterior probabilities are often near zero or one, and are little disturbed by noise.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:14:34 GMT" } ]
1,361,145,600,000
[ [ "Henrion", "Max", "" ], [ "Pradhan", "Malcolm", "" ], [ "del Favero", "Brendan", "" ], [ "Huang", "Kurt", "" ], [ "Provan", "Gregory M.", "" ], [ "O'Rorke", "Paul", "" ] ]
1302.3583
Michael C. Horsch
Michael C. Horsch, David L. Poole
Flexible Policy Construction by Information Refinement
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-315-324
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report on work towards flexible algorithms for solving decision problems represented as influence diagrams. An algorithm is given to construct a tree structure for each decision node in an influence diagram. Each tree represents a decision function and is constructed incrementally. The improvements to the tree converge to the optimal decision function (neglecting computational costs) and the asymptotic behaviour is only a constant factor worse than dynamic programming techniques, counting the number of Bayesian network queries. Empirical results show how expected utility increases with the size of the tree and the number of Bayesian net calculations.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:14:40 GMT" } ]
1,361,145,600,000
[ [ "Horsch", "Michael C.", "" ], [ "Poole", "David L.", "" ] ]
1302.3584
Kurt Huang
Kurt Huang, Max Henrion
Efficient Search-Based Inference for Noisy-OR Belief Networks: TopEpsilon
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-325-331
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inference algorithms for arbitrary belief networks are impractical for large, complex belief networks. Inference algorithms for specialized classes of belief networks have been shown to be more efficient. In this paper, we present a search-based algorithm for approximate inference on arbitrary, noisy-OR belief networks, generalizing earlier work on search-based inference for two-level, noisy-OR belief networks. Initial experimental results appear promising.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:14:45 GMT" } ]
1,361,145,600,000
[ [ "Huang", "Kurt", "" ], [ "Henrion", "Max", "" ] ]
1302.3585
Pablo H. Ibarguengoytia
Pablo H. Ibarguengoytia, Luis Enrique Sucar, Sunil Vadera
A Probabilistic Model For Sensor Validation
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-332-339
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The validation of data from sensors has become an important issue in the operation and control of modern industrial plants. One approach is to use knowledge-based techniques to detect inconsistencies in measured data. This article presents a probabilistic model for the detection of such inconsistencies. Based on probability propagation, this method is able to find the existence of a possible fault among the set of sensors. That is, if an error exists, many sensors present an apparent fault due to the propagation from the sensor(s) with a real fault. So the fault detection mechanism can only tell if a sensor has a potential fault, but it cannot tell if the fault is real or apparent. So the central problem is to develop a theory, and then an algorithm, for distinguishing real and apparent faults, given that one or more sensors can fail at the same time. This article then presents an approach based on two levels: (i) probabilistic reasoning, to detect a potential fault, and (ii) constraint management, to distinguish the real fault from the apparent ones. The proposed approach is exemplified by applying it to a power plant model.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:14:51 GMT" } ]
1,361,145,600,000
[ [ "Ibarguengoytia", "Pablo H.", "" ], [ "Sucar", "Luis Enrique", "" ], [ "Vadera", "Sunil", "" ] ]
1302.3586
Tommi S. Jaakkola
Tommi S. Jaakkola, Michael I. Jordan
Computing Upper and Lower Bounds on Likelihoods in Intractable Networks
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-340-348
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present deterministic techniques for computing upper and lower bounds on marginal probabilities in sigmoid and noisy-OR networks. These techniques become useful when the size of the network (or clique size) precludes exact computations. We illustrate the tightness of the bounds by numerical experiments.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:14:57 GMT" } ]
1,361,145,600,000
[ [ "Jaakkola", "Tommi S.", "" ], [ "Jordan", "Michael I.", "" ] ]
1302.3587
Allan Leck Jensen
Allan Leck Jensen, Finn Verner Jensen
MIDAS - An Influence Diagram for Management of Mildew in Winter Wheat
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-349-356
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a prototype of a decision support system for management of the fungal disease mildew in winter wheat. The prototype is based on an influence diagram which is used to determine the optimal time and dose of mildew treatments. This involves multiple decision opportunities over time, stochasticity, inaccurate information and incomplete knowledge. The paper describes the practical and theoretical problems encountered during the construction of the influence diagram, and also the experience with the prototype.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:15:02 GMT" } ]
1,361,145,600,000
[ [ "Jensen", "Allan Leck", "" ], [ "Jensen", "Finn Verner", "" ] ]
1302.3588
Alexander V. Kozlov
Alexander V. Kozlov, Jaswinder Pal Singh
Computational Complexity Reduction for BN2O Networks Using Similarity of States
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-357-364
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although probabilistic inference in a general Bayesian belief network is an NP-hard problem, computation time for inference can be reduced in most practical cases by exploiting domain knowledge and by making approximations in the knowledge representation. In this paper we introduce the property of similarity of states and a new method for approximate knowledge representation and inference which is based on this property. We define two or more states of a node to be similar when the ratio of their probabilities, the likelihood ratio, does not depend on the instantiations of the other nodes in the network. We show that the similarity of states exposes redundancies in the joint probability distribution which can be exploited to reduce the computation time of probabilistic inference in networks with multiple similar states, and that the computational complexity in the networks with exponentially many similar states might be polynomial. We demonstrate our ideas on the example of a BN2O network -- a two layer network often used in diagnostic problems -- by reducing it to a very close network with multiple similar states. We show that the answers to practical queries converge very fast to the answers obtained with the original network. The maximum error is as low as 5% for models that require only 10% of the computation time needed by the original BN2O model.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:15:08 GMT" } ]
1,478,217,600,000
[ [ "Kozlov", "Alexander V.", "" ], [ "Singh", "Jaswinder Pal", "" ] ]
1302.3589
Henry E. Kyburg Jr.
Henry E. Kyburg Jr
Uncertain Inferences and Uncertain Conclusions
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-365-372
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Uncertainty may be taken to characterize inferences, their conclusions, their premises or all three. Under some treatments of uncertainty, the inference itself is never characterized by uncertainty. We explore both the significance of uncertainty in the premises and in the conclusion of an argument that involves uncertainty. We argue that for uncertainty to characterize the conclusion of an inference is natural, but that there is an interplay between uncertainty in the premises and uncertainty in the procedure of argument itself. We show that it is possible in principle to incorporate all uncertainty in the premises, rendering uncertainty arguments deductively valid. But we then argue (1) that this does not reflect human argument, (2) that it is computationally costly, and (3) that the gain in simplicity obtained by allowing uncertainty inference can sometimes outweigh the loss of flexibility it entails.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:15:14 GMT" } ]
1,361,145,600,000
[ [ "Kyburg", "Henry E.", "Jr" ] ]
1302.3591
Suzanne M. Mahoney
Suzanne M. Mahoney, Kathryn Blackmond Laskey
Network Engineering for Complex Belief Networks
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-389-396
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Like any large system development effort, the construction of a complex belief network model requires systems engineering to manage the design and construction process. We propose a rapid prototyping approach to network engineering. We describe criteria for identifying network modules and the use of "stubs" to represent not-yet-constructed modules. We propose an object-oriented representation for belief networks which captures the semantics of the problem in addition to conditional independencies and probabilities. Methods for evaluating complex belief network models are discussed. The ideas are illustrated with examples from a large belief network construction problem in the military intelligence domain.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:15:26 GMT" } ]
1,361,145,600,000
[ [ "Mahoney", "Suzanne M.", "" ], [ "Laskey", "Kathryn Blackmond", "" ] ]
1302.3592
Liem Ngo
Liem Ngo
Probabilistic Disjunctive Logic Programming
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-397-404
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a framework for combining Disjunctive Logic Programming and Poole's Probabilistic Horn Abduction. We use the concept of hypothesis to specify the probability structure. We consider the case in which probabilistic information is not available. Instead of using probability intervals, we allow for the specification of the probabilities of disjunctions. Because minimal models are used as characteristic models in disjunctive logic programming, we apply the principle of indifference to the set of minimal models to derive default probability values. We define the concepts of explanation and partial explanation of a formula, and use them to determine the default probability distribution(s) induced by a program. An algorithm for calculating the default probability of a goal is presented.
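A toy sketch, not from the paper, of the indifference step described above: each minimal model of a (hypothetical) disjunctive program receives equal weight, and the default probability of an atom is the fraction of minimal models that contain it. The models listed are invented for illustration.

# Illustrative minimal models of some hypothetical disjunctive program.
minimal_models = [
    {"a", "c"},
    {"b", "c"},
    {"b", "d"},
]

def default_prob(atom):
    # Principle of indifference: every minimal model gets weight 1/len(models).
    return sum(atom in m for m in minimal_models) / len(minimal_models)

print(default_prob("c"))   # 2/3
print(default_prob("a"))   # 1/3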
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:15:32 GMT" } ]
1,361,145,600,000
[ [ "Ngo", "Liem", "" ] ]
1302.3594
Mark Alan Peot
Mark Alan Peot
Geometric Implications of the Naive Bayes Assumption
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-414-419
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A naive (or Idiot) Bayes network is a network with a single hypothesis node and several observations that are conditionally independent given the hypothesis. We recently surveyed a number of members of the UAI community and discovered a general lack of understanding of the implications of the Naive Bayes assumption on the kinds of problems that can be solved by these networks. It has long been recognized [Minsky 61] that if observations are binary, the decision surfaces in these networks are hyperplanes. We extend this result (hyperplane separability) to Naive Bayes networks with m-ary observations. In addition, we illustrate the effect of observation-observation dependencies on decision surfaces. Finally, we discuss the implications of these results on knowledge acquisition and research in learning.
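The hyperplane claim can be checked numerically: for a two-class naive Bayes model with binary observations, the log posterior odds is an affine function of the observation vector, so the decision surface (log odds equal to zero) is a hyperplane. The sketch below uses randomly generated parameters; none of the numbers come from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_obs = 5
prior = np.array([0.4, 0.6])                     # P(H=0), P(H=1)
theta = rng.uniform(0.1, 0.9, size=(2, n_obs))   # theta[h, k] = P(O_k = 1 | H = h)

def log_odds_direct(x):
    # Brute-force log P(H=1 | x) - log P(H=0 | x) via the naive Bayes factorization.
    lik = theta ** x * (1 - theta) ** (1 - x)
    joint = prior * lik.prod(axis=1)
    return np.log(joint[1]) - np.log(joint[0])

# The same quantity written as w.x + b: a hyperplane in observation space.
w = np.log(theta[1] / theta[0]) - np.log((1 - theta[1]) / (1 - theta[0]))
b = np.log(prior[1] / prior[0]) + np.log((1 - theta[1]) / (1 - theta[0])).sum()

x = rng.integers(0, 2, size=n_obs).astype(float)
assert np.isclose(log_odds_direct(x), w @ x + b)
print("the decision surface is the hyperplane w.x + b = 0")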
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:15:44 GMT" } ]
1,361,145,600,000
[ [ "Peot", "Mark Alan", "" ] ]
1302.3595
Judea Pearl
Judea Pearl, Rina Dechter
Identifying Independencies in Causal Graphs with Feedback
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-420-426
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that the d-separation criterion constitutes a valid test for conditional independence relationships that are induced by feedback systems involving discrete variables.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:15:49 GMT" } ]
1,361,145,600,000
[ [ "Pearl", "Judea", "" ], [ "Dechter", "Rina", "" ] ]
1302.3596
Kim-Leng Poh
Kim-Leng Poh, Eric J. Horvitz
A Graph-Theoretic Analysis of Information Value
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-427-435
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We derive qualitative relationships about the informational relevance of variables in graphical decision models based on a consideration of the topology of the models. Specifically, we identify dominance relations for the expected value of information on chance variables in terms of their position and relationships in influence diagrams. The qualitative relationships can be harnessed to generate nonnumerical procedures for ordering uncertain variables in a decision model by their informational relevance.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:15:55 GMT" } ]
1,361,145,600,000
[ [ "Poh", "Kim-Leng", "" ], [ "Horvitz", "Eric J.", "" ] ]
1302.3597
David L Poole
David L. Poole
A Framework for Decision-Theoretic Planning I: Combining the Situation Calculus, Conditional Plans, Probability and Utility
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-436-445
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper shows how we can combine logical representations of actions and decision theory in a manner that seems natural for both. In particular we assume an axiomatization of the domain in terms of the situation calculus, using what is essentially Reiter's solution to the frame problem, in terms of the completion of the axioms defining the state change. Uncertainty is handled in terms of the independent choice logic, which allows for independent choices and a logic program that gives the consequences of the choices. The consequences include a specification of the utility of (final) states. The robot adopts robot plans, similar to the GOLOG programming language. Within this logic, we can define the expected utility of a conditional plan, based on the axiomatization of the actions, the uncertainty and the utility. The 'planning' problem is to find the plan with the highest expected utility. This is related to recent structured representations for POMDPs; here we use stochastic situation calculus rules to specify the state transition function and the reward/value function. Finally we show that with stochastic frame axioms, action representations in probabilistic STRIPS are exponentially larger than those obtained using the representation proposed here.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:16:00 GMT" } ]
1,361,145,600,000
[ [ "Poole", "David L.", "" ] ]
1302.3598
Malcolm Pradhan
Malcolm Pradhan, Paul Dagum
Optimal Monte Carlo Estimation of Belief Network Inference
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-446-453
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present two Monte Carlo sampling algorithms for probabilistic inference that guarantee polynomial-time convergence for a larger class of networks than existing sampling algorithms do. These new methods are variants of the known likelihood weighting algorithm. We use recent advances in the theory of optimal stopping rules for Monte Carlo simulation to obtain an inference approximation with relative error epsilon and a small failure probability delta. We present an empirical evaluation of the algorithms which demonstrates their improved performance.
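For orientation, here is a bare-bones likelihood weighting estimator on a toy two-node network A -> B with evidence B = true. All names and numbers are illustrative, and the paper's actual contribution -- stopping rules that guarantee relative error epsilon with failure probability delta -- is not reproduced; the sketch simply draws a fixed number of weighted samples.

import random

P_A = 0.3                                # P(A = true)
P_B_given_A = {True: 0.9, False: 0.2}    # P(B = true | A)

def estimate_P_A_given_B_true(n_samples=100_000, seed=1):
    random.seed(seed)
    num = den = 0.0
    for _ in range(n_samples):
        a = random.random() < P_A        # sample the non-evidence node A from its prior
        w = P_B_given_A[a]               # weight by the likelihood of the evidence B = true
        num += w * a
        den += w
    return num / den

# Exact answer: 0.3*0.9 / (0.3*0.9 + 0.7*0.2) = 0.27/0.41, roughly 0.659
print(estimate_P_A_given_B_true())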
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:16:06 GMT" } ]
1,361,145,600,000
[ [ "Pradhan", "Malcolm", "" ], [ "Dagum", "Paul", "" ] ]
1302.3599
Thomas S. Richardson
Thomas S. Richardson
A Discovery Algorithm for Directed Cyclic Graphs
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-454-461
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Directed acyclic graphs have been used fruitfully to represent causal structures (Pearl 1988). However, in the social sciences and elsewhere models are often used which correspond both causally and statistically to directed graphs with directed cycles (Spirtes 1995). Pearl (1993) discussed predicting the effects of intervention in models of this kind, so-called linear non-recursive structural equation models. This raises the question of whether it is possible to make inferences about causal structure with cycles from sample data. In particular, do there exist general, informative, feasible and reliable procedures for inferring causal structure from conditional independence relations among variables in a sample generated by an unknown causal structure? In this paper I present a discovery algorithm that is correct in the large sample limit, given commonly (but often implicitly) made plausible assumptions, and which provides information about the existence or non-existence of causal pathways from one variable to another. The algorithm is polynomial on sparse graphs.
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:16:12 GMT" } ]
1,603,497,600,000
[ [ "Richardson", "Thomas S.", "" ] ]
1302.3600
Thomas S. Richardson
Thomas S. Richardson
A Polynomial-Time Algorithm for Deciding Markov Equivalence of Directed Cyclic Graphical Models
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-462-469
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although the concept of d-separation was originally defined for directed acyclic graphs (see Pearl 1988), there is a natural extension of the concept to directed cyclic graphs. When exactly the same set of d-separation relations holds in two directed graphs, whether cyclic or acyclic, we say that they are Markov equivalent. In other words, when two directed cyclic graphs are Markov equivalent, the set of distributions that satisfy a natural extension of the Global Directed Markov condition (Lauritzen et al. 1990) is exactly the same for each graph. There is an obvious exponential (in the number of vertices) time algorithm for deciding Markov equivalence of two directed cyclic graphs: simply check all of the d-separation relations in each graph. In this paper I state a theorem that gives necessary and sufficient conditions for the Markov equivalence of two directed cyclic graphs, where each of the conditions can be checked in polynomial time. Hence, the theorem can easily be adapted into a polynomial-time algorithm for deciding the Markov equivalence of two directed cyclic graphs. Although space prohibits inclusion of the correctness proofs, they are fully described in Richardson (1994b).
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:16:18 GMT" } ]
1,361,145,600,000
[ [ "Richardson", "Thomas S.", "" ] ]
1302.3601
Wilhelm Roedder
Wilhelm Roedder, Carl-Heinz Meyer
Coherent Knowledge Processing at Maximum Entropy by SPIRIT
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-470-476
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
SPIRIT is an expert system shell for probabilistic knowledge bases. Knowledge acquisition is performed by processing facts and rules on discrete variables in a rich syntax. The shell generates a probability distribution which respects all acquired facts and rules and which maximizes entropy. SPIRIT's user-friendly facilities for defining variables, formulating rules and creating the knowledge base are described in detail. Inductive learning is possible. Medium-sized applications demonstrate the power of the system.
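SPIRIT itself cannot be reconstructed from the abstract, but the underlying principle -- choose the distribution that satisfies the acquired facts and rules and otherwise maximizes entropy -- can be illustrated with a toy optimization. The variables, constraints and use of scipy below are assumptions for illustration only, not the system's actual machinery.

import numpy as np
from scipy.optimize import minimize

# Find the joint distribution over two binary variables A, B that satisfies
# the "fact" P(A=1) = 0.6 and the "rule" P(B=1 | A=1) = 0.8, and otherwise
# has maximum entropy. Joint order: (A,B) = (0,0), (0,1), (1,0), (1,1).

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},               # normalization
    {"type": "eq", "fun": lambda p: p[2] + p[3] - 0.6},           # P(A=1) = 0.6
    {"type": "eq", "fun": lambda p: p[3] - 0.8 * (p[2] + p[3])},  # P(B=1 | A=1) = 0.8
]
res = minimize(neg_entropy, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=constraints)
print(res.x.round(3))   # approximately [0.2, 0.2, 0.12, 0.48]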
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:16:23 GMT" } ]
1,361,145,600,000
[ [ "Roedder", "Wilhelm", "" ], [ "Meyer", "Carl-Heinz", "" ] ]
1302.3602
Eugene Santos Jr.
Eugene Santos Jr., Solomon Eyal Shimony, Edward Williams
Sample-and-Accumulate Algorithms for Belief Updating in Bayes Networks
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-477-484
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Belief updating in Bayes nets, a well-known computationally hard problem, has recently been approximated by several deterministic algorithms and by various randomized approximation algorithms. Deterministic algorithms usually provide probability bounds, but have an exponential runtime. Some randomized schemes have a polynomial runtime, but provide only probability estimates. We present randomized algorithms that enumerate high-probability partial instantiations, resulting in probability bounds. Some of these algorithms are also sampling algorithms. Specifically, we introduce and evaluate a variant of backward sampling, both as a sampling algorithm and as a randomized enumeration algorithm. We also relax the implicit assumption, used by both sampling and accumulation algorithms, that query nodes must be instantiated in all the samples.
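As a rough illustration of the "accumulate" idea -- enumerating high-probability instantiations to obtain bounds rather than mere point estimates -- the sketch below enumerates complete instantiations of a toy three-variable chain in decreasing probability (the paper works with partial instantiations and randomized enumeration). All structure and numbers are invented.

import itertools

# Toy chain A -> B -> C with illustrative parameters.
P_A = {0: 0.7, 1: 0.3}
P_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.1, 1: 0.9}}   # P(B | A)
P_C = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.3, 1: 0.7}}   # P(C | B)

def joint(a, b, c):
    return P_A[a] * P_B[a][b] * P_C[b][c]

# Enumerate instantiations in decreasing joint probability, accumulate the
# four most probable, and bound P(C=1): the accumulated mass with C=1 is a
# lower bound, and adding the unexplored mass gives an upper bound.
insts = sorted(itertools.product([0, 1], repeat=3), key=lambda x: -joint(*x))
acc = 0.0
lower_C1 = 0.0
for a, b, c in insts[:4]:
    p = joint(a, b, c)
    acc += p
    lower_C1 += p * (c == 1)
print(lower_C1, lower_C1 + (1 - acc))   # bounds bracket the exact value 0.523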
[ { "version": "v1", "created": "Wed, 13 Feb 2013 14:16:30 GMT" } ]
1,361,145,600,000
[ [ "Santos", "Eugene", "Jr." ], [ "Shimony", "Solomon Eyal", "" ], [ "Williams", "Edward", "" ] ]