id: stringlengths 9–10
submitter: stringlengths 5–47
authors: stringlengths 5–1.72k
title: stringlengths 11–234
comments: stringlengths 1–491
journal-ref: stringlengths 4–396
doi: stringlengths 13–97
report-no: stringlengths 4–138
categories: stringclasses, 1 value
license: stringclasses, 9 values
abstract: stringlengths 29–3.66k
versions: listlengths 1–21
update_date: int64, 1,180B–1,718B
authors_parsed: listlengths 1–98
1302.6804
Florence Dupin de Saint-Cyr
Florence Dupin de Saint-Cyr, Jerome Lang, Thomas Schiex
Penalty logic and its Link with Dempster-Shafer Theory
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-204-211
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Penalty logic, introduced by Pinkas, associates to each formula of a knowledge base the price to pay if this formula is violated. Penalties may be used as a criterion for selecting preferred consistent subsets in an inconsistent knowledge base, thus inducing a non-monotonic inference relation. A precise formalization and the main properties of penalty logic and of its associated non-monotonic inference relation are given in the first part. We also show that penalty logic and Dempster-Shafer theory are related, especially in the infinitesimal case.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:15:44 GMT" } ]
1,362,009,600,000
[ [ "de Saint-Cyr", "Florence Dupin", "" ], [ "Lang", "Jerome", "" ], [ "Schiex", "Thomas", "" ] ]
1302.6805
Kazuo J. Ezawa
Kazuo J. Ezawa
Value of Evidence on Influence Diagrams
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-212-220
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce evidence propagation operations on influence diagrams and a concept of value of evidence, which measures the value of experimentation. Evidence propagation operations are critical for the computation of the value of evidence, general update and inference operations in normative expert systems which are based on the influence diagram (generalized Bayesian network) paradigm. The value of evidence allows us to compute directly an outcome sensitivity, a value of perfect information and a value of control which are used in decision analysis (the science of decision making under uncertainty). More specifically, the outcome sensitivity is the maximum difference among the values of evidence, the value of perfect information is the expected value of the values of evidence, and the value of control is the optimal value of the values of evidence. We also discuss implementation and relative computational efficiency issues related to the value of evidence and the value of perfect information.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:15:51 GMT" } ]
1,362,009,600,000
[ [ "Ezawa", "Kazuo J.", "" ] ]
1302.6806
Pascale Fonck
Pascale Fonck
Conditional Independence in Possibility Theory
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-221-226
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Possibilistic conditional independence is investigated: we propose a definition of this notion similar to the one used in probability theory. The links between independence and non-interactivity are investigated, and properties of these relations are given. The influence of the conjunction used to define a conditional measure of possibility is also highlighted: we examine three types of conjunctions: Lukasiewicz-like T-norms, product-like T-norms and the minimum operator.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:15:56 GMT" } ]
1,362,009,600,000
[ [ "Fonck", "Pascale", "" ] ]
1302.6807
Robert Fung
Robert Fung, Brendan del Favero
Backward Simulation in Bayesian Networks
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-227-234
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Backward simulation is an approximate inference technique for Bayesian belief networks. It differs from existing simulation methods in that it starts simulation from the known evidence and works backward (i.e., contrary to the direction of the arcs). The technique's focus on the evidence leads to improved convergence in situations where the posterior beliefs are dominated by the evidence rather than by the prior probabilities. Since this class of situations is large, the technique may make practical the application of approximate inference in Bayesian belief networks to many real-world problems.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:16:02 GMT" } ]
1,362,009,600,000
[ [ "Fung", "Robert", "" ], [ "del Favero", "Brendan", "" ] ]
1302.6809
Dan Geiger
Dan Geiger, Azaria Paz, Judea Pearl
On Testing Whether an Embedded Bayesian Network Represents a Probability Model
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-244-252
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Testing the validity of probabilistic models containing unmeasured (hidden) variables is shown to be a hard task. We show that the task of testing whether models are structurally incompatible with the data at hand requires an exponential number of independence evaluations, each of the form: "X is conditionally independent of Y, given Z." In contrast, a linear number of such evaluations is required to test a standard Bayesian network (one per vertex). On the positive side, we show that if a network with hidden variables G has a tree skeleton, checking whether G represents a given probability model P requires a polynomial number of such independence evaluations. Moreover, we provide an algorithm that efficiently constructs a tree-structured Bayesian network (with hidden variables) that represents P if such a network exists, and further recognizes when such a network does not exist.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:16:13 GMT" } ]
1,362,009,600,000
[ [ "Geiger", "Dan", "" ], [ "Paz", "Azaria", "" ], [ "Pearl", "Judea", "" ] ]
1302.6810
Robert P. Goldman
Robert P. Goldman, Mark S. Boddy
Epsilon-Safe Planning
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-253-261
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an approach to high-level conditional planning we call epsilon-safe planning. This probabilistic approach commits us to planning to meet some specified goal with a probability of success of at least 1-epsilon for some user-supplied epsilon. We describe several algorithms for epsilon-safe planning based on conditional planners. The two conditional planners we discuss are Peot and Smith's nonlinear conditional planner, CNLP, and our own linear conditional planner, PLINTH. We present a straightforward extension to conditional planners for which computing the necessary probabilities is simple, employing a commonly-made but perhaps overly-strong independence assumption. We also discuss a second approach to epsilon-safe planning which relaxes this independence assumption, involving the incremental construction of a probability dependence model in conjunction with the construction of the plan graph.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:16:19 GMT" } ]
1,362,009,600,000
[ [ "Goldman", "Robert P.", "" ], [ "Boddy", "Mark S.", "" ] ]
1302.6811
Peter Haddawy
Peter Haddawy
Generating Bayesian Networks from Probability Logic Knowledge Bases
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-262-269
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method for dynamically generating Bayesian networks from knowledge bases consisting of first-order probability logic sentences. We present a subset of probability logic sufficient for representing the class of Bayesian networks with discrete-valued nodes. We impose constraints on the form of the sentences that guarantee that the knowledge base contains all the probabilistic information necessary to generate a network. We define the concept of d-separation for knowledge bases and prove that a knowledge base with independence conditions defined by d-separation is a complete specification of a probability distribution. We present a network generation algorithm that, given an inference problem in the form of a query Q and a set of evidence E, generates a network to compute P(Q|E). We prove the algorithm to be correct.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:16:25 GMT" } ]
1,362,009,600,000
[ [ "Haddawy", "Peter", "" ] ]
1302.6812
Peter Haddawy
Peter Haddawy, AnHai Doan
Abstracting Probabilistic Actions
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-270-277
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discusses the problem of abstracting conditional probabilistic actions. We identify two distinct types of abstraction: intra-action abstraction and inter-action abstraction. We define what it means for the abstraction of an action to be correct and then derive two methods of intra-action abstraction and two methods of inter-action abstraction which are correct according to this criterion. We illustrate the developed techniques by applying them to actions described with the temporal action representation used in the DRIPS decision-theoretic planner and we describe how the planner uses abstraction to reduce the complexity of planning.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:16:31 GMT" } ]
1,362,009,600,000
[ [ "Haddawy", "Peter", "" ], [ "Doan", "AnHai", "" ] ]
1302.6814
David Heckerman
David Heckerman, John S. Breese
A New Look at Causal Independence
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-286-292
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heckerman (1993) defined causal independence in terms of a set of temporal conditional independence statements. These statements formalized certain types of causal interaction where (1) the effect is independent of the order that causes are introduced and (2) the impact of a single cause on the effect does not depend on what other causes have previously been applied. In this paper, we introduce an equivalent atemporal characterization of causal independence based on a functional representation of the relationship between causes and the effect. In this representation, the interaction between causes and effect can be written as a nested decomposition of functions. Causal independence can be exploited by representing this decomposition in the belief network, resulting in representations that are more efficient for inference than general causal models. We present empirical results showing the benefits of a causal-independence representation for belief-network inference.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:16:44 GMT" }, { "version": "v2", "created": "Sun, 17 May 2015 00:03:17 GMT" } ]
1,431,993,600,000
[ [ "Heckerman", "David", "" ], [ "Breese", "John S.", "" ] ]
1302.6815
David Heckerman
David Heckerman, Dan Geiger, David Maxwell Chickering
Learning Bayesian Networks: The Combination of Knowledge and Statistical Data
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-293-301
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe algorithms for learning Bayesian networks from a combination of user knowledge and statistical data. The algorithms have two components: a scoring metric and a search procedure. The scoring metric takes a network structure, statistical data, and a user's prior knowledge, and returns a score proportional to the posterior probability of the network structure given the data. The search procedure generates networks for evaluation by the scoring metric. Our contributions are threefold. First, we identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify the encoding of a user's prior knowledge. In particular, a user can express her knowledge, for the most part, as a single prior Bayesian network for the domain. Second, we describe local search and annealing algorithms to be used in conjunction with scoring metrics. In the special case where each node has at most one parent, we show that heuristic search can be replaced with a polynomial algorithm to identify the networks with the highest score. Third, we describe a methodology for evaluating Bayesian-network learning algorithms. We apply this approach to a comparison of metrics and search procedures.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:16:50 GMT" }, { "version": "v2", "created": "Sat, 16 May 2015 23:46:48 GMT" } ]
1,431,993,600,000
[ [ "Heckerman", "David", "" ], [ "Geiger", "Dan", "" ], [ "Chickering", "David Maxwell", "" ] ]
1302.6816
David Heckerman
David Heckerman, Ross D. Shachter
A Decision-Based View of Causality
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-302-310
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most traditional models of uncertainty have focused on the associational relationship among variables as captured by conditional dependence. In order to successfully manage intelligent systems for decision making, however, we must be able to predict the effects of actions. In this paper, we attempt to unite two branches of research that address such predictions: causal modeling and decision analysis. First, we provide a definition of causal dependence in decision-analytic terms, which we derive from consequences of causal dependence cited in the literature. Using this definition, we show how causal dependence can be represented within an influence diagram. In particular, we identify two inadequacies of an ordinary influence diagram as a representation for cause. We introduce a special class of influence diagrams, called causal influence diagrams, which corrects one of these problems, and identify situations where the other inadequacy can be eliminated. In addition, we describe the relationships between Howard Canonical Form and existing graphical representations of cause.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:16:56 GMT" }, { "version": "v2", "created": "Sat, 16 May 2015 23:48:17 GMT" } ]
1,431,993,600,000
[ [ "Heckerman", "David", "" ], [ "Shachter", "Ross D.", "" ] ]
1302.6817
Jochen Heinsohn
Jochen Heinsohn
Probabilistic Description Logics
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-311-318
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On the one hand, classical terminological knowledge representation excludes the possibility of handling uncertain concept descriptions involving, e.g., "usually true" concept properties, generalized quantifiers, or exceptions. On the other hand, purely numerical approaches for handling uncertainty in general are unable to consider terminological knowledge. This paper presents the language ALCP which is a probabilistic extension of terminological logics and aims at closing the gap between the two areas of research. We present the formal semantics underlying the language ALCP and introduce the probabilistic formalism that is based on classes of probabilities and is realized by means of probabilistic constraints. Besides inferring implicitly existent probabilistic relationships, the constraints guarantee terminological and probabilistic consistency. Altogether, the new language ALCP applies to domains where both term descriptions and uncertainty have to be handled.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:17:01 GMT" } ]
1,362,009,600,000
[ [ "Heinsohn", "Jochen", "" ] ]
1302.6818
Max Henrion
Max Henrion, Gregory M. Provan, Brendan del Favero, Gillian Sanders
An Experimental Comparison of Numerical and Qualitative Probabilistic Reasoning
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-319-326
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Qualitative and infinitesimal probability schemes are consistent with the axioms of probability theory, but avoid the need for precise numerical probabilities. Using qualitative probabilities could substantially reduce the effort for knowledge engineering and improve the robustness of results. We examine experimentally how well infinitesimal probabilities (the kappa-calculus of Goldszmidt and Pearl) perform a diagnostic task - troubleshooting a car that will not start - by comparison with a conventional numerical belief network. We found the infinitesimal scheme to be as good as the numerical scheme in identifying the true fault. The performance of the infinitesimal scheme worsens significantly for prior fault probabilities greater than 0.03. These results suggest that infinitesimal probability methods may be of substantial practical value for machine diagnosis with small prior fault probabilities.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:17:07 GMT" } ]
1,362,009,600,000
[ [ "Henrion", "Max", "" ], [ "Provan", "Gregory M.", "" ], [ "del Favero", "Brendan", "" ], [ "Sanders", "Gillian", "" ] ]
1302.6819
Bernhard Hollunder
Bernhard Hollunder
An Alternative Proof Method for Possibilistic Logic and its Application to Terminological Logics
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-327-335
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Possibilistic logic, an extension of first-order logic, deals with uncertainty that can be estimated in terms of possibility and necessity measures. Syntactically, this means that a first-order formula is equipped with a possibility degree or a necessity degree that expresses to what extent the formula is possibly or necessarily true. Possibilistic resolution yields a calculus for possibilistic logic which respects the semantics developed for possibilistic logic. A drawback, which possibilistic resolution inherits from classical resolution, is that it may not terminate if applied to formulas belonging to decidable fragments of first-order logic. Therefore we propose an alternative proof method for possibilistic logic. The main feature of this method is that it completely abstracts from a concrete calculus but uses as basic operation a test for classical entailment. We then instantiate possibilistic logic with a terminological logic, which is a decidable subclass of first-order logic but nevertheless much more expressive than propositional logic. This yields an extension of terminological logics towards the representation of uncertain knowledge which is satisfactory from a semantic as well as algorithmic point of view.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:17:13 GMT" } ]
1,362,009,600,000
[ [ "Hollunder", "Bernhard", "" ] ]
1302.6820
Yen-Teh Hsia
Yen-Teh Hsia
Possibilistic Conditioning and Propagation
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-336-343
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give an axiomatization of confidence transfer - a known conditioning scheme - from the perspective of expectation-based inference in the sense of Gardenfors and Makinson. Then, we use the notion of belief independence to "filter out" different proposals of possibilistic conditioning rules, all of which are variations of confidence transfer. Among the three rules that we consider, only Dempster's rule of conditioning passes the test of supporting the notion of belief independence. With the use of this conditioning rule, we then show that we can use local computation for computing desired conditional marginal possibilities of the joint possibility satisfying the given constraints. It turns out that our local computation scheme was already proposed by Shenoy. However, our intuitions are completely different from those of Shenoy. While Shenoy just defines a local computation scheme that fits his framework of valuation-based systems, we derive that local computation scheme from Π(β) = Π(β | α) * Π(α) and appropriate independence assumptions, just like how the Bayesians derive their local computation scheme.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:17:19 GMT" } ]
1,362,009,600,000
[ [ "Hsia", "Yen-Teh", "" ] ]
1302.6821
Marcus J. Huber
Marcus J. Huber, Edmund H. Durfee, Michael P. Wellman
The Automated Mapping of Plans for Plan Recognition
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-344-351
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To coordinate with other agents in its environment, an agent needs models of what the other agents are trying to do. When communication is impossible or expensive, this information must be acquired indirectly via plan recognition. Typical approaches to plan recognition start with a specification of the possible plans the other agents may be following, and develop special techniques for discriminating among the possibilities. Perhaps more desirable would be a uniform procedure for mapping plans to general structures supporting inference based on uncertain and incomplete observations. In this paper, we describe a set of methods for converting plans represented in a flexible procedural language to observation models represented as probabilistic belief networks.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:17:25 GMT" } ]
1,362,009,600,000
[ [ "Huber", "Marcus J.", "" ], [ "Durfee", "Edmund H.", "" ], [ "Wellman", "Michael P.", "" ] ]
1302.6822
Manfred Jaeger
Manfred Jaeger
A Logic for Default Reasoning About Probabilities
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-352-359
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A logic is defined that allows one to express information about statistical probabilities and about degrees of belief in specific propositions. By interpreting the two types of probabilities in one common probability space, the semantics given are well suited to model the influence of statistical information on the formation of subjective beliefs. Cross entropy minimization is a key element in these semantics, the use of which is justified by showing that the resulting logic exhibits some very reasonable properties.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:17:31 GMT" } ]
1,362,009,600,000
[ [ "Jaeger", "Manfred", "" ] ]
1302.6823
Finn Verner Jensen
Finn Verner Jensen, Frank Jensen
Optimal Junction Trees
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-360-366
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper deals with optimality issues in connection with updating beliefs in networks. We address two processes: triangulation and construction of junction trees. In the first part, we give a simple algorithm for constructing an optimal junction tree from a triangulated network. In the second part, we argue that any exact method based on local calculations must either be less efficient than the junction tree method, or it has an optimality problem equivalent to that of triangulation.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:17:36 GMT" } ]
1,362,009,600,000
[ [ "Jensen", "Finn Verner", "" ], [ "Jensen", "Frank", "" ] ]
1302.6824
Frank Jensen
Frank Jensen, Finn Verner Jensen, Soren L. Dittmer
From Influence Diagrams to Junction Trees
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-367-373
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an approach to the solution of decision problems formulated as influence diagrams. This approach involves a special triangulation of the underlying graph, the construction of a junction tree with special properties, and a message passing algorithm operating on the junction tree for computation of expected utilities and optimal decision policies.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:17:42 GMT" } ]
1,362,009,600,000
[ [ "Jensen", "Frank", "" ], [ "Jensen", "Finn Verner", "" ], [ "Dittmer", "Soren L.", "" ] ]
1302.6825
Uffe Kj{\ae}rulff
Uffe Kj{\ae}rulff
Reduction of Computational Complexity in Bayesian Networks through Removal of Weak Dependencies
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-374-382
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper presents a method for reducing the computational complexity of Bayesian networks through identification and removal of weak dependencies (removal of links from the (moralized) independence graph). The removal of a small number of links may reduce the computational complexity dramatically, since several fill-ins and moral links may be rendered superfluous by the removal. The method is described in terms of impact on the independence graph, the junction tree, and the potential functions associated with these. An empirical evaluation of the method using large real-world networks demonstrates the applicability of the method. Further, the method, which has been implemented in Hugin, complements the approximation method suggested by Jensen & Andersen (1990).
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:17:48 GMT" } ]
1,362,009,600,000
[ [ "Kjærulff", "Uffe", "" ] ]
1302.6826
Wai Lam
Wai Lam, Fahiem Bacchus
Using New Data to Refine a Bayesian Network
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-383-390
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the issue of refining an existent Bayesian network structure using new data which might mention only a subset of the variables. Most previous works have only considered the refinement of the network's conditional probability parameters, and have not addressed the issue of refining the network's structure. We develop a new approach for refining the network's structure. Our approach is based on the Minimal Description Length (MDL) principle, and it employs an adapted version of a Bayesian network learning algorithm developed in our previous work. One of the adaptations required is to modify the previous algorithm to account for the structure of the existent network. The learning algorithm generates a partial network structure which can then be used to improve the existent network. We also present experimental evidence demonstrating the effectiveness of our approach.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:17:54 GMT" } ]
1,362,009,600,000
[ [ "Lam", "Wai", "" ], [ "Bacchus", "Fahiem", "" ] ]
1302.6827
Jerome Lang
Jerome Lang
Syntax-based Default Reasoning as Probabilistic Model-based Diagnosis
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-391-398
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We view the syntax-based approaches to default reasoning as a model-based diagnosis problem, where each source giving a piece of information is considered as a component. It is formalized in the ATMS framework (each source corresponds to an assumption). We assume then that all sources are independent and "fail" with a very small probability. This leads to a probability assignment on the set of candidates, or equivalently on the set of consistent environments. This probability assignment induces a Dempster-Shafer belief function which measures the probability that a proposition can be deduced from the evidence. This belief function can be used in several different ways to define a non-monotonic consequence relation. We study and compare these consequence relations. The case of prioritized knowledge bases is briefly considered.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:18:00 GMT" } ]
1,362,009,600,000
[ [ "Lang", "Jerome", "" ] ]
1302.6829
Stephane Lapointe
Stephane Lapointe, Rene Proulx
Fuzzy Geometric Relations to Represent Hierarchical Spatial Information
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-407-415
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A model to represent spatial information is presented in this paper. It is based on fuzzy constraints represented as fuzzy geometric relations that can be hierarchically structured. The concept of spatial template is introduced to capture the idea of interrelated objects in two-dimensional space. The representation model is used to specify imprecise or vague information consisting in relative locations and orientations of template objects. It is shown in this paper how a template represented by this model can be matched against a crisp situation to recognize a particular instance of this template. Furthermore, the proximity measure (fuzzy measure) between the instance and the template is worked out - this measure can be interpreted as a degree of similarity. In this context, template recognition can be viewed as a case of fuzzy pattern recognition. The results of this work have been implemented and applied to a complex military problem from which this work originated.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:18:12 GMT" } ]
1,362,009,600,000
[ [ "Lapointe", "Stephane", "" ], [ "Proulx", "Rene", "" ] ]
1302.6830
Paul E. Lehner
Paul E. Lehner, Christopher Elsaesser, Scott A. Musman
Constructing Belief Networks to Evaluate Plans
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-416-422
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines the problem of constructing belief networks to evaluate plans produced by a knowledge-based planner. Techniques are presented for handling various types of complicating plan features. These include plans with context-dependent consequences, indirect consequences, actions with preconditions that must be true during the execution of an action, contingencies, multiple levels of abstraction, multiple execution agents with partially-ordered and temporally overlapping actions, and plans which reference specific times and time durations.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:18:18 GMT" } ]
1,362,009,600,000
[ [ "Lehner", "Paul E.", "" ], [ "Elsaesser", "Christopher", "" ], [ "Musman", "Scott A.", "" ] ]
1302.6831
Todd Michael Mansell
Todd Michael Mansell, Grahame Smith
Operator Selection While Planning Under Uncertainty
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-423-431
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the best first search strategy used by U-Plan (Mansell 1993a), a planning system that constructs quantitatively ranked plans given an incomplete description of an uncertain environment. U-Plan uses uncertain and incomplete evidence describing the environment, characterizes it using a Dempster-Shafer interval, and generates a set of possible world states. Plan construction takes place in an abstraction hierarchy where strategic decisions are made before tactical decisions. Search through this abstraction hierarchy is guided by a quantitative measure (expected fulfillment) based on decision theory. The search strategy is best first with the provision to update expected fulfillment and review previous decisions in the light of planning developments. U-Plan generates multiple plans for multiple possible worlds, and attempts to use existing plans for new world situations. A super-plan is then constructed, based on merging the set of plans and appropriately timed knowledge acquisition operators, which are used to decide between plan alternatives during plan execution.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:18:23 GMT" } ]
1,362,009,600,000
[ [ "Mansell", "Todd Michael", "" ], [ "Smith", "Grahame", "" ] ]
1302.6832
Wolfgang Nejdl
Wolfgang Nejdl, Johann Gamper
Model-Based Diagnosis with Qualitative Temporal Uncertainty
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-432-439
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we describe a framework for model-based diagnosis of dynamic systems, which extends previous work in this field by using and expressing temporal uncertainty in the form of qualitative interval relations a la Allen. Based on a logical framework extended by qualitative and quantitative temporal constraints, we show how to describe behavioral models (both consistency- and abductive-based), discuss how to use abstract observations, and show how abstract temporal diagnoses are computed. This yields an expressive framework that allows the representation of complex temporal behavior, including temporal uncertainty. Due to its abstraction capabilities, computation is made independent of the number of observations and time points in a temporal setting. An example of hepatitis diagnosis is used throughout the paper.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:18:29 GMT" } ]
1,362,009,600,000
[ [ "Nejdl", "Wolfgang", "" ], [ "Gamper", "Johann", "" ] ]
1302.6833
Keung-Chi Ng
Keung-Chi Ng, Tod S. Levitt
Incremental Dynamic Construction of Layered Polytree Networks
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-440-446
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Certain classes of problems, including perceptual data understanding, robotics, discovery, and learning, can be represented as incremental, dynamically constructed belief networks. These automatically constructed networks can be dynamically extended and modified as evidence of new individuals becomes available. The main result of this paper is the incremental extension of the singly connected polytree network in such a way that the network retains its singly connected polytree structure after the changes. The algorithm is deterministic and is guaranteed to have a complexity of single node addition that is at most of order proportional to the number of nodes (or size) of the network. Additional speed-up can be achieved by maintaining the path information. Despite its incremental and dynamic nature, the algorithm can also be used for probabilistic inference in belief networks in a fashion similar to other exact inference algorithms.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:18:35 GMT" } ]
1,362,009,600,000
[ [ "Ng", "Keung-Chi", "" ], [ "Levitt", "Tod S.", "" ] ]
1302.6835
Judea Pearl
Judea Pearl
A Probabilistic Calculus of Actions
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-454-462
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a symbolic machinery that admits both probabilistic and causal information about a given domain and produces probabilistic statements about the effect of actions and the impact of observations. The calculus admits two types of conditioning operators: ordinary Bayes conditioning, P(y|X = x), which represents the observation X = x, and causal conditioning, P(y|do(X = x)), read the probability of Y = y conditioned on holding X constant (at x) by deliberate action. Given a mixture of such observational and causal sentences, together with the topology of the causal graph, the calculus derives new conditional probabilities of both types, thus enabling one to quantify the effects of actions (and policies) from partially specified knowledge bases, such as Bayesian networks in which some conditional probabilities may not be available.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:18:47 GMT" } ]
1,362,009,600,000
[ [ "Pearl", "Judea", "" ] ]
1302.6836
Stephen G. Pimentel
Stephen G. Pimentel, Lawrence M. Brem
Robust Planning in Uncertain Environments
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-463-469
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a novel approach to planning which takes advantage of decision theory to greatly improve robustness in an uncertain environment. We present an algorithm which computes conditional plans of maximum expected utility. This algorithm relies on a representation of the search space as an AND/OR tree and employs a depth-limit to control computation costs. A numeric robustness factor, which parameterizes the utility function, allows the user to modulate the degree of risk-aversion employed by the planner. Via a look-ahead search, the planning algorithm seeks to find an optimal plan using expected utility as its optimization criterion. We present experimental results obtained by applying our algorithm to a non-deterministic extension of the blocks world domain. Our results demonstrate that the robustness factor governs the degree of risk embodied in the conditional plans computed by our algorithm.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:18:53 GMT" } ]
1,362,009,600,000
[ [ "Pimentel", "Stephen G.", "" ], [ "Brem", "Lawrence M.", "" ] ]
1302.6837
Michael Pittarelli
Michael Pittarelli
Anytime Decision Making with Imprecise Probabilities
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-470-477
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines methods of decision making that are able to accommodate limitations on both the form in which uncertainty pertaining to a decision problem can be realistically represented and the amount of computing time available before a decision must be made. The methods are anytime algorithms in the sense of Boddy and Dean 1991. Techniques are presented for use with Frisch and Haddawy's [1992] anytime deduction system, with an anytime adaptation of Nilsson's [1986] probabilistic logic, and with a probabilistic database model.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:18:58 GMT" } ]
1,362,009,600,000
[ [ "Pittarelli", "Michael", "" ] ]
1302.6839
Malcolm Pradhan
Malcolm Pradhan, Gregory M. Provan, Blackford Middleton, Max Henrion
Knowledge Engineering for Large Belief Networks
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-484-490
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present several techniques for knowledge engineering of large belief networks (BNs) based on our experiences with a network derived from a large medical knowledge base. The noisy-MAX, a generalization of the noisy-OR gate, is used to model causal independence in a BN with multi-valued variables. We describe the use of leak probabilities to enforce the closed-world assumption in our model. We present Netview, a visualization tool based on causal independence and the use of leak probabilities. The Netview software allows knowledge engineers to dynamically view sub-networks for knowledge engineering, and it provides version control for editing a BN. Netview generates sub-networks in which leak probabilities are dynamically updated to reflect the missing portions of the network.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:19:10 GMT" } ]
1,362,009,600,000
[ [ "Pradhan", "Malcolm", "" ], [ "Provan", "Gregory M.", "" ], [ "Middleton", "Blackford", "" ], [ "Henrion", "Max", "" ] ]
1302.6840
Runping Qi
Runping Qi, Nevin Lianwen Zhang, David L. Poole
Solving Asymmetric Decision Problems with Influence Diagrams
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-491-497
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While influence diagrams have many advantages as a representation framework for Bayesian decision problems, they have a serious drawback in handling asymmetric decision problems. To be represented in an influence diagram, an asymmetric decision problem must be symmetrized. A considerable amount of unnecessary computation may be involved when a symmetrized influence diagram is evaluated by conventional algorithms. In this paper we present an approach for avoiding such unnecessary computation in influence diagram evaluation.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:19:16 GMT" } ]
1,362,009,600,000
[ [ "Qi", "Runping", "" ], [ "Zhang", "Nevin Lianwen", "" ], [ "Poole", "David L.", "" ] ]
1302.6841
Marco Ramoni
Marco Ramoni, Alberto Riva
Belief Maintenance in Bayesian Networks
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-498-505
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian Belief Networks (BBNs) are a powerful formalism for reasoning under uncertainty but bear some severe limitations: they require a large amount of information before any reasoning process can start, they have limited contradiction handling capabilities, and their ability to provide explanations for their conclusion is still controversial. There exists a class of reasoning systems, called Truth Maintenance Systems (TMSs), which are able to deal with partially specified knowledge, to provide well-founded explanation for their conclusions, and to detect and handle contradictions. TMSs incorporating measure of uncertainty are called Belief Maintenance Systems (BMSs). This paper describes how a BMS based on probabilistic logic can be applied to BBNs, thus introducing a new class of BBNs, called Ignorant Belief Networks, able to incrementally deal with partially specified conditional dependencies, to provide explanations, and to detect and handle contradictions.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:19:22 GMT" } ]
1,362,009,600,000
[ [ "Ramoni", "Marco", "" ], [ "Riva", "Alberto", "" ] ]
1302.6842
Eugene Santos Jr.
Eugene Santos Jr., Solomon Eyal Shimony
Belief Updating by Enumerating High-Probability Independence-Based Assignments
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-506-513
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Independence-based (IB) assignments to Bayesian belief networks were originally proposed as abductive explanations. IB assignments assign fewer variables in abductive explanations than do schemes assigning values to all evidentially supported variables. We use IB assignments to approximate marginal probabilities in Bayesian belief networks. Recent work in belief updating for Bayes networks attempts to approximate posterior probabilities by finding a small number of the highest probability complete (or perhaps evidentially supported) assignments. Under certain assumptions, the probability mass in the union of these assignments is sufficient to obtain a good approximation. Such methods are especially useful for highly-connected networks, where the maximum clique size or the cutset size make the standard algorithms intractable. Since IB assignments contain fewer assigned variables, the probability mass in each assignment is greater than in the respective complete assignment. Thus, fewer IB assignments are sufficient, and a good approximation can be obtained more efficiently. IB assignments can be used for efficiently approximating posterior node probabilities even in cases which do not obey the rather strict skewness assumptions used in previous research. Two algorithms for finding the high probability IB assignments are suggested: one by doing a best-first heuristic search, and another by special-purpose integer linear programming. Experimental results show that this approach is feasible for highly connected belief networks.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:19:28 GMT" } ]
1,362,009,600,000
[ [ "Santos", "Eugene", "Jr." ], [ "Shimony", "Solomon Eyal", "" ] ]
1302.6843
Ross D. Shachter
Ross D. Shachter, Stig K. Andersen, Peter Szolovits
Global Conditioning for Probabilistic Inference in Belief Networks
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-514-522
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al. (1990a; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt and others 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:19:34 GMT" } ]
1,362,009,600,000
[ [ "Shachter", "Ross D.", "" ], [ "Andersen", "Stig K.", "" ], [ "Szolovits", "Peter", "" ] ]
1302.6844
Philippe Smets
Philippe Smets
Belief Induced by the Partial Knowledge of the Probabilities
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-523-532
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct the belief function that quantifies the agent's beliefs about which event of Ω will occur when he knows that the event is selected by a chance set-up and that the probability function associated with the chance set-up is only partially known.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:19:41 GMT" } ]
1,362,009,600,000
[ [ "Smets", "Philippe", "" ] ]
1302.6845
Paul Snow
Paul Snow
Ignorance and the Expressiveness of Single- and Set-Valued Probability Models of Belief
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-531-537
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over time, there have been refinements in the way that probability distributions are used for representing beliefs. Models which rely on single probability distributions depict a complete ordering among the propositions of interest, yet human beliefs are sometimes not completely ordered. Non-singleton sets of probability distributions can represent partially ordered beliefs. Convex sets are particularly convenient and expressive, but it is known that there are reasonable patterns of belief whose faithful representation requires less restrictive sets. The present paper shows that prior ignorance about three or more exclusive alternatives and the emergence of partially ordered beliefs when evidence is obtained defy representation by any single set of distributions, but yield to a representation based on several sets. The partial order is shown to be a partial qualitative probability which shares some intuitively appealing attributes with probability distributions.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:19:46 GMT" } ]
1,362,009,600,000
[ [ "Snow", "Paul", "" ] ]
1302.6846
Sampath Srinivas
Sampath Srinivas
A Probabilistic Approach to Hierarchical Model-based Diagnosis
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-538-545
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Model-based diagnosis reasons backwards from a functional schematic of a system to isolate faults given observations of anomalous behavior. We develop a fully probabilistic approach to model based diagnosis and extend it to support hierarchical models. Our scheme translates the functional schematic into a Bayesian network and diagnostic inference takes place in the Bayesian network. A Bayesian network diagnostic inference algorithm is modified to take advantage of the hierarchy to give computational gains.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:19:52 GMT" } ]
1,362,009,600,000
[ [ "Srinivas", "Sampath", "" ] ]
1302.6847
Milan Studeny
Milan Studeny
Semigraphoids Are Two-Antecedental Approximations of Stochastic Conditional Independence Models
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-546-552
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The semigraphoid closure of every couple of CI-statements (CI = conditional independence) is a stochastic CI-model. As a consequence of this result it is shown that every probabilistically sound inference rule for CI-models, having at most two antecedents, is derivable from the semigraphoid inference rules. This justifies the use of semigraphoids as approximations of stochastic CI-models in probabilistic reasoning. The list of all 19 potential dominant elements of the mentioned semigraphoid closure is given as a byproduct.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:19:58 GMT" } ]
1,362,009,600,000
[ [ "Studeny", "Milan", "" ] ]
1302.6848
Sek-Wah Tan
Sek-Wah Tan
Exceptional Subclasses in Qualitative Probability
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-553-559
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
System Z+ [Goldszmidt and Pearl, 1991; Goldszmidt, 1992] is a formalism for reasoning with normality defaults of the form "typically, if phi then psi (with strength delta)", where delta is a positive integer. The system has a critical shortcoming in that it does not sanction inheritance across exceptional subclasses. In this paper we propose an extension to System Z+ that rectifies this shortcoming by extracting additional conditions between worlds from the defaults database. We show that the additional constraints do not change the notion of the consistency of a database. We also make comparisons with competing default reasoning systems.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:20:04 GMT" } ]
1,362,009,600,000
[ [ "Tan", "Sek-Wah", "" ] ]
1302.6849
Pei Wang
Pei Wang
A Defect in Dempster-Shafer Theory
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-560-566
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By analyzing the relationships among chance, weight of evidence and degree of belief, we show that the assertion "probability functions are special cases of belief functions" and the assertion "Dempster's rule can be used to combine belief functions based on distinct bodies of evidence" together lead to an inconsistency in Dempster-Shafer theory. To solve this problem, we must reject some fundamental postulates of the theory. We introduce a new approach for uncertainty management that shares many intuitive ideas with D-S theory, while avoiding this problem.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:20:10 GMT" } ]
1,362,009,600,000
[ [ "Wang", "Pei", "" ] ]
1302.6850
Michael P. Wellman
Michael P. Wellman, Chao-Lin Liu
State-space Abstraction for Anytime Evaluation of Probabilistic Networks
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-567-574
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One important factor determining the computational complexity of evaluating a probabilistic network is the cardinality of the state spaces of the nodes. By varying the granularity of the state spaces, one can trade off accuracy in the result for computational efficiency. We present an anytime procedure for approximate evaluation of probabilistic networks based on this idea. On application to some simple networks, the procedure exhibits a smooth improvement in approximation quality as computation time increases. This suggests that state-space abstraction is one more useful control parameter for designing real-time probabilistic reasoners.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:20:17 GMT" } ]
1,362,009,600,000
[ [ "Wellman", "Michael P.", "" ], [ "Liu", "Chao-Lin", "" ] ]
1302.6851
Emil Weydert
Emil Weydert
General Belief Measures
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-575-582
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probability measures, by themselves, are known to be inappropriate for modeling the dynamics of plain belief, and their excessively strong measurability constraints make them unsuitable for some representational tasks, e.g. in the context of first-order knowledge. In this paper, we are therefore going to look for possible alternatives and extensions. We begin by delimiting the general area of interest, proposing a minimal list of assumptions to be satisfied by any reasonable quasi-probabilistic valuation concept. Within this framework, we investigate two particularly interesting kinds of quasi-measures which are not, or much less, affected by the traditional problems. * Ranking measures, which generalize Spohn-type and possibility measures. * Cumulative measures, which combine the probabilistic and the ranking philosophy, allowing thereby a fine-grained account of static and dynamic belief.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:20:23 GMT" } ]
1,362,009,600,000
[ [ "Weydert", "Emil", "" ] ]
1302.6852
Nic Wilson
Nic Wilson
Generating Graphoids from Generalised Conditional Probability
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-583-590
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We take a general approach to uncertainty on product spaces, and give sufficient conditions for the independence structures of uncertainty measures to satisfy graphoid properties. Since these conditions are arguably more intuitive than some of the graphoid properties, they can be viewed as explanations why probability and certain other formalisms generate graphoids. The conditions include a sufficient condition for the Intersection property which can still apply even if there is a strong logical relationship between the variables. We indicate how these results can be used to produce theories of qualitative conditional probability which are semi-graphoids and graphoids.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:20:29 GMT" } ]
1,362,009,600,000
[ [ "Wilson", "Nic", "" ] ]
1302.6853
Michael S. K. M. Wong
Michael S. K. M. Wong, Z. W. Wang
On Axiomatization of Probabilistic Conditional Independencies
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-591-597
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the connection between probabilistic conditional independence in uncertain reasoning and data dependency in relational databases. As a demonstration of the usefulness of this preliminary investigation, an alternate proof is presented for refuting the conjecture suggested by Pearl and Paz that probabilistic conditional independencies have a complete axiomatization.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:20:35 GMT" } ]
1,362,009,600,000
[ [ "Wong", "Michael S. K. M.", "" ], [ "Wang", "Z. W.", "" ] ]
1302.6854
Hong Xu
Hong Xu, Philippe Smets
Evidential Reasoning with Conditional Belief Functions
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-598-605
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the existing evidential networks with belief functions, the relations among the variables are always represented by joint belief functions on the product space of the involved variables. In this paper, we use conditional belief functions to represent such relations in the network and show some relations of these two kinds of representations. We also present a propagation algorithm for such networks. By analyzing the properties of some special evidential networks with conditional belief functions, we show that the reasoning process can be simplified in such kinds of networks.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:20:41 GMT" } ]
1,362,009,600,000
[ [ "Xu", "Hong", "" ], [ "Smets", "Philippe", "" ] ]
1302.6855
Nevin Lianwen Zhang
Nevin Lianwen Zhang, David L Poole
Inter-causal Independence and Heterogeneous Factorization
Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994)
null
null
UAI-P-1994-PG-606-614
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well known that conditional independence can be used to factorize a joint probability into a multiplication of conditional probabilities. This paper proposes a constructive definition of inter-causal independence, which can be used to further factorize a conditional probability. An inference algorithm is developed, which makes use of both conditional independence and inter-causal independence to reduce inference complexity in Bayesian networks.
[ { "version": "v1", "created": "Wed, 27 Feb 2013 14:20:47 GMT" } ]
1,362,009,600,000
[ [ "Zhang", "Nevin Lianwen", "" ], [ "Poole", "David L", "" ] ]
1303.1454
Marek J. Druzdzel
Marek J. Druzdzel, Herbert A. Simon
Causality in Bayesian Belief Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-3-11
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of causal interpretation of the graphical structure of Bayesian belief networks (BBNs). We review the concept of causality explicated in the domain of structural equations models and show that it is applicable to BBNs. In this view, which we call mechanism-based, causality is defined within models and causal asymmetries arise when mechanisms are placed in the context of a system. We lay out the link between structural equations models and BBN models and formulate the conditions under which the latter can be given a causal interpretation.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:18:23 GMT" } ]
1,362,700,800,000
[ [ "Druzdzel", "Marek J.", "" ], [ "Simon", "Herbert A.", "" ] ]
1303.1455
Judea Pearl
Judea Pearl
From Conditional Oughts to Qualitative Decision Theory
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-12-20
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The primary theme of this investigation is a decision theoretic account of conditional ought statements (e.g., "You ought to do A, if C") that rectifies glaring deficiencies in classical deontic logic. The resulting account forms a sound basis for qualitative decision theory, thus providing a framework for qualitative planning under uncertainty. In particular, we show that adding causal relationships (in the form of a single graph) as part of an epistemic state is sufficient to facilitate the analysis of action sequences, their consequences, their interaction with observations, their expected utilities and, hence, the synthesis of plans and strategies under uncertainty.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:18:29 GMT" } ]
1,362,700,800,000
[ [ "Pearl", "Judea", "" ] ]
1303.1456
Russ B. Altman
Russ B. Altman
A Probabilistic Algorithm for Calculating Structure: Borrowing from Simulated Annealing
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-23-31
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have developed a general Bayesian algorithm for determining the coordinates of points in a three-dimensional space. The algorithm takes as input a set of probabilistic constraints on the coordinates of the points, and an a priori distribution for each point location. The output is a maximum-likelihood estimate of the location of each point. We use the extended, iterated Kalman filter, and add a search heuristic for optimizing its solution under nonlinear conditions. This heuristic is based on the same principle as the simulated annealing heuristic for other optimization problems. Our method uses any probabilistic constraints that can be expressed as a function of the point coordinates (for example, distance, angles, dihedral angles, and planarity). It assumes that all constraints have Gaussian noise. In this paper, we describe the algorithm and show its performance on a set of synthetic data to illustrate its convergence properties, and its applicability to domains such as molecular structure determination.
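The annealing principle that the search heuristic borrows is the standard Metropolis-style acceptance rule. The sketch below is a generic illustration of that rule only; the cost function, proposal, and parameter values are hypothetical placeholders and are not the paper's Kalman-filter formulation.

```python
import math
import random

def simulated_annealing(cost, propose, x0, t0=1.0, cooling=0.95, steps=1000):
    """Generic annealing loop: accept worse solutions with a probability
    that shrinks as the temperature decreases."""
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        x_new = propose(x)
        delta = cost(x_new) - cost(x)
        # Always accept improvements; accept deteriorations with prob exp(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = x_new
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling  # cool the temperature
    return best

# Example: minimize a simple 1-D cost with Gaussian proposals (toy numbers).
result = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    propose=lambda x: x + random.gauss(0.0, 0.5),
    x0=0.0,
)
print(result)
```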
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:18:35 GMT" } ]
1,362,700,800,000
[ [ "Altman", "Russ B.", "" ] ]
1303.1457
Scott A. Musman
Scott A. Musman, L. W. Chang
A Study of Scaling Issues in Bayesian Belief Networks for Ship Classification
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-32-39
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scaling involves active and challenging research topics in the area of artificial intelligence: the purpose is to solve real-world problems by means of AI technologies in cases where the complexity of representing the real-world problem is potentially combinatorial. In this paper, we present a novel approach to cope with the scaling issues in Bayesian belief networks for ship classification. The proposed approach divides the conceptual model of a complex ship classification problem into a set of small modules that work together to solve the classification problem while preserving the functionality of the original model. The possible ways of explaining sensor returns (e.g., the evidence) for some features, such as portholes along the length of a ship, are sometimes combinatorial. Thus, using an exhaustive approach, which entails the enumeration of all possible explanations, is impractical for larger problems. We present a network structure (referred to as Sequential Decomposition, SD) in which each observation is associated with a set of legitimate outcomes which are consistent with the explanation of each observed piece of evidence. The results show that the SD approach allows one to represent feature-observation relations in a manageable way and achieve the same explanatory power as an exhaustive approach.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:18:41 GMT" } ]
1,362,700,800,000
[ [ "Musman", "Scott A.", "" ], [ "Chang", "L. W.", "" ] ]
1303.1458
Gregory M. Provan
Gregory M. Provan
Tradeoffs in Constructing and Evaluating Temporal Influence Diagrams
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-40-47
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the tradeoffs which need to be considered in reasoning using probabilistic network representations, such as Influence Diagrams (IDs). In particular, we examine the tradeoffs entailed in using Temporal Influence Diagrams (TIDs) which adequately capture the temporal evolution of a dynamic system without prohibitive data and computational requirements. Three approaches for TID construction which make different tradeoffs are examined: (1) tailoring the network at each time interval to the data available (rather than just copying the original Bayes Network for all time intervals); (2) modeling the evolution of a parsimonious subset of variables (rather than all variables); and (3) model selection approaches, which seek to minimize some measure of the predictive accuracy of the model without introducing too many parameters, which might cause "overfitting" of the model. Methods of evaluating the accuracy/efficiency of the tradeoffs are proposed.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:18:46 GMT" } ]
1,362,700,800,000
[ [ "Provan", "Gregory M.", "" ] ]
1303.1459
Harold P. Lehmann
Harold P. Lehmann, Ross D. Shachter
End-User Construction of Influence Diagrams for Bayesian Statistics
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-48-54
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Influence diagrams are ideal knowledge representations for Bayesian statistical models. However, these diagrams are difficult for end users to interpret and to manipulate. We present a user-based architecture that enables end users to create and to manipulate the knowledge representation. We use the problem of physicians' interpretation of two-arm parallel randomized clinical trials (TAPRCT) to illustrate the architecture and its use. There are three primary data structures. Elements of statistical models are encoded as subgraphs of a restricted class of influence diagram. The interpretations of those elements are mapped into users' language in a domain-specific, user-based semantic interface, called a patient-flow diagram, in the TAPRCT problem. Permitted transformations of the statistical model that maintain the semantic relationships of the model are encoded in a metadata-state diagram, called the cohort-state diagram, in the TAPRCT problem. The algorithm that runs the system uses modular actions called construction steps. This framework has been implemented in a system called THOMAS, that allows physicians to interpret the data reported from a TAPRCT.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:18:52 GMT" } ]
1,362,700,800,000
[ [ "Lehmann", "Harold P.", "" ], [ "Shachter", "Ross D.", "" ] ]
1303.1461
Paul Dagum
Paul Dagum, Adam Galper
Forecasting Sleep Apnea with Dynamic Network Models
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-64-71
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dynamic network models (DNMs) are belief networks for temporal reasoning. The DNM methodology combines techniques from time series analysis and probabilistic reasoning to provide (1) a knowledge representation that integrates noncontemporaneous and contemporaneous dependencies and (2) methods for iteratively refining these dependencies in response to the effects of exogenous influences. We use belief-network inference algorithms to perform forecasting, control, and discrete event simulation on DNMs. The belief network formulation allows us to move beyond the traditional assumptions of linearity in the relationships among time-dependent variables and of normality in their probability distributions. We demonstrate the DNM methodology on an important forecasting problem in medicine. We conclude with a discussion of how the methodology addresses several limitations found in traditional time series analyses.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:19:03 GMT" } ]
1,362,700,800,000
[ [ "Dagum", "Paul", "" ], [ "Galper", "Adam", "" ] ]
1303.1462
Peter J. Regan
Peter J. Regan
Normative Engineering Risk Management Systems
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-72-79
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a normative system design that incorporates diagnosis, dynamic evolution, decision making, and information gathering. A single influence diagram demonstrates the design's coherence, yet each activity is more effectively modeled and evaluated separately. Application to offshore oil platforms illustrates the design. For this application, the normative system is embedded in a real-time expert system.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:19:10 GMT" } ]
1,362,700,800,000
[ [ "Regan", "Peter J.", "" ] ]
1303.1463
David Heckerman
David Heckerman, Michael Shwe
Diagnosis of Multiple Faults: A Sensitivity Analysis
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-80-87
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We compare the diagnostic accuracy of three diagnostic inference models: the simple Bayes model, the multimembership Bayes model, which is isomorphic to the parallel combination function in the certainty-factor model, and a model that incorporates the noisy OR-gate interaction. The comparison is done on 20 clinicopathological conference (CPC) cases from the American Journal of Medicine: challenging cases describing actual patients, often with multiple disorders. We find that the distributions produced by the noisy OR model agree most closely with the gold-standard diagnoses, although substantial differences exist between the distributions and the diagnoses. In addition, we find that the multimembership Bayes model tends to significantly overestimate the posterior probabilities of diseases, whereas the simple Bayes model tends to significantly underestimate the posterior probabilities. Our results suggest that additional work to refine the noisy OR model for internal medicine will be worthwhile.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:19:15 GMT" }, { "version": "v2", "created": "Sat, 16 May 2015 23:52:47 GMT" } ]
1,431,993,600,000
[ [ "Heckerman", "David", "" ], [ "Shwe", "Michael", "" ] ]
1303.1464
Paul Dagum
Paul Dagum, Adam Galper
Additive Belief-Network Models
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-91-98
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The inherent intractability of probabilistic inference has hindered the application of belief networks to large domains. Noisy OR-gates [30] and probabilistic similarity networks [18, 17] escape the complexity of inference by restricting model expressiveness. Recent work in the application of belief-network models to time-series analysis and forecasting [9, 10] has given rise to the additive belief network model (ABNM). We (1) discuss the nature and implications of the approximations made by an additive decomposition of a belief network, (2) show greater efficiency in the induction of additive models when available data are scarce, (3) generalize probabilistic inference algorithms to exploit the additive decomposition of ABNMs, (4) show greater efficiency of inference, and (5) compare results on inference with a simple additive belief network.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:19:21 GMT" } ]
1,362,700,800,000
[ [ "Dagum", "Paul", "" ], [ "Galper", "Adam", "" ] ]
1303.1465
Francisco Javier Diez
Francisco Javier Diez
Parameter Adjustment in Bayes Networks. The generalized noisy OR-gate
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-99-105
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiegelhalter and Lauritzen [15] studied sequential learning in Bayesian networks and proposed three models for the representation of conditional probabilities. A fourth model, shown here, assumes that the parameter distribution is given by a product of Gaussian functions and updates them from the lambda and pi messages of evidence propagation. We also generalize the noisy OR-gate for multivalued variables, develop the algorithm to compute probability in time proportional to the number of parents (even in networks with loops) and apply the learning model to this gate.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:19:27 GMT" } ]
1,362,700,800,000
[ [ "Diez", "Francisco Javier", "" ] ]
1303.1466
Didier Dubois
Didier Dubois, Henri Prade
A fuzzy relation-based extension of Reggia's relational model for diagnosis handling uncertain and incomplete information
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-106-113
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relational models for diagnosis are based on a direct description of the association between disorders and manifestations. This type of model has been specially used and developed by Reggia and his co-workers in the late eighties as a basic starting point for approaching diagnosis problems. The paper proposes a new relational model which includes Reggia's model as a particular case and which allows for a more expressive representation of the observations and of the manifestations associated with disorders. The model distinguishes, i) between manifestations which are certainly absent and those which are not (yet) observed, and ii) between manifestations which cannot be caused by a given disorder and manifestations for which we do not know if they can or cannot be caused by this disorder. This new model, which can handle uncertainty in a non-probabilistic way, is based on possibility theory and so-called twofold fuzzy sets, previously introduced by the authors.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:19:33 GMT" } ]
1,362,700,800,000
[ [ "Dubois", "Didier", "" ], [ "Prade", "Henri", "" ] ]
1303.1467
Morten Elvang-G{\o}ransson
Morten Elvang-G{\o}ransson, Paul J. Krause, John Fox
Dialectic Reasoning with Inconsistent Information
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-114-121
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From an inconsistent database non-trivial arguments may be constructed both for a proposition, and for the contrary of that proposition. Therefore, inconsistency in a logical database causes uncertainty about which conclusions to accept. This kind of uncertainty is called logical uncertainty. We define a concept of "acceptability", which induces a means for differentiating arguments. The more acceptable an argument, the more confident we are in it. A specific interest is to use the acceptability classes to assign linguistic qualifiers to propositions, such that the qualifier assigned to a proposition reflects its logical uncertainty. A more general interest is to understand how classes of acceptability can be defined for arguments constructed from an inconsistent database, and how this notion of acceptability can be devised to reflect different criteria. Whilst concentrating on the aspects of assigning linguistic qualifiers to propositions, we also indicate the more general significance of the notion of acceptability.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:19:38 GMT" } ]
1,376,265,600,000
[ [ "Elvang-Gøransson", "Morten", "" ], [ "Krause", "Paul J.", "" ], [ "Fox", "John", "" ] ]
1303.1468
David Heckerman
David Heckerman
Causal Independence for Knowledge Acquisition and Inference
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-122-127
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I introduce a temporal belief-network representation of causal independence that a knowledge engineer can use to elicit probabilistic models. Like the current, atemporal belief-network representation of causal independence, the new representation makes knowledge acquisition tractable. Unlike the atemporal representation, however, the temporal representation can simplify inference, and does not require the use of unobservable variables. The representation is less general than is the atemporal representation, but appears to be useful for many practical applications.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:19:44 GMT" }, { "version": "v2", "created": "Sat, 16 May 2015 23:51:05 GMT" } ]
1,431,993,600,000
[ [ "Heckerman", "David", "" ] ]
1303.1469
Eric J. Horvitz
Eric J. Horvitz, Adrian Klein
Utility-Based Abstraction and Categorization
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-128-135
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We take a utility-based approach to categorization. We construct generalizations about events and actions by considering losses associated with failing to distinguish among detailed distinctions in a decision model. The utility-based methods transform detailed states of the world into more abstract categories comprised of disjunctions of the states. We show how we can cluster distinctions into groups of distinctions at progressively higher levels of abstraction, and describe rules for decision making with the abstractions. The techniques introduce a utility-based perspective on the nature of concepts, and provide a means of simplifying decision models used in automated reasoning systems. We demonstrate the techniques by describing the capabilities and output of TUBA, a program for utility-based abstraction.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:19:50 GMT" } ]
1,362,700,800,000
[ [ "Horvitz", "Eric J.", "" ], [ "Klein", "Adrian", "" ] ]
1303.1470
Kathryn Blackmond Laskey
Kathryn Blackmond Laskey
Sensitivity Analysis for Probability Assessments in Bayesian Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-136-142
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When eliciting probability models from experts, knowledge engineers may compare the results of the model with expert judgment on test scenarios, then adjust model parameters to bring the behavior of the model more in line with the expert's intuition. This paper presents a methodology for analytic computation of sensitivity values to measure the impact of small changes in a network parameter on a target probability value or distribution. These values can be used to guide knowledge elicitation. They can also be used in a gradient descent algorithm to estimate parameter values that maximize a measure of goodness-of-fit to both local and holistic probability assessments.
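As a concrete illustration of the kind of quantity being computed, the sketch below estimates, by finite differences, how a target posterior in a hypothetical two-node network responds to a small change in one network parameter; the network, parameter values, and function names are illustrative assumptions, not the paper's analytic method.

```python
def posterior_b_given_a(p_b, p_a_given_b, p_a_given_not_b):
    """P(B=1 | A=1) for a two-node network B -> A, by Bayes' rule."""
    num = p_a_given_b * p_b
    den = num + p_a_given_not_b * (1.0 - p_b)
    return num / den

# Nominal parameter values for the toy network (hypothetical numbers).
p_b, p_a_b, p_a_nb = 0.3, 0.9, 0.2

# Finite-difference sensitivity of the target P(B=1 | A=1) to P(A=1 | B=1).
eps = 1e-6
base = posterior_b_given_a(p_b, p_a_b, p_a_nb)
perturbed = posterior_b_given_a(p_b, p_a_b + eps, p_a_nb)
sensitivity = (perturbed - base) / eps
print(f"P(B|A) = {base:.4f}, sensitivity ~= {sensitivity:.4f}")
```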
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:19:56 GMT" } ]
1,362,700,800,000
[ [ "Laskey", "Kathryn Blackmond", "" ] ]
1303.1471
John F. Lemmer
John F. Lemmer
Causal Modeling
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-143-151
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Causal Models are like Dependency Graphs and Belief Nets in that they provide a structure and a set of assumptions from which a joint distribution can, in principle, be computed. Unlike Dependency Graphs, Causal Models are models of hierarchical and/or parallel processes, rather than models of distributions (partially) known to a model builder through some sort of gestalt. As such, Causal Models are more modular, easier to build, more intuitive, and easier to understand than Dependency Graph Models. Causal Models are formally defined and Dependency Graph Models are shown to be a special case of them. Algorithms supporting inference are presented. Parsimonious methods for eliciting dependent probabilities are presented.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:02 GMT" } ]
1,362,700,800,000
[ [ "Lemmer", "John F.", "" ] ]
1303.1472
Izhar Matzkevich
Izhar Matzkevich, Bruce Abramson
Some Complexity Considerations in the Combination of Belief Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-152-158
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One topic that is likely to attract an increasing amount of attention within the knowledge-based systems research community is the coordination of information provided by multiple experts. We envision a situation in which several experts independently encode information as belief networks. A potential user must then coordinate the conclusions and recommendations of these networks to derive some sort of consensus. One approach to such a consensus is the fusion of the contributed networks into a single, consensus model prior to the consideration of any case-specific data (specific observations, test results). This approach requires two types of combination procedures, one for probabilities, and one for graphs. Since the combination of probabilities is relatively well understood, the key barriers to this approach lie in the realm of graph theory. This paper provides formal definitions of some of the operations necessary to effect the necessary graphical combinations, and provides complexity analyses of these procedures. The paper's key result is that most of these operations are NP-hard, and its primary message is that the derivation of "good" consensus networks must be done heuristically.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:07 GMT" } ]
1,362,700,800,000
[ [ "Matzkevich", "Izhar", "" ], [ "Abramson", "Bruce", "" ] ]
1303.1473
Izhar Matzkevich
Izhar Matzkevich, Bruce Abramson
Deriving a Minimal I-map of a Belief Network Relative to a Target Ordering of its Nodes
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-159-165
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper identifies and solves a new optimization problem: Given a belief network (BN) and a target ordering on its variables, how can we efficiently derive its minimal I-map whose arcs are consistent with the target ordering? We present three solutions to this problem, all of which lead to directed acyclic graphs based on the original BN's recursive basis relative to the specified ordering (such a DAG is sometimes termed the boundary DAG drawn from the given BN relative to the said ordering [5]). Along the way, we also uncover an important general principle about arc reversals: when reordering a BN according to some target ordering, (while attempting to minimize the number of arcs generated), the sequence of arc reversals should follow the topological ordering induced by the original belief network's arcs to as great an extent as possible. These results promise to have a significant impact on the derivation of consensus models, as well as on other algorithms that require the reconfiguration and/or combination of BN's.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:12 GMT" } ]
1,362,700,800,000
[ [ "Matzkevich", "Izhar", "" ], [ "Abramson", "Bruce", "" ] ]
1303.1474
Kim-Leng Poh
Kim-Leng Poh, Michael R. Fehling
Probabilistic Conceptual Network: A Belief Representation Scheme for Utility-Based Categorization
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-166-173
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic conceptual network is a knowledge representation scheme designed for reasoning about concepts and categorical abstractions in utility-based categorization. The scheme combines the formalisms of abstraction and inheritance hierarchies from artificial intelligence, and probabilistic networks from decision analysis. It provides a common framework for representing conceptual knowledge, hierarchical knowledge, and uncertainty. It facilitates dynamic construction of categorization decision models at varying levels of abstraction. The scheme is applied to an automated machining problem for reasoning about the state of the machine at varying levels of abstraction in support of actions for maintaining competitiveness of the plant.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:18 GMT" } ]
1,362,700,800,000
[ [ "Poh", "Kim-Leng", "" ], [ "Fehling", "Michael R.", "" ] ]
1303.1475
Kim-Leng Poh
Kim-Leng Poh, Eric J. Horvitz
Reasoning about the Value of Decision-Model Refinement: Methods and Application
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-174-182
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the value of extending the completeness of a decision model along different dimensions of refinement. Specifically, we analyze the expected value of quantitative, conceptual, and structural refinement of decision models. We illustrate the key dimensions of refinement with examples. The analyses of value of model refinement can be used to focus the attention of an analyst or an automated reasoning system on extensions of a decision model associated with the greatest expected value.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:23 GMT" } ]
1,362,700,800,000
[ [ "Poh", "Kim-Leng", "" ], [ "Horvitz", "Eric J.", "" ] ]
1303.1476
William B. Poland
William B. Poland, Ross D. Shachter
Mixtures of Gaussians and Minimum Relative Entropy Techniques for Modeling Continuous Uncertainties
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-183-190
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Problems of probabilistic inference and decision making under uncertainty commonly involve continuous random variables. Often these are discretized to a few points, to simplify assessments and computations. An alternative approximation is to fit analytically tractable continuous probability distributions. This approach has potential simplicity and accuracy advantages, especially if variables can be transformed first. This paper shows how a minimum relative entropy criterion can drive both transformation and fitting, illustrating with a power and logarithm family of transformations and mixtures of Gaussian (normal) distributions, which allow use of efficient influence diagram methods. The fitting procedure in this case is the well-known EM algorithm. The selection of the number of components in a fitted mixture distribution is automated with an objective that trades off accuracy and computational cost.
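The EM fitting procedure mentioned for mixtures of Gaussians has the familiar form sketched below, shown here as a minimal one-dimensional, two-component version with assumed initialization choices; the transformation family and the automated selection of the number of components described in the paper are not reproduced.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture (minimal illustration)."""
    # Crude initialization from the data (an assumption for this sketch).
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point.
        dens = (w / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gaussian_mixture(data))
```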
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:29 GMT" } ]
1,362,700,800,000
[ [ "Poland", "William B.", "" ], [ "Shachter", "Ross D.", "" ] ]
1303.1477
Prakash P. Shenoy
Prakash P. Shenoy
Valuation Networks and Conditional Independence
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-191-199
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Valuation networks have been proposed as graphical representations of valuation-based systems (VBSs). The VBS framework is able to capture many uncertainty calculi including probability theory, Dempster-Shafer's belief-function theory, Spohn's epistemic belief theory, and Zadeh's possibility theory. In this paper, we show how valuation networks encode conditional independence relations. For the probabilistic case, the class of probability models encoded by valuation networks includes undirected graph models, directed acyclic graph models, directed balloon graph models, and recursive causal graph models.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:35 GMT" } ]
1,362,700,800,000
[ [ "Shenoy", "Prakash P.", "" ] ]
1303.1478
Solomon Eyal Shimony
Solomon Eyal Shimony
Relevant Explanations: Allowing Disjunctive Assignments
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-200-207
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relevance-based explanation is a scheme in which partial assignments to Bayesian belief network variables are explanations (abductive conclusions). We allow variables to remain unassigned in explanations as long as they are irrelevant to the explanation, where irrelevance is defined in terms of statistical independence. When multiple-valued variables exist in the system, especially when subsets of values correspond to natural types of events, the overspecification problem, alleviated by independence-based explanation, resurfaces. As a solution to that, as well as for addressing the question of explanation specificity, it is desirable to collapse such a subset of values into a single value on the fly. The equivalent method, which is adopted here, is to generalize the notion of assignments to allow disjunctive assignments. We proceed to define generalized independence based explanations as maximum posterior probability independence based generalized assignments (GIB-MAPs). GIB assignments are shown to have certain properties that ease the design of algorithms for computing GIB-MAPs. One such algorithm is discussed here, as well as suggestions for how other algorithms may be adapted to compute GIB-MAPs. GIB-MAP explanations still suffer from instability, a problem which may be addressed using "approximate" conditional independence as a condition for irrelevance.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:41 GMT" } ]
1,362,700,800,000
[ [ "Shimony", "Solomon Eyal", "" ] ]
1303.1479
Sampath Srinivas
Sampath Srinivas
A Generalization of the Noisy-Or Model
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-208-215
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Noisy-Or model is convenient for describing a class of uncertain relationships in Bayesian networks [Pearl 1988]. Pearl describes the Noisy-Or model for Boolean variables. Here we generalize the model to n-ary input and output variables and to arbitrary functions other than the Boolean OR function. This generalization is a useful modeling aid for construction of Bayesian networks. We illustrate with some examples including digital circuit diagnosis and network reliability analysis.
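The Boolean noisy-OR that is being generalized can be written down directly. The sketch below builds the standard Boolean noisy-OR distribution and indicates where an arbitrary deterministic function would replace the OR; the generalized n-ary construction of the paper itself is not reproduced, and the numbers are hypothetical.

```python
from itertools import product

def noisy_or_cpt(inhibitor_probs):
    """Standard Boolean noisy-OR: P(Y=0 | x) = prod_i q_i^{x_i},
    where q_i is the probability that an active cause x_i is inhibited."""
    n = len(inhibitor_probs)
    cpt = {}
    for x in product([0, 1], repeat=n):
        p_y0 = 1.0
        for xi, qi in zip(x, inhibitor_probs):
            if xi == 1:
                p_y0 *= qi
        cpt[x] = {0: p_y0, 1: 1.0 - p_y0}
    return cpt

# Two causes with inhibitor probabilities 0.1 and 0.3 (hypothetical numbers).
print(noisy_or_cpt([0.1, 0.3]))
# Generalizing: replace the OR combination above with any deterministic
# function of the (noisily passed) cause values, e.g. AND, XOR, or a
# multi-valued function, to obtain a "noisy-f" gate.
```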
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:46 GMT" } ]
1,362,700,800,000
[ [ "Srinivas", "Sampath", "" ] ]
1303.1480
Fahiem Bacchus
Fahiem Bacchus
Using First-Order Probability Logic for the Construction of Bayesian Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-219-226
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a mechanism for constructing graphical models, specifically Bayesian networks, from a knowledge base of general probabilistic information. The unique feature of our approach is that it uses a powerful first-order probabilistic logic for expressing the general knowledge base. This logic allows for the representation of a wide range of logical and probabilistic information. The model construction procedure we propose uses notions from direct inference to identify pieces of local statistical information from the knowledge base that are most appropriate to the particular event we want to reason about. These pieces are composed to generate a joint probability distribution specified as a Bayesian network. Although there are fundamental difficulties in dealing with fully general knowledge, our procedure is practical for quite rich knowledge bases and it supports the construction of a far wider range of networks than allowed for by current template technology.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:51 GMT" } ]
1,362,700,800,000
[ [ "Bacchus", "Fahiem", "" ] ]
1303.1481
Marie desJardins
Marie desJardins
Representing and Reasoning With Probabilistic Knowledge: A Bayesian Approach
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-227-234
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
PAGODA (Probabilistic Autonomous Goal-Directed Agent) is a model for autonomous learning in probabilistic domains [desJardins, 1992] that incorporates innovative techniques for using the agent's existing knowledge to guide and constrain the learning process and for representing, reasoning with, and learning probabilistic knowledge. This paper describes the probabilistic representation and inference mechanism used in PAGODA. PAGODA forms theories about the effects of its actions and the world state on the environment over time. These theories are represented as conditional probability distributions. A restriction is imposed on the structure of the theories that allows the inference mechanism to find a unique predicted distribution for any action and world state description. These restricted theories are called uniquely predictive theories. The inference mechanism, Probability Combination using Independence (PCI), uses minimal independence assumptions to combine the probabilities in a theory to make probabilistic predictions.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:20:56 GMT" } ]
1,362,700,800,000
[ [ "desJardins", "Marie", "" ] ]
1303.1482
John W. Egar
John W. Egar, Mark A. Musen
Graph-Grammar Assistance for Automated Generation of Influence Diagrams
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-235-242
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most difficult aspects of modeling complex dilemmas in decision-analytic terms is composing a diagram of relevance relations from a set of domain concepts. Decision models in domains such as medicine, however, exhibit certain prototypical patterns that can guide the modeling process. Medical concepts can be classified according to semantic types that have characteristic positions and typical roles in an influence-diagram model. We have developed a graph-grammar production system that uses such inherent interrelationships among medical terms to facilitate the modeling of medical decisions.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:21:02 GMT" } ]
1,362,700,800,000
[ [ "Egar", "John W.", "" ], [ "Musen", "Mark A.", "" ] ]
1303.1483
Wai Lam
Wai Lam, Fahiem Bacchus
Using Causal Information and Local Measures to Learn Bayesian Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-243-250
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In previous work we developed a method of learning Bayesian Network models from raw data. This method relies on the well known minimal description length (MDL) principle. The MDL principle is particularly well suited to this task as it allows us to trade off, in a principled way, the accuracy of the learned network against its practical usefulness. In this paper we present some new results that have arisen from our work. In particular, we present a new local way of computing the description length. This allows us to make significant improvements in our search algorithm. In addition, we modify our algorithm so that it can take into account partial domain information that might be provided by a domain expert. The local computation of description length also opens the door for local refinement of an existing network. The feasibility of our approach is demonstrated by experiments involving networks of a practical size.
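One common way to make such a description length local, decomposing it per node family as described, is to score each family with its data log-likelihood minus a parameter-coding penalty. The sketch below is one standard instantiation of that idea (essentially an MDL/BIC-style score), not the paper's exact encoding; the data format and variable names are assumptions.

```python
import math
from collections import Counter

def local_mdl_score(data, child, parents, cardinality):
    """Decomposable MDL-style score of one family (child, parents):
    log-likelihood of the data minus (k/2) log N for k free parameters."""
    n = len(data)
    family_counts = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / parent_counts[pa]) for (pa, _), c in family_counts.items())
    q = 1
    for p in parents:
        q *= cardinality[p]
    k = (cardinality[child] - 1) * q  # free parameters in the child's CPT
    return loglik - 0.5 * k * math.log(n)

# Tiny example: rows are dicts of variable -> value (hypothetical data).
rows = [{"A": 0, "B": 0}, {"A": 0, "B": 0}, {"A": 1, "B": 1}, {"A": 1, "B": 0}]
print(local_mdl_score(rows, child="B", parents=["A"], cardinality={"A": 2, "B": 2}))
```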
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:21:10 GMT" } ]
1,362,700,800,000
[ [ "Lam", "Wai", "" ], [ "Bacchus", "Fahiem", "" ] ]
1303.1484
Ron Musick
Ron Musick
Minimal Assumption Distribution Propagation in Belief Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-251-258
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As belief networks are used to model increasingly complex situations, the need to automatically construct them from large databases will become paramount. This paper concentrates on solving a part of the belief network induction problem: that of learning the quantitative structure (the conditional probabilities), given the qualitative structure. In particular, a theory is presented that shows how to propagate inference distributions in a belief network, with the only assumption being that the given qualitative structure is correct. Most inference algorithms must make at least this assumption. The theory is based on four network transformations that are sufficient for any inference in a belief network. Furthermore, the claim is made that contrary to popular belief, error will not necessarily grow as the inference chain grows. Instead, for QBN belief nets induced from large enough samples, the error is more likely to decrease as the size of the inference chain increases.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:21:15 GMT" } ]
1,362,700,800,000
[ [ "Musick", "Ron", "" ] ]
1303.1485
Moninder Singh
Moninder Singh, Marco Valtorta
An Algorithm for the Construction of Bayesian Network Structures from Data
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-259-265
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous algorithms for the construction of Bayesian belief network structures from data have been either highly dependent on conditional independence (CI) tests, or have required an ordering on the nodes to be supplied by the user. We present an algorithm that integrates these two approaches - CI tests are used to generate an ordering on the nodes from the database which is then used to recover the underlying Bayesian network structure using a non-CI-based method. Results of preliminary evaluation of the algorithm on two networks (ALARM and LED) are presented. We also discuss some algorithm performance issues and open problems.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:21:21 GMT" } ]
1,362,700,800,000
[ [ "Singh", "Moninder", "" ], [ "Valtorta", "Marco", "" ] ]
1303.1486
Joe Suzuki
Joe Suzuki
A Construction of Bayesian Networks from Databases Based on an MDL Principle
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-266-273
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses learning stochastic rules, especially on an inter-attribute relation, based on a Minimum Description Length (MDL) principle with a finite number of examples, assuming an application to the design of intelligent relational database systems. The stochastic rule in this paper consists of a model giving the structure, like the dependencies of a Bayesian Belief Network (BBN), and some stochastic parameters, each indicating a conditional probability of an attribute value given the state determined by the other attributes' values in the same record. In particular, we propose an extended version of the algorithm of Chow and Liu, in that our learning algorithm selects the model from the range in which the dependencies among the attributes are represented by a general (plural) number of trees.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:21:27 GMT" } ]
1,362,700,800,000
[ [ "Suzuki", "Joe", "" ] ]
1303.1487
Soe-Tsyr Yuan
Soe-Tsyr Yuan
Knowledge-Based Decision Model Construction for Hierarchical Diagnosis: A Preliminary Report
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-274-281
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerous methods for probabilistic reasoning in large, complex belief or decision networks are currently being developed. There has been little research on automating the dynamic, incremental construction of decision models. A uniform value-driven method of decision model construction is proposed for hierarchical complete diagnosis. Hierarchical complete diagnostic reasoning is formulated as a stochastic process and modeled using influence diagrams. Given observations, this method creates decision models in order to obtain the best actions sequentially for locating and repairing a fault at minimum cost. This method constructs decision models incrementally, interleaving probe actions with model construction and evaluation. The method treats meta-level and base-level tasks uniformly. That is, the method takes a decision-theoretic look at the control of search in causal pathways and structural hierarchies.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:21:32 GMT" } ]
1,362,700,800,000
[ [ "Yuan", "Soe-Tsyr", "" ] ]
1303.1488
Lisa J. Burnell
Lisa J. Burnell, Eric J. Horvitz
A Synthesis of Logical and Probabilistic Reasoning for Program Understanding and Debugging
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-285-291
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe the integration of logical and uncertain reasoning methods to identify the likely source and location of software problems. To date, software engineers have had few tools for identifying the sources of error in complex software packages. We describe a method for diagnosing software problems through combining logical and uncertain reasoning analyses. Our preliminary results suggest that such methods can be of value in directing the attention of software engineers to paths of an algorithm that have the highest likelihood of harboring a programming error.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:21:38 GMT" } ]
1,362,700,800,000
[ [ "Burnell", "Lisa J.", "" ], [ "Horvitz", "Eric J.", "" ] ]
1303.1489
Peter Che
Peter Che, Richard E. Neapolitan, James Kenevan, Martha Evens
An Implementation of a Method for Computing the Uncertainty in Inferred Probabilities in Belief Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-292-300
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years the belief network has been used increasingly to model systems in AI that must perform uncertain inference. The development of efficient algorithms for probabilistic inference in belief networks has been a focus of much research in AI. Efficient algorithms for certain classes of belief networks have been developed, but the problem of reporting the uncertainty in inferred probabilities has received little attention. A system should not only be capable of reporting the values of inferred probabilities and/or the favorable choices of a decision; it should report the range of possible error in the inferred probabilities and/or choices. Two methods have been developed and implemented for determining the variance in inferred probabilities in belief networks. These methods, the Approximate Propagation Method and the Monte Carlo Integration Method are discussed and compared in this paper.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:21:44 GMT" } ]
1,362,700,800,000
[ [ "Che", "Peter", "" ], [ "Neapolitan", "Richard E.", "" ], [ "Kenevan", "James", "" ], [ "Evens", "Martha", "" ] ]
1303.1490
Bruce D'Ambrosio
Bruce D'Ambrosio
Incremental Probabilistic Inference
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-301-308
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Propositional representation services such as truth maintenance systems offer powerful support for incremental, interleaved, problem-model construction and evaluation. Probabilistic inference systems, in contrast, have lagged behind in supporting this incrementality typically demanded by problem solvers. The problem, we argue, is that the basic task of probabilistic inference is typically formulated at too large a grain-size. We show how a system built around a smaller grain-size inference task can have the desired incrementality and serve as the basis for a low-level (propositional) probabilistic representation service.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:21:48 GMT" } ]
1,362,700,800,000
[ [ "D'Ambrosio", "Bruce", "" ] ]
1303.1491
Thomas L. Dean
Thomas L. Dean, Leslie Pack Kaelbling, Jak Kirman, Ann Nicholson
Deliberation Scheduling for Time-Critical Sequential Decision Making
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-309-316
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a method for time-critical decision making involving sequential tasks and stochastic processes. The method employs several iterative refinement routines for solving different aspects of the decision making problem. This paper concentrates on the meta-level control problem of deliberation scheduling, allocating computational resources to these routines. We provide different models corresponding to optimization problems that capture the different circumstances and computational strategies for decision making under time constraints. We consider precursor models in which all decision making is performed prior to execution and recurrent models in which decision making is performed in parallel with execution, accounting for the states observed during execution and anticipating future states. We describe algorithms for precursor and recurrent models and provide the results of our empirical investigations to date.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:21:54 GMT" } ]
1,362,700,800,000
[ [ "Dean", "Thomas L.", "" ], [ "Kaelbling", "Leslie Pack", "" ], [ "Kirman", "Jak", "" ], [ "Nicholson", "Ann", "" ] ]
1303.1492
Marek J. Druzdzel
Marek J. Druzdzel, Max Henrion
Intercausal Reasoning with Uninstantiated Ancestor Nodes
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-317-325
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intercausal reasoning is a common inference pattern involving probabilistic dependence of causes of an observed common effect. The sign of this dependence is captured by a qualitative property called product synergy. The current definition of product synergy is insufficient for intercausal reasoning where there are additional uninstantiated causes of the common effect. We propose a new definition of product synergy and prove its adequacy for intercausal reasoning with direct and indirect evidence for the common effect. The new definition is based on a new matrix property, half positive semi-definiteness, a weakened form of matrix positive semi-definiteness.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:00 GMT" } ]
1,362,700,800,000
[ [ "Druzdzel", "Marek J.", "" ], [ "Henrion", "Max", "" ] ]
1303.1493
Dan Geiger
Dan Geiger, David Heckerman
Inference Algorithms for Similarity Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-326-334
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine two types of similarity networks each based on a distinct notion of relevance. For both types of similarity networks we present an efficient inference algorithm that works under the assumption that every event has a nonzero probability of occurrence. Another inference algorithm is developed for type 1 similarity networks that works under no restriction, albeit less efficiently.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:08 GMT" }, { "version": "v2", "created": "Sat, 16 May 2015 23:49:37 GMT" } ]
1,431,993,600,000
[ [ "Geiger", "Dan", "" ], [ "Heckerman", "David", "" ] ]
1303.1494
Paul E. Lehner
Paul E. Lehner, Azar Sadigh
Two Procedures for Compiling Influence Diagrams
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-335-341
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two algorithms are presented for "compiling" influence diagrams into a set of simple decision rules. These decision rules define simple-to-execute, complete, consistent, and near-optimal decision procedures. These compilation algorithms can be used to derive decision procedures for human teams solving time constrained decision problems.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:13 GMT" } ]
1,362,700,800,000
[ [ "Lehner", "Paul E.", "" ], [ "Sadigh", "Azar", "" ] ]
1303.1495
Zhaoyu Li
Zhaoyu Li, Bruce D'Ambrosio
An efficient approach for finding the MPE in belief networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-342-349
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a belief network with evidence, the task of finding the I most probable explanations (MPE) in the belief network is that of identifying and ordering the I most probable instantiations of the non-evidence nodes of the belief network. Although many approaches have been proposed for solving this problem, most work only for restricted topologies (i.e., singly connected belief networks). In this paper, we will present a new approach for finding I MPEs in an arbitrary belief network. First, we will present an algorithm for finding the MPE in a belief network. Then, we will present a linear time algorithm for finding the next MPE after finding the first MPE. And finally, we will discuss the problem of finding the MPE for a subset of variables of a belief network, and show that the problem can be efficiently solved by this approach.
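To fix ideas about what is being computed, the sketch below is a brute-force version of "the I most probable explanations": it simply enumerates complete instantiations of the non-evidence variables and ranks them by joint probability. This is an illustration of the task only, not the paper's efficient algorithm; the toy network and its numbers are hypothetical.

```python
from itertools import product

def brute_force_mpe(joint, variables, evidence, top_i=3):
    """Enumerate instantiations of the non-evidence (binary) variables and
    rank them by the joint probability consistent with the evidence."""
    free = [v for v in variables if v not in evidence]
    scored = []
    for values in product([0, 1], repeat=len(free)):
        assignment = dict(evidence)
        assignment.update(zip(free, values))
        scored.append((joint(assignment), assignment))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_i]

# Toy joint over binary A, B, C factored as P(A) P(B|A) P(C|B).
def joint(a):
    p_a = 0.6 if a["A"] else 0.4
    p_b = (0.9 if a["B"] else 0.1) if a["A"] else (0.2 if a["B"] else 0.8)
    p_c = (0.7 if a["C"] else 0.3) if a["B"] else (0.1 if a["C"] else 0.9)
    return p_a * p_b * p_c

print(brute_force_mpe(joint, ["A", "B", "C"], evidence={"C": 1}))
```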
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:18 GMT" } ]
1,362,700,800,000
[ [ "Li", "Zhaoyu", "" ], [ "D'Ambrosio", "Bruce", "" ] ]
1303.1496
Todd Michael Mansell
Todd Michael Mansell
A Method for Planning Given Uncertain and Incomplete Information
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-350-358
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes ongoing research into planning in an uncertain environment. In particular, it introduces U-Plan, a planning system that constructs quantitatively ranked plans given an incomplete description of the state of the world. U-Plan uses a Dempster-Shafer interval to characterise uncertain and incomplete information about the state of the world. The planner takes as input what is known about the world, and constructs a number of possible initial states with representations at different abstraction levels. A plan is constructed for the initial state with the greatest support, and this plan is tested to see if it will work for other possible initial states. All, part, or none of the existing plans may be used in the generation of the plans for the remaining possible worlds. Planning takes place in an abstraction hierarchy where strategic decisions are made before tactical decisions. A super-plan is then constructed, based on merging the set of plans and the appropriately timed acquisition of essential knowledge, which is used to decide between plan alternatives. U-Plan usually produces a super-plan in less time than a classical planner would take to produce a set of plans, one for each possible world.
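The Dempster-Shafer interval referred to is the standard belief/plausibility pair induced by a basic mass assignment m over subsets of the frame; stated here only as background, not as text from the abstract:

```latex
\mathrm{Bel}(A) \;=\; \sum_{B \subseteq A} m(B),
\qquad
\mathrm{Pl}(A) \;=\; \sum_{B \cap A \neq \emptyset} m(B) \;=\; 1 - \mathrm{Bel}(\bar{A}),
\qquad
\mathrm{Bel}(A) \;\le\; \mathrm{Pl}(A).
```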
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:24 GMT" } ]
1,362,700,800,000
[ [ "Mansell", "Todd Michael", "" ] ]
1303.1497
David L Poole
David L. Poole
The use of conflicts in searching Bayesian networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-359-367
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discusses how conflicts (as used by the consistency-based diagnosis community) can be adapted to be used in a search-based algorithm for computing prior and posterior probabilities in discrete Bayesian Networks. This is an "anytime" algorithm, that at any stage can estimate the probabilities and give an error bound. Whereas the most popular Bayesian net algorithms exploit the structure of the network for efficiency, we exploit probability distributions for efficiency; this algorithm is most suited to the case with extreme probabilities. This paper presents a solution to the inefficiencies found in naive algorithms, and shows how the tools of the consistency-based diagnosis community (namely conflicts) can be used effectively to improve the efficiency. Empirical results with networks having tens of thousands of nodes are presented.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:29 GMT" } ]
1,362,700,800,000
[ [ "Poole", "David L.", "" ] ]
1303.1498
Carlos Rojas-Guzman
Carlos Rojas-Guzman, Mark A. Kramer
GALGO: A Genetic ALGOrithm Decision Support Tool for Complex Uncertain Systems Modeled with Bayesian Belief Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-368-375
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian belief networks can be used to represent and to reason about complex systems with uncertain, incomplete and conflicting information. Belief networks are graphs encoding and quantifying probabilistic dependence and conditional independence among variables. One type of reasoning of interest in diagnosis is called abductive inference (determination of the global most probable system description given the values of any partial subset of variables). In some cases, abductive inference can be performed with exact algorithms using distributed network computations but it is an NP-hard problem and complexity increases drastically with the presence of undirected cycles, number of discrete states per variable, and number of variables in the network. This paper describes an approximate method based on genetic algorithms to perform abductive inference in large, multiply connected networks for which complexity is a concern when using most exact methods and for which systematic search methods are not feasible. The theoretical adequacy of the method is discussed and preliminary experimental results are presented.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:36 GMT" } ]
1,362,700,800,000
[ [ "Rojas-Guzman", "Carlos", "" ], [ "Kramer", "Mark A.", "" ] ]
1303.1499
Sumit Sarkar
Sumit Sarkar
Using Tree-Decomposable Structures to Approximate Belief Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-376-382
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tree structures have been shown to provide an efficient framework for propagating beliefs [Pearl,1986]. This paper studies the problem of finding an optimal approximating tree. The star decomposition scheme for sets of three binary variables [Lazarsfeld,1966; Pearl,1986] is shown to enhance the class of probability distributions that can support tree structures; such structures are called tree-decomposable structures. The logarithm scoring rule is found to be an appropriate optimality criterion to evaluate different tree-decomposable structures. Characteristics of such structures closest to the actual belief network are identified using the logarithm rule, and greedy and exact techniques are developed to find the optimal approximation.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:41 GMT" } ]
1,362,700,800,000
[ [ "Sarkar", "Sumit", "" ] ]
1303.1500
Ross D. Shachter
Ross D. Shachter, Pierre Ndilikilikesha
Using Potential Influence Diagrams for Probabilistic Inference and Decision Making
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-383-390
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The potential influence diagram is a generalization of the standard "conditional" influence diagram, a directed network representation for probabilistic inference and decision analysis [Ndilikilikesha, 1991]. It allows efficient inference calculations corresponding exactly to those on undirected graphs. In this paper, we explore the relationship between potential and conditional influence diagrams and provide insight into the properties of the potential influence diagram. In particular, we show how to convert a potential influence diagram into a conditional influence diagram, and how to view the potential influence diagram operations in terms of the conditional influence diagram.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:47 GMT" } ]
1,362,700,800,000
[ [ "Shachter", "Ross D.", "" ], [ "Ndilikilikesha", "Pierre", "" ] ]
1303.1501
Tom S. Verma
Tom S. Verma, Judea Pearl
Deciding Morality of Graphs is NP-complete
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-391-399
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to find a causal explanation for data presented in the form of covariance and concentration matrices it is necessary to decide if the graph formed by such associations is a projection of a directed acyclic graph (dag). We show that the general problem of deciding whether such a dag exists is NP-complete.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:53 GMT" } ]
1,362,700,800,000
[ [ "Verma", "Tom S.", "" ], [ "Pearl", "Judea", "" ] ]
1303.1502
Nevin Lianwen Zhang
Nevin Lianwen Zhang, Runping Qi, David L. Poole
Incremental computation of the value of perfect information in stepwise-decomposable influence diagrams
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-400-407
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To determine the value of perfect information in an influence diagram, one needs first to modify the diagram to reflect the change in information availability, and then to compute the optimal expected values of both the original diagram and the modified diagram. The value of perfect information is the difference between the two optimal expected values. This paper is about how to speed up the computation of the optimal expected value of the modified diagram by making use of the intermediate computation results obtained when computing the optimal expected value of the original diagram.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:22:59 GMT" } ]
1,362,700,800,000
[ [ "Zhang", "Nevin Lianwen", "" ], [ "Qi", "Runping", "" ], [ "Poole", "David L.", "" ] ]
1303.1503
Salem Benferhat
Salem Benferhat, Didier Dubois, Henri Prade
Argumentative inference in uncertain and inconsistent knowledge bases
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-411-419
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents and discusses several methods for reasoning from inconsistent knowledge bases. A so-called argumentative consequence relation, which takes into account the existence of consistent arguments in favor of a conclusion and the absence of consistent arguments in favor of its contrary, is particularly investigated. Flat knowledge bases, i.e. without any priority between their elements, as well as prioritized ones, where some elements are considered as more strongly entrenched than others, are studied under different consequence relations. Lastly, a paraconsistent-like treatment of prioritized knowledge bases is proposed, where both the level of entrenchment and the level of paraconsistency attached to a formula are propagated. The priority levels are handled in the framework of possibility theory.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:23:05 GMT" } ]
1,362,700,800,000
[ [ "Benferhat", "Salem", "" ], [ "Dubois", "Didier", "" ], [ "Prade", "Henri", "" ] ]
1303.1504
Adnan Darwiche
Adnan Darwiche
Argument Calculus and Networks
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-420-427
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major reason behind the success of probability calculus is that it possesses a number of valuable tools, which are based on the notion of probabilistic independence. In this paper, I identify a notion of logical independence that makes some of these tools available to a class of propositional databases, called argument databases. Specifically, I suggest a graphical representation of argument databases, called argument networks, which resemble Bayesian networks. I also suggest an algorithm for reasoning with argument networks, which resembles a basic algorithm for reasoning with Bayesian networks. Finally, I show that argument networks have several applications: Nonmonotonic reasoning, truth maintenance, and diagnosis.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:23:11 GMT" } ]
1,362,700,800,000
[ [ "Darwiche", "Adnan", "" ] ]
1303.1505
John Fox
John Fox, Paul J. Krause, Morten Elvang-G{\o}ransson
Argumentation as a General Framework for Uncertain Reasoning
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-428-434
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Argumentation is the process of constructing arguments about propositions, and the assignment of statements of confidence to those propositions based on the nature and relative strength of their supporting arguments. The process is modelled as a labelled deductive system, in which propositions are doubly labelled with the grounds on which they are based and a representation of the confidence attached to the argument. Argument construction is captured by a generalized argument consequence relation based on the {∧, →}-fragment of minimal logic. Arguments can be aggregated by a variety of numeric and symbolic flattening functions. This approach appears to shed light on the common logical structure of a variety of quantitative, qualitative and defeasible uncertainty calculi.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:23:15 GMT" } ]
1,362,700,800,000
[ [ "Fox", "John", "" ], [ "Krause", "Paul J.", "" ], [ "Elvang-Gøransson", "Morten", "" ] ]
1303.1506
Simon Parsons
Simon Parsons, E. H. Mamdani
On reasoning in networks with qualitative uncertainty
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-435-442
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper some initial work towards a new approach to qualitative reasoning under uncertainty is presented. This method is not only applicable to qualitative probabilistic reasoning, as is the case with other methods, but also allows the qualitative propagation within networks of values based upon possibility theory and Dempster-Shafer evidence theory. The method is applied to two simple networks from which a large class of directed graphs may be constructed. The results of this analysis are used to compare the qualitative behaviour of the three major quantitative uncertainty handling formalisms, and to demonstrate that the qualitative integration of the formalisms is possible under certain assumptions.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:23:21 GMT" } ]
1,362,700,800,000
[ [ "Parsons", "Simon", "" ], [ "Mamdani", "E. H.", "" ] ]
1303.1507
Michael S. K. M. Wong
Michael S. K. M. Wong, Z. W. Wang
Qualitative Measures of Ambiguity
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-443-450
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a qualitative measure of ambiguity and analyses its relationship with other measures of uncertainty. Probability measures relative likelihoods, while ambiguity measures vagueness surrounding those judgments. Ambiguity is an important representation of uncertain knowledge. It deals with a different type of uncertainty from that modeled by subjective probability or belief.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:23:27 GMT" } ]
1,362,700,800,000
[ [ "Wong", "Michael S. K. M.", "" ], [ "Wang", "Z. W.", "" ] ]