Column schema (field: type, value range):
id: stringlengths (9 to 10)
submitter: stringlengths (5 to 47)
authors: stringlengths (5 to 1.72k)
title: stringlengths (11 to 234)
comments: stringlengths (1 to 491)
journal-ref: stringlengths (4 to 396)
doi: stringlengths (13 to 97)
report-no: stringlengths (4 to 138)
categories: stringclasses (1 value)
license: stringclasses (9 values)
abstract: stringlengths (29 to 3.66k)
versions: listlengths (1 to 21)
update_date: int64 (1,180B to 1,718B)
authors_parsed: sequencelengths (1 to 98)
1303.1508
Robert F. Bordley
Robert F. Bordley
A Bayesian Variant of Shafer's Commonalities For Modelling Unforeseen Events
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-453-460
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shafer's theory of belief and the Bayesian theory of probability are two alternative and mutually inconsistent approaches toward modelling uncertainty in artificial intelligence. To help reduce the conflict between these two approaches, this paper reexamines expected utility theory, from which Bayesian probability theory is derived. Expected utility theory requires the decision maker to assign a utility to each decision conditioned on every possible event that might occur. But frequently the decision maker cannot foresee all the events that might occur, i.e., one of the possible events is the occurrence of an unforeseen event. So once we acknowledge the existence of unforeseen events, we need to develop some way of assigning utilities to decisions conditioned on unforeseen events. The commonsensical solution to this problem is to assign similar utilities to events which are similar. Implementing this commonsensical solution is equivalent to replacing Bayesian subjective probabilities over the space of foreseen and unforeseen events by random set theory probabilities over the space of foreseen events. This leads to an expected utility principle in which normalized variants of Shafer's commonalities play the role of subjective probabilities. Hence allowing for unforeseen events in decision analysis causes Bayesian probability theory to become much more similar to Shaferian theory.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:23:32 GMT" } ]
1,362,700,800,000
[ [ "Bordley", "Robert F.", "" ] ]
1303.1509
Craig Boutilier
Craig Boutilier
The Probability of a Possibility: Adding Uncertainty to Default Rules
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-461-468
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a semantics for adding uncertainty to conditional logics for default reasoning and belief revision. We are able to treat conditional sentences as statements of conditional probability, and express rules for revision such as "If A were believed, then B would be believed to degree p." This method of revision extends conditionalization by allowing meaningful revision by sentences whose probability is zero. This is achieved through the use of counterfactual probabilities. Thus, our system accounts for the best properties of qualitative methods of update (in particular, the AGM theory of revision) and probabilistic methods. We also show how our system can be viewed as a unification of probability theory and possibility theory, highlighting their orthogonality and providing a means for expressing the probability of a possibility. We also demonstrate the connection to Lewis's method of imaging.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:23:38 GMT" } ]
1,362,700,800,000
[ [ "Boutilier", "Craig", "" ] ]
1303.1510
Dimiter Driankov
Dimiter Driankov, Jerome Lang
Possibilistic decreasing persistence
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-469-476
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key issue in the handling of temporal data is the treatment of persistence; in most approaches it consists in inferring defeasible conclusions by extrapolating from the actual knowledge of the history of the world; we propose here a gradual modelling of persistence, following the idea that persistence is decreasing (the further we are from the last time point where a fluent is known to be true, the less certainly true the fluent is); it is based on possibility theory, which has strong relations with other well-known ordering-based approaches to nonmonotonic reasoning. We compare our approach with Dean and Kanazawa's probabilistic projection. We give a formal modelling of the decreasing persistence problem. Lastly, we show how to infer nonmonotonic conclusions using the principle of decreasing persistence.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:23:43 GMT" } ]
1,362,700,800,000
[ [ "Driankov", "Dimiter", "" ], [ "Lang", "Jerome", "" ] ]
1303.1511
Jiwen W. Guan
Jiwen W. Guan, David A. Bell
Discounting and Combination Operations in Evidential Reasoning
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-477-484
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evidential reasoning is now a leading topic in Artificial Intelligence. Evidence is represented by a variety of evidential functions. Evidential reasoning is carried out by certain kinds of fundamental operation on these functions. This paper discusses two of the basic operations on evidential functions, the discount operation and the well-known orthogonal sum operation. We show that the discount operation is not commutative with the orthogonal sum operation, and derive expressions for the two operations applied to the various evidential functions.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:23:49 GMT" } ]
1,362,700,800,000
[ [ "Guan", "Jiwen W.", "" ], [ "Bell", "David A.", "" ] ]
1303.1512
Jurg Kohlas
Jurg Kohlas, Paul-Andre Monney
Probabilistic Assumption-Based Reasoning
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-485-491
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classical propositional assumption-based model is extended to incorporate probabilities for the assumptions. Then it is placed into the framework of evidence theory. Several authors like Laskey, Lehner (1989) and Provan (1990) already proposed a similar point of view, but the first paper is not as much concerned with mathematical foundations, and Provan's paper develops in a different direction. Here we thoroughly develop and present the mathematical foundations of this theory, together with computational methods adapted from Reiter, De Kleer (1987) and Inoue (1992). Finally, recently proposed techniques for computing degrees of support are presented.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:23:56 GMT" } ]
1,362,700,800,000
[ [ "Kohlas", "Jurg", "" ], [ "Monney", "Paul-Andre", "" ] ]
1303.1513
Serafin Moral
Serafin Moral, Luis M. de Campos
Partially Specified Belief Functions
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-492-499
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a procedure to determine a complete belief function from the known values of belief for some of the subsets of the frame of discernment. The method is based on the principle of minimum commitment and a new principle called the focusing principle. This additional principle is based on the idea that belief is specified for the most relevant sets: the focal elements. The resulting procedure is compared with existing methods of building complete belief functions: the minimum specificity principle and the least commitment principle.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:24:02 GMT" } ]
1,362,700,800,000
[ [ "Moral", "Serafin", "" ], [ "de Campos", "Luis M.", "" ] ]
1303.1514
Philippe Smets
Philippe Smets
Jeffrey's rule of conditioning generalized to belief functions
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-500-505
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Jeffrey's rule of conditioning has been proposed in order to revise a probability measure by another probability function. We generalize it within the framework of the models based on belief functions. We show that several forms of Jeffrey's conditionings can be defined that correspond to the geometrical rule of conditioning and to Dempster's rule of conditioning, respectively.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:24:07 GMT" } ]
1,362,700,800,000
[ [ "Smets", "Philippe", "" ] ]
1303.1515
Fengming Song
Fengming Song, Ping Liang
Inference with Possibilistic Evidence
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-506-514
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the concept of possibilistic evidence, which is both a possibility distribution and a body of evidence over an infinite universe of discourse, is proposed. The inference with possibilistic evidence is investigated based on a unified inference framework maintaining both the compatibility of concepts and the consistency of the probability logic.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:24:13 GMT" } ]
1,362,700,800,000
[ [ "Song", "Fengming", "" ], [ "Liang", "Ping", "" ] ]
1303.1516
Carl G. Wagner
Carl G. Wagner, Bruce Tonn
Constructing Lower Probabilities
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-515-518
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An elaboration of Dempster's method of constructing belief functions suggests a broadly applicable strategy for constructing lower probabilities under a variety of evidentiary constraints.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:24:18 GMT" } ]
1,362,700,800,000
[ [ "Wagner", "Carl G.", "" ], [ "Tonn", "Bruce", "" ] ]
1303.1517
Pei Wang
Pei Wang
Belief Revision in Probability Theory
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-519-526
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a probability-based reasoning system, Bayes' theorem and its variations are often used to revise the system's beliefs. However, if the explicit conditions and the implicit conditions of probability assignments are properly distinguished, it follows that Bayes' theorem is not a generally applicable revision rule. Upon properly distinguishing belief revision from belief updating, we see that Jeffrey's rule and its variations are not revision rules, either. Without these distinctions, the limitation of the Bayesian approach is often ignored or underestimated. Revision, in its general form, cannot be done in the Bayesian approach, because a probability distribution function alone does not contain the information needed by the operation.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:24:24 GMT" } ]
1,362,700,800,000
[ [ "Wang", "Pei", "" ] ]
1303.1518
Nic Wilson
Nic Wilson
The Assumptions Behind Dempster's Rule
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-527-534
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines the concept of a combination rule for belief functions. It is shown that two fairly simple and apparently reasonable assumptions determine Dempster's rule, giving a new justification for it.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:24:29 GMT" } ]
1,362,700,800,000
[ [ "Wilson", "Nic", "" ] ]
1303.1519
Hong Xu
Hong Xu, Yen-Teh Hsia, Philippe Smets
A Belief-Function Based Decision Support System
Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
null
null
UAI-P-1993-PG-535-542
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a decision support system based on belief functions and the pignistic transformation. The system is an integration of an evidential system for belief function propagation and a valuation-based system for Bayesian decision analysis. The two subsystems are connected through the pignistic transformation. The system takes as inputs the user's "gut feelings" about a situation and suggests what, if anything, is to be tested and in what order, and it does so with a user-friendly interface.
[ { "version": "v1", "created": "Wed, 6 Mar 2013 14:24:35 GMT" } ]
1,362,700,800,000
[ [ "Xu", "Hong", "" ], [ "Hsia", "Yen-Teh", "" ], [ "Smets", "Philippe", "" ] ]
1303.2013
J. G. Wolff
J Gerard Wolff
Computing as compression: the SP theory of intelligence
8 pages, 2 figures. arXiv admin note: text overlap with arXiv:1212.0229
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides an overview of the SP theory of intelligence and its central idea that artificial intelligence, mainstream computing, and much of human perception and cognition, may be understood as information compression. The background and origins of the SP theory are described, as are the main elements of the theory, including the key concept of multiple alignment, borrowed from bioinformatics but with important differences. Associated with the SP theory is the idea that redundancy in information may be understood as repetition of patterns, that compression of information may be achieved via the matching and unification (merging) of patterns, and that computing and information compression are both fundamentally probabilistic. It appears that the SP system is Turing-equivalent in the sense that anything that may be computed with a Turing machine may, in principle, also be computed with an SP machine. One of the main strengths of the SP theory and the multiple alignment concept is in modelling concepts and phenomena in artificial intelligence. Within that area, the SP theory provides a simple but versatile means of representing different kinds of knowledge; it can model both the parsing and production of natural language, with potential for the understanding and translation of natural languages; it has strengths in pattern recognition, with potential in computer vision; it can model several kinds of reasoning; and it has capabilities in planning, problem solving, and unsupervised learning. The paper includes two examples showing how alternative parsings of an ambiguous sentence may be modelled as multiple alignments, and another example showing how the concept of multiple alignment may be applied in medical diagnosis.
[ { "version": "v1", "created": "Fri, 8 Mar 2013 14:52:24 GMT" } ]
1,362,960,000,000
[ [ "Wolff", "J Gerard", "" ] ]
1303.4183
Lukasz Swierczewski
Lukasz Swierczewski
Generating extrema approximation of analytically incomputable functions through usage of parallel computer aided genetic algorithms
16 pages, 13 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/publicdomain/
This paper presents the capabilities of genetic algorithms for finding approximations of function extrema which cannot be found using analytic methods. To enhance the effectiveness of the calculations, the algorithm has been parallelized using the OpenMP library, and we obtained a considerable increase in speed on platforms with multithreaded processors and shared-memory access. During the analysis we used different modifications of the genetic operators, obtaining varied evolution processes of the potential solutions. The results allow the best among the many methods applied in genetic algorithms to be chosen, and the acceleration to be observed on Yorkfield, Bloomfield, Westmere-EX and the most recent Sandy Bridge cores.
[ { "version": "v1", "created": "Mon, 18 Mar 2013 08:49:48 GMT" } ]
1,363,651,200,000
[ [ "Swierczewski", "Lukasz", "" ] ]
1303.5132
Vania Bogorny
Vitor Cunha Fontes and Vania Bogorny
Discovering Semantic Spatial and Spatio-Temporal Outliers from Moving Object Trajectories
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several algorithms have been proposed for discovering patterns from trajectories of moving objects, but only a few have concentrated on outlier detection. Existing approaches, in general, discover spatial outliers, and do not provide any further analysis of the patterns. In this paper we introduce semantic spatial and spatio-temporal outliers and propose a new algorithm for trajectory outlier detection. Semantic outliers are computed between regions of interest, where objects have similar movement intention, and there exist standard paths which connect the regions. We show with experiments on real data that the method finds semantic outliers from trajectory data that are not discovered by similar approaches.
[ { "version": "v1", "created": "Thu, 21 Mar 2013 00:28:41 GMT" } ]
1,363,910,400,000
[ [ "Fontes", "Vitor Cunha", "" ], [ "Bogorny", "Vania", "" ] ]
1303.5177
Nabila Shikoun
Nabila Shikoun, Mohamed El Nahas and Samar Kassim
Model Based Framework for Estimating Mutation Rate of Hepatitis C Virus in Egypt
6 pages, 5 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hepatitis C virus (HCV) is a widely spread disease all over the world. HCV has a very high mutation rate that makes it resistant to antibodies. Modeling HCV to identify the virus mutation process is essential to its detection and to predicting its evolution. This paper presents a model-based framework for estimating the mutation rate of HCV in two steps. Firstly, a profile hidden Markov model (PHMM) architecture was built to select the sequences which represent one sequence per year. Secondly, the mutation rate was calculated by using a pair-wise distance method between sequences. A pilot study is conducted on the NS5B zone of an HCV dataset of genotype 4 subtype a (HCV4a) in Egypt.
[ { "version": "v1", "created": "Thu, 21 Mar 2013 06:49:05 GMT" } ]
1,363,910,400,000
[ [ "Shikoun", "Nabila", "" ], [ "Nahas", "Mohamed El", "" ], [ "Kassim", "Samar", "" ] ]
1303.5391
Zhi An
Zhi An, David A. Bell, John G. Hughes
RES - a Relative Method for Evidential Reasoning
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-1-8
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we describe a novel method for evidential reasoning [1]. It involves modelling the process of evidential reasoning in three steps, namely, evidence structure construction, evidence accumulation, and decision making. The proposed method, called RES, is novel in that evidence strength is associated with an evidential support relationship (an argument) between a pair of statements and such strength is carried by comparison between arguments. This is in contrast to the conventional approaches, where evidence strength is represented numerically and is associated with a statement.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:51:26 GMT" } ]
1,364,169,600,000
[ [ "An", "Zhi", "" ], [ "Bell", "David A.", "" ], [ "Hughes", "John G.", "" ] ]
1303.5392
Remco R. Bouckaert
Remco R. Bouckaert
Optimizing Causal Orderings for Generating DAGs from Data
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-9-16
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An algorithm for generating the structure of a directed acyclic graph from data using the notion of causal input lists is presented. The algorithm manipulates the ordering of the variables with operations which very much resemble arc reversal. Operations are only applied if the DAG after the operation represents at least the independencies represented by the DAG before the operation, and they are applied until no more arcs can be removed from the DAG. The resulting DAG is a minimal I-map.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:51:32 GMT" } ]
1,364,169,600,000
[ [ "Bouckaert", "Remco R.", "" ] ]
1303.5393
Craig Boutilier
Craig Boutilier
Modal Logics for Qualitative Possibility and Beliefs
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-17-24
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Possibilistic logic has been proposed as a numerical formalism for reasoning with uncertainty. There has been interest in developing qualitative accounts of possibility, as well as an explanation of the relationship between possibility and modal logics. We present two modal logics that can be used to represent and reason with qualitative statements of possibility and necessity. Within this modal framework, we are able to identify interesting relationships between possibilistic logic, beliefs and conditionals. In particular, the most natural conditional definable via possibilistic means for default reasoning is identical to Pearl's conditional for ε-semantics.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:51:38 GMT" } ]
1,364,169,600,000
[ [ "Boutilier", "Craig", "" ] ]
1303.5394
Brian Y. Chan
Brian Y. Chan, Ross D. Shachter
Structural Controllability and Observability in Influence Diagrams
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-25-32
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An influence diagram is a graphical representation of belief networks with uncertainty. This article studies the structural properties of a probabilistic model in an influence diagram. In particular, structural controllability theorems and structural observability theorems are developed and algorithms are formulated. Controllability and observability are fundamental concepts in dynamic systems (Luenberger 1979). Controllability corresponds to the ability to control a system while observability analyzes the inferability of its variables. Both properties can be determined by the ranks of the system matrices. Structural controllability and observability, on the other hand, analyze the property of a system with its structure only, without specific knowledge of the values of its elements (Lin 1974, Shields and Pearson 1976). The structural analysis explores the connection between the structure of a model and the functional dependence among its elements. It is useful in comprehending problems and formulating solutions by challenging the underlying intuitions and detecting inconsistency in a model. This type of qualitative reasoning can sometimes provide insight even when there is insufficient numerical information in a model.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:51:44 GMT" } ]
1,364,169,600,000
[ [ "Chan", "Brian Y.", "" ], [ "Shachter", "Ross D.", "" ] ]
1303.5395
Philippe Chatalic
Philippe Chatalic, Christine Froidevaux
Lattice-Based Graded Logic: a Multimodal Approach
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-33-40
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experts do not always feel very comfortable when they have to give precise numerical estimations of certainty degrees. In this paper we present a qualitative approach which allows for attaching partially ordered symbolic grades to logical formulas. Uncertain information is expressed by means of parameterized modal operators. We propose a semantics for this multimodal logic and give a sound and complete axiomatization. We study the links with related approaches and suggest how this framework might be used to manage both uncertain and incomplete knowledge.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:51:51 GMT" } ]
1,364,169,600,000
[ [ "Chatalic", "Philippe", "" ], [ "Froidevaux", "Christine", "" ] ]
1303.5396
Paul Dagum
Paul Dagum, Adam Galper, Eric J. Horvitz
Dynamic Network Models for Forecasting
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-41-48
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have developed a probabilistic forecasting methodology through a synthesis of belief network models and classical time-series analysis. We present the dynamic network model (DNM) and describe methods for constructing, refining, and performing inference with this representation of temporal probabilistic knowledge. The DNM representation extends static belief-network models to more general dynamic forecasting models by integrating and iteratively refining contemporaneous and time-lagged dependencies. We discuss key concepts in terms of a model for forecasting U.S. car sales in Japan.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:51:57 GMT" } ]
1,364,169,600,000
[ [ "Dagum", "Paul", "" ], [ "Galper", "Adam", "" ], [ "Horvitz", "Eric J.", "" ] ]
1303.5397
Paul Dagum
Paul Dagum, Eric J. Horvitz
Reformulating Inference Problems Through Selective Conditioning
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-49-54
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe how we selectively reformulate portions of a belief network that pose difficulties for solution with a stochastic-simulation algorithm. We employ the selective conditioning approach to target specific nodes in a belief network for decomposition, based on the contribution the nodes make to the tractability of stochastic simulation. We review previous work on BNRAS algorithms, randomized approximation algorithms for probabilistic inference. We show how selective conditioning can be employed to reformulate a single BNRAS problem into multiple tractable BNRAS simulation problems. We discuss how we can use another simulation algorithm, logic sampling, to solve a component of the inference problem that provides a means for knitting the solutions of individual subproblems into a final result. Finally, we analyze tradeoffs among the computational subtasks associated with the selective conditioning approach to reformulation.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:52:03 GMT" } ]
1,364,169,600,000
[ [ "Dagum", "Paul", "" ], [ "Horvitz", "Eric J.", "" ] ]
1303.5398
Norman C. Dalkey
Norman C. Dalkey
Entropy and Belief Networks
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-55-58
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The product expansion of conditional probabilities for belief nets is not maximum entropy. This appears to deny a desirable kind of assurance for the model. However, a kind of guarantee that is almost as strong as maximum entropy can be derived. Surprisingly, a variant model also exhibits the guarantee, and for many cases obtains a higher performance score than the product expansion.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:52:07 GMT" } ]
1,364,169,600,000
[ [ "Dalkey", "Norman C.", "" ] ]
1303.5399
Bruce D'Ambrosio
Bruce D'Ambrosio, Tony Fountain, Zhaoyu Li
Parallelizing Probabilistic Inference: Some Early Explorations
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-59-66
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report on an experimental investigation into opportunities for parallelism in belief-net inference. Specifically, we report on a study performed of the available parallelism, on hypercube style machines, of a set of randomly generated belief nets, using factoring (SPI) style inference algorithms. Our results indicate that substantial speedup is available, but that it is available only through parallelization of individual conformal product operations, and depends critically on finding an appropriate factoring. We find negligible opportunity for parallelism at the topological, or clustering tree, level.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:52:13 GMT" } ]
1,364,169,600,000
[ [ "D'Ambrosio", "Bruce", "" ], [ "Fountain", "Tony", "" ], [ "Li", "Zhaoyu", "" ] ]
1303.5400
Adnan Darwiche
Adnan Darwiche
Objection-Based Causal Networks
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-67-73
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the notion of objection-based causal networks which resemble probabilistic causal networks except that they are quantified using objections. An objection is a logical sentence and denotes a condition under which a causal dependency does not exist. Objection-based causal networks enjoy almost all the properties that make probabilistic causal networks popular, with the added advantage that objections are, arguably, more intuitive than probabilities.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:52:19 GMT" } ]
1,364,169,600,000
[ [ "Darwiche", "Adnan", "" ] ]
1303.5401
Didier Dubois
Didier Dubois, Henri Prade, Lluis Godo, Ramon Lopez de Mantaras
A Symbolic Approach to Reasoning with Linguistic Quantifiers
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-74-82
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the possibility of performing automated reasoning in probabilistic logic when probabilities are expressed by means of linguistic quantifiers. Each linguistic term is expressed as a prescribed interval of proportions. Then instead of propagating numbers, qualitative terms are propagated in accordance with the numerical interpretation of these terms. The quantified syllogism, modelling the chaining of probabilistic rules, is studied in this context. It is shown that a qualitative counterpart of this syllogism makes sense, and is relatively independent of the threshold defining the linguistically meaningful intervals, provided that these threshold values remain in accordance with the intuition. The inference power is less than that of a full-fledged probabilistic constraint propagation device but better corresponds to what could be thought of as commonsense probabilistic reasoning.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:52:25 GMT" } ]
1,364,169,600,000
[ [ "Dubois", "Didier", "" ], [ "Prade", "Henri", "" ], [ "Godo", "Lluis", "" ], [ "de Mantaras", "Ramon Lopez", "" ] ]
1303.5402
Francesco Fulvio Monai
Francesco Fulvio Monai, Thomas Chehire
Possibilistic Assumption based Truth Maintenance System, Validation in a Data Fusion Application
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-83-91
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data fusion allows the elaboration and the evaluation of a situation synthesized from low level information provided by different kinds of sensors. The fusion of the collected data will result in fewer and higher level pieces of information that are more easily assessed by a human operator and that will assist him effectively in his decision process. In this paper we present the suitability and the advantages of using a Possibilistic Assumption based Truth Maintenance System (Π-ATMS) in a data fusion military application. We first describe the problem, the needed knowledge representation formalisms and problem solving paradigms. Then we remind the reader of the basic concepts of ATMSs, Possibilistic Logic and Π-ATMSs. Finally we detail the solution to the given data fusion problem and conclude with the results and comparison with a non-possibilistic solution.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:52:32 GMT" } ]
1,364,169,600,000
[ [ "Monai", "Francesco Fulvio", "" ], [ "Chehire", "Thomas", "" ] ]
1303.5404
Angelo Gilio
Angelo Gilio, Fulvio Spezzaferri
Knowledge Integration for Conditional Probability Assessments
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-98-103
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the probabilistic approach to uncertainty management the input knowledge is usually represented by means of some probability distributions. In this paper we assume that the input knowledge is given by two discrete conditional probability distributions, represented by two stochastic matrices P and Q. The consistency of the knowledge base is analyzed. Coherence conditions and explicit formulas for the extension to marginal distributions are obtained in some special cases.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:52:43 GMT" } ]
1,364,169,600,000
[ [ "Gilio", "Angelo", "" ], [ "Spezzaferri", "Fulvio", "" ] ]
1303.5405
Robert P. Goldman
Robert P. Goldman, John S. Breese
Integrating Model Construction and Evaluation
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-104-111
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To date, most probabilistic reasoning systems have relied on a fixed belief network constructed at design time. The network is used by an application program as a representation of (in)dependencies in the domain. Probabilistic inference algorithms operate over the network to answer queries. Recognizing the inflexibility of fixed models has led researchers to develop automated network construction procedures that use an expressive knowledge base to generate a network that can answer a query. Although more flexible than fixed model approaches, these construction procedures separate construction and evaluation into distinct phases. In this paper we develop an approach to combining incremental construction and evaluation of a partial probability model. The combined method holds promise for improved methods for control of model construction based on a trade-off between fidelity of results and cost of construction.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:52:48 GMT" } ]
1,364,169,600,000
[ [ "Goldman", "Robert P.", "" ], [ "Breese", "John S.", "" ] ]
1303.5406
Moises Goldszmidt
Moises Goldszmidt, Judea Pearl
Reasoning With Qualitative Probabilities Can Be Tractable
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-112-120
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We recently described a formalism for reasoning with if-then rules that are expressed with different levels of firmness [18]. The formalism interprets these rules as extreme conditional probability statements, specifying orders of magnitude of disbelief, which impose constraints over possible rankings of worlds. It was shown that, once we compute a priority function Z+ on the rules, the degree to which a given query is confirmed or denied can be computed in O(log n) propositional satisfiability tests, where n is the number of rules in the knowledge base. In this paper, we show that computing Z+ requires O(n^2 × log n) satisfiability tests, not an exponential number as was conjectured in [18], which reduces to polynomial complexity in the case of Horn expressions. We also show how reasoning with imprecise observations can be incorporated in our formalism and how the popular notions of belief revision and epistemic entrenchment are embodied naturally and tractably.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:52:54 GMT" } ]
1,364,169,600,000
[ [ "Goldszmidt", "Moises", "" ], [ "Pearl", "Judea", "" ] ]
1303.5407
Uffe Kj{\ae}rulff
Uffe Kj{\ae}rulff
A computational scheme for Reasoning in Dynamic Probabilistic Networks
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-121-129
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A computational scheme for reasoning about dynamic systems using (causal) probabilistic networks is presented. The scheme is based on the framework of Lauritzen and Spiegelhalter (1988), and may be viewed as a generalization of the inference methods of classical time-series analysis in the sense that it allows description of non-linear, multivariate dynamic systems with complex conditional independence structures. Further, the scheme provides a method for efficient backward smoothing and possibilities for efficient, approximate forecasting methods. The scheme has been implemented on top of the HUGIN shell.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:00 GMT" } ]
1,364,169,600,000
[ [ "Kjærulff", "Uffe", "" ] ]
1303.5408
Frank Klawonn
Frank Klawonn, Philippe Smets
The Dynamic of Belief in the Transferable Belief Model and Specialization-Generalization Matrices
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-130-137
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fundamental updating process in the transferable belief model is related to the concept of specialization and can be described by a specialization matrix. The degree of belief in the truth of a proposition is a degree of justified support. The Principle of Minimal Commitment implies that one should never give more support to the truth of a proposition than justified. We show that Dempster's rule of conditioning corresponds essentially to the least committed specialization, and that Dempster's rule of combination results essentially from commutativity requirements. The concept of generalization, dual to the concept of specialization, is described.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:05 GMT" } ]
1,364,169,600,000
[ [ "Klawonn", "Frank", "" ], [ "Smets", "Philippe", "" ] ]
1303.5409
George J. Klir
George J. Klir, Behzad Parviz
A Note on the Measure of Discord
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-138-141
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new entropy-like measure as well as a new measure of total uncertainty pertaining to the Dempster-Shafer theory are introduced. It is argued that these measures are better justified than any of the previously proposed candidates.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:11 GMT" } ]
1,364,169,600,000
[ [ "Klir", "George J.", "" ], [ "Parviz", "Behzad", "" ] ]
1303.5410
Henry E. Kyburg Jr.
Henry E. Kyburg Jr
Semantics for Probabilistic Inference
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-142-148
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A number of writers (Joseph Halpern and Fahiem Bacchus among them) have offered semantics for formal languages in which inferences concerning probabilities can be made. Our concern is different. This paper provides a formalization of nonmonotonic inferences in which the conclusion is supported only to a certain degree. Such inferences are clearly 'invalid' since they must allow the falsity of a conclusion even when the premises are true. Nevertheless, such inferences can be characterized both syntactically and semantically. The 'premises' of probabilistic arguments are sets of statements (as in a database or knowledge base), and the conclusions are categorical statements in the language. We provide standards for both this form of inference, for which high probability is required, and for an inference in which the conclusion is qualified by an intermediate interval of support.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:17 GMT" } ]
1,364,169,600,000
[ [ "Kyburg", "Henry E.", "Jr" ] ]
1303.5411
Henry E. Kyburg Jr.
Henry E. Kyburg Jr., Michael Pittarelli
Some Problems for Convex Bayesians
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-149-154
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss problems for convex Bayesian decision making and uncertainty representation. These include the inability to accommodate various natural and useful constraints and the possibility of an analog of the classical Dutch Book being made against an agent behaving in accordance with convex Bayesian prescriptions. A more general set-based Bayesianism may be as tractable and would avoid the difficulties we raise.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:22 GMT" } ]
1,364,169,600,000
[ [ "Kyburg", "Henry E.", "Jr." ], [ "Pittarelli", "Michael", "" ] ]
1303.5412
Kathryn Blackmond Laskey
Kathryn Blackmond Laskey
Bayesian Meta-Reasoning: Determining Model Adequacy from Within a Small World
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-155-158
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a Bayesian framework for assessing the adequacy of a model without the necessity of explicitly enumerating a specific alternate model. A test statistic is developed for tracking the performance of the model across repeated problem instances. Asymptotic methods are used to derive an approximate distribution for the test statistic. When the model is rejected, the individual components of the test statistic can be used to guide search for an alternate model.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:27 GMT" } ]
1,364,169,600,000
[ [ "Laskey", "Kathryn Blackmond", "" ] ]
1303.5413
Kathryn Blackmond Laskey
Kathryn Blackmond Laskey
The Bounded Bayesian
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-159-165
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ideal Bayesian agent reasons from a global probability model, but real agents are restricted to simplified models which they know to be adequate only in restricted circumstances. Very little formal theory has been developed to help fallibly rational agents manage the process of constructing and revising small world models. The goal of this paper is to present a theoretical framework for analyzing model management approaches. For a probability forecasting problem, a search process over small world models is analyzed as an approximation to a larger-world model which the agent cannot explicitly enumerate or compute. Conditions are given under which the sequence of small-world models converges to the larger-world probabilities.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:33 GMT" } ]
1,364,169,600,000
[ [ "Laskey", "Kathryn Blackmond", "" ] ]
1303.5414
Tze-Yun Leong
Tze-Yun Leong
Representing Context-Sensitive Knowledge in a Network Formalism: A Preliminary Report
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-166-173
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated decision making is often complicated by the complexity of the knowledge involved. Much of this complexity arises from the context sensitive variations of the underlying phenomena. We propose a framework for representing descriptive, context-sensitive knowledge. Our approach attempts to integrate categorical and uncertain knowledge in a network formalism. This paper outlines the basic representation constructs, examines their expressiveness and efficiency, and discusses the potential applications of the framework.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:40 GMT" } ]
1,364,169,600,000
[ [ "Leong", "Tze-Yun", "" ] ]
1303.5415
Dekang Lin
Dekang Lin
A Probabilistic Network of Predicates
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-174-181
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian networks are directed acyclic graphs representing independence relationships among a set of random variables. A random variable can be regarded as a set of exhaustive and mutually exclusive propositions. We argue that there are several drawbacks resulting from the propositional nature and acyclic structure of Bayesian networks. To remedy these shortcomings, we propose a probabilistic network where nodes represent unary predicates and which may contain directed cycles. The proposed representation allows us to represent domain knowledge in a single static network even though we cannot determine the instantiations of the predicates before hand. The ability to deal with cycles also enables us to handle cyclic causal tendencies and to recognize recursive plans.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:47 GMT" } ]
1,364,169,600,000
[ [ "Lin", "Dekang", "" ] ]
1303.5416
Weiru Liu
Weiru Liu, John G. Hughes, Michael F. McTear
Representing Heuristic Knowledge in D-S Theory
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-182-190
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Dempster-Shafer theory of evidence has been used intensively to deal with uncertainty in knowledge-based systems. However the representation of uncertain relationships between evidence and hypothesis groups (heuristic knowledge) is still a major research problem. This paper presents an approach to representing such heuristic knowledge by evidential mappings which are defined on the basis of mass functions. The relationships between evidential mappings and multivalued mappings, as well as between evidential mappings and Bayesian multivalued causal link models in Bayesian theory, are discussed. Following this, the detailed procedures for constructing evidential mappings for any set of heuristic rules are introduced. Several situations of belief propagation are discussed.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:54 GMT" } ]
1,364,169,600,000
[ [ "Liu", "Weiru", "" ], [ "Hughes", "John G.", "" ], [ "McTear", "Michael F.", "" ] ]
1303.5417
Izhar Matzkevich
Izhar Matzkevich, Bruce Abramson
The Topological Fusion of Bayes Nets
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-191-198
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayes nets are relatively recent innovations. As a result, most of their theoretical development has focused on the simplest class of single-author models. The introduction of more sophisticated multiple-author settings raises a variety of interesting questions. One such question involves the nature of compromise and consensus. Posterior compromises let each model process all data to arrive at an independent response, and then split the difference. Prior compromises, on the other hand, force compromise to be reached on all points before data is observed. This paper introduces prior compromises in a Bayes net setting. It outlines the problem and develops an efficient algorithm for fusing two directed acyclic graphs into a single, consensus structure, which may then be used as the basis of a prior compromise.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:53:59 GMT" } ]
1,364,169,600,000
[ [ "Matzkevich", "Izhar", "" ], [ "Abramson", "Bruce", "" ] ]
1303.5418
Serafin Moral
Serafin Moral
Calculating Uncertainty Intervals From Conditional Convex Sets of Probabilities
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-199-206
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Moral, Campos (1991) and Cano, Moral, Verdegay-Lopez (1991) a new method of conditioning convex sets of probabilities has been proposed. The result of it is a convex set of not necessarily normalized probability distributions. The normalizing factor of each probability distribution is interpreted as the possibility assigned to it by the conditioning information. From this, it is deduced that the natural value for the conditional probability of an event is a possibility distribution. The aim of this paper is to study methods of transforming this possibility distribution into a probability (or uncertainty) interval. These methods will be based on the use of Sugeno and Choquet integrals. Their behaviour will be compared on the basis of some selected examples.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:54:05 GMT" } ]
1,364,169,600,000
[ [ "Moral", "Serafin", "" ] ]
1303.5419
Ann Nicholson
Ann Nicholson, J. M. Brady
Sensor Validation Using Dynamic Belief Networks
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-207-214
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The trajectory of a robot is monitored in a restricted dynamic environment using light beam sensor data. We have a Dynamic Belief Network (DBN), based on a discrete model of the domain, which provides discrete monitoring analogous to conventional quantitative filter techniques. Sensor observations are added to the basic DBN in the form of specific evidence. However, sensor data is often partially or totally incorrect. We show how the basic DBN, which infers only an impossible combination of evidence, may be modified to handle specific types of incorrect data which may occur in the domain. We then present an extension to the DBN, the addition of an invalidating node, which models the status of the sensor as working or defective. This node provides a qualitative explanation of inconsistent data: it is caused by a defective sensor. The connection of successive instances of the invalidating node models the status of a sensor over time, allowing the DBN to handle both persistent and intermittent faults.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:54:11 GMT" } ]
1,364,169,600,000
[ [ "Nicholson", "Ann", "" ], [ "Brady", "J. M.", "" ] ]
1303.5421
Kristian G. Olesen
Kristian G. Olesen, Steffen L. Lauritzen, Finn Verner Jensen
aHUGIN: A System Creating Adaptive Causal Probabilistic Networks
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-223-229
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper describes aHUGIN, a tool for creating adaptive systems. aHUGIN is an extension of the HUGIN shell, and is based on the methods reported by Spiegelhalter and Lauritzen (1990a). The adaptive systems resulting from aHUGIN are able to adjust the conditional probabilities in the model. A short analysis of the adaptation task is given and the features of aHUGIN are described. Finally a session with experiments is reported and the results are discussed.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:54:22 GMT" } ]
1,364,169,600,000
[ [ "Olesen", "Kristian G.", "" ], [ "Lauritzen", "Steffen L.", "" ], [ "Jensen", "Finn Verner", "" ] ]
1303.5422
Gerhard Paa{\ss}
Gerhard Paa{\ss}
MESA: Maximum Entropy by Simulated Annealing
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-230-237
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic reasoning systems combine different probabilistic rules and probabilistic facts to arrive at the desired probability values of consequences. In this paper we describe the MESA-algorithm (Maximum Entropy by Simulated Annealing) that derives a joint distribution of variables or propositions. It takes into account the reliability of probability values and can resolve conflicts between contradictory statements. The joint distribution is represented in terms of marginal distributions and therefore allows large inference networks to be processed and the desired probability values to be determined with high precision. The procedure derives a maximum entropy distribution subject to the given constraints. It can be applied to inference networks of arbitrary topology and may be extended into a number of directions.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:54:28 GMT" } ]
1,364,169,600,000
[ [ "Paaß", "Gerhard", "" ] ]
1303.5423
Thomas S. Paterson
Thomas S. Paterson, Michael R. Fehling
Decision Methods for Adaptive Task-Sharing in Associate Systems
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-238-243
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes some results of research on associate systems: knowledge-based systems that flexibly and adaptively support their human users in carrying out complex, time-dependent problem-solving tasks under uncertainty. Based on principles derived from decision theory and decision analysis, a problem-solving approach is presented which can overcome many of the limitations of traditional expert-systems. This approach implements an explicit model of the human user's problem-solving capabilities as an integral element in the overall problem solving architecture. This integrated model, represented as an influence diagram, is the basis for achieving adaptive task sharing behavior between the associate system and the human user. This associate system model has been applied toward ongoing research on a Mars Rover Manager's Associate (MRMA). MRMA's role would be to manage a small fleet of robotic rovers on the Martian surface. The paper describes results for a specific scenario where MRMA examines the benefits and costs of consulting human experts on Earth to assist a Mars rover with a complex resource management decision.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:54:34 GMT" } ]
1,364,169,600,000
[ [ "Paterson", "Thomas S.", "" ], [ "Fehling", "Michael R.", "" ] ]
1303.5424
Luigi Portinale
Luigi Portinale
Modeling Uncertain Temporal Evolutions in Model-Based Diagnosis
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-244-251
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although the notion of diagnostic problem has been extensively investigated in the context of static systems, in most practical applications the behavior of the modeled system is significantly variable during time. The goal of the paper is to propose a novel approach to the modeling of uncertainty about temporal evolutions of time-varying systems and a characterization of model-based temporal diagnosis. Since in most real world cases knowledge about the temporal evolution of the system to be diagnosed is uncertain, we consider the case when probabilistic temporal knowledge is available for each component of the system and we choose to model it by means of Markov chains. In fact, we aim at exploiting the statistical assumptions underlying reliability theory in the context of the diagnosis of time-varying systems. We finally show how to exploit Markov chain theory in order to discard, in the diagnostic process, very unlikely diagnoses.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:54:40 GMT" } ]
1,364,169,600,000
[ [ "Portinale", "Luigi", "" ] ]
1303.5425
Yuping Qiu
Yuping Qiu, Louis Anthony Cox, Jr., Lawrence Davis
Guess-And-Verify Heuristics for Reducing Uncertainties in Expert Classification Systems
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-252-258
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An expert classification system having statistical information about the prior probabilities of the different classes should be able to use this knowledge to reduce the amount of additional information that it must collect, e.g., through questions, in order to make a correct classification. This paper examines how best to use such prior information and additional information-collection opportunities to reduce uncertainty about the class to which a case belongs, thus minimizing the average cost or effort required to correctly classify new cases.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:54:46 GMT" } ]
1,364,169,600,000
[ [ "Qiu", "Yuping", "" ], [ "Cox,", "Louis Anthony", "Jr." ], [ "Davis", "Lawrence", "" ] ]
1303.5426
Peter J. Regan
Peter J. Regan, Samuel Holtzman
R&D Analyst: An Interactive Approach to Normative Decision System Model Construction
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-259-267
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the architecture of R&D Analyst, a commercial intelligent decision system for evaluating corporate research and development projects and portfolios. In analyzing projects, R&D Analyst interactively guides a user in constructing an influence diagram model for an individual research project. The system's interactive approach can be clearly explained from a blackboard system perspective. The opportunistic reasoning emphasis of blackboard systems satisfies the flexibility requirements of model construction, thereby suggesting that a similar architecture would be valuable for developing normative decision systems in other domains. Current research is aimed at extending the system architecture to explicitly consider sequential decisions involving limited temporal, financial, and physical resources.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:54:52 GMT" } ]
1,364,169,600,000
[ [ "Regan", "Peter J.", "" ], [ "Holtzman", "Samuel", "" ] ]
1303.5427
Thomas Schiex
Thomas Schiex
Possibilistic Constraint Satisfaction Problems or "How to handle soft constraints?"
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-268-275
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many AI synthesis problems such as planning or scheduling may be modeled as constraint satisfaction problems (CSP). A CSP is typically defined as the problem of finding any consistent labeling for a fixed set of variables satisfying all given constraints between these variables. However, for many real tasks such as job-shop scheduling, timetable scheduling, or design, not all constraints have the same significance, and not all of them necessarily have to be satisfied. A first distinction can be made between hard constraints, which every solution must satisfy, and soft constraints, whose satisfaction is not required to be certain. In this paper, we formalize the notion of possibilistic constraint satisfaction problems, which allows the modeling of uncertainly satisfied constraints. We use a possibility distribution over labelings to represent the respective possibility of each labeling. Necessity-valued constraints allow a simple expression of the respective certainty degrees of each constraint. The main advantage of our approach is its integration in the CSP technical framework. Most classical techniques, such as backtracking (BT), arc-consistency enforcing (AC) or forward checking, have been extended to handle possibilistic CSPs and are effectively implemented. The utility of our approach is demonstrated on a simple design problem.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:54:58 GMT" } ]
1,364,169,600,000
[ [ "Schiex", "Thomas", "" ] ]
1303.5428
Ross D. Shachter
Ross D. Shachter, Mark Alan Peot
Decision Making Using Probabilistic Inference Methods
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-276-283
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The analysis of decision making under uncertainty is closely related to the analysis of probabilistic inference. Indeed, much of the research into efficient methods for probabilistic inference in expert systems has been motivated by the fundamental normative arguments of decision theory. In this paper we show how the developments underlying those efficient methods can be applied immediately to decision problems. In addition to general approaches which need know nothing about the actual probabilistic inference method, we suggest some simple modifications to the clustering family of algorithms in order to efficiently incorporate decision making capabilities.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:55:04 GMT" } ]
1,364,169,600,000
[ [ "Shachter", "Ross D.", "" ], [ "Peot", "Mark Alan", "" ] ]
1303.5429
Prakash P. Shenoy
Prakash P. Shenoy
Conditional Independence in Uncertainty Theories
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-284-291
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the notions of independence and conditional independence in valuation-based systems (VBS). VBS is an axiomatic framework capable of representing many different uncertainty calculi. We define independence and conditional independence in terms of factorization of the joint valuation. The definitions of independence and conditional independence in VBS generalize the corresponding definitions in probability theory. Our definitions apply not only to probability theory, but also to Dempster-Shafer's belief-function theory, Spohn's epistemic-belief theory, and Zadeh's possibility theory. In fact, they apply to any uncertainty calculi that fit in the framework of valuation-based systems.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:55:11 GMT" } ]
1,364,169,600,000
[ [ "Shenoy", "Prakash P.", "" ] ]
1303.5430
Philippe Smets
Philippe Smets
The Nature of the Unnormalized Beliefs Encountered in the Transferable Belief Model
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-292-297
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Within the transferable belief model, positive basic belief masses can be allocated to the empty set, leading to unnormalized belief functions. The nature of these unnormalized beliefs is analyzed.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:55:17 GMT" } ]
1,364,169,600,000
[ [ "Smets", "Philippe", "" ] ]
1303.5431
Paul Snow
Paul Snow
Intuitions about Ordered Beliefs Leading to Probabilistic Models
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-298-302
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The general use of subjective probabilities to model belief has been justified using many axiomatic schemes. For example, 'consistent betting behavior' arguments are well-known. To those not already convinced of the unique fitness and generality of probability models, such justifications are often unconvincing. The present paper explores another rationale for probability models. 'Qualitative probability,' which is known to provide stringent constraints on belief representation schemes, is derived from five simple assumptions about relationships among beliefs. While counterparts of familiar rationality concepts such as transitivity, dominance, and consistency are used, the betting context is avoided. The gap between qualitative probability and probability proper can be bridged by any of several additional assumptions. The discussion here relies on results common in the recent AI literature, introducing a sixth simple assumption. The narrative emphasizes models based on unique complete orderings, but the rationale extends easily to motivate set-valued representations of partial orderings as well.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:55:23 GMT" } ]
1,364,169,600,000
[ [ "Snow", "Paul", "" ] ]
1303.5432
Luis Enrique Sucar
Luis Enrique Sucar, Duncan F. Gillies
Expressing Relational and Temporal Knowledge in Visual Probabilistic Networks
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-303-309
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian networks have been used extensively in diagnostic tasks such as medicine, where they represent the dependency relations between a set of symptoms and a set of diseases. A criticism of this type of knowledge representation is that it is restricted to this kind of task, and that it cannot cope with the knowledge required in other artificial intelligence applications. For example, in computer vision, we require the ability to model complex knowledge, including temporal and relational factors. In this paper we extend Bayesian networks to model relational and temporal knowledge for high-level vision. These extended networks have a simple structure which permits us to propagate probability efficiently. We have applied them to the domain of endoscopy, illustrating how the general modelling principles can be used in specific cases.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:55:28 GMT" } ]
1,364,169,600,000
[ [ "Sucar", "Luis Enrique", "" ], [ "Gillies", "Duncan F.", "" ] ]
1303.5433
Chin-Wang Tao
Chin-Wang Tao, Wiley E. Thompson
A Fuzzy Logic Approach to Target Tracking
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-310-314
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discusses a target tracking problem in which no dynamic mathematical model is explicitly assumed. A nonlinear filter based on the fuzzy If-then rules is developed. A comparison with a Kalman filter is made, and empirical results show that the performance of the fuzzy filter is better. Intensive simulations suggest that theoretical justification of the empirical results is possible.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:55:34 GMT" } ]
1,364,169,600,000
[ [ "Tao", "Chin-Wang", "" ], [ "Thompson", "Wiley E.", "" ] ]
1303.5434
Helmut Thone
Helmut Thone, Ulrich Guntzer, Werner Kiessling
Towards Precision of Probabilistic Bounds Propagation
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-315-322
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The DUCK-calculus presented here is a recent approach to coping with probabilistic uncertainty in a sound and efficient way. Uncertain rules with bounds for probabilities and explicit conditional independences can be maintained incrementally. The basic inference mechanism relies on local bounds propagation, implementable by deductive databases with a bottom-up fixpoint evaluation. In situations where no precise bounds are deducible, it can be combined with simple operations research techniques on a local scope. In particular, we provide new precise analytical bounds for probabilistic entailment.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:55:40 GMT" } ]
1,364,169,600,000
[ [ "Thone", "Helmut", "" ], [ "Guntzer", "Ulrich", "" ], [ "Kiessling", "Werner", "" ] ]
1303.5435
Tom S. Verma
Tom S. Verma, Judea Pearl
An Algorithm for Deciding if a Set of Observed Independencies Has a Causal Explanation
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-323-330
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a previous paper [Pearl and Verma, 1991] we presented an algorithm for extracting causal influences from independence information, where a causal influence was defined as the existence of a directed arc in all minimal causal models consistent with the data. In this paper we address the question of deciding whether there exists a causal model that explains ALL the observed dependencies and independencies. Formally, given a list M of conditional independence statements, it is required to decide whether there exists a directed acyclic graph (dag) D that is perfectly consistent with M, namely, every statement in M, and no other, is reflected via d-separation in D. We present and analyze an effective algorithm that tests for the existence of such a dag, and produces one if it exists.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:55:46 GMT" } ]
1,364,169,600,000
[ [ "Verma", "Tom S.", "" ], [ "Pearl", "Judea", "" ] ]
1303.5436
Carl G. Wagner
Carl G. Wagner
Generalizing Jeffrey Conditionalization
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-331-335
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Jeffrey's rule has been generalized by Wagner to the case in which new evidence bounds the possible revisions of a prior probability below by a Dempsterian lower probability. Classical probability kinematics arises within this generalization as the special case in which the evidentiary focal elements of the bounding lower probability are pairwise disjoint. We discuss a twofold extension of this generalization, first allowing the lower bound to be any two-monotone capacity and then allowing the prior to be a lower envelope.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:55:52 GMT" } ]
1,364,169,600,000
[ [ "Wagner", "Carl G.", "" ] ]
1303.5437
Michael S. K. M. Wong
Michael S. K. M. Wong, L. S. Wang, Y. Y. Yao
Interval Structure: A Framework for Representing Uncertain Information
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-336-343
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a unified framework for representing uncertain information based on the notion of an interval structure is proposed. It is shown that the lower and upper approximations of the rough-set model, the lower and upper bounds of incidence calculus, and the belief and plausibility functions all obey the axioms of an interval structure. An interval structure can be used to synthesize the decision rules provided by the experts. An efficient algorithm to find the desirable set of rules is developed from a set of sound and complete inference axioms.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:55:58 GMT" } ]
1,364,169,600,000
[ [ "Wong", "Michael S. K. M.", "" ], [ "Wang", "L. S.", "" ], [ "Yao", "Y. Y.", "" ] ]
1303.5438
Yang Xiang
Yang Xiang, David L. Poole, Michael P. Beddoes
Exploring Localization in Bayesian Networks for Large Expert Systems
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-344-351
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current Bayesian net representations do not consider structure in the domain and include all variables in a homogeneous network. At any time, a human reasoner in a large domain may direct his attention to only one of a number of natural subdomains, i.e., there is 'localization' of queries and evidence. In such a case, propagating evidence through a homogeneous network is inefficient since the entire network has to be updated each time. This paper presents multiply sectioned Bayesian networks that enable a (localization preserving) representation of natural subdomains by separate Bayesian subnets. The subnets are transformed into a set of permanent junction trees such that evidential reasoning takes place at only one of them at a time. Probabilities obtained are identical to those that would be obtained from the homogeneous network. We discuss attention shift to a different junction tree and propagation of previously acquired evidence. Although the overall system can be large, computational requirements are governed by the size of only one junction tree.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:56:04 GMT" } ]
1,364,169,600,000
[ [ "Xiang", "Yang", "" ], [ "Poole", "David L.", "" ], [ "Beddoes", "Michael P.", "" ] ]
1303.5439
Hong Xu
Hong Xu
A Decision Calculus for Belief Functions in Valuation-Based Systems
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-352-359
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Valuation-based system (VBS) provides a general framework for representing knowledge and drawing inferences under uncertainty. Recent studies have shown that the semantics of VBS can represent and solve Bayesian decision problems (Shenoy, 1991a). The purpose of this paper is to propose a decision calculus for Dempster-Shafer (D-S) theory in the framework of VBS. The proposed calculus uses a weighting factor whose role is similar to the probabilistic interpretation of an assumption that disambiguates decision problems represented with belief functions (Strat 1990). It will be shown that with the presented calculus, if the decision problems are represented in the valuation network properly, we can solve the problems by using the fusion algorithm (Shenoy 1991a). It will also be shown that the presented decision calculus can be reduced to the calculus for Bayesian probability theory when probabilities, instead of belief functions, are given.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:56:10 GMT" } ]
1,364,169,600,000
[ [ "Xu", "Hong", "" ] ]
1303.5440
Nevin Lianwen Zhang
Nevin Lianwen Zhang, David L. Poole
Sidestepping the Triangulation Problem in Bayesian Net Computations
Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992)
null
null
UAI-P-1992-PG-360-367
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new approach for computing posterior probabilities in Bayesian nets, which sidesteps the triangulation problem. The current state of the art is the clique tree propagation approach. When the underlying graph of a Bayesian net is triangulated, this approach arranges its cliques into a tree and computes posterior probabilities by appropriately passing around messages in that tree. The computation in each clique is simply direct marginalization. When the underlying graph is not triangulated, one has to first triangulate it by adding edges. Referred to as the triangulation problem, finding an optimal or even a 'good' triangulation proves to be difficult. In this paper, we propose to first decompose a Bayesian net into smaller components by making use of Tarjan's algorithm for decomposing an undirected graph at all its minimal complete separators. Then, the components are arranged into a tree and posterior probabilities are computed by appropriately passing around messages in that tree. The computation in each component is carried out by repeating the whole procedure from the beginning. Thus the triangulation problem is sidestepped.
[ { "version": "v1", "created": "Wed, 13 Mar 2013 12:56:16 GMT" } ]
1,364,169,600,000
[ [ "Zhang", "Nevin Lianwen", "" ], [ "Poole", "David L.", "" ] ]
1303.5659
Taisuke Sato
Taisuke Sato and Keiichi Kubota
Viterbi training in PRISM
23 pages, 1 figure
Theory and Practice of Logic Programming 15 (2015) 147-168
10.1017/S1471068413000677
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
VT (Viterbi training), or hard EM, is an efficient way of parameter learning for probabilistic models with hidden variables. Given an observation $y$, it searches for a state of hidden variables $x$ that maximizes $p(x,y \mid \theta)$ by coordinate ascent on parameters $\theta$ and $x$. In this paper we introduce VT to PRISM, a logic-based probabilistic modeling system for generative models. VT improves PRISM in three ways. First, VT in PRISM converges faster than EM in PRISM due to VT's termination condition. Second, parameters learned by VT often show good prediction performance compared to those learned by EM. We conducted two parsing experiments with probabilistic grammars while learning parameters by a variety of inference methods, i.e.\ VT, EM, MAP and VB. The result is that VT achieved the best parsing accuracy among them in both experiments. We also conducted a similar experiment for classification tasks where a hidden variable is not a prediction target, unlike probabilistic grammars. We found that in such a case VT does not necessarily yield superior performance. Third, since VT always deals with a single probability of a single explanation, the Viterbi explanation, the exclusiveness condition that is imposed on PRISM programs is no longer required if we learn parameters by VT. Last but not least, since VT in PRISM is general and applicable to any PRISM program, it largely reduces the need for the user to develop a specific VT algorithm for a specific model. Furthermore, since VT in PRISM can be used just by setting a PRISM flag appropriately, it makes VT easily accessible to (probabilistic) logic programmers. To appear in Theory and Practice of Logic Programming (TPLP).
[ { "version": "v1", "created": "Fri, 22 Mar 2013 16:22:43 GMT" }, { "version": "v2", "created": "Fri, 29 Nov 2013 02:55:17 GMT" } ]
1,582,070,400,000
[ [ "Sato", "Taisuke", "" ], [ "Kubota", "Keiichi", "" ] ]
1303.5704
John Mark Agosta
John Mark Agosta
"Conditional Inter-Causally Independent" Node Distributions, a Property of "Noisy-Or" Models
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-9-16
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines the interdependence generated between two parent nodes with a common instantiated child node, such as two hypotheses sharing common evidence. The relation so generated has been termed "intercausal." It is shown by construction that inter-causal independence is possible for binary distributions at one state of evidence. For such "CICI" distributions, the two measures of inter-causal effect, "multiplicative synergy" and "additive synergy" are equal. The well known "noisy-or" model is an example of such a distribution. This introduces novel semantics for the noisy-or, as a model of the degree of conflict among competing hypotheses of a common observation.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:29:29 GMT" } ]
1,364,256,000,000
[ [ "Agosta", "John Mark", "" ] ]
1303.5705
Jaume Agust\'i-Cullell
Jaume Agust\'i-Cullell, Francesc Esteva, Pere Garcia, Lluis Godo, Carles Sierra
Combining Multiple-Valued Logics in Modular Expert Systems
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-17-25
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The way experts manage uncertainty usually changes depending on the task they are performing. This fact has led us to consider the problem of communicating modules (task implementations) in a large and structured knowledge-based system when modules have different uncertainty calculi. In this paper, the analysis of the communication problem is made assuming that (i) each uncertainty calculus is an inference mechanism defining an entailment relation, and therefore the communication is considered to be inference-preserving, and (ii) we restrict ourselves to the case in which the different uncertainty calculi are given by a class of truth-functional multiple-valued logics.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:29:35 GMT" } ]
1,364,256,000,000
[ [ "Agustí-Cullell", "Jaume", "" ], [ "Esteva", "Francesc", "" ], [ "Garcia", "Pere", "" ], [ "Godo", "Lluis", "" ], [ "Sierra", "Carles", "" ] ]
1303.5706
Stephane Amarger
Stephane Amarger, Didier Dubois, Henri Prade
Constraint Propagation with Imprecise Conditional Probabilities
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-26-34
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An approach to reasoning with default rules where the proportion of exceptions, or more generally the probability of encountering an exception, can be at least roughly assessed is presented. It is based on local uncertainty propagation rules which provide the best bracketing of a conditional probability of interest from the knowledge of the bracketing of some other conditional probabilities. A procedure that uses two such propagation rules repeatedly is proposed in order to estimate any simple conditional probability of interest from the available knowledge. The iterative procedure, which does not require independence assumptions, looks promising compared with the linear programming method. Improved bounds for conditional probabilities are given when independence assumptions hold.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:29:40 GMT" } ]
1,364,256,000,000
[ [ "Amarger", "Stephane", "" ], [ "Dubois", "Didier", "" ], [ "Prade", "Henri", "" ] ]
1303.5708
Wray L. Buntine
Wray L. Buntine
Some Properties of Plausible Reasoning
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-44-51
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a plausible reasoning system to illustrate some broad issues in knowledge representation: dualities between different reasoning forms, the difficulty of unifying complementary reasoning styles, and the approximate nature of plausible reasoning. These issues have a common underlying theme: there should be an underlying belief calculus of which the many different reasoning forms are special cases, sometimes approximate. The system presented allows reasoning about defaults, likelihood, necessity and possibility in a manner similar to the earlier work of Adams. The system is based on the belief calculus of subjective Bayesian probability which itself is based on a few simple assumptions about how belief should be manipulated. Approximations, semantics, consistency and consequence results are presented for the system. While this puts these often discussed plausible reasoning forms on a probabilistic footing, useful application to practical problems remains an issue.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:29:51 GMT" } ]
1,364,256,000,000
[ [ "Buntine", "Wray L.", "" ] ]
1303.5709
Wray L. Buntine
Wray L. Buntine
Theory Refinement on Bayesian Networks
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-52-60
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Theory refinement is the task of updating a domain theory in the light of new cases, to be done automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. The problem is reduced to an incremental learning task as follows: the learning system is initially primed with a partial theory supplied by a domain expert, and thereafter maintains its own internal representation of alternative theories, which can be interrogated by the domain expert and incrementally refined from data. Algorithms for refinement of Bayesian networks are presented to illustrate what is meant by "partial theory", "alternative theory representation", etc. The algorithms are an incremental variant of batch learning algorithms from the literature, so they can work well in both batch and incremental mode.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:29:57 GMT" } ]
1,364,256,000,000
[ [ "Buntine", "Wray L.", "" ] ]
1303.5710
Jose E. Cano
Jose E. Cano, Serafin Moral, Juan F. Verdegay-Lopez
Combination of Upper and Lower Probabilities
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-61-68
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider several types of information and methods of combination associated with incomplete probabilistic systems. We discriminate between 'a priori' and evidential information. The former is a description of the whole population; the latter is a restriction based on observations for a particular case. Then, we propose different combination methods for each of them. We also consider conditioning as the heterogeneous combination of 'a priori' and evidential information. The evidential information is represented as a convex set of likelihood functions. These will have an associated possibility distribution that behaves according to classical possibility theory.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:02 GMT" } ]
1,364,256,000,000
[ [ "Cano", "Jose E.", "" ], [ "Moral", "Serafin", "" ], [ "Verdegay-Lopez", "Juan F.", "" ] ]
1303.5711
Glenn Carroll
Glenn Carroll, Eugene Charniak
A Probabilistic Analysis of Marker-Passing Techniques for Plan-Recognition
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-69-76
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Useless paths are a chronic problem for marker-passing techniques. We use a probabilistic analysis to justify a method for quickly identifying and rejecting useless paths. Using the same analysis, we identify key conditions and assumptions necessary for marker-passing to perform well.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:07 GMT" } ]
1,364,256,000,000
[ [ "Carroll", "Glenn", "" ], [ "Charniak", "Eugene", "" ] ]
1303.5712
Kuo-Chu Chang
Kuo-Chu Chang, Robert Fung
Symbolic Probabilistic Inference with Continuous Variables
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-77-81
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on Symbolic Probabilistic Inference (SPI) [2, 3] has provided an algorithm for resolving general queries in Bayesian networks. SPI applies the concept of dependency-directed backward search to probabilistic inference, and is incremental with respect to both queries and observations. Unlike traditional Bayesian network inferencing algorithms, the SPI algorithm is goal directed, performing only those calculations that are required to respond to queries. Research to date on SPI applies to Bayesian networks with discrete-valued variables and does not address variables with continuous values. In this paper, we extend the SPI algorithm to handle Bayesian networks made up of continuous variables where the relationships between the variables are restricted to be 'linear Gaussian'. We call this variation of the SPI algorithm SPI Continuous (SPIC). SPIC modifies the three basic SPI operations: multiplication, summation, and substitution. However, SPIC retains the framework of the SPI algorithm, namely the search tree construction and the recursive query mechanism, and therefore retains the goal-directed and incrementality features of SPI.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:11 GMT" } ]
1,364,256,000,000
[ [ "Chang", "Kuo-Chu", "" ], [ "Fung", "Robert", "" ] ]
1303.5713
Kuo-Chu Chang
Kuo-Chu Chang, Robert Fung
Symbolic Probabilistic Inference with Evidence Potential
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-82-85
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent research on the Symbolic Probabilistic Inference (SPI) algorithm [2] has focused attention on the importance of resolving general queries in Bayesian networks. SPI applies the concept of dependency-directed backward search to probabilistic inference, and is incremental with respect to both queries and observations. In response to this research we have extended the evidence potential algorithm [3] with the same features. We call the extension symbolic evidence potential inference (SEPI). SEPI, like SPI, can handle generic queries and is incremental with respect to queries and observations. While in SPI, operations are done on a search tree constructed from the nodes of the original network, in SEPI, a clique-tree structure obtained from the evidence potential algorithm [3] is the basic framework for recursive query processing. In this paper, we describe the systematic query and caching procedure of SEPI. SEPI begins with finding a clique tree from a Bayesian network, the standard procedure of the evidence potential algorithm. With the clique tree, various probability distributions are computed and stored in each clique. This is the 'pre-processing' step of SEPI. Once this step is done, the query can then be computed. To process a query, a recursive process similar to the SPI algorithm is used. The queries are directed to the root clique and decomposed into queries for the clique's subtrees until a particular query can be answered at the clique at which it is directed. The algorithm and the computation are simple. The SEPI algorithm will be presented in this paper along with several examples.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:16 GMT" } ]
1,364,256,000,000
[ [ "Chang", "Kuo-Chu", "" ], [ "Fung", "Robert", "" ] ]
1303.5714
Gregory F. Cooper
Gregory F. Cooper, Edward H. Herskovits
A Bayesian Method for Constructing Bayesian Belief Networks from Databases
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-86-94
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a Bayesian method for constructing Bayesian belief networks from a database of cases. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. We relate the methods in this paper to previous work, and we discuss open problems.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:21 GMT" } ]
1,364,256,000,000
[ [ "Cooper", "Gregory F.", "" ], [ "Herskovits", "Edward H.", "" ] ]
1303.5715
Bruce D'Ambrosio
Bruce D'Ambrosio
Local Expression Languages for Probabilistic Dependence: a Preliminary Report
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-95-102
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a generalization of the local expression language used in the Symbolic Probabilistic Inference (SPI) approach to inference in belief nets [1], [8]. The local expression language in SPI is the language in which the dependence of a node on its antecedents is described. The original language represented the dependence as a single monolithic conditional probability distribution. The extended language provides a set of operators (*, +, and -) which can be used to specify methods for combining partial conditional distributions. As one instance of the utility of this extension, we show how this extended language can be used to capture the semantics, representational advantages, and inferential complexity advantages of the "noisy or" relationship.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:27 GMT" } ]
1,364,256,000,000
[ [ "D'Ambrosio", "Bruce", "" ] ]
1303.5716
John Fox
John Fox, Paul J. Krause
Symbolic Decision Theory and Autonomous Systems
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-103-110
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to reason under uncertainty and with incomplete information is a fundamental requirement of decision support technology. In this paper we argue that the concentration on theoretical techniques for the evaluation and selection of decision options has distracted attention from many of the wider issues in decision making. Although numerical methods of reasoning under uncertainty have strong theoretical foundations, they are representationally weak and only deal with a small part of the decision process. Knowledge based systems, on the other hand, offer greater flexibility but have not been accompanied by a clear decision theory. We describe here work which is under way towards providing a theoretical framework for symbolic decision procedures. A central proposal is an extended form of inference which we call argumentation; reasoning for and against decision options from generalised domain theories. The approach has been successfully used in several decision support applications, but it is argued that a comprehensive decision theory must cover autonomous decision making, where the agent can formulate questions as well as take decisions. A major theoretical challenge for this theory is to capture the idea of reflection to permit decision agents to reason about their goals, what they believe and why, and what they need to know or do in order to achieve their goals.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:32 GMT" } ]
1,364,256,000,000
[ [ "Fox", "John", "" ], [ "Krause", "Paul J.", "" ] ]
1303.5717
B. Fringuelli
B. Fringuelli, S. Marcugini, A. Milani, S. Rivoira
A Reason Maintenance System Dealing with Vague Data
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-111-117
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A reason maintenance system which extends an ATMS through Mukaidono's fuzzy logic is described. It supports a problem solver in situations affected by incomplete information and vague data, by allowing nonmonotonic inferences and the revision of previous conclusions when contradictions are detected.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:37 GMT" } ]
1,364,256,000,000
[ [ "Fringuelli", "B.", "" ], [ "Marcugini", "S.", "" ], [ "Milani", "A.", "" ], [ "Rivoira", "S.", "" ] ]
1303.5718
Dan Geiger
Dan Geiger, David Heckerman
Advances in Probabilistic Reasoning
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-118-126
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discusses multiple Bayesian network representation paradigms for encoding asymmetric independence assertions. We offer three contributions: (1) an inference mechanism that makes explicit use of asymmetric independence to speed up computations, (2) a simplified definition of similarity networks and extensions of their theory, and (3) a generalized representation scheme that encodes more types of asymmetric independence assertions than do similarity networks.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:42 GMT" }, { "version": "v2", "created": "Sat, 16 May 2015 23:56:18 GMT" } ]
1,431,993,600,000
[ [ "Geiger", "Dan", "" ], [ "Heckerman", "David", "" ] ]
1303.5719
Adam J. Grove
Adam J. Grove, Daphne Koller
Probability Estimation in Face of Irrelevant Information
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-127-134
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider one aspect of the problem of applying decision theory to the design of agents that learn how to make decisions under uncertainty. This aspect concerns how an agent can estimate probabilities for the possible states of the world, given that it only makes limited observations before committing to a decision. We show that the naive application of statistical tools can be improved upon if the agent can determine which of its observations are truly relevant to the estimation problem at hand. We give a framework in which such determinations can be made, and define an estimation procedure to use them. Our framework also suggests several extensions, which show how additional knowledge can be used to improve the estimation procedure still further.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:46 GMT" } ]
1,364,256,000,000
[ [ "Grove", "Adam J.", "" ], [ "Koller", "Daphne", "" ] ]
1303.5720
David Heckerman
David Heckerman, Eric J. Horvitz, Blackford Middleton
An Approximate Nonmyopic Computation for Value of Information
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-135-141
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Value-of-information analyses provide a straightforward means for selecting the best next observation to make, and for determining whether it is better to gather additional information or to act immediately. Determining the next best test to perform, given a state of uncertainty about the world, requires a consideration of the value of making all possible sequences of observations. In practice, decision analysts and expert-system designers have avoided the intractability of exact computation of the value of information by relying on a myopic approximation. Myopic analyses are based on the assumption that only one additional test will be performed, even when there is an opportunity to make a large number of observations. We present a nonmyopic approximation for value of information that bypasses the traditional myopic analyses by exploiting the statistical properties of large samples.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:51 GMT" }, { "version": "v2", "created": "Sat, 16 May 2015 23:55:05 GMT" } ]
1,431,993,600,000
[ [ "Heckerman", "David", "" ], [ "Horvitz", "Eric J.", "" ], [ "Middleton", "Blackford", "" ] ]
1303.5721
Max Henrion
Max Henrion
Search-based Methods to Bound Diagnostic Probabilities in Very Large Belief Nets
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-142-150
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since exact probabilistic inference is intractable in general for large multiply connected belief nets, approximate methods are required. A promising approach is to use heuristic search among hypotheses (instantiations of the network) to find the most probable ones, as in the TopN algorithm. Search is based on the relative probabilities of hypotheses, which are efficient to compute. Given upper and lower bounds on the relative probability of partial hypotheses, it is possible to obtain bounds on the absolute probabilities of hypotheses. Best-first search aimed at reducing the maximum error progressively narrows the bounds as more hypotheses are examined. Here, qualitative probabilistic analysis is employed to obtain bounds on the relative probability of partial hypotheses for the BN20 class of networks and a generalization replacing the noisy OR assumption by negative synergy. The approach is illustrated by application to a very large belief network, QMR-BN, which is a reformulation of the Internist-1 system for diagnosis in internal medicine.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:30:56 GMT" } ]
1,364,256,000,000
[ [ "Henrion", "Max", "" ] ]
1303.5722
Eric J. Horvitz
Eric J. Horvitz, Geoffrey Rutledge
Time-Dependent Utility and Action Under Uncertainty
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-151-158
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss representing and reasoning with knowledge about the time-dependent utility of an agent's actions. Time-dependent utility plays a crucial role in the interaction between computation and action under bounded resources. We present a semantics for time-dependent utility and describe the use of time-dependent information in decision contexts. We illustrate our discussion with examples of time-pressured reasoning in Protos, a system constructed to explore the ideal control of inference by reasoners with limited abilities.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:01 GMT" } ]
1,364,256,000,000
[ [ "Horvitz", "Eric J.", "" ], [ "Rutledge", "Geoffrey", "" ] ]
1303.5723
Daniel Hunter
Daniel Hunter
Non-monotonic Reasoning and the Reversibility of Belief Change
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-159-164
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional approaches to non-monotonic reasoning fail to satisfy a number of plausible axioms for belief revision and suffer from conceptual difficulties as well. Recent work on ranked preferential models (RPMs) promises to overcome some of these difficulties. Here we show that RPMs are not adequate to handle iterated belief change. Specifically, we show that RPMs do not always allow for the reversibility of belief change. This result indicates the need for numerical strengths of belief.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:06 GMT" } ]
1,364,256,000,000
[ [ "Hunter", "Daniel", "" ] ]
1303.5724
Yen-Teh Hsia
Yen-Teh Hsia
Belief and Surprise - A Belief-Function Formulation
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-165-173
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We motivate and describe a theory of belief in this paper. This theory is developed with the following view of human belief in mind. Consider the belief that an event E will occur (or has occurred or is occurring). An agent either entertains this belief or does not entertain this belief (i.e., there is no "grade" in entertaining the belief). If the agent chooses to exercise "the will to believe" and entertain this belief, he/she/it is entitled to a degree of confidence c (1 > c > 0) in doing so. Adopting this view of human belief, we conjecture that whenever an agent entertains the belief that E will occur with c degree of confidence, the agent will be surprised (to the extent c) upon realizing that E did not occur.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:10 GMT" } ]
1,364,256,000,000
[ [ "Hsia", "Yen-Teh", "" ] ]
1303.5725
Robert Kennes
Robert Kennes
Evidential Reasoning in a Categorial Perspective: Conjunction and Disjunction of Belief Functions
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-174-181
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The categorial approach to evidential reasoning can be seen as a combination of the probability kinematics approach of Richard Jeffrey (1965) and the maximum (cross-) entropy inference approach of E. T. Jaynes (1957). As a consequence of that viewpoint, it is well known that category theory provides natural definitions for logical connectives. In particular, disjunction and conjunction are modelled by general categorial constructions known as products and coproducts. In this paper, I focus mainly on the Dempster-Shafer theory of belief functions, for which I introduce a category I call Dempster's category. I prove the existence of and give explicit formulas for conjunction and disjunction in the subcategory of separable belief functions. In Dempster's category, the newly defined conjunction can be seen as the most cautious conjunction of beliefs, and thus no assumption about distinctness (of the sources) of beliefs is needed, as opposed to Dempster's rule of combination, which calls for distinctness (of the sources) of beliefs.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:15 GMT" } ]
1,364,256,000,000
[ [ "Kennes", "Robert", "" ] ]
1303.5726
Rudolf Kruse
Rudolf Kruse, Detlef Nauck, Frank Klawonn
Reasoning with Mass Distributions
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-182-187
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The concept of movable evidence masses that flow from supersets to subsets as specified by experts represents a suitable framework for reasoning under uncertainty. The mass flow is controlled by specialization matrices. New evidence is integrated into the frame of discernment by conditioning or revision (Dempster's rule of conditioning), for which special specialization matrices exist. Even some aspects of non-monotonic reasoning can be represented by certain specialization matrices.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:20 GMT" } ]
1,364,256,000,000
[ [ "Kruse", "Rudolf", "" ], [ "Nauck", "Detlef", "" ], [ "Klawonn", "Frank", "" ] ]
1303.5728
Kathryn Blackmond Laskey
Kathryn Blackmond Laskey
Conflict and Surprise: Heuristics for Model Revision
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-197-204
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Any probabilistic model of a problem is based on assumptions which, if violated, invalidate the model. Users of probability based decision aids need to be alerted when cases arise that are not covered by the aid's model. Diagnosis of model failure is also necessary to control dynamic model construction and revision. This paper presents a set of decision theoretically motivated heuristics for diagnosing situations in which a model is likely to provide an inadequate representation of the process being modeled.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:31 GMT" } ]
1,364,256,000,000
[ [ "Laskey", "Kathryn Blackmond", "" ] ]
1303.5729
Paul E. Lehner
Paul E. Lehner, Azar Sadigh
Reasoning under Uncertainty: Some Monte Carlo Results
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-205-211
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A series of Monte Carlo studies was performed to compare the behavior of some alternative procedures for reasoning under uncertainty. The behavior of several Bayesian, linear model and default reasoning procedures was examined in the context of increasing levels of calibration error. The most interesting result is that Bayesian procedures tended to output more extreme posterior belief values (posterior beliefs near 0.0 or 1.0) than other techniques, but the linear models were relatively less likely to output strong support for an erroneous conclusion. Also, accounting for the probabilistic dependencies between evidence items was important for both Bayesian and linear updating procedures.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:35 GMT" } ]
1,364,256,000,000
[ [ "Lehner", "Paul E.", "" ], [ "Sadigh", "Azar", "" ] ]
1303.5730
Tze-Yun Leong
Tze-Yun Leong
Representation Requirements for Supporting Decision Model Formulation
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-212-219
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper outlines a methodology for analyzing the representational support for knowledge-based decision-modeling in a broad domain. A relevant set of inference patterns and knowledge types are identified. By comparing the analysis results to existing representations, some insights are gained into a design approach for integrating categorical and uncertain knowledge in a context sensitive manner.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:41 GMT" } ]
1,364,256,000,000
[ [ "Leong", "Tze-Yun", "" ] ]
1303.5731
Nathaniel G. Martin
Nathaniel G. Martin, James F. Allen
A Language for Planning with Statistics
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-220-227
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When a planner must decide whether it has enough evidence to make a decision based on probability, it faces the sample size problem. Current planners using probabilities need not deal with this problem because they do not generate their probabilities from observations. This paper presents an event based language in which the planner's probabilities are calculated from the binomial random variable generated by the observed ratio of one type of event to another. Such probabilities are subject to error, so the planner must introspect about their validity. Inferences about the probability of these events can be made using statistics. Inferences about the validity of the approximations can be made using interval estimation. Interval estimation allows the planner to avoid making choices that are only weakly supported by the planner's evidence.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:46 GMT" } ]
1,364,256,000,000
[ [ "Martin", "Nathaniel G.", "" ], [ "Allen", "James F.", "" ] ]
1303.5732
B\"ulent Murtezaoglu
B\"ulent Murtezao\u{g}lu, Henry E. Kyburg Jr
A Modification to Evidential Probability
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-228-231
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Selecting the right reference class and the right interval when faced with conflicting candidates and no possibility of establishing subset style dominance has been a problem for Kyburg's Evidential Probability system. Various methods have been proposed by Loui and Kyburg to solve this problem in a way that is both intuitively appealing and justifiable within Kyburg's framework. The scheme proposed in this paper leads to stronger statistical assertions without sacrificing too much of the intuitive appeal of Kyburg's latest proposal.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:51 GMT" } ]
1,364,256,000,000
[ [ "Murtezaoğlu", "Bülent", "" ], [ "Kyburg", "Henry E.", "Jr" ] ]
1303.5733
Richard E. Neapolitan
Richard E. Neapolitan, James Kenevan
Investigation of Variances in Belief Networks
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-232-241
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The belief network is a well-known graphical structure for representing independences in a joint probability distribution. The methods that perform probabilistic inference in belief networks often treat the conditional probabilities which are stored in the network as certain values. However, if one takes either a subjectivistic or a limiting frequency approach to probability, one can never be certain of probability values. An algorithm should not only be capable of reporting the probabilities of the alternatives of remaining nodes when other nodes are instantiated; it should also be capable of reporting the uncertainty in these probabilities relative to the uncertainty in the probabilities which are stored in the network. In this paper a method for determining the variances in inferred probabilities is obtained under the assumption that a posterior distribution on the uncertainty variables can be approximated by the prior distribution. It is shown that this assumption is plausible if there is a reasonable amount of confidence in the probabilities which are stored in the network. Furthermore, in this paper a surprising upper bound for the prior variances in the probabilities of the alternatives of all nodes is obtained in the case where the probability distributions of the probabilities of the alternatives are beta distributions. It is shown that the prior variance in the probability at an alternative of a node is bounded above by the largest variance in an element of the conditional probability distribution for that node.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:31:56 GMT" } ]
1,364,256,000,000
[ [ "Neapolitan", "Richard E.", "" ], [ "Kenevan", "James", "" ] ]
1303.5734
Keung-Chi Ng
Keung-Chi Ng, Bruce Abramson
A Sensitivity Analysis of Pathfinder: A Follow-up Study
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-242-248
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At last year's Uncertainty in AI Conference, we reported the results of a sensitivity analysis study of Pathfinder. Our findings were quite unexpected: slight variations to Pathfinder's parameters appeared to lead to substantial degradations in system performance. A careful look at our first analysis, together with the valuable feedback provided by the participants of last year's conference, led us to conduct a follow-up study. Our follow-up differs from our initial study in two ways: (i) the probabilities 0.0 and 1.0 remained unchanged, and (ii) the variations to the probabilities that are close to either end (0.0 or 1.0) were smaller than the ones close to the middle (0.5). The results of the follow-up study look more reasonable: slight variations to Pathfinder's parameters now have little effect on its performance. Taken together, these two sets of results suggest a viable extension of a common decision analytic sensitivity analysis to the larger, more complex settings generally encountered in artificial intelligence.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:32:02 GMT" } ]
1,364,256,000,000
[ [ "Ng", "Keung-Chi", "" ], [ "Abramson", "Bruce", "" ] ]
1303.5735
Raymond T. Ng
Raymond T. Ng, V. S. Subrahmanian
Non-monotonic Negation in Probabilistic Deductive Databases
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-249-256
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study the uses and the semantics of non-monotonic negation in probabilistic deductive databases. Based on the stable semantics for classical logic programming, we introduce the notion of stable formula functions. We show that stable formula functions are minimal fixpoints of operators associated with probabilistic deductive databases with negation. Furthermore, since a probabilistic deductive database may not necessarily have a stable formula function, we provide a stable class semantics for such databases. Finally, we demonstrate that the proposed semantics can handle default reasoning naturally in the context of probabilistic deduction.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:32:08 GMT" } ]
1,364,256,000,000
[ [ "Ng", "Raymond T.", "" ], [ "Subrahmanian", "V. S.", "" ] ]
1303.5736
Robert K. Paasch
Robert K. Paasch, Alice M. Agogino
Management of Uncertainty in the Multi-Level Monitoring and Diagnosis of the Time of Flight Scintillation Array
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-257-263
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a general architecture for the monitoring and diagnosis of large-scale sensor-based systems with real-time diagnostic constraints. This architecture is multileveled, combining a single monitoring level based on statistical methods with two model-based diagnostic levels. At each level, sources of uncertainty are identified, and integrated methodologies for uncertainty management are developed. The general architecture was applied to the monitoring and diagnosis of a specific nuclear physics detector at Lawrence Berkeley National Laboratory that contained approximately 5000 components and produced over 500 channels of output data. The general architecture is scalable, and work is ongoing to apply it to detector systems one and two orders of magnitude more complex.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:32:13 GMT" } ]
1,364,256,000,000
[ [ "Paasch", "Robert K.", "" ], [ "Agogino", "Alice M.", "" ] ]
1303.5737
Gerhard Paa{\ss}
Gerhard Paass
Integrating Probabilistic Rules into Neural Networks: A Stochastic EM Learning Algorithm
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-264-270
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The EM algorithm is a general procedure for obtaining maximum likelihood estimates when some of the observations on the variables of a network are missing. In this paper a stochastic version of the algorithm is adapted to probabilistic neural networks describing the associative dependency of variables. These networks have a probability distribution which is a special case of the distribution generated by probabilistic inference networks. Hence both types of networks can be combined, allowing probabilistic rules as well as unspecified associations to be integrated in a sound way. The resulting network may have a number of interesting features, including cycles of probabilistic rules, hidden 'unobservable' variables, and uncertain and contradictory evidence.
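A minimal sketch of the stochastic E-step/M-step alternation on a toy missing-data problem (a two-component Gaussian mixture with unit variances); this is a generic illustration of stochastic EM, not the probabilistic neural network model of the paper:

import math
import random

def stochastic_em(data, iters=200, rng=random):
    """Stochastic EM for a two-component Gaussian mixture (unit variances).

    E-step: sample each hidden component label from its posterior given the
    current parameters, instead of using expected responsibilities.
    M-step: re-estimate the parameters from the completed data.
    """
    mu = [min(data), max(data)]   # crude initialisation (illustrative)
    pi = 0.5
    for _ in range(iters):
        # Stochastic E-step: draw a component label for every observation.
        labels = []
        for x in data:
            p0 = pi * math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = (1.0 - pi) * math.exp(-0.5 * (x - mu[1]) ** 2)
            labels.append(0 if rng.random() < p0 / (p0 + p1) else 1)
        # M-step: maximum likelihood estimates from the completed sample.
        group0 = [x for x, z in zip(data, labels) if z == 0]
        group1 = [x for x, z in zip(data, labels) if z == 1]
        if group0 and group1:     # avoid empty-component degeneracy
            mu[0] = sum(group0) / len(group0)
            mu[1] = sum(group1) / len(group1)
            pi = len(group0) / len(data)
    return pi, mu

# Example: data drawn from two well-separated clusters.
sample = [random.gauss(-2, 1) for _ in range(200)] + \
         [random.gauss(3, 1) for _ in range(200)]
print(stochastic_em(sample))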
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:32:18 GMT" } ]
1,364,256,000,000
[ [ "Paass", "Gerhard", "" ] ]
1303.5738
David L Poole
David L. Poole
Representing Bayesian Networks within Probabilistic Horn Abduction
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-271-278
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a simple framework for Horn clause abduction, with probabilities associated with hypotheses. It is shown how this representation can represent any probabilistic knowledge representable in a Bayesian belief network. The main contributions are in finding a relationship between logical and probabilistic notions of evidential reasoning. This can be used as a basis for a new way to implement Bayesian Networks that allows for approximations to the value of the posterior probabilities, and also points to a way that Bayesian networks can be extended beyond a propositional language.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:32:22 GMT" } ]
1,364,256,000,000
[ [ "Poole", "David L.", "" ] ]
1303.5739
Gregory M. Provan
Gregory M. Provan
Dynamic Network Updating Techniques For Diagnostic Reasoning
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-279-286
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new probabilistic network construction system, DYNASTY, is proposed for diagnostic reasoning given variables whose probabilities change over time. Diagnostic reasoning is formulated as a sequential stochastic process and is modeled using influence diagrams. Given a set O of observations, DYNASTY creates an influence diagram in order to devise the best action given O. Sensitivity analyses are conducted to determine whether the best network has been created, given the uncertainty in network parameters and topology. DYNASTY uses an equivalence-class approach to provide decision thresholds for the sensitivity analysis. This equivalence-class approach to diagnostic reasoning differentiates diagnoses only if the required actions are different. A set of network-topology updating algorithms is proposed for dynamically updating the network when necessary.
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:32:27 GMT" } ]
1,364,256,000,000
[ [ "Provan", "Gregory M.", "" ] ]
1303.5740
Runping Qi
Runping Qi, David L. Poole
High Level Path Planning with Uncertainty
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-287-294
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For high-level path planning, environments are usually modeled as distance graphs, and path planning problems are reduced to computing the shortest path in distance graphs. One major drawback of this modeling is the inability to represent uncertainties, which are often encountered in practice. In this paper, a new tool, called a U-graph, is proposed for environment modeling. A U-graph is an extension of distance graphs with the ability to handle a kind of uncertainty. By modeling an uncertain environment as a U-graph, and a navigation problem as a Markovian decision process, we can precisely define a new optimality criterion for navigation plans and, more importantly, we can come up with a general algorithm for computing optimal plans for navigation tasks.
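A minimal sketch of treating navigation in an uncertain graph as a Markovian decision process; the simplified edge model (each attempted edge succeeds with a given probability and otherwise leaves the agent in place) is an illustrative assumption, not the paper's U-graph formalism or algorithm:

def plan_expected_cost(edges, goal, iters=500):
    """Value iteration for a simplified 'uncertain graph' navigation MDP.

    edges: dict mapping node -> list of (neighbour, cost, success_prob).
    Attempting an edge always pays its cost; with probability success_prob
    the move succeeds, otherwise the agent stays put and may retry or try
    another edge.  Success probabilities are assumed strictly positive.
    Returns expected costs-to-goal and a greedy policy.
    """
    UNREACHABLE = 1e9
    nodes = set(edges) | {goal}
    for outs in edges.values():
        nodes |= {v for v, _, _ in outs}
    # Dead ends (non-goal nodes with no outgoing edges) get a large penalty.
    V = {n: 0.0 if (n == goal or edges.get(n)) else UNREACHABLE for n in nodes}
    for _ in range(iters):
        newV = dict(V)
        for u in nodes:
            if u == goal or not edges.get(u):
                continue
            newV[u] = min(cost + p * V[v] + (1 - p) * V[u]
                          for v, cost, p in edges[u])
        V = newV
    policy = {u: min(edges[u],
                     key=lambda e: e[1] + e[2] * V[e[0]] + (1 - e[2]) * V[u])[0]
              for u in nodes if u != goal and edges.get(u)}
    return V, policy

# Example: from 'a', a cheap but unreliable edge competes with a reliable detour.
g = {"a": [("goal", 1.0, 0.3), ("b", 2.0, 1.0)], "b": [("goal", 2.0, 1.0)]}
print(plan_expected_cost(g, "goal"))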
[ { "version": "v1", "created": "Wed, 20 Mar 2013 15:32:32 GMT" } ]
1,364,256,000,000
[ [ "Qi", "Runping", "" ], [ "Poole", "David L.", "" ] ]