id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1304.2724 | David Heckerman | David Heckerman, Holly B. Jimison | A Perspective on Confidence and Its Use in Focusing Attention During
Knowledge Acquisition | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-123-131 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a representation of partial confidence in belief and preference
that is consistent with the tenets of decision theory. The fundamental insight
underlying the representation is that if a person is not completely confident
in a probability or utility assessment, additional modeling of the assessment
may improve decisions to which it is relevant. We show how a traditional
decision-analytic approach can be used to balance the benefits of additional
modeling with associated costs. The approach can be used during knowledge
acquisition to focus the attention of a knowledge engineer or expert on parts
of a decision model that deserve additional refinement.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:22 GMT"
}
] | 1,365,638,400,000 | [
[
"Heckerman",
"David",
""
],
[
"Jimison",
"Holly B.",
""
]
] |
1304.2725 | Max Henrion | Max Henrion | Practical Issues in Constructing a Bayes' Belief Network | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-132-139 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayes belief networks and influence diagrams are tools for constructing
coherent probabilistic representations of uncertain knowledge. The process of
constructing such a network to represent an expert's knowledge is used to
illustrate a variety of techniques which can facilitate the process of
structuring and quantifying uncertain relationships. These include some
generalizations of the "noisy OR gate" concept. Sensitivity analysis of generic
elements of Bayes' networks provides insight into when rough probability
assessments are sufficient and when greater precision may be important.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:27 GMT"
}
] | 1,365,638,400,000 | [
[
"Henrion",
"Max",
""
]
] |
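The "noisy OR gate" mentioned in the Henrion entry above has a standard closed form. As a hedged illustration (not code from the paper), the sketch below computes the probability of an effect from independent inhibitor probabilities; the variable names and numbers are invented for the example.

```python
# Minimal noisy-OR sketch: an effect occurs unless every active cause is
# independently inhibited; q[i] is the probability that cause i is inhibited.
def noisy_or(active_causes, q):
    """P(effect | active causes) = 1 - product of inhibitor probabilities."""
    p_all_inhibited = 1.0
    for cause in active_causes:
        p_all_inhibited *= q[cause]
    return 1.0 - p_all_inhibited

q = {"flu": 0.3, "cold": 0.6}         # hypothetical inhibitor probabilities
print(noisy_or({"flu", "cold"}, q))   # 1 - 0.3 * 0.6 = 0.82
```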
1304.2726 | Michael C. Higgins | Michael C. Higgins | NAIVE: A Method for Representing Uncertainty and Temporal Relationships
in an Automated Reasoner | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-140-147 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes NAIVE, a low-level knowledge representation language and
inferencing process. NAIVE has been designed for reasoning about
nondeterministic dynamic systems like those found in medicine. Knowledge is
represented in a graph structure consisting of nodes, which correspond to the
variables describing the system of interest, and arcs, which correspond to the
procedures used to infer the value of a variable from the values of other
variables. The value of a variable can be determined at an instant in time,
over a time interval or for a series of times. Information about the value of a
variable is expressed as a probability density function which quantifies the
likelihood of each possible value. The inferencing process uses these
probability density functions to propagate uncertainty. NAIVE has been used to
develop medical knowledge bases including over 100 variables.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:31 GMT"
}
] | 1,365,638,400,000 | [
[
"Higgins",
"Michael C.",
""
]
] |
1304.2727 | Henry E. Kyburg Jr. | Henry E. Kyburg Jr | Objective Probability | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-148-155 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A distinction is sometimes made between "statistical" and "subjective"
probabilities. This is based on a distinction between "unique" events and
"repeatable" events. We argue that this distinction is untenable, since all
events are "unique" and all events belong to "kinds", and offer a conception of
probability for AI in which (1) all probabilities are based on -- possibly
vague -- statistical knowledge, and (2) every statement in the language has a
probability. This conception of probability can be applied to very rich
languages.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:36 GMT"
}
] | 1,365,638,400,000 | [
[
"Kyburg",
"Henry E.",
"Jr"
]
] |
1304.2728 | Silvio Ursic | Silvio Ursic | Coefficients of Relations for Probabilistic Reasoning | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-156-162 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Definitions and notations with historical references are given for some
numerical coefficients commonly used to quantify relations among collections of
objects for the purpose of expressing approximate knowledge and probabilistic
reasoning.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:41 GMT"
}
] | 1,365,638,400,000 | [
[
"Ursic",
"Silvio",
""
]
] |
1304.2729 | Ben P. Wise | Ben P. Wise | Satisfaction of Assumptions is a Weak Predictor of Performance | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-163-169 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper demonstrates a methodology for examining the accuracy of uncertain
inference systems (UIS), after their parameters have been optimized, and does
so for several common UIS's. This methodology may be used to test the accuracy
when either the prior assumptions or updating formulae are not exactly
satisfied. Surprisingly, these UIS's were revealed to be no more accurate on
the average than a simple linear regression. Moreover, even on prior
distributions which were deliberately biased so as to give very good accuracy,
they were less accurate than the simple probabilistic model which assumes
marginal independence between inputs. This demonstrates that the importance of
updating formulae can outweigh that of prior assumptions. Thus, when UIS's are
judged by their final accuracy after optimization, we get completely different
results than when they are judged by whether or not their prior assumptions are
perfectly satisfied.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:45 GMT"
}
] | 1,365,638,400,000 | [
[
"Wise",
"Ben P.",
""
]
] |
1304.2730 | Lei Xu | Lei Xu, Judea Pearl | Structuring Causal Tree Models with Continuous Variables | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-170-179 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the problem of invoking auxiliary, unobservable
variables to facilitate the structuring of causal tree models for a given set
of continuous variables. Paralleling the treatment of bi-valued variables in
[Pearl 1986], we show that if a collection of coupled variables are governed by
a joint normal distribution and a tree-structured representation exists, then
both the topology and all internal relationships of the tree can be uncovered
by observing pairwise dependencies among the observed variables (i.e., the
leaves of the tree). Furthermore, the conditions for normally distributed
variables are less restrictive than those governing bi-valued variables. The
result extends the applications of causal tree models which were found useful
in evidential reasoning tasks.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:50 GMT"
}
] | 1,365,638,400,000 | [
[
"Xu",
"Lei",
""
],
[
"Pearl",
"Judea",
""
]
] |
1304.2731 | John Yen | John Yen | Implementing Evidential Reasoning in Expert Systems | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-180-188 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Dempster-Shafer theory has been extended recently for its application to
expert systems. However, implementing the extended D-S reasoning model in
rule-based systems greatly complicates the task of generating informative
explanations. By implementing GERTIS, a prototype system for diagnosing
rheumatoid arthritis, we show that two kinds of knowledge are essential for
explanation generation: (1) taxonomic class relationships between hypotheses
and (2) pointers to the rules that significantly contribute to belief in the
hypothesis. As a result, the knowledge represented in GERTIS is richer and more
complex than that of conventional rule-based systems. GERTIS not only
demonstrates the feasibility of rule-based evidential-reasoning systems, but
also suggests ways to generate better explanations, and to explicitly represent
various useful relationships among hypotheses and rules.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:55 GMT"
}
] | 1,365,638,400,000 | [
[
"Yen",
"John",
""
]
] |
1304.2732 | Wray L. Buntine | Wray L. Buntine | Decision Tree Induction Systems: A Bayesian Analysis | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-190-197 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision tree induction systems are being used for knowledge acquisition in
noisy domains. This paper develops a subjective Bayesian interpretation of the
task tackled by these systems and the heuristic methods they use. It is argued
that decision tree systems implicitly incorporate a prior belief that the
simpler (in terms of decision tree complexity) of two hypotheses be preferred,
all else being equal, and that they perform a greedy search of the space of
decision rules to find one in which there is strong posterior belief. A number
of improvements to these systems are then suggested.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:00 GMT"
}
] | 1,365,638,400,000 | [
[
"Buntine",
"Wray L.",
""
]
] |
1304.2733 | Richard A. Caruana | Richard A. Caruana | The Automatic Training of Rule Bases that Use Numerical Uncertainty
Representations | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-198-204 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of numerical uncertainty representations allows better modeling of
some aspects of human evidential reasoning. It also makes knowledge acquisition
and system development, test, and modification more difficult. We propose that
where possible, the assignment and/or refinement of rule weights should be
performed automatically. We present one approach to performing this training -
numerical optimization - and report on the results of some preliminary tests in
training rule bases. We also show that truth maintenance can be used to make
training more efficient and ask some epistemological questions raised by
training rule weights.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:04 GMT"
}
] | 1,365,638,400,000 | [
[
"Caruana",
"Richard A.",
""
]
] |
1304.2734 | Norman C. Dalkey | Norman C. Dalkey | The Inductive Logic of Information Systems | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-205-211 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An inductive logic can be formulated in which the elements are not
propositions or probability distributions, but information systems. The logic
is complete for information systems with binary hypotheses, i.e., it applies to
all such systems. It is not complete for information systems with more than two
hypotheses, but applies to a subset of such systems. The logic is inductive in
that conclusions are more informative than premises. Inferences using the
formalism have a strong justification in terms of the expected value of the
derived information system.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:09 GMT"
}
] | 1,365,638,400,000 | [
[
"Dalkey",
"Norman C.",
""
]
] |
1304.2735 | Stephen I. Gallant | Stephen I. Gallant | Automated Generation of Connectionist Expert Systems for Problems
Involving Noise and Redundancy | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-212-221 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When creating an expert system, the most difficult and expensive task is
constructing a knowledge base. This is particularly true if the problem
involves noisy data and redundant measurements. This paper shows how to modify
the MACIE process for generating connectionist expert systems from training
examples so that it can accommodate noisy and redundant data. The basic idea is
to dynamically generate appropriate training examples by constructing both a
'deep' model and a noise model for the underlying problem. The use of
winner-take-all groups of variables is also discussed. These techniques are
illustrated with a small example that would be very difficult for standard
expert system approaches.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:14 GMT"
}
] | 1,365,638,400,000 | [
[
"Gallant",
"Stephen I.",
""
]
] |
1304.2736 | George Rebane | George Rebane, Judea Pearl | The Recovery of Causal Poly-Trees from Statistical Data | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-222-228 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Poly-trees are singly connected causal networks in which variables may arise
from multiple causes. This paper develops a method of recovering poly-trees from
empirically measured probability distributions of pairs of variables. The
method guarantees that, if the measured distributions are generated by a causal
process structured as a poly-tree, then the topological structure of such a tree
can be recovered precisely and, in addition, the causal directionality of the
branches can be determined up to the maximum extent possible. The method also
pinpoints the minimum (if any) external semantics required to determine the
causal relationships among the variables considered.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:18 GMT"
}
] | 1,365,638,400,000 | [
[
"Rebane",
"George",
""
],
[
"Pearl",
"Judea",
""
]
] |
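The recovery procedure in the Rebane-Pearl entry above starts from pairwise dependencies. As a minimal sketch of that first step (not the authors' code, and assuming discrete observations supplied as rows), the following estimates pairwise mutual information and extracts a maximum-weight spanning tree as the candidate skeleton; orienting the branches is a separate second step.

```python
# Estimate pairwise mutual information from data, then build the
# maximum-weight spanning tree (Kruskal's algorithm) as the tree skeleton.
from collections import Counter
from itertools import combinations
from math import log

def mutual_information(rows, i, j):
    n = len(rows)
    pij = Counter((r[i], r[j]) for r in rows)
    pi = Counter(r[i] for r in rows)
    pj = Counter(r[j] for r in rows)
    return sum((c / n) * log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def skeleton(rows, num_vars):
    edges = sorted(((mutual_information(rows, i, j), i, j)
                    for i, j in combinations(range(num_vars), 2)),
                   reverse=True)
    parent = list(range(num_vars))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # keep edge only if it joins components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```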
1304.2737 | Ross D. Shachter | Ross D. Shachter, David M. Eddy, Vic Hasselblad, Robert Wolpert | A Heuristic Bayesian Approach to Knowledge Acquisition: Application to
Analysis of Tissue-Type Plasminogen Activator | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-229-236 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a heuristic Bayesian method for computing probability
distributions from experimental data, based upon the multivariate normal form
of the influence diagram. An example illustrates its use in medical technology
assessment. This approach facilitates the integration of results from different
studies, and permits a medical expert to make proper assessments without
considerable statistical training.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:23 GMT"
}
] | 1,365,638,400,000 | [
[
"Shachter",
"Ross D.",
""
],
[
"Eddy",
"David M.",
""
],
[
"Hasselblad",
"Vic",
""
],
[
"Wolpert",
"Robert",
""
]
] |
1304.2738 | Spencer Star | Spencer Star | Theory-Based Inductive Learning: An Integration of Symbolic and
Quantitative Methods | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-237-248 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of this paper is to propose a method that will generate a
causal explanation of observed events in an uncertain world and then make
decisions based on that explanation. Feedback can cause the explanation and
decisions to be modified. I call the method Theory-Based Inductive Learning
(T-BIL). T-BIL integrates deductive learning, based on a technique called
Explanation-Based Generalization (EBG) from the field of machine learning, with
inductive learning methods from Bayesian decision theory. T-BIL takes as inputs
(1) a decision problem involving a sequence of related decisions over time, (2)
a training example of a solution to the decision problem in one period, and (3)
the domain theory relevant to the decision problem. T-BIL uses these inputs to
construct a probabilistic explanation of why the training example is an
instance of a solution to one stage of the sequential decision problem. This
explanation is then generalized to cover a more general class of instances and
is used as the basis for making the next-stage decisions. As the outcomes of
each decision are observed, the explanation is revised, which in turn affects
the subsequent decisions. A detailed example is presented that uses T-BIL to
solve a very general stochastic adaptive control problem for an autonomous
mobile robot.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:28 GMT"
}
] | 1,365,638,400,000 | [
[
"Star",
"Spencer",
""
]
] |
1304.2739 | Piero P. Bonissone | Piero P. Bonissone | Using T-Norm Based Uncertainty Calculi in a Naval Situation Assessment
Application | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-250-261 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RUM (Reasoning with Uncertainty Module) is an integrated software tool based
on KEE, a frame system implemented in an object-oriented language. RUM's
architecture is composed of three layers: representation, inference, and
control. The representation layer is based on frame-like data structures that
capture the uncertainty information used in the inference layer and the
uncertainty meta-information used in the control layer. The inference layer
provides a selection of five T-norm based uncertainty calculi with which to
perform the intersection, detachment, union, and pooling of information. The
control layer uses the meta-information to select the appropriate calculus for
each context and to resolve eventual ignorance or conflict in the information.
This layer also provides a context mechanism that allows the system to focus on
the relevant portion of the knowledge base, and an uncertain-belief revision
system that incrementally updates the certainty values of well-formed formulae
(wffs) in an acyclic directed deduction graph. RUM has been tested and
validated in a sequence of experiments in both naval and aerial situation
assessment (SA), consisting of correlating reports and tracks, locating and
classifying platforms, and identifying intents and threats. An example of naval
situation assessment is illustrated. The testbed environment for developing
these experiments has been provided by LOTTA, a symbolic simulator implemented
in Flavors. This simulator maintains time-varying situations in a multi-player
antagonistic game where players must make decisions in light of uncertain and
incomplete data. RUM has been used to assist one of the LOTTA players to
perform the SA task.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:34 GMT"
}
] | 1,365,638,400,000 | [
[
"Bonissone",
"Piero P.",
""
]
] |
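The five T-norm based calculi in the Bonissone entry above are drawn from the standard T-norm family. The sketch below shows three classic members and their De Morgan dual T-conorms; it is illustrative only and is not claimed to be the exact set of calculi shipped with RUM.

```python
# Classic T-norms (conjunction operators on [0, 1]); the dual T-conorm,
# used for disjunction, follows by De Morgan: S(a, b) = 1 - T(1-a, 1-b).
t_norms = {
    "lukasiewicz": lambda a, b: max(0.0, a + b - 1.0),
    "product":     lambda a, b: a * b,
    "min":         lambda a, b: min(a, b),
}

def t_conorm(t, a, b):
    return 1.0 - t(1.0 - a, 1.0 - b)

for name, t in t_norms.items():
    print(name, t(0.7, 0.8), t_conorm(t, 0.7, 0.8))
```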
1304.2740 | Yizong Cheng | Yizong Cheng, Rangasami L. Kashyap | A Study of Associative Evidential Reasoning | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-262-269 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evidential reasoning is cast as the problem of simplifying the
evidence-hypothesis relation and constructing combination formulas that possess
certain testable properties. Important classes of evidence as identifiers,
annihilators, and idempotents and their roles in determining binary operations
on intervals of reals are discussed. The appropriate way of constructing
formulas for combining evidence and their limitations, for instance, in
robustness, are presented.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:39 GMT"
}
] | 1,365,638,400,000 | [
[
"Cheng",
"Yizong",
""
],
[
"Kashyap",
"Rangasami L.",
""
]
] |
1304.2741 | I. R. Goodman | I. R. Goodman | A Measure-Free Approach to Conditioning | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-270-277 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In an earlier paper, a new theory of measurefree "conditional" objects was
presented. In this paper, emphasis is placed upon the motivation of the theory.
The central part of this motivation is established through an example involving
a knowledge-based system. In order to evaluate combination of evidence for this
system, using observed data, auxiliary attribute and diagnosis variables, and
inference rules connecting them, one must first choose an appropriate algebraic
logic description pair (ALDP): a formal language or syntax followed by a
compatible logic or semantic evaluation (or model). Three common choices - for
this highly non-unique choice - are briefly discussed, the logics being
Classical Logic, Fuzzy Logic, and Probability Logic. In all three, the key
operator representing implication for the inference rules is interpreted as the
often-used disjunction of a negation, (b => a) = (b' v a), for any events a, b.
However, another reasonable interpretation of the implication operator is
through the familiar form of probabilistic conditioning. But, it can be shown -
quite surprisingly - that the ALDP corresponding to Probability Logic cannot be
used as a rigorous basis for this interpretation! To fill this gap, a new ALDP
is constructed consisting of "conditional objects", extending ordinary
Probability Logic, and compatible with the desired conditional probability
interpretation of inference rules. It is shown also that this choice of ALDP
leads to feasible computations for the combination of evidence evaluation in
the example. In addition, a number of basic properties of conditional objects
and the resulting Conditional Probability Logic are given, including a
characterization property and a developed calculus of relations.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:45 GMT"
}
] | 1,365,638,400,000 | [
[
"Goodman",
"I. R.",
""
]
] |
1304.2742 | Peter Haddawy | Peter Haddawy, Alan M. Frisch | Convergent Deduction for Probabilistic Logic | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-278-286 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses the semantics and proof theory of Nilsson's
probabilistic logic, outlining both the benefits of its well-defined model
theory and the drawbacks of its proof theory. Within Nilsson's semantic
framework, we derive a set of inference rules which are provably sound. The
resulting proof system, in contrast to Nilsson's approach, has the important
feature of convergence - that is, the inference process proceeds by computing
increasingly narrow probability intervals which converge from above and below
on the smallest entailed probability interval. Thus the procedure can be
stopped at any time to yield partial information concerning the smallest
entailed interval.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:49 GMT"
}
] | 1,365,638,400,000 | [
[
"Haddawy",
"Peter",
""
],
[
"Frisch",
"Alan M.",
""
]
] |
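The "smallest entailed probability interval" targeted by the Haddawy-Frisch procedure can, for small problems, be computed exactly by linear programming over possible worlds, as in Nilsson's original formulation. The sketch below assumes two premises whose probabilities (0.7 and 0.9) are invented for the example, and bounds the probability of a conclusion:

```python
# Tightest entailed interval for P(b) given P(a) = 0.7 and P(a -> b) = 0.9:
# minimize and maximize P(b) over all distributions on the four possible
# worlds that satisfy the premise constraints.
from itertools import product
from scipy.optimize import linprog

worlds = list(product([False, True], repeat=2))   # truth values for (a, b)
premises = [
    (lambda a, b: a, 0.7),                        # P(a) = 0.7
    (lambda a, b: (not a) or b, 0.9),             # P(a -> b) = 0.9
]
A_eq = [[1.0 if f(*w) else 0.0 for w in worlds] for f, _ in premises]
A_eq.append([1.0] * len(worlds))                  # total probability mass
b_eq = [p for _, p in premises] + [1.0]
c = [1.0 if b else 0.0 for (a, b) in worlds]      # objective: P(b)

lo = linprog(c, A_eq=A_eq, b_eq=b_eq).fun
hi = -linprog([-x for x in c], A_eq=A_eq, b_eq=b_eq).fun
print(f"P(b) lies in [{lo:.2f}, {hi:.2f}]")       # [0.60, 0.90]
```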
1304.2744 | Donald H. Mitchell | Donald H. Mitchell, Steven A. Harp, David K. Simkin | A Knowledge Engineer's Comparison of Three Evidence Aggregation Methods | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-297-304 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The comparisons of uncertainty calculi from the last two Uncertainty
Workshops have all used theoretical probabilistic accuracy as the sole metric.
While mathematical correctness is important, there are other factors which
should be considered when developing reasoning systems. These other factors
include, among other things, the error in uncertainty measures obtainable for
the problem and the effect of this error on the performance of the resulting
system.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:48:59 GMT"
}
] | 1,365,638,400,000 | [
[
"Mitchell",
"Donald H.",
""
],
[
"Harp",
"Steven A.",
""
],
[
"Simkin",
"David K.",
""
]
] |
1304.2745 | Eric Neufeld | Eric Neufeld, David L Poole | Towards Solving the Multiple Extension Problem: Combining Defaults and
Probabilities | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-305-312 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The multiple extension problem arises frequently in diagnostic and default
inference. That is, we can often use any of a number of sets of defaults or
possible hypotheses to explain observations or make predictions. In default
inference, some extensions seem to be simply wrong and we use qualitative
techniques to weed out the unwanted ones. In the area of diagnosis, however,
the multiple explanations may all seem reasonable, however improbable. Choosing
among them is a matter of quantitative preference. Quantitative preference
works well in diagnosis when knowledge is modelled causally. Here we suggest a
framework that combines probabilities and defaults in a single unified
framework that retains the semantics of diagnosis as construction of
explanations from a fixed set of possible hypotheses. We can then compute
probabilities incrementally as we construct explanations. Here we describe a
branch and bound algorithm that maintains a set of all partial explanations
while exploring a most promising one first. A most probable explanation is
found first if explanations are partially ordered.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:04 GMT"
}
] | 1,365,638,400,000 | [
[
"Neufeld",
"Eric",
""
],
[
"Poole",
"David L",
""
]
] |
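The Neufeld-Poole entry above describes exploring the most promising partial explanation first. Below is a minimal best-first sketch of that idea, not the paper's algorithm in detail: `explains` and the hypothesis list are assumed inputs, and hypothesis probabilities are treated as independent so that a partial explanation's probability can only drop as it is extended.

```python
# Best-first search over partial explanations: a state's priority is the
# probability of the hypotheses assumed so far, which upper-bounds any
# completion, so the first explanation popped is a most probable one.
import heapq

def most_probable_explanation(hypotheses, explains):
    """hypotheses: list of (name, prob); explains(frozenset) -> bool."""
    frontier = [(-1.0, frozenset(), 0)]   # (negated prob, chosen, next index)
    while frontier:
        neg_p, chosen, i = heapq.heappop(frontier)
        if explains(chosen):
            return chosen, -neg_p
        if i < len(hypotheses):
            name, p = hypotheses[i]
            # Branch: include hypothesis i, or skip it and move on.
            heapq.heappush(frontier, (neg_p * p, chosen | {name}, i + 1))
            heapq.heappush(frontier, (neg_p, chosen, i + 1))
    return None, 0.0
```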
1304.2746 | Richard M. Tong | Richard M. Tong, Lee A. Appelbaum | Problem Structure and Evidential Reasoning | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-313-320 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In our previous series of studies to investigate the role of evidential
reasoning in the RUBRIC system for full-text document retrieval (Tong et al.,
1985; Tong and Shapiro, 1985; Tong and Appelbaum, 1987), we identified the
important role that problem structure plays in the overall performance of the
system. In this paper, we focus on these structural elements (which we now call
"semantic structure") and show how explicit consideration of their properties
reduces what previously were seen as difficult evidential reasoning problems to
more tractable questions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:09 GMT"
}
] | 1,365,638,400,000 | [
[
"Tong",
"Richard M.",
""
],
[
"Appelbaum",
"Lee A.",
""
]
] |
1304.2747 | Michael P. Wellman | Michael P. Wellman, David Heckerman | The Role of Calculi in Uncertain Inference Systems | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-321-331 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much of the controversy about methods for automated decision making has
focused on specific calculi for combining beliefs or propagating uncertainty.
We broaden the debate by (1) exploring the constellation of secondary tasks
surrounding any primary decision problem, and (2) identifying knowledge
engineering concerns that present additional representational tradeoffs. We
argue on pragmatic grounds that the attempt to support all of these tasks
within a single calculus is misguided. In the process, we note several
uncertain reasoning objectives that conflict with the Bayesian ideal of
complete specification of probabilities and utilities. In response, we advocate
treating the uncertainty calculus as an object language for reasoning
mechanisms that support the secondary tasks. Arguments against Bayesian
decision theory are weakened when the calculus is relegated to this role.
Architectures for uncertainty handling that take statements in the calculus as
objects to be reasoned about offer the prospect of retaining normative status
with respect to decision making while supporting the other tasks in uncertain
reasoning.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:15 GMT"
}
] | 1,365,638,400,000 | [
[
"Wellman",
"Michael P.",
""
],
[
"Heckerman",
"David",
""
]
] |
1304.2748 | Ben P. Wise | Ben P. Wise, Bruce M. Perrin, David S. Vaughan, Robert M. Yadrick | The Role of Tuning Uncertain Inference Systems | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-332-339 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study examined the effects of "tuning" the parameters of the incremental
function of MYCIN, the independent function of PROSPECTOR, a probability model
that assumes independence, and a simple additive linear equation. The parameters
of each of these models were optimized to provide solutions which most nearly
approximated those from a full probability model for a large set of simple
networks. Surprisingly, MYCIN, PROSPECTOR, and the linear equation performed
equivalently; the independence model was clearly more accurate on the networks
studied.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:20 GMT"
}
] | 1,365,638,400,000 | [
[
"Wise",
"Ben P.",
""
],
[
"Perrin",
"Bruce M.",
""
],
[
"Vaughan",
"David S.",
""
],
[
"Yadrick",
"Robert M.",
""
]
] |
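For reference, the MYCIN "incremental function" that the Wise et al. entry above reports tuning has a well-known textbook form. The sketch below shows the standard parallel combination of certainty factors; the sample inputs are made up, and this is an illustration rather than the study's tuned variant.

```python
# Standard MYCIN parallel combination of two certainty factors in [-1, 1].
def cf_combine(x, y):
    if x >= 0 and y >= 0:
        return x + y * (1 - x)                      # confirming evidence
    if x <= 0 and y <= 0:
        return x + y * (1 + x)                      # disconfirming evidence
    return (x + y) / (1 - min(abs(x), abs(y)))      # conflicting evidence

print(cf_combine(0.6, 0.4))    # 0.76
print(cf_combine(0.6, -0.4))   # 0.2 / 0.6 = 0.333...
```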
1304.2750 | Lashon B. Booker | Lashon B. Booker, Naveen Hota, Gavin Hemphill | Implementing a Bayesian Scheme for Revising Belief Commitments | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-348-354 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our previous work on classifying complex ship images [1,2] has evolved into
an effort to develop software tools for building and solving generic
classification problems. Managing the uncertainty associated with feature data
and other evidence is an important issue in this endeavor. Bayesian techniques
for managing uncertainty [7,12,13] have proven to be useful for managing
several of the belief maintenance requirements of classification problem
solving. One such requirement is the need to give qualitative explanations of
what is believed. Pearl [11] addresses this need by computing what he calls a
belief commitment - the most probable instantiation of all hypothesis variables
given the evidence available. Before belief commitments can be computed, the
straightforward implementation of Pearl's procedure involves finding an
analytical solution to some often difficult optimization problems. We describe
an efficient implementation of this procedure using tensor products that solves
these problems enumeratively and avoids the need for case by case analysis. The
procedure is thereby made more practical to use in the general case.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:30 GMT"
}
] | 1,365,638,400,000 | [
[
"Booker",
"Lashon B.",
""
],
[
"Hota",
"Naveen",
""
],
[
"Hemphill",
"Gavin",
""
]
] |
1304.2751 | John S. Breese | John S. Breese, Edison Tse | Integrating Logical and Probabilistic Reasoning for Decision Making | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-355-362 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a representation and a set of inference methods that combine
logic programming techniques with probabilistic network representations for
uncertainty (influence diagrams). The techniques emphasize the dynamic
construction and solution of probabilistic and decision-theoretic models for
complex and uncertain domains. Given a query, a logical proof is produced if
possible; if not, an influence diagram based on the query and the knowledge of
the decision domain is produced and subsequently solved. A uniform declarative,
first-order, knowledge representation is combined with a set of integrated
inference procedures for logical, probabilistic, and decision-theoretic
reasoning.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:35 GMT"
}
] | 1,365,638,400,000 | [
[
"Breese",
"John S.",
""
],
[
"Tse",
"Edison",
""
]
] |
1304.2752 | Stephen Chiu | Stephen Chiu, Masaki Togai | Compiling Fuzzy Logic Control Rules to Hardware Implementations | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-363-371 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major aspect of human reasoning involves the use of approximations.
Particularly in situations where the decision-making process is under stringent
time constraints, decisions are based largely on approximate, qualitative
assessments of the situations. Our work is concerned with the application of
approximate reasoning to real-time control. Because of the stringent processing
speed requirements in such applications, hardware implementations of fuzzy
logic inferencing are being pursued. We describe a programming environment for
translating fuzzy control rules into hardware realizations. Two methods of
hardware realizations are possible. The first is based on a special-purpose
chip for fuzzy inferencing. The second is based on a simple memory chip. The
ability to directly translate a set of decision rules into hardware
implementations is expected to make fuzzy control an increasingly practical
approach to the control of complex systems.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:40 GMT"
}
] | 1,365,638,400,000 | [
[
"Chiu",
"Stephen",
""
],
[
"Togai",
"Masaki",
""
]
] |
1304.2753 | Paul Cohen | Paul Cohen | Steps Towards Programs that Manage Uncertainty | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-372-379 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning under uncertainty in AI has come to mean assessing the credibility
of hypotheses inferred from evidence. But techniques for assessing credibility
do not tell a problem solver what to do when it is uncertain. This is the focus
of our current research. We have developed a medical expert system called MUM,
for Managing Uncertainty in Medicine, that plans diagnostic sequences of
questions, tests, and treatments. This paper describes the kinds of problems
that MUM was designed to solve and gives a brief description of its
architecture. More recently, we have built an empty version of MUM called MU,
and used it to reimplement MUM and a small diagnostic system for plant
pathology. The latter part of the paper describes the features of MU that make
it appropriate for building expert systems that manage uncertainty.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:44 GMT"
}
] | 1,365,638,400,000 | [
[
"Cohen",
"Paul",
""
]
] |
1304.2754 | Gregory F. Cooper | Gregory F. Cooper | An Algorithm for Computing Probabilistic Propositions | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-380-385 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A method for computing probabilistic propositions is presented. It assumes
the availability of a single external routine for computing the probability of
one instantiated variable, given a conjunction of other instantiated variables.
In particular, the method allows belief network algorithms to calculate general
probabilistic propositions over nodes in the network. Although in the worst
case the time complexity of the method is exponential in the size of a query,
it is polynomial in the size of a number of common types of queries.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:49 GMT"
}
] | 1,365,638,400,000 | [
[
"Cooper",
"Gregory F.",
""
]
] |
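The external-routine reduction in the Cooper entry above is essentially the chain rule. As a small hedged sketch (the callback `cond_prob` is an assumed interface, e.g. backed by a belief network solver), the probability of any conjunction of instantiated variables follows from repeated single-variable queries:

```python
# Chain rule: P(x1, ..., xk) = prod_i P(x_i | x_1, ..., x_{i-1}), computed
# with one call per variable to a single conditional-probability routine.
def joint_probability(assignments, cond_prob):
    """assignments: list of (variable, value) pairs.
    cond_prob(var, val, given) -> P(var = val | given), given a tuple."""
    p, given = 1.0, []
    for var, val in assignments:
        p *= cond_prob(var, val, tuple(given))
        given.append((var, val))
    return p
```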
1304.2755 | Bruce D'Ambrosio | Bruce D'Ambrosio | Combining Symbolic and Numeric Approaches to Uncertainty Management | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-386-393 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A complete approach to reasoning under uncertainty requires support for
incremental and interactive formulation and revision of, as well as reasoning
with, models of the problem domain capable of representing our uncertainty. We
present a hybrid reasoning scheme which combines symbolic and numeric methods
for uncertainty management to provide efficient and effective support for each
of these tasks. The hybrid is based on symbolic techniques adapted from
Assumption-based Truth Maintenance systems (ATMS), combined with numeric
methods adapted from the Dempster/Shafer theory of evidence, as extended in
Baldwin's Support Logic Programming system. The hybridization is achieved by
viewing an ATMS as a symbolic algebra system for uncertainty calculations. This
technique has several major advantages over conventional methods for performing
inference with numeric certainty estimates in addition to the ability to
dynamically determine hypothesis spaces, including improved management of
dependent and partially independent evidence, faster run-time evaluation of
propositional certainties, the ability to query the certainty value of a
proposition from multiple perspectives, and the ability to incrementally extend
or revise domain models.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:54 GMT"
}
] | 1,365,638,400,000 | [
[
"D'Ambrosio",
"Bruce",
""
]
] |
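The Dempster/Shafer combination that the D'Ambrosio entry above adapts has a compact standard form. The sketch below is the textbook rule, not the paper's ATMS-based hybrid; the mass functions over frozensets are hypothetical.

```python
# Textbook Dempster's rule of combination: masses map frozensets (focal
# elements of the frame of discernment) to mass in [0, 1]; mass that falls
# on the empty set is conflict and is renormalized away.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.4}
m2 = {frozenset({"cold"}): 0.5, frozenset({"flu", "cold"}): 0.5}
print(dempster_combine(m1, m2))
```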
1304.2756 | Christopher Elsaesser | Christopher Elsaesser | Explanation of Probabilistic Inference for Decision Support Systems | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-394-403 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An automated explanation facility for Bayesian conditioning aimed at
improving user acceptance of probability-based decision support systems has
been developed. The domain-independent facility is based on an information
processing perspective on reasoning about conditional evidence that accounts
both for biased and normative inferences. Experimental results indicate that
the facility is both acceptable to naive users and effective in improving
understanding.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:49:58 GMT"
}
] | 1,365,638,400,000 | [
[
"Elsaesser",
"Christopher",
""
]
] |
1304.2758 | Ross D. Shachter | Ross D. Shachter, Leonard Bertrand | Efficient Inference on Generalized Fault Diagrams | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-413-420 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generalized fault diagram, a data structure for failure analysis based on
the influence diagram, is defined. Unlike the fault tree, this structure allows
for dependence among the basic events and replicated logical elements. A
heuristic procedure is developed for efficient processing of these structures.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:50:09 GMT"
}
] | 1,365,638,400,000 | [
[
"Shachter",
"Ross D.",
""
],
[
"Bertrand",
"Leonard",
""
]
] |
1304.2759 | Eric J. Horvitz | Eric J. Horvitz | Reasoning About Beliefs and Actions Under Computational Resource
Constraints | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-429-447 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although many investigators affirm a desire to build reasoning systems that
behave consistently with the axiomatic basis defined by probability theory and
utility theory, limited resources for engineering and computation can make a
complete normative analysis impossible. We attempt to move discussion beyond
the debate over the scope of problems that can be handled effectively to cases
where it is clear that there are insufficient computational resources to
perform an analysis deemed as complete. Under these conditions, we stress the
importance of considering the expected costs and benefits of applying
alternative approximation procedures and heuristics for computation and
knowledge acquisition. We discuss how knowledge about the structure of user
utility can be used to control value tradeoffs for tailoring inference to
alternative contexts. We address the notion of real-time rationality, focusing
on the application of knowledge about the expected timewise-refinement
abilities of reasoning strategies to balance the benefits of additional
computation with the costs of acting with a partial result. We discuss the
benefits of applying decision theory to control the solution of difficult
problems given limitations and uncertainty in reasoning resources.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:50:20 GMT"
}
] | 1,365,638,400,000 | [
[
"Horvitz",
"Eric J.",
""
]
] |
1304.2760 | Thomas Slack | Thomas Slack | Advantages and a Limitation of Using LEG Nets in a Real-Time Problem | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-421-428 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After experimenting with a number of non-probabilistic methods for dealing
with uncertainty many researchers reaffirm a preference for probability methods
[1] [2], although this remains controversial. The importance of being able to
form decisions from incomplete data in diagnostic problems has highlighted
probabilistic methods [5] which compute posterior probabilities from prior
distributions in a way similar to Bayes Rule, and thus are called Bayesian
methods. This paper documents the use of a Bayesian method in a real time
problem which is similar to medical diagnosis in that there is a need to form
decisions and take some action without complete knowledge of conditions in the
problem domain. This particular method has a limitation which is discussed.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2013 20:44:01 GMT"
}
] | 1,365,638,400,000 | [
[
"Slack",
"Thomas",
""
]
] |
1304.2797 | Emad Saad | Emad Saad | Logical Fuzzy Preferences | arXiv admin note: substantial text overlap with arXiv:1304.2384 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We present a unified logical framework for representing and reasoning about
both quantitative and qualitative preferences in fuzzy answer set programming,
called fuzzy answer set optimization programs. The proposed framework is vital
to allow defining quantitative preferences over the possible outcomes of
qualitative preferences. We show the application of fuzzy answer set
optimization programs to the course scheduling with fuzzy preferences problem.
To the best of our knowledge, this development is the first to consider a
logical framework for reasoning about quantitative preferences, in general, and
reasoning about both quantitative and qualitative preferences in particular.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2013 22:10:22 GMT"
}
] | 1,365,638,400,000 | [
[
"Saad",
"Emad",
""
]
] |
1304.2799 | Emad Saad | Emad Saad | Nested Aggregates in Answer Sets: An Application to a Priori
Optimization | arXiv admin note: text overlap with arXiv:1304.2384 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We allow representing and reasoning in the presence of nested multiple
aggregates over multiple variables and nested multiple aggregates over
functions involving multiple variables in answer sets, precisely, in answer set
optimization programming and in answer set programming. We show the
applicability of the answer set optimization programming with nested multiple
aggregates and the answer set programming with nested multiple aggregates to
the Probabilistic Traveling Salesman Problem, a fundamental a priori
optimization problem in Operations Research.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2013 22:27:33 GMT"
}
] | 1,365,638,400,000 | [
[
"Saad",
"Emad",
""
]
] |
1304.3076 | Stephen W. Barth | Stephen W. Barth, Steven W. Norton | Knowledge Engineering Within A Generalized Bayesian Framework | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-7-16 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During the ongoing debate over the representation of uncertainty in
Artificial Intelligence, Cheeseman, Lemmer, Pearl, and others have argued that
probability theory, and in particular the Bayesian theory, should be used as
the basis for the inference mechanisms of Expert Systems dealing with
uncertainty. In order to pursue the issue in a practical setting, sophisticated
tools for knowledge engineering are needed that allow flexible and
understandable interaction with the underlying knowledge representation
schemes. This paper describes a Generalized Bayesian framework for building
expert systems which function in uncertain domains, using algorithms proposed
by Lemmer. It is neither rule-based nor frame-based, and requires a new system
of knowledge engineering tools. The framework we describe provides a
knowledge-based system architecture with an inference engine, explanation
capability, and a unique aid for building consistent knowledge bases.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:51:06 GMT"
}
] | 1,365,724,800,000 | [
[
"Barth",
"Stephen W.",
""
],
[
"Norton",
"Steven W.",
""
]
] |
1304.3077 | Moshe Ben-Bassat | Moshe Ben-Bassat | Taxonomy, Structure, and Implementation of Evidential Reasoning | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-17-28 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fundamental elements of evidential reasoning problems are described,
followed by a discussion of the structure of various types of problems.
Bayesian inference networks and state space formalism are used as the tool for
problem representation.
A human-oriented decision making cycle for solving evidential reasoning
problems is described and illustrated for a military situation assessment
problem. The implementation of this cycle may serve as the basis for an expert
system shell for evidential reasoning; i.e. a situation assessment processor.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:51:12 GMT"
}
] | 1,365,724,800,000 | [
[
"Ben-Bassat",
"Moshe",
""
]
] |
1304.3078 | Lashon B. Booker | Lashon B. Booker, Naveen Hota | Probabilistic Reasoning About Ship Images | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-29-36 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most important aspects of current expert systems technology is the
ability to make causal inferences about the impact of new evidence. When the
domain knowledge and problem knowledge are uncertain and incomplete Bayesian
reasoning has proven to be an effective way of forming such inferences [3,4,8].
While several reasoning schemes have been developed based on Bayes Rule, there
has been very little work examining the comparative effectiveness of these
schemes in a real application. This paper describes a knowledge based system
for ship classification [1], originally developed using the PROSPECTOR updating
method [2], that has been reimplemented to use the inference procedure
developed by Pearl and Kim [4,5]. We discuss our reasons for making this
change, the implementation of the new inference engine, and the comparative
performance of the two versions of the system.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:51:17 GMT"
}
] | 1,365,724,800,000 | [
[
"Booker",
"Lashon B.",
""
],
[
"Hota",
"Naveen",
""
]
] |
1304.3079 | Kaihu Chen | Kaihu Chen | Towards The Inductive Acquisition of Temporal Knowledge | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-37-42 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to predict the future in a given domain can be acquired by
discovering empirically from experience certain temporal patterns that tend to
repeat unerringly. Previous works in time series analysis allow one to make
quantitative predictions on the likely values of certain linear variables.
Since certain types of knowledge are better expressed in symbolic forms, making
qualitative predictions based on symbolic representations require a different
approach. A domain independent methodology called TIM (Time based Inductive
Machine) for discovering potentially uncertain temporal patterns from real time
observations using the technique of inductive inference is described here.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:51:22 GMT"
}
] | 1,365,724,800,000 | [
[
"Chen",
"Kaihu",
""
]
] |
1304.3080 | Su-shing Chen | Su-shing Chen | Some Extensions of Probabilistic Logic | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-43-48 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In [12], Nilsson proposed the probabilistic logic in which the truth values
of logical propositions are probability values between 0 and 1. It is
applicable to any logical system for which the consistency of a finite set of
propositions can be established. The probabilistic inference scheme reduces to
the ordinary logical inference when the probabilities of all propositions are
either 0 or 1. This logic has the same limitations as other probabilistic
reasoning systems of the Bayesian approach. For common sense reasoning,
consistency is not a very natural assumption. We have some well known examples:
{Dick is a Quaker, Quakers are pacifists, Republicans are not pacifists, Dick
is a Republican} and {Tweety is a bird, birds can fly, Tweety is a penguin}. In
this paper, we shall propose some extensions of the probabilistic logic. In the
second section, we shall consider the space of all interpretations, consistent
or not. In terms of frames of discernment, the basic probability assignment
(bpa) and belief function can be defined. Dempster's combination rule is
applicable. This extension of probabilistic logic is called the evidential
logic in [1]. For each proposition s, its belief function is represented by an
interval [Spt(s), Pls(s)]. When all such intervals collapse to single points,
the evidential logic reduces to probabilistic logic (in the generalized version
of not necessarily consistent interpretations). Certainly, we get Nilsson's
probabilistic logic by further restricting to consistent interpretations. In
the third section, we shall give a probabilistic interpretation of
probabilistic logic in terms of multi-dimensional random variables. This
interpretation brings the probabilistic logic into the framework of probability
theory. Let us consider a finite set S = {s1, s2, ..., sn} of logical
propositions. Each proposition may take true or false values, and may be
considered as a random variable. We have a probability distribution for each
proposition. The n-dimensional random variable (s1, ..., sn) may take values in
the space of all interpretations of 2^n binary vectors. We may compute absolute
(marginal), conditional and joint probability distributions. It turns out that
the permissible probabilistic interpretation vector of Nilsson [12] consists of
the joint probabilities of S. Inconsistent interpretations will not appear, by
setting their joint probabilities to be zeros. By summing appropriate joint
probabilities, we get probabilities of individual propositions or subsets of
propositions. Since the Bayes formula and other techniques are valid for
e-dimensional random variables, the probabilistic logic is actually very close
to the Bayesian inference schemes. In the last section, we shall consider a
relaxation scheme for probabilistic logic. In this system, not only new
evidences will update the belief measures of a collection of propositions, but
also constraint satisfaction among these propositions in the relational network
will revise these measures. This mechanism is similar to human reasoning which
is an evaluative process converging to the most satisfactory result. The main
idea arises from the consistent labeling problem in computer vision. This
method is originally applied to scene analysis of line drawings. Later, it is
applied to matching, constraint satisfaction, and multi-sensor fusion by several
authors [8], [16] (and see references cited there). Recently, this method is
used in knowledge aggregation by Landy and Hummel [9].
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:51:27 GMT"
}
] | 1,365,724,800,000 | [
[
"Chen",
"Su-shing",
""
]
] |
1304.3081 | Ping-Chung Chi | Ping-Chung Chi, Dana Nau | Predicting The Performance of Minimax and Product in Game-Tree | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-49-56 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discovery that the minimax decision rule performs poorly in some games
has sparked interest in possible alternatives to minimax. Until recently, the
only games in which minimax was known to perform poorly were games which were
mainly of theoretical interest. However, this paper reports results showing
poor performance of minimax in a more common game called kalah. For the kalah
games tested, a non-minimax decision rule called the product rule performs
significantly better than minimax.
This paper also discusses a possible way to predict whether or not minimax
will perform well in a game when compared to product. A parameter called the
rate of heuristic flaw (rhf) has been found to correlate positively with the
performance of product against minimax. Both analytical and experimental
results are given that appear to support the predictive power of rhf.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:51:32 GMT"
}
] | 1,365,724,800,000 | [
[
"Chi",
"Ping-Chung",
""
],
[
"Nau",
"Dana",
""
]
] |
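For context on the Chi-Nau entry above, the two back-up rules being compared have short recursive definitions. The sketch below assumes game trees given as nested lists whose leaves are heuristic win probabilities in [0, 1]; it is illustrative, not the paper's experimental code.

```python
# Minimax backs up extreme values; the product rule treats leaf values as
# independent win probabilities: at the player's move the position is won
# if any move wins, at the opponent's move only if all moves win.
from math import prod

def minimax(node, my_move):
    if isinstance(node, (int, float)):
        return node
    vals = [minimax(c, not my_move) for c in node]
    return max(vals) if my_move else min(vals)

def product_rule(node, my_move):
    if isinstance(node, (int, float)):
        return node
    vals = [product_rule(c, not my_move) for c in node]
    return 1 - prod(1 - v for v in vals) if my_move else prod(vals)

tree = [[0.9, 0.2], [0.6, 0.5]]
print(minimax(tree, True), product_rule(tree, True))
```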
1304.3082 | A. Julian Craddock | A. Julian Craddock, Roger A. Browse | Reasoning With Uncertain Knowledge | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-57-62 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A model of knowledge representation is described in which propositional facts
and the relationships among them can be supported by other facts. The set of
knowledge which can be supported is called the set of cognitive units, each
having associated descriptions of their explicit and implicit support
structures, summarizing belief and reliability of belief. This summary is
precise enough to be useful in a computational model while remaining
descriptive of the underlying symbolic support structure. When a fact supports
another supportive relationship between facts we call this meta-support. This
facilitates reasoning about both the propositional knowledge and the support
structures underlying it.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:51:38 GMT"
}
] | 1,365,724,800,000 | [
[
"Craddock",
"A. Julian",
""
],
[
"Browse",
"Roger A.",
""
]
] |
1304.3083 | Norman C. Dalkey | Norman C. Dalkey | Models vs. Inductive Inference for Dealing With Probabilistic Knowledge | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-63-70 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two different approaches to dealing with probabilistic knowledge are examined
-- models and inductive inference. Examples of the first are: influence diagrams
[1], Bayesian networks [2], log-linear models [3, 4]. Examples of the second
are: games-against-nature [5, 6], varieties of maximum-entropy methods [7, 8,
9], and the author's min-score induction [10]. In the modeling approach, the
basic issue is manageability, with respect to data elicitation and computation.
Thus, it is assumed that the pertinent set of users in some sense knows the
relevant probabilities, and the problem is to format that knowledge in a way
that is convenient to input and store and that allows computation of the
answers to current questions in an expeditious fashion. The basic issue for the
inductive approach appears at first sight to be very different. In this
approach it is presumed that the relevant probabilities are only partially
known, and the problem is to extend that incomplete information in a reasonable
way to answer current questions. Clearly, this approach requires that some form
of induction be invoked. Of course, manageability is an important additional
concern. Despite their seeming differences, the two approaches have a fair
amount in common, especially with respect to the structural framework they
employ. Roughly speaking, this framework involves identifying clusters of
variables which strongly interact, establishing marginal probability
distributions on the clusters, and extending the subdistributions to a more
complete distribution, usually via a product formalism. The product extension
is justified in the modeling approach in terms of assumed conditional
independence; in the inductive approach the product form arises from an
inductive rule.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:51:44 GMT"
}
] | 1,365,724,800,000 | [
[
"Dalkey",
"Norman C.",
""
]
] |
1304.3084 | Brian Falkenhainer | Brian Falkenhainer | Towards a General-Purpose Belief Maintenance System | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-71-76 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There currently exists a gap between the theories proposed by the probability
and uncertainty communities and the needs of Artificial Intelligence research. These
theories primarily address the needs of expert systems, using knowledge
structures which must be pre-compiled and remain static in structure during
runtime. Many AI systems require the ability to dynamically add and remove
parts of the current knowledge structure (e.g., in order to examine what the
world would be like for different causal theories). This requires more
flexibility than existing uncertainty systems display. In addition, many AI
researchers are only interested in using "probabilities" as a means of
obtaining an ordering, rather than attempting to derive an accurate
probabilistic account of a situation. This indicates the need for systems which
stress ease of use and don't require extensive probability information when one
cannot (or doesn't wish to) provide such information. This paper attempts to
help reconcile the gap between approaches to uncertainty and the needs of many
AI systems by examining the control issues which arise, independent of a
particular uncertainty calculus, when one tries to satisfy these needs. Truth
Maintenance Systems have been used extensively in problem solving tasks to help
organize a set of facts and detect inconsistencies in the believed state of the
world. These systems maintain a set of true/false propositions and their
associated dependencies. However, situations often arise in which we are unsure
of certain facts or in which the conclusions we can draw from available
information are somewhat uncertain. The non-monotonic TMS [2] was an attempt at
reasoning when all the facts are not known, but it fails to take into account
degrees of belief and how available evidence can combine to strengthen a
particular belief. This paper addresses the problem of probabilistic reasoning
as it applies to Truth Maintenance Systems. It describes a Belief Maintenance
System that manages a current set of beliefs in much the same way that a TMS
manages a set of true/false propositions. If the system knows that belief in
fact1 is dependent in some way upon belief in fact2, then it automatically
modifies its belief in fact1 when new information causes a change in belief of
fact2. It models the behavior of a TMS, replacing its 3-valued logic (true,
false, unknown) with an infinite valued logic, in such a way as to reduce to a
standard TMS if all statements are given in absolute true/false terms. Belief
Maintenance Systems can, therefore, be thought of as a generalization of Truth
Maintenance Systems, whose possible reasoning tasks are a superset of those for
a TMS.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:51:49 GMT"
}
] | 1,365,724,800,000 | [
[
"Falkenhainer",
"Brian",
""
]
] |
1304.3085 | B. R. Fox | B. R. Fox, Karl G. Kempf | Planning, Scheduling, and Uncertainty in the Sequence of Future Events | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-77-84 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scheduling in the factory setting is compounded by computational complexity
and temporal uncertainty. Together, these two factors guarantee that the
process of constructing an optimal schedule will be costly and the chances of
executing that schedule will be slight. Temporal uncertainty in the task
execution time can be offset by several methods: eliminate uncertainty by
careful engineering, restore certainty whenever it is lost, reduce the
uncertainty by using more accurate sensors, and quantify and circumscribe the
remaining uncertainty. Unfortunately, these methods focus exclusively on the
sources of uncertainty and fail to apply knowledge of the tasks which are to be
scheduled. A complete solution must adapt the schedule of activities to be
performed according to the evolving state of the production world. The example
of vision-directed assembly is presented to illustrate that the principle of
least commitment, in the creation of a plan, in the representation of a
schedule, and in the execution of a schedule, enables a robot to operate
intelligently and efficiently, even in the presence of considerable uncertainty
in the sequence of future events.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:51:55 GMT"
}
] | 1,365,724,800,000 | [
[
"Fox",
"B. R.",
""
],
[
"Kempf",
"Karl G.",
""
]
] |
1304.3086 | Pascal Fua | Pascal Fua | Deriving And Combining Continuous Possibility Functions in the Framework
of Evidential Reasoning | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-85-90 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To develop an approach to utilizing continuous statistical information within
the Dempster-Shafer framework, we combine methods proposed by Strat and by
Shafer. We first derive continuous possibility and mass functions from
probability-density functions. Then we propose a rule for combining such
evidence that is simpler and more efficiently computed than Dempster's rule. We
discuss the relationship between Dempster's rule and our proposed rule for
combining evidence over continuous frames.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:52:00 GMT"
}
] | 1,365,724,800,000 | [
[
"Fua",
"Pascal",
""
]
] |
1304.3087 | Benjamin N. Grosof | Benjamin N. Grosof | Non-Monotonicity in Probabilistic Reasoning | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-91-98 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We start by defining an approach to non-monotonic probabilistic reasoning in
terms of non-monotonic categorical (true-false) reasoning. We identify a type
of non-monotonic probabilistic reasoning, akin to default inheritance, that is
commonly found in practice, especially in "evidential" and "Bayesian"
reasoning. We formulate this in terms of the Maximization of Conditional
Independence (MCI), and identify a variety of applications for this sort of
default. We propose a formalization using Pointwise Circumscription. We compare
MCI to Maximum Entropy, another kind of non-monotonic principle, and conclude
by raising a number of open questions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:52:05 GMT"
}
] | 1,365,724,800,000 | [
[
"Grosof",
"Benjamin N.",
""
]
] |
1304.3089 | Shohara L. Hardt | Shohara L. Hardt | Flexible Interpretations: A Computational Model for Dynamic Uncertainty
Assessment | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-109-114 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The investigations reported in this paper center on the process of dynamic
uncertainty assessment during interpretation tasks in real domains. In
particular, we are interested here in the nature of the control structure of
computer programs that can support multiple interpretations and smooth
transitions between them, in real time. Each step of the processing involves
the interpretation of one input item and the appropriate re-establishment of
the system's confidence in the correctness of its interpretation(s).
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:52:17 GMT"
}
] | 1,365,724,800,000 | [
[
"Hardt",
"Shohara L.",
""
]
] |
1304.3090 | David Heckerman | David Heckerman, Eric J. Horvitz | The Myth of Modularity in Rule-Based Systems | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-115-122 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we examine the concept of modularity, an often cited advantage
of the rule-based representation methodology. We argue that the notion of
modularity consists of two distinct concepts which we call syntactic modularity
and semantic modularity. We argue that when reasoning under certainty, it is
reasonable to regard the rule-based approach as both syntactically and
semantically modular. However, we argue that in the case of plausible
reasoning, rules are syntactically modular but are rarely semantically modular.
To illustrate this point, we examine a particular approach for managing
uncertainty in rule-based systems called the MYCIN certainty factor model. We
formally define the concept of semantic modularity with respect to the
certainty factor model and discuss logical consequences of the definition. We
show that the assumption of semantic modularity imposes strong restrictions on
rules in a knowledge base. We argue that such restrictions are rarely valid in
practical applications. Finally, we suggest how the concept of semantic
modularity can be relaxed in a manner that makes it appropriate for plausible
reasoning.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:52:23 GMT"
}
] | 1,365,724,800,000 | [
[
"Heckerman",
"David",
""
],
[
"Horvitz",
"Eric J.",
""
]
] |
1304.3091 | David Heckerman | David Heckerman | An Axiomatic Framework for Belief Updates | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-123-128 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the 1940's, a physicist named Cox provided the first formal justification
for the axioms of probability based on the subjective or Bayesian
interpretation. He showed that if a measure of belief satisfies several
fundamental properties, then the measure must be some monotonic transformation
of a probability. In this paper, measures of change in belief or belief updates
are examined. In the spirit of Cox, properties for a measure of change in
belief are enumerated. It is shown that if a measure satisfies these
properties, it must satisfy other restrictive conditions. For example, it is
shown that belief updates in a probabilistic context must be equal to some
monotonic transformation of a likelihood ratio. It is hoped that this formal
explication of the belief update paradigm will facilitate critical discussion
and useful extensions of the approach.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:52:28 GMT"
}
] | 1,365,724,800,000 | [
[
"Heckerman",
"David",
""
]
] |
1304.3093 | Robert Hummel | Robert Hummel, Michael Landy | Evidence as Opinions of Experts | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-135-144 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a viewpoint on the Dempster/Shafer 'Theory of Evidence', and
provide an interpretation which regards the combination formulas as statistics
of the opinions of "experts". This is done by introducing spaces with binary
operations that are simpler to interpret or simpler to implement than the
standard combination formula, and showing that these spaces can be mapped
homomorphically onto the Dempster/Shafer theory of evidence space. The experts
in the space of "opinions of experts" combine information in a Bayesian
fashion. We present alternative spaces for the combination of evidence
suggested by this viewpoint.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:52:40 GMT"
}
] | 1,365,724,800,000 | [
[
"Hummel",
"Robert",
""
],
[
"Landy",
"Michael",
""
]
] |
1304.3094 | Charles I. Kalme | Charles I. Kalme | Decision Under Uncertainty in Diagnosis | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-145-150 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes the incorporation of uncertainty in diagnostic reasoning
based on the set covering model of Reggia et al., extended to what in the
Artificial Intelligence dichotomy between deep and compiled (shallow, surface)
knowledge-based diagnosis may be viewed as the generic form at the compiled end
of the spectrum. A major undercurrent in this is advocating the need for a
strong underlying model and an integrated set of support tools for carrying
such a model in order to deal with uncertainty.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:52:45 GMT"
}
] | 1,365,724,800,000 | [
[
"Kalme",
"Charles I.",
""
]
] |
1304.3095 | Henry E. Kyburg Jr. | Henry E. Kyburg Jr | Knowledge and Uncertainty | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-151-158 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One purpose -- quite a few thinkers would say the main purpose -- of seeking
knowledge about the world is to enhance our ability to make good decisions. An
item of knowledge that can make no conceivable difference with regard to
anything we might do would strike many as frivolous. Whether or not we want to
be philosophical pragmatists in this strong sense with regard to everything we
might want to enquire about, it seems a perfectly appropriate attitude to adopt
toward artificial knowledge systems. If it is granted that we are ultimately
concerned with decisions, then some constraints are imposed on our measures of
uncertainty at the level of decision making. If our measure of uncertainty is
real-valued, then it isn't hard to show that it must satisfy the classical
probability axioms. For example, if an act has a real-valued utility U(E) if
the event E obtains, and the same real-valued utility if the denial of E
obtains, so that U(E) = U(~E), then the expected utility of that act must be
U(E), and that must be the same as the uncertainty-weighted average of the
returns of the act, p*U(E) + q*U(~E), where p and q represent the uncertainty
of E and ~E respectively. But then we must have p + q = 1.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:52:50 GMT"
}
] | 1,365,724,800,000 | [
[
"Kyburg",
"Henry E.",
"Jr"
]
] |
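The closing argument of the Kyburg abstract above, written out in LaTeX as a brief reconstruction (the constant u naming the common utility is notation introduced here):

% If U(E) = U(~E) = u, the act's expected utility is its
% uncertainty-weighted average return:
\[
  EU \;=\; p\,U(E) + q\,U(\neg E) \;=\; (p + q)\,u .
\]
% The abstract requires EU = U(E) = u, and for this to hold for
% arbitrary nonzero u we must have
\[
  p + q \;=\; 1 .
\]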
1304.3096 | Kathryn Blackmond Laskey | Kathryn Blackmond Laskey, Marvin S. Cohen | An Application of Non-Monotonic Probabilistic Reasoning to Air Force
Threat Correlation | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-159-166 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current approaches to expert systems' reasoning under uncertainty fail to
capture the iterative revision process characteristic of intelligent human
reasoning. This paper reports on a system, called the Non-monotonic
Probabilist, or NMP (Cohen, et al., 1985). When its inferences result in
substantial conflict, NMP examines and revises the assumptions underlying the
inferences until conflict is reduced to acceptable levels. NMP has been
implemented in a demonstration computer-based system, described below, which
supports threat correlation and in-flight route replanning by Air Force pilots.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:52:56 GMT"
}
] | 1,365,724,800,000 | [
[
"Laskey",
"Kathryn Blackmond",
""
],
[
"Cohen",
"Marvin S.",
""
]
] |
1304.3097 | Tod S. Levitt | Tod S. Levitt | Bayesian Inference for Radar Imagery Based Surveillance | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-167-174 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are interested in creating an automated or semi-automated system with the
capability of taking a set of radar imagery, collection parameters and a priori
map and other tactical data, and producing likely interpretations of the
possible military situations given the available evidence. This paper is
concerned with the problem of the interpretation and computation of certainty
or belief in the conclusions reached by such a system.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:53:02 GMT"
}
] | 1,365,724,800,000 | [
[
"Levitt",
"Tod S.",
""
]
] |
1304.3099 | Ronald P. Loui | Ronald P. Loui | Computing Reference Classes | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-183-188 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For any system with limited statistical knowledge, the combination of
evidence and the interpretation of sampling information require the
determination of the right reference class (or of an adequate one). The present
note (1) discusses the use of reference classes in evidential reasoning, and
(2) discusses implementations of Kyburg's rules for reference classes. This
paper contributes the first frank discussion of how much of Kyburg's system is
needed to be powerful, how much can be computed effectively, and how much is
philosophical fat.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:53:13 GMT"
}
] | 1,365,724,800,000 | [
[
"Loui",
"Ronald P.",
""
]
] |
1304.3100 | Uttam Mukhopadhyay | Uttam Mukhopadhyay | An Uncertainty Management Calculus for Ordering Searches in Distributed
Dynamic Databases | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-189-192 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MINDS is a distributed system of cooperating query engines that customize
document retrieval for each user in a dynamic environment. It improves its
performance and adapts to changing patterns of document distribution by
observing system-user interactions and modifying the appropriate certainty
factors, which act as search control parameters. It is argued here that the
uncertainty management calculus must account for temporal precedence,
reliability of evidence, degree of support for a proposition, and saturation
effects. The calculus presented here possesses these features. Some results
obtained with this scheme are discussed.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:53:17 GMT"
}
] | 1,365,724,800,000 | [
[
"Mukhopadhyay",
"Uttam",
""
]
] |
1304.3101 | Steven W. Norton | Steven W. Norton | An Explanation Mechanism for Bayesian Inferencing Systems | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-193-200 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explanation facilities are a particularly important feature of expert system
frameworks. It is an area in which traditional rule-based expert system
frameworks have had mixed results. While explanations about control are well
handled, facilities are needed for generating better explanations concerning
knowledge base content. This paper approaches the explanation problem by
examining the effect an event has on a variable of interest within a symmetric
Bayesian inferencing system. We argue that any effect measure operating in this
context must satisfy certain properties. Such a measure is proposed. It forms
the basis for an explanation facility which allows the user of the Generalized
Bayesian Inferencing System to question the meaning of the knowledge base. That
facility is described in detail.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:53:23 GMT"
}
] | 1,365,724,800,000 | [
[
"Norton",
"Steven W.",
""
]
] |
1304.3102 | Judea Pearl | Judea Pearl | Distributed Revision of Belief Commitment in Multi-Hypothesis
Interpretations | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-201-210 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper extends the applications of belief-networks to include the
revision of belief commitments, i.e., the categorical acceptance of a subset of
hypotheses which, together, constitute the most satisfactory explanation of the
evidence at hand. A coherent model of non-monotonic reasoning is established
and distributed algorithms for belief revision are presented. We show that, in
singly connected networks, the most satisfactory explanation can be found in
linear time by a message-passing algorithm similar to the one used in belief
updating. In multiply-connected networks, the problem may be exponentially hard
but, if the network is sparse, topological considerations can be used to render
the interpretation task tractable. In general, finding the most probable
combination of hypotheses is no more complex than computing the degree of
belief for any individual hypothesis. Applications to medical diagnosis are
illustrated.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:53:29 GMT"
}
] | 1,365,724,800,000 | [
[
"Pearl",
"Judea",
""
]
] |
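The Pearl abstract above reports a linear-time message-passing algorithm for finding the most satisfactory composite explanation in singly connected networks. The full distributed algorithm is beyond a short sketch, but on a chain-structured network the same computation reduces to max-product dynamic programming; the following Python sketch (function name and interfaces are assumptions introduced here) illustrates the linear-time idea:

import numpy as np

def mpe_chain(prior, trans, like):
    # prior : (k,) prior over the first variable
    # trans : (T-1, k, k) tables P(x[t+1] | x[t])
    # like  : (T, k) evidence likelihoods P(e[t] | x[t])
    T, k = like.shape
    delta = prior * like[0]             # best score ending in each state
    back = np.zeros((T, k), dtype=int)  # best-predecessor pointers
    for t in range(1, T):
        scores = delta[:, None] * trans[t - 1] * like[t][None, :]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0)
    states = [int(delta.argmax())]      # recover the joint maximizer
    for t in range(T - 1, 0, -1):
        states.append(int(back[t][states[-1]]))
    return states[::-1]

As the abstract notes, this costs no more than computing the degree of belief for any single hypothesis.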
1304.3103 | Igor Roizer | Igor Roizer, Judea Pearl | Learning Link-Probabilities in Causal Trees | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-211-214 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A learning algorithm is presented which, given the structure of a causal tree,
will estimate its link probabilities by sequential measurements on the leaves
only. Internal nodes of the tree represent conceptual (hidden) variables
inaccessible to observation. The method described is incremental, local,
efficient, and remains robust to measurement imprecisions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:53:34 GMT"
}
] | 1,365,724,800,000 | [
[
"Roizer",
"Igor",
""
],
[
"Pearl",
"Judea",
""
]
] |
1304.3104 | Enrique H. Ruspini | Enrique H. Ruspini | Approximate Deduction in Single Evidential Bodies | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-215-222 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Results on approximate deduction in the context of the calculus of evidence
of Dempster-Shafer and the theory of interval probabilities are reported.
Approximate conditional knowledge about the truth of conditional propositions
was assumed available and expressed as sets of possible values (actually
numeric intervals) of conditional probabilities. Under different
interpretations of this conditional knowledge, several formulas were produced
to integrate unconditioned estimates (assumed given as sets of possible values
of unconditioned probabilities) with conditional estimates. These formulas are
discussed together with the computational characteristics of the methods
derived from them. Of particular importance is one such evidence integration
formulation, produced under a belief oriented interpretation, which
incorporates both modus ponens and modus tollens inferential mechanisms, allows
integration of conditioned and unconditioned knowledge without resorting to
iterative or sequential approximations, and produces elementary mass
distributions as outputs using similar distributions as inputs.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:53:39 GMT"
}
] | 1,365,724,800,000 | [
[
"Ruspini",
"Enrique H.",
""
]
] |
1304.3105 | Shimon Schocken | Shimon Schocken | The Rational and Computational Scope of Probabilistic Rule-Based Expert
Systems | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-223-228 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Belief updating schemes in artificial intelligence may be viewed as three
dimensional languages, consisting of a syntax (e.g. probabilities or certainty
factors), a calculus (e.g. Bayesian or CF combination rules), and a semantics
(i.e. cognitive interpretations of competing formalisms). This paper studies
the rational scope of those languages on the syntax and calculus grounds. In
particular, the paper presents an endomorphism theorem which highlights the
limitations imposed by the conditional independence assumptions implicit in the
CF calculus. Implications of the theorem to the relationship between the CF and
the Bayesian languages and the Dempster-Shafer theory of evidence are
presented. The paper concludes with a discussion of some implications on
rule-based knowledge engineering in uncertain domains.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:53:46 GMT"
}
] | 1,365,724,800,000 | [
[
"Schocken",
"Shimon",
""
]
] |
1304.3106 | Stanley M. Schwartz | Stanley M. Schwartz, Jonathan Baron, John R. Clarke | A Causal Bayesian Model for the Diagnosis of Appendicitis | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-229-236 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The causal Bayesian approach is based on the assumption that effects (e.g.,
symptoms) that are not conditionally independent with respect to some causal
agent (e.g., a disease) are conditionally independent with respect to some
intermediate state caused by the agent (e.g., a pathological condition). This
paper describes the development of a causal Bayesian model for the diagnosis of
appendicitis. The paper begins with a description of the standard Bayesian
approach to reasoning about uncertainty and the major critiques it faces. The
paper then lays the theoretical groundwork for the causal extension of the
Bayesian approach, and details specific improvements we have developed. The
paper then goes on to describe our knowledge engineering and implementation and
the results of a test of the system. The paper concludes with a discussion of
how the causal Bayesian approach deals with the criticisms of the standard
Bayesian model and why it is superior to alternative approaches to reasoning
about uncertainty popular in the AI community.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:53:52 GMT"
}
] | 1,365,724,800,000 | [
[
"Schwartz",
"Stanley M.",
""
],
[
"Baron",
"Jonathan",
""
],
[
"Clarke",
"John R.",
""
]
] |
1304.3107 | Ross D. Shachter | Ross D. Shachter, David Heckerman | A Backwards View for Assessment | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-237-242 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much artificial intelligence research focuses on the problem of deducing the
validity of unobservable propositions or hypotheses from observable evidence.
Many of the knowledge representation techniques designed for this problem
encode the relationship between evidence and hypothesis in a directed manner.
Moreover, the direction in which evidence is stored is typically from evidence
to hypothesis.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:53:57 GMT"
}
] | 1,365,724,800,000 | [
[
"Shachter",
"Ross D.",
""
],
[
"Heckerman",
"David",
""
]
] |
1304.3108 | Ross D. Shachter | Ross D. Shachter | DAVID: Influence Diagram Processing System for the Macintosh | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-243-248 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Influence diagrams are a directed graph representation for uncertainties as
probabilities. The graph distinguishes between those variables which are under
the control of a decision maker (decisions, shown as rectangles) and those
which are not (chances, shown as ovals), as well as explicitly denoting a goal
for solution (value, shown as a rounded rectangle).
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:54:03 GMT"
}
] | 1,365,724,800,000 | [
[
"Shachter",
"Ross D.",
""
]
] |
1304.3109 | Prakash P. Shenoy | Prakash P. Shenoy, Glenn Shafer, Khaled Mellouli | Propagation of Belief Functions: A Distributed Approach | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-249-260 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we describe a scheme for propagating belief functions in
certain kinds of trees using only local computations. This scheme generalizes
the computational scheme proposed by Shafer and Logan for diagnostic trees of
the type studied by Gordon and Shortliffe, and the slightly more general scheme
given by Shafer for hierarchical evidence. It also generalizes the scheme
proposed by Pearl for Bayesian causal trees (see Shenoy and Shafer). Pearl's
causal trees and Gordon and Shortliffe's diagnostic trees are both ways of
breaking the evidence that bears on a large problem down into smaller items of
evidence that bear on smaller parts of the problem so that these smaller
problems can be dealt with one at a time. This localization of effort is often
essential in order to make the process of probability judgment feasible, both
for the person who is making probability judgments and for the machine that is
combining them. The basic structure for our scheme is a type of tree that
generalizes both Pearl's and Gordon and Shortliffe's trees. Trees of this
general type permit localized computation in Pearl's sense. They are based on
qualitative judgments of conditional independence. We believe that the scheme
we describe here will prove useful in expert systems. It is now clear that the
successful propagation of probabilities or certainty factors in expert systems
requires much more structure than can be provided in a pure production-system
framework. Bayesian schemes, on the other hand, often make unrealistic demands
for structure. The propagation of belief functions in trees and more general
networks stands on a middle ground where some sensible and useful things can be
done. We would like to emphasize that the basic idea of local computation for
propagating probabilities is due to Judea Pearl. It is a very innovative idea;
we do not believe that it can be found in the Bayesian literature prior to
Pearl's work. We see our contribution as extending the usefulness of Pearl's
idea by generalizing it from Bayesian probabilities to belief functions. In the
next section, we give a brief introduction to belief functions. The notions of
qualitative independence for partitions and a qualitative Markov tree are
introduced in Section III. Finally, in Section IV, we describe a scheme for
propagating belief functions in qualitative Markov trees.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:54:09 GMT"
}
] | 1,365,724,800,000 | [
[
"Shenoy",
"Prakash P.",
""
],
[
"Shafer",
"Glenn",
""
],
[
"Mellouli",
"Khaled",
""
]
] |
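The Shenoy-Shafer-Mellouli abstract above propagates belief functions by local computation; the primitive operation being localized is Dempster's rule of combination. A self-contained Python sketch of that rule over a small frame (the dict-of-frozensets encoding is an assumption made here for readability):

def dempster_combine(m1, m2):
    # m1, m2 : mass functions as {frozenset_of_hypotheses: mass} dicts
    raw, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b
            if c:
                raw[c] = raw.get(c, 0.0) + wa * wb
            else:
                conflict += wa * wb   # mass that fell on the empty set
    if conflict >= 1.0:
        return None                   # totally conflicting evidence
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

# e.g. dempster_combine({frozenset({'flu'}): 0.6, frozenset({'flu', 'cold'}): 0.4},
#                       {frozenset({'cold'}): 0.5, frozenset({'flu', 'cold'}): 0.5})

The propagation scheme in the paper avoids applying this rule globally by exploiting qualitative Markov-tree structure.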
1304.3110 | David Sher | David Sher | Appropriate and Inappropriate Estimation Techniques | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-261-266 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mode (also called MAP) estimation, mean estimation and median estimation are
examined here to determine when they can be safely used to derive (posterior)
cost minimizing estimates. (These are all Bayes procedures, using the mode,
mean, or median of the posterior distribution). It is found that modal
estimation only returns cost minimizing estimates when the cost function is
0-1. If the cost function is a function of distance then mean estimation only
returns cost minimizing estimates when the cost function is squared distance
from the true value, and median estimation only returns cost minimizing
estimates when the cost function is the distance from the true value. Results
are presented on the goodness of modal estimation with non-0-1 cost functions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:54:15 GMT"
}
] | 1,365,724,800,000 | [
[
"Sher",
"David",
""
]
] |
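A small Monte Carlo illustration of the Sher abstract above, checking which posterior summary minimizes which expected cost (the gamma "posterior" is an arbitrary skewed example chosen here, not from the paper):

import numpy as np

rng = np.random.default_rng(0)
post = rng.gamma(shape=2.0, scale=1.0, size=100_000)  # skewed posterior samples

mean_est = post.mean()        # Bayes estimate under squared-distance cost
median_est = np.median(post)  # Bayes estimate under absolute-distance cost
counts, edges = np.histogram(post, bins=200)  # crude mode (MAP) for 0-1 cost
mode_est = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])

for name, est in [("mode", mode_est), ("mean", mean_est), ("median", median_est)]:
    sq = np.mean((post - est) ** 2)   # expected squared cost
    ab = np.mean(np.abs(post - est))  # expected absolute cost
    print(f"{name:6s} est={est:5.2f}  sq={sq:5.2f}  abs={ab:5.2f}")

On a skewed posterior the three estimates disagree, and each minimizes only its own cost, which is the abstract's warning about using one in place of another.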
1304.3111 | Randall Smith | Randall Smith, Matthew Self, Peter Cheeseman | Estimating Uncertain Spatial Relationships in Robotics | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-267-288 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we describe a representation for spatial information, called
the stochastic map, and associated procedures for building it, reading
information from it, and revising it incrementally as new information is
obtained. The map contains the estimates of relationships among objects in the
map, and their uncertainties, given all the available information. The
procedures provide a general solution to the problem of estimating uncertain
relative spatial relationships. The estimates are probabilistic in nature, an
advance over the previous, very conservative, worst-case approaches to the
problem. Finally, the procedures are developed in the context of
state-estimation and filtering theory, which provides a solid basis for
numerous extensions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:54:21 GMT"
}
] | 1,365,724,800,000 | [
[
"Smith",
"Randall",
""
],
[
"Self",
"Matthew",
""
],
[
"Cheeseman",
"Peter",
""
]
] |
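The Smith-Self-Cheeseman abstract above estimates uncertain relative spatial relationships by filtering. The scalar core of that machinery is compounding and minimum-variance merging of Gaussian estimates; a Python sketch (the paper works with multivariate transforms, so these 1-D functions are a deliberate simplification):

def compound(mu_ab, var_ab, mu_bc, var_bc):
    # chain two independent relative displacements A->B and B->C
    return mu_ab + mu_bc, var_ab + var_bc

def merge(mu1, var1, mu2, var2):
    # minimum-variance fusion of two independent estimates of the same quantity
    w = var2 / (var1 + var2)
    return w * mu1 + (1 - w) * mu2, (var1 * var2) / (var1 + var2)

Note that merging always shrinks the variance, which is how new sensor readings tighten the stochastic map incrementally.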
1304.3112 | Masaki Togai | Masaki Togai, Hiroyuki Watanabe | A VLSI Design and Implementation for a Real-Time Approximate Reasoning | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-289-296 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The role of inferencing with uncertainty is becoming more important in
rule-based expert systems (ES), since knowledge given by a human expert is
often uncertain or imprecise. We have succeeded in designing a VLSI chip which
can perform an entire inference process based on fuzzy logic. The design of the
VLSI fuzzy inference engine emphasizes simplicity, extensibility, and
efficiency (operational speed and layout area). It is fabricated in 2.5 um CMOS
technology. The inference engine consists of three major components: a rule set
memory, an inference processor, and a controller. In this implementation, a
rule set memory is realized by a read only memory (ROM). The controller
consists of two counters. In the inference processor, one data path is laid out
for each rule. The number of inference rules can be increased by adding more
data paths to the inference processor. All rules are executed in parallel, but
each rule is processed serially. The logical structure of fuzzy inference
proposed in the current paper maps nicely onto the VLSI structure. A two-phase
nonoverlapping clocking scheme is used. Timing tests indicate that the
inference engine can operate at approximately 20.8 MHz. This translates to an
execution speed of approximately 80,000 Fuzzy Logical Inferences Per Second
(FLIPS), and indicates that the inference engine is suitable for a demanding
real-time application. The potential applications include decision-making in
the area of command and control for intelligent robot systems, process control,
missile and aircraft guidance, and other high performance machines.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:54:27 GMT"
}
] | 1,365,724,800,000 | [
[
"Togai",
"Masaki",
""
],
[
"Watanabe",
"Hiroyuki",
""
]
] |
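The Togai-Watanabe abstract above executes all fuzzy rules in parallel in hardware. A toy software analogue of the max-min inference step being parallelized (the rule encoding below is an assumption for illustration, not the chip's data format):

import numpy as np

def fuzzy_infer(x, rules):
    # x     : (d,) membership degrees of the input in each antecedent term
    # rules : list of (antecedent_index_list, consequent_vector) pairs
    out = np.zeros_like(rules[0][1], dtype=float)
    for antecedents, consequent in rules:
        firing = min(x[i] for i in antecedents)                # min = fuzzy AND
        out = np.maximum(out, np.minimum(firing, consequent))  # max = aggregation
    return out  # fuzzy membership over the output terms

Each loop iteration corresponds to one of the chip's per-rule data paths; the hardware evaluates them concurrently.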
1304.3113 | Richard M. Tong | Richard M. Tong, Lee A. Appelbaum, D. G. Shapiro | A General Purpose Inference Engine for Evidential Reasoning Research | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-297-302 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of this paper is to report on the most recent developments in our
ongoing investigation of the representation and manipulation of uncertainty in
automated reasoning systems. In our earlier studies (Tong and Shapiro, 1985) we
described a series of experiments with RUBRIC (Tong et al., 1985), a system for
full-text document retrieval, that generated some interesting insights into the
effects of choosing among a class of scalar valued uncertainty calculi. In
order to extend these results we have begun a new series of experiments with a
larger class of representations and calculi, and to help perform these
experiments we have developed a general purpose inference engine.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:54:33 GMT"
}
] | 1,365,724,800,000 | [
[
"Tong",
"Richard M.",
""
],
[
"Appelbaum",
"Lee A.",
""
],
[
"Shapiro",
"D. G.",
""
]
] |
1304.3114 | Silvio Ursic | Silvio Ursic | Generalizing Fuzzy Logic Probabilistic Inferences | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-303-310 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linear representations for a subclass of boolean symmetric functions selected
by a parity condition are shown to constitute a generalization of the linear
constraints on probabilities introduced by Boole. These linear constraints are
necessary to compute probabilities of events with relations between them
arbitrarily specified with propositional calculus boolean formulas.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:54:38 GMT"
}
] | 1,365,724,800,000 | [
[
"Ursic",
"Silvio",
""
]
] |
1304.3115 | Michael P. Wellman | Michael P. Wellman | Qualitative Probabilistic Networks for Planning Under Uncertainty | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-311-318 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian networks provide a probabilistic semantics for qualitative
assertions about likelihood. A qualitative reasoner based on an algebra over
these assertions can derive further conclusions about the influence of actions.
While the conclusions are much weaker than those computed from complete
probability distributions, they are still valuable for suggesting potential
actions, eliminating obviously inferior plans, identifying important tradeoffs,
and explaining probabilistic models.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:54:44 GMT"
}
] | 1,365,724,800,000 | [
[
"Wellman",
"Michael P.",
""
]
] |
1304.3116 | Ben P. Wise | Ben P. Wise | Experimentally Comparing Uncertain Inference Systems to Probability | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-319-332 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines the biases and performance of several uncertain inference
systems: Mycin, a variant of Mycin, and a simplified version of probability
using conditional independence assumptions. We present axiomatic arguments for
using Minimum Cross Entropy inference as the best way to do uncertain
inference. For Mycin and its variant we found special situations where its
performance was very good, but also situations where performance was worse than
random guessing, or where data was interpreted as having the opposite of its
true import. We have found that all three of these systems usually gave accurate
results, and that the conditional independence assumptions gave the most robust
results. We illustrate how the importance of biases may be quantitatively
assessed and ranked. Considerations of robustness might be a critical factor in
selecting UIS's for a given application.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:54:50 GMT"
}
] | 1,365,724,800,000 | [
[
"Wise",
"Ben P.",
""
]
] |
1304.3117 | Robert M. Yadrick | Robert M. Yadrick, Bruce M. Perrin, David S. Vaughan, Peter D. Holden,
Karl G. Kempf | Evaluation of Uncertain Inference Models I: PROSPECTOR | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-333-338 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines the accuracy of the PROSPECTOR model for uncertain
reasoning. PROSPECTOR's solutions for a large number of computer-generated
inference networks were compared to those obtained from probability theory and
minimum cross-entropy calculations. PROSPECTOR's answers were generally
accurate for a restricted subset of problems that are consistent with its
assumptions. However, even within this subset, we identified conditions under
which PROSPECTOR's performance deteriorates.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:54:56 GMT"
}
] | 1,365,724,800,000 | [
[
"Yadrick",
"Robert M.",
""
],
[
"Perrin",
"Bruce M.",
""
],
[
"Vaughan",
"David S.",
""
],
[
"Holden",
"Peter D.",
""
],
[
"Kempf",
"Karl G.",
""
]
] |
1304.3118 | Ronald R. Yager | Ronald R. Yager | On Implementing Usual Values | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-339-346 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many cases commonsense knowledge consists of knowledge of what is usual.
In this paper we develop a system for reasoning with usual information. This
system is based upon the fact that these pieces of commonsense information
involve both a probabilistic aspect and a granular aspect. We implement this
system with the aid of possibility-probability granules.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:55:01 GMT"
}
] | 1,365,724,800,000 | [
[
"Yager",
"Ronald R.",
""
]
] |
1304.3119 | Lotfi Zadeh | Lotfi Zadeh, Anca Ralescu | On the Combinability of Evidence in the Dempster-Shafer Theory | Appears in Proceedings of the Second Conference on Uncertainty in
Artificial Intelligence (UAI1986) | null | null | UAI-P-1986-PG-347-349 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the current versions of the Dempster-Shafer theory, the only essential
restriction on the validity of the rule of combination is that the sources of
evidence must be statistically independent. Under this assumption, it is
permissible to apply the Dempster-Shafer rule to two or more distinct
probability distributions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:55:05 GMT"
}
] | 1,365,724,800,000 | [
[
"Zadeh",
"Lotfi",
""
],
[
"Ralescu",
"Anca",
""
]
] |
1304.3144 | Emad Saad | Emad Saad | Logical Probability Preferences | arXiv admin note: substantial text overlap with arXiv:1304.2384,
arXiv:1304.2797 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We present a unified logical framework for representing and reasoning about
both probability quantitative and qualitative preferences in probability answer
set programming, called probability answer set optimization programs. The
proposed framework is vital to allow defining probability quantitative
preferences over the possible outcomes of qualitative preferences. We show the
application of probability answer set optimization programs to a variant of the
well-known nurse rostering problem, called the nurse rostering with probability
preferences problem. To the best of our knowledge, this development is the
first to consider a logical framework for reasoning about probability
quantitative preferences, in general, and reasoning about both probability
quantitative and qualitative preferences in particular.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2013 22:18:18 GMT"
}
] | 1,365,724,800,000 | [
[
"Saad",
"Emad",
""
]
] |
1304.3208 | Denis Berthier Pr. | Denis Berthier | From Constraints to Resolution Rules, Part I: Conceptual Framework | International Joint Conferences on Computer, Information, Systems
Sciences and Engineering (CISSE 08), December 5-13, 2008, Springer. Also a
chapter of the book "Advanced Techniques in Computing Sciences and Software
Engineering", Khaled Elleithy Editor, pp. 165-170, Springer, 2010, ISBN
9789094136599 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real world problems naturally appear as constraint satisfaction
problems (CSP), for which very efficient algorithms are known. Most of these
involve the combination of two techniques: some direct propagation of
constraints between variables (with the goal of reducing their sets of possible
values) and some kind of structured search (depth-first, breadth-first,...).
But when such blind search is not possible or not allowed or when one wants a
'constructive' or a 'pattern-based' solution, one must devise more complex
propagation rules instead. In this case, one can introduce the notion of a
candidate (a 'still possible' value for a variable). Here, we give this
intuitive notion a well-defined logical status, from which we can define the
concepts of a resolution rule and a resolution theory. In order to keep our
analysis as concrete as possible, we illustrate each definition with the well
known Sudoku example. Part I proposes a general conceptual framework based on
first order logic; with the introduction of chains and braids, Part II will
give much deeper results.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2013 06:37:20 GMT"
}
] | 1,365,724,800,000 | [
[
"Berthier",
"Denis",
""
]
] |
1304.3210 | Denis Berthier Pr. | Denis Berthier | From Constraints to Resolution Rules, Part II: chains, braids,
confluence and T&E | International Joint Conferences on Computer, Information, Systems
Sciences and Engineering (CISSE 08), December 5-13, 2008, Springer. Also a
chapter of the book 'Advanced Techniques in Computing Sciences and Software
Engineering', Khaled Elleithy Editor, pp. 171-176, Springer, 2010, ISBN
9789094136599 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this Part II, we apply the general theory developed in Part I to a
detailed analysis of the Constraint Satisfaction Problem (CSP). We show how
specific types of resolution rules can be defined. In particular, we introduce
the general notions of a chain and a braid. As in Part I, these notions are
illustrated in detail with the Sudoku example - a problem known to be
NP-complete and which is therefore typical of a broad class of hard problems.
For Sudoku, we also show how far one can go in 'approximating' a CSP with a
resolution theory and we give an empirical statistical analysis of how the
various puzzles, corresponding to different sets of entries, can be classified
along a natural scale of complexity. For any CSP, we also prove the confluence
property of some Resolution Theories based on braids and we show how it can be
used to define different resolution strategies. Finally, we prove that, in any
CSP, braids have the same solving capacity as Trial-and-Error (T&E) with no
guessing, and we comment on this result in the Sudoku case.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2013 06:40:22 GMT"
}
] | 1,365,724,800,000 | [
[
"Berthier",
"Denis",
""
]
] |
1304.3418 | Benjamin N. Grosof | Benjamin N. Grosof | An Inequality Paradigm for Probabilistic Knowledge | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-1-8 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an inequality paradigm for probabilistic reasoning based on a
logic of upper and lower bounds on conditional probabilities. We investigate a
family of probabilistic logics, generalizing the work of Nilsson [14]. We
develop a variety of logical notions for probabilistic reasoning, including
soundness, completeness justification; and convergence: reduction of a theory
to a simpler logical class. We argue that a bound view is especially useful for
describing the semantics of probabilistic knowledge representation and for
describing intermediate states of probabilistic inference and updating. We show
that the Dempster-Shafer theory of evidence is formally identical to a special
case of our generalized probabilistic logic. Our paradigm thus incorporates
both Bayesian "rule-based" approaches and avowedly non-Bayesian "evidential"
approaches such as MYCIN and Dempster-Shafer. We suggest how to integrate the
two "schools", and explore some possibilities for novel synthesis of a variety
of ideas in probabilistic reasoning.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:55:33 GMT"
}
] | 1,365,984,000,000 | [
[
"Grosof",
"Benjamin N.",
""
]
] |
1304.3419 | David Heckerman | David Heckerman | Probabilistic Interpretations for MYCIN's Certainty Factors | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-9-20 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines the quantities used by MYCIN to reason with uncertainty,
called certainty factors. It is shown that the original definition of certainty
factors is inconsistent with the functions used in MYCIN to combine the
quantities. This inconsistency is used to argue for a redefinition of certainty
factors in terms of the intuitively appealing desiderata associated with the
combining functions. It is shown that this redefinition accommodates an
unlimited number of probabilistic interpretations. These interpretations are
shown to be monotonic transformations of the likelihood ratio p(E|H)/p(E|~H).
The construction of these interpretations provides insight into the assumptions
implicit in the certainty factor model. In particular, it is shown that if
uncertainty is to be propagated through an inference network in accordance with
the desiderata, evidence must be conditionally independent given the hypothesis
and its negation and the inference network must have a tree structure. It is
emphasized that assumptions implicit in the model are rarely true in practical
applications. Methods for relaxing the assumptions are suggested.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:55:40 GMT"
}
] | 1,365,984,000,000 | [
[
"Heckerman",
"David",
""
]
] |
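The Heckerman abstract above shows certainty factors must be monotonic transformations of the likelihood ratio lambda = p(E|H)/p(E|~H). A Python sketch of one such transformation and its inverse (this particular piecewise map is an illustrative choice consistent with the abstract, and the function names are introduced here):

def cf_from_lambda(lam):
    # map a likelihood ratio lam > 0 into a certainty factor in (-1, 1);
    # lam > 1 yields confirming evidence, lam < 1 disconfirming
    return (lam - 1.0) / lam if lam >= 1.0 else lam - 1.0

def lambda_from_cf(cf):
    # inverse of the map above
    return 1.0 / (1.0 - cf) if cf >= 0.0 else cf + 1.0

Because the map is monotonic, combining evidence by multiplying likelihood ratios corresponds to the parallel-combination behavior expected of certainty factors, under the conditional-independence and tree-structure assumptions the abstract spells out.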
1304.3420 | Daniel Hunter | Daniel Hunter | Uncertain Reasoning Using Maximum Entropy Inference | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-21-27 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of maximum entropy inference in reasoning with uncertain information
is commonly justified by an information-theoretic argument. This paper
discusses a possible objection to this information-theoretic justification and
shows how it can be met. I then compare maximum entropy inference with certain
other currently popular methods for uncertain reasoning. In making such a
comparison, one must distinguish between static and dynamic theories of degrees
of belief: a static theory concerns the consistency conditions for degrees of
belief at a given time; whereas a dynamic theory concerns how one's degrees of
belief should change in the light of new information. It is argued that maximum
entropy is a dynamic theory and that a complete theory of uncertain reasoning
can be gotten by combining maximum entropy inference with probability theory,
which is a static theory. This total theory, I argue, is much better grounded
than are other theories of uncertain reasoning.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:55:46 GMT"
}
] | 1,365,984,000,000 | [
[
"Hunter",
"Daniel",
""
]
] |
1304.3421 | Rodney W. Johnson | Rodney W. Johnson | Independence and Bayesian Updating Methods | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-28-30 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Duda, Hart, and Nilsson have set forth a method for rule-based inference
systems to use in updating the probabilities of hypotheses on the basis of
multiple items of new evidence. Pednault, Zucker, and Muresan claimed to give
conditions under which independence assumptions made by Duda et al. preclude
updating-that is, prevent the evidence from altering the probabilities of the
hypotheses. Glymour refutes Pednault et al.'s claim with a counterexample of a
rather special form (one item of evidence is incompatible with all but one of
the hypotheses); he raises, but leaves open, the question whether their result
would be true with an added assumption to rule out such special cases. We show
that their result does not hold even with the added assumption, but that it can
nevertheless be largely salvaged. Namely, under the conditions assumed by
Pednault et al., at most one of the items of evidence can alter the probability
of any given hypothesis; thus, although updating is possible, multiple updating
for any of the hypotheses is precluded.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:55:51 GMT"
}
] | 1,365,984,000,000 | [
[
"Johnson",
"Rodney W.",
""
]
] |
1304.3422 | Judea Pearl | Judea Pearl | A Constraint Propagation Approach to Probabilistic Reasoning | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-31-42 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper demonstrates that strict adherence to probability theory does not
preclude the use of concurrent, self-activated constraint-propagation
mechanisms for managing uncertainty. Maintaining local records of
sources-of-belief allows both predictive and diagnostic inferences to be
activated simultaneously and propagate harmoniously towards a stable
equilibrium.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:55:56 GMT"
}
] | 1,365,984,000,000 | [
[
"Pearl",
"Judea",
""
]
] |
1304.3423 | John E. Shore | John E. Shore | Relative Entropy, Probabilistic Inference and AI | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-43-47 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various properties of relative entropy have led to its widespread use in
information theory. These properties suggest that relative entropy has a role
to play in systems that attempt to perform inference in terms of probability
distributions. In this paper, I will review some basic properties of relative
entropy as well as its role in probabilistic inference. I will also mention
briefly a few existing and potential applications of relative entropy to
so-called artificial intelligence (AI).
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:56:01 GMT"
}
] | 1,365,984,000,000 | [
[
"Shore",
"John E.",
""
]
] |
1304.3424 | Ray Solomonoff | Ray Solomonoff | Foundations of Probability Theory for AI - The Application of
Algorithmic Probability to Problems in Artificial Intelligence | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-48-56 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper covers two topics: first, an introduction to Algorithmic Complexity
Theory -- how it defines probability, some of its characteristic properties, and
past successful applications. Second, we apply it to problems in A.I., where
it promises to give near optimum search procedures for two very broad classes
of problems.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:56:07 GMT"
}
] | 1,365,984,000,000 | [
[
"Solomonoff",
"Ray",
""
]
] |
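A purely illustrative sketch of the core idea, assuming hypotheses can be coded as programs whose description lengths are known; the hypothesis names and bit counts below are invented for the example:

```python
# Algorithmic probability weights a hypothesis describable by a program
# of length L bits roughly in proportion to 2**(-L), so shorter
# descriptions dominate the prior.

hypotheses = {"constant": 3, "alternating": 5, "random-looking": 12}  # bits

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
for h, w in weights.items():
    print(h, w / total)      # normalized prior strongly favors short codes
```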
1304.3425 | Piero P. Bonissone | Piero P. Bonissone, Keith S. Decker | Selecting Uncertainty Calculi and Granularity: An Experiment in
Trading-Off Precision and Complexity | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-57-66 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The management of uncertainty in expert systems has usually been left to ad
hoc representations and rules of combination lacking either a sound theory or
clear semantics. The objective of this paper is to establish a theoretical
basis for defining the syntax and semantics of a small subset of calculi of
uncertainty operating on a given term set of linguistic statements of
likelihood. Each calculus is defined by specifying a negation, a conjunction
and a disjunction operator. Families of Triangular norms and conorms constitute
the most general representations of conjunction and disjunction operators.
These families provide us with a formalism for defining an infinite number of
different calculi of uncertainty. The term set will define the uncertainty
granularity, i.e. the finest level of distinction among different
quantifications of uncertainty. This granularity will limit the ability to
differentiate between two similar operators. Therefore, only a small finite
subset of the infinite number of calculi will produce notably different
results. This result is illustrated by two experiments where nine and eleven
different calculi of uncertainty are used with three term sets containing five,
nine, and thirteen elements, respectively. Finally, the use of a
context-dependent rule set is proposed to select the most appropriate calculus for any
given situation. Such a rule set will be relatively small since it must only
describe the selection policies for a small number of calculi (resulting from
the analyzed trade-off between complexity and precision).
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:56:13 GMT"
}
] | 1,365,984,000,000 | [
[
"Bonissone",
"Piero P.",
""
],
[
"Decker",
"Keith S.",
""
]
] |
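The families mentioned above include, for instance, the standard T-norm/T-conorm pairs below; a small sketch (inputs illustrative) showing how different calculi give notably different conjunctions and disjunctions of the same likelihoods:

```python
# Three classical triangular norms (conjunctions) and their dual
# conorms (disjunctions) on degrees of likelihood in [0, 1].

t_norms = {
    "min":         lambda a, b: min(a, b),
    "product":     lambda a, b: a * b,
    "Lukasiewicz": lambda a, b: max(0.0, a + b - 1.0),
}
t_conorms = {
    "max":         lambda a, b: max(a, b),
    "prob. sum":   lambda a, b: a + b - a * b,
    "Lukasiewicz": lambda a, b: min(1.0, a + b),
}

a, b = 0.7, 0.6
for name, T in t_norms.items():
    print(f"conjunction via {name}: {T(a, b):.2f}")   # 0.60, 0.42, 0.30
for name, S in t_conorms.items():
    print(f"disjunction via {name}: {S(a, b):.2f}")   # 0.70, 0.88, 1.00
```

Whether 0.42 and 0.30 are even distinguishable depends on the term-set granularity, which is exactly the trade-off the paper's experiments probe.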
1304.3426 | Marvin S. Cohen | Marvin S. Cohen | A Framework for Non-Monotonic Reasoning About Probabilistic Assumptions | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-67-75 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attempts to replicate probabilistic reasoning in expert systems have
typically overlooked a critical ingredient of that process. Probabilistic
analysis typically requires extensive judgments regarding interdependencies
among hypotheses and data, and regarding the appropriateness of various
alternative models. The application of such models is often an iterative
process, in which the plausibility of the results confirms or disconfirms the
validity of assumptions made in building the model. In current expert systems,
by contrast, probabilistic information is encapsulated within modular rules
(involving, for example, "certainty factors"), and there is no mechanism for
reviewing the overall form of the probability argument or the validity of the
judgments entering into it.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:56:20 GMT"
}
] | 1,365,984,000,000 | [
[
"Cohen",
"Marvin S.",
""
]
] |
1304.3427 | Robert Fung | Robert Fung, Chee Yee Chong | Metaprobability and Dempster-Shafer in Evidential Reasoning | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-76-83 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evidential reasoning in expert systems has often used ad-hoc uncertainty
calculi. Although it is generally accepted that probability theory provides a
firm theoretical foundation, researchers have found some problems with its use
as a workable uncertainty calculus. Among these problems are representation of
ignorance, consistency of probabilistic judgements, and adjustment of a priori
judgements with experience. The application of metaprobability theory to
evidential reasoning is a new approach to solving these problems.
Metaprobability theory can be viewed as a way to provide soft or hard
constraints on beliefs in much the same manner as the Dempster-Shafer theory
provides constraints on probability masses on subsets of the state space. Thus,
we use the Dempster-Shafer theory, an alternative theory of evidential
reasoning, to illuminate metaprobability theory as a theory of evidential
reasoning. The goal of this paper is to compare how metaprobability theory and
Dempster-Shafer theory handle the adjustment of beliefs with evidence with
respect to a particular thought experiment. Sections 2 and 3 give brief
descriptions of the metaprobability and Dempster-Shafer theories.
Metaprobability theory deals with higher order probabilities applied to
evidential reasoning. Dempster-Shafer theory is a generalization of probability
theory which has evolved from a theory of upper and lower probabilities.
Section 4 describes a thought experiment and the metaprobability and
Dempster-Shafer analysis of the experiment. The thought experiment focuses on
forming beliefs about a population with 6 types of members {1, 2, 3, 4, 5, 6}.
A type is uniquely defined by the values of three features: A, B, C. That is,
if the three features of one member of the population were known then its type
could be ascertained. Each of the three features has two possible values (e.g.
A can be either "a0" or "a1"). Beliefs are formed from evidence accrued from
two sensors: sensor A and sensor B. Each sensor senses the corresponding
defining feature. Sensor A reports that half of its observations are "a0" and
half are "a1". Sensor B reports that half of its observations
are "b0" and half are "b1". Based on these two pieces of evidence, what
should be the beliefs on the distribution of types in the population? Note that
the third feature is not observed by any sensor.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:56:26 GMT"
}
] | 1,365,984,000,000 | [
[
"Fung",
"Robert",
""
],
[
"Chong",
"Chee Yee",
""
]
] |
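On the Dempster-Shafer side of the comparison, beliefs from the two sensors would be pooled with Dempster's rule of combination. A minimal sketch follows; the frame echoes the six-type population, but the focal elements and masses are illustrative, not the paper's analysis:

```python
# Dempster's rule: intersect focal elements, multiply masses, discard
# mass falling on the empty set as conflict, and renormalize. Assumes
# the two belief functions are not totally conflicting (k > 0).

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for A, w1 in m1.items():
        for B, w2 in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    k = 1.0 - conflict
    return {C: w / k for C, w in combined.items()}

frame = frozenset({1, 2, 3, 4, 5, 6})
m_a = {frozenset({1, 2, 3}): 0.5, frame: 0.5}   # hypothetical sensor-A evidence
m_b = {frozenset({1, 2, 5}): 0.5, frame: 0.5}   # hypothetical sensor-B evidence
print(combine(m_a, m_b))
```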
1304.3428 | Matthew L. Ginsberg | Matthew L. Ginsberg | Implementing Probabilistic Reasoning | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-84-90 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | General problems in analyzing information in a probabilistic database are
considered. The practical difficulties (and occasional advantages) of storing
uncertain data, of using it in conventional forward- or backward-chaining
inference engines, and of working with a probabilistic version of resolution
are discussed. The background for this paper is the incorporation of uncertain
reasoning facilities in MRS, a general-purpose expert system building tool.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:56:32 GMT"
}
] | 1,365,984,000,000 | [
[
"Ginsberg",
"Matthew L.",
""
]
] |
1304.3429 | Glenn Shafer | Glenn Shafer | Probability Judgement in Artificial Intelligence | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-91-98 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is concerned with two theories of probability judgment: the
Bayesian theory and the theory of belief functions. It illustrates these
theories with some simple examples and discusses some of the issues that arise
when we try to implement them in expert systems. The Bayesian theory is well
known; its main ideas go back to the work of Thomas Bayes (1702-1761). The
theory of belief functions, often called the Dempster-Shafer theory in the
artificial intelligence community, is less well known, but it has even older
antecedents; belief-function arguments appear in the work of George Hooper
(1640-1723) and James Bernoulli (1654-1705). For elementary expositions of the
theory of belief functions, see Shafer (1976, 1985).
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:56:37 GMT"
}
] | 1,365,984,000,000 | [
[
"Shafer",
"Glenn",
""
]
] |
1304.3430 | Ben P. Wise | Ben P. Wise, Max Henrion | A Framework for Comparing Uncertain Inference Systems to Probability | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-99-108 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several different uncertain inference systems (UISs) have been developed for
representing uncertainty in rule-based expert systems. Some of these, such as
Mycin's Certainty Factors, Prospector, and Bayes' Networks were designed as
approximations to probability, and others, such as Fuzzy Set Theory and
Dempster-Shafer Belief Functions were not. How different are these UISs in
practice, and does it matter which you use? When combining and propagating
uncertain information, each UIS must, at least by implication, make certain
assumptions about correlations not explicitly specified. The maximum entropy
principle, with minimum cross-entropy updating, provides a way of making
assumptions about the missing specification that minimizes the additional
information assumed, and thus offers a standard against which the other UISs
can be compared. We describe a framework for the experimental comparison of the
performance of different UISs, and provide some illustrative results.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:56:43 GMT"
}
] | 1,365,984,000,000 | [
[
"Wise",
"Ben P.",
""
],
[
"Henrion",
"Max",
""
]
] |
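As one concrete instance of the UISs being compared, Mycin's certainty factors combine parallel evidence with the standard rule sketched below; the CF values are illustrative:

```python
# Mycin-style parallel combination of two certainty factors in [-1, 1].

def combine_cf(cf1, cf2):
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)                    # both confirm
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)                    # both disconfirm
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # mixed evidence

print(combine_cf(0.6, 0.4))    # 0.76: confirming rules reinforce
print(combine_cf(0.6, -0.4))   # ~0.33: conflicting evidence partly cancels
```

The framework's question is how far such rules drift from the answer a full probabilistic treatment (maximum entropy with minimum cross-entropy updating) would give on the same inputs.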
1304.3431 | Norman C. Dalkey | Norman C. Dalkey | Inductive Inference and the Representation of Uncertainty | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-109-116 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The form and justification of inductive inference rules depend strongly on
the representation of uncertainty. This paper examines one generic
representation, namely, incomplete information. The notion can be formalized by
presuming that the relevant probabilities in a decision problem are known only
to the extent that they belong to a class K of probability distributions. The
concept is a generalization of a frequent suggestion that uncertainty be
represented by intervals or ranges on probabilities. To make the representation
useful for decision making, an inductive rule can be formulated which
determines, in a well-defined manner, a best approximation to the unknown
probability, given the set K. In addition, the knowledge set notion entails a
natural procedure for updating -- modifying the set K given new evidence.
Several non-intuitive consequences of updating emphasize the differences
between inference with complete and inference with incomplete information.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:56:49 GMT"
}
] | 1,365,984,000,000 | [
[
"Dalkey",
"Norman C.",
""
]
] |
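A minimal sketch of inference with incomplete information, assuming K is a finite set of candidate priors over two hypotheses; conditioning every member of K on the evidence yields interval-valued posteriors. All numbers are illustrative:

```python
# Updating a knowledge set K: condition each member distribution on the
# evidence, then read off lower and upper posterior probabilities.

likelihood = {"H1": 0.8, "H2": 0.3}          # P(evidence | hypothesis)
K = [{"H1": 0.5, "H2": 0.5},                 # candidate priors in K
     {"H1": 0.3, "H2": 0.7},
     {"H1": 0.7, "H2": 0.3}]

posteriors = []
for prior in K:
    z = sum(prior[h] * likelihood[h] for h in prior)
    posteriors.append({h: prior[h] * likelihood[h] / z for h in prior})

p1 = [post["H1"] for post in posteriors]
print(min(p1), max(p1))                      # posterior interval for H1
```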
1304.3433 | Larry Rendell | Larry Rendell | Induction, of and by Probability | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-129-134 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines some methods and ideas underlying the author's successful
probabilistic learning systems (PLS), which have proven uniquely effective and
efficient in generalization learning or induction. While the emerging
principles are generally applicable, this paper illustrates them in heuristic
search, which demands noise management and incremental learning. In our
approach, both task performance and learning are guided by probability.
Probabilities are incrementally normalized and revised, and their errors are
located and corrected.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:01 GMT"
}
] | 1,365,984,000,000 | [
[
"Rendell",
"Larry",
""
]
] |
1304.3434 | David S. Vaughan | David S. Vaughan, Bruce M. Perrin, Robert M. Yadrick, Peter D. Holden,
Karl G. Kempf | An Odds Ratio Based Inference Engine | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-135-142 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Expert systems applications that involve uncertain inference can be
represented by a multidimensional contingency table. These tables offer a
general approach to inferring with uncertain evidence, because they can embody
any form of association between any number of pieces of evidence and
conclusions. (Simpler models may be required, however, if the number of pieces
of evidence bearing on a conclusion is large.) This paper presents a method of
using these tables to make uncertain inferences without assumptions of
conditional independence among pieces of evidence or heuristic combining rules.
As evidence is accumulated, new joint probabilities are calculated so as to
maintain any dependencies among the pieces of evidence that are found in the
contingency table. The new conditional probability of the conclusion is then
calculated directly from these new joint probabilities and the conditional
probabilities in the contingency table.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:05 GMT"
}
] | 1,365,984,000,000 | [
[
"Vaughan",
"David S.",
""
],
[
"Perrin",
"Bruce M.",
""
],
[
"Yadrick",
"Robert M.",
""
],
[
"Holden",
"Peter D.",
""
],
[
"Kempf",
"Karl G.",
""
]
] |
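A minimal sketch of the contingency-table style of inference described above: store the full joint over the evidence items and the conclusion, and accumulate evidence by conditioning, so no independence assumption is needed. The table entries are illustrative:

```python
# Joint distribution over (E1, E2, H); conditioning on observed evidence
# automatically preserves whatever dependencies the table encodes.

joint = {
    (1, 1, 1): 0.20, (1, 1, 0): 0.05,
    (1, 0, 1): 0.10, (1, 0, 0): 0.15,
    (0, 1, 1): 0.10, (0, 1, 0): 0.10,
    (0, 0, 1): 0.05, (0, 0, 0): 0.25,
}

def condition(table, index, value):
    kept = {k: p for k, p in table.items() if k[index] == value}
    z = sum(kept.values())                 # renormalize after conditioning
    return {k: p / z for k, p in kept.items()}

after_e1 = condition(joint, 0, 1)          # observe E1 = 1
p_h = sum(p for (e1, e2, h), p in after_e1.items() if h == 1)
print(p_h)                                 # P(H = 1 | E1 = 1) = 0.6
```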
1304.3435 | Moshe Ben-Bassat | Moshe Ben-Bassat, Oded Maler | A Framework for Control Strategies in Uncertain Inference Networks | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-143-151 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Control Strategies for hierarchical tree-like probabilistic inference
networks are formulated and investigated. Strategies that utilize staged
look-ahead and temporary focus on subgoals are formalized and refined using the
Depth Vector concept, which serves as a tool for defining the 'virtual tree'
regarded by the control strategy. The concept is illustrated by four types of
control strategies for three-level trees that are characterized according to
their Depth Vector, and according to the way they consider intermediate nodes
and the role that they let these nodes play. INFERENTI is a computerized
inference system written in Prolog, which provides tools for exercising a
variety of control strategies. The system also provides tools for simulating
test data and for comparing the relative average performance under different
strategies.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:11 GMT"
}
] | 1,365,984,000,000 | [
[
"Ben-Bassat",
"Moshe",
""
],
[
"Maler",
"Oded",
""
]
] |
1304.3436 | Henry Hamburger | Henry Hamburger | Combining Uncertain Estimates | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-152-159 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a real expert system, one may have unreliable, unconfident, conflicting
estimates of the value for a particular parameter. It is important for decision
making that the information present in this aggregate somehow find its way into
use. We cast the problem of representing and combining uncertain estimates as
the selection of two kinds of functions: one to determine an estimate, the other
its uncertainty. The paper includes a long list of properties that such
functions should satisfy, and it presents one method that satisfies them.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:16 GMT"
}
] | 1,365,984,000,000 | [
[
"Hamburger",
"Henry",
""
]
] |
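One familiar pair of such functions, shown here only as a sketch and not necessarily the method the paper selects, is inverse-variance weighting: the estimate is a precision-weighted mean and the uncertainty is the combined variance. Values illustrative:

```python
# Combine conflicting, unequally reliable estimates by weighting each
# with the reciprocal of its variance (its precision).

def combine(estimates, variances):
    weights = [1.0 / v for v in variances]       # more confident => more weight
    total = sum(weights)
    estimate = sum(w * x for w, x in zip(weights, estimates)) / total
    return estimate, 1.0 / total                 # combined value and variance

value, var = combine([10.0, 14.0, 11.0], [1.0, 4.0, 2.0])
print(value, var)                                # ~10.857, ~0.571
```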
1304.3437 | John F. Lemmer | John F. Lemmer | Confidence Factors, Empiricism and the Dempster-Shafer Theory of
Evidence | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-160-176 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The issue of confidence factors in Knowledge Based Systems has become
increasingly important, and Dempster-Shafer (DS) theory has become increasingly
popular as a basis for these factors. This paper discusses the need for an
empirical interpretation of any theory of confidence factors applied to
Knowledge Based Systems and describes an empirical interpretation of DS theory
suggesting that the theory has been extensively misinterpreted. For the
essentially syntactic DS theory, a model is developed based on sample spaces,
the traditional semantic model of probability theory. This model is used to
show that, if belief functions are based on reasonably accurate sampling or
observation of a sample space, then the beliefs and upper probabilities as
computed according to DS theory cannot be interpreted as frequency ratios.
Since many proposed applications of DS theory use belief functions in
situations with statistically derived evidence (Wesley [1]) and seem to appeal
to statistical intuition to provide an interpretation of the results, as has
Garvey [2], it may be argued that DS theory has often been misapplied.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:24 GMT"
}
] | 1,365,984,000,000 | [
[
"Lemmer",
"John F.",
""
]
] |
1304.3438 | Alan Bundy | Alan Bundy | Incidence Calculus: A Mechanism for Probabilistic Reasoning | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-177-184 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mechanisms for the automation of uncertainty are required for expert systems.
Sometimes these mechanisms need to obey the properties of probabilistic
reasoning. A purely numeric mechanism, like those proposed so far, cannot
provide a probabilistic logic with truth-functional connectives. We propose an
alternative mechanism, Incidence Calculus, which is based on a representation
of uncertainty using sets of points, which might represent situations, models
or possible worlds. Incidence Calculus does provide a probabilistic logic with
truth-functional connectives.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:29 GMT"
}
] | 1,365,984,000,000 | [
[
"Bundy",
"Alan",
""
]
] |
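A minimal sketch of the incidence idea, assuming a finite set of equally weighted possible worlds; the incidences below are illustrative. Connectives act on incidence sets, so the logic stays truth-functional while probabilities fall out as set weights:

```python
# Incidence Calculus: i(P and Q) = i(P) & i(Q), i(P or Q) = i(P) | i(Q),
# i(not P) = worlds - i(P); probability is the weight of the incidence.

worlds = {"w1", "w2", "w3", "w4", "w5"}          # equally weighted here

i_p = {"w1", "w2", "w3"}                         # incidence of P
i_q = {"w2", "w3", "w4"}                         # incidence of Q

def prob(incidence):
    return len(incidence) / len(worlds)

print(prob(i_p & i_q))       # P(P and Q) = 0.4
print(prob(i_p | i_q))       # P(P or Q)  = 0.8
print(prob(worlds - i_p))    # P(not P)   = 0.4
```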