id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
1304.1497 | Eugene Charniak | Eugene Charniak, Robert P. Goldman | Plan Recognition in Stories and in Life | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-54-59 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Plan recognition does not work the same way in stories and in "real life"
(people tend to jump to conclusions more in stories). We present a theory of
this, for the particular case of how objects in stories (or in life) influence
plan recognition decisions. We provide a Bayesian network formalization of a
simple first-order theory of plans, and show how a particular network parameter
seems to govern the difference between "life-like" and "story-like" response.
We then show why this parameter would be influenced (in the desired way) by a
model of speaker (or author) topic selection which assumes that facts in
stories are typically "relevant".
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:37:23 GMT"
}
] | 1,365,379,200,000 | [
[
"Charniak",
"Eugene",
""
],
[
"Goldman",
"Robert P.",
""
]
] |
1304.1498 | R. Martin Chavez | R. Martin Chavez, Gregory F. Cooper | An Empirical Evaluation of a Randomized Algorithm for Probabilistic
Inference | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-60-70 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, researchers in decision analysis and artificial intelligence
(AI) have used Bayesian belief networks to build models of expert opinion.
Using standard methods drawn from the theory of computational complexity,
workers in the field have shown that the problem of probabilistic inference in
belief networks is difficult and almost certainly intractable. KNET, a
software environment for constructing knowledge-based systems within the
axiomatic framework of decision theory, contains a randomized approximation
scheme for probabilistic inference. The algorithm can, in many circumstances,
perform efficient approximate inference in large and richly interconnected
models of medical diagnosis. Unlike previously described stochastic algorithms
for probabilistic inference, the randomized approximation scheme computes a
priori bounds on running time by analyzing the structure and contents of the
belief network. In this article, we describe a randomized algorithm for
probabilistic inference and analyze its performance mathematically. Then, we
devote the major portion of the paper to a discussion of the algorithm's
empirical behavior. The results indicate that the generation of good trials
(that is, trials whose distribution closely matches the true distribution),
rather than the computation of numerous mediocre trials, dominates the
performance of stochastic simulation. Key words: probabilistic inference,
belief networks, stochastic simulation, computational complexity theory,
randomized algorithms.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:37:29 GMT"
}
] | 1,365,379,200,000 | [
[
"Chavez",
"R. Martin",
""
],
[
"Cooper",
"Gregory F.",
""
]
] |
1304.1499 | Marvin S. Cohen | Marvin S. Cohen | Decision Making "Biases" and Support for Assumption-Based Higher-Order
Reasoning | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-71-80 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unaided human decision making appears to systematically violate consistency
constraints imposed by normative theories; these biases in turn appear to
justify the application of formal decision-analytic models. It is argued that
both claims are wrong. In particular, we will argue that the "confirmation
bias" is premised on an overly narrow view of how conflicting evidence is and
ought to be handled. Effective decision aiding should focus on supporting the
central processes by means of which knowledge is extended into novel situations
and in which assumptions are adopted, utilized, and revised. The Non-Monotonic
Probabilist represents initial work toward such an aid.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:37:35 GMT"
}
] | 1,365,379,200,000 | [
[
"Cohen",
"Marvin S.",
""
]
] |
1304.1500 | Didier Dubois | Didier Dubois, Jerome Lang, Henri Prade | Automated Reasoning Using Possibilistic Logic: Semantics, Belief
Revision and Variable Certainty Weights | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-81-87 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, an approach to automated deduction under uncertainty, based on
possibilistic logic, is proposed; for that purpose we deal with clauses
weighted by a degree which is a lower bound of a necessity or a possibility
measure, according to the nature of the uncertainty. Two resolution rules are
used for coping with the different situations, and the refutation method can be
generalized. Moreover, the lower bounds are allowed to be functions of variables
involved in the clause, which gives hypothetical reasoning capabilities. The
relation between our approach and the idea of minimizing abnormality is briefly
discussed. In the case where only lower bounds of necessity measures are involved,
a semantics is proposed, in which the completeness of the extended resolution
principle is proved. Moreover, deduction from a partially inconsistent knowledge
base can be managed in this approach and displays some form of
non-monotonicity.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:37:41 GMT"
}
] | 1,365,379,200,000 | [
[
"Dubois",
"Didier",
""
],
[
"Lang",
"Jerome",
""
],
[
"Prade",
"Henri",
""
]
] |
1304.1501 | Christopher Elsaesser | Christopher Elsaesser, Max Henrion | How Much More Probable is "Much More Probable"? Verbal Expressions for
Probability Updates | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-88-94 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian inference systems should be able to explain their reasoning to
users, translating from numerical to natural language. Previous empirical work
has investigated the correspondence between absolute probabilities and
linguistic phrases. This study extends that work to the correspondence between
changes in probabilities (updates) and relative probability phrases, such as
"much more likely" or "a little less likely." Subjects selected such phrases to
best describe numerical probability updates. We examined three hypotheses about
the correspondence, and found the most descriptively accurate of these three to
be that each such phrase corresponds to a fixed difference in probability
(rather than fixed ratio of probabilities or of odds). The empirically derived
phrase selection function uses eight phrases and achieved a 72% accuracy in
correspondence with the subjects' actual usage.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:37:47 GMT"
}
] | 1,365,379,200,000 | [
[
"Elsaesser",
"Christopher",
""
],
[
"Henrion",
"Max",
""
]
] |
1304.1502 | Henri Farrency | Henri Farrency, Henri Prade | Positive and Negative Explanations of Uncertain Reasoning in the
Framework of Possibility Theory | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-95-101 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an approach for developing the explanation capabilities
of rule-based expert systems managing imprecise and uncertain knowledge. The
treatment of uncertainty takes place in the framework of possibility theory
where the available information concerning the value of a logical or numerical
variable is represented by a possibility distribution which restricts its more
or less possible values. We first discuss different kinds of queries asking for
explanations before focusing on the following two types: i) how a particular
possibility distribution is obtained (emphasizing the main reasons only); ii)
why, in a computed possibility distribution, a particular value has received a
possibility degree which is so high, so low or so contrary to the expectation.
The approach is based on the exploitation of equations in max-min algebra. This
formalism includes the limit case of certain and precise information.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:37:53 GMT"
}
] | 1,365,379,200,000 | [
[
"Farrency",
"Henri",
""
],
[
"Prade",
"Henri",
""
]
] |
1304.1503 | Kenneth W. Fertig | Kenneth W. Fertig, John S. Breese | Interval Influence Diagrams | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-102-111 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a mechanism for performing probabilistic reasoning in influence
diagrams using interval rather than point valued probabilities. We derive the
procedures for node removal (corresponding to conditional expectation) and arc
reversal (corresponding to Bayesian conditioning) in influence diagrams where
lower bounds on probabilities are stored at each node. The resulting bounds for
the transformed diagram are shown to be optimal within the class of constraints
on probability distributions that can be expressed exclusively as lower bounds
on the component probabilities of the diagram. Sequences of these operations
can be performed to answer probabilistic queries with indeterminacies in the
input and for performing sensitivity analysis on an influence diagram. The
storage requirements and computational complexity of this approach are
comparable to those for point-valued probabilistic inference mechanisms, making
the approach attractive for performing sensitivity analysis and for cases where
probability information is not available. Limited empirical data on an
implementation of the methodology are provided.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:37:59 GMT"
}
] | 1,365,379,200,000 | [
[
"Fertig",
"Kenneth W.",
""
],
[
"Breese",
"John S.",
""
]
] |
1304.1504 | Robert Fung | Robert Fung, Kuo-Chu Chang | Weighing and Integrating Evidence for Stochastic Simulation in Bayesian
Networks | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-112-117 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic simulation approaches perform probabilistic inference in Bayesian
networks by estimating the probability of an event based on the frequency that
the event occurs in a set of simulation trials. This paper describes the
evidence weighting mechanism for augmenting the logic sampling stochastic
simulation algorithm [Henrion, 1986]. Evidence weighting modifies the logic
sampling algorithm by weighting each simulation trial by the likelihood of a
network's evidence given the sampled state node values for that trial. We also
describe an enhancement to the basic algorithm which uses the evidential
integration technique [Chin and Cooper, 1987]. A comparison of the basic
evidence weighting mechanism with the Markov blanket algorithm [Pearl, 1987],
the logic sampling algorithm, and the evidence integration algorithm is
presented. The comparison is aided by analyzing the performance of the
algorithms in a simple example network.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:38:05 GMT"
}
] | 1,365,379,200,000 | [
[
"Fung",
"Robert",
""
],
[
"Chang",
"Kuo-Chu",
""
]
] |
1304.1505 | Dan Geiger | Dan Geiger, Tom S. Verma, Judea Pearl | d-Separation: From Theorems to Algorithms | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-118-125 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An efficient algorithm is developed that identifies all independencies
implied by the topology of a Bayesian network. Its correctness and maximality
stems from the soundness and completeness of d-separation with respect to
probability theory. The algorithm runs in time O(|E|), where |E| is the number
of edges in the network.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:38:11 GMT"
}
] | 1,365,379,200,000 | [
[
"Geiger",
"Dan",
""
],
[
"Verma",
"Tom S.",
""
],
[
"Pearl",
"Judea",
""
]
] |
1304.1506 | Maria Angeles Gil | Maria Angeles Gil, Pramod Jain | The Effects of Perfect and Sample Information on Fuzzy Utilities in
Decision-Making | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-126-133 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we first consider a Bayesian framework and model the "utility
function" in terms of fuzzy random variables. On the basis of this model, we
define the "prior (fuzzy) expected utility" associated with each action, and
the corresponding "posterior (fuzzy) expected utility given sample information
from a random experiment". The aim of this paper is to analyze how sample
information can affect the expected utility. In this way, by using some fuzzy
preference relations, we conclude that sample information allows a decision
maker to increase the expected utility on average. The expected utility attains
its upper bound when the decision maker has perfect
information. Applications of this work to the field of artificial intelligence
are presented through two examples.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:38:16 GMT"
}
] | 1,365,379,200,000 | [
[
"Gil",
"Maria Angeles",
""
],
[
"Jain",
"Pramod",
""
]
] |
1304.1507 | Moises Goldszmidt | Moises Goldszmidt, Judea Pearl | Deciding Consistency of Databases Containing Defeasible and Strict
Information | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-134-141 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a norm of consistency for a mixed set of defeasible and strict
sentences, based on a probabilistic semantics. This norm establishes a clear
distinction between knowledge bases depicting exceptions and those containing
outright contradictions. We then define a notion of entailment based also on
probabilistic considerations and provide a characterization of the relation
between consistency and entailment. We derive necessary and sufficient
conditions for consistency, and provide a simple decision procedure for testing
consistency and deciding whether a sentence is entailed by a database. Finally,
it is shown that if all sentences are Horn clauses, consistency and entailment
can be tested in polynomial time.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:38:23 GMT"
}
] | 1,365,379,200,000 | [
[
"Goldszmidt",
"Moises",
""
],
[
"Pearl",
"Judea",
""
]
] |
1304.1508 | Joseph Y. Halpern | Joseph Y. Halpern | The Relationship between Knowledge, Belief and Certainty | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-142-151 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the relation between knowledge and certainty, where a fact is
known if it is true at all worlds an agent considers possible and is certain if
it holds with probability 1. We identify certainty with probabilistic belief.
We show that if we assume one fixed probability assignment, then the logic
KD45, which has been identified as perhaps the most appropriate for belief,
provides a complete axiomatization for reasoning about certainty. Just as an
agent may believe a fact phi although phi is false, he may be certain that a fact
phi is true although phi is false. However, it is easy to see that an agent
can have such false (probabilistic) beliefs only at a set of worlds of
probability 0. If we restrict attention to structures where all worlds have
positive probability, then S5 provides a complete axiomatization. If we
consider a more general setting, where there might be a different probability
assignment at each world, then by placing appropriate conditions on the support
of the probability function (the set of worlds which have non-zero
probability), we can capture many other well-known modal logics, such as T and
S4. Finally, we consider which axioms characterize structures satisfying
Miller's principle.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:38:29 GMT"
}
] | 1,365,379,200,000 | [
[
"Halpern",
"Joseph Y.",
""
]
] |
1304.1509 | Othar Hansson | Othar Hansson, Andy Mayer | Heuristic Search as Evidential Reasoning | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-152-161 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | BPS, the Bayesian Problem Solver, applies probabilistic inference and
decision-theoretic control to flexible, resource-constrained problem-solving.
This paper focuses on the Bayesian inference mechanism in BPS, and contrasts it
with those of traditional heuristic search techniques. By performing sound
inference, BPS can outperform traditional techniques with significantly less
computational effort. Empirical tests on the Eight Puzzle show that after only
a few hundred node expansions, BPS makes better decisions than does the best
existing algorithm after several million node expansions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:38:35 GMT"
}
] | 1,365,379,200,000 | [
[
"Hansson",
"Othar",
""
],
[
"Mayer",
"Andy",
""
]
] |
1304.1510 | David Heckerman | David Heckerman, John S. Breese, Eric J. Horvitz | The Compilation of Decision Models | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-162-173 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce and analyze the problem of the compilation of decision models
from a decision-theoretic perspective. The techniques described allow us to
evaluate various configurations of compiled knowledge given the nature of
evidential relationships in a domain, the utilities associated with alternative
actions, the costs of run-time delays, and the costs of memory. We describe
procedures for selecting a subset of the total observations available to be
incorporated into a compiled situation-action mapping, in the context of a
binary decision with conditional independence of evidence. The methods allow us
to incrementally select the best pieces of evidence to add to the set of
compiled knowledge in an engineering setting. After presenting several
approaches to compilation, we exercise one of the methods to provide insight
into the relationship between the distribution over weights of evidence and the
preferred degree of compilation.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:38:41 GMT"
}
] | 1,365,379,200,000 | [
[
"Heckerman",
"David",
""
],
[
"Breese",
"John S.",
""
],
[
"Horvitz",
"Eric J.",
""
]
] |
1304.1511 | David Heckerman | David Heckerman | A Tractable Inference Algorithm for Diagnosing Multiple Diseases | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-174-181 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine a probabilistic model for the diagnosis of multiple diseases. In
the model, diseases and findings are represented as binary variables. Also,
diseases are marginally independent, features are conditionally independent
given disease instances, and diseases interact to produce findings via a noisy
OR-gate. An algorithm for computing the posterior probability of each disease,
given a set of observed findings, called quickscore, is presented. The time
complexity of the algorithm is O(n m^- 2^(m^+)), where n is the number of diseases, m^+
is the number of positive findings, and m^- is the number of negative findings.
Although the time complexity of quickscore is exponential in the number of
positive findings, the algorithm is useful in practice because the number of
observed positive findings is usually far less than the number of diseases
under consideration. Performance results for quickscore applied to a
probabilistic version of Quick Medical Reference (QMR) are provided.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:38:47 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2022 23:49:18 GMT"
}
] | 1,670,371,200,000 | [
[
"Heckerman",
"David",
""
]
] |
1304.1512 | Eric J. Horvitz | Eric J. Horvitz, Jaap Suermondt, Gregory F. Cooper | Bounded Conditioning: Flexible Inference for Decisions under Scarce
Resources | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-182-193 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a graceful approach to probabilistic inference called bounded
conditioning. Bounded conditioning monotonically refines the bounds on
posterior probabilities in a belief network with computation, and converges on
final probabilities of interest with the allocation of a complete resource
fraction. The approach allows a reasoner to exchange arbitrary quantities of
computational resource for incremental gains in inference quality. As such,
bounded conditioning holds promise as a useful inference technique for
reasoning under the general conditions of uncertain and varying reasoning
resources. The algorithm solves a probabilistic bounding problem in complex
belief networks by breaking the problem into a set of mutually exclusive,
tractable subproblems and ordering their solution by the expected effect that
each subproblem will have on the final answer. We introduce the algorithm,
discuss its characterization, and present its performance on several belief
networks, including a complex model for reasoning about problems in
intensive-care medicine.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:38:53 GMT"
}
] | 1,365,379,200,000 | [
[
"Horvitz",
"Eric J.",
""
],
[
"Suermondt",
"Jaap",
""
],
[
"Cooper",
"Gregory F.",
""
]
] |
1304.1513 | A. C. Kak | A. C. Kak, K. M. Andress, C. Lopez-Abadia, M. S. Carroll, J. R. Lewis | Hierarchical Evidence Accumulation in the Pseiki System and Experiments
in Model-Driven Mobile Robot Navigation | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-194-207 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we will review the process of evidence accumulation in the
PSEIKI system for expectation-driven interpretation of images of 3-D scenes.
Expectations are presented to PSEIKI as a geometrical hierarchy of
abstractions. PSEIKI's job is then to construct abstraction hierarchies in the
perceived image taking cues from the abstraction hierarchies in the
expectations. The Dempster-Shafer formalism is used for associating belief
values with the different possible labels for the constructed abstractions in
the perceived image. This system has been used successfully for autonomous
navigation of a mobile robot in indoor environments.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:38:59 GMT"
}
] | 1,365,379,200,000 | [
[
"Kak",
"A. C.",
""
],
[
"Andress",
"K. M.",
""
],
[
"Lopez-Abadia",
"C.",
""
],
[
"Carroll",
"M. S.",
""
],
[
"Lewis",
"J. R.",
""
]
] |
1304.1514 | Harold P. Lehmann | Harold P. Lehmann | A Decision-Theoretic Model for Using Scientific Data | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-208-215 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many Artificial Intelligence systems depend on the agent's updating its
beliefs about the world on the basis of experience. Experiments constitute one
type of experience, so scientific methodology offers a natural environment for
examining the issues attendant to using this class of evidence. This paper
presents a framework which structures the process of using scientific data from
research reports for the purpose of making decisions, using decision analysis
as the basis for the structure and using medical research as the general
scientific domain. The structure extends the basic influence diagram for
updating belief in an object domain parameter of interest by expanding the
parameter into four parts: those of the patient, the population, the study
sample, and the effective study sample. The structure uses biases to perform
the transformation of one parameter into another, so that, for instance,
selection biases, in concert with the population parameter, yield the study
sample parameter. The influence diagram structure provides decision theoretic
justification for practices of good clinical research such as randomized
assignment and blindfolding of care providers. The model covers most research
designs used in medicine: case-control studies, cohort studies, and controlled
clinical trials, and provides an architecture that clearly separates
statistical knowledge from domain knowledge. The proposed general model can be
the basis for clinical epidemiological advisory systems, when coupled with
heuristic pruning of irrelevant biases; of statistical workstations, when the
computational machinery for calculation of posterior distributions is added;
and of meta-analytic reviews, when multiple studies may impact on a single
population parameter.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:39:05 GMT"
}
] | 1,365,379,200,000 | [
[
"Lehmann",
"Harold P.",
""
]
] |
1304.1515 | Paul E. Lehner | Paul E. Lehner, Theresa M. Mullin, Marvin S. Cohen | When Should a Decision Maker Ignore the Advice of a Decision Aid? | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-216-223 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper argues that the principal difference between decision aids and
most other types of information systems is the greater reliance of decision
aids on fallible algorithms--algorithms that sometimes generate incorrect
advice. It is shown that interactive problem solving with a decision aid that
is based on a fallible algorithm can easily result in aided performance which
is poorer than unaided performance, even if the algorithm, by itself, performs
significantly better than the unaided decision maker. This suggests that unless
certain conditions are satisfied, using a decision aid as an aid is
counterproductive. Some conditions under which a decision aid is best used as
an aid are derived.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:39:11 GMT"
}
] | 1,365,379,200,000 | [
[
"Lehner",
"Paul E.",
""
],
[
"Mullin",
"Theresa M.",
""
],
[
"Cohen",
"Marvin S.",
""
]
] |
1304.1516 | Paul E. Lehner | Paul E. Lehner | Inference Policies | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-224-232 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is suggested that an AI inference system should reflect an inference
policy that is tailored to the domain of problems to which it is applied -- and
furthermore that an inference policy need not conform to any general theory of
rational inference or induction. We note, for instance, that Bayesian reasoning
about the probabilistic characteristics of an inference domain may result in
the specification of a non-Bayesian procedure for reasoning within the
inference domain. In this paper, the idea of an inference policy is explored in
some detail. To support this exploration, the characteristics of some standard
and nonstandard inference policies are examined.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:39:17 GMT"
}
] | 1,365,379,200,000 | [
[
"Lehner",
"Paul E.",
""
]
] |
1304.1518 | Ronald P. Loui | Ronald P. Loui | Defeasible Decisions: What the Proposal is and isn't | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-245-252 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In two recent papers, I have proposed a description of decision analysis that
differs from the Bayesian picture painted by Savage, Jeffrey and other classic
authors. Response to this view has been either overly enthusiastic or unduly
pessimistic. In this paper I try to place the idea in its proper place, which
must be somewhere in between. Looking at decision analysis as defeasible
reasoning produces a framework in which planning and decision theory can be
integrated, but work on the details has barely begun. It also produces a
framework in which the meta-decision regress can be stopped in a reasonable
way, but it does not allow us to ignore meta-level decisions. The heuristics
for producing arguments that I have presented are only supposed to be
suggestive; but they are not open to the egregious errors about which some have
worried. And though the idea is familiar to those who have studied heuristic
search, it is somewhat richer because the control of dialectic is more
interesting than the deepening of search.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:39:29 GMT"
}
] | 1,365,379,200,000 | [
[
"Loui",
"Ronald P.",
""
]
] |
1304.1519 | Mary McLeish | Mary McLeish, P. Yao, M. Cecile, T. Stirtzinger | Experiments Using Belief Functions and Weights of Evidence incorporating
Statistical Data and Expert Opinions | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-253-264 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents some ideas and results of using uncertainty management
methods in the presence of data in preference to other statistical and machine
learning methods. A medical domain is used as a test-bed with data available
from a large hospital database system which collects symptom and outcome
information about patients. Data are often missing and of many variable types, and
sample sizes for particular outcomes are not large. Uncertainty management
methods are useful for such domains and have the added advantage of allowing
for expert modification of belief values originally obtained from data.
Methodological considerations for using belief functions on statistical data
are dealt with in some detail. Expert opinions are incorporated at various
levels of the project development and results are reported on an application to
liver disease diagnosis. Recent results contrasting the use of weights of
evidence and logistic regression on another medical domain are also presented.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:39:35 GMT"
}
] | 1,365,379,200,000 | [
[
"McLeish",
"Mary",
""
],
[
"Yao",
"P.",
""
],
[
"Cecile",
"M.",
""
],
[
"Stirtzinger",
"T.",
""
]
] |
1304.1520 | W. R. Moninger | W. R. Moninger, J. A. Flueck, C. Lusk, W. F. Roberts | Shootout-89: A Comparative Evaluation of Knowledge-based Systems that
Forecast Severe Weather | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-265-271 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During the summer of 1989, the Forecast Systems Laboratory of the National
Oceanic and Atmospheric Administration sponsored an evaluation of artificial
intelligence-based systems that forecast severe convective storms. The
evaluation experiment, called Shootout-89, took place in Boulder, and focussed
on storms over the northeastern Colorado foothills and plains (Moninger, et
al., 1990). Six systems participated in Shootout-89. These included traditional
expert systems, an analogy-based system, and a system developed using methods
from the cognitive science/judgment analysis tradition. Each day of the
exercise, the systems generated 2 to 9 hour forecasts of the probabilities of
occurrence of: non significant weather, significant weather, and severe
weather, in each of four regions in northeastern Colorado. A verification
coordinator working at the Denver Weather Service Forecast Office gathered
ground-truth data from a network of observers. Systems were evaluated on the
basis of several measures of forecast skill, and on other metrics such as
timeliness, ease of learning, and ease of use. Systems were generally easy to
operate, however the various systems required substantially different levels of
meteorological expertise on the part of their users--reflecting the various
operational environments for which the systems had been designed. Systems
varied in their statistical behavior, but on this difficult forecast problem,
the systems generally showed a skill approximately equal to that of persistence
forecasts and climatological (historical frequency) forecasts. The two systems
that appeared best able to discriminate significant from non significant
weather events were traditional expert systems. Both of these systems required
the operator to make relatively sophisticated meteorological judgments. We are
unable, based on only one summer's worth of data, to determine the extent to
which the greater skill of the two systems was due to the content of their
knowledge bases, or to the subjective judgments of the operator. A follow-on
experiment, Shootout-91, is currently being planned. Interested potential
participants are encouraged to contact the author at the address above.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:39:40 GMT"
}
] | 1,365,379,200,000 | [
[
"Moninger",
"W. R.",
""
],
[
"Flueck",
"J. A.",
""
],
[
"Lusk",
"C.",
""
],
[
"Roberts",
"W. F.",
""
]
] |
1304.1521 | Eric Neufeld | Eric Neufeld, J. D. Horton | Conditioning on Disjunctive Knowledge: Defaults and Probabilities | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-272-278 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many writers have observed that default logics appear to contain the "lottery
paradox" of probability theory. This arises when a default "proof by
contradiction" lets us conclude that a typical X is not a Y where Y is an
unusual subclass of X. We show that there is a similar problem with default
"proof by cases" and construct a setting where we might draw a different
conclusion knowing a disjunction than we would knowing any particular disjunct.
Though Reiter's original formalism is capable of representing this distinction,
other approaches are not. To represent and reason about this case, default
logicians must specify how a "typical" individual is selected. The problem is
closely related to Simpson's paradox of probability theory. If we accept a
simple probabilistic account of defaults based on the notion that one
proposition may favour or increase belief in another, the "multiple extension
problem" for both conjunctive and disjunctive knowledge vanishes.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:39:46 GMT"
}
] | 1,365,379,200,000 | [
[
"Neufeld",
"Eric",
""
],
[
"Horton",
"J. D.",
""
]
] |
1304.1522 | Michael Pittarelli | Michael Pittarelli | Maximum Uncertainty Procedures for Interval-Valued Probability
Distributions | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-279-286 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Measures of uncertainty and divergence are introduced for interval-valued
probability distributions and are shown to have desirable mathematical
properties. A maximum uncertainty inference procedure for marginal interval
distributions is presented. A technique for reconstruction of interval
distributions from projections is developed based on this inference procedure.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:39:51 GMT"
}
] | 1,365,379,200,000 | [
[
"Pittarelli",
"Michael",
""
]
] |
1304.1523 | Gregory M. Provan | Gregory M. Provan | A Logical Interpretation of Dempster-Shafer Theory, with Application to
Visual Recognition | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-287-294 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We formulate Dempster Shafer Belief functions in terms of Propositional Logic
using the implicit notion of provability underlying Dempster Shafer Theory.
Given a set of propositional clauses, assigning weights to certain
propositional literals enables the Belief functions to be explicitly computed
using Network Reliability techniques. Also, the logical procedure corresponding
to updating Belief functions using Dempster's Rule of Combination is shown.
This analysis formalizes the implementation of Belief functions within an
Assumption-based Truth Maintenance System (ATMS). We describe the extension of
an ATMS-based visual recognition system, VICTORS, with this logical formulation
of Dempster Shafer theory. Without Dempster Shafer theory, VICTORS computes all
possible visual interpretations (i.e. all logical models) without determining
the best interpretation(s). Incorporating Dempster Shafer theory enables
optimal visual interpretations to be computed and a logical semantics to be
maintained.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:39:57 GMT"
}
] | 1,365,379,200,000 | [
[
"Provan",
"Gregory M.",
""
]
] |
1304.1524 | Peter Sember | Peter Sember, Ingrid Zukerman | Strategies for Generating Micro Explanations for Bayesian Belief
Networks | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-295-302 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian Belief Networks have been largely overlooked by Expert Systems
practitioners on the grounds that they do not correspond to the human inference
mechanism. In this paper, we introduce an explanation mechanism designed to
generate intuitive yet probabilistically sound explanations of inferences drawn
by a Bayesian Belief Network. In particular, our mechanism accounts for the
results obtained due to changes in the causal and the evidential support of a
node.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:40:04 GMT"
}
] | 1,365,379,200,000 | [
[
"Sember",
"Peter",
""
],
[
"Zukerman",
"Ingrid",
""
]
] |
1304.1525 | Ross D. Shachter | Ross D. Shachter | Evidence Absorption and Propagation through Evidence Reversals | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-303-310 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The arc reversal/node reduction approach to probabilistic inference is
extended to include the case of instantiated evidence by an operation called
"evidence reversal." This not only provides a technique for computing posterior
joint distributions on general belief networks, but also provides insight into
the methods of Pearl [1986b] and Lauritzen and Spiegelhalter [1988]. Although
it is well understood that the latter two algorithms are closely related, in
fact all three algorithms are identical whenever the belief network is a
forest.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:40:10 GMT"
}
] | 1,365,379,200,000 | [
[
"Shachter",
"Ross D.",
""
]
] |
1304.1526 | Ross D. Shachter | Ross D. Shachter, Mark Alan Peot | Simulation Approaches to General Probabilistic Inference on Belief
Networks | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-311-318 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of algorithms have been developed to solve probabilistic inference
problems on belief networks. These algorithms can be divided into two main
groups: exact techniques which exploit the conditional independence revealed
when the graph structure is relatively sparse, and probabilistic sampling
techniques which exploit the "conductance" of an embedded Markov chain when the
conditional probabilities have non-extreme values. In this paper, we
investigate a family of "forward" Monte Carlo sampling techniques similar to
Logic Sampling [Henrion, 1988] which appear to perform well even in some
multiply connected networks with extreme conditional probabilities, and thus
would be generally applicable. We consider several enhancements which reduce
the posterior variance using this approach and propose a framework and criteria
for choosing when to use those enhancements.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:40:16 GMT"
}
] | 1,365,379,200,000 | [
[
"Shachter",
"Ross D.",
""
],
[
"Peot",
"Mark Alan",
""
]
] |
1304.1527 | Philippe Smets | Philippe Smets | Decision under Uncertainty | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-319-326 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We derive axiomatically the probability function that should be used to make
decisions given any form of underlying uncertainty.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:40:21 GMT"
}
] | 1,365,379,200,000 | [
[
"Smets",
"Philippe",
""
]
] |
1304.1528 | Michael Smithson | Michael Smithson | Freedom: A Measure of Second-order Uncertainty for Intervalic
Probability Schemes | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-327-334 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses a new measure that is adaptable to certain intervalic
probability frameworks, possibility theory, and belief theory. As such, it has
the potential for wide use in knowledge engineering, expert systems, and
related problems in the human sciences. This measure (denoted here by F) has
been introduced in Smithson (1988) and is more formally discussed in Smithson
(1989a). Here, I propose to outline the conceptual basis for F and compare its
properties with other measures of second-order uncertainty. I will argue that F
is an indicator of nonspecificity or alternatively, of freedom, as
distinguished from either ambiguity or vagueness.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:40:27 GMT"
}
] | 1,365,379,200,000 | [
[
"Smithson",
"Michael",
""
]
] |
1304.1529 | David J. Spiegelhalter | David J. Spiegelhalter, Rodney C. Franklin, Kate Bull | Assessment, Criticism and Improvement of Imprecise Subjective
Probabilities for a Medical Expert System | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-335-342 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Three paediatric cardiologists assessed nearly 1000 imprecise subjective
conditional probabilities for a simple belief network representing congenital
heart disease, and the quality of the assessments has been measured using
prospective data on 200 babies. Quality has been assessed by a Brier scoring
rule, which decomposes into terms measuring lack of discrimination and
reliability. The results are displayed for each of 27 diseases and 24
questions, and generally the assessments are reliable although there was a
tendency for the probabilities to be too extreme. The imprecision allows the
judgements to be converted to implicit samples, and by combining with the
observed data the probabilities naturally adapt with experience. This appears
to be a practical procedure even for reasonably large expert systems.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:40:33 GMT"
}
] | 1,365,379,200,000 | [
[
"Spiegelhalter",
"David J.",
""
],
[
"Franklin",
"Rodney C.",
""
],
[
"Bull",
"Kate",
""
]
] |
1304.1530 | Sampath Srinivas | Sampath Srinivas, Stuart Russell, Alice M. Agogino | Automated Construction of Sparse Bayesian Networks from Unstructured
Probabilistic Models and Domain Information | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-343-350 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An algorithm for automated construction of a sparse Bayesian network given an
unstructured probabilistic model and causal domain information from an expert
has been developed and implemented. The goal is to obtain a network that
explicitly reveals as much information regarding conditional independence as
possible. The network is built incrementally adding one node at a time. The
expert's information and a greedy heuristic that tries to keep the number of
arcs added at each step to a minimum are used to guide the search for the next
node to add. The probabilistic model is a predicate that can answer queries
about independencies in the domain. In practice the model can be implemented in
various ways. For example, the model could be a statistical independence test
operating on empirical data or a deductive prover operating on a set of
independence statements about the domain.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:40:38 GMT"
}
] | 1,365,379,200,000 | [
[
"Srinivas",
"Sampath",
""
],
[
"Russell",
"Stuart",
""
],
[
"Agogino",
"Alice M.",
""
]
] |
1304.1531 | Thomas M. Strat | Thomas M. Strat | Making Decisions with Belief Functions | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-351-360 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A primary motivation for reasoning under uncertainty is to derive decisions
in the face of inconclusive evidence. However, Shafer's theory of belief
functions, which explicitly represents the underconstrained nature of many
reasoning problems, lacks a formal procedure for making decisions. Clearly,
when sufficient information is not available, no theory can prescribe actions
without making additional assumptions. Faced with this situation, some
assumption must be made if a clearly superior choice is to emerge. In this
paper we offer a probabilistic interpretation of a simple assumption that
disambiguates decision problems represented with belief functions. We prove
that it yields expected values identical to those obtained by a probabilistic
analysis that makes the same assumption. In addition, we show how the decision
analysis methodology frequently employed in probabilistic reasoning can be
extended for use with belief functions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:40:45 GMT"
}
] | 1,365,379,200,000 | [
[
"Strat",
"Thomas M.",
""
]
] |
1304.1532 | Michael J. Swain | Michael J. Swain, Lambert E. Wixson, Paul B. Chou | Efficient Parallel Estimation for Markov Random Fields | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-361-368 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new, deterministic, distributed MAP estimation algorithm for
Markov Random Fields called Local Highest Confidence First (Local HCF). The
algorithm has been applied to segmentation problems in computer vision and its
performance compared with stochastic algorithms. The experiments show that
Local HCF finds better estimates than stochastic algorithms with much less
computation.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:40:50 GMT"
}
] | 1,365,379,200,000 | [
[
"Swain",
"Michael J.",
""
],
[
"Wixson",
"Lambert E.",
""
],
[
"Chou",
"Paul B.",
""
]
] |
1304.1533 | David S. Vaughan | David S. Vaughan, Bruce M. Perrin, Robert M. Yadrick | Comparing Expert Systems Built Using Different Uncertain Inference
Systems | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-369-376 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study compares the inherent intuitiveness or usability of the most
prominent methods for managing uncertainty in expert systems, including those
of EMYCIN, PROSPECTOR, Dempster-Shafer theory, fuzzy set theory, simplified
probability theory (assuming marginal independence), and linear regression
using probability estimates. Participants in the study gained experience in a
simple, hypothetical problem domain through a series of learning trials. They
were then randomly assigned to develop an expert system using one of the six
Uncertain Inference Systems (UISs) listed above. Performance of the resulting
systems was then compared. The results indicate that the systems based on the
PROSPECTOR and EMYCIN models were significantly less accurate for certain types
of problems compared to systems based on the other UISs. Possible reasons for
these differences are discussed.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:40:56 GMT"
}
] | 1,365,379,200,000 | [
[
"Vaughan",
"David S.",
""
],
[
"Perrin",
"Bruce M.",
""
],
[
"Yadrick",
"Robert M.",
""
]
] |
1304.1534 | Wilson X. Wen | Wilson X. Wen | Directed Cycles in Belief Networks | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-377-384 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The most difficult task in probabilistic reasoning may be handling directed
cycles in belief networks. To the best knowledge of this author, there is no
serious discussion of this problem at all in the literature of probabilistic
reasoning so far.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:41:02 GMT"
}
] | 1,365,379,200,000 | [
[
"Wen",
"Wilson X.",
""
]
] |
1304.1535 | Yang Xiang | Yang Xiang, Michael P. Beddoes, David L Poole | Can Uncertainty Management be Realized in a Finite Totally Ordered
Probability Algebra? | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-385-393 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, the feasibility of using finite totally ordered probability
models under Aleliunas's Theory of Probabilistic Logic [Aleliunas, 1988] is
investigated. The general form of the probability algebra of these models is
derived and the number of possible algebras with given size is deduced. Based
on this analysis, we discuss problems of denominator-indifference and
ambiguity-generation that arise in reasoning by cases and abductive reasoning.
An example is given that illustrates how these problems arise. The
investigation shows that a finite probability model may be of very limited
usage.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:41:07 GMT"
}
] | 1,365,379,200,000 | [
[
"Xiang",
"Yang",
""
],
[
"Beddoes",
"Michael P.",
""
],
[
"Poole",
"David L",
""
]
] |
1304.1536 | Ronald R. Yager | Ronald R. Yager | Normalization and the Representation of Nonmonotonic Knowledge in the
Theory of Evidence | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-394-403 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss the Dempster-Shafer theory of evidence. We introduce a concept of
monotonicity which is related to the diminution of the range between belief and
plausibility. We show that the accumulation of knowledge in this framework
exhibits a nonmonotonic property. We show how the belief structure can be used
to represent typical or commonsense knowledge.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:41:14 GMT"
}
] | 1,365,379,200,000 | [
[
"Yager",
"Ronald R.",
""
]
] |
1304.1684 | Emad Saad | Emad Saad | Probability Aggregates in Probability Answer Set Programming | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Probability answer set programming is a declarative programming that has been
shown effective for representing and reasoning about a variety of probability
reasoning tasks. However, the lack of probability aggregates, e.g. {\em
expected values}, in the language of disjunctive hybrid probability logic
programs (DHPP) disallows the natural and concise representation of many
interesting problems. In this paper, we extend DHPP to allow arbitrary
probability aggregates. We introduce two types of probability aggregates; a
type that computes the expected value of a classical aggregate, e.g., the
expected value of the minimum, and a type that computes the probability of a
classical aggregate, e.g, the probability of sum of values. In addition, we
define a probability answer set semantics for DHPP with arbitrary probability
aggregates including monotone, antimonotone, and nonmonotone probability
aggregates. We show that the proposed probability answer set semantics of DHPP
subsumes both the original probability answer set semantics of DHPP and the
classical answer set semantics of classical disjunctive logic programs with
classical aggregates, and consequently subsumes the classical answer set
semantics of the original disjunctive logic programs. We show that the proposed
probability answer sets of DHPP with probability aggregates are minimal
probability models and hence incomparable, which is an important property for
nonmonotonic probability reasoning.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2013 11:39:31 GMT"
}
] | 1,365,379,200,000 | [
[
"Saad",
"Emad",
""
]
] |
1304.1819 | Pierre Lison | Pierre Lison | Model-based Bayesian Reinforcement Learning for Dialogue Management | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning methods are increasingly used to optimise dialogue
policies from experience. Most current techniques are model-free: they directly
estimate the utility of various actions, without explicit model of the
interaction dynamics. In this paper, we investigate an alternative strategy
grounded in model-based Bayesian reinforcement learning. Bayesian inference is
used to maintain a posterior distribution over the model parameters, reflecting
the model uncertainty. This parameter distribution is gradually refined as more
data is collected and simultaneously used to plan the agent's actions. Within
this learning framework, we carried out experiments with two alternative
formalisations of the transition model, one encoded with standard multinomial
distributions, and one structured with probabilistic rules. We demonstrate the
potential of our approach with empirical results on a user simulator
constructed from Wizard-of-Oz data in a human-robot interaction scenario. The
results illustrate in particular the benefits of capturing prior domain
knowledge with high-level rules.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2013 20:47:02 GMT"
}
] | 1,365,465,600,000 | [
[
"Lison",
"Pierre",
""
]
] |
1304.1827 | Emad Saad | Emad Saad | Fuzzy Aggregates in Fuzzy Answer Set Programming | arXiv admin note: substantial text overlap with arXiv:1304.1684 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Fuzzy answer set programming is a declarative framework for representing and
reasoning about knowledge in fuzzy environments. However, the unavailability of
fuzzy aggregates in disjunctive fuzzy logic programs, DFLP, with fuzzy answer
set semantics prohibits the natural and concise representation of many
interesting problems. In this paper, we extend DFLP to allow arbitrary fuzzy
aggregates. We define fuzzy answer set semantics for DFLP with arbitrary fuzzy
aggregates including monotone, antimonotone, and nonmonotone fuzzy aggregates.
We show that the proposed fuzzy answer set semantics subsumes both the original
fuzzy answer set semantics of DFLP and the classical answer set semantics of
classical disjunctive logic programs with classical aggregates, and
consequently subsumes the classical answer set semantics of classical
disjunctive logic programs. We show that the proposed fuzzy answer sets of DFLP
with fuzzy aggregates are minimal fuzzy models and hence incomparable, which is
an important property for nonmonotonic fuzzy reasoning.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2013 21:47:16 GMT"
}
] | 1,365,465,600,000 | [
[
"Saad",
"Emad",
""
]
] |
1304.2339 | John Mark Agosta | John Mark Agosta | The structure of Bayes nets for vision recognition | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-1-7 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is part of a study whose goal is to show the efficiency of using
Bayes networks to carry out model based vision calculations. [Binford et al.
1987] Recognition proceeds by drawing up a network model from the object's
geometric and functional description that predicts the appearance of an object.
Then this network is used to find the object within a photographic image. Many
existing and proposed techniques for vision recognition resemble the
uncertainty calculations of a Bayes net. In contrast, though, they lack a
derivation from first principles, and tend to rely on arbitrary parameters that
we hope to avoid by a network model. The connectedness of the network depends
on what independence considerations can be identified in the vision problem.
Greater independence leads to easier calculations, at the expense of the net's
expressiveness. Once this trade-off is made and the structure of the network is
determined, it should be possible to tailor a solution technique for it. This
paper explores the use of a network with multiply connected paths, drawing on
both techniques of belief networks [Pearl 86] and influence diagrams. We then
demonstrate how one formulation of a multiply connected network can be solved.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:41:36 GMT"
}
] | 1,365,552,000,000 | [
[
"Agosta",
"John Mark",
""
]
] |
1304.2340 | Romas Aleliunas | Romas Aleliunas | Summary of A New Normative Theory of Probabilistic Logic | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-8-14 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By probabilistic logic I mean a normative theory of belief that explains how
a body of evidence affects one's degree of belief in a possible hypothesis. A
new axiomatization of such a theory is presented which avoids a finite
additivity axiom, yet which retains many useful inference rules. Many of the
example models of this theory do not use numerical probabilities. Put
another way, this article gives sharper answers to the two questions: 1. What
kinds of sets can be used as the range of a probability function? 2. Under what
conditions is the range set of a probability function isomorphic to the set of
real numbers in the interval [0,1] with the usual arithmetical operations?
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:41:42 GMT"
}
] | 1,365,552,000,000 | [
[
"Aleliunas",
"Romas",
""
]
] |
1304.2341 | Fahiem Bacchus | Fahiem Bacchus | Probability Distributions Over Possible Worlds | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-15-21 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Probabilistic Logic Nilsson uses the device of a probability distribution
over a set of possible worlds to assign probabilities to the sentences of a
logical language. In his paper Nilsson concentrated on inference and associated
computational issues. This paper, on the other hand, examines the probabilistic
semantics in more detail, particularly for the case of first-order languages,
and attempts to explain some of the features and limitations of this form of
probability logic. It is pointed out that the device of assigning probabilities
to logical sentences has certain expressive limitations. In particular,
statistical assertions are not easily expressed by such a device. This leads to
certain difficulties with attempts to give probabilistic semantics to default
reasoning using probabilities assigned to logical sentences.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:41:48 GMT"
}
] | 1,365,552,000,000 | [
[
"Bacchus",
"Fahiem",
""
]
] |
1304.2342 | Paul K. Black | Paul K. Black, Kathryn Blackmond Laskey | Hierarchical Evidence and Belief Functions | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-22-29 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dempster/Shafer (D/S) theory has been advocated as a way of representing
incompleteness of evidence in a system's knowledge base. Methods now exist for
propagating beliefs through chains of inference. This paper discusses how rules
with attached beliefs, a common representation for knowledge in automated
reasoning systems, can be transformed into the joint belief functions required
by propagation algorithms. A rule is taken as defining a conditional belief
function on the consequent given the antecedents. It is demonstrated by example
that different joint belief functions may be consistent with a given set of
rules. Moreover, different representations of the same rules may yield
different beliefs on the consequent hypotheses.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:41:53 GMT"
}
] | 1,365,552,000,000 | [
[
"Black",
"Paul K.",
""
],
[
"Laskey",
"Kathryn Blackmond",
""
]
] |
1304.2343 | John S. Breese | John S. Breese, Michael R. Fehling | Decision-Theoretic Control of Problem Solving: Principles and
Architecture | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-30-37 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an approach to the design of autonomous, real-time
systems operating in uncertain environments. We address issues of problem
solving and reflective control of reasoning under uncertainty in terms of two
fundamental elements: 1) a set of decision-theoretic models for selecting among
alternative problem-solving methods and 2) a general computational architecture
for resource-bounded problem solving. The decision-theoretic models provide a
set of principles for choosing among alternative problem-solving methods based
on their relative costs and benefits, where benefits are characterized in terms
of the value of information provided by the output of a reasoning activity. The
output may be an estimate of some uncertain quantity or a recommendation for
action. The computational architecture, called Schemer-II, provides for
interleaving of and communication among various problem-solving subsystems.
These subsystems provide alternative approaches to information gathering,
belief refinement, solution construction, and solution execution. In
particular, the architecture provides a mechanism for interrupting the
subsystems in response to critical events. We provide a decision theoretic
account for scheduling problem-solving elements and for critical-event-driven
interruption of activities in an architecture such as Schemer-II.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:41:57 GMT"
}
] | 1,365,552,000,000 | [
[
"Breese",
"John S.",
""
],
[
"Fehling",
"Michael R.",
""
]
] |
1304.2344 | M. Cecile | M. Cecile, Mary McLeish, P. Pascoe, W. Taylor | Induction and Uncertainty Management Techniques Applied to Veterinary
Medical Diagnosis | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-38-48 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses a project undertaken between the Departments of
Computing Science, Statistics, and the College of Veterinary Medicine to design
a medical diagnostic system. On-line medical data has been collected in the
hospital database system for several years. A number of induction methods are
being used to extract knowledge from the data in an attempt to improve upon
simple diagnostic charts used by the clinicians. They also enhance the results
of classical statistical methods - finding many more significant variables. The
second part of the paper describes an essentially Bayesian method of evidence
combination using fuzzy events at an initial step. Results are presented and
comparisons are made with other methods.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:42:03 GMT"
}
] | 1,365,552,000,000 | [
[
"Cecile",
"M.",
""
],
[
"McLeish",
"Mary",
""
],
[
"Pascoe",
"P.",
""
],
[
"Taylor",
"W.",
""
]
] |
1304.2345 | R. Martin Chavez | R. Martin Chavez, Gregory F. Cooper | KNET: Integrating Hypermedia and Bayesian Modeling | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-49-54 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | KNET is a general-purpose shell for constructing expert systems based on
belief networks and decision networks. Such networks serve as graphical
representations for decision models, in which the knowledge engineer must
define clearly the alternatives, states, preferences, and relationships that
constitute a decision basis. KNET contains a knowledge-engineering core written
in Object Pascal and an interface that tightly integrates HyperCard, a
hypertext authoring tool for the Apple Macintosh computer, into a novel
expert-system architecture. Hypertext and hypermedia have become increasingly
important in the storage, management, and retrieval of information. In broad
terms, hypermedia deliver heterogeneous bits of information in dynamic,
extensively cross-referenced packages. The resulting KNET system features a
coherent probabilistic scheme for managing uncertainty, an object-oriented
graphics editor for drawing and manipulating decision networks, and HyperCard's
potential for quickly constructing flexible and friendly user interfaces. We
envision KNET as a useful prototyping tool for our ongoing research on a
variety of Bayesian reasoning problems, including tractable representation,
inference, and explanation.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:42:09 GMT"
}
] | 1,365,552,000,000 | [
[
"Chavez",
"R. Martin",
""
],
[
"Cooper",
"Gregory F.",
""
]
] |
1304.2346 | Gregory F. Cooper | Gregory F. Cooper | A Method for Using Belief Networks as Influence Diagrams | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-55-63 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper demonstrates a method for using belief-network algorithms to solve
influence diagram problems. In particular, both exact and approximation
belief-network algorithms may be applied to solve influence-diagram problems.
More generally, knowing the relationship between belief-network and
influence-diagram problems may be useful in the design and development of more
efficient influence diagram algorithms.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:42:15 GMT"
}
] | 1,365,552,000,000 | [
[
"Cooper",
"Gregory F.",
""
]
] |
1304.2347 | Bruce D'Ambrosio | Bruce D'Ambrosio | Process, Structure, and Modularity in Reasoning with Uncertainty | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-64-72 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational mechanisms for uncertainty management must support interactive
and incremental problem formulation, inference, hypothesis testing, and
decision making. However, most current uncertainty inference systems
concentrate primarily on inference, and provide no support for the larger
issues. We present a computational approach to uncertainty management which
provides direct support for the dynamic, incremental aspect of this task, while
at the same time permitting direct representation of the structure of
evidential relationships. At the same time, we show that this approach responds
to the modularity concerns of Heckerman and Horvitz [Heck87]. This paper
emphasizes examples of the capabilities of this approach. Another paper
[D'Am89] details the representations and algorithms involved.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:42:20 GMT"
}
] | 1,365,552,000,000 | [
[
"D'Ambrosio",
"Bruce",
""
]
] |
1304.2348 | Thomas L. Dean | Thomas L. Dean, Keiji Kanazawa | Probabilistic Causal Reasoning | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-73-80 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the future is an important component of decision making. In most
situations, however, there is not enough information to make accurate
predictions. In this paper, we develop a theory of causal reasoning for
predictive inference under uncertainty. We emphasize a common type of
prediction that involves reasoning about persistence: whether or not a
proposition once made true remains true at some later time. We provide a
decision procedure with a polynomial-time algorithm for determining the
probability of the possible consequences of a set of events and initial
conditions. The integration of simple probability theory with temporal
projection enables us to circumvent problems that nonmonotonic temporal
reasoning schemes have in dealing with persistence. The ideas in this paper
have been implemented in a prototype system that refines a database of causal
rules in the course of applying those rules to construct and carry out plans in
a manufacturing domain.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:42:26 GMT"
}
] | 1,365,552,000,000 | [
[
"Dean",
"Thomas L.",
""
],
[
"Kanazawa",
"Keiji",
""
]
] |
1304.2349 | Didier Dubois | Didier Dubois, Henri Prade | Modeling uncertain and vague knowledge in possibility and evidence
theories | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-81-89 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper advocates the usefulness of new theories of uncertainty for the
purpose of modeling some facets of uncertain knowledge, especially vagueness,
in AI. It can be viewed as a partial reply to Cheeseman's (among others)
defense of probability.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:42:31 GMT"
}
] | 1,365,552,000,000 | [
[
"Dubois",
"Didier",
""
],
[
"Prade",
"Henri",
""
]
] |
1304.2350 | Soumitra Dutta | Soumitra Dutta | A Temporal Logic for Uncertain Events and An Outline of A Possible
Implementation in An Extension of PROLOG | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-90-97 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is uncertainty associated with the occurrence of many events in real
life. In this paper we develop a temporal logic to deal with such uncertain
events and outline a possible implementation in an extension of PROLOG. Events
are represented as fuzzy sets with the membership function giving the
possibility of occurrence of the event in a given interval of time. The
developed temporal logic is simple but powerful. It can determine effectively
the various temporal relations between uncertain events or their combinations.
PROLOG provides a uniform substrate on which to effectively implement such a
temporal logic for uncertain events.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:42:37 GMT"
}
] | 1,365,552,000,000 | [
[
"Dutta",
"Soumitra",
""
]
] |
1304.2351 | Christoph F. Eick | Christoph F. Eick | Uncertainty Management for Fuzzy Decision Support Systems | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-98-108 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new approach for uncertainty management for fuzzy, rule based decision
support systems is proposed: The domain expert's knowledge is expressed by a
set of rules that frequently refer to vague and uncertain propositions. The
certainty of propositions is represented using intervals [a, b] expressing that
the proposition's probability is at least a and at most b. Methods and
techniques for computing the overall certainty of fuzzy compound propositions
that have been defined by using logical connectives 'and', 'or' and 'not' are
introduced. Different inference schemas for applying fuzzy rules by using modus
ponens are discussed. Different algorithms for combining evidence that has been
received from different rules for the same proposition are provided. The
relationship of the approach to other approaches is analyzed and its problems
of knowledge acquisition and knowledge representation are discussed in some
detail. The basic concepts of a rule-based programming language called PICASSO,
for which the approach is a theoretical foundation, are outlined.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:42:43 GMT"
}
] | 1,365,552,000,000 | [
[
"Eick",
"Christoph F.",
""
]
] |
1304.2352 | Alan M. Frisch | Alan M. Frisch, Peter Haddawy | Probability as a Modal Operator | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-109-118 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper argues for a modal view of probability. The syntax and semantics
of one particularly strong probability logic are discussed and some examples of
the use of the logic are provided. We show that it is both natural and useful
to think of probability as a modal operator. Contrary to popular belief in AI,
a probability ranging between 0 and 1 represents a continuum between
impossibility and necessity, not between simple falsity and truth. The present
work provides a clear semantics for quantification into the scope of the
probability operator and for higher-order probabilities. Probability logic is a
language for expressing both probabilistic and logical concepts.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:42:49 GMT"
}
] | 1,365,552,000,000 | [
[
"Frisch",
"Alan M.",
""
],
[
"Haddawy",
"Peter",
""
]
] |
1304.2353 | Li-Min Fu | Li-Min Fu | Truth Maintenance Under Uncertainty | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-119-126 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of resolving errors under uncertainty in a
rule-based system. A new approach has been developed that reformulates this
problem as a neural-network learning problem. The strength and the fundamental
limitations of this approach are explored and discussed. The main result is
that neural heuristics can be applied to solve some but not all problems in
rule-based systems.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:42:55 GMT"
}
] | 1,365,552,000,000 | [
[
"Fu",
"Li-Min",
""
]
] |
1304.2354 | Stephen I. Gallant | Stephen I. Gallant | Bayesian Assessment of a Connectionist Model for Fault Detection | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-127-135 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A previous paper [2] showed how to generate a linear discriminant network
(LDN) that computes likely faults for a noisy fault detection problem by using
a modification of the perceptron learning algorithm called the pocket
algorithm. Here we compare the performance of this connectionist model with
performance of the optimal Bayesian decision rule for the example that was
previously described. We find that for this particular problem the
connectionist model performs about 97% as well as the optimal Bayesian
procedure. We then define a more general class of noisy single-pattern boolean
(NSB) fault detection problems where each fault corresponds to a single
pattern of boolean instrument readings and instruments are independently
noisy. This is equivalent to specifying that instrument readings are
probabilistic but conditionally independent given any particular fault. We
prove:
1. The optimal Bayesian decision rule for every NSB fault detection problem
is representable by an LDN containing no intermediate nodes. (This slightly
extends a result first published by Minsky & Selfridge.) 2. Given an NSB fault
detection problem, then with arbitrarily high probability after sufficient
iterations the pocket algorithm will generate an LDN that computes an optimal
Bayesian decision rule for that problem. In practice we find that a reasonable
number of iterations of the pocket algorithm produces a network with good, but
not optimal, performance.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:43:01 GMT"
}
] | 1,365,552,000,000 | [
[
"Gallant",
"Stephen I.",
""
]
] |
1304.2355 | Dan Geiger | Dan Geiger, Judea Pearl | On the Logic of Causal Models | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-136-147 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the role of Directed Acyclic Graphs (DAGs) as a
representation of conditional independence relationships. We show that DAGs
offer polynomially sound and complete inference mechanisms for inferring
conditional independence relationships from a given causal set of such
relationships. As a consequence, d-separation, a graphical criterion for
identifying independencies in a DAG, is shown to uncover more valid
independencies than any other criterion. In addition, we employ the Armstrong
property of conditional independence to show that the dependence relationships
displayed by a DAG are inherently consistent, i.e. for every DAG D there exists
some probability distribution P that embodies all the conditional
independencies displayed in D and none other.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:43:07 GMT"
}
] | 1,365,552,000,000 | [
[
"Geiger",
"Dan",
""
],
[
"Pearl",
"Judea",
""
]
] |
1304.2356 | Othar Hansson | Othar Hansson, Andy Mayer | The Optimality of Satisficing Solutions | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-148-157 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses a prevailing assumption in single-agent heuristic search
theory- that problem-solving algorithms should guarantee shortest-path
solutions, which are typically called optimal. Optimality implies a metric for
judging solution quality, where the optimal solution is the solution with the
highest quality. When path-length is the metric, we will distinguish such
solutions as p-optimal.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:43:12 GMT"
}
] | 1,365,552,000,000 | [
[
"Hansson",
"Othar",
""
],
[
"Mayer",
"Andy",
""
]
] |
1304.2357 | David Heckerman | David Heckerman | An Empirical Comparison of Three Inference Methods | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988). LaTex errors corrected in this version | null | null | UAI-P-1988-PG-158-169 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, an empirical evaluation of three inference methods for
uncertain reasoning is presented in the context of Pathfinder, a large expert
system for the diagnosis of lymph-node pathology. The inference procedures
evaluated are (1) Bayes' theorem, assuming evidence is conditionally
independent given each hypothesis; (2) odds-likelihood updating, assuming
evidence is conditionally independent given each hypothesis and given the
negation of each hypothesis; and (3) an inference method related to the
Dempster-Shafer theory of belief. Both expert-rating and decision-theoretic
metrics are used to compare the diagnostic accuracy of the inference methods.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:43:18 GMT"
},
{
"version": "v2",
"created": "Sun, 17 May 2015 00:00:15 GMT"
},
{
"version": "v3",
"created": "Tue, 24 Jan 2023 21:10:19 GMT"
}
] | 1,674,691,200,000 | [
[
"Heckerman",
"David",
""
]
] |
1304.2358 | Daniel Hunter | Daniel Hunter | Parallel Belief Revision | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-170-177 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a formal system of belief revision developed by Wolfgang
Spohn and shows that this system has a parallel implementation that can be
derived from an influence diagram in a manner similar to that in which Bayesian
networks are derived. The proof rests upon completeness results for an
axiomatization of the notion of conditional independence, with the Spohn system
being used as a semantics for the relation of conditional independence.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:43:24 GMT"
}
] | 1,365,552,000,000 | [
[
"Hunter",
"Daniel",
""
]
] |
1304.2359 | Pramod Jain | Pramod Jain, Alice M. Agogino | Stochastic Sensitivity Analysis Using Fuzzy Influence Diagrams | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-178-188 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The practice of stochastic sensitivity analysis described in the decision
analysis literature is a testimonial to the need for considering deviations
from precise point estimates of uncertainty. We propose the use of Bayesian
fuzzy probabilities within an influence diagram computational scheme for
performing sensitivity analysis during the solution of probabilistic inference
and decision problems. Unlike other parametric approaches, the proposed scheme
does not require resolving the problem for the varying probability point
estimates. We claim that the solution to fuzzy influence diagrams provides as
much information as the classical point estimate approach plus additional
information concerning stochastic sensitivity. An example based on diagnostic
decision making in microcomputer assembly is used to illustrate this idea. We
claim that the solution to fuzzy influence diagrams provides as much
information as the classical point estimate approach plus additional interval
information that is useful for stochastic sensitivity analysis.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:43:30 GMT"
}
] | 1,365,552,000,000 | [
[
"Jain",
"Pramod",
""
],
[
"Agogino",
"Alice M.",
""
]
] |
1304.2360 | Holly B. Jimison | Holly B. Jimison | A Representation of Uncertainty to Aid Insight into Decision Models | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-189-196 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real world models can be characterized as weak, meaning that there is
significant uncertainty in both the data input and inferences. This lack of
determinism makes it especially difficult for users of computer decision aids
to understand and have confidence in the models. This paper presents a
representation for uncertainty and utilities that serves as a framework for
graphical summary and computer-generated explanation of decision models. The
application described that tests the methodology is a computer decision aid
designed to enhance the clinician-patient consultation process for patients
with angina (chest pain due to lack of blood flow to the heart muscle). The
angina model is represented as a Bayesian decision network. Additionally, the
probabilities and utilities are treated as random variables with probability
distributions on their range of possible values. The initial distributions
represent information on all patients with anginal symptoms, and the approach
allows for rapid tailoring to more patient-specific distributions. This
framework provides a metric for judging the importance of each variable in the
model dynamically.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:43:36 GMT"
}
] | 1,365,552,000,000 | [
[
"Jimison",
"Holly B.",
""
]
] |
1304.2361 | Carl Kadie | Carl Kadie | Rational Nonmonotonic Reasoning | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-197-204 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonmonotonic reasoning is a pattern of reasoning that allows an agent to make
and retract (tentative) conclusions from inconclusive evidence. This paper
gives a possible-worlds interpretation of the nonmonotonic reasoning problem
based on standard decision theory and the emerging probability logic. The
system's central principle is that a tentative conclusion is a decision to make
a bet, not an assertion of fact. The system is rational, and as sound as the
proof theory of its underlying probability logic.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:43:41 GMT"
}
] | 1,365,552,000,000 | [
[
"Kadie",
"Carl",
""
]
] |
1304.2362 | Jayant Kalagnanam | Jayant Kalagnanam, Max Henrion | A Comparison of Decision Analysis and Expert Rules for Sequential
Diagnosis | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-205-212 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has long been debate about the relative merits of decision theoretic
methods and heuristic rule-based approaches for reasoning under uncertainty. We
report an experimental comparison of the performance of the two approaches to
troubleshooting, specifically to test selection for fault diagnosis. We use as
experimental testbed the problem of diagnosing motorcycle engines. The first
approach employs heuristic test selection rules obtained from expert mechanics.
We compare it with the optimal decision analytic algorithm for test selection
which employs estimated component failure probabilities and test costs. The
decision analytic algorithm was found to reduce the expected cost (i.e. time)
to arrive at a diagnosis by an average of 14% relative to the expert rules.
Sensitivity analysis shows the results are quite robust to inaccuracy in the
probability and cost estimates. This difference suggests some interesting
implications for knowledge acquisition.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:43:47 GMT"
}
] | 1,365,552,000,000 | [
[
"Kalagnanam",
"Jayant",
""
],
[
"Henrion",
"Max",
""
]
] |
1304.2364 | Henry E. Kyburg Jr. | Henry E. Kyburg Jr | Probabilistic Inference and Probabilistic Reasoning | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-221-228 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Uncertainty enters into human reasoning and inference in at least two ways.
It is reasonable to suppose that there will be roles for these distinct uses of
uncertainty also in automated reasoning.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:43:58 GMT"
}
] | 1,365,552,000,000 | [
[
"Kyburg",
"Henry E.",
"Jr"
]
] |
1304.2365 | Henry E. Kyburg Jr. | Henry E. Kyburg Jr | Probabilistic and Non-Monotonic Inference | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-229-236 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | (l) I have enough evidence to render the sentence S probable. (la) So,
relative to what I know, it is rational of me to believe S. (2) Now that I have
more evidence, S may no longer be probable. (2a) So now, relative to what I
know, it is not rational of me to believe S. These seem a perfectly ordinary,
common sense, pair of situations. Generally and vaguely, I take them to embody
what I shall call probabilistic inference. This form of inference is clearly
non-monotonic. Relatively few people have taken this form of inference, based
on high probability, to serve as a foundation for non-monotonic logic or for a
logical or defeasible inference. There are exceptions: Jane Nutter [16] thinks
that sometimes probability has something to do with non-monotonic reasoning.
Judea Pearl [17] has recently been exploring the possibility. There are any
number of people whom one might call probability enthusiasts who feel that
probability provides all the answers by itself, with no need of help from
logic. Cheeseman [1], Henrion [5] and others think it useful to look at a
distribution of probabilities over a whole algebra of statements, to update
that distribution in the light of new evidence, and to use the latest updated
distribution of probability over the algebra as a basis for planning and
decision making. A slightly weaker form of this approach is captured by Nilsson
[15], where one assumes certain probabilities for certain statements, and
infers the probabilities, or constraints on the probabilities of other
statements. None of this corresponds to what I call probabilistic inference. All
of the inference that is taking place, either in Bayesian updating, or in
probabilistic logic, is strictly deductive. Deductive inference, particularly
that concerned with the distribution of classical probabilities or chances, is
of great importance. But this is not to say that there is no important role for
what earlier logicians have called "ampliative" or "inductive" or "scientific"
inference, in which the conclusion goes beyond the premises, asserts more than
do the premises. This depends on what David Israel [6] has called "real rules
of inference". It is characteristic of any such logic or inference procedure
that it can go wrong: that statements accepted at one point may be rejected at
a later point. Research underlying the results reported here has been partially
supported by the Signals Warfare Center of the United States Army.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:44:04 GMT"
}
] | 1,480,118,400,000 | [
[
"Kyburg",
"Henry E.",
"Jr"
]
] |
1304.2366 | Henry E. Kyburg Jr. | Henry E. Kyburg Jr | Epistemological Relevance and Statistical Knowledge | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-237-244 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For many years, at least since McCarthy and Hayes (1969), writers have
lamented, and attempted to compensate for, the alleged fact that we often do
not have adequate statistical knowledge for governing the uncertainty of
belief, for making uncertain inferences, and the like. It is hardly ever
spelled out what "adequate statistical knowledge" would be, if we had it, and
how adequate statistical knowledge could be used to control and regulate
epistemic uncertainty.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:44:10 GMT"
}
] | 1,365,552,000,000 | [
[
"Kyburg",
"Henry E.",
"Jr"
]
] |
1304.2368 | Ronald P. Loui | Ronald P. Loui | Evidential Reasoning in a Network Usage Prediction Testbed | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-257-265 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reports on empirical work aimed at comparing evidential reasoning
techniques. While there is prima facie evidence for some conclusions, this is
work in progress; the present focus is methodology, with the goal that
subsequent results be meaningful. The domain is a network of UNIX* cycle
servers, and the task is to predict properties of the state of the network from
partial descriptions of the state. Actual data from the network are taken and
used for blindfold testing in a betting game that allows abstention. The focal
technique has been Kyburg's method for reasoning with data of varying relevance
to a particular query, though the aim is to be able eventually to compare
various uncertainty calculi. The conclusions are not novel, but are
instructive. 1. All of the calculi performed better than human subjects, so
unbiased access to sample experience is apparently of value. 2. Performance
depends on metric: (a) when trials are repeated, net = gains - losses favors
methods that place many bets, if the probability of placing a correct bet is
sufficiently high; that is, it favors point-valued formalisms; (b) yield =
gains/(gains + losses) favors methods that bet only when sure to bet correctly;
that is, it favors interval-valued formalisms. 3. Among the calculi, there were
no clear winners or losers. Methods are identified for eliminating the bias of
the net as a performance criterion and for separating the calculi effectively:
in both cases by posting odds for the betting game in the appropriate way.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:44:22 GMT"
}
] | 1,365,552,000,000 | [
[
"Loui",
"Ronald P.",
""
]
] |
1304.2369 | Richard E. Neapolitan | Richard E. Neapolitan, James Kenevan | Justifying the Principle of Interval Constraints | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-266-274 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When knowledge is obtained from a database, it is only possible to deduce
confidence intervals for probability values. With confidence intervals
replacing point values, the results in the set covering model include interval
constraints for the probabilities of mutually exclusive and exhaustive
explanations. The Principle of Interval Constraints ranks these explanations by
determining the expected values of the probabilities based on distributions
determined from the interval constraints. This principle was developed using
the Classical Approach to probability. This paper justifies the Principle of
Interval Constraints with a more rigorous statement of the Classical Approach
and by defending the concept of probabilities of probabilities.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:44:27 GMT"
}
] | 1,365,552,000,000 | [
[
"Neapolitan",
"Richard E.",
""
],
[
"Kenevan",
"James",
""
]
] |
1304.2370 | Eric Neufeld | Eric Neufeld, David L Poole | Probabilistic Semantics and Defaults | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-275-282 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is much interest in providing probabilistic semantics for defaults but
most approaches seem to suffer from one of two problems: either they require
numbers, a problem defaults were intended to avoid, or they generate peculiar
side effects. Rather than provide semantics for defaults, we address the
problem defaults were intended to solve: that of reasoning under uncertainty
where numeric probability distributions are not available. We describe a
non-numeric formalism called an inference graph based on standard probability
theory, conditional independence and sentences of favouring where a favours b -
favours(a, b) - p(a|b) > p(a). The formalism seems to handle the examples from
the nonmonotonic literature. Most importantly, the sentences of our system can
be verified by performing an appropriate experiment in the semantic domain.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:44:33 GMT"
}
] | 1,365,552,000,000 | [
[
"Neufeld",
"Eric",
""
],
[
"Poole",
"David L",
""
]
] |
1304.2371 | Michael Pittarelli | Michael Pittarelli | Decision Making with Linear Constraints on Probabilities | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-283-290 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Techniques for decision making with knowledge of linear constraints on
condition probabilities are examined. These constraints arise naturally in many
situations: upper and lower condition probabilities are known; an ordering
among the probabilities is determined; marginal probabilities or bounds on such
probabilities are known, e.g., data are available in the form of a
probabilistic database (Cavallo and Pittarelli, 1987a); etc. Standard
situations of decision making under risk and uncertainty may also be
characterized by linear constraints. Each of these types of information may be
represented by a convex polyhedron of numerically determinate condition
probabilities. A uniform approach to decision making under risk, uncertainty,
and partial uncertainty based on a generalized version of a criterion of
Hurwicz is proposed. Methods for processing marginal probabilities to improve
decision making using any of the criteria discussed are presented.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:44:38 GMT"
}
] | 1,365,552,000,000 | [
[
"Pittarelli",
"Michael",
""
]
] |
1304.2372 | Thomas F. Reid | Thomas F. Reid, Gregory S. Parnell | Maintenance in Probabilistic Knowledge-Based Systems | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-291-298 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments using directed acyclical graphs (i.e., influence diagrams
and Bayesian networks) for knowledge representation have lessened the problems
of using probability in knowledge-based systems (KBS). Most current research
involves the efficient propagation of new evidence, but little has been done
concerning the maintenance of domain-specific knowledge, which includes the
probabilistic information about the problem domain. By making use of
conditional independencies represented in the graphs, however, probability
assessments are required only for certain variables when the knowledge base is
updated. The purpose of this study was to investigate, for those variables
which require probability assessments, ways to reduce the amount of new
knowledge required from the expert when updating probabilistic information in a
probabilistic knowledge-based system. Three special cases (ignored outcome,
split outcome, and assumed constraint outcome) were identified under which many
of the original probabilities (those already in the knowledge-base) do not need
to be reassessed when maintenance is required.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:44:46 GMT"
}
] | 1,365,552,000,000 | [
[
"Reid",
"Thomas F.",
""
],
[
"Parnell",
"Gregory S.",
""
]
] |
1304.2373 | Ross D. Shachter | Ross D. Shachter | A Linear Approximation Method for Probabilistic Inference | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-299-306 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An approximation method is presented for probabilistic inference with
continuous random variables. These problems can arise in many practical
problems, in particular where there are "second order" probabilities. The
approximation, based on the Gaussian influence diagram, iterates over linear
approximations to the inference problem.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:44:52 GMT"
}
] | 1,365,552,000,000 | [
[
"Shachter",
"Ross D.",
""
]
] |
1304.2374 | Prakash P. Shenoy | Prakash P. Shenoy, Glenn Shafer | An Axiomatic Framework for Bayesian and Belief-function Propagation | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-307-314 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we describe an abstract framework and axioms under which exact
local computation of marginals is possible. The primitive objects of the
framework are variables and valuations. The primitive operators of the
framework are combination and marginalization. These operate on valuations. We
state three axioms for these operators and we derive the possibility of local
computation from the axioms. Next, we describe a propagation scheme for
computing marginals of a valuation when we have a factorization of the
valuation on a hypertree. Finally we show how the problem of computing
marginals of joint probability distributions and joint belief functions fits
the general framework.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:44:57 GMT"
}
] | 1,365,552,000,000 | [
[
"Shenoy",
"Prakash P.",
""
],
[
"Shafer",
"Glenn",
""
]
] |
1304.2375 | Wolfgang Spohn | Wolfgang Spohn | A General Non-Probabilistic Theory of Inductive Reasoning | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-315-322 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probability theory, epistemically interpreted, provides an excellent, if not
the best available account of inductive reasoning. This is so because there are
general and definite rules for the change of subjective probabilities through
information or experience; induction and belief change are one and the same topic,
after all. The most basic of these rules is simply to conditionalize with
respect to the information received; and there are similar and more general
rules. Hence, a fundamental reason for the epistemological success of
probability theory is that there at all exists a well-behaved concept of
conditional probability. Still, people have, and have reasons for, various
concerns over probability theory. One of these is my starting point:
Intuitively, we have the notion of plain belief; we believe propositions to be
true (or to be false or neither). Probability theory, however, offers no formal
counterpart to this notion. Believing A is not the same as having probability 1
for A, because probability 1 is incorrigible; but plain belief is clearly
corrigible. And believing A is not the same as giving A a probability larger
than some 1 - c, because believing A and believing B is usually taken to be
equivalent to believing A & B. Thus, it seems that the formal representation
of plain belief has to take a non-probabilistic route. Indeed, representing
plain belief seems easy enough: simply represent an epistemic state by the set
of all propositions believed true in it or, since I make the common assumption
that plain belief is deductively closed, by the conjunction of all propositions
believed true in it. But this does not yet provide a theory of induction, i.e.
an answer to the question how epistemic states so represented are changed
through information or experience. There is a convincing partial answer: if the
new information is compatible with the old epistemic state, then the new
epistemic state is simply represented by the conjunction of the new information
and the old beliefs. This answer is partial because it does not cover the quite
common case where the new information is incompatible with the old beliefs. It
is, however, important to complete the answer and to cover this case, too;
otherwise, we would not represent plain belief as corrigible. The crucial
problem is that there is no good completion. When epistemic states are
represented simply by the conjunction of all propositions believed true in it,
the answer cannot be completed; and though there is a lot of fruitful work, no
other representation of epistemic states has been proposed, as far as I know,
which provides a complete solution to this problem. In this paper, I want to
suggest such a solution. In [4], I have more fully argued that this is the only
solution, if certain plausible desiderata are to be satisfied. Here, in section
2, I will be content with formally defining and intuitively explaining my
proposal. I will compare my proposal with probability theory in section 3. It
will turn out that the theory I am proposing is structurally homomorphic to
probability theory in important respects and that it is thus equally easily
implementable, but moreover computationally simpler. Section 4 contains a very
brief comparison with various kinds of logics, in particular conditional logic,
with Shackle's functions of potential surprise and related theories, and with
the Dempster - Shafer theory of belief functions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:45:03 GMT"
}
] | 1,365,552,000,000 | [
[
"Spohn",
"Wolfgang",
""
]
] |
1304.2376 | Spencer Star | Spencer Star | Generating Decision Structures and Causal Explanations for Decision
Making | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-323-334 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines two related problems that are central to developing an
autonomous decision-making agent, such as a robot. Both problems require
generating structured representations from a database of unstructured
declarative knowledge that includes many facts and rules that are irrelevant in
the problem context. The first problem is how to generate a well-structured
decision problem from such a database. The second problem is how to generate,
from the same database, a well-structured explanation of why some possible
world occurred. In this paper it is shown that the problem of generating the
appropriate decision structure or explanation is intractable without
introducing further constraints on the knowledge in the database. The paper
proposes that the problem search space can be constrained by adding knowledge
to the database about causal relations between events. In order to determine
the causal knowledge that would be most useful, causal theories for
deterministic and indeterministic universes are proposed. A program that uses
some of these causal constraints has been used to generate explanations about
faulty plans. The program shows the expected increase in efficiency as the
causal constraints are introduced.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:45:09 GMT"
}
] | 1,365,552,000,000 | [
[
"Star",
"Spencer",
""
]
] |
1304.2377 | Jaap Suermondt | Jaap Suermondt, Gregory F. Cooper | Updating Probabilities in Multiply-Connected Belief Networks | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-335-343 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on probability updates in multiply-connected belief
networks. Pearl has designed the method of conditioning, which enables us to
apply his algorithm for belief updates in singly-connected networks to
multiply-connected belief networks by selecting a loop-cutset for the network
and instantiating these loop-cutset nodes. We discuss conditions that need to
be satisfied by the selected nodes. We present a heuristic algorithm for
finding a loop-cutset that satisfies these conditions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:45:15 GMT"
}
] | 1,365,552,000,000 | [
[
"Suermondt",
"Jaap",
""
],
[
"Cooper",
"Gregory F.",
""
]
] |
1304.2378 | Bjornar Tessem | Bjornar Tessem, Lars Johan Ersland | Handling uncertainty in a system for text-symbol context analysis | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-344-351 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In pattern analysis, information regarding an object can often be drawn from
its surroundings. This paper presents a method for handling uncertainty when
using context of symbols and texts for analyzing technical drawings. The method
is based on Dempster-Shafer theory and possibility theory.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:45:21 GMT"
}
] | 1,365,552,000,000 | [
[
"Tessem",
"Bjornar",
""
],
[
"Ersland",
"Lars Johan",
""
]
] |
1304.2379 | Tom S. Verma | Tom S. Verma, Judea Pearl | Causal Networks: Semantics and Expressiveness | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-352-359 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dependency knowledge of the form "x is independent of y once z is known"
invariably obeys the four graphoid axioms, examples include probabilistic and
database dependencies. Often, such knowledge can be represented efficiently
with graphical structures such as undirected graphs and directed acyclic graphs
(DAGs). In this paper we show that the graphical criterion called d-separation
is a sound rule for reading independencies from any DAG based on a causal input
list drawn from a graphoid. The rule may be extended to cover DAGs that
represent functional dependencies as well as conditional dependencies.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:45:27 GMT"
}
] | 1,365,552,000,000 | [
[
"Verma",
"Tom S.",
""
],
[
"Pearl",
"Judea",
""
]
] |
1304.2380 | Wilson X. Wen | Wilson X. Wen | MCE Reasoning in Recursive Causal Networks | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-360-367 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A probabilistic method of reasoning under uncertainty is proposed based on
the principle of Minimum Cross Entropy (MCE) and the concept of Recursive Causal
Model (RCM). The dependency and correlations among the variables are described
in a special language BNDL (Belief Networks Description Language). Beliefs are
propagated among the clauses of the BNDL programs representing the underlying
probabilistic distributions. BNDL interpreters in both Prolog and C have been
developed and the performance of the method is compared with those of the
others.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:45:33 GMT"
}
] | 1,365,552,000,000 | [
[
"Wen",
"Wilson X.",
""
]
] |
1304.2381 | Ronald R. Yager | Ronald R. Yager | Nonmonotonic Reasoning via Possibility Theory | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-368-373 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the operation of possibility qualification and show how this
modal-like operator can be used to represent "typical" or default knowledge in
a theory of nonmonotonic reasoning. We investigate the representational power
of this approach by looking at a number of prototypical problems from the
nonmonotonic reasoning literature. In particular we look at the so-called Yale
shooting problem and its relation to priority in default reasoning.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:45:38 GMT"
}
] | 1,365,552,000,000 | [
[
"Yager",
"Ronald R.",
""
]
] |
1304.2383 | John Yen | John Yen | Generalizing the Dempster-Shafer Theory to Fuzzy Sets | Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988) | null | null | UAI-P-1988-PG-382-391 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the desire to apply the Dempster-Shafer theory to complex real world
problems where the evidential strength is often imprecise and vague, several
attempts have been made to generalize the theory. However, the important
concept in the D-S theory that the belief and plausibility functions are lower
and upper probabilities is no longer preserved in these generalizations. In
this paper, we describe a generalized theory of evidence where the degree of
belief in a fuzzy set is obtained by minimizing the probability of the fuzzy
set under the constraints imposed by a basic probability assignment. To
formulate the probabilistic constraint of a fuzzy focal element, we decompose
it into a set of consonant non-fuzzy focal elements. By generalizing the
compatibility relation to a possibility theory, we are able to justify our
generalization to Dempster's rule based on possibility distribution. Our
generalization not only extends the application of the D-S theory but also
illustrates a way that probability theory and fuzzy set theory can be combined
to deal with different kinds of uncertain information in AI systems.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:45:50 GMT"
}
] | 1,365,552,000,000 | [
[
"Yen",
"John",
""
]
] |
1304.2384 | Emad Saad | Emad Saad | Logical Fuzzy Optimization | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We present a logical framework to represent and reason about fuzzy
optimization problems based on fuzzy answer set optimization programming. This
is accomplished by allowing fuzzy optimization aggregates, e.g., minimum and
maximum in the language of fuzzy answer set optimization programming to allow
minimization or maximization of some desired criteria under fuzzy environments.
We show the application of the proposed logical fuzzy optimization framework
under the fuzzy answer set optimization programming to the fuzzy water
allocation optimization problem.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2013 21:57:03 GMT"
}
] | 1,365,552,000,000 | [
[
"Saad",
"Emad",
""
]
] |
1304.2418 | Minyar Sassi | Hanene Rezgui and Minyar Sassi-Hidri | Mod\`ele flou d'expression des pr\'ef\'erences bas\'e sur les CP-Nets | 2 pages, EGC 2013 | 13\`eme Conf\'erence Francophone sur l'Extraction et la Gestion
des Connaissances (EGC), pp. 27-28, 2013 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article addresses the problem of expressing preferences in flexible
queries, based on a combination of fuzzy logic theory and Conditional
Preference Networks or CP-Nets.
| [
{
"version": "v1",
"created": "Sun, 31 Mar 2013 16:29:20 GMT"
}
] | 1,387,843,200,000 | [
[
"Rezgui",
"Hanene",
""
],
[
"Sassi-Hidri",
"Minyar",
""
]
] |
1304.2694 | Mathias Niepert | Mathias Niepert | Symmetry-Aware Marginal Density Estimation | To appear in proceedings of AAAI 2013 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Rao-Blackwell theorem is utilized to analyze and improve the scalability
of inference in large probabilistic models that exhibit symmetries. A novel
marginal density estimator is introduced and shown both analytically and
empirically to outperform standard estimators by several orders of magnitude.
The developed theory and algorithms apply to a broad class of probabilistic
models including statistical relational models considered not susceptible to
lifted probabilistic inference.
| [
{
"version": "v1",
"created": "Tue, 9 Apr 2013 18:47:47 GMT"
}
] | 1,365,552,000,000 | [
[
"Niepert",
"Mathias",
""
]
] |
1304.2711 | Paul K. Black | Paul K. Black | Is Shafer General Bayes? | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-2-9 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines the relationship between Shafer's belief functions and
convex sets of probability distributions. Kyburg's (1986) result showed that
belief function models form a subset of the class of closed convex probability
distributions. This paper emphasizes the importance of Kyburg's result by
looking at simple examples involving Bernoulli trials. Furthermore, it is shown
that many convex sets of probability distributions generate the same belief
function in the sense that they support the same lower and upper values. This
has implications for a decision theoretic extension. Dempster's rule of
combination is also compared with Bayes' rule of conditioning.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:46:18 GMT"
}
] | 1,365,638,400,000 | [
[
"Black",
"Paul K.",
""
]
] |
1304.2712 | Paul Cohen | Paul Cohen, Glenn Shafer, Prakash P. Shenoy | Modifiable Combining Functions | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-10-21 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modifiable combining functions are a synthesis of two common approaches to
combining evidence. They offer many of the advantages of these approaches and
avoid some disadvantages. Because they facilitate the acquisition,
representation, explanation, and modification of knowledge about combinations
of evidence, they are proposed as a tool for knowledge engineers who build
systems that reason under uncertainty, not as a normative theory of evidence.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:46:23 GMT"
}
] | 1,365,638,400,000 | [
[
"Cohen",
"Paul",
""
],
[
"Shafer",
"Glenn",
""
],
[
"Shenoy",
"Prakash P.",
""
]
] |
1304.2713 | Daniel Hunter | Daniel Hunter | Dempster-Shafer vs. Probabilistic Logic | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-22-29 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The combination of evidence in Dempster-Shafer theory is compared with the
combination of evidence in probabilistic logic. Sufficient conditions are
stated for these two methods to agree. It is then shown that these conditions
are minimal in the sense that disagreement can occur when any one of them is
removed. An example is given in which the traditional assumption of conditional
independence of evidence on hypotheses holds and a uniform prior is assumed,
but probabilistic logic and Dempster's rule give radically different results
for the combination of two evidence events.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:46:27 GMT"
}
] | 1,365,638,400,000 | [
[
"Hunter",
"Daniel",
""
]
] |
1304.2714 | Henry E. Kyburg Jr. | Henry E. Kyburg Jr | Higher Order Probabilities | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-30-38 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of writers have supposed that for the full specification of belief,
higher order probabilities are required. Some have even supposed that there may
be an unending sequence of higher order probabilities of probabilities of
probabilities.... In the present paper we show that higher order probabilities
can always be replaced by the marginal distributions of joint probability
distributions. We consider both the case in which higher order probabilities
are of the same sort as lower order probabilities and that in which higher
order probabilities are distinct in character, as when lower order
probabilities are construed as frequencies and higher order probabilities are
construed as subjective degrees of belief. In neither case do higher order
probabilities appear to offer any advantages, either conceptually or
computationally.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:46:32 GMT"
}
] | 1,365,638,400,000 | [
[
"Kyburg",
"Henry E.",
"Jr"
]
] |
1304.2715 | Kathryn Blackmond Laskey | Kathryn Blackmond Laskey | Belief in Belief Functions: An Examination of Shafer's Canonical
Examples | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-39-46 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the canonical examples underlying Shafer-Dempster theory, beliefs over the
hypotheses of interest are derived from a probability model for a set of
auxiliary hypotheses. Beliefs are derived via a compatibility relation
connecting the auxiliary hypotheses to subsets of the primary hypotheses. A
belief function differs from a Bayesian probability model in that one does not
condition on those parts of the evidence for which no probabilities are
specified. The significance of this difference in conditioning assumptions is
illustrated with two examples giving rise to identical belief functions but
different Bayesian probability distributions.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:46:37 GMT"
}
] | 1,365,638,400,000 | [
[
"Laskey",
"Kathryn Blackmond",
""
]
] |
1304.2716 | Judea Pearl | Judea Pearl | Do We Need Higher-Order Probabilities and, If So, What Do They Mean? | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-47-60 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The apparent failure of individual probabilistic expressions to distinguish
uncertainty about truths from uncertainty about probabilistic assessments has
prompted researchers to seek formalisms where the two types of uncertainties
are given notational distinction. This paper demonstrates that the desired
distinction is already a built-in feature of classical probabilistic models,
thus, specialized notations are unnecessary.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:46:42 GMT"
}
] | 1,365,638,400,000 | [
[
"Pearl",
"Judea",
""
]
] |
1304.2717 | Matthew Self | Matthew Self, Peter Cheeseman | Bayesian Prediction for Artificial Intelligence | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-61-69 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper shows that the common method used for making predictions under
uncertainty in AI and science is in error. This method is to use currently
available data to select the best model from a given class of models-this
process is called abduction-and then to use this model to make predictions
about future data. The correct method requires averaging over all the models to
make a prediction-we call this method transduction. Using transduction, an AI
system will not give misleading results when basing predictions on small
amounts of data, when no model is clearly best. For common classes of models we
show that the optimal solution can be given in closed form.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:46:47 GMT"
}
] | 1,365,638,400,000 | [
[
"Self",
"Matthew",
""
],
[
"Cheeseman",
"Peter",
""
]
] |
1304.2718 | John Yen | John Yen | Can Evidence Be Combined in the Dempster-Shafer Theory | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-70-76 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dempster's rule of combination has been the most controversial part of the
Dempster-Shafer (D-S) theory. In particular, Zadeh has reached a conjecture on
the noncombinability of evidence from a relational model of the D-S theory. In
this paper, we will describe another relational model where D-S masses are
represented as conditional granular distributions. By comparing it with Zadeh's
relational model, we will show how Zadeh's conjecture on combinability does not
affect the applicability of Dempster's rule in our model.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:46:52 GMT"
}
] | 1,365,638,400,000 | [
[
"Yen",
"John",
""
]
] |
1304.2719 | John B. Bacon | John B. Bacon | An Interesting Uncertainty-Based Combinatoric Problem in Spare Parts
Forecasting: The FRED System | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-78-85 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The domain of spare parts forecasting is examined, and is found to present
unique uncertainty-based problems in the architectural design of a
knowledge-based system. A mixture of different uncertainty paradigms is
required for the solution, with an intriguing combinatoric problem arising from
an uncertain choice of inference engines. Thus, uncertainty in the system is
manifested in two different meta-levels. The different uncertainty paradigms
and meta-levels must be integrated into a functioning whole. FRED is an example
of a difficult real-world domain to which no existing uncertainty approach is
completely appropriate. This paper discusses the architecture of FRED,
highlighting: the points of uncertainty and other interesting features of the
domain, the specific implications of those features on the system design
(including the combinatoric explosions), their current implementation & future
plans, and other problems and issues with the architecture.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:46:57 GMT"
}
] | 1,365,638,400,000 | [
[
"Bacon",
"John B.",
""
]
] |
1304.2720 | Thomas O. Binford | Thomas O. Binford, Tod S. Levitt, Wallace B. Mann | Bayesian Inference in Model-Based Machine Vision | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-86-97 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is a preliminary version of visual interpretation integrating multiple
sensors in SUCCESSOR, an intelligent, model-based vision system. We pursue a
thorough integration of hierarchical Bayesian inference with comprehensive
physical representation of objects and their relations in a system for
reasoning with geometry, surface materials and sensor models in machine vision.
Bayesian inference provides a framework for accruing probabilities to rank
order hypotheses.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:03 GMT"
}
] | 1,365,638,400,000 | [
[
"Binford",
"Thomas O.",
""
],
[
"Levitt",
"Tod S.",
""
],
[
"Mann",
"Wallace B.",
""
]
] |
1304.2721 | Gautam Biswas | Gautam Biswas, Teywansh S. Anand | Using the Dempster-Shafer Scheme in a Diagnostic Expert System Shell | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-98-105 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses an expert system shell that integrates rule-based
reasoning and the Dempster-Shafer evidence combination scheme. Domain knowledge
is stored as rules with associated belief functions. The reasoning component
uses a combination of forward and backward inferencing mechanisms to allow
interaction with users in a mixed-initiative format.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:07 GMT"
}
] | 1,365,638,400,000 | [
[
"Biswas",
"Gautam",
""
],
[
"Anand",
"Teywansh S.",
""
]
] |
1304.2722 | Homer L. Chin | Homer L. Chin, Gregory F. Cooper | Stochastic Simulation of Bayesian Belief Networks | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-106-113 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines Bayesian belief network inference using simulation as a
method for computing the posterior probabilities of network variables.
Specifically, it examines the use of a method described by Henrion, called
logic sampling, and a method described by Pearl, called stochastic simulation.
We first review the conditions under which logic sampling is computationally
infeasible. Such cases motivated the development of Pearl's stochastic
simulation algorithm. We have found that this stochastic simulation algorithm,
when applied to certain networks, leads to much slower than expected
convergence to the true posterior probabilities. This behavior is a result of
the tendency for local areas in the network to become fixed through many
simulation cycles. The time required to obtain significant convergence can be
made arbitrarily long by strengthening the probabilistic dependency between
nodes. We propose the use of several forms of graph modification, such as graph
pruning, arc reversal, and node reduction, in order to convert some networks
into formats that are computationally more efficient for simulation.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:13 GMT"
}
] | 1,365,638,400,000 | [
[
"Chin",
"Homer L.",
""
],
[
"Cooper",
"Gregory F.",
""
]
] |
1304.2723 | Steve Hanks | Steve Hanks | Temporal Reasoning About Uncertain Worlds | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-114-122 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a program that manages a database of temporally scoped beliefs.
The basic functionality of the system includes maintaining a network of
constraints among time points, supporting a variety of fetches, mediating the
application of causal rules, monitoring intervals of time for the addition of
new facts, and managing data dependencies that keep the database consistent. At
this level the system operates independent of any measure of belief or belief
calculus. We provide an example of how an application program mi9ght use this
functionality to implement a belief calculus.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:47:17 GMT"
}
] | 1,365,638,400,000 | [
[
"Hanks",
"Steve",
""
]
] |