id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1303.5741 | Arthur Ramer | Arthur Ramer | Formal Model of Uncertainty for Possibilistic Rules | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-295-299 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a universe of discourse X, a domain of possible outcomes, an experiment
may consist of selecting one of its elements, subject to the operation of
chance, or of observing the elements, subject to imprecision. A priori
uncertainty about the actual result of the experiment may be quantified,
representing either the likelihood of the choice of x in X or the degree to which
any such x would be suitable as a description of the outcome. The former case
corresponds to a probability distribution, while the latter gives a possibility
assignment on X. The study of such assignments and their properties falls
within the purview of possibility theory [DP88, Y80, Z78]. It, like probability
theory, assigns values between 0 and 1 to express likelihoods of outcomes.
Here, however, the similarity ends. Possibility theory uses the maximum and
minimum functions to combine uncertainties, whereas probability theory uses the
plus and times operations. This leads to very dissimilar theories in terms of
analytical framework, even though they share several semantic concepts. One of
the shared concepts consists of expressing quantitatively the uncertainty
associated with a given distribution. In probability theory its value
corresponds to the gain of information that would result from conducting an
experiment and ascertaining an actual result. This gain of information can
equally well be viewed as a decrease in uncertainty about the outcome of an
experiment. In this case the standard measure of information, and thus
uncertainty, is Shannon entropy [AD75, G77]. It enjoys several advantages: it is
characterized uniquely by a few, very natural properties, and it can be
conveniently used in decision processes. This application is based on the
principle of maximum entropy; it has become a popular method of relating
decisions to uncertainty. This paper demonstrates that an equally integrated
theory can be built on the foundation of possibility theory. We first show how
to define measures of information and uncertainty for possibility assignments.
Next we construct an information-based metric on the space of all possibility
distributions defined on a given domain. It allows us to capture the notion of
proximity in information content among the distributions. Lastly, we show that
all the above constructions can be carried out for continuous
distributions, i.e., possibility assignments on arbitrary measurable domains. We
consider this step very significant: finite domains of discourse are but
approximations of the real-life infinite domains. If possibility theory is to
represent real world situations, it must handle continuous distributions both
directly and through finite approximations. In the last section we discuss a
principle of maximum uncertainty for possibility distributions. We show how
such a principle could be formalized as an inference rule. We also suggest it
could be derived as a consequence of simple assumptions about combining
information. We would like to mention that possibility assignments can be
viewed as fuzzy sets and that every fuzzy set gives rise to an assignment of
possibilities. This correspondence has far reaching consequences in logic and
in control theory. Our treatment here is independent of any special
interpretation; in particular we speak of possibility distributions and
possibility measures, defining them as measurable mappings into the interval
[0, 1]. Our presentation is intended as a self-contained, albeit terse summary.
Topics discussed were selected with care, to demonstrate both the completeness
and a certain elegance of the theory. Proofs are not included; we only offer
illustrative examples.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:32:37 GMT"
}
] | 1,364,256,000,000 | [
[
"Ramer",
"Arthur",
""
]
] |
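The Ramer abstract above rests on two concrete ingredients: max/min combination of possibility values (versus plus/times for probabilities) and an entropy-like uncertainty measure for possibility assignments. Below is a minimal Python sketch, assuming the U-uncertainty of Higashi and Klir as the possibilistic analogue of Shannon entropy; the paper's own measure and axioms may differ, and the domain and numbers are invented for illustration.

```python
from math import log2

def combine_possibility(pi1, pi2):
    """Conjunctive combination: pointwise minimum (disjunction would use max)."""
    return {x: min(pi1[x], pi2[x]) for x in pi1}

def shannon_entropy(p):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(v * log2(v) for v in p.values() if v > 0)

def u_uncertainty(pi):
    """U-uncertainty: integral over alpha in (0, 1] of log2 |{x : pi(x) >= alpha}|."""
    alphas = sorted(set(pi.values()), reverse=True)
    total = 0.0
    for i, a in enumerate(alphas):
        lower = alphas[i + 1] if i + 1 < len(alphas) else 0.0
        card = sum(1 for v in pi.values() if v >= a)  # size of the alpha-cut
        total += (a - lower) * log2(card)
    return total

pi = {"x1": 1.0, "x2": 0.7, "x3": 0.3}   # a normalized possibility assignment on X
p  = {"x1": 0.5, "x2": 0.3, "x3": 0.2}   # a probability distribution on X
print(u_uncertainty(pi), shannon_entropy(p))          # ~0.875 vs ~1.485 bits
print(combine_possibility(pi, {"x1": 0.4, "x2": 1.0, "x3": 0.2}))
```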
1303.5742 | Anand S. Rao | Anand S. Rao, Michael P. Georgeff | Deliberation and its Role in the Formation of Intentions | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-300-307 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deliberation plays an important role in the design of rational agents
embedded in the real-world. In particular, deliberation leads to the formation
of intentions, i.e., plans of action that the agent is committed to achieving.
In this paper, we present a branching time possible-worlds model for
representing and reasoning about beliefs, goals, intentions, time, actions,
probabilities, and payoffs. We compare this possible-worlds approach with the
more traditional decision tree representation and provide a transformation from
decision trees to possible worlds. Finally, we illustrate how an agent can
perform deliberation using a decision-tree representation and then use a
possible-worlds model to form and reason about his intentions.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:32:42 GMT"
}
] | 1,364,256,000,000 | [
[
"Rao",
"Anand S.",
""
],
[
"Georgeff",
"Michael P.",
""
]
] |
1303.5743 | Bhavani Raskutti | Bhavani Raskutti, Ingrid Zukerman | Handling Uncertainty during Plan Recognition in Task-Oriented
Consultation Systems | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-308-315 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During interactions with human consultants, people are used to providing
partial and/or inaccurate information, and still being understood and assisted. We
attempt to emulate this capability of human consultants in computer
consultation systems. In this paper, we present a mechanism for handling
uncertainty in plan recognition during task-oriented consultations. The
uncertainty arises while choosing an appropriate interpretation of a user's
statements among many possible interpretations. Our mechanism handles this
uncertainty by using probability theory to assess the probabilities of the
interpretations, and complements this assessment by taking into account the
information content of the interpretations. The information content of an
interpretation is a measure of how well defined an interpretation is in terms
of the actions to be performed on the basis of the interpretation. This measure
is used to guide the inference process towards interpretations with a higher
information content. The information content for an interpretation depends on
the specificity and the strength of the inferences in it, where the strength of
an inference depends on the reliability of the information on which the
inference is based. Our mechanism has been developed for use in task-oriented
consultation systems. The domain that we have chosen for exploration is that of
a travel agency.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:32:47 GMT"
}
] | 1,364,256,000,000 | [
[
"Raskutti",
"Bhavani",
""
],
[
"Zukerman",
"Ingrid",
""
]
] |
1303.5744 | Enrique H. Ruspini | Enrique H. Ruspini | Truth as Utility: A Conceptual Synthesis | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-316-322 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces conceptual relations that synthesize utilitarian and
logical concepts, extending the logics of preference of Rescher. We define
first, in the context of a possible worlds model, constraint-dependent measures
that quantify the relative quality of alternative solutions of reasoning
problems or the relative desirability of various policies in control, decision,
and planning problems. We show that these measures may be interpreted as truth
values in a multivalued logic and propose mechanisms for the representation of
complex constraints as combinations of simpler restrictions. These extended
logical operations permit also the combination and aggregation of goal-specific
quality measures into global measures of utility. We identify also relations
that represent differential preferences between alternative solutions and
relate them to the previously defined desirability measures. Extending
conventional modal logic formulations, we introduce structures for the
representation of ignorance about the utility of alternative solutions.
Finally, we examine relations between these concepts and similarity based
semantic models of fuzzy logic.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:32:53 GMT"
}
] | 1,364,256,000,000 | [
[
"Ruspini",
"Enrique H.",
""
]
] |
1303.5745 | Alessandro Saffiotti | Alessandro Saffiotti, Elisabeth Umkehrer | Pulcinella: A General Tool for Propagating Uncertainty in Valuation
Networks | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-323-331 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present PULCinella and its use in comparing uncertainty theories.
PULCinella is a general tool for Propagating Uncertainty based on the Local
Computation technique of Shafer and Shenoy. It may be specialized to different
uncertainty theories: at the moment, Pulcinella can propagate probabilities,
belief functions, Boolean values, and possibilities. Moreover, Pulcinella
allows the user to easily define his own specializations. To illustrate
Pulcinella, we analyze two examples by using each of the four theories above.
In the first one, we mainly focus on intrinsic differences between theories. In
the second one, we take a knowledge engineer viewpoint, and check the adequacy
of each theory to a given problem.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:32:58 GMT"
}
] | 1,364,256,000,000 | [
[
"Saffiotti",
"Alessandro",
""
],
[
"Umkehrer",
"Elisabeth",
""
]
] |
1303.5746 | Sandra Sandri | Sandra Sandri | Structuring Bodies of Evidence | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-332-338 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article we present two ways of structuring bodies of evidence, which
allow us to reduce the complexity of the operations usually performed in the
framework of evidence theory. The first structure just partitions the focal
elements in a body of evidence by their cardinality. With this structure we are
able to reduce the complexity on the calculation of the belief functions Bel,
Pl, and Q. The other structure proposed here, the Hierarchical Trees, permits
us to reduce the complexity of the calculation of Bel, Pl, and Q, as well as of
Dempster's rule of combination, relative to the brute-force algorithm.
Neither structure requires the generation of all the subsets of the
reference domain.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:02 GMT"
}
] | 1,364,256,000,000 | [
[
"Sandri",
"Sandra",
""
]
] |
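Alongside the Sandri abstract above, it may help to see the brute-force baseline it improves on. The following Python sketch uses the standard textbook definitions of Dempster's rule and of Bel, Pl, and Q over focal elements represented as frozensets; it is not the paper's data structures, and its exhaustive enumeration is exactly the cost the proposed structures avoid.

```python
def dempster_combine(m1, m2):
    """Dempster's rule by exhaustive pairing of focal elements."""
    raw = {}
    for a, x in m1.items():
        for b, y in m2.items():
            raw[a & b] = raw.get(a & b, 0.0) + x * y
    conflict = raw.pop(frozenset(), 0.0)          # mass landing on the empty set
    return {f: v / (1.0 - conflict) for f, v in raw.items()}

def bel(m, b):   # belief: total mass on subsets of b
    return sum(v for f, v in m.items() if f <= b)

def pl(m, b):    # plausibility: total mass on sets meeting b
    return sum(v for f, v in m.items() if f & b)

def q(m, b):     # commonality: total mass on supersets of b
    return sum(v for f, v in m.items() if b <= f)

theta = frozenset({"a", "b", "c"})
m1 = {frozenset({"a"}): 0.6, theta: 0.4}
m2 = {frozenset({"a", "b"}): 0.7, theta: 0.3}
m = dempster_combine(m1, m2)
print(bel(m, frozenset({"a"})))   # 0.6
```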
1303.5747 | Eugene Santos Jr. | Eugene Santos Jr | On the Generation of Alternative Explanations with Implications for
Belief Revision | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-339-347 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In general, the best explanation for a given observation makes no promises on
how good it is with respect to other alternative explanations. A major
deficiency of message-passing schemes for belief revision in Bayesian networks
is their inability to generate alternatives beyond the second best. In this
paper, we present a general approach based on linear constraint systems that
naturally generates alternative explanations in an orderly and highly efficient
manner. This approach is then applied to cost-based abduction problems as well
as belief revision in Bayesian networks.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:07 GMT"
}
] | 1,364,256,000,000 | [
[
"Santos",
"Eugene",
"Jr"
]
] |
1303.5748 | Kerstin Schill | Kerstin Schill, Ernst Poppel, Christoph Zetzsche | Completing Knowledge by Competing Hierarchies | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-348-352 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A control strategy for expert systems is presented which is based on Shafer's
Belief theory and the combination rule of Dempster. In contrast to well known
strategies it is not sequential and hypothesis-driven, but parallel and
self-organizing, determined by the concept of information gain. The information
gain, calculated as the maximal difference between the actual evidence
distribution in the knowledge base and the potential evidence, determines each
consultation step. Hierarchically structured knowledge is an important
representation form and experts even use several hierarchies in parallel for
constituting their knowledge. Hence the control strategy is applied to a
layered set of distinct hierarchies. Depending on the actual data one of these
hierarchies is chosen by the control strategy for the next step in the
reasoning process. Provided the actual data are well matched to the structure
of one hierarchy, this hierarchy remains selected for a longer consultation
time. If no good match can be achieved, a switch from the actual hierarchy to a
competing one will result, very similar to the phenomenon of restructuring in
problem solving tasks. Up to now the control strategy is restricted to
multi-hierarchical knowledge bases with disjoint hierarchies. It is implemented in
the expert system IBIG (inference by information gain), being presently applied
to acquired speech disorders (aphasia).
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:12 GMT"
}
] | 1,364,256,000,000 | [
[
"Schill",
"Kerstin",
""
],
[
"Poppel",
"Ernst",
""
],
[
"Zetzsche",
"Christoph",
""
]
] |
1303.5749 | Ross D. Shachter | Ross D. Shachter | A Graph-Based Inference Method for Conditional Independence | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-353-360 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The graphoid axioms for conditional independence, originally described by
Dawid [1979], are fundamental to probabilistic reasoning [Pearl, 1988]. Such
axioms provide a mechanism for manipulating conditional independence assertions
without resorting to their numerical definition. This paper explores a
representation for independence statements using multiple undirected graphs and
some simple graphical transformations. The independence statements derivable in
this system are equivalent to those obtainable by the graphoid axioms.
Therefore, this is a purely graphical proof technique for conditional
independence.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:17 GMT"
}
] | 1,364,256,000,000 | [
[
"Shachter",
"Ross D.",
""
]
] |
1303.5750 | Prakash P. Shenoy | Prakash P. Shenoy | A Fusion Algorithm for Solving Bayesian Decision Problems | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-361-369 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a new method for solving Bayesian decision problems. The
method consists of representing a Bayesian decision problem as a
valuation-based system and applying a fusion algorithm for solving it. The
fusion algorithm is a hybrid of local computational methods for computation of
marginals of joint probability distributions and the local computational
methods for discrete optimization problems.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:23 GMT"
}
] | 1,364,256,000,000 | [
[
"Shenoy",
"Prakash P.",
""
]
] |
1303.5751 | Solomon Eyal Shimony | Solomon Eyal Shimony | Algorithms for Irrelevance-Based Partial MAPs | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-370-377 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Irrelevance-based partial MAPs are useful constructs for domain-independent
explanation using belief networks. We look at two definitions for such partial
MAPs, and prove important properties that are useful in designing algorithms
for computing them effectively. We make use of these properties in modifying
our standard MAP best-first algorithm, so as to handle irrelevance-based
partial MAPs.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:28 GMT"
}
] | 1,364,256,000,000 | [
[
"Shimony",
"Solomon Eyal",
""
]
] |
1303.5752 | Philippe Smets | Philippe Smets | About Updating | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-378-385 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Survey of several forms of updating, with a practical illustrative example.
We study several updating (conditioning) schemes that emerge naturally from a
common scenario, to provide some insights into their meaning. Updating is a
subtle operation and there is no single method, no single 'good' rule. The
choice of the appropriate rule must always be given due consideration. Planchet
(1989) presents a mathematical survey of many rules. We focus on the practical
meaning of these rules. After summarizing the several rules for conditioning,
we present an illustrative example in which the various forms of conditioning
can be explained.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:33 GMT"
}
] | 1,364,256,000,000 | [
[
"Smets",
"Philippe",
""
]
] |
1303.5753 | Paul Snow | Paul Snow | Compressed Constraints in Probabilistic Logic and Their Revision | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-386-391 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In probabilistic logic entailments, even moderate size problems can yield
linear constraint systems with so many variables that exact methods are
impractical. This difficulty can be remedied in many cases of interest by
introducing a three-valued logic (true, false, and "don't care"). The
three-valued approach allows the construction of "compressed" constraint
systems which have the same solution sets as their two-valued counterparts, but
which may involve dramatically fewer variables. Techniques to calculate point
estimates for the posterior probabilities of entailed sentences are discussed.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:38 GMT"
}
] | 1,364,256,000,000 | [
[
"Snow",
"Paul",
""
]
] |
1303.5754 | Peter L. Spirtes | Peter L. Spirtes | Detecting Causal Relations in the Presence of Unmeasured Variables | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-392-397 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The presence of latent variables can greatly complicate inferences about
causal relations between measured variables from statistical data. In many
cases, the presence of latent variables makes it impossible to determine for
two measured variables A and B, whether A causes B, B causes A, or there is
some common cause. In this paper I present several theorems that state
conditions under which it is possible to reliably infer the causal relation
between two measured variables, regardless of whether latent variables are
acting or not.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:43 GMT"
}
] | 1,364,256,000,000 | [
[
"Spirtes",
"Peter L.",
""
]
] |
1303.5755 | Deborah L. Thurston | Deborah L. Thurston, Yun Qi Tian | A Method for Integrating Utility Analysis into an Expert System for
Design Evaluation | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-398-405 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In mechanical design, there is often unavoidable uncertainty in estimates of
design performance. Evaluation of design alternatives requires consideration of
the impact of this uncertainty. Expert heuristics embody assumptions regarding
the designer's attitude towards risk and uncertainty that might be reasonable
in most cases but inaccurate in others. We present a technique to allow
designers to incorporate their own unique attitude towards uncertainty as
opposed to those assumed by the domain expert's rules. The general approach is
to eliminate aspects of heuristic rules which directly or indirectly include
assumptions regarding the user's attitude towards risk, and replace them with
explicit, user-specified probabilistic multiattribute utility and probability
distribution functions. We illustrate the method in a system for material
selection for automobile bumpers.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:48 GMT"
}
] | 1,364,256,000,000 | [
[
"Thurston",
"Deborah L.",
""
],
[
"Tian",
"Yun Qi",
""
]
] |
1303.5756 | Wilson X. Wen | Wilson X. Wen | From Relational Databases to Belief Networks | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-406-413 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The relationship between belief networks and relational databases is
examined. Based on this analysis, a method to construct belief networks
automatically from statistical relational data is proposed. A comparison
between our method and other methods shows that our method has several
advantages when generalization or prediction is needed.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:53 GMT"
}
] | 1,364,256,000,000 | [
[
"Wen",
"Wilson X.",
""
]
] |
1303.5757 | Nic Wilson | Nic Wilson | A Monte-Carlo Algorithm for Dempster-Shafer Belief | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-414-417 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A very computationally-efficient Monte-Carlo algorithm for the calculation of
Dempster-Shafer belief is described. If Bel is the combination using Dempster's
Rule of belief functions Bel_1, ..., Bel_m, then, for subset b of the frame Θ,
Bel(b) can be calculated in time linear in |Θ| and m (given that the weight of
conflict is bounded). The algorithm can also be used to improve the complexity
of the Shenoy-Shafer algorithms on Markov trees, and be generalised to
calculate Dempster-Shafer Belief over other logics.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:33:58 GMT"
}
] | 1,364,256,000,000 | [
[
"Wilson",
"Nic",
""
]
] |
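The Wilson abstract above describes a Monte-Carlo estimate of combined Dempster-Shafer belief. Here is a minimal sketch of the general idea, assuming the usual rejection handling of conflict (trials whose focal-element intersection is empty are discarded); the paper's precise sampling scheme and complexity analysis may differ.

```python
import random

def sample_focal(m):
    """Draw one focal element of a mass function, proportional to its mass."""
    r, acc = random.random(), 0.0
    for focal, mass in m.items():
        acc += mass
        if r <= acc:
            return focal
    return focal  # guard against floating-point shortfall

def mc_belief(belief_functions, b, trials=10000):
    hits = accepted = 0
    for _ in range(trials):
        inter = None
        for m in belief_functions:
            a = sample_focal(m)
            inter = a if inter is None else inter & a
        if not inter:        # empty intersection: conflict, reject the trial
            continue
        accepted += 1
        if inter <= b:       # the sampled joint focal element entails b
            hits += 1
    return hits / accepted if accepted else 0.0

theta = frozenset({"red", "green", "blue"})
m1 = {frozenset({"red"}): 0.6, theta: 0.4}
m2 = {frozenset({"red", "green"}): 0.7, theta: 0.3}
print(mc_belief([m1, m2], frozenset({"red"})))   # ~0.6 (no conflict here)
```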
1303.5758 | Michael S. K. M. Wong | Michael S. K. M. Wong, Y. Y. Yao, P. Lingras | Compatibility of Quantitative and Qualitative Representations of Belief | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-418-424 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The compatibility of quantitative and qualitative representations of beliefs
was studied extensively in probability theory. It is only recently that this
important topic is considered in the context of belief functions. In this
paper, the compatibility of various quantitative belief measures and
qualitative belief structures is investigated. Four classes of belief measures
considered are: the probability function, the monotonic belief function,
Shafer's belief function, and Smets' generalized belief function. The analysis
of their individual compatibility with different belief structures not only
provides a sound basis for these quantitative measures, but also alleviates
some of the difficulties in the acquisition and interpretation of numeric
belief numbers. It is shown that the structure of qualitative probability is
compatible with monotonic belief functions. Moreover, a belief structure
slightly weaker than that of qualitative belief is compatible with Smets'
generalized belief functions.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:34:02 GMT"
}
] | 1,364,256,000,000 | [
[
"Wong",
"Michael S. K. M.",
""
],
[
"Yao",
"Y. Y.",
""
],
[
"Lingras",
"P.",
""
]
] |
1303.5759 | Hong Xu | Hong Xu | An Efficient Implementation of Belief Function Propagation | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-425-432 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The local computation technique (Shafer et al. 1987, Shafer and Shenoy 1988,
Shenoy and Shafer 1986) is used for propagating belief functions in a so-called
Markov Tree. In this paper, we describe an efficient implementation of belief
function propagation on the basis of the local computation technique. The
presented method avoids all the redundant computations in the propagation
process, and so reduces the computational complexity relative to other
existing implementations (Hsia and Shenoy 1989, Zarley et al. 1988). We
also give a combined algorithm for both propagation and re-propagation which
makes the re-propagation process more efficient when one or more of the prior
belief functions is changed.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:34:07 GMT"
}
] | 1,364,256,000,000 | [
[
"Xu",
"Hong",
""
]
] |
1303.5760 | Ronald R. Yager | Ronald R. Yager | A Non-Numeric Approach to Multi-Criteria/Multi-Expert Aggregation Based
on Approximate Reasoning | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-433-437 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a technique that can be used for the fusion of multiple sources
of information as well as for the evaluation and selection of alternatives
under multi-criteria. Three important properties contribute to the uniqueness
of the technique introduced. The first is the ability to do all necessary
operations and aggregations with information that is of a nonnumeric linguistic
nature. This facility greatly reduces the burden on the providers of
information, the experts. A second characterizing feature is the ability to
assign, again linguistically, differing importance to the criteria or, in the
case of information fusion, to the individual sources of information. A third
significant feature of the approach is its ability to be used as a method to find
a consensus of the opinions of multiple experts on the issue of concern. The
techniques used in this approach are based on ideas developed from the theory of
approximate reasoning. We illustrate the approach with a problem of project
selection.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:34:12 GMT"
}
] | 1,364,256,000,000 | [
[
"Yager",
"Ronald R.",
""
]
] |
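The Yager abstract above works entirely on an ordinal linguistic scale. The sketch below shows one standard construction in that spirit: each criterion contributes max(neg(importance), rating), where neg reverses the scale, and an alternative scores the min over criteria. The scale, ratings, and importances are invented for illustration, and the paper's own operators may differ in detail.

```python
SCALE = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]

def neg(label):
    """Order-reversing negation on the linguistic scale."""
    return SCALE[len(SCALE) - 1 - SCALE.index(label)]

def weighted_score(ratings, importances):
    """min over criteria of max(neg(importance), rating), all on the scale."""
    per_criterion = [max(neg(w), r, key=SCALE.index)
                     for r, w in zip(ratings, importances)]
    return min(per_criterion, key=SCALE.index)

# One project rated on three criteria, with linguistic importances:
print(weighted_score(["high", "low", "perfect"],
                     ["very_high", "low", "medium"]))   # -> "high"
```

A criterion with low importance contributes a high floor via neg, so it cannot veto the alternative; only important, poorly rated criteria pull the overall score down.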
1303.5761 | Henry E. Kyburg Jr. | Henry E. Kyburg Jr | Why Do We Need Foundations for Modelling Uncertainties? | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-438-442 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Surely we want solid foundations. What kind of castle can we build on sand?
What is the point of devoting effort to balconies and minarets, if the
foundation may be so weak as to allow the structure to collapse of its own
weight? We want our foundations set on bedrock, designed to last for
generations. Who would want an architect who cannot certify the soundness of
the foundations of his buildings?
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2013 15:34:17 GMT"
}
] | 1,364,256,000,000 | [
[
"Kyburg",
"Henry E.",
"Jr"
]
] |
1303.5929 | Sourish Dasgupta | Sourish Dasgupta, Ankur Padia, Kushal Shah, Rupali KaPatel, Prasenjit
Majumder | DLOLIS-A: Description Logic based Text Ontology Learning | 11 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontology Learning has been the subject of intensive study for the past
decade. Researchers in this field have been motivated by the possibility of
automatically building a knowledge base on top of text documents so as to
support reasoning based knowledge extraction. While most works in this field
have been primarily statistical (known as light-weight Ontology Learning), not
much attempt has been made at axiomatic Ontology Learning (called heavy-weight
Ontology Learning) from Natural Language text documents. Heavy-weight Ontology
Learning supports more precise formal logic-based reasoning when compared to
statistical ontology learning. In this paper we have proposed a sound Ontology
Learning tool DLOL_(IS-A) that maps English language IS-A sentences into their
equivalent Description Logic (DL) expressions in order to automatically
generate a consistent pair of T-box and A-box thereby forming both regular
(definitional form) and generalized (axiomatic form) DL ontology. The current
scope of the paper is strictly limited to IS-A sentences that exclude the
possible structures of: (i) implicative IS-A sentences, and (ii) "Wh" IS-A
questions. Other linguistic nuances that arise out of the pragmatics and epistemics
of IS-A sentences are beyond the scope of this present work. We have adopted
Gold Standard based Ontology Learning evaluation on chosen IS-A rich Wikipedia
documents.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2013 08:39:18 GMT"
}
] | 1,364,256,000,000 | [
[
"Dasgupta",
"Sourish",
""
],
[
"Padia",
"Ankur",
""
],
[
"Shah",
"Kushal",
""
],
[
"KaPatel",
"Rupali",
""
],
[
"Majumder",
"Prasenjit",
""
]
] |
1303.6932 | Saleem Abdullah | Muhammad Aslam, Saleem Abdullah and Kifayat ullah | Bipolar Fuzzy Soft sets and its applications in decision making problem | null | null | 10.3233/IFS-131031 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we combine the concept of a bipolar fuzzy set and a soft
set. We introduce the notion of a bipolar fuzzy soft set and study its fundamental
properties. We study basic operations on bipolar fuzzy soft sets. We define the
extended union and intersection of two bipolar fuzzy soft sets. We also give an
application of bipolar fuzzy soft sets to a decision making problem. We give a
general algorithm to solve decision making problems by using bipolar fuzzy soft
sets.
| [
{
"version": "v1",
"created": "Sat, 23 Mar 2013 19:26:43 GMT"
}
] | 1,394,409,600,000 | [
[
"Aslam",
"Muhammad",
""
],
[
"Abdullah",
"Saleem",
""
],
[
"ullah",
"Kifayat",
""
]
] |
1303.7137 | Maksims Fiosins | A. Andronov and M. Fioshin | Discrete Optimization of Statistical Sample Sizes in Simulation by Using
the Hierarchical Bootstrap Method | 9 pages | proceedings of the 6th Tartu Conference, 1999 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Applying the Bootstrap method in simulation supposes that the values of random
variables are not generated during the simulation process but extracted from
available sample populations. In the case of the Hierarchical Bootstrap, the
function of interest is calculated recurrently using the calculation tree. In
the present paper we consider the optimization of the sample sizes at each vertex
of the calculation tree. The dynamic programming method is used for this aim.
The proposed method allows one to decrease the variance of the system
characteristic estimators.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2013 14:48:44 GMT"
}
] | 1,364,515,200,000 | [
[
"Andronov",
"A.",
""
],
[
"Fioshin",
"M.",
""
]
] |
1303.7201 | Chrisantha Fernando Dr | Chrisantha Fernando, Vera Vasas | Design for a Darwinian Brain: Part 2. Cognitive Architecture | Submitted as Part 2 to Living Machines 2013, Natural History Museum,
London. Code available on github as it is being developed to implement the
cognitive architecture above, here...
https://github.com/ctf20/DarwinianNeurodynamics | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The accumulation of adaptations in an open-ended manner during lifetime
learning is a holy grail in reinforcement learning, intrinsic motivation,
artificial curiosity, and developmental robotics. We present a specification
for a cognitive architecture that is capable of specifying an unlimited range
of behaviors. We then give examples of how it can stochastically explore an
interesting space of adjacent possible behaviors. There are two main novelties:
the first is a proper definition of the fitness of self-generated games such
that interesting games are expected to evolve. The second is a modular and
evolvable behavior language that has systematicity, productivity, and
compositionality, i.e. it is a physical symbol system. A part of the
architecture has already been implemented on a humanoid robot.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2013 18:47:32 GMT"
}
] | 1,364,515,200,000 | [
[
"Fernando",
"Chrisantha",
""
],
[
"Vasas",
"Vera",
""
]
] |
1304.0145 | Soumya Kambhampati | Soumya C. Kambhampati, Thomas Liu | Phase Transition and Network Structure in Realistic SAT Problems | null | Published as student abstract in Proceedings of AAAI 2013
(National Conference on Artificial Intelligence) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental question in Computer Science is understanding when a specific
class of problems goes from being computationally easy to hard. Because of its
generality and applications, the problem of Boolean Satisfiability (aka SAT) is
often used as a vehicle for investigating this question. A signal result from
these studies is that the hardness of SAT problems exhibits a dramatic
easy-to-hard phase transition with respect to the problem constrainedness. Past
studies have however focused mostly on SAT instances generated using uniform
random distributions, where all constraints are independently generated, and
the problem variables are all considered of equal importance. These assumptions
are unfortunately not satisfied by most real problems. Our project aims for a
deeper understanding of hardness of SAT problems that arise in practice. We
study two key questions: (i) How does the easy-to-hard transition change with more
realistic distributions that capture neighborhood sensitivity and
rich-get-richer aspects of real problems and (ii) Can these changes be
explained in terms of the network properties (such as node centrality and
small-worldness) of the clausal networks of the SAT problems. Our results,
based on extensive empirical studies and network analyses, provide important
structural and computational insights into realistic SAT problems. Our
extensive empirical studies show that SAT instances from realistic
distributions do exhibit phase transition, but the transition occurs sooner (at
lower values of constrainedness) than the instances from uniform random
distribution. We show that this behavior can be explained in terms of their
clausal network properties such as eigenvector centrality and small-worldness
(measured indirectly in terms of the clustering coefficients and average node
distance).
| [
{
"version": "v1",
"created": "Sat, 30 Mar 2013 23:21:56 GMT"
}
] | 1,364,860,800,000 | [
[
"Kambhampati",
"Soumya C.",
""
],
[
"Liu",
"Thomas",
""
]
] |
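The abstract above turns on the easy-to-hard phase transition as the clause-to-variable ratio grows. The toy Python experiment below uses plain uniform random 3-SAT and brute-force checking on very small instances, nothing like the realistic distributions or instance sizes the paper studies; it only makes the transition visible, with the satisfiable fraction collapsing near the classic ratio of about 4.27.

```python
import itertools, random

def random_3sat(n_vars, n_clauses):
    """Uniform random 3-SAT: three distinct variables per clause, random signs."""
    return [tuple(random.choice([v, -v])
                  for v in random.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):
    """Brute force: try all 2^n assignments (tiny instances only)."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

n = 10
for ratio in [2.0, 3.0, 4.0, 4.3, 5.0, 6.0]:
    sat = sum(satisfiable(n, random_3sat(n, int(ratio * n))) for _ in range(30))
    print(f"m/n = {ratio:.1f}: {sat}/30 satisfiable")
```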
1304.0620 | A. Anonymous | Heng Zhang, Yan Zhang | Disjunctive Logic Programs versus Normal Logic Programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on the expressive power of disjunctive and normal logic
programs under the stable model semantics over finite, infinite, or arbitrary
structures. A translation from disjunctive logic programs into normal logic
programs is proposed and then proved to be sound over infinite structures. The
equivalence of expressive power of two kinds of logic programs over arbitrary
structures is shown to coincide with that over finite structures, and to coincide
with whether or not NP is closed under complement. Over finite structures, the
intranslatability from disjunctive logic programs to normal logic programs is
also proved if arities of auxiliary predicates and functions are bounded in a
certain way.
| [
{
"version": "v1",
"created": "Tue, 2 Apr 2013 12:59:41 GMT"
}
] | 1,364,947,200,000 | [
[
"Zhang",
"Heng",
""
],
[
"Zhang",
"Yan",
""
]
] |
1304.0806 | Faruk Karaaslan | Faruk Karaaslan, Naim Cagman, Saban Yilmaz | IFP-Intuitionistic fuzzy soft set theory and its applications | This paper has been withdrawn by the author due to crucial errors
in the notation and some problems in the algorithm | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | In this work, we present definition of intuitionistic fuzzy parameterized
(IFP) intuitionistic fuzzy soft set and its operations. Then we define
IFP-aggregation operator to form IFP-intuitionistic fuzzy soft-decision-making
method which allows constructing more efficient decision processes.
| [
{
"version": "v1",
"created": "Tue, 2 Apr 2013 22:10:00 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2016 08:12:43 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Apr 2016 19:56:30 GMT"
}
] | 1,461,024,000,000 | [
[
"Karaaslan",
"Faruk",
""
],
[
"Cagman",
"Naim",
""
],
[
"Yilmaz",
"Saban",
""
]
] |
1304.0897 | Martin Suda | Martin Suda | Duality in STRIPS planning | 6 pages (two columns), 4 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a duality mapping between STRIPS planning tasks. By exchanging
the initial and goal conditions, taking their respective complements, and
swapping for every action its precondition and delete list, one obtains for
every STRIPS task its dual version, which has a solution if and only if the
original does. This is proved by showing that the described transformation
essentially turns progression (forward search) into regression (backward
search) and vice versa.
The duality sheds new light on STRIPS planning by allowing a transfer of
ideas from one search approach to the other. It can be used to construct new
algorithms from old ones, or (equivalently) to obtain new benchmarks from
existing ones. Experiments show that the dual versions of IPC benchmarks are in
general quite difficult for modern planners. This may be seen as a new
challenge. On the other hand, the cases where the dual versions are easier to
solve demonstrate that the duality can also be made useful in practice.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2013 10:08:56 GMT"
}
] | 1,365,033,600,000 | [
[
"Suda",
"Martin",
""
]
] |
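The duality described in the Suda abstract above is mechanical enough to state as code. A minimal sketch under an assumed set-based task representation (facts as strings, actions as precondition/add/delete triples); the paper's formal definitions may differ in detail, but the mapping is an involution, so dual(dual(t)) recovers t.

```python
from typing import NamedTuple, FrozenSet, Tuple

class Action(NamedTuple):
    pre: FrozenSet[str]
    add: FrozenSet[str]
    delete: FrozenSet[str]

class Task(NamedTuple):
    facts: FrozenSet[str]
    init: FrozenSet[str]
    goal: FrozenSet[str]
    actions: Tuple[Action, ...]

def dual(task: Task) -> Task:
    co = lambda s: task.facts - s               # complement within the fact set
    return Task(
        facts=task.facts,
        init=co(task.goal),                     # complemented goal becomes init
        goal=co(task.init),                     # complemented init becomes goal
        actions=tuple(Action(pre=a.delete, add=a.add, delete=a.pre)  # swap pre/delete
                      for a in task.actions),
    )

t = Task(facts=frozenset({"p", "q"}),
         init=frozenset({"p"}), goal=frozenset({"q"}),
         actions=(Action(pre=frozenset({"p"}), add=frozenset({"q"}),
                         delete=frozenset({"p"})),))
assert dual(dual(t)) == t                       # the mapping is an involution
```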
1304.1081 | Michael P. Wellman | Michael P. Wellman | Exploiting Functional Dependencies in Qualitative Probabilistic
Reasoning | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-2-9 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Functional dependencies restrict the potential interactions among variables
connected in a probabilistic network. This restriction can be exploited in
qualitative probabilistic reasoning by introducing deterministic variables and
modifying the inference rules to produce stronger conclusions in the presence
of functional relations. I describe how to accomplish these modifications in
qualitative probabilistic networks by exhibiting the update procedures for
graphical transformations involving probabilistic and deterministic variables
and combinations. A simple example demonstrates that the augmented scheme can
reduce qualitative ambiguity that would arise without the special treatment of
functional dependency. Analysis of qualitative synergy reveals that new
higher-order relations are required to reason effectively about synergistic
interactions among deterministic variables.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:54:31 GMT"
}
] | 1,365,120,000,000 | [
[
"Wellman",
"Michael P.",
""
]
] |
1304.1082 | Max Henrion | Max Henrion, Marek J. Druzdzel | Qualitative Propagation and Scenario-based Explanation of Probabilistic
Reasoning | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-10-20 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comprehensible explanations of probabilistic reasoning are a prerequisite for
wider acceptance of Bayesian methods in expert systems and decision support
systems. A study of human reasoning under uncertainty suggests two different
strategies for explaining probabilistic reasoning: The first, qualitative
belief propagation, traces the qualitative effect of evidence through a belief
network from one variable to the next. This propagation algorithm is an
alternative to the graph reduction algorithms of Wellman (1988) for inference
in qualitative probabilistic networks. It is based on a qualitative analysis of
intercausal reasoning, which is a generalization of Pearl's "explaining away",
and an alternative to Wellman's definition of qualitative synergy. The other,
Scenario-based reasoning, involves the generation of alternative causal
"stories" accounting for the evidence. Comparing a few of the most probable
scenarios provides an approximate way to explain the results of probabilistic
reasoning. Both schemes employ causal as well as probabilistic knowledge.
Probabilities may be presented as phrases and/or numbers. Users can control the
style, abstraction and completeness of explanations.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:54:37 GMT"
}
] | 1,365,120,000,000 | [
[
"Henrion",
"Max",
""
],
[
"Druzdzel",
"Marek J.",
""
]
] |
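The qualitative belief propagation in the Henrion-Druzdzel abstract above pushes signs, not numbers, through the network. A minimal sketch of the underlying sign algebra, assuming the usual conventions ('0' for no influence, '?' for ambiguity); it illustrates only sign product and sign sum, not the full algorithm with intercausal reasoning and message scheduling.

```python
def sign_mul(a, b):
    """Sign of an influence chained through an edge."""
    if '0' in (a, b):
        return '0'
    if '?' in (a, b):
        return '?'
    return '+' if a == b else '-'

def sign_add(a, b):
    """Sign of two influences merging at a node; disagreement is ambiguous."""
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'

# Evidence '+' on X; edges X->Y ('+'), X->Z ('-'), Y->Z ('+'):
eff_y = sign_mul('+', '+')                                   # '+'
eff_z = sign_add(sign_mul('+', '-'), sign_mul(eff_y, '+'))   # '-' merged with '+' -> '?'
print(eff_y, eff_z)
```

The '?' on Z is exactly the kind of qualitative ambiguity that the preceding Wellman abstract proposes to resolve by exploiting functional dependencies.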
1304.1083 | Thomas R. Shultz | Thomas R. Shultz | Managing Uncertainty in Rule Based Cognitive Models | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | Proceedings of the Sixth Conference on Uncertainty in Artificial
Intelligence (1990) (pp. 21-26) | null | UAI-P-1990-PG-21-26 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An experiment replicated and extended recent findings on psychologically
realistic ways of modeling propagation of uncertainty in rule based reasoning.
Within a single production rule, the antecedent evidence can be summarized by
taking the maximum of disjunctively connected antecedents and the minimum of
conjunctively connected antecedents. The maximum certainty factor attached to
each of the rule's conclusions can be scaled down by multiplication with this
summarized antecedent certainty. Heckerman's modified certainty factor
technique can be used to combine certainties for common conclusions across
production rules.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:54:42 GMT"
}
] | 1,625,184,000,000 | [
[
"Shultz",
"Thomas R.",
""
]
] |
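The Shultz abstract above specifies the propagation scheme concretely: max over disjunctively connected antecedents, min over conjunctively connected ones, the rule's conclusion CF scaled by that summary, and parallel combination across rules. A minimal sketch using the standard certainty-factor combination function; Heckerman's modified variant used in the paper may differ in detail.

```python
def antecedent_cf(conjuncts):
    """Each conjunct is a list of disjunctively connected CFs: min of maxes."""
    return min(max(d) for d in conjuncts)

def conclude(rule_cf, conjuncts):
    """Scale the rule's conclusion CF by the summarized antecedent certainty."""
    return rule_cf * antecedent_cf(conjuncts)

def combine(x, y):
    """Parallel combination of two CFs for the same conclusion."""
    if x >= 0 and y >= 0:
        return x + y - x * y
    if x <= 0 and y <= 0:
        return x + y + x * y
    return (x + y) / (1 - min(abs(x), abs(y)))

cf1 = conclude(0.8, [[0.9], [0.6, 0.7]])   # min(0.9, max(0.6, 0.7)) = 0.7 -> 0.56
cf2 = conclude(0.5, [[0.4]])               # 0.2
print(combine(cf1, cf2))                   # 0.648
```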
1304.1084 | Yizong Cheng | Yizong Cheng | Context-Dependent Similarity | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-27-31 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attribute weighting and differential weighting, two major mechanisms for
computing context-dependent similarity or dissimilarity measures are studied
and compared. A dissimilarity measure based on subset size in the context is
proposed and its metrization and application are given. It is also shown that
while all attribute weighting dissimilarity measures are metrics, differential
weighting dissimilarity measures are usually non-metric.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:54:46 GMT"
}
] | 1,365,120,000,000 | [
[
"Cheng",
"Yizong",
""
]
] |
1304.1085 | David Heckerman | David Heckerman | Similarity Networks for the Construction of Multiple-Faults Belief
Networks | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-32-39 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A similarity network is a tool for constructing belief networks for the
diagnosis of a single fault. In this paper, we examine modifications to the
similarity-network representation that facilitate the construction of belief
networks for the diagnosis of multiple coexisting faults.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:54:52 GMT"
},
{
"version": "v2",
"created": "Sat, 16 May 2015 23:57:34 GMT"
}
] | 1,431,993,600,000 | [
[
"Heckerman",
"David",
""
]
] |
1304.1086 | Dekang Lin | Dekang Lin, Randy Goebel | Integrating Probabilistic, Taxonomic and Causal Knowledge in Abductive
Diagnosis | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-40-45 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an abductive diagnosis theory that integrates probabilistic,
causal and taxonomic knowledge. Probabilistic knowledge allows us to select the
most likely explanation; causal knowledge allows us to make reasonable
independence assumptions; taxonomic knowledge allows causation to be modeled at
different levels of detail, and allows observations to be described in different
levels of precision. Unlike most other approaches where a causal explanation is
a hypothesis that one or more causative events occurred, we define an
explanation of a set of observations to be an occurrence of a chain of
causation events. These causation events constitute a scenario where all the
observations are true. We show that the probabilities of the scenarios can be
computed from the conditional probabilities of the causation events. Abductive
reasoning is inherently complex even if only modest expressive power is
allowed. However, our abduction algorithm is exponential only in the number of
observations to be explained, and is polynomial in the size of the knowledge
base. This contrasts with many other abduction procedures that are exponential
in the size of the knowledge base.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:54:58 GMT"
}
] | 1,365,120,000,000 | [
[
"Lin",
"Dekang",
""
],
[
"Goebel",
"Randy",
""
]
] |
1304.1087 | David L Poole | David L. Poole, Gregory M. Provan | What is an Optimal Diagnosis? | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-46-53 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within diagnostic reasoning there have been a number of proposed definitions
of a diagnosis, and thus of the most likely diagnosis, including most probable
posterior hypothesis, most probable interpretation, most probable covering
hypothesis, etc. Most of these approaches assume that the most likely diagnosis
must be computed, and that a definition of what should be computed can be made
a priori, independent of what the diagnosis is used for. We argue that the
diagnostic problem, as currently posed, is incomplete: it does not consider how
the diagnosis is to be used, or the utility associated with the treatment of
the abnormalities. In this paper we analyze several well-known definitions of
diagnosis, showing that the different definitions of the most likely diagnosis
have different qualitative meanings, even given the same input data. We argue
that the most appropriate definition of (optimal) diagnosis needs to take into
account the utility of outcomes and what the diagnosis is used for.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:03 GMT"
}
] | 1,365,120,000,000 | [
[
"Poole",
"David L.",
""
],
[
"Provan",
"Gregory M.",
""
]
] |
1304.1088 | Edward H. Herskovits | Edward H. Herskovits, Gregory F. Cooper | Kutato: An Entropy-Driven System for Construction of Probabilistic
Expert Systems from Databases | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-54-63 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kutato is a system that takes as input a database of cases and produces a
belief network that captures many of the dependence relations represented by
those data. This system incorporates a module for determining the entropy of a
belief network and a module for constructing belief networks based on entropy
calculations. Kutato constructs an initial belief network in which all
variables in the database are assumed to be marginally independent. The entropy
of this belief network is calculated, and that arc is added that minimizes the
entropy of the resulting belief network. Conditional probabilities for an arc
are obtained directly from the database. This process continues until an
entropy-based threshold is reached. We have tested the system by generating
databases from networks using the probabilistic logic-sampling method, and then
using those databases as input to Kutato. The system consistently reproduces
the original belief networks with high fidelity.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:08 GMT"
}
] | 1,365,120,000,000 | [
[
"Herskovits",
"Edward H.",
""
],
[
"Cooper",
"Gregory F.",
""
]
] |
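The Kutato abstract above describes a greedy, entropy-driven loop: start fully disconnected, repeatedly add the arc that most lowers network entropy, and stop at a threshold. A minimal sketch of such a loop, assuming network entropy decomposes as the sum of empirical conditional entropies H(X | Parents(X)) estimated from the cases; Kutato's actual thresholding and implementation details are not reproduced here.

```python
from collections import Counter
from math import log2

def cond_entropy(data, x, parents):
    """Empirical H(x | parents) from a list of dict-valued cases."""
    n = len(data)
    joint = Counter(tuple(row[v] for v in parents + [x]) for row in data)
    marg = Counter(tuple(row[v] for v in parents) for row in data)
    return -sum(c / n * log2(c / marg[key[:-1]]) for key, c in joint.items())

def would_cycle(parents, y, x):
    """True if adding arc y -> x would create a directed cycle."""
    stack, seen = [y], set()
    while stack:
        v = stack.pop()
        if v == x:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(parents[v])
    return False

def greedy_arcs(data, variables, threshold=1e-3):
    parents = {x: [] for x in variables}
    while True:
        best = None
        for x in variables:
            base = cond_entropy(data, x, parents[x])
            for y in variables:
                if y == x or y in parents[x] or would_cycle(parents, y, x):
                    continue
                gain = base - cond_entropy(data, x, parents[x] + [y])
                if gain > threshold and (best is None or gain > best[0]):
                    best = (gain, y, x)
        if best is None:
            return parents
        _, y, x = best
        parents[x].append(y)

data = [{"A": a, "B": a ^ e} for a, e in
        [(0, 0), (0, 0), (1, 0), (1, 0), (0, 1), (1, 0), (0, 0), (1, 0)]]
print(greedy_arcs(data, ["A", "B"]))   # one orientation of the A-B arc is added
```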
1304.1089 | John S. Breese | John S. Breese, Eric J. Horvitz | Ideal Reformulation of Belief Networks | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-64-72 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The intelligent reformulation or restructuring of a belief network can
greatly increase the efficiency of inference. However, time expended for
reformulation is not available for performing inference. Thus, under time
pressure, there is a tradeoff between the time dedicated to reformulating the
network and the time applied to the implementation of a solution. We
investigate this partition of resources into time applied to reformulation and
time used for inference. We shall describe first general principles for
computing the ideal partition of resources under uncertainty. These principles
have applicability to a wide variety of problems that can be divided into
interdependent phases of problem solving. Afterwards, we shall present results of
our empirical study of the problem of determining the ideal amount of time to
devote to searching for clusters in belief networks. In this work, we acquired
and made use of probability distributions that characterize (1) the performance
of alternative heuristic search methods for reformulating a network instance
into a set of cliques, and (2) the time for executing inference procedures on
various belief networks. Given a preference model describing the value of a
solution as a function of the delay required for its computation, the system
selects an ideal time to devote to reformulation.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:14 GMT"
}
] | 1,365,120,000,000 | [
[
"Breese",
"John S.",
""
],
[
"Horvitz",
"Eric J.",
""
]
] |
1304.1090 | David Einav | David Einav, Michael R. Fehling | Computationally-Optimal Real-Resource Strategies | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-73-81 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on managing the cost of deliberation before action. In
many problems, the overall quality of the solution reflects costs incurred and
resources consumed in deliberation as well as the cost and benefit of
execution, when both the resource consumption in deliberation phase, and the
costs in deliberation and execution are uncertain and may be described by
probability distribution functions. A feasible (in terms of resource
consumption) strategy that minimizes the expected total cost is termed
computationally-optimal. For a situation with several independent,
uninterruptible methods to solve the problem, we develop a
pseudopolynomial-time algorithm to construct a generate-and-test computationally
optimal strategy. We show this strategy-construction problem to be NP-complete,
and apply Bellman's Optimality Principle to solve it efficiently.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:20 GMT"
}
] | 1,365,120,000,000 | [
[
"Einav",
"David",
""
],
[
"Fehling",
"Michael R.",
""
]
] |
1304.1091 | David Heckerman | David Heckerman, Eric J. Horvitz | Problem Formulation as the Reduction of a Decision Model | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-82-89 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we extend the QMR-DT probabilistic model for the domain of
internal medicine to include decisions about treatments. In addition, we
describe how we can use the comprehensive decision model to construct a simpler
decision model for a specific patient. In so doing, we transform the task of
problem formulation to that of narrowing of a larger problem.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:26 GMT"
},
{
"version": "v2",
"created": "Sat, 16 May 2015 23:53:50 GMT"
}
] | 1,431,993,600,000 | [
[
"Heckerman",
"David",
""
],
[
"Horvitz",
"Eric J.",
""
]
] |
1304.1092 | Robert P. Goldman | Robert P. Goldman, Eugene Charniak | Dynamic Construction of Belief Networks | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-90-97 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a method for incrementally constructing belief networks. We have
developed a network-construction language similar to a forward-chaining
language using data dependencies, but with additional features for specifying
distributions. Using this language, we can define parameterized classes of
probabilistic models. These parameterized models make it possible to apply
probabilistic reasoning to problems for which it is impractical to have a
single large static model.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:31 GMT"
}
] | 1,365,120,000,000 | [
[
"Goldman",
"Robert P.",
""
],
[
"Charniak",
"Eugene",
""
]
] |
1304.1093 | Solomon Eyal Shimony | Solomon Eyal Shimony, Eugene Charniak | A New Algorithm for Finding MAP Assignments to Belief Networks | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-98-105 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new algorithm for finding maximum a-posteriori (MAP) assignments
of values to belief networks. The belief network is compiled into a network
consisting only of nodes with boolean (i.e. only 0 or 1) conditional
probabilities. The MAP assignment is then found using a best-first search on
the resulting network. We argue that, as one would anticipate, the algorithm is
exponential for the general case, but only linear in the size of the network
for polytrees.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:36 GMT"
}
] | 1,365,120,000,000 | [
[
"Shimony",
"Solomon Eyal",
""
],
[
"Charniak",
"Eugene",
""
]
] |
1304.1094 | K. Bayse | K. Bayse, M. Lejter, Keiji Kanazawa | Reducing Uncertainty in Navigation and Exploration | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-106-113 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A significant problem in designing mobile robot control systems involves
coping with the uncertainty that arises in moving about in an unknown or
partially unknown environment and relying on noisy or ambiguous sensor data to
acquire knowledge about that environment. We describe a control system that
chooses what activity to engage in next on the basis of expectations about how
the information returned as a result of a given activity will improve its
knowledge about the spatial layout of its environment. Certain of the
higher-level components of the control system are specified in terms of
probabilistic decision models whose output is used to mediate the behavior of
lower-level control components responsible for movement and sensing.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:42 GMT"
}
] | 1,365,120,000,000 | [
[
"Bayse",
"K.",
""
],
[
"Lejter",
"M.",
""
],
[
"Kanazawa",
"Keiji",
""
]
] |
1304.1095 | Ingo Beinlich | Ingo Beinlich, Edward H. Herskovits | Ergo: A Graphical Environment for Constructing Bayesian Belief Networks | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-114-121 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe an environment that considerably simplifies the process of
generating Bayesian belief networks. The system has been implemented on readily
available, inexpensive hardware, and provides clarity and high performance. We
present an introduction to Bayesian belief networks, discuss algorithms for
inference with these networks, and delineate the classes of problems that can
be solved with this paradigm. We then describe the hardware and software that
constitute the system, and illustrate Ergo's use with several examples.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:48 GMT"
}
] | 1,365,120,000,000 | [
[
"Beinlich",
"Ingo",
""
],
[
"Herskovits",
"Edward H.",
""
]
] |
1304.1096 | John S. Breese | John S. Breese, Kenneth W. Fertig | Decision Making with Interval Influence Diagrams | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-122-129 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In previous work (Fertig and Breese, 1989; Fertig and Breese, 1990) we
defined a mechanism for performing probabilistic reasoning in influence
diagrams using interval rather than point-valued probabilities. In this paper
we extend these procedures to incorporate decision nodes and interval-valued
value functions in the diagram. We derive the procedures for chance node
removal (calculating expected value) and decision node removal (optimization)
in influence diagrams where lower bounds on probabilities are stored at each
chance node and interval bounds are stored on the value function associated
with the diagram's value node. The outputs of the algorithm are a set of
admissible alternatives for each decision variable and a set of bounds on
expected value based on the imprecision in the input. The procedure can be
viewed as an approximation to a full n-dimensional sensitivity analysis, where n
is the number of imprecise probability distributions in the input. We show the
transformations are optimal and sound. The performance of the algorithm on an
influence diagram is investigated and compared to an exact algorithm.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:54 GMT"
}
] | 1,365,120,000,000 | [
[
"Breese",
"John S.",
""
],
[
"Fertig",
"Kenneth W.",
""
]
] |
1304.1097 | R. Martin Chavez | R. Martin Chavez, Gregory F. Cooper | A Randomized Approximation Algorithm of Logic Sampling | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-130-135 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, researchers in decision analysis and artificial intelligence
(AI) have used Bayesian belief networks to build models of expert opinion.
Using standard methods drawn from the theory of computational complexity,
workers in the field have shown that the problem of exact probabilistic
inference on belief networks almost certainly requires exponential computation
in the worst case [3]. We have previously described a randomized approximation
scheme, called BN-RAS, for computation on belief networks [1, 2, 4]. We gave
precise analytic bounds on the convergence of BN-RAS and showed how to trade
running time for accuracy in the evaluation of posterior marginal
probabilities. We now extend our previous results and demonstrate the
generality of our framework by applying similar mathematical techniques to the
analysis of convergence for logic sampling [7], an alternative simulation
algorithm for probabilistic inference.
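As an illustrative aside, logic sampling amounts to forward sampling with rejection; the sketch below shows the idea on an invented two-node rain/wet-grass network (the network, probabilities, and names are assumptions for illustration, not from the paper):

```python
import random

# Logic sampling on an invented two-node network:
# P(Rain) = 0.2, P(Wet | Rain) = 0.9, P(Wet | ~Rain) = 0.1.
def sample_network():
    rain = random.random() < 0.2
    wet = random.random() < (0.9 if rain else 0.1)
    return rain, wet

def logic_sampling(n_samples, wet_observed=True):
    """Estimate P(Rain | Wet = wet_observed) by forward sampling and
    rejecting every sample that contradicts the evidence."""
    accepted = rain_count = 0
    for _ in range(n_samples):
        rain, wet = sample_network()
        if wet != wet_observed:
            continue                      # reject inconsistent sample
        accepted += 1
        rain_count += rain
    return rain_count / accepted if accepted else float("nan")

print(logic_sampling(100_000))  # ~0.692 = 0.18 / (0.18 + 0.08)
```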
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:55:59 GMT"
}
] | 1,365,120,000,000 | [
[
"Chavez",
"R. Martin",
""
],
[
"Cooper",
"Gregory F.",
""
]
] |
1304.1099 | Peter Haddawy | Peter Haddawy | Time, Chance, and Action | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-147-154 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To operate intelligently in the world, an agent must reason about its
actions. The consequences of an action are a function of both the state of the
world and the action itself. Many aspects of the world are inherently
stochastic, so a representation for reasoning about actions must be able to
express chances of world states as well as indeterminacy in the effects of
actions and other events. This paper presents a propositional temporal
probability logic for representing and reasoning about actions. The logic can
represent the probability that facts hold and events occur at various times. It
can represent the probability that actions and other events affect the future.
It can represent concurrent actions and conditions that hold or change during
execution of an action. The model of probability relates probabilities over
time. The logical language integrates both modal and probabilistic constructs
and can thus represent and distinguish between possibility, probability, and
truth. Several examples illustrating the use of the logic are given.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:56:11 GMT"
}
] | 1,365,120,000,000 | [
[
"Haddawy",
"Peter",
""
]
] |
1304.1100 | Michael C. Horsch | Michael C. Horsch, David L. Poole | A Dynamic Approach to Probabilistic Inference | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-155-161 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a framework for dynamically constructing Bayesian
networks. We introduce the notion of a background knowledge base of schemata,
which is a collection of parameterized conditional probability statements.
These schemata explicitly separate the general knowledge of properties an
individual may have from the specific knowledge of particular individuals that
may have these properties. Knowledge of individuals can be combined with this
background knowledge to create Bayesian networks, which can then be used in any
propagation scheme. We discuss the theory and assumptions necessary for the
implementation of dynamic Bayesian networks, and indicate where our approach
may be useful.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:56:17 GMT"
}
] | 1,365,120,000,000 | [
[
"Horsch",
"Michael C.",
""
],
[
"Poole",
"David L.",
""
]
] |
1304.1101 | Frank Jensen | Frank Jensen, S. K. Anderson | Approximations in Bayesian Belief Universe for Knowledge Based Systems | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-162-169 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When expert systems based on causal probabilistic networks (CPNs) reach a
certain size and complexity, the "combinatorial explosion monster" tends to be
present. We propose an approximation scheme that identifies rarely occurring
cases and excludes these from being processed as ordinary cases in a CPN-based
expert system. Depending on the topology and the probability distributions of
the CPN, the numbers (representing probabilities of state combinations) in the
underlying numerical representation can become very small. Annihilating these
numbers and utilizing the resulting sparseness through data structuring
techniques often results in several orders of magnitude of improvement in the
consumption of computer resources. Bounds on the errors introduced into a
CPN-based expert system through approximations are established. Finally,
reports on empirical studies of applying the approximation scheme to a
real-world CPN are given.
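The core annihilation step, zeroing state combinations whose probability falls below a threshold and renormalizing, then keeping only the surviving entries in a sparse structure, can be sketched as follows (illustrative; the paper additionally derives bounds on the error introduced):

```python
# Annihilate near-zero entries of a probability table and exploit the
# resulting sparseness with a dict representation (invented numbers).
def annihilate(table, eps=1e-4):
    kept = {state: p for state, p in table.items() if p >= eps}
    z = sum(kept.values())                # renormalize the surviving mass
    return {state: p / z for state, p in kept.items()}

table = {("t", "t"): 0.70, ("t", "f"): 0.2999,
         ("f", "t"): 0.00005, ("f", "f"): 0.00005}
print(annihilate(table))  # the two rare state combinations are dropped
```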
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:56:23 GMT"
}
] | 1,365,120,000,000 | [
[
"Jensen",
"Frank",
""
],
[
"Anderson",
"S. K.",
""
]
] |
1304.1102 | Paul E. Lehner | Paul E. Lehner | Robust Inference Policies | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-170-179 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A series of Monte Carlo studies was performed to assess the extent to which
different inference procedures robustly output reasonable belief values in the
context of increasing levels of judgmental imprecision. It was found that, when
compared to an equal-weights linear model, the Bayesian procedures are more
likely to deduce strong support for a hypothesis. But, the Bayesian procedures
are also more likely to strongly support the wrong hypothesis. Bayesian
techniques are more powerful, but are also more error prone.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:56:28 GMT"
}
] | 1,365,120,000,000 | [
[
"Lehner",
"Paul E.",
""
]
] |
1304.1103 | L. Liu | L. Liu, Y. Ma, D. Wilkins, Z. Bian, X. Ying | Minimum Error Tree Decomposition | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-180-185 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a generalization of previous methods for constructing
tree-structured belief networks with hidden variables. The major new feature of
the described method is the ability to produce a tree decomposition even when
there are errors in the correlation data among the input variables. This is an
important extension of existing methods since the correlational coefficients
usually cannot be measured with precision. The technique involves using a
greedy search algorithm that locally minimizes an error function.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:56:34 GMT"
}
] | 1,365,120,000,000 | [
[
"Liu",
"L.",
""
],
[
"Ma",
"Y.",
""
],
[
"Wilkins",
"D.",
""
],
[
"Bian",
"Z.",
""
],
[
"Ying",
"X.",
""
]
] |
1304.1104 | J. W. Miller | J. W. Miller, R. M. Goodman | A Polynomial Time Algorithm for Finding Bayesian Probabilities from
Marginal Constraints | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-186-193 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A method of calculating probability values from a system of marginal
constraints is presented. Previous systems for finding the probability of a
single attribute have either made an independence assumption concerning the
evidence or have required, in the worst case, time exponential in the number of
attributes of the system. In this paper a closed form solution to the
probability of an attribute given the evidence is found. The closed form
solution, however, does not enforce the (non-linear) constraint that all terms
in the underlying distribution be positive. The equation requires O(r^3) steps
to evaluate, where r is the number of independent marginal constraints
describing the system at the time of evaluation. Furthermore, a marginal
constraint may be exchanged with a new constraint, and a new solution
calculated in O(r^2) steps. This method is appropriate for calculating
probabilities in a real-time expert system.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:56:39 GMT"
}
] | 1,365,120,000,000 | [
[
"Miller",
"J. W.",
""
],
[
"Goodman",
"R. M.",
""
]
] |
1304.1105 | Richard E. Neapolitan | Richard E. Neapolitan, James Kenevan | Computation of Variances in Causal Networks | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-194-203 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The causal (belief) network is a well-known graphical structure for
representing independencies in a joint probability distribution. The exact
methods and the approximation methods, which perform probabilistic inference in
causal networks, often treat the conditional probabilities which are stored in
the network as certain values. However, if one takes either a subjectivistic or
a limiting frequency approach to probability, one can never be certain of
probability values. An algorithm for probabilistic inference should not only be
capable of reporting the inferred probabilities; it should also be capable of
reporting the uncertainty in these probabilities relative to the uncertainty in
the probabilities which are stored in the network. In section 2 of this paper a
method is given for determining the prior variances of the probabilities of all
the nodes. Section 3 contains an approximation method for determining the
variances in inferred probabilities.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:56:45 GMT"
}
] | 1,365,120,000,000 | [
[
"Neapolitan",
"Richard E.",
""
],
[
"Kenevan",
"James",
""
]
] |
1304.1106 | Keung-Chi Ng | Keung-Chi Ng, Bruce Abramson | A Sensitivity Analysis of Pathfinder | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-204-211 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge elicitation is one of the major bottlenecks in expert system
design. Systems based on Bayes nets require two types of information--network
structure and parameters (or probabilities). Both must be elicited from the
domain expert. In general, parameters have greater opacity than structure, and
more time is spent in their refinement than in any other phase of elicitation.
Thus, it is important to determine the point of diminishing returns, beyond
which further refinements will promise little (if any) improvement. Sensitivity
analyses address precisely this issue--the sensitivity of a model to the
precision of its parameters. In this paper, we report the results of a
sensitivity analysis of Pathfinder, a Bayes net based system for diagnosing
pathologies of the lymph system. This analysis is intended to shed some light
on the relative importance of structure and parameters to system performance,
as well as the sensitivity of a system based on a Bayes net to noise in its
assessed parameters.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:56:51 GMT"
}
] | 1,365,120,000,000 | [
[
"Ng",
"Keung-Chi",
""
],
[
"Abramson",
"Bruce",
""
]
] |
1304.1107 | Sampath Srinivas | Sampath Srinivas, John S. Breese | IDEAL: A Software Package for Analysis of Influence Diagrams | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-212-219 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | IDEAL (Influence Diagram Evaluation and Analysis in Lisp) is a software
environment for creation and evaluation of belief networks and influence
diagrams. IDEAL is primarily a research tool and provides an implementation of
many of the latest developments in belief network and influence diagram
evaluation in a unified framework. This paper describes IDEAL and some lessons
learned during its development.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:56:56 GMT"
}
] | 1,365,120,000,000 | [
[
"Srinivas",
"Sampath",
""
],
[
"Breese",
"John S.",
""
]
] |
1304.1108 | Tom S. Verma | Tom S. Verma, Judea Pearl | On the Equivalence of Causal Models | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-220-227 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scientists often use directed acyclic graphs (dags) to model the qualitative
structure of causal theories, allowing the parameters to be estimated from
observational data. Two causal models are equivalent if there is no experiment
which could distinguish one from the other. A canonical representation for
causal models is presented which yields an efficient graphical criterion for
deciding equivalence, and provides a theoretical basis for extracting causal
structures from empirical data. This representation is then extended to the
more general case of an embedded causal model, that is, a dag in which only a
subset of the variables are observable. The canonical representation presented
here yields an efficient algorithm for determining when two embedded causal
models reflect the same dependency information. This algorithm leads to a model
theoretic definition of causation in terms of statistical dependencies.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:57:02 GMT"
}
] | 1,365,120,000,000 | [
[
"Verma",
"Tom S.",
""
],
[
"Pearl",
"Judea",
""
]
] |
1304.1109 | Lambert E. Wixson | Lambert E. Wixson | Application of Confidence Intervals to the Autonomous Acquisition of
High-level Spatial Knowledge | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-228-236 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objects in the world usually appear in context, participating in spatial
relationships and interactions that are predictable and expected. Knowledge of
these contexts can be used in the task of using a mobile camera to search for a
specified object in a room. We call this the object search task. This paper is
concerned with representing this knowledge in a manner facilitating its
application to object search while at the same time lending itself to
autonomous learning by a robot. The ability for the robot to learn such
knowledge without supervision is crucial due to the vast number of possible
relationships that can exist for any given set of objects. Moreover, since a
robot will not have an infinite amount of time to learn, it must be able to
determine an order in which to look for possible relationships so as to
maximize the rate at which new knowledge is gained. In effect, there must be a
"focus of interest" operator that allows the robot to choose which examples are
likely to convey the most new information and should be examined first. This
paper demonstrates how a representation based on statistical confidence
intervals allows the construction of a system that achieves the above goals. An
algorithm, based on the Highest Impact First heuristic, is presented as a means
for providing a "focus of interest" with which to control the learning process,
and examples are given.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:57:08 GMT"
}
] | 1,365,120,000,000 | [
[
"Wixson",
"Lambert E.",
""
]
] |
1304.1110 | Ross D. Shachter | Ross D. Shachter, Stig K. Andersen, Kim-Leng Poh | Directed Reduction Algorithms and Decomposable Graphs | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-237-244 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there have been intense research efforts to develop
efficient methods for probabilistic inference in probabilistic influence
diagrams or belief networks. Many people have concluded that the best methods
are those based on undirected graph structures, and that those methods are
inherently superior to those based on node reduction operations on the
influence diagram. We show here that these two approaches are essentially the
same, since they are explicitly or implicitly building and operating on the same
underlying graphical structures. In this paper we examine those graphical
structures and show how this insight can lead to an improved class of directed
reduction methods.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:57:14 GMT"
}
] | 1,365,120,000,000 | [
[
"Shachter",
"Ross D.",
""
],
[
"Andersen",
"Stig K.",
""
],
[
"Poh",
"Kim-Leng",
""
]
] |
1304.1111 | Wilson X. Wen | Wilson X. Wen | Optimal Decomposition of Belief Networks | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-245-256 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, optimum decomposition of belief networks is discussed. Some
methods of decomposition are examined and a new method - the method of Minimum
Total Number of States (MTNS) - is proposed. The problem of optimum belief
network decomposition under our framework, as under all the other frameworks,
is shown to be NP-hard. According to the computational complexity analysis, an
algorithm of belief network decomposition is proposed in (Wen, 1990a) based on
simulated annealing.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:57:20 GMT"
}
] | 1,365,120,000,000 | [
[
"Wen",
"Wilson X.",
""
]
] |
1304.1112 | Michelle Baker | Michelle Baker, Terrance E. Boult | Pruning Bayesian Networks for Efficient Computation | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-257-264 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper analyzes the circumstances under which Bayesian networks can be
pruned in order to reduce computational complexity without altering the
computation for variables of interest. Given a problem instance which consists
of a query and evidence for a set of nodes in the network, it is possible to
delete portions of the network which do not participate in the computation for
the query. Savings in computational complexity can be large when the original
network is not singly connected. Results analogous to those described in this
paper have been derived before [Geiger, Verma, and Pearl 89, Shachter 88] but
the implications for reducing complexity of the computations in Bayesian
networks have not been stated explicitly. We show how a preprocessing step can
be used to prune a Bayesian network prior to using standard algorithms to solve
a given problem instance. We also show how our results can be used in a
parallel distributed implementation in order to achieve greater savings. We
define a computationally equivalent subgraph of a Bayesian network. The
algorithm developed in [Geiger, Verma, and Pearl 89] is modified to construct
the subgraphs described in this paper with O(e) complexity, where e is the
number of edges in the Bayesian network. Finally, we define a minimal
computationally equivalent subgraph and prove that the subgraphs described are
minimal.
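One simple ingredient of such computationally equivalent subgraphs, restricting the network to the ancestors of the query and evidence nodes, can be sketched as follows (a simplification; the construction in the paper also exploits d-separation and establishes minimality):

```python
# Prune a Bayesian network to the ancestors of the query and evidence
# nodes. Barren descendants cannot affect the query, so they may be
# deleted before inference. (Simplified sketch; the full construction
# also removes nodes that are d-separated from the query.)
def prune(parents, query, evidence):
    """parents: dict mapping each node to the list of its parents."""
    keep, stack = set(), list(query | evidence)
    while stack:
        node = stack.pop()
        if node not in keep:
            keep.add(node)
            stack.extend(parents.get(node, []))
    return {n: ps for n, ps in parents.items() if n in keep}

net = {"A": [], "B": ["A"], "C": ["B"], "D": ["C"]}   # chain A->B->C->D
print(prune(net, query={"B"}, evidence={"A"}))        # C and D are dropped
```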
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:57:25 GMT"
}
] | 1,365,120,000,000 | [
[
"Baker",
"Michelle",
""
],
[
"Boult",
"Terrance E.",
""
]
] |
1304.1113 | Jonathan Stillman | Jonathan Stillman | On Heuristics for Finding Loop Cutsets in Multiply-Connected Belief
Networks | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-265-272 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new heuristic algorithm for the problem of finding minimum
size loop cutsets in multiply connected belief networks. We compare this
algorithm to that proposed in [Suermondt and Cooper, 1988]. We provide lower
bounds on the performance of these algorithms with respect to one another and
with respect to optimal. We demonstrate that no heuristic algorithm for this
problem can be guaranteed to produce loop cutsets within a constant difference
from optimal. We discuss experimental results based on randomly generated
networks, and discuss future work and open questions.
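For orientation, a generic greedy loop-cutset heuristic of the kind being compared, which repeatedly strips leaf nodes and then moves a maximum-degree node into the cutset, might be sketched as below (a simplification on the undirected skeleton, not the exact algorithm of either paper):

```python
# Greedy loop-cutset sketch on the undirected skeleton of a network:
# repeatedly delete leaf nodes (they lie on no cycle), then move a
# maximum-degree node into the cutset until no edges remain.
def greedy_cutset(adj):
    adj = {v: set(ns) for v, ns in adj.items()}
    cutset = set()
    def delete(v):
        for u in adj.pop(v):
            adj[u].discard(v)
    while any(adj.values()):
        leaves = [v for v, ns in adj.items() if len(ns) <= 1]
        if leaves:
            delete(leaves[0])
        else:                      # every remaining node lies on a cycle
            v = max(adj, key=lambda v: len(adj[v]))
            delete(v)
            cutset.add(v)
    return cutset

triangle = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
print(greedy_cutset(triangle))     # one node suffices to break the loop
```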
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:57:31 GMT"
}
] | 1,365,120,000,000 | [
[
"Stillman",
"Jonathan",
""
]
] |
1304.1114 | Jaap Suermondt | Jaap Suermondt, Gregory F. Cooper, David Heckerman | A Combination of Cutset Conditioning with Clique-Tree Propagation in the
Pathfinder System | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-273-280 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cutset conditioning and clique-tree propagation are two popular methods for
performing exact probabilistic inference in Bayesian belief networks. Cutset
conditioning is based on decomposition of a subset of network nodes, whereas
clique-tree propagation depends on aggregation of nodes. We describe a means to
combine cutset conditioning and clique-tree propagation in an approach called
aggregation after decomposition (AD). We discuss the application of the AD
method in the Pathfinder system, a medical expert system that offers assistance
with diagnosis in hematopathology.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:57:38 GMT"
}
] | 1,365,120,000,000 | [
[
"Suermondt",
"Jaap",
""
],
[
"Cooper",
"Gregory F.",
""
],
[
"Heckerman",
"David",
""
]
] |
1304.1115 | Enrique H. Ruspini | Enrique H. Ruspini | Possibility as Similarity: the Semantics of Fuzzy Logic | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-281-289 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses fundamental issues on the nature of the concepts and
structures of fuzzy logic, focusing, in particular, on the conceptual and
functional differences that exist between probabilistic and possibilistic
approaches. A semantic model provides the basic framework to define
possibilistic structures and concepts by means of a function that quantifies
proximity, closeness, or resemblance between pairs of possible worlds. The
resulting model is a natural extension, based on multiple conceivability
relations, of the modal logic concepts of necessity and possibility. By
contrast, chance-oriented probabilistic concepts and structures rely on
measures of set extension that quantify the proportion of possible worlds where
a proposition is true. Resemblance between possible worlds is quantified by a
generalized similarity relation: a function that assigns a number between 0 and
1 to every pair of possible worlds. Using this similarity relation, which is a
form of numerical complement of a classic metric or distance, it is possible to
define and interpret the major constructs and methods of fuzzy logic:
conditional and unconditioned possibility and necessity distributions and the
generalized modus ponens of Zadeh.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:57:44 GMT"
}
] | 1,365,120,000,000 | [
[
"Ruspini",
"Enrique H.",
""
]
] |
1304.1116 | Soumitra Dutta | Soumitra Dutta, Piero P. Bonissone | Integrating Case-Based and Rule-Based Reasoning: the Possibilistic
Connection | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-290-300 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rule based reasoning (RBR) and case based reasoning (CBR) have emerged as two
important and complementary reasoning methodologies in artificial intelligence
(AI). For problem solving in complex, real-world situations, it is useful to
integrate RBR and CBR. This paper presents an approach to achieve a compact and
seamless integration of RBR and CBR within the base architecture of rules. The
paper focuses on the possibilistic nature of the approximate reasoning
methodology common to both CBR and RBR. In CBR, the concept of similarity is
cast as the complement of the distance between cases. In RBR the transitivity
of similarity is the basis for the approximate deductions based on the
generalized modus ponens. It is shown that the integration of CBR and RBR is
possible without altering the inference engine of RBR. This integration is
illustrated in the financial domain of mergers and acquisitions. These ideas
have been implemented in a prototype system called MARS.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:57:49 GMT"
}
] | 1,365,120,000,000 | [
[
"Dutta",
"Soumitra",
""
],
[
"Bonissone",
"Piero P.",
""
]
] |
1304.1117 | Ronald R. Yager | Ronald R. Yager | Credibility Discounting in the Theory of Approximate Reasoning | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-301-306 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are concerned with the problem of introducing credibility type information
into reasoning systems. The concept of credibility allows us to discount
information provided by agents. An important characteristic of this kind of
procedure is that a complete lack of credibility, rather than resulting in the
negation of the information provided, results in the nullification of the
information provided. We suggest a representational scheme for credibility
qualification in the theory of approximate reasoning. We discuss the concept of
relative credibility. By this idea we mean to indicate situations in which the
credibility of a piece of evidence is determined by its compatibility with
higher priority evidence. This situation leads to structures very much in the
spirit of nonmonotonic reasoning.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:57:54 GMT"
}
] | 1,365,120,000,000 | [
[
"Yager",
"Ronald R.",
""
]
] |
1304.1118 | Didier Dubois | Didier Dubois, Henri Prade | Updating with Belief Functions, Ordinal Conditioning Functions and
Possibility Measures | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-307-316 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses how a measure of uncertainty representing a state of
knowledge can be updated when new information, which may be pervaded with
uncertainty, becomes available. This problem is considered in various
frameworks, namely: Shafer's evidence theory, Zadeh's possibility theory, and
Spohn's theory of epistemic states. In the first two cases, analogues of
Jeffrey's rule of conditioning are introduced and discussed. The relations
between Spohn's model and possibility theory are emphasized and Spohn's
updating rule is contrasted with the Jeffrey-like rule of conditioning in
possibility theory. Recent results by Shenoy on the combination of ordinal
conditional functions are reinterpreted in the language of possibility theory.
It is shown that Shenoy's combination rule has a well-known possibilistic
counterpart.
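For concreteness, the probabilistic Jeffrey's rule whose analogues are discussed here updates P'(A) = sum_i P(A|E_i) q_i when new evidence assigns probabilities q_i to the cells E_i of a partition; a minimal sketch with invented numbers:

```python
# Jeffrey's rule: update P when uncertain evidence reassigns
# probabilities q[i] to the cells E[i] of a partition.
def jeffrey_update(p_a_given_e, q):
    """P'(A) = sum_i P(A | E_i) * q_i, with sum(q) == 1."""
    return sum(pa * qi for pa, qi in zip(p_a_given_e, q))

# Illustrative numbers: P(A|E1)=0.9, P(A|E2)=0.2; evidence shifts
# the partition probabilities from (0.5, 0.5) to (0.8, 0.2).
print(jeffrey_update([0.9, 0.2], [0.8, 0.2]))  # 0.76
```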
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:58:00 GMT"
}
] | 1,365,120,000,000 | [
[
"Dubois",
"Didier",
""
],
[
"Prade",
"Henri",
""
]
] |
1304.1120 | Philippe Smets | Philippe Smets | The Transferable Belief Model and Other Interpretations of
Dempster-Shafer's Model | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-326-333 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dempster-Shafer's model aims at quantifying degrees of belief But there are
so many interpretations of Dempster-Shafer's theory in the literature that it
seems useful to present the various contenders in order to clarify their
respective positions. We shall successively consider the classical probability
model, the upper and lower probabilities model, Dempster's model, the
transferable belief model, the evidentiary value model, and the provability or
necessity model. None of these models has received the qualification of
Dempster-Shafer. In fact the transferable belief model is our interpretation
not of Dempster's work but of Shafer's work as presented in his book (Shafer
1976, Smets 1988). It is a 'purified' form of Dempster-Shafer's model in which
any connection with the concept of probability has been deleted. Any model for belief
has at least two components: one static that describes our state of belief, the
other dynamic that explains how to update our belief given new pieces of
information. We insist on the fact that both components must be considered in
order to study these models. Too many authors restrict themselves to the static
component and conclude that Dempster-Shafer theory is the same as some other
theory. But once the dynamic component is considered, these conclusions break
down. Any comparison based only on the static component is too restricted. The
dynamic component must also be considered as the originality of the models
based on belief functions lies in its dynamic component.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:58:12 GMT"
}
] | 1,365,120,000,000 | [
[
"Smets",
"Philippe",
""
]
] |
1304.1121 | Prakash P. Shenoy | Prakash P. Shenoy, Glenn Shafer | Valuation-Based Systems for Discrete Optimization | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-334-343 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes valuation-based systems for representing and solving
discrete optimization problems. In valuation-based systems, we represent
information in an optimization problem using variables, sample spaces of
variables, a set of values, and functions that map sample spaces of sets of
variables to the set of values. The functions, called valuations, represent the
factors of an objective function. Solving the optimization problem involves
using two operations called combination and marginalization. Combination tells
us how to combine the factors of the joint objective function. Marginalization
is either maximization or minimization. Solving an optimization problem can be
simply described as finding the marginal of the joint objective function for
the empty set. We state some simple axioms that combination and marginalization
need to satisfy to enable us to solve an optimization problem using local
computation. For optimization problems, the solution method of valuation-based
systems reduces to non-serial dynamic programming. Thus our solution method for
VBS can be regarded as an abstract description of dynamic programming. And our
axioms can be viewed as conditions that permit the use of dynamic programming.
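To make the local-computation reading concrete, here is a toy elimination scheme in the valuation-based spirit, with combination taken as addition of cost factors and marginalization as minimization, which is exactly non-serial dynamic programming (the representation and the tiny problem are invented for illustration):

```python
# Toy valuation-based solver: valuations are additive cost factors,
# combination is addition, and marginalization is minimization over a
# variable. Eliminating variables one at a time is non-serial dynamic
# programming. (Invented representation, not the authors' notation.)
def solve(domains, factors, order):
    factors = list(factors)                       # (scope, fn) pairs
    for var in order:
        touching = [f for f in factors if var in f[0]]
        factors = [f for f in factors if var not in f[0]]
        scope = tuple(sorted({v for s, _ in touching for v in s} - {var}))

        def margin(assign, touching=touching, var=var, scope=scope):
            def total(val):
                full = dict(zip(scope, assign))
                full[var] = val
                return sum(fn(tuple(full[v] for v in s)) for s, fn in touching)
            return min(total(val) for val in domains[var])

        factors.append((scope, margin))           # new valuation on `scope`
    return sum(fn(()) for _, fn in factors)       # all scopes now empty

domains = {"x": [0, 1], "y": [0, 1]}
factors = [(("x",), lambda a: 3 * a[0]),                  # cost 3 if x = 1
           (("x", "y"), lambda a: abs(a[0] - a[1]))]      # cost if x != y
print(solve(domains, factors, order=["y", "x"]))          # minimum cost: 0
```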
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:58:17 GMT"
}
] | 1,365,120,000,000 | [
[
"Shenoy",
"Prakash P.",
""
],
[
"Shafer",
"Glenn",
""
]
] |
1304.1122 | Robert Kennes | Robert Kennes, Philippe Smets | Computational Aspects of the Mobius Transform | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-344-351 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we associate with every (directed) graph G a transformation
called the Mobius transformation of the graph G. The Mobius transformation of
the graph G is of major significance for Dempster-Shafer theory of evidence.
However, because it is computationally very heavy, the Mobius transformation
together with Dempster's rule of combination is a major obstacle to the use of
Dempster-Shafer theory for handling uncertainty in expert systems. The major
contribution of this paper is the discovery of the 'fast Mobius
transformations' of G. These 'fast Mobius transformations' are the fastest
algorithms for computing the Mobius transformation of G. As an easy but
useful application, we provide, via the commonality function, an algorithm for
computing Dempster's rule of combination which is much faster than the usual
one.
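To convey the flavor of a fast Mobius transformation, here is a standard bitmask zeta-transform sketch computing the commonality function Q(A), the sum of m(B) over all B containing A, in O(n 2^n) time; the frame and mass assignment are invented, and the paper's graph-based formulation is more general:

```python
# Fast Mobius-style transform over subsets of a frame of n elements,
# represented as bitmasks 0 .. 2**n - 1. Computes the commonality
# function Q(A) = sum of m(B) over all supersets B of A in O(n * 2**n),
# versus O(4**n) for the naive double loop. (Illustrative sketch.)
def commonality(m, n):
    q = list(m)                        # m[mask] = mass of the subset `mask`
    for i in range(n):                 # fold in one element at a time
        for mask in range(1 << n):
            if not mask & (1 << i):    # A lacks i: add masses of A + {i}
                q[mask] += q[mask | (1 << i)]
    return q

# Frame {a, b}: mass 0.6 on {a}, 0.4 on {a, b}; index bits: a=1, b=2.
m = [0.0, 0.6, 0.0, 0.4]
print(commonality(m, 2))   # [1.0, 1.0, 0.4, 0.4]
```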
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:58:24 GMT"
}
] | 1,365,120,000,000 | [
[
"Kennes",
"Robert",
""
],
[
"Smets",
"Philippe",
""
]
] |
1304.1123 | Alessandro Saffiotti | Alessandro Saffiotti | Using Dempster-Shafer Theory in Knowledge Representation | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-352-361 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we suggest marrying Dempster-Shafer (DS) theory with Knowledge
Representation (KR). Born out of this marriage is the definition of
"Dempster-Shafer Belief Bases", abstract data types representing uncertain
knowledge that use DS theory for representing strength of belief about our
knowledge, and the linguistic structures of an arbitrary KR system for
representing the knowledge itself. A formal result guarantees that both the
properties of the given KR system and of DS theory are preserved. The general
model is exemplified by defining DS Belief Bases where First Order Logic and
(an extension of) KRYPTON are used as KR systems. The implementation problem is
also touched upon.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:58:30 GMT"
}
] | 1,365,120,000,000 | [
[
"Saffiotti",
"Alessandro",
""
]
] |
1304.1124 | Hamid R. Berenji | Hamid R. Berenji, Yung-Yaw Chen, Chuen-Chien Lee, Jyh-Shing Jang, S.
Murugesan | A Hierarchical Approach to Designing Approximate Reasoning-Based
Controllers for Dynamic Physical Systems | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-362-369 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new technique for the design of approximate reasoning
based controllers for dynamic physical systems with interacting goals. In this
approach, goals are achieved based on a hierarchy defined by a control
knowledge base and remain highly interactive during the execution of the
control task. The approach has been implemented in a rule-based computer
program which is used in conjunction with a prototype hardware system to solve
the cart-pole balancing problem in real-time. It provides a complementary
approach to the conventional analytical control methodology, and is of
substantial use where a precise mathematical model of the process being
controlled is not available.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:58:36 GMT"
}
] | 1,365,120,000,000 | [
[
"Berenji",
"Hamid R.",
""
],
[
"Chen",
"Yung-Yaw",
""
],
[
"Lee",
"Chuen-Chien",
""
],
[
"Jang",
"Jyh-Shing",
""
],
[
"Murugesan",
"S.",
""
]
] |
1304.1125 | L. W. Chang | L. W. Chang, Rangasami L. Kashyap | Evidence Combination and Reasoning and Its Application to Real-World
Problem-Solving | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-370-377 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper a new mathematical procedure is presented for combining
different pieces of evidence which are represented in the interval form to
reflect our knowledge about the truth of a hypothesis. Evidences may be
correlated to each other (dependent evidences) or conflicting in supports
(conflicting evidences). First, assuming independent evidences, we propose a
methodology to construct combination rules which obey a set of essential
properties. The method is based on a geometric model. We compare results
obtained from Dempster-Shafer's rule and the proposed combination rules with
both conflicting and non-conflicting data and show that the values generated by
proposed combining rules are in tune with our intuition in both cases.
Secondly, in the case that evidences are known to be dependent, we consider
extensions of the rules derived for handling conflicting evidence. The
performance of the proposed rules is shown by different examples. The results show
that the proposed rules make reasonable decisions under dependent evidence.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:58:42 GMT"
}
] | 1,365,120,000,000 | [
[
"Chang",
"L. W.",
""
],
[
"Kashyap",
"Rangasami L.",
""
]
] |
1304.1126 | F. Correa da Silva | F. Correa da Silva, Alan Bundy | On Some Equivalence Relations between Incidence Calculus and
Dempster-Shafer Theory of Evidence | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-378-383 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Incidence Calculus and Dempster-Shafer Theory of Evidence are both theories
to describe agents' degrees of belief in propositions, thus being appropriate
to represent uncertainty in reasoning systems. This paper presents a
straightforward equivalence proof between some special cases of these theories.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:58:47 GMT"
}
] | 1,365,120,000,000 | [
[
"da Silva",
"F. Correa",
""
],
[
"Bundy",
"Alan",
""
]
] |
1304.1127 | Mary McLeish | Mary McLeish, P. Yao, T. Stirtzinger | Using Belief Functions for Uncertainty Management and Knowledge
Acquisition: An Expert Application | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-384-391 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes recent work on an ongoing project in medical diagnosis
at the University of Guelph. A domain on which experts are not very good at
pinpointing a single disease outcome is explored. On-line medical data is
available over a relatively short period of time. Belief Functions
(Dempster-Shafer theory) are first extracted from data and then modified with
expert opinions. Several methods for doing this are compared and results show
that one formulation statistically outperforms the others, including a method
suggested by Shafer. Expert opinions and statistically derived information
about dependencies among symptoms are also compared. The benefits of using
uncertainty management techniques as methods for knowledge acquisition from
data are discussed.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:58:53 GMT"
}
] | 1,365,120,000,000 | [
[
"McLeish",
"Mary",
""
],
[
"Yao",
"P.",
""
],
[
"Stirtzinger",
"T.",
""
]
] |
1304.1128 | Robert Fung | Robert Fung, S. L. Crawford, Lee A. Appelbaum, Richard M. Tong | An Architecture for Probabilistic Concept-Based Information Retrieval | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-392-404 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While concept-based methods for information retrieval can provide improved
performance over more conventional techniques, they require large amounts of
effort to acquire the concepts and their qualitative and quantitative
relationships. This paper discusses an architecture for probabilistic
concept-based information retrieval which addresses the knowledge acquisition
problem. The architecture makes use of the probabilistic networks technology
for representing and reasoning about concepts and includes a knowledge
acquisition component which partially automates the construction of concept
knowledge bases from data. We describe two experiments that apply the
architecture to the task of retrieving documents about terrorism from a set of
documents from the Reuters news service. The experiments provide positive
evidence that the architecture design is feasible and that there are advantages
to concept-based methods.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:58:58 GMT"
}
] | 1,365,120,000,000 | [
[
"Fung",
"Robert",
""
],
[
"Crawford",
"S. L.",
""
],
[
"Appelbaum",
"Lee A.",
""
],
[
"Tong",
"Richard M.",
""
]
] |
1304.1129 | A. J. Hanson | A. J. Hanson | Amplitude-Based Approach to Evidence Accumulation | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-405-414 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We point out the need to use probability amplitudes rather than probabilities
to model evidence accumulation in decision processes involving real physical
sensors. Optical information processing systems are given as typical examples
of systems that naturally gather evidence in this manner. We derive a new,
amplitude-based generalization of the Hough transform technique used for object
recognition in machine vision. We argue that one should use complex Hough
accumulators and square their magnitudes to get a proper probabilistic
interpretation of the likelihood that an object is present. Finally, we suggest
that probability amplitudes may have natural applications in connectionist
models, as well as in formulating knowledge-based reasoning problems.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:59:04 GMT"
}
] | 1,365,120,000,000 | [
[
"Hanson",
"A. J.",
""
]
] |
1304.1130 | Kathryn Blackmond Laskey | Kathryn Blackmond Laskey | A Probabilistic Reasoning Environment | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-415-422 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A framework is presented for a computational theory of probabilistic
argument. The Probabilistic Reasoning Environment encodes knowledge at three
levels. At the deepest level are a set of schemata encoding the system's domain
knowledge. This knowledge is used to build a set of second-level arguments,
which are structured for efficient recapture of the knowledge used to construct
them. Finally, at the top level is a Bayesian network constructed from the
arguments. The system is designed to facilitate not just propagation of beliefs
and assimilation of evidence, but also the dynamic process of constructing a
belief network, evaluating its adequacy, and revising it when necessary.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:59:09 GMT"
}
] | 1,365,120,000,000 | [
[
"Laskey",
"Kathryn Blackmond",
""
]
] |
1304.1131 | Hung-Trung Nguyen | Hung-Trung Nguyen | On Non-monotonic Conditional Reasoning | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-423-427 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This note is concerned with a formal analysis of the problem of non-monotonic
reasoning in intelligent systems, especially when the uncertainty is taken into
account in a quantitative way. A firm connection between logic and probability
is established by introducing conditioning notions by means of formal
structures that do not rely on quantitative measures. The associated
conditional logic, compatible with conditional probability evaluations, is
non-monotonic relative to additional evidence. Computational aspects of
conditional probability logic are mentioned. The importance of this development
lies in its role in providing a conceptual basis for various forms of evidence
combination and in its significance for unifying multi-valued and non-monotonic
logics.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:59:14 GMT"
}
] | 1,365,120,000,000 | [
[
"Nguyen",
"Hung-Trung",
""
]
] |
1304.1132 | Michael Pittarelli | Michael Pittarelli | Decisions with Limited Observations over a Finite Product Space: the
Klir Effect | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-428-435 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probability estimation by maximum entropy reconstruction of an initial
relative frequency estimate from its projection onto a hypergraph model of the
approximate conditional independence relations exhibited by it is investigated.
The results of this study suggest that use of this estimation technique may
improve the quality of decisions that must be made on the basis of limited
observations over a decomposable finite product space.
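Computationally, maximum entropy reconstruction from marginals can be carried out by iterative proportional fitting; a minimal sketch for a two-variable table with single-variable marginals (an invented example; the hypergraph models in the paper use overlapping multi-variable marginals):

```python
from itertools import product

# Iterative proportional fitting: reconstruct the maximum entropy joint
# distribution consistent with given marginals. (Illustrative
# two-variable case with single-variable constraints only.)
def ipf(marg_x, marg_y, iters=50):
    joint = {(x, y): 0.25 for x, y in product((0, 1), repeat=2)}
    for _ in range(iters):
        for axis, marg in ((0, marg_x), (1, marg_y)):
            totals = {}
            for state, p in joint.items():
                totals[state[axis]] = totals.get(state[axis], 0.0) + p
            joint = {s: p * marg[s[axis]] / totals[s[axis]]
                     for s, p in joint.items()}
    return joint

print(ipf(marg_x={0: 0.3, 1: 0.7}, marg_y={0: 0.6, 1: 0.4}))
# With only single-variable constraints the result is the independent
# product, e.g. P(0, 0) = 0.3 * 0.6 = 0.18.
```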
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:59:20 GMT"
}
] | 1,365,120,000,000 | [
[
"Pittarelli",
"Michael",
""
]
] |
1304.1133 | Stuart Russell | Stuart Russell | Fine-Grained Decision-Theoretic Search Control | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-436-442 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision-theoretic control of search has previously used as its basic unit
of computation the generation and evaluation of a complete set of successors.
Although this simplifies analysis, it results in some lost opportunities for
pruning and satisficing. This paper therefore extends the analysis of the value
of computation to cover individual successor evaluations. The analytic
techniques used may prove useful for control of reasoning in more general
settings. A formula is developed for the expected value of a node, k of whose n
successors have been evaluated. This formula is used to estimate the value of
expanding further successors, using a general formula for the value of a
computation in game-playing developed in earlier work. We exhibit an improved
version of the MGSS* algorithm, giving empirical results for the game of
Othello.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:59:26 GMT"
}
] | 1,365,120,000,000 | [
[
"Russell",
"Stuart",
""
]
] |
1304.1134 | Nic Wilson | Nic Wilson | Rules, Belief Functions and Default Logic | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-443-449 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a natural framework for rules, based on belief
functions, which includes a representation of numerical rules, default rules
and rules allowing and rules not allowing contraposition. In particular it
justifies the use of the Dempster-Shafer Theory for representing a particular
class of rules, the belief calculated being a lower probability given certain
independence assumptions on an underlying space. It shows how a belief function
framework can be generalised to other logics, including a general Monte-Carlo
algorithm for calculating belief, and how a version of Reiter's Default Logic
can be seen as a limiting case of a belief function formalism.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:59:32 GMT"
}
] | 1,365,120,000,000 | [
[
"Wilson",
"Nic",
""
]
] |
1304.1135 | Michael S. K. M. Wong | Michael S. K. M. Wong, P. Lingras | Combination of Evidence Using the Principle of Minimum Information Gain | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-450-459 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most important aspects in any treatment of uncertain information
is the rule of combination for updating the degrees of uncertainty. The theory
of belief functions uses the Dempster rule to combine two belief functions
defined by independent bodies of evidence. However, with limited dependency
information about the accumulated belief the Dempster rule may lead to
unsatisfactory results. The present study suggests a method to determine the
accumulated belief based on the premise that the information gain from the
combination process should be minimum. This method provides a mechanism that is
equivalent to the Bayes rule when all the conditional probabilities are
available and to the Dempster rule when the normalization constant is equal to
one. The proposed principle of minimum information gain is shown to be
equivalent to the maximum entropy formalism, a special case of the principle of
minimum cross-entropy. The application of this principle results in a monotonic
increase in belief with accumulation of consistent evidence. The suggested
approach may provide a more reasonable criterion for identifying conflicts
among various bodies of evidence.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:59:37 GMT"
}
] | 1,365,120,000,000 | [
[
"Wong",
"Michael S. K. M.",
""
],
[
"Lingras",
"P.",
""
]
] |
1304.1136 | Thomas D. Wu | Thomas D. Wu | Probabilistic Evaluation of Candidates and Symptom Clustering for
Multidisorder Diagnosis | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-460-467 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper derives a formula for computing the conditional probability of a
set of candidates, where a candidate is a set of disorders that explain a given
set of positive findings. Such candidate sets are produced by a recent method
for multidisorder diagnosis called symptom clustering. A symptom clustering
represents a set of candidates compactly as a cartesian product of differential
diagnoses. By evaluating the probability of a candidate set, then, a large set
of candidates can be validated or pruned simultaneously. The probability of a
candidate set is then specialized to obtain the probability of a single
candidate. Unlike earlier results, the equation derived here allows the
specification of positive, negative, and unknown symptoms and does not make
assumptions about disorders not in the candidate.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:59:43 GMT"
}
] | 1,365,120,000,000 | [
[
"Wu",
"Thomas D.",
""
]
] |
1304.1137 | John Yen | John Yen, Piero P. Bonissone | Extending Term Subsumption systems for Uncertainty Management | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-468-474 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major difficulty in developing and maintaining very large knowledge bases
originates from the variety of forms in which knowledge is made available to
the KB builder. The objective of this research is to bring together two
complementary knowledge representation schemes: term subsumption languages,
which represent and reason about defining characteristics of concepts, and
approximate reasoning models, which deal with uncertain knowledge and data in
expert systems. Previous works in this area have primarily focused on
probabilistic inheritance. In this paper, we address two other important issues
regarding the integration of term subsumption-based systems and approximate
reasoning models. First, we outline a general architecture that specifies the
interactions between the deductive reasoner of a term subsumption system and an
approximate reasoner. Second, we generalize the semantics of terminological
language so that terminological knowledge can be used to make plausible
inferences. The architecture, combined with the generalized semantics, forms
the foundation of a synergistic tight integration of term subsumption systems
and approximate reasoning models.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:59:48 GMT"
}
] | 1,365,120,000,000 | [
[
"Yen",
"John",
""
],
[
"Bonissone",
"Piero P.",
""
]
] |
1304.1138 | Kuo-Chu Chang | Kuo-Chu Chang, Robert Fung | Refinement and Coarsening of Bayesian Networks | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-475-482 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In almost all situation assessment problems, it is useful to dynamically
contract and expand the states under consideration as assessment proceeds.
Contraction is most often used to combine similar events or low probability
events together in order to reduce computation. Expansion is most often used to
make distinctions of interest which have significant probability in order to
improve the quality of the assessment. Although other uncertainty calculi,
notably Dempster-Shafer [Shafer, 1976], have addressed these operations, there
has not yet been any approach to refining and coarsening state spaces for the
Bayesian Network technology. This paper presents two operations for refining
and coarsening the state space in Bayesian Networks. We also discuss their
practical implications for knowledge acquisition.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 13:59:54 GMT"
}
] | 1,365,120,000,000 | [
[
"Chang",
"Kuo-Chu",
""
],
[
"Fung",
"Robert",
""
]
] |
1304.1139 | Gerhard Paa{\ss} | Gerhard Paa{\ss} | Second Order Probabilities for Uncertain and Conflicting Evidence | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-483-490 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper the elicitation of probabilities from human experts is
considered as a measurement process, which may be disturbed by random
'measurement noise'. Using Bayesian concepts a second order probability
distribution is derived reflecting the uncertainty of the input probabilities.
The algorithm is based on an approximate sample representation of the basic
probabilities. This sample is continuously modified by a stochastic simulation
procedure, the Metropolis algorithm, such that the sequence of successive
samples corresponds to the desired posterior distribution. The procedure is
able to combine inconsistent probabilities according to their reliability and
is applicable to general inference networks with arbitrary structure.
Dempster-Shafer probability mass functions may be included using specific
measurement distributions. The properties of the approach are demonstrated by
numerical experiments.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 14:00:00 GMT"
}
] | 1,365,120,000,000 | [
[
"Paaß",
"Gerhard",
""
]
] |
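The Paaß record above spells out its mechanism: elicited probabilities are treated as noisy measurements, and a Metropolis sampler turns them into a sample-based second-order distribution. A minimal sketch of that idea, assuming a uniform prior, Gaussian measurement noise, and a single elicited probability; the function and parameter names below are hypothetical, not from the paper:

```python
# Minimal sketch (hypothetical, not the paper's implementation): a Metropolis
# sampler for the posterior over the "true" probability behind one noisy
# elicited value, i.e. a sample-based second-order probability distribution.
import math
import random

def metropolis_second_order(p_elicited, noise_sd=0.1, n_samples=5000, step=0.05):
    """Assumes a uniform prior on [0, 1] and Gaussian measurement noise."""
    def log_likelihood(p):
        return -((p_elicited - p) ** 2) / (2.0 * noise_sd ** 2)

    p = min(max(p_elicited, 1e-6), 1.0 - 1e-6)  # start at the elicited value
    samples = []
    for _ in range(n_samples):
        proposal = p + random.gauss(0.0, step)
        # Uniform prior: proposals outside (0, 1) have zero posterior density.
        if 0.0 < proposal < 1.0:
            # Metropolis acceptance test for a symmetric proposal kernel.
            if random.random() < math.exp(log_likelihood(proposal) - log_likelihood(p)):
                p = proposal
        samples.append(p)
    return samples

samples = metropolis_second_order(0.8)
print(sum(samples) / len(samples))  # posterior mean of the second-order distribution
```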
1304.1140 | Linda C. van der Gaag | Linda C. van der Gaag | Computing Probability Intervals Under Independency Constraints | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-491-497 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many AI researchers argue that probability theory is only capable of dealing
with uncertainty in situations where a full specification of a joint
probability distribution is available, and conclude that it is not suitable for
application in knowledge-based systems. Probability intervals, however,
constitute a means for expressing incompleteness of information. We present a
method for computing such probability intervals for probabilities of interest
from a partial specification of a joint probability distribution. Our method
improves on earlier approaches by allowing for independency relationships
between statistical variables to be exploited.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 14:00:05 GMT"
}
] | 1,365,120,000,000 | [
[
"van der Gaag",
"Linda C.",
""
]
] |
1304.1141 | Michael Shwe | Michael Shwe, Gregory F. Cooper | An Empirical Analysis of Likelihood-Weighting Simulation on a Large,
Multiply-Connected Belief Network | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-498-508 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyzed the convergence properties of likelihood-weighting algorithms on
a two-level, multiply connected, belief-network representation of the QMR
knowledge base of internal medicine. Specifically, on two difficult diagnostic
cases, we examined the effects of Markov blanket scoring and importance
sampling, demonstrating that Markov blanket scoring and self-importance sampling
significantly improve the convergence of the simulation on our model.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 14:00:12 GMT"
}
] | 1,365,120,000,000 | [
[
"Shwe",
"Michael",
""
],
[
"Cooper",
"Gregory F.",
""
]
] |
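The likelihood-weighting scheme analyzed in the Shwe and Cooper record above can be illustrated on a toy two-node network (disease D, finding F), not their QMR-based model: unobserved nodes are sampled from their priors, and each sample is weighted by the likelihood of the evidence. All numbers below are made up for the example:

```python
# Toy sketch of likelihood weighting on D -> F, estimating P(D=1 | F=1).
# The network and its parameters are hypothetical, not the QMR model.
import random

P_D = 0.01                        # prior P(D=1)
P_F_GIVEN_D = {1: 0.9, 0: 0.05}   # P(F=1 | D)

def likelihood_weighting(n_samples=100_000, evidence_f=1):
    weighted_true = total_weight = 0.0
    for _ in range(n_samples):
        d = 1 if random.random() < P_D else 0      # sample D from its prior
        p_f = P_F_GIVEN_D[d]
        w = p_f if evidence_f == 1 else 1.0 - p_f  # weight by evidence likelihood
        weighted_true += w * d
        total_weight += w
    return weighted_true / total_weight

# Exact answer: 0.9*0.01 / (0.9*0.01 + 0.05*0.99), roughly 0.154
print(likelihood_weighting())
```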
1304.1142 | David Sher | David Sher | Towards a Normative Theory of Scientific Evidence | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-509-517 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A scientific reasoning system makes decisions using objective evidence in the
form of independent experimental trials, propositional axioms, and constraints
on the probabilities of events. As a first step towards such a system, we propose a
system that derives probability intervals from objective evidence in those
forms. Our reasoning system can manage uncertainty about data and rules in a
rule based expert system. We expect that our system will be particularly
applicable to diagnosis and analysis in domains with a wealth of experimental
evidence such as medicine. We discuss limitations of this solution and propose
future directions for this research. This work can be considered a
generalization of Nilsson's "probabilistic logic" [Nil86] to intervals and
experimental observations.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 14:00:17 GMT"
}
] | 1,365,120,000,000 | [
[
"Sher",
"David",
""
]
] |
1304.1143 | Mary McLeish | Mary McLeish | A Model for Non-Monotonic Reasoning Using Dempster's Rule | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-518-528 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considerable attention has been given to the problem of non-monotonic
reasoning in a belief function framework. Earlier work (M. Ginsberg) proposed
solutions introducing meta-rules which recognized conditional independencies in
a probabilistic sense. More recently, an ε-calculus formulation of default
reasoning (J. Pearl) shows that the application of Dempster's rule to a
non-monotonic situation produces erroneous results. This paper presents a new
belief function interpretation of the problem which combines the rules in a way
which is more compatible with probabilistic results and respects conditions of
independence necessary for the application of Dempster's combination rule. A
new general framework for combining conflicting evidence is also proposed in
which the normalization factor becomes modified. This produces more intuitively
acceptable results.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 14:00:23 GMT"
}
] | 1,365,120,000,000 | [
[
"McLeish",
"Mary",
""
]
] |
1304.1144 | Philippe Smets | Philippe Smets, Yen-Teh Hsia | Default Reasoning and the Transferable Belief Model | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-529-537 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inappropriate use of Dempster's rule of combination has led some authors to
reject the Dempster-Shafer model, arguing that it leads to supposedly
unacceptable conclusions when defaults are involved. A most classic example is
about the penguin Tweety. This paper will successively present: the origin of
the mismanagement of the Tweety example; two types of default; the correct
solution for both types based on the transferable belief model (our
interpretation of the Dempster-Shafer model (Shafer 1976, Smets 1988)). Except
when explicitly stated, all belief functions used in this paper are simple
support functions, i.e. belief functions for which only one proposition (the
focus) of the frame of discernment receives a positive basic belief mass with
the remaining mass being given to the tautology. Each belief function will be
described by its focus and the weight of the focus (e.g. m(A)=.9). Computation
of the basic belief masses is always performed by vacuously extending each
belief function to the product space built from all variables involved,
combining them on that space by Dempster's rule of combination, and projecting
the result to the space corresponding to each individual variable.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 14:00:29 GMT"
}
] | 1,365,120,000,000 | [
[
"Smets",
"Philippe",
""
],
[
"Hsia",
"Yen-Teh",
""
]
] |
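Both the Smets and Hsia record above and the McLeish record before it turn on Dempster's rule of combination and its normalization by the non-conflicting mass. A self-contained sketch of the rule for basic belief assignments, using frozensets as focal elements; the Tweety-style masses are illustrative, not taken from either paper:

```python
# Sketch of Dempster's rule of combination for basic belief assignments
# (dicts mapping frozenset focal elements to masses). Hypothetical example.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:  # non-empty intersection keeps the product mass
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:      # empty intersection is conflicting mass
                conflict += w1 * w2
    # Normalization by the non-conflicting mass -- the step the McLeish
    # record above proposes to modify for conflicting evidence.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

frame = frozenset({"flies", "not_flies"})          # the tautology
m_bird = {frozenset({"flies"}): 0.9, frame: 0.1}   # simple support function
m_penguin = {frozenset({"not_flies"}): 0.95, frame: 0.05}
print(dempster_combine(m_bird, m_penguin))
```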
1304.1145 | Dan Geiger | Dan Geiger, David Heckerman | Separable and transitive graphoids | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-538-545 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine three probabilistic formulations of the sentence "a and b are
totally unrelated with respect to a given set of variables U". First, two
variables a and b are totally independent if they are independent given any
value of any subset of the variables in U. Second, two variables are totally
uncoupled if U can be partitioned into two marginally independent sets
containing a and b respectively. Third, two variables are totally disconnected
if the corresponding nodes are disconnected in every belief network
representation. We explore the relationship between these three formulations of
unrelatedness and explain their relevance to the process of acquiring
probabilistic knowledge from human experts.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 14:00:34 GMT"
},
{
"version": "v2",
"created": "Sun, 19 May 2013 19:42:17 GMT"
},
{
"version": "v3",
"created": "Sat, 16 May 2015 23:58:55 GMT"
}
] | 1,431,993,600,000 | [
[
"Geiger",
"Dan",
""
],
[
"Heckerman",
"David",
""
]
] |
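The three notions in the Geiger and Heckerman record above can be checked directly on small joint distributions. A sketch of the "totally uncoupled" test, on a hypothetical three-variable distribution built so that {a} is marginally independent of {b, c}; every name and number below is illustrative:

```python
# Sketch: test whether variables a and b are "totally uncoupled", i.e. whether
# U partitions into two marginally independent sets containing a and b.
from itertools import combinations, product

p_a = {0: 0.3, 1: 0.7}
p_bc = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
# Hypothetical joint over U = {a, b, c}, with a independent of (b, c).
joint = {(a, b, c): p_a[a] * p_bc[(b, c)]
         for a, b, c in product([0, 1], repeat=3)}

U = ["a", "b", "c"]
idx = {v: i for i, v in enumerate(U)}

def marginal(vars_):
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[idx[v]] for v in vars_)
        out[key] = out.get(key, 0.0) + p
    return out

def independent(set1, set2, tol=1e-9):
    m1, m2, m12 = marginal(set1), marginal(set2), marginal(set1 + set2)
    return all(abs(m12[k1 + k2] - m1[k1] * m2[k2]) < tol
               for k1 in m1 for k2 in m2)

def totally_uncoupled(a, b):
    rest = [v for v in U if v not in (a, b)]
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            s1 = [a] + list(extra)              # side containing a
            s2 = [v for v in U if v not in s1]  # side containing b
            if independent(s1, s2):
                return True
    return False

print(totally_uncoupled("a", "b"))  # True: {a} and {b, c} are independent
```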
1304.1146 | Bo Chamberlain | Bo Chamberlain, Finn Verner Jensen, Frank Jensen, Torsten Nordahl | Analysis in HUGIN of Data Conflict | Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI1990) | null | null | UAI-P-1990-PG-546-554 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After a brief introduction to causal probabilistic networks and the HUGIN
approach, the problem of conflicting data is discussed. A measure of conflict
is defined, and it is used in the medical diagnostic system MUNIN. Finally, it
is discussed how to distinguish between conflicting data and a rare case.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 14:00:40 GMT"
}
] | 1,365,120,000,000 | [
[
"Chamberlain",
"Bo",
""
],
[
"Jensen",
"Finn Verner",
""
],
[
"Jensen",
"Frank",
""
],
[
"Nordahl",
"Torsten",
""
]
] |
1304.1402 | Bernardo Cuenca Grau | Bernardo Cuenca Grau, Boris Motik, Giorgos Stoilos, Ian Horrocks | Computing Datalog Rewritings beyond Horn Ontologies | 14 pages. To appear at IJCAI 2013 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rewriting-based approaches for answering queries over an OWL 2 DL ontology
have so far been developed mainly for Horn fragments of OWL 2 DL. In this
paper, we study the possibilities of answering queries over non-Horn ontologies
using datalog rewritings. We prove that this is impossible in general even for
very simple ontology languages, and even if PTIME = NP. Furthermore, we present
a resolution-based procedure for $\mathcal{SHI}$ ontologies that, if it terminates,
produces a datalog rewriting of the ontology. Our procedure necessarily
terminates on DL-Lite_{bool}^H ontologies---an extension of OWL 2 QL with
transitive roles and Boolean connectives.
| [
{
"version": "v1",
"created": "Thu, 4 Apr 2013 15:31:45 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Apr 2013 10:49:12 GMT"
}
] | 1,365,465,600,000 | [
[
"Grau",
"Bernardo Cuenca",
""
],
[
"Motik",
"Boris",
""
],
[
"Stoilos",
"Giorgos",
""
],
[
"Horrocks",
"Ian",
""
]
] |
1304.1491 | Fahiem Bacchus | Fahiem Bacchus | Lp : A Logic for Statistical Information | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-1-6 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This extended abstract presents a logic, called Lp, that is capable of
representing and reasoning with a wide variety of both qualitative and
quantitative statistical information. The advantage of this logical formalism
is that it offers a declarative representation of statistical knowledge;
knowledge represented in this manner can be used for a variety of reasoning
tasks. The logic differs from previous work in probability logics in that it
uses a probability distribution over the domain of discourse, whereas most
previous work (e.g., Nilsson [2], Scott et al. [3], Gaifman [4], Fagin et al.
[5]) has investigated the attachment of probabilities to the sentences of the
logic (also, see Halpern [6] and Bacchus [7] for further discussion of the
differences). The logic Lp possesses some further important features. First, Lp
is a superset of first order logic, hence it can represent ordinary logical
assertions. This means that Lp provides a mechanism for integrating statistical
information and reasoning about uncertainty into systems based solely on logic.
Second, Lp possesses transparent semantics, based on sets and probabilities of
those sets. Hence, knowledge represented in Lp can be understood in terms of
the simple primitive concepts of sets and probabilities. And finally, there
is a sound proof theory that has wide coverage (the proof theory is complete
for certain classes of models). The proof theory captures a sufficient range of
valid inferences to subsume most previous probabilistic uncertainty reasoning
systems. For example, the linear constraints like those generated by Nilsson's
probabilistic entailment [2] can be generated by the proof theory, and the
Bayesian inference underlying belief nets [8] can be performed. In addition,
the proof theory integrates quantitative and qualitative reasoning as well as
statistical and logical reasoning. In the next section we briefly examine
previous work in probability logics, comparing it to Lp. Then we present some
of the varieties of statistical information that Lp is capable of expressing.
After this we present, briefly, the syntax, semantics, and proof theory of the
logic. We conclude with a few examples of knowledge representation and
reasoning in Lp, pointing out the advantages of the declarative representation
offered by Lp. We close with a brief discussion of probabilities as degrees of
belief, indicating how such probabilities can be generated from statistical
knowledge encoded in Lp. The reader who is interested in a more complete
treatment should consult Bacchus [7].
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:36:47 GMT"
}
] | 1,365,379,200,000 | [
[
"Bacchus",
"Fahiem",
""
]
] |
1304.1492 | Kenneth Basye | Kenneth Basye, Thomas L. Dean | Map Learning with Indistinguishable Locations | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-7-13 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nearly all spatial reasoning problems involve uncertainty of one sort or
another. Uncertainty arises due to the inaccuracies of sensors used in
measuring distances and angles. We refer to this as directional uncertainty.
Uncertainty also arises in combining spatial information when one location is
mistakenly identified with another. We refer to this as recognition
uncertainty. Most problems in constructing spatial representations (maps) for
the purpose of navigation involve both directional and recognition uncertainty.
In this paper, we show that a particular class of spatial reasoning problems
involving the construction of representations of large-scale space can be
solved efficiently even in the presence of directional and recognition
uncertainty. We pay particular attention to the problems that arise due to
recognition uncertainty.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:36:53 GMT"
}
] | 1,365,379,200,000 | [
[
"Basye",
"Kenneth",
""
],
[
"Dean",
"Thomas L.",
""
]
] |
1304.1493 | Carlo Berzuini | Carlo Berzuini, Riccardo Bellazzi, Silvana Quaglini | Temporal Reasoning with Probabilities | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-14-21 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we explore representations of temporal knowledge based upon the
formalism of Causal Probabilistic Networks (CPNs). Two different
"continuous-time" representations are proposed. In the first, the CPN includes
variables representing "event-occurrence times", possibly on different time
scales, and variables representing the "state" of the system at these times. In
the second, the CPN describes the influences between random variables with
values in () representing dates, i.e. time-points associated with the
occurrence of relevant events. However, structuring a system of inter-related
dates as a network where all links commit to a single specific notion of cause
and effect is in general far from trivial and leads to severe difficulties. We
claim that we should recognize explicitly different kinds of relation between
dates, such as "cause", "inhibition", "competition", etc., and propose a method
whereby these relations are coherently embedded in a CPN using additional
auxiliary nodes corresponding to "instrumental" variables. Also discussed,
though not covered in detail, is the topic concerning how the quantitative
specifications to be inserted in a temporal CPN can be learned from specific
data.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:36:59 GMT"
}
] | 1,365,379,200,000 | [
[
"Berzuini",
"Carlo",
""
],
[
"Bellazzi",
"Riccardo",
""
],
[
"Quaglini",
"Silvana",
""
]
] |
1304.1494 | Piero P. Bonissone | Piero P. Bonissone | Now that I Have a Good Theory of Uncertainty, What Else Do I Need? | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-22-33 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rather than discussing the isolated merits of a nominative theory of
uncertainty, this paper focuses on a class of problems, referred to as Dynamic
Classification Problem (DCP), which requires the integration of many theories,
including a prescriptive theory of uncertainty. We start by analyzing the
Dynamic Classification Problem and by defining its induced requirements on a
supporting (plausible) reasoning system. We provide a summary of the underlying
theory (based on the semantics of many-valued logics) and illustrate the
constraints imposed upon it to ensure the modularity and computational
performance required by the applications. We describe the technologies used for
knowledge engineering (such as an object-based simulator to exercise requirements,
and development tools to build the Knowledge Base and functionally validate
it). We emphasize the difference between development environment and run-time
system, describe the rule cross-compiler, and the real-time inference engine
with meta-reasoning capabilities. Finally, we illustrate how our proposed
technology satisfies the DCP's requirements and analyze some of the lessons
learned from its applications to situation assessment problems for Pilot's
Associate and Submarine Commander Associate.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:37:05 GMT"
}
] | 1,365,379,200,000 | [
[
"Bonissone",
"Piero P.",
""
]
] |
1304.1495 | Piero P. Bonissone | Piero P. Bonissone, David A. Cyrluk, James W. Goodwin, Jonathan
Stillman | Uncertainty and Incompleteness | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-34-45 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two major difficulties in using default logics are their intractability and
the problem of selecting among multiple extensions. We propose an approach to
these problems based on integrating nonmonotonic reasoning with plausible
reasoning based on triangular norms. A previously proposed system for reasoning
with uncertainty (RUM) performs uncertain monotonic inferences on an acyclic
graph. We have extended RUM to allow nonmonotonic inferences and cycles within
nonmonotonic rules. By restricting the size and complexity of the nonmonotonic
cycles we can still perform efficient inferences. Uncertainty measures provide
a basis for deciding among multiple defaults. Different algorithms and
heuristics for finding the optimal defaults are discussed.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:37:11 GMT"
}
] | 1,365,379,200,000 | [
[
"Bonissone",
"Piero P.",
""
],
[
"Cyrluk",
"David A.",
""
],
[
"Goodwin",
"James W.",
""
],
[
"Stillman",
"Jonathan",
""
]
] |
1304.1496 | Lashon B. Booker | Lashon B. Booker, Naveen Hota, Connie Loggia Ramsey | BaRT: A Bayesian Reasoning Tool for Knowledge Based Systems | Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989) | null | null | UAI-P-1989-PG-46-53 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the technology for building knowledge based systems has matured, important
lessons have been learned about the relationship between the architecture of a
system and the nature of the problems it is intended to solve. We are
implementing a knowledge engineering tool called BART that is designed with
these lessons in mind. BART is a Bayesian reasoning tool that makes belief
networks and other probabilistic techniques available to knowledge engineers
building classificatory problem solvers. BART has already been used to develop
a decision aid for classifying ship images, and it is currently being used to
manage uncertainty in systems concerned with analyzing intelligence reports.
This paper discusses how state-of-the-art probabilistic methods fit naturally
into a knowledge based approach to classificatory problem solving, and
describes the current capabilities of BART.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:37:17 GMT"
}
] | 1,365,379,200,000 | [
[
"Booker",
"Lashon B.",
""
],
[
"Hota",
"Naveen",
""
],
[
"Ramsey",
"Connie Loggia",
""
]
] |