id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
1304.3439 | Benjamin N. Grosof | Benjamin N. Grosof | Evidential Confirmation as Transformed Probability | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-185-192 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A considerable body of work in AI has been concerned with aggregating
measures of confirmatory and disconfirmatory evidence for a common set of
propositions. Claiming classical probability to be inadequate or inappropriate,
several researchers have gone so far as to invent new formalisms and methods.
We show how to represent two major such alternative approaches to evidential
confirmation not only in terms of transformed (Bayesian) probability, but also
in terms of each other. This unifies two of the leading approaches to
confirmation theory, by showing that a revised MYCIN Certainty Factor method
[12] is equivalent to a special case of Dempster-Shafer theory. It yields a
well-understood axiomatic basis, i.e. conditional independence, to interpret
previous work on quantitative confirmation theory. It substantially resolves
the "taxe-them-or-leave-them" problem of priors: MYCIN had to leave them out,
while PROSPECTOR had to have them in. It recasts some of confirmation theory's
advantages in terms of the psychological accessibility of probabilistic
information in different (transformed) formats. Finally, it helps to unify the
representation of uncertain reasoning (see also [11]).
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:35 GMT"
}
] | 1,365,984,000,000 | [
[
"Grosof",
"Benjamin N.",
""
]
] |
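The record above relates certainty factors to transformed probability. A minimal sketch of one common probabilistic reading of MYCIN-style certainty factors (the paper's revised method may differ in detail; the prior and posterior values below are made up):

```python
def certainty_factor(p_h: float, p_h_given_e: float) -> float:
    """One common probabilistic reading of a MYCIN-style certainty
    factor: the relative change evidence E induces in belief about H.
    Assumes 0 < p_h < 1."""
    if p_h_given_e >= p_h:                        # confirming evidence
        return (p_h_given_e - p_h) / (1.0 - p_h)
    return (p_h_given_e - p_h) / p_h              # disconfirming evidence

# Illustrative values: a prior of 0.3 raised to 0.6 gives CF ~ 0.43.
print(certainty_factor(0.3, 0.6))
```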
1304.3440 | Ronald P. Loui | Ronald P. Loui | Interval-Based Decisions for Reasoning Systems | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-193-200 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This essay looks at decision-making with interval-valued probability
measures. Existing decision methods have either supplemented expected utility
methods with additional criteria of optimality, or have attempted to supplement
the interval-valued measures. We advocate a new approach, which makes the
following questions moot: 1. which additional criteria to use, and 2. how wide
intervals should be. In order to implement the approach, we need more
epistemological information. Such information can be generated by a rule of
acceptance with a parameter that allows various attitudes toward error, or can
simply be declared. In sketch, the argument is: 1. probability intervals are
useful and natural in AI systems; 2. wide intervals avoid error, but are
useless in some risk-sensitive decision-making; 3. one may obtain narrower
intervals if one is less cautious; 4. if bodies of knowledge can be ordered by
their caution, one should perform the decision analysis with the acceptable
body of knowledge that is the most cautious, of those that are useful. The
resulting behavior differs from that of a behavioral probabilist (a Bayesian)
because in the proposal, 5. intervals based on successive bodies of knowledge
are not always nested; 6. if the agent uses a probability for a particular
decision, she need not commit to that probability for credence or future
decision; and 7. there may be no acceptable body of knowledge that is useful;
hence, sometimes no decision is mandated.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:41 GMT"
}
] | 1,365,984,000,000 | [
[
"Loui",
"Ronald P.",
""
]
] |
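A minimal sketch of the interval-valued decision setting described above, assuming a two-outcome act whose win probability is known only up to an interval (the utilities and bounds are made up). Because expected utility is linear in p, its extremes occur at the interval endpoints; overlapping intervals illustrate point 7, where no decision is mandated:

```python
def eu_interval(p_lo, p_hi, u_win, u_lose):
    """Bounds on expected utility when the probability of the good
    outcome is only known to lie in [p_lo, p_hi]. EU is linear in p,
    so the extremes occur at the interval endpoints."""
    eus = [p * u_win + (1 - p) * u_lose for p in (p_lo, p_hi)]
    return min(eus), max(eus)

# Made-up acts: with a wide interval the gamble's EU range overlaps the
# sure thing's, so neither dominates and no decision is mandated.
print(eu_interval(0.2, 0.8, 100, -50))   # gamble     -> (-20.0, 70.0)
print(eu_interval(1.0, 1.0, 10, 10))     # sure thing -> (10.0, 10.0)
```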
1304.3441 | James E. Corter | James E. Corter, Mark A. Gluck | Machine Generalization and Human Categorization: An
Information-Theoretic View | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-201-207 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In designing an intelligent system that must be able to explain its reasoning
to a human user, or to provide generalizations that the human user finds
reasonable, it may be useful to take into consideration psychological data on
what types of concepts and categories people naturally use. The psychological
literature on concept learning and categorization provides strong evidence that
certain categories are more easily learned, recalled, and recognized than
others. We show here how a measure of the informational value of a category
predicts the results of several important categorization experiments better
than standard alternative explanations. This suggests that information-based
approaches to machine generalization may prove particularly useful and natural
for human users of the systems.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:47 GMT"
}
] | 1,365,984,000,000 | [
[
"Corter",
"James E.",
""
],
[
"Gluck",
"Mark A.",
""
]
] |
1304.3442 | Samuel Holtzman | Samuel Holtzman, John S. Breese | Exact Reasoning Under Uncertainty | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-208-216 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on designing expert systems to support decision making in
complex, uncertain environments. In this context, our research indicates that
strictly probabilistic representations, which enable the use of
decision-theoretic reasoning, are highly preferable to recently proposed
alternatives (e.g., fuzzy set theory and Dempster-Shafer theory). Furthermore,
we discuss the language of influence diagrams and a corresponding methodology
-- decision analysis -- that allows decision theory to be used effectively and
efficiently as a decision-making aid. Finally, we use RACHEL, a system that
helps infertile couples select medical treatments, to illustrate the
methodology of decision analysis as a basis for expert decision systems.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:53 GMT"
}
] | 1,365,984,000,000 | [
[
"Holtzman",
"Samuel",
""
],
[
"Breese",
"John S.",
""
]
] |
1304.3443 | Alf C. Zimmer | Alf C. Zimmer | The Estimation of Subjective Probabilities via Categorical Judgments of
Uncertainty | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-217-224 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Theoretically as well as experimentally it is investigated how people
represent their knowledge in order to make decisions or to share their
knowledge with others. Experiment 1 probes into the ways people gather
information about the frequencies of events and how the requested response
mode, that is, numerical vs. verbal estimates, interferes with this knowledge.
The least interference occurs if the subjects are allowed to give verbal
responses. From this it is concluded that processing knowledge about
uncertainty categorically, that is, by means of verbal expressions, imposes
less mental workload on the decision maker than numerical processing.
Possibility theory is used as a framework for modeling the individual usage of
verbal categories for grades of uncertainty. The 'elastic' constraints on the
verbal expressions for every single subject are determined in Experiment 2 by
means of sequential calibration. In further experiments it is shown that the
superiority of the verbal processing of knowledge about uncertainty quite
generally reduces persistent biases reported in the literature: conservatism
(Experiment 3) and negligence of regression (Experiment 4). The reanalysis of
Hormann's data reveals that in verbal judgments people exhibit sensitivity to
base rates and are not prone to the conjunction fallacy. In a final experiment
(5) about predictions in a real-life situation it turns out that in a numerical
forecasting task subjects restricted themselves to those parts of their
knowledge which are numerical. On the other hand subjects in a verbal
forecasting task accessed verbally as well as numerically stated knowledge.
Forecasting is structurally related to the estimation of probabilities for rare
events insofar as supporting and contradicting arguments have to be evaluated
and the choice of the final judgment has to be justified according to the
evidence brought forward. In order to assist people in such choice situations a
formal model for the interactive checking of arguments has been developed. The
model transforms the normal-language quantifiers used in the arguments into
fuzzy numbers and evaluates the given train of arguments by means of fuzzy
numerical operations. Ambiguities in the meanings of quantifiers are resolved
interactively.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:57:59 GMT"
}
] | 1,365,984,000,000 | [
[
"Zimmer",
"Alf C.",
""
]
] |
1304.3444 | Bruce Abramson | Bruce Abramson | A Cure for Pathological Behavior in Games that Use Minimax | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-225-231 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The traditional approach to choosing moves in game-playing programs is the
minimax procedure. The general belief underlying its use is that increasing
search depth improves play. Recent research has shown that given certain
simplifying assumptions about a game tree's structure, this belief is
erroneous: searching deeper decreases the probability of making a correct move.
This phenomenon is called game tree pathology. Among these simplifying
assumptions is uniform depth of win/loss (terminal) nodes, a condition which is
not true for most real games. Analytic studies in [10] have shown that if every
node in a pathological game tree is made terminal with probability exceeding a
certain threshold, the resulting tree is nonpathological. This paper considers
a new evaluation function which recognizes increasing densities of forced wins
at deeper levels in the tree. This property raises two points that strengthen
the hypothesis that uniform win depth causes pathology. First, it proves
mathematically that as search deepens, an evaluation function that does not
explicitly check for certain forced win patterns becomes decreasingly likely to
force wins. This failing predicts the pathological behavior of the original
evaluation function. Second, it shows empirically that despite recognizing
fewer mid-game wins than the theoretically predicted minimum, the new function
is nonpathological.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:58:05 GMT"
}
] | 1,365,984,000,000 | [
[
"Abramson",
"Bruce",
""
]
] |
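For reference, a minimal sketch of the plain minimax procedure whose pathology the record above analyses, on a toy tree given as nested lists with made-up leaf evaluations:

```python
def minimax(node, maximizing=True):
    """Plain minimax over a tree given as nested lists; leaves are
    static evaluation values. Deeper search propagates these values up,
    which is the procedure whose pathology the paper analyses."""
    if not isinstance(node, list):                # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A depth-2 tree: two moves for MAX, each answered by two moves for MIN.
tree = [[3, 5], [2, 9]]
print(minimax(tree))   # -> 3 (MAX picks the branch whose MIN value is best)
```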
1304.3445 | Dana Nau | Dana Nau, Paul Purdom, Chun-Hung Tzeng | An Evaluation of Two Alternatives to Minimax | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-232-236 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of Artificial Intelligence, traditional approaches to choosing
moves in games involve the use of the minimax algorithm. However, recent
research results indicate that minimaxing may not always be the best approach.
In this paper we summarize the results of some measurements on several model
games with several different evaluation functions. These measurements, which
are presented in detail in [NPT], show that there are some new algorithms that
can make significantly better use of evaluation function values than the
minimax algorithm does.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:58:11 GMT"
}
] | 1,365,984,000,000 | [
[
"Nau",
"Dana",
""
],
[
"Purdom",
"Paul",
""
],
[
"Tzeng",
"Chun-Hung",
""
]
] |
1304.3446 | Ross D. Shachter | Ross D. Shachter | Intelligent Probabilistic Inference | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-237-244 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of practical probabilistic models on the computer demands a
convenient representation for the available knowledge and an efficient
algorithm to perform inference. An appealing representation is the influence
diagram, a network that makes explicit the random variables in a model and
their probabilistic dependencies. Recent advances have developed solution
procedures based on the influence diagram. In this paper, we examine the
fundamental properties that underlie those techniques, and the information
about the probabilistic structure that is available in the influence diagram
representation. The influence diagram is a convenient representation for
computer processing while also being clear and non-mathematical. It displays
probabilistic dependence precisely, in a way that is intuitive for decision
makers and experts to understand and communicate. As a result, the same
influence diagram can be used to build, assess and analyze a model,
facilitating changes in the formulation and feedback from sensitivity analysis.
The goal in this paper is to determine arbitrary conditional probability
distributions from a given probabilistic model. Given qualitative information
about the dependence of the random variables in the model we can, for a
specific conditional expression, specify precisely what quantitative
information we need to be able to determine the desired conditional probability
distribution. It is also shown how we can find that probability distribution by
performing operations locally, that is, over subspaces of the joint
distribution. In this way, we can exploit the conditional independence present
in the model to avoid having to construct or manipulate the full joint
distribution. These results are extended to include maximal processing when the
information available is incomplete, and optimal decision making in an
uncertain environment. Influence diagrams as a computer-aided modeling tool
were developed by Miller, Merkofer, and Howard [5] and extended by Howard and
Matheson [2]. Good descriptions of how to use them in modeling are in Owen [7]
and Howard and Matheson [2]. The notion of solving a decision problem through
influence diagrams was examined by Olmsted [6] and such an algorithm was
developed by Shachter [8]. The latter paper also shows how influence diagrams
can be used to perform a variety of sensitivity analyses. This paper extends
those results by developing a theory of the properties of the diagram that are
used by the algorithm, and the information needed to solve arbitrary
probability inference problems. Section 2 develops the notation and the
framework for the paper and the relationship between influence diagrams and
joint probability distributions. The general probabilistic inference problem is
posed in Section 3. In Section 4 the transformations on the diagram are
developed and then put together into a solution procedure in Section 5. In
Section 6, this procedure is used to calculate the information requirement to
solve an inference problem and the maximal processing that can be performed
with incomplete information. Section 7 contains a summary of results.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:58:16 GMT"
}
] | 1,365,984,000,000 | [
[
"Shachter",
"Ross D.",
""
]
] |
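A minimal sketch of the kind of local processing described above, assuming a three-variable chain X -> Y -> Z with made-up binary CPTs: P(Z|X) is computed by summing out Y locally, never materialising the full joint distribution:

```python
# Chain X -> Y -> Z with binary variables, specified by local CPTs.
# P(Z|X) is obtained by summing out Y over local tables only, which is
# the sort of local operation over subspaces the paper formalises.
p_y_given_x = {(x, y): p for x in (0, 1) for y, p in
               zip((0, 1), ((0.7, 0.3) if x == 0 else (0.2, 0.8)))}
p_z_given_y = {(y, z): p for y in (0, 1) for z, p in
               zip((0, 1), ((0.9, 0.1) if y == 0 else (0.4, 0.6)))}

def p_z_given_x(x, z):
    return sum(p_y_given_x[(x, y)] * p_z_given_y[(y, z)] for y in (0, 1))

for x in (0, 1):
    print(x, [p_z_given_x(x, z) for z in (0, 1)])   # rows sum to 1
```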
1304.3448 | John Fox | John Fox | Strong & Weak Methods: A Logical View of Uncertainty | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-253-257 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The last few years has seen a growing debate about techniques for managing
uncertainty in AI systems. Unfortunately this debate has been cast as a rivalry
between AI methods and classical probability based ones. Three arguments for
extending the probability framework of uncertainty are presented, none of which
imply a challenge to classical methods. These are (1) explicit representation
of several types of uncertainty, specifically possibility and plausibility, as
well as probability, (2) the use of weak methods for uncertainty management in
problems which are poorly defined, and (3) symbolic representation of different
uncertainty calculi and methods for choosing between them.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:58:28 GMT"
}
] | 1,365,984,000,000 | [
[
"Fox",
"John",
""
]
] |
1304.3450 | Tod S. Levitt | Tod S. Levitt | Probabilistic Conflict Resolution in Hierarchical Hypothesis Spaces | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-265-272 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence applications such as industrial robotics, military
surveillance, and hazardous environment clean-up, require situation
understanding based on partial, uncertain, and ambiguous or erroneous evidence.
It is necessary to evaluate the relative likelihood of multiple possible
hypotheses of the (current) situation faced by the decision making program.
Often, the evidence and hypotheses are hierarchical in nature. In image
understanding tasks, for example, evidence begins with raw imagery, from which
ambiguous features are extracted which have multiple possible aggregations
providing evidential support for the presence of multiple hypotheses of objects
and terrain, which in turn aggregate in multiple ways to provide partial
evidence for different interpretations of the ambient scene. Information fusion
for military situation understanding has a similar evidence/hypothesis
hierarchy from multiple sensors through message-level interpretations, and also
provides evidence at multiple levels of the doctrinal hierarchy of military
forces.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:58:40 GMT"
}
] | 1,365,984,000,000 | [
[
"Levitt",
"Tod S.",
""
]
] |
1304.3451 | Gerald Shao-Hung Liu | Gerald Shao-Hung Liu | Knowledge Structures and Evidential Reasoning in Decision Analysis | Appears in Proceedings of the First Conference on Uncertainty in
Artificial Intelligence (UAI1985) | null | null | UAI-P-1985-PG-273-282 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The roles played by decision factors in making complex subjective decisions
are characterized by how these factors affect the overall decision. Evidence
that partially matches a factor is evaluated, and then effective computational
rules are applied to these roles to form an appropriate aggregation of the
evidence. The use of this technique supports the expression of deeper levels of
causality, and may also preserve the cognitive structure of the decision maker
better than the usual weighting methods, certainty-factor or other
probabilistic models can.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2013 19:58:46 GMT"
}
] | 1,365,984,000,000 | [
[
"Liu",
"Gerald Shao-Hung",
""
]
] |
1304.3489 | Emad Saad | Emad Saad | Logical Stochastic Optimization | arXiv admin note: substantial text overlap with arXiv:1304.2384,
arXiv:1304.2797, arXiv:1304.1684, arXiv:1304.3144 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We present a logical framework to represent and reason about stochastic
optimization problems based on probability answer set programming. This is
established by allowing probability optimization aggregates, e.g., minimum and
maximum, in the language of probability answer set programming to allow
minimization or maximization of some desired criteria in probabilistic
environments. We show the application of the proposed logical stochastic
optimization framework under probability answer set programming to two-stage
stochastic optimization problems with recourse.
| [
{
"version": "v1",
"created": "Sat, 6 Apr 2013 06:54:24 GMT"
}
] | 1,365,984,000,000 | [
[
"Saad",
"Emad",
""
]
] |
1304.3762 | Mark Burgin | Mark Burgin and Eugene Eberbach | Evolutionary Turing in the Context of Evolutionary Machines | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the roots of evolutionary computation was the idea of Turing about
unorganized machines. The goal of this work is the development of foundations
for evolutionary computations, connecting Turing's ideas and the contemporary
state of the art in evolutionary computations. To achieve this goal, we develop a
general approach to evolutionary processes in the computational context,
building mathematical models of computational systems whose functioning is
based on evolutionary processes, and studying properties of such systems.
Operations with evolutionary machines are described and it is explored when
definite classes of evolutionary machines are closed with respect to basic
operations with these machines. We also study such properties as linguistic and
functional equivalence of evolutionary machines and their classes, as well as
computational power of evolutionary machines and their classes, comparing
evolutionary machines to conventional automata, such as finite automata or
Turing machines.
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 02:16:46 GMT"
}
] | 1,366,070,400,000 | [
[
"Burgin",
"Mark",
""
],
[
"Eberbach",
"Eugene",
""
]
] |
1304.3842 | Craig Boutilier | Craig Boutilier, Moises Goldszmidt | Proceedings of the Sixteenth Conference on Uncertainty in Artificial
Intelligence (2000) | null | null | null | UAI2000 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Sixteenth Conference on Uncertainty in
Artificial Intelligence, which was held in San Francisco, CA, June 30 - July 3,
2000
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 20:21:00 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Aug 2014 04:11:05 GMT"
}
] | 1,409,270,400,000 | [
[
"Boutilier",
"Craig",
""
],
[
"Goldszmidt",
"Moises",
""
]
] |
1304.3843 | Kathryn Laskey | Kathryn Laskey, Henri Prade | Proceedings of the Fifteenth Conference on Uncertainty in Artificial
Intelligence (1999) | null | null | null | UAI1999 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Fifteenth Conference on Uncertainty in
Artificial Intelligence, which was held in Stockholm Sweden, July 30 - August
1, 1999
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 20:29:18 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Aug 2014 04:10:01 GMT"
}
] | 1,409,270,400,000 | [
[
"Laskey",
"Kathryn",
""
],
[
"Prade",
"Henri",
""
]
] |
1304.3844 | Gregory Cooper | Gregory Cooper, Serafin Moral | Proceedings of the Fourteenth Conference on Uncertainty in Artificial
Intelligence (1998) | null | null | null | UAI1998 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Fourteenth Conference on Uncertainty in
Artificial Intelligence, which was held in Madison, WI, July 24-26, 1998
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 20:34:30 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Aug 2014 04:08:44 GMT"
}
] | 1,409,270,400,000 | [
[
"Cooper",
"Gregory",
""
],
[
"Moral",
"Serafin",
""
]
] |
1304.3846 | Dan Geiger | Dan Geiger, Prakash Shenoy | Proceedings of the Thirteenth Conference on Uncertainty in Artificial
Intelligence (1997) | null | null | null | UAI1997 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Thirteenth Conference on Uncertainty in
Artificial Intelligence, which was held in Providence, RI, August 1-3, 1997
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 20:44:25 GMT"
}
] | 1,366,070,400,000 | [
[
"Geiger",
"Dan",
""
],
[
"Shenoy",
"Prakash",
""
]
] |
1304.3847 | Eric Horvitz | Eric Horvitz, Finn Jensen | Proceedings of the Twelfth Conference on Uncertainty in Artificial
Intelligence (1996) | null | null | null | UAI1996 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Twelfth Conference on Uncertainty in
Artificial Intelligence, which was held in Portland, OR, August 1-4, 1996
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 20:49:49 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Aug 2014 04:06:12 GMT"
}
] | 1,409,270,400,000 | [
[
"Horvitz",
"Eric",
""
],
[
"Jensen",
"Finn",
""
]
] |
1304.3848 | Philippe Besnard | Philippe Besnard, Steve Hanks | Proceedings of the Eleventh Conference on Uncertainty in Artificial
Intelligence (1995) | null | null | null | UAI1995 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Eleventh Conference on Uncertainty in
Artificial Intelligence, which was held in Montreal, QU, August 18-20, 1995
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 20:53:46 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Aug 2014 04:04:46 GMT"
}
] | 1,409,270,400,000 | [
[
"Besnard",
"Philippe",
""
],
[
"Hanks",
"Steve",
""
]
] |
1304.3849 | Ramon Lopez de Mantaras | Ramon Lopez de Mantaras, David Poole | Proceedings of the Tenth Conference on Uncertainty in Artificial
Intelligence (1994) | null | null | null | UAI1994 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Tenth Conference on Uncertainty in Artificial
Intelligence, which was held in Seattle, WA, July 29-31, 1994
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 20:58:41 GMT"
}
] | 1,366,070,400,000 | [
[
"de Mantaras",
"Ramon Lopez",
""
],
[
"Poole",
"David",
""
]
] |
1304.3851 | David Heckerman | David Heckerman, E. Mamdani | Proceedings of the Ninth Conference on Uncertainty in Artificial
Intelligence (1993) | null | null | null | UAI1993 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Ninth Conference on Uncertainty in Artificial
Intelligence, which was held in Washington, DC, July 9-11, 1993
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 21:03:12 GMT"
}
] | 1,366,070,400,000 | [
[
"Heckerman",
"David",
""
],
[
"Mamdani",
"E.",
""
]
] |
1304.3852 | Bruce D'Ambrosio | Bruce D'Ambrosio, Didier Dubois, Philippe Smets, Michael Wellman | Proceedings of the Eighth Conference on Uncertainty in Artificial
Intelligence (1992) | null | null | null | UAI1992 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Eighth Conference on Uncertainty in Artificial
Intelligence, which was held in Stanford, CA, July 17-19, 1992
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 21:10:50 GMT"
}
] | 1,366,070,400,000 | [
[
"D'Ambrosio",
"Bruce",
""
],
[
"Dubois",
"Didier",
""
],
[
"Smets",
"Philippe",
""
],
[
"Wellman",
"Michael",
""
]
] |
1304.3853 | Piero Bonissone | Piero Bonissone, Bruce D'Ambrosio, Philippe Smets | Proceedings of the Seventh Conference on Uncertainty in Artificial
Intelligence (1991) | null | null | null | UAI1991 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence, which was held in Los Angeles, CA, July 13-15, 1991
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 21:18:04 GMT"
}
] | 1,366,070,400,000 | [
[
"Bonissone",
"Piero",
""
],
[
"D'Ambrosio",
"Bruce",
""
],
[
"Smets",
"Philippe",
""
]
] |
1304.3854 | Piero Bonissone | Piero Bonissone, Max Henrion, Laveen Kanal, John Lemmer | Proceedings of the Sixth Conference on Uncertainty in Artificial
Intelligence (1990) | null | null | null | UAI1990 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Sixth Conference on Uncertainty in Artificial
Intelligence, which was held in Cambridge, MA, Jul 27 - Jul 29, 1990
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 21:21:17 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Aug 2014 04:03:18 GMT"
}
] | 1,409,270,400,000 | [
[
"Bonissone",
"Piero",
""
],
[
"Henrion",
"Max",
""
],
[
"Kanal",
"Laveen",
""
],
[
"Lemmer",
"John",
""
]
] |
1304.3855 | Max Henrion | Max Henrion, Laveen Kanal, John Lemmer, Ross Shachter | Proceedings of the Fifth Conference on Uncertainty in Artificial
Intelligence (1989) | null | null | null | UAI1989 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Fifth Conference on Uncertainty in Artificial
Intelligence, which was held in Windsor, ON, August 18-20, 1989
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 21:26:12 GMT"
}
] | 1,366,070,400,000 | [
[
"Henrion",
"Max",
""
],
[
"Kanal",
"Laveen",
""
],
[
"Lemmer",
"John",
""
],
[
"Shachter",
"Ross",
""
]
] |
1304.3856 | Laveen Kanal | Laveen Kanal, John Lemmer, Tod Levitt, Ross Shachter | Proceedings of the Fourth Conference on Uncertainty in Artificial
Intelligence (1988) | null | null | null | UAI1988 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Fourth Conference on Uncertainty in Artificial
Intelligence, which was held in Minneapolis, MN, July 10-12, 1988
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 21:30:26 GMT"
}
] | 1,366,070,400,000 | [
[
"Kanal",
"Laveen",
""
],
[
"Lemmer",
"John",
""
],
[
"Levitt",
"Tod",
""
],
[
"Shachter",
"Ross",
""
]
] |
1304.3857 | Laveen Kanal | Laveen Kanal, John Lemmer, Tod Levitt | Proceedings of the Third Conference on Uncertainty in Artificial
Intelligence (1987) | null | null | null | UAI1987 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Third Conference on Uncertainty in Artificial
Intelligence, which was held in Seattle, WA, July 10-12, 1987
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 21:34:06 GMT"
}
] | 1,366,070,400,000 | [
[
"Kanal",
"Laveen",
""
],
[
"Lemmer",
"John",
""
],
[
"Levitt",
"Tod",
""
]
] |
1304.3859 | Laveen Kanal | Laveen Kanal, John Lemmer | Proceedings of the Second Conference on Uncertainty in Artificial
Intelligence (1986) | null | null | null | UAI1986 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Second Conference on Uncertainty in Artificial
Intelligence, which was held in Philadelphia, PA, August 8-10, 1986
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 21:37:12 GMT"
}
] | 1,366,070,400,000 | [
[
"Kanal",
"Laveen",
""
],
[
"Lemmer",
"John",
""
]
] |
1304.3860 | Adrian Groza | Ioan Alfred Letia and Adrian Groza | Justificatory and Explanatory Argumentation for Committing Agents | null | ARGMAS 2012 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the interaction between agents we can have an explicative discourse, when
communicating preferences or intentions, and a normative discourse, when
considering normative knowledge. For justifying their actions our agents are
endowed with a Justification and Explanation Logic (JEL), capable of covering both
the justification for their commitments and explanations why they had to act in
that way, due to the current situation in the environment. Social commitments
are used to formalise justificatory and explanatory patterns. The combination
of explanation, justification, and commitments
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 21:51:32 GMT"
}
] | 1,366,070,400,000 | [
[
"Letia",
"Ioan Alfred",
""
],
[
"Groza",
"Adrian",
""
]
] |
1304.4182 | Laveen Kanal | Laveen Kanal, John Lemmer | Proceedings of the First Conference on Uncertainty in Artificial
Intelligence (1985) | null | null | null | UAI1985 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the First Conference on Uncertainty in Artificial
Intelligence, which was held in Los Angeles, CA, July 10-12, 1985
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2013 17:35:22 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Aug 2014 03:59:55 GMT"
}
] | 1,409,270,400,000 | [
[
"Kanal",
"Laveen",
""
],
[
"Lemmer",
"John",
""
]
] |
1304.4379 | Jan Noessner | Jan Noessner, Mathias Niepert, Heiner Stuckenschmidt | RockIt: Exploiting Parallelism and Symmetry for MAP Inference in
Statistical Relational Models | To appear in proceedings of AAAI 2013 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RockIt is a maximum a-posteriori (MAP) query engine for statistical
relational models. MAP inference in graphical models is an optimization problem
which can be compiled to integer linear programs (ILPs). We describe several
advances in translating MAP queries to ILP instances and present the novel
meta-algorithm cutting plane aggregation (CPA). CPA exploits local
context-specific symmetries and bundles up sets of linear constraints. The
resulting counting constraints lead to more compact ILPs and make the symmetry
of the ground model more explicit to state-of-the-art ILP solvers. Moreover,
RockIt parallelizes most parts of the MAP inference pipeline taking advantage
of ubiquitous shared-memory multi-core architectures.
We report on extensive experiments with Markov logic network (MLN) benchmarks
showing that RockIt outperforms the state-of-the-art systems Alchemy, Markov
TheBeast, and Tuffy both in terms of efficiency and quality of results.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2013 09:29:58 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Apr 2013 10:19:58 GMT"
}
] | 1,367,366,400,000 | [
[
"Noessner",
"Jan",
""
],
[
"Niepert",
"Mathias",
""
],
[
"Stuckenschmidt",
"Heiner",
""
]
] |
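A toy illustration of the MAP objective RockIt compiles to an ILP: choose the truth assignment maximising the total weight of satisfied weighted ground clauses. The exhaustive search below is only a stand-in for the ILP machinery, and the atoms, clauses, and weights are invented:

```python
from itertools import product

# MAP by exhaustive search over truth assignments to three ground atoms,
# maximising the total weight of satisfied weighted ground clauses --
# the objective RockIt compiles to an ILP. Toy data only.
atoms = ["Smokes(A)", "Smokes(B)", "Cancer(A)"]
# (weight, clause as a list of (atom_index, negated?) literals)
clauses = [
    (1.5, [(0, True), (2, False)]),   # Smokes(A) => Cancer(A)
    (1.1, [(0, True), (1, False)]),   # Smokes(A) => Smokes(B)
    (0.8, [(0, False)]),              # prior pull toward Smokes(A)
]

def score(world):
    # A literal holds when the atom's value differs from its negation flag.
    return sum(w for w, lits in clauses
               if any(world[i] != neg for i, neg in lits))

best = max(product([False, True], repeat=len(atoms)), key=score)
print(dict(zip(atoms, best)), score(best))
```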
1304.4415 | Lakhdar Sais | Said Jabbour and Lakhdar Sais and Yakoub Salhi | Mining to Compact CNF Propositional Formulae | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a first application of data mining techniques to
propositional satisfiability. Our proposed Mining4SAT approach aims to discover
and to exploit hidden structural knowledge for reducing the size of
propositional formulae in conjunctive normal form (CNF). Mining4SAT combines
both frequent itemset mining techniques and Tseitin's encoding for a compact
representation of CNF formulae. The experiments of our Mining4SAT approach show
interesting reductions of the sizes of many application instances taken from
the last SAT competitions.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2013 12:26:41 GMT"
}
] | 1,366,156,800,000 | [
[
"Jabbour",
"Said",
""
],
[
"Sais",
"Lakhdar",
""
],
[
"Salhi",
"Yakoub",
""
]
] |
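A minimal sketch of the Tseitin step that Mining4SAT combines with itemset mining: a fresh variable names a subformula shared by many clauses so it can be stated once and reused (DIMACS-style signed-integer literals; the variable numbers are arbitrary):

```python
def tseitin_and(x, y, fresh):
    """CNF clauses asserting fresh <-> (x AND y): the Tseitin step that
    lets a frequently occurring pair of literals be named once and the
    name reused, shrinking the formula. Literals are DIMACS-style ints."""
    return [[-fresh, x], [-fresh, y], [-x, -y, fresh]]

# Name t = (1 AND 2), then replace every occurrence of the pair {1, 2}
# in the original clauses by the single literal t.
print(tseitin_and(1, 2, fresh=3))   # [[-3, 1], [-3, 2], [-1, -2, 3]]
```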
1304.4925 | Manfred Eppe | Manfred Eppe, Mehul Bhatt, Frank Dylla | h-approximation: History-Based Approximation of Possible World Semantics
as ASP | 12th International Conference on Logic Programming and Nonmonotonic
Reasoning (LPNMR 2013) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an approximation of the Possible Worlds Semantics (PWS) for action
planning. A corresponding planning system is implemented by a transformation of
the action specification to an Answer-Set Program. A novelty is support for
postdiction, where (a) the plan existence problem in our framework can be solved
in NP, as compared to $\Sigma_2^P$ for non-approximated PWS of Baral(2000); and
(b) the planner generates optimal plans wrt. a minimal number of actions in
$\Delta_2^P$. We demo the planning system with standard problems, and
illustrate its integration in a larger software framework for robot control in
a smart home.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2013 19:28:42 GMT"
},
{
"version": "v2",
"created": "Wed, 22 May 2013 18:14:53 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Jun 2013 10:42:46 GMT"
},
{
"version": "v4",
"created": "Fri, 14 Jun 2013 12:24:13 GMT"
}
] | 1,371,427,200,000 | [
[
"Eppe",
"Manfred",
""
],
[
"Bhatt",
"Mehul",
""
],
[
"Dylla",
"Frank",
""
]
] |
1304.4965 | Mark Levin | Mark Sh. Levin | Improvement/Extension of Modular Systems as Combinatorial Reengineering
(Survey) | 24 pages, 28 figures, 14 tables. arXiv admin note: text overlap with
arXiv:1212.1735 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper describes development (improvement/extension) approaches for
composite (modular) systems (as combinatorial reengineering). The following
system improvement/extension actions are considered: (a) improvement of system
component(s) (e.g., improvement of a system component, replacement of a system
component); (b) improvement of system component interconnection
(compatibility); (c) joint improvement of system component(s) and
their interconnection; (d) improvement of system structure (replacement of
system part(s), addition of a system part, deletion of a system part,
modification of system structure). The study of system improvement approaches
involves some crucial issues: (i) scales for evaluation of system components and
component compatibility (quantitative scale, ordinal scale, poset-like scale,
scale based on interval multiset estimate), (ii) evaluation of integrated
system quality, (iii) integration methods to obtain the integrated system
quality. The system improvement/extension strategies can be examined as
selection/combination of the improvement action(s) above and as modification of
system structure. The strategies are based on combinatorial optimization
problems (e.g., multicriteria selection, knapsack problem, multiple choice
problem, combinatorial synthesis based on morphological clique problem,
assignment/reassignment problem, graph recoloring problem, spanning problems,
hotlink assignment). Here, heuristics are used. Various system
improvement/extension strategies are presented including illustrative numerical
examples.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2013 20:41:05 GMT"
}
] | 1,366,329,600,000 | [
[
"Levin",
"Mark Sh.",
""
]
] |
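One of the combinatorial models the survey above lists is the knapsack problem; a minimal sketch for selecting improvement actions under a resource budget, with made-up gains and costs:

```python
def knapsack(actions, budget):
    """0/1 knapsack over candidate improvement actions, one of the
    combinatorial models the survey lists for choosing which system
    parts to upgrade under a resource budget."""
    best = [0] * (budget + 1)
    for gain, cost in actions:
        for b in range(budget, cost - 1, -1):   # reverse: each action once
            best[b] = max(best[b], best[b - cost] + gain)
    return best[budget]

# (quality gain, cost) per candidate action; budget of 5 units.
print(knapsack([(6, 2), (5, 2), (8, 4)], 5))   # -> 11
```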
1304.5449 | Christophe Lecoutre | Christophe Lecoutre and Nicolas Paris and Olivier Roussel and
S\'ebastien Tabary | Solving WCSP by Extraction of Minimal Unsatisfiable Cores | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Usual techniques to solve WCSP are based on cost transfer operations coupled
with a branch and bound algorithm. In this paper, we focus on an approach
integrating extraction and relaxation of Minimal Unsatisfiable Cores in order
to solve this problem. We instantiate our approach in two ways: an incomplete,
greedy algorithm and a complete one.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2013 15:36:53 GMT"
}
] | 1,366,588,800,000 | [
[
"Lecoutre",
"Christophe",
""
],
[
"Paris",
"Nicolas",
""
],
[
"Roussel",
"Olivier",
""
],
[
"Tabary",
"Sébastien",
""
]
] |
1304.5550 | Adrian Groza | Adrian Groza, Gabriel Barbur, Bogdan Blaga | OntoRich - A Support Tool for Semi-Automatic Ontology Enrichment and
Evaluation | ACAM 2011 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the OntoRich framework, a support tool for semi-automatic
ontology enrichment and evaluation. WordNet is used to extract candidates
for dynamic ontology enrichment from RSS streams. With the integration of
OpenNLP the system gains access to syntactic analysis of the RSS news. The
enriched ontologies are evaluated against several qualitative metrics.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2013 21:17:19 GMT"
}
] | 1,366,675,200,000 | [
[
"Groza",
"Adrian",
""
],
[
"Barbur",
"Gabriel",
""
],
[
"Blaga",
"Bogdan",
""
]
] |
1304.5554 | Adrian Groza | Adrian Groza and Sergiu Indrie | Enacting Social Argumentative Machines in Semantic Wikipedia | UBICC 2011 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This research advocates the idea of combining argumentation theory with the
social web technology, aiming to enact large scale or mass argumentation. The
proposed framework allows mass-collaborative editing of structured arguments in
the style of semantic wikipedia. The long term goal is to apply the abstract
machinery of argumentation theory to more practical applications based on human
generated arguments, such as deliberative democracy, business negotiation, or
self-care. The ARGNET system was developed based on the Semantic MediaWiki
framework and on the Argument Interchange Format (AIF) ontology.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2013 21:36:59 GMT"
}
] | 1,366,675,200,000 | [
[
"Groza",
"Adrian",
""
],
[
"Indrie",
"Sergiu",
""
]
] |
1304.5705 | Rajendra Bera | Rajendra K. Bera | A novice looks at emotional cognition | 11 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling emotional-cognition is in a nascent stage and therefore wide-open
for new ideas and discussions. In this paper the author looks at the modeling
problem by bringing in ideas from axiomatic mathematics, information theory,
computer science, molecular biology, non-linear dynamical systems and quantum
computing and explains how ideas from these disciplines may have applications
in modeling emotional-cognition.
| [
{
"version": "v1",
"created": "Sun, 21 Apr 2013 08:08:38 GMT"
}
] | 1,366,675,200,000 | [
[
"Bera",
"Rajendra K.",
""
]
] |
1304.5810 | Marcelo Arenas | Marcelo Arenas, Elena Botoeva, Diego Calvanese, and Vladislav Ryzhikov | Exchanging OWL 2 QL Knowledge Bases | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge base exchange is an important problem in the area of data exchange
and knowledge representation, where one is interested in exchanging information
between a source and a target knowledge base connected through a mapping. In
this paper, we study this fundamental problem for knowledge bases and mappings
expressed in OWL 2 QL, the profile of OWL 2 based on the description logic
DL-Lite_R. More specifically, we consider the problem of computing universal
solutions, identified as one of the most desirable translations to be
materialized, and the problem of computing UCQ-representations, which optimally
capture in a target TBox the information that can be extracted from a source
TBox and a mapping by means of unions of conjunctive queries. For the former we
provide a novel automata-theoretic technique, and complexity results that range
from NP to EXPTIME, while for the latter we show NLOGSPACE-completeness.
| [
{
"version": "v1",
"created": "Sun, 21 Apr 2013 23:03:06 GMT"
},
{
"version": "v2",
"created": "Fri, 3 May 2013 12:05:57 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Jul 2013 21:10:21 GMT"
}
] | 1,372,809,600,000 | [
[
"Arenas",
"Marcelo",
""
],
[
"Botoeva",
"Elena",
""
],
[
"Calvanese",
"Diego",
""
],
[
"Ryzhikov",
"Vladislav",
""
]
] |
1304.5897 | Isis Truck | Mohammed-Amine Abchir and Isis Truck | Towards an Extension of the 2-tuple Linguistic Model to Deal With
Unbalanced Linguistic Term sets | null | Kybernetika, 2013 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the domain of Computing with words (CW), fuzzy linguistic approaches are
known to be relevant in many decision-making problems. Indeed, they allow us to
model human reasoning by replacing words, assessments, preferences,
choices, wishes... by ad hoc variables, such as fuzzy sets or more
sophisticated variables.
This paper focuses on a particular model: Herrera & Martinez' 2-tuple
linguistic model and their approach to deal with unbalanced linguistic term
sets. It is interesting since the computations are accomplished without loss of
information while the results of the decision-making processes always refer to
the initial linguistic term set. They propose a fuzzy partition which
distributes data on the axis by using linguistic hierarchies to manage the
non-uniformity. However, the required input (especially the density around the
terms) taken by their fuzzy partition algorithm may be considered too
demanding in a real-world application, since density is not always easy to
determine. Moreover, in some limit cases (especially when two terms are very
close semantically to each other), the partition doesn't fit the data
themselves; it isn't close to reality. Therefore we propose to modify the
required input, in order to offer a simpler and more faithful partition. We
have added an extension to the package jFuzzyLogic and to the corresponding
script language FCL. This extension supports both 2-tuple models: Herrera &
Martinez' and ours. In addition to the partition algorithm, we present two
aggregation algorithms: the arithmetic means and the addition. We also discuss
these kinds of 2-tuple models.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2013 09:54:07 GMT"
}
] | 1,366,675,200,000 | [
[
"Abchir",
"Mohammed-Amine",
""
],
[
"Truck",
"Isis",
""
]
] |
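A minimal sketch of the Herrera & Martinez 2-tuple representation the record above builds on: a computed value beta in [0, g] is expressed as a term plus a symbolic translation, so aggregation results stay in the original linguistic term set without loss of information (the term set below is made up):

```python
def to_2tuple(beta):
    """Herrera & Martinez Delta function: a value beta in [0, g] becomes
    the 2-tuple (s_i, alpha) with i = round(beta) and alpha = beta - i,
    keeping aggregation results expressible in the original term set."""
    i = int(round(beta))
    return i, beta - i

terms = ["none", "low", "medium", "high", "perfect"]   # g = 4
beta = (3 + 1 + 2) / 3            # arithmetic mean of three assessments
i, alpha = to_2tuple(beta)
print(terms[i], round(alpha, 2))  # -> medium 0.0
```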
1304.5970 | Thierry Petit | Nina Narodytska, Thierry Petit, Mohamed Siala and Toby Walsh | Three Generalizations of the FOCUS Constraint | null | IJCAI 2013 proceedings | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The FOCUS constraint expresses the notion that solutions are concentrated. In
practice, this constraint suffers from the rigidity of its semantics. To tackle
this issue, we propose three generalizations of the FOCUS constraint. We
provide for each one a complete filtering algorithm as well as a discussion of
decompositions.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2013 14:48:58 GMT"
}
] | 1,366,675,200,000 | [
[
"Narodytska",
"Nina",
""
],
[
"Petit",
"Thierry",
""
],
[
"Siala",
"Mohamed",
""
],
[
"Walsh",
"Toby",
""
]
] |
1304.6078 | Adrian Groza | Ioan Alfred Letia and Adrian Groza | Automating the Dispute Resolution in Task Dependency Network | IAT 2005. arXiv admin note: substantial text overlap with
arXiv:1304.5545 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When perturbation or unexpected events do occur, agents need protocols for
repairing or reforming the supply chain. An unfortunate contingency could
increase the cost of performance so much that breaching the current contract may
be more efficient. In our framework the principles of contract law are applied to
set penalties: expectation damages, opportunity cost, reliance damages, and
party-designed remedies, and they are introduced in the task dependency model.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2013 21:47:03 GMT"
}
] | 1,366,761,600,000 | [
[
"Letia",
"Ioan Alfred",
""
],
[
"Groza",
"Adrian",
""
]
] |
1304.6442 | Marco Montali | Diego Calvanese, Evgeny Kharlamov, Marco Montali, Ario Santoso,
Dmitriy Zheleznyakov | Verification of Inconsistency-Aware Knowledge and Action Bases (Extended
Version) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Description Logic Knowledge and Action Bases (KABs) have been recently
introduced as a mechanism that provides a semantically rich representation of
the information on the domain of interest in terms of a DL KB and a set of
actions to change such information over time, possibly introducing new objects.
In this setting, decidability of verification of sophisticated temporal
properties over KABs, expressed in a variant of first-order mu-calculus, has
been shown. However, the established framework treats inconsistency in a
simplistic way, by rejecting inconsistent states produced through action
execution. We address this problem by showing how inconsistency handling based
on the notion of repairs can be integrated into KABs, resorting to
inconsistency-tolerant semantics. In this setting, we establish decidability
and complexity of verification.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2013 23:24:31 GMT"
}
] | 1,366,848,000,000 | [
[
"Calvanese",
"Diego",
""
],
[
"Kharlamov",
"Evgeny",
""
],
[
"Montali",
"Marco",
""
],
[
"Santoso",
"Ario",
""
],
[
"Zheleznyakov",
"Dmitriy",
""
]
] |
1304.7168 | Emad Saad | Emad Saad | Non Deterministic Logic Programs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Non deterministic applications arise in many domains, including stochastic
optimization, multi-objectives optimization, stochastic planning, contingent
stochastic planning, reinforcement learning, reinforcement learning in
partially observable Markov decision processes, and conditional planning. We
present a logic programming framework called non deterministic logic programs,
along with a declarative semantics and fixpoint semantics, to allow
representing and reasoning about inherently non deterministic real-world
applications. The language of non deterministic logic programs framework is
extended with non-monotonic negation, and two alternative semantics are
defined: the stable non deterministic model semantics and the well-founded non
deterministic model semantics, and the relationship between them is studied. These
semantics subsume the deterministic stable model semantics and the
deterministic well-founded semantics of deterministic normal logic programs,
and they reduce to the semantics of deterministic definite logic programs
without negation. We show the application of the non deterministic logic
programs framework to a conditional planning problem.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2013 13:55:05 GMT"
}
] | 1,367,193,600,000 | [
[
"Saad",
"Emad",
""
]
] |
1304.7238 | Arindam Chaudhuri AC | Arindam Chaudhuri, Kajal De, Dipak Chatterjee | Solution of the Decision Making Problems using Fuzzy Soft Relations | 29 Pages Journal Paper, International Journal of Information
Technology, Volume 15, Number 1, 2009 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Fuzzy Modeling has been applied in a wide variety of fields such as
Engineering and Management Sciences and Social Sciences to solve a number
Decision Making Problems which involve impreciseness, uncertainty and vagueness
in data. In particular, applications of this Modeling technique in Decision
Making Problems have remarkable significance. These problems have been tackled
using various theories such as Probability theory, Fuzzy Set Theory, Rough Set
Theory, Vague Set Theory, Approximate Reasoning Theory etc. which lack in
parameterization of the tools due to which they could not be applied
successfully to such problems. The concept of Soft Set has a promising
potential for giving an optimal solution for these problems. With the
motivation of this new concept, in this paper we define the concepts of Soft
Relation and Fuzzy Soft Relation and then apply them to solve a number of
Decision Making Problems. The advantages of Fuzzy Soft Relation compared to
other paradigms are discussed. To the best of our knowledge this is the first
work on the application of Fuzzy Soft Relation to the Decision Making Problems.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2013 17:36:14 GMT"
}
] | 1,367,193,600,000 | [
[
"Chaudhuri",
"Arindam",
""
],
[
"De",
"Kajal",
""
],
[
"Chatterjee",
"Dipak",
""
]
] |
1304.7239 | Arindam Chaudhuri AC | Arindam Chaudhuri, Kajal De, Dipak Chatterjee | Solution of System of Linear Equations - A Neuro-Fuzzy Approach | 11 Pages, Journal Article, East West Journal of Mathematics, 2008 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuro-Fuzzy Modeling has been applied in a wide variety of fields such as
Decision Making, Engineering and Management Sciences etc. In particular,
applications of this Modeling technique in Decision Making by involving complex
Systems of Linear Algebraic Equations have remarkable significance. In this
Paper, we present Polak-Ribiere Conjugate Gradient based Neural Network with
Fuzzy rules to solve System of Simultaneous Linear Algebraic Equations. This is
achieved using Fuzzy Backpropagation Learning Rule. The implementation results
show that the proposed Neuro-Fuzzy Network yields effective solutions for
exactly determined, underdetermined and over-determined Systems of Linear
Equations. This fact is demonstrated by the Computational Complexity analysis
of the Neuro-Fuzzy Algorithm. The proposed Algorithm is simulated effectively
using MATLAB software. To the best of our knowledge this is the first work on
solving Systems of Linear Algebraic Equations using Neuro-Fuzzy Modeling.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2013 17:44:33 GMT"
}
] | 1,367,193,600,000 | [
[
"Chaudhuri",
"Arindam",
""
],
[
"De",
"Kajal",
""
],
[
"Chatterjee",
"Dipak",
""
]
] |
1305.0574 | Lakhdar Sais | Said Jabbour and Lakhdar Sais and Yakoub Salhi | Extending Modern SAT Solvers for Enumerating All Models | This paper is withdrawn by the authors due to a missing reference.
The authors work further on this issue and conduct exhaustive experimental
comparison with other related works | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of enumerating all models of a Boolean
formula in conjunctive normal form (CNF). We propose an extension of CDCL-based
SAT solvers to deal with this fundamental problem. Then, we provide an
experimental evaluation of our proposed SAT model enumeration algorithms on
both satisfiable SAT instances taken from the last SAT challenge and on
instances from the SAT-based encoding of sequence mining problems.
| [
{
"version": "v1",
"created": "Thu, 2 May 2013 20:37:29 GMT"
},
{
"version": "v2",
"created": "Mon, 6 May 2013 20:45:25 GMT"
}
] | 1,367,971,200,000 | [
[
"Jabbour",
"Said",
""
],
[
"Sais",
"Lakhdar",
""
],
[
"Salhi",
"Yakoub",
""
]
] |
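For scale reference, a brute-force model enumerator for tiny CNF formulae. A CDCL-based enumerator like the one proposed above instead finds a model, adds a blocking clause negating it, and repeats; the exhaustive loop below only serves as a specification of the expected output:

```python
from itertools import product

def all_models(clauses, n_vars):
    """Enumerate all models of a CNF (DIMACS-style signed literals) by
    exhaustive search. Only a reference implementation: a CDCL-based
    enumerator blocks each found model with a clause and re-solves."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            yield bits

cnf = [[1, 2], [-1, 3]]          # (x1 or x2) and (not x1 or x3)
for m in all_models(cnf, 3):
    print(m)                     # 4 models
```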
1305.1060 | Gian Luca Pozzato | Laura Giordano and Valentina Gliozzi and Nicola Olivetti and Gian Luca
Pozzato | On Rational Closure in Description Logics of Typicality | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We define the notion of rational closure in the context of Description Logics
extended with a typicality operator. We start from ALC+T, an extension of ALC
with a typicality operator T: intuitively allowing to express concepts of the
form T(C), meant to select the "most normal" instances of a concept C. The
semantics we consider is based on rational models. But we further restrict the
semantics to minimal models, that is to say, to models that minimise the rank
of domain elements. We show that this semantics captures exactly a notion of
rational closure which is a natural extension to Description Logics of Lehmann
and Magidor's original one. We also extend the notion of rational closure to
the Abox component. We provide an ExpTime algorithm for computing the rational
closure of an Abox and we show that it is sound and complete with respect to
the minimal model semantics.
| [
{
"version": "v1",
"created": "Sun, 5 May 2013 22:32:16 GMT"
}
] | 1,367,884,800,000 | [
[
"Giordano",
"Laura",
""
],
[
"Gliozzi",
"Valentina",
""
],
[
"Olivetti",
"Nicola",
""
],
[
"Pozzato",
"Gian Luca",
""
]
] |
1305.1169 | Marc Schoenauer | Mostepha Redouane Khouadjia (INRIA Saclay - Ile de France), Marc
Schoenauer (INRIA Saclay - Ile de France, LRI), Vincent Vidal (DCSD), Johann
Dr\'eo (TRT), Pierre Sav\'eant (TRT) | Multi-Objective AI Planning: Comparing Aggregation and Pareto Approaches | null | EvoCOP -- 13th European Conference on Evolutionary Computation in
Combinatorial Optimisation 7832 (2013) 202-213 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most real-world Planning problems are multi-objective, trying to minimize
both the makespan of the solution plan, and some cost of the actions involved
in the plan. But most, if not all existing approaches are based on
single-objective planners, and use an aggregation of the objectives to remain
in the single-objective context. Divide and Evolve (DaE) is an evolutionary
planner that won the temporal deterministic satisficing track at the last
International Planning Competitions (IPC). Like all Evolutionary Algorithms
(EA), it can easily be turned into a Pareto-based Multi-Objective EA. It is
however important to validate the resulting algorithm by comparing it with the
aggregation approach: this is the goal of this paper. The comparative
experiments on a recently proposed benchmark set that are reported here
demonstrate the usefulness of going Pareto-based in AI Planning.
| [
{
"version": "v1",
"created": "Mon, 6 May 2013 12:53:25 GMT"
}
] | 1,367,884,800,000 | [
[
"Khouadjia",
"Mostepha Redouane",
"",
"INRIA Saclay - Ile de France"
],
[
"Schoenauer",
"Marc",
"",
"INRIA Saclay - Ile de France, LRI"
],
[
"Vidal",
"Vincent",
"",
"DCSD"
],
[
"Dréo",
"Johann",
"",
"TRT"
],
[
"Savéant",
"Pierre",
"",
"TRT"
]
] |
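A minimal sketch of the Pareto-based alternative to aggregation compared above: keep the plans not dominated under simultaneous minimisation of makespan and cost (the plan data are made up):

```python
def pareto_front(plans):
    """Keep the non-dominated plans under simultaneous minimisation of
    makespan and cost -- the Pareto alternative to aggregating the two
    objectives into a single one."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [p for p in plans
            if not any(dominates(q, p) for q in plans)]

# (makespan, cost) for candidate plans
print(pareto_front([(10, 7), (12, 5), (11, 9), (10, 5)]))  # -> [(10, 5)]
```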
1305.1655 | Jose Hernandez-Orallo | Jose Hernandez-Orallo | A short note on estimating intelligence from user profiles in the
context of universal psychometrics: prospects and caveats | Keywords: intelligence; user profiles; cognitive abilities; social
networks; universal psychometrics; games; virtual worlds | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been an increasing interest in inferring some personality traits
from users and players in social networks and games, respectively. This goes
beyond classical sentiment analysis, and also much further than customer
profiling. The purpose here is to have a characterisation of users in terms of
personality traits, such as openness, conscientiousness, extraversion,
agreeableness, and neuroticism. While this is an incipient area of research, we
ask the question of whether cognitive abilities, and intelligence in
particular, are also measurable from user profiles. However, we pose the
question as broadly as possible in terms of subjects, in the context of
universal psychometrics, including humans, machines and hybrids. Namely, in
this paper we analyse the following question: is it possible to measure the
intelligence of humans and (non-human) bots in a social network or a game just
from their user profiles, i.e., by observation, without the use of interactive
tests, such as IQ tests, the Turing test or other more principled machine
intelligence tests?
| [
{
"version": "v1",
"created": "Tue, 7 May 2013 21:39:57 GMT"
}
] | 1,368,057,600,000 | [
[
"Hernandez-Orallo",
"Jose",
""
]
] |
1305.1991 | Jose Hernandez-Orallo | David L. Dowe, Jose Hernandez-Orallo | On the universality of cognitive tests | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of the adaptive behaviour of many different kinds of systems
such as humans, animals and machines, requires more general ways of assessing
their cognitive abilities. This need is strengthened by increasingly more tasks
being analysed for and completed by a wider diversity of systems, including
swarms and hybrids. The notion of universal test has recently emerged in the
context of machine intelligence evaluation as a way to define and use the same
cognitive test for a variety of systems, using some principled tasks and
adapting the interface to each particular subject. However, how far can
universal tests be taken? This paper analyses this question in terms of
subjects, environments, space-time resolution, rewards and interfaces. This
leads to a number of findings, insights and caveats, according to several
levels where universal tests may be progressively more difficult to conceive,
implement and administer. One of the most significant contributions is given by
the realisation that more universal tests are defined as maximisations of less
universal tests for a variety of configurations. This means that universal
tests must be necessarily adaptive.
| [
{
"version": "v1",
"created": "Thu, 9 May 2013 01:46:38 GMT"
}
] | 1,368,144,000,000 | [
[
"Dowe",
"David L.",
""
],
[
"Hernandez-Orallo",
"Jose",
""
]
] |
1305.2254 | William Yang Wang | William Yang Wang, Kathryn Mazaitis, William W. Cohen | Programming with Personalized PageRank: A Locally Groundable First-Order
Probabilistic Logic | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many probabilistic first-order representation systems, inference is
performed by "grounding"---i.e., mapping the problem to a propositional representation,
and then performing propositional inference. With a large database of facts,
groundings can be very large, making inference and learning computationally
expensive. Here we present a first-order probabilistic language which is
well-suited to approximate "local" grounding: every query $Q$ can be
approximately grounded with a small graph. The language is an extension of
stochastic logic programs where inference is performed by a variant of
personalized PageRank. Experimentally, we show that the approach performs well
without weight learning on an entity resolution task; that supervised
weight-learning improves accuracy; and that grounding time is independent of DB
size. We also show that order-of-magnitude speedups are possible by
parallelizing learning.
| [
{
"version": "v1",
"created": "Fri, 10 May 2013 04:16:15 GMT"
}
] | 1,368,403,200,000 | [
[
"Wang",
"William Yang",
""
],
[
"Mazaitis",
"Kathryn",
""
],
[
"Cohen",
"William W.",
""
]
] |
1305.2265 | Marc Schoenauer | Mostepha Redouane Khouadjia (INRIA Saclay - Ile de France), Marc
Schoenauer (INRIA Saclay - Ile de France, LRI), Vincent Vidal (DCSD), Johann
Dr\'eo (TRT), Pierre Sav\'eant (TRT) | Quality Measures of Parameter Tuning for Aggregated Multi-Objective
Temporal Planning | arXiv admin note: substantial text overlap with arXiv:1305.1169 | LION7 - Learning and Intelligent OptimizatioN Conference (2013) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parameter tuning is recognized today as a crucial ingredient when tackling an
optimization problem. Several meta-optimization methods have been proposed to
find the best parameter set for a given optimization algorithm and (set of)
problem instances. When the objective of the optimization is some scalar
quality of the solution given by the target algorithm, this quality is also
used as the basis for the quality of parameter sets. But in the case of
multi-objective optimization by aggregation, the set of solutions is given by
several single-objective runs with different weights on the objectives, and it
turns out that the hypervolume of the final population of each single-objective
run might be a better indicator of the global performance of the aggregation
method than the best fitness in its population. This paper discusses this issue
on a case study in multi-objective temporal planning using the evolutionary
planner DaE-YAHSP and the meta-optimizer ParamILS. The results clearly show how
ParamILS makes a difference between both approaches, and demonstrate that
indeed, in this context, using the hypervolume indicator as the ParamILS target
is the best choice. Other issues pertaining to parameter tuning in the proposed
context are also discussed.
| [
{
"version": "v1",
"created": "Fri, 10 May 2013 06:34:05 GMT"
}
] | 1,368,403,200,000 | [
[
"Khouadjia",
"Mostepha Redouane",
"",
"INRIA Saclay - Ile de France"
],
[
"Schoenauer",
"Marc",
"",
"INRIA Saclay - Ile de France, LRI"
],
[
"Vidal",
"Vincent",
"",
"DCSD"
],
[
"Dréo",
"Johann",
"",
"TRT"
],
[
"Savéant",
"Pierre",
"",
"TRT"
]
] |
1305.2415 | Djallel Bouneffouf | Djallel Bouneffouf | Exponentiated Gradient LINUCB for Contextual Multi-Armed Bandits | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Exponentiated Gradient LINUCB, an algorithm for contextual
multi-armed bandits. This algorithm uses Exponentiated Gradient to find the
optimal exploration of the LINUCB. Within a deliberately designed offline
simulation framework we conduct evaluations with real online event log data.
The experimental results demonstrate that our algorithm outperforms the
surveyed algorithms.
| [
{
"version": "v1",
"created": "Fri, 10 May 2013 11:13:14 GMT"
}
] | 1,368,489,600,000 | [
[
"Bouneffouf",
"Djallel",
""
]
] |
1305.2498 | Jun He | Boris Mitavskiy and Jun He | A Further Generalization of the Finite-Population Geiringer-like Theorem
for POMDPs to Allow Recombination Over Arbitrary Set Covers | arXiv admin note: text overlap with arXiv:1110.4657 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A popular current research trend deals with expanding the Monte-Carlo tree
search sampling methodologies to environments with uncertainty and
incomplete information. Recently a finite population version of the Geiringer
theorem with nonhomologous recombination has been adapted to the setting of
Monte-Carlo tree search to cope with randomness and incomplete information by
exploiting the intrinsic similarities within the state space of the problem.
The only limitation of the new theorem is that the similarity relation was
assumed to be an equivalence relation on the set of states. In the current
paper we lift this "curtain of limitation" by allowing the similarity relation
to be modeled in terms of an arbitrary set cover of the set of state-action
pairs.
| [
{
"version": "v1",
"created": "Sat, 11 May 2013 11:42:09 GMT"
}
] | 1,368,489,600,000 | [
[
"Mitavskiy",
"Boris",
""
],
[
"He",
"Jun",
""
]
] |
1305.2561 | Kartik Talamadupula | Kartik Talamadupula and Octavian Udrea and Anton Riabov and Anand
Ranganathan | Strategic Planning for Network Data Analysis | 9 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As network traffic monitoring software for cybersecurity, malware detection,
and other critical tasks becomes increasingly automated, the rate of alerts and
supporting data gathered, as well as the complexity of the underlying model,
regularly exceed human processing capabilities. Many of these applications
require complex models and constituent rules in order to come up with decisions
that influence the operation of entire systems. In this paper, we motivate the
novel "strategic planning" problem -- one of gathering data from the world and
applying the underlying model of the domain in order to come up with decisions
that will monitor the system in an automated manner. We describe our application
of automated planning methods to this problem, including the technique that we
used to solve it in a manner that would scale to the demands of a real-time,
real world scenario. We then present a PDDL model of one such application
scenario related to network administration and monitoring, followed by a
description of a novel integrated system that was built to accept generated
plans and to continue the execution process. Finally, we present evaluations of
two different automated planners and their different capabilities with our
integrated system, both on a six-month window of network data, and using a
simulator.
| [
{
"version": "v1",
"created": "Sun, 12 May 2013 05:52:08 GMT"
}
] | 1,368,489,600,000 | [
[
"Talamadupula",
"Kartik",
""
],
[
"Udrea",
"Octavian",
""
],
[
"Riabov",
"Anton",
""
],
[
"Ranganathan",
"Anand",
""
]
] |
1305.2724 | Said Broumi | Said Broumi | Generalized Neutrosophic Soft Set | 14 pages, 11 figures | International Journal of Computer Science, Engineering and
Information Technology (IJCSEIT), Vol.3, No.2, April 2013 | 10.5121/ijcseit.2013.3202 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a new concept called generalized neutrosophic soft
set. This concept incorporates the beneficial properties of both the
generalized neutrosophic set introduced by A.A. Salama [7] and the soft set
techniques proposed by Molodtsov [4]. We also study some properties of this
concept. Some definitions and operations have been introduced on generalized
neutrosophic soft sets. Finally, we present an application of the generalized
neutrosophic soft set in a decision making problem.
| [
{
"version": "v1",
"created": "Mon, 13 May 2013 09:42:50 GMT"
}
] | 1,368,489,600,000 | [
[
"Broumi",
"Said",
""
]
] |
1305.3321 | Lakhdar Sais | Said Jabbour, Lakhdar Sais, Yakoub Salhi | A Mining-Based Compression Approach for Constraint Satisfaction Problems | arXiv admin note: substantial text overlap with arXiv:1304.4415 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose an extension of our Mining for SAT framework to
the Constraint Satisfaction Problem (CSP). We consider n-ary extensional
constraints (table constraints). Our approach aims to reduce the size of the
CSP by exploiting the structure of the constraint graph and of its associated
microstructure. More precisely, we apply itemset mining techniques to search
for closed frequent itemsets on these two representations. Using the Tseitin
extension, we rewrite the whole CSP into another, compressed CSP that is
equivalent with respect to satisfiability. Our approach contrasts with the
approach previously proposed by Katsirelos and Walsh, as we do not change the
structure of the constraints.
| [
{
"version": "v1",
"created": "Tue, 14 May 2013 23:17:49 GMT"
}
] | 1,368,662,400,000 | [
[
"Jabbour",
"Said",
""
],
[
"Sais",
"Lakhdar",
""
],
[
"Salhi",
"Yakoub",
""
]
] |
1305.4859 | Jia Xu | Jia Xu, Patrick Shironoshita, Ubbo Visser, Nigel John, Mansur Kabuka | Extract ABox Modules for Efficient Ontology Querying | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The extraction of logically-independent fragments out of an ontology ABox can
be useful for solving the tractability problem of querying ontologies with
large ABoxes. In this paper, we propose a formal definition of an ABox module,
such that it guarantees complete preservation of facts about a given set of
individuals, and thus can be reasoned about independently w.r.t. the ontology TBox.
With ABox modules of this type, isolated or distributed (parallel) ABox
reasoning becomes feasible, and more efficient data retrieval from ontology
ABoxes can be attained. To compute such an ABox module, we present a
theoretical approach and also an approximation for $\mathcal{SHIQ}$ ontologies.
Evaluation of the module approximation on different types of ontologies shows
that, on average, extracted ABox modules are significantly smaller than the
entire ABox, and the time for ontology reasoning based on ABox modules can be
improved significantly.
| [
{
"version": "v1",
"created": "Tue, 21 May 2013 15:35:03 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jul 2013 21:16:14 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Nov 2013 15:48:25 GMT"
},
{
"version": "v4",
"created": "Wed, 11 Jun 2014 12:16:53 GMT"
}
] | 1,402,531,200,000 | [
[
"Xu",
"Jia",
""
],
[
"Shironoshita",
"Patrick",
""
],
[
"Visser",
"Ubbo",
""
],
[
"John",
"Nigel",
""
],
[
"Kabuka",
"Mansur",
""
]
] |
1305.5030 | David Tolpin | David Tolpin, Tal Beja, Solomon Eyal Shimony, Ariel Felner, Erez
Karpas | Towards Rational Deployment of Multiple Heuristics in A* | 7 pages, IJCAI 2013, to appear | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The obvious way to use several admissible heuristics in A* is to take their
maximum. In this paper we aim to reduce the time spent on computing heuristics.
We discuss Lazy A*, a variant of A* where heuristics are evaluated lazily: only
when they are essential to a decision to be made in the A* search process. We
present a new rational meta-reasoning based scheme, rational lazy A*, which
decides whether to compute the more expensive heuristics at all, based on a
myopic value of information estimate. Both methods are examined theoretically.
Empirical evaluation on several domains supports the theoretical results, and
shows that lazy A* and rational lazy A* are state-of-the-art heuristic
combination methods.
| [
{
"version": "v1",
"created": "Wed, 22 May 2013 06:41:00 GMT"
}
] | 1,369,267,200,000 | [
[
"Tolpin",
"David",
""
],
[
"Beja",
"Tal",
""
],
[
"Shimony",
"Solomon Eyal",
""
],
[
"Felner",
"Ariel",
""
],
[
"Karpas",
"Erez",
""
]
] |
1305.5506 | Robert R. Tucci | Robert R. Tucci | Introduction to Judea Pearl's Do-Calculus | 16 pages (11 files: 1 .tex, 1 .sty, 9 .jpg) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is a purely pedagogical paper with no new results. The goal of the paper
is to give a fairly self-contained introduction to Judea Pearl's do-calculus,
including proofs of his 3 rules.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2013 02:36:43 GMT"
}
] | 1,369,353,600,000 | [
[
"Tucci",
"Robert R.",
""
]
] |
1305.5610 | Tao Ye | Fred Glover and Tao Ye and Abraham P. Punnen and Gary Kochenberger | Integrating tabu search and VLSN search to develop enhanced algorithms:
A case study using bipartite boolean quadratic programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The bipartite boolean quadratic programming problem (BBQP) is a
generalization of the well studied boolean quadratic programming problem. The
model has a variety of real life applications; however, empirical studies of
the model are not available in the literature, except in a few isolated
instances. In this paper, we develop efficient heuristic algorithms based on
tabu search, very large scale neighborhood (VLSN) search, and a hybrid
algorithm that integrates the two. The computational study establishes that
effective integration of simple tabu search with VLSN search results in
superior outcomes, and suggests the value of such an integration in other
settings. Complexity analysis and implementation details are provided along
with conclusions drawn from experimental analysis. In addition, we obtain
solutions better than the best previously known for almost all medium and large
size benchmark instances.
| [
{
"version": "v1",
"created": "Fri, 24 May 2013 03:36:00 GMT"
}
] | 1,369,612,800,000 | [
[
"Glover",
"Fred",
""
],
[
"Ye",
"Tao",
""
],
[
"Punnen",
"Abraham P.",
""
],
[
"Kochenberger",
"Gary",
""
]
] |
1305.5665 | Abdelali Boussadi | Boussadi Abdelali, Caruba Thibaut, Karras Alexandre, Berdot Sarah,
Degoulet Patrice, Durieux Pierre, Sabatier Brigitte | Validity of a clinical decision rule based alert system for drug dose
adjustment in patients with renal failure intended to improve pharmacists'
analysis of medication orders in hospitals | Word count Body: 3753 Abstract: 280 tables: 5 figures: 1 pages: 26
references: 29 This article is the pre print version of an article submitted
to the International Journal of Medical Informatics (IJMI, Elsevier) funding:
This work was supported by Programme de recherche en qualit\'e hospitali\`ere
(PREQHOS-PHRQ 1034 SADPM), The French Ministry of Health, grant number 115189 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: The main objective of this study was to assess the diagnostic
performance of an alert system integrated into the CPOE/EMR system for renally
cleared drug dosing control. The generated alerts were compared with the daily
routine practice of pharmacists as part of the analysis of medication orders.
Materials and Methods: The pharmacists performed their analysis of medication
orders as usual and were not aware of the alert system's interventions, which,
for the purpose of the study, were displayed neither to the physician nor to
the pharmacist but were kept with the associated recommendations in a log file. A senior
pharmacist analyzed the results of medication order analysis with and without
the alert system. The unit of analysis was the drug prescription line. The
primary study endpoints were the detection of drug-dose prescription errors and
inter-rater reliability between the alert system and the pharmacists in the
detection of drug dose error. Results: The alert system fired alerts in 8.41%
(421/5006) of cases: 5.65% (283/5006) "exceeds maximum daily dose" alerts and
2.76% (138/5006) underdose alerts. The alert system and the pharmacists showed a
relatively poor concordance: 0.106 (CI 95% [0.068, 0.144]). According to the
senior pharmacist review, the alert system fired more appropriate alerts than
pharmacists, and made fewer errors than pharmacists in analyzing drug dose
prescriptions: 143 for the alert system and 261 for the pharmacists. Unlike the
alert system, most diagnostic errors made by the pharmacists were false
negatives. The pharmacists were not able to analyze a significant number (2097;
25.42%) of drug prescription lines because of understaffing. Conclusion: This
study strongly suggests that an alert system would be complementary to the
pharmacists' activity and contribute to drug prescription safety.
| [
{
"version": "v1",
"created": "Fri, 24 May 2013 09:37:54 GMT"
}
] | 1,369,612,800,000 | [
[
"Abdelali",
"Boussadi",
""
],
[
"Thibaut",
"Caruba",
""
],
[
"Alexandre",
"Karras",
""
],
[
"Sarah",
"Berdot",
""
],
[
"Patrice",
"Degoulet",
""
],
[
"Pierre",
"Durieux",
""
],
[
"Brigitte",
"Sabatier",
""
]
] |
1305.6187 | Steven Prestwich | S. D. Prestwich | Improved Branch-and-Bound for Low Autocorrelation Binary Sequences | Journal paper in preparation | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Low Autocorrelation Binary Sequence problem has applications in
telecommunications, is of theoretical interest to physicists, and has inspired
many optimisation researchers. Metaheuristics for the problem have progressed
greatly in recent years but complete search has not progressed since a
branch-and-bound method of 1996. In this paper we find four ways of improving
branch-and-bound, leading to a tighter relaxation, faster convergence to
optimality, and better empirical scalability.
| [
{
"version": "v1",
"created": "Mon, 27 May 2013 11:57:40 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Jul 2013 14:42:15 GMT"
}
] | 1,374,624,000,000 | [
[
"Prestwich",
"S. D.",
""
]
] |
1305.7058 | Sahar Mokhtar | Nora Y. Ibrahim, Sahar A. Mokhtar and Hany M. Harb | Towards an Ontology based integrated Framework for Semantic Web | null | International Journal of Computer Science and Information Security
(IJCSIS) Vol. 10, No. 9, September 2012 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontologies are widely used as a means for solving the information
heterogeneity problems on the web because of their capability to provide
explicit meaning to the information. They have become an efficient tool for
knowledge representation in a structured manner. There is always more than one
ontology for the same domain. Furthermore, there is no standard method for
building ontologies, and there are many ontology building tools using different
ontology languages. For these reasons, interoperability between the
ontologies is very low. Current ontology tools mostly provide functions to
build, edit and perform inference on the ontology. Methods for merging
heterogeneous domain ontologies are not included in most tools. This paper
presents an ontology merging methodology for building a single global ontology
from heterogeneous eXtensible Markup Language (XML) data sources to capture and
maintain all the knowledge which the XML data sources can contain.
| [
{
"version": "v1",
"created": "Thu, 30 May 2013 10:53:07 GMT"
}
] | 1,370,304,000,000 | [
[
"Ibrahim",
"Nora Y.",
""
],
[
"Mokhtar",
"Sahar A.",
""
],
[
"Harb",
"Hany M.",
""
]
] |
1305.7185 | Philippe Martin | Philippe A. Martin | Collaborative ontology sharing and editing | 12 pages, 2 figures, journal | IJCSIS 6 (2011) 14-29 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article first lists reasons why - in the long term or when creating a
new knowledge base (KB) for general knowledge sharing purposes -
collaboratively building a well-organized KB does/can provide more
possibilities, with on the whole no more costs, than the mainstream approach
where knowledge creation and re-use involves searching, merging and creating
(semi-)independent (relatively small) ontologies or semi-formal documents. The
article lists elements required to achieve this and describes the main one: a
KB editing protocol that keeps the KB free of automatically/manually detected
inconsistencies, while neither forcing contributors to discuss or agree on
terminology and beliefs nor requiring a selection committee.
| [
{
"version": "v1",
"created": "Thu, 30 May 2013 18:06:05 GMT"
}
] | 1,369,958,400,000 | [
[
"Martin",
"Philippe A.",
""
]
] |
1305.7254 | Imen Ayachi | I. Ayachi, R. Kammarti, M.Ksouri, P.Borne LACS, ENIT, Tunis-Belvedere
Tunisie LAGIS, ECL, Villeneuve d Ascq, France | Harmony search to solve the container storage problem with different
container types | 7 pages | International Journal of Computer Applications, June 2012 | null | Volume 48-- No.22, June 2012 | cs.AI | http://creativecommons.org/licenses/by/3.0/ | This paper presents an adaptation of the harmony search algorithm to solve
the storage allocation problem for inbound and outbound containers. This
problem is studied considering multiple container types (regular, open side,
open top, tank, empty and refrigerated), which makes the situation more
complicated, as various storage constraints appear. The objective is to find
an optimal container arrangement which respects the containers' departure dates
and minimizes the re-handling operations on containers. The performance of the
proposed approach is verified by comparing its results to those generated by a
genetic algorithm and a LIFO algorithm.
| [
{
"version": "v1",
"created": "Thu, 30 May 2013 21:13:25 GMT"
}
] | 1,370,217,600,000 | [
[
"Ayachi",
"I.",
""
],
[
"Kammarti",
"R.",
""
],
[
"Ksouri",
"M.",
""
],
[
"LACS",
"P. Borne",
""
],
[
"ENIT",
"",
""
],
[
"LAGIS",
"Tunis-Belvedere Tunisie",
""
],
[
"ECL",
"",
""
],
[
"Ascq",
"Villeneuve d",
""
],
[
"France",
"",
""
]
] |
1305.7345 | Diedrich Wolter | Frank Dylla, Till Mossakowski, Thomas Schneider and Diedrich Wolter | Algebraic Properties of Qualitative Spatio-Temporal Calculi | COSIT 2013 paper including supplementary material | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Qualitative spatial and temporal reasoning is based on so-called qualitative
calculi. Algebraic properties of these calculi have several implications on
reasoning algorithms. But what exactly is a qualitative calculus? And to what
extent do the proposed qualitative calculi meet these demands? The literature
provides various answers to the first question but only a few facts about the
second. In this paper we identify the minimal requirements for binary
spatio-temporal calculi and we discuss the relevance of the corresponding
axioms for representation and reasoning. We also analyze existing qualitative calculi
and provide a classification involving different notions of a relation algebra.
| [
{
"version": "v1",
"created": "Fri, 31 May 2013 10:15:18 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Sep 2013 11:59:17 GMT"
}
] | 1,379,289,600,000 | [
[
"Dylla",
"Frank",
""
],
[
"Mossakowski",
"Till",
""
],
[
"Schneider",
"Thomas",
""
],
[
"Wolter",
"Diedrich",
""
]
] |
1306.0095 | Sergey Rodionov | Alexey Potapov and Sergey Rodionov | Universal Induction with Varying Sets of Combinators | To appear in the proceedings of AGI 2013, Lecture Notes in Artificial
Intelligence, Vol. 7999, pp. 88-97, Springer-Verlag, 2013. The final
publication is available at link.springer.com | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Universal induction is a crucial issue in AGI. Its practical applicability
can be achieved by the choice of the reference machine or representation of
algorithms agreed with the environment. This machine should be updatable for
solving subsequent tasks more efficiently. We study this problem on an example
of combinatory logic as the very simple Turing-complete reference machine,
which enables modifying program representations by introducing different sets
of primitive combinators. A genetic programming system is used to search for
combinator expressions, which are easily decomposed into sub-expressions that
are recombined in crossover. Our experiments show that low-complexity induction or
prediction tasks can be solved by the developed system (much more efficiently
than using brute force); useful combinators can be revealed and included in
the representation, simplifying more difficult tasks. However, optimal sets of
combinators depend on the specific task, so the reference machine should be
adaptively chosen in coordination with the search engine.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 2013 10:47:23 GMT"
}
] | 1,370,304,000,000 | [
[
"Potapov",
"Alexey",
""
],
[
"Rodionov",
"Sergey",
""
]
] |
1306.0751 | Nima Taghipour | Nima Taghipour, Jesse Davis, Hendrik Blockeel | First-Order Decomposition Trees | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lifting attempts to speed up probabilistic inference by exploiting symmetries
in the model. Exact lifted inference methods, like their propositional
counterparts, work by recursively decomposing the model and the problem. In the
propositional case, there exist formal structures, such as decomposition trees
(dtrees), that represent such a decomposition and allow us to determine the
complexity of inference a priori. However, there is currently no equivalent
structure and no analogous complexity results for lifted inference. In this paper,
we introduce FO-dtrees, which upgrade propositional dtrees to the first-order
level. We show how these trees can characterize a lifted inference solution for
a probabilistic logical model (in terms of a sequence of lifted operations),
and make a theoretical analysis of the complexity of lifted inference in terms
of the novel notion of lifted width for the tree.
| [
{
"version": "v1",
"created": "Tue, 4 Jun 2013 12:43:07 GMT"
}
] | 1,370,390,400,000 | [
[
"Taghipour",
"Nima",
""
],
[
"Davis",
"Jesse",
""
],
[
"Blockeel",
"Hendrik",
""
]
] |
1306.1031 | Lars Kotthoff | Lars Kotthoff | LLAMA: Leveraging Learning to Automatically Manage Algorithms | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algorithm portfolio and selection approaches have achieved remarkable
improvements over single solvers. However, the implementation of such systems
is often highly customised and specific to the problem domain. This makes it
difficult for researchers to explore different techniques for their specific
problems. We present LLAMA, a modular and extensible toolkit implemented as an
R package that facilitates the exploration of a range of different portfolio
techniques on any problem domain. It implements the algorithm selection
approaches most commonly used in the literature and leverages the extensive
library of machine learning algorithms and techniques in R. We describe the
current capabilities and limitations of the toolkit and illustrate its usage on
a set of example SAT problems.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2013 09:35:35 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jul 2013 13:31:08 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Apr 2014 12:55:03 GMT"
}
] | 1,398,902,400,000 | [
[
"Kotthoff",
"Lars",
""
]
] |
1306.1553 | Sergey Rodionov | Sergey Rodionov, Alexey Potapov, Yurii Vinogradov | Direct Uncertainty Estimation in Reinforcement Learning | AGI-13 Workshop paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimal probabilistic approach in reinforcement learning is computationally
infeasible. Its simplification, which consists in neglecting the difference
between the true environment and its model estimated using a limited number of
observations, causes the exploration vs exploitation problem. Uncertainty can be
expressed in terms of a probability distribution over the space of environment
models, and this uncertainty can be propagated to the action-value function via
Bellman iterations, which are, however, not sufficiently computationally
efficient. We consider the possibility of directly measuring the uncertainty of
the action-value function, and analyze the sufficiency of this simplified approach.
| [
{
"version": "v1",
"created": "Thu, 6 Jun 2013 20:57:19 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jun 2013 14:32:12 GMT"
}
] | 1,372,204,800,000 | [
[
"Rodionov",
"Sergey",
""
],
[
"Potapov",
"Alexey",
""
],
[
"Vinogradov",
"Yurii",
""
]
] |
1306.1557 | Sergey Rodionov | Alexey Potapov, Sergey Rodionov | Extending Universal Intelligence Models with Formal Notion of
Representation | proceedings of AGI 2012, Lecture Notes in Artificial Intelligence,
Vol. 7716, pp. 242-251, Springer-Verlag, 2012. The final publication is
available at link.springer.com | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solomonoff induction is known to be universal, but incomputable. Its
approximations, namely, the Minimum Description (or Message) Length (MDL)
principles, are adopted in practice in the efficient, but non-universal form.
Recent attempts to bridge this gap led to the development of the
Representational MDL principle, which originates from a formal decomposition of
the task of induction. In this paper, a possible extension of the RMDL
principle in the context of universal intelligence agents is considered, for
which the introduction of representations is shown to be an unavoidable meta-heuristic
and a step toward efficient general intelligence. Hierarchical representations
and model optimization with the use of information-theoretic interpretation of
the adaptive resonance are also discussed.
| [
{
"version": "v1",
"created": "Thu, 6 Jun 2013 21:11:19 GMT"
}
] | 1,370,822,400,000 | [
[
"Potapov",
"Alexey",
""
],
[
"Rodionov",
"Sergey",
""
]
] |
1306.2025 | Tshilidzi Marwala | Tshilidzi Marwala | Flexibly-bounded Rationality and Marginalization of Irrationality
Theories for Decision Making | 17 pages, submitted to Springer-Verlag. arXiv admin note: substantial
text overlap with arXiv:1305.6037 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper the theory of flexibly-bounded rationality, which is an
extension of the theory of bounded rationality, is revisited. Rational decision
making involves using information which is almost always imperfect and
incomplete, together with some intelligent machine which, if it is a human
being, is inconsistent, to make decisions. In bounded rationality, this decision is
made irrespective of the fact that the information to be used is incomplete and
imperfect and that the human brain is inconsistent, and thus this decision that
is to be made is taken within the bounds of these limitations. In the theory of
flexibly-bounded rationality, advanced information analysis is used, the
correlation machine is applied to complete missing information and artificial
intelligence is used to make more consistent decisions. Therefore
flexibly-bounded rationality expands the bounds within which rationality is
exercised. Because human decision making is essentially irrational, this paper
proposes the theory of marginalization of irrationality in decision making to
deal with the problem of satisficing in the presence of irrationality.
| [
{
"version": "v1",
"created": "Sun, 9 Jun 2013 14:58:23 GMT"
}
] | 1,370,908,800,000 | [
[
"Marwala",
"Tshilidzi",
""
]
] |
1306.2558 | William Cohen | William W. Cohen and David P. Redlawsk and Douglas Pierce | The Effect of Biased Communications On Both Trusting and Suspicious
Voters | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent studies of political decision-making, apparently anomalous behavior
has been observed on the part of voters, in which negative information about a
candidate strengthens, rather than weakens, a prior positive opinion about the
candidate. This behavior appears to run counter to rational models of decision
making, and it is sometimes interpreted as evidence of non-rational "motivated
reasoning". We consider scenarios in which this effect arises in a model of
rational decision making which includes the possibility of deceptive
information. In particular, we will consider a model in which there are two
classes of voters, which we will call trusting voters and suspicious voters,
and two types of information sources, which we will call unbiased sources and
biased sources. In our model, new data about a candidate can be efficiently
incorporated by a trusting voter, and anomalous updates are impossible;
however, anomalous updates can be made by suspicious voters, if the information
source mistakenly plans for an audience of trusting voters, and if the partisan
goals of the information source are known by the suspicious voter to be
"opposite" to his own. Our model is based on a formalism introduced by the
artificial intelligence community called "multi-agent influence diagrams",
which generalize Bayesian networks to settings involving multiple agents with
distinct goals.
| [
{
"version": "v1",
"created": "Tue, 11 Jun 2013 15:45:11 GMT"
}
] | 1,370,995,200,000 | [
[
"Cohen",
"William W.",
""
],
[
"Redlawsk",
"David P.",
""
],
[
"Pierce",
"Douglas",
""
]
] |
1306.3317 | Mohsen Joneidi | Mohsen Joneidi | Sparse Auto-Regressive: Robust Estimation of AR Parameters | 4 pages, 4 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper I present a new approach for regression of time series using
their own samples. This is a celebrated problem known as Auto-Regression.
Dealing with outliers or missing samples in a time series makes the problem of
estimation difficult, so the estimation should be robust against them.
Moreover, for coding purposes I will show that it is desirable that the
residual of auto-regression be sparse. To these aims, I first assume a
multivariate Gaussian prior on the residual and then obtain the estimation. Two simple simulations have been done
on spectrum estimation and speech coding.
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2013 07:49:44 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Aug 2015 16:59:06 GMT"
}
] | 1,439,942,400,000 | [
[
"Joneidi",
"Mohsen",
""
]
] |
1306.3542 | Saadat Anwar | Saadat Anwar, Chitta Baral, Katsumi Inoue | Encoding Petri Nets in Answer Set Programming for Simulation Based
Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of our long term research goals is to develop systems to answer realistic
questions (e.g., some mentioned in textbooks) about biological pathways that a
biologist may ask. To answer such questions we need formalisms that can model
pathways, simulate their execution, model intervention to those pathways, and
compare simulations under different circumstances. We found Petri Nets to be
the starting point of a suitable formalism for the modeling and simulation
needs. However, we need to make extensions to the Petri Net model and also
reason with multiple simulation runs and parallel state evolutions. Towards
that end Answer Set Programming (ASP) implementation of Petri Nets would allow
us to do both. In this paper we show how ASP can be used to encode basic Petri
Nets in an intuitive manner. We then show how we can modify this encoding to
model several Petri Net extensions by making small changes. We then highlight
some of the reasoning capabilities that we will use to accomplish our ultimate
research goal.
| [
{
"version": "v1",
"created": "Sat, 15 Jun 2013 03:10:56 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jun 2013 18:27:12 GMT"
}
] | 1,372,118,400,000 | [
[
"Anwar",
"Saadat",
""
],
[
"Baral",
"Chitta",
""
],
[
"Inoue",
"Katsumi",
""
]
] |
1306.3548 | Saadat Anwar | Saadat Anwar, Chitta Baral, Katsumi Inoue | Encoding Higher Level Extensions of Petri Nets in Answer Set Programming | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answering realistic questions about biological systems and pathways similar
to the ones used by textbooks to test students' understanding of
biological systems is one of our long term research goals. Often these
questions require simulation based reasoning. To answer such questions, we need
formalisms to build pathway models, add extensions, simulate, and reason with
them. We chose Petri Nets and Answer Set Programming (ASP) as suitable
formalisms, since Petri Net models are similar to biological pathway diagrams;
and ASP provides easy extension and strong reasoning abilities. We found that
certain aspects of biological pathways, such as locations and substance types,
cannot be represented succinctly using regular Petri Nets. As a result, we need
higher level constructs like colored tokens. In this paper, we show how Petri
Nets with colored tokens can be encoded in ASP in an intuitive manner, how
additional Petri Net extensions can be added by making small code changes, and
how this work furthers our long term research goals. Our approach can be
adapted to other domains with similar modeling needs.
| [
{
"version": "v1",
"created": "Sat, 15 Jun 2013 04:28:49 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jun 2013 18:27:24 GMT"
}
] | 1,372,118,400,000 | [
[
"Anwar",
"Saadat",
""
],
[
"Baral",
"Chitta",
""
],
[
"Inoue",
"Katsumi",
""
]
] |
1306.3884 | Martin Slota | Martin Slota and Jo\~ao Leite | The Rise and Fall of Semantic Rule Updates Based on SE-Models | 38 pages, to appear in Theory and Practice of Logic Programming
(TPLP) | Theory and Practice of Logic Programming 14 (2014) 869-907 | 10.1017/S1471068413000100 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logic programs under the stable model semantics, or answer-set programs,
provide an expressive rule-based knowledge representation framework, featuring
a formal, declarative and well-understood semantics. However, handling the
evolution of rule bases is still a largely open problem. The AGM framework for
belief change was shown to give inappropriate results when directly applied to
logic programs under a non-monotonic semantics such as the stable models. The
approaches to address this issue, developed so far, proposed update semantics
based on manipulating the syntactic structure of programs and rules.
More recently, AGM revision has been successfully applied to a significantly
more expressive semantic characterisation of logic programs based on SE-models.
This is an important step, as it changes the focus from the evolution of a
syntactic representation of a rule base to the evolution of its semantic
content.
In this paper, we borrow results from the area of belief update to tackle the
problem of updating (instead of revising) answer-set programs. We prove a
representation theorem which makes it possible to constructively define any
operator satisfying a set of postulates derived from Katsuno and Mendelzon's
postulates for belief update. We define a specific operator based on this
theorem, examine its computational complexity and compare the behaviour of this
operator with syntactic rule update semantics from the literature. Perhaps
surprisingly, we uncover a serious drawback of all rule update operators based
on Katsuno and Mendelzon's approach to update and on SE-models.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2013 15:02:11 GMT"
}
] | 1,582,070,400,000 | [
[
"Slota",
"Martin",
""
],
[
"Leite",
"João",
""
]
] |
1306.3888 | J. G. Wolff | J. Gerard Wolff | The SP theory of intelligence: an overview | arXiv admin note: text overlap with arXiv:cs/0401009,
arXiv:1303.2071, arXiv:cs/0307010, arXiv:1212.0229, arXiv:1303.2013 | J G Wolff, Information, 4 (3), 283-341, 2013 | 10.3390/info4030283 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article is an overview of the "SP theory of intelligence". The theory
aims to simplify and integrate concepts across artificial intelligence,
mainstream computing and human perception and cognition, with information
compression as a unifying theme. It is conceived as a brain-like system that
receives 'New' information and stores some or all of it in compressed form as
'Old' information. It is realised in the form of a computer model -- a first
version of the SP machine. The concept of "multiple alignment" is a powerful
central idea. Using heuristic techniques, the system builds multiple alignments
that are 'good' in terms of information compression. For each multiple
alignment, probabilities may be calculated. These provide the basis for
calculating the probabilities of inferences. The system learns new structures
from partial matches between patterns. Using heuristic techniques, the system
searches for sets of structures that are 'good' in terms of information
compression. These are normally ones that people judge to be 'natural', in
accordance with the 'DONSVIC' principle -- the discovery of natural structures
via information compression. The SP theory may be applied in several areas
including 'computing', aspects of mathematics and logic, representation of
knowledge, natural language processing, pattern recognition, several kinds of
reasoning, information storage and retrieval, planning and problem solving,
information compression, neuroscience, and human perception and cognition.
Examples include the parsing and production of language including discontinuous
dependencies in syntax, pattern recognition at multiple levels of abstraction
and its integration with part-whole relations, nonmonotonic reasoning and
reasoning with default values, reasoning in Bayesian networks including
'explaining away', causal diagnosis, and the solving of a geometric analogy
problem.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2013 11:51:17 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Jul 2013 16:31:15 GMT"
},
{
"version": "v3",
"created": "Sun, 8 Sep 2013 12:16:05 GMT"
},
{
"version": "v4",
"created": "Wed, 7 Jan 2015 11:44:26 GMT"
}
] | 1,420,675,200,000 | [
[
"Wolff",
"J. Gerard",
""
]
] |
1306.3890 | J. G. Wolff | J. Gerard Wolff | Big data and the SP theory of intelligence | Accepted for publication in IEEE Access | J G Wolff, IEEE Access, 2, 301-315, 2014 | 10.1109/ACCESS.2014.2315297 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article is about how the "SP theory of intelligence" and its realisation
in the "SP machine" may, with advantage, be applied to the management and
analysis of big data. The SP system -- introduced in the article and fully
described elsewhere -- may help to overcome the problem of variety in big data:
it has potential as "a universal framework for the representation and
processing of diverse kinds of knowledge" (UFK), helping to reduce the
diversity of formalisms and formats for knowledge and the different ways in
which they are processed. It has strengths in the unsupervised learning or
discovery of structure in data, in pattern recognition, in the parsing and
production of natural language, in several kinds of reasoning, and more. It
lends itself to the analysis of streaming data, helping to overcome the problem
of velocity in big data. Central in the workings of the system is lossless
compression of information: making big data smaller and reducing problems of
storage and management. There is potential for substantial economies in the
transmission of data, for big cuts in the use of energy in computing, for
faster processing, and for smaller and lighter computers. The system provides a
handle on the problem of veracity in big data, with potential to assist in the
management of errors and uncertainties in data. It lends itself to the
visualisation of knowledge structures and inferential processes. A
high-parallel, open-source version of the SP machine would provide a means for
researchers everywhere to explore what can be done with the system and to
create new versions of it.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2013 13:15:41 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Feb 2014 16:34:23 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2014 17:18:20 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Mar 2014 19:45:42 GMT"
}
] | 1,412,035,200,000 | [
[
"Wolff",
"J. Gerard",
""
]
] |
1306.4411 | Nguyen Vo | Chitta Baral, Nguyen H. Vo | Event-Object Reasoning with Curated Knowledge Bases: Deriving Missing
Information | 13 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The broader goal of our research is to formulate answers to why and how
questions with respect to knowledge bases, such as AURA. One issue we face when
reasoning with many available knowledge bases is that at times needed
information is missing. Examples of this include partially missing information
about next sub-event, first sub-event, last sub-event, result of an event,
input to an event, destination of an event, and raw material involved in an
event. In many cases one can recover part of the missing knowledge through
reasoning. In this paper we give a formal definition of how such missing
information can be recovered and then give an ASP implementation of it. We then
discuss the implication of this with respect to answering why and how
questions.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2013 01:58:21 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jun 2013 00:19:24 GMT"
}
] | 1,371,772,800,000 | [
[
"Baral",
"Chitta",
""
],
[
"Vo",
"Nguyen H.",
""
]
] |
1306.4418 | Geoffrey Chu | Geoffrey Chu, Peter J. Stuckey | Structure Based Extended Resolution for Constraint Programming | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nogood learning is a powerful approach to reducing search in Constraint
Programming (CP) solvers. The current state of the art, called Lazy Clause
Generation (LCG), uses resolution to derive nogoods expressing the reasons for
each search failure. Such nogoods can prune other parts of the search tree,
producing exponential speedups on a wide variety of problems. Nogood learning
solvers can be seen as resolution proof systems. The stronger the proof system,
the faster it can solve a CP problem. It has recently been shown that the proof
system used in LCG is at least as strong as general resolution. However,
stronger proof systems such as \emph{extended resolution} exist. Extended
resolution allows for literals expressing arbitrary logical concepts over
existing variables to be introduced and can allow exponentially smaller proofs
than general resolution. The primary problem in using extended resolution is to
figure out exactly which literals are useful to introduce. In this paper, we
show that we can use the structural information contained in a CP model in
order to introduce useful literals, and that this can translate into
significant speedups on a range of problems.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2013 04:18:45 GMT"
}
] | 1,371,686,400,000 | [
[
"Chu",
"Geoffrey",
""
],
[
"Stuckey",
"Peter J.",
""
]
] |
1306.4460 | Michal \v{C}ertick\'y | Michal Certicky | Implementing a Wall-In Building Placement in StarCraft with Declarative
Programming | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real-time strategy games like StarCraft, skilled players often block the
entrance to their base with buildings to prevent the opponent's units from
getting inside. This technique, called "walling-in", is a vital part of
a player's skill set, allowing him to survive early aggression. However, current
artificial players (bots) do not possess this skill, due to numerous
inconveniences surfacing during its implementation in imperative languages like
C++ or Java. In this text, written as a guide for bot programmers, we address
the problem of finding an appropriate building placement that would block the
entrance to player's base, and present a ready to use declarative solution
employing the paradigm of answer set programming (ASP). We also encourage the
readers to experiment with different declarative approaches to this problem.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2013 09:08:48 GMT"
}
] | 1,371,686,400,000 | [
[
"Certicky",
"Michal",
""
]
] |
1306.5601 | Moritz M\"uhlenthaler | Moritz M\"uhlenthaler and Rolf Wanka | A Decomposition of the Max-min Fair Curriculum-based Course Timetabling
Problem | revised version (fixed problems in the notation and general
improvements); original paper: 16 pages, accepted for publication at the
Multidisciplinary International Scheduling Conference 2013 (MISTA 2013) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a decomposition of the max-min fair curriculum-based course
timetabling (MMF-CB-CTT) problem. The decomposition models the room assignment
subproblem as a generalized lexicographic bottleneck optimization problem
(LBOP). We show that the generalized LBOP can be solved efficiently if the
corresponding sum optimization problem can be solved efficiently. As a
consequence, the room assignment subproblem of the MMF-CB-CTT problem can be
solved efficiently. We use this insight to improve a previously proposed
heuristic algorithm for the MMF-CB-CTT problem. Our experimental results
indicate that using the new decomposition improves the performance of the
algorithm on most of the 21 ITC2007 test instances with respect to the quality
of the best solution found. Furthermore, we introduce a measure of the quality
of a solution to a max-min fair optimization problem. This measure helps to
overcome some limitations imposed by the qualitative nature of max-min fairness
and aids the statistical evaluation of the performance of randomized algorithms
for such problems. We use this measure to show that using the new decomposition
the algorithm outperforms the original one on most instances with respect to
the average solution quality.
| [
{
"version": "v1",
"created": "Mon, 24 Jun 2013 12:54:50 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Aug 2013 13:33:24 GMT"
}
] | 1,377,561,600,000 | [
[
"Mühlenthaler",
"Moritz",
""
],
[
"Wanka",
"Rolf",
""
]
] |
1306.5606 | Barry Hurley | Barry Hurley, Lars Kotthoff, Yuri Malitsky, Barry O'Sullivan | Proteus: A Hierarchical Portfolio of Solvers and Transformations | 11th International Conference on Integration of AI and OR Techniques
in Constraint Programming for Combinatorial Optimization Problems. The final
publication is available at link.springer.com | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, portfolio approaches to solving SAT problems and CSPs have
become increasingly common. There are also a number of different encodings for
representing CSPs as SAT instances. In this paper, we leverage advances in both
SAT and CSP solving to present a novel hierarchical portfolio-based approach to
CSP solving, which we call Proteus, that does not rely purely on CSP solvers.
Instead, it may decide that it is best to encode a CSP problem instance into
SAT, selecting an appropriate encoding and a corresponding SAT solver. Our
experimental evaluation used an instance of Proteus that involved four CSP
solvers, three SAT encodings, and six SAT solvers, evaluated on the most
challenging problem instances from the CSP solver competitions, involving
global and intensional constraints. We show that significant performance
improvements can be achieved by Proteus through exploiting alternative
viewpoints and solvers for combinatorial problem-solving.
| [
{
"version": "v1",
"created": "Mon, 24 Jun 2013 13:11:54 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Feb 2014 12:26:45 GMT"
}
] | 1,392,681,600,000 | [
[
"Hurley",
"Barry",
""
],
[
"Kotthoff",
"Lars",
""
],
[
"Malitsky",
"Yuri",
""
],
[
"O'Sullivan",
"Barry",
""
]
] |
1306.5960 | Shofwatul Uyun Mrs | Sri Hartati, Shofwatul 'Uyun | Computation of Diet Composition for Patients Suffering from Kidney and
Urinary Tract Diseases with the Fuzzy Genetic System | 8 pages | International Journal of Computer Applications (0975-8887)-Volume
36, No.6, December 2011 | 10.5120/4499-6350 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The determination of the dietary food consumed in a day by patients with
diseases in general greatly affects the health of the body and the healing
process; people with kidney and urinary tract diseases are no exception. This
paper presents the determination of diet composition in the form of food
substances for people with kidney and urinary tract diseases with a genetic
fuzzy approach. This
approach combines fuzzy logic and genetic algorithms, utilizing fuzzy logic
tools and techniques to model the components of the genetic algorithm and to
adapt the genetic algorithm control parameters, with the aim of improving
system performance. The Mamdani fuzzy inference model and fuzzy rules based on
the population and generation parameters are used to determine the
probabilities of crossover and mutation. In this study, 400 food survey
records along with their substances were used as test material. From the
data, populations of varying sizes are established. Each chromosome has 10
genes in which the value of each gene indicates the index number of foodstuffs
in the database. The fuzzy genetic approach produces the 10 best food substances and
their compositions. The composition of these foods has nutritional value in
accordance with the number of calories needed by people with kidney and urinary
tract diseases by type of food.
| [
{
"version": "v1",
"created": "Tue, 25 Jun 2013 13:43:27 GMT"
}
] | 1,372,204,800,000 | [
[
"Hartati",
"Sri",
""
],
[
"'Uyun",
"Shofwatul",
""
]
] |
1306.6375 | Vena Pearl Bongolan Dr. | Vena Pearl Bongolan, Florencio C. Ballesteros, Jr., Joyce Anne M.
Banting, Aina Marie Q. Olaes, Charlymagne R. Aquino | Metaheuristics in Flood Disaster Management and Risk Assessment | UP ICE Centennial Conference Harmonizing Infrastructure with the
Environment November 12, 2010 in Manila, Philippines 8th National conference
on Information Technology Education (NCITE 2010) October 20-23, 2010 in
Boracay, Philippines | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A conceptual area is divided into units or barangays, each of which was
allowed to evolve under a physical constraint. A risk assessment method was
then used to identify the flood risk in each community using the following
risk factors: the area's urbanized-area ratio, literacy rate, mortality rate,
poverty incidence, radio/TV penetration, and state of structural and
non-structural measures. Vulnerability is defined as a weighted sum of these
components. A penalty was imposed for reduced vulnerability. Optimization was
compared using MATLAB's genetic algorithm and simulated annealing; the
results showed 'extreme' solutions for simulated annealing and realistic
designs for the genetic algorithm.
| [
{
"version": "v1",
"created": "Wed, 26 Jun 2013 22:59:01 GMT"
}
] | 1,372,377,600,000 | [
[
"Bongolan",
"Vena Pearl",
""
],
[
"Ballesteros,",
"Florencio C.",
"Jr."
],
[
"Banting",
"Joyce Anne M.",
""
],
[
"Olaes",
"Aina Marie Q.",
""
],
[
"Aquino",
"Charlymagne R.",
""
]
] |
1306.6489 | Shofwatul Uyun Mrs | Shofwatul 'Uyun, Imam Riadi | A Fuzzy Topsis Multiple-Attribute Decision Making for Scholarship
Selection | 10 pages, 5 figures, arXiv admin note: substantial text overlap with
arXiv:1306.5960 | TELKOMNIKA Journal Vol.9 No.1 April 2011 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | As education fees become more expensive, more students apply for
scholarships. Consequently, hundreds and even thousands of applications need
to be handled by the sponsor. To solve this problem, the best alternatives
must be selected on the basis of several attributes (criteria). In order to
make a decision on such fuzzy problems, Fuzzy Multiple Attribute Decision
Making (FMADM) can be applied. In this study, Unified Modeling Language (UML)
in FMADM with the TOPSIS and Weighted Product (WP) methods is applied to
select the candidates for academic and non-academic scholarships at
Universitas Islam Negeri Sunan Kalijaga. The data used were both crisp and
fuzzy. The results show that the TOPSIS and Weighted Product FMADM methods
can be used to select the most suitable candidates to receive the
scholarships, since the preference values applied in these methods identify
the applicants with the highest eligibility.
| [
{
"version": "v1",
"created": "Thu, 27 Jun 2013 13:11:41 GMT"
}
] | 1,372,377,600,000 | [
[
"'Uyun",
"Shofwatul",
""
],
[
"Riadi",
"Imam",
""
]
] |
1306.6852 | Matteo Brunelli | Matteo Brunelli and Michele Fedrizzi | Axiomatic properties of inconsistency indices for pairwise comparisons | 25 pages, 3 figures | Journal of the Operational Research Society, 66(1), 1-15, (2015) | 10.1057/jors.2013.135 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pairwise comparisons are a well-known method for the representation of the
subjective preferences of a decision maker. Evaluating their inconsistency has
been a widely studied and discussed topic and several indices have been
proposed in the literature to perform this task. Since an acceptable level of
consistency is closely related to the reliability of preferences, a suitable
choice of an inconsistency index is a crucial phase in decision making
processes. The use of different methods for measuring consistency must be
carefully evaluated, as it can affect the decision outcome in practical
applications. In this paper, we present five axioms aimed at characterizing
inconsistency indices. In addition, we prove that some of the indices proposed
in the literature satisfy these axioms, while others do not, and therefore, in
our view, they may fail to correctly evaluate inconsistency.
| [
{
"version": "v1",
"created": "Fri, 28 Jun 2013 14:27:03 GMT"
}
] | 1,419,465,600,000 | [
[
"Brunelli",
"Matteo",
""
],
[
"Fedrizzi",
"Michele",
""
]
] |
1307.0339 | Cheng-Yuan Liou | Cheng-Yuan Liou, Bo-Shiang Huang, Daw-Ran Liou and Alex A. Simak | Syntactic sensitive complexity for symbol-free sequence | 11 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work uses the L-system to construct a tree structure for the text
sequence and derives its complexity. It serves as a measure of structural
complexity of the text. It is applied to anomaly detection in data
transmission.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2013 12:00:59 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Jul 2013 02:08:48 GMT"
}
] | 1,372,809,600,000 | [
[
"Liou",
"Cheng-Yuan",
""
],
[
"Huang",
"Bo-Shiang",
""
],
[
"Liou",
"Daw-Ran",
""
],
[
"Simak",
"Alex A.",
""
]
] |
1307.0845 | J. G. Wolff | J Gerard Wolff | The SP theory of intelligence: benefits and applications | arXiv admin note: substantial text overlap with arXiv:1212.0229 | J G Wolff, Information, 5 (1), 1-27, 2014 | 10.3390/info5010001 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article describes existing and expected benefits of the "SP theory of
intelligence", and some potential applications. The theory aims to simplify and
integrate ideas across artificial intelligence, mainstream computing, and human
perception and cognition, with information compression as a unifying theme. It
combines conceptual simplicity with descriptive and explanatory power across
several areas of computing and cognition. In the "SP machine" -- an expression
of the SP theory which is currently realized in the form of a computer model --
there is potential for an overall simplification of computing systems,
including software. The SP theory promises deeper insights and better solutions
in several areas of application including, most notably, unsupervised learning,
natural language processing, autonomous robots, computer vision, intelligent
databases, software engineering, information compression, medical diagnosis and
big data. There is also potential in areas such as the semantic web,
bioinformatics, structuring of documents, the detection of computer viruses,
data fusion, new kinds of computer, and the development of scientific theories.
The theory promises seamless integration of structures and functions within and
between different areas of application. The potential value, worldwide, of
these benefits and applications is at least $190 billion each year. Further
development would be facilitated by the creation of a high-parallel,
open-source version of the SP machine, available to researchers everywhere.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2013 13:31:47 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Dec 2013 09:58:18 GMT"
}
] | 1,388,361,600,000 | [
[
"Wolff",
"J Gerard",
""
]
] |
1307.1388 | Hong Qiao | Qiao Hong, Li Yinlin, Tang Tang, Wang Peng | Introducing Memory and Association Mechanism into a Biologically
Inspired Visual Model | 9 pages, 10 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A famous biologically inspired hierarchical model, first proposed by
Riesenhuber and Poggio, has been successfully applied to multiple visual
recognition tasks. The model is able to achieve position- and scale-tolerant
recognition, which is a central problem in pattern recognition. In this
paper, based on other biological experimental results, we introduce memory
and association mechanisms into the above biologically inspired model. The
main motivations of the work are (a) to mimic the active memory and
association mechanism, adding a 'top-down' adjustment to the above
biologically inspired hierarchical model, and (b) to build an algorithm that
saves space while maintaining good recognition performance. The new model is
also applied to object recognition processes. The preliminary experimental
results show that our method is efficient with a much smaller memory
requirement.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2013 16:08:56 GMT"
}
] | 1,372,982,400,000 | [
[
"Hong",
"Qiao",
""
],
[
"Yinlin",
"Li",
""
],
[
"Tang",
"Tang",
""
],
[
"Peng",
"Wang",
""
]
] |
1307.1482 | Lavindra de Silva | Lavindra de Silva and Amit Kumar Pandey and Mamoun Gharbi and Rachid
Alami | Towards Combining HTN Planning and Geometric Task Planning | RSS Workshop on Combined Robot Motion Planning and AI Planning for
Practical Applications, June 2013 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present an interface between a symbolic planner and a
geometric task planner, which differs from a standard trajectory planner in
that it is able to perform geometric reasoning on abstract entities---tasks.
We believe that this approach facilitates a more principled
interface to symbolic planning, while also leaving more room for the geometric
planner to make independent decisions. We show how the two planners could be
interfaced, and how their planning and backtracking could be interleaved. We
also provide insights for a methodology for using the combined system, and
experimental results to use as a benchmark with future extensions to both the
combined system, as well as to the geometric task planner.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2013 20:28:40 GMT"
}
] | 1,373,241,600,000 | [
[
"de Silva",
"Lavindra",
""
],
[
"Pandey",
"Amit Kumar",
""
],
[
"Gharbi",
"Mamoun",
""
],
[
"Alami",
"Rachid",
""
]
] |
1307.1568 | Chau Do | Chau Do and Eric J. Pauwels | Using MathML to Represent Units of Measurement for Improved Ontology
Alignment | Conferences on Intelligent Computer Mathematics (CICM 2013), Bath,
England | CICM 2013, LNAI (7961), Springer, 2013 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontologies provide a formal description of concepts and their relationships
in a knowledge domain. The goal of ontology alignment is to identify
semantically matching concepts and relationships across independently developed
ontologies that purport to describe the same knowledge. In order to handle the
widest possible class of ontologies, many alignment algorithms rely on
terminological and structural methods, but the often fuzzy nature of concepts
complicates the matching process. However, one area that should provide clear
matching solutions due to its mathematical nature is units of measurement.
Several ontologies for units of measurement are available, but there has been
no attempt to align them, notwithstanding the obvious importance for
technical interoperability. We propose a general strategy to map these (and
similar) ontologies by introducing MathML to accurately capture the semantic
description of concepts specified therein. We provide mapping results for three
ontologies, and show that our approach improves on lexical comparisons.
| [
{
"version": "v1",
"created": "Fri, 5 Jul 2013 10:05:34 GMT"
}
] | 1,373,241,600,000 | [
[
"Do",
"Chau",
""
],
[
"Pauwels",
"Eric J.",
""
]
] |
1307.1790 | Evgenij Thorstensen | Evgenij Thorstensen | Lifting Structural Tractability to CSP with Global Constraints | To appear in proceedings of CP'13, LNCS 8124 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A wide range of problems can be modelled as constraint satisfaction problems
(CSPs), that is, a set of constraints that must be satisfied simultaneously.
Constraints can either be represented extensionally, by explicitly listing
allowed combinations of values, or implicitly, by special-purpose algorithms
provided by a solver. Such implicitly represented constraints, known as global
constraints, are widely used; indeed, they are one of the key reasons for the
success of constraint programming in solving real-world problems.
In recent years, a variety of restrictions on the structure of CSP instances
that yield tractable classes have been identified. However, many such
restrictions fail to guarantee tractability for CSPs with global constraints.
In this paper, we investigate the properties of extensionally represented
constraints that these restrictions exploit to achieve tractability, and show
that there are large classes of global constraints that also possess these
properties. This allows us to lift these restrictions to the global case, and
identify new tractable classes of CSPs with global constraints.
| [
{
"version": "v1",
"created": "Sat, 6 Jul 2013 14:54:18 GMT"
}
] | 1,373,328,000,000 | [
[
"Thorstensen",
"Evgenij",
""
]
] |
1307.1890 | Arindam Chaudhuri AC | Arindam Chaudhuri | Solution of Rectangular Fuzzy Games by Principle of Dominance Using
LR-type Trapezoidal Fuzzy Numbers | Proceedings of 2nd International Conference on Advanced Computing &
Communication Technologies, Asia Pacific Institute of Information Technology,
Panipat, Haryana, India, 2007 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fuzzy set theory has been applied in many fields, such as operations
research, control theory, and management sciences. In particular, the
application of this theory to managerial decision-making problems has
remarkable significance. In this paper, we consider the solution of
rectangular fuzzy games with pay-offs given as imprecise numbers instead of
crisp numbers, viz. interval and LR-type trapezoidal fuzzy numbers. The
solution of such fuzzy games with pure strategies by the minimax-maximin
principle is discussed. The algebraic method for solving fuzzy games without
a saddle point by using mixed strategies is also illustrated. Here, the
pay-off matrix is reduced to a smaller pay-off matrix by the dominance
method. This is illustrated by means of a numerical example.
| [
{
"version": "v1",
"created": "Sun, 7 Jul 2013 18:07:03 GMT"
}
] | 1,373,328,000,000 | [
[
"Chaudhuri",
"Arindam",
""
]
] |
1307.1891 | Arindam Chaudhuri AC | Arindam Chaudhuri, Kajal De | A Comparative study of Transportation Problem under Probabilistic and
Fuzzy Uncertainties | GANIT, Journal of Bangladesh Mathematical Society, Bangladesh
Mathematical Society, Dhaka, Bangladesh, 2010 (In Press) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The transportation problem is an important problem that has been widely
studied in the operations research domain. It has been studied to simulate
different real-life problems. In particular, the application of this problem
to NP-hard problems has remarkable significance. In this paper, we present a
comparative study of the transportation problem under probabilistic and fuzzy
uncertainties. Fuzzy logic is a computational paradigm that generalizes
classical two-valued logic for reasoning under uncertainty. In order to
achieve this, the notion of membership in a set needs to become a matter of
degree. By doing this we accomplish two things, viz. (i) ease of describing
human knowledge involving vague concepts and (ii) an enhanced ability to
develop cost-effective solutions to real-world problems. The multi-valued
nature of fuzzy sets allows the handling of uncertain and vague information.
It is a model-less approach and a clever disguise of probability theory. We
give comparative simulation results of both approaches and discuss the
computational complexity. To the best of our knowledge, this is the first
work on a comparative study of the transportation problem using probabilistic
and fuzzy uncertainties.
| [
{
"version": "v1",
"created": "Sun, 7 Jul 2013 18:18:25 GMT"
}
] | 1,373,328,000,000 | [
[
"Chaudhuri",
"Arindam",
""
],
[
"De",
"Kajal",
""
]
] |
1307.1893 | Arindam Chaudhuri AC | Arindam Chaudhuri, Kajal De, Dipak Chatterjee, Pabitra Mitra | Trapezoidal Fuzzy Numbers for the Transportation Problem | International Journal of Intelligent Computing and Applications,
Volume 1, Number 2, 2009 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The transportation problem is an important problem that has been widely
studied in the operations research domain. It has often been used to simulate
different real-life problems. In particular, the application of this problem
to NP-hard problems has remarkable significance. In this paper, we present
the closed, bounded and non-empty feasible region of the transportation
problem using fuzzy trapezoidal numbers, which ensures the existence of an
optimal solution to the balanced transportation problem. The multi-valued
nature of fuzzy sets allows the handling of the uncertainty and vagueness
involved in the cost values of each cell in the transportation table. To find
the initial solution of the transportation problem we use the fuzzy Vogel
approximation method, and to determine the optimality of the obtained
solution the fuzzy modified distribution method is used. The fuzzification of
the cost of the transportation problem is discussed with the help of a
numerical example. Finally, we discuss the computational complexity involved
in the problem. To the best of our knowledge, this is the first work on
obtaining the solution of the transportation problem using fuzzy trapezoidal
numbers.
| [
{
"version": "v1",
"created": "Sun, 7 Jul 2013 18:32:23 GMT"
}
] | 1,373,328,000,000 | [
[
"Chaudhuri",
"Arindam",
""
],
[
"De",
"Kajal",
""
],
[
"Chatterjee",
"Dipak",
""
],
[
"Mitra",
"Pabitra",
""
]
] |
1307.1895 | Arindam Chaudhuri AC | Arindam Chaudhuri, Kajal De, Dipak Chatterjee | Discovering Stock Price Prediction Rules of Bombay Stock Exchange Using
Rough Fuzzy Multilayer Perceptron Networks | Book Chapter: Forecasting Financial Markets in India, Rudra P.
Pradhan, Indian Institute of Technology Kharagpur, (Editor), Allied
Publishers, India, 2009 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Financial markets have existed in India for many years. A functionally
accented, diverse, efficient and flexible financial system is vital to the
national objective of creating a market-driven, productive and competitive
economy. Today markets of varying maturity exist in equity, debt, commodities
and foreign exchange. In this work we attempt to generate a prediction rule
scheme for stock price movement at the Bombay Stock Exchange using an
important soft computing paradigm, viz. the Rough Fuzzy Multilayer
Perceptron. The use of
Computational Intelligence Systems such as Neural Networks, Fuzzy Sets, Genetic
Algorithms, etc. for Stock Market Predictions has been widely established. The
process is to extract knowledge in the form of rules from daily stock
movements. These rules can then be used to guide investors. To increase the
efficiency of the prediction process, Rough Sets is used to discretize the
data. The methodology uses a Genetic Algorithm to obtain a structured network
suitable for both classification and rule extraction. The modular concept,
based on a divide-and-conquer strategy, provides accelerated training and a
compact network suitable for generating a minimum number of rules with high
certainty values. The concept of a variable mutation operator is introduced for
preserving the localized structure of the constituting Knowledge Based
sub-networks, while they are integrated and evolved. Rough Set Dependency Rules
are generated directly from the real-valued attribute table containing fuzzy
membership values. The paradigm is thus used to develop a rule extraction
algorithm. The extracted rules are compared with some of the related rule
extraction techniques on the basis of some quantitative performance indices.
The proposed methodology extracts rules that are fewer in number, are
accurate, have a high certainty factor and have low confusion, with less
computation time.
| [
{
"version": "v1",
"created": "Sun, 7 Jul 2013 18:47:19 GMT"
}
] | 1,373,328,000,000 | [
[
"Chaudhuri",
"Arindam",
""
],
[
"De",
"Kajal",
""
],
[
"Chatterjee",
"Dipak",
""
]
] |