id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1404.2267 | Peter van der Helm | Peter A. van der Helm | Transparallel mind: Classical computing with quantum power | 38 pages (incl. Appendix with proofs), 10 figures, Supplementary
Material (incl. algorithm) available at
http://perswww.kuleuven.be/~u0084530/doc/pisa.html. Minor revision: added 2
figures, 7 references, and a few clarifications | null | 10.1007/s10462-015-9429-7 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by the extraordinary computing power promised by quantum computers,
the quantum mind hypothesis postulated that quantum mechanical phenomena are
the source of neuronal synchronization, which, in turn, might underlie
consciousness. Here, I present an alternative inspired by a classical computing
method with quantum power. This method relies on special distributed
representations called hyperstrings. Hyperstrings are superpositions of up to
an exponential number of strings, which -- by a single-processor classical
computer -- can be evaluated in a transparallel fashion, that is,
simultaneously as if only one string were concerned. Building on a neurally
plausible model of human visual perceptual organization, in which hyperstrings
are formal counterparts of transient neural assemblies, I postulate that
synchronization in such assemblies is a manifestation of transparallel
information processing. This accounts for the high combinatorial capacity and
speed of human visual perceptual organization and strengthens ideas that
self-organizing cognitive architecture bridges the gap between neurons and
consciousness.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2014 12:36:13 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Feb 2015 15:54:44 GMT"
}
] | 1,429,228,800,000 | [
[
"van der Helm",
"Peter A.",
""
]
] |
1404.2768 | Einollah Pira | Einollah pira, Mohammad Reza Zand Miralvand and Fakhteh Soltani | Verification of confliction and unreachability in rule-based expert
systems with model checking | 7 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is important to find optimal solutions for structural errors in rule-based
expert systems. Solutions for discovering such errors using model checking
techniques have already been proposed, but these solutions have problems such
as state space explosion. In this paper, to overcome these problems, we model
the rule-based systems as finite state transition systems and express
confliction and unreachability as Computation Tree Logic (CTL) formulas
and then use the technique of model checking to detect confliction and
unreachability in rule-based systems with the model checker UPPAAL.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2014 10:55:14 GMT"
}
] | 1,397,174,400,000 | [
[
"pira",
"Einollah",
""
],
[
"Miralvand",
"Mohammad Reza Zand",
""
],
[
"Soltani",
"Fakhteh",
""
]
] |
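The unreachability check described in the abstract above can be illustrated with a toy sketch: verifying that a rule is unreachable amounts to asking whether the CTL reachability property (EF "rule fired") fails on the finite state transition system. This is an illustrative stand-in, not the paper's UPPAAL encoding; the transition dictionary and state names are hypothetical.

```python
# Sketch: unreachability on a finite state transition system.
# "EF target" fails iff no target state is reachable from the initial state.
def unreachable(transitions, init, targets):
    """transitions: {state: [successor states]}.
    Returns True if no state in `targets` is reachable from `init`."""
    seen, stack = {init}, [init]
    while stack:
        s = stack.pop()
        if s in targets:
            return False  # EF target holds: the rule is reachable
        for t in transitions.get(s, []):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return True  # EF target fails: the rule is unreachable
```

A dedicated model checker such as UPPAAL handles much richer models (networks of timed automata), but the core of an unreachability query reduces to this kind of graph search.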
1404.3285 | Mahdi Moeini | Mahdi Moeini, Zied Jemai, Evren Sahin | An Integer Programming Model for the Dynamic Location and Relocation of
Emergency Vehicles: A Case Study | Proceedings of the 12th International Symposium on Operational
Research (SOR'2013), Slovenia, September 2013, pp. 343-350, (2013) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address dynamic Emergency Medical Service (EMS)
systems. A dynamic location model is presented that tries to locate and
relocate the ambulances. The proposed model controls the movements and
locations of ambulances in order to provide a better coverage of the demand
points under different fluctuation patterns that may happen during a given
period of time. Some numerical experiments have been carried out by using some
real-world data sets that have been collected through the French EMS system.
| [
{
"version": "v1",
"created": "Sat, 12 Apr 2014 12:27:06 GMT"
}
] | 1,397,520,000,000 | [
[
"Moeini",
"Mahdi",
""
],
[
"Jemai",
"Zied",
""
],
[
"Sahin",
"Evren",
""
]
] |
1404.3301 | William Yang Wang | William Yang Wang, Kathryn Mazaitis, Ni Lao, Tom Mitchell, William W.
Cohen | Efficient Inference and Learning in a Large Knowledge Base: Reasoning
with Extracted Information using a Locally Groundable First-Order
Probabilistic Logic | arXiv admin note: substantial text overlap with arXiv:1305.2254 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One important challenge for probabilistic logics is reasoning with very large
knowledge bases (KBs) of imperfect information, such as those produced by
modern web-scale information extraction systems. One scalability problem shared
by many probabilistic logics is that answering queries involves "grounding" the
query---i.e., mapping it to a propositional representation---and the size of a
"grounding" grows with database size. To address this bottleneck, we present a
first-order probabilistic language called ProPPR in which approximate
"local groundings" can be constructed in time independent of database size.
Technically, ProPPR is an extension to stochastic logic programs (SLPs) that is
biased towards short derivations; it is also closely related to an earlier
relational learning algorithm called the path ranking algorithm (PRA). We show
that the problem of constructing proofs for this logic is related to
computation of personalized PageRank (PPR) on a linearized version of the proof
space, and using this connection, we develop a provably-correct approximate
grounding scheme, based on the PageRank-Nibble algorithm. Building on this, we
develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In
experiments, we show that learning for ProPPR is orders of magnitude faster than
learning for Markov logic networks; that allowing mutual recursion (joint
learning) in KB inference leads to improvements in performance; and that ProPPR
can learn weights for a mutually recursive program with hundreds of clauses,
which define scores of interrelated predicates, over a KB containing one
million entities.
| [
{
"version": "v1",
"created": "Sat, 12 Apr 2014 16:59:30 GMT"
}
] | 1,397,520,000,000 | [
[
"Wang",
"William Yang",
""
],
[
"Mazaitis",
"Kathryn",
""
],
[
"Lao",
"Ni",
""
],
[
"Mitchell",
"Tom",
""
],
[
"Cohen",
"William W.",
""
]
] |
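The grounding-by-personalized-PageRank idea in the ProPPR abstract above can be sketched with plain power iteration on a toy graph; PageRank-Nibble is a local, approximate variant of this computation. The graph below is hypothetical, not a ProPPR proof graph.

```python
# Sketch: personalized PageRank via power iteration.
# p_{t+1} = alpha * e_seed + (1 - alpha) * (random-walk step of p_t)
def personalized_pagerank(adj, seed, alpha=0.15, iters=200):
    """adj: {node: [out-neighbors]}; seed: restart node.
    Returns {node: stationary probability}; the masses sum to 1."""
    nodes = list(adj)
    p = {v: (1.0 if v == seed else 0.0) for v in nodes}
    for _ in range(iters):
        nxt = {v: 0.0 for v in nodes}
        for v in nodes:
            mass = (1 - alpha) * p[v]
            outs = adj[v] or [seed]  # dangling nodes return mass to the seed
            for w in outs:
                nxt[w] += mass / len(outs)
        nxt[seed] += alpha           # restart mass (total stays 1)
        p = nxt
    return p
```

In ProPPR's setting the nodes would be partial proofs and high-PPR nodes form the "local grounding"; here the method is shown only in its generic graph form.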
1404.3370 | Xinyang Deng | Meizhu Li, Qi Zhang, Xinyang Deng, Yong Deng | Distance function of D numbers | 29 pages, 7 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dempster-Shafer theory is widely applied in uncertainty modelling and
knowledge reasoning due to its ability to express uncertain information. A
distance between two basic probability assignments (BPAs) provides a measure of
performance for identification algorithms based on the evidential theory of
Dempster-Shafer. However, some conditions lead to limitations in practical
application for Dempster-Shafer theory, such as exclusiveness hypothesis and
completeness constraint. To overcome these shortcomings, a novel theory called
D numbers theory is proposed. A distance function of D numbers is proposed to
measure the distance between two D numbers. The distance function of D numbers
is a generalization of the distance between two BPAs, which inherits the advantage
of Dempster-Shafer theory and strengthens the capability of uncertainty
modeling. An illustrative case is provided to demonstrate the effectiveness of
the proposed function.
| [
{
"version": "v1",
"created": "Sun, 13 Apr 2014 11:56:08 GMT"
}
] | 1,397,520,000,000 | [
[
"Li",
"Meizhu",
""
],
[
"Zhang",
"Qi",
""
],
[
"Deng",
"Xinyang",
""
],
[
"Deng",
"Yong",
""
]
] |
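For reference, the kind of BPA distance that the D-numbers distance above generalizes is commonly Jousselme's evidential distance. The sketch below implements that standard BPA distance, not the paper's D-number function; the focal elements are hypothetical.

```python
import math

def jousselme_distance(m1, m2):
    """Jousselme distance between two BPAs given as {frozenset: mass}:
    d = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)),
    where D[A][B] = |A ∩ B| / |A ∪ B| (Jaccard similarity of focal sets)."""
    focal = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    diff = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in focal]
    total = 0.0
    for i, A in enumerate(focal):
        for j, B in enumerate(focal):
            jac = len(A & B) / len(A | B) if (A | B) else 0.0
            total += diff[i] * jac * diff[j]
    return math.sqrt(0.5 * total)
```

Identical BPAs are at distance 0, and two BPAs committing all mass to disjoint singletons are at the maximal distance 1.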
1404.3659 | Amir Konigsberg | Amir Konigsberg | Avoiding Undesired Choices Using Intelligent Adaptive Systems | null | International Journal of Artificial Intelligence & Applications
(IJAIA), Vol. 5, No. 2, March 2014 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a number of heuristics that can be used for identifying when
intransitive choice behaviour is likely to occur in choice situations. We also
suggest two methods for avoiding undesired choice behaviour, namely transparent
communication and adaptive choice-set generation. We believe that these two
ways can contribute to the avoidance of decision biases in choice situations
that may often be regretted.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2014 07:33:04 GMT"
}
] | 1,397,520,000,000 | [
[
"Konigsberg",
"Amir",
""
]
] |
1404.4089 | Guy Van den Broeck | Guy Van den Broeck, Adnan Darwiche | On the Role of Canonicity in Bottom-up Knowledge Compilation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of bottom-up compilation of knowledge bases, which is
usually predicated on the existence of a polytime function for combining
compilations using Boolean operators (usually called an Apply function). While
such a polytime Apply function is known to exist for certain languages (e.g.,
OBDDs) and not exist for others (e.g., DNNF), its existence for certain
languages remains unknown. Among the latter is the recently introduced language
of Sentential Decision Diagrams (SDDs), for which a polytime Apply function
exists for unreduced SDDs, but remains unknown for reduced ones (i.e. canonical
SDDs). We resolve this open question in this paper and consider some of its
theoretical and practical implications. Some of the findings we report question
the common wisdom on the relationship between bottom-up compilation, language
canonicity and the complexity of the Apply function.
| [
{
"version": "v1",
"created": "Tue, 15 Apr 2014 21:43:41 GMT"
}
] | 1,397,692,800,000 | [
[
"Broeck",
"Guy Van den",
""
],
[
"Darwiche",
"Adnan",
""
]
] |
1404.4258 | Gavin Taylor | Gavin Taylor and Connor Geer and David Piekut | An Analysis of State-Relevance Weights and Sampling Distributions on
L1-Regularized Approximate Linear Programming Approximation Accuracy | Identical to the ICML 2014 paper of the same name, but with full
proofs. Please cite the ICML paper | null | null | null | cs.AI | http://creativecommons.org/licenses/publicdomain/ | Recent interest in the use of $L_1$ regularization in value
function approximation includes Petrik et al.'s introduction of
$L_1$-Regularized Approximate Linear Programming (RALP). RALP is unique among
$L_1$-regularized approaches in that it approximates the optimal value function
using off-policy samples. Additionally, it produces policies which outperform
those of previous methods, such as LSPI. RALP's value function approximation
quality is affected heavily by the choice of state-relevance weights in the
objective function of the linear program, and by the distribution from which
samples are drawn; however, there has been no discussion of these
considerations in the previous literature. In this paper, we discuss and
explain the effects of choices in the state-relevance weights and sampling
distribution on approximation quality, using both theoretical and experimental
illustrations. The results provide insight not only into these effects, but
also provide intuition into the types of MDPs which are especially well suited
for approximation with RALP.
| [
{
"version": "v1",
"created": "Wed, 16 Apr 2014 14:15:43 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Apr 2014 15:50:46 GMT"
}
] | 1,398,384,000,000 | [
[
"Taylor",
"Gavin",
""
],
[
"Geer",
"Connor",
""
],
[
"Piekut",
"David",
""
]
] |
1404.4785 | Olegs Verhodubs | Olegs Verhodubs | Ontology as a Source for Rule Generation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discloses the potential of OWL (Web Ontology Language) ontologies
for generation of rules. The main purpose of this paper is to identify new
types of rules, which may be generated from OWL ontologies. Rules, generated
from OWL ontologies, are necessary for the functioning of the Semantic Web
Expert System. It is expected that the Semantic Web Expert System (SWES) will
be able to process ontologies from the Web in order to supplement or
even develop its knowledge base.
| [
{
"version": "v1",
"created": "Fri, 18 Apr 2014 13:36:17 GMT"
}
] | 1,398,038,400,000 | [
[
"Verhodubs",
"Olegs",
""
]
] |
1404.4789 | Xinyang Deng | Hongming Mo, Yong Deng | A new combination approach based on improved evidence distance | 14 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dempster-Shafer evidence theory is a powerful tool in information fusion.
When pieces of evidence are highly conflicting, counter-intuitive results can be
produced. To address this open issue, a new method based on the Jousselme
evidence distance and the Hausdorff distance is proposed. A weight is computed
for each piece of evidence and used to preprocess the original evidence into
new evidence, which is then combined with Dempster's combination rule. Compared
with existing methods, the proposed method is efficient.
| [
{
"version": "v1",
"created": "Fri, 18 Apr 2014 13:55:36 GMT"
}
] | 1,398,038,400,000 | [
[
"Mo",
"Hongming",
""
],
[
"Deng",
"Yong",
""
]
] |
1404.4801 | Xinyang Deng | Yong Deng | Generalized Evidence Theory | 39 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conflict management is still an open issue in the application of
Dempster-Shafer evidence theory. Many works have been presented to address this
issue. In this paper, a new theory, called generalized evidence theory
(GET), is proposed. Compared with existing methods, GET assumes that the
general situation is an open world, due to uncertainty and incomplete
knowledge. Conflicting evidence is handled under the framework of GET. It
is shown that the new theory can explain and deal with the conflicting evidence
in a more reasonable way.
| [
{
"version": "v1",
"created": "Thu, 17 Apr 2014 08:08:56 GMT"
}
] | 1,398,038,400,000 | [
[
"Deng",
"Yong",
""
]
] |
1404.4983 | Nisheeth Joshi | Iti Mathur, Nisheeth Joshi, Hemant Darbari and Ajai Kumar | Shiva++: An Enhanced Graph based Ontology Matcher | arXiv admin note: text overlap with arXiv:1403.7465 | International Journal of Computer Applications 92(16):30-34, April
2014 | 10.5120/16095-5393 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the web getting bigger and assimilating knowledge about different
concepts and domains, it is becoming very difficult for simple database driven
applications to capture the data for a domain. Thus developers have turned to
ontology-based systems, which can store large amounts of information, apply
reasoning, and produce timely information, thereby facilitating effective
knowledge management. Though this approach has made our lives easier, it has at
the same time given rise to another problem: two different ontologies
assimilating same knowledge tend to use different terms for the same concepts.
This creates confusion among knowledge engineers and workers, as they do not
know which term is better than the other. Thus we need to merge ontologies
working on same domain so that the engineers can develop a better application
over it. This paper shows the development of one such matcher which merges the
concepts available in two ontologies at two levels: 1) at the string level and
2) at the semantic level, thus producing better merged ontologies. We have used a
graph matching technique which works at the core of the system. We have also
evaluated the system and compared its performance with that of its predecessor,
which works only on string matching; the current approach produces better results.
| [
{
"version": "v1",
"created": "Sat, 19 Apr 2014 19:12:52 GMT"
}
] | 1,398,124,800,000 | [
[
"Mathur",
"Iti",
""
],
[
"Joshi",
"Nisheeth",
""
],
[
"Darbari",
"Hemant",
""
],
[
"Kumar",
"Ajai",
""
]
] |
1404.5078 | Ethan Petuchowski | Ethan Petuchowski, Matthew Lease | TurKPF: TurKontrol as a Particle Filter | 8 pages, 6 figures, formula appendix | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | TurKontrol, an algorithm presented in (Dai et al. 2010), uses a POMDP to
model and control an iterative workflow for crowdsourced work. Here, TurKontrol
is re-implemented as "TurKPF," which uses a Particle Filter to reduce
computation time & memory usage. Most importantly, in our experimental
environment with default parameter settings, the action is chosen nearly
instantaneously. Through a series of experiments we see that TurKPF and
TurKontrol perform similarly.
| [
{
"version": "v1",
"created": "Sun, 20 Apr 2014 22:47:32 GMT"
}
] | 1,398,124,800,000 | [
[
"Petuchowski",
"Ethan",
""
],
[
"Lease",
"Matthew",
""
]
] |
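The particle filter that TurKPF builds on can be sketched generically. TurKPF's actual state is a POMDP belief about the crowdsourced workflow; the code below is only a standard 1D bootstrap filter step (propagate, reweight, resample) with a hypothetical random-walk motion model and Gaussian observation model.

```python
import math
import random

def particle_filter_step(particles, weights, observation,
                         obs_std, motion_std, rng):
    """One bootstrap particle filter update for a scalar state."""
    # 1. Propagate each particle through a random-walk motion model.
    particles = [x + rng.gauss(0.0, motion_std) for x in particles]
    # 2. Reweight by the Gaussian observation likelihood.
    weights = [w * math.exp(-0.5 * ((observation - x) / obs_std) ** 2)
               for x, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resample particles proportionally to weight; reset to uniform weights.
    particles = rng.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

Approximating the belief with a finite particle set is what lets TurKPF trade exact POMDP belief updates for near-instant action selection.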
1404.5454 | Adish Singla | Adish Singla, Eric Horvitz, Ece Kamar, Ryen White | Stochastic Privacy | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online services such as web search and e-commerce applications typically rely
on the collection of data about users, including details of their activities on
the web. Such personal data is used to enhance the quality of service via
personalization of content and to maximize revenues via better targeting of
advertisements and deeper engagement of users on sites. To date, service
providers have largely followed the approach of either requiring or requesting
consent for opting-in to share their data. Users may be willing to share
private information in return for better quality of service or for incentives,
or in return for assurances about the nature and extent of the logging of data.
We introduce \emph{stochastic privacy}, a new approach to privacy centering on
a simple concept: A guarantee is provided to users about the upper-bound on the
probability that their personal data will be used. Such a probability, which we
refer to as \emph{privacy risk}, can be assessed by users as a preference or
communicated as a policy by a service provider. Service providers can work to
personalize and to optimize revenues in accordance with preferences about
privacy risk. We present procedures, proofs, and an overall system for
maximizing the quality of services, while respecting bounds on allowable or
communicated privacy risk. We demonstrate the methodology with a case study and
evaluation of the procedures applied to web search personalization. We show how
we can achieve near-optimal utility of accessing information with provable
guarantees on the probability of sharing data.
| [
{
"version": "v1",
"created": "Tue, 22 Apr 2014 10:55:19 GMT"
}
] | 1,398,211,200,000 | [
[
"Singla",
"Adish",
""
],
[
"Horvitz",
"Eric",
""
],
[
"Kamar",
"Ece",
""
],
[
"White",
"Ryen",
""
]
] |
1404.5668 | Pedro Alejandro Ortega | Pedro A. Ortega, Daniel D. Lee | An Adversarial Interpretation of Information-Theoretic Bounded
Rationality | 7 pages, 4 figures. Proceedings of AAAI-14 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been a growing interest in modeling planning with
information constraints. Accordingly, an agent maximizes a regularized expected
utility known as the free energy, where the regularizer is given by the
information divergence from a prior to a posterior policy. While this approach
can be justified in various ways, including from statistical mechanics and
information theory, it is still unclear how it relates to decision-making
against adversarial environments. This connection has previously been suggested
in work relating the free energy to risk-sensitive control and to extensive
form games. Here, we show that a single-agent free energy optimization is
equivalent to a game between the agent and an imaginary adversary. The
adversary can, by paying an exponential penalty, generate costs that diminish
the decision maker's payoffs. It turns out that the optimal strategy of the
adversary consists in choosing costs so as to render the decision maker
indifferent among its choices, which is a defining property of a Nash
equilibrium, thus tightening the connection between free energy optimization
and game theory.
| [
{
"version": "v1",
"created": "Tue, 22 Apr 2014 23:21:14 GMT"
}
] | 1,411,516,800,000 | [
[
"Ortega",
"Pedro A.",
""
],
[
"Lee",
"Daniel D.",
""
]
] |
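The free-energy objective in the abstract above has a closed-form optimum that is easy to sketch: minimizing F(p) = -E_p[U] + (1/beta) KL(p || prior) over policies p yields a Gibbs distribution over actions. The action set and utilities below are hypothetical.

```python
import math

def free_energy_policy(prior, utility, beta):
    """Posterior policy minimizing the information-theoretic free energy
    F(p) = -E_p[U] + (1/beta) * KL(p || prior).
    Its minimizer is p(a) ∝ prior(a) * exp(beta * U(a))."""
    z = sum(prior[a] * math.exp(beta * utility[a]) for a in prior)
    return {a: prior[a] * math.exp(beta * utility[a]) / z for a in prior}
```

As beta grows the policy approaches pure utility maximization; as beta shrinks it stays close to the prior, which is the trade-off the adversarial interpretation analyzes.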
1404.6059 | Dibya Jyoti Bora | Dibya Jyoti Bora and Dr. Anil Kumar Gupta | A Comparative study Between Fuzzy Clustering Algorithm and Hard
Clustering Algorithm | Data Clustering, 6 pages, 6 figures, Published with International
Journal of Computer Trends and Technology (IJCTT) | International Journal of Computer Trends and Technology (IJCTT)
V10(2):108-113, Apr 2014. ISSN:2231-2803 | 10.14445/22312803/IJCTT-V10P119 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data clustering is an important area of data mining. This is an unsupervised
study where data of similar types are put into one cluster while data of
other types are put into different clusters. Fuzzy C-means is an important
clustering technique based on fuzzy logic; hard clustering techniques such as
K-means are also available. In this paper a comparative study is done between a
fuzzy clustering algorithm and a hard clustering algorithm.
| [
{
"version": "v1",
"created": "Thu, 24 Apr 2014 09:02:38 GMT"
}
] | 1,406,764,800,000 | [
[
"Bora",
"Dibya Jyoti",
""
],
[
"Gupta",
"Dr. Anil Kumar",
""
]
] |
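The fuzzy/hard contrast drawn in the abstract above can be made concrete: fuzzy C-means assigns each point graded memberships across all clusters, while K-means assigns it to exactly one. A minimal 1D sketch with hypothetical centers, using the standard FCM membership formula with fuzzifier m:

```python
def fcm_memberships(x, centers, m=2.0):
    """Fuzzy C-means membership degrees of point x w.r.t. fixed centers:
    u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)), where d_i = |x - c_i|."""
    d = [abs(x - c) for c in centers]
    if 0.0 in d:  # x coincides with a center: full membership there
        return [1.0 if di == 0.0 else 0.0 for di in d]
    e = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / dj) ** e for dj in d) for i in range(len(d))]

def hard_assignment(x, centers):
    """K-means style hard assignment: index of the nearest center."""
    return min(range(len(centers)), key=lambda i: abs(x - centers[i]))
```

A point midway between two centers gets membership 0.5 in each under FCM, whereas K-means must commit it to a single cluster; this graded behavior is the core difference the paper studies.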
1404.6566 | Oliver Fernandez Gil | Oliver Fern\'andez Gil | On the Non-Monotonic Description Logic
$\mathcal{ALC}$+T$_{\mathsf{min}}$ | Proceedings of the 15th International Workshop on Non-Monotonic
Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last 20 years many proposals have been made to incorporate
non-monotonic reasoning into description logics, ranging from approaches based
on default logic and circumscription to those based on preferential semantics.
In particular, the non-monotonic description logic
$\mathcal{ALC}$+T$_{\mathsf{min}}$ uses a combination of the preferential
semantics with minimization of a certain kind of concepts, which represent
atypical instances of a class of elements. One of its drawbacks is that it
suffers from the problem known as the \emph{property blocking inheritance},
which can be seen as a weakness from an inferential point of view. In this
paper we propose an extension of $\mathcal{ALC}$+T$_{\mathsf{min}}$, namely
$\mathcal{ALC}$+T$^+_{\mathsf{min}}$, with the purpose to solve the mentioned
problem. In addition, we show the close connection that exists between
$\mathcal{ALC}$+T$^+_{\mathsf{min}}$ and concept-circumscribed knowledge bases.
Finally, we study the complexity of deciding the classical reasoning tasks in
$\mathcal{ALC}$+T$^+_{\mathsf{min}}$.
| [
{
"version": "v1",
"created": "Fri, 25 Apr 2014 21:45:44 GMT"
}
] | 1,398,729,600,000 | [
[
"Gil",
"Oliver Fernández",
""
]
] |
1404.6696 | Thibaut Vidal | Thibaut Vidal, Maria Battarra, Anand Subramanian, G\"une\c{s}
Erdo\v{g}an | Hybrid Metaheuristics for the Clustered Vehicle Routing Problem | Working Paper, MIT -- 22 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Clustered Vehicle Routing Problem (CluVRP) is a variant of the
Capacitated Vehicle Routing Problem in which customers are grouped into
clusters. Each cluster has to be visited once, and a vehicle entering a cluster
cannot leave it until all customers have been visited. This article presents
two alternative hybrid metaheuristic algorithms for the CluVRP. The first
algorithm is based on an Iterated Local Search algorithm, in which only
feasible solutions are explored and problem-specific local search moves are
utilized. The second algorithm is a Hybrid Genetic Search, for which the
shortest Hamiltonian path between each pair of vertices within each cluster
should be precomputed. Using this information, a sequence of clusters can be
used as a solution representation and large neighborhoods can be efficiently
explored by means of bi-directional dynamic programming, sequence
concatenations, and appropriate data structures. Extensive computational
experiments are performed on benchmark instances from the literature, as well
as new large scale ones. Recommendations on promising algorithm choices are
provided relative to the average cluster size.
| [
{
"version": "v1",
"created": "Sat, 26 Apr 2014 23:52:47 GMT"
}
] | 1,398,729,600,000 | [
[
"Vidal",
"Thibaut",
""
],
[
"Battarra",
"Maria",
""
],
[
"Subramanian",
"Anand",
""
],
[
"Erdoǧan",
"Güneş",
""
]
] |
1404.6784 | Joao Leite | Martin Slota, Martin Bal\'az, Jo\~ao Leite | On Strong and Default Negation in Logic Program Updates (Extended
Version) | 14 pages, extended version of the paper to appear in the online
supplement of Theory and Practice of Logic Programming (TPLP), and presented
at the 15th International Workshop on Non-Monotonic Reasoning (NMR 2014) and
at the 30th International Conference on Logic Programming (ICLP 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing semantics for answer-set program updates fall into two categories:
either they consider only strong negation in heads of rules, or they primarily
rely on default negation in heads of rules and optionally provide support for
strong negation by means of a syntactic transformation. In this paper we
pinpoint the limitations of both these approaches and argue that both types of
negation should be first-class citizens in the context of updates. We identify
principles that plausibly constrain their interaction but are not
simultaneously satisfied by any existing rule update semantics. Then we extend
one of the most advanced semantics with direct support for strong negation and
show that it satisfies the outlined principles as well as a variety of other
desirable properties.
| [
{
"version": "v1",
"created": "Sun, 27 Apr 2014 16:33:42 GMT"
},
{
"version": "v2",
"created": "Thu, 8 May 2014 10:46:56 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Jun 2014 23:30:20 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Jul 2014 16:05:40 GMT"
}
] | 1,404,950,400,000 | [
[
"Slota",
"Martin",
""
],
[
"Baláz",
"Martin",
""
],
[
"Leite",
"João",
""
]
] |
1404.6883 | Jozef Frtus | Jozef Frt\'us | Credulous and Skeptical Argument Games for Complete Semantics in
Conflict Resolution based Argumentation | appears in the Proceedings of the 15th International Workshop on
Non-Monotonic Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Argumentation is one of the most popular approaches to defining
a non-monotonic formalism, and several argumentation-based semantics were
proposed for defeasible logic programs. Recently, a new approach based on
notions of conflict resolutions was proposed, however with declarative
semantics only. This paper gives a more procedural counterpart by developing
skeptical and credulous argument games for complete semantics; soundness and
completeness theorems are provided for both games. After that, distribution of
a defeasible logic program into several contexts is investigated and both
argument games are adapted for multi-context systems.
| [
{
"version": "v1",
"created": "Mon, 28 Apr 2014 07:24:57 GMT"
}
] | 1,398,729,600,000 | [
[
"Frtús",
"Jozef",
""
]
] |
1404.6974 | Claudia Schon | Ulrich Furbach and Claudia Schon | Deontic Logic for Human Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deontic logic is shown to be applicable for modelling human reasoning. For
this the Wason selection task and the suppression task are discussed in detail.
Different versions of modelling norms with deontic logic are introduced and in
the case of the Wason selection task it is demonstrated how differences in the
performance of humans in the abstract and in the social contract case can be
explained. Furthermore it is shown that an automated theorem prover can be used
as a reasoning tool for deontic logic.
| [
{
"version": "v1",
"created": "Mon, 28 Apr 2014 13:34:20 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Sep 2014 08:46:31 GMT"
}
] | 1,411,084,800,000 | [
[
"Furbach",
"Ulrich",
""
],
[
"Schon",
"Claudia",
""
]
] |
1404.6999 | Carmine Dodaro | Mario Alviano, Carmine Dodaro and Francesco Ricca | Preliminary Report on WASP 2.0 | The paper appears in the Proceedings of the 15th International
Workshop on Non-Monotonic Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer Set Programming (ASP) is a declarative programming paradigm. The
intrinsic complexity of the evaluation of ASP programs makes the development of
more effective and faster systems a challenging research topic. This paper
reports on the recent improvements of the ASP solver WASP. WASP is undergoing a
refactoring process which will end up in the release of a new and more
performant version of the software. In particular, the paper focuses on the
improvements to the core evaluation algorithms working on normal programs. A
preliminary experiment on benchmarks from the 3rd ASP competition belonging to
the NP class is reported. The previous version of WASP was often not
competitive with alternative solutions on this class. The new version of WASP
shows a substantial increase in performance.
| [
{
"version": "v1",
"created": "Mon, 28 Apr 2014 14:26:12 GMT"
}
] | 1,398,729,600,000 | [
[
"Alviano",
"Mario",
""
],
[
"Dodaro",
"Carmine",
""
],
[
"Ricca",
"Francesco",
""
]
] |
1404.7173 | Daniel Schwartz | Daniel G. Schwartz | Nonmonotonic Reasoning as a Temporal Activity | Proceedings of the 15th International Workshop on Non-Monotonic
Reasoning (NMR 2014), Vienna, Austria, 17-19 July 2014 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A {\it dynamic reasoning system} (DRS) is an adaptation of a conventional
formal logical system that explicitly portrays reasoning as a temporal
activity, with each extralogical input to the system and each inference rule
application being viewed as occurring at a distinct time step. Every DRS
incorporates some well-defined logic together with a controller that serves to
guide the reasoning process in response to user inputs. Logics are generic,
whereas controllers are application-specific. Every controller does,
nonetheless, provide an algorithm for nonmonotonic belief revision. The general
notion of a DRS comprises a framework within which one can formulate the logic
and algorithms for a given application and prove that the algorithms are
correct, i.e., that they serve to (i) derive all salient information and (ii)
preserve the consistency of the belief set. This paper illustrates the idea
with ordinary first-order predicate calculus, suitably modified for the present
purpose, and an example. The example revisits some classic nonmonotonic
reasoning puzzles (Opus the Penguin, Nixon Diamond) and shows how these can be
resolved in the context of a DRS, using an expanded version of first-order
logic that incorporates typed predicate symbols. All concepts are rigorously
defined and effectively computable, thereby providing the foundation for a
future software implementation.
| [
{
"version": "v1",
"created": "Mon, 28 Apr 2014 21:35:32 GMT"
}
] | 1,398,816,000,000 | [
[
"Schwartz",
"Daniel G.",
""
]
] |
1404.7279 | Michael Gr. Voskoglou Prof. Dr. | Michael Gr. Voskoglou | Assessing the players' performance in the game of bridge: A fuzzy logic
approach | 6 pages, 2 figures, 2 tables | American Journal of Applied Mathematics and Statistics, vol. 2,
no. 3 (2014), 115-120 | 10.12691/ajams-2-3-5 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contract bridge nowadays occupies a position of great prestige, being,
together with chess, one of the only mind games officially recognized by the
International Olympic Committee. In the present paper an innovative method for
assessing the total performance of bridge players belonging to groups of
special interest (e.g., different bridge clubs during a tournament, men and
women, new and old players, etc.) is introduced, which is based on principles of
fuzzy logic. For this, the cohorts under assessment are represented as fuzzy
subsets of a set of linguistic labels characterizing their performance and the
centroid defuzzification method is used to convert the fuzzy data collected
from the game to a crisp number. This new method of assessment could be used
informally as a complement of the official bridge-scoring methods for
statistical and other obvious reasons. Two real applications related to
simultaneous tournaments with pre-dealt boards, organized by the Hellenic
Bridge Federation, are also presented, illustrating the importance of our
results in practice.
| [
{
"version": "v1",
"created": "Tue, 29 Apr 2014 08:56:44 GMT"
}
] | 1,398,816,000,000 | [
[
"Voskoglou",
"Michael Gr.",
""
]
] |
1404.7428 | Anthony Hunter | Anthony Hunter | Analysis of Dialogical Argumentation via Finite State Machines | 10 pages | Proceedings of the International Conference on Scalable
Uncertainty Management (SUM'13), LNCS 8078, Pages 1-14, Springer, 2013 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dialogical argumentation is an important cognitive activity by which agents
exchange arguments and counterarguments as part of some process such as
discussion, debate, persuasion and negotiation. Whilst numerous formal systems
have been proposed, there is a lack of frameworks for implementing and
evaluating these proposals. First-order executable logic has been proposed as a
general framework for specifying and analysing dialogical argumentation. In
this paper, we investigate how we can implement systems for dialogical
argumentation using propositional executable logic. Our approach is to present
and evaluate an algorithm that generates a finite state machine that reflects a
propositional executable logic specification for a dialogical argumentation
together with an initial state. We also consider how the finite state machines
can be analysed, with the minimax strategy being used as an illustration of the
kinds of empirical analysis that can be undertaken.
| [
{
"version": "v1",
"created": "Tue, 29 Apr 2014 16:49:33 GMT"
}
] | 1,398,816,000,000 | [
[
"Hunter",
"Anthony",
""
]
] |
1404.7719 | Nico Roos | Wenzhao Qiao and Nico Roos | An argumentation system for reasoning with conflict-minimal
paraconsistent ALC | null | Proceedings of the 15th International Workshop on Non-Monotonic
Reasoning (NMR 2014) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The semantic web is an open and distributed environment in which it is hard
to guarantee consistency of knowledge and information. Under the standard
two-valued semantics everything is entailed if knowledge and information is
inconsistent. The semantics of the paraconsistent logic LP offers a solution.
However, if the available knowledge and information is consistent, the set of
conclusions entailed under the three-valued semantics of the paraconsistent
logic LP is smaller than the set of conclusions entailed under the two-valued
semantics. Preferring conflict-minimal three-valued interpretations eliminates
this difference.
Preferring conflict-minimal interpretations introduces non-monotonicity. To
handle the non-monotonicity, this paper proposes an assumption-based
argumentation system. Assumptions needed to close branches of a semantic
tableau form the arguments. Stable extensions of the set of derived arguments
correspond to conflict-minimal interpretations, and conclusions entailed by all
conflict-minimal interpretations are supported by arguments in all stable
extensions.
| [
{
"version": "v1",
"created": "Wed, 30 Apr 2014 13:32:27 GMT"
}
] | 1,398,902,400,000 | [
[
"Qiao",
"Wenzhao",
""
],
[
"Roos",
"Nico",
""
]
] |
1404.7734 | Thomas Linsbichler | Ringo Baumann, Wolfgang Dvor\'ak, Thomas Linsbichler, Hannes Strass
and Stefan Woltran | Compact Argumentation Frameworks | Contribution to the 15th International Workshop on Non-Monotonic
Reasoning, 2014, Vienna | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Abstract argumentation frameworks (AFs) are one of the most studied
formalisms in AI. In this work, we introduce a certain subclass of AFs which we
call compact. Given an extension-based semantics, the corresponding compact AFs
are characterized by the feature that each argument of the AF occurs in at
least one extension. This not only guarantees a certain notion of fairness;
compact AFs are thus also minimal in the sense that no argument can be removed
without changing the outcome. We address the following questions in the paper:
(1) How are the classes of compact AFs related for different semantics? (2)
Under which circumstances can AFs be transformed into equivalent compact ones?
(3) Finally, we show that compact AFs are indeed a non-trivial subclass, since
the verification problem remains coNP-hard for certain semantics.
| [
{
"version": "v1",
"created": "Wed, 30 Apr 2014 14:23:40 GMT"
}
] | 1,398,902,400,000 | [
[
"Baumann",
"Ringo",
""
],
[
"Dvorák",
"Wolfgang",
""
],
[
"Linsbichler",
"Thomas",
""
],
[
"Strass",
"Hannes",
""
],
[
"Woltran",
"Stefan",
""
]
] |
1405.0034 | Aaron Hunter | Aaron Hunter | Belief Revision and Trust | Appears in the Proceedings of the 15th International Workshop on
Non-Monotonic Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Belief revision is the process in which an agent incorporates a new piece of
information together with a pre-existing set of beliefs. When the new
information comes in the form of a report from another agent, then it is clear
that we must first determine whether or not that agent should be trusted. In
this paper, we provide a formal approach to modeling trust as a pre-processing
step before belief revision. We emphasize that trust is not simply a relation
between agents; the trust that one agent has in another is often restricted to
a particular domain of expertise. We demonstrate that this form of trust can be
captured by associating a state-partition with each agent, then relativizing
all reports to this state partition before performing belief revision. In this
manner, we incorporate only the part of a report that falls under the perceived
domain of expertise of the reporting agent. Unfortunately, state partitions
based on expertise do not allow us to compare the relative strength of trust
held with respect to different agents. To address this problem, we introduce
pseudometrics over states to represent differing degrees of trust. This allows
us to incorporate simultaneous reports from multiple agents in a way that
ensures the most trusted reports will be believed.
| [
{
"version": "v1",
"created": "Wed, 30 Apr 2014 21:08:55 GMT"
}
] | 1,398,988,800,000 | [
[
"Hunter",
"Aaron",
""
]
] |
1405.0406 | Sylwia Polberg | Sylwia Polberg | Extension-based Semantics of Abstract Dialectical Frameworks | To appear in the Proceedings of the 15th International Workshop on
Non-Monotonic Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most prominent tools for abstract argumentation is Dung's
framework, AF for short. It is accompanied by a variety of semantics including
grounded, complete, preferred and stable. Although powerful, AFs have their
shortcomings, which led to the development of numerous enrichments. Among the most
general ones are the abstract dialectical frameworks, also known as the ADFs.
They make use of the so-called acceptance conditions to represent arbitrary
relations. This level of abstraction brings not only new challenges, but also
requires addressing existing problems in the field. One of the most
controversial issues, recognized not only in argumentation, concerns the
support cycles. In this paper we introduce a new method to ensure acyclicity of
the chosen arguments and present a family of extension-based semantics built on
it. We also continue our research on the semantics that permit cycles and fill
in the gaps from the previous works. Moreover, we provide ADF versions of the
properties known from the Dung setting. Finally, we also introduce a
classification of the developed sub-semantics and relate them to the existing
labeling-based approaches.
| [
{
"version": "v1",
"created": "Fri, 2 May 2014 14:04:50 GMT"
}
] | 1,399,248,000,000 | [
[
"Polberg",
"Sylwia",
""
]
] |
1405.0423 | Sachin Lakra | Sachin Lakra, T.V. Prasad and G. Ramakrishna | Representation of a Sentence using a Polar Fuzzy Neutrosophic Semantic
Net | arXiv admin note: text overlap with arXiv:math/0101228,
arXiv:math/0412424, arXiv:math/0306384 by other authors | International Journal of Advanced Computer Science and
Applications, Special Issue on Natural Language Processing, Volume 4, Issue
1, April 2014, pp. 1-8 | 10.14569/SpecialIssue.2014.040101 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A semantic net can be used to represent a sentence. A sentence in a language
contains semantics which are polar in nature, that is, semantics which are
positive, neutral and negative. Neutrosophy is a relatively new field of
science which can be used to mathematically represent triads of concepts. These
triads include truth, indeterminacy and falsehood, and so also positivity,
neutrality and negativity. Thus a conventional semantic net has been extended
in this paper using neutrosophy into a Polar Fuzzy Neutrosophic Semantic Net. A
Polar Fuzzy Neutrosophic Semantic Net has been implemented in MATLAB and has
been used to illustrate a polar sentence in English language. The paper
demonstrates a method for the representation of polarity in a computer's memory.
Thus, polar concepts can be applied to imbue a machine, such as a robot, with
emotions, making machine emotion representation possible.
| [
{
"version": "v1",
"created": "Fri, 2 May 2014 15:05:55 GMT"
}
] | 1,399,248,000,000 | [
[
"Lakra",
"Sachin",
""
],
[
"Prasad",
"T. V.",
""
],
[
"Ramakrishna",
"G.",
""
]
] |
1405.0720 | Matthias Nickles | Matthias Nickles, Alessandra Mileo | Probabilistic Inductive Logic Programming Based on Answer Set
Programming | Appears in the Proceedings of the 15th International Workshop on
Non-Monotonic Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new formal language for the expressive representation of
probabilistic knowledge based on Answer Set Programming (ASP). It allows for
the annotation of first-order formulas as well as ASP rules and facts with
probabilities and for learning of such weights from data (parameter
estimation). Weighted formulas are given a semantics in terms of soft and hard
constraints which determine a probability distribution over answer sets. In
contrast to related approaches, we approach inference by optionally utilizing
so-called streamlining XOR constraints, in order to reduce the number of
computed answer sets. Our approach is prototypically implemented. Examples
illustrate the introduced concepts and point at issues and topics for future
research.
| [
{
"version": "v1",
"created": "Sun, 4 May 2014 17:18:49 GMT"
}
] | 1,399,334,400,000 | [
[
"Nickles",
"Matthias",
""
],
[
"Mileo",
"Alessandra",
""
]
] |
1405.0795 | Valmi Dufour-Lussier | Valmi Dufour-Lussier (INRIA Nancy - Grand Est / LORIA), Alice Hermann
(INRIA Nancy - Grand Est / LORIA), Florence Le Ber (ICube), Jean Lieber
(INRIA Nancy - Grand Est / LORIA) | Belief revision in the propositional closure of a qualitative algebra
(extended version) | This is the extended version of an article originally presented at
the 14th International Conference on Principles of Knowledge Representation
and Reasoning | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Belief revision is an operation that aims at modifying old beliefs so that
they become consistent with new ones. The issue of belief revision has been
studied in various formalisms, in particular, in qualitative algebras (QAs) in
which the result is a disjunction of belief bases that is not necessarily
representable in a QA. This motivates the study of belief revision in
formalisms extending QAs, namely, their propositional closures: in such a
closure, the result of belief revision belongs to the formalism. Moreover, this
makes it possible to define a contraction operator thanks to the Harper
identity. Belief revision in the propositional closure of QAs is studied, an
algorithm for a family of revision operators is designed, and an open-source
implementation is made freely available on the web.
| [
{
"version": "v1",
"created": "Mon, 5 May 2014 07:04:01 GMT"
}
] | 1,399,334,400,000 | [
[
"Dufour-Lussier",
"Valmi",
"",
"INRIA Nancy - Grand Est / LORIA"
],
[
"Hermann",
"Alice",
"",
"INRIA Nancy - Grand Est / LORIA"
],
[
"Ber",
"Florence Le",
"",
"ICube"
],
[
"Lieber",
"Jean",
"",
"INRIA Nancy - Grand Est / LORIA"
]
] |
1405.0868 | Zhana Bao | Zhana Bao | Finding Inner Outliers in High Dimensional Space | 9 pages, 9 Figures, 3 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Outlier detection in a large-scale database is a significant and complex
issue in the field of knowledge discovery. As the data distributions are obscure and
uncertain in high dimensional space, most existing solutions try to solve the
issue taking into account the two intuitive points: first, outliers are
extremely far away from other points in high dimensional space; second,
outliers are detected obviously different in projected-dimensional subspaces.
However, for a complicated case that outliers are hidden inside the normal
points in all dimensions, existing detection methods fail to find such inner
outliers. In this paper, we propose a method with two rounds of dimension projection,
which integrates primary subspace outlier detection and secondary
point projection between subspaces, and sums up the multiple weight values for
each point. A local density ratio is computed for each point separately in the
twice-projected dimensions. After this process, outliers are those points
scoring the largest weight values. The proposed method succeeds in finding all
inner outliers on the synthetic test datasets with the dimension varying from
100 to 10000. The experimental results also show that the proposed algorithm
can work in low dimensional space and can achieve perfect performance in high
dimensional space. For this reason, our proposed approach has considerable
potential for multimedia applications, helping to process images or
video with large-scale attributes.
| [
{
"version": "v1",
"created": "Mon, 5 May 2014 12:01:14 GMT"
}
] | 1,399,334,400,000 | [
[
"Bao",
"Zhana",
""
]
] |
1405.0876 | Luca Pulina | Marco Maratea, Luca Pulina, Francesco Ricca | The Multi-engine ASP Solver ME-ASP: Progress Report | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ME-ASP is a multi-engine solver for ground ASP programs. It exploits algorithm
selection techniques based on classification to select one among a set of
out-of-the-box heterogeneous ASP solvers used as black-box engines. In this
paper we report on (i) a new optimized implementation of ME-ASP; and (ii) an
attempt of applying algorithm selection to non-ground programs. An experimental
analysis reported in the paper shows that (i) the new implementation of ME-ASP
is substantially faster than the previous version; and (ii) the multi-engine
recipe can be applied to the evaluation of non-ground programs with some
benefits.
| [
{
"version": "v1",
"created": "Mon, 5 May 2014 12:45:07 GMT"
}
] | 1,399,334,400,000 | [
[
"Maratea",
"Marco",
""
],
[
"Pulina",
"Luca",
""
],
[
"Ricca",
"Francesco",
""
]
] |
1405.0915 | Riccardo Zese | Riccardo Zese | Reasoning with Probabilistic Logics | An extended abstract / full version of a paper accepted to be
presented at the Doctoral Consortium of the 30th International Conference on
Logic Programming (ICLP 2014), July 19-22, Vienna, Austria | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The interest in the combination of probability with logics for modeling the
world has rapidly increased in the last few years. One of the most effective
approaches is the Distribution Semantics, which was adopted by many logic
programming languages and in Description Logics. In this paper, we illustrate
the work we have done in this research field by presenting a probabilistic
semantics for description logics and reasoning and learning algorithms. In
particular, we present in detail the system TRILL P, which computes the
probability of queries w.r.t. probabilistic knowledge bases, which has been
implemented in Prolog.
| [
{
"version": "v1",
"created": "Mon, 5 May 2014 14:57:08 GMT"
},
{
"version": "v2",
"created": "Tue, 13 May 2014 13:18:59 GMT"
},
{
"version": "v3",
"created": "Thu, 29 Jan 2015 09:26:41 GMT"
}
] | 1,422,576,000,000 | [
[
"Zese",
"Riccardo",
""
]
] |
1405.0961 | Ulrich Furbach | Ulrike Barthelmess and Ulrich Furbach | Do we need Asimov's Laws? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this essay the stance on robots is discussed. Attitudes toward robots
throughout history, from Ancient Greek culture until the industrial revolution,
are described. The uncanny valley and some possible explanations for it are given. Some
differences between Western and Asian understandings of robots are listed, and finally
we answer the question raised in the title.
| [
{
"version": "v1",
"created": "Tue, 29 Apr 2014 08:30:49 GMT"
}
] | 1,416,355,200,000 | [
[
"Barthelmess",
"Ulrike",
""
],
[
"Furbach",
"Ulrich",
""
]
] |
1405.0999 | Shiqi Zhang | Shiqi Zhang, Mohan Sridharan, Michael Gelfond, Jeremy Wyatt | KR$^3$: An Architecture for Knowledge Representation and Reasoning in
Robotics | The paper appears in the Proceedings of the 15th International
Workshop on Non-Monotonic Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes an architecture that combines the complementary
strengths of declarative programming and probabilistic graphical models to
enable robots to represent, reason with, and learn from, qualitative and
quantitative descriptions of uncertainty and knowledge. An action language is
used for the low-level (LL) and high-level (HL) system descriptions in the
architecture, and the definition of recorded histories in the HL is expanded to
allow prioritized defaults. For any given goal, tentative plans created in the
HL using default knowledge and commonsense reasoning are implemented in the LL
using probabilistic algorithms, with the corresponding observations used to
update the HL history. Tight coupling between the two levels enables automatic
selection of relevant variables and generation of suitable action policies in
the LL for each HL action, and supports reasoning with violation of defaults,
noisy observations and unreliable actions in large and complex domains. The
architecture is evaluated in simulation and on physical robots transporting
objects in indoor domains; the benefit on robots is a reduction in task
execution time of 39% compared with a purely probabilistic, but still
hierarchical, approach.
| [
{
"version": "v1",
"created": "Mon, 5 May 2014 19:13:06 GMT"
}
] | 1,399,334,400,000 | [
[
"Zhang",
"Shiqi",
""
],
[
"Sridharan",
"Mohan",
""
],
[
"Gelfond",
"Michael",
""
],
[
"Wyatt",
"Jeremy",
""
]
] |
1405.1071 | Jean-Fran\c{c}ois Baget | Jean-Fran\c{c}ois Baget and Fabien Garreau and Marie-Laure Mugnier and
Swan Rocher | Revisiting Chase Termination for Existential Rules and their Extension
to Nonmonotonic Negation | This paper appears in the Proceedings of the 15th International
Workshop on Non-Monotonic Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existential rules have been proposed for representing ontological knowledge,
specifically in the context of Ontology- Based Data Access. Entailment with
existential rules is undecidable. We focus in this paper on conditions that
ensure the termination of a breadth-first forward chaining algorithm known as
the chase. Several variants of the chase have been proposed. In the first part
of this paper, we propose a new tool that allows extending existing acyclicity
conditions ensuring chase termination, while keeping good complexity
properties. In the second part, we study the extension to existential rules
with nonmonotonic negation under stable model semantics, discuss the relevance
of the chase variants for these rules, and further extend acyclicity results
obtained in the positive case.
| [
{
"version": "v1",
"created": "Mon, 5 May 2014 20:58:01 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Jul 2014 12:49:54 GMT"
}
] | 1,406,505,600,000 | [
[
"Baget",
"Jean-François",
""
],
[
"Garreau",
"Fabien",
""
],
[
"Mugnier",
"Marie-Laure",
""
],
[
"Rocher",
"Swan",
""
]
] |
1405.1124 | Marcello Balduccini | Marcello Balduccini, William C. Regli, Duc N. Nguyen | An ASP-Based Architecture for Autonomous UAVs in Dynamic Environments:
Progress Report | Proceedings of the 15th International Workshop on Non-Monotonic
Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional AI reasoning techniques have been used successfully in many
domains, including logistics, scheduling and game playing. This paper is part
of a project aimed at investigating how such techniques can be extended to
coordinate teams of unmanned aerial vehicles (UAVs) in dynamic environments.
Specifically challenging are real-world environments where UAVs and other
network-enabled devices must communicate to coordinate---and communication
actions are neither reliable nor free. Such network-centric environments are
common in military, public safety and commercial applications, yet most
research (even multi-agent planning) usually takes communications among
distributed agents as a given. We address this challenge by developing an agent
architecture and reasoning algorithms based on Answer Set Programming (ASP).
ASP has been chosen for this task because it enables high flexibility of
representation, both of knowledge and of reasoning tasks. Although ASP has been
used successfully in a number of applications, and ASP-based architectures have
been studied for about a decade, to the best of our knowledge this is the first
practical application of a complete ASP-based agent architecture. It is also
the first practical application of ASP involving a combination of centralized
reasoning, decentralized reasoning, execution monitoring, and reasoning about
network communications. This work has been empirically validated using a
distributed network-centric software evaluation testbed and the results provide
guidance to designers in how to understand and control intelligent systems that
operate in these environments.
| [
{
"version": "v1",
"created": "Tue, 6 May 2014 02:05:04 GMT"
}
] | 1,399,420,800,000 | [
[
"Balduccini",
"Marcello",
""
],
[
"Regli",
"William C.",
""
],
[
"Nguyen",
"Duc N.",
""
]
] |
1405.1183 | Daniel Le Berre | Daniel Le Berre | Some thoughts about benchmarks for NMR | Proceedings of the 15th International Workshop on Non-Monotonic
Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The NMR community would like to build a repository of benchmarks to push
forward the design of systems implementing NMR, as has been the case for many
other areas in AI. There are a number of lessons that can be learned from the
experience of other communities. Here are a few thoughts about the
requirements and choices to make before building such a repository.
| [
{
"version": "v1",
"created": "Tue, 6 May 2014 08:09:13 GMT"
}
] | 1,399,420,800,000 | [
[
"Berre",
"Daniel Le",
""
]
] |
1405.1287 | Mario Alviano | Mario Alviano and Wolfgang Faber | Semantics and Compilation of Answer Set Programming with Generalized
Atoms | The paper appears in the Proceedings of the 15th International
Workshop on Non-Monotonic Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer Set Programming (ASP) is logic programming under the stable model or
answer set semantics. During the last decade, this paradigm has seen several
extensions by generalizing the notion of atom used in these programs. Among
these, there are aggregate atoms, HEX atoms, generalized quantifiers, and
abstract constraints. In this paper we refer to these constructs collectively
as generalized atoms. The idea common to all of these constructs is that their
satisfaction depends on the truth values of a set of (non-generalized) atoms,
rather than the truth value of a single (non-generalized) atom. Motivated by
several examples, we argue that for some of the more intricate generalized
atoms, the previously suggested semantics provide unintuitive results and
provide an alternative semantics, which we call supportedly stable or SFLP
answer sets. We show that it is equivalent to the major previously proposed
semantics for programs with convex generalized atoms, and that it in general
admits more intended models than other semantics in the presence of non-convex
generalized atoms. We show that the complexity of supportedly stable models is
on the second level of the polynomial hierarchy, similar to previous proposals
and to stable models of disjunctive logic programs. Given these complexity
results, we provide a compilation method that compactly transforms programs
with generalized atoms in disjunctive normal form to programs without
generalized atoms. Variants are given for the new supportedly stable and the
existing FLP semantics, for which a similar compilation technique has not been
known so far.
| [
{
"version": "v1",
"created": "Tue, 6 May 2014 14:44:40 GMT"
}
] | 1,399,420,800,000 | [
[
"Alviano",
"Mario",
""
],
[
"Faber",
"Wolfgang",
""
]
] |
1405.1397 | Shamim Ripon | Shamim Ripon, Aoyan Barua, and Mohammad Salah Uddin | Analysis Tool for UNL-Based Knowledge Representation | 8 pages, 5 figures. arXiv admin note: text overlap with
arXiv:cs/0404030 by other authors | Journal of Advanced Computer Science and Technology Research
(JACSTR) Vol. 2, No. 4, pp. 176-183, 2012 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fundamental issue in knowledge representation is to provide a precise
definition of the knowledge that agents possess in a manner that is independent
of procedural considerations, context-free, and easy to manipulate, exchange and
reason about. Knowledge must be accessible to everyone regardless of their
native languages. Universal Networking Language (UNL) is a declarative formal
language and a generalized form of human language in a machine independent
digital platform for defining, recapitulating, amending, storing and
dissipating knowledge among people of different affiliations. UNL extracts
semantic data from a native language for Interlingua machine translation. This
paper presents the development of a graphical tool that incorporates UNL to
provide a visual means of representing the semantic data available in a native
text. UNL represents the semantics of a sentence as a conceptual hyper-graph.
We translate this information into XML format and create a graph from XML,
representing the actual concepts available in the native language
| [
{
"version": "v1",
"created": "Sun, 4 May 2014 19:50:49 GMT"
}
] | 1,399,420,800,000 | [
[
"Ripon",
"Shamim",
""
],
[
"Barua",
"Aoyan",
""
],
[
"Uddin",
"Mohammad Salah",
""
]
] |
1405.1520 | Marius Lindauer | Holger Hoos and Marius Lindauer and Torsten Schaub | claspfolio 2: Advances in Algorithm Selection for Answer Set Programming | To appear in Theory and Practice of Logic Programming (TPLP) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building on the
award-winning, portfolio-based ASP solver claspfolio, we present claspfolio 2,
a modular and open solver architecture that integrates several different
portfolio-based algorithm selection approaches and techniques. The claspfolio 2
solver framework supports various feature generators, solver selection
approaches, solver portfolios, as well as solver-schedule-based pre-solving
techniques. The default configuration of claspfolio 2 relies on a light-weight
version of the ASP solver clasp to generate static and dynamic instance
features. The flexible open design of claspfolio 2 is a distinguishing factor
even beyond ASP. As such, it provides a unique framework for comparing and
combining existing portfolio-based algorithm selection approaches and
techniques in a single, unified framework. Taking advantage of this, we
conducted an extensive experimental study to assess the impact of different
feature sets, selection approaches and base solver portfolios. In addition to
gaining substantial insights into the utility of the various approaches and
techniques, we identified a default configuration of claspfolio 2 that achieves
substantial performance gains not only over clasp's default configuration and
the earlier version of claspfolio 2, but also over manually tuned
configurations of clasp.
| [
{
"version": "v1",
"created": "Wed, 7 May 2014 07:25:58 GMT"
}
] | 1,399,507,200,000 | [
[
"Hoos",
"Holger",
""
],
[
"Lindauer",
"Marius",
""
],
[
"Schaub",
"Torsten",
""
]
] |
1405.1524 | Mohammad Mohammadi | Mohammad Mohammadi, Shahram Jafari | An expert system for recommending suitable ornamental fish addition to
an aquarium based on aquarium condition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Expert systems prove to be a suitable replacement for human experts when human
experts are unavailable for various reasons. Expert systems have been
developed for a wide range of applications. Although some expert systems in the
field of fishery and aquaculture have been developed, a system that aids users
in the process of selecting a new addition to their aquarium tank has never been
designed. This paper proposes an expert system that suggests new additions to an
aquarium tank based on the current environmental condition of the aquarium and
the fish currently in it. The system suggests the best fit for the
aquarium's condition that is most compatible with the other fish in the aquarium.
| [
{
"version": "v1",
"created": "Wed, 7 May 2014 07:45:09 GMT"
}
] | 1,399,507,200,000 | [
[
"Mohammadi",
"Mohammad",
""
],
[
"Jafari",
"Shahram",
""
]
] |
1405.1544 | Alexander Semenov | Ilya Otpuschennikov, Alexander Semenov, Stepan Kochemazov | Transalg: a Tool for Translating Procedural Descriptions of Discrete
Functions to SAT | null | Proceedings of The 5th International Workshop on Computer Science
and Engineering: Information Processing and Control Engineering (WCSE
2015-IPCE) (2015) 289-294 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present the Transalg system, designed to produce SAT
encodings for discrete functions, written as programs in a specific language.
Translation of such programs to SAT is based on propositional encoding methods
for formal computing models and on the concept of symbolic execution. We used
the Transalg system to make SAT encodings for a number of cryptographic
functions.
| [
{
"version": "v1",
"created": "Wed, 7 May 2014 09:30:55 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Oct 2015 10:34:47 GMT"
}
] | 1,446,163,200,000 | [
[
"Otpuschennikov",
"Ilya",
""
],
[
"Semenov",
"Alexander",
""
],
[
"Kochemazov",
"Stepan",
""
]
] |
1405.1675 | Stefano Teso | Stefano Teso and Roberto Sebastiani and Andrea Passerini | Structured Learning Modulo Theories | 46 pages, 11 figures, submitted to Artificial Intelligence Journal
Special Issue on Combining Constraint Solving with Mining and Learning | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modelling problems containing a mixture of Boolean and numerical variables is
a long-standing interest of Artificial Intelligence. However, performing
inference and learning in hybrid domains is a particularly daunting task. The
ability to model this kind of domain is crucial in "learning to design" tasks,
that is, learning applications where the goal is to learn from examples how to
perform automatic {\em de novo} design of novel objects. In this paper we
present Structured Learning Modulo Theories, a max-margin approach for learning
in hybrid domains based on Satisfiability Modulo Theories, which allows one to
combine Boolean reasoning and optimization over continuous linear arithmetical
constraints. The main idea is to leverage a state-of-the-art generalized
Satisfiability Modulo Theory solver for implementing the inference and
separation oracles of Structured Output SVMs. We validate our method on
artificial and real world scenarios.
| [
{
"version": "v1",
"created": "Wed, 7 May 2014 17:41:43 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Dec 2014 15:57:29 GMT"
}
] | 1,418,947,200,000 | [
[
"Teso",
"Stefano",
""
],
[
"Sebastiani",
"Roberto",
""
],
[
"Passerini",
"Andrea",
""
]
] |
1405.2058 | Ari Saptawijaya | Ari Saptawijaya and Lu\'is Moniz Pereira | Joint Tabling of Logic Program Abductions and Updates | To appear in Theory and Practice of Logic Programming (TPLP), 10
pages plus bibliography | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Abductive logic programs offer a formalism to declaratively represent and
reason about problems in a variety of areas: diagnosis, decision making,
hypothetical reasoning, etc. On the other hand, logic program updates allow us
to express knowledge changes, be they internal (or self) or external (or
world) changes. Abductive logic programs and logic program updates thus
naturally coexist in problems that are susceptible to hypothetical reasoning
about change. Taking this as a motivation, in this paper we integrate abductive
logic programs and logic program updates by jointly exploiting tabling features
of logic programming. The integration is based on and benefits from the two
implementation techniques we separately devised previously, viz., tabled
abduction and incremental tabling for query-driven propagation of logic program
updates. A prototype of the integrated system is implemented in XSB Prolog.
| [
{
"version": "v1",
"created": "Thu, 8 May 2014 19:39:01 GMT"
}
] | 1,399,593,600,000 | [
[
"Saptawijaya",
"Ari",
""
],
[
"Pereira",
"Luís Moniz",
""
]
] |
1405.2501 | Roman Bart\'ak | Roman Bart\'ak, Neng-Fa Zhou | Using Tabled Logic Programming to Solve the Petrobras Planning Problem | To appear in Theory and Practice of Logic Programming (TPLP) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tabling has been used for some time to improve efficiency of Prolog programs
by memorizing answered queries. The same idea can be naturally used to memorize
visited states during search for planning. In this paper we present a planner
developed in the Picat language to solve the Petrobras planning problem. Picat
is a novel Prolog-like language that provides pattern matching, deterministic
and non-deterministic rules, and tabling as its core modelling and solving
features. We demonstrate these capabilities using the Petrobras problem, where
the goal is to plan transport of cargo items from ports to platforms using
vessels with limited capacity. Monte Carlo Tree Search has so far been the best
technique to tackle this problem and we will show that by using tabling we can
achieve much better runtime efficiency and better plan quality.
| [
{
"version": "v1",
"created": "Sun, 11 May 2014 06:38:25 GMT"
}
] | 1,399,939,200,000 | [
[
"Barták",
"Roman",
""
],
[
"Zhou",
"Neng-Fa",
""
]
] |
1405.2590 | Ilias Tachmazidis | Ilias Tachmazidis, Grigoris Antoniou and Wolfgang Faber | Efficient Computation of the Well-Founded Semantics over Big Data | 16 pages, 4 figures, ICLP 2014, 30th International Conference on
Logic Programming July 19-22, Vienna, Austria | Theory and Practice of Logic Programming 14 (2014) 445-459 | 10.1017/S1471068414000131 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data originating from the Web, sensor readings and social media result in
increasingly huge datasets. So-called Big Data comes with new scientific
and technological challenges while creating new opportunities, hence the
increasing interest in academia and industry. Traditionally, logic programming
has focused on complex knowledge structures/programs, so the question arises
whether and how it can work in the face of Big Data. In this paper, we examine
how the well-founded semantics can process huge amounts of data through mass
parallelization. More specifically, we propose and evaluate a parallel approach
using the MapReduce framework. Our experimental results indicate that our
approach is scalable and that well-founded semantics can be applied to billions
of facts. To the best of our knowledge, this is the first work that addresses
large scale nonmonotonic reasoning without the restriction of stratification
for predicates of arbitrary arity. To appear in Theory and Practice of Logic
Programming (TPLP).
| [
{
"version": "v1",
"created": "Sun, 11 May 2014 21:57:50 GMT"
}
] | 1,582,070,400,000 | [
[
"Tachmazidis",
"Ilias",
""
],
[
"Antoniou",
"Grigoris",
""
],
[
"Faber",
"Wolfgang",
""
]
] |
1405.3175 | Yong Deng | Yong Deng | D numbers theory: a generalization of Dempster-Shafer evidence theory | 31 pages, 5 figures. arXiv admin note: substantial text overlap with
arXiv:1404.0540 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient modeling of uncertain information in the real world is still an open
issue. Dempster-Shafer evidence theory is one of the most commonly used
methods. However, Dempster-Shafer evidence theory assumes that the hypotheses
in the framework of discernment are mutually exclusive. This condition can be
violated in real applications, especially in linguistic decision making, since
linguistic variables are essentially not exclusive of each other. In this
paper, a new theory, called D numbers theory (DNT), is systematically developed
to address this issue. A coefficient is defined to measure the degree of
exclusiveness among the hypotheses in the framework of discernment, and the
combination rule of two D numbers is presented. If the exclusive coefficient is
one, meaning that the hypotheses in the framework of discernment are totally
exclusive of each other, the D combination degenerates to the classical
Dempster combination rule. Finally, a linguistic variable transformation of D
numbers is presented to support decision making. A numerical example on
linguistic evidential decision making illustrates the efficiency of the
proposed D numbers theory.
| [
{
"version": "v1",
"created": "Tue, 13 May 2014 14:50:23 GMT"
}
] | 1,400,025,600,000 | [
[
"Deng",
"Yong",
""
]
] |
1405.3218 | Riccardo Zese | Elena Bellodi, Evelina Lamma, Fabrizio Riguzzi, Vitor Santos Costa and
Riccardo Zese | Lifted Variable Elimination for Probabilistic Logic Programming | To appear in Theory and Practice of Logic Programming (TPLP). arXiv
admin note: text overlap with arXiv:1402.0565 by other authors | Theory and Practice of Logic Programming 14 (2014) 681-695 | 10.1017/S1471068414000283 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lifted inference has been proposed for various probabilistic logical
frameworks in order to compute the probability of queries in a time that
depends on the size of the domains of the random variables rather than the
number of instances. Even if various authors have underlined its importance for
probabilistic logic programming (PLP), lifted inference has been applied up to
now only to relational languages outside of logic programming. In this paper we
adapt Generalized Counting First Order Variable Elimination (GC-FOVE) to the
problem of computing the probability of queries to probabilistic logic programs
under the distribution semantics. In particular, we extend the Prolog Factor
Language (PFL) to include two new types of factors that are needed for
representing ProbLog programs. These factors take into account the existing
causal independence relationships among random variables and are managed by the
extension to variable elimination proposed by Zhang and Poole for dealing with
convergent variables and heterogeneous factors. Two new operators are added to
GC-FOVE for treating heterogeneous factors. The resulting algorithm, called
LP$^2$ for Lifted Probabilistic Logic Programming, has been implemented by
modifying the PFL implementation of GC-FOVE and tested on three benchmarks for
lifted inference. A comparison with PITA and ProbLog2 shows the potential of
the approach.
| [
{
"version": "v1",
"created": "Tue, 13 May 2014 16:29:37 GMT"
},
{
"version": "v2",
"created": "Wed, 14 May 2014 19:48:43 GMT"
},
{
"version": "v3",
"created": "Fri, 16 May 2014 08:28:20 GMT"
},
{
"version": "v4",
"created": "Fri, 10 Oct 2014 12:00:38 GMT"
}
] | 1,582,070,400,000 | [
[
"Bellodi",
"Elena",
""
],
[
"Lamma",
"Evelina",
""
],
[
"Riguzzi",
"Fabrizio",
""
],
[
"Costa",
"Vitor Santos",
""
],
[
"Zese",
"Riccardo",
""
]
] |
1405.3342 | Ali Bakhshi | M. Ehsan Shafiee, Emily M. Zechman | An Agent-based Modeling Framework for Sociotechnical Simulation of Water
Distribution Contamination Events | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the event that a bacteriological or chemical toxin is introduced to a
water distribution network, a large population of consumers may become exposed
to the contaminant. A contamination event may be a poorly predictable dynamic
process due to the interactions of consumers and utility managers during an
event. Consumers that become aware of a threat may select protective actions
that change their water demands from typical demand patterns, and new hydraulic
conditions can arise that differ from conditions that are predicted when
demands are considered as exogenous inputs. Consequently, the movement of the
contaminant plume in the pipe network may shift from its expected trajectory. A
sociotechnical model is developed here to integrate agent-based models of
consumers with an engineering water distribution system model and capture the
dynamics between consumer behaviors and the water distribution system for
predicting contaminant transport and public exposure. Consumers are simulated
as agents with behaviors defined for water use activities, mobility,
word-of-mouth communication, and demand reduction, based on a set of rules
representing an agent's autonomy and reaction to health impacts, the
environment, and the actions of other agents. As consumers decrease their water
use, the demand exerted on the water distribution system is updated; as the
flow directions and volumes shift in response, the location of the contaminant
plume is updated and the amount of contaminant consumed by each agent is
calculated. The framework is tested through simulating realistic contamination
scenarios for a virtual city and water distribution system.
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 02:01:58 GMT"
}
] | 1,400,112,000,000 | [
[
"Shafiee",
"M. Ehsan",
""
],
[
"Zechman",
"Emily M.",
""
]
] |
1405.3362 | Rehan Abdul Aziz | Rehan Abdul Aziz and Geoffrey Chu and Peter James Stuckey | Grounding Bound Founded Answer Set Programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To appear in Theory and Practice of Logic Programming (TPLP)
Bound Founded Answer Set Programming (BFASP) is an extension of Answer Set
Programming (ASP) that extends stable model semantics to numeric variables.
While the theory of BFASP is defined on ground rules, in practice BFASP
programs are written as complex non-ground expressions. Flattening of BFASP is
a technique used to simplify arbitrary expressions of the language to a small
and well defined set of primitive expressions. In this paper, we first show how
we can flatten arbitrary BFASP rule expressions, to give equivalent BFASP
programs. Next, we extend the bottom-up grounding technique and magic set
transformation used by ASP to BFASP programs. Our implementation shows that for
BFASP problems, these techniques can significantly reduce the ground program
size, and improve subsequent solving.
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 05:06:55 GMT"
}
] | 1,400,112,000,000 | [
[
"Aziz",
"Rehan Abdul",
""
],
[
"Chu",
"Geoffrey",
""
],
[
"Stuckey",
"Peter James",
""
]
] |
1405.3367 | Rehan Abdul Aziz | Rehan Abdul Aziz | Bound Founded Answer Set Programming | An extended abstract / full version of a paper accepted to be
presented at the Doctoral Consortium of the 30th International Conference on
Logic Programming (ICLP 2014), July 19-22, Vienna, Austria | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer Set Programming (ASP) is a powerful modelling formalism that is very
efficient in solving combinatorial problems. ASP solvers implement the stable
model semantics that eliminates circular derivations between Boolean variables
from the solutions of a logic program. Due to this, ASP solvers are better
suited than propositional satisfiability (SAT) and Constraint Programming (CP)
solvers to solve a certain class of problems whose specification includes
inductive definitions such as reachability in a graph. On the other hand, ASP
solvers suffer from the grounding bottleneck that occurs due to their inability
to model finite domain variables. Furthermore, the existing stable model
semantics are not sufficient to disallow circular reasoning on the bounds of
numeric variables. An example where this is required is in modelling shortest
paths between nodes in a graph. Just as reachability can be encoded as an
inductive definition with one or more base cases and recursive rules, shortest
paths between nodes can also be modelled with similar base cases and recursive
rules for their upper bounds. This deficiency of stable model semantics
introduces another type of grounding bottleneck in ASP systems that cannot be
removed by naively merging ASP with CP solvers, but requires a theoretical
extension of the semantics from Booleans and normal rules to bounds over
numeric variables and more general rules. In this work, we propose Bound
Founded Answer Set Programming (BFASP) that resolves this issue and
consequently, removes all types of grounding bottleneck inherent in ASP
systems.
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 05:40:16 GMT"
}
] | 1,400,112,000,000 | [
[
"Aziz",
"Rehan Abdul",
""
]
] |
1405.3376 | Matthias Thimm | Anthony Hunter and Matthias Thimm | Probabilistic Argumentation with Epistemic Extensions and Incomplete
Information | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Abstract argumentation offers an appealing way of representing and evaluating
arguments and counterarguments. This approach can be enhanced by a probability
assignment to each argument. There are various interpretations that can be
ascribed to this assignment. In this paper, we regard the assignment as
denoting the belief that an agent has that an argument is justifiable, i.e.,
that both the premises of the argument and the derivation of the claim of the
argument from its premises are valid. This leads to the notion of an epistemic
extension which is the subset of the arguments in the graph that are believed
to some degree (which we define as the arguments that have a probability
assignment greater than 0.5). We consider various constraints on the
probability assignment. Some constraints correspond to standard notions of
extensions, such as grounded or stable extensions, and some constraints give us
new kinds of extensions.
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 06:38:15 GMT"
}
] | 1,400,112,000,000 | [
[
"Hunter",
"Anthony",
""
],
[
"Thimm",
"Matthias",
""
]
] |
1405.3486 | Zhizheng Zhang | Zhizheng Zhang and Kaikai Zhao | ESmodels: An Epistemic Specification Solver | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | (To appear in Theory and Practice of Logic Programming (TPLP))
ESmodels is designed and implemented as an experiment platform to investigate
the semantics, language, related reasoning algorithms, and possible
applications of epistemic specifications.We first give the epistemic
specification language of ESmodels and its semantics. The language employs only
one modal operator K, but we prove that it is able to represent a rich variety
of modal operators by presenting transformation rules. Then, we describe basic
algorithms and optimization approaches used in ESmodels. After that, we discuss
possible applications of ESmodels in conformant planning and constraint
satisfaction. Finally, we conclude with perspectives.
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 13:26:11 GMT"
}
] | 1,400,112,000,000 | [
[
"Zhang",
"Zhizheng",
""
],
[
"Zhao",
"Kaikai",
""
]
] |
1405.3487 | Petr Baudi\v{s} | Petr Baudi\v{s} | COCOpf: An Algorithm Portfolio Framework | POSTER2014. arXiv admin note: text overlap with arXiv:1206.5780 by
other authors without attribution | Poster 2014 --- the 18th International Student Conference on
Electrical Engineering. Czech Technical University, Prague, Czech Republic
(2014) | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | Algorithm portfolios represent a strategy of composing multiple heuristic
algorithms, each suited to a different class of problems, within a single
general solver that will choose the best suited algorithm for each input. This
approach recently gained popularity especially for solving combinatoric
problems, but optimization applications are still emerging. The COCO platform
of the BBOB workshop series is the current standard way to measure performance
of continuous black-box optimization algorithms.
As an extension to the COCO platform, we present the Python-based COCOpf
framework that allows composing portfolios of optimization algorithms and
running experiments with different selection strategies. In our framework, we
focus on black-box algorithm portfolio and online adaptive selection. As a
demonstration, we measure the performance of stock SciPy optimization
algorithms and the popular CMA algorithm alone and in a portfolio with two
simple selection strategies. We confirm that even a naive selection strategy
can provide improved performance across problem classes.
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 13:26:57 GMT"
}
] | 1,400,198,400,000 | [
[
"Baudiš",
"Petr",
""
]
] |
1405.3546 | Mario Alviano | Mario Alviano, Carmine Dodaro and Francesco Ricca | Anytime Computation of Cautious Consequences in Answer Set Programming | To appear in Theory and Practice of Logic Programming | Theory and Practice of Logic Programming 14 (2014) 755-770 | 10.1017/S1471068414000325 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Query answering in Answer Set Programming (ASP) is usually solved by
computing (a subset of) the cautious consequences of a logic program. This task
is computationally very hard, and there are programs for which computing
cautious consequences is not viable in reasonable time. However, current ASP
solvers produce the (whole) set of cautious consequences only at the end of
their computation. This paper reports on strategies for computing cautious
consequences, also introducing anytime algorithms able to produce sound answers
during the computation.
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 15:46:33 GMT"
},
{
"version": "v2",
"created": "Thu, 15 May 2014 09:27:38 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Sep 2014 12:29:17 GMT"
}
] | 1,582,070,400,000 | [
[
"Alviano",
"Mario",
""
],
[
"Dodaro",
"Carmine",
""
],
[
"Ricca",
"Francesco",
""
]
] |
1405.3570 | Daniel Gall | Daniel Gall and Thom Fr\"uhwirth | Exchanging Conflict Resolution in an Adaptable Implementation of ACT-R | To appear in Theory and Practice of Logic Programming (TPLP).
Accepted paper for ICLP 2014. 12 pages + appendix | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In computational cognitive science, the cognitive architecture ACT-R is very
popular. It describes a model of cognition that is amenable to computer
implementation, paving the way for computational psychology. Its underlying
psychological theory has been investigated in many psychological experiments,
but ACT-R lacks a formal definition of its underlying concepts from a
mathematical-computational point of view. Although the canonical implementation
of ACT-R is now modularized, this production rule system is still hard to adapt
and extend in central components like the conflict resolution mechanism (which
decides which of the applicable rules to apply next).
In this work, we present a concise implementation of ACT-R based on
Constraint Handling Rules which has been derived from a formalization in prior
work. To show the adaptability of our approach, we implement several different
conflict resolution mechanisms discussed in the ACT-R literature. This results
in the first implementation of one such mechanism. For the other mechanisms, we
empirically evaluate if our implementation matches the results of reference
implementations of ACT-R.
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 16:57:36 GMT"
}
] | 1,400,112,000,000 | [
[
"Gall",
"Daniel",
""
],
[
"Frühwirth",
"Thom",
""
]
] |
1405.3637 | Yuanlin Zhang | Michael Gelfond and Yuanlin Zhang | Vicious Circle Principle and Logic Programs with Aggregates | null | Theory and Practice of Logic Programming 14 (2014) 587-601 | 10.1017/S1471068414000222 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper presents a knowledge representation language $\mathcal{A}log$ which
extends ASP with aggregates. The goal is to have a language based on simple
syntax and clear intuitive and mathematical semantics. We give some properties
of $\mathcal{A}log$, an algorithm for computing its answer sets, and comparison
with other approaches.
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 19:36:38 GMT"
},
{
"version": "v2",
"created": "Thu, 15 May 2014 02:42:50 GMT"
}
] | 1,582,070,400,000 | [
[
"Gelfond",
"Michael",
""
],
[
"Zhang",
"Yuanlin",
""
]
] |
1405.3710 | Martin Gebser | Francesco Calimeri, Martin Gebser, Marco Maratea, Francesco Ricca | The Design of the Fifth Answer Set Programming Competition | 10 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer Set Programming (ASP) is a well-established paradigm of declarative
programming that has been developed in the field of logic programming and
nonmonotonic reasoning. Advances in ASP solving technology are customarily
assessed in competition events, as it happens for other closely-related
problem-solving technologies like SAT/SMT, QBF, Planning and Scheduling. ASP
Competitions are (usually) biennial events; however, the Fifth ASP Competition
departs from tradition, in order to join the FLoC Olympic Games at the Vienna
Summer of Logic 2014, which is expected to be the largest event in the history
of logic. This edition of the ASP Competition series is jointly organized by
the University of Calabria (Italy), the Aalto University (Finland), and the
University of Genova (Italy), and is affiliated with the 30th International
Conference on Logic Programming (ICLP 2014). It features a completely
re-designed setup, with novelties involving the design of tracks, the scoring
schema, and the adherence to a fixed modeling language in order to push the
adoption of the ASP-Core-2 standard. Benchmark domains are taken from past
editions, and best system packages submitted in 2013 are compared with new
versions and solvers.
To appear in Theory and Practice of Logic Programming (TPLP).
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 22:15:50 GMT"
},
{
"version": "v2",
"created": "Fri, 16 May 2014 07:35:23 GMT"
},
{
"version": "v3",
"created": "Mon, 26 May 2014 05:01:26 GMT"
},
{
"version": "v4",
"created": "Tue, 3 Jun 2014 17:16:36 GMT"
}
] | 1,401,840,000,000 | [
[
"Calimeri",
"Francesco",
""
],
[
"Gebser",
"Martin",
""
],
[
"Maratea",
"Marco",
""
],
[
"Ricca",
"Francesco",
""
]
] |
1405.3713 | Emmanuelle-Anna Dietz | Lu\'is Moniz Pereira, Emmanuelle-Anna Dietz, Steffen H\"olldobler | Contextual Abductive Reasoning with Side-Effects | 14 pages, no figures, 1 table | Theory and Practice of Logic Programming 14 (2014) 633-648 | 10.1017/S1471068414000258 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The belief bias effect is a phenomenon which occurs when we think that we
judge an argument based on our reasoning, but are actually influenced by our
beliefs and prior knowledge. Evans, Barston and Pollard carried out a
psychological syllogistic reasoning task to prove this effect. Participants
were asked whether they would accept or reject a given syllogism. We discuss
one specific case which is commonly assumed to be believable but which is
actually not logically valid. By introducing abnormalities, abduction and
background knowledge, we adequately model this case under the weak completion
semantics. Our formalization reveals new questions about possible extensions in
abductive reasoning. For instance, observations and their explanations might
include some relevant prior abductive contextual information concerning some
side-effect or leading to a contestable or refutable side-effect. A weaker
notion indicates the support of some relevant consequences by a prior abductive
context. Yet another definition describes jointly supported relevant
consequences, which captures the idea of two observations containing mutually
supportive side-effects. Though motivated with and exemplified by the running
psychology application, the various new general abductive context definitions
are introduced here and given a declarative semantics for the first time, and
have a much wider scope of application. Inspection points, a concept introduced
by Pereira and Pinto, allow us to express these definitions syntactically and
intertwine them into an operational semantics.
| [
{
"version": "v1",
"created": "Wed, 14 May 2014 23:12:45 GMT"
},
{
"version": "v2",
"created": "Fri, 16 May 2014 02:27:40 GMT"
}
] | 1,582,070,400,000 | [
[
"Pereira",
"Luís Moniz",
""
],
[
"Dietz",
"Emmanuelle-Anna",
""
],
[
"Hölldobler",
"Steffen",
""
]
] |
1405.3729 | Priyanka Saini | Priyanka Saini | Building a Classification Model for Enrollment In Higher Educational
Courses using Data Mining Techniques | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data mining is the process of extracting useful patterns from huge amounts of
data, and many data mining techniques are used for mining these patterns.
Recently, one remarkable fact in higher educational institutes is the rapid
growth of data, and this educational data is expanding quickly without any
benefit to the educational management. The main aim of the management is to
improve the standard of education; therefore, by applying various data mining
techniques to this data, one can obtain valuable information. This research
study proposes a classification model for the student enrollment process in
higher educational courses using data mining techniques. Additionally, this
study contributes to finding patterns that are meaningful to management.
| [
{
"version": "v1",
"created": "Thu, 15 May 2014 02:53:44 GMT"
}
] | 1,400,198,400,000 | [
[
"Saini",
"Priyanka",
""
]
] |
1405.3824 | Federico Chesani | Marco Gavanelli, Stefano Bragaglia, Michela Milano, Federico Chesani,
Elisa Marengo, Paolo Cagnoli | Multi-Criteria Optimal Planning for Energy Policies in CLP | Accepted at ICLP2014 Conference as Technical Communication, due to
appear in Theory and Practice of Logic Programming (TPLP) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the policy making process a number of disparate and diverse issues such as
economic development, environmental aspects, as well as the social acceptance
of the policy, need to be considered. A single person might not have all the
required expertise, and decision support systems featuring optimization
components can help to assess policies. Leveraging on previous work on
Strategic Environmental Assessment, we developed a fully-fledged system that is
able to provide optimal plans with respect to a given objective, to perform
multi-objective optimization and provide sets of Pareto optimal plans, and to
visually compare them. Each plan is environmentally assessed and its footprint
is evaluated. The heart of the system is an application developed in a popular
Constraint Logic Programming system on the Reals sort. It has been equipped
with a web service module that can be queried through standard interfaces, and
an intuitive graphic user interface.
| [
{
"version": "v1",
"created": "Thu, 15 May 2014 12:48:35 GMT"
}
] | 1,400,198,400,000 | [
[
"Gavanelli",
"Marco",
""
],
[
"Bragaglia",
"Stefano",
""
],
[
"Milano",
"Michela",
""
],
[
"Chesani",
"Federico",
""
],
[
"Marengo",
"Elisa",
""
],
[
"Cagnoli",
"Paolo",
""
]
] |
1405.3826 | Heike Stephan | Heike Stephan | Application of Methods for Syntax Analysis of Context-Free Languages to
Query Evaluation of Logic Programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | My research goal is to employ a parser generation algorithm based on the
Earley parsing algorithm to the evaluation and compilation of queries to logic
programs, especially to deductive databases. By means of partial deduction,
from a query to a logic program a parameterized automaton is to be generated
that models the evaluation of this query. This automaton can be compiled to
executable code; thus we expect a speedup in the runtime of query evaluation. An
extended abstract / full version of a paper accepted to be presented at the
Doctoral Consortium of the 30th International Conference on Logic Programming
(ICLP 2014), July 19-22, Vienna, Austria
| [
{
"version": "v1",
"created": "Thu, 15 May 2014 12:56:03 GMT"
}
] | 1,400,198,400,000 | [
[
"Stephan",
"Heike",
""
]
] |
1405.3896 | Mario Abrantes | M\'ario Abrantes, Lu\'is Moniz Pereira | Properties of Stable Model Semantics Extensions | To appear in Theory and Practice of Logic Programming (TPLP), 10
pages plus appendix | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The stable model (SM) semantics lacks the properties of existence, relevance
and cumulativity. If we prospectively consider the class of conservative
extensions of SM semantics (i.e., semantics that for each normal logic program
P retrieve a superset of the set of stable models of P), one may wonder how
the semantics of this class behave with respect to the aforementioned
properties. That is the type of issue dealt with in this paper. We define a
large class of conservative extensions of the SM semantics, dubbed affix stable
model semantics, ASM, and study the above-referred properties in two
non-disjoint subfamilies of the class ASM, here dubbed ASMh and ASMm. From this
study a number of results stem which facilitate the assessment of semantics in
the class ASMh U ASMm with respect to the properties of existence, relevance
and cumulativity, whilst unveiling relations among these properties. As a
result of the approach taken in our work, light is shed on the characterization
of the SM semantics, as we show that the properties of (lack of) existence and
(lack of) cautious monotony are equivalent, which opposes statements on this
issue that may be found in the literature; we also characterize the relevance
failure of SM semantics more clearly than usually stated in the
literature.
| [
{
"version": "v1",
"created": "Thu, 15 May 2014 16:13:47 GMT"
}
] | 1,400,198,400,000 | [
[
"Abrantes",
"Mário",
""
],
[
"Pereira",
"Luís Moniz",
""
]
] |
1405.4138 | Reza Azizi | Reza Azizi | Empirical Study of Artificial Fish Swarm Algorithm | null | International Journal of Computing, Communications and Networking
(IJCCN) , Volume 3, No.1, Pages 01-07, March 2014 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial fish swarm algorithm (AFSA) is one of the swarm intelligence
optimization algorithms that works based on population and stochastic search.
In order to achieve acceptable results, there are many parameters that need to be
adjusted in AFSA. Among these parameters, visual and step are very significant
in view of the fact that artificial fish basically move based on these
parameters. In standard AFSA, these two parameters remain constant until the
algorithm terminates. Large values of these parameters increase the capability
of algorithm in global search, while small values improve the local search
ability of the algorithm. In this paper, we empirically study the performance
of AFSA, and different approaches to balancing local and global
exploration are tested based on the adaptive modification of visual and
step during algorithm execution. The proposed approaches have been evaluated
based on four well-known benchmark functions. Experimental results show
considerable positive impact on the performance of AFSA.
| [
{
"version": "v1",
"created": "Fri, 16 May 2014 12:02:42 GMT"
}
] | 1,401,840,000,000 | [
[
"Azizi",
"Reza",
""
]
] |
1405.4180 | Liang Chang | Liang Chang, Uli Sattler, Tianlong Gu | Algorithm for Adapting Cases Represented in a Tractable Description
Logic | 21 pages. ICCBR 2014 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Case-based reasoning (CBR) based on description logics (DLs) has gained a lot
of attention lately. Adaptation is a basic task in the CBR inference that can
be modeled as the knowledge base revision problem and solved in propositional
logic. However, in DLs, it is still a challenging problem since existing revision
operators only work well for strictly restricted DLs of the \emph{DL-Lite}
family, and it is difficult to design a revision algorithm which is
syntax-independent and fine-grained. In this paper, we present a new method for
adaptation based on the DL $\mathcal{EL_{\bot}}$. Following the idea of
adaptation as revision, we first extend the logical basis for describing
cases from propositional logic to the DL $\mathcal{EL_{\bot}}$, and present a
formalism for adaptation based on $\mathcal{EL_{\bot}}$. Then we present an
adaptation algorithm for this formalism and demonstrate that our algorithm is
syntax-independent and fine-grained. Our work provides a logical basis for
adaptation in CBR systems where cases and domain knowledge are described by the
tractable DL $\mathcal{EL_{\bot}}$.
| [
{
"version": "v1",
"created": "Fri, 16 May 2014 14:24:42 GMT"
}
] | 1,400,457,600,000 | [
[
"Chang",
"Liang",
""
],
[
"Sattler",
"Uli",
""
],
[
"Gu",
"Tianlong",
""
]
] |
1405.4206 | Joachim Jansen | Joachim Jansen | Model revision inference for extensions of first order logic | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I am Joachim Jansen and this is my research summary, part of my application
to the Doctoral Consortium at ICLP'14. I am a PhD student in the Knowledge
Representation and Reasoning (KRR) research group, a subgroup of the
Declarative Languages and Artificial Intelligence (DTAI) group at the
department of Computer Science at KU Leuven. I started my PhD in September
2012. My promotor is prof. dr. ir. Gerda Janssens and my co-promotor is prof.
dr. Marc Denecker. I can be contacted at [email protected] or at:
Room 01.167 Celestijnenlaan 200A 3001 Heverlee Belgium An extended abstract /
full version of a paper accepted to be presented at the Doctoral Consortium of
the 30th International Conference on Logic Programming (ICLP 2014), July 19-22,
Vienna, Austria
| [
{
"version": "v1",
"created": "Fri, 16 May 2014 15:23:24 GMT"
}
] | 1,400,457,600,000 | [
[
"Jansen",
"Joachim",
""
]
] |
1405.5048 | Mihai Polceanu M.Sc. | Mihai Polceanu, C\'edric Buche | Towards A Theory-Of-Mind-Inspired Generic Decision-Making Framework | 7 pages, 5 figures, IJCAI 2013 Symposium on AI in Angry Birds | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simulation is widely used to make model-based predictions, but few approaches
have attempted this technique in dynamic physical environments of medium to
high complexity or in general contexts. After an introduction to the cognitive
science concepts from which this work is inspired and current developments
in the use of simulation as a decision-making technique, we propose a generic
framework based on theory of mind, which allows an agent to reason and perform
actions using multiple simulations of automatically created or externally
inputted models of the perceived environment. A description of a partial
implementation is given, which aims to solve a popular game within the
IJCAI2013 AIBirds contest. Results of our approach are presented, in comparison
with the competition benchmark. Finally, future developments regarding the
framework are discussed.
| [
{
"version": "v1",
"created": "Tue, 20 May 2014 11:54:21 GMT"
}
] | 1,400,630,400,000 | [
[
"Polceanu",
"Mihai",
""
],
[
"Buche",
"Cédric",
""
]
] |
1405.5066 | Erik Cuevas E | Erik Cuevas, Alonso Echavarria and Marte A. Ramirez-Ortegon | An optimization algorithm inspired by the States of Matter that improves
the balance between exploration and exploitation | 22 pages | Applied Intelligence, 40(2) , (2014), 256-272 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability of an Evolutionary Algorithm (EA) to find a global optimal
solution depends on its capacity to find a good balance between exploitation of
the elements found so far and exploration of the search space. Inspired by natural
phenomena, researchers have developed many successful evolutionary algorithms
which, in their original versions, define operators that mimic the way nature solves
complex problems, with no actual consideration of the exploration/exploitation
balance. In this paper, a novel nature-inspired algorithm called the States of
Matter Search (SMS) is introduced. The SMS algorithm is based on the simulation
of the states of matter phenomenon. In SMS, individuals emulate molecules which
interact with each other by using evolutionary operations which are based on the
physical principles of the thermal-energy motion mechanism. The algorithm is
devised by considering each state of matter with a different
exploration/exploitation ratio. The evolutionary process is divided into three
phases which emulate the three states of matter: gas, liquid and solid. In each
state, molecules (individuals) exhibit different movement capacities. Beginning
from the gas state (pure exploration), the algorithm modifies the intensities
of exploration and exploitation until the solid state (pure exploitation) is
reached. As a result, the approach can substantially improve the balance
between exploration/exploitation, while preserving the good search capabilities
of an evolutionary approach.
| [
{
"version": "v1",
"created": "Tue, 20 May 2014 13:03:18 GMT"
}
] | 1,400,630,400,000 | [
[
"Cuevas",
"Erik",
""
],
[
"Echavarria",
"Alonso",
""
],
[
"Ramirez-Ortegon",
"Marte A.",
""
]
] |
1405.5172 | Erik Cuevas E | Erik Cuevas, Diego Oliva, Daniel Zaldivar, Marco Perez and Gonzalo
Pajares | Opposition Based ElectromagnetismLike for Global Optimization | 27 Pages | International Journal of Innovative Computing, Information and
Control, 8 (12) , (2012), pp. 8181-8198 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electromagnetismlike Optimization (EMO) is a global optimization algorithm,
particularly well suited to solve problems featuring nonlinear and multimodal
cost functions. EMO employs searcher agents that emulate a population of
charged particles which interact with each other according to electromagnetism's
laws of attraction and repulsion. However, EMO usually requires a large number
of iterations for a local search procedure; any reduction or cancellation of
this number critically perturbs other issues such as convergence, exploration,
population diversity and accuracy. This paper presents an enhanced EMO
algorithm called OBEMO, which employs the Opposition-Based Learning (OBL)
approach to accelerate the global convergence speed. OBL is a machine
intelligence strategy which considers the current candidate solution and its
opposite value at the same time, achieving a faster exploration of the search
space. The proposed OBEMO method significantly reduces the required
computational effort while avoiding any detriment to the good search capabilities
of the original EMO algorithm. Experiments are conducted over a comprehensive
set of benchmark functions, showing that OBEMO obtains promising performance
for most of the discussed test problems.
| [
{
"version": "v1",
"created": "Tue, 20 May 2014 17:52:57 GMT"
}
] | 1,400,630,400,000 | [
[
"Cuevas",
"Erik",
""
],
[
"Oliva",
"Diego",
""
],
[
"Zaldivar",
"Daniel",
""
],
[
"Perez",
"Marco",
""
],
[
"Pajares",
"Gonzalo",
""
]
] |
1405.5459 | Adi Makmal | Alexey A. Melnikov, Adi Makmal, and Hans J. Briegel | Projective simulation applied to the grid-world and the mountain-car
problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the model of projective simulation (PS) which is a novel approach to
artificial intelligence (AI). Recently it was shown that the PS agent performs
well in a number of simple task environments, also when compared to standard
models of reinforcement learning (RL). In this paper we study the performance
of the PS agent further in more complicated scenarios. To that end we chose two
well-studied benchmarking problems, namely the "grid-world" and the
"mountain-car" problem, which challenge the model with large and continuous
input spaces. We compare the performance of the PS agent model with those of
existing models and show that the PS agent exhibits competitive performance
also in such scenarios.
| [
{
"version": "v1",
"created": "Wed, 21 May 2014 15:51:18 GMT"
}
] | 1,400,716,800,000 | [
[
"Melnikov",
"Alexey A.",
""
],
[
"Makmal",
"Adi",
""
],
[
"Briegel",
"Hans J.",
""
]
] |
1405.5643 | Martin Josef Geiger | Sandra Huber, Martin Josef Geiger, Marc Sevaux | Interactive Reference Point-Based Guided Local Search for the
Bi-objective Inventory Routing Problem | null | Proceedings of the 10th Metaheuristics International Conference
MIC 2013, August 5-8, 2013, Singapore, Pages 152-161 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Eliciting preferences of a decision maker is a key factor to successfully
combine search and decision making in an interactive method. Therefore, the
progressive integration and simulation of the decision maker is a main
concern in an application. We contribute in this direction by proposing an
interactive method based on a reference point-based guided local search to the
bi-objective Inventory Routing Problem. A local search metaheuristic works on
the delivery intervals, and the Clarke & Wright savings heuristic is
employed for the subsequently obtained Vehicle Routing Problem. To elicit
preferences, the decision maker selects a reference point to guide the search
in interesting subregions. Additionally, the reference point is used as a
reservation point to discard solutions outside the cone, introduced as a
convergence criterion. Computational results of the reference point-based
guided local search are reported and analyzed on benchmark data in order to
show the applicability of the approach.
| [
{
"version": "v1",
"created": "Thu, 22 May 2014 07:23:38 GMT"
}
] | 1,400,803,200,000 | [
[
"Huber",
"Sandra",
""
],
[
"Geiger",
"Martin Josef",
""
],
[
"Sevaux",
"Marc",
""
]
] |
1405.6142 | Phil Maguire | Phil Maguire, Philippe Moser, Rebecca Maguire, Mark Keane | A Computational Theory of Subjective Probability | Maguire, P., Moser, P. Maguire, R. & Keane, M.T. (2013) "A
computational theory of subjective probability." In M. Knauff, M. Pauen, N.
Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of
the Cognitive Science Society (pp. 960-965). Austin, TX: Cognitive Science
Society | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article we demonstrate how algorithmic probability theory is applied
to situations that involve uncertainty. When people are unsure of their model
of reality, then the outcome they observe will cause them to update their
beliefs. We argue that classical probability cannot be applied in such cases,
and that subjective probability must instead be used. In Experiment 1 we show
that, when judging the probability of lottery number sequences, people apply
subjective rather than classical probability. In Experiment 2 we examine the
conjunction fallacy and demonstrate that the materials used by Tversky and
Kahneman (1983) involve model uncertainty. We then provide a formal
mathematical proof that, for every uncertain model, there exists a conjunction
of outcomes which is more subjectively probable than either of its constituents
in isolation.
| [
{
"version": "v1",
"created": "Thu, 8 May 2014 13:15:32 GMT"
}
] | 1,401,062,400,000 | [
[
"Maguire",
"Phil",
""
],
[
"Moser",
"Philippe",
""
],
[
"Maguire",
"Rebecca",
""
],
[
"Keane",
"Mark",
""
]
] |
1405.6369 | Ben Ruijl | Ben Ruijl, Jos Vermaseren, Aske Plaat, Jaap van den Herik | HEPGAME and the Simplification of Expressions | Keynote at the 11th International Workshop on Boolean Problems,
Freiberg Germany | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances in high energy physics have created the need to increase
computational capacity. Project HEPGAME was composed to address this challenge.
One of the issues is that the expressions of current interest for numerical
integration have millions of terms and take weeks to compute. We have
investigated ways to simplify these expressions, using Horner schemes and
common subexpression elimination. Our approach applies MCTS, a search procedure
that has been successful in AI. We use it to find near-optimal Horner schemes.
Although MCTS finds better solutions, this approach gives rise to two further
challenges. (1) MCTS (with UCT) introduces a constant, $C_p$, that governs the
balance between exploration and exploitation. This constant has to be tuned
manually. (2) There should be more guided exploration at the bottom of the
tree, since the current approach reduces the quality of the solution towards
the end of the expression. We investigate NMCS (Nested Monte Carlo Search) to
address both issues, but find that NMCS is computationally unfeasible for our
problem. Then, we modify the MCTS formula by introducing a dynamic
exploration-exploitation parameter $T$ that decreases linearly with the
iteration number. Consequently, we provide a performance analysis. We observe
that a variable $C_p$ works well for our domain: it yields more exploration at the
bottom and as a result the tuning problem has been simplified. The region in
$C_p$ for which good values are found is increased by more than tenfold. This
result encourages us to continue our research to solve other prominent problems
in High Energy Physics.
| [
{
"version": "v1",
"created": "Sun, 25 May 2014 10:13:50 GMT"
}
] | 1,401,148,800,000 | [
[
"Ruijl",
"Ben",
""
],
[
"Vermaseren",
"Jos",
""
],
[
"Plaat",
"Aske",
""
],
[
"Herik",
"Jaap van den",
""
]
] |
1405.6509 | Edmond Awad | Edmond Awad, Richard Booth, Fernando Tohme, Iyad Rahwan | Judgment Aggregation in Multi-Agent Argumentation | null | J Logic Computation (2017) 27 (1): 227-259 | 10.1093/logcom/exv055 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a set of conflicting arguments, there can exist multiple plausible
opinions about which arguments should be accepted, rejected, or deemed
undecided. We study the problem of how multiple such judgments can be
aggregated. We define the problem by adapting various classical
social-choice-theoretic properties for the argumentation domain. We show that
while argument-wise plurality voting satisfies many properties, it fails to
guarantee the collective rationality of the outcome, and struggles with ties.
We then present more general results, proving multiple impossibility results on
the existence of any good aggregation operator. After characterising the
sufficient and necessary conditions for satisfying collective rationality, we
study whether restricting the domain of argument-wise plurality voting to
classical semantics allows us to escape the impossibility result. We close by
listing graph-theoretic restrictions under which argument-wise plurality rule
does produce collectively rational outcomes. In addition to identifying
fundamental barriers to collective argument evaluation, our results open up the
door for a new research agenda for the argumentation and computational social
choice communities.
| [
{
"version": "v1",
"created": "Mon, 26 May 2014 09:13:38 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Jun 2014 15:31:03 GMT"
},
{
"version": "v3",
"created": "Sun, 19 Jul 2015 10:53:38 GMT"
}
] | 1,497,916,800,000 | [
[
"Awad",
"Edmond",
""
],
[
"Booth",
"Richard",
""
],
[
"Tohme",
"Fernando",
""
],
[
"Rahwan",
"Iyad",
""
]
] |
1405.7076 | Vilem Vychodil | Vilem Vychodil | On minimal sets of graded attribute implications | null | Information Sciences 294 (2015), 478-488 | 10.1016/j.ins.2014.09.059 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the structure of non-redundant and minimal sets consisting of
graded if-then rules. The rules serve as graded attribute implications in
object-attribute incidence data and as similarity-based functional dependencies
in a similarity-based generalization of the relational model of data. Based on
our observations, we derive a polynomial-time algorithm which transforms a
given finite set of rules into an equivalent one which has the least size in
terms of the number of rules.
| [
{
"version": "v1",
"created": "Tue, 27 May 2014 22:00:35 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Aug 2014 19:07:05 GMT"
}
] | 1,418,083,200,000 | [
[
"Vychodil",
"Vilem",
""
]
] |
1405.7295 | Peter Nov\'ak | Peter Nov\'ak and Cees Witteveen | On the cost-complexity of multi-context systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-context systems provide a powerful framework for modelling
information-aggregation systems featuring heterogeneous reasoning components.
Their execution can, however, incur non-negligible cost. Here, we focus on
cost-complexity of such systems. To that end, we introduce cost-aware
multi-context systems, an extension of non-monotonic multi-context systems
framework taking into account costs incurred by execution of semantic operators
of the individual contexts. We formulate the notion of cost-complexity for
consistency and reasoning problems in MCSs. Subsequently, we provide a series
of results related to gradually more and more constrained classes of MCSs and
finally introduce an incremental cost-reducing algorithm solving the reasoning
problem for definite MCSs.
| [
{
"version": "v1",
"created": "Wed, 28 May 2014 16:13:53 GMT"
}
] | 1,401,321,600,000 | [
[
"Novák",
"Peter",
""
],
[
"Witteveen",
"Cees",
""
]
] |
1405.7567 | Michael Gr. Voskoglou Prof. Dr. | Michael Gr. Voskoglou, Abdel-Badeeh M. Salem | Analogy-Based and Case-Based Reasoning: Two sides of the same coin | 47 pages, 2 figures, 1 table, 124 references | International Journal of Applications of Fuzzy Sets and Artificial
Intelligence (IJAFSAI), Vol. 4, 5-51, 2014 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analogy-Based (or Analogical) and Case-Based Reasoning (ABR and CBR) are two
similar problem solving processes based on the adaptation of the solution of
past problems for use with a new analogous problem. In this paper we review
these two processes and we give some real-world examples with emphasis on the
field of Medicine, where one can find some of the most common and useful CBR
applications. We also underline the differences between CBR and the classical
rule-induction algorithms, we discuss the criticism for CBR methods and we
focus on the future trends of research in the area of CBR.
| [
{
"version": "v1",
"created": "Thu, 29 May 2014 14:38:52 GMT"
}
] | 1,401,408,000,000 | [
[
"Voskoglou",
"Michael Gr.",
""
],
[
"Salem",
"Abdel-Badeeh M.",
""
]
] |
1405.7944 | Shruti Jadon | Shruti Jadon, Anubhav Singhal, Suma Dawn | Military Simulator - A Case Study of Behaviour Tree and Unity based
architecture | 4 pages, 4 figures. International Journal of Computer Applications
@2014 | null | 10.5120/15350-3691 | Volume 88 - Number 5 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we show how the combination of Behaviour Tree and Utility Based
AI architecture can be used to design more realistic bots for Military
Simulators. In this work, we have designed a mathematical model of a simulator
system which in turn helps in analyzing the results and finding the various
spaces in which a favorable situation might exist; this is done
geometrically. In the mathematical model, we explain the matrix
formation and its significance. Following a dynamic programming approach, we
explain the possible graph formation, which leads to an improvement of the AI.
Later, we explain the possible geometrical structure of the matrix operations
and its impact on a particular decision. We also explain the conditions under
which it tends to fail, along with a possible solution in future work.
| [
{
"version": "v1",
"created": "Fri, 30 May 2014 18:22:59 GMT"
}
] | 1,574,726,400,000 | [
[
"Jadon",
"Shruti",
""
],
[
"Singhal",
"Anubhav",
""
],
[
"Dawn",
"Suma",
""
]
] |
1405.7964 | Faruk Karaaslan | Faruk Karaaslan | Neutrosophic soft sets with applications in decision making | arXiv admin note: text overlap with arXiv:1305.2724 by other authors | International Journal of Information Science and Intelligent
System, 4(2), 1-20, 2015 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We firstly present definitions and properties in study of Maji
\cite{maji-2013} on neutrosophic soft sets. We then give a few notes on his
study. Next, based on \c{C}a\u{g}man \cite{cagman-2014}, we redefine the notion
of neutrosophic soft set and neutrosophic soft set operations to make them more
functional. By using these new definitions we construct a decision making
method and a group decision making method which selects a set of optimum
elements from the alternatives. We finally present examples which show that
the methods can be successfully applied to many problems that contain
uncertainties.
| [
{
"version": "v1",
"created": "Fri, 30 May 2014 19:32:36 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Jun 2014 21:44:43 GMT"
}
] | 1,453,334,400,000 | [
[
"Karaaslan",
"Faruk",
""
]
] |
1406.0062 | Fahem Kebair fk | Fahem Kebair and Fr\'ed\'eric Serin | Towards a Multiagent Decision Support System for crisis Management | 14 pages. arXiv admin note: text overlap with arXiv:0907.0499 | J. Intelligent Systems 20(1): 47-60 (2011) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crisis management is a complex problem raised by the scientific community
currently. Decision support systems are a suitable solution for such issues,
they are indeed able to help emergency managers to prevent and to manage crisis
in emergency situations. However, they should be enough flexible and adaptive
in order to be reliable to solve complex problems that are plunged in dynamic
and unpredictable environments. The approach we propose in this paper addresses
this challenge. We expose here a modelling of information for an emergency
environment and an architecture of a multiagent decision support system that
deals with this information in order to prevent and to manage the occurrence of a
crisis in emergency situations. We focus on the first level of the system
mechanism which intends to perceive and to reflect the evolution of the current
situation. The general approach and experiments are provided here.
| [
{
"version": "v1",
"created": "Sat, 31 May 2014 09:57:02 GMT"
}
] | 1,401,753,600,000 | [
[
"Kebair",
"Fahem",
""
],
[
"Serin",
"Frédéric",
""
]
] |
1406.0155 | Badran Raddaoui Dr. | Said Jabbour, Yue Ma, Badran Raddaoui, Lakhdar Sais, Yakoub Salhi | On the measure of conflicts: A MUS-Decomposition Based Framework | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Measuring inconsistency is viewed as an important issue related to handling
inconsistencies. Good measures are supposed to satisfy a set of rational
properties. However, defining sound properties is sometimes problematic. In
this paper, we emphasize one such property, named Decomposability, rarely
discussed in the literature due to its modeling difficulties. To this end, we
propose an independent decomposition which is more intuitive than existing
proposals. To analyze inconsistency in a more fine-grained way, we introduce a
graph representation of a knowledge base and various MUS-decompositions. One
particular MUS-decomposition, named distributable MUS-decomposition leads to an
interesting partition of inconsistencies in a knowledge base such that multiple
experts can check inconsistencies in parallel, which is impossible under
existing measures. Such a particular MUS-decomposition results in an inconsistency
measure that satisfies a number of desired properties. Moreover, we give an
upper bound on the complexity of the measure, which can be computed using 0/1 linear
programming or Min Cost Satisfiability problems, and conduct preliminary
experiments to show its feasibility.
| [
{
"version": "v1",
"created": "Sun, 1 Jun 2014 11:35:36 GMT"
}
] | 1,401,753,600,000 | [
[
"Jabbour",
"Said",
""
],
[
"Ma",
"Yue",
""
],
[
"Raddaoui",
"Badran",
""
],
[
"Sais",
"Lakhdar",
""
],
[
"Salhi",
"Yakoub",
""
]
] |
1406.0486 | Marc Lanctot | Marc Lanctot and Mark H.M. Winands and Tom Pepels and Nathan R.
Sturtevant | Monte Carlo Tree Search with Heuristic Evaluations using Implicit
Minimax Backups | 24 pages, 7 figures, 9 tables, expanded version of paper presented at
IEEE Conference on Computational Intelligence and Games (CIG) 2014 conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monte Carlo Tree Search (MCTS) has improved the performance of game engines
in domains such as Go, Hex, and general game playing. MCTS has been shown to
outperform classic alpha-beta search in games where good heuristic evaluations
are difficult to obtain. In recent years, combining ideas from traditional
minimax search in MCTS has been shown to be advantageous in some domains, such
as Lines of Action, Amazons, and Breakthrough. In this paper, we propose a new
way to use heuristic evaluations to guide the MCTS search by storing the two
sources of information, estimated win rates and heuristic evaluations,
separately. Rather than using the heuristic evaluations to replace the
playouts, our technique backs them up implicitly during the MCTS simulations.
These minimax values are then used to guide future simulations. We show that
using implicit minimax backups leads to stronger play performance in Kalah,
Breakthrough, and Lines of Action.
| [
{
"version": "v1",
"created": "Mon, 2 Jun 2014 19:32:12 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jun 2014 19:00:30 GMT"
},
{
"version": "v3",
"created": "Sun, 8 Jun 2014 14:06:23 GMT"
},
{
"version": "v4",
"created": "Thu, 19 Jun 2014 22:07:30 GMT"
}
] | 1,403,481,600,000 | [
[
"Lanctot",
"Marc",
""
],
[
"Winands",
"Mark H. M.",
""
],
[
"Pepels",
"Tom",
""
],
[
"Sturtevant",
"Nathan R.",
""
]
] |
1406.0941 | Siamak Ravanbakhsh | Siamak Ravanbakhsh, Reihaneh Rabbany, Russell Greiner | Augmentative Message Passing for Traveling Salesman Problem and Graph
Partitioning | null | null | null | Advances in Neural Information Processing Systems 27 (NIPS 2014) | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cutting plane method is an augmentative constrained optimization
procedure that is often used with continuous-domain optimization techniques
such as linear and convex programs. We investigate the viability of a similar
idea within message passing -- which produces integral solutions -- in the
context of two combinatorial problems: 1) For Traveling Salesman Problem (TSP),
we propose a factor-graph based on Held-Karp formulation, with an exponential
number of constraint factors, each of which has an exponential but sparse
tabular form. 2) For graph-partitioning (a.k.a., community mining) using
modularity optimization, we introduce a binary variable model with a large
number of constraints that enforce formation of cliques. In both cases we are
able to derive surprisingly simple message updates that lead to competitive
solutions on benchmark instances. In particular for TSP we are able to find
near-optimal solutions in time that empirically grows as N^3,
demonstrating that augmentation is practical and efficient.
| [
{
"version": "v1",
"created": "Wed, 4 Jun 2014 05:08:17 GMT"
}
] | 1,440,115,200,000 | [
[
"Ravanbakhsh",
"Siamak",
""
],
[
"Rabbany",
"Reihaneh",
""
],
[
"Greiner",
"Russell",
""
]
] |
1406.0955 | Yan Gu | Yan Gu | Cascading A*: a Parallel Approach to Approximate Heuristic Search | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we proposed a new approximate heuristic search algorithm:
Cascading A*, which is a two-phase algorithm combining A* and IDA* via a new
concept "envelope ball". The new algorithm CA* is efficient, able to generate
approximate and any-time solutions, and parallel-friendly.
| [
{
"version": "v1",
"created": "Wed, 4 Jun 2014 07:12:16 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2016 03:20:46 GMT"
}
] | 1,462,320,000,000 | [
[
"Gu",
"Yan",
""
]
] |
1406.1638 | Xiaoyu Chen | Xiaoyu Chen, Dan Song, Dongming Wang | Automated Generation of Geometric Theorems from Images of Diagrams | 31 pages. Submitted to Annals of Mathematics and Artificial
Intelligence (special issue on Geometric Reasoning) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an approach to generate geometric theorems from electronic images
of diagrams automatically. The approach makes use of techniques of Hough
transform to recognize geometric objects and their labels and of numeric
verification to mine basic geometric relations. Candidate propositions are
generated from the retrieved information by using six strategies and geometric
theorems are obtained from the candidates via algebraic computation.
Experiments with a preliminary implementation illustrate the effectiveness and
efficiency of the proposed approach for generating nontrivial theorems from
images of diagrams. This work demonstrates the feasibility of automated
discovery of profound geometric knowledge from simple image data and has
potential applications in geometric knowledge management and education.
| [
{
"version": "v1",
"created": "Fri, 6 Jun 2014 10:52:28 GMT"
}
] | 1,402,272,000,000 | [
[
"Chen",
"Xiaoyu",
""
],
[
"Song",
"Dan",
""
],
[
"Wang",
"Dongming",
""
]
] |
1406.1697 | Xinyang Deng | Meizhu Li, Qi Zhang, Yong Deng | Multiscale probability transformation of basic probability assignment | 22 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision making is still an open issue in the application of Dempster-Shafer
evidence theory, and many works have addressed it. In the transferable
belief model (TBM), pignistic probabilities based on the basic probability
assignments are used for decision making. In this paper, a multiscale
probability transformation of basic probability assignment based on the belief
function and the plausibility function is proposed, which is a generalization
of the pignistic probability transformation. In the multiscale probability
function, a factor q based on the Tsallis entropy is used to diversify the
multiscale probabilities. An example shows that the multiscale probability
transformation is more reasonable for decision making.
| [
{
"version": "v1",
"created": "Fri, 6 Jun 2014 14:38:29 GMT"
}
] | 1,402,272,000,000 | [
[
"Li",
"Meizhu",
""
],
[
"Zhang",
"Qi",
""
],
[
"Deng",
"Yong",
""
]
] |
1406.2000 | Florentin Smarandache | Florentin Smarandache | Introduction to Neutrosophic Statistics | 122 pages, many geometrical figures, many tables | Published as a book by Sitech in 2014 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neutrosophic Statistics means statistical analysis of population or sample
that has indeterminate (imprecise, ambiguous, vague, incomplete, unknown) data.
For example, the population or sample size might not be exactly determinate
because of some individuals that partially belong to the population or sample,
and partially do not, or individuals whose membership is
completely unknown. Also, there are population or sample individuals whose data
could be indeterminate. In this book, we develop the 1995 notion of
neutrosophic statistics. We present various practical examples. It is possible
to define the neutrosophic statistics in many ways, because there are various
types of indeterminacies, depending on the problem to solve.
| [
{
"version": "v1",
"created": "Sun, 8 Jun 2014 16:44:49 GMT"
}
] | 1,402,358,400,000 | [
[
"Smarandache",
"Florentin",
""
]
] |
1406.2023 | Gian Luca Pozzato | Laura Giordano, Valentina Gliozzi, Nicola Olivetti, Gian Luca Pozzato | Rational Closure in SHIQ | 30 pages, extended version of paper accepted to DL2014 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We define a notion of rational closure for the logic SHIQ, which does not
enjoy the finite model property, building on the notion of rational closure
introduced by Lehmann and Magidor in [23]. We provide a semantic
characterization of rational closure in SHIQ in terms of a preferential
semantics, based on a finite rank characterization of minimal models. We show
that the rational closure of a TBox can be computed in EXPTIME using entailment
in SHIQ.
| [
{
"version": "v1",
"created": "Sun, 8 Jun 2014 20:16:30 GMT"
}
] | 1,402,358,400,000 | [
[
"Giordano",
"Laura",
""
],
[
"Gliozzi",
"Valentina",
""
],
[
"Olivetti",
"Nicola",
""
],
[
"Pozzato",
"Gian Luca",
""
]
] |
1406.2128 | Xinyang Deng | Yang Liu, Xiaoge Zhang and Yong Deng | A bio-inspired algorithm for fuzzy user equilibrium problem by aid of
Physarum Polycephalum | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The user equilibrium in the traffic assignment problem is based on the fact that
travelers choose the minimum-cost path between every origin-destination pair
and on the assumption that such a behavior will lead to an equilibrium of the
traffic network. In this paper, we consider this problem when the traffic
network links have fuzzy costs. Therefore, a Physarum-type algorithm is developed
to unify the Physarum network and the traffic network, taking full
advantage of Physarum Polycephalum's adaptivity in network design to solve the
user equilibrium problem. Eventually, some experiments are used to test the
performance of this method. The results demonstrate that our approach is
competitive when compared with other existing algorithms.
| [
{
"version": "v1",
"created": "Mon, 9 Jun 2014 10:41:06 GMT"
}
] | 1,402,358,400,000 | [
[
"Liu",
"Yang",
""
],
[
"Zhang",
"Xiaoge",
""
],
[
"Deng",
"Yong",
""
]
] |
1406.3191 | Hadi Fanaee-T | Hadi Fanaee-T and Joao Gama | An eigenvector-based hotspot detection | null | In Proceedings of 16th Portuguese Conference on Artificial
Intelligence (EPIA 2013), Acores, Portugal, 9-12 September 2013, PP. 290-301 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Space and time are two critical components of many real world systems. For
this reason, analysis of anomalies in spatiotemporal data has been of great
interest. In this work, the application of tensor decomposition and eigenspace
techniques to spatiotemporal hotspot detection is investigated. An algorithm
called SST-Hotspot is proposed, which accounts for spatiotemporal variations in
data and detects hotspots by matching eigenvector elements of the cases
and population tensors. The experimental results reveal the interesting
application of tensor decomposition and eigenvector-based techniques in hotspot
analysis.
| [
{
"version": "v1",
"created": "Thu, 12 Jun 2014 10:55:08 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Jun 2014 10:44:49 GMT"
}
] | 1,402,876,800,000 | [
[
"Fanaee-T",
"Hadi",
""
],
[
"Gama",
"Joao",
""
]
] |
1406.3266 | Hadi Fanaee-T | Hadi Fanaee-T and M\'arcia D. B. Oliveira and Jo\~ao Gama and Simon
Malinowski and Ricardo Morla | Event and Anomaly Detection Using Tucker3 Decomposition | null | In Proceedings of 20th European Conference on Artificial
Intelligence (ECAI'2013)- Ubiquitous Data Mining Workshop, pp. 8-12, vol. 1,
August 27-31, 2012 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Failure detection in telecommunication networks is a vital task. So far,
several supervised and unsupervised solutions have been provided for
discovering failures in such networks. Among them unsupervised approaches has
attracted more attention since no label data is required. Often, network
devices are not able to provide information about the type of failure. In such
cases the type of failure is not known in advance and the unsupervised setting
is more appropriate for diagnosis. Among unsupervised approaches, Principal
Component Analysis (PCA) is a well-known solution which has been widely used in
the anomaly detection literature and can be applied to matrix data (e.g.
Users-Features). However, one of the important properties of network data is
their temporal sequential nature. So considering the interaction of dimensions
over a third dimension, such as time, may provide better insights into the
nature of network failures. In this paper we demonstrate the power of three-way
analysis to detect events and anomalies in time-evolving network data.
| [
{
"version": "v1",
"created": "Thu, 12 Jun 2014 15:33:11 GMT"
}
] | 1,402,617,600,000 | [
[
"Fanaee-T",
"Hadi",
""
],
[
"Oliveira",
"Márcia D. B.",
""
],
[
"Gama",
"João",
""
],
[
"Malinowski",
"Simon",
""
],
[
"Morla",
"Ricardo",
""
]
] |
1406.3877 | Fuan Pu | Fuan Pu, Jian Luo, Yulai Zhang, and Guiming Luo | Argument Ranking with Categoriser Function | null | null | 10.1007/978-3-319-12096-6_26 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Recently, ranking-based semantics is proposed to rank-order arguments from
the most acceptable to the weakest one(s), which provides a graded assessment
to arguments. In general, the ranking on arguments is derived from the strength
values of the arguments. Categoriser function is a common approach that assigns
a strength value to a tree of arguments. When it encounters an argument system
with cycles, the categoriser strength is the solution of a system of non-linear
equations. However, details about the existence and uniqueness of the solution,
and about how to find it (if it exists), have been lacking. In this paper, we
cope with these issues via a fixed-point technique. In addition, we define the
categoriser-based ranking semantics in light of categoriser strength, and
investigate some general properties of it. Finally, the semantics is shown to
satisfy some of the axioms that a ranking-based semantics should satisfy.
| [
{
"version": "v1",
"created": "Mon, 16 Jun 2014 01:19:02 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Jul 2014 14:40:27 GMT"
}
] | 1,417,651,200,000 | [
[
"Pu",
"Fuan",
""
],
[
"Luo",
"Jian",
""
],
[
"Zhang",
"Yulai",
""
],
[
"Luo",
"Guiming",
""
]
] |
1406.4324 | Viswanadh Konjeti | Garimella Rama Murthy | Towards a theory of granular sets | 6 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by the application problem of sensor fusion, the author introduced
the concept of a graded set. It is reasoned that in a classification problem
arising in an information system (represented by an information table), a novel
set called a granular set naturally arises. It is realized that a granular set
also naturally arises in any hierarchical classification problem. Moreover, when
the target set of objects forms a graded set, its lower and upper approximations
form a graded set. This generalizes the concept of a rough set. It is hoped
that a detailed theory of granular/graded sets will find several applications.
| [
{
"version": "v1",
"created": "Tue, 17 Jun 2014 11:26:32 GMT"
}
] | 1,403,049,600,000 | [
[
"Murthy",
"Garimella Rama",
""
]
] |
1406.4462 | Erfan Khaji Mr. | Erfan Khaji | Soccer League Optimization: A heuristic Algorithm Inspired by the
Football System in European Countries | 6 Pages, 12 Figures, 4 Tables, Accepted in GEM 2014, but rejected due
to lack of money | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a new heuristic optimization algorithm is introduced
based on the performance of the major football leagues within each season in EU
countries. The algorithm starts with an initial population comprising three
groups of teams: the wealthiest (strongest), the regular, and the poorest
(weakest). Each individual of the population constitutes a football team, while
each of its members represents a player in a position. Optimization occurs as
the competition among the teams in all the leagues is imitated: the strongest
teams usually purchase the best players of the regular teams and, in turn,
regular teams purchase the best players of the weakest, who must always
discover young players instead of buying professionals. It is shown that the
algorithm converges to acceptable solutions on various benchmarks. Key words:
Heuristic Algorithms
| [
{
"version": "v1",
"created": "Sun, 15 Jun 2014 15:10:20 GMT"
}
] | 1,403,049,600,000 | [
[
"Khaji",
"Erfan",
""
]
] |
1406.4882 | Ovidiu Andrei Schipor OA | Ovidiu-Andrei Schipor, Stefan-Gheorghe Pentiuc, Doina-Maria Schipor | Knowledge Base of an Expert System Used for Dyslalic Children Therapy | 4 pages, 11 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to improve children speech therapy, we develop a Fuzzy Expert System
based on a speech therapy guide. This guide, written in natural language, was
formalized using the fuzzy logic paradigm. In this manner we obtain a knowledge
base with over 150 rules and 19 linguistic variables. All this research,
including expert system validation, is part of the TERAPERS project.
| [
{
"version": "v1",
"created": "Thu, 29 May 2014 07:06:19 GMT"
}
] | 1,403,222,400,000 | [
[
"Schipor",
"Ovidiu-Andrei",
""
],
[
"Pentiuc",
"Stefan-Gheorghe",
""
],
[
"Schipor",
"Doina-Maria",
""
]
] |
1406.4973 | Gaetan Marceau | Ga\'etan Marceau (LRI, INRIA Saclay - Ile de France), Marc Schoenauer
(LRI, INRIA Saclay - Ile de France) | Racing Multi-Objective Selection Probabilities | null | 13th International Conference on Parallel Problem Solving from
Nature, Ljubljana : France (2014) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of Noisy Multi-Objective Optimization, dealing with
uncertainties requires the decision maker to define some preferences about how
to handle them, through some statistics (e.g., mean, median) to be used to
evaluate the qualities of the solutions, and define the corresponding Pareto
set. Approximating these statistics requires repeated samplings of the
population, drastically increasing the overall computational cost. To tackle
this issue, this paper proposes to directly estimate the probability of each
individual to be selected, using some Hoeffding races to dynamically assign the
estimation budget during the selection step. The proposed racing approach is
validated against static budget approaches with NSGA-II on noisy versions of
the ZDT benchmark functions.
| [
{
"version": "v1",
"created": "Thu, 19 Jun 2014 08:07:47 GMT"
}
] | 1,403,222,400,000 | [
[
"Marceau",
"Gaétan",
"",
"LRI, INRIA Saclay - Ile de France"
],
[
"Schoenauer",
"Marc",
"",
"LRI, INRIA Saclay - Ile de France"
]
] |
1406.6102 | Kewen Wang | Kewen Wang, Lian Wen, Kedian Mu | Random Logic Programs: Linear Model | 33 pages. To appear in: Theory and Practice of Logic Programming | Theory and Practice of Logic Programming 15 (2014) 818-853 | 10.1017/S1471068414000611 | GUICTWK2014-1 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a model, the linear model, for randomly generating logic
programs with low density of rules and investigates statistical properties of
such random logic programs. It is mathematically shown that the average number
of answer sets for a random program converges to a constant when the number of
atoms approaches infinity. Several experimental results are also reported,
which justify the suitability of the linear model. It is also experimentally
shown that, under this model, the size distribution of answer sets for random
programs tends to a normal distribution when the number of atoms is
sufficiently large.
| [
{
"version": "v1",
"created": "Mon, 23 Jun 2014 22:23:11 GMT"
}
] | 1,444,176,000,000 | [
[
"Wang",
"Kewen",
""
],
[
"Wen",
"Lian",
""
],
[
"Mu",
"Kedian",
""
]
] |