id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1410.7953 | Paula Severi | Regina Motz, Edelweis Rohrer and Paula Severi | Reasoning for ALCQ extended with a flexible meta-modelling hierarchy | This is the long version of the paper submitted to JIST2014 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work is motivated by a real-world case study where it is necessary to
integrate and relate existing ontologies through meta-modelling. For this, we
introduce the Description Logic ALCQM which is obtained from ALCQ by adding
statements that equate individuals to concepts in a knowledge base. In this new
extension, a concept can be an individual of another concept (called a
meta-concept), which itself can be an individual of yet another concept
(called a meta meta-concept), and so on. We define a tableau algorithm for
checking consistency of an ontology in ALCQM and prove its correctness.
| [
{
"version": "v1",
"created": "Wed, 29 Oct 2014 12:23:24 GMT"
}
] | 1,414,627,200,000 | [
[
"Motz",
"Regina",
""
],
[
"Rohrer",
"Edelweis",
""
],
[
"Severi",
"Paula",
""
]
] |
1410.8233 | Brian Tomasik | Brian Tomasik | Do Artificial Reinforcement-Learning Agents Matter Morally? | 37 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial reinforcement learning (RL) is a widely used technique in
artificial intelligence that provides a general method for training agents to
perform a wide variety of behaviours. RL as used in computer science has
striking parallels to reward and punishment learning in animal and human
brains. I argue that present-day artificial RL agents have a very small but
nonzero degree of ethical importance. This is particularly plausible for views
according to which sentience comes in degrees based on the abilities and
complexities of minds, but even binary views on consciousness should assign
nonzero probability to RL programs having morally relevant experiences. While
RL programs are not a top ethical priority today, they may become more
significant in the coming decades as RL is increasingly applied to industry,
robotics, video games, and other areas. I encourage scientists, philosophers,
and citizens to begin a conversation about our ethical duties to reduce the
harm that we inflict on powerless, voiceless RL agents.
| [
{
"version": "v1",
"created": "Thu, 30 Oct 2014 02:34:48 GMT"
}
] | 1,414,713,600,000 | [
[
"Tomasik",
"Brian",
""
]
] |
1411.0156 | Subbarao Kambhampati | William Cushing, J. Benton, Patrick Eyerich, Subbarao Kambhampati | Surrogate Search As a Way to Combat Harmful Effects of Ill-behaved
Evaluation Functions | arXiv admin note: substantial text overlap with arXiv:1103.3687 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, several researchers have found that cost-based satisficing search
with A* often runs into problems. Although some "work arounds" have been
proposed to ameliorate the problem, there has been little concerted effort to
pinpoint its origin. In this paper, we argue that the origins of this problem
can be traced back to the fact that most planners that try to optimize cost
also use cost-based evaluation functions (i.e., f(n) is a cost estimate). We
show that cost-based evaluation functions become ill-behaved whenever there is
a wide variance in action costs; something that is all too common in planning
domains. The general solution to this malady is what we call surrogate search,
where a surrogate evaluation function that doesn't directly track the cost
objective, and is resistant to cost variance, is used. We discuss some
compelling choices for surrogate evaluation functions that are based on size
rather than cost. Of particular practical interest is a cost-sensitive version
of the size-based evaluation function -- where the heuristic estimates the size
of cheap paths -- as it provides attractive quality vs. speed tradeoffs.
| [
{
"version": "v1",
"created": "Sat, 1 Nov 2014 19:04:17 GMT"
}
] | 1,415,059,200,000 | [
[
"Cushing",
"William",
""
],
[
"Benton",
"J.",
""
],
[
"Eyerich",
"Patrick",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1411.0359 | Carleton Coffrin | Carleton Coffrin, Dan Gordon, and Paul Scott | NESTA, The NICTA Energy System Test Case Archive | This archive is discontinued | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years the power systems research community has seen an explosion of
work applying operations research techniques to challenging power network
optimization problems. Regardless of the application under consideration, all
of these works rely on power system test cases for evaluation and validation.
However, many of the well established power system test cases were developed as
far back as the 1960s with the aim of testing AC power flow algorithms. It is
unclear if these power flow test cases are suitable for power system
optimization studies. This report surveys all of the publicly available AC
transmission system test cases, to the best of our knowledge, and assesses their
suitability for optimization tasks. It finds that many of the traditional test
cases are missing key network operation constraints, such as line thermal
limits and generator capability curves. To incorporate these missing
constraints, data driven models are developed from a variety of publicly
available data sources. The resulting extended test cases form a comprehensive
archive, NESTA, for the evaluation and validation of power system optimization
algorithms.
| [
{
"version": "v1",
"created": "Mon, 3 Nov 2014 04:16:51 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Feb 2015 08:13:59 GMT"
},
{
"version": "v3",
"created": "Mon, 11 May 2015 07:34:21 GMT"
},
{
"version": "v4",
"created": "Fri, 24 Jun 2016 21:48:26 GMT"
},
{
"version": "v5",
"created": "Thu, 11 Aug 2016 04:24:02 GMT"
},
{
"version": "v6",
"created": "Tue, 3 Sep 2019 02:49:19 GMT"
}
] | 1,567,555,200,000 | [
[
"Coffrin",
"Carleton",
""
],
[
"Gordon",
"Dan",
""
],
[
"Scott",
"Paul",
""
]
] |
1411.0406 | Arjun Bhardwaj | Arjun Bhardwaj and Sangeetha | GC-SROIQ(C) : Expressive Constraint Modelling and Grounded
Circumscription for SROIQ | For an improved formulation of the problem, which addresses critical
shortcomings of this paper, please refer to the following : Extending SROIQ
with Constraint Networks and Grounded Circumscription, arXiv:1508.00116 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developments in semantic web technologies have promoted ontological encoding
of knowledge from diverse domains. However, modelling many practical domains
requires more expressive representation schemes than the standard
description logics (DLs) support. We extend the DL SROIQ with constraint
networks and grounded circumscription. Applications of constraint modelling
include embedding ontologies with temporal or spatial information, while
grounded circumscription allows defeasible inference and closed world
reasoning. This paper overcomes restrictions on existing constraint modelling
approaches by introducing expressive constructs. Grounded circumscription
allows concept and role minimization and is decidable for DL. We provide a
general and intuitive algorithm for the framework of grounded circumscription
that can be applied to a whole range of logics. We present the resulting logic:
GC-SROIQ(C), and describe a tableau decision procedure for it.
| [
{
"version": "v1",
"created": "Mon, 3 Nov 2014 10:05:29 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Nov 2014 07:46:47 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jul 2015 08:45:52 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Apr 2017 18:04:45 GMT"
}
] | 1,491,350,400,000 | [
[
"Bhardwaj",
"Arjun",
""
],
[
"Sangeetha",
"",
""
]
] |
1411.0440 | Joseph Corneli | Joseph Corneli, Anna Jordanous, Christian Guckelsberger, Alison Pease,
Simon Colton | Modelling serendipity in a computational context | 68pp, submitted to New Generation Computing special issue on New
Directions in Computational Creativity | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The term serendipity describes a creative process that develops, in context,
with the active participation of a creative agent, but not entirely within that
agent's control. While a system cannot be made to perform serendipitously on
demand, we argue that its $\mathit{serendipity\ potential}$ can be increased by
means of a suitable system architecture and other design choices. We distil a
unified description of serendipitous occurrences from historical theorisations
of serendipity and creativity. This takes the form of a framework with six
phases: $\mathit{perception}$, $\mathit{attention}$, $\mathit{interest}$,
$\mathit{explanation}$, $\mathit{bridge}$, and $\mathit{valuation}$. We then
use this framework to organise a survey of literature in cognitive science,
philosophy, and computing, which yields practical definitions of the six
phases, along with heuristics for implementation. We use the resulting model to
evaluate the serendipity potential of four existing systems developed by
others, and two systems previously developed by two of the authors. Most
existing research that considers serendipity in a computing context deals with
serendipity as a service; here we relate theories of serendipity to the
development of autonomous systems and computational creativity practice. We
argue that serendipity is not teleologically blind, and outline representative
directions for future applications of our model. We conclude that it is
feasible to equip computational systems with the potential for serendipity, and
that this could be beneficial in varied computational creativity/AI
applications, particularly those designed to operate responsively in real-world
contexts.
| [
{
"version": "v1",
"created": "Mon, 3 Nov 2014 11:50:19 GMT"
},
{
"version": "v2",
"created": "Tue, 26 May 2015 11:23:44 GMT"
},
{
"version": "v3",
"created": "Sun, 14 Feb 2016 17:47:29 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Jul 2016 13:19:32 GMT"
},
{
"version": "v5",
"created": "Tue, 16 May 2017 11:56:12 GMT"
},
{
"version": "v6",
"created": "Thu, 6 Dec 2018 16:12:42 GMT"
},
{
"version": "v7",
"created": "Fri, 30 Aug 2019 09:47:39 GMT"
},
{
"version": "v8",
"created": "Sun, 19 Apr 2020 19:58:37 GMT"
}
] | 1,587,427,200,000 | [
[
"Corneli",
"Joseph",
""
],
[
"Jordanous",
"Anna",
""
],
[
"Guckelsberger",
"Christian",
""
],
[
"Pease",
"Alison",
""
],
[
"Colton",
"Simon",
""
]
] |
1411.1080 | Raka Jovanovic | Raka Jovanovic, Abdelkader Bousselham, Stefan Voss | A Heuristic Method for Solving the Problem of Partitioning Graphs with
Supply and Demand | null | null | 10.1007/s10479-015-1930-5 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a greedy algorithm for solving the problem of the
maximum partitioning of graphs with supply and demand (MPGSD). The goal of the
method is to solve the MPGSD for large graphs in a reasonable time limit. This
is done by using a two stage greedy algorithm, with two corresponding types of
heuristics. The solutions acquired in this way are improved by applying a
computationally inexpensive, hill-climbing-like greedy correction procedure.
In our numeric experiments we analyze different heuristic functions for each
stage of the greedy algorithm, and show that their performance is highly
dependent on the properties of the specific instance. Our tests show that by
exploring a relatively small number of solutions generated by combining
different heuristic functions, and applying the proposed correction procedure
we can find solutions within only a few percent of the optimal ones.
| [
{
"version": "v1",
"created": "Sun, 2 Nov 2014 08:29:59 GMT"
}
] | 1,438,300,800,000 | [
[
"Jovanovic",
"Raka",
""
],
[
"Bousselham",
"Abdelkader",
""
],
[
"Voss",
"Stefan",
""
]
] |
1411.1373 | Bill Hibbard | Bill Hibbard | Ethical Artificial Intelligence | minor edit: remove page break between Figure 10.2 and its caption | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This book-length article combines several peer reviewed papers and new
material to analyze the issues of ethical artificial intelligence (AI). The
behavior of future AI systems can be described by mathematical equations, which
are adapted to analyze possible unintended AI behaviors and ways that AI
designs can avoid them. This article makes the case for utility-maximizing
agents and for avoiding infinite sets in agent definitions. It shows how to
avoid agent self-delusion using model-based utility functions and how to avoid
agents that corrupt their reward generators (sometimes called "perverse
instantiation") using utility functions that evaluate outcomes at one point in
time from the perspective of humans at a different point in time. It argues
that agents can avoid unintended instrumental actions (sometimes called "basic
AI drives" or "instrumental goals") by accurately learning human values. This
article defines a self-modeling agent framework and shows how it can avoid
problems of resource limits, being predicted by other agents, and inconsistency
between the agent's utility function and its definition (one version of this
problem is sometimes called "motivated value selection"). This article also
discusses how future AI will differ from current AI, the politics of AI, and
the ultimate use of AI to help understand the nature of the universe and our
place in it.
| [
{
"version": "v1",
"created": "Wed, 5 Nov 2014 19:40:02 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Nov 2014 19:11:41 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Nov 2014 18:37:22 GMT"
},
{
"version": "v4",
"created": "Thu, 4 Dec 2014 10:22:11 GMT"
},
{
"version": "v5",
"created": "Wed, 24 Dec 2014 18:45:16 GMT"
},
{
"version": "v6",
"created": "Mon, 19 Jan 2015 13:15:45 GMT"
},
{
"version": "v7",
"created": "Wed, 4 Feb 2015 11:49:39 GMT"
},
{
"version": "v8",
"created": "Thu, 5 Mar 2015 17:49:32 GMT"
},
{
"version": "v9",
"created": "Tue, 17 Nov 2015 20:54:38 GMT"
}
] | 1,447,804,800,000 | [
[
"Hibbard",
"Bill",
""
]
] |
1411.1497 | Xiaoyu Chen | Xiaoyu Chen, Dongming Wang | The Spaces of Data, Information, and Knowledge | 14 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the data space $D$ of any given data set $X$ and explain how
functions and relations are defined over $D$. From $D$ and for a specific
domain $\Delta$ we construct the information space $I$ of $X$ by interpreting
variables, functions, and explicit relations over $D$ in $\Delta$ and by
including other relations that $D$ implies under the interpretation in
$\Delta$. Then from $I$ we build up the knowledge space $K$ of $X$ as the
product of two spaces $K_T$ and $K_P$, where $K_T$ is obtained from $I$ by
using the induction principle to generalize propositional relations to
quantified relations, the deduction principle to generate new relations, and
standard mechanisms to validate relations and $K_P$ is the space of
specifications of methods with operational instructions which are valid in
$K_T$. Through our construction of the three topological spaces the following
key observation is made clear: the retrieval of information from the given data
set for $\Delta$ consists essentially in mining domain objects and relations,
and the discovery of knowledge from the retrieved information consists
essentially in applying the induction and deduction principles to generate
propositions, synthesizing and modeling the information to generate
specifications of methods with operational instructions, and validating the
propositions and specifications. Based on this observation, efficient
approaches may be designed to discover profound knowledge automatically from
simple data, as demonstrated by the result of our study in the case of
geometry.
| [
{
"version": "v1",
"created": "Thu, 6 Nov 2014 04:50:45 GMT"
}
] | 1,415,318,400,000 | [
[
"Chen",
"Xiaoyu",
""
],
[
"Wang",
"Dongming",
""
]
] |
1411.1629 | Ernest Davis | Ernest Davis | The Limitations of Standardized Science Tests as Benchmarks for
Artificial Intelligence Research: Position Paper | 24 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this position paper, I argue that standardized tests for elementary
science such as SAT or Regents tests are not very good benchmarks for measuring
the progress of artificial intelligence systems in understanding basic science.
The primary problem is that these tests are designed to test aspects of
knowledge and ability that are challenging for people; the aspects that are
challenging for AI systems are very different. In particular, standardized
tests do not test knowledge that is obvious for people; none of this knowledge
can be assumed in AI systems. Individual standardized tests also have specific
features that are not necessarily appropriate for an AI benchmark. I analyze
the Physics subject SAT in some detail and the New York State Regents Science
test more briefly. I also argue that the apparent advantages offered by using
standardized tests are mostly either minor or illusory. The one major real
advantage is that the significance is easily explained to the public; but I
argue that even this is a somewhat mixed blessing. I conclude by arguing that,
first, more appropriate collections of exam style problems could be assembled,
and second, that there are better kinds of benchmarks than exam-style problems.
In an appendix I present a collection of sample exam-style problems that test
kinds of knowledge missing from the standardized tests.
| [
{
"version": "v1",
"created": "Thu, 6 Nov 2014 14:44:12 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Oct 2015 20:17:31 GMT"
}
] | 1,445,299,200,000 | [
[
"Davis",
"Ernest",
""
]
] |
1411.3346 | Olegs Verhodubs | Olegs Verhodubs | Membership Function Assignment for Elements of Single OWL Ontology | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper develops the idea of membership function assignment for OWL (Web
Ontology Language) ontology elements in order to subsequently generate fuzzy
rules from this ontology. The task of membership function assignment for OWL
ontology elements has already been partially described, but only for the
case when several OWL ontologies of the same domain were available and
were merged into a single ontology. The purpose of this paper is to present a
way of assigning membership functions to OWL ontology elements when only
one ontology is available. Fuzzy rules, generated from the
OWL ontology, are necessary for supplement of the SWES (Semantic Web Expert
System) knowledge base. SWES is an expert system, which will be able to extract
knowledge from OWL ontologies, found in the Web, and will serve as a universal
expert for the user.
| [
{
"version": "v1",
"created": "Wed, 12 Nov 2014 21:13:08 GMT"
}
] | 1,415,923,200,000 | [
[
"Verhodubs",
"Olegs",
""
]
] |
1411.3880 | Martin Chmel\'ik | Krishnendu Chatterjee, Martin Chmel\'ik, Raghav Gupta, Ayush Kanodia | Optimal Cost Almost-sure Reachability in POMDPs | Full Version of Optimal Cost Almost-sure Reachability in POMDPs, AAAI
2015. arXiv admin note: text overlap with arXiv:1207.4166 by other authors | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider partially observable Markov decision processes (POMDPs) with a
set of target states, in which every transition is associated with an integer cost.
The optimization objective we study asks to minimize the expected total cost
till the target set is reached, while ensuring that the target set is reached
almost-surely (with probability 1). We show that for integer costs
approximating the optimal cost is undecidable. For positive costs, our results
are as follows: (i) we establish matching lower and upper bounds for the
optimal cost and the bound is double exponential; (ii) we show that the problem
of approximating the optimal cost is decidable and present approximation
algorithms that build on the existing algorithms for POMDPs with finite-horizon
objectives. While the worst-case running time of our algorithm is double
exponential, we also present efficient stopping criteria for the algorithm and
show experimentally that it performs well in many examples of interest.
| [
{
"version": "v1",
"created": "Fri, 14 Nov 2014 12:13:45 GMT"
}
] | 1,416,182,400,000 | [
[
"Chatterjee",
"Krishnendu",
""
],
[
"Chmelík",
"Martin",
""
],
[
"Gupta",
"Raghav",
""
],
[
"Kanodia",
"Ayush",
""
]
] |
1411.4023 | Umair Z Ahmed | Umair Z. Ahmed, Krishnendu Chatterjee, Sumit Gulwani | Automatic Generation of Alternative Starting Positions for Simple
Traditional Board Games | A conference version of the paper will appear in AAAI 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simple board games, like Tic-Tac-Toe and CONNECT-4, play an important role
not only in the development of mathematical and logical skills, but also in
emotional and social development. In this paper, we address the problem of
generating targeted starting positions for such games. This can facilitate new
approaches for bringing novice players to mastery, and also leads to discovery
of interesting game variants. We present an approach that generates starting
states of varying hardness levels for player~$1$ in a two-player board game,
given rules of the board game, the desired number of steps required for
player~$1$ to win, and the expertise levels of the two players. Our approach
leverages symbolic methods and iterative simulation to efficiently search the
extremely large state space. We present experimental results that include
discovery of states of varying hardness levels for several simple grid-based
board games. The presence of such states for standard game variants like $4
\times 4$ Tic-Tac-Toe opens up new games to be played that have never been
played, since the default start state is heavily biased.
| [
{
"version": "v1",
"created": "Fri, 14 Nov 2014 19:43:12 GMT"
}
] | 1,416,873,600,000 | [
[
"Ahmed",
"Umair Z.",
""
],
[
"Chatterjee",
"Krishnendu",
""
],
[
"Gulwani",
"Sumit",
""
]
] |
1411.4156 | Peter Patel-Schneider | Peter F. Patel-Schneider | Using Description Logics for RDF Constraint Checking and Closed-World
Recognition | Extended version of a paper of the same name that will appear in
AAAI-2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RDF and Description Logics work in an open-world setting where absence of
information is not information about absence. Nevertheless, Description Logic
axioms can be interpreted in a closed-world setting and in this setting they
can be used for both constraint checking and closed-world recognition against
information sources. When the information sources are expressed in well-behaved
RDF or RDFS (i.e., RDF graphs interpreted in the RDF or RDFS semantics) this
constraint checking and closed-world recognition is simple to describe. Further,
this constraint checking can be implemented as SPARQL querying and thus
performed effectively.
| [
{
"version": "v1",
"created": "Sat, 15 Nov 2014 15:33:38 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jan 2015 21:09:56 GMT"
}
] | 1,421,971,200,000 | [
[
"Patel-Schneider",
"Peter F.",
""
]
] |
1411.4192 | Glenn Hofford | Glenn R. Hofford | Introduction to ROSS: A New Representational Scheme | 32 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ROSS ("Representation, Ontology, Structure, Star") is introduced as a new
method for knowledge representation that emphasizes representational constructs
for physical structure. The ROSS representational scheme includes a language
called "Star" for the specification of ontology classes. The ROSS method also
includes a formal scheme called the "instance model". Instance models are used
in the area of natural language meaning representation to represent situations.
This paper provides both the rationale and the philosophical background for the
ROSS method.
| [
{
"version": "v1",
"created": "Sat, 15 Nov 2014 22:31:05 GMT"
}
] | 1,416,268,800,000 | [
[
"Hofford",
"Glenn R.",
""
]
] |
1411.4616 | Antoni Lig\k{e}za | Antoni Lig\k{e}za | A Note on Systematic Conflict Generation in CA-EN-type Causal Structures | This report is available from LAAS - Toulouse, France, from 1996.
Report No.: 96317 http://www.laas.fr/pulman/pulman-isens/web/app.php/ | null | null | LAAS Report No. 96317, 22 pp. (1996) | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is aimed at providing a very first, more "global", systematic
point of view with respect to possible conflict generation in CA-EN-like causal
structures. For simplicity, only the outermost level of graphs is taken into
account. Localization of the "conflict area", diagnostic preferences, and bases
for systematic conflict generation are considered. A notion of {\em Potential
Conflict Structure} ({\em PCS}) constituting a basic tool for identification of
possible conflicts is proposed and its use is discussed.
| [
{
"version": "v1",
"created": "Mon, 17 Nov 2014 20:07:45 GMT"
}
] | 1,416,268,800,000 | [
[
"Ligęza",
"Antoni",
""
]
] |
1411.4823 | Claudia Schon | Ulrich Furbach and Claudia Schon and Frieder Stolzenburg | Automated Reasoning in Deontic Logic | null | In M. Narasimha Murty, Xiangjian He, Raghavendra Rao Chillarige,
and Paul Weng, editors, Proc. of MIWAI 2014: Multi-disciplinary International
Workshop on Artificial Intelligence, LNAI 8875, pp. 57-68, Bangalore, India,
2014. Springer | 10.1007/978-3-319-13365-2_6 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deontic logic is a very well researched branch of mathematical logic and
philosophy. Various kinds of deontic logics are discussed for different
application domains like argumentation theory, legal reasoning, and acts in
multi-agent systems. In this paper, we show how standard deontic logic can be
stepwise transformed into description logic and DL-clauses, such that it can
be processed by Hyper, a high-performance theorem prover which uses a
hypertableau calculus. Two use cases, one from multi-agent research and one
from the development of normative systems, are investigated.
| [
{
"version": "v1",
"created": "Tue, 18 Nov 2014 12:27:01 GMT"
}
] | 1,537,142,400,000 | [
[
"Furbach",
"Ulrich",
""
],
[
"Schon",
"Claudia",
""
],
[
"Stolzenburg",
"Frieder",
""
]
] |
1411.5410 | Rehan Abdul Aziz | Rehan Abdul Aziz, Geoffrey Chu, Christian Muise, Peter Stuckey | Stable Model Counting and Its Application in Probabilistic Logic
Programming | Accepted in AAAI, 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model counting is the problem of computing the number of models that satisfy
a given propositional theory. It has recently been applied to solving inference
tasks in probabilistic logic programming, where the goal is to compute the
probability of given queries being true provided a set of mutually independent
random variables, a model (a logic program) and some evidence. The core of
solving this inference task involves translating the logic program to a
propositional theory and using a model counter. In this paper, we show that for
some problems that involve inductive definitions like reachability in a graph,
the translation of logic programs to SAT can be expensive for the purpose of
solving inference tasks. For such problems, direct implementation of stable
model semantics allows for more efficient solving. We present two
implementation techniques, based on unfounded set detection, that extend a
propositional model counter to a stable model counter. Our experiments show
that for particular problems, our approach can outperform a state-of-the-art
probabilistic logic programming solver by several orders of magnitude in terms
of running time and space requirements, and can solve instances of
significantly larger sizes on which the current solver runs out of time or
memory.
| [
{
"version": "v1",
"created": "Thu, 20 Nov 2014 00:54:45 GMT"
}
] | 1,416,528,000,000 | [
[
"Aziz",
"Rehan Abdul",
""
],
[
"Chu",
"Geoffrey",
""
],
[
"Muise",
"Christian",
""
],
[
"Stuckey",
"Peter",
""
]
] |
1411.5416 | Marius Silaghi | Marius C. Silaghi and Roussi Roussev | Recommending the Most Encompassing Opposing and Endorsing Arguments in
Debates | 10 pages. This report was reviewed by a committee within Florida Tech
during April 2014, and had been written in Summer 2013 by summarizing a set
of emails exchanged during Spring 2013, concerning the DirectDemocracyP2P.net
system | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Arguments are essential objects in DirectDemocracyP2P, where they can occur
both in association with signatures for petitions, or in association with other
debated decisions, such as bug sorting by importance. The arguments of a signer
on a given issue are grouped into one single justification, are classified by
the type of signature (e.g., supporting or opposing), and can be subject to
various types of threading.
Given the available inputs, the two addressed problems are: (i) how to
recommend the best justification, of a given type, to a new voter, (ii) how to
recommend a compact list of justifications subsuming the majority of known
arguments for (or against) an issue.
We investigate solutions based on weighted bipartite graphs.
| [
{
"version": "v1",
"created": "Thu, 20 Nov 2014 01:29:19 GMT"
}
] | 1,416,528,000,000 | [
[
"Silaghi",
"Marius C.",
""
],
[
"Roussev",
"Roussi",
""
]
] |
1411.5635 | Claudia Schulz | Claudia Schulz and Francesca Toni | Justifying Answer Sets using Argumentation | This article has been accepted for publication in Theory and Practice
of Logic Programming | Theory and Practice of Logic Programming 16 (2016) 59-110 | 10.1017/S1471068414000702 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An answer set is a plain set of literals which has no further structure that
would explain why certain literals are part of it and why others are not. We
show how argumentation theory can help to explain why a literal is or is not
contained in a given answer set by defining two justification methods, both of
which make use of the correspondence between answer sets of a logic program and
stable extensions of the Assumption-Based Argumentation (ABA) framework
constructed from the same logic program. Attack Trees justify a literal in
argumentation-theoretic terms, i.e. using arguments and attacks between them,
whereas ABA-Based Answer Set Justifications express the same justification
structure in logic programming terms, that is using literals and their
relationships. Interestingly, an ABA-Based Answer Set Justification corresponds
to an admissible fragment of the answer set in question, and an Attack Tree
corresponds to an admissible fragment of the stable extension corresponding to
this answer set.
| [
{
"version": "v1",
"created": "Thu, 20 Nov 2014 18:37:12 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Dec 2014 14:52:20 GMT"
}
] | 1,582,070,400,000 | [
[
"Schulz",
"Claudia",
""
],
[
"Toni",
"Francesca",
""
]
] |
1411.6300 | Do L Paul Minh | Do Le Paul Minh | Discrete Bayesian Networks: The Exact Posterior Marginal Distributions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In a Bayesian network, we wish to evaluate the marginal probability of a
query variable, which may be conditioned on the observed values of some
evidence variables. Here we first present our "border algorithm," which
converts a BN into a directed chain. For the polytrees, we then present in
details, with some modifications and within the border algorithm framework, the
"revised polytree algorithm" by Peot & Shachter (1991). Finally, we present our
"parentless polytree method," which, coupled with the border algorithm,
converts any Bayesian network into a polytree, rendering the complexity of our
inferences independent of the size of the network, and linear in the number of
its evidence and query variables. All quantities in this paper have
probabilistic interpretations.
| [
{
"version": "v1",
"created": "Sun, 23 Nov 2014 21:19:44 GMT"
}
] | 1,416,873,600,000 | [
[
"Minh",
"Do Le Paul",
""
]
] |
1411.6593 | David Tolpin | David Tolpin, Oded Betzalel, Ariel Felner, Solomon Eyal Shimony | Rational Deployment of Multiple Heuristics in IDA* | 7 pages, 6 tables, 20 references | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in metareasoning for search have shown its usefulness in
improving numerous search algorithms. This paper applies rational metareasoning
to IDA* when several admissible heuristics are available. The obvious basic
approach of taking the maximum of the heuristics is improved upon by lazy
evaluation of the heuristics, resulting in a variant known as Lazy IDA*. We
introduce a rational version of lazy IDA* that decides whether to compute the
more expensive heuristic or to bypass it, based on a myopic expected regret
estimate. Empirical evaluation in several domains supports the theoretical
results, and shows that rational lazy IDA* is a state-of-the-art heuristic
combination method.
| [
{
"version": "v1",
"created": "Mon, 24 Nov 2014 20:04:20 GMT"
}
] | 1,416,873,600,000 | [
[
"Tolpin",
"David",
""
],
[
"Betzalel",
"Oded",
""
],
[
"Felner",
"Ariel",
""
],
[
"Shimony",
"Solomon Eyal",
""
]
] |
1411.7149 | Mart\'in Pereira-Fari\~na | M. Pereira-Fari\~na, Juan C. Vidal, F. D\'iaz-Hermida, A. Bugar\'in | A Fuzzy Syllogistic Reasoning Schema for Generalized Quantifiers | 22 pages, 6 figures, journal paper | (2014) Fuzzy Sets and Systems 234, 79-96 | 10.1016/j.fss.2013.02.007 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a new approximate syllogistic reasoning schema is described
that expands some of the approaches expounded in the literature in two ways:
(i) a number of different types of quantifiers (logical, absolute,
proportional, comparative and exception) taken from Theory of Generalized
Quantifiers and similarity quantifiers, taken from statistics, are considered
and (ii) any number of premises can be taken into account within the reasoning
process. Furthermore, a systematic reasoning procedure to solve the syllogism
is also proposed, interpreting it as an equivalent mathematical optimization
problem, where the premises constitute the constraints of the searching space
for the quantifier in the conclusion.
| [
{
"version": "v1",
"created": "Wed, 26 Nov 2014 09:26:14 GMT"
}
] | 1,417,046,400,000 | [
[
"Pereira-Fariña",
"M.",
""
],
[
"Vidal",
"Juan C.",
""
],
[
"Díaz-Hermida",
"F.",
""
],
[
"Bugarín",
"A.",
""
]
] |
1411.7480 | Christopher Rosin | Christopher D. Rosin | Unweighted Stochastic Local Search can be Effective for Random CSP
Benchmarks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present ULSA, a novel stochastic local search algorithm for random binary
constraint satisfaction problems (CSP). ULSA is many times faster than the
prior state of the art on a widely-studied suite of random CSP benchmarks.
Unlike the best previous methods for these benchmarks, ULSA is a simple
unweighted method that does not require dynamic adaptation of weights or
penalties. ULSA obtains new record best solutions satisfying 99 of 100
variables in the challenging frb100-40 benchmark instance.
| [
{
"version": "v1",
"created": "Thu, 27 Nov 2014 06:41:22 GMT"
}
] | 1,417,392,000,000 | [
[
"Rosin",
"Christopher D.",
""
]
] |
1411.7525 | Mart\'in Pereira-Fari\~na | M. Pereira-Fari\~na, F. D\'iaz-Hermida, A. Bugar\'in | On the analysis of set-based fuzzy quantified reasoning using classical
syllogistics | 19 pages, 4 figures | "Fuzzy Sets and Systems", vol. 214(1), 83-94 | 10.1016/j.fss.2012.03.015 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Syllogism is a type of deductive reasoning involving quantified statements.
The syllogistic reasoning scheme in the classical Aristotelian framework
involves three crisp term sets and four linguistic quantifiers, for which the
main support is the linguistic properties of the quantifiers. A number of fuzzy
approaches for defining an approximate syllogism have been proposed for which
the main support is cardinality calculus. In this paper we analyze fuzzy
syllogistic models previously described by Zadeh and Dubois et al. and compare
their behavior with that of the classical Aristotelian framework to check which
of the 24 classical valid syllogistic reasoning patterns or moods are
particular crisp cases of these fuzzy approaches. This allows us to assess to
what extent these approaches can be considered as either plausible extensions
of the classical crisp syllogism or a basis for a general approach to the
problem of approximate syllogism.
| [
{
"version": "v1",
"created": "Thu, 27 Nov 2014 10:12:21 GMT"
}
] | 1,417,392,000,000 | [
[
"Pereira-Fariña",
"M.",
""
],
[
"Díaz-Hermida",
"F.",
""
],
[
"Bugarín",
"A.",
""
]
] |
1412.0315 | Guy Van den Broeck | Guy Van den Broeck and Mathias Niepert | Lifted Probabilistic Inference for Asymmetric Graphical Models | To appear in Proceedings of AAAI-2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lifted probabilistic inference algorithms have been successfully applied to a
large number of symmetric graphical models. Unfortunately, the majority of
real-world graphical models is asymmetric. This is even the case for relational
representations when evidence is given. Therefore, more recent work in the
community moved to making the models symmetric and then applying existing
lifted inference algorithms. However, this approach has two shortcomings.
First, all existing over-symmetric approximations require a relational
representation such as Markov logic networks. Second, the induced symmetries
often change the distribution significantly, making the computed probabilities
highly biased. We present a framework for probabilistic sampling-based
inference that only uses the induced approximate symmetries to propose steps in
a Metropolis-Hastings style Markov chain. The framework, therefore, leads to
improved probability estimates while remaining unbiased. Experiments
demonstrate that the approach outperforms existing MCMC algorithms.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2014 00:40:33 GMT"
}
] | 1,417,478,400,000 | [
[
"Broeck",
"Guy Van den",
""
],
[
"Niepert",
"Mathias",
""
]
] |
1412.0854 | Thomas Hassan | Thomas Hassan (Le2i), Rafael Peixoto, Christophe Cruz (Le2i), Aurlie
Bertaux (Le2i), Nuno Silva | Semantic HMC for Big Data Analysis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analyzing Big Data can help corporations to improve their efficiency. In
this work we present a new vision to derive Value from Big Data using a
Semantic Hierarchical Multi-label Classification called Semantic HMC, based on a
non-supervised Ontology learning process. We also propose a Semantic HMC
process, using scalable Machine-Learning techniques and Rule-based reasoning.
| [
{
"version": "v1",
"created": "Tue, 2 Dec 2014 10:44:24 GMT"
}
] | 1,417,564,800,000 | [
[
"Hassan",
"Thomas",
"",
"Le2i"
],
[
"Peixoto",
"Rafael",
"",
"Le2i"
],
[
"Cruz",
"Christophe",
"",
"Le2i"
],
[
"Bertaux",
"Aurlie",
"",
"Le2i"
],
[
"Silva",
"Nuno",
""
]
] |
1412.1044 | Ram\'on Casares | Ram\'on Casares | Problem Theory | 43 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Turing machine, as it was presented by Turing himself, models the
calculations done by a person. This means that we can compute whatever any
Turing machine can compute, and therefore we are Turing complete. The question
addressed here is: why are we Turing complete? Being Turing complete also
means that somehow our brain implements the function that a universal Turing
machine implements. The point is that evolution achieved Turing completeness,
and then the explanation should be evolutionary, but our explanation is
mathematical. The trick is to introduce a mathematical theory of problems,
under the basic assumption that solving more problems provides more survival
opportunities. So we build a problem theory by fusing set and computing
theories. Then we construct a series of resolvers, where each resolver is
defined by its computing capacity, that exhibits the following property: all
problems solved by a resolver are also solved by the next resolver in the
series if a certain condition is satisfied. The last of the conditions is to be
Turing complete. This series defines a resolvers hierarchy that could be seen
as a framework for the evolution of cognition. Then the answer to our question
would be: to solve most problems. By the way, the problem theory defines
adaptation, perception, and learning, and it shows that there are just three
ways to resolve any problem: routine, trial, and analogy. And, most
importantly, this theory demonstrates how problems can be used to found
mathematics and computing on biology.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2014 18:13:34 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jan 2015 10:03:24 GMT"
},
{
"version": "v3",
"created": "Sun, 12 Apr 2015 10:37:07 GMT"
},
{
"version": "v4",
"created": "Wed, 3 Jun 2015 08:55:52 GMT"
},
{
"version": "v5",
"created": "Tue, 4 Aug 2015 08:46:12 GMT"
},
{
"version": "v6",
"created": "Fri, 2 Sep 2016 09:08:05 GMT"
}
] | 1,473,033,600,000 | [
[
"Casares",
"Ramón",
""
]
] |
1412.1913 | Santosh Mungle | Santosh Mungle | A Portfolio Approach to Algorithm Selection for Discrete Time-Cost
Trade-off Problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is a known fact that the performance of optimization algorithms for
NP-Hard problems varies from instance to instance. We observed the same trend
when we comprehensively studied multi-objective evolutionary algorithms (MOEAs)
on six benchmark instances of the discrete time-cost trade-off problem (DTCTP) in
a construction project. In this paper, instead of using a single algorithm to
solve DTCTP, we use a portfolio approach that takes multiple algorithms as its
constituent. We proposed portfolio comprising of four MOEAs, Non-dominated
Sorting Genetic Algorithm II (NSGA-II), the strength Pareto Evolutionary
Algorithm II (SPEA-II), Pareto archive evolutionary strategy (PAES) and Niched
Pareto Genetic Algorithm II (NPGA-II) to solve DTCTP. The result shows that the
portfolio approach is computationally fast and qualitatively superior to its
constituent algorithms for all benchmark instances. Moreover, portfolio
approach provides an insight in selecting the best algorithm for all benchmark
instances of DTCTP.
| [
{
"version": "v1",
"created": "Fri, 5 Dec 2014 07:58:30 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Aug 2017 16:48:09 GMT"
}
] | 1,502,409,600,000 | [
[
"Mungle",
"Santosh",
""
]
] |
1412.2114 | Toru Ohira | Toru Ohira | Chases and Escapes, and Optimization Problems | 3 pages, 4 figures. To appear in the Proceedings of the International
Symposium on Artificial Life and Robotics (AROB20th), Beppu, Oita Japan,
January 21-23, 2015 | Artificial Life Robotics (2015) 20: 257 | 10.1007/s10015-015-0220-2 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new approach for solving combinatorial optimization problem by
utilizing the mechanism of chases and escapes, which has a long history in
mathematics. In addition to the well-used steepest descent and neighboring
search, we perform a chase and escape game on the "landscape" of the cost
function. We have created a concrete algorithm for the Traveling Salesman
Problem. Our preliminary test indicates the possibility that this new fusion of
the chases-and-escapes problem with combinatorial optimization search is fruitful.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2014 06:47:28 GMT"
}
] | 1,524,614,400,000 | [
[
"Ohira",
"Toru",
""
]
] |
1412.2985 | Joseph Y. Halpern | Joseph Y. Halpern | Cause, Responsibility, and Blame: A Structural-Model Approach | To appear, Law, Probability, and Risk | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A definition of causality introduced by Halpern and Pearl, which uses
structural equations, is reviewed. A more refined definition is then
considered, which takes into account issues of normality and typicality, which
are well known to affect causal ascriptions. Causality is typically an
all-or-nothing notion: either A is a cause of B or it is not. An extension of
the definition of causality to capture notions of degree of responsibility and
degree of blame, due to Chockler and Halpern, is reviewed. For example, if
someone wins an election 11-0, then each person who votes for him is less
responsible for the victory than if he had won 6-5. Degree of blame takes into
account an agent's epistemic state. Roughly speaking, the degree of blame of A
for B is the expected degree of responsibility of A for B, taken over the
epistemic state of an agent. Finally, the structural-equations definition of
causality is compared to Wright's NESS test.
| [
{
"version": "v1",
"created": "Tue, 9 Dec 2014 14:58:58 GMT"
}
] | 1,418,169,600,000 | [
[
"Halpern",
"Joseph Y.",
""
]
] |
1412.3076 | Joseph Y. Halpern | Gadi Aleksandrowicz, Hana Chockler, Joseph Y. Halpern, Alexander Ivrii | The Computational Complexity of Structure-Based Causality | Appears in AAAI 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Halpern and Pearl introduced a definition of actual causality; Eiter and
Lukasiewicz showed that computing whether X=x is a cause of Y=y is NP-complete
in binary models (where all variables can take on only two values) and
Sigma_2^P-complete in general models. In the final version of their paper,
Halpern and Pearl slightly modified the definition of actual cause, in order to
deal with problems pointed out by Hopkins and Pearl. As we show, this
modification has a nontrivial impact on the complexity of computing actual
cause. To characterize the complexity, a new family D_k^P, k = 1, 2, 3, ..., of
complexity classes is introduced, which generalizes the class DP introduced by
Papadimitriou and Yannakakis (DP is just D_1^P). We show that the complexity
of computing causality under the updated definition is $D_2^P$-complete.
Chockler and Halpern extended the definition of causality by introducing
notions of responsibility and blame. The complexity of determining the degree
of responsibility and blame using the original definition of causality was
completely characterized. Again, we show that changing the definition of
causality affects the complexity, and completely characterize it using the
updated definition.
| [
{
"version": "v1",
"created": "Tue, 9 Dec 2014 19:58:51 GMT"
}
] | 1,418,169,600,000 | [
[
"Aleksandrowicz",
"Gadi",
""
],
[
"Chockler",
"Hana",
""
],
[
"Halpern",
"Joseph Y.",
""
],
[
"Ivrii",
"Alexander",
""
]
] |
1412.3137 | Shashishekar Ramakrishna | Naouel Karam, Shashishekar Ramakrishna and Adrian Paschke | Rule reasoning for legal norm validation of FSTP facts | 1st International workshop on Artificial Intelligence and IP Law,
AIIP- Jurix 2012- Amsterdam | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Non-obviousness or inventive step is a general requirement for patentability
in most patent law systems. An invention should be at an adequate distance
beyond its prior art in order to be patented. This short paper provides an
overview of a methodology proposed for legal norm validation of FSTP facts
using a rule reasoning approach.
| [
{
"version": "v1",
"created": "Fri, 5 Dec 2014 21:03:53 GMT"
}
] | 1,418,256,000,000 | [
[
"Karam",
"Naouel",
""
],
[
"Ramakrishna",
"Shashishekar",
""
],
[
"Paschke",
"Adrian",
""
]
] |
1412.3279 | Jerome Euzenat | J\'er\^ome Euzenat (INRIA Grenoble Rh\^one-Alpes / LIG Laboratoire
d'Informatique de Grenoble) | The category of networks of ontologies | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The semantic web has led to the deployment of ontologies on the web connected
through various relations and, in particular, alignments of their vocabularies.
There exist several semantics for alignments, which makes interoperation
between different interpretations of networks of ontologies difficult. Here
we present an abstraction of these semantics which allows for defining the
notions of closure and consistency for networks of ontologies independently
from the precise semantics. We also show that networks of ontologies with
specific notions of morphisms define categories of networks of ontologies.
| [
{
"version": "v1",
"created": "Wed, 10 Dec 2014 12:34:04 GMT"
}
] | 1,418,256,000,000 | [
[
"Euzenat",
"Jérôme",
"",
"INRIA Grenoble Rhône-Alpes / LIG Laboratoire\n d'Informatique de Grenoble"
]
] |
1412.3518 | Joseph Y. Halpern | Joseph Y. Halpern | Appropriate Causal Models and the Stability of Causation | A preliminary version of this paper appears in the Proceedings of the
Fourteenth International Conference on Principles of Knowledge Representation
and Reasoning (KR 2014)}, 2014. To appear, Review of Symbolic Logic | The Review of Symbolic Logic 9 (2016) 76-102 | 10.1017/S1755020315000246 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Causal models defined in terms of structural equations have proved to be
quite a powerful way of representing knowledge regarding causality. However, a
number of authors have given examples that seem to show that the Halpern-Pearl
(HP) definition of causality gives intuitively unreasonable answers. Here it is
shown that, for each of these examples, we can give two stories consistent with
the description in the example, such that intuitions regarding causality are
quite different for each story. By adding additional variables, we can
disambiguate the stories. Moreover, in the resulting causal models, the HP
definition of causality gives the intuitively correct answer. It is also shown
that, by adding extra variables, a modification to the original HP definition
made to deal with an example of Hopkins and Pearl may not be necessary. Given
how much can be done by adding extra variables, there might be a concern that
the notion of causality is somewhat unstable. Can adding extra variables in a
"conservative" way (i.e., maintaining all the relations between the variables
in the original model) cause the answer to the question "Is X=x a cause of Y=y"
to alternate between "yes" and "no"? It is shown that we can have such
alternation infinitely often, but if we take normality into consideration, we
cannot. Indeed, under appropriate normality assumptions, adding an extra
variable can change the answer from "yes" to "no", but after that, it cannot
change back to "yes".
| [
{
"version": "v1",
"created": "Thu, 11 Dec 2014 02:16:39 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Aug 2015 17:14:55 GMT"
}
] | 1,550,620,800,000 | [
[
"Halpern",
"Joseph Y.",
""
]
] |
1412.3802 | Neil Rubens | Neil Rubens | Turing Test for the Internet of Things | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How smart is your kettle? How smart are things in your kitchen, your house,
your neighborhood, on the internet? With the advent of Internet of Things, and
the move to make devices `smart' by utilizing AI, a natural question arises:
how can we evaluate the progress? The standard way of evaluating AI is through
the Turing Test. While the Turing Test was designed for AI, the device it was
tailored to was a computer. Applying the test to the variety of devices that
constitute the Internet of Things poses a number of challenges, which could be
addressed through a number of adaptations.
| [
{
"version": "v1",
"created": "Thu, 11 Dec 2014 09:49:07 GMT"
}
] | 1,418,601,600,000 | [
[
"Rubens",
"Neil",
""
]
] |
1412.3908 | Valmi Dufour-Lussier | Valmi Dufour-Lussier (INRIA Nancy - Grand Est / LORIA), Alice Hermann
(INRIA Nancy - Grand Est / LORIA), Florence Le Ber (ICube), Jean Lieber
(INRIA Nancy - Grand Est / LORIA) | Belief revision in the propositional closure of a qualitative algebra | null | 14th International Conference on Principles of Knowledge
Representation and Reasoning, Jul 2014, Vienne, Austria. AAAI Press, pp.4 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Belief revision is an operation that aims at modifying old beliefs so that
they become consistent with new ones. The issue of belief revision has been
studied in various formalisms, in particular, in qualitative algebras (QAs) in
which the result is a disjunction of belief bases that is not necessarily
representable in a QA. This motivates the study of belief revision in
formalisms extending QAs, namely, their propositional closures: in such a
closure, the result of belief revision belongs to the formalism. Moreover, this
makes it possible to define a contraction operator thanks to the Harper
identity. Belief revision in the propositional closure of QAs is studied, an
algorithm for a family of revision operators is designed, and an open-source
implementation is made freely available on the web.
| [
{
"version": "v1",
"created": "Fri, 12 Dec 2014 07:52:28 GMT"
}
] | 1,418,601,600,000 | [
[
"Dufour-Lussier",
"Valmi",
"",
"INRIA Nancy - Grand Est / LORIA"
],
[
"Hermann",
"Alice",
"",
"INRIA Nancy - Grand Est / LORIA"
],
[
"Ber",
"Florence Le",
"",
"ICube"
],
[
"Lieber",
"Jean",
"",
"INRIA Nancy - Grand Est / LORIA"
]
] |
1412.4465 | Meysam Ghaffari | Mostafa Sepahvand, Ghasem Alikhajeh, Meysam Ghaffari, Abdolreza
Mirzaei | Generating Graphical Chain by Mutual Matching of Bayesian Network and
Extracted Rules of Bayesian Network Using Genetic Algorithm | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the development of technology, the need for analysis and extraction of useful
information is increasing. Bayesian networks contain knowledge from data and
experts that could be used for decision-making processes, but they are not
easily understandable; thus rule extraction methods have been used, but they
have high computation costs. To overcome this problem, we extract rules from a
Bayesian network using a genetic algorithm. We then generate the graphical chain
by mutually matching the extracted rules and the Bayesian network. This graphical
chain can show the sequence of events that lead to the target, which could
help the decision-making process. The experimental results on small networks
show that the proposed method has results comparable with the brute force method,
which has a significantly higher computation cost.
| [
{
"version": "v1",
"created": "Mon, 15 Dec 2014 05:33:21 GMT"
}
] | 1,418,688,000,000 | [
[
"Sepahvand",
"Mostafa",
""
],
[
"Alikhajeh",
"Ghasem",
""
],
[
"Ghaffari",
"Meysam",
""
],
[
"Mirzaei",
"Abdolreza",
""
]
] |
1412.4802 | Vasile Patrascu | Vasile Patrascu | Neutrosophic information in the framework of multi-valued representation | null | null | 10.13140/2.1.4717.2169 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper presents some steps for multi-valued representation of neutrosophic
information. These steps are provided in the framework of multi-valued logics
using the following logical values: true, false, neutral, unknown and saturated.
Also, this approach provides some calculus formulae for the following
neutrosophic features: truth, falsity, neutrality, ignorance,
under-definedness, over-definedness, saturation and entropy. In addition, net
truth, definedness and neutrosophic score are defined.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2014 16:07:24 GMT"
}
] | 1,418,774,400,000 | [
[
"Patrascu",
"Vasile",
""
]
] |
1412.4972 | Sejun Park | Sejun Park, Jinwoo Shin | Max-Product Belief Propagation for Linear Programming: Applications to
Combinatorial Optimization | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The max-product {belief propagation} (BP) is a popular message-passing
heuristic for approximating a maximum-a-posteriori (MAP) assignment in a joint
distribution represented by a graphical model (GM). In the past years, it has
been shown that BP can solve a few classes of linear programming (LP)
formulations to combinatorial optimization problems including maximum weight
matching, shortest path and network flow, i.e., BP can be used as a
message-passing solver for certain combinatorial optimizations. However, those
LPs and corresponding BP analysis are very sensitive to underlying problem
setups, and it has been not clear what extent these results can be generalized
to. In this paper, we obtain a generic criteria that BP converges to the
optimal solution of given LP, and show that it is satisfied in LP formulations
associated to many classical combinatorial optimization problems including
maximum weight perfect matching, shortest path, traveling salesman, cycle
packing, vertex/edge cover and network flow.
| [
{
"version": "v1",
"created": "Tue, 16 Dec 2014 12:18:34 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Mar 2015 01:43:00 GMT"
},
{
"version": "v3",
"created": "Sun, 4 Oct 2015 06:03:41 GMT"
},
{
"version": "v4",
"created": "Thu, 8 Dec 2016 10:37:48 GMT"
},
{
"version": "v5",
"created": "Wed, 28 Jun 2017 17:15:25 GMT"
}
] | 1,498,694,400,000 | [
[
"Park",
"Sejun",
""
],
[
"Shin",
"Jinwoo",
""
]
] |
1412.5202 | Ridvan Sahin | R{\i}dvan \c{S}ahin | Multi-criteria neutrosophic decision making method based on score and
accuracy functions under neutrosophic environment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A neutrosophic set is a more general platform, which can be used to present
uncertain, imprecise, incomplete and inconsistent information. In this paper, a
score function and an accuracy function for single valued neutrosophic sets are
first proposed to make the distinction between them. Then the idea is
extended to interval neutrosophic sets. A multi-criteria decision making method
based on the developed score-accuracy functions is established in which
criterion values for alternatives are single valued neutrosophic sets and
interval neutrosophic sets. In decision making process, the neutrosophic
weighted aggregation operators (arithmetic and geometric average operators) are
adopted to aggregate the neutrosophic information related to each alternative.
Thus, we can rank all alternatives and select the best one(s)
according to the score-accuracy functions. Finally, some illustrative examples
are presented to verify the developed approach and to demonstrate its
practicality and effectiveness.
| [
{
"version": "v1",
"created": "Wed, 17 Dec 2014 12:15:08 GMT"
}
] | 1,418,860,800,000 | [
[
"Şahin",
"Rıdvan",
""
]
] |
1412.5980 | Swakkhar Shatabda | Mohammad Murtaza Mahmud, Swakkhar Shatabda and Mohammad Nurul Huda | GraATP: A Graph Theoretic Approach for Automated Theorem Proving in
Plane Geometry | The 8th International Conference on Software, Knowledge, Information
Management and Applications (SKIMA 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated Theorem Proving (ATP) is an established branch of Artificial
Intelligence. The purpose of ATP is to design a system which can automatically
figure out an algorithm either to prove or disprove a mathematical claim, on
the basis of a set of given premises, using a set of fundamental postulates and
following the method of logical inference. In this paper, we propose GraATP, a
generalized framework for automated theorem proving in plane geometry. Our
proposed method translates the geometric entities into nodes of a graph and the
relations between them as edges of that graph. The automated system searches
for different ways to reach the conclusion for a claim via graph traversal by
which the validity of the geometric theorem is examined.
| [
{
"version": "v1",
"created": "Thu, 18 Dec 2014 18:10:03 GMT"
}
] | 1,418,947,200,000 | [
[
"Mahmud",
"Mohammad Murtaza",
""
],
[
"Shatabda",
"Swakkhar",
""
],
[
"Huda",
"Mohammad Nurul",
""
]
] |
1412.5984 | Swakkhar Shatabda | Muktadir Hossain, Tajkia Tasnim, Swakkhar Shatabda and Dewan M. Farid | Stochastic Local Search for Pattern Set Mining | The 8th International Conference on Software, Knowledge, Information
Management and Applications (SKIMA 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Local search methods can quickly find good quality solutions in cases where
systematic search methods might take a large amount of time. Moreover, in the
context of pattern set mining, exhaustive search methods are not applicable due
to the large search space they have to explore. In this paper, we propose the
application of stochastic local search to solve the pattern set mining problem,
specifically the task of concept learning. We applied a number of local
search algorithms to standard benchmark instances for pattern set mining, and
the results show the potential for further exploration.
| [
{
"version": "v1",
"created": "Thu, 18 Dec 2014 18:16:52 GMT"
}
] | 1,418,947,200,000 | [
[
"Hossain",
"Muktadir",
""
],
[
"Tasnim",
"Tajkia",
""
],
[
"Shatabda",
"Swakkhar",
""
],
[
"Farid",
"Dewan M.",
""
]
] |
1412.6413 | Gowri Shankar Ramaswamy | Gowri Shankar Ramaswamy, F Sagayaraj Francis | Towards a Consistent, Sound and Complete Conceptual Knowledge | null | International Journal of Computer Trends and Technology (IJCTT)
V17(2):61-63, Nov 2014 | 10.14445/22312803/IJCTT-V17P112 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge is only good if it is sound, consistent and complete. The same
holds true for conceptual knowledge, which holds knowledge about concepts and
their associations. Conceptual knowledge, no matter what format it is
represented in, must be consistent, sound and complete in order to realise its
practical use. This paper discusses consistency, soundness and completeness in
the ambit of conceptual knowledge and the need to consider these factors as
fundamental to the development of conceptual knowledge.
| [
{
"version": "v1",
"created": "Mon, 24 Nov 2014 14:42:16 GMT"
}
] | 1,419,206,400,000 | [
[
"Ramaswamy",
"Gowri Shankar",
""
],
[
"Francis",
"F Sagayaraj",
""
]
] |
1412.6973 | Guangming Lang | Guangming Lang | Decision-theoretic rough sets-based three-way approximations of
interval-valued fuzzy sets | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | In practical situations, interval-valued fuzzy sets are frequently
encountered. In this paper, firstly, we present shadowed sets for interpreting
and understanding interval fuzzy sets. We also provide an analytic solution to
computing the pair of thresholds by searching for a balance of uncertainty in
the framework of shadowed sets. Secondly, we construct errors-based three-way
approximations of interval-valued fuzzy sets. We also provide an alternative
decision-theoretic formulation for calculating the pair of thresholds by
transforming interval-valued loss functions into single-valued loss functions,
in which the required thresholds are computed by minimizing decision costs.
Thirdly, we compute errors-based three-way approximations of interval-valued
fuzzy sets by using interval-valued loss functions. Finally, we employ several
examples to illustrate how to take an action for an object with an
interval-valued membership grade by using interval-valued loss functions.
| [
{
"version": "v1",
"created": "Mon, 22 Dec 2014 13:34:04 GMT"
}
] | 1,419,292,800,000 | [
[
"Lang",
"Guangming",
""
]
] |
1412.7585 | Jia Xu | Jia Xu, Patrick Shironoshita, Ubbo Visser, Nigel John, Mansur Kabuka | Converting Instance Checking to Subsumption: A Rethink for Object
Queries over Practical Ontologies | null | International Journal of Intelligence Science, Vol. 5 No. 1,
44-62, 2015 | 10.4236/ijis.2015.51005 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficiently querying Description Logic (DL) ontologies is becoming a vital
task in various data-intensive DL applications. Considered as a basic service
for answering object queries over DL ontologies, instance checking can be
realized by using the most specific concept (MSC) method, which converts
instance checking into subsumption problems. This method, however, loses its
simplicity and efficiency when applied to large and complex ontologies, as it
tends to generate very large MSC's that could lead to intractable reasoning. In
this paper, we propose a revision to this MSC method for DL SHI, allowing it to
generate much simpler and smaller concepts that are specific-enough to answer a
given query. With independence between computed MSC's, scalability for query
answering can also be achieved by distributing and parallelizing the
computations. An empirical evaluation shows the efficacy of our revised MSC
method and the significant efficiency achieved when using it for answering
object queries.
| [
{
"version": "v1",
"created": "Wed, 24 Dec 2014 02:18:01 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Feb 2015 20:23:48 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Feb 2015 17:18:41 GMT"
}
] | 1,424,995,200,000 | [
[
"Xu",
"Jia",
""
],
[
"Shironoshita",
"Patrick",
""
],
[
"Visser",
"Ubbo",
""
],
[
"John",
"Nigel",
""
],
[
"Kabuka",
"Mansur",
""
]
] |
1412.7961 | Martin Homola | Marjan Alirezaie and Amy Loutfi | Reasoning for Improved Sensor Data Interpretation in a Smart Home | ARCOE-Logic 2014 Workshop Notes, pp. 1-12 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper an ontological representation and reasoning paradigm has been
proposed for interpretation of time-series signals. The signals come from
sensors observing a smart environment. The signal chosen for the annotation
process is a set of unintuitive and complex gas sensor data. The ontology of
this paradigm is inspired by the SSN ontology (Semantic Sensor Network) and
used for representation of both the sensor data and the contextual information.
The interpretation process is mainly done by an incremental ASP solver which as
input receives a logic program that is generated from the contents of the
ontology. The contextual information together with high level domain knowledge
given in the ontology are used to infer explanations (answer sets) for changes
in the ambient air detected by the gas sensors.
| [
{
"version": "v1",
"created": "Fri, 26 Dec 2014 17:38:19 GMT"
}
] | 1,419,897,600,000 | [
[
"Alirezaie",
"Marjan",
""
],
[
"Loutfi",
"Amy",
""
]
] |
1412.7964 | Martin Homola | Loris Bozzato and Luciano Serafini | Knowledge Propagation in Contextualized Knowledge Repositories: an
Experimental Evaluation | ARCOE-Logic 2014 Workshop Notes, pp. 13-24 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the interest in the representation of context dependent knowledge in the
Semantic Web has been recognized, a number of logic based solutions have been
proposed in this regard. In our recent works, in response to this need, we
presented the description logic-based Contextualized Knowledge Repository (CKR)
framework. CKR is not only a theoretical framework, but it has been effectively
implemented over state-of-the-art tools for the management of Semantic Web
data: inference inside and across contexts has been realized in the form of
forward SPARQL-based rules over different RDF named graphs. In this paper we
present the first evaluation results for this CKR implementation. In
particular, in a first experiment we study its scalability with respect to
different reasoning regimes. In a second experiment we analyze the effects of
knowledge propagation on the computation of inferences.
| [
{
"version": "v1",
"created": "Fri, 26 Dec 2014 18:00:45 GMT"
}
] | 1,419,897,600,000 | [
[
"Bozzato",
"Loris",
""
],
[
"Serafini",
"Luciano",
""
]
] |
1412.7965 | Martin Homola | Diego Calvanese, \.Ismail \.Ilkan Ceylan, Marco Montali, and Ario
Santoso | Adding Context to Knowledge and Action Bases | ARCOE-Logic 2014 Workshop Notes, pp. 25-36 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge and Action Bases (KABs) have been recently proposed as a formal
framework to capture the dynamics of systems which manipulate Description Logic
(DL) Knowledge Bases (KBs) through action execution. In this work, we enrich
the KAB setting with contextual information, making use of different context
dimensions. On the one hand, context is determined by the environment using
context-changing actions that make use of the current state of the KB and the
current context. On the other hand, it affects the set of TBox assertions that
are relevant at each time point, and that have to be considered when processing
queries posed over the KAB. Here we extend to our enriched setting the results
on verification of rich temporal properties expressed in mu-calculus, which had
been established for standard KABs. Specifically, we show that under a
run-boundedness condition, verification stays decidable.
| [
{
"version": "v1",
"created": "Fri, 26 Dec 2014 18:14:20 GMT"
}
] | 1,419,897,600,000 | [
[
"Calvanese",
"Diego",
""
],
[
"Ceylan",
"İsmail İlkan",
""
],
[
"Montali",
"Marco",
""
],
[
"Santoso",
"Ario",
""
]
] |
1412.7967 | Martin Homola | Martin Homola and Theodore Patkos | Different Types of Conflicting Knowledge in AmI Environments | ARCOE-Logic 2014 Workshop Notes, pp. 37-43 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We characterize different types of conflicts that may occur in complex
distributed multi-agent scenarios, such as in Ambient Intelligence (AmI)
environments, and we argue that these conflicts should be resolved in a
suitable order and with the appropriate strategies for each individual conflict
type. We call for further research with the goal of turning conflict resolution
in AmI environments and similar multi-agent domains into a more coordinated and
agreed upon process.
| [
{
"version": "v1",
"created": "Fri, 26 Dec 2014 18:21:20 GMT"
}
] | 1,419,897,600,000 | [
[
"Homola",
"Martin",
""
],
[
"Patkos",
"Theodore",
""
]
] |
1412.7968 | Martin Homola | Martin Ringsquandl, Steffen Lamparter, and Raffaello Lepratti | Context-Aware Analytics in MOM Applications | ARCOE-Logic 2014 Workshop Notes, pp. 44-49 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Manufacturing Operations Management (MOM) systems are complex in the sense
that they integrate data from heterogeneous systems inside the automation
pyramid. The need for context-aware analytics arises from the dynamics of these
systems that influence data generation and hamper comparability of analytics,
especially predictive models (e.g. predictive maintenance), where concept drift
affects application of these models in the future. Recently, an increasing
amount of research has been directed towards data integration using semantic
context models. Manual construction of such context models is an elaborate and
error-prone task. Therefore, we pose the challenge to apply combinations of
knowledge extraction techniques in the domain of analytics in MOM, which
comprises the scope of data integration within Product Life-cycle Management
(PLM), Enterprise Resource Planning (ERP), and Manufacturing Execution Systems
(MES). We describe motivations, technological challenges and show benefits of
context-aware analytics, which leverage and regard the interconnectedness
of semantic context data. Our example scenario shows the need for distribution
and effective change tracking of context information.
| [
{
"version": "v1",
"created": "Fri, 26 Dec 2014 18:32:58 GMT"
}
] | 1,419,897,600,000 | [
[
"Ringsquandl",
"Martin",
""
],
[
"Lamparter",
"Steffen",
""
],
[
"Lepratti",
"Raffaello",
""
]
] |
1412.8529 | Jose Hernandez-Orallo | Jose Hernandez-Orallo | A note about the generalisation of the C-tests | 16 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this exploratory note we ask the question of what a measure of performance
for all tasks is like if we use a weighting of tasks based on a difficulty
function. This difficulty function depends on the complexity of the
(acceptable) solution for the task (instead of a universal distribution over
tasks or an adaptive test). The resulting aggregations and decompositions are
(now retrospectively) seen as the natural (and trivial) interactive
generalisation of the C-tests.
| [
{
"version": "v1",
"created": "Tue, 30 Dec 2014 01:48:10 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Mar 2015 00:27:30 GMT"
}
] | 1,427,414,400,000 | [
[
"Hernandez-Orallo",
"Jose",
""
]
] |
1412.8531 | Martin Homola | Michael Fink, Martin Homola, and Alessandra Mileo | Workshop Notes of the 6th International Workshop on Acquisition,
Representation and Reasoning about Context with Logic (ARCOE-Logic 2014) | ARCOE-Logic 2014, 5 papers | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ARCOE-Logic 2014, the 6th International Workshop on Acquisition,
Representation and Reasoning about Context with Logic, was held in co-location
with the 19th International Conference on Knowledge Engineering and Knowledge
Management (EKAW 2014) on November 25, 2014 in Link\"oping, Sweden. These notes
contain the five papers which were accepted and presented at the workshop.
| [
{
"version": "v1",
"created": "Tue, 30 Dec 2014 01:55:41 GMT"
}
] | 1,419,984,000,000 | [
[
"Fink",
"Michael",
""
],
[
"Homola",
"Martin",
""
],
[
"Mileo",
"Alessandra",
""
]
] |
1501.00601 | Eray Ozkural | Eray \"Ozkural | Ultimate Intelligence Part I: Physical Completeness and Objectivity of
Induction | Under review at AGI-2015 conference. An early draft was submitted to
ALT-2014. This paper is now being split into two papers, one philosophical,
and one more technical. We intend that all installments of the paper series
will be on the arxiv | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose that Solomonoff induction is complete in the physical sense via
several strong physical arguments. We also argue that Solomonoff induction is
fully applicable to quantum mechanics. We show how to choose an objective
reference machine for universal induction by defining a physical message
complexity and physical message probability, and argue that this choice
dissolves some well-known objections to universal induction. We also introduce
many more variants of physical message complexity based on energy and action,
and discuss the ramifications of our proposals.
| [
{
"version": "v1",
"created": "Sat, 3 Jan 2015 20:19:57 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Apr 2015 15:47:22 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Apr 2015 18:36:27 GMT"
}
] | 1,428,883,200,000 | [
[
"Özkural",
"Eray",
""
]
] |
1501.01178 | Benjamin Negrevergne | Benjamin Negrevergne and Tias Guns | Constraint-based sequence mining using constraint programming | In Integration of AI and OR Techniques in Constraint Programming
(CPAIOR), 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of constraint-based sequence mining is to find sequences of symbols
that are included in a large number of input sequences and that satisfy some
constraints specified by the user. Many constraints have been proposed in the
literature, but a general framework is still missing. We investigate the use of
constraint programming as general framework for this task. We first identify
four categories of constraints that are applicable to sequence mining. We then
propose two constraint programming formulations. The first formulation
introduces a new global constraint called exists-embedding. This formulation is
the most efficient but does not support one type of constraint. To support such
constraints, we develop a second formulation that is more general but incurs
more overhead. Both formulations can use the projected database technique used
in specialised algorithms. Experiments demonstrate the flexibility towards
constraint-based settings and compare the approach to existing methods.
| [
{
"version": "v1",
"created": "Tue, 6 Jan 2015 13:47:24 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jan 2015 13:50:53 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Feb 2015 16:31:27 GMT"
}
] | 1,424,908,800,000 | [
[
"Negrevergne",
"Benjamin",
""
],
[
"Guns",
"Tias",
""
]
] |
1501.02732 | Ilya Goldin | April Galyardt and Ilya Goldin | Predicting Performance During Tutoring with Models of Recent Performance | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In educational technology and learning sciences, there are multiple uses for
a predictive model of whether a student will perform a task correctly or not.
For example, an intelligent tutoring system may use such a model to estimate
whether or not a student has mastered a skill. We analyze the significance of
data recency in making such predictions, i.e., asking whether relatively more
recent observations of a student's performance matter more than relatively
older observations. We develop a new Recent-Performance Factors Analysis model
that takes data recency into account. The new model significantly improves
predictive accuracy over both existing logistic-regression performance models
and over novel baseline models in evaluations on real-world and synthetic
datasets. As a secondary contribution, we demonstrate how the widely used
cross-validation with 0-1 loss is inferior to AIC and to cross-validation with
L1 prediction error loss as a measure of model performance.
| [
{
"version": "v1",
"created": "Mon, 12 Jan 2015 17:39:53 GMT"
}
] | 1,421,107,200,000 | [
[
"Galyardt",
"April",
""
],
[
"Goldin",
"Ilya",
""
]
] |
1501.03784 | Denis Kleyko | Denis Kleyko, Evgeny Osipov, Alexander Senior, Asad I. Khan and Y.
Ahmet \c{S}ekercio\u{g}lu | Holographic Graph Neuron: a Bio-Inspired Architecture for Pattern
Processing | 9 pages, 13 figures | IEEE Transactions on Neural Networks and Learning Systems 28
(2017) 1250 - 1262 | 10.1109/TNNLS.2016.2535338 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article proposes the use of Vector Symbolic Architectures for
implementing Hierarchical Graph Neuron, an architecture for memorizing patterns
of generic sensor stimuli. The adoption of a Vector Symbolic representation
ensures a one-layered design for the approach, while maintaining the previously
reported properties and performance characteristics of Hierarchical Graph
Neuron, and also improving the noise resistance of the architecture. The
proposed architecture enables a linear (with respect to the number of stored
entries) time search for an arbitrary sub-pattern.
| [
{
"version": "v1",
"created": "Thu, 15 Jan 2015 19:25:32 GMT"
}
] | 1,565,654,400,000 | [
[
"Kleyko",
"Denis",
""
],
[
"Osipov",
"Evgeny",
""
],
[
"Senior",
"Alexander",
""
],
[
"Khan",
"Asad I.",
""
],
[
"Şekercioğlu",
"Y. Ahmet",
""
]
] |
1501.04177 | Andrea Schaerf | Sara Ceschia, Nguyen Thi Thanh Dang, Patrick De Causmaecker, Stefaan
Haspeslagh, Andrea Schaerf | Second International Nurse Rostering Competition (INRC-II) --- Problem
Description and Rules --- | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we provide all information to participate to the Second
International Nurse Rostering Competition (INRC-II). First, we describe the
problem formulation, which, differently from INRC-I, is a multi-stage
procedure. Second, we illustrate all the necessary infrastructure to be used
together with the participant's solver, including the testbed, the file
formats, and the validation/simulation tools. Finally, we state the rules of
the competition. All up-to-date information about the competition is
available at http://mobiz.vives.be/inrc2/.
| [
{
"version": "v1",
"created": "Sat, 17 Jan 2015 09:06:08 GMT"
}
] | 1,421,712,000,000 | [
[
"Ceschia",
"Sara",
""
],
[
"Dang",
"Nguyen Thi Thanh",
""
],
[
"De Causmaecker",
"Patrick",
""
],
[
"Haspeslagh",
"Stefaan",
""
],
[
"Schaerf",
"Andrea",
""
]
] |
1501.04242 | Hector Zenil | Nicolas Gauvrit, Hector Zenil, Jesper Tegn\'er | The Information-theoretic and Algorithmic Approach to Human, Animal and
Artificial Cognition | 22 pages. Forthcoming in Gordana Dodig-Crnkovic and Raffaela
Giovagnoli (eds). Representation and Reality: Humans, Animals and Machines,
Springer Verlag | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We survey concepts at the frontier of research connecting artificial, animal
and human cognition to computation and information processing---from the Turing
test to Searle's Chinese Room argument, from Integrated Information Theory to
computational and algorithmic complexity. We start by arguing that passing the
Turing test is a trivial computational problem and that its pragmatic
difficulty sheds light on the computational nature of the human mind more than
it does on the challenge of artificial intelligence. We then review our
proposed algorithmic information-theoretic measures for quantifying and
characterizing cognition in various forms. These are capable of accounting for
known biases in human behavior, thus vindicating a computational algorithmic
view of cognition as first suggested by Turing, but this time rooted in the
concept of algorithmic probability, which in turn is based on computational
universality while being independent of computational model, and which has the
virtue of being predictive and testable as a model theory of cognitive
behavior.
| [
{
"version": "v1",
"created": "Sat, 17 Jan 2015 22:55:48 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jan 2015 23:55:43 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Jan 2015 01:23:36 GMT"
},
{
"version": "v4",
"created": "Wed, 28 Jan 2015 15:51:30 GMT"
},
{
"version": "v5",
"created": "Thu, 24 Dec 2015 13:54:22 GMT"
}
] | 1,451,001,600,000 | [
[
"Gauvrit",
"Nicolas",
""
],
[
"Zenil",
"Hector",
""
],
[
"Tegnér",
"Jesper",
""
]
] |
1501.04786 | Arnaud Martin | Mouna Chebbah (IRISA), Mouloud Kharoune (IRISA), Arnaud Martin
(IRISA), Boutheina Ben Yaghlane | Consid{\'e}rant la d{\'e}pendance dans la th{\'e}orie des fonctions de
croyance | in French | Revue des Nouvelles Technologies Informatiques (RNTI), 2014,
Fouille de donn{\'e}es complexes, RNTI-E-27, pp.43-64 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose to learn sources independence in order to choose
the appropriate type of combination rules when aggregating their beliefs. Some
combination rules are used with the assumption of their sources independence
whereas others combine beliefs of dependent sources. Therefore, the choice of
the combination rule depends on the independence of sources involved in the
combination. In this paper, we propose also a measure of independence, positive
and negative dependence to integrate in mass functions before the combinaision
with the independence assumption.
| [
{
"version": "v1",
"created": "Tue, 20 Jan 2015 12:48:41 GMT"
}
] | 1,421,798,400,000 | [
[
"Chebbah",
"Mouna",
"",
"IRISA"
],
[
"Kharoune",
"Mouloud",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
],
[
"Yaghlane",
"Boutheina Ben",
""
]
] |
1501.05272 | Arnaud Martin | Imen Ouled Dlala (IRISA), Dorra Attiaoui (IRISA), Arnaud Martin
(IRISA), Boutheina Ben Yaghlane | Trolls Identification within an Uncertain Framework | International Conference on Tools with Artificial Intelligence -
ICTAI , Nov 2014, Limassol, Cyprus | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The web plays an important role in people's social lives since the emergence
of Web 2.0. It facilitates the interaction between users, gives them the
possibility to freely interact, share and collaborate through social networks,
online communities forums, blogs, wikis and other online collaborative media.
However, another side of the web is used negatively, such as for posting
inflammatory messages. Thus, when dealing with online community forums,
the managers seek to always enhance the performance of such platforms. In fact,
to keep the serenity and prohibit the disturbance of the normal atmosphere,
managers always try to warn novice users against these malicious persons by
posting messages such as "DO NOT FEED TROLLS". But this kind of warning is not enough to
reduce this phenomenon. In this context we propose a new approach for detecting
malicious people also called 'Trolls' in order to allow community managers to
take their ability to post online. To be more realistic, our proposal is
defined within an uncertain framework. Based on the assumption consisting on
the trolls' integration in the successful discussion threads, we try to detect
the presence of such malicious users. Indeed, this method is based on a
conflict measure of the belief function theory applied between the different
messages of the thread. In order to show the feasibility and the result of our
approach, we test it on different simulated data.
| [
{
"version": "v1",
"created": "Wed, 21 Jan 2015 19:34:23 GMT"
}
] | 1,421,884,800,000 | [
[
"Dlala",
"Imen Ouled",
"",
"IRISA"
],
[
"Attiaoui",
"Dorra",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
],
[
"Yaghlane",
"Boutheina Ben",
""
]
] |
1501.05530 | Arnaud Martin | Siwar Jendoubi (IRISA), Boutheina Ben Yaghlane, Arnaud Martin (IRISA) | Belief Hidden Markov Model for speech recognition | null | International Conference on Modeling, Simulation and Applied
Optimization (ICMSAO), Apr 2013, Hammamet, Tunisia. pp.1 - 6 | 10.1109/ICMSAO.2013.6552563 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speech recognition seeks to predict the spoken words automatically. These
systems are known to be very expensive because of using several pre-recorded
hours of speech. Hence, building a model that minimizes the cost of the
recognizer will be very interesting. In this paper, we present a new approach
for recognizing speech based on belief HMMs instead of probabilistic HMMs.
Experiments show that our belief recognizer is insensitive to a lack of
data: it can be trained using only one exemplar of each acoustic unit and
it gives good recognition rates. Consequently, using the belief HMM
recognizer can greatly minimize the cost of these systems.
| [
{
"version": "v1",
"created": "Thu, 22 Jan 2015 15:20:28 GMT"
}
] | 1,421,971,200,000 | [
[
"Jendoubi",
"Siwar",
"",
"IRISA"
],
[
"Yaghlane",
"Boutheina Ben",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
]
] |
1501.05612 | Arnaud Martin | Anthony Fiche, Jean-Christophe Cexus, Arnaud Martin (IRISA), Ali
Khenchaf | Features modeling with an $\alpha$-stable distribution: Application to
pattern recognition based on continuous belief functions | null | Information Fusion, Elsevier, 2013, 14, pp.504 - 520 | 10.1016/j.inffus.2013.02.004 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to show the interest in fitting features with an
$\alpha$-stable distribution to classify imperfect data. The supervised pattern
recognition is thus based on the theory of continuous belief functions, which
is a way to consider imprecision and uncertainty of data. The distributions of
features are supposed to be unimodal and estimated by a single Gaussian and
$\alpha$-stable model. Experimental results are first obtained from synthetic
data by combining two features of one dimension and by considering a vector of
two features. Mass functions are calculated from plausibility functions by
using the generalized Bayes theorem. The same study is applied to the automatic
classification of three types of sea floor (rock, silt and sand) with features
acquired by a mono-beam echo-sounder. We evaluate the quality of the
$\alpha$-stable model and the Gaussian model by analyzing qualitative results,
using a Kolmogorov-Smirnov test (K-S test), and quantitative results with
classification rates. The performances of the belief classifier are compared
with a Bayesian approach.
| [
{
"version": "v1",
"created": "Thu, 22 Jan 2015 19:55:58 GMT"
}
] | 1,421,971,200,000 | [
[
"Fiche",
"Anthony",
"",
"IRISA"
],
[
"Cexus",
"Jean-Christophe",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
],
[
"Khenchaf",
"Ali",
""
]
] |
1501.05613 | Arnaud Martin | Jungyeul Park (IRISA), Mouna Chebbah (IRISA), Siwar Jendoubi (IRISA),
Arnaud Martin (IRISA) | Second-Order Belief Hidden Markov Models | null | Belief 2014, Sep 2014, Oxford, United Kingdom. pp.284 - 293 | 10.1007/978-3-319-11191-9_31 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hidden Markov Models (HMMs) are learning methods for pattern recognition. The
probabilistic HMMs have been one of the most used techniques based on the
Bayesian model. First-order probabilistic HMMs were adapted to the theory of
belief functions such that Bayesian probabilities were replaced with mass
functions. In this paper, we present a second-order Hidden Markov Model using
belief functions. Previous works in belief HMMs have been focused on the
first-order HMMs. We extend them to the second-order model.
| [
{
"version": "v1",
"created": "Thu, 22 Jan 2015 19:56:34 GMT"
}
] | 1,421,971,200,000 | [
[
"Park",
"Jungyeul",
"",
"IRISA"
],
[
"Chebbah",
"Mouna",
"",
"IRISA"
],
[
"Jendoubi",
"Siwar",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
]
] |
1501.05614 | Arnaud Martin | Mouloud Kharoune (IRISA), Arnaud Martin (IRISA) | Int{\'e}gration d'une mesure d'ind{\'e}pendance pour la fusion
d'informations | in French, appears in Atelier Fouille de donn{\'e}es complexes,
Extraction et Gestion des Connaissances (EGC), Jan 2013, Toulouse, France.
arXiv admin note: substantial text overlap with arXiv:1501.04786 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many information sources are considered in data fusion in order to improve
the decision in terms of uncertainty and imprecision. For each technique used
for data fusion, the assumption of independence is usually made. We propose in
this article an approach to take into account an independence measure before
combining information in the context of the theory of belief
functions.
| [
{
"version": "v1",
"created": "Thu, 22 Jan 2015 19:57:59 GMT"
}
] | 1,421,971,200,000 | [
[
"Kharoune",
"Mouloud",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
]
] |
1501.05724 | Arnaud Martin | Amira Essaid, Arnaud Martin (IRISA), Gr\'egory Smits, Boutheina Ben
Yaghlane | Uncertainty in Ontology Matching: A Decision Rule-Based Approach | null | International Conference on Information Processing and Management
of Uncertainty in Knowledge-Based Systems (IPMU), Jul 2014, Montpellier,
France. pp.46 - 55 | 10.1007/978-3-319-08795-5_6 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considering the high heterogeneity of the ontologies published on the web,
ontology matching is a crucial issue whose aim is to establish links between an
entity of a source ontology and one or several entities from a target ontology.
Perfectible similarity measures, considered as sources of information, are
combined to establish these links. The theory of belief functions is a powerful
mathematical tool for combining such uncertain information. In this paper, we
introduce a decision process based on a distance measure to identify the best
possible matching entities for a given source entity.
| [
{
"version": "v1",
"created": "Fri, 23 Jan 2015 07:17:37 GMT"
}
] | 1,422,230,400,000 | [
[
"Essaid",
"Amira",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
],
[
"Smits",
"Grégory",
""
],
[
"Yaghlane",
"Boutheina Ben",
""
]
] |
1501.05882 | Anand Subramanian D.Sc. | Anand Subramanian, Katyanne Farias | Efficient local search limitation strategy for single machine total
weighted tardiness scheduling with sequence-dependent setup times | 32 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper concerns the single machine total weighted tardiness scheduling
with sequence-dependent setup times, usually referred as $1|s_{ij}|\sum
w_jT_j$. In this $\mathcal{NP}$-hard problem, each job has an associated
processing time, due date and a weight. For each pair of jobs $i$ and $j$,
there may be a setup time before starting to process $j$ in case this job is
scheduled immediately after $i$. The objective is to determine a schedule that
minimizes the total weighted tardiness, where the tardiness of a job is equal
to its completion time minus its due date, in case the job is completely
processed only after its due date, and is equal to zero otherwise. Due to its
complexity, this problem is most commonly solved by heuristics. The aim of this
work is to develop a simple yet effective limitation strategy that speeds up
the local search procedure without a significant loss in the solution quality.
Such a strategy consists of a filtering mechanism that prevents unpromising moves
from being evaluated. The proposed strategy has been embedded in a local search
based metaheuristic from the literature and tested in classical benchmark
instances. Computational experiments revealed that the limitation strategy
enabled the metaheuristic to be extremely competitive when compared to other
algorithms from the literature, since it allowed the use of a large number of
neighborhood structures without a significant increase in the CPU time and,
consequently, high quality solutions could be achieved in a matter of seconds.
In addition, we analyzed the effectiveness of the proposed strategy in two
other well-known metaheuristics. Further experiments were also carried out on
benchmark instances of problem $1|s_{ij}|\sum T_j$.
| [
{
"version": "v1",
"created": "Fri, 23 Jan 2015 17:20:50 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jan 2015 02:41:26 GMT"
},
{
"version": "v3",
"created": "Mon, 30 Nov 2015 21:13:24 GMT"
}
] | 1,449,014,400,000 | [
[
"Subramanian",
"Anand",
""
],
[
"Farias",
"Katyanne",
""
]
] |
1501.05917 | Igor Subbotin | Igor Yakov Subbotin | On Generalized Rectangular Fuzzy Model for Assessment | arXiv admin note: text overlap with arXiv:1404.7279 by other authors | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The article is dedicated to the analysis of the existing models for
assessment based on the fuzzy logic centroid technique. A new Generalized
Rectangular Model was developed. Some generalizations of the existing models
are offered.
| [
{
"version": "v1",
"created": "Mon, 5 Jan 2015 19:54:57 GMT"
}
] | 1,422,230,400,000 | [
[
"Subbotin",
"Igor Yakov",
""
]
] |
1501.06705 | Arnaud Martin | Dorra Attiaoui (IRISA), Pierre-Emmanuel Dor\'e, Arnaud Martin (IRISA),
Boutheina Ben Yaghlane | Inclusion within Continuous Belief Functions | International Conference on Information Fusion - (FUSION 2013), Jul
2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Defining and modeling the relation of inclusion between continuous belief
functions may be considered an important operation in order to study their
behaviors. Within this paper we will propose and present two forms of
inclusion: The strict and the partial one. In order to develop this relation,
we will study the case of consonant belief function. To do so, we will simulate
normal distributions allowing us to model and analyze these relations. Based on
that, we will determine the parameters influencing and characterizing the two
forms of inclusion.
| [
{
"version": "v1",
"created": "Tue, 27 Jan 2015 09:23:23 GMT"
}
] | 1,422,403,200,000 | [
[
"Attiaoui",
"Dorra",
"",
"IRISA"
],
[
"Doré",
"Pierre-Emmanuel",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
],
[
"Yaghlane",
"Boutheina Ben",
""
]
] |
1501.07008 | Arnaud Martin | Amira Essaid (IRISA), Arnaud Martin (IRISA), Gr\'egory Smits,
Boutheina Ben Yaghlane | A Distance-Based Decision in the Credal Level | null | International Conference on Artificial Intelligence and Symbolic
Computation (AISC 2014), Dec 2014, Sevilla, Spain. pp.147 - 156 | 10.1007/978-3-319-13770-4_13 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Belief function theory provides a flexible way to combine information
provided by different sources. This combination is usually followed by a
decision making which can be handled by a range of decision rules. Some rules
help to choose the most likely hypothesis. Others allow that a decision is made
on a set of hypotheses. In [6], we proposed a decision rule based on a distance
measure. First, in this paper, we aim to demonstrate that our proposed decision
rule is a particular case of the rule proposed in [4]. Second, we give
experiments showing that our rule is able to decide on a set of hypotheses.
Some experiments are handled on a set of mass functions generated randomly,
others on real databases.
| [
{
"version": "v1",
"created": "Wed, 28 Jan 2015 07:24:12 GMT"
}
] | 1,422,489,600,000 | [
[
"Essaid",
"Amira",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
],
[
"Smits",
"Grégory",
""
],
[
"Yaghlane",
"Boutheina Ben",
""
]
] |
1501.07250 | Alejandro Torre\~no | Alejandro Torre\~no, Eva Onaindia, \'Oscar Sapena | FMAP: Distributed Cooperative Multi-Agent Planning | 21 pages, 11 figures | Applied Intelligence, Volume 41, Issue 2, pp. 606-626, Year 2014 | 10.1007/s10489-014-0540-2 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes FMAP (Forward Multi-Agent Planning), a fully-distributed
multi-agent planning method that integrates planning and coordination. Although
FMAP is specifically aimed at solving problems that require cooperation among
agents, the flexibility of the domain-independent planning model allows FMAP to
tackle multi-agent planning tasks of any type. In FMAP, agents jointly explore
the plan space by building up refinement plans through a complete and flexible
forward-chaining partial-order planner. The search is guided by $h_{DTG}$, a
novel heuristic function that is based on the concepts of Domain Transition
Graph and frontier state and is optimized to evaluate plans in distributed
environments. Agents in FMAP apply an advanced privacy model that allows them
to adequately keep private information while communicating only the data of the
refinement plans that is relevant to each of the participating agents.
Experimental results show that FMAP is a general-purpose approach that
efficiently solves tightly-coupled domains that have specialized agents and
cooperative goals as well as loosely-coupled problems. Specifically, the
empirical evaluation shows that FMAP outperforms current MAP systems at solving
complex planning tasks that are adapted from the International Planning
Competition benchmarks.
| [
{
"version": "v1",
"created": "Wed, 28 Jan 2015 19:38:35 GMT"
}
] | 1,422,576,000,000 | [
[
"Torreño",
"Alejandro",
""
],
[
"Onaindia",
"Eva",
""
],
[
"Sapena",
"Óscar",
""
]
] |
1501.07256 | Alejandro Torre\~no | Alejandro Torre\~no, Eva Onaindia, \'Oscar Sapena | An approach to multi-agent planning with incomplete information | 6 pages, 2 figures | 20th European Conference of Artificial Intelligence (ECAI 2012),
Volume 242, pp. 762-767, Year 2012 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-agent planning (MAP) approaches have been typically conceived for
independent or loosely-coupled problems to enhance the benefits of distributed
planning between autonomous agents as solving this type of problems require
less coordination between the agents' sub-plans. However, when it comes to
tightly-coupled agents' tasks, MAP has been relegated in favour of centralized
approaches and little work has been done in this direction. In this paper, we
present a general-purpose MAP capable to efficiently handle planning problems
with any level of coupling between agents. We propose a cooperative refinement
planning approach, built upon the partial-order planning paradigm, that allows
agents to work with incomplete information and to have incomplete views of the
world, i.e. being ignorant of other agents' information, as well as maintaining
their own private information. We show various experiments to compare the
performance of our system with a distributed CSP-based MAP approach over a
suite of problems.
| [
{
"version": "v1",
"created": "Wed, 28 Jan 2015 20:02:14 GMT"
}
] | 1,422,576,000,000 | [
[
"Torreño",
"Alejandro",
""
],
[
"Onaindia",
"Eva",
""
],
[
"Sapena",
"Óscar",
""
]
] |
1501.07423 | Alejandro Torre\~no | Alejandro Torre\~no, Eva Onaindia, \'Oscar Sapena | A Flexible Coupling Approach to Multi-Agent Planning under Incomplete
Information | 40 pages, 10 figures | Knowledge and Information Systems, Volume 38, Issue 1, pp.
141-178, Year 2014 | 10.1007/s10115-012-0569-7 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-agent planning (MAP) approaches are typically oriented at solving
loosely-coupled problems, and are ineffective at dealing with more complex,
strongly-related problems. In most cases, agents work under complete
information, building complete knowledge bases. The present article introduces
a general-purpose MAP framework designed to tackle problems of any coupling
levels under incomplete information. Agents in our MAP model are partially
unaware of the information managed by the rest of agents and share only the
critical information that affects other agents, thus maintaining a distributed
vision of the task.
| [
{
"version": "v1",
"created": "Thu, 29 Jan 2015 11:56:41 GMT"
}
] | 1,422,576,000,000 | [
[
"Torreño",
"Alejandro",
""
],
[
"Onaindia",
"Eva",
""
],
[
"Sapena",
"Óscar",
""
]
] |
1502.00152 | Samantha Leung | Joseph Y. Halpern, Samantha Leung | Minimizing Regret in Dynamic Decision Problems | Full version | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The menu-dependent nature of regret-minimization creates subtleties when it
is applied to dynamic decision problems. Firstly, it is not clear whether
\emph{forgone opportunities} should be included in the \emph{menu}, with
respect to which regrets are computed, at different points of the decision
problem. If forgone opportunities are included, however, we can characterize
when a form of dynamic consistency is guaranteed. Secondly, more subtleties
arise when sophistication is used to deal with dynamic inconsistency. In the
full version of this paper, we examine, axiomatically and by common examples,
the implications of different menu definitions for sophisticated,
regret-minimizing agents.
| [
{
"version": "v1",
"created": "Sat, 31 Jan 2015 19:17:54 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Jun 2015 16:35:53 GMT"
}
] | 1,434,672,000,000 | [
[
"Halpern",
"Joseph Y.",
""
],
[
"Leung",
"Samantha",
""
]
] |
1502.01497 | Tomas Teijeiro | Tom\'as Teijeiro, Paulo F\'elix and Jes\'us Presedo | Using temporal abduction for biosignal interpretation: A case study on
QRS detection | 7 pages, Healthcare Informatics (ICHI), 2014 IEEE International
Conference on | Proceedings of the 2014 IEEE International Conference on
Healthcare Informatics (ICHI) (pp. 334-339). IEEE | 10.1109/ICHI.2014.52 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose an abductive framework for biosignal interpretation,
based on the concept of Temporal Abstraction Patterns. A temporal abstraction
pattern defines an abstraction relation between an observation hypothesis and a
set of observations constituting its evidence support. New observations are
generated abductively from any subset of the evidence of a pattern, building an
abstraction hierarchy of observations in which higher levels contain those
observations with greater interpretative value of the physiological processes
underlying a given signal. Non-monotonic reasoning techniques have been applied
to this model in order to find the best interpretation of a set of initial
observations, permitting even to correct these observations by removing, adding
or modifying them in order to make them consistent with the available domain
knowledge. Some preliminary experiments have been conducted to apply this
framework to a well known and bounded problem: the QRS detection on ECG
signals. The objective is not to provide a new better QRS detector, but to test
the validity of an abductive paradigm. These experiments show that a knowledge
base comprising just a few very simple rhythm abstraction patterns can enhance
the results of a state of the art algorithm by significantly improving its
detection F1-score, besides proving the ability of the abductive framework to
correct both sensitivity and specificity failures.
| [
{
"version": "v1",
"created": "Thu, 5 Feb 2015 10:57:07 GMT"
}
] | 1,639,008,000,000 | [
[
"Teijeiro",
"Tomás",
""
],
[
"Félix",
"Paulo",
""
],
[
"Presedo",
"Jesús",
""
]
] |
1502.02193 | Liane Gabora | Liane Gabora | The Silver Lining Around Fearful Living | 4 pages, Psychology Today (online).
https://www.psychologytoday.com/blog/mindbloggling/201502/the-silver-lining-around-fearful-living-0
(2015) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses in layperson's terms human and computational studies of
the impact of threat and fear on exploration and creativity. A first study
showed that both killifish from a lake with predators and from a lake without
predators explore a new environment to the same degree and plotting number of
new spaces covered over time generates a hump-shaped curve. However, for the
fish from the lake with predators the curve is shifted to the right; they take
longer. This pattern was replicated by a computer model of exploratory behavior
varying only one parameter, the fear parameter. A second study showed that
stories inspired by threatening photographs were rated as more creative than
stories inspired by non-threatening photographs. Various explanations for the
findings are discussed.
| [
{
"version": "v1",
"created": "Sat, 7 Feb 2015 23:27:48 GMT"
}
] | 1,423,526,400,000 | [
[
"Gabora",
"Liane",
""
]
] |
1502.02298 | Jamal Atif | Marc Aiguier and Jamal Atif and Isabelle Bloch and C\'eline Hudelot | Belief Revision, Minimal Change and Relaxation: A General Framework
based on Satisfaction Systems, and Applications to Description Logics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Belief revision of knowledge bases represented by a set of sentences in a
given logic has been extensively studied but for specific logics, mainly
propositional, and also recently Horn and description logics. Here, we propose
to generalize this operation from a model-theoretic point of view, by defining
revision in an abstract model theory known under the name of satisfaction
systems. In this framework, we generalize to any satisfaction systems the
characterization of the well known AGM postulates given by Katsuno and
Mendelzon for propositional logic in terms of minimal change among
interpretations. Moreover, we study how to define revision, satisfying the AGM
postulates, from relaxation notions that have been first introduced in
description logics to define dissimilarity measures between concepts, and the
consequence of which is to relax the set of models of the old belief until it
becomes consistent with the new pieces of knowledge. We show how the proposed
general framework can be instantiated in different logics such as
propositional, first-order, description and Horn logics. In particular for
description logics, we introduce several concrete relaxation operators tailored
for the description logic $\ALC{}$ and its fragments $\EL{}$ and $\ELext{}$,
discuss their properties and provide some illustrative examples.
| [
{
"version": "v1",
"created": "Sun, 8 Feb 2015 20:26:10 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Jan 2017 20:46:28 GMT"
}
] | 1,484,611,200,000 | [
[
"Aiguier",
"Marc",
""
],
[
"Atif",
"Jamal",
""
],
[
"Bloch",
"Isabelle",
""
],
[
"Hudelot",
"Céline",
""
]
] |
1502.02414 | Schiex Thomas | David Allouche, Christian Bessiere, Patrice Boizumault, Simon de
Givry, Patricia Gutierrez, Jimmy H.M. Lee, Kam Lun Leung, Samir Loudni,
Jean-Philippe M\'etivier, Thomas Schiex, Yi Wu | Tractability and Decompositions of Global Cost Functions | 45 pages for the main paper, extra Appendix with examples of
DAG-decomposed global cost functions | null | 10.1016/j.artint.2016.06.005 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enforcing local consistencies in cost function networks is performed by
applying so-called Equivalent Preserving Transformations (EPTs) to the cost
functions. As EPTs transform the cost functions, they may break the property
that was making local consistency enforcement tractable on a global cost
function. A global cost function is called tractable projection-safe when
applying an EPT to it is tractable and does not break the tractability
property. In this paper, we prove that depending on the size r of the smallest
scopes used for performing EPTs, the tractability of global cost functions can
be preserved (r = 0) or destroyed (r > 1). When r = 1, the answer is
indefinite. We show that on a large family of cost functions, EPTs can be
computed via dynamic programming-based algorithms, leading to tractable
projection-safety. We also show that when a global cost function can be
decomposed into a Berge acyclic network of bounded arity cost functions, soft
local consistencies such as soft Directed or Virtual Arc Consistency can
directly emulate dynamic programming. These different approaches to
decomposable cost functions are then embedded in a solver for extensive
experiments that confirm the feasibility and efficiency of our proposal.
| [
{
"version": "v1",
"created": "Mon, 9 Feb 2015 10:09:35 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2016 16:24:11 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Jun 2016 11:21:20 GMT"
}
] | 1,469,750,400,000 | [
[
"Allouche",
"David",
""
],
[
"Bessiere",
"Christian",
""
],
[
"Boizumault",
"Patrice",
""
],
[
"de Givry",
"Simon",
""
],
[
"Gutierrez",
"Patricia",
""
],
[
"Lee",
"Jimmy H. M.",
""
],
[
"Leung",
"Kam Lun",
""
],
[
"Loudni",
"Samir",
""
],
[
"Métivier",
"Jean-Philippe",
""
],
[
"Schiex",
"Thomas",
""
],
[
"Wu",
"Yi",
""
]
] |
1502.02417 | Antonio Nicola | Cecilia Camporeale, Antonio De Nicola, Maria Luisa Villani | Semantics-based services for a low carbon society: An application on
emissions trading system data and scenarios management | null | Environmental Modelling & Software, Vol 64, Feb 2015, Pages
124-142 | 10.1016/j.envsoft.2014.11.007 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A low carbon society aims at fighting global warming by stimulating synergic
efforts from governments, industry and scientific communities. Decision support
systems should be adopted to provide policy makers with possible scenarios,
options for prompt countermeasures in case of side effects on environment,
economy and society due to low carbon society policies, and also options for
information management. A necessary precondition to fulfill this agenda is to
face the complexity of this multi-disciplinary domain and to reach a common
understanding on it as a formal specification. Ontologies are widely accepted
means to share knowledge. Together with semantic rules, they enable advanced
semantic services to manage knowledge in a smarter way. Here we address the
European Emissions Trading System (EU-ETS) and we present a knowledge base
consisting of the EREON ontology and a catalogue of rules. Then we describe two
innovative semantic services to manage ETS data and information on ETS
scenarios.
| [
{
"version": "v1",
"created": "Mon, 9 Feb 2015 10:19:18 GMT"
}
] | 1,423,526,400,000 | [
[
"Camporeale",
"Cecilia",
""
],
[
"De Nicola",
"Antonio",
""
],
[
"Villani",
"Maria Luisa",
""
]
] |
1502.02454 | Thuc Le Ph.D | Thuc Duy Le, Tao Hoang, Jiuyong Li, Lin Liu, and Huawen Liu | A fast PC algorithm for high dimensional causal discovery with
multi-core PCs | Thuc Le, Tao Hoang, Jiuyong Li, Lin Liu, Huawen Liu, Shu Hu, "A fast
PC algorithm for high dimensional causal discovery with multi-core PCs",
IEEE/ACM Transactions on Computational Biology and Bioinformatics,
doi:10.1109/TCBB.2016.2591526 | null | 10.1109/TCBB.2016.2591526 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering causal relationships from observational data is a crucial problem
and it has applications in many research areas. The PC algorithm is the
state-of-the-art constraint based method for causal discovery. However, runtime
of the PC algorithm, in the worst case, is exponential in the number of nodes
(variables), and thus it is inefficient when being applied to high dimensional
data, e.g. gene expression datasets. On another note, the advancement of
computer hardware in the last decade has resulted in the widespread
availability of multi-core personal computers. There is a significant
motivation for designing a parallelised PC algorithm that is suitable for
personal computers and does not require end users' parallel computing knowledge
beyond their competency in using the PC algorithm. In this paper, we develop
parallel-PC, a fast and memory efficient PC algorithm using the parallel
computing technique. We apply our method to a range of synthetic and real-world
high dimensional datasets. Experimental results on a dataset from the DREAM 5
challenge show that the original PC algorithm could not produce any results
after running more than 24 hours; meanwhile, our parallel-PC algorithm managed
to finish within around 12 hours with a 4-core CPU computer, and less than 6
hours with a 8-core CPU computer. Furthermore, we integrate parallel-PC into a
causal inference method for inferring miRNA-mRNA regulatory relationships. The
experimental results show that parallel-PC helps improve both the efficiency
and accuracy of the causal inference algorithm.
| [
{
"version": "v1",
"created": "Mon, 9 Feb 2015 12:15:21 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Jul 2015 03:03:16 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Nov 2016 12:23:48 GMT"
}
] | 1,478,822,400,000 | [
[
"Le",
"Thuc Duy",
""
],
[
"Hoang",
"Tao",
""
],
[
"Li",
"Jiuyong",
""
],
[
"Liu",
"Lin",
""
],
[
"Liu",
"Huawen",
""
]
] |
1502.02467 | Evgenij Thorstensen | Evgenij Thorstensen | Structural Decompositions for Problems with Global Constraints | The final publication is available at Springer via
http://dx.doi.org/10.1007/s10601-015-9181-2 | null | 10.1007/s10601-015-9181-2 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A wide range of problems can be modelled as constraint satisfaction problems
(CSPs), that is, a set of constraints that must be satisfied simultaneously.
Constraints can either be represented extensionally, by explicitly listing
allowed combinations of values, or implicitly, by special-purpose algorithms
provided by a solver.
Such implicitly represented constraints, known as global constraints, are
widely used; indeed, they are one of the key reasons for the success of
constraint programming in solving real-world problems. In recent years, a
variety of restrictions on the structure of CSP instances have been shown to
yield tractable classes of CSPs. However, most such restrictions fail to
guarantee tractability for CSPs with global constraints. We therefore study the
applicability of structural restrictions to instances with such constraints.
We show that when the number of solutions to a CSP instance is bounded in key
parts of the problem, structural restrictions can be used to derive new
tractable classes. Furthermore, we show that this result extends to
combinations of instances drawn from known tractable classes, as well as to CSP
instances where constraints assign costs to satisfying assignments.
| [
{
"version": "v1",
"created": "Mon, 9 Feb 2015 12:55:36 GMT"
}
] | 1,423,526,400,000 | [
[
"Thorstensen",
"Evgenij",
""
]
] |
1502.02535 | Maria Paola Bonacina | Maria Paola Bonacina, Ulrich Furbach, Viorica Sofronie-Stokkermans | On First-Order Model-Based Reasoning | In Narciso Marti-Oliet, Peter Olveczky, and Carolyn Talcott (Eds.),
"Logic, Rewriting, and Concurrency: Essays in Honor of Jose Meseguer"
Springer, Lecture Notes in Computer Science 9200, September 2015, 24 pages.
Version v4 in arxiv fixes a typo on page 15 that remains in the version
published in the Springer book | null | 10.1007/978-3-319-23165-5_8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning semantically in first-order logic is notoriously a challenge. This
paper surveys a selection of semantically-guided or model-based methods that
aim at meeting aspects of this challenge. For first-order logic we touch upon
resolution-based methods, tableaux-based methods, DPLL-inspired methods, and we
give a preview of a new method called SGGS, for Semantically-Guided
Goal-Sensitive reasoning. For first-order theories we highlight hierarchical
and locality-based methods, concluding with the recent Model-Constructing
satisfiability calculus.
| [
{
"version": "v1",
"created": "Mon, 9 Feb 2015 16:14:40 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Jun 2015 15:22:17 GMT"
},
{
"version": "v3",
"created": "Fri, 31 Jul 2015 21:16:13 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Nov 2019 19:25:16 GMT"
}
] | 1,574,380,800,000 | [
[
"Bonacina",
"Maria Paola",
""
],
[
"Furbach",
"Ulrich",
""
],
[
"Sofronie-Stokkermans",
"Viorica",
""
]
] |
1502.02799 | Yisong Wang | Yisong Wang | On Forgetting in Tractable Propositional Fragments | 27 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distilling from a knowledge base only the part that is relevant to a subset
of alphabet, which is recognized as forgetting, has attracted extensive
interests in AI community. In standard propositional logic, a general algorithm
of forgetting and its computation-oriented investigation in various fragments
whose satisfiability is tractable are still lacking. The paper aims at filling
the gap. After exploring some basic properties of forgetting in propositional
logic, we present a resolution-based algorithm of forgetting for CNF fragment,
and some complexity results about forgetting in Horn, renamable Horn, q-Horn,
Krom, DNF and CNF fragments of propositional logic.
| [
{
"version": "v1",
"created": "Tue, 10 Feb 2015 07:05:56 GMT"
}
] | 1,423,612,800,000 | [
[
"Wang",
"Yisong",
""
]
] |
1502.03248 | Anna Harutyunyan | Anna Harutyunyan and Tim Brys and Peter Vrancx and Ann Nowe | Off-Policy Reward Shaping with Ensembles | To be presented at ALA-15. Short version to appear at AAMAS-15 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Potential-based reward shaping (PBRS) is an effective and popular technique
to speed up reinforcement learning by leveraging domain knowledge. While PBRS
is proven to always preserve optimal policies, its effect on learning speed is
determined by the quality of its potential function, which, in turn, depends on
both the underlying heuristic and the scale. Knowing which heuristic will prove
effective requires testing the options beforehand, and determining the
appropriate scale requires tuning, both of which introduce additional sample
complexity. We formulate a PBRS framework that improves learning speed, but does
not incur extra sample complexity. For this, we propose to simultaneously learn
an ensemble of policies, shaped w.r.t. many heuristics and on a range of
scales. The target policy is then obtained by voting. The ensemble needs to be
able to efficiently and reliably learn off-policy: requirements fulfilled by
the recent Horde architecture, which we take as our basis. We demonstrate
empirically that (1) our ensemble policy outperforms both the base policy, and
its single-heuristic components, and (2) an ensemble over a general range of
scales performs at least as well as one with optimally tuned components.
| [
{
"version": "v1",
"created": "Wed, 11 Feb 2015 10:27:15 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Mar 2015 13:35:59 GMT"
}
] | 1,427,155,200,000 | [
[
"Harutyunyan",
"Anna",
""
],
[
"Brys",
"Tim",
""
],
[
"Vrancx",
"Peter",
""
],
[
"Nowe",
"Ann",
""
]
] |
1502.03556 | Md. Hanif Seddiqui | Md. Hanif Seddiqui, Rudra Pratap Deb Nath, Masaki Aono | An Efficient Metric of Automatic Weight Generation for Properties in
Instance Matching Technique | 17 pages, 5 figures, 3 tables, pp. 1-17, publication year 2015,
journal publication, vol. 6 number 1 | Journal of Web and Semantic Technology (IJWeST), vol.6 no.1, pp.
1-17 (2015) | 10.5121/ijwest.2015.6101 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proliferation of heterogeneous data sources of semantic knowledge base
intensifies the need of an automatic instance matching technique. However, the
efficiency of instance matching is often influenced by the weight of a property
associated to instances. Automatic weight generation is a non-trivial, however
an important task in instance matching technique. Therefore, identifying an
appropriate metric for generating weight for a property automatically is
nevertheless a formidable task. In this paper, we investigate an approach of
generating weights automatically by considering hypotheses: (1) the weight of a
property is directly proportional to the ratio of the number of its distinct
values to the number of instances containing the property, and (2) the weight is
also proportional to the ratio of the number of distinct values of a property
to the number of instances in a training dataset. The basic intuition behind
the use of our approach is the classical theory of information content that
infrequent words are more informative than frequent ones. Our mathematical
model derives a metric for generating property weights automatically, which is
applied in instance matching system to produce re-conciliated instances
efficiently. Our experiments and evaluations show the effectiveness of our
proposed metric of automatic weight generation for properties in an instance
matching technique.
| [
{
"version": "v1",
"created": "Thu, 12 Feb 2015 07:51:39 GMT"
}
] | 1,423,785,600,000 | [
[
"Seddiqui",
"Md. Hanif",
""
],
[
"Nath",
"Rudra Pratap Deb",
""
],
[
"Aono",
"Masaki",
""
]
] |
1502.03890 | Song-Ju Kim Dr. | Song-Ju Kim and Masashi Aono | Decision Maker using Coupled Incompressible-Fluid Cylinders | 5 pages, 5 figures, Waseda AICS Symposium and the 14th Slovenia-Japan
Seminar, Waseda University, Tokyo, 24-26 October 2014. in Special Issue of
ASTE: Advances in Science, Technology and Environmentology (2015) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The multi-armed bandit problem (MBP) is the problem of finding, as accurately
and quickly as possible, the most profitable option from a set of options that
gives stochastic rewards by referring to past experiences. Inspired by
fluctuated movements of a rigid body in a tug-of-war game, we formulated a
unique search algorithm that we call the `tug-of-war (TOW) dynamics' for
solving the MBP efficiently. The cognitive medium access, which refers to
multi-user channel allocations in cognitive radio, can be interpreted as the
competitive multi-armed bandit problem (CMBP); the problem is to determine the
optimal strategy for allocating channels to users which yields maximum total
rewards gained by all users. Here we show that it is possible to construct a
physical device for solving the CMBP, which we call the `TOW Bombe', by
exploiting the TOW dynamics that exist in coupled incompressible-fluid cylinders.
This analog computing device achieves the `socially-maximum' resource
allocation that maximizes the total rewards in cognitive medium access without
paying a huge computational cost that grows exponentially as a function of the
problem size.
| [
{
"version": "v1",
"created": "Fri, 13 Feb 2015 05:31:13 GMT"
}
] | 1,424,044,800,000 | [
[
"Kim",
"Song-Ju",
""
],
[
"Aono",
"Masashi",
""
]
] |
1502.03986 | Roberto Amadini | Roberto Amadini, Maurizio Gabbrielli, Jacopo Mauro | A Multicore Tool for Constraint Solving | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | *** To appear in IJCAI 2015 proceedings *** In Constraint Programming (CP), a
portfolio solver uses a variety of different solvers for solving a given
Constraint Satisfaction / Optimization Problem. In this paper we introduce
sunny-cp2: the first parallel CP portfolio solver that enables a dynamic,
cooperative, and simultaneous execution of its solvers in a multicore setting.
It incorporates state-of-the-art solvers, providing also a usable and
configurable framework. Empirical results are very promising. sunny-cp2 can
even outperform the performance of the oracle solver which always selects the
best solver of the portfolio for a given problem.
| [
{
"version": "v1",
"created": "Fri, 13 Feb 2015 13:45:54 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Apr 2015 17:28:26 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Apr 2015 10:57:43 GMT"
}
] | 1,430,438,400,000 | [
[
"Amadini",
"Roberto",
""
],
[
"Gabbrielli",
"Maurizio",
""
],
[
"Mauro",
"Jacopo",
""
]
] |
1502.04120 | Ohad Asor | Ohad Asor | About Tau-Chain | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tau-chain is a decentralized peer-to-peer network having three unified faces:
Rules, Proofs, and Computer Programs, allowing a generalization of virtually
any centralized or decentralized P2P network, together with many new abilities,
as we present on this note.
| [
{
"version": "v1",
"created": "Mon, 16 Feb 2015 17:01:40 GMT"
}
] | 1,424,131,200,000 | [
[
"Asor",
"Ohad",
""
]
] |
1502.04495 | Vasile Patrascu | Vasile Patrascu | A Generalization of Gustafson-Kessel Algorithm Using a New Constraint
Parameter | Proceedings of the Joint 4th Conference of the European Society for
Fuzzy Logic and Technology and the 11th Rencontres Francophones sur la
Logique Floue et ses Applications, pp. 1250-1255, Barcelona, Spain, September
7-9, 2005 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper one presents a new fuzzy clustering algorithm based on a
dissimilarity function determined by three parameters. This algorithm can be
considered a generalization of the Gustafson-Kessel algorithm for fuzzy
clustering.
| [
{
"version": "v1",
"created": "Mon, 16 Feb 2015 11:09:52 GMT"
}
] | 1,424,131,200,000 | [
[
"Patrascu",
"Vasile",
""
]
] |
1502.04593 | Nicolas Maudet | K. Belahcene, C. Labreuche, N. Maudet, V. Mousseau, W. Ouerdane | Explaining robust additive utility models by sequences of preference
swaps | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multicriteria decision analysis aims at supporting a person facing a decision
problem involving conflicting criteria. We consider an additive utility model
which provides robust conclusions based on preferences elicited from the
decision maker. The recommendations based on these robust conclusions are even
more convincing if they are complemented by explanations. We propose a general
scheme, based on sequences of preference swaps, in which explanations can be
computed. We show first that the length of explanations can be unbounded in the
general case. However, in the case of binary reference scales, this length is
bounded and we provide an algorithm to compute the corresponding explanation.
| [
{
"version": "v1",
"created": "Mon, 16 Feb 2015 16:11:44 GMT"
}
] | 1,424,131,200,000 | [
[
"Belahcene",
"K.",
""
],
[
"Labreuche",
"C.",
""
],
[
"Maudet",
"N.",
""
],
[
"Mousseau",
"V.",
""
],
[
"Ouerdane",
"W.",
""
]
] |
1502.04665 | Michele Stawowy | Michele Stawowy | Optimizations for Decision Making and Planning in Description Logic
Dynamic Knowledge Bases | 16 pages, extended version | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artifact-centric models for business processes recently attracted a lot of
attention, as they manage to combine structural (i.e. data related) with
dynamical (i.e. process related) aspects in a seamless way. However, many frameworks developed under this approach are not built explicitly for planning,
one of the most prominent operations related to business processes. In this
paper, we try to overcome this by proposing a framework named Dynamic Knowledge
Bases, aimed at describing rich business domains through Description
Logic-based ontologies, and where a set of actions allows the system to evolve
by modifying such ontologies. This framework, by offering action rewriting and
knowledge partialization, represents a viable and formal environment to develop
decision making and planning techniques for DL-based artifact-centric business
domains.
| [
{
"version": "v1",
"created": "Mon, 16 Feb 2015 19:06:25 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Mar 2015 22:17:37 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Mar 2015 16:44:12 GMT"
},
{
"version": "v4",
"created": "Thu, 19 Mar 2015 17:24:55 GMT"
},
{
"version": "v5",
"created": "Fri, 3 Apr 2015 11:52:17 GMT"
},
{
"version": "v6",
"created": "Wed, 29 Apr 2015 16:56:58 GMT"
},
{
"version": "v7",
"created": "Mon, 4 May 2015 16:05:29 GMT"
}
] | 1,430,784,000,000 | [
[
"Stawowy",
"Michele",
""
]
] |
1502.04780 | Qiong Wu | Qiong Wu | Computational Curiosity (A Book Draft) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This book discusses computational curiosity, from the psychology of curiosity
to the computational models of curiosity, and then showcases several
interesting applications of computational curiosity. A brief overview of the
book is given as follows. Chapter 1 discusses the underpinnings of curiosity in
human beings, including the major categories of curiosity, curiosity-related
emotions and behaviors, and the benefits of curiosity. Chapter 2 reviews the
arousal theories of curiosity in psychology and summarizes a general two-step
process model for computational curiosity. Based on the perspective of the
two-step process model, Chapter 3 reviews and analyzes some of the traditional
computational models of curiosity. Chapter 4 introduces a novel generic
computational model of curiosity, which is developed based on the arousal
theories of curiosity. After the discussion of computational models of
curiosity, we outline the important applications where computational curiosity
may bring significant impacts in Chapter 5. Chapter 6 discusses the application
of the generic computational model of curiosity in a machine learning
framework. Chapter 7 discusses the application of the generic computational
model of curiosity in a recommender system. In Chapter 8 and Chapter 9, the
generic computational model of curiosity is studied in two types of pedagogical
agents. In Chapter 8, a curious peer learner is studied. It is a non-player
character that aims to provide a believable virtual learning environment for
users. In Chapter 9, a curious learning companion is studied. It aims to
enhance users' learning experience through providing meaningful interactions
with them. Chapter 10 discusses open questions in the research field of
computational curiosity.
| [
{
"version": "v1",
"created": "Tue, 17 Feb 2015 02:42:36 GMT"
}
] | 1,424,217,600,000 | [
[
"Wu",
"Qiong",
""
]
] |
1502.05021 | Olegs Verhodubs | Olegs Verhodubs | Inductive Learning for Rule Generation from Ontology | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an idea of inductive learning use for rule generation
from ontologies. The main purpose of the paper is to evaluate the possibility
of using inductive learning in rule generation from ontologies and to develop a way this can be done. Generated rules are necessary to supplement or even
to develop the Semantic Web Expert System (SWES) knowledge base. The SWES
emerges as the result of evolution of expert system concept toward the Web, and
the SWES is based on the Semantic Web technologies. Available publications show
that the problem of rule generation from ontologies based on inductive learning
is not investigated deeply enough.
| [
{
"version": "v1",
"created": "Tue, 17 Feb 2015 20:17:19 GMT"
}
] | 1,424,217,600,000 | [
[
"Verhodubs",
"Olegs",
""
]
] |
1502.05040 | Tarek Sobh | Tamer M. Abo Neama, Ismail A. Ismail, Tarek S. Sobh, M. Zaki | Design of a Framework to Facilitate Decisions Using Information Fusion | 17 pages, 5 figures, Journal of Al Azhar University Engineering
Sector, Vol. 8, No. 28, July 2013, 1237-1250. arXiv admin note: text overlap
with arXiv:cs/0409007 by other authors | Journal of Al Azhar University Engineering Sector, Vol. 8, No. 28,
July 2013, 1237-1250 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information fusion is an advanced research area which can assist decision
makers in enhancing their decisions. This paper aims at designing a new
multi-layer framework that can support the process of performing decisions from
the obtained beliefs using information fusion. Since it is not an easy task to
cross the gap between computed beliefs of certain hypothesis and decisions, the
proposed framework consists of the following layers in order to provide a
suitable architecture (ordered bottom up): 1. A layer for combination of basic
belief assignments using an information fusion approach. Such approach exploits
Dezert-Smarandache Theory, DSmT, and proportional conflict redistribution to
provide more realistic final beliefs. 2. A layer for computation of pignistic
probability of the underlying propositions from the corresponding final
beliefs. 3. A layer for performing probabilistic reasoning using a Bayesian
network that can obtain the probable reason of a proposition from its pignistic
probability. 4. Ranking the system decisions is ultimately used to support
decision making. A case study has been accomplished at various operational
conditions in order to prove the concept; in addition, it pointed out that: 1.
The use of DSmT for information fusion yields not only more realistic beliefs
but also reliable pignistic probabilities for the underlying propositions. 2.
Exploiting the pignistic probability for the integration of the information
fusion with the Bayesian network provides probabilistic inference and enables
decision making on the basis of both belief based probabilities for the
underlying propositions and Bayesian based probabilities for the corresponding
reasons. A comparative study of the proposed framework with respect to other
information fusion systems confirms its superiority to support decision making.
| [
{
"version": "v1",
"created": "Tue, 17 Feb 2015 12:24:58 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Feb 2015 13:12:25 GMT"
}
] | 1,424,736,000,000 | [
[
"Neama",
"Tamer M. Abo",
""
],
[
"Ismail",
"Ismail A.",
""
],
[
"Sobh",
"Tarek S.",
""
],
[
"Zaki",
"M.",
""
]
] |
1502.05450 | Jean-Marc Alliot | Jean-Marc Alliot | The (Final) countdown | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Countdown game is one of the oldest TV shows running in the world. It started broadcasting in 1972 on French television and in 1982 on British Channel 4, and it has been running in both countries ever since. The game, while
extremely popular, never received any serious scientific attention, probably
because it seems too simple at first sight. We present in this article an
in-depth analysis of the numbers round of the countdown game. This includes a
complexity analysis of the game, an analysis of existing algorithms, the
presentation of a new algorithm that increases resolution speed by a factor of
20. It also includes some leads on how to turn the game into a more difficult
one, both for a human player and for a computer, and even to transform it into
a probably undecidable problem.
| [
{
"version": "v1",
"created": "Thu, 19 Feb 2015 00:41:56 GMT"
}
] | 1,424,390,400,000 | [
[
"Alliot",
"Jean-Marc",
""
]
] |
1502.05562 | Vasile Patrascu | Vasile Patrascu | A New Penta-valued Logic Based Knowledge Representation | The 12th International Conference Information Processing and
Management of Uncertainty in Knowledge-Based Systems, June 22-27, 2008,
Malaga, Spain | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper a knowledge representation model is proposed, FP5, which combines ideas from fuzzy sets and penta-valued logic. FP5 represents
imprecise properties whose accomplished degree is undefined, contradictory or
indeterminate for some objects. Basic operations of conjunction, disjunction
and negation are introduced. Relations to other representation models like
fuzzy sets, intuitionistic, paraconsistent and bipolar fuzzy sets are
discussed.
| [
{
"version": "v1",
"created": "Thu, 19 Feb 2015 13:23:06 GMT"
}
] | 1,424,390,400,000 | [
[
"Patrascu",
"Vasile",
""
]
] |
1502.05615 | C\`esar Ferri | Fernando Mart\'inez-Plumed, C\`esar Ferri, Jos\'e Hern\'andez-Orallo,
Mar\'ia Jos\'e Ram\'irez-Quintana | Forgetting and consolidation for incremental and cumulative knowledge
acquisition systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The application of cognitive mechanisms to support knowledge acquisition is,
from our point of view, crucial for making the resulting models coherent,
efficient, credible, easy to use and understandable. In particular, there are
two characteristic features of intelligence that are essential for knowledge
development: forgetting and consolidation. Both play an important role in
knowledge bases and learning systems to avoid possible information overflow and
redundancy, and in order to preserve and strengthen important or frequently
used rules and remove (or forget) useless ones. We present an incremental,
long-life view of knowledge acquisition which tries to improve task after task
by determining what to keep, what to consolidate and what to forget, overcoming
the Stability-Plasticity dilemma. In order to do that, we rate rules by
introducing several metrics through the first adaptation, to our knowledge, of
the Minimum Message Length (MML) principle to a coverage graph, a hierarchical
assessment structure which treats evidence and rules in a unified way. The
metrics are not only used to forget some of the worst rules, but also to set a
consolidation process to promote those selected rules to the knowledge base,
which is also mirrored by a demotion system. We evaluate the framework with a
series of tasks in a chess rule learning domain.
| [
{
"version": "v1",
"created": "Thu, 19 Feb 2015 16:25:49 GMT"
}
] | 1,424,390,400,000 | [
[
"Martínez-Plumed",
"Fernando",
""
],
[
"Ferri",
"Cèsar",
""
],
[
"Hernández-Orallo",
"José",
""
],
[
"Ramírez-Quintana",
"María José",
""
]
] |
1502.05864 | Sukanta Nayak | Sukanta Nayak and Snehashish Chakraverty | Pseudo Fuzzy Set | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here a novel idea to handle imprecise or vague sets, viz. the Pseudo fuzzy set, is proposed. A Pseudo fuzzy set is a triplet of an element and its two membership functions. The two membership functions may or may not be dependent. The hypothesis is that every positive sense has some negative sense. So, one membership function is considered positive and the other negative. Based on this concept, the development of the Pseudo fuzzy set and its properties, along with Pseudo fuzzy numbers, is discussed.
| [
{
"version": "v1",
"created": "Fri, 20 Feb 2015 13:16:05 GMT"
}
] | 1,424,649,600,000 | [
[
"Nayak",
"Sukanta",
""
],
[
"Chakraverty",
"Snehashish",
""
]
] |
1502.05888 | Marija Slavkovik | Jer\^ome Lang and Gabriella Pigozzi and Marija Slavkovik and Leendert
van der Torre and Srdjan Vesic | A partial taxonomy of judgment aggregation rules, and their properties | null | null | 10.1007/s00355-016-1006-8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The literature on judgment aggregation is moving from studying impossibility
results regarding aggregation rules towards studying specific judgment
aggregation rules. Here we give a structured list of most rules that have been
proposed and studied recently in the literature, together with various
properties of such rules. We first focus on the majority-preservation property,
which generalizes Condorcet-consistency, and identify which of the rules
satisfy it. We study the inclusion relationships that hold between the rules.
Finally, we consider two forms of unanimity, monotonicity, homogeneity, and
reinforcement, and we identify which of the rules satisfy these properties.
| [
{
"version": "v1",
"created": "Fri, 20 Feb 2015 14:50:53 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Feb 2016 15:49:39 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Sep 2016 15:55:41 GMT"
}
] | 1,481,068,800,000 | [
[
"Lang",
"Jerôme",
""
],
[
"Pigozzi",
"Gabriella",
""
],
[
"Slavkovik",
"Marija",
""
],
[
"van der Torre",
"Leendert",
""
],
[
"Vesic",
"Srdjan",
""
]
] |
1502.06512 | Roman Yampolskiy | Roman V. Yampolskiy | From Seed AI to Technological Singularity via Recursively Self-Improving
Software | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Software capable of improving itself has been a dream of computer scientists
since the inception of the field. In this work we provide definitions for
Recursively Self-Improving software, survey different types of self-improving
software, review the relevant literature, analyze limits on computation
restricting recursive self-improvement and introduce RSI Convergence Theory
which aims to predict general behavior of RSI systems. Finally, we address
security implications from self-improving intelligent software.
| [
{
"version": "v1",
"created": "Mon, 23 Feb 2015 17:08:30 GMT"
}
] | 1,424,736,000,000 | [
[
"Yampolskiy",
"Roman V.",
""
]
] |