id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1502.06657 | Sahin Geyik | Sahin Cem Geyik, Abhishek Saxena, Ali Dasdan | Multi-Touch Attribution Based Budget Allocation in Online Advertising | This paper has been published in ADKDD 2014, August 24, New York
City, New York, U.S.A | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Budget allocation in online advertising deals with distributing the campaign
(insertion order) level budgets to different sub-campaigns which employ
different targeting criteria and may perform differently in terms of
return-on-investment (ROI). In this paper, we present the efforts at Turn on
how to best allocate campaign budget so that the advertiser or campaign-level
ROI is maximized. To do this, it is crucial to be able to correctly determine
the performance of sub-campaigns. This determination is highly related to the
action-attribution problem, i.e., determining the set of ads, and hence the
sub-campaigns that served them to a user, to which an action should be
attributed. For this purpose, we employ both last-touch (last ad gets all
credit) and multi-touch (many ads share the credit) attribution methodologies.
We present the algorithms deployed at Turn for the attribution problem, as well
as their parallel implementation on the large advertiser performance datasets.
We conclude the paper with our empirical comparison of last-touch and
multi-touch attribution-based budget allocation in a real online advertising
setting.
| [
{
"version": "v1",
"created": "Tue, 24 Feb 2015 00:09:05 GMT"
}
] | 1,424,822,400,000 | [
[
"Geyik",
"Sahin Cem",
""
],
[
"Saxena",
"Abhishek",
""
],
[
"Dasdan",
"Ali",
""
]
] |
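
To make the two attribution schemes named in the abstract above concrete, here is a minimal Python sketch contrasting last-touch credit with a uniform multi-touch split. The uniform split is only one simple multi-touch variant chosen for illustration; it is not claimed to be the algorithm deployed at Turn, and the sub-campaign labels are hypothetical.

```python
from collections import defaultdict

def last_touch(ad_path):
    """All conversion credit goes to the sub-campaign that served the last ad."""
    return {ad_path[-1]: 1.0}

def multi_touch_uniform(ad_path):
    """Conversion credit is shared equally by every ad on the converting path."""
    credit = defaultdict(float)
    for sub_campaign in ad_path:
        credit[sub_campaign] += 1.0 / len(ad_path)
    return dict(credit)

# Hypothetical converting path: the user saw ads from sub-campaigns A, B, A, C.
path = ["A", "B", "A", "C"]
print(last_touch(path))           # {'C': 1.0}
print(multi_touch_uniform(path))  # {'A': 0.5, 'B': 0.25, 'C': 0.25}
```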
1502.06818 | Ben Usman | Ben Usman, Ivan Oseledets | Tensor SimRank for Heterogeneous Information Networks | Submitted to KDD'15 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a generalization of the SimRank similarity measure for
heterogeneous information networks. Given the information network, the
intraclass similarity score s(a, b) is high if the set of objects related to a
and the set of objects related to b are pairwise similar according to all
imposed relations.
| [
{
"version": "v1",
"created": "Tue, 24 Feb 2015 14:10:04 GMT"
}
] | 1,424,822,400,000 | [
[
"Usman",
"Ben",
""
],
[
"Oseledets",
"Ivan",
""
]
] |
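
As background for the abstract above, a minimal implementation of the classic SimRank recursion on a homogeneous graph; the paper's tensor generalization to multiple relation types is not reproduced here.

```python
import numpy as np

def simrank(adj, c=0.8, iters=20):
    """Classic SimRank: s(a, a) = 1 and, for a != b,
    s(a, b) = c * average of s over pairs of in-neighbours of a and b."""
    n = adj.shape[0]
    s = np.eye(n)
    in_nbrs = [np.flatnonzero(adj[:, v]) for v in range(n)]
    for _ in range(iters):
        s_new = np.eye(n)
        for a in range(n):
            for b in range(n):
                if a != b and len(in_nbrs[a]) and len(in_nbrs[b]):
                    s_new[a, b] = c * s[np.ix_(in_nbrs[a], in_nbrs[b])].mean()
        s = s_new
    return s

# Toy graph: nodes 0 and 1 both point to nodes 2 and 3.
adj = np.zeros((4, 4))
adj[0, 2] = adj[0, 3] = adj[1, 2] = adj[1, 3] = 1
print(simrank(adj)[2, 3])  # 0.4: items cited by the same sources are similar
```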
1502.06956 | Xinyang Deng | Xinyang Deng and Yong Deng | Transformation of basic probability assignments to probabilities based
on a new entropy measure | 14 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dempster-Shafer evidence theory is an efficient mathematical tool to deal
with uncertain information. In that theory, basic probability assignment (BPA)
is the basic element for the expression and inference of uncertainty.
Decision-making based on BPA is still an open issue in Dempster-Shafer evidence
theory. In this paper, a novel approach to transforming basic probability
assignments into probabilities is proposed based on Deng entropy, a new
measure of the uncertainty of a BPA. The principle of the proposed method is to
minimize the difference between the uncertainties of the given BPA and of the
obtained probability distribution. Numerical examples are given to illustrate
the proposed approach.
| [
{
"version": "v1",
"created": "Tue, 24 Feb 2015 04:02:00 GMT"
}
] | 1,424,908,800,000 | [
[
"Deng",
"Xinyang",
""
],
[
"Deng",
"Yong",
""
]
] |
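
A short sketch of the Deng entropy referenced above, assuming its usual definition E_d(m) = -Σ_A m(A) log2( m(A) / (2^|A| - 1) ) over the focal elements. The paper's minimization-based transformation of a BPA into a probability distribution is not reproduced here, and the example BPA is made up.

```python
from math import log2

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment.
    `bpa` maps focal elements (frozensets) to masses summing to 1."""
    return -sum(m * log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

# Toy BPA on the frame {a, b, c}: mass on a singleton and on the whole frame.
bpa = {frozenset({"a"}): 0.6, frozenset({"a", "b", "c"}): 0.4}
print(deng_entropy(bpa))  # about 2.09 bits; multi-element focal sets add uncertainty
```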
1502.07314 | David Tolpin | David Tolpin, Brooks Paige, Jan Willem van de Meent, Frank Wood | Path Finding under Uncertainty through Probabilistic Inference | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new approach to solving path-finding problems under
uncertainty by representing them as probabilistic models and applying
domain-independent inference algorithms to the models. This approach separates
problem representation from the inference algorithm and provides a framework
for efficient learning of path-finding policies. We evaluate the new approach
on the Canadian Traveler Problem, which we formulate as a probabilistic model,
and show how probabilistic inference allows high-performance stochastic
policies to be obtained for this problem.
| [
{
"version": "v1",
"created": "Wed, 25 Feb 2015 19:21:04 GMT"
},
{
"version": "v2",
"created": "Sat, 2 May 2015 21:53:39 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Jun 2015 05:02:53 GMT"
}
] | 1,433,808,000,000 | [
[
"Tolpin",
"David",
""
],
[
"Paige",
"Brooks",
""
],
[
"van de Meent",
"Jan Willem",
""
],
[
"Wood",
"Frank",
""
]
] |
1502.07428 | Elad Liebman | Elad Liebman, Benny Chor and Peter Stone | Representative Selection in Non Metric Datasets | null | null | 10.1080/08839514.2015.1071092 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the problem of representative selection: choosing a
subset of data points from a dataset that best represents its overall set of
elements. This subset needs to inherently reflect the type of information
contained in the entire set, while minimizing redundancy. For such purposes,
clustering may seem like a natural approach. However, existing clustering
methods are not ideally suited for representative selection, especially when
dealing with non-metric data, where only a pairwise similarity measure exists.
In this paper we propose $\delta$-medoids, a novel approach that can be viewed
as an extension to the $k$-medoids algorithm and is specifically suited for
sample representative selection from non-metric data. We empirically validate
$\delta$-medoids in two domains, namely music analysis and motion analysis. We
also show some theoretical bounds on the performance of $\delta$-medoids and
the hardness of representative selection in general.
| [
{
"version": "v1",
"created": "Thu, 26 Feb 2015 04:16:31 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jun 2015 22:44:29 GMT"
}
] | 1,443,484,800,000 | [
[
"Liebman",
"Elad",
""
],
[
"Chor",
"Benny",
""
],
[
"Stone",
"Peter",
""
]
] |
1502.07628 | Jamal Atif | Marc Aiguier, Jamal Atif, Isabelle Bloch and C\'eline Hudelot | Relaxation-based revision operators in description logics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As ontologies and description logics (DLs) reach out to a broader audience,
several reasoning services are being developed in this context. Belief revision is
one of them, of prime importance when knowledge is prone to change and
inconsistency. In this paper we address both the generalization of the
well-known AGM postulates, and the definition of concrete and well-founded
revision operators in different DL families. We introduce a model-theoretic
version of the AGM postulates with a general definition of inconsistency, hence
enlarging their scope to a wide family of non-classical logics, in particular
negation-free DL families. We propose a general framework for defining revision
operators based on the notion of relaxation, introduced recently for defining
dissimilarity measures between DL concepts. A revision operator in this
framework amounts to relaxing the set of models of the old belief until it
reaches the set of models of the new piece of knowledge. We demonstrate that such a
relaxation-based revision operator defines a faithful assignment and satisfies
the generalized AGM postulates. Another important contribution concerns the
definition of several concrete relaxation operators suited to the syntax of
some DLs (ALC and its fragments EL and ELU).
| [
{
"version": "v1",
"created": "Thu, 26 Feb 2015 16:41:13 GMT"
}
] | 1,424,995,200,000 | [
[
"Aiguier",
"Marc",
""
],
[
"Atif",
"Jamal",
""
],
[
"Bloch",
"Isabelle",
""
],
[
"Hudelot",
"Céline",
""
]
] |
1503.00899 | Raka Jovanovic | Raka Jovanovic, Milan Tuba, Stefan Voss | An Ant Colony Optimization Algorithm for Partitioning Graphs with Supply
and Demand | null | null | 10.1016/j.asoc.2016.01.013 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we focus on finding high quality solutions for the problem of
maximum partitioning of graphs with supply and demand (MPGSD). There is a
growing interest in the MPGSD due to its close connection to problems
appearing in the field of electrical distribution systems, especially for the
optimization of self-adequacy of interconnected microgrids. We propose an ant
colony optimization algorithm for the problem. With the goal of further
improving the algorithm we combine it with a previously developed correction
procedure. In our computational experiments we evaluate the performance of the
proposed algorithm on both trees and general graphs. The tests show that the
method manages to find optimal solutions in more than 50% of the problem
instances, and has an average relative error of less than 0.5% when compared to
known optimal solutions.
| [
{
"version": "v1",
"created": "Tue, 3 Mar 2015 11:26:02 GMT"
}
] | 1,454,284,800,000 | [
[
"Jovanovic",
"Raka",
""
],
[
"Tuba",
"Milan",
""
],
[
"Voss",
"Stefan",
""
]
] |
1503.00980 | Jin-Kao Hao | Xiangjing Lai and Jin-Kao Hao | On memetic search for the max-mean dispersion problem | 22 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a set $V$ of $n$ elements and a distance matrix $[d_{ij}]_{n\times n}$
among elements, the max-mean dispersion problem (MaxMeanDP) consists in
selecting a subset $M$ from $V$ such that the mean dispersion (or distance)
among the selected elements is maximized. Being a useful model to formulate
several relevant applications, MaxMeanDP is known to be NP-hard and thus
computationally difficult. In this paper, we present a highly effective memetic
algorithm for MaxMeanDP which relies on solution recombination and local
optimization to find high quality solutions. Computational experiments on the
set of 160 benchmark instances with up to 1000 elements commonly used in the
literature show that the proposed algorithm improves or matches the published
best known results for all instances in a short computing time, with only one
exception, while achieving a high success rate of 100\%. In particular, we
improve 59 previous best results out of the 60 most challenging instances.
Results on a set of 40 new large instances with 3000 and 5000 elements are also
presented. The key ingredients of the proposed algorithm are investigated to
shed light on how they affect the performance of the algorithm.
| [
{
"version": "v1",
"created": "Tue, 3 Mar 2015 15:43:36 GMT"
}
] | 1,425,427,200,000 | [
[
"Lai",
"Xiangjing",
""
],
[
"Hao",
"Jin-Kao",
""
]
] |
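
Assuming the standard MaxMeanDP objective (the sum of pairwise distances among the selected elements divided by |M|), a naive toggle-based hill climber illustrates the local-optimization ingredient mentioned in the abstract above. This is only a sketch, not the paper's memetic algorithm, which also uses solution recombination.

```python
import itertools, random

def mean_dispersion(d, M):
    """Sum of pairwise distances within M divided by |M| (assumed objective)."""
    return sum(d[i][j] for i, j in itertools.combinations(M, 2)) / len(M)

def hill_climb(d, iters=2000, seed=0):
    """Repeatedly toggle a random element in/out, keeping improving moves."""
    rng = random.Random(seed)
    n = len(d)
    M = set(rng.sample(range(n), 2))
    best = mean_dispersion(d, M)
    for _ in range(iters):
        cand = M ^ {rng.randrange(n)}          # toggle one element
        if len(cand) >= 2:
            val = mean_dispersion(d, cand)
            if val > best:
                M, best = cand, val
    return M, best

# Random symmetric instance; MaxMeanDP distances may be negative.
n, rng = 8, random.Random(1)
d = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        d[i][j] = d[j][i] = rng.uniform(-1, 1)
print(hill_climb(d))
```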
1503.01051 | Sander Beckers | Sander Beckers and Joost Vennekens | Combining Probabilistic, Causal, and Normative Reasoning in CP-logic | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years the search for a proper formal definition of actual causation
-- i.e., the relation of cause-effect as it is instantiated in specific
observations, rather than general causal relations -- has taken on impressive
proportions. In part this is due to the insight that this concept plays a
fundamental role in many different fields, such as legal theory, engineering,
medicine, ethics, etc. Because of this diversity in applications, some
researchers have shifted focus from a single idealized definition towards a
more pragmatic, context-based account. For instance, recent work by Halpern and
Hitchcock draws on empirical research regarding people's causal judgments, to
suggest a graded and context-sensitive notion of causation. Although we
sympathize with many of their observations, their restriction to a merely
qualitative ordering runs into trouble for more complex examples. Therefore we
aim to improve on their approach, by using the formal language of CP-logic
(Causal Probabilistic logic), and the framework for defining actual causation
that was developed by the current authors using it. First we rephrase their
ideas in our quantitative, probabilistic setting, after which we modify them to
accommodate a greater class of examples. Further, we introduce a formal
distinction between statistical and normative considerations.
| [
{
"version": "v1",
"created": "Tue, 3 Mar 2015 18:50:40 GMT"
}
] | 1,425,427,200,000 | [
[
"Beckers",
"Sander",
""
],
[
"Vennekens",
"Joost",
""
]
] |
1503.01299 | Naji Shajarisales | Naji Shajarisales, Dominik Janzing, Bernhard Shoelkopf, Michel
Besserve | Telling cause from effect in deterministic linear dynamical systems | This article is under review for a peer-reviewed conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferring a cause from its effect using observed time series data is a major
challenge in natural and social sciences. Assuming the effect is generated by
the cause through a linear system, we propose a new approach based on the
hypothesis that nature chooses the "cause" and the "mechanism that generates
the effect from the cause" independent of each other. We therefore postulate
that the power spectrum of the time series being the cause is uncorrelated with
the square of the transfer function of the linear filter generating the effect.
While most causal discovery methods for time series mainly rely on the noise,
our method relies on asymmetries of the power spectral density properties that
can be exploited even in the context of deterministic systems. We describe
mathematical assumptions in a deterministic model under which the causal
direction is identifiable with this approach. We also discuss the method's
performance under the additive noise model and its relationship to Granger
causality. Experiments show encouraging results on synthetic as well as
real-world data. Overall, this suggests that the postulate of Independence of
Cause and Mechanism is a promising principle for causal inference on empirical
time series.
| [
{
"version": "v1",
"created": "Wed, 4 Mar 2015 12:48:44 GMT"
}
] | 1,425,513,600,000 | [
[
"Shajarisales",
"Naji",
""
],
[
"Janzing",
"Dominik",
""
],
[
"Shoelkopf",
"Bernhard",
""
],
[
"Besserve",
"Michel",
""
]
] |
1503.01327 | Liat Cohen | Liat Cohen, Solomon Eyal Shimony, Gera Weiss | Estimating the Probability of Meeting a Deadline in Hierarchical Plans | A journal version of an IJCAI-2015 paper: "Estimating the Probability
of Meeting a Deadline in Hierarchical Plans" | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a hierarchical plan (or schedule) with uncertain task times, we propose
a deterministic polynomial (time and memory) algorithm for estimating the
probability that it meets a deadline or, alternatively, that its {\em makespan}
is less than a given duration. Approximation is needed as it is known that this
problem is NP-hard even for sequential plans (simply a sum of random variables).
In addition, we show two new complexity results: (1) Counting the number of
events that do not cross the deadline is \#P-hard; (2)~Computing the expected
makespan of a hierarchical plan is NP-hard. For the proposed approximation
algorithm, we establish formal approximation bounds and show that the time and
memory complexities grow polynomially with the required accuracy, the number of
nodes in the plan, and with the size of the support of the random variables
that represent the durations of the primitive tasks. We examine these
approximation bounds empirically and demonstrate, using task networks taken
from the literature, how our scheme outperforms sampling techniques and exact
computation in terms of accuracy and run-time. As the empirical data shows much
better error bounds than guaranteed, we also suggest a method for tightening
the bounds in some cases.
| [
{
"version": "v1",
"created": "Wed, 4 Mar 2015 14:56:55 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Dec 2017 19:47:45 GMT"
}
] | 1,514,332,800,000 | [
[
"Cohen",
"Liat",
""
],
[
"Shimony",
"Solomon Eyal",
""
],
[
"Weiss",
"Gera",
""
]
] |
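
For the sequential special case mentioned in the abstract above (a plan whose makespan is a sum of independent task durations), the deadline probability can be computed by convolving the discrete duration distributions. This is a minimal exact sketch, not the paper's approximation algorithm for general hierarchical plans; the support of the sum can grow quickly, which is what motivates approximating.

```python
from collections import defaultdict

def convolve(dist_a, dist_b):
    """Distribution of X + Y for independent discrete X and Y ({value: prob})."""
    out = defaultdict(float)
    for x, px in dist_a.items():
        for y, py in dist_b.items():
            out[x + y] += px * py
    return dict(out)

def deadline_probability(task_dists, deadline):
    """P(total duration <= deadline) for tasks executed in sequence."""
    total = {0: 1.0}
    for dist in task_dists:
        total = convolve(total, dist)
    return sum(p for t, p in total.items() if t <= deadline)

tasks = [{1: 0.5, 3: 0.5}, {2: 0.7, 5: 0.3}]    # two uncertain task durations
print(deadline_probability(tasks, deadline=5))  # 0.35 + 0.35 = 0.7
```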
1503.01446 | Selene Baez Santamaria | Selene Baez | Predicting opponent team activity in a RoboCup environment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this project is to predict the opponent's configuration in a
RoboCup SSL environment. For simplicity, a Markov model assumption is made such
that the predicted formation of the opponent team only depends on its current
formation. The field is divided into a grid and a robot state per player is
created with information about its position and its velocity. To gather a more
general sense of what the opposing team is doing, the state also incorporates
the team's average position (centroid). All possible state transitions are
stored in a hash table that requires minimum storage space. The table is
populated with transition probabilities that are learned by reading vision
packages and counting the state transitions regardless of the specific robot
player. Therefore, the computation during the game is reduced to interpreting a
given vision package to assign each player to a state, and looking for the most
likely state it will transition to. The confidence of the predicted team's
formation is the product of each individual player's probability. The project
is noteworthy in that it minimizes the time and space complexity requirements
for predicting the opponent's moves.
| [
{
"version": "v1",
"created": "Wed, 4 Mar 2015 20:23:21 GMT"
}
] | 1,425,513,600,000 | [
[
"Baez",
"Selene",
""
]
] |
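
The transition-counting scheme the abstract above describes boils down to a table of observed state transitions. A minimal sketch follows, with hypothetical (grid_cell, velocity_bin) state tuples standing in for the full robot state; the grid discretisation and centroid computation are left out.

```python
from collections import Counter, defaultdict

class TransitionModel:
    """First-order Markov model learned by counting state transitions."""
    def __init__(self):
        self.counts = defaultdict(Counter)      # state -> Counter of successors

    def observe(self, state, next_state):
        self.counts[state][next_state] += 1

    def predict(self, state):
        """Most likely next state and its estimated probability."""
        successors = self.counts[state]
        if not successors:
            return None, 0.0
        best, c = successors.most_common(1)[0]
        return best, c / sum(successors.values())

model = TransitionModel()
history = [(3, 0), (3, 1), (4, 1), (3, 1), (4, 1), (5, 0)]  # one player's states
for s, s_next in zip(history, history[1:]):
    model.observe(s, s_next)
print(model.predict((3, 1)))        # ((4, 1), 1.0)

# Team-level confidence: product of the individual players' probabilities.
per_player = [0.8, 0.6, 0.9]
confidence = 1.0
for p in per_player:
    confidence *= p
print(confidence)                   # 0.432
```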
1503.02521 | Kieran Greer Dr | Kieran Greer | A Single-Pass Classifier for Categorical Data | null | Special Issue on: IJCSysE Recent Advances in Evolutionary and
Natural Computing Practice and Applications, Int. J. Computational Systems
Engineering, Inderscience, Vol. 3, Nos. 1/2, pp. 27 - 34, 2017 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a new method for classifying a dataset that partitions
elements into their categories. It has relations with neural networks but a
slightly different structure, requiring only a single pass through the
classifier to generate the weight sets. A grid-like structure is required as
part of a novel idea of converting a 1-D row of real values into a 2-D
structure of value bands. Each cell in any band then stores a distinct set of
weights, to represent its own importance and its relation to each output
category. During classification, all of the output weight lists can be
retrieved and summed to produce a probability for what the correct output
category is. The bands possibly work like hidden layers of neurons, but they
are variable specific, making the process orthogonal. The construction process
can be a single update process without iterations, making it potentially much
faster. It can also be compared with k-NN and may be practical for partial or
competitive updating.
| [
{
"version": "v1",
"created": "Mon, 9 Mar 2015 15:28:32 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Apr 2015 16:32:44 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Oct 2015 18:06:44 GMT"
},
{
"version": "v4",
"created": "Wed, 29 Jun 2016 10:40:50 GMT"
}
] | 1,546,560,000,000 | [
[
"Greer",
"Kieran",
""
]
] |
1503.02626 | David Windridge | David Windridge | On the Intrinsic Limits to Representationally-Adaptive Machine-Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online learning is a familiar problem setting within Machine-Learning in
which data is presented serially in time to a learning agent, requiring it to
progressively adapt within the constraints of the learning algorithm. More
sophisticated variants may involve concepts such as transfer-learning which
increase this adaptive capability, enhancing the learner's cognitive capacities
in a manner that can begin to imitate the open-ended learning capabilities of
human beings.
We shall argue in this paper, however, that a full realization of this notion
requires that, in addition to the capacity to adapt to novel data, autonomous
online learning must ultimately incorporate the capacity to update its own
representational capabilities in relation to the data. We therefore enquire
about the philosophical limits of this process, and argue that only fully
embodied learners exhibiting an a priori perception-action link in order to
ground representational adaptations are capable of exhibiting the full range of
human cognitive capability.
| [
{
"version": "v1",
"created": "Mon, 9 Mar 2015 19:17:49 GMT"
}
] | 1,425,945,600,000 | [
[
"Windridge",
"David",
""
]
] |
1503.02917 | Karl-Heinz Weis | Karl-Heinz Weis | A Case Based Reasoning Approach for Answer Reranking in Question
Answering | in Proceedings Informatik 2013, Koblenz, Germany, 2013 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this document I present an approach to answer validation and reranking for
question answering (QA) systems. A case-based reasoning (CBR) system judges
answer candidates for questions from annotated answer candidates for earlier
questions. The promise of this approach is that user feedback will result in
improved answers of the QA system, due to the growing case base. In the paper,
I present the adequate structuring of the case base and the appropriate
selection of relevant similarity measures, in order to solve the answer
validation problem. The structural case base is built from annotated MultiNet
graphs, which provide representations for natural language expressions, and
corresponding graph similarity measures. I cover a priori relations to
experienced answer candidates for former questions. I compare the CBR System
results to current approaches in an experiment integrating CBR into an existing
framework for answer validation and reranking. This integration is achieved by
adding CBR-related features to the input of a learned ranking model that
determines the final answer ranking. In the experiments based on QA@CLEF
questions, the best learned models make heavy use of CBR features. Observing
the results with a continually growing case base, I present a positive effect
of the size of the case base on the accuracy of the CBR subsystem.
| [
{
"version": "v1",
"created": "Tue, 10 Mar 2015 14:10:47 GMT"
}
] | 1,426,032,000,000 | [
[
"Weis",
"Karl-Heinz",
""
]
] |
1503.03787 | Norbert B\'atfai Ph.D. | Norbert B\'atfai | Are there intelligent Turing machines? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a new computing model based on the cooperation among
Turing machines called orchestrated machines. Like universal Turing machines,
orchestrated machines are also designed to simulate Turing machines but they
can also modify the original operation of the included Turing machines to
create a new layer of some kind of collective behavior. Using this new model we
can define some interesting notions related to the cooperation ability of Turing
machines such as the intelligence quotient or the emotional intelligence
quotient for Turing machines.
| [
{
"version": "v1",
"created": "Thu, 12 Mar 2015 16:00:32 GMT"
}
] | 1,426,204,800,000 | [
[
"Bátfai",
"Norbert",
""
]
] |
1503.04187 | Manuel Baltieri Mr | Simon McGregor, Manuel Baltieri and Christopher L. Buckley | A Minimal Active Inference Agent | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research on the so-called "free-energy principle" (FEP) in cognitive
neuroscience is becoming increasingly high-profile. To date, introductions to
this theory have proved difficult for many readers to follow, but it depends
mainly upon two relatively simple ideas: firstly that normative or teleological
values can be expressed as probability distributions (active inference), and
secondly that approximate Bayesian reasoning can be effectively performed by
gradient descent on model parameters (the free-energy principle). The notion of
active inference is of great interest for a number of disciplines including
cognitive science and artificial intelligence, as well as cognitive
neuroscience, and deserves to be more widely known.
This paper attempts to provide an accessible introduction to active inference
and informational free-energy, for readers from a range of scientific
backgrounds. In this work we introduce an agent-based model with an agent trying
to make predictions about its position in a one-dimensional discretized world
using methods from the FEP.
| [
{
"version": "v1",
"created": "Fri, 13 Mar 2015 18:58:25 GMT"
}
] | 1,426,464,000,000 | [
[
"McGregor",
"Simon",
""
],
[
"Baltieri",
"Manuel",
""
],
[
"Buckley",
"Christopher L.",
""
]
] |
1503.04220 | Arindam Chaudhuri AC | Arindam Chaudhuri, Dipak Chatterjee | Fuzzy Mixed Integer Optimization Model for Regression Approach | Conference Paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mixed Integer Optimization has been a topic of active research in past
decades. It has been used to solve Statistical problems of classification and
regression involving massive data. However, there is an inherent degree of
vagueness present in huge real life data. This impreciseness is handled by
Fuzzy Sets. In this paper, a Fuzzy Mixed Integer Optimization Method (FMIOM) is
used to find a solution to the regression problem. The methodology exploits the
discrete character of the problem; in this way, large-scale problems are solved
within practical limits. The data points are separated into different
polyhedral regions and each region has its own distinct regression
coefficients. In this attempt, attention is drawn to the Statistics and Data
Mining community that Integer Optimization can be used to significant effect to
revisit different Statistical problems. Computational experiments with
generated and real data sets show
that FMIOM is comparable to and often outperforms current leading methods. The
results illustrate potential for significant impact of Fuzzy Integer
Optimization methods on Computational Statistics and Data Mining.
| [
{
"version": "v1",
"created": "Fri, 13 Mar 2015 21:10:38 GMT"
}
] | 1,426,550,400,000 | [
[
"Chaudhuri",
"Arindam",
""
],
[
"Chatterjee",
"Dipak",
""
]
] |
1503.04222 | Arindam Chaudhuri AC | Arindam Chaudhuri, Dipak Chatterjee, Ritesh Rajput | Fuzzy Mixed Integer Linear Programming for Air Vehicles Operations
Optimization | Conference Paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using multiple Air Vehicles (AVs) to prosecute geographically dispersed targets
is an important optimization problem. Multiple associated tasks, viz. target
classification, attack and verification, are successively performed on each
target. The optimal minimum-time performance of these tasks requires
cooperation among vehicles such that critical time constraints are satisfied,
i.e., a target must be classified before it can be attacked, and an AV is sent
to the target area to verify its destruction after the target has been
attacked. Here, an optimal task scheduling problem from the Indian Air Force is
formulated as a Fuzzy Mixed Integer Linear Programming (FMILP) problem. The
solution assigns all
tasks to vehicles and performs scheduling in an optimal manner including
scheduled staged departure times. Coupled tasks involving time and task order
constraints are addressed. When AVs have sufficient endurance, existence of
optimal solution is guaranteed. The solution developed can serve as an
effective heuristic for different categories of AV optimization problems.
| [
{
"version": "v1",
"created": "Fri, 13 Mar 2015 21:14:49 GMT"
}
] | 1,426,550,400,000 | [
[
"Chaudhuri",
"Arindam",
""
],
[
"Chatterjee",
"Dipak",
""
],
[
"Rajput",
"Ritesh",
""
]
] |
1503.04333 | Kieran Greer Dr | Kieran Greer | A More Human Way to Play Computer Chess | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper suggests a forward-pruning technique for computer chess that uses
'Move Tables', which are like Transposition Tables, but for moves not
positions. They use an efficient memory structure, and the design is put into
the context of long- and short-term memories. The long-term memory updates a
play path with weight reinforcement, while the short-term memory can be
immediately added or removed. With this, 'long branches' can play a short path,
before returning to a full search at the resulting leaf nodes. Re-using an
earlier search path allows the tree to be forward-pruned, which is known to be
dangerous, because it removes part of the search process. Additional checks are
therefore made and moves can even be re-added when the search result is
unsatisfactory. Automatic feature analysis is now central to the algorithm,
where key squares and related squares can be generated automatically and used
to guide the search process. Using this analysis, if a search result is
inferior, it can re-insert un-played moves that cover these key squares only.
On the tactical side, a type of move that the forward-pruning will fail on is
recognised and a pattern-based solution to that problem is suggested. This has
completed the theory of an earlier paper and resulted in a more human-like
approach to searching for a chess move. Tests demonstrate that the obvious
blunders associated with forward pruning are no longer present and that it can
compete at the top level with regard to playing strength.
| [
{
"version": "v1",
"created": "Sat, 14 Mar 2015 18:47:07 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Jun 2015 14:23:20 GMT"
},
{
"version": "v3",
"created": "Tue, 16 May 2017 12:11:10 GMT"
},
{
"version": "v4",
"created": "Mon, 11 Jun 2018 08:37:36 GMT"
},
{
"version": "v5",
"created": "Thu, 17 Jan 2019 12:31:20 GMT"
}
] | 1,547,769,600,000 | [
[
"Greer",
"Kieran",
""
]
] |
1503.05055 | Arnaud Martin | Mouna Chebbah (IRISA), Arnaud Martin (IRISA), Boutheina Ben Yaghlane | Combining partially independent belief functions | Decision Support Systems, Elsevier, 2015 | null | 10.1016/j.dss.2015.02.017 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The theory of belief functions manages uncertainty and also proposes a set of
combination rules to aggregate opinions of several sources. Some combination
rules mix evidential information where sources are independent; other rules are
suited to combine evidential information held by dependent sources. In this
paper we have two main contributions: First we suggest a method to quantify
sources' degree of independence that may guide the choice of the most
appropriate set of combination rules. Second, we propose a new combination rule
that takes into consideration the sources' degree of independence. The proposed
method is illustrated on generated mass functions.
| [
{
"version": "v1",
"created": "Tue, 17 Mar 2015 14:04:38 GMT"
}
] | 1,426,636,800,000 | [
[
"Chebbah",
"Mouna",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
],
[
"Yaghlane",
"Boutheina Ben",
""
]
] |
1503.05501 | Odinaldo Rodrigues | D. M. Gabbay and O. Rodrigues | Probabilistic Argumentation. An Equational Approach | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a generic way to add any new feature to a system. It involves 1)
identifying the basic units which build up the system and 2) introducing the
new feature to each of these basic units.
In the case where the system is argumentation and the feature is
probabilistic we have the following. The basic units are: a. the nature of the
arguments involved; b. the membership relation in the set S of arguments; c.
the attack relation; and d. the choice of extensions.
Generically, adding a new aspect (probabilistic, fuzzy, temporal, etc.) to an
argumentation network <S,R> can be done by adding this feature to each
component a-d. This is a brute-force method and may yield a result that is
neither intuitive nor meaningful.
A better way is to meaningfully translate the object system into another
target system which does have the aspect required and then let the target
system endow the aspect on the initial system. In our case we translate
argumentation into classical propositional logic and get probabilistic
argumentation from the translation.
Of course what we get depends on how we translate.
In fact, in this paper we introduce probabilistic semantics to abstract
argumentation theory based on the equational approach to argumentation
networks. We then compare our semantics with existing proposals in the
literature including the approaches by M. Thimm and by A. Hunter. Our
methodology in general is discussed in the conclusion.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2015 17:29:24 GMT"
}
] | 1,426,723,200,000 | [
[
"Gabbay",
"D. M.",
""
],
[
"Rodrigues",
"O.",
""
]
] |
1503.05667 | Sourish Dasgupta | Sourish Dasgupta, Gaurav Maheshwari, Priyansh Trivedi | BitSim: An Algebraic Similarity Measure for Description Logics Concepts | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this paper, we propose an algebraic similarity measure {\sigma}_BS (BS
stands for BitSim) for assigning a semantic similarity score to concept
definitions in ALCH+, an expressive fragment of Description Logics (DL). We
define an algebraic interpretation function, I_B, that maps a concept
definition to a unique string ({\omega}_B), called a bit-code, over an alphabet
{\Sigma}_B of 11 symbols belonging to L_B, the language over P_B. I_B has
semantic correspondence with the conventional model-theoretic interpretation of
DL. We then define {\sigma}_BS on L_B. A detailed analysis of I_B and
{\sigma}_BS is given.
| [
{
"version": "v1",
"created": "Thu, 19 Mar 2015 08:05:03 GMT"
}
] | 1,426,809,600,000 | [
[
"Dasgupta",
"Sourish",
""
],
[
"Maheshwari",
"Gaurav",
""
],
[
"Trivedi",
"Priyansh",
""
]
] |
1503.06087 | Frieder Stolzenburg | Ulrich Furbach, Claudia Schon, Frieder Stolzenburg, Karl-Heinz Weis,
Claus-Peter Wirth | The RatioLog Project: Rational Extensions of Logical Reasoning | 7 pages, 3 figures | KI, 29(3):271-277, 2015 | 10.1007/s13218-015-0377-9 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Higher-level cognition includes logical reasoning and the ability of question
answering with common sense. The RatioLog project addresses the problem of
rational reasoning in deep question answering by methods from automated
deduction and cognitive computing. In a first phase, we combine techniques from
information retrieval and machine learning to find appropriate answer
candidates from the huge amount of text in the German version of the free
encyclopedia "Wikipedia". In a second phase, an automated theorem prover tries
to verify the answer candidates on the basis of their logical representations.
In a third phase - because the knowledge may be incomplete and inconsistent -
we consider extensions of logical reasoning to improve the results. In this
context, we work toward the application of techniques from human reasoning: We
employ defeasible reasoning to compare the answers w.r.t. specificity, deontic
logic, normative reasoning, and model construction. Moreover, we use integrated
case-based reasoning and machine learning techniques on the basis of the
semantic structure of the questions and answer candidates to learn giving the
right answers.
| [
{
"version": "v1",
"created": "Fri, 20 Mar 2015 14:33:48 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jul 2015 08:21:03 GMT"
}
] | 1,438,300,800,000 | [
[
"Furbach",
"Ulrich",
""
],
[
"Schon",
"Claudia",
""
],
[
"Stolzenburg",
"Frieder",
""
],
[
"Weis",
"Karl-Heinz",
""
],
[
"Wirth",
"Claus-Peter",
""
]
] |
1503.07341 | Catarina Moreira | Catarina Moreira | An Experiment on Using Bayesian Networks for Process Mining | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Process mining is a technique that performs an automatic analysis of business
processes from a log of events with the promise of understanding how processes
are executed in an organisation.
Several models have been proposed to address this problem, however, here we
propose a different approach to deal with uncertainty. By uncertainty, we mean
estimating the probability of some sequence of tasks occurring in a business
process, given that only a subset of tasks may be observable.
In this sense, this work proposes a new approach to perform process mining
using Bayesian Networks. These structures can take into account the probability
of a task being present or absent in the business process. Moreover, Bayesian
Networks are able to automatically learn these probabilities through mechanisms
such as the maximum likelihood estimate and EM clustering.
Experiments conducted on a Loan Application case study suggest that Bayesian
Networks are adequate structures for process mining and enable a deep analysis
of the business process model that can be used to answer queries about that
process.
| [
{
"version": "v1",
"created": "Wed, 25 Mar 2015 11:34:31 GMT"
}
] | 1,427,328,000,000 | [
[
"Moreira",
"Catarina",
""
]
] |
1503.07587 | Jose Hernandez-Orallo | Jose Hernandez-Orallo | Universal Psychometrics Tasks: difficulty, composition and decomposition | 30 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This note revisits the concepts of task and difficulty. The notion of
cognitive task and its use for the evaluation of intelligent systems is still
replete with issues. The view of tasks as MDPs in the context of reinforcement
learning has been especially useful for the formalisation of learning tasks.
However, this alternating interaction does not accommodate well some other
tasks that are usual in artificial intelligence and, most especially, in animal
and human evaluation. In particular, we want to have a more general account of
episodes, rewards and responses, and, most especially, the computational
complexity of the algorithm behind an agent solving a task. This is crucial for
the determination of the difficulty of a task as the (logarithm of the) number
of computational steps required to acquire an acceptable policy for the task,
which includes the exploration of policies and their verification. We introduce
a notion of asynchronous-time stochastic tasks. Based on this interpretation,
we can see what task difficulty is, what instance difficulty is (relative to a
task) and also what task compositions and decompositions are.
| [
{
"version": "v1",
"created": "Thu, 26 Mar 2015 00:34:34 GMT"
}
] | 1,427,414,400,000 | [
[
"Hernandez-Orallo",
"Jose",
""
]
] |
1503.07715 | Daniel Kovach Jr. | Daniel Kovach | The Computational Theory of Intelligence: Data Aggregation | Published in IJMNTA | null | 10.4236/ijmnta.2014.34016 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we will expound upon the concepts proffered in [1], where we
proposed an information theoretic approach to intelligence in the computational
sense. We will examine data and meme aggregation, and study the effect of
limited resources on the resulting meme amplitudes.
| [
{
"version": "v1",
"created": "Wed, 24 Dec 2014 07:47:46 GMT"
}
] | 1,427,414,400,000 | [
[
"Kovach",
"Daniel",
""
]
] |
1503.07845 | Heike Trautmann | Luis Marti, Christian Grimme, Pascal Kerschke, Heike Trautmann,
G\"unter Rudolph | Averaged Hausdorff Approximations of Pareto Fronts based on
Multiobjective Estimation of Distribution Algorithms | 13 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the a posteriori approach of multiobjective optimization the Pareto front
is approximated by a finite set of solutions in the objective space. The
quality of the approximation can be measured by different indicators that take
into account the approximation's closeness to the Pareto front and its
distribution along the Pareto front. In particular, the averaged Hausdorff
indicator prefers an almost uniform distribution. An observed drawback of
multiobjective estimation of distribution algorithms (MEDAs) is that - as
common for randomized metaheuristics - the final population usually is not
uniformly distributed along the Pareto front. Therefore, we propose a
postprocessing strategy which consists of applying the averaged Hausdorff
indicator to the complete archive of generated solutions after optimization in
order to select a uniformly distributed subset of nondominated solutions from
the archive. In this paper, we put forward a strategy for extracting the above
described subset. The effectiveness of the proposal is contrasted in a series
of experiments that involve different MEDAs and filtering techniques.
| [
{
"version": "v1",
"created": "Thu, 26 Mar 2015 19:44:48 GMT"
}
] | 1,427,414,400,000 | [
[
"Marti",
"Luis",
""
],
[
"Grimme",
"Christian",
""
],
[
"Kerschke",
"Pascal",
""
],
[
"Trautmann",
"Heike",
""
],
[
"Rudolph",
"Günter",
""
]
] |
1503.08275 | Rosemarie Velik | Rosemarie Velik, Pascal Nicolay | Energy Management in Storage-Augmented, Grid-Connected Prosumer
Buildings and Neighbourhoods Using a Modified Simulated Annealing
Optimization | Computers & Operations Research, 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article introduces a modified simulated annealing optimization approach
for automatically determining optimal energy management strategies in
grid-connected, storage-augmented, photovoltaics-supplied prosumer buildings
and neighbourhoods based on user-specific goals. For evaluating the modified
simulated annealing optimizer, a number of test scenarios in the field of
energy self-consumption maximization are defined and results are compared to a
gradient descent and a total state space search approach. The benchmarking
against these two reference methods demonstrates that the modified simulated
annealing approach is able to find significantly better solutions than the
gradient descent algorithm - being equal or very close to the global optimum -
with significantly less computational effort and processing time than the total
state space search approach.
| [
{
"version": "v1",
"created": "Sat, 28 Mar 2015 07:16:22 GMT"
}
] | 1,427,760,000,000 | [
[
"Velik",
"Rosemarie",
""
],
[
"Nicolay",
"Pascal",
""
]
] |
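
For context, the skeleton of a standard simulated annealing loop with geometric cooling; the article's modified variant and its storage/energy-flow state encoding are not reproduced, so the cost function and neighbour move below are placeholders.

```python
import math, random

def simulated_annealing(cost, neighbour, x0, t0=1.0, alpha=0.995,
                        steps=10000, seed=0):
    """Standard simulated annealing (minimisation) with geometric cooling."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        fy = cost(y)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha
    return best, fbest

# Placeholder problem: a bumpy 1-D function with many local minima.
cost = lambda x: x * x + 10 * math.sin(3 * x)
neighbour = lambda x, rng: min(10.0, max(-10.0, x + rng.gauss(0, 0.5)))
print(simulated_annealing(cost, neighbour, x0=8.0))
```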
1503.08289 | Matteo Brunelli | Matteo Brunelli | Recent advances on inconsistency indices for pairwise comparisons - a
commentary | 13 pages, 2 figures | Fundamenta Informaticae, 144(3-4), 321-332, 2016 | 10.3233/FI-2016-1338 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper recalls the definition of consistency for pairwise comparison
matrices and briefly presents the concept of inconsistency index in connection
to other aspects of the theory of pairwise comparisons. By commenting on a
recent contribution by Koczkodaj and Szwarc, it will be shown that the
discussion on inconsistency indices is far from being over, and the ground is
still fertile for debates.
| [
{
"version": "v1",
"created": "Sat, 28 Mar 2015 10:24:43 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jul 2015 08:56:35 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Mar 2016 14:47:12 GMT"
}
] | 1,457,654,400,000 | [
[
"Brunelli",
"Matteo",
""
]
] |
1503.08345 | Pravendra Singh | Pravendra Singh | Implementing an intelligent version of the classical sliding-puzzle game
for unix terminals using Golang's concurrency primitives | 8 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | An intelligent version of the sliding-puzzle game is developed using the new
Go programming language, which uses a concurrent version of the A* informed
search algorithm to power a solver-bot that runs in the background. The game
runs in the computer system's terminal. It was developed mainly for UNIX-type
systems, but it works well on nearly all operating systems because of the
cross-platform compatibility of the programming language used. The game uses
the language's concurrency primitives to simplify most of the hefty parts of
the game. A real-time notification delivery architecture is developed using the
language's built-in concurrency support, which performs similarly to the
event-based, context-aware invocations seen on the web platform.
| [
{
"version": "v1",
"created": "Sat, 28 Mar 2015 20:35:02 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Aug 2015 17:07:32 GMT"
}
] | 1,440,460,800,000 | [
[
"Singh",
"Pravendra",
""
]
] |
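
A compact rendering of the A* search at the heart of such a solver, using the standard Manhattan-distance heuristic on the 3x3 puzzle; the referenced implementation is concurrent Go, whereas this is a plain sequential Python sketch.

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 denotes the blank tile

def manhattan(state):
    """Admissible heuristic: total tile distance from goal positions."""
    dist = 0
    for idx, tile in enumerate(state):
        if tile:
            goal = tile - 1
            dist += abs(idx // 3 - goal // 3) + abs(idx % 3 - goal % 3)
    return dist

def neighbours(state):
    """States reachable by sliding one tile into the blank."""
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            n = nr * 3 + nc
            s = list(state)
            s[b], s[n] = s[n], s[b]
            yield tuple(s)

def astar(start):
    """Returns the length of an optimal solution (None if unsolvable)."""
    frontier = [(manhattan(start), 0, start)]
    g_cost = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g
        for nxt in neighbours(state):
            if nxt not in g_cost or g + 1 < g_cost[nxt]:
                g_cost[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt))
    return None

print(astar((1, 2, 3, 4, 5, 6, 0, 7, 8)))   # 2 moves from this start state
```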
1503.09137 | Vit Novacek | Vit Novacek | Formalising Hypothesis Virtues in Knowledge Graphs: A General
Theoretical Framework and its Validation in Literature-Based Discovery
Experiments | Pre-print of an article submitted to Artificial Intelligence Journal
(after the manuscript has been refused by the editors of Journal of Web
Semantics before the peer review process due to being out of scope for that
journal) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an approach to discovery informatics that uses so called
knowledge graphs as the essential representation structure. Knowledge graph is
an umbrella term that subsumes various approaches to tractable representation
of large volumes of loosely structured knowledge in a graph form. It has been
used primarily in the Web and Linked Open Data contexts, but is applicable to
any other area dealing with knowledge representation. From the perspective of our
approach motivated by the challenges of discovery informatics, knowledge graphs
correspond to hypotheses. We present a framework for formalising so called
hypothesis virtues within knowledge graphs. The framework is based on a classic
work in philosophy of science, and naturally progresses from mostly informative
foundational notions to actionable specifications of measures corresponding to
particular virtues. These measures can consequently be used to determine
refined sub-sets of knowledge graphs that have large relative potential for
making discoveries. We validate the proposed framework by experiments in
literature-based discovery. The experiments have demonstrated the utility of
our work and its superiority w.r.t. related approaches.
| [
{
"version": "v1",
"created": "Tue, 31 Mar 2015 17:29:58 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Apr 2015 11:51:12 GMT"
}
] | 1,430,265,600,000 | [
[
"Novacek",
"Vit",
""
]
] |
1504.00136 | Guangming Lang | Guangming Lang | Knowledge reduction of dynamic covering decision information systems
with immigration of more objects | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In practical situations, it is of interest to investigate computing
approximations of sets as an important step of knowledge reduction of dynamic
covering decision information systems. In this paper, we present incremental
approaches to computing the type-1 and type-2 characteristic matrices of
dynamic coverings whose cardinalities increase with immigration of more
objects. We also present the incremental algorithms of computing the second and
sixth lower and upper approximations of sets in dynamic covering approximation
spaces.
| [
{
"version": "v1",
"created": "Wed, 1 Apr 2015 08:12:01 GMT"
}
] | 1,427,932,800,000 | [
[
"Lang",
"Guangming",
""
]
] |
1504.01004 | Zhen Zhang Dr. | Zhen Zhang, Chonghui Guo, Luis Mart\'inez | Managing Multi-Granular Linguistic Distribution Assessments in
Large-Scale Multi-Attribute Group Decision Making | 32 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linguistic large-scale group decision making (LGDM) problems are more and
more common nowadays. In such problems a large group of decision makers are
involved in the decision process and elicit linguistic information that is
usually assessed in different linguistic scales with diverse granularity
because of the decision makers' distinct knowledge and backgrounds. To keep
maximum information in the initial stages of linguistic LGDM problems, the use
of multi-granular linguistic distribution assessments seems a suitable choice;
however, to manage such multi-granular linguistic distribution assessments, the
development of a new linguistic computational approach is necessary. In this
paper, a novel computational model is proposed based on the use of extended
linguistic hierarchies, which not only can be used to operate with
multi-granular linguistic distribution assessments, but also can provide
interpretable linguistic results to decision makers. Based on this new
linguistic computational model, an approach to linguistic large-scale
multi-attribute group decision making is proposed and applied to a talent
selection process in universities.
| [
{
"version": "v1",
"created": "Sat, 4 Apr 2015 10:52:47 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Nov 2015 06:01:06 GMT"
}
] | 1,447,891,200,000 | [
[
"Zhang",
"Zhen",
""
],
[
"Guo",
"Chonghui",
""
],
[
"Martínez",
"Luis",
""
]
] |
1504.01173 | Arthur Choi | Arthur Choi and Adnan Darwiche | Dual Decomposition from the Perspective of Relax, Compensate and then
Recover | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relax, Compensate and then Recover (RCR) is a paradigm for approximate
inference in probabilistic graphical models that has previously provided
theoretical and practical insights on iterative belief propagation and some of
its generalizations. In this paper, we characterize the technique of dual
decomposition in terms of RCR, viewing it as a specific way to compensate
for relaxed equivalence constraints. Among other insights gathered from this
perspective, we propose novel heuristics for recovering relaxed equivalence
constraints with the goal of incrementally tightening dual decomposition
approximations, all the way to reaching exact solutions. We also show
empirically that recovering equivalence constraints can sometimes tighten the
corresponding approximation (even obtaining exact results) without greatly
increasing the complexity of inference.
| [
{
"version": "v1",
"created": "Sun, 5 Apr 2015 23:49:11 GMT"
}
] | 1,428,364,800,000 | [
[
"Choi",
"Arthur",
""
],
[
"Darwiche",
"Adnan",
""
]
] |
1504.02027 | Vasile Patrascu | Vasile Patrascu | The Neutrosophic Entropy and its Five Components | null | Neutrosophic Sets and Systems, Vol.7, 2015,pp. 40-46 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents two variants of penta-valued representation for
neutrosophic entropy. The first is an extension of Kaufmann's formula and the
second is an extension of Kosko's formula.
Based on the primary three-valued information represented by the degree of
truth, degree of falsity and degree of neutrality, some penta-valued
representations are built that better highlight specific features of
neutrosophic entropy. Thus, we highlight five features of neutrosophic
uncertainty, namely ambiguity, ignorance, contradiction, neutrality and
saturation. These five features are extended to a seven-fold partition of
unity by adding two features of neutrosophic certainty, namely truth and
falsity.
The paper also presents the particular forms of neutrosophic entropy obtained
in the case of bifuzzy representations, intuitionistic fuzzy representations,
paraconsistent fuzzy representations and finally the case of fuzzy
representations.
| [
{
"version": "v1",
"created": "Thu, 5 Feb 2015 07:06:16 GMT"
}
] | 1,428,537,600,000 | [
[
"Patrascu",
"Vasile",
""
]
] |
1504.02281 | Ratlamwala Khatija Yusuf | Ahlam Ansari, Mohd Amin Sayyed, Khatija Ratlamwala, Parvin Shaikh | An Optimized Hybrid Approach for Path Finding | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Path finding algorithms address the problem of finding the shortest path from
source to destination while avoiding obstacles. There exist various search
algorithms, namely A*, Dijkstra's and ant colony optimization. Unlike most path
finding algorithms, which require destination co-ordinates to compute a path,
the proposed algorithm comprises a new method that finds a path using
backtracking without requiring destination co-ordinates. Moreover, in existing
path finding algorithms, the number of iterations required to find a path is
large. Hence, to overcome this, an algorithm is proposed which reduces the
number of iterations required to traverse the path. The proposed algorithm is a
hybrid of backtracking and a new technique (a modified 8-neighbor approach).
The proposed algorithm can become an essential part of location-based, network
and gaming applications, as well as grid traversal, navigation, mobile robotics
and Artificial Intelligence.
| [
{
"version": "v1",
"created": "Thu, 9 Apr 2015 12:49:53 GMT"
}
] | 1,428,624,000,000 | [
[
"Ansari",
"Ahlam",
""
],
[
"Sayyed",
"Mohd Amin",
""
],
[
"Ratlamwala",
"Khatija",
""
],
[
"Shaikh",
"Parvin",
""
]
] |
1504.02882 | Liu Feng | Feng Liu, Yong Shi | Quantitative Analysis of Whether Machine Intelligence Can Surpass Human
Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Whether machine intelligence can surpass human intelligence is a
controversial issue. On the basis of traditional IQ, this article presents a
Universal IQ test method suitable for both machine intelligence and human
intelligence. With the method, machine and human intelligences were divided
into 4 major categories and 15 subcategories. A total of 50 search engines
across the world and 150 persons of different ages were subjected to the
relevant test. The 2014 Universal IQ ranking list for the test subjects was
then obtained.
| [
{
"version": "v1",
"created": "Sat, 11 Apr 2015 14:48:23 GMT"
}
] | 1,428,969,600,000 | [
[
"Liu",
"Feng",
""
],
[
"Shi",
"Yong",
""
]
] |
1504.03303 | Eray Ozkural | Eray \"Ozkural | Ultimate Intelligence Part II: Physical Measure and Complexity of
Intelligence | This paper was initially submitted to ALT-2014. We are taking the
valuable opinions of the anonymous reviewers into account. Many thanks to
Laurent Orseau for his constructive comments on the draft, which inspired
this revision. arXiv admin note: substantial text overlap with
arXiv:1501.00601 This is a major revision over the last version edited | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We continue our analysis of volume and energy measures that are appropriate
for quantifying inductive inference systems. We extend logical depth and
conceptual jump size measures in AIT to stochastic problems, and physical
measures that involve volume and energy. We introduce a graphical model of
computational complexity that we believe to be appropriate for intelligent
machines. We show several asymptotic relations between energy, logical depth
and volume of computation for inductive inference. In particular, we arrive at
a "black-hole equation" of inductive inference, which relates energy, volume,
space, and algorithmic information for an optimal inductive inference solution.
We introduce energy-bounded algorithmic entropy. We briefly apply our ideas to
the physical limits of intelligent computation in our universe.
| [
{
"version": "v1",
"created": "Thu, 9 Apr 2015 20:39:14 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2016 10:39:16 GMT"
}
] | 1,463,011,200,000 | [
[
"Özkural",
"Eray",
""
]
] |
1504.03451 | Song-Ju Kim Dr. | Song-Ju Kim, Makoto Naruse and Masashi Aono | Harnessing Natural Fluctuations: Analogue Computer for Efficient
Socially Maximal Decision Making | 30 pages, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Each individual handles many tasks of finding the most profitable option from
a set of options that stochastically provide rewards. Our society comprises a
collection of such individuals, and the society is expected to maximise the
total rewards, while the individuals compete for common rewards. Such
collective decision making is formulated as the `competitive multi-armed bandit
problem (CBP)', requiring a huge computational cost. Herein, we demonstrate a
prototype of an analog computer that efficiently solves CBPs by exploiting the
physical dynamics of numerous fluids in coupled cylinders. This device enables
the maximisation of the total rewards for the society without paying the
conventionally required computational cost; this is because the fluids estimate
the reward probabilities of the options for the exploitation of past knowledge
and generate random fluctuations for the exploration of new knowledge. Our
results suggest that to optimise the social rewards, the utilisation of
fluid-derived natural fluctuations is more advantageous than applying
artificial external fluctuations. Our analog computing scheme is expected to
trigger further studies for harnessing the huge computational power of natural
phenomena for resolving a wide variety of complex problems in modern
information society.
| [
{
"version": "v1",
"created": "Tue, 14 Apr 2015 08:30:27 GMT"
}
] | 1,429,056,000,000 | [
[
"Kim",
"Song-Ju",
""
],
[
"Naruse",
"Makoto",
""
],
[
"Aono",
"Masashi",
""
]
] |
1504.03558 | Nguyen Minh van | Nguyen Van Minh and Le Hoang Son | Fuzzy approaches to context variable in fuzzy geographically weighted
clustering | 11 pages | null | 10.5121/csit.2015.50503 | null | cs.AI | http://creativecommons.org/licenses/publicdomain/ | Fuzzy Geographically Weighted Clustering (FGWC) is considered as a suitable
tool for the analysis of geo-demographic data that assists the provision and
planning of products and services to local people. Context variables were
attached to FGWC in order to accelerate the computing speed of the algorithm
and to focus the results on the domain of interests. Nonetheless, the
determination of exact, crisp values of the context variable is a hard task. In
this paper, we propose two novel methods using fuzzy approaches for that
determination. A numerical example is given to illustrate the uses of the
proposed methods.
| [
{
"version": "v1",
"created": "Mon, 13 Apr 2015 10:34:02 GMT"
}
] | 1,429,056,000,000 | [
[
"Van Minh",
"Nguyen",
""
],
[
"Son",
"Le Hoang",
""
]
] |
1504.03592 | Louise Dennis Dr | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | Towards Verifiably Ethical Robot Behaviour | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making.
| [
{
"version": "v1",
"created": "Tue, 14 Apr 2015 15:49:40 GMT"
}
] | 1,429,056,000,000 | [
[
"Dennis",
"Louise A.",
""
],
[
"Fisher",
"Michael",
""
],
[
"Winfield",
"Alan F. T.",
""
]
] |
1504.05381 | Ryuta Arisaka | Ryuta Arisaka | How do you revise your belief set with %$;@*? | Corrected the following: 1. In Definition 1, the function I and Assoc
were both defined to map into 2^Props x 2^Props, but they should be clearly
into 2^{Props x Props}. 2. In Definition 1, one disjunctive case was being
omitted. One case (5th item) was inserted to complete the picture | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the classic AGM belief revision theory, beliefs are static and do not
change their own shape. For instance, if p is accepted by a rational agent, it
will remain p to the agent. But such a thing rarely happens to us. Often, when we
accept some information p, what is actually accepted is not the whole p, but
only a portion of it; not necessarily because we select the portion but because
p must be perceived. Only the perceived p is accepted; and the perception is
subject to what we already believe (know). What may, however, happen to the
rest of p that initially escaped our attention? In this work we argue that the
invisible part is also accepted by the agent, if only unconsciously. Hence some
parts of p are accepted as visible beliefs, while some other parts as latent ones.
The division is not static. As the set of beliefs changes, what were hidden may
become visible. We present a perception-based belief theory that incorporates
latent beliefs.
| [
{
"version": "v1",
"created": "Tue, 21 Apr 2015 10:44:07 GMT"
},
{
"version": "v2",
"created": "Thu, 21 May 2015 11:45:49 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Jan 2016 03:29:16 GMT"
}
] | 1,453,939,200,000 | [
[
"Arisaka",
"Ryuta",
""
]
] |
1504.05411 | Daniel Nyga | Daniel Nyga and Michael Beetz | Reasoning about Unmodelled Concepts - Incorporating Class Taxonomies in
Probabilistic Relational Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key problem in the application of first-order probabilistic methods is the
enormous size of graphical models they imply. The size results from the
possible worlds that can be generated by a domain of objects and relations. One
of the reasons for this explosion is that so far the approaches do not
sufficiently exploit the structure and similarity of possible worlds in order
to encode the models more compactly. We propose fuzzy inference in Markov logic
networks, which enables the use of taxonomic knowledge as a source of imposing
structure onto possible worlds. We show that by exploiting this structure,
probability distributions can be represented more compactly and that the
reasoning systems become capable of reasoning about concepts not contained in
the probabilistic knowledge base.
| [
{
"version": "v1",
"created": "Tue, 21 Apr 2015 13:04:24 GMT"
}
] | 1,429,660,800,000 | [
[
"Nyga",
"Daniel",
""
],
[
"Beetz",
"Michael",
""
]
] |
1504.05696 | Murray Shanahan | Murray Shanahan | Ascribing Consciousness to Artificial Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper critically assesses the anti-functionalist stance on consciousness
adopted by certain advocates of integrated information theory (IIT), a
corollary of which is that human-level artificial intelligence implemented on
conventional computing hardware is necessarily not conscious. The critique
draws on variations of a well-known gradual neuronal replacement thought
experiment, as well as bringing out tensions in IIT's treatment of
self-knowledge. The aim, though, is neither to reject IIT outright nor to
champion functionalism in particular. Rather, it is suggested that both ideas
have something to offer a scientific understanding of consciousness, as long as
they are not dressed up as solutions to illusory metaphysical problems. As for
human-level AI, we must await its development before we can decide whether or
not to ascribe consciousness to it.
| [
{
"version": "v1",
"created": "Wed, 22 Apr 2015 08:50:16 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Sep 2015 08:40:33 GMT"
}
] | 1,441,670,400,000 | [
[
"Shanahan",
"Murray",
""
]
] |
1504.05846 | Peter Nightingale | James Caldwell and Ian P. Gent and Peter Nightingale | Generalized Support and Formal Development of Constraint Propagators | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constraint programming is a family of techniques for solving combinatorial
problems, where the problem is modelled as a set of decision variables
(typically with finite domains) and a set of constraints that express relations
among the decision variables. One key concept in constraint programming is
propagation: reasoning on a constraint or set of constraints to derive new
facts, typically to remove values from the domains of decision variables.
Specialised propagation algorithms (propagators) exist for many classes of
constraints.
The concept of support is pervasive in the design of propagators.
Traditionally, when a domain value ceases to have support, it may be removed
because it takes part in no solutions. Arc-consistency algorithms such as
AC2001 make use of support in the form of a single domain value. GAC algorithms
such as GAC-Schema use a tuple of values to support each literal. We generalize
these notions of support in two ways. First, we allow a set of tuples to act as
support. Second, the supported object is generalized from a set of literals
(GAC-Schema) to an entire constraint or any part of it.
We design a methodology for developing correct propagators using generalized
support. A constraint is expressed as a family of support properties, which may
be proven correct against the formal semantics of the constraint. Using
Curry-Howard isomorphism to interpret constructive proofs as programs, we show
how to derive correct propagators from the constructive proofs of the support
properties. The framework is carefully designed to allow efficient algorithms
to be produced. Derived algorithms may make use of dynamic literal triggers or
watched literals for efficiency. Finally, two case studies of deriving
efficient algorithms are given.
| [
{
"version": "v1",
"created": "Wed, 22 Apr 2015 15:34:56 GMT"
},
{
"version": "v2",
"created": "Mon, 30 May 2016 11:50:53 GMT"
}
] | 1,464,652,800,000 | [
[
"Caldwell",
"James",
""
],
[
"Gent",
"Ian P.",
""
],
[
"Nightingale",
"Peter",
""
]
] |
1504.06374 | Vijay Saraswat | Cristina Cornelio and Andrea Loreggia and Vijay Saraswat | Logical Conditional Preference Theories | 15 pages, 1 figure, submitted to CP 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | CP-nets represent the dominant existing framework for expressing qualitative
conditional preferences between alternatives, and are used in a variety of
areas including constraint solving. Over the last fifteen years, a significant
literature has developed exploring semantics, algorithms, implementation and
use of CP-nets. This paper introduces a comprehensive new framework for
conditional preferences: logical conditional preference theories (LCP
theories). To express preferences, the user specifies arbitrary (constraint)
Datalog programs over a binary ordering relation on outcomes. We show how LCP
theories unify and generalize existing conditional preference proposals, and
leverage the rich semantic, algorithmic and implementation frameworks of
Datalog.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 02:07:36 GMT"
}
] | 1,430,092,800,000 | [
[
"Cornelio",
"Cristina",
""
],
[
"Loreggia",
"Andrea",
""
],
[
"Saraswat",
"Vijay",
""
]
] |
1504.06423 | Adish Singla | Adish Singla, Eric Horvitz, Pushmeet Kohli, Ryen White, Andreas Krause | Information Gathering in Networks via Active Exploration | Longer version of IJCAI'15 paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How should we gather information in a network, where each node's visibility
is limited to its local neighborhood? This problem arises in numerous
real-world applications, such as surveying and task routing in social networks,
team formation in collaborative networks and experimental design with
dependency constraints. Often the informativeness of a set of nodes can be
quantified via a submodular utility function. Existing approaches for
submodular optimization, however, require that the set of all nodes that can be
selected is known ahead of time, which is often unrealistic. In contrast, we
propose a novel model where we start our exploration from an initial node, and
new nodes become visible and available for selection only once one of their
neighbors has been chosen. We then present a general algorithm NetExp for this
problem, and provide theoretical bounds on its performance dependent on
structural properties of the underlying network. We evaluate our methodology on
various simulated problem instances as well as on data collected from a social
question answering system deployed within a large enterprise.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 08:41:08 GMT"
},
{
"version": "v2",
"created": "Wed, 6 May 2015 15:39:42 GMT"
}
] | 1,430,956,800,000 | [
[
"Singla",
"Adish",
""
],
[
"Horvitz",
"Eric",
""
],
[
"Kohli",
"Pushmeet",
""
],
[
"White",
"Ryen",
""
],
[
"Krause",
"Andreas",
""
]
] |
1504.06529 | Dmitriy Zheleznyakov | Bernardo Cuenca Grau, Evgeny Kharlamov, Egor V. Kostylev, Dmitriy
Zheleznyakov | Controlled Query Evaluation for Datalog and OWL 2 Profile Ontologies | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study confidentiality enforcement in ontologies under the Controlled Query
Evaluation framework, where a policy specifies the sensitive information and a
censor ensures that query answers that may compromise the policy are not
returned. We focus on censors that ensure confidentiality while maximising
information access, and consider both Datalog and the OWL 2 profiles as
ontology languages.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 14:49:18 GMT"
}
] | 1,430,092,800,000 | [
[
"Grau",
"Bernardo Cuenca",
""
],
[
"Kharlamov",
"Evgeny",
""
],
[
"Kostylev",
"Egor V.",
""
],
[
"Zheleznyakov",
"Dmitriy",
""
]
] |
1504.06700 | Kedian Mu | Kedian Mu and Kewen Wang and Lian Wen | Preferential Multi-Context Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-context systems (MCS) presented by Brewka and Eiter can be considered
as a promising way to interlink decentralized and heterogeneous knowledge
contexts. In this paper, we propose preferential multi-context systems (PMCS),
which provide a framework for incorporating a total preorder relation over
contexts in a multi-context system. In a given PMCS, its contexts are divided
into several parts according to the total preorder relation over them,
moreover, only information flows from a context to contexts of the same or
less preferred parts are allowed to occur. As such, the first $l$ preferred
parts of a PMCS always fully capture the information exchange between contexts
of these parts, and thus compose another meaningful PMCS, termed the
$l$-section of that PMCS. We generalize the equilibrium semantics for an MCS to
the (maximal) $l_{\leq}$-equilibrium, which represents belief states at least
acceptable for the $l$-section of a PMCS. We also investigate inconsistency
analysis in PMCS and related computational complexity issues.
| [
{
"version": "v1",
"created": "Sat, 25 Apr 2015 08:20:37 GMT"
}
] | 1,430,179,200,000 | [
[
"Mu",
"Kedian",
""
],
[
"Wang",
"Kewen",
""
],
[
"Wen",
"Lian",
""
]
] |
1504.07020 | Dov Gabbay | D. M. Gabbay | Theory of Semi-Instantiation in Abstract Argumentation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study instantiated abstract argumentation frames of the form $(S,R,I)$,
where $(S,R)$ is an abstract argumentation frame and where the arguments $x$ of
$S$ are instantiated by $I(x)$ as well-formed formulas of a well-known logic,
for example as Boolean formulas or as predicate logic formulas or as modal
logic formulas. We use the method of conceptual analysis to derive the
properties of our proposed system. We seek to define the notion of complete
extensions for such systems and provide algorithms for finding such extensions.
We further develop a theory of instantiation in the abstract, using the
framework of Boolean attack formations and of conjunctive and disjunctive
attacks. We discuss applications and compare critically with the existing
related literature.
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2015 10:48:28 GMT"
}
] | 1,430,179,200,000 | [
[
"Gabbay",
"D. M.",
""
]
] |
1504.07168 | Spyros Angelopoulos | Spyros Angelopoulos | Further Connections Between Contract-Scheduling and Ray-Searching
Problems | Full version of conference paper, to appear in Proceedings of IJCAI
2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses two classes of different, yet interrelated optimization
problems. The first class of problems involves a robot that must locate a
hidden target in an environment that consists of a set of concurrent rays. The
second class pertains to the design of interruptible algorithms by means of a
schedule of contract algorithms. We study several variants of these families of
problems, such as searching and scheduling with probabilistic considerations,
redundancy and fault-tolerance issues, randomized strategies, and trade-offs
between performance and preemptions. For many of these problems we present the
first known results that apply to multi-ray and multi-problem domains. Our
objective is to demonstrate that several well-motivated settings can be
addressed using the same underlying approach.
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2015 17:23:39 GMT"
}
] | 1,430,179,200,000 | [
[
"Angelopoulos",
"Spyros",
""
]
] |
1504.07182 | Ji Wu | Ji Wu, Miao Li, Chin-Hui Lee | A Probabilistic Framework for Representing Dialog Systems and
Entropy-Based Dialog Management through Dynamic Stochastic State Evolution | 10 pages, 6 figures, 6 tables, | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a probabilistic framework for goal-driven spoken
dialog systems. A new dynamic stochastic state (DS-state) is then defined to
characterize the goal set of a dialog state at different stages of the dialog
process. Furthermore, an entropy minimization dialog management (EMDM) strategy
is also proposed to combine with the DS-states to facilitate a robust and
efficient solution in reaching a user's goals. A Song-On-Demand task, with a
total of 38117 songs and 12 attributes corresponding to each song, is used to
test the performance of the proposed approach. In an ideal simulation, assuming
no errors, the EMDM strategy is the most efficient goal-seeking method among
all tested approaches, returning the correct song within 3.3 dialog turns on
average. Furthermore, in a practical scenario, with top five candidates to
handle the unavoidable automatic speech recognition (ASR) and natural language
understanding (NLU) errors, the results show that only 61.7\% of the dialog
goals can be successfully obtained in 6.23 dialog turns on average when random
questions are asked by the system, whereas if the proposed DS-states are
updated with the top 5 candidates from the NLU output using the proposed EMDM
strategy executed at every DS-state, then an 86.7\% dialog success rate can be
accomplished effectively within 5.17 dialog turns on average. We also
demonstrate that entropy-based DM strategies are more efficient than
non-entropy based DM. Moreover, using the goal set distributions in EMDM, the
results are better than those without them, such as in state-of-the-art database
summary DM.
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2015 17:55:53 GMT"
}
] | 1,430,179,200,000 | [
[
"Wu",
"Ji",
""
],
[
"Li",
"Miao",
""
],
[
"Lee",
"Chin-Hui",
""
]
] |
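As a rough illustration of the entropy-minimization idea in the abstract above (1504.07182), the sketch below picks the question whose answer distribution has maximal entropy; when the answer is a deterministic function of the goal, this is equivalent to minimizing the expected entropy of the goal set after the answer arrives. All names (`goals`, `attributes`, `pick_question`) are illustrative, not from the paper, and the paper's EMDM strategy additionally operates over evolving DS-states.

```python
import math
from collections import defaultdict

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def pick_question(goals, attributes):
    """goals: dict goal -> current probability.
    attributes: dict attribute -> {goal: value of that attribute}."""
    def answer_entropy(attr):
        buckets = defaultdict(float)
        for g, p in goals.items():
            buckets[attributes[attr][g]] += p
        # entropy of the induced answer distribution
        return entropy(buckets.values())
    # Maximizing answer entropy maximizes information gain, i.e. it
    # minimizes the expected remaining entropy over the goal set.
    return max(attributes, key=answer_entropy)

# e.g. with goals {"songA": .5, "songB": .3, "songC": .2}, an attribute
# that splits the probability mass evenly is the preferred question.
```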
1504.07302 | Yuyin Sun | Yuyin Sun, Adish Singla, Dieter Fox, Andreas Krause | Building Hierarchies of Concepts via Crowdsourcing | 12 pages, 8 pages of main paper, 4 pages of appendix, IJCAI2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchies of concepts are useful in many applications from navigation to
organization of objects. Usually, a hierarchy is created in a centralized
manner by employing a group of domain experts, a time-consuming and expensive
process. The experts often design one single hierarchy to best explain the
semantic relationships among the concepts, and ignore the natural uncertainty
that may exist in the process. In this paper, we propose a crowdsourcing system
to build a hierarchy and furthermore capture the underlying uncertainty. Our
system maintains a distribution over possible hierarchies and actively selects
questions to ask using an information gain criterion. We evaluate our
methodology on simulated data and on a set of real world application domains.
Experimental results show that our system is robust to noise, efficient in
picking questions, cost-effective and builds high quality hierarchies.
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2015 23:14:32 GMT"
},
{
"version": "v2",
"created": "Sun, 3 May 2015 21:42:14 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Aug 2015 00:27:21 GMT"
}
] | 1,438,646,400,000 | [
[
"Sun",
"Yuyin",
""
],
[
"Singla",
"Adish",
""
],
[
"Fox",
"Dieter",
""
],
[
"Krause",
"Andreas",
""
]
] |
1504.07443 | Marie-Laure Mugnier | Jean-Fran\c{c}ois Baget and Meghyn Bienvenu and Marie-Laure Mugnier
and Swan Rocher | Combining Existential Rules and Transitivity: Next Steps | This is an extended version, completed with full proofs, of an
article appearing in IJCAI'15 - revised version (December 2016) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider existential rules (aka Datalog+) as a formalism for specifying
ontologies. In recent years, many classes of existential rules have been
exhibited for which conjunctive query (CQ) entailment is decidable. However,
most of these classes cannot express transitivity of binary relations, a
frequently used modelling construct. In this paper, we address the issue of
whether transitivity can be safely combined with decidable classes of
existential rules.
First, we prove that transitivity is incompatible with one of the simplest
decidable classes, namely aGRD (acyclic graph of rule dependencies), which
clarifies the landscape of `finite expansion sets' of rules.
Second, we show that transitivity can be safely added to linear rules (a
subclass of guarded rules, which generalizes the description logic DL-Lite-R)
in the case of atomic CQs, and also for general CQs if we place a minor
syntactic restriction on the rule set. This is shown by means of a novel query
rewriting algorithm that is specially tailored to handle transitivity rules.
Third, for the identified decidable cases, we pinpoint the combined and data
complexities of query entailment.
| [
{
"version": "v1",
"created": "Tue, 28 Apr 2015 12:22:48 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2017 08:49:35 GMT"
}
] | 1,483,660,800,000 | [
[
"Baget",
"Jean-François",
""
],
[
"Bienvenu",
"Meghyn",
""
],
[
"Mugnier",
"Marie-Laure",
""
],
[
"Rocher",
"Swan",
""
]
] |
1504.07877 | Amina Kemmar | Amina Kemmar, Samir Loudni, Yahia Lebbah, Patrice Boizumault, Thierry
Charnois | Prefix-Projection Global Constraint for Sequential Pattern Mining | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential pattern mining under constraints is a challenging data mining
task. Many efficient ad hoc methods have been developed for mining sequential
patterns, but they are all suffering from a lack of genericity. Recent works
have investigated Constraint Programming (CP) methods, but they are not still
effective because of their encoding. In this paper, we propose a global
constraint based on the projected databases principle which remedies to this
drawback. Experiments show that our approach clearly outperforms CP approaches
and competes well with ad hoc methods on large datasets.
| [
{
"version": "v1",
"created": "Wed, 29 Apr 2015 14:48:07 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Jun 2015 09:31:49 GMT"
}
] | 1,435,104,000,000 | [
[
"Kemmar",
"Amina",
""
],
[
"Loudni",
"Samir",
""
],
[
"Lebbah",
"Yahia",
""
],
[
"Boizumault",
"Patrice",
""
],
[
"Charnois",
"Thierry",
""
]
] |
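The global constraint in the abstract above (1504.07877) is built on the projected-databases principle of PrefixSpan. A minimal sketch of that core operation for sequences of single items (names illustrative):

```python
def project(database, prefix_item):
    """Prefix-project a sequence database: for every sequence containing
    prefix_item, keep only the suffix after its first occurrence. Extending
    the prefix and re-projecting recursively enumerates frequent patterns."""
    projected = []
    for seq in database:
        if prefix_item in seq:
            suffix = seq[seq.index(prefix_item) + 1:]
            if suffix:
                projected.append(suffix)
    return projected

# project([["a", "b", "c"], ["b", "a", "c"], ["c"]], "a")
# -> [["b", "c"], ["c"]]
```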
1504.08241 | Alexander Rass | Alexander Ra{\ss}, Manuel Schmitt, Rolf Wanka | Explanation of Stagnation at Points that are not Local Optima in
Particle Swarm Optimization by Potential Analysis | Full version of poster on Genetic and Evolutionary Computation
Conference (GECCO) 15 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Particle Swarm Optimization (PSO) is a nature-inspired meta-heuristic for
solving continuous optimization problems. In the literature, the potential of
the particles of swarm has been used to show that slightly modified PSO
guarantees convergence to local optima. Here we show that under specific
circumstances the unmodified PSO, even with swarm parameters known (from the
literature) to be good, almost surely does not yield convergence to a local
optimum is provided. This undesirable phenomenon is called stagnation. For this
purpose, the particles' potential in each dimension is analyzed mathematically.
Additionally, some reasonable assumptions on the behavior if the particles'
potential are made. Depending on the objective function and, interestingly, the
number of particles, the potential in some dimensions may decrease much faster
than in other dimensions. Therefore, these dimensions lose relevance, i.e., the
contribution of their entries to the decisions about attractor updates becomes
insignificant and, with positive probability, they never regain relevance. If
Brownian Motion is assumed to be an approximation of the time-dependent drop of
potential, practical, i.e., large values for this probability are calculated.
Finally, on chosen multidimensional polynomials of degree two, experiments are
provided showing that the required circumstances occur quite frequently.
Furthermore, experiments are provided showing that even when the very simple
sphere function is processed the described stagnation phenomenon occurs.
Consequently, unmodified PSO does not converge to any local optimum of the
chosen functions for tested parameter settings.
| [
{
"version": "v1",
"created": "Thu, 30 Apr 2015 14:28:44 GMT"
}
] | 1,430,438,400,000 | [
[
"Raß",
"Alexander",
""
],
[
"Schmitt",
"Manuel",
""
],
[
"Wanka",
"Rolf",
""
]
] |
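For reference alongside the abstract above (1504.08241), here is a minimal sketch of the unmodified PSO update whose per-dimension dynamics the paper analyzes. The inertia and acceleration coefficients are one common "good" choice from the literature, not necessarily the parameters used in the paper.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.72984, c1=1.496172, c2=1.496172):
    """One velocity/position update for a single particle, dimension by
    dimension: inertia plus random attraction toward the particle's own
    best position (pbest) and the swarm's best position (gbest)."""
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = (w * v[d]
              + c1 * r1 * (pbest[d] - x[d])
              + c2 * r2 * (gbest[d] - x[d]))
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```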
1505.00002 | Anthony Di Franco | Anthony Di Franco | FIFTH system for general-purpose connectionist computation | Submitted, COSYNE 2015 (extended abstract) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To date, work on formalizing connectionist computation in a way that is at
least Turing-complete has focused on recurrent architectures and developed
equivalences to Turing machines or similar super-Turing models, which are of
more theoretical than practical significance. We instead develop connectionist
computation within the framework of information propagation networks extended
with unbounded recursion, which is related to constraint logic programming and
is more declarative than the semantics typically used in practical programming,
but is still formally known to be Turing-complete. This approach yields
contributions to the theory and practice of both connectionist computation and
programming languages. Connectionist computations are carried out in a way that
lets them communicate with, and be understood and interrogated directly in
terms of the high-level semantics of a general-purpose programming language.
Meanwhile, difficult (unbounded-dimension, NP-hard) search problems in
programming that have previously been left to the programmer to solve in a
heuristic, domain-specific way are solved uniformly a priori in a way that
approximately achieves information-theoretic limits on performance.
| [
{
"version": "v1",
"created": "Wed, 29 Apr 2015 22:20:04 GMT"
}
] | 1,430,697,600,000 | [
[
"Di Franco",
"Anthony",
""
]
] |
1505.00162 | Joseph Y. Halpern | Joseph Y. Halpern | A Modification of the Halpern-Pearl Definition of Causality | This is an extended version of a paper that will appear in IJCAI 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The original Halpern-Pearl definition of causality [Halpern and Pearl, 2001]
was updated in the journal version of the paper [Halpern and Pearl, 2005] to
deal with some problems pointed out by Hopkins and Pearl [2003]. Here the
definition is modified yet again, in a way that (a) leads to a simpler
definition, (b) handles the problems pointed out by Hopkins and Pearl, and many
others, (c) gives reasonable answers (that agree with those of the original and
updated definition) in the standard problematic examples of causality, and (d)
has lower complexity than either the original or updated definitions.
| [
{
"version": "v1",
"created": "Fri, 1 May 2015 11:44:51 GMT"
}
] | 1,430,697,600,000 | [
[
"Halpern",
"Joseph Y.",
""
]
] |
1505.00278 | Michal \v{C}ertick\'y | Bj\"orn Persson Mattsson, Tom\'a\v{s} Vajda, Michal \v{C}ertick\'y | Automatic Observer Script for StarCraft: Brood War Bot Games (technical
report) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This short report describes an automated BWAPI-based script developed for
live streams of a StarCraft Brood War bot tournament, SSCAIT. The script
controls the in-game camera in order to follow the relevant events and improve
the viewer experience. We enumerate its novel features and provide a few
implementation notes.
| [
{
"version": "v1",
"created": "Fri, 1 May 2015 20:41:19 GMT"
}
] | 1,430,784,000,000 | [
[
"Mattsson",
"Björn Persson",
""
],
[
"Vajda",
"Tomáš",
""
],
[
"Čertický",
"Michal",
""
]
] |
1505.00284 | Benjamin Rosman | Benjamin Rosman, Majd Hawasly, Subramanian Ramamoorthy | Bayesian Policy Reuse | 32 pages, submitted to the Machine Learning Journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A long-lived autonomous agent should be able to respond online to novel
instances of tasks from a familiar domain. Acting online requires 'fast'
responses, in terms of rapid convergence, especially when the task instance has
a short duration, such as in applications involving interactions with humans.
These requirements can be problematic for many established methods for learning
to act. In domains where the agent knows that the task instance is drawn from a
family of related tasks, albeit without access to the label of any given
instance, it can choose to act through a process of policy reuse from a
library, rather than policy learning from scratch. In policy reuse, the agent
has prior knowledge of the class of tasks in the form of a library of policies
that were learnt from sample task instances during an offline training phase.
We formalise the problem of policy reuse, and present an algorithm for
efficiently responding to a novel task instance by reusing a policy from the
library of existing policies, where the choice is based on observed 'signals'
which correlate to policy performance. We achieve this by posing the problem as
a Bayesian choice problem with a corresponding notion of an optimal response,
but the computation of that response is in many cases intractable. Therefore,
to reduce the computation cost of the posterior, we follow a Bayesian
optimisation approach and define a set of policy selection functions, which
balance exploration in the policy library against exploitation of previously
tried policies, together with a model of expected performance of the policy
library on their corresponding task instances. We validate our method in
several simulated domains of interactive, short-duration episodic tasks,
showing rapid convergence in unknown task variations.
| [
{
"version": "v1",
"created": "Fri, 1 May 2015 21:13:00 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Dec 2015 15:44:51 GMT"
}
] | 1,450,137,600,000 | [
[
"Rosman",
"Benjamin",
""
],
[
"Hawasly",
"Majd",
""
],
[
"Ramamoorthy",
"Subramanian",
""
]
] |
1505.00399 | Christopher Lin | Christopher H. Lin and Andrey Kolobov and Ece Kamar and Eric Horvitz | Metareasoning for Planning Under Uncertainty | Extended version of IJCAI 2015 paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The conventional model for online planning under uncertainty assumes that an
agent can stop and plan without incurring costs for the time spent planning.
However, planning time is not free in most real-world settings. For example, an
autonomous drone is subject to nature's forces, like gravity, even while it
thinks, and must either pay a price for counteracting these forces to stay in
place, or grapple with the state change caused by acquiescing to them. Policy
optimization in these settings requires metareasoning---a process that trades
off the cost of planning and the potential policy improvement that can be
achieved. We formalize and analyze the metareasoning problem for Markov
Decision Processes (MDPs). Our work subsumes previously studied special cases
of metareasoning and shows that in the general case, metareasoning is at most
polynomially harder than solving MDPs with any given algorithm that disregards
the cost of thinking. For reasons we discuss, optimal general metareasoning
turns out to be impractical, motivating approximations. We present approximate
metareasoning procedures which rely on special properties of the BRTDP planning
algorithm and explore the effectiveness of our methods on a variety of
problems.
| [
{
"version": "v1",
"created": "Sun, 3 May 2015 07:09:08 GMT"
}
] | 1,430,784,000,000 | [
[
"Lin",
"Christopher H.",
""
],
[
"Kolobov",
"Andrey",
""
],
[
"Kamar",
"Ece",
""
],
[
"Horvitz",
"Eric",
""
]
] |
1505.01603 | Aske Plaat | Aske Plaat, Jonathan Schaeffer, Wim Pijls, Arie de Bruin | Best-First and Depth-First Minimax Search in Practice | Computer Science in the Netherlands 1995. arXiv admin note: text
overlap with arXiv:1404.1515 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most practitioners use a variant of the Alpha-Beta algorithm, a simple
depth-first procedure, for searching minimax trees. SSS*, with its best-first
search strategy, reportedly offers the potential for more efficient search.
However, the complex formulation of the algorithm and its alleged excessive
memory requirements preclude its use in practice. For two decades, the search
efficiency of "smart" best-first SSS* has cast doubt on the effectiveness of
"dumb" depth-first Alpha-Beta. This paper presents a simple framework for
calling Alpha-Beta that allows us to create a variety of algorithms, including
SSS* and DUAL*. In effect, we formulate a best-first algorithm using
depth-first search. Expressed in this framework SSS* is just a special case of
Alpha-Beta, solving all of the perceived drawbacks of the algorithm. In
practice, Alpha-Beta variants typically evaluate fewer nodes than SSS*. A new
instance of this framework, MTD(f), outperforms SSS* and NegaScout, the
Alpha-Beta variant of choice by practitioners.
| [
{
"version": "v1",
"created": "Thu, 7 May 2015 06:54:26 GMT"
}
] | 1,431,043,200,000 | [
[
"Plaat",
"Aske",
""
],
[
"Schaeffer",
"Jonathan",
""
],
[
"Pijls",
"Wim",
""
],
[
"de Bruin",
"Arie",
""
]
] |
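A minimal sketch of the MTD(f) driver named in the abstract above (1505.01603): a sequence of zero-window Alpha-Beta calls whose fail-high/fail-low outcomes narrow a bound interval around the minimax value. The toy game interface (`children`, `is_terminal`, `evaluate`) is assumed, and the transposition table that makes MTD(f) efficient in practice is omitted for brevity.

```python
INF = float("inf")

def alpha_beta(state, alpha, beta, depth, maximizing):
    """Plain fail-soft Alpha-Beta over a toy game interface."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if maximizing:
        value = -INF
        for child in state.children():
            value = max(value, alpha_beta(child, alpha, beta, depth - 1, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff
        return value
    value = INF
    for child in state.children():
        value = min(value, alpha_beta(child, alpha, beta, depth - 1, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff
    return value

def mtdf(root, first_guess, depth):
    """Converge on the minimax value using zero-window searches only."""
    g, lower, upper = first_guess, -INF, INF
    while lower < upper:
        beta = max(g, lower + 1)
        g = alpha_beta(root, beta - 1, beta, depth, True)
        if g < beta:
            upper = g   # search failed low
        else:
            lower = g   # search failed high
    return g
```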
1505.01825 | Joseph Ramsey | Joseph D. Ramsey | Effects of Nonparanormal Transform on PC and GES Search Accuracies | 10 pages, 18 tables, tech report | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Liu, et al., 2009 developed a transformation of a class of non-Gaussian
univariate distributions into Gaussian distributions. Liu and collaborators
(2012) subsequently applied the transform to search for graphical causal models
for a number of empirical data sets. To our knowledge, there has been no
published investigation by simulation of the conditions under which the
transform aids, or harms, standard graphical model search procedures. We
consider here how the transform affects the performance of two search
algorithms in particular, PC (Spirtes et al., 2000; Meek 1995) and GES (Meek
1997; Chickering 2002). We find that the transform is harmless but ineffective
for most cases but quite effective in very special cases for GES, namely, for
moderate non-Gaussianity and moderate non-linearity. For strong-linearity,
another algorithm, PC-GES (a combination of PC with GES), is equally effective.
| [
{
"version": "v1",
"created": "Thu, 7 May 2015 19:39:22 GMT"
},
{
"version": "v2",
"created": "Fri, 8 May 2015 20:20:44 GMT"
}
] | 1,431,388,800,000 | [
[
"Ramsey",
"Joseph D.",
""
]
] |
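One simple way to realize the univariate Gaussianizing transform referenced in the abstract above (1505.01825) is an empirical-CDF-plus-normal-quantile map; Liu et al.'s nonparanormal estimator additionally truncates the empirical CDF, which this sketch omits.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(x):
    """Map a univariate sample toward normality: ranks give an empirical
    CDF kept inside the open interval (0, 1), and the standard normal
    quantile function pulls it back to the real line."""
    u = rankdata(x) / (len(x) + 1)
    return norm.ppf(u)

if __name__ == "__main__":
    z = gaussianize(np.random.exponential(size=1000))
    print(z.mean(), z.std())  # close to 0 and 1, respectively
```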
1505.02070 | Mirko Stojadinovi\'c | Mirko Stojadinovi\'c, Mladen Nikoli\'c, Filip Mari\'c | Short Portfolio Training for CSP Solving | 21 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many different approaches for solving Constraint Satisfaction Problems (CSPs)
and related Constraint Optimization Problems (COPs) exist. However, there is no
single solver (nor approach) that performs well on all classes of problems and
many portfolio approaches for selecting a suitable solver based on simple
syntactic features of the input CSP instance have been developed. In this paper
we first present a simple portfolio method for CSP based on the k-nearest
neighbors method. Then, we propose a new way of using portfolio systems ---
training them briefly at exploitation time, specifically for the set of
instances to be solved, and using them on that set. A thorough evaluation has
been performed and
has shown that the approach yields good results. We evaluated several machine
learning techniques for our portfolio. Due to its simplicity and efficiency,
the selected k-nearest neighbors method is especially suited for our short
training approach and it also yields the best results among the tested methods.
We also confirm that our approach yields good results on the SAT domain.
| [
{
"version": "v1",
"created": "Fri, 8 May 2015 15:42:13 GMT"
}
] | 1,431,302,400,000 | [
[
"Stojadinović",
"Mirko",
""
],
[
"Nikolić",
"Mladen",
""
],
[
"Marić",
"Filip",
""
]
] |
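A minimal sketch of the k-nearest-neighbors solver selection at the core of the portfolio in the abstract above (1505.02070): the nearest training instances in syntactic feature space vote for the solver to run. The names and the Euclidean metric are illustrative assumptions, not the paper's exact design.

```python
import math
from collections import Counter

def knn_choose_solver(features, training, k=3):
    """features: feature vector of the instance to solve.
    training: list of (feature_vector, best_solver) pairs.
    Returns the solver that was best on most of the k nearest neighbors."""
    nearest = sorted(training, key=lambda fb: math.dist(features, fb[0]))[:k]
    return Counter(best for _, best in nearest).most_common(1)[0][0]
```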
1505.02405 | Vasco Manquinho | Miguel Neves and Ruben Martins and Mikol\'a\v{s} Janota and In\^es
Lynce and Vasco Manquinho | Exploiting Resolution-based Representations for MaxSAT Solving | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most recent MaxSAT algorithms rely on a succession of calls to a SAT solver
in order to find an optimal solution. In particular, several algorithms take
advantage of the ability of SAT solvers to identify unsatisfiable subformulas.
Usually, these MaxSAT algorithms perform better when small unsatisfiable
subformulas are found early. However, this is not the case in many problem
instances, since the whole formula is given to the SAT solver in each call. In
this paper, we propose to partition the MaxSAT formula using a resolution-based
graph representation. Partitions are then iteratively joined by using a
proximity measure extracted from the graph representation of the formula. The
algorithm ends when only one partition remains and the optimal solution is
found. Experimental results show that this new approach further enhances a
state-of-the-art MaxSAT solver to optimally solve a larger set of industrial
problem instances.
| [
{
"version": "v1",
"created": "Sun, 10 May 2015 16:38:15 GMT"
}
] | 1,431,388,800,000 | [
[
"Neves",
"Miguel",
""
],
[
"Martins",
"Ruben",
""
],
[
"Janota",
"Mikoláš",
""
],
[
"Lynce",
"Inês",
""
],
[
"Manquinho",
"Vasco",
""
]
] |
1505.02433 | Miao Fan | Miao Fan, Qiang Zhou, Andrew Abel, Thomas Fang Zheng and Ralph
Grishman | Probabilistic Belief Embedding for Knowledge Base Completion | arXiv admin note: text overlap with arXiv:1503.08155 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper contributes a novel embedding model which measures the probability
of each belief $\langle h,r,t,m\rangle$ in a large-scale knowledge repository
via simultaneously learning distributed representations for entities ($h$ and
$t$), relations ($r$), and the words in relation mentions ($m$). It facilitates
knowledge completion by means of simple vector operations to discover new
beliefs. Given an imperfect belief, we can not only infer the missing entities,
predict the unknown relations, but also tell the plausibility of the belief,
just by leveraging the learnt embeddings of the remaining evidence. To demonstrate
the scalability and the effectiveness of our model, we conduct experiments on
several large-scale repositories which contain millions of beliefs from
WordNet, Freebase and NELL, and compare it with other cutting-edge approaches
by competing on the tasks of entity inference, relation prediction and triplet
classification, assessed by their respective metrics.
Extensive experimental results show that the proposed model outperforms the
state-of-the-arts with significant improvements.
| [
{
"version": "v1",
"created": "Sun, 10 May 2015 20:22:47 GMT"
},
{
"version": "v2",
"created": "Mon, 18 May 2015 02:19:39 GMT"
},
{
"version": "v3",
"created": "Tue, 19 May 2015 14:56:16 GMT"
},
{
"version": "v4",
"created": "Fri, 22 May 2015 16:58:33 GMT"
}
] | 1,432,512,000,000 | [
[
"Fan",
"Miao",
""
],
[
"Zhou",
"Qiang",
""
],
[
"Abel",
"Andrew",
""
],
[
"Zheng",
"Thomas Fang",
""
],
[
"Grishman",
"Ralph",
""
]
] |
1505.02449 | Daniel Raggi | Daniel Raggi, Alan Bundy, Gudmund Grov, Alison Pease | Automating change of representation for proofs in discrete mathematics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Representation determines how we can reason about a specific problem.
Sometimes one representation helps us find a proof more easily than others.
Most current automated reasoning tools focus on reasoning within one
representation. There is, therefore, a need for the development of better tools
to mechanise and automate formal and logically sound changes of representation.
In this paper we look at examples of representational transformations in
discrete mathematics, and show how we have used Isabelle's Transfer tool to
automate the use of these transformations in proofs. We give a brief overview
of a general theory of transformations that we consider appropriate for
thinking about the matter, and we explain how it relates to the Transfer
package. We show our progress towards developing a general tactic that
incorporates the automatic search for representation within the proving
process.
| [
{
"version": "v1",
"created": "Sun, 10 May 2015 22:14:55 GMT"
}
] | 1,431,388,800,000 | [
[
"Raggi",
"Daniel",
""
],
[
"Bundy",
"Alan",
""
],
[
"Grov",
"Gudmund",
""
],
[
"Pease",
"Alison",
""
]
] |
1505.02487 | Caroline Even | Caroline Even, Andreas Schutt, and Pascal Van Hentenryck | A Constraint Programming Approach for Non-Preemptive Evacuation
Scheduling | Submitted to the 21st International Conference on Principles and
Practice of Constraint Programming (CP 2015). 15 pages + 1 reference page | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale controlled evacuations require emergency services to select
evacuation routes, decide departure times, and mobilize resources to issue
orders, all under strict time constraints. Existing algorithms almost always
allow for preemptive evacuation schedules, which are less desirable in
practice. This paper proposes, for the first time, a constraint-based
scheduling model that optimizes the evacuation flow rate (number of vehicles
sent at regular time intervals) and evacuation phasing of widely populated
areas, while ensuring a non-preemptive evacuation for each residential zone. Two
optimization objectives are considered: (1) to maximize the number of evacuees
reaching safety and (2) to minimize the overall duration of the evacuation.
Preliminary results on a set of real-world instances show that the approach can
produce, within a few seconds, a non-preemptive evacuation schedule which is
either optimal or at most 6% away from the optimal preemptive solution.
| [
{
"version": "v1",
"created": "Mon, 11 May 2015 05:38:24 GMT"
}
] | 1,431,388,800,000 | [
[
"Even",
"Caroline",
""
],
[
"Schutt",
"Andreas",
""
],
[
"Van Hentenryck",
"Pascal",
""
]
] |
1505.02552 | Guillaume Perez | Guillaume Perez and Jean-Charles R\'egin | Relations between MDDs and Tuples and Dynamic Modifications of MDDs
based constraints | 15 pages, 16 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the relations between Multi-valued Decision Diagrams (MDD) and
tuples (i.e. elements of the Cartesian Product of variables). First, we improve
the existing methods for transforming a set of tuples, Global Cut Seeds, or
sequences of tuples into MDDs. Then, we present some in-place algorithms for
adding and deleting tuples from an MDD. Next, we consider an MDD constraint
which is modified during the search by deleting some tuples. We give an
algorithm which adapts MDD-4R to these dynamic and persistent modifications.
Some experiments show that MDD constraints are competitive with Table
constraints.
| [
{
"version": "v1",
"created": "Mon, 11 May 2015 10:32:59 GMT"
}
] | 1,431,388,800,000 | [
[
"Perez",
"Guillaume",
""
],
[
"Régin",
"Jean-Charles",
""
]
] |
1505.02830 | Yun-Ching Liu | Yun-Ching Liu and Yoshimasa Tsuruoka | Adapting Improved Upper Confidence Bounds for Monte-Carlo Tree Search | To appear in the 14th International Conference on Advances in
Computer Games (ACG 2015) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The UCT algorithm, which combines the UCB algorithm and Monte-Carlo Tree
Search (MCTS), is currently the most widely used variant of MCTS. Recently, a
number of investigations into applying other bandit algorithms to MCTS have
produced interesting results. In this research, we will investigate the
possibility of combining the improved UCB algorithm, proposed by Auer et al.
(2010), with MCTS. However, various characteristics and properties of the
improved UCB algorithm may not be ideal for a direct application to MCTS.
Therefore, some modifications were made to the improved UCB algorithm, making
it more suitable for the task of game tree search. The Mi-UCT algorithm is the
application of the modified UCB algorithm applied to trees. The performance of
Mi-UCT is demonstrated on the games of $9\times 9$ Go and $9\times 9$ NoGo, and
has been shown to outperform the plain UCT algorithm when only a small number
of playouts are given, and to be roughly on the same level when more playouts
are available.
| [
{
"version": "v1",
"created": "Mon, 11 May 2015 22:59:31 GMT"
}
] | 1,431,475,200,000 | [
[
"Liu",
"Yun-Ching",
""
],
[
"Tsuruoka",
"Yoshimasa",
""
]
] |
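For contrast with the modified bound in the abstract above (1505.02830), the following sketches the standard UCB1 selection rule used by plain UCT; Mi-UCT replaces this bound with one derived from Auer et al.'s improved UCB. The node attributes (`visits`, `total_reward`) are assumed names.

```python
import math

def ucb1_select(children):
    """Pick the child maximizing mean reward plus an exploration bonus;
    unvisited children are tried first."""
    for c in children:
        if c.visits == 0:
            return c
    parent_visits = sum(c.visits for c in children)
    return max(children,
               key=lambda c: c.total_reward / c.visits
               + math.sqrt(2 * math.log(parent_visits) / c.visits))
```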
1505.03101 | Albert Mero\~no-Pe\~nuela | Albert Mero\~no-Pe\~nuela, Christophe Gu\'eret and Stefan Schlobach | Release Early, Release Often: Predicting Change in Versioned Knowledge
Organization Systems on the Web | 16 pages, 6 figures, ISWC 2015 conference pre-print The paper has
been withdrawn due to significant overlap with a subsequent paper submitted
to a conference for review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Semantic Web is built on top of Knowledge Organization Systems (KOS)
(vocabularies, ontologies, concept schemes) that provide a structured,
interoperable and distributed access to Linked Data on the Web. The maintenance
of these KOS over time has produced a number of KOS version chains: subsequent
unique version identifiers to unique states of a KOS. However, the release of
new KOS versions poses challenges to both KOS publishers and users. For
publishers, updating a KOS is a knowledge intensive task that requires a lot of
manual effort, often implying deep deliberation on the set of changes to
introduce. For users that link their datasets to these KOS, a new version
compromises the validity of their links, often creating ramifications. In this
paper we describe a method to automatically detect which parts of a Web KOS are
likely to change in a next version, using supervised learning on past versions
in the KOS version chain. We use a set of ontology change features to model and
predict change in arbitrary Web KOS. We apply our method on 139 varied datasets
systematically retrieved from the Semantic Web, obtaining robust results at
correctly predicting change. To illustrate the accuracy, genericity and domain
independence of the method, we study the relationship between its effectiveness
and several characterizations of the evaluated datasets, finding that
predictors like the number of versions in a chain and their release frequency
have a fundamental impact in predictability of change in Web KOS. Consequently,
we argue for adopting a release early, release often philosophy in Web KOS
development cycles.
| [
{
"version": "v1",
"created": "Tue, 12 May 2015 18:03:21 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Sep 2015 20:11:34 GMT"
}
] | 1,442,448,000,000 | [
[
"Meroño-Peñuela",
"Albert",
""
],
[
"Guéret",
"Christophe",
""
],
[
"Schlobach",
"Stefan",
""
]
] |
1505.04107 | Kaladzavi Guidedi | Guidedi Kaladzavi, Papa Fary Diallo, Kolyang, Moussa Lo | OntoSOC: Sociocultural Knowledge Ontology | 8 pages, 5 figures, 2 tables | IJWesT Vol. 6, No. 2 (2015) | 10.5121/ijwest.2015.6201 | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | This paper presents a sociocultural knowledge ontology (OntoSOC) modeling
approach. The OntoSOC modeling approach is based on Engestrom's Human Activity
Theory (HAT). That theory allowed us to identify fundamental concepts and the
relationships between them. A top-down process has been used to define the
different sub-concepts. The modeled vocabulary permits us to organise data and
to facilitate information retrieval by introducing a semantic layer into the
social web platform architecture we plan to implement. This platform can be
considered as a collective memory and a Participative and Distributed
Information System (PDIS) which will allow Cameroonian communities to share and
co-construct knowledge on permanent organized activities.
| [
{
"version": "v1",
"created": "Fri, 15 May 2015 16:17:54 GMT"
}
] | 1,431,907,200,000 | [
[
"Kaladzavi",
"Guidedi",
""
],
[
"Diallo",
"Papa Fary",
""
],
[
"Kolyang",
"",
""
],
[
"Lo",
"Moussa",
""
]
] |
1505.04265 | Viktoras Veitas Mr. | Viktoras Veitas and David Weinbaum (Weaver) | Cognitive Development of the Web | Working paper, 22 pages, 2 figures | null | null | ECCO working paper 2015-02 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sociotechnological system is a system constituted of human individuals
and their artifacts: technological artifacts, institutions, conceptual and
representational systems, worldviews, knowledge systems, culture and the whole
biosphere as an evolutionary niche. In our view the sociotechnological system as
a super-organism is shaped and determined both by the characteristics of the
agents involved and the characteristics emergent in their interactions at
multiple scales. Our approach to sociotechnological dynamics will maintain a
balance between perspectives: the individual and the collective. Accordingly,
we analyze dynamics of the Web as a sociotechnological system made of people,
computers and digital artifacts (Web pages, databases, search engines, etc.).
Making sense of the sociotechnological system while being part of it is also a
constant interplay between pragmatic and value-based approaches. The first is
focusing on the actualities of the system while the second highlights the
observer's projections. In our attempt to model sociotechnological dynamics and
envision its future, we take special care to make explicit our values as part
of the analysis. In sociotechnological systems with a high degree of
reflexivity (coupling between the perception of the system and the system's
behavior), highlighting values is of critical importance. In this essay, we
choose to see the future evolution of the web as facilitating a basic value,
that is, continuous open-ended intelligence expansion. By that we mean that we
see intelligence expansion as the determinant of the 'greater good' and 'well
being' of both of individuals and collectives at all scales. Our working
definition of intelligence here is the progressive process of sense-making of
self, other, environment and universe. Intelligence expansion, therefore, means
an increasing ability of sense-making.
| [
{
"version": "v1",
"created": "Sat, 16 May 2015 11:55:56 GMT"
}
] | 1,431,993,600,000 | [
[
"Veitas",
"Viktoras",
"",
"Weaver"
],
[
"Weinbaum",
"David",
"",
"Weaver"
]
] |
1505.04497 | Jan Leike | Mayank Daswani and Jan Leike | A Definition of Happiness for Reinforcement Learning Agents | AGI 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What is happiness for reinforcement learning agents? We seek a formal
definition satisfying a list of desiderata. Our proposed definition of
happiness is the temporal difference error, i.e. the difference between the
value of the obtained reward and observation and the agent's expectation of
this value. This definition satisfies most of our desiderata and is compatible
with empirical research on humans. We state several implications and discuss
examples.
| [
{
"version": "v1",
"created": "Mon, 18 May 2015 03:14:39 GMT"
}
] | 1,431,993,600,000 | [
[
"Daswani",
"Mayank",
""
],
[
"Leike",
"Jan",
""
]
] |
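The definition in the abstract above (1505.04497) can be written as the textbook TD(0) error; the paper's exact formulation also accounts for the observation, so this is only a hedged rendering of the core idea.

```latex
% Happiness as the temporal difference error: the value actually obtained
% (immediate reward plus discounted estimate of the successor state) minus
% the value the agent expected beforehand.
\[
  \text{happiness}_t \;=\;
  \underbrace{r_t + \gamma\,\hat{V}(s_{t+1})}_{\text{obtained value}}
  \;-\;
  \underbrace{\hat{V}(s_t)}_{\text{expected value}}
\]
```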
1505.04677 | Vilem Vychodil | Vilem Vychodil | On sets of graded attribute implications with witnessed non-redundancy | null | Information Sciences 329 (2016), 434-446 | 10.1016/j.ins.2015.09.044 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study properties of particular non-redundant sets of if-then rules
describing dependencies between graded attributes. We introduce notions of
saturation and witnessed non-redundancy of sets of graded attribute
implications and show that bases of graded attribute implications given by
systems of pseudo-intents correspond to non-redundant sets of graded attribute
implications with saturated consequents where the non-redundancy is witnessed
by antecedents of the contained graded attribute implications. We introduce an
algorithm which transforms any complete set of graded attribute implications
parameterized by globalization into a base given by pseudo-intents.
Experimental evaluation is provided to compare the method of obtaining bases
for general parameterizations by hedges with earlier graph-based approaches.
| [
{
"version": "v1",
"created": "Mon, 18 May 2015 15:15:53 GMT"
}
] | 1,451,347,200,000 | [
[
"Vychodil",
"Vilem",
""
]
] |
1505.04813 | Hao Wu | Hao Wu | What is Learning? A primary discussion about information and
Representation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | Nowadays, represented by Deep Learning techniques, the field of machine
learning is experiencing unprecedented prosperity and its influence is
demonstrated in academia, industry and civil society. "Intelligent" has become
a label that cannot be neglected for most applications; celebrities and
scientists have also warned that the development of full artificial
intelligence may spell the end of the human race. It seems that the answer to
building a computer system that could automatically improve with experience is
just around the corner. Yet among AI and machine learning researchers it is a
consensus that we are nowhere near the core technique that could bring the
Terminator, Number 5 or R2D2 into real life, and there is not even a formal
definition of what intelligence is, or of one of its basic properties:
learning. Therefore, even though researchers know these concerns are currently
unnecessary, there is no generalized explanation of why they are unnecessary,
nor of what properties would make them necessary. In this paper, starting from
an analysis of the relation between information and its representation, a
necessary condition for a model to be a learning model is proposed. This
condition and related future work could be used to verify whether a system is
able to learn or not, and to enrich our understanding of learning: one
important property of intelligence.
| [
{
"version": "v1",
"created": "Tue, 19 May 2015 01:17:47 GMT"
}
] | 1,432,080,000,000 | [
[
"Wu",
"Hao",
""
]
] |
1505.05063 | Conrado Miranda | Conrado Silva Miranda, Fernando Jos\'e Von Zuben | Necessary and Sufficient Conditions for Surrogate Functions of Pareto
Frontiers and Their Synthesis Using Gaussian Processes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces the necessary and sufficient conditions that surrogate
functions must satisfy to properly define frontiers of non-dominated solutions
in multi-objective optimization problems. These new conditions work directly on
the objective space, thus being agnostic about how the solutions are evaluated.
Therefore, real objectives or user-designed objectives' surrogates are allowed,
opening the possibility of linking independent objective surrogates. To
illustrate the practical consequences of adopting the proposed conditions, we
use Gaussian processes as surrogates endowed with monotonicity soft constraints
and with an adjustable degree of flexibility, and compare them to regular
Gaussian processes and to a frontier surrogate method in the literature that is
the closest to the method proposed in this paper. Results show that the
necessary and sufficient conditions proposed here are finely managed by the
constrained Gaussian process, leading to high-quality surrogates capable of
suitably synthesizing an approximation to the Pareto frontier in challenging
instances of multi-objective optimization, while an existing approach that
does not take the proposed theory into consideration defines surrogates which
greatly violate the conditions for describing a valid frontier.
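As a minimal illustration of the non-domination underlying such frontiers, the following Python sketch implements the standard Pareto dominance test for minimization; the function name and tuple representation are assumptions for illustration, not part of the paper:

def dominates(a, b):
    # a dominates b (minimization): a is no worse in every objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Example: (1.0, 2.0) dominates (1.0, 3.0) but not (0.5, 3.0).
print(dominates((1.0, 2.0), (1.0, 3.0)))  # True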
| [
{
"version": "v1",
"created": "Tue, 19 May 2015 16:09:23 GMT"
},
{
"version": "v2",
"created": "Wed, 20 May 2015 22:45:29 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Dec 2015 06:01:11 GMT"
}
] | 1,450,656,000,000 | [
[
"Miranda",
"Conrado Silva",
""
],
[
"Von Zuben",
"Fernando José",
""
]
] |
1505.05312 | Kieran Greer Dr | Kieran Greer | A New Oscillating-Error Technique for Classifiers | null | Cogent Engineering, 4:1, 2017 | 10.1080/23311916.2017.1293480 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a new method for reducing the error in a classifier. It
uses an error correction update that includes the very simple rule of either
adding or subtracting the error adjustment, based on whether the variable value
is currently larger or smaller than the desired value. While a traditional
neuron would sum the inputs together and then apply a function to the total,
this new method can change the function decision for each input value. This
gives added flexibility to the convergence procedure, where through a series of
transpositions, variables that are far away can continue towards the desired
value, whereas variables that are originally much closer can oscillate from one
side to the other. Tests show that the method can successfully classify some
benchmark datasets. It can also work in a batch mode, with reduced training
times and can be used as part of a neural network architecture. Some
comparisons with an earlier wave shape paper are also made.
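A minimal Python sketch of the add-or-subtract update rule described above; the function name, shared adjustment size and iteration scheme are illustrative assumptions rather than the paper's exact formulation:

def oscillating_update(values, targets, adjustment):
    # Add or subtract the same error adjustment, depending on whether
    # each variable is currently smaller or larger than its desired value.
    return [v + adjustment if v < t else v - adjustment
            for v, t in zip(values, targets)]

vals = [0.1, 0.48]
for _ in range(5):
    # The far-away variable keeps moving towards 0.5, while the close
    # one oscillates from one side of it to the other.
    vals = oscillating_update(vals, [0.5, 0.5], 0.05)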
| [
{
"version": "v1",
"created": "Wed, 20 May 2015 10:43:21 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jul 2015 18:50:41 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Oct 2015 15:54:16 GMT"
},
{
"version": "v4",
"created": "Thu, 7 Apr 2016 16:51:57 GMT"
},
{
"version": "v5",
"created": "Mon, 31 Oct 2016 14:42:06 GMT"
},
{
"version": "v6",
"created": "Sat, 4 Feb 2017 19:34:18 GMT"
},
{
"version": "v7",
"created": "Tue, 10 Oct 2017 07:47:39 GMT"
},
{
"version": "v8",
"created": "Tue, 21 Nov 2017 09:04:59 GMT"
}
] | 1,519,948,800,000 | [
[
"Greer",
"Kieran",
""
]
] |
1505.05364 | AlexanderArtikis | Alexander Artikis and Marek Sergot and Georgios Paliouras | Reactive Reasoning with the Event Calculus | International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 9-15, technical report, ISSN 1430-3701, Leipzig University, 2014.
http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562. 2014,1 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Systems for symbolic event recognition accept as input a stream of
time-stamped events from sensors and other computational devices, and seek to
identify high-level composite events, collections of events that satisfy some
pattern. RTEC is an Event Calculus dialect with novel implementation and
'windowing' techniques that allow for efficient event recognition, scalable to
large data streams. RTEC can deal with applications where event data arrive
with a (variable) delay from, and are revised by, the underlying sources. RTEC
can update already recognised events and recognise new events when data arrive
with a delay or following data revision. Our evaluation shows that RTEC can
support real-time event recognition and is capable of meeting the performance
requirements identified in a recent survey of event processing use cases.
| [
{
"version": "v1",
"created": "Wed, 20 May 2015 13:26:36 GMT"
}
] | 1,432,166,400,000 | [
[
"Artikis",
"Alexander",
""
],
[
"Sergot",
"Marek",
""
],
[
"Paliouras",
"Georgios",
""
]
] |
1505.05365 | Harald Beck | Harald Beck and Minh Dao-Tran and Thomas Eiter and Michael Fink | Towards Ideal Semantics for Analyzing Stream Reasoning | International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 17-22, technical report, ISSN 1430-3701, Leipzig University, 2014.
http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562 2014,1 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rise of smart applications has drawn interest to logical reasoning over
data streams. Recently, different query languages and stream
processing/reasoning engines were proposed in different communities. However,
due to a lack of theoretical foundations, the expressivity and semantics of
these diverse approaches are given only informally. Towards clear
specifications and means for analytic study, a formal framework is needed to
define their semantics in precise terms. To this end, we present a first step
towards an ideal semantics that allows for exact descriptions and comparisons
of stream reasoning systems.
| [
{
"version": "v1",
"created": "Wed, 20 May 2015 13:27:23 GMT"
}
] | 1,432,166,400,000 | [
[
"Beck",
"Harald",
""
],
[
"Dao-Tran",
"Minh",
""
],
[
"Eiter",
"Thomas",
""
],
[
"Fink",
"Michael",
""
]
] |
1505.05366 | Joerg Puehrer | Gerhard Brewka and Stefan Ellmauthaler and J\"org P\"uhrer | Multi-Context Systems for Reactive Reasoning in Dynamic Environments | International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 23-29, technical report, ISSN 1430-3701, Leipzig University, 2014.
http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show in this paper how managed multi-context systems (mMCSs) can be turned
into a reactive formalism suitable for continuous reasoning in dynamic
environments. We extend mMCSs with (abstract) sensors and define the notion of
a run of the extended systems. We then show how typical problems arising in
online reasoning can be addressed: handling potentially inconsistent sensor
input, modeling intelligent forms of forgetting, selective integration of
knowledge, and controlling the reasoning effort spent by contexts, like setting
contexts to an idle mode. We also investigate the complexity of some important
related decision problems and discuss different design choices which are given
to the knowledge engineer.
| [
{
"version": "v1",
"created": "Wed, 20 May 2015 13:28:11 GMT"
}
] | 1,432,166,400,000 | [
[
"Brewka",
"Gerhard",
""
],
[
"Ellmauthaler",
"Stefan",
""
],
[
"Pührer",
"Jörg",
""
]
] |
1505.05367 | Stefan Ellmauthaler | Stefan Ellmauthaler and J\"org P\"uhrer | Asynchronous Multi-Context Systems | International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 31-37, technical report, ISSN 1430-3701, Leipzig University, 2014.
http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present asynchronous multi-context systems (aMCSs), which
provide a framework for loosely coupling different knowledge representation
formalisms that allows for online reasoning in a dynamic environment. Systems
of this kind may interact with the outside world via input and output streams
and may therefore react to a continuous flow of external information. In
contrast to recent proposals, contexts in an aMCS communicate with each other
in an asynchronous way which fits the needs of many application domains and is
beneficial for scalability. The federal semantics of aMCSs renders our
framework an integration approach rather than a knowledge representation
formalism itself. We illustrate the introduced concepts by means of an example
scenario dealing with rescue services. In addition, we compare aMCSs to
reactive multi-context systems and describe how to simulate the latter with our
novel approach.
| [
{
"version": "v1",
"created": "Wed, 20 May 2015 13:29:45 GMT"
}
] | 1,432,166,400,000 | [
[
"Ellmauthaler",
"Stefan",
""
],
[
"Pührer",
"Jörg",
""
]
] |
1505.05368 | Matthias Knorr | Ricardo Gon\c{c}alves and Matthias Knorr and Jo\~ao Leite | On Minimal Change in Evolving Multi-Context Systems (Preliminary Report) | International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 47-53, technical report, ISSN 1430-3701, Leipzig University, 2014.
http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Managed Multi-Context Systems (mMCSs) provide a general framework for
integrating knowledge represented in heterogeneous KR formalisms. However,
mMCSs are essentially static as they were not designed to run in a dynamic
scenario. Some recent approaches, among them evolving Multi-Context Systems
(eMCSs), extend mMCSs by allowing not only the ability to integrate knowledge
represented in heterogeneous KR formalisms, but at the same time to both react
to, and reason in the presence of commonly temporary dynamic observations, and
evolve by incorporating new knowledge. Minimal change is a central notion in
dynamic scenarios, especially in those that admit several
possible alternative evolutions. Since eMCSs combine heterogeneous KR
formalisms, each of which may require different notions of minimal change, the
study of minimal change in eMCSs is an interesting and highly non-trivial
problem. In this paper, we study the notion of minimal change in eMCSs, and
discuss some alternative minimal change criteria.
| [
{
"version": "v1",
"created": "Wed, 20 May 2015 13:30:19 GMT"
}
] | 1,432,166,400,000 | [
[
"Gonçalves",
"Ricardo",
""
],
[
"Knorr",
"Matthias",
""
],
[
"Leite",
"João",
""
]
] |
1505.05373 | J\"org P\"uhrer | J\"org P\"uhrer | Towards a Simulation-Based Programming Paradigm for AI applications | International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 55-61, technical report, ISSN 1430-3701, Leipzig University, 2014 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present initial ideas for a programming paradigm based on simulation that
is targeted towards applications of artificial intelligence (AI). The approach
aims at integrating techniques from different areas of AI and is based on the
idea that simulated entities may freely exchange data and behavioural patterns.
We define basic notions of a simulation-based programming paradigm and show how
it can be used for implementing AI applications.
| [
{
"version": "v1",
"created": "Wed, 20 May 2015 13:34:34 GMT"
}
] | 1,432,166,400,000 | [
[
"Pührer",
"Jörg",
""
]
] |
1505.05375 | Matthias Thimm | Matthias Thimm | Towards Large-scale Inconsistency Measurement | International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 63-70, technical report, ISSN 1430-3701, Leipzig University, 2014.
http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the problem of inconsistency measurement on large knowledge
bases by considering stream-based inconsistency measurement, i.e., we
investigate inconsistency measures that cannot consider a knowledge base as a
whole but process it within a stream. For that, we present, first, a novel
inconsistency measure that is apt to be applied to the streaming case and,
second, stream-based approximations for the new and some existing inconsistency
measures. We conduct an extensive empirical analysis on the behavior of these
inconsistency measures on large knowledge bases, in terms of runtime, accuracy,
and scalability. We conclude that for two of these measures, the approximation
of the new inconsistency measure and an approximation of the contension
inconsistency measure, large-scale inconsistency measurement is feasible.
| [
{
"version": "v1",
"created": "Wed, 20 May 2015 13:35:09 GMT"
}
] | 1,432,166,400,000 | [
[
"Thimm",
"Matthias",
""
]
] |
1505.05502 | Ricardo Gon\c{c}alves | Ricardo Gon\c{c}alves and Matthias Knorr and Jo\~ao Leite | Towards Efficient Evolving Multi-Context Systems (Preliminary Report) | International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 39-45, technical report, ISSN 1430-3701, Leipzig University, 2014.
http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562 . arXiv admin note:
substantial text overlap with arXiv:1505.05368 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Managed Multi-Context Systems (mMCSs) provide a general framework for
integrating knowledge represented in heterogeneous KR formalisms. Recently,
evolving Multi-Context Systems (eMCSs) have been introduced as an extension of
mMCSs that add the ability to both react to, and reason in the presence of
commonly temporary dynamic observations, and evolve by incorporating new
knowledge. However, the general complexity of such an expressive formalism may
simply be too high in cases where huge amounts of information have to be
processed within a limited amount of time, or even instantaneously. In
this paper, we investigate under which conditions eMCSs may scale in such
situations and we show that such polynomial eMCSs can be applied in a practical
use case.
| [
{
"version": "v1",
"created": "Wed, 20 May 2015 13:33:52 GMT"
}
] | 1,432,252,800,000 | [
[
"Gonçalves",
"Ricardo",
""
],
[
"Knorr",
"Matthias",
""
],
[
"Leite",
"João",
""
]
] |
1505.06366 | Viktoras Veitas Mr. | David Weinbaum (Weaver) and Viktoras Veitas | Open Ended Intelligence: The individuation of Intelligent Agents | Preprint; 35 pages, 2 figures; Keywords: intelligence, cognition,
individuation, assemblage, self-organization, sense-making, coordination,
enaction; en-US proofreading | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial General Intelligence is a field of research aiming to distill the
principles of intelligence that operate independently of a specific problem
domain or a predefined context and utilize these principles in order to
synthesize systems capable of performing any intellectual task a human being is
capable of and eventually go beyond that. While "narrow" artificial
intelligence which focuses on solving specific problems such as speech
recognition, text comprehension, visual pattern recognition, robotic motion,
etc. has shown quite a few impressive breakthroughs lately, understanding
general intelligence remains elusive. In the paper we offer a novel theoretical
approach to understanding general intelligence. We start with a brief
introduction of the current conceptual approach. Our critique exposes a number
of serious limitations that are traced back to the ontological roots of the
concept of intelligence. We then propose a paradigm shift from intelligence
perceived as a competence of individual agents defined in relation to an a
priori given problem domain or a goal, to intelligence perceived as a formative
process of self-organization by which intelligent agents are individuated. We
call this process open-ended intelligence. Open-ended intelligence is developed
as an abstraction of the process of cognitive development so its application
can be extended to general agents and systems. We introduce and discuss three
facets of the idea: the philosophical concept of individuation, sense-making
and the individuation of general cognitive agents. We further show how
open-ended intelligence can be framed in terms of a distributed,
self-organizing network of interacting elements and how such process is
scalable. The framework highlights an important relation between coordination
and intelligence and a new understanding of values. We conclude with a number
of questions for future research.
| [
{
"version": "v1",
"created": "Sat, 23 May 2015 19:32:54 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Jun 2015 14:57:23 GMT"
}
] | 1,434,326,400,000 | [
[
"Weinbaum",
"David",
"",
"Weaver"
],
[
"Veitas",
"Viktoras",
""
]
] |
1505.06573 | Andrzej Grzybowski | Andrzej Z. Grzybowski | New results on inconsistency indices and their relationship with the
quality of priority vector estimation | 26 pages, 2 figures, 19 tables | Expert Systems With Applications 43 (2016) 197- 212 | 10.1016/j.eswa.2015.08.049 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The article is devoted to the problem of inconsistency in the pairwise
comparisons based prioritization methodology. The issue of "inconsistency" in
this context has gained much attention in recent years. The literature provides
us with a number of different "inconsistency" indices suggested for measuring
the inconsistency of the pairwise comparison matrix (PCM). The latter is
understood as a deviation of the PCM from the "consistent case" - a notion that
is formally well-defined in this theory. However the usage of the indices is
justified only by some heuristics. It is still unclear what they really
"measure". What is even more important and still not known is the relationship
between their values and the "consistency" of the decision maker's judgments on
one hand, and the prioritization results on the other. We provide examples
showing that it is necessary to distinguish between these three following
tasks: the "measuring" of the "PCM inconsistency" and the PCM-based "measuring"
of the consistency of decision maker's judgments and, finally, the "measuring"
of the usefulness of the PCM as a source of information for estimation of the
priority vector (PV). Next we focus on the third task, which seems to be the
most important one in Multi-Criteria Decision Making. With the help of Monte
Carlo experiments, we study the performance of various inconsistency indices as
indicators of the final PV estimation quality. The presented results allow a
deeper understanding of the information contained in these indices and help in
choosing a proper one in a given situation. They also enable us to develop a
new inconsistency characteristic and, based on it, to propose the PCM
acceptance approach that is supported by the classical statistical methodology.
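For concreteness, one widely used index of this family is Saaty's consistency index CI = (lambda_max - n)/(n - 1); a Python sketch with numpy follows, where the PCM encoding is an assumption for illustration and the paper's experiments cover a broader set of indices:

import numpy as np

def saaty_ci(pcm):
    # pcm: n x n positive reciprocal pairwise comparison matrix.
    n = pcm.shape[0]
    lam_max = max(np.linalg.eigvals(pcm).real)
    return (lam_max - n) / (n - 1)

pcm = np.array([[1.0, 2.0, 4.0],
                [0.5, 1.0, 2.0],
                [0.25, 0.5, 1.0]])
print(saaty_ci(pcm))  # ~0 for this perfectly consistent matrix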
| [
{
"version": "v1",
"created": "Mon, 25 May 2015 09:20:45 GMT"
},
{
"version": "v2",
"created": "Tue, 26 May 2015 09:42:40 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Sep 2015 15:24:51 GMT"
}
] | 1,445,472,000,000 | [
[
"Grzybowski",
"Andrzej Z.",
""
]
] |
1505.06850 | Joseph Corneli | Joseph Corneli and Anna Jordanous | Implementing feedback in creative systems: A workshop approach | 8 pp., submitted to IJCAI 2015 Workshop 42, "AI and Feedback" | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | One particular challenge in AI is the computational modelling and simulation
of creativity. Feedback and learning from experience are key aspects of the
creative process. Here we investigate how we could implement feedback in
creative systems using a social model. From the field of creative writing we
borrow the concept of a Writers Workshop as a model for learning through
feedback. The Writers Workshop encourages examination, discussion and debates
of a piece of creative work using a prescribed format of activities. We propose
a computational model of the Writers Workshop as a roadmap for incorporation of
feedback in artificial creativity systems. We argue that the Writers Workshop
setting describes the anatomy of the creative process. We support our claim
with a case study that describes how to implement the Writers Workshop model in
a computational creativity system. We present this work using patterns other
people can follow to implement similar designs in their own systems. We
conclude by discussing the broader relevance of this model to other aspects of
AI.
| [
{
"version": "v1",
"created": "Tue, 26 May 2015 08:38:57 GMT"
}
] | 1,432,684,800,000 | [
[
"Corneli",
"Joseph",
""
],
[
"Jordanous",
"Anna",
""
]
] |
1505.07263 | Alessandro Provetti | Luca Padovani and Alessandro Provetti | Qsmodels: ASP Planning in Interactive Gaming Environment | Proceedings of Logics in Artificial Intelligence, 9th European
Conference, {JELIA} 2004, pp. 689-692. Lisbon, Portugal, September 27-30,
2004 | null | 10.1007/978-3-540-30227-8_58 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Qsmodels is a novel application of Answer Set Programming to interactive
gaming environments. We describe a software architecture by which the behavior
of a bot acting inside the Quake 3 Arena can be controlled by a planner. The
planner is written as an Answer Set Program and is interpreted by the Smodels
solver.
| [
{
"version": "v1",
"created": "Wed, 27 May 2015 10:58:03 GMT"
}
] | 1,432,771,200,000 | [
[
"Padovani",
"Luca",
""
],
[
"Provetti",
"Alessandro",
""
]
] |
1505.07751 | John Sudano Ph D | John J. Sudano | Pignistic Probability Transforms for Mixes of Low- and High-Probability
Events | 7 pages, International Society of Information Fusion Conference
Proceedings Fusion 2001 at Montreal, Quebec, Canada | Fourth International Conference on Information Fusion, August
2001, Montreal. Pages TPUB3 23-27 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In some real world information fusion situations, time critical decisions
must be made with an incomplete information set. Belief function theories
(e.g., Dempster-Shafer theory of evidence, Transferable Belief Model) have been
shown to provide a reasonable methodology for processing or fusing the
quantitative clues or information measurements that form the incomplete
information set. For decision making, the pignistic (from the Latin pignus, a
bet) probability transform has been shown to be a good method of using Beliefs
or basic belief assignments (BBAs) to make decisions. For many systems, one
need only address the most-probable elements in the set. For some critical
systems, one must evaluate the risk of wrong decisions and establish safe
probability thresholds for decision making. This adds a greater complexity to
decision making, since one must address all elements in the set that are above
the risk decision threshold. The problem is greatly simplified if most of the
probabilities fall below this threshold. Finding a probability transform that
properly represents mixes of low- and high-probability events is essential.
This article introduces four new pignistic probability transforms with an
implementation that uses the latest values of Beliefs, Plausibilities, or BBAs
to improve the pignistic probability estimates. Some of them assign smaller
values of probabilities for smaller values of Beliefs or BBAs than the Smets
pignistic transform. They also assign higher probability values for larger
values of Beliefs or BBAs than the Smets pignistic transform. These probability
transforms will assign a value of probability that converges faster to the
values below the risk threshold. A probability information content (PIC)
variable is also introduced that assigns an information content value to any
set of probabilities. Four operators are defined to help simplify the
derivations.
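For reference, a minimal Python sketch of the classical Smets pignistic transform that the four new transforms refine, assuming a BBA over frozensets with no mass on the empty set; the frame and mass values are illustrative:

from itertools import chain

def smets_pignistic(bba):
    # Split each focal element's mass equally among its singletons.
    betp = {x: 0.0 for x in chain.from_iterable(bba)}
    for focal, mass in bba.items():
        for x in focal:
            betp[x] += mass / len(focal)
    return betp

m = {frozenset({'a'}): 0.5,
     frozenset({'a', 'b'}): 0.3,
     frozenset({'a', 'b', 'c'}): 0.2}
print(smets_pignistic(m))  # approximately {'a': 0.72, 'b': 0.22, 'c': 0.07}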
| [
{
"version": "v1",
"created": "Wed, 27 May 2015 12:05:27 GMT"
}
] | 1,433,116,800,000 | [
[
"Sudano",
"John J.",
""
]
] |
1506.00091 | Leon Abdillah | Tri Murti, Leon Andretti Abdillah, Muhammad Sobri | Decision support system for loan eligibility using the Tsukamoto fuzzy
method | 5 pages, in Indonesian, in Seminar Nasional Inovasi dan Tren 2015
(SNIT2015), Bekasi, 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision support systems (DSS) can be used to help settle issues or make
decisions that are semi-structured or structured. The method used is the
Tsukamoto fuzzy method. PT Triprima Finance is a company engaged in lending
services with collateral in the form of a motor vehicle or car owner's book
(registration document). PT Triprima Finance must assess loans to its
customers with the consent of the head manager. Such approval takes a long
time because it has to pass through many stages of the reporting procedure.
Decision-making at PT Triprima Finance is carried out through a manual
analysis process. To help overcome these problems, a method is needed that
improves the accuracy and speed of decisions on the feasibility of lending.
To this end, a new system is developed: a decision support system based on
the Tsukamoto fuzzy method. It is expected to make it easier for kaposko to
determine the decisions to be taken.
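A minimal sketch of the Tsukamoto defuzzification step underlying such a system; the rules, firing strengths and score scale below are illustrative assumptions, not the system built in the paper:

def tsukamoto_defuzzify(rules):
    # rules: list of (alpha_i, z_i) pairs, where alpha_i is a rule's
    # firing strength and z_i the crisp value obtained by inverting its
    # monotonic output membership function at alpha_i.
    return sum(a * z for a, z in rules) / sum(a for a, _ in rules)

# Two hypothetical rules scoring a loan application on a 0-100 scale.
score = tsukamoto_defuzzify([(0.7, 80.0), (0.3, 40.0)])  # -> 68.0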
| [
{
"version": "v1",
"created": "Sat, 30 May 2015 08:06:20 GMT"
}
] | 1,433,203,200,000 | [
[
"Murti",
"Tri",
""
],
[
"Abdillah",
"Leon Andretti",
""
],
[
"Sobri",
"Muhammad",
""
]
] |
1506.00337 | Zhiguo Long | Zhiguo Long, Sanjiang Li | On Distributive Subalgebras of Qualitative Spatial and Temporal Calculi | Adding proof of Theorem 2 to appendix | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Qualitative calculi play a central role in representing and reasoning about
qualitative spatial and temporal knowledge. This paper studies distributive
subalgebras of qualitative calculi, which are subalgebras in which (weak)
composition distributes over nonempty intersections. It has been proven for
RCC5 and RCC8 that a path-consistent constraint network over a distributive
subalgebra is always minimal and globally consistent (in the sense of strong
$n$-consistency) in a qualitative sense. The well-known subclass of convex
interval relations provides one such example of a distributive subalgebra.
This paper first gives a characterisation of distributive subalgebras, which
states that the intersection of a set of $n\geq 3$ relations in the subalgebra
is nonempty if and only if the intersection of every two of these relations is
nonempty. We further compute and generate all maximal distributive subalgebras
for Point Algebra, Interval Algebra, RCC5 and RCC8, Cardinal Relation Algebra,
and Rectangle Algebra. Lastly, we establish two nice properties which will play
an important role in efficient reasoning with constraint networks involving a
large number of variables.
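A minimal Python sketch of the Helly-style characterisation mentioned above, modelling relations as sets of base relations; for families drawn from a distributive subalgebra the two tests are expected to agree, while for arbitrary families they need not:

from itertools import combinations

def pairwise_nonempty(relations):
    # Every two relations have a nonempty intersection.
    return all(r & s for r, s in combinations(relations, 2))

def family_nonempty(relations):
    # The whole family has a nonempty intersection.
    inter = set(relations[0])
    for r in relations[1:]:
        inter &= r
    return bool(inter)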
| [
{
"version": "v1",
"created": "Mon, 1 Jun 2015 03:24:18 GMT"
}
] | 1,433,203,200,000 | [
[
"Long",
"Zhiguo",
""
],
[
"Li",
"Sanjiang",
""
]
] |
1506.00529 | Marco Zaffalon | Marco Zaffalon and Enrique Miranda | Desirability and the birth of incomplete preferences | null | Journal of Artificial Intelligence Research 60, pp. 1057-1126,
2017 | 10.1613/jair.5230 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We establish an equivalence between two seemingly different theories: one is
the traditional axiomatisation of incomplete preferences on horse lotteries
based on the mixture independence axiom; the other is the theory of desirable
gambles developed in the context of imprecise probability. The equivalence
allows us to revisit incomplete preferences from the viewpoint of desirability
and through the derived notion of coherent lower previsions. On this basis, we
obtain new results and insights: in particular, we show that the theory of
incomplete preferences can be developed assuming only the existence of a worst
act---no best act is needed---, and that a weakened Archimedean axiom suffices
too; this axiom allows us also to address some controversy about the regularity
assumption (that probabilities should be positive---they need not), which
enables us also to deal with uncountable possibility spaces; we show that it is
always possible to extend in a minimal way a preference relation to one with a
worst act, and yet the resulting relation is never Archimedean, except in a
trivial case; we show that the traditional notion of state independence
coincides with the notion called strong independence in imprecise
probability---this leads us to give a much weaker definition of state
independence than the traditional one; we rework and unify the notions of
complete preferences, beliefs, and values; we argue that Archimedeanity does not
capture all the problems that can be modelled with sets of expected utilities
and we provide a new notion that does precisely that. Perhaps most importantly,
we argue throughout that desirability is a powerful and natural setting to
model, and work with, incomplete preferences, even in case of non-Archimedean
problems. This leads us to suggest that desirability, rather than preference,
should be the primitive notion at the basis of decision-theoretic
axiomatisations.
| [
{
"version": "v1",
"created": "Mon, 1 Jun 2015 15:22:34 GMT"
}
] | 1,514,937,600,000 | [
[
"Zaffalon",
"Marco",
""
],
[
"Miranda",
"Enrique",
""
]
] |
1506.00858 | Kewei Tu | Kewei Tu | Stochastic And-Or Grammars: A Unified Framework and Logic Perspective | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic And-Or grammars (AOG) extend traditional stochastic grammars of
language to model other types of data such as images and events. In this paper
we propose a representation framework of stochastic AOGs that is agnostic to
the type of the data being modeled and thus unifies various domain-specific
AOGs. Many existing grammar formalisms and probabilistic models in natural
language processing, computer vision, and machine learning can be seen as
special cases of this framework. We also propose a domain-independent inference
algorithm of stochastic context-free AOGs and show its tractability under a
reasonable assumption. Furthermore, we provide two interpretations of
stochastic context-free AOGs as a subset of probabilistic logic, which connects
stochastic AOGs to the field of statistical relational learning and clarifies
their relation with a few existing statistical relational models.
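A minimal Python sketch of the And-Or structure being unified, with Or-nodes carrying branch probabilities and And-nodes composing their children; the class names and sampling interface are illustrative assumptions:

import random

class Terminal:
    def __init__(self, symbol):
        self.symbol = symbol
    def sample(self):
        return [self.symbol]

class AndNode:
    def __init__(self, children):
        self.children = children
    def sample(self):
        # An And-node composes all of its children.
        return [s for c in self.children for s in c.sample()]

class OrNode:
    def __init__(self, branches):
        self.branches = branches  # list of (probability, child) pairs
    def sample(self):
        # An Or-node stochastically picks one alternative.
        child = random.choices([c for _, c in self.branches],
                               weights=[p for p, _ in self.branches])[0]
        return child.sample()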
| [
{
"version": "v1",
"created": "Tue, 2 Jun 2015 12:30:35 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Dec 2015 09:16:27 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Apr 2016 22:52:39 GMT"
}
] | 1,460,505,600,000 | [
[
"Tu",
"Kewei",
""
]
] |
1506.00893 | Joana C\^orte-Real | Joana C\^orte-Real and Theofrastos Mantadelis and In\^es Dutra and
Ricardo Rocha | SkILL - a Stochastic Inductive Logic Learner | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic Inductive Logic Programming (PILP) is a relatively unexplored
area of Statistical Relational Learning which extends classic Inductive Logic
Programming (ILP). This work introduces SkILL, a Stochastic Inductive Logic
Learner, which takes probabilistic annotated data and produces First Order
Logic theories. Data in several domains such as medicine and bioinformatics
have an inherent degree of uncertainty, that can be used to produce models
closer to reality. SkILL can not only use this type of probabilistic data to
extract non-trivial knowledge from databases, but it also addresses
efficiency issues by introducing a novel, efficient and effective search
strategy to guide the search in PILP environments. The capabilities of SkILL
are demonstrated in three different datasets: (i) a synthetic toy example
used to validate the system, (ii) a probabilistic adaptation of a well-known
biological metabolism application, and (iii) a real world medical dataset in
the breast cancer domain. Results show that SkILL can perform as well as a
deterministic ILP learner, while also being able to incorporate probabilistic
knowledge that would otherwise not be considered.
| [
{
"version": "v1",
"created": "Tue, 2 Jun 2015 14:10:02 GMT"
}
] | 1,433,289,600,000 | [
[
"Côrte-Real",
"Joana",
""
],
[
"Mantadelis",
"Theofrastos",
""
],
[
"Dutra",
"Inês",
""
],
[
"Rocha",
"Ricardo",
""
]
] |
1506.01056 | Peng Lin | Peng Lin | Performing Bayesian Risk Aggregation using Discrete Approximation
Algorithms with Graph Factorization | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Risk aggregation is a popular method used to estimate the sum of a collection
of financial assets or events, where each asset or event is modelled as a
random variable. Applications, in the financial services industry, include
insurance, operational risk, stress testing, and sensitivity analysis, but the
problem is widely encountered in many other application domains. This thesis
contributes two algorithms to perform Bayesian risk aggregation when models
exhibit hybrid dependency and high-dimensional inter-dependency. The first
algorithm operates on a subset of the general problem, with an emphasis on
convolution problems, in the presence of continuous and discrete variables (so
called hybrid models), and the second algorithm offers a universal method for
general purpose inference over much wider classes of Bayesian Network models.
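A minimal sketch of the discrete convolution at the heart of such aggregation, assuming independent losses discretised on a common integer grid; the representation as numpy probability vectors is an assumption for illustration:

import numpy as np

def aggregate(loss_pmfs):
    # Distribution of the sum of independent discrete losses:
    # iterated convolution of their probability mass functions.
    total = np.array([1.0])
    for pmf in loss_pmfs:
        total = np.convolve(total, pmf)
    return total

# Two toy two-point losses on grid {0, 1}.
print(aggregate([np.array([0.5, 0.5]), np.array([0.9, 0.1])]))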
| [
{
"version": "v1",
"created": "Tue, 2 Jun 2015 20:53:26 GMT"
}
] | 1,433,376,000,000 | [
[
"Lin",
"Peng",
""
]
] |
1506.01245 | Xinhua Zhu | Xinhua Zhu, Fei Li, Hongchao Chen, Qi Peng | A density compensation-based path computing model for measuring semantic
similarity | 17 pages, 11 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The shortest path between two concepts in a taxonomic ontology is commonly
used to represent the semantic distance between concepts in the edge-based
semantic similarity measures. In the past, edge counting was considered the
default method for path computation, being simple, intuitive and of low
computational complexity. However, a large lexical taxonomy such as WordNet
has irregular densities of links between concepts due to its broad domain.
Edge-counting-based path computation is powerless against this non-uniformity
problem. In this paper, we advocate that path computation can be separated
from the edge-based similarity measures and can form various general
computing models. Therefore, in order to solve the problem of non-uniform
concept density in a large taxonomic ontology, we propose a new path
computing model based on the compensation of the local area density of
concepts, which is equal to the number of direct hyponyms of the subsumers of
concepts on their shortest path. This path model treats the local area
density of concepts as an extension of the edge-based path and converts the
local area density divided by their depth into a compensation for the
edge-based path with an adjustable parameter, an idea that has been proven to
be consistent with information theory. This model is a general path computing
model and can be applied in various edge-based similarity algorithms. The
experimental results show that the proposed path model improves the average
correlation between edge-based measures and human judgments on the Miller and
Charles benchmark from less than 0.8 to more than 0.85, and has a big
efficiency advantage over information content (IC) computation in a dynamic
ontology, thereby successfully solving the non-uniformity problem of
taxonomic ontologies.
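A minimal Python sketch of the compensated path length as described; the parameter name lam and the dictionaries supplying hyponym counts and depths are illustrative assumptions about how the model's inputs are represented:

def compensated_path_length(subsumers, hyponyms, depth, lam=0.5):
    # subsumers: concepts on the shortest path between the two concepts;
    # hyponyms[c]: number of direct hyponyms of c (local area density);
    # depth[c]: depth of c in the taxonomy (assumed >= 1);
    # lam: the adjustable compensation parameter.
    edges = len(subsumers) - 1  # classical edge-counting distance
    compensation = sum(hyponyms[c] / depth[c] for c in subsumers)
    return edges + lam * compensation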
| [
{
"version": "v1",
"created": "Wed, 3 Jun 2015 13:53:05 GMT"
}
] | 1,433,808,000,000 | [
[
"Zhu",
"Xinhua",
""
],
[
"Li",
"Fei",
""
],
[
"Chen",
"Hongchao",
""
],
[
"Peng",
"Qi",
""
]
] |
1506.01432 | Ondrej Kuzelka | Ondrej Kuzelka and Jesse Davis and Steven Schockaert | Encoding Markov Logic Networks in Possibilistic Logic | Extended version of a paper appearing in UAI 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markov logic uses weighted formulas to compactly encode a probability
distribution over possible worlds. Despite the use of logical formulas, Markov
logic networks (MLNs) can be difficult to interpret, due to the often
counter-intuitive meaning of their weights. To address this issue, we propose a
method to construct a possibilistic logic theory that exactly captures what can
be derived from a given MLN using maximum a posteriori (MAP) inference.
Unfortunately, the size of this theory is exponential in general. We therefore
also propose two methods which can derive compact theories that still capture
MAP inference, but only for specific types of evidence. These theories can be
used, among others, to make explicit the hidden assumptions underlying an MLN
or to explain the predictions it makes.
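A minimal sketch of the MLN semantics that makes the weights hard to interpret, together with MAP inference as the argmax that the possibilistic encoding captures; the world representation and grounding-count function are illustrative assumptions:

def log_weight(weights, counts):
    # Log of a world's unnormalised probability: sum_i w_i * n_i(world),
    # with n_i the number of satisfied groundings of formula i; the
    # partition function cancels in MAP inference.
    return sum(w * n for w, n in zip(weights, counts))

def map_world(worlds, weights, count_groundings):
    # MAP inference: the most probable world given the evidence.
    return max(worlds, key=lambda w: log_weight(weights, count_groundings(w)))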
| [
{
"version": "v1",
"created": "Wed, 3 Jun 2015 23:20:28 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Jun 2015 19:58:03 GMT"
}
] | 1,433,808,000,000 | [
[
"Kuzelka",
"Ondrej",
""
],
[
"Davis",
"Jesse",
""
],
[
"Schockaert",
"Steven",
""
]
] |