id stringlengths 9–10 | submitter stringlengths 5–47 ⌀ | authors stringlengths 5–1.72k | title stringlengths 11–234 | comments stringlengths 1–491 ⌀ | journal-ref stringlengths 4–396 ⌀ | doi stringlengths 13–97 ⌀ | report-no stringlengths 4–138 ⌀ | categories stringclasses 1 value | license stringclasses 9 values | abstract stringlengths 29–3.66k | versions listlengths 1–21 | update_date int64 1,180B–1,718B | authors_parsed sequencelengths 1–98 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1307.1900 | Arindam Chaudhuri AC | Arindam Chaudhuri, Kajal De | Fuzzy Integer Linear Programming Mathematical Models for Examination
Timetable Problem | International Journal of Innovative Computing, Information and
Control (Special Issue), Volume 7, Number 5, 2011 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ETP is NP Hard combinatorial optimization problem. It has received tremendous
research attention during the past few years given its wide use in
universities. In this Paper, we develop three mathematical models for NSOU,
Kolkata, India using FILP technique. To deal with impreciseness and vagueness
we model various allocation variables through fuzzy numbers. The solution to
the problem is obtained using Fuzzy number ranking method. Each feasible
solution has fuzzy number obtained by Fuzzy objective function. The different
FILP technique performance are demonstrated by experimental data generated
through extensive simulation from NSOU, Kolkata, India in terms of its
execution times. The proposed FILP models are compared with commonly used
heuristic viz. ILP approach on experimental data which gives an idea about
quality of heuristic. The techniques are also compared with different
Artificial Intelligence based heuristics for ETP with respect to best and mean
cost as well as execution time measures on Carter benchmark datasets to
illustrate its effectiveness. FILP takes an appreciable amount of time to
generate satisfactory solution in comparison to other heuristics. The
formulation thus serves as good benchmark for other heuristics. The
experimental study presented here focuses on producing a methodology that
generalizes well over spectrum of techniques that generates significant results
for one or more datasets. The performance of FILP model is finally compared to
the best results cited in literature for Carter benchmarks to assess its
potential. The problem can be further reduced by formulating with lesser number
of allocation variables it without affecting optimality of solution obtained.
FLIP model for ETP can also be adapted to solve other ETP as well as
combinatorial optimization problems.
| [
{
"version": "v1",
"created": "Sun, 7 Jul 2013 19:09:03 GMT"
}
] | 1,373,328,000,000 | [
[
"Chaudhuri",
"Arindam",
""
],
[
"De",
"Kajal",
""
]
] |
1307.1903 | Arindam Chaudhuri AC | Arindam Chaudhuri, Kajal De | Achieving greater Explanatory Power and Forecasting Accuracy with
Non-uniform spread Fuzzy Linear Regression | Proceedings of 13th Conference of Society of Operations Management,
Department of Management Studies, Indian Institute of Technology, Madras,
Tamilnadu, India, 2009 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fuzzy regression models have been applied to several Operations Research
applications, viz. forecasting and prediction. Earlier works on fuzzy
regression analysis obtain crisp regression coefficients for eliminating the
problem of increasing spreads for the estimated fuzzy responses as the
magnitude of the independent variable increases. But they cannot deal with the
problem of non-uniform spreads. In this work, a three-phase approach is
discussed to construct the fuzzy regression model with non-uniform spreads to
deal with this problem. The first phase constructs the membership functions of
the least-squares estimates of regression coefficients based on extension
principle to completely conserve the fuzziness of observations. They are then
defuzzified by the centre of area method to obtain crisp regression
coefficients in the second phase. Finally, the error terms of the method are
determined by setting each estimated spread equal to its corresponding observed
spread. The Takagi-Sugeno inference system is used for improving the accuracy
of forecasts. The simulation example demonstrates the strength of fuzzy linear
regression model in terms of higher explanatory power and forecasting
performance.
| [
{
"version": "v1",
"created": "Sun, 7 Jul 2013 19:20:01 GMT"
}
] | 1,373,328,000,000 | [
[
"Chaudhuri",
"Arindam",
""
],
[
"De",
"Kajal",
""
]
] |
1307.1905 | Arindam Chaudhuri AC | Arindam Chaudhuri | A Dynamic Algorithm for the Longest Common Subsequence Problem using Ant
Colony Optimization Technique | Proceedings of 2nd International Conference on Mathematics: Trends
and Developments, Al Azhar University, Cairo, Egypt, 2007 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a dynamic algorithm for solving the Longest Common Subsequence
Problem using Ant Colony Optimization Technique. The Ant Colony Optimization
Technique has been applied to solve many problems in Optimization Theory,
Machine Learning and Telecommunication Networks etc. In particular, application
of this theory in NP-Hard Problems has a remarkable significance. Given two
strings, the traditional technique for finding the Longest Common Subsequence
is based on Dynamic Programming, which consists of creating a recurrence
relation and filling a table of size O(mn), where m and n are the lengths of
the two strings. The proposed algorithm draws an analogy with the way ant
colonies function, and this new computational paradigm is known as the Ant
System. It is a viable new approach to Stochastic Combinatorial Optimization.
The main characteristics of this model are positive feedback, distributed
computation, and the use of a constructive greedy heuristic. Positive feedback
accounts for the rapid discovery of good solutions, distributed computation
avoids premature convergence, and the greedy heuristic helps find acceptable
solutions in a minimum number of stages. We apply the proposed methodology to
the Longest Common Subsequence Problem and give the simulation results. The
effectiveness of this approach is demonstrated by its efficient computational
complexity. To the best of our knowledge, this is the first Ant Colony
Optimization Algorithm for the Longest Common Subsequence Problem.
| [
{
"version": "v1",
"created": "Sun, 7 Jul 2013 19:30:54 GMT"
}
] | 1,373,328,000,000 | [
[
"Chaudhuri",
"Arindam",
""
]
] |
1307.2200 | Hang Dinh | Hang Dinh and Hieu Dinh | Inconsistency and Accuracy of Heuristics with A* Search | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many studies in heuristic search suggest that the accuracy of the heuristic
used has a positive impact on improving the performance of the search. In
another direction, historical research perceives that the performance of
heuristic search algorithms, such as A* and IDA*, can be improved by requiring
the heuristics to be consistent -- a property satisfied by any perfect
heuristic. However, a few recent studies show that inconsistent heuristics can
also be used to achieve a large improvement in these heuristic search
algorithms. These results leave us a natural question: which property of
heuristics, accuracy or consistency/inconsistency, should we focus on when
building heuristics? While there are studies on the heuristic accuracy with the
assumption of consistency, no studies on both the inconsistency and the
accuracy of heuristics are known to our knowledge.
In this study, we investigate the relationship between the inconsistency and
the accuracy of heuristics with A* search. Our analytical result reveals a
correlation between these two properties. We then run experiments on the domain
for the Knapsack problem with a family of practical heuristics. Our empirical
results show that in many cases, the more accurate heuristics also have a
higher level of inconsistency and result in fewer node expansions by A*.
| [
{
"version": "v1",
"created": "Mon, 8 Jul 2013 18:53:07 GMT"
}
] | 1,373,328,000,000 | [
[
"Dinh",
"Hang",
""
],
[
"Dinh",
"Hieu",
""
]
] |
1307.2704 | Hua Yao | Hua Yao, William Zhu | Applications of repeat degree on coverings of neighborhoods | 14 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In covering based rough sets, the neighborhood of an element is the
intersection of all the covering blocks containing the element. All the
neighborhoods form a new covering called a covering of neighborhoods. In the
course of studying under what condition a covering of neighborhoods is a
partition, the concept of repeat degree is proposed, with the help of which the
issue is addressed. This paper studies further the application of repeat degree
on coverings of neighborhoods. First, we investigate under what condition a
covering of neighborhoods is the reduct of the covering inducing it. As a
preparation for addressing this issue, we give a necessary and sufficient
condition for a subset of a set family to be the reduct of the set family. Then
we study under what condition two coverings induce the same relation and the
same covering of neighborhoods. Finally, we give a method for calculating the
covering according to repeat degree.
| [
{
"version": "v1",
"created": "Wed, 10 Jul 2013 07:43:57 GMT"
}
] | 1,373,500,800,000 | [
[
"Yao",
"Hua",
""
],
[
"Zhu",
"William",
""
]
] |
1307.3435 | Hadi Mohasel Afshar | Hadi Mohasel Afshar and Peter Sunehag | On Nicod's Condition, Rules of Induction and the Raven Paradox | On raven paradox, Nicod's condition, projectability, induction | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Philosophers writing about the ravens paradox often note that Nicod's
Condition (NC) holds given some sets of background information and fails to
hold given others, but rarely go any further. That is, it is usually not
explored which background information makes NC true or false. The present paper
aims to fill this gap. For us, "(objective) background knowledge" is restricted
to information that can be expressed as probability events. Any other
configuration is regarded as being subjective and a property of the a priori
probability distribution. We study NC in two specific settings. In the first
case, a complete description of some individuals is known, e.g. one knows of
each of a group of individuals whether they are black and whether they are
ravens. In the second case, the number of individuals having a particular
property is given, e.g. one knows how many ravens or how many black things
there are (in the relevant population). While some of the most famous answers
to the paradox are measure-dependent, our discussion is not restricted to any
particular probability measure. Our most interesting result is that in the
second setting, NC violates a simple kind of inductive inference (namely
projectability). Since relative to NC, this latter rule is more closely related
to, and more directly justified by our intuitive notion of inductive reasoning,
this tension makes a case against the plausibility of NC. In the end, we
suggest that the informal representation of NC may seem to be intuitively
plausible because it can easily be mistaken for reasoning by analogy.
| [
{
"version": "v1",
"created": "Fri, 12 Jul 2013 12:28:38 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Jul 2013 02:22:09 GMT"
}
] | 1,374,019,200,000 | [
[
"Afshar",
"Hadi Mohasel",
""
],
[
"Sunehag",
"Peter",
""
]
] |
1307.3585 | Bertrand Mazure | \'Eric Gr\'egoire, Jean-Marie Lagniez, Bertrand Mazure | Improving MUC extraction thanks to local search | 17 pages, 5 figures, 1 table, 3 algorithms, 33 references | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting MUCs (Minimal Unsatisfiable Cores) from an unsatisfiable constraint
network is a useful process when causes of unsatisfiability must be understood
so that the network can be re-engineered and relaxed to become satisfiable.
Despite bad worst-case computational complexity results, various MUC-finding
approaches that appear tractable for many real-life instances have been
proposed. Many of them are based on the successive identification of so-called
transition constraints. In this respect, we show how local search can be used
to possibly extract additional transition constraints at each main iteration
step. The approach is shown to outperform a technique based on a form of model
rotation imported from the SAT-related technology that also exhibits additional
transition constraints. Our extensive computational experimentations show that
this enhancement also boosts the performance of state-of-the-art DC(WCORE)-like
MUC extractors.
| [
{
"version": "v1",
"created": "Fri, 12 Jul 2013 21:28:05 GMT"
}
] | 1,373,932,800,000 | [
[
"Grégoire",
"Éric",
""
],
[
"Lagniez",
"Jean-Marie",
""
],
[
"Mazure",
"Bertrand",
""
]
] |
1307.4689 | Yuri Malitsky | Giovanni Di Liberto and Serdar Kadioglu and Kevin Leo and Yuri
Malitsky | DASH: Dynamic Approach for Switching Heuristics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complete tree search is a highly effective method for tackling MIP problems,
and over the years, a plethora of branching heuristics have been introduced to
further refine the technique for varying problems. Recently, portfolio
algorithms have taken the process a step further, trying to predict the best
heuristic for each instance at hand. However, the motivation behind algorithm
selection can be taken further still, and used to dynamically choose the most
appropriate algorithm for each encountered subproblem. In this paper we
identify a feature space that captures both the evolution of the problem in the
branching tree and the similarity among subproblems of instances from the same
MIP models. We show how to exploit these features to decide the best time to
switch the branching heuristic and then show how such a system can be trained
efficiently. Experiments on a highly heterogeneous collection of MIP instances
show significant gains over the pure algorithm selection approach that for a
given instance uses only a single heuristic throughout the search.
| [
{
"version": "v1",
"created": "Wed, 17 Jul 2013 16:31:14 GMT"
}
] | 1,374,192,000,000 | [
[
"Di Liberto",
"Giovanni",
""
],
[
"Kadioglu",
"Serdar",
""
],
[
"Leo",
"Kevin",
""
],
[
"Malitsky",
"Yuri",
""
]
] |
1307.5322 | Emanuel Santos ES | Emanuel Santos, Daniel Faria, C\'atia Pesquita and Francisco Couto | Ontology alignment repair through modularization and confidence-based
heuristics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontology Matching aims to find a set of semantic correspondences, called an
alignment, between related ontologies. In recent years, there has been a
growing interest in efficient and effective matching methods for large
ontologies. However, most of the alignments produced for large ontologies are
logically incoherent. It was only recently that the use of repair techniques to
improve the quality of ontology alignments has been explored. In this paper we
present a novel technique for detecting incoherent concepts based on ontology
modularization, and a new repair algorithm that minimizes the incoherence of
the resulting alignment and the number of matches removed from the input
alignment. An implementation was done as part of a lightweight version of the
AgreementMaker system, a successful ontology matching platform, and evaluated
using a set of four benchmark biomedical ontology matching tasks. Our results
show that our implementation is efficient and produces better alignments with
respect to their coherence and f-measure than state-of-the-art repair tools.
They also show that our implementation is a better alternative for
producing coherent silver standard alignments.
| [
{
"version": "v1",
"created": "Fri, 19 Jul 2013 16:15:41 GMT"
}
] | 1,374,537,600,000 | [
[
"Santos",
"Emanuel",
""
],
[
"Faria",
"Daniel",
""
],
[
"Pesquita",
"Cátia",
""
],
[
"Couto",
"Francisco",
""
]
] |
1308.0702 | Sergey Rodionov | Alexey Potapov, Sergey Rodionov | Universal Empathy and Ethical Bias for Artificial General Intelligence | AGI Impacts conference 2012 paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rational agents are usually built to maximize rewards. However, AGI agents
can find undesirable ways of maximizing any prior reward function. Therefore
value learning is crucial for safe AGI. We assume that generalized states of
the world, not the rewards themselves, are valuable, and propose an extension
of AIXI in which rewards are used only to bootstrap hierarchical value
learning. The modified AIXI agent is considered in a multi-agent environment,
where other agents can be either humans or other "mature" agents, whose values
should be revealed and adopted by the "infant" AGI agent. A general framework
for designing such an empathic agent with ethical bias is also proposed as an
extension of the universal intelligence model. Moreover, we perform experiments
in a simple Markov environment, which demonstrate the feasibility of our
approach to value learning in safe AGI.
| [
{
"version": "v1",
"created": "Sat, 3 Aug 2013 14:40:36 GMT"
}
] | 1,375,747,200,000 | [
[
"Potapov",
"Alexey",
""
],
[
"Rodionov",
"Sergey",
""
]
] |
1308.0807 | Matthias Thimm | Matthias Thimm, Gabriele Kern-Isberner | Stratified Labelings for Abstract Argumentation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce stratified labelings as a novel semantical approach to abstract
argumentation frameworks. Compared to standard labelings, stratified labelings
provide a more fine-grained assessment of the controversiality of arguments
using ranks instead of the usual labels in, out, and undecided. We relate the
framework of stratified labelings to conditional logic and, in particular, to
the System Z ranking functions.
| [
{
"version": "v1",
"created": "Sun, 4 Aug 2013 13:08:50 GMT"
}
] | 1,375,747,200,000 | [
[
"Thimm",
"Matthias",
""
],
[
"Kern-Isberner",
"Gabriele",
""
]
] |
1308.2116 | Daniel Kuehlwein | Daniel K\"uhlwein and Josef Urban | MaLeS: A Framework for Automatic Tuning of Automated Theorem Provers | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MaLeS is an automatic tuning framework for automated theorem provers. It
provides solutions for both the strategy finding as well as the strategy
scheduling problem. This paper describes the tool and the methods used in it,
and evaluates its performance on three automated theorem provers: E, LEO-II and
Satallax. An evaluation on a subset of the TPTP library problems shows that on
average a MaLeS-tuned prover solves 8.67% more problems than the prover with
its default settings.
| [
{
"version": "v1",
"created": "Fri, 9 Aug 2013 13:08:33 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Aug 2013 12:05:11 GMT"
},
{
"version": "v3",
"created": "Sun, 1 Jun 2014 13:38:59 GMT"
}
] | 1,401,753,600,000 | [
[
"Kühlwein",
"Daniel",
""
],
[
"Urban",
"Josef",
""
]
] |
1308.2119 | Mark Keane | Mark Keane | Deconstructing analogy | Published Chapter in Book from Conference; CogSc-12: ILCLI
International Workshop on Cognitive Science. Universidad del Pais Vasco
Press: San Sebastian, Spain. 2012 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analogy has been shown to be important in many key cognitive abilities,
including learning, problem solving, creativity and language change. For
cognitive models of analogy, the fundamental computational question is how its
inherent complexity (its NP-hardness) is solved by the human cognitive system.
Indeed, different models of analogical processing can be categorized by the
simplification strategies they adopt to make this computational problem more
tractable. In this paper, I deconstruct several of these models in terms of the
simplification-strategies they use; a deconstruction that provides some
interesting perspectives on the relative differences between them. Later, I
consider whether any of these computational simplifications reflect the actual
strategies used by people and sketch a new cognitive model that tries to
present a closer fit to the psychological evidence.
| [
{
"version": "v1",
"created": "Fri, 9 Aug 2013 13:26:57 GMT"
}
] | 1,376,265,600,000 | [
[
"Keane",
"Mark",
""
]
] |
1308.2124 | Alexander V Terekhov | Alexander V. Terekhov and J. Kevin O'Regan | Space as an invention of biological organisms | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The question of the nature of space around us has occupied thinkers since the
dawn of humanity, with scientists and philosophers today implicitly assuming
that space is something that exists objectively. Here we show that this does
not have to be the case: the notion of space could emerge when biological
organisms seek an economic representation of their sensorimotor flow. The
emergence of spatial notions does not necessitate the existence of real
physical space, but only requires the presence of sensorimotor invariants
called `compensable' sensory changes. We show mathematically and then in
simulations that na\"ive agents making no assumptions about the existence of
space are able to learn these invariants and to build the abstract notion that
physicists call rigid displacement, which is independent of what is being
displaced. Rigid displacements may underlie the perception of space as an unchanging
medium within which objects are described by their relative positions. Our
findings suggest that the question of the nature of space, currently exclusive
to philosophy and physics, should also be addressed from the standpoint of
neuroscience and artificial intelligence.
| [
{
"version": "v1",
"created": "Fri, 9 Aug 2013 13:50:48 GMT"
}
] | 1,376,265,600,000 | [
[
"Terekhov",
"Alexander V.",
""
],
[
"O'Regan",
"J. Kevin",
""
]
] |
1308.2309 | Tshilidzi Marwala | Satyakama Paul, Andreas Janecek, Fernando Buarque de Lima Neto and
Tshilidzi Marwala | Applying the Negative Selection Algorithm for Merger and Acquisition
Target Identification | To appear in the proceedings of the 1st BRICS Countries & 11th CBIC
Brazilian Congress on Computational Intelligence | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new methodology based on the Negative Selection
Algorithm that belongs to the field of Computational Intelligence,
specifically, Artificial Immune Systems to identify takeover targets. Although
considerable research based on customary statistical techniques and some
contemporary Computational Intelligence techniques has been devoted to
identifying takeover targets, most of the existing studies are based upon multiple
previous mergers and acquisitions. Contrary to previous research, the novelty
of this proposal lies in its ability to suggest takeover targets for novice
firms that are at the beginning of their merger and acquisition spree. We first
discuss the theoretical perspective and then provide a case study with details
for practical implementation, both capitalizing from unique generalization
capabilities of artificial immune systems algorithms.
| [
{
"version": "v1",
"created": "Sat, 10 Aug 2013 13:17:46 GMT"
}
] | 1,376,352,000,000 | [
[
"Paul",
"Satyakama",
""
],
[
"Janecek",
"Andreas",
""
],
[
"Neto",
"Fernando Buarque de Lima",
""
],
[
"Marwala",
"Tshilidzi",
""
]
] |
1308.2772 | Mohammad Reza Mollakhalili meybodi | M.R.Mollakhalili Meybodi and M.R.Meybodi | Extended Distributed Learning Automata: A New Method for Solving
Stochastic Graph Optimization Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a new structure of cooperative learning automata, the so-called
extended distributed learning automata (eDLA), is introduced. Based on the
proposed structure, a new iterative randomized heuristic algorithm for finding
an optimal sub-graph in a stochastic edge-weighted graph through sampling is
proposed. It has been shown that the proposed algorithm, based on the new
networked structure, can solve optimization problems on stochastic graphs with
fewer samples compared to standard sampling. Stochastic graphs are graphs in
which the edge weights have unknown probability distributions. The proposed
algorithm uses an eDLA to find a policy that leads to an induced sub-graph that
satisfies some restrictions such as minimum or maximum weight (length). At each
stage of the proposed algorithm, the eDLA determines which edges are to be
sampled. This eDLA-based sampling method may reduce unnecessary samples and
hence the time that the algorithm requires for finding the optimal sub-graph.
It has been shown that the proposed method converges to the optimal solution;
furthermore, the probability of this convergence can be made arbitrarily close
to 1 by using a sufficiently small learning rate. A new variance-aware
threshold value is proposed that can significantly improve the convergence rate
of the proposed eDLA-based algorithm. It has been shown that the proposed
algorithm is competitive in terms of the quality of the solution.
| [
{
"version": "v1",
"created": "Tue, 13 Aug 2013 07:15:24 GMT"
}
] | 1,376,438,400,000 | [
[
"Meybodi",
"M. R. Mollakhalili",
""
],
[
"Meybodi",
"M. R.",
""
]
] |
1308.3309 | Daniel Huntley | Daniel Huntley and Vadim Bulitko | Search-Space Characterization for Real-time Heuristic Search | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent real-time heuristic search algorithms have demonstrated outstanding
performance in video-game pathfinding. However, their applications have been
thus far limited to that domain. We proceed with the aim of facilitating wider
applications of real-time search by fostering a greater understanding of the
performance of recent algorithms. We first introduce eight
algorithm-independent complexity measures for search spaces and correlate their
values with algorithm performance. The complexity measures are statistically
shown to be significant predictors of algorithm performance across a set of
commercial video-game maps. We then extend this analysis to a wider variety of
search spaces in the first application of database-driven real-time search to
domains outside of video-game pathfinding. In doing so, we gain insight into
algorithm performance and possible enhancement as well as into search space
complexity.
| [
{
"version": "v1",
"created": "Thu, 15 Aug 2013 05:50:19 GMT"
}
] | 1,376,611,200,000 | [
[
"Huntley",
"Daniel",
""
],
[
"Bulitko",
"Vadim",
""
]
] |
1308.4846 | Martin Chmel\'ik | Krishnendu Chatterjee, Martin Chmel\'ik | POMDPs under Probabilistic Semantics | Full version of: POMDPs under Probabilistic Semantics, UAI 2013 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider partially observable Markov decision processes (POMDPs) with
limit-average payoff, where a reward value in the interval [0,1] is associated
to every transition, and the payoff of an infinite path is the long-run average
of the rewards. We consider two types of path constraints: (i) a quantitative
constraint, which defines the set of paths where the payoff is at least a given
threshold {\lambda} in (0, 1]; and (ii) a qualitative constraint, which is a
special case of the quantitative constraint with {\lambda} = 1. We consider the
computation of the almost-sure winning set, where the controller needs to
ensure that the path constraint is satisfied with probability 1. Our main
results for qualitative path constraint are as follows: (i) the problem of
deciding the existence of a finite-memory controller is EXPTIME-complete; and
(ii) the problem of deciding the existence of an infinite-memory controller is
undecidable. For quantitative path constraint we show that the problem of
deciding the existence of a finite-memory controller is undecidable.
| [
{
"version": "v1",
"created": "Thu, 22 Aug 2013 12:50:27 GMT"
}
] | 1,377,216,000,000 | [
[
"Chatterjee",
"Krishnendu",
""
],
[
"Chmelík",
"Martin",
""
]
] |
1308.4943 | Claus-Peter Wirth | Claus-Peter Wirth, Frieder Stolzenburg | David Poole's Specificity Revised | ii+34 pages | null | null | SEKI-Report SR-2013-01 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the middle of the 1980s, David Poole introduced a semantical,
model-theoretic notion of specificity to the artificial-intelligence community.
Since then it has found further applications in non-monotonic reasoning, in
particular in defeasible reasoning. Poole tried to approximate the intuitive
human concept of specificity, which seems to be essential for reasoning in
everyday life with its partial and inconsistent information. His notion,
however, turns out to be intricate and problematic, which --- as we show ---
can be overcome to some extent by a closer approximation of the intuitive human
concept of specificity. Besides the intuitive advantages of our novel
specificity ordering over Poole's specificity relation in the classical
examples of the literature, we also report some hard mathematical facts:
Contrary to what was claimed before, we show that Poole's relation is not
transitive. The present means to decide our novel specificity relation,
however, show only a slight improvement over the known ones for Poole's
relation, and further work is needed in this aspect.
| [
{
"version": "v1",
"created": "Thu, 22 Aug 2013 18:28:21 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Aug 2013 09:51:46 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Nov 2013 10:44:05 GMT"
},
{
"version": "v4",
"created": "Sun, 24 Nov 2013 17:08:51 GMT"
}
] | 1,392,076,800,000 | [
[
"Wirth",
"Claus-Peter",
""
],
[
"Stolzenburg",
"Frieder",
""
]
] |
1308.5046 | Jes\'us Gir\'aldez-Cru | C. Ans\'otegui (1), M. L. Bonet (2), J. Gir\'aldez-Cru (3) and J. Levy
(3) ((1) DIEI, Univ. de Lleida, (2) LSI, UPC, (3) IIIA-CSIC) | The Fractal Dimension of SAT Formulas | 20 pages, 11 Postscript figures | Automated Reasoning, LNCS 8562, pp 107-121, Springer (2014) | 10.1007/978-3-319-08587-6_8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern SAT solvers have made remarkable progress on solving
industrial instances. Most of the techniques have been developed after an
intensive experimental testing process. Recently, there have been some attempts
to analyze the structure of these formulas in terms of complex networks, with
the long-term aim of explaining the success of these SAT solving techniques,
and possibly improving them.
We study the fractal dimension of SAT formulas, and show that most industrial
families of formulas are self-similar, with a small fractal dimension. We also
show that this dimension is not affected by the addition of learnt clauses. We
explore how the dimension of a formula, together with other graph properties,
can be used to characterize SAT instances. Finally, we give empirical evidence
that these graph properties can be used in state-of-the-art portfolios.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2013 04:30:37 GMT"
}
] | 1,678,752,000,000 | [
[
"Ansótegui",
"C.",
"",
"DIEI, Univ. de Lleida"
],
[
"Bonet",
"M. L.",
"",
"LSI, UPC"
],
[
"Giráldez-Cru",
"J.",
"",
"IIIA-CSIC"
],
[
"Levy",
"J.",
"",
"IIIA-CSIC"
]
] |
1308.5136 | Uwe Aickelin | Josie McCulloch, Christian Wagner, Uwe Aickelin | Extending Similarity Measures of Interval Type-2 Fuzzy Sets to General
Type-2 Fuzzy Sets | International Conference on Fuzzy Systems 2013 (Fuzz-IEEE 2013) | null | 10.1109/FUZZ-IEEE.2013.6622408 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity measures provide one of the core tools that enable reasoning about
fuzzy sets. While many types of similarity measures exist for type-1 and
interval type-2 fuzzy sets, there are very few similarity measures that enable
the comparison of general type-2 fuzzy sets. In this paper, we introduce a
general method for extending existing interval type-2 similarity measures to
similarity measures for general type-2 fuzzy sets. Specifically, we show how
similarity measures for interval type-2 fuzzy sets can be employed in
conjunction with the zSlices based general type-2 representation for fuzzy sets
to provide measures of similarity which preserve all the common properties
(i.e. reflexivity, symmetry, transitivity and overlapping) of the original
interval type-2 similarity measure. We demonstrate examples of such extended
fuzzy measures and provide comparisons between (different types of) interval
and general type-2 fuzzy measures.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2013 14:29:03 GMT"
}
] | 1,479,340,800,000 | [
[
"McCulloch",
"Josie",
""
],
[
"Wagner",
"Christian",
""
],
[
"Aickelin",
"Uwe",
""
]
] |
1308.5137 | Uwe Aickelin | Josie McCulloch, Christian Wagner, Uwe Aickelin | Measuring the Directional Distance Between Fuzzy Sets | UKCI 2013, the 13th Annual Workshop on Computational Intelligence,
Surrey University | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The measure of distance between two fuzzy sets is a fundamental tool within
fuzzy set theory. However, current distance measures within the literature do
not account for the direction of change between fuzzy sets; a useful concept in
a variety of applications, such as Computing With Words. In this paper, we
highlight this utility and introduce a distance measure which takes the
direction between sets into account. We provide details of its application for
normal and non-normal, as well as convex and non-convex fuzzy sets. We
demonstrate the new distance measure using real data from the MovieLens dataset
and establish the benefits of measuring the direction between fuzzy sets.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2013 14:31:10 GMT"
}
] | 1,377,475,200,000 | [
[
"McCulloch",
"Josie",
""
],
[
"Wagner",
"Christian",
""
],
[
"Aickelin",
"Uwe",
""
]
] |
1308.5321 | Seppo Ilari Tirri | Seppo Ilari Tirri | Evolution Theory of Self-Evolving Autonomous Problem Solving Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The present study gives a mathematical framework for self-evolution within
autonomous problem solving systems. Special attention is set on universal
abstraction, thereof generation by net block homomorphism, consequently
multiple order solving systems and the overall decidability of the set of the
solutions. By overlapping presentation of nets new abstraction relation among
nets is formulated alongside with consequent alphabetical net block renetting
system proportional to normal forms of renetting systems regarding the
operational power. A new structure in self-evolving problem solving is
established via saturation by groups of equivalence relations and iterative
closures of generated quotient transducer algebras over the whole evolution.
| [
{
"version": "v1",
"created": "Sat, 24 Aug 2013 12:45:48 GMT"
}
] | 1,377,561,600,000 | [
[
"Tirri",
"Seppo Ilari",
""
]
] |
1308.6292 | Marco Montali | Babak Bagheri Hariri, Diego Calvanese, Marco Montali, Ario Santoso,
Dmitry Solomakhin | Verification of Semantically-Enhanced Artifact Systems (Extended
Version) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artifact-Centric systems have emerged in recent years as a suitable
framework to model business-relevant entities, by combining their static and
dynamic aspects. In particular, the Guard-Stage-Milestone (GSM) approach has
been recently proposed to model artifacts and their lifecycle in a declarative
way. In this paper, we enhance GSM with a Semantic Layer, constituted by a
full-fledged OWL 2 QL ontology linked to the artifact information models
through mapping specifications. The ontology provides a conceptual view of the
domain under study, and allows one to understand the evolution of the artifact
system at a higher level of abstraction. In this setting, we present a
technique to specify temporal properties expressed over the Semantic Layer, and
verify them according to the evolution in the underlying GSM model. This
technique has been implemented in a tool that exploits state-of-the-art
ontology-based data access technologies to manipulate the temporal properties
according to the ontology and the mappings, and that relies on the GSMC model
checker for verification.
| [
{
"version": "v1",
"created": "Wed, 28 Aug 2013 20:01:36 GMT"
}
] | 1,377,820,800,000 | [
[
"Hariri",
"Babak Bagheri",
""
],
[
"Calvanese",
"Diego",
""
],
[
"Montali",
"Marco",
""
],
[
"Santoso",
"Ario",
""
],
[
"Solomakhin",
"Dmitry",
""
]
] |
1309.1226 | Joseph Y. Halpern | Joseph Y. Halpern and Christopher Hitchcock | Graded Causation and Defaults | To appear, British Journal for the Philosophy of Science | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work in psychology and experimental philosophy has shown that
judgments of actual causation are often influenced by consideration of
defaults, typicality, and normality. A number of philosophers and computer
scientists have also suggested that an appeal to such factors can help deal
with problems facing existing accounts of actual causation. This paper develops
a flexible formal framework for incorporating defaults, typicality, and
normality into an account of actual causation. The resulting account takes
actual causation to be both graded and comparative. We then show how our
account would handle a number of standard cases.
| [
{
"version": "v1",
"created": "Thu, 5 Sep 2013 02:17:54 GMT"
}
] | 1,378,425,600,000 | [
[
"Halpern",
"Joseph Y.",
""
],
[
"Hitchcock",
"Christopher",
""
]
] |
1309.1227 | Joseph Y. Halpern | Joseph Y. Halpern and Christopher Hitchcock | Compact Representations of Extended Causal Models | null | Cognitive Science 37:6, 2013, pp. 986-1010 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Judea Pearl was the first to propose a definition of actual causation using
causal models. A number of authors have suggested that an adequate account of
actual causation must appeal not only to causal structure, but also to
considerations of normality. In earlier work, we provided a definition of
actual causation using extended causal models, which include information about
both causal structure and normality. Extended causal models are potentially
very complex. In this paper, we show how it is possible to achieve a compact
representation of extended causal models.
| [
{
"version": "v1",
"created": "Thu, 5 Sep 2013 02:26:44 GMT"
}
] | 1,378,425,600,000 | [
[
"Halpern",
"Joseph Y.",
""
],
[
"Hitchcock",
"Christopher",
""
]
] |
1309.1228 | Joseph Y. Halpern | Joseph Y. Halpern | Weighted regret-based likelihood: a new approach to describing
uncertainty | Appeared in 12th European Conference on Symbolic and Quantitative
Approaches to Reasoning with Uncertainty (ECSQARU), 2013, pp. 266--277 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Halpern and Leung suggested representing uncertainty by a weighted
set of probability measures, and suggested a way of making decisions based on
this representation of uncertainty: maximizing weighted regret. Their paper
does not answer an apparently simpler question: what it means, according to
this representation of uncertainty, for an event E to be more likely than an
event E'. In this paper, a notion of comparative likelihood when uncertainty is
represented by a weighted set of probability measures is defined. It
generalizes the ordering defined by probability (and by lower probability) in a
natural way; a generalization of upper probability can also be defined. A
complete axiomatic characterization of this notion of regret-based likelihood
is given.
| [
{
"version": "v1",
"created": "Thu, 5 Sep 2013 02:33:30 GMT"
}
] | 1,378,425,600,000 | [
[
"Halpern",
"Joseph Y.",
""
]
] |
1309.1973 | Feng Wu | Feng Wu and Nicholas R. Jennings | Regret-Based Multi-Agent Coordination with Uncertain Task Rewards | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many multi-agent coordination problems can be represented as DCOPs. Motivated
by task allocation in disaster response, we extend standard DCOP models to
consider uncertain task rewards where the outcome of completing a task depends
on its current state, which is randomly drawn from unknown distributions. The
goal of solving this problem is to find a solution for all agents that
minimizes the overall worst-case loss. This is a challenging problem for
centralized algorithms because the search space grows exponentially with the
number of agents and is nontrivial for existing standard DCOP algorithms. To
address this, we propose a novel decentralized algorithm that incorporates
Max-Sum with iterative constraint generation to solve the problem by passing
messages among agents. By so doing, our approach scales well and can solve
instances of the task allocation problem with hundreds of agents and tasks.
| [
{
"version": "v1",
"created": "Sun, 8 Sep 2013 16:20:06 GMT"
}
] | 1,378,771,200,000 | [
[
"Wu",
"Feng",
""
],
[
"Jennings",
"Nicholas R.",
""
]
] |
1309.2351 | Rodrigo Lopez-Pablos | Rodrigo Lopez-Pablos (Universidad Nacional de La Matanza y Universidad
Tecnol\'ogica Nacional) | Elementos de ingenier\'ia de explotaci\'on de la informaci\'on aplicados
a la investigaci\'on tributaria fiscal | 30 pages, 7 figures, written in Castilian, Artificial Intelligence
(cs.AI), Computers and Society (cs.CY) | null | null | null | cs.AI | http://creativecommons.org/licenses/publicdomain/ | By introducing elements of information mining to tax analysis, by means of
data mining software and advanced computational concepts of artificial
intelligence, the problem of tax evader's crime against public property has
been addressed. Through an empirical approach from a hypothetical case of use,
induction algorithms, neural networks and bayesian networks are applied to
determine the feasibility of its heuristic application by the tax public
administrator. Different strategies are explored to facilitate the work of
local and regional federal tax inspectors, considering their limited
computational capabilities, but equally effective for those social scientists
committed to handcrafting tax research.
| [
{
"version": "v1",
"created": "Tue, 10 Sep 2013 00:42:05 GMT"
}
] | 1,378,857,600,000 | [
[
"Lopez-Pablos",
"Rodrigo",
"",
"Universidad Nacional de La Matanza y Universidad\n Tecnológica Nacional"
]
] |
1309.2747 | Junping Zhou | Junping Zhou, Weihua Su, Minghao Yin | Approximate Counting CSP Solutions Using Partition Function | 14 pages, 2 figures, 3 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new approximate method for counting the number of solutions
for the constraint satisfaction problem (CSP). The method derives from the
partition function, introducing the free energy and capturing the
relationship between the probabilities of variables and constraints, which
requires the marginal probabilities. It first obtains the marginal probabilities using the
belief propagation, and then computes the number of solutions according to the
partition function. This allows us to directly plug the marginal probabilities
into the partition function and efficiently count the number of solutions for
CSP. The experimental results show that our method can solve both random
problems and structural problems efficiently.
| [
{
"version": "v1",
"created": "Wed, 11 Sep 2013 07:32:07 GMT"
}
] | 1,378,944,000,000 | [
[
"Zhou",
"Junping",
""
],
[
"Su",
"Weihua",
""
],
[
"Yin",
"Minghao",
""
]
] |
1309.3039 | Azlan Iqbal | Azlan Iqbal | How Relevant Are Chess Composition Conventions? | 10 pages, 3 tables, 2 figures. Accepted to the 23rd International
Joint Conference on Artificial Intelligence (IJCAI) Workshop on Computer
Games, Beijing, China, 3-5 August 2013. Published version:
http://link.springer.com/chapter/10.1007%2F978-3-319-05428-5_9 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Composition conventions are guidelines used by human composers in composing
chess problems. They are particularly significant in composition tournaments.
Examples include not having any check in the first move of the solution and
not dressing up the board with unnecessary pieces. Conventions are often
associated or even directly conflated with the overall aesthetics or beauty of
a composition. Using an existing experimentally-validated computational
aesthetics model for three-move mate problems, we analyzed sets of
computer-generated compositions adhering to at least 2, 3 and 4 comparable
conventions to test if simply conforming to more conventions had a positive
effect on their aesthetics, as is generally believed by human composers. We
found slight but statistically significant evidence that it does, but only to a
point. We also analyzed human judge scores of 145 three-move mate problems
composed by humans to see if they had any positive correlation with the
computational aesthetic scores of those problems. We found that they did not.
These seemingly conflicting findings suggest two main things. First, the right
amount of adherence to composition conventions in a composition has a positive
effect on its perceived aesthetics. Second, human judges either do not look at
the same conventions related to aesthetics in the model used or emphasize
others that have less to do with beauty as perceived by the majority of
players, even though they may mistakenly consider their judgements beautiful in
the traditional, non-esoteric sense. Human judges may also be relying
significantly on personal tastes as we found no correlation between their
individual scores either.
| [
{
"version": "v1",
"created": "Thu, 12 Sep 2013 06:00:13 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Sep 2016 02:38:10 GMT"
}
] | 1,474,502,400,000 | [
[
"Iqbal",
"Azlan",
""
]
] |
1309.3242 | Iman Esmaili Paeen Afrakoti | Iman Esmaili Paeen Afrakoti, Saeed Bagheri Shouraki and Farnood
Merrikhbayat | Using memristor crossbar structure to implement a novel adaptive real
time fuzzy modeling algorithm | 24 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although fuzzy techniques promise fast yet accurate modeling and
control abilities for complicated systems, different difficulties have been
revealed in real situation implementations. Usually there is no escape from
iterative optimization based on crisp domain algorithms. Recently, memristor
structures appeared promising to implement neural network structures and fuzzy
algorithms. In this paper a novel adaptive real-time fuzzy modeling algorithm
is proposed which uses active learning method concept to mimic recent
understandings of right brain processing techniques. The developed method is
based on processing fuzzy numbers to provide the ability of being sensitive to
each training data point to expand the knowledge tree leading to plasticity
while the defuzzification technique used guarantees enough stability. An
outstanding characteristic of the proposed algorithm is its consistency with
memristor crossbar hardware processing concepts. An analog implementation of
the proposed algorithm on a memristor crossbar structure is also introduced in
this paper. The effectiveness of the proposed algorithm in modeling and pattern
recognition tasks is verified by means of computer simulations.
| [
{
"version": "v1",
"created": "Thu, 12 Sep 2013 19:02:00 GMT"
}
] | 1,483,833,600,000 | [
[
"Afrakoti",
"Iman Esmaili Paeen",
""
],
[
"Shouraki",
"Saeed Bagheri",
""
],
[
"Merrikhbayat",
"Farnood",
""
]
] |
1309.3285 | Salman Hooshmand | Salman Hooshmand, Mehdi Behshameh and Omid Hamidi | A tabu search algorithm with efficient diversification strategy for high
school timetabling problem | null | null | 10.5121/ijcsit.2013.5402 | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | The school timetabling problem can be described as scheduling a set of
lessons (combination of classes, teachers, subjects and rooms) in a weekly
timetable. This paper presents a novel way to generate timetables for high
schools. The algorithm has three phases. Pre-scheduling, initial phase and
optimization through tabu search. In the first phase, a graph based algorithm
used to create groups of lessons to be scheduled simultaneously; then an
initial solution is built by a sequential greedy heuristic. Finally, the
solution is optimized using tabu search algorithm based on frequency based
diversification. The algorithm has been tested on a set of real problems
gathered from Iranian high schools. Experiments show that the proposed
algorithm can effectively build acceptable timetables.
| [
{
"version": "v1",
"created": "Thu, 12 Sep 2013 20:03:09 GMT"
}
] | 1,379,289,600,000 | [
[
"Hooshmand",
"Salman",
""
],
[
"Behshameh",
"Mehdi",
""
],
[
"Hamidi",
"Omid",
""
]
] |
1309.3611 | Fionn Murtagh | Fionn Murtagh | Ultrametric Component Analysis with Application to Analysis of Text and
of Emotion | 49 pages, 15 figures, 52 citations | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We review the theory and practice of determining what parts of a data set are
ultrametric. It is assumed that the data set, to begin with, is endowed with a
metric, and we include discussion of how this can be brought about if a
dissimilarity, only, holds. The basis for part of the metric-endowed data set
being ultrametric is to consider triplets of the observables (vectors). We
develop a novel consensus of hierarchical clusterings. We do this in order to
have a framework (including visualization and supporting interpretation) for
the parts of the data that are determined to be ultrametric. Furthermore, a
major objective is to determine locally ultrametric relationships as opposed to
non-local ultrametric relationships. As part of this work, we also study a
particular property of our ultrametricity coefficient, namely, it being a
function of the difference of angles of the base angles of the isosceles
triangle. This work is completed by a review of related work, on consensus
hierarchies, and of a major new application, namely quantifying and
interpreting the emotional content of narrative.
| [
{
"version": "v1",
"created": "Sat, 14 Sep 2013 00:12:13 GMT"
}
] | 1,379,376,000,000 | [
[
"Murtagh",
"Fionn",
""
]
] |
1309.3917 | Gaetan Marceau | Ga\'etan Marceau (INRIA Saclay - Ile de France, LRI), Pierre
Sav\'eant, Marc Schoenauer (INRIA Saclay - Ile de France, LRI) | Strategic Planning in Air Traffic Control as a Multi-objective
Stochastic Optimization Problem | ATM Seminar 2013 (2013) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the objective of handling the airspace sector congestion subject to
continuously growing air traffic, we suggest creating a collaborative working
plan during the strategic phase of air traffic control. The plan obtained via a
new decision-support tool presented in this article consists of a schedule for
controllers, which specifies the time of overflight at the different waypoints of
the flight plans. To do so, we believe that the decision-support tool
should model the uncertainty directly at the trajectory level in order to
propagate it to the sector level. Then, the probability of
congestion for any sector in the airspace can be computed. Since air traffic
regulations and sector congestion are antagonist, we designed and implemented a
multi-objective optimization algorithm for determining the best trade-off
between these two criteria. The solution comes up as a set of alternatives for
the multi-sector planner where the severity of the congestion cost is
adjustable. In this paper, the Non-dominated Sorting Genetic Algorithm
(NSGA-II) was used to solve an artificial benchmark problem involving 24
aircraft and 11 sectors, and is able to provide a good approximation of the
Pareto front.
| [
{
"version": "v1",
"created": "Mon, 16 Sep 2013 11:52:07 GMT"
}
] | 1,379,376,000,000 | [
[
"Marceau",
"Gaétan",
"",
"INRIA Saclay - Ile de France, LRI"
],
[
"Savéant",
"Pierre",
"",
"INRIA Saclay - Ile de France, LRI"
],
[
"Schoenauer",
"Marc",
"",
"INRIA Saclay - Ile de France, LRI"
]
] |
1309.3921 | Gaetan Marceau | Ga\'etan Marceau (INRIA Saclay - Ile de France, LRI), Pierre
Sav\'eant, Marc Schoenauer (INRIA Saclay - Ile de France, LRI) | Computational Methods for Probabilistic Inference of Sector Congestion
in Air Traffic Management | Interdisciplinary Science for Innovative Air Traffic Management
(2013) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article addresses the issue of computing the expected cost functions
from a probabilistic model of the air traffic flow and capacity management. The
Clenshaw-Curtis quadrature is compared to Monte-Carlo algorithms defined
specifically for this problem. By tailoring the algorithms to this model, we
reduce the computational burden in order to simulate real instances. The study
shows that the Monte-Carlo algorithm is more sensitive to the amount of
uncertainty in the system, but has the advantage of returning a result with the
associated accuracy on demand. The performances for both approaches are
comparable for the computation of the expected cost of delay and the expected
cost of congestion. Finally, this study shows some evidences that the
simulation of the proposed probabilistic model is tractable for realistic
instances.
| [
{
"version": "v1",
"created": "Mon, 16 Sep 2013 11:55:27 GMT"
}
] | 1,379,376,000,000 | [
[
"Marceau",
"Gaétan",
"",
"INRIA Saclay - Ile de France, LRI"
],
[
"Savéant",
"Pierre",
"",
"INRIA Saclay - Ile de France, LRI"
],
[
"Schoenauer",
"Marc",
"",
"INRIA Saclay - Ile de France, LRI"
]
] |
1309.4085 | Gaetan Marceau | Ga\'etan Marceau (INRIA Saclay - Ile de France, LRI), Pierre
Sav\'eant, Marc Schoenauer (INRIA Saclay - Ile de France, LRI) | Multiobjective Tactical Planning under Uncertainty for Air Traffic Flow
and Capacity Management | IEEE Congress on Evolutionary Computation (2013). arXiv admin note:
substantial text overlap with arXiv:1309.3917 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate a method to deal with congestion of sectors and delays in the
tactical phase of air traffic flow and capacity management. It relies on
temporal objectives given for every point of the flight plans and shared among
the controllers in order to create a collaborative environment. This would
enhance the transition from the network view of the flow management to the
local view of air traffic control. Uncertainty is modeled at the trajectory
level with temporal information on the boundary points of the crossed sectors
and then, we infer the probabilistic occupancy count. Therefore, we can model
the accuracy of the trajectory prediction in the optimization process in order
to fix some safety margins. On the one hand, the more accurate our prediction,
the more efficient the proposed solutions will be, because of the tighter safety
margins. On the other hand, when uncertainty is not negligible, the proposed
solutions will be more robust to disruptions. Furthermore, a multiobjective
algorithm is used to find the tradeoff between the delays and congestion, which
are antagonist in airspace with high traffic density. The flow management
position can choose manually, or automatically with a preference-based
algorithm, the adequate solution. This method is tested against two instances,
one with 10 flights and 5 sectors and one with 300 flights and 16 sectors.
| [
{
"version": "v1",
"created": "Mon, 16 Sep 2013 11:53:39 GMT"
}
] | 1,379,462,400,000 | [
[
"Marceau",
"Gaétan",
"",
"INRIA Saclay - Ile de France, LRI"
],
[
"Savéant",
"Pierre",
"",
"INRIA Saclay - Ile de France, LRI"
],
[
"Schoenauer",
"Marc",
"",
"INRIA Saclay - Ile de France, LRI"
]
] |
1309.4408 | Percy Liang | Percy Liang | Lambda Dependency-Based Compositional Semantics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This short note presents a new formal language, lambda dependency-based
compositional semantics (lambda DCS) for representing logical forms in semantic
parsing. By eliminating variables and making existential quantification
implicit, lambda DCS logical forms are generally more compact than those in
lambda calculus.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2013 17:58:56 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Sep 2013 00:45:02 GMT"
}
] | 1,379,548,800,000 | [
[
"Liang",
"Percy",
""
]
] |
1309.4501 | Timothy Gowers | M. Ganesalingam and W. T. Gowers | A fully automatic problem solver with human-style output | 41 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | This paper describes a program that solves elementary mathematical problems,
mostly in metric space theory, and presents solutions that are hard to
distinguish from solutions that might be written by human mathematicians. The
program is part of a more general project, which we also discuss.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2013 22:56:06 GMT"
}
] | 1,379,548,800,000 | [
[
"Ganesalingam",
"M.",
""
],
[
"Gowers",
"W. T.",
""
]
] |
1309.5316 | Brigitte Charnomordic | Aur\'elie Th\'ebaut (MISTEA), Thibault Scholash, Brigitte Charnomordic
(MISTEA), Nadine Hilgert (MISTEA) | A modeling approach to design a software sensor and analyze agronomical
features - Application to sap flow and grape quality relationship | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work proposes a framework using temporal data and domain knowledge in
order to analyze complex agronomical features. The expertise is first
formalized in an ontology, under the form of concepts and relationships between
them, and then used in conjunction with raw data and mathematical models to
design a software sensor. Next the software sensor outputs are put in relation
to product quality, assessed by quantitative measurements. This requires the
use of advanced data analysis methods, such as functional regression. The
methodology is applied to a case study involving an experimental design in
French vineyards. The temporal data consist of sap flow measurements, and the
goal is to explain fruit quality (sugar concentration and weight), using vine's
water courses through the various vine phenological stages. The results are
discussed, as well as the method genericity and robustness.
| [
{
"version": "v1",
"created": "Fri, 20 Sep 2013 16:41:43 GMT"
}
] | 1,379,894,400,000 | [
[
"Thébaut",
"Aurélie",
"",
"MISTEA"
],
[
"Scholash",
"Thibault",
"",
"MISTEA"
],
[
"Charnomordic",
"Brigitte",
"",
"MISTEA"
],
[
"Hilgert",
"Nadine",
"",
"MISTEA"
]
] |
1309.5984 | Phillip Lord Dr | Phillip Lord | An evolutionary approach to Function | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | Background: Understanding the distinction between function and role is vexing
and difficult. While it appears to be useful, in practice this distinction is
hard to apply, particularly within biology.
Results: I take an evolutionary approach, considering a series of examples,
to develop and generate definitions for these concepts. I test them in practice
against the Ontology for Biomedical Investigations (OBI). Finally, I give an
axiomatisation and discuss methods for applying these definitions in practice.
Conclusions: The definitions in this paper are applicable, formalizing
current practice. As such, they make a significant contribution to the use of
these concepts within biomedical ontologies.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2013 21:15:10 GMT"
}
] | 1,380,067,200,000 | [
[
"Lord",
"Phillip",
""
]
] |
1309.6226 | Claus-Peter Wirth | J Strother Moore, Claus-Peter Wirth | Automation of Mathematical Induction as part of the History of Logic | ii+107 pages | IfCoLog Journal of Logics and their Applications, Vol. 4, number
5, pp. 1505-1634 (2017) | null | SEKI-Report SR-2013-02. ISSN 1437--4447 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We review the history of the automation of mathematical induction.
| [
{
"version": "v1",
"created": "Tue, 24 Sep 2013 15:51:43 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Oct 2013 18:40:28 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jan 2014 19:30:52 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Mar 2014 20:04:26 GMT"
},
{
"version": "v5",
"created": "Mon, 28 Jul 2014 19:20:22 GMT"
}
] | 1,501,804,800,000 | [
[
"Moore",
"J Strother",
""
],
[
"Wirth",
"Claus-Peter",
""
]
] |
1309.6433 | Mohammad Bazmara | Mohammad Bazmara, Shahram Jafari, Fatemeh Pasand | A Fuzzy expert system for goalkeeper quality recognition | 5 pages | IJCSI International Journal of Computer Science Issues, Vol. 9,
Issue 5, No 1, September 2012 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Goalkeeper (GK) is an expert in soccer and goalkeeping is a complete
professional job. In fact, achieving success seems impossible without a
reliable GK. His effect on successes and failures is more dominant than that of
other players. The most visible mistakes in a game are those of the goalkeeper. In
this paper, a fuzzy expert system is used as a suitable tool to study the quality
of a goalkeeper and compare it with others. Previously done researches are used
to find the goalkeepers' indexes in soccer. Soccer experts have found that a
successful GK should have some qualifications. A new pattern is offered here
which is called "Soccer goalkeeper quality recognition using fuzzy expert
systems". This pattern has some important capabilities. Firstly, among some
goalkeepers, the one with the best quality for the main team lineup can be
chosen. Secondly, the need for expert coaches to choose a GK using their
intuition and experience decreases greatly. Thirdly, in the evaluation of a GK,
quantitative criteria can be included, and finally this pattern is simple and
easy to understand.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2013 09:05:32 GMT"
}
] | 1,380,153,600,000 | [
[
"Bazmara",
"Mohammad",
""
],
[
"Jafari",
"Shahram",
""
],
[
"Pasand",
"Fatemeh",
""
]
] |
1309.6816 | Vaishak Belle | Vaishak Belle, Hector Levesque | Reasoning about Probabilities in Dynamic Systems using Goal Regression | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-62-71 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning about degrees of belief in uncertain dynamic worlds is fundamental
to many applications, such as robotics and planning, where actions modify state
properties and sensors provide measurements, both of which are prone to noise.
With the exception of limited cases such as Gaussian processes over linear
phenomena, belief state evolution can be complex and hard to reason with in a
general way. This paper proposes a framework with new results that allows the
reduction of subjective probabilities after sensing and acting to questions
about the initial state only. We build on an expressive probabilistic
first-order logical account by Bacchus, Halpern and Levesque, resulting in a
methodology that, in principle, can be coupled with a variety of existing
inference solutions.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:30:21 GMT"
}
] | 1,380,240,000,000 | [
[
"Belle",
"Vaishak",
""
],
[
"Levesque",
"Hector",
""
]
] |
1309.6817 | Damien Bigot | Damien Bigot, Bruno Zanuttini, Helene Fargier, Jerome Mengin | Probabilistic Conditional Preference Networks | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-72-81 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to represent the preferences of a group of individuals, we introduce
Probabilistic CP-nets (PCP-nets). PCP-nets provide a compact language for
representing probability distributions over preference orderings. We argue that
they are useful for aggregating preferences or modelling noisy preferences.
Then we give efficient algorithms for the main reasoning problems, namely for
computing the probability that a given outcome is preferred to another one, and
the probability that a given outcome is optimal. As a by-product, we obtain an
unexpected linear-time algorithm for checking dominance in a standard,
tree-structured CP-net.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:34:49 GMT"
}
] | 1,380,240,000,000 | [
[
"Bigot",
"Damien",
""
],
[
"Zanuttini",
"Bruno",
""
],
[
"Fargier",
"Helene",
""
],
[
"Mengin",
"Jerome",
""
]
] |
1309.6822 | Hung Bui | Hung Bui, Tuyen Huynh, Sebastian Riedel | Automorphism Groups of Graphical Models and Lifted Variational Inference | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-132-141 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using the theory of group action, we first introduce the concept of the
automorphism group of an exponential family or a graphical model, thus
formalizing the general notion of symmetry of a probabilistic model. This
automorphism group provides a precise mathematical framework for lifted
inference in the general exponential family. Its group action partitions the
set of random variables and feature functions into equivalent classes (called
orbits) having identical marginals and expectations. Then the inference problem
is effectively reduced to that of computing marginals or expectations for each
class, thus avoiding the need to deal with each individual variable or feature.
We demonstrate the usefulness of this general framework in lifting two classes
of variational approximation for maximum a posteriori (MAP) inference: local
linear programming (LP) relaxation and local LP relaxation with cycle
constraints; the latter yields the first lifted variational inference algorithm
that operates on a bound tighter than the local constraints.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:36:16 GMT"
}
] | 1,380,240,000,000 | [
[
"Bui",
"Hung",
""
],
[
"Huynh",
"Tuyen",
""
],
[
"Riedel",
"Sebastian",
""
]
] |
1309.6824 | Tom Claassen | Tom Claassen, Joris Mooij, Tom Heskes | Learning Sparse Causal Models is not NP-hard | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-172-181 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper shows that causal model discovery is not an NP-hard problem, in
the sense that for sparse graphs bounded by node degree k, the sound and
complete causal model can be obtained in worst-case order N^{2(k+2)}
independence tests, even when latent variables and selection bias may be
present. We present a modification of the well-known FCI algorithm that
implements the method for an independence oracle, and suggest improvements for
sample/real-world data versions. It does not contradict any known hardness
results, and does not solve an NP-hard problem: it just proves that sparse
causal discovery is perhaps more complicated, but not as hard as learning
minimal Bayesian networks.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:36:47 GMT"
}
] | 1,380,240,000,000 | [
[
"Claassen",
"Tom",
""
],
[
"Mooij",
"Joris",
""
],
[
"Heskes",
"Tom",
""
]
] |
1309.6825 | James Cussens | Mark Bartlett, James Cussens | Advances in Bayesian Network Learning using Integer Programming | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-182-191 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of learning Bayesian networks (BNs) from complete
discrete data. This problem of discrete optimisation is formulated as an
integer program (IP). We describe the various steps we have taken to allow
efficient solving of this IP. These are (i) efficient search for cutting
planes, (ii) a fast greedy algorithm to find high-scoring (perhaps not optimal)
BNs and (iii) tightening the linear relaxation of the IP. After relating this
BN learning problem to set covering and the multidimensional 0-1 knapsack
problem, we present our empirical results. These show improvements, sometimes
dramatic, over earlier results.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:37:01 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Mar 2015 13:56:08 GMT"
}
] | 1,427,155,200,000 | [
[
"Bartlett",
"Mark",
""
],
[
"Cussens",
"James",
""
]
] |
1309.6826 | Nicolas Drougard | Nicolas Drougard, Florent Teichteil-Konigsbuch, Jean-Loup Farges,
Didier Dubois | Qualitative Possibilistic Mixed-Observable MDPs | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-192-201 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Possibilistic and qualitative POMDPs (pi-POMDPs) are counterparts of POMDPs
used to model situations where the agent's initial belief or observation
probabilities are imprecise due to lack of past experiences or insufficient
data collection. However, like probabilistic POMDPs, optimally solving
pi-POMDPs is intractable: the finite belief state space grows exponentially
with the number of system states. In this paper, a possibilistic version of
Mixed-Observable MDPs is presented to get around this issue: the complexity of
solving pi-POMDPs, some state variables of which are fully observable, can be
then dramatically reduced. A value iteration algorithm for this new formulation
under infinite horizon is next proposed and the optimality of the returned
policy (for a specified criterion) is shown assuming the existence of a "stay"
action in some goal states. Experimental work finally shows that this
possibilistic model outperforms probabilistic POMDPs commonly used in robotics,
for a target recognition problem where the agent's observations are imprecise.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:37:15 GMT"
}
] | 1,380,240,000,000 | [
[
"Drougard",
"Nicolas",
""
],
[
"Teichteil-Konigsbuch",
"Florent",
""
],
[
"Farges",
"Jean-Loup",
""
],
[
"Dubois",
"Didier",
""
]
] |
1309.6827 | Stefano Ermon | Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, Bart Selman | Optimization With Parity Constraints: From Binary Codes to Discrete
Integration | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-202-211 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many probabilistic inference tasks involve summations over exponentially
large sets. Recently, it has been shown that these problems can be reduced to
solving a polynomial number of MAP inference queries for a model augmented with
randomly generated parity constraints. By exploiting a connection with
max-likelihood decoding of binary codes, we show that these optimizations are
computationally hard. Inspired by iterative message passing decoding
algorithms, we propose an Integer Linear Programming (ILP) formulation for the
problem, enhanced with new sparsification techniques to improve decoding
performance. By solving the ILP through a sequence of LP relaxations, we get
both lower and upper bounds on the partition function, which hold with high
probability and are much tighter than those obtained with variational methods.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:37:33 GMT"
}
] | 1,380,240,000,000 | [
[
"Ermon",
"Stefano",
""
],
[
"Gomes",
"Carla P.",
""
],
[
"Sabharwal",
"Ashish",
""
],
[
"Selman",
"Bart",
""
]
] |
1309.6828 | Zohar Feldman | Zohar Feldman, Carmel Domshlak | Monte-Carlo Planning: Theoretically Fast Convergence Meets Practical
Efficiency | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-212-221 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Popular Monte-Carlo tree search (MCTS) algorithms for online planning, such
as epsilon-greedy tree search and UCT, aim at rapidly identifying a reasonably
good action, but provide rather poor worst-case guarantees on performance
improvement over time. In contrast, a recently introduced MCTS algorithm BRUE
guarantees exponential-rate improvement over time, yet it is not geared towards
identifying reasonably good choices right from the start. We take a stand on the
individual strengths of these two classes of algorithms, and show how they can
be effectively connected. We then rationalize a principle of "selective tree
expansion", and suggest a concrete implementation of this principle within
MCTS. The resulting algorithms compete favorably with other MCTS algorithms
under short planning times, while preserving the attractive convergence
properties of BRUE.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:37:50 GMT"
}
] | 1,380,240,000,000 | [
[
"Feldman",
"Zohar",
""
],
[
"Domshlak",
"Carmel",
""
]
] |
1309.6832 | Vibhav Gogate | Vibhav Gogate, Pedro Domingos | Structured Message Passing | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-252-261 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present structured message passing (SMP), a unifying
framework for approximate inference algorithms that take advantage of
structured representations such as algebraic decision diagrams and sparse hash
tables. These representations can yield significant time and space savings over
the conventional tabular representation when the message has several identical
values (context-specific independence) or zeros (determinism) or both in its
range. Therefore, in order to fully exploit the power of structured
representations, we propose to artificially introduce context-specific
independence and determinism in the messages. This yields a new class of
powerful approximate inference algorithms which includes popular algorithms
such as cluster-graph Belief propagation (BP), expectation propagation and
particle BP as special cases. We show that our new algorithms introduce several
interesting bias-variance trade-offs. We evaluate these trade-offs empirically
and demonstrate that our new algorithms are more accurate and scalable than
state-of-the-art techniques.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:39:56 GMT"
}
] | 1,380,240,000,000 | [
[
"Gogate",
"Vibhav",
""
],
[
"Domingos",
"Pedro",
""
]
] |
1309.6836 | Antti Hyttinen | Antti Hyttinen, Patrik O. Hoyer, Frederick Eberhardt, Matti Jarvisalo | Discovering Cyclic Causal Models with Latent Variables: A General
SAT-Based Procedure | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-301-310 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a very general approach to learning the structure of causal models
based on d-separation constraints, obtained from any given set of overlapping
passive observational or experimental data sets. The procedure allows for both
directed cycles (feedback loops) and the presence of latent variables. Our
approach is based on a logical representation of causal pathways, which permits
the integration of quite general background knowledge, and inference is
performed using a Boolean satisfiability (SAT) solver. The procedure is
complete in that it exhausts the available information on whether any given
edge can be determined to be present or absent, and returns "unknown"
otherwise. Many existing constraint-based causal discovery algorithms can be
seen as special cases, tailored to circumstances in which one or more
restricting assumptions apply. Simulations illustrate the effect of these
assumptions on discovery and how the present algorithm scales.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:41:22 GMT"
}
] | 1,380,240,000,000 | [
[
"Hyttinen",
"Antti",
""
],
[
"Hoyer",
"Patrik O.",
""
],
[
"Eberhardt",
"Frederick",
""
],
[
"Jarvisalo",
"Matti",
""
]
] |
1309.6839 | Arindam Khaled | Arindam Khaled, Eric A. Hansen, Changhe Yuan | Solving Limited-Memory Influence Diagrams Using Branch-and-Bound Search | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-331-340 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A limited-memory influence diagram (LIMID) generalizes a traditional
influence diagram by relaxing the assumptions of regularity and no-forgetting,
allowing a wider range of decision problems to be modeled. Algorithms for
solving traditional influence diagrams are not easily generalized to solve
LIMIDs, however, and only recently have exact algorithms for solving LIMIDs
been developed. In this paper, we introduce an exact algorithm for solving
LIMIDs that is based on branch-and-bound search. Our approach is related to the
approach of solving an influence diagram by converting it to an equivalent
decision tree, with the difference that the LIMID is converted to a much
smaller decision graph that can be searched more efficiently.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:41:55 GMT"
}
] | 1,380,240,000,000 | [
[
"Khaled",
"Arindam",
""
],
[
"Hansen",
"Eric A.",
""
],
[
"Yuan",
"Changhe",
""
]
] |
1309.6842 | Sanghack Lee | Sanghack Lee, Vasant Honavar | Causal Transportability of Experiments on Controllable Subsets of
Variables: z-Transportability | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-361-370 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce z-transportability, the problem of estimating the causal effect
of a set of variables X on another set of variables Y in a target domain from
experiments on any subset of controllable variables Z where Z is an arbitrary
subset of observable variables V in a source domain. z-Transportability
generalizes z-identifiability, the problem of estimating in a given domain the
causal effect of X on Y from surrogate experiments on a set of variables Z such
that Z is disjoint from X. z-Transportability also generalizes
transportability which requires that the causal effect of X on Y in the target
domain be estimable from experiments on any subset of all observable variables
in the source domain. We first generalize z-identifiability to allow cases
where Z is not necessarily disjoint from X. Then, we establish a necessary and
sufficient condition for z-transportability in terms of generalized
z-identifiability and transportability. We provide a correct and complete
algorithm that determines whether a causal effect is z-transportable; and if it
is, produces a transport formula, that is, a recipe for estimating the causal
effect of X on Y in the target domain using information elicited from the
results of experimental manipulations of Z in the source domain and
observational data from the target domain. Our results also show that
do-calculus is complete for z-transportability.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:42:52 GMT"
}
] | 1,380,240,000,000 | [
[
"Lee",
"Sanghack",
""
],
[
"Honavar",
"Vasant",
""
]
] |
1309.6843 | Marc Maier | Marc Maier, Katerina Marazopoulou, David Arbour, David Jensen | A Sound and Complete Algorithm for Learning Causal Models from
Relational Data | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-371-380 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The PC algorithm learns maximally oriented causal Bayesian networks. However,
there is no equivalent complete algorithm for learning the structure of
relational models, a more expressive generalization of Bayesian networks.
Recent developments in the theory and representation of relational models
support lifted reasoning about conditional independence. This enables a
powerful constraint for orienting bivariate dependencies and forms the basis of
a new algorithm for learning structure. We present the relational causal
discovery (RCD) algorithm that learns causal relational models. We prove that
RCD is sound and complete, and we present empirical results that demonstrate
effectiveness.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:43:12 GMT"
}
] | 1,380,240,000,000 | [
[
"Maier",
"Marc",
""
],
[
"Marazopoulou",
"Katerina",
""
],
[
"Arbour",
"David",
""
],
[
"Jensen",
"David",
""
]
] |
1309.6844 | Brandon Malone | Brandon Malone, Changhe Yuan | Evaluating Anytime Algorithms for Learning Optimal Bayesian Networks | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-381-390 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exact algorithms for learning Bayesian networks guarantee to find provably
optimal networks. However, they may fail in difficult learning tasks due to
limited time or memory. In this research we adapt several anytime heuristic
search-based algorithms to learn Bayesian networks. These algorithms find
high-quality solutions quickly, and continually improve the incumbent solution
or prove its optimality before resources are exhausted. Empirical results show
that the anytime window A* algorithm usually finds higher-quality, often
optimal, networks more quickly than other approaches. The results also show
that, surprisingly, while generating networks with few parents per variable are
structurally simpler, they are harder to learn than complex generating networks
with more parents per variable.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:43:51 GMT"
}
] | 1,380,240,000,000 | [
[
"Malone",
"Brandon",
""
],
[
"Yuan",
"Changhe",
""
]
] |
1309.6845 | Denis D. Maua | Denis D. Maua, Cassio Polpo de Campos, Alessio Benavoli, Alessandro
Antonucci | On the Complexity of Strong and Epistemic Credal Networks | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-391-400 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Credal networks are graph-based statistical models whose parameters take
values in a set, instead of being sharply specified as in traditional
statistical models (e.g., Bayesian networks). The computational complexity of
inferences on such models depends on the irrelevance/independence concept
adopted. In this paper, we study inferential complexity under the concepts of
epistemic irrelevance and strong independence. We show that inferences under
strong independence are NP-hard even in trees with ternary variables. We prove
that under epistemic irrelevance the polynomial time complexity of inferences
in credal trees is not likely to extend to more general models (e.g. singly
connected networks). These results clearly distinguish networks that admit
efficient inferences and those where inferences are most likely hard, and
settle several open questions regarding computational complexity.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:44:14 GMT"
}
] | 1,380,240,000,000 | [
[
"Maua",
"Denis D.",
""
],
[
"de Campos",
"Cassio Polpo",
""
],
[
"Benavoli",
"Alessio",
""
],
[
"Antonucci",
"Alessandro",
""
]
] |
1309.6846 | James McInerney | James McInerney, Alex Rogers, Nicholas R. Jennings | Learning Periodic Human Behaviour Models from Sparse Data for
Crowdsourcing Aid Delivery in Developing Countries | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-401-410 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many developing countries, half the population lives in rural locations,
where access to essentials such as school materials, mosquito nets, and medical
supplies is restricted. We propose an alternative method of distribution (to
standard road delivery) in which the existing mobility habits of a local
population are leveraged to deliver aid, which raises two technical challenges
in the areas of optimisation and learning. For optimisation, a standard Markov
decision process applied to this problem is intractable, so we provide an exact
formulation that takes advantage of the periodicities in human location
behaviour. To learn such behaviour models from sparse data (i.e., cell tower
observations), we develop a Bayesian model of human mobility. Using real cell
tower data of the mobility behaviour of 50,000 individuals in Ivory Coast, we
find that our model outperforms the state of the art approaches in mobility
prediction by at least 25% (in held-out data likelihood). Furthermore, when
incorporating mobility prediction with our MDP approach, we find a 81.3%
reduction in total delivery time versus routine planning that minimises just
the number of participants in the solution path.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:44:36 GMT"
}
] | 1,380,240,000,000 | [
[
"McInerney",
"James",
""
],
[
"Rogers",
"Alex",
""
],
[
"Jennings",
"Nicholas R.",
""
]
] |
1309.6848 | Elad Mezuman | Elad Mezuman, Daniel Tarlow, Amir Globerson, Yair Weiss | Tighter Linear Program Relaxations for High Order Graphical Models | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-421-430 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphical models with High Order Potentials (HOPs) have received considerable
interest in recent years. While there are a variety of approaches to inference
in these models, nearly all of them amount to solving a linear program (LP)
relaxation with unary consistency constraints between the HOP and the
individual variables. In many cases, the resulting relaxations are loose, and
in these cases the results of inference can be poor. It is thus desirable to
look for more accurate ways of performing inference in these models. In this
work, we study the LP relaxations that result from enforcing additional
consistency constraints between the HOP and the rest of the model. We address
theoretical questions about the strength of the resulting relaxations compared
to the relaxations that arise in standard approaches, and we develop practical
and efficient message passing algorithms for optimizing the LPs. Empirically,
we show that the LPs with additional consistency constraints lead to more
accurate inference on some challenging problems that include a combination of
low order and high order terms.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:45:22 GMT"
}
] | 1,380,240,000,000 | [
[
"Mezuman",
"Elad",
""
],
[
"Tarlow",
"Daniel",
""
],
[
"Globerson",
"Amir",
""
],
[
"Weiss",
"Yair",
""
]
] |
1309.6855 | Michael Pacer | Michael Pacer, Joseph Williams, Xi Chen, Tania Lombrozo, Thomas
Griffiths | Evaluating computational models of explanation using human judgments | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-498-507 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We evaluate four computational models of explanation in Bayesian networks by
comparing model predictions to human judgments. In two experiments, we present
human participants with causal structures for which the models make divergent
predictions and either solicit the best explanation for an observed event
(Experiment 1) or have participants rate provided explanations for an observed
event (Experiment 2). Across two versions of two causal structures and across
both experiments, we find that the Causal Explanation Tree and Most Relevant
Explanation models provide better fits to human data than either Most Probable
Explanation or Explanation Tree models. We identify strengths and shortcomings
of these models and what they can reveal about human explanation. We conclude
by suggesting the value of pursuing computational and psychological
investigations of explanation in parallel.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:47:15 GMT"
}
] | 1,380,240,000,000 | [
[
"Pacer",
"Michael",
""
],
[
"Williams",
"Joseph",
""
],
[
"Chen",
"Xi",
""
],
[
"Lombrozo",
"Tania",
""
],
[
"Griffiths",
"Thomas",
""
]
] |
1309.6856 | Patrice Perny | Patrice Perny, Paul Weng, Judy Goldsmith, Josiah Hanna | Approximation of Lorenz-Optimal Solutions in Multiobjective Markov
Decision Processes | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-508-517 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is devoted to fair optimization in Multiobjective Markov Decision
Processes (MOMDPs). A MOMDP is an extension of the MDP model for planning under
uncertainty while trying to optimize several reward functions simultaneously.
This applies to multiagent problems when rewards define individual utility
functions, or in multicriteria problems when rewards refer to different
features. In this setting, we study the determination of policies leading to
Lorenz-non-dominated tradeoffs. Lorenz dominance is a refinement of Pareto
dominance that was introduced in Social Choice for the measurement of
inequalities. In this paper, we introduce methods to efficiently approximate
the sets of Lorenz-non-dominated solutions of infinite-horizon, discounted
MOMDPs. The approximations are polynomial-sized subsets of those solutions.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:48:02 GMT"
}
] | 1,380,240,000,000 | [
[
"Perny",
"Patrice",
""
],
[
"Weng",
"Paul",
""
],
[
"Goldsmith",
"Judy",
""
],
[
"Hanna",
"Josiah",
""
]
] |
1309.6857 | Marek Petrik | Marek Petrik, Dharmashankar Subramanian, Janusz Marecki | Solution Methods for Constrained Markov Decision Process with Continuous
Probability Modulation | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-518-526 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose solution methods for previously-unsolved constrained MDPs in which
actions can continuously modify the transition probabilities within some
acceptable sets. While many methods have been proposed to solve regular MDPs
with large state sets, there are few practical approaches for solving
constrained MDPs with large action sets. In particular, we show that the
continuous action sets can be replaced by their extreme points when the rewards
are linear in the modulation. We also develop a tractable optimization
formulation for concave reward functions and, surprisingly, also extend it to
non-concave reward functions by using their concave envelopes. We evaluate the
effectiveness of the approach on the problem of managing delinquencies in a
portfolio of loans.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:48:47 GMT"
}
] | 1,380,240,000,000 | [
[
"Petrik",
"Marek",
""
],
[
"Subramanian",
"Dharmashankar",
""
],
[
"Marecki",
"Janusz",
""
]
] |
1309.6864 | Hossein Azari Soufiani | Hossein Azari Soufiani, David C. Parkes, Lirong Xia | Preference Elicitation For General Random Utility Models | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-596-605 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses General Random Utility Models (GRUMs). These are a
class of parametric models that generate partial ranks over alternatives given
attributes of agents and alternatives. We propose two preference elicitation
schemes for GRUMs developed from principles in Bayesian experimental design, one
for social choice and the other for personalized choice. We couple this with a
general Monte-Carlo-Expectation-Maximization (MC-EM) based algorithm for MAP
inference under GRUMs. We also prove unimodality of the likelihood functions
for a class of GRUMs. We examine the performance of various criteria by
experimental studies, which show that the proposed elicitation scheme increases
the precision of estimation.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:50:37 GMT"
}
] | 1,380,240,000,000 | [
[
"Soufiani",
"Hossein Azari",
""
],
[
"Parkes",
"David C.",
""
],
[
"Xia",
"Lirong",
""
]
] |
1309.6870 | Deepak Venugopal | Deepak Venugopal, Vibhav Gogate | Dynamic Blocking and Collapsing for Gibbs Sampling | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-664-673 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate combining blocking and collapsing -- two widely
used strategies for improving the accuracy of Gibbs sampling -- in the context
of probabilistic graphical models (PGMs). We show that combining them is not
straightforward because collapsing (or eliminating variables) introduces new
dependencies in the PGM and in computation-limited settings, this may adversely
affect blocking. We therefore propose a principled approach for tackling this
problem. Specifically, we develop two scoring functions, one each for blocking
and collapsing, and formulate the problem of partitioning the variables in the
PGM into blocked and collapsed subsets as simultaneously maximizing both
scoring functions (i.e., a multi-objective optimization problem). We propose a
dynamic, greedy algorithm for approximately solving this intractable
optimization problem. Our dynamic algorithm periodically updates the
partitioning into blocked and collapsed variables by leveraging correlation
statistics gathered from the generated samples and enables rapid mixing by
blocking together and collapsing highly correlated variables. We demonstrate
experimentally the clear benefit of our dynamic approach: as more samples are
drawn, our dynamic approach significantly outperforms static graph-based
approaches by an order of magnitude in terms of accuracy.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:53:01 GMT"
}
] | 1,380,240,000,000 | [
[
"Venugopal",
"Deepak",
""
],
[
"Gogate",
"Vibhav",
""
]
] |
1309.6871 | Luis Gustavo Vianna | Luis Gustavo Vianna, Scott Sanner, Leliane Nunes de Barros | Bounded Approximate Symbolic Dynamic Programming for Hybrid MDPs | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-674-683 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in symbolic dynamic programming (SDP) combined with the
extended algebraic decision diagram (XADD) data structure have provided exact
solutions for mixed discrete and continuous (hybrid) MDPs with piecewise linear
dynamics and continuous actions. Since XADD-based exact solutions may grow
intractably large for many problems, we propose a bounded error compression
technique for XADDs that involves the solution of a constrained bilinear saddle
point problem. Fortuitously, we show that given the special structure of this
problem, it can be expressed as a bilevel linear programming problem and solved
to optimality in finite time via constraint generation, despite having an
infinite set of constraints. This solution permits the use of efficient linear
program solvers for XADD compression and enables a novel class of bounded
approximate SDP algorithms for hybrid MDPs that empirically offers
order-of-magnitude speedups over the exact solution in exchange for a small
approximation error.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 12:53:25 GMT"
}
] | 1,380,240,000,000 | [
[
"Vianna",
"Luis Gustavo",
""
],
[
"Sanner",
"Scott",
""
],
[
"de Barros",
"Leliane Nunes",
""
]
] |
1309.6989 | Keyan Zahedi | Keyan Zahedi and Georg Martius and Nihat Ay | Linear combination of one-step predictive information with an external
reward in an episodic policy gradient setting: a critical analysis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the main challenges in the field of embodied artificial intelligence
is the open-ended autonomous learning of complex behaviours. Our approach is to
use task-independent, information-driven intrinsic motivation(s) to support
task-dependent learning. The work presented here is a preliminary step in which
we investigate the predictive information (the mutual information of the past
and future of the sensor stream) as an intrinsic drive, ideally supporting any
kind of task acquisition. Previous experiments have shown that the predictive
information (PI) is a good candidate to support autonomous, open-ended learning
of complex behaviours, because a maximisation of the PI corresponds to an
exploration of morphology- and environment-dependent behavioural regularities.
The idea is that these regularities can then be exploited in order to solve any
given task. Three different experiments are presented and their results lead to
the conclusion that the linear combination of the one-step PI with an external
reward function is not generally recommended in an episodic policy gradient
setting. Only for hard tasks can a great speed-up be achieved, at the cost of
a loss in asymptotic performance.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2013 17:44:59 GMT"
}
] | 1,380,240,000,000 | [
[
"Zahedi",
"Keyan",
""
],
[
"Martius",
"Georg",
""
],
[
"Ay",
"Nihat",
""
]
] |
1309.7971 | Ann Nicholson | Ann Nicholson and Padhriac Smyth | Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial
Intelligence (2013) | null | null | null | UAI2013 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the Proceedings of the Twenty-Ninth Conference on Uncertainty in
Artificial Intelligence, which was held in Bellevue, WA, August 11-15, 2013
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2013 19:16:53 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Aug 2014 03:58:06 GMT"
}
] | 1,409,270,400,000 | [
[
"Nicholson",
"Ann",
""
],
[
"Smyth",
"Padhriac",
""
]
] |
1310.0602 | Martin Josef Geiger | Martin Josef Geiger | Iterated Variable Neighborhood Search for the resource constrained
multi-mode multi-project scheduling problem | null | In: Graham Kendall, Greet Vanden Berghe, and Barry McCollum
(editors): Proceedings of the 6th Multidisciplinary International Conference
on Scheduling: Theory and Applications, August 27-29, 2013, Gent, Belgium,
pages 807-811 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The resource constrained multi-mode multi-project scheduling problem
(RCMMMPSP) is a notoriously difficult combinatorial optimization problem. For a
given set of activities, feasible execution mode assignments and execution
starting times must be found such that some optimization function, e.g. the
makespan, is optimized. When determining an optimal (or at least feasible)
assignment of decision variable values, a set of side constraints, such as
resource availabilities, precedence constraints, etc., has to be respected.
In 2013, the MISTA 2013 Challenge stipulated research in the RCMMMPSP. Its
goal was the solution of a given set of instances under running time
restrictions. We contributed to this challenge with the approach presented
here.
| [
{
"version": "v1",
"created": "Wed, 2 Oct 2013 07:18:34 GMT"
}
] | 1,380,758,400,000 | [
[
"Geiger",
"Martin Josef",
""
]
] |
1310.0927 | Jussi Rintanen | Jukka Corander, Tomi Janhunen, Jussi Rintanen, Henrik Nyman, Johan
Pensar | Learning Chordal Markov Networks by Constraint Satisfaction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the problem of learning the structure of a Markov network from
data. It is shown that the structure of such networks can be described in terms
of constraints which enables the use of existing solver technology with
optimization capabilities to compute optimal networks starting from initial
scores computed from the data. To achieve efficient encodings, we develop a
novel characterization of Markov network structure using a balancing condition
on the separators between cliques forming the network. The resulting
translations into propositional satisfiability and its extensions such as
maximum satisfiability, satisfiability modulo theories, and answer set
programming, enable us to prove the optimality of certain network structures
which have previously been found by stochastic search.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2013 09:01:39 GMT"
}
] | 1,380,844,800,000 | [
[
"Corander",
"Jukka",
""
],
[
"Janhunen",
"Tomi",
""
],
[
"Rintanen",
"Jussi",
""
],
[
"Nyman",
"Henrik",
""
],
[
"Pensar",
"Johan",
""
]
] |
1310.1328 | Ernest Davis | Ernest Davis | The Relevance of Proofs of the Rationality of Probability Theory to
Automated Reasoning and Cognitive Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of well-known theorems, such as Cox's theorem and de Finetti's
theorem, prove that any model of reasoning with uncertain information that
satisfies specified conditions of "rationality" must satisfy the axioms of
probability theory. I argue here that these theorems do not in themselves
demonstrate that probabilistic models are in fact suitable for any specific
task in automated reasoning or plausible for cognitive models. First, the
theorems only establish that there exists some probabilistic model; they do not
establish that there exists a useful probabilistic model, i.e. one with a
tractably small number of numerical parameters and a large number of
independence assumptions. Second, there are in general many different
probabilistic models for a given situation, many of which may be far more
irrational, in the usual sense of the term, than a model that violates the
axioms of probability theory. I illustrate this second point with extended
examples of two tasks of induction, of similar structure, where the
reasonable probabilistic models are very different.
| [
{
"version": "v1",
"created": "Fri, 4 Oct 2013 16:04:08 GMT"
}
] | 1,381,104,000,000 | [
[
"Davis",
"Ernest",
""
]
] |
1310.2089 | Mir Mohammad Ettefagh | Habib Emdadi, Mahsa Yazdanian, Mir Mohammad Ettefagh and Mohammad-Reza
Feizi-Derakhshi | Double four-bar crank-slider mechanism dynamic balancing by
meta-heuristic algorithms | 18 pages, 19 figures | International Journal of Artificial Intelligence & Applications
(IJAIA), Vol. 4, No. 5, September 2013 | 10.5121/ijaia.2013.4501 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a new method for dynamic balancing of double four-bar crank
slider mechanism by meta-heuristic-based optimization algorithms is proposed.
For this purpose, a proper objective function which is necessary for balancing
of this mechanism and corresponding constraints has been obtained by dynamic
modeling of the mechanism. Then PSO, ABC, BGA and HGAPSO algorithms have been
applied for minimizing the defined cost function in optimization step. The
optimization results have been studied completely by extracting the cost
function, fitness, convergence speed and runtime values of applied algorithms.
It has been shown that PSO and ABC are more efficient than BGA and HGAPSO in
terms of convergence speed and result quality. Also, a laboratory scale
experimental double four-bar crank-slider mechanism was provided for validating
the proposed balancing method practically.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2013 10:47:32 GMT"
}
] | 1,381,276,800,000 | [
[
"Emdadi",
"Habib",
""
],
[
"Yazdanian",
"Mahsa",
""
],
[
"Ettefagh",
"Mir Mohammad",
""
],
[
"Feizi-Derakhshi",
"Mohammad-Reza",
""
]
] |
1310.2298 | Anton Belov | Anton Belov and Antonio Morgado and Joao Marques-Silva | SAT-based Preprocessing for MaxSAT (extended version) | Extended version of LPAR'19 paper with the same title | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art algorithms for industrial instances of MaxSAT problem rely
on iterative calls to a SAT solver. Preprocessing is crucial for the
acceleration of SAT solving, and the key preprocessing techniques rely on the
application of resolution and subsumption elimination. Additionally,
satisfiability-preserving clause elimination procedures are often used. Since
MaxSAT computation typically involves a large number of SAT calls, we are
interested in whether an input instance to a MaxSAT problem can be preprocessed
up-front, i.e. prior to running the MaxSAT solver, rather than (or, in addition
to) during each iterative SAT solver call. The key requirement in this setting
is that the preprocessing has to be sound, i.e. so that the solution can be
reconstructed correctly and efficiently after the execution of a MaxSAT
algorithm on the preprocessed instance. While, as we demonstrate in this paper,
certain clause elimination procedures are sound for MaxSAT, it is well-known
that this is not the case for resolution and subsumption elimination. In this
paper we show how to adapt these preprocessing techniques to MaxSAT. To achieve
this we recast the MaxSAT problem in a recently introduced labelled-CNF
framework, and show that within the framework the preprocessing techniques can
be applied soundly. Furthermore, we show that MaxSAT algorithms restated in the
framework have a natural implementation on top of an incremental SAT solver. We
evaluate the prototype implementation of a MaxSAT algorithm WMSU1 in this
setting, demonstrate the effectiveness of preprocessing, and show overall
improvement with respect to non-incremental versions of the algorithm on some
classes of problems.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2013 22:33:38 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Oct 2013 09:15:07 GMT"
}
] | 1,381,968,000,000 | [
[
"Belov",
"Anton",
""
],
[
"Morgado",
"Antonio",
""
],
[
"Marques-Silva",
"Joao",
""
]
] |
1310.2396 | Hua Yao | Hua Yao, William Zhu | A necessary and sufficient condition for two relations to induce the
same definable set family | 13 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Pawlak rough sets, the structure of the definable set families is simple
and clear, but in generalized rough sets, the structure of the definable set
families is a bit more complex. There has been much research work focusing on
this topic. However, as a fundamental issue in relation based rough sets, under
what condition two relations induce the same definable set family has not been
discussed. In this paper, based on the concept of the closure of relations, we
present a necessary and sufficient condition for two relations to induce the
same definable set family.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2013 08:46:01 GMT"
}
] | 1,381,363,200,000 | [
[
"Yao",
"Hua",
""
],
[
"Zhu",
"William",
""
]
] |
1310.2493 | George Vouros VOUROS GEORGE | George A. Vouros and Georgios Santipantakis | Combining Ontologies with Correspondences and Link Relations: The E-SHIQ
Representation Framework | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Combining knowledge and beliefs of autonomous peers in distributed settings,
is a major challenge. In this paper we consider peers that combine ontologies
and reason jointly with their coupled knowledge. Ontologies are within the SHIQ
fragment of Description Logics. Although there are several representation
frameworks for modular Description Logics, each one makes crucial assumptions
concerning the subjectivity of peers' knowledge, the relation between the
domains over which ontologies are interpreted, the expressivity of the
constructors used for combining knowledge, and the way peers share their
knowledge. However in settings where autonomous peers can evolve and extend
their knowledge and beliefs independently from others, these assumptions may
not hold. In this article, we motivate the need for a representation
framework that allows peers to combine their knowledge in various ways,
maintaining the subjectivity of their own knowledge and beliefs, and that
reason collaboratively, constructing a tableau that is distributed among them,
jointly. The paper presents the proposed E-SHIQ representation framework, the
implementation of the E-SHIQ distributed tableau reasoner, and discusses the
efficiency of this reasoner.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2013 14:26:23 GMT"
}
] | 1,381,363,200,000 | [
[
"Vouros",
"George A.",
""
],
[
"Santipantakis",
"Georgios",
""
]
] |
1310.2743 | Valmi Dufour-Lussier | Valmi Dufour-Lussier (INRIA Nancy - Grand Est / LORIA), Florence Le
Ber (LHyGeS), Jean Lieber (INRIA Nancy - Grand Est / LORIA), Laura Martin
(ASTER Mirecourt) | Case Adaptation with Qualitative Algebras | null | International Joint Conferences on Artificial Intelligence (2013)
3002-3006 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes an approach for the adaptation of spatial or temporal
cases in a case-based reasoning system. Qualitative algebras are used as
spatial and temporal knowledge representation languages. The intuition behind
this adaptation approach is to apply a substitution and then repair potential
inconsistencies, thanks to belief revision on qualitative algebras. A temporal
example from the cooking domain is given. (The paper on which this extended
abstract is based was the recipient of the best paper award of the 2012
International Conference on Case-Based Reasoning.)
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2013 09:28:20 GMT"
}
] | 1,381,449,600,000 | [
[
"Dufour-Lussier",
"Valmi",
"",
"INRIA Nancy - Grand Est / LORIA"
],
[
"Ber",
"Florence Le",
"",
"LHyGeS"
],
[
"Lieber",
"Jean",
"",
"INRIA Nancy - Grand Est / LORIA"
],
[
"Martin",
"Laura",
"",
"ASTER Mirecourt"
]
] |
1310.3174 | Manuel Lopes | Benjamin Clement, Didier Roy, Pierre-Yves Oudeyer, Manuel Lopes | Multi-Armed Bandits for Intelligent Tutoring Systems | null | Journal of Educational Data Mining, 7(2), 20-48 (2015) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach to Intelligent Tutoring Systems which adaptively
personalizes sequences of learning activities to maximize skills acquired by
students, taking into account the limited time and motivational resources. At a
given point in time, the system proposes to the students the activity which
makes them progress faster. We introduce two algorithms that rely on the
empirical estimation of the learning progress, RiARiT that uses information
about the difficulty of each exercise and ZPDES that uses much less knowledge
about the problem.
The system is based on the combination of three approaches. First, it
leverages recent models of intrinsically motivated learning by transposing them
to active teaching, relying on empirical estimation of learning progress
provided by specific activities to particular students. Second, it uses
state-of-the-art Multi-Arm Bandit (MAB) techniques to efficiently manage the
exploration/exploitation challenge of this optimization process. Third, it
leverages expert knowledge to constrain and bootstrap initial exploration of
the MAB, while requiring only coarse guidance information of the expert and
allowing the system to deal with didactic gaps in its knowledge. The system is
evaluated in a scenario where 7-8 year old schoolchildren learn how to
decompose numbers while manipulating money. Systematic experiments are
presented with simulated students, followed by results of a user study across a
population of 400 school children.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2013 15:47:41 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jun 2015 21:38:13 GMT"
}
] | 1,563,321,600,000 | [
[
"Clement",
"Benjamin",
""
],
[
"Roy",
"Didier",
""
],
[
"Oudeyer",
"Pierre-Yves",
""
],
[
"Lopes",
"Manuel",
""
]
] |
1310.4086 | Liane Gabora | Liane Gabora, Wei Wen Chia, and Hadi Firouzi | A Computational Model of Two Cognitive Transitions Underlying Cultural
Evolution | arXiv admin note: text overlap with arXiv:1309.7407, arXiv:1308.5032,
arXiv:1310.3781 | (2013). Proceedings of the Annual Meeting of the Cognitive Science
Society. July 31-3, Berlin. Austin TX: Cognitive Science Society | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tested the computational feasibility of the proposal that open-ended
cultural evolution was made possible by two cognitive transitions: (1) onset of
the capacity to chain thoughts together, followed by (2) onset of contextual
focus (CF): the capacity to shift between a divergent mode of thought conducive
to 'breaking out of a rut' and a convergent mode of thought conducive to minor
modifications. These transitions were simulated in EVOC, an agent-based model
of cultural evolution, in which the fitness of agents' actions increases as
agents invent ideas for new actions, and imitate the fittest of their
neighbors' actions. Both mean fitness and diversity of actions across the
society increased with chaining, and even more so with CF, as hypothesized. CF
was only effective when the fitness function changed, which supports its
hypothesized role in generating and refining ideas.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2013 15:36:52 GMT"
}
] | 1,381,881,600,000 | [
[
"Gabora",
"Liane",
""
],
[
"Chia",
"Wei Wen",
""
],
[
"Firouzi",
"Hadi",
""
]
] |
1310.4986 | Federico Cerutti | Federico Cerutti, Paul E. Dunne, Massimiliano Giacomin, Mauro Vallati | Computing Preferred Extensions in Abstract Argumentation: a SAT-based
Approach | Preprint of TAFA'13 post proceedings | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel SAT-based approach for the computation of
extensions in abstract argumentation, with focus on preferred semantics, and an
empirical evaluation of its performances. The approach is based on the idea of
reducing the problem of computing complete extensions to a SAT problem and then
using a depth-first search method to derive preferred extensions. The proposed
approach has been tested using two distinct SAT solvers and compared with three
state-of-the-art systems for preferred extension computation. It turns out that
the proposed approach delivers significantly better performances in the large
majority of the considered cases.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2013 12:14:31 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Oct 2013 09:53:43 GMT"
}
] | 1,382,572,800,000 | [
[
"Cerutti",
"Federico",
""
],
[
"Dunne",
"Paul E.",
""
],
[
"Giacomin",
"Massimiliano",
""
],
[
"Vallati",
"Mauro",
""
]
] |
1310.6432 | Burkhard C. Schipper | Eric Pacuit, Arthur Paul Pedersen, Jan-Willem Romeijn | When is an Example a Counterexample? | 10 pages, Contributed talk at TARK 2013 (arXiv:1310.6382)
http://www.tark.org | null | null | TARK/2013/p156 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this extended abstract, we carefully examine a purported counterexample to
a postulate of iterated belief revision. We suggest that the example is better
seen as a failure to apply the theory of belief revision in sufficient detail.
The main contribution is conceptual aiming at the literature on the
philosophical foundations of the AGM theory of belief revision [1]. Our
discussion is centered around the observation that it is often unclear whether
a specific example is a "genuine" counterexample to an abstract theory or a
misapplication of that theory to a concrete case.
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2013 23:32:30 GMT"
}
] | 1,383,004,800,000 | [
[
"Pacuit",
"Eric",
""
],
[
"Pedersen",
"Arthur Paul",
""
],
[
"Romeijn",
"Jan-Willem",
""
]
] |
1310.7367 | Thabet Slimani | Thabet Slimani | Semantic Description of Web Services | null | IJCSI International Journal of Computer Science Issues, Vol. 10,
Issue 1, No 3, January 2013, ISSN (Print): 1694-0784 | ISSN (Online):
1694-0814 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The tasks of semantic web service (discovery, selection, composition, and
execution) are supposed to enable seamless interoperation between systems,
whereby human intervention is kept at a minimum. In the field of Web service
description research, the exploitation of descriptions of services through
semantics is a better support for the life-cycle of Web services. The large
number of developed ontologies, languages of representations, and integrated
frameworks supporting the discovery, composition and invocation of services is
a good indicator that research in the field of Semantic Web Services (SWS) has
been considerably active. We provide in this paper a detailed classification of
the approaches and solutions, indicating their core characteristics and
required objectives, and provide pointers for the interested reader to follow
up on further insights and details about these solutions and related software.
| [
{
"version": "v1",
"created": "Mon, 28 Oct 2013 10:32:06 GMT"
}
] | 1,383,004,800,000 | [
[
"Slimani",
"Thabet",
""
]
] |
1310.7442 | Xinyang Deng | Yuxian Du, Shiyu Chen, Yong Hu, Felix T.S. Chan, Sankaran Mahadevan,
Yong Deng | Ranking basic belief assignments in decision making under uncertain
environment | 16 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dempster-Shafer theory (D-S theory) is widely used in decision making under
the uncertain environment. Ranking basic belief assignments (BBAs) is now an
open issue. Existing evidence distance measures cannot rank the BBAs in the
situations when the propositions have their own ranking order or their inherent
measure of closeness. To address this issue, a new ranking evidence distance
(RED) measure is proposed. Compared with the existing evidence distance
measures including the Jousselme's distance and the distance between betting
commitments, the proposed RED measure is much more general due to the fact that
the order of the propositions in the systems is taken into consideration. If
there is no order or no inherent measure of closeness in the propositions, our
proposed RED measure is reduced to the existing evidence distance. Numerical
examples show that the proposed RED measure is an efficient alternative to rank
BBAs in decision making under uncertain environment.
| [
{
"version": "v1",
"created": "Mon, 28 Oct 2013 14:59:53 GMT"
}
] | 1,383,004,800,000 | [
[
"Du",
"Yuxian",
""
],
[
"Chen",
"Shiyu",
""
],
[
"Hu",
"Yong",
""
],
[
"Chan",
"Felix T. S.",
""
],
[
"Mahadevan",
"Sankaran",
""
],
[
"Deng",
"Yong",
""
]
] |
1310.8588 | Abdoun Otman | Tkatek Said, Abdoun Otman, Abouchabaka Jaafar and Rafalia Najat | A Meta-heuristically Approach of the Spatial Assignment Problem of Human
Resources in Multi-sites Enterprise | null | International Journal of Computer Applications (0975 - 8887)
Volume 78 - Number 7 September 2013 | 10.5120/13500-1248 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this work is to present a meta-heuristically approach of the
spatial assignment problem of human resources in multi-sites enterprise.
Usually, this problem consists of moving employees from one site to another
based on one or more criteria. Our goal in this new approach is to improve the
quality of service and performance of all sites by maximizing an objective
function under some manager-imposed constraints. The formulation presented
here of this problem coincides perfectly with a Combinatorial Optimization
Problem (COP) which is in the most cases NP-hard to solve optimally. To avoid
this difficulty, we have opted to use a meta-heuristic popular method, which is
the genetic algorithm, to solve this problem in concrete cases. The results
obtained have shown the effectiveness of our approach, which for now remains
very costly in time. However, the running time can be reduced in different
ways, which we plan to pursue in future work.
| [
{
"version": "v1",
"created": "Sun, 22 Sep 2013 11:10:57 GMT"
}
] | 1,383,264,000,000 | [
[
"Said",
"Tkatek",
""
],
[
"Otman",
"Abdoun",
""
],
[
"Jaafar",
"Abouchabaka",
""
],
[
"Najat",
"Rafalia",
""
]
] |
1310.8599 | J. G. Wolff | J. Gerard Wolff | Information Compression, Intelligence, Computing, and Mathematics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents evidence for the idea that much of artificial
intelligence, human perception and cognition, mainstream computing, and
mathematics, may be understood as compression of information via the matching
and unification of patterns. This is the basis for the "SP theory of
intelligence", outlined in the paper and fully described elsewhere. Relevant
evidence may be seen: in empirical support for the SP theory; in some
advantages of information compression (IC) in terms of biology and engineering;
in our use of shorthands and ordinary words in language; in how we merge
successive views of any one thing; in visual recognition; in binocular vision;
in visual adaptation; in how we learn lexical and grammatical structures in
language; and in perceptual constancies. IC via the matching and unification of
patterns may be seen in both computing and mathematics: in IC via equations; in
the matching and unification of names; in the reduction or removal of
redundancy from unary numbers; in the workings of Post's Canonical System and
the transition function in the Universal Turing Machine; in the way computers
retrieve information from memory; in systems like Prolog; and in the
query-by-example technique for information retrieval. The chunking-with-codes
technique for IC may be seen in the use of named functions to avoid repetition
of computer code. The schema-plus-correction technique may be seen in functions
with parameters and in the use of classes in object-oriented programming. And
the run-length coding technique may be seen in multiplication, in division, and
in several other devices in mathematics and computing. The SP theory resolves
the apparent paradox of "decompression by compression". And computing and
cognition as IC is compatible with the uses of redundancy in such things as
backup copies to safeguard data and understanding speech in a noisy
environment.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2013 17:18:17 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Nov 2013 11:17:17 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Apr 2014 18:16:35 GMT"
},
{
"version": "v4",
"created": "Mon, 13 Jul 2015 08:59:41 GMT"
}
] | 1,436,832,000,000 | [
[
"Wolff",
"J. Gerard",
""
]
] |
1311.0351 | Bin Yang | Bin Yang, Hong Zhao and William Zhu | Rough matroids based on coverings | 15 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The introduction of covering-based rough sets has made a substantial
contribution to the classical rough sets. However, many vital problems in rough
sets, including attribute reduction, are NP-hard and therefore the algorithms
for solving them are usually greedy. Matroid theory, as a generalization of
linear independence in vector spaces, has a variety of applications in many fields
such as algorithm design and combinatorial optimization. An excellent
introduction to the topic of rough matroids is due to Zhu and Wang. On the
basis of their work, we study the rough matroids based on coverings in this
paper. First, we investigate some properties of the definable sets with respect
to a covering. Specifically, it is interesting that the set of all definable
sets with respect to a covering, equipped with the binary relation of inclusion
$\subseteq$, constructs a lattice. Second, we propose the rough matroids based
on coverings, which are a generalization of the rough matroids based on
relations. Finally, some properties of rough matroids based on coverings are
explored. Moreover, an equivalent formulation of rough matroids based on
coverings is presented. These interesting and important results exhibit many
potential connections between rough sets and matroids.
| [
{
"version": "v1",
"created": "Sat, 2 Nov 2013 06:58:46 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Nov 2013 01:29:01 GMT"
}
] | 1,383,696,000,000 | [
[
"Yang",
"Bin",
""
],
[
"Zhao",
"Hong",
""
],
[
"Zhu",
"William",
""
]
] |
1311.0413 | Gordana Dodig Crnkovic | Gordana Dodig-Crnkovic | Information, Computation, Cognition. Agency-based Hierarchies of Levels | 5 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Nature can be seen as informational structure with computational dynamics
(info-computationalism), where an (info-computational) agent is needed for the
potential information of the world to actualize. Starting from the definition
of information as the difference in one physical system that makes a difference
in another physical system, which combines Bateson and Hewitt definitions, the
argument is advanced for natural computation as a computational model of the
dynamics of the physical world where information processing is constantly going
on, on a variety of levels of organization. This setting helps elucidate the
relationships between computation, information, agency and cognition, within
the common conceptual framework, which has special relevance for biology and
robotics.
| [
{
"version": "v1",
"created": "Sat, 2 Nov 2013 21:33:11 GMT"
}
] | 1,383,609,600,000 | [
[
"Dodig-Crnkovic",
"Gordana",
""
]
] |
1311.0716 | Michael Laufer Ph.D. | Michael Swan Laufer | Artificial Intelligence in Humans | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, I put forward that in many instances, thinking mechanisms are
equivalent to artificial intelligence modules programmed into the human mind.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2013 14:19:49 GMT"
}
] | 1,383,609,600,000 | [
[
"Laufer",
"Michael Swan",
""
]
] |
1311.0944 | Bin Yang | Bin Yang and William Zhu | Connectivity for matroids based on rough sets | 16 pages, 8 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In mathematics and computer science, connectivity is one of the basic
concepts of matroid theory: it asks for the minimum number of elements which
need to be removed to disconnect the remaining nodes from each other. It is
closely related to the theory of network flow problems. The connectivity of a
matroid is an important measure of its robustness as a network. Therefore, it
is necessary to investigate the conditions under which a matroid is
connected. In this paper, the connectivity for matroids is studied through
relation-based rough sets. First, a symmetric and transitive relation is
introduced from a general matroid and its properties are explored from the
viewpoint of matroids. Moreover, through the relation introduced by a general
matroid, an undirected graph is generated. Specifically, the connectivity of
the graph can be investigated by the relation-based rough sets. Second, we
study the connectivity for matroids by means of relation-based rough sets and
some conditions under which a general matroid is connected are presented.
Finally, it is easy to prove that the connectivity of a general matroid with
some special properties and that of its induced undirected graph are equivalent. These
results show an important application of relation-based rough sets to matroids.
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2013 01:39:32 GMT"
}
] | 1,383,696,000,000 | [
[
"Yang",
"Bin",
""
],
[
"Zhu",
"William",
""
]
] |
1311.1632 | Frank Loebe | Heinrich Herre | Persistence, Change, and the Integration of Objects and Processes in the
Framework of the General Formal Ontology | 13 pages; minor revisions (compared to version 1), mainly wording and
typos | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we discuss various problems associated with temporal phenomena.
These problems include persistence and change, the integration of objects and
processes, and truth-makers for temporal propositions. We propose an approach
which interprets persistence as a phenomenon emanating from the activity of the
mind, and which, additionally, postulates that persistence, finally, rests on
personal identity. The General Formal Ontology (GFO) is a top level ontology
being developed at the University of Leipzig. Top level ontologies can be
roughly divided into 3D-ontologies and 4D-ontologies. GFO is the only top
level ontology, used in applications, which is a 4D-ontology admitting
additionally 3D objects. Objects and processes are integrated in a natural way.
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2013 10:45:54 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Dec 2013 00:14:14 GMT"
}
] | 1,386,547,200,000 | [
[
"Herre",
"Heinrich",
""
]
] |
1311.2886 | Mustafa Ayhan | Mustafa Batuhan Ayhan | A Fuzzy AHP Approach for Supplier Selection Problem: A Case Study in a
Gear Motor Company | Published in "International Journal of Managing Value and Supply
Chains (IJMVSC) Vol.4, No. 3, September 2013" | International Journal of Managing Value and Supply Chains (IJMVSC)
Vol.4, No. 3, September 2013 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supplier selection is one of the most important functions of a purchasing
department, since by choosing the best supplier, companies can save material
costs and increase competitive advantage. However, this decision becomes
complicated in the case of multiple suppliers, multiple conflicting criteria, and
imprecise parameters. In addition, the uncertainty and vagueness of the experts'
opinions are prominent characteristics of the problem. Therefore, Fuzzy AHP, an
extensively used multi-criteria decision making tool, can be utilized
as an approach for the supplier selection problem. This paper presents the
application of Fuzzy AHP in a gear motor company, determining the best supplier
with respect to selected criteria. The contribution of this study is not only
the application of the Fuzzy AHP methodology to the supplier selection problem,
but also a comprehensive literature review of multi-criteria decision
making problems. In addition, by stating the steps of Fuzzy AHP clearly and
numerically, this study can serve as a guide for implementing the methodology in
other multiple criteria decision making problems.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2013 08:50:30 GMT"
}
] | 1,384,300,800,000 | [
[
"Ayhan",
"Mustafa Batuhan",
""
]
] |
1311.2912 | Michael Laufer Ph.D. | Michael S. Laufer | A Misanthropic Reinterpretation of the Chinese Room Problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Chinese room problem asks if computers can think; I ask here if most
humans can.
| [
{
"version": "v1",
"created": "Sat, 26 Oct 2013 20:51:24 GMT"
}
] | 1,384,300,800,000 | [
[
"Laufer",
"Michael S.",
""
]
] |
1311.3353 | Roberto Amadini | Roberto Amadini, Maurizio Gabbrielli, Jacopo Mauro | SUNNY: a Lazy Portfolio Approach for Constraint Solving | null | Theory and Practice of Logic Programming 14 (2014) 509-524 | 10.1017/S1471068414000179 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | *** To appear in Theory and Practice of Logic Programming (TPLP) ***
Within the context of constraint solving, a portfolio approach allows one to
exploit the synergy between different solvers in order to create a globally
better solver. In this paper we present SUNNY: a simple and flexible algorithm
that takes advantage of a portfolio of constraint solvers in order to compute
--- without learning an explicit model --- a schedule of them for solving a
given Constraint Satisfaction Problem (CSP). Motivated by the performance
reached by SUNNY vs. different simulations of other state of the art
approaches, we developed sunny-csp, an effective portfolio solver that exploits
the underlying SUNNY algorithm in order to solve a given CSP. Empirical tests
conducted on exhaustive benchmarks of MiniZinc models show that the actual
performance of SUNNY conforms to the predictions. This is encouraging both for
improving the power of CSP portfolio solvers and for trying to export them to
fields such as Answer Set Programming and Constraint Logic Programming.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2013 00:37:22 GMT"
},
{
"version": "v2",
"created": "Tue, 13 May 2014 16:58:32 GMT"
}
] | 1,582,070,400,000 | [
[
"Amadini",
"Roberto",
""
],
[
"Gabbrielli",
"Maurizio",
""
],
[
"Mauro",
"Jacopo",
""
]
] |
1311.3829 | Sofia Benbelkacem | Sofia Benbelkacem, Baghdad Atmani, Mohamed Benamina | Planning based on classification by induction graph | International Conference on Data Mining & Knowledge Management
Process CDKP-2013 | null | 10.5121/csit.2013.3823 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Artificial Intelligence, planning refers to an area of research that
proposes to develop systems that can automatically generate a result set, in
the form of an integrated decision-making system through a formal procedure,
known as a plan. Instead of resorting to scheduling algorithms to generate
plans, we propose to use automatic learning by decision trees to
optimize time. In this paper, we propose to build a classification model by
induction graph from a learning sample containing plans that have an associated
set of descriptors whose values change depending on each plan. This model will
then be used to classify new cases by assigning them the appropriate plan.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2013 12:43:56 GMT"
}
] | 1,384,905,600,000 | [
[
"Benbelkacem",
"Sofia",
""
],
[
"Atmani",
"Baghdad",
""
],
[
"Benamina",
"Mohamed",
""
]
] |
1311.4064 | Nate Derbinsky | Nate Derbinsky, Jos\'e Bento, Jonathan S. Yedidia | Methods for Integrating Knowledge with the Three-Weight Optimization
Algorithm for Hybrid Cognitive Processing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider optimization as an approach for quickly and
flexibly developing hybrid cognitive capabilities that are efficient, scalable,
and can exploit knowledge to improve solution speed and quality. In this
context, we focus on the Three-Weight Algorithm, which aims to solve general
optimization problems. We propose novel methods by which to integrate knowledge
with this algorithm to improve expressiveness, efficiency, and scaling, and
demonstrate these techniques on two example problems (Sudoku and circle
packing).
| [
{
"version": "v1",
"created": "Sat, 16 Nov 2013 14:03:31 GMT"
}
] | 1,384,819,200,000 | [
[
"Derbinsky",
"Nate",
""
],
[
"Bento",
"José",
""
],
[
"Yedidia",
"Jonathan S.",
""
]
] |
1311.4166 | Xinyang Deng | Shiyu Chen, Yong Hu, Sankaran Mahadevan, Yong Deng | A Visibility Graph Averaging Aggregation Operator | 33 pages, 9 figures | null | 10.1016/j.physa.2014.02.015 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of aggregation is of considerable importance in many disciplines. In
this paper, a new type of operator called visibility graph averaging (VGA)
aggregation operator is proposed. This proposed operator is based on the
visibility graph which can convert a time series into a graph. The weights are
obtained according to the importance of the data in the visibility graph.
Finally, the VGA operator is applied to the analysis of the TAIEX database to
illustrate that it is practical. Compared with classic aggregation
operators, it has the advantage that it not only aggregates the data
but also preserves the time information; meanwhile, the
determination of the weights is more reasonable.
| [
{
"version": "v1",
"created": "Sun, 17 Nov 2013 15:01:31 GMT"
}
] | 1,434,499,200,000 | [
[
"Chen",
"Shiyu",
""
],
[
"Hu",
"Yong",
""
],
[
"Mahadevan",
"Sankaran",
""
],
[
"Deng",
"Yong",
""
]
] |
1311.4564 | Sofia Benbelkacem | Baghdad Atmani, Sofia Benbelkacem and Mohamed Benamina | Planning by case-based reasoning based on fuzzy logic | International Conference of Artificial Intelligence and Fuzzy Logic
AIFL-2013 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The treatment of complex systems often requires the manipulation of vague,
imprecise and uncertain information. Indeed, the human being is competent in
handling such systems in a natural way. Instead of thinking in mathematical
terms, humans describe the behavior of the system through linguistic propositions. In
order to represent this type of information, Zadeh proposed to model the
mechanism of human thought by approximate reasoning based on linguistic
variables. He introduced the theory of fuzzy sets in 1965, which provides an
interface between language and digital worlds. In this paper, we propose a
Boolean modeling of fuzzy reasoning, which we have named Fuzzy-BML, that uses the
characteristics of induction graph classification. Fuzzy-BML is the process by
which the retrieval phase of a CBR is modelled not in the conventional form of
mathematical equations, but in the form of a database with membership functions
of fuzzy rules.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2013 21:29:32 GMT"
}
] | 1,384,905,600,000 | [
[
"Atmani",
"Baghdad",
""
],
[
"Benbelkacem",
"Sofia",
""
],
[
"Benamina",
"Mohamed",
""
]
] |
1311.5355 | Michael Gr. Voskoglou Prof. Dr. | Michael Gr. Voskoglou, Igor Ya. Subbotin | Dealing with the Fuzziness of Human Reasoning | 16 pages, 3 figures, 1 table. arXiv admin note: substantial text
overlap with arXiv:1212.2614 | International Journal of Applications of Fuzzy Sets and Artifcial
Intelligence (ISSN 2241-1240), Vol.3, 91-106, 2013 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning, the most important human brain operation, is characterized by a
degree of fuzziness. In the present paper we construct a fuzzy model for the
reasoning process which, through the calculation of the possibilities of all
possible individuals' profiles, gives a quantitative/qualitative view of their
behaviour during this process, and we use the centroid defuzzification
technique for measuring reasoning skills. We also present a number of
classroom experiments illustrating our results in practice.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2013 10:35:57 GMT"
}
] | 1,385,078,400,000 | [
[
"Voskoglou",
"Michael Gr.",
""
],
[
"Subbotin",
"Igor Ya.",
""
]
] |
1311.6054 | Issam Qaffou | Issam Qaffou, Mohamed Sadgal, Abdelaziz Elfazziki | Q-learning optimization in a multi-agents system for image segmentation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowing which operators to apply and in which order, as well as assigning
good values to their parameters, is a challenge for users of computer vision.
This paper proposes a solution to this problem as a multi-agent system modeled
according to the Vowel approach and using the Q-learning algorithm to optimize
its choice. An implementation is given to test and validate this method.
| [
{
"version": "v1",
"created": "Sat, 23 Nov 2013 21:25:13 GMT"
}
] | 1,385,424,000,000 | [
[
"Qaffou",
"Issam",
""
],
[
"Sadgal",
"Mohamed",
""
],
[
"Elfazziki",
"Abdelaziz",
""
]
] |
1311.6591 | Guy Van den Broeck | Guy Van den Broeck, Adnan Darwiche | On the Complexity and Approximation of Binary Evidence in Lifted
Inference | To appear in Advances in Neural Information Processing Systems 26
(NIPS), Lake Tahoe, USA, December 2013 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lifted inference algorithms exploit symmetries in probabilistic models to
speed up inference. They show impressive performance when calculating
unconditional probabilities in relational models, but often resort to
non-lifted inference when computing conditional probabilities. The reason is
that conditioning on evidence breaks many of the model's symmetries, which can
preempt standard lifting techniques. Recent theoretical results show, for
example, that conditioning on evidence which corresponds to binary relations is
#P-hard, suggesting that no lifting is to be expected in the worst case. In
this paper, we balance this negative result by identifying the Boolean rank of
the evidence as a key parameter for characterizing the complexity of
conditioning in lifted inference. In particular, we show that conditioning on
binary evidence with bounded Boolean rank is efficient. This opens up the
possibility of approximating evidence by a low-rank Boolean matrix
factorization, which we investigate both theoretically and empirically.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2013 08:39:49 GMT"
}
] | 1,385,510,400,000 | [
[
"Broeck",
"Guy Van den",
""
],
[
"Darwiche",
"Adnan",
""
]
] |
1311.6709 | Debajyoti Mukhopadhyay Prof. | Debajyoti Mukhopadhyay, Archana Chougule | A Framework for Semi-automated Web Service Composition in Semantic Web | 6 pages, 9 figures; CUBE 2013 International Conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The number of web services available on the Internet and their usage are increasing
very fast. In many cases, one service alone is not enough to complete a business
requirement, so composition of web services is carried out. Autonomous composition
of web services to achieve new functionality is generating considerable
attention in semantic web domain. Development time and effort for new
applications can be reduced with service composition. Various approaches to
carry out automated composition of web services are discussed in literature.
Web service composition using ontologies is one of the effective approaches. In
this paper we demonstrate how the ontology based composition can be made faster
for each customer. We propose a framework to provide precomposed web services
to fulfil user requirements. We detail how ontology merging can be used for
composition, which expedites the whole process. We discuss how the framework
provides customer-specific ontology merging and a repository. We also elaborate
on how merging of ontologies is carried out.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2013 15:41:19 GMT"
}
] | 1,385,510,400,000 | [
[
"Mukhopadhyay",
"Debajyoti",
""
],
[
"Chougule",
"Archana",
""
]
] |