id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1605.03009 | Ray Van De Walker | Ray Van De Walker | Consciousness is Pattern Recognition | 8 pages; Now describes the utility of the proof. Lemma A3 is
improved. The root lemma is clarified. Included and excused some basic
objections. Reordered the speculations, objections and excuses to be more
coherent. Added paragraphs and references to aid some AI paradigms. Added my
orcid and revised the abstract | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This is a proof of the strong AI hypothesis, i.e. that machines can be
conscious. It is a phenomenological proof that pattern-recognition and
subjective consciousness are the same activity in different terms. Therefore,
it proves that essential subjective processes of consciousness are computable,
and identifies significant traits and requirements of a conscious system. Since
Husserl, many philosophers have accepted that consciousness consists of
memories of logical connections between an ego and external objects. These
connections are called "intentions." Pattern recognition systems are achievable
technical artifacts. The proof links this respected introspective philosophical
theory of consciousness with technical art. The proof therefore endorses the
strong AI hypothesis and may also enable a theoretically-grounded
form of artificial intelligence called a "synthetic intentionality," able to
synthesize, generalize, select and repeat intentions. If the pattern
recognition is reflexive, able to operate on the set of intentions, and
flexible, with several methods of synthesizing intentions, an SI may be a
particularly strong form of AI. Similarities and possible applications to
several AI paradigms are discussed. The article then addresses some problems:
The proof's limitations, reflexive cognition, Searle's Chinese room, and how an
SI could "understand" "meanings" and "be creative."
| [
{
"version": "v1",
"created": "Wed, 4 May 2016 20:19:05 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jun 2016 20:44:09 GMT"
}
] | 1,467,244,800,000 | [
[
"Van De Walker",
"Ray",
""
]
] |
1605.03142 | Tom Everitt | Tom Everitt, Daniel Filan, Mayank Daswani, Marcus Hutter | Self-Modification of Policy and Utility Function in Rational Agents | Artificial General Intelligence (AGI) 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Any agent that is part of the environment it interacts with and has versatile
actuators (such as arms and fingers), will in principle have the ability to
self-modify -- for example by changing its own source code. As we continue to
create more and more intelligent agents, chances increase that they will learn
about this ability. The question is: will they want to use it? For example,
highly intelligent systems may find ways to change their goals to something
more easily achievable, thereby `escaping' the control of their designers. In
an important paper, Omohundro (2008) argued that goal preservation is a
fundamental drive of any intelligent system, since a goal is more likely to be
achieved if future versions of the agent strive towards the same goal. In this
paper, we formalise this argument in general reinforcement learning, and
explore situations where it fails. Our conclusion is that the self-modification
possibility is harmless if and only if the value function of the agent
anticipates the consequences of self-modifications and uses the current utility
function when evaluating the future.
| [
{
"version": "v1",
"created": "Tue, 10 May 2016 18:25:49 GMT"
}
] | 1,462,924,800,000 | [
[
"Everitt",
"Tom",
""
],
[
"Filan",
"Daniel",
""
],
[
"Daswani",
"Mayank",
""
],
[
"Hutter",
"Marcus",
""
]
] |
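The following toy sketch (my own Python illustration, not the paper's general reinforcement learning formalism; the action set, horizon and utility values are hypothetical) contrasts the two value recursions discussed in the abstract above (1605.03142): one that judges the future by whatever utility the agent will have after self-modifying, and one that judges it by the current utility.

```python
# Toy sketch: "work" is worth 1 under the real utility; the agent may instead
# self-modify to a degenerate utility that rates every action as 10.
REAL = {"work": 1.0, "loaf": 0.0}
DEGENERATE = {"work": 10.0, "loaf": 10.0}

def best_value(horizon, acting_utility, eval_utility, hedonistic):
    """Best total value over `horizon` steps.

    hedonistic=True  : future steps are judged by the (possibly modified)
                       acting utility -- the recursion that makes
                       self-modification attractive.
    hedonistic=False : future steps are always judged by `eval_utility`,
                       i.e. the current utility, as the abstract recommends.
    """
    if horizon == 0:
        return 0.0
    judge = acting_utility if hedonistic else eval_utility
    # Ordinary action: take the best one-step value and keep the same utility.
    act = max(judge.values()) + best_value(horizon - 1, acting_utility,
                                           eval_utility, hedonistic)
    # Self-modification: swap in the degenerate utility, then continue.
    modify = best_value(horizon - 1, DEGENERATE, eval_utility, hedonistic)
    return max(act, modify)

print(best_value(3, REAL, REAL, hedonistic=True))   # 20.0 -> prefers to self-modify
print(best_value(3, REAL, REAL, hedonistic=False))  # 3.0  -> keeps working
```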
1605.03143 | Tom Everitt | Tom Everitt, Marcus Hutter | Avoiding Wireheading with Value Reinforcement Learning | Artificial General Intelligence (AGI) 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we design good goals for arbitrarily intelligent agents?
Reinforcement learning (RL) is a natural approach. Unfortunately, RL does not
work well for generally intelligent agents, as RL agents are incentivised to
shortcut the reward sensor for maximum reward -- the so-called wireheading
problem. In this paper we suggest an alternative to RL called value
reinforcement learning (VRL). In VRL, agents use the reward signal to learn a
utility function. The VRL setup allows us to remove the incentive to wirehead
by placing a constraint on the agent's actions. The constraint is defined in
terms of the agent's belief distributions, and does not require an explicit
specification of which actions constitute wireheading.
| [
{
"version": "v1",
"created": "Tue, 10 May 2016 18:28:57 GMT"
}
] | 1,462,924,800,000 | [
[
"Everitt",
"Tom",
""
],
[
"Hutter",
"Marcus",
""
]
] |
1605.03392 | Mauro Scanagatta | Mauro Scanagatta, Giorgio Corani, Cassio P. de Campos, Marco Zaffalon | Learning Bounded Treewidth Bayesian Networks with Thousands of Variables | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for learning treewidth-bounded Bayesian networks from
data sets containing thousands of variables. Bounding the treewidth of a
Bayesian network greatly reduces the complexity of inference. Yet, being a global
property of the graph, it considerably increases the difficulty of the learning
process. We propose a novel algorithm for this task, able to scale to large
domains and large treewidths. Our novel approach consistently outperforms the
state of the art on data sets with up to ten thousand variables.
| [
{
"version": "v1",
"created": "Wed, 11 May 2016 11:54:26 GMT"
}
] | 1,463,011,200,000 | [
[
"Scanagatta",
"Mauro",
""
],
[
"Corani",
"Giorgio",
""
],
[
"de Campos",
"Cassio P.",
""
],
[
"Zaffalon",
"Marco",
""
]
] |
1605.03506 | Felix Diaz Hermida | F. Diaz-Hermida and M. Pereira-Fari\~na and Juan C. Vidal and A.
Ramos-Soto | Characterizing Quantifier Fuzzification Mechanisms: a behavioral guide
for practical applications | 28 pages | 2017, Elsevier. Fuzzy Sets and Systems | 10.1016/j.fss.2017.07.017 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Important advances have been made in the fuzzy quantification field.
Nevertheless, some problems remain when we face the decision of selecting the
most convenient model for a specific application. In the literature, several
desirable adequacy properties have been proposed, but theoretical limits impede
quantification models from simultaneously fulfilling every adequacy property
that has been defined. Besides, the complexity of model definitions and
adequacy properties makes it very difficult for real users to understand the
particularities of the different models that have been presented. In this work
we will present several criteria conceived to help in the process of selecting
the most adequate Quantifier Fuzzification Mechanisms for specific practical
applications. In addition, some of the best known well-behaved models will be
compared against this list of criteria. Based on this analysis, some guidance
to choose fuzzy quantification models for practical applications will be
provided.
| [
{
"version": "v1",
"created": "Wed, 11 May 2016 16:42:37 GMT"
}
] | 1,550,534,400,000 | [
[
"Diaz-Hermida",
"F.",
""
],
[
"Pereira-Fariña",
"M.",
""
],
[
"Vidal",
"Juan C.",
""
],
[
"Ramos-Soto",
"A.",
""
]
] |
1605.04071 | James Cussens | James Cussens, Matti J\"arvisalo, Janne H. Korhonen and Mark Bartlett | Bayesian Network Structure Learning with Integer Programming: Polytopes,
Facets, and Complexity | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The challenging task of learning structures of probabilistic graphical models
is an important problem within modern AI research. Recent years have witnessed
several major algorithmic advances in structure learning for Bayesian
networks---arguably the most central class of graphical models---especially in
what is known as the score-based setting. A successful generic approach to
optimal Bayesian network structure learning (BNSL), based on integer
programming (IP), is implemented in the GOBNILP system. Despite the recent
algorithmic advances, current understanding of foundational aspects underlying
the IP based approach to BNSL is still somewhat lacking. Understanding
fundamental aspects of cutting planes and the related separation problem is
important not only from a purely theoretical perspective, but also since it
holds out the promise of further improving the efficiency of state-of-the-art
approaches to solving BNSL exactly. In this paper, we make several theoretical
contributions towards these goals: (i) we study the computational complexity of
the separation problem, proving that the problem is NP-hard; (ii) we formalise
and analyse the relationship between three key polytopes underlying the
IP-based approach to BNSL; (iii) we study the facets of the three polytopes
both from the theoretical and practical perspective, providing, via exhaustive
computation, a complete enumeration of facets for low-dimensional
family-variable polytopes; and, furthermore, (iv) we establish a tight
connection of the BNSL problem to the acyclic subgraph problem.
| [
{
"version": "v1",
"created": "Fri, 13 May 2016 07:27:03 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Dec 2016 17:28:15 GMT"
}
] | 1,482,192,000,000 | [
[
"Cussens",
"James",
""
],
[
"Järvisalo",
"Matti",
""
],
[
"Korhonen",
"Janne H.",
""
],
[
"Bartlett",
"Mark",
""
]
] |
1605.04218 | Abhishek Dasgupta | Abhishek Dasgupta and Samson Abramsky | Anytime Inference in Valuation Algebras | 9 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anytime inference is inference performed incrementally, with the accuracy of
the inference being controlled by a tunable parameter, usually time. Such
anytime inference algorithms are also usually interruptible, gradually
converging to the exact inference value until terminated. While anytime
inference algorithms for specific domains like probability potentials exist in
the literature, our objective in this article is to obtain an anytime inference
algorithm which is sufficiently generic to cover a wide range of domains. For
this we utilise the theory of generic inference as a basis for constructing an
anytime inference algorithm, and in particular, extending work done on ordered
valuation algebras. The novel contribution of this work is the construction of
anytime algorithms in a generic framework, which automatically gives us
instantiations in various useful domains. We also show how to apply this
generic framework for anytime inference in semiring induced valuation algebras,
an important subclass of valuation algebras, which includes instances like
probability potentials, disjunctive normal forms and distributive lattices.
Keywords: Approximation; Anytime algorithms; Resource-bounded computation;
Generic inference; Valuation algebras; Local computation; Binary join trees.
| [
{
"version": "v1",
"created": "Fri, 13 May 2016 15:40:10 GMT"
}
] | 1,463,356,800,000 | [
[
"Dasgupta",
"Abhishek",
""
],
[
"Abramsky",
"Samson",
""
]
] |
1605.04232 | Vladimir Shakirov | Vladimir Shakirov | Review of state-of-the-arts in artificial intelligence with application
to AI safety problem | version 2 includes grant information | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here, I review current state-of-the-arts in many areas of AI to estimate when
it is reasonable to expect human-level AI development. Predictions of prominent
AI researchers vary broadly from very pessimistic predictions of Andrew Ng to
much more moderate predictions of Geoffrey Hinton and optimistic predictions of
Shane Legg, DeepMind cofounder. Given the huge rate of progress in recent years and
this broad range of predictions from AI experts, AI safety questions are also
discussed.
| [
{
"version": "v1",
"created": "Wed, 11 May 2016 17:49:24 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2016 09:29:12 GMT"
}
] | 1,481,068,800,000 | [
[
"Shakirov",
"Vladimir",
""
]
] |
1605.04691 | Tom Ameloot | Tom J. Ameloot | On Avoidance Learning with Partial Observability | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a framework where agents have to avoid aversive signals. The agents
are given only partial information, in the form of features that are
projections of task states. Additionally, the agents have to cope with
non-determinism, defined as unpredictability on the way that actions are
executed. The goal of each agent is to define its behavior based on
feature-action pairs that reliably avoid aversive signals. We study a learning
algorithm, called A-learning, that exhibits fixpoint convergence, where the
belief of the allowed feature-action pairs eventually becomes fixed. A-learning
is parameter-free and easy to implement.
| [
{
"version": "v1",
"created": "Mon, 16 May 2016 09:26:53 GMT"
}
] | 1,463,443,200,000 | [
[
"Ameloot",
"Tom J.",
""
]
] |
1605.05305 | Alberto Uriarte | Alberto Uriarte and Santiago Onta\~n\'on | Combat Models for RTS Games | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require
access to a forward model (or "simulator") of the game at hand. However, in
some games such a forward model is not readily available. This paper presents
three forward models for two-player attrition games, which we call "combat
models", and show how they can be used to simulate combat in RTS games. We also
show how these combat models can be learned from replay data. We use StarCraft
as our application domain. We report experiments comparing how well our combat
models predict combat outcomes, and their impact when used for tactical decisions
during a real game.
| [
{
"version": "v1",
"created": "Tue, 17 May 2016 19:47:13 GMT"
}
] | 1,463,529,600,000 | [
[
"Uriarte",
"Alberto",
""
],
[
"Ontañón",
"Santiago",
""
]
] |
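By way of illustration only: the sketch below implements a generic Lanchester-square attrition step, one of the simplest possible forward "combat models" for an attrition game. The paper's own models are learned from StarCraft replay data and are not reproduced here; the unit counts and damage values are made up.

```python
def lanchester_square(a_units, a_dps, b_units, b_dps, dt=0.1, max_t=300.0):
    """Tiny forward model for a two-army attrition fight (Lanchester's square
    law): each side's losses per unit time are proportional to the opponent's
    remaining size times its per-unit damage. Returns (winner, survivors, time)."""
    a, b, t = float(a_units), float(b_units), 0.0
    while a > 0 and b > 0 and t < max_t:
        a, b = a - dt * b_dps * b, b - dt * a_dps * a
        t += dt
    if a <= 0 and b <= 0:
        return "draw", 0.0, round(t, 1)
    return ("A", round(a, 2), round(t, 1)) if b <= 0 else ("B", round(b, 2), round(t, 1))

# 12 units dealing 1.0 damage/s vs 9 units dealing 1.2 damage/s (made-up numbers):
print(lanchester_square(12, 1.0, 9, 1.2))
```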
1605.05807 | Miquel Ramirez | Miquel Ramirez and Hector Geffner | Heuristics for Planning, Plan Recognition and Parsing | Written: June 2009, Published: May 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a recent paper, we have shown that Plan Recognition over STRIPS can be
formulated and solved using Classical Planning heuristics and algorithms. In
this work, we show that this formulation subsumes the standard formulation of
Plan Recognition over libraries through a compilation of libraries into STRIPS
theories. The libraries correspond to AND/OR graphs that may be cyclic and
where children of AND nodes may be partially ordered. These libraries include
Context-Free Grammars as a special case, where the Plan Recognition problem
becomes a parsing with missing tokens problem. Plan Recognition over the
standard libraries becomes a Planning problem that can be easily solved by any
modern planner, while recognition over more complex libraries, including
Context-Free Grammars (CFGs), illustrates limitations of current Planning
heuristics and suggest improvements that may be relevant in other Planning
problems too.
| [
{
"version": "v1",
"created": "Thu, 19 May 2016 04:22:35 GMT"
},
{
"version": "v2",
"created": "Sun, 22 May 2016 23:02:35 GMT"
}
] | 1,464,048,000,000 | [
[
"Ramirez",
"Miquel",
""
],
[
"Geffner",
"Hector",
""
]
] |
1605.05950 | Patrick Rodler | Patrick Rodler | Interactive Debugging of Knowledge Bases | Ph.D. Thesis, Alpen-Adria Universit\"at Klagenfurt | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many AI applications rely on knowledge about a relevant real-world domain
that is encoded by means of some logical knowledge base (KB). The most
essential benefit of logical KBs is the opportunity to perform automatic
reasoning to derive implicit knowledge or to answer complex queries about the
modeled domain. The feasibility of meaningful reasoning requires KBs to meet
some minimal quality criteria such as logical consistency. Without adequate
tool assistance, the task of resolving violated quality criteria in KBs can be
extremely tough even for domain experts, especially when the problematic KB
includes a large number of logical formulas or comprises complicated logical
formalisms.
Published non-interactive debugging systems often cannot localize all
possible faults (incompleteness), suggest the deletion or modification of
unnecessarily large parts of the KB (non-minimality), return incorrect
solutions which lead to a repaired KB not satisfying the imposed quality
requirements (unsoundness) or suffer from poor scalability due to the inherent
complexity of the KB debugging problem. Even if a system is complete and sound
and considers only minimal solutions, there are generally exponentially many
solution candidates to select one from. However, any two repaired KBs obtained
from these candidates differ in their semantics in terms of entailments and
non-entailments. Selection of just any of these repaired KBs might result in
unexpected entailments, the loss of desired entailments or unwanted changes to
the KB.
This work proposes complete, sound and optimal methods for the interactive
debugging of KBs that suggest the one (minimally invasive) error correction of
the faulty KB that yields a repaired KB with exactly the intended semantics.
Users, e.g. domain experts, are involved in the debugging process by answering
automatically generated queries about the intended domain.
| [
{
"version": "v1",
"created": "Thu, 19 May 2016 13:40:01 GMT"
}
] | 1,463,702,400,000 | [
[
"Rodler",
"Patrick",
""
]
] |
1605.05966 | Khadija Tijani | Khadija Tijani (CSTB, LIG Laboratoire d'Informatique de Grenoble,
G-SCOP), Stephane Ploix (G-SCOP), Benjamin Haas (CSTB), Julie Dugdale (LIG
Laboratoire d'Informatique de Grenoble), Quoc Dung Ngo | Dynamic Bayesian Networks to simulate occupant behaviours in office
buildings related to indoor air quality | IBPSA India 2015, Dec 2015, Hyderabad, India. arXiv admin note:
substantial text overlap with arXiv:1510.01970 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a new general approach based on Bayesian networks to
model human behaviour. This approach represents human behaviour with
probabilistic cause-effect relations based on knowledge, but also with
conditional probabilities coming either from knowledge or deduced from
observations. This approach has been applied to the co-simulation of the CO2
concentration in an office coupled with human behaviour.
| [
{
"version": "v1",
"created": "Thu, 19 May 2016 14:16:39 GMT"
}
] | 1,463,702,400,000 | [
[
"Tijani",
"Khadija",
"",
"CSTB, LIG Laboratoire d'Informatique de Grenoble,\n G-SCOP"
],
[
"Ploix",
"Stephane",
"",
"G-SCOP"
],
[
"Haas",
"Benjamin",
"",
"CSTB"
],
[
"Dugdale",
"Julie",
"",
"LIG\n Laboratoire d'Informatique de Grenoble"
],
[
"Ngo",
"Quoc Dung",
""
]
] |
1605.06048 | Vincent Conitzer | Vincent Conitzer | Philosophy in the Face of Artificial Intelligence | Prospect, May 4, 2016.
http://www.prospectmagazine.co.uk/science-and-technology/artificial-intelligence-wheres-the-philosophical-scrutiny | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, I discuss how the AI community views concerns about the
emergence of superintelligent AI and related philosophical issues.
| [
{
"version": "v1",
"created": "Thu, 19 May 2016 16:45:12 GMT"
}
] | 1,463,702,400,000 | [
[
"Conitzer",
"Vincent",
""
]
] |
1605.07260 | Matthieu Vernier | Matthieu Vernier, Luis Carcamo, Eliana Scheihing | Diagnosing editorial strategies of Chilean media on Twitter using an
automatic news classifier | in Spanish | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Chile, there is no independent entity that publishes quantitative or
qualitative surveys to understand the traditional media environment and its
adaptation to the Social Web. Nowadays, Chilean newsreaders are increasingly
using social web platforms as their primary source of information, among which
Twitter plays a central role. Historical media and pure players are developing
different strategies to increase their audience and influence on this platform.
In this article, we propose a methodology based on data mining techniques to
provide a first level of analysis of the new Chilean media environment. We use
a crawling technique to mine news streams of 37 different Chilean media
outlets actively present on Twitter and propose several indicators to compare them. We
analyze their volumes of production, their potential audience, and using NLP
techniques, we explore the content of their production: their editorial line
and their geographic coverage.
| [
{
"version": "v1",
"created": "Tue, 24 May 2016 02:05:09 GMT"
}
] | 1,464,134,400,000 | [
[
"Vernier",
"Matthieu",
""
],
[
"Carcamo",
"Luis",
""
],
[
"Scheihing",
"Eliana",
""
]
] |
1605.07335 | Aleksander Lodwich | Aleksander Lodwich | Differences between Industrial Models of Autonomy and Systemic Models of
Autonomy | 11 pages, 9 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses the idea of levels of autonomy of systems - be this
technical or organic - and compares the insights with models employed by
industries used to describe maturity and capability of their products.
| [
{
"version": "v1",
"created": "Tue, 24 May 2016 08:49:36 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2016 17:43:42 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Jun 2016 21:31:44 GMT"
}
] | 1,465,257,600,000 | [
[
"Lodwich",
"Aleksander",
""
]
] |
1605.07364 | Timothy Ganesan PhD | Timothy Ganesan, Pandian Vasant and Irraivan Elamvazuthi | Non-Gaussian Random Generators in Bacteria Foraging Algorithm for
Multiobjective Optimization | 8 pages; 5 Figures; 6 Tables. Industrial Engineering & Management,
2015 | null | 10.4172/2169-0316.1000182 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Random generators or stochastic engines are a key component in the structure
of metaheuristic algorithms. This work investigates the effects of non-Gaussian
stochastic engines on the performance of metaheuristics when solving a
real-world optimization problem. In this work, the bacteria foraging algorithm
(BFA) was employed in tandem with four random generators (stochastic engines).
The stochastic engines operate using the Weibull distribution, Gamma
distribution, Gaussian distribution and a chaotic mechanism. The two
non-Gaussian distributions are the Weibull and Gamma distributions. In this
work, the approaches developed were implemented on the real-world
multi-objective resin bonded sand mould problem. The Pareto frontiers obtained
were benchmarked using two metrics: the hypervolume indicator (HVI) and the
proposed Average Explorative Rate (AER) metric. Detailed discussions from various
perspectives on the effects of non-Gaussian random generators in metaheuristics
are provided.
| [
{
"version": "v1",
"created": "Tue, 24 May 2016 10:27:17 GMT"
}
] | 1,464,134,400,000 | [
[
"Ganesan",
"Timothy",
""
],
[
"Vasant",
"Pandian",
""
],
[
"Elamvazuthi",
"Irraivan",
""
]
] |
1605.07728 | John Dickerson | John P. Dickerson, Aleksandr M. Kazachkov, Ariel D. Procaccia, Tuomas
Sandholm | Small Representations of Big Kidney Exchange Graphs | Preliminary version appeared at the 31st AAAI Conference on
Artificial Intelligence (AAAI 2017) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kidney exchanges are organized markets where patients swap willing but
incompatible donors. In the last decade, kidney exchanges grew from small and
regional to large and national---and soon, international. This growth results
in more lives saved, but exacerbates the empirical hardness of the
$\mathcal{NP}$-complete problem of optimally matching patients to donors.
State-of-the-art matching engines use integer programming techniques to clear
fielded kidney exchanges, but these methods must be tailored to specific models
and objective functions, and may fail to scale to larger exchanges. In this
paper, we observe that if the kidney exchange compatibility graph can be
encoded by a constant number of patient and donor attributes, the clearing
problem is solvable in polynomial time. We give necessary and sufficient
conditions for losslessly shrinking the representation of an arbitrary
compatibility graph. Then, using real compatibility graphs from the UNOS
nationwide kidney exchange, we show how many attributes are needed to encode
real compatibility graphs. The experiments show that, indeed, small numbers of
attributes suffice.
| [
{
"version": "v1",
"created": "Wed, 25 May 2016 04:33:41 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Dec 2016 19:31:20 GMT"
}
] | 1,482,105,600,000 | [
[
"Dickerson",
"John P.",
""
],
[
"Kazachkov",
"Aleksandr M.",
""
],
[
"Procaccia",
"Ariel D.",
""
],
[
"Sandholm",
"Tuomas",
""
]
] |
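A minimal sketch of the compression idea in the abstract above (1605.07728): if compatibility depends only on a constant number of attributes, the compatibility graph is fully described by the distinct attribute vectors plus their multiplicities. The attribute choice (blood types only) and the helper name `shrink` are illustrative, not the paper's construction.

```python
from collections import Counter

def shrink(pairs):
    """Collapse a list of patient-donor pairs into (attribute_vector, count)
    entries. If compatibility is a function of the attributes alone, pairs
    sharing a vector are interchangeable, so this loses no information about
    the compatibility graph while its size no longer grows with the number
    of pairs."""
    return list(Counter(pairs).items())

# Toy instance described by blood types only (a realistic encoding would use
# more attributes, e.g. sensitization level):
pairs = [("A", "O"), ("A", "O"), ("B", "A"), ("O", "AB"), ("B", "A")]
print(shrink(pairs))   # [(('A', 'O'), 2), (('B', 'A'), 2), (('O', 'AB'), 1)]
```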
1605.07989 | Sailik Sengupta | Tathagata Chakraborti, Sarath Sreedharan, Sailik Sengupta, T. K.
Satish Kumar, and Subbarao Kambhampati | Compliant Conditions for Polynomial Time Approximation of Operator
Counts | Published at the International Symposium on Combinatorial Search
(SoCS), 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we develop a computationally simpler version of the operator
count heuristic for a particular class of domains. The contribution of this
abstract is threefold, we (1) propose an efficient closed form approximation to
the operator count heuristic using the Lagrangian dual; (2) leverage compressed
sensing techniques to obtain an integer approximation for operator counts in
polynomial time; and (3) discuss the relationship of the proposed formulation
to existing heuristics and investigate properties of domains where such
approaches appear to be useful.
| [
{
"version": "v1",
"created": "Wed, 25 May 2016 18:10:48 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2016 02:10:11 GMT"
}
] | 1,467,763,200,000 | [
[
"Chakraborti",
"Tathagata",
""
],
[
"Sreedharan",
"Sarath",
""
],
[
"Sengupta",
"Sailik",
""
],
[
"Kumar",
"T. K. Satish",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1605.08150 | Saman Sarraf | Krishanth Krishnan, Taralyn Schwering and Saman Sarraf | Cognitive Dynamic Systems: A Technical Review of Cognitive Radar | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We start with the history of cognitive radar, where origins of the PAC,
Fuster research on cognition and principals of cognition are provided. Fuster
describes five cognitive functions: perception, memory, attention, language,
and intelligence. We describe the Perception-Action Cyclec as it applies to
cognitive radar, and then discuss long-term memory, memory storage, memory
retrieval and working memory. A comparison between memory in human cognition
and cognitive radar is given as well. Attention is another function described
by Fuster, and we have given the comparison of attention in human cognition and
cognitive radar. We talk about the four functional blocks from the PAC:
Bayesian filter, feedback information, dynamic programming and state-space
model for the radar environment. Then, to show that the PAC improves the
tracking accuracy of Cognitive Radar over Traditional Active Radar, we have
provided simulation results. In the simulation, three nonlinear filters:
Cubature Kalman Filter, Unscented Kalman Filter and Extended Kalman Filter are
compared. Based on the results, radars implemented with CKF perform better than
the radars implemented with UKF or radars implemented with EKF. Further, radar
with EKF has the worst accuracy and has the biggest computation load because of
derivation and evaluation of Jacobian matrices. We suggest using the concept of
risk management to better control parameters and improve performance in
cognitive radar. We believe spectrum sensing is of potential
interest for use in cognitive radar, and we propose a new approach,
Probabilistic ICA, which will presumably reduce noise based on estimation error
in cognitive radar. Parallel computing is a concept based on a divide-and-conquer
mechanism, and we suggest using the parallel computing approach in
cognitive radar to carry out complicated calculations or tasks and reduce processing
time.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 05:49:25 GMT"
}
] | 1,464,307,200,000 | [
[
"Krishnan",
"Krishanth",
""
],
[
"Schwering",
"Taralyn",
""
],
[
"Sarraf",
"Saman",
""
]
] |
1605.08390 | Juanjuan Zhao | Juanjuan Zhao, Fan Zhang, Lai Tu, Chengzhong Xu, Dayong Shen, Chen
Tian, Xiang-Yang Li, Zhengxi Li | Estimation of Passenger Route Choice Pattern Using Smart Card Data for
Complex Metro Systems | 12 pages, 12 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Nowadays, metro systems play an important role in meeting the urban
transportation demand in large cities. The understanding of passenger route
choice is critical for public transit management. The wide deployment of
Automated Fare Collection(AFC) systems opens up a new opportunity. However,
only each trip's tap-in and tap-out timestamp and stations can be directly
obtained from AFC system records; the train and route chosen by a passenger,
which are needed to solve our problem, are unknown. While existing methods work
well in some specific situations, they don't work for complicated situations.
In this paper, we propose a solution that needs no additional equipment or
human involvement than the AFC systems. We develop a probabilistic model that
can estimate from empirical analysis how the passenger flows are dispatched to
different routes and trains. We validate our approach using a large scale data
set collected from the Shenzhen metro system. The measured results provide us
with useful inputs when building the passenger path choice model.
| [
{
"version": "v1",
"created": "Tue, 19 Apr 2016 07:52:30 GMT"
}
] | 1,464,307,200,000 | [
[
"Zhao",
"Juanjuan",
""
],
[
"Zhang",
"Fan",
""
],
[
"Tu",
"Lai",
""
],
[
"Xu",
"Chengzhong",
""
],
[
"Shen",
"Dayong",
""
],
[
"Tian",
"Chen",
""
],
[
"Li",
"Xiang-Yang",
""
],
[
"Li",
"Zhengxi",
""
]
] |
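A hedged sketch of the basic inference step behind route-choice estimation from AFC data: given only the tap-in/tap-out duration, score each candidate route by a prior times an assumed travel-time likelihood. The Gaussian likelihood, priors and timings are hypothetical; the model in the abstract above, which also dispatches flows to individual trains, is richer.

```python
import math

def route_posterior(observed_minutes, routes):
    """P(route | observed journey time), proportional to prior * Normal likelihood."""
    def normal_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    scores = {name: prior * normal_pdf(observed_minutes, mu, sd)
              for name, (prior, mu, sd) in routes.items()}
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

# Hypothetical origin-destination pair with two feasible routes:
routes = {"via line 1": (0.6, 34.0, 4.0),   # (prior share, mean minutes, std minutes)
          "via line 3": (0.4, 41.0, 5.0)}
print(route_posterior(38.0, routes))
```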
1606.00058 | Aleksander Lodwich | Aleksander Lodwich | How to avoid ethically relevant Machine Consciousness | 20 pages, 9 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses the root cause of systems perceiving the self experience
and how to exploit adaptive and learning features without introducing ethically
problematic system properties.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 21:52:13 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jun 2016 10:53:16 GMT"
}
] | 1,465,257,600,000 | [
[
"Lodwich",
"Aleksander",
""
]
] |
1606.00075 | Yura Perov N | Yura N Perov | Applications of Probabilistic Programming (Master's thesis, 2015) | Supervisor: Frank Wood. The thesis was prepared in the Department of
Engineering Science at the University of Oxford | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This thesis describes work on two applications of probabilistic programming:
the learning of probabilistic program code given specifications, in particular
program code of one-dimensional samplers; and the facilitation of sequential
Monte Carlo inference with help of data-driven proposals. The latter is
presented with experimental results on a linear Gaussian model and a
non-parametric dependent Dirichlet process mixture of objects model for object
recognition and tracking.
In Chapter 1 we provide a brief introduction to probabilistic programming.
In Chapter 2 we present an approach to automatic discovery of samplers in the
form of probabilistic programs. We formulate a Bayesian approach to this
problem by specifying a grammar-based prior over probabilistic program code. We
use an approximate Bayesian computation method to learn the programs, whose
executions generate samples that statistically match observed data or
analytical characteristics of distributions of interest. In our experiments we
leverage different probabilistic programming systems to perform Markov chain
Monte Carlo sampling over the space of programs. Experimental results have
demonstrated that, using the proposed methodology, we can learn approximate and
even some exact samplers. Finally, we show that our results are competitive
with regard to genetic programming methods.
In Chapter 3, we describe a way to facilitate sequential Monte Carlo
inference in probabilistic programming using data-driven proposals. In
particular, we develop a distance-based proposal for the non-parametric
dependent Dirichlet process mixture of objects model. We implement this
approach in the probabilistic programming system Anglican, and show that for
that model data-driven proposals provide significant performance improvements.
We also explore the possibility of using neural networks to improve data-driven
proposals.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 23:48:55 GMT"
},
{
"version": "v2",
"created": "Tue, 19 May 2020 19:41:59 GMT"
}
] | 1,590,019,200,000 | [
[
"Perov",
"Yura N",
""
]
] |
1606.00133 | Jae Hee Lee | Frank Dylla, Jae Hee Lee, Till Mossakowski, Thomas Schneider, Andr\'e
Van Delden, Jasper Van De Ven, Diedrich Wolter | A Survey of Qualitative Spatial and Temporal Calculi -- Algebraic and
Computational Properties | Submitted to ACM Computing Surveys | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Qualitative Spatial and Temporal Reasoning (QSTR) is concerned with symbolic
knowledge representation, typically over infinite domains. The motivations for
employing QSTR techniques range from exploiting computational properties that
allow efficient reasoning to capture human cognitive concepts in a
computational framework. The notion of a qualitative calculus is one of the
most prominent QSTR formalisms. This article presents the first overview of all
qualitative calculi developed to date and their computational properties,
together with generalized definitions of the fundamental concepts and methods,
which now encompass all existing calculi. Moreover, we provide a classification
of calculi according to their algebraic properties.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 06:46:51 GMT"
}
] | 1,464,825,600,000 | [
[
"Dylla",
"Frank",
""
],
[
"Lee",
"Jae Hee",
""
],
[
"Mossakowski",
"Till",
""
],
[
"Schneider",
"Thomas",
""
],
[
"Van Delden",
"André",
""
],
[
"Van De Ven",
"Jasper",
""
],
[
"Wolter",
"Diedrich",
""
]
] |
1606.00339 | Christian Stra{\ss}er | Mathieu Beirlaen and Christian Stra{\ss}er | A structured argumentation framework for detaching conditional
obligations | This is our submission to DEON 2016, including the technical appendix | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general formal argumentation system for dealing with the
detachment of conditional obligations. Given a set of facts, constraints, and
conditional obligations, we answer the question whether an unconditional
obligation is detachable by considering reasons for and against its detachment.
For the evaluation of arguments in favor of detaching obligations we use a
Dung-style argumentation-theoretical semantics. We illustrate the modularity of
the general framework by considering some extensions, and we compare the
framework to some related approaches from the literature.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 16:04:47 GMT"
}
] | 1,464,825,600,000 | [
[
"Beirlaen",
"Mathieu",
""
],
[
"Straßer",
"Christian",
""
]
] |
1606.00626 | Patrick O. Glauner | Patrick Glauner, Jorge Augusto Meira, Petko Valtchev, Radu State,
Franck Bettinger | The Challenge of Non-Technical Loss Detection using Artificial
Intelligence: A Survey | null | International Journal of Computational Intelligence Systems
(IJCIS), vol. 10, issue 1, pp. 760-775, 2017 | 10.2991/ijcis.2017.10.1.51 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detection of non-technical losses (NTL) which include electricity theft,
faulty meters or billing errors has attracted increasing attention from
researchers in electrical engineering and computer science. NTLs cause
significant harm to the economy, as in some countries they may range up to 40%
of the total electricity distributed. The predominant research direction is
employing artificial intelligence to predict whether a customer causes NTL.
This paper first provides an overview of how NTLs are defined and their impact
on economies, which include loss of revenue and profit of electricity providers
and decrease of the stability and reliability of electrical power grids. It
then surveys the state-of-the-art research efforts in an up-to-date and
comprehensive review of algorithms, features and data sets used. It finally
identifies the key scientific and engineering challenges in NTL detection and
suggests how they could be addressed in the future.
| [
{
"version": "v1",
"created": "Thu, 2 Jun 2016 11:14:47 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2017 22:30:28 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Jul 2017 04:25:54 GMT"
}
] | 1,501,027,200,000 | [
[
"Glauner",
"Patrick",
""
],
[
"Meira",
"Jorge Augusto",
""
],
[
"Valtchev",
"Petko",
""
],
[
"State",
"Radu",
""
],
[
"Bettinger",
"Franck",
""
]
] |
1606.00652 | Jarryd Martin | Jarryd Martin, Tom Everitt, Marcus Hutter | Death and Suicide in Universal Artificial Intelligence | Conference: Artificial General Intelligence (AGI) 2016 13 pages, 2
figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) is a general paradigm for studying intelligent
behaviour, with applications ranging from artificial intelligence to psychology
and economics. AIXI is a universal solution to the RL problem; it can learn any
computable environment. A technical subtlety of AIXI is that it is defined
using a mixture over semimeasures that need not sum to 1, rather than over
proper probability measures. In this work we argue that the shortfall of a
semimeasure can naturally be interpreted as the agent's estimate of the
probability of its death. We formally define death for generally intelligent
agents like AIXI, and prove a number of related theorems about their behaviour.
Notable discoveries include that agent behaviour can change radically under
positive linear transformations of the reward signal (from suicidal to
dogmatically self-preserving), and that the agent's posterior belief that it
will survive increases over time.
| [
{
"version": "v1",
"created": "Thu, 2 Jun 2016 12:48:39 GMT"
}
] | 1,464,912,000,000 | [
[
"Martin",
"Jarryd",
""
],
[
"Everitt",
"Tom",
""
],
[
"Hutter",
"Marcus",
""
]
] |
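For readers unfamiliar with semimeasures, one way to write the shortfall interpretation sketched in the abstract above (1606.00652) is the following; the notation is mine and simplified, not necessarily the paper's.

```latex
% Sketch: if \nu is a semimeasure over percept sequences, its one-step
% shortfall after history ae_{<t} can be read as the agent's subjective
% probability of dying at time t.
\[
  P_\nu(\text{death at } t \mid ae_{<t})
  \;=\; 1 - \sum_{e_t \in \mathcal{E}} \nu(e_t \mid ae_{<t}),
\]
% which is non-negative precisely because \nu is a semimeasure,
% i.e. \sum_{e_t} \nu(e_t \mid ae_{<t}) \le 1, rather than a proper measure.
```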
1606.01015 | Jordan Henrio | Jordan Henrio, Thomas Henn, Tomoharu Nakashima, Hidehisa Akiyama | Selecting the Best Player Formation for Corner-Kick Situations Based on
Bayes' Estimation | 12 pages, 7 figures, RoboCup Symposium 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the domain of the Soccer simulation 2D league of the RoboCup project,
appropriate player positioning against a given opponent team is an important
factor of soccer team performance. This work proposes a model which decides the
strategy that should be applied regarding a particular opponent team. This task
can be realized by applying a preliminary learning phase where the model
determines the most effective strategies against clusters of opponent teams.
The model determines the best strategies by using sequential Bayes' estimators.
As a first trial of the system, the proposed model is used to determine the
association of player formations against opponent teams in the particular
situation of corner-kick. The implemented model shows satisfying abilities to
compare player formations that are similar to each other in terms of
performance and determines the right ranking even with only a reasonable number of
simulation games.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2016 09:31:13 GMT"
}
] | 1,465,171,200,000 | [
[
"Henrio",
"Jordan",
""
],
[
"Henn",
"Thomas",
""
],
[
"Nakashima",
"Tomoharu",
""
],
[
"Akiyama",
"Hidehisa",
""
]
] |
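A minimal Beta-Bernoulli sketch of ranking corner-kick formations by sequential Bayesian updating, in the spirit of the abstract above (1606.01015); the success criterion, priors and "true" rates are invented for illustration, and the authors' estimator may differ.

```python
import random

random.seed(0)
TRUE_RATES = {"formation_A": 0.30, "formation_B": 0.22, "formation_C": 0.35}  # hypothetical

posterior = {f: [1, 1] for f in TRUE_RATES}            # Beta(alpha=1, beta=1) priors

for _ in range(300):                                   # simulated corner-kick episodes
    for formation, rate in TRUE_RATES.items():
        success = random.random() < rate               # e.g. the corner produced a shot
        posterior[formation][0 if success else 1] += 1 # sequential Bayes update

for formation, (a, b) in sorted(posterior.items(),
                                key=lambda kv: kv[1][0] / sum(kv[1]), reverse=True):
    print(f"{formation}: posterior mean success rate = {a / (a + b):.3f}")
```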
1606.01113 | Kuang Zhou | Kuang Zhou (DRUID), Arnaud Martin (DRUID), Quan Pan, Zhun-Ga Liu | ECMdd: Evidential c-medoids clustering with multiple prototypes | null | Pattern Recognition, Elsevier, 2016 | 10.1016/j.patcog.2016.05.005 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, a new prototype-based clustering method named Evidential
C-Medoids (ECMdd), which belongs to the family of medoid-based clustering for
proximity data, is proposed as an extension of Fuzzy C-Medoids (FCMdd) on the
theoretical framework of belief functions. In the application of FCMdd and
original ECMdd, a single medoid (prototype), which is supposed to belong to the
object set, is utilized to represent one class. For the sake of clarity, this
kind of ECMdd using a single medoid is denoted by sECMdd. In real clustering
applications, using only one pattern to capture or interpret a class may not
adequately model different types of group structure and hence limits the
clustering performance. In order to address this problem, a variation of ECMdd
using multiple weighted medoids, denoted by wECMdd, is presented. Unlike
sECMdd, in wECMdd objects in each cluster carry various weights describing
their degree of representativeness for that class. This mechanism enables each
class to be represented by more than one object. Experimental results in
synthetic and real data sets clearly demonstrate the superiority of sECMdd and
wECMdd. Moreover, the clustering results by wECMdd can provide richer
information for the inner structure of the detected classes with the help of
prototype weights.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2016 14:44:15 GMT"
}
] | 1,465,171,200,000 | [
[
"Zhou",
"Kuang",
"",
"DRUID"
],
[
"Martin",
"Arnaud",
"",
"DRUID"
],
[
"Pan",
"Quan",
""
],
[
"Liu",
"Zhun-Ga",
""
]
] |
1606.01116 | Kuang Zhou | Kuang Zhou (DRUID), Arnaud Martin (DRUID), Quan Pan | The belief noisy-or model applied to network reliability analysis | null | International Journal of Uncertainty, Fuzziness and
Knowledge-Based Systems, World Scientific Publishing, 2016 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One difficulty faced in knowledge engineering for Bayesian Network (BN) is
the quantification step where the Conditional Probability Tables (CPTs) are
determined. The number of parameters included in CPTs increases exponentially
with the number of parent variables. The most common solution is the
application of the so-called canonical gates. The Noisy-OR (NOR) gate, which
takes advantage of the independence of causal interactions, provides a
logarithmic reduction of the number of parameters required to specify a CPT. In
this paper, an extension of NOR model based on the theory of belief functions,
named Belief Noisy-OR (BNOR), is proposed. BNOR is capable of dealing with both
aleatory and epistemic uncertainty of the network. Compared with NOR, richer
information, which is of great value for making decisions, can be obtained when the
available knowledge is uncertain. In particular, when there is no epistemic
uncertainty, BNOR degrades into NOR. Additionally, different structures of BNOR
are presented in this paper in order to meet various needs of engineers. The
application of BNOR model on the reliability evaluation problem of networked
systems demonstrates its effectiveness.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2016 14:47:12 GMT"
}
] | 1,465,171,200,000 | [
[
"Zhou",
"Kuang",
"",
"DRUID"
],
[
"Martin",
"Arnaud",
"",
"DRUID"
],
[
"Pan",
"Quan",
""
]
] |
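For reference, a minimal sketch of the classical Noisy-OR gate that the abstract above (1606.01116) builds on: the full CPT over n binary causes is generated from n link probabilities (plus an optional leak). The belief-function extension (BNOR) itself is not shown here.

```python
from itertools import product

def noisy_or(active_link_probs, leak=0.0):
    """P(effect = 1) for the Noisy-OR gate, given the link probabilities of the
    causes that are currently present (and an optional leak probability)."""
    q = 1.0 - leak
    for p in active_link_probs:
        q *= 1.0 - p
    return 1.0 - q

# Three binary causes: 3 link parameters generate the full 2**3-row CPT.
links = [0.9, 0.7, 0.5]
for states in product([0, 1], repeat=len(links)):
    present = [p for p, s in zip(links, states) if s]
    print(states, round(noisy_or(present), 4))
```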
1606.02032 | Chang-Shing Lee | Chang-Shing Lee, Mei-Hui Wang, Shi-Jim Yen, Ting-Han Wei, I-Chen Wu,
Ping-Chiang Chou, Chun-Hsun Chou, Ming-Wan Wang, and Tai-Hsiung Yang | Human vs. Computer Go: Review and Prospect | This article is with 6 pages and 3 figures. And, it is accepted and
will be published in IEEE Computational Intelligence Magazine in August, 2016 | null | 10.1109/MCI.2016.2572559 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The Google DeepMind challenge match in March 2016 was a historic achievement
for computer Go development. This article discusses the development of
computational intelligence (CI) and its relative strength in comparison with
human intelligence for the game of Go. We first summarize the milestones
achieved for computer Go from 1998 to 2016. Then, the computer Go programs that
have participated in previous IEEE CIS competitions as well as methods and
techniques used in AlphaGo are briefly introduced. Commentaries from three
high-level professional Go players on the five AlphaGo versus Lee Sedol games
are also included. We conclude that AlphaGo beating Lee Sedol is a huge
achievement in artificial intelligence (AI) based largely on CI methods. In the
future, powerful computer Go programs such as AlphaGo are expected to be
instrumental in promoting Go education and AI real-world applications.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 05:13:37 GMT"
}
] | 1,555,286,400,000 | [
[
"Lee",
"Chang-Shing",
""
],
[
"Wang",
"Mei-Hui",
""
],
[
"Yen",
"Shi-Jim",
""
],
[
"Wei",
"Ting-Han",
""
],
[
"Wu",
"I-Chen",
""
],
[
"Chou",
"Ping-Chiang",
""
],
[
"Chou",
"Chun-Hsun",
""
],
[
"Wang",
"Ming-Wan",
""
],
[
"Yang",
"Tai-Hsiung",
""
]
] |
1606.02645 | Jakub Kowalski | Jakub Kowalski, Jakub Sutowicz, Marek Szyku{\l}a | Simplified Boardgames | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We formalize Simplified Boardgames language, which describes a subclass of
arbitrary board games. The language structure is based on the regular
expressions, which makes the rules easily machine-processable while keeping the
rules concise and fairly human-readable.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2016 17:29:17 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Jul 2016 14:02:55 GMT"
}
] | 1,468,800,000,000 | [
[
"Kowalski",
"Jakub",
""
],
[
"Sutowicz",
"Jakub",
""
],
[
"Szykuła",
"Marek",
""
]
] |
1606.02710 | Berat Dogan | Berat Do\u{g}an | A Modified Vortex Search Algorithm for Numerical Function Optimization | 18 pages, 7 figures | International journal of Artificial Intelligence & Applications
(IJAIA), Volume 7, Number 3, May 2016 | 10.5121/ijaia.2016.7304 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Vortex Search (VS) algorithm is one of the recently proposed
metaheuristic algorithms which was inspired from the vortical flow of the
stirred fluids. Although the VS algorithm is shown to be a good candidate for
the solution of certain optimization problems, it also has some drawbacks. In
the VS algorithm, candidate solutions are generated around the current best
solution by using a Gaussian distribution at each iteration pass. This provides
simplicity to the algorithm, but it also leads to some problems.
In particular, for functions that have a number of local minimum points, selecting
a single point around which to generate candidate solutions can lead the algorithm to
become trapped in a local minimum point. Due to the adaptive step-size
adjustment scheme used in the VS algorithm, the locality of the created
candidate solutions is increased at each iteration pass. Therefore, if the
algorithm cannot escape a local point as quickly as possible, it becomes much
more difficult for the algorithm to escape from that point in later
iterations. In this study, a modified Vortex Search algorithm (MVS) is proposed
to overcome the above-mentioned drawback of the existing VS algorithm. In the MVS
algorithm, the candidate solutions are generated around a number of points at
each iteration pass. Computational results showed that with the help of this
modification the global search ability of the existing VS algorithm is improved
and the MVS algorithm outperformed the existing VS algorithm, PSO2011 and ABC
algorithms for the benchmark numerical function set.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2016 12:00:28 GMT"
}
] | 1,465,516,800,000 | [
[
"Doğan",
"Berat",
""
]
] |
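A rough sketch of the candidate-generation step contrasted in the abstract above (1606.02710): Gaussian candidates around the current centre(s) with a shrinking radius, where the modified variant keeps several centres instead of one. The objective, radius schedule and number of centres are illustrative choices, not the paper's exact settings.

```python
import random

def sphere(x):                                   # toy objective, minimum at the origin
    return sum(v * v for v in x)

def candidates(centres, radius, per_centre=20):
    """Gaussian candidate solutions around each centre with the given radius."""
    return [[random.gauss(c, radius) for c in centre]
            for centre in centres for _ in range(per_centre)]

random.seed(1)
dim, radius = 2, 5.0
centres = [[random.uniform(-5, 5) for _ in range(dim)]]   # single centre, as in plain VS
for _ in range(50):
    pool = sorted(candidates(centres, radius), key=sphere)
    centres = pool[:3]                            # MVS-style: keep several good centres
    radius *= 0.9                                 # adaptive step-size decrease
print("best objective found:", round(min(map(sphere, centres)), 6))
```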
1606.02767 | Norbert B\'atfai Ph.D. | Norbert B\'atfai | Theoretical Robopsychology: Samu Has Learned Turing Machines | 11 pages, added a missing cc* value and the appearance of Table 1 is
improved | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From the point of view of a programmer, the robopsychology is a synonym for
the activity is done by developers to implement their machine learning
applications. This robopsychological approach raises some fundamental
theoretical questions of machine learning. Our discussion of these questions is
constrained to Turing machines. Alan Turing had given an algorithm (aka the
Turing Machine) to describe algorithms. If it has been applied to describe
itself then this brings us to Turing's notion of the universal machine. In the
present paper, we investigate algorithms to write algorithms. From a pedagogy
point of view, this way of writing programs can be considered as a combination
of learning by listening and learning by doing due to it is based on applying
agent technology and machine learning. As the main result we introduce the
problem of learning and then we show that it cannot easily be handled in
reality therefore it is reasonable to use machine learning algorithm for
learning Turing machines.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2016 21:46:20 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Jun 2016 13:27:01 GMT"
}
] | 1,466,726,400,000 | [
[
"Bátfai",
"Norbert",
""
]
] |
1606.02899 | Manuel Mazzara | Jordi Vallverd\'u, Max Talanov, Salvatore Distefano, Manuel Mazzara,
Alexander Tchitchigin, Ildar Nurgaliev | A Cognitive Architecture for the Implementation of Emotions in Computing
Systems | null | BICA, Volume 15, January 2016, Pages 34-40 | 10.1016/j.bica.2015.11.002 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a new neurobiologically-inspired affective cognitive
architecture: NEUCOGAR (NEUromodulating COGnitive ARchitecture). The objective
of NEUCOGAR is the identification of a mapping from the influence of serotonin,
dopamine and noradrenaline to the computing processes based on von Neumann's
architecture, in order to implement affective phenomena which can operate on
the Turing machine model. As the basis of the modeling we use and extend the
L\"ovheim Cube of Emotion with parameters of the von Neumann architecture.
Validation is conducted via simulation on a computing system of dopamine
neuromodulation and its effects on the Cortex. In the experimental phase of the
project, the increase of computing power and storage redistribution due to
emotion stimulus modulated by the dopamine system, confirmed the soundness of
the model.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2016 10:43:21 GMT"
}
] | 1,465,516,800,000 | [
[
"Vallverdú",
"Jordi",
""
],
[
"Talanov",
"Max",
""
],
[
"Distefano",
"Salvatore",
""
],
[
"Mazzara",
"Manuel",
""
],
[
"Tchitchigin",
"Alexander",
""
],
[
"Nurgaliev",
"Ildar",
""
]
] |
1606.03137 | Dylan Hadfield-Menell | Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, Stuart Russell | Cooperative Inverse Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For an autonomous system to be helpful to humans and to pose no unwarranted
risks, it needs to align its values with those of the humans in its environment
in such a way that its actions contribute to the maximization of value for the
humans. We propose a formal definition of the value alignment problem as
cooperative inverse reinforcement learning (CIRL). A CIRL problem is a
cooperative, partial-information game with two agents, human and robot; both
are rewarded according to the human's reward function, but the robot does not
initially know what this is. In contrast to classical IRL, where the human is
assumed to act optimally in isolation, optimal CIRL solutions produce behaviors
such as active teaching, active learning, and communicative actions that are
more effective in achieving value alignment. We show that computing optimal
joint policies in CIRL games can be reduced to solving a POMDP, prove that
optimality in isolation is suboptimal in CIRL, and derive an approximate CIRL
algorithm.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2016 22:39:54 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2016 18:25:07 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Nov 2016 20:33:43 GMT"
},
{
"version": "v4",
"created": "Sat, 17 Feb 2024 16:13:12 GMT"
}
] | 1,708,387,200,000 | [
[
"Hadfield-Menell",
"Dylan",
""
],
[
"Dragan",
"Anca",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Russell",
"Stuart",
""
]
] |
1606.03191 | Ai Munandar Tb | Tb. Ai Munandar, Retantyo Wardoyo | Fuzzy-Klassen Model for Development Disparities Analysis based on Gross
Regional Domestic Product Sector of a Region | 6 Pages, 1 Figures, 5 Tables | null | 10.5120/ijca2015905389 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of regional development imbalances quadrant has a very important
meaning in order to see the extent of achievement of the development of certain
areas as well as the difference. Factors that could be used as a tool to
measure the inequality of development is to look at the average growth and
development contribution of each sector of Gross Regional Domestic Product
(GRDP) based on the analyzed region and the reference region. This study
discusses the development of a model to determine the regional development
imbalances using fuzzy approach system, and the rules of typology Klassen. The
model is then called fuzzy-Klassen. Implications Product Mamdani fuzzy system
is used in the model as an inference engine to generate output after
defuzzyfication process. Application of MATLAB is used as a tool of analysis in
this study. The test a result of Kota Cilegon is shows that there are
significant differences between traditional Klassen typology analyses with the
results of the model developed. Fuzzy model-Klassen shows GRDP sector
inequality Cilegon City is dominated by Quadrant I (K4), where status is the
sector forward and grows exponentially. While the traditional Klassen typology,
half of GRDP sector is dominated by Quadrant IV (K4) with a sector that is
lagging relative status.
| [
{
"version": "v1",
"created": "Fri, 10 Jun 2016 05:55:56 GMT"
}
] | 1,465,776,000,000 | [
[
"Munandar",
"Tb. Ai",
""
],
[
"Wardoyo",
"Retantyo",
""
]
] |
1606.03229 | Manuel Mazzara | Michael W. Bridges, Salvatore Distefano, Manuel Mazzara, Marat
Minlebaev, Max Talanov, Jordi Vallverd\'u | Towards Anthropo-inspired Computational Systems: the $P^3$ Model | null | In proceedings of the 9th International KES Conference on AGENTS
AND MULTI-AGENT SYSTEMS: TECHNOLOGIES AND APPLICATIONS, 2015 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a model which aim is providing a more coherent framework
for agents design. We identify three closely related anthropo-centered domains
working on separate functional levels. Abstracting from human physiology,
psychology, and philosophy we create the $P^3$ model to be used as a multi-tier
approach to deal with complex class of problems. The three layers identified in
this model have been named PhysioComputing, MindComputing, and MetaComputing.
Several instantiations of this model are finally presented related to different
IT areas such as artificial intelligence, distributed computing, software and
service engineering.
| [
{
"version": "v1",
"created": "Fri, 10 Jun 2016 08:39:22 GMT"
}
] | 1,465,776,000,000 | [
[
"Bridges",
"Michael W.",
""
],
[
"Distefano",
"Salvatore",
""
],
[
"Mazzara",
"Manuel",
""
],
[
"Minlebaev",
"Marat",
""
],
[
"Talanov",
"Max",
""
],
[
"Vallverdú",
"Jordi",
""
]
] |
1606.03244 | Martin Cooper | Martin C. Cooper, Andreas Herzig, Faustine Maffre, Fr\'ed\'eric Maris
and Pierre R\'egnier | Simple epistemic planning: generalised gossiping | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The gossip problem, in which information (known as secrets) must be shared
among a certain number of agents using the minimum number of calls, is of
interest in the design of communication networks and protocols. We extend
the gossip problem to arbitrary epistemic depths. For example, we may require
not only that all agents know all secrets but also that all agents know that
all agents know all secrets. We give optimal protocols for various versions of
the generalised gossip problem, depending on the graph of communication links,
in the case of two-way communications, one-way communications and parallel
communication. We also study different variants which allow us to impose
negative goals such as that certain agents must not know certain secrets. We
show that in the presence of negative goals testing the existence of a
successful protocol is NP-complete whereas this is always polynomial-time in
the case of purely positive goals.
| [
{
"version": "v1",
"created": "Fri, 10 Jun 2016 09:31:26 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2016 15:49:20 GMT"
}
] | 1,465,948,800,000 | [
[
"Cooper",
"Martin C.",
""
],
[
"Herzig",
"Andreas",
""
],
[
"Maffre",
"Faustine",
""
],
[
"Maris",
"Frédéric",
""
],
[
"Régnier",
"Pierre",
""
]
] |
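For the depth-one case of the gossip problem above (every agent knows every secret), the classical optimum with two-way calls is 2n-4 calls for n >= 4 agents. The Python sketch below simulates that well-known baseline protocol only; it does not reproduce the paper's generalised protocols for higher epistemic depths, restricted graphs, or negative goals, and all names in it are my own.

def gossip_two_way(n):
    """Simulate the classic 2n-4 call protocol for 'everyone knows all secrets' (n >= 4)."""
    assert n >= 4
    knows = [{i} for i in range(n)]            # agent i initially knows only its own secret
    calls = []

    def call(i, j):                            # a two-way call merges both agents' secret sets
        merged = knows[i] | knows[j]
        knows[i], knows[j] = set(merged), set(merged)
        calls.append((i, j))

    for k in range(4, n):                      # phase 1: every other agent reports to hub 0
        call(k, 0)
    for i, j in [(0, 1), (2, 3), (0, 2), (1, 3)]:
        call(i, j)                             # phase 2: 4 calls spread every secret to hubs 0-3
    for k in range(4, n):                      # phase 3: everyone calls an informed hub
        call(k, 1)

    assert all(len(s) == n for s in knows)     # depth-1 goal reached
    return calls

print(len(gossip_two_way(10)))                 # 16 calls, i.e. 2*10 - 4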
1606.03298 | Brian Ruttenberg | Avi Pfeffer, Brian Ruttenberg, William Kretschmer | Structured Factored Inference: A Framework for Automated Reasoning in
Probabilistic Programming Languages | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning on large and complex real-world models is a computationally
difficult task, yet one that is required for effective use of many AI
applications. A plethora of inference algorithms have been developed that work
well on specific models or only on parts of general models. Consequently, a
system that can intelligently apply these inference algorithms to different
parts of a model for fast reasoning is highly desirable. We introduce a new
framework called structured factored inference (SFI) that provides the
foundation for such a system. Using models encoded in a probabilistic
programming language, SFI provides a sound means to decompose a model into
sub-models, apply an inference algorithm to each sub-model, and combine the
resulting information to answer a query. Our results show that SFI is nearly as
accurate as exact inference yet retains the benefits of approximate inference
methods.
| [
{
"version": "v1",
"created": "Fri, 10 Jun 2016 12:53:01 GMT"
}
] | 1,465,776,000,000 | [
[
"Pfeffer",
"Avi",
""
],
[
"Ruttenberg",
"Brian",
""
],
[
"Kretschmer",
"William",
""
]
] |
1606.04000 | Douglas Summers Stay | Douglas Summers-Stay, Clare Voss and Taylor Cassidy | Using a Distributional Semantic Vector Space with a Knowledge Base for
Reasoning in Uncertain Conditions | null | Biologically Inspired Cognitive Architectures (2016), pp. 34-44 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The inherent inflexibility and incompleteness of commonsense knowledge bases
(KBs) have limited their usefulness. We describe a system called Displacer for
performing KB queries extended with the analogical capabilities of the word2vec
distributional semantic vector space (DSVS). This allows the system to answer
queries with information which was not contained in the original KB in any
form. By performing analogous queries on semantically related terms and mapping
their answers back into the context of the original query using displacement
vectors, we are able to give approximate answers to many questions which, if
posed to the KB alone, would return no results.
We also show how the hand-curated knowledge in a KB can be used to increase
the accuracy of a DSVS in solving analogy problems. In these ways, a KB and a
DSVS can make up for each other's weaknesses.
| [
{
"version": "v1",
"created": "Mon, 13 Jun 2016 15:45:00 GMT"
}
] | 1,465,862,400,000 | [
[
"Summers-Stay",
"Douglas",
""
],
[
"Voss",
"Clare",
""
],
[
"Cassidy",
"Taylor",
""
]
] |
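The displacement idea in the abstract above can be illustrated with a minimal sketch. The toy embedding table stands in for a trained word2vec model, and the vocabulary, helper names and capital-of example are all hypothetical (with random vectors the lookup is meaningless; it only shows the mechanics, not the Displacer system itself).

import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(50) for w in
       ["paris", "france", "berlin", "germany", "oslo", "norway"]}   # stand-in DSVS

def nearest(vec, exclude=()):
    """Vocabulary word with the highest cosine similarity to vec."""
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max((w for w in emb if w not in exclude), key=lambda w: cos(vec, emb[w]))

def displaced_answer(query_term, analog_term, analog_answer):
    """Answer a failed KB query by displacing a known answer for an analogous term.

    E.g. if the KB stores capital_of(france) = paris but has no entry for
    germany, map paris through the displacement vector v(germany) - v(france)
    and return the nearest word, which with real embeddings tends to be berlin.
    """
    displacement = emb[query_term] - emb[analog_term]
    return nearest(emb[analog_answer] + displacement,
                   exclude={query_term, analog_term, analog_answer})

print(displaced_answer("germany", "france", "paris"))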
1606.04250 | Philipp Geiger | Philipp Geiger, Katja Hofmann, Bernhard Sch\"olkopf | Experimental and causal view on information integration in autonomous
agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The amount of digitally available but heterogeneous information about the
world is remarkable, and new technologies such as self-driving cars, smart
homes, or the internet of things may further increase it. In this paper we
present preliminary ideas about certain aspects of the problem of how such
heterogeneous information can be harnessed by autonomous agents. After
discussing potentials and limitations of some existing approaches, we
investigate how \emph{experiments} can help to obtain a better understanding of
the problem. Specifically, we present a simple agent that integrates video data
from a different agent, and implement and evaluate a version of it on the novel
experimentation platform \emph{Malmo}. The focus of a second investigation is
on how information about the hardware of different agents, the agents' sensory
data, and \emph{causal} information can be utilized for knowledge transfer
between agents and subsequently more data-efficient decision making. Finally,
we discuss potential future steps w.r.t.\ theory and experimentation, and
formulate open questions.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 08:38:18 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Aug 2016 16:37:37 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Mar 2018 15:43:19 GMT"
}
] | 1,520,985,600,000 | [
[
"Geiger",
"Philipp",
""
],
[
"Hofmann",
"Katja",
""
],
[
"Schölkopf",
"Bernhard",
""
]
] |
1606.04345 | Balazs Kegl | Ak{\i}n Kazak\c{c}{\i} and Mehdi Cherti and Bal\'azs K\'egl | Digits that are not: Generating new types through deep neural nets | preprint ICCC'16, International Conference on Computational
Creativity | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For an artificial creative agent, an essential driver of the search for
novelty is a value function which is often provided by the system designer or
users. We argue that an important barrier for progress in creativity research
is the inability of these systems to develop their own notion of value for
novelty. We propose a notion of knowledge-driven creativity that circumvent the
need for an externally imposed value function, allowing the system to explore
based on what it has learned from a set of referential objects. The concept is
illustrated by a specific knowledge model provided by a deep generative
autoencoder. Using the described system, we train a knowledge model on a set of
digit images and we use the same model to build coherent sets of new digits
that do not belong to known digit types.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 13:29:13 GMT"
}
] | 1,470,441,600,000 | [
  [
    "Kazakçı",
    "Akın",
    ""
  ],
  [
    "Cherti",
    "Mehdi",
    ""
  ],
[
"Kégl",
"Balázs",
""
]
] |
1606.04397 | Ulrich Furbach | Ulrich Furbach and Florian Furbach and Christian Freksa | Relating Strong Spatial Cognition to Symbolic Problem Solving --- An
Example | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this note, we discuss and analyse a shortest path finding approach using
strong spatial cognition. It is compared with a symbolic graph-based algorithm
and it is shown that both approaches are similar with respect to structure and
complexity. Nevertheless, the strong spatial cognition solution is easy to
understand and even pops up immediately when one has to solve the problem.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 14:41:24 GMT"
}
] | 1,465,948,800,000 | [
[
"Furbach",
"Ulrich",
""
],
[
"Furbach",
"Florian",
""
],
[
"Freksa",
"Christian",
""
]
] |
1606.04486 | Martin Mladenov | Martin Mladenov and Leonard Kleinhans and Kristian Kersting | Lifted Convex Quadratic Programming | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetry is the essential element of lifted inference that has recently
demonstrated the possibility of performing very efficient inference in
highly-connected, but symmetric, probabilistic models. This raises the
question, whether this holds for optimisation problems in general. Here we show
that for a large class of optimisation methods this is actually the case. More
precisely, we introduce the concept of fractional symmetries of convex
quadratic programs (QPs), which lie at the heart of many machine learning
approaches, and exploit it to lift, i.e., to compress QPs. These lifted QPs can
then be tackled with the usual optimization toolbox (off-the-shelf solvers,
cutting plane algorithms, stochastic gradients etc.). If the original QP
exhibits symmetry, then the lifted one will generally be more compact, and
hence its optimization is likely to be more efficient.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 18:18:58 GMT"
}
] | 1,465,948,800,000 | [
[
"Mladenov",
"Martin",
""
],
[
"Kleinhans",
"Leonard",
""
],
[
"Kersting",
"Kristian",
""
]
] |
1606.04512 | Seyed Mehran Kazemi | Seyed Mehran Kazemi and David Poole | Why is Compiling Lifted Inference into a Low-Level Language so
Effective? | 6 pages, 3 figures, accepted at IJCAI-16 Statistical Relational AI
(StaRAI) workshop | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | First-order knowledge compilation techniques have proven efficient for lifted
inference. They compile a relational probability model into a target circuit on
which many inference queries can be answered efficiently. Early methods used
data structures as their target circuit. In our KR-2016 paper, we showed that
compiling to a low-level program instead of a data structure offers orders of
magnitude speedup, resulting in the state-of-the-art lifted inference
technique. In this paper, we conduct experiments to address two questions
regarding our KR-2016 results: 1- does the speedup come from more efficient
compilation or more efficient reasoning with the target circuit?, and 2- why
are low-level programs more efficient target circuits than data structures?
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 19:13:30 GMT"
}
] | 1,465,948,800,000 | [
[
"Kazemi",
"Seyed Mehran",
""
],
[
"Poole",
"David",
""
]
] |
1606.04589 | Ram\'on Pino P\'erez | Am\'ilcar Mata D\'iaz and Ram\'on Pino P\'erez | Impossibility in Belief Merging | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the aim of studying social properties of belief merging and having a
better understanding of impossibility, we extend in three ways the framework of
logic-based merging introduced by Konieczny and Pino P\'erez. First, at the
level of representation of the information, we pass from belief bases to
complex epistemic states. Second, the profiles are represented as functions of
finite societies to the set of epistemic states (a sort of vectors) and not as
multisets of epistemic states. Third, we extend the set of rational postulates
in order to consider the epistemic versions of the classical postulates of
Social Choice Theory: Standard Domain, Pareto Property, Independence of
Irrelevant Alternatives and Absence of Dictator. These epistemic versions of
social postulates are given, essentially, in terms of the finite propositional
logic. We state some representation theorems for these operators. These
extensions and representation theorems allow us to establish an epistemic and
very general version of Arrow's Impossibility Theorem. One of the interesting
features of our result, is that it holds for different representations of
epistemic states; for instance conditionals, Ordinal Conditional functions and,
of course, total preorders.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 23:05:39 GMT"
}
] | 1,466,035,200,000 | [
[
"Díaz",
"Amílcar Mata",
""
],
[
"Pérez",
"Ramón Pino",
""
]
] |
1606.05174 | Tom Zahavy | Nir Baram, Tom Zahavy, Shie Mannor | Deep Reinforcement Learning Discovers Internal Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Reinforcement Learning (DRL) is a trending field of research, showing
great promise in challenging problems such as playing Atari, solving Go and
controlling robots. While DRL agents perform well in practice, we still lack
the tools to analyze their performance. In this work we present the
Semi-Aggregated MDP (SAMDP) model, a model best suited to describing policies
that exhibit both spatial and temporal hierarchies. We describe its advantages
for analyzing trained policies over other modeling approaches, and show that
under the right state representation, like that of DQN agents, SAMDP can help
to identify skills. We detail the automatic process of creating it from
recorded trajectories, up to presenting it on t-SNE maps. We explain how to
evaluate its fitness and show surprising results indicating high compatibility
with the policy at hand. We conclude by showing how using the SAMDP model, an
extra performance gain can be squeezed from the agent.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2016 13:09:16 GMT"
}
] | 1,466,121,600,000 | [
[
"Baram",
"Nir",
""
],
[
"Zahavy",
"Tom",
""
],
[
"Mannor",
"Shie",
""
]
] |
1606.05312 | Andr\'e Barreto | Andr\'e Barreto, Will Dabney, R\'emi Munos, Jonathan J. Hunt, Tom
Schaul, Hado van Hasselt, David Silver | Successor Features for Transfer in Reinforcement Learning | Published at NIPS 2017 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transfer in reinforcement learning refers to the notion that generalization
should occur not only within a task but also across tasks. We propose a
transfer framework for the scenario where the reward function changes between
tasks but the environment's dynamics remain the same. Our approach rests on two
key ideas: "successor features", a value function representation that decouples
the dynamics of the environment from the rewards, and "generalized policy
improvement", a generalization of dynamic programming's policy improvement
operation that considers a set of policies rather than a single one. Put
together, the two ideas lead to an approach that integrates seamlessly within
the reinforcement learning framework and allows the free exchange of
information across tasks. The proposed method also provides performance
guarantees for the transferred policy even before any learning has taken place.
We derive two theorems that set our approach in firm theoretical ground and
present experiments that show that it successfully promotes transfer in
practice, significantly outperforming alternative methods in a sequence of
navigation tasks and in the control of a simulated robotic arm.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2016 18:45:32 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Apr 2018 11:41:05 GMT"
}
] | 1,523,577,600,000 | [
[
"Barreto",
"André",
""
],
[
"Dabney",
"Will",
""
],
[
"Munos",
"Rémi",
""
],
[
"Hunt",
"Jonathan J.",
""
],
[
"Schaul",
"Tom",
""
],
[
"van Hasselt",
"Hado",
""
],
[
"Silver",
"David",
""
]
] |
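For reference, the two ideas named in the abstract above can be written compactly in the standard successor-feature notation (this is a paraphrase, not a quotation of the paper): if the reward factorises through features $\phi$ and task weights $\mathbf{w}$ as $r(s,a,s') = \phi(s,a,s')^\top \mathbf{w}$, then
\[
Q^{\pi}(s,a)
  \;=\; \mathbb{E}^{\pi}\!\Big[\textstyle\sum_{i=0}^{\infty}\gamma^{i}\,\phi_{t+i+1}\;\Big|\;S_t=s,\,A_t=a\Big]^{\!\top}\mathbf{w}
  \;=\; \psi^{\pi}(s,a)^{\top}\mathbf{w},
\]
and generalised policy improvement acts greedily with respect to the best of the previously learned policies' successor features on a new task $\mathbf{w}'$:
\[
\pi(s) \;\in\; \arg\max_{a}\; \max_{i}\; \psi^{\pi_i}(s,a)^{\top}\mathbf{w}'.
\]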
1606.05446 | Chiara Ghidini | Federico Chesani and Riccardo De Masellis and Chiara Di
Francescomarino and Chiara Ghidini and Paola Mello and Marco Montali and
Sergio Tessaris | Abducing Compliance of Incomplete Event Logs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The capability to store data about business processes execution in so-called
Event Logs has led to the diffusion of tools for the analysis of process
executions and for the assessment of the goodness of a process model.
Nonetheless, these tools are often very rigid in dealing with Event Logs
that include incomplete information about the process execution. Thus, while
the ability of handling incomplete event data is one of the challenges
mentioned in the process mining manifesto, the evaluation of compliance of an
execution trace still requires an end-to-end complete trace to be performed.
This paper exploits the power of abduction to provide a flexible, yet
computationally effective, framework to deal with different forms of
incompleteness in an Event Log. Moreover it proposes a refinement of the
classical notion of compliance into strong and conditional compliance to take
into account incomplete logs. Finally, performances evaluation in an
experimental setting shows the feasibility of the presented approach.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 08:30:28 GMT"
}
] | 1,466,380,800,000 | [
[
"Chesani",
"Federico",
""
],
[
"De Masellis",
"Riccardo",
""
],
[
"Di Francescomarino",
"Chiara",
""
],
[
"Ghidini",
"Chiara",
""
],
[
"Mello",
"Paola",
""
],
[
"Montali",
"Marco",
""
],
[
"Tessaris",
"Sergio",
""
]
] |
1606.05593 | Craig Sherstan | Craig Sherstan, Adam White, Marlos C. Machado, Patrick M. Pilarski | Introspective Agents: Confidence Measures for General Value Functions | Accepted for presentation at the Ninth Conference on Artificial
General Intelligence (AGI 2016), 4 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Agents of general intelligence deployed in real-world scenarios must adapt to
ever-changing environmental conditions. While such adaptive agents may leverage
engineered knowledge, they will require the capacity to construct and evaluate
knowledge themselves from their own experience in a bottom-up, constructivist
fashion. This position paper builds on the idea of encoding knowledge as
temporally extended predictions through the use of general value functions.
Prior work has focused on learning predictions about externally derived signals
about a task or environment (e.g. battery level, joint position). Here we
advocate that the agent should also predict internally generated signals
regarding its own learning process - for example, an agent's confidence in its
learned predictions. Finally, we suggest how such information would be
beneficial in creating an introspective agent that is able to learn to make
good decisions in a complex, changing world.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 17:24:36 GMT"
}
] | 1,466,380,800,000 | [
[
"Sherstan",
"Craig",
""
],
[
"White",
"Adam",
""
],
[
"Machado",
"Marlos C.",
""
],
[
"Pilarski",
"Patrick M.",
""
]
] |
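As a concrete illustration of the kind of prediction discussed in the abstract above, here is a minimal linear GVF trained with TD(0), plus a second GVF whose cumulant is the first one's squared TD error, standing in for an internally generated learning signal. This is only a sketch under my own assumptions (the feature vectors, signals and step sizes are invented); the paper does not prescribe this particular construction.

import numpy as np

class GVF:
    """Minimal linear general value function trained with TD(0)."""
    def __init__(self, n_features, gamma, alpha):
        self.w = np.zeros(n_features)
        self.gamma, self.alpha = gamma, alpha

    def predict(self, x):
        return self.w @ x

    def update(self, x, cumulant, x_next):
        td_error = cumulant + self.gamma * self.predict(x_next) - self.predict(x)
        self.w += self.alpha * td_error * x
        return td_error

rng = np.random.default_rng(1)
sensor_gvf = GVF(8, gamma=0.9, alpha=0.05)      # predicts an external sensor signal
introspect_gvf = GVF(8, gamma=0.9, alpha=0.05)  # predicts the sensor GVF's own squared TD error

x = rng.random(8)
for _ in range(1000):
    x_next = rng.random(8)
    sensor = x_next.sum()                       # stand-in for a real sensor cumulant
    delta = sensor_gvf.update(x, sensor, x_next)
    introspect_gvf.update(x, delta ** 2, x_next)    # internally generated cumulant
    x = x_next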
1606.05597 | Kieran Greer Dr | Kieran Greer | Adding Context to Concept Trees | null | International Journal of Intelligent Systems Design and Computing,
Inderscience, Vol. 3, No. 1, pp.84-100, 2019 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Concept Tree is a structure for storing knowledge where the trees are
stored in a database called a Concept Base. It sits between the highly
distributed neural architectures and the distributed information systems, with
the intention of bringing brain-like and computer systems closer together.
Concept Trees can grow from the semi-structured sources when consistent
sequences of concepts are presented. Each tree ideally represents a single
cohesive concept and the trees can link with each other for navigation and
semantic purposes. The trees are therefore also a type of semantic network and
would benefit from having a consistent level of context for each node. A
consistent build process is managed through a 'counting rule' and some other
rules that can normalise the database structure. This restricted structure can
then be complemented and enriched by the more dynamic context. It is also
suggested to use the linking structure of the licas system [15] as a basis for
the context links, where the mathematical model is extended further to define
this. A number of tests have demonstrated the soundness of the architecture.
Building the trees from text documents shows that the tree structure could be
inherent in natural language. Then, two types of query language are described.
Both of these can perform consistent query processes to return knowledge to the
user and even enhance the query with new knowledge. This is supported even
further with direct comparisons to a cognitive model, also being developed by
the author.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 17:32:11 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2017 10:58:32 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Mar 2018 15:54:50 GMT"
},
{
"version": "v4",
"created": "Tue, 7 Aug 2018 14:49:17 GMT"
},
{
"version": "v5",
"created": "Fri, 17 Jan 2020 10:08:32 GMT"
}
] | 1,586,217,600,000 | [
[
"Greer",
"Kieran",
""
]
] |
1606.05767 | Naoto Yoshida | Naoto Yoshida | On Reward Function for Survival | Joint 8th International Conference on Soft Computing and Intelligent
Systems and 17th International Symposium on Advanced Intelligent Systems | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Obtaining a survival strategy (policy) is one of the fundamental problems of
biological agents. In this paper, we generalize the formulation of previous
research related to the survival of an agent and we formulate the survival
problem as a maximization of the multi-step survival probability in future time
steps. We introduce a method for converting the maximization of multi-step
survival probability into a classical reinforcement learning problem. Using
this conversion, the reward function (negative temporal cost function) is
expressed as the log of the temporal survival probability. We show that the
objective function of the reinforcement learning in this sense is proportional
to the variational lower bound of the original problem. Finally, we empirically
demonstrate that the agent learns survival behavior by using the reward
function introduced in this paper.
| [
{
"version": "v1",
"created": "Sat, 18 Jun 2016 15:33:04 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Jul 2016 13:19:23 GMT"
}
] | 1,469,491,200,000 | [
[
"Yoshida",
"Naoto",
""
]
] |
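The conversion sketched in the abstract above rests on a simple identity plus Jensen's inequality; in my own notation (not necessarily the paper's), with $p_t$ the temporal survival probability at step $t$:
\[
\log \Pr(\text{alive at steps } 1{:}T)
  \;=\; \log \prod_{t=1}^{T} p_t
  \;=\; \sum_{t=1}^{T} \log p_t ,
\qquad\text{so set } r_t := \log p_t ,
\]
\[
\log \mathbb{E}_{\pi}\!\Big[\prod_{t=1}^{T} p_t\Big]
  \;\ge\; \mathbb{E}_{\pi}\!\Big[\sum_{t=1}^{T} \log p_t\Big]
  \;=\; \mathbb{E}_{\pi}\!\Big[\sum_{t=1}^{T} r_t\Big],
\]
i.e. the ordinary undiscounted return under this reward lower-bounds the log of the multi-step survival probability.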
1606.06355 | Xiao Li | Xiao Li and Calin Belta | A Hierarchical Reinforcement Learning Method for Persistent
Time-Sensitive Tasks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning has been applied to many interesting problems such as
the famous TD-gammon and the inverted helicopter flight. However, little effort
has been put into developing methods to learn policies for complex persistent
tasks and tasks that are time-sensitive. In this paper, we take a step towards
solving this problem by using signal temporal logic (STL) as task
specification, and taking advantage of the temporal abstraction feature that
the options framework provide. We show via simulation that a relatively easy to
implement algorithm that combines STL and options can learn a satisfactory
policy with a small number of training cases.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2016 22:43:29 GMT"
}
] | 1,466,553,600,000 | [
[
"Li",
"Xiao",
""
],
[
"Belta",
"Calin",
""
]
] |
1606.07233 | Morten Goodwin Dr. | Per-Arne Andersen, Christian Kr{\aa}kevik, Morten Goodwin, Anis Yazidi | Adaptive Task Assignment in Online Learning Environments | 6th International Conference on Web Intelligence | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing popularity of online learning, intelligent tutoring
systems are regaining increased attention. In this paper, we introduce adaptive
algorithms for the personalized assignment of learning tasks to students so as
to improve their performance in online learning environments. As the main
contribution of this paper, we propose a novel Skill-Based Task Selector (SBTS) algorithm
which is able to approximate a student's skill level based on his performance
and consequently suggest adequate assignments. The SBTS is inspired by the
class of multi-armed bandit algorithms. However, in contrast to standard
multi-armed bandit approaches, the SBTS aims at acquiring two criteria related
to student learning, namely: which topics should the student work on, and what
level of difficulty should the task be. The SBTS centers on innovative reward
and punishment schemes in a task and skill matrix based on the student
behaviour.
To verify the algorithm, the complex student behaviour is modelled using a
neighbour node selection approach based on empirical estimations of a student's
learning curve. The algorithm is evaluated with a practical scenario from a
basic java programming course. The SBTS is able to quickly and accurately adapt
to the composite student competency --- even with a multitude of student
models.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2016 09:09:49 GMT"
}
] | 1,466,726,400,000 | [
[
"Andersen",
"Per-Arne",
""
],
[
"Kråkevik",
"Christian",
""
],
[
"Goodwin",
"Morten",
""
],
[
"Yazidi",
"Anis",
""
]
] |
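To make the bandit analogy in the abstract above concrete, here is a deliberately simplified epsilon-greedy selector over (topic, difficulty) arms. The topics, the reward shaping and the simulated student are invented for illustration and do not reproduce the SBTS's actual reward/punishment scheme on its task-and-skill matrix.

import random

topics = ["loops", "arrays", "recursion"]          # hypothetical course topics
levels = [1, 2, 3]                                 # difficulty levels
arms = [(t, d) for t in topics for d in levels]
value = {arm: 0.0 for arm in arms}
count = {arm: 0 for arm in arms}

def select_task(eps=0.1):
    """Epsilon-greedy choice over (topic, difficulty) arms."""
    if random.random() < eps:
        return random.choice(arms)                 # explore
    return max(arms, key=lambda a: value[a])       # exploit

def update(arm, solved):
    """Incremental-mean value update with a toy reward shaping."""
    reward = arm[1] if solved else -1.0 / arm[1]   # harder solved tasks pay more
    count[arm] += 1
    value[arm] += (reward - value[arm]) / count[arm]

for _ in range(200):                               # toy interaction loop
    task = select_task()
    solved = random.random() < 1.0 / task[1]       # crude stand-in student model
    update(task, solved)

print(max(arms, key=lambda a: value[a]))           # currently preferred (topic, difficulty)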
1606.07860 | Carl Schultz | Przemys{\l}aw Andrzej Wa{\l}\k{e}ga, Carl Schultz, Mehul Bhatt | Non-Monotonic Spatial Reasoning with Answer Set Programming Modulo
Theories | 22 pages, 6 figures, Under consideration for publication in TPLP | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The systematic modelling of dynamic spatial systems is a key requirement in a
wide range of application areas such as commonsense cognitive robotics,
computer-aided architecture design, and dynamic geographic information systems.
We present ASPMT(QS), a novel approach and fully-implemented prototype for
non-monotonic spatial reasoning (a crucial requirement within dynamic spatial
systems) based on Answer Set Programming Modulo Theories (ASPMT).
ASPMT(QS) consists of a (qualitative) spatial representation module (QS) and
a method for turning tight ASPMT instances into Satisfiability Modulo Theories
(SMT) instances in order to compute stable models by means of SMT solvers. We
formalise and implement concepts of default spatial reasoning and spatial frame
axioms. Spatial reasoning is performed by encoding spatial relations as systems
of polynomial constraints, and solving via SMT with the theory of real
nonlinear arithmetic. We empirically evaluate ASPMT(QS) in comparison with
other contemporary spatial reasoning systems both within and outside the
context of logic programming. ASPMT(QS) is currently the only existing system
that is capable of reasoning about indirect spatial effects (i.e., addressing
the ramification problem), and integrating geometric and qualitative spatial
information within a non-monotonic spatial reasoning context.
This paper is under consideration for publication in TPLP.
| [
{
"version": "v1",
"created": "Sat, 25 Jun 2016 01:02:30 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jun 2016 18:21:10 GMT"
}
] | 1,467,158,400,000 | [
[
"Wałęga",
"Przemysław Andrzej",
""
],
[
"Schultz",
"Carl",
""
],
[
"Bhatt",
"Mehul",
""
]
] |
1606.08109 | Alex Ushveridze | Alex Ushveridze | Can Turing machine be curious about its Turing test results? Three
informal lectures on physics of intelligence | 79 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What is the nature of curiosity? Is there any scientific way to understand
the origin of this mysterious force that drives the behavior of even the
stupidest naturally intelligent systems and is completely absent in their
smartest artificial analogs? Can we build AI systems that could be curious
about something, systems that would have an intrinsic motivation to learn? Is
such a motivation quantifiable? Is it implementable? I will discuss this
problem from the standpoint of physics. The relationship between physics and
intelligence is a consequence of the fact that correctly predicted information
is nothing but an energy resource, and the process of thinking can be viewed as
a process of accumulating and spending this resource through the acts of
perception and, respectively, decision making. The natural motivation of any
autonomous system to keep this accumulation/spending balance as high as
possible allows one to treat the problem of describing the dynamics of thinking
processes as a resource optimization problem. Here I will propose and discuss a
simple theoretical model of such an autonomous system which I call the
Autonomous Turing Machine (ATM). The potential attractiveness of ATM lies in
the fact that it is the model of a self-propelled AI for which the only
available energy resource is the information itself. For ATM, the problem of
optimal thinking, learning, and decision-making becomes conceptually simple and
mathematically well tractable. This circumstance makes the ATM an ideal
playground for studying the dynamics of intelligent behavior and allows one to
quantify many seemingly unquantifiable features of genuine intelligence.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2016 01:53:02 GMT"
}
] | 1,467,072,000,000 | [
[
"Ushveridze",
"Alex",
""
]
] |
1606.08514 | Sanjit Seshia | Sanjit A. Seshia, Dorsa Sadigh, and S. Shankar Sastry | Towards Verified Artificial Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2016 23:51:04 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Jul 2016 06:27:03 GMT"
},
{
"version": "v3",
"created": "Sat, 21 Oct 2017 09:50:36 GMT"
},
{
"version": "v4",
"created": "Thu, 23 Jul 2020 17:33:59 GMT"
}
] | 1,595,548,800,000 | [
[
"Seshia",
"Sanjit A.",
""
],
[
"Sadigh",
"Dorsa",
""
],
[
"Sastry",
"S. Shankar",
""
]
] |
1606.08896 | Joohyung Lee | Joohyung Lee and Yi Wang | On the Semantic Relationship between Probabilistic Soft Logic and Markov
Logic | In Working Notes of the 6th International Workshop on Statistical
Relational AI | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markov Logic Networks (MLN) and Probabilistic Soft Logic (PSL) are widely
applied formalisms in Statistical Relational Learning, an emerging area in
Artificial Intelligence that is concerned with combining logical and
statistical AI. Despite their resemblance, the relationship has not been
formally stated. In this paper, we describe the precise semantic relationship
between them from a logical perspective. This is facilitated by first extending
fuzzy logic to allow weights, which can be also viewed as a generalization of
PSL, and then relate that generalization to MLN. We observe that the
relationship between PSL and MLN is analogous to the known relationship between
fuzzy logic and Boolean logic, and furthermore the weight scheme of PSL is
essentially a generalization of the weight scheme of MLN for the many-valued
setting.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2016 21:43:19 GMT"
}
] | 1,467,244,800,000 | [
[
"Lee",
"Joohyung",
""
],
[
"Wang",
"Yi",
""
]
] |
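For readers unfamiliar with the two formalisms compared above, their standard textbook densities are as follows (notation mine, and possibly differing from the paper's):
\[
\text{MLN:}\qquad
P(X{=}x) \;=\; \frac{1}{Z}\,\exp\!\Big(\sum_{i} w_i\, n_i(x)\Big),
\]
where $n_i(x)$ counts the true groundings of formula $F_i$ in the world $x$, and
\[
\text{PSL:}\qquad
f(I) \;=\; \frac{1}{Z}\,\exp\!\Big(-\sum_{j} w_j\, \big(d_j(I)\big)^{p_j}\Big),
\qquad p_j \in \{1,2\},
\]
where $I \in [0,1]^n$ is a continuous (soft) interpretation and $d_j(I)$ is rule $j$'s distance to satisfaction under the Lukasiewicz relaxation.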
1606.08906 | Aleksander Lodwich | Aleksander Lodwich | Exploring high-level Perspectives on Self-Configuration Capabilities of
Systems | 46 pages, 62 figures | null | 10.13140/RG.2.1.2945.6885 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimization of product performance repeatedly introduces the need to make
products adaptive in a more general sense. This more general idea is often
captured under the term 'self-configuration'. Despite the importance of such
capability, research work on this feature appears isolated by technical
domains. It is not easy to tell quickly whether the approaches chosen in
different technological domains introduce new ideas or whether the differences
just reflect domain idiosyncrasies. For the sake of easy identification of key
differences between systems with self-configuring capabilities, I will explore
higher level concepts for understanding self-configuration, such as the
{\Omega}-units, in order to provide theoretical instruments for connecting
different areas of technology and research.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2016 22:36:38 GMT"
}
] | 1,467,244,800,000 | [
[
"Lodwich",
"Aleksander",
""
]
] |
1606.08962 | Jagannath Roy | Jagannath Roy, Kajal Chatterjee, Abhirup Bandhopadhyay, Samarjit Kar | Evaluation and selection of Medical Tourism sites: A rough AHP based
MABAC approach | 25 pages | null | 10.1111/exsy.12232 | 14450977 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel multiple criteria decision making (MCDM) methodology
is presented for assessing and prioritizing medical tourism destinations in
uncertain environment. A systematic evaluation and assessment method is
proposed by integrating rough number based AHP (Analytic Hierarchy Process) and
rough number based MABAC (Multi-Attributive Border Approximation area
Comparison). Rough numbers are used to aggregate individual judgments and
preferences to deal with vagueness in decision making due to limited data.
Rough AHP analyzes the relative importance of criteria based on their
preferences given by experts. Rough MABAC evaluates the alternative sites based
on the criteria weights. The proposed methodology is explained through a case
study considering different cities for healthcare service in India. The
validity of the obtained ranking for the given decision making problem is
established by testing criteria proposed by Wang and Triantaphyllou (2008)
along with further analysis and discussion.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2016 06:00:32 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Aug 2016 07:07:30 GMT"
}
] | 1,501,200,000,000 | [
[
"Roy",
"Jagannath",
""
],
[
"Chatterjee",
"Kajal",
""
],
[
"Bandhopadhyay",
"Abhirup",
""
],
[
"Kar",
"Samarjit",
""
]
] |
1606.08965 | Jagannath Roy | Jagannath Roy, Krishnendu Adhikary, Samarjit Kar | Credibilistic TOPSIS Model for Evaluation and Selection of Municipal
Solid Waste Disposal Methods | null | null | 10.1007/978-981-13-0215-2_17 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Municipal solid waste management (MSWM) is a challenging issue of urban
development in developing countries. Each country, having a different
socio-economic-environmental background, might not accept a particular disposal
method as the optimal choice. Selection of a suitable disposal method in MSWM,
under vague and imprecise information, can be considered a multi-criteria
decision making (MCDM) problem. In the present paper, TOPSIS (Technique for
Order Preference by Similarity to Ideal Solution) methodology is extended based
on credibility theory for evaluating the performances of MSW disposal methods
under some criteria fixed by experts. The proposed model helps decision makers
to choose a preferable alternative for their municipal area. A sensitivity
analysis by our proposed model confirms this fact.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2016 06:13:22 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jun 2016 16:54:48 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Aug 2016 18:44:35 GMT"
}
] | 1,525,651,200,000 | [
[
"Roy",
"Jagannath",
""
],
[
"Adhikary",
"Krishnendu",
""
],
[
"Kar",
"Samarjit",
""
]
] |
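As background for the extension described above, here is a sketch of the underlying crisp TOPSIS procedure only; the credibilistic, fuzzy-input version in the paper is not reproduced, and the example numbers and criteria are invented.

import numpy as np

def topsis(X, weights, benefit):
    """Classical crisp TOPSIS closeness coefficients (higher is better)."""
    X = np.asarray(X, dtype=float)
    R = X / np.linalg.norm(X, axis=0)                 # vector normalisation per criterion
    V = weights * R                                   # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)        # distance to ideal solution
    d_minus = np.linalg.norm(V - anti, axis=1)        # distance to anti-ideal solution
    return d_minus / (d_plus + d_minus)

# Three hypothetical disposal methods scored on cost (lower is better) and two benefit criteria
cc = topsis([[120, 7, 0.8], [150, 9, 0.6], [100, 6, 0.7]],
            weights=np.array([0.40, 0.35, 0.25]),
            benefit=np.array([False, True, True]))
print(np.argsort(-cc))                                # ranking, best alternative first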
1606.09140 | Robin Hirsch | Robin Hirsch, Marcel Jackson and Tomasz Kowalski | Algebraic foundations for qualitative calculi and networks | 22 pages | Theoretical Computer Science 768 (2019) 99-116 | 10.1016/j.tcs.2019.02.033 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A qualitative representation $\phi$ is like an ordinary representation of a
relation algebra, but instead of requiring $(a; b)^\phi = a^\phi | b^\phi$, as
we do for ordinary representations, we only require that $c^\phi\supseteq
a^\phi | b^\phi \iff c\geq a ; b$, for each $c$ in the algebra. A constraint
network is qualitatively satisfiable if its nodes can be mapped to elements of
a qualitative representation, preserving the constraints. If a constraint
network is satisfiable then it is clearly qualitatively satisfiable, but the
converse can fail. However, for a wide range of relation algebras including the
point algebra, the Allen Interval Algebra, RCC8 and many others, a network is
satisfiable if and only if it is qualitatively satisfiable.
Unlike ordinary composition, the weak composition arising from qualitative
representations need not be associative, so we can generalise by considering
network satisfaction problems over non-associative algebras. We prove that
computationally, qualitative representations have many advantages over ordinary
representations: whereas many finite relation algebras have only infinite
representations, every finite qualitatively representable algebra has a finite
qualitative representation; the representability problem for (the atom
structures of) finite non-associative algebras is NP-complete; the network
satisfaction problem over a finite qualitatively representable algebra is
always in NP; the validity of equations over qualitative representations is
co-NP-complete. On the other hand we prove that there is no finite
axiomatisation of the class of qualitatively representable algebras.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2016 15:00:48 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Jul 2016 11:44:51 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Jun 2017 16:33:39 GMT"
}
] | 1,655,942,400,000 | [
[
"Hirsch",
"Robin",
""
],
[
"Jackson",
"Marcel",
""
],
[
"Kowalski",
"Tomasz",
""
]
] |
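A concrete instance of the condition quoted in the abstract above can be given in the point algebra over $(\mathbb{Q},<)$ with atoms $\{<,=,>\}$ (my own worked example, not taken from the paper). The compositions
\[
({<})\,;({<}) = \{<\}, \qquad ({<})\,;({>}) = \{<,=,>\},
\]
hold, and with $\phi$ interpreting the atoms as the usual order relations on $\mathbb{Q}$ we have
\[
\big(\{<,=,>\}\big)^{\phi} = \mathbb{Q}\times\mathbb{Q} \;\supseteq\; ({<})^{\phi} \mid ({>})^{\phi}.
\]
In particular, taking $a={<}$, $b={>}$ and $c=\{<,=,>\}$, both sides of the equivalence $c^\phi\supseteq a^\phi \mid b^\phi \iff c\geq a\,;b$ hold, while for $c=\{<\}$ both sides fail.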
1606.09577 | Thomas Vacek Thomas Vacek | Thomas Vacek | Ordering as privileged information | 10 pages, 1 table, 2 page appendix giving proofs | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose to accelerate the rate of convergence of the pattern recognition
task by directly minimizing the variance diameters of certain hypothesis
spaces, which are critical quantities in fast-convergence results. We show that
the variance diameters can be controlled by dividing hypothesis spaces into
metric balls based on a new order metric. This order metric can be minimized as
an ordinal regression problem, leading to a LUPI (Learning Using Privileged
Information) application where we take the privileged information as some
desired ordering, and construct a faster-converging hypothesis space by
empirically restricting some larger hypothesis space according to that
ordering. We give a risk analysis of the approach. We discuss the difficulties
with model selection and give an innovative technique for selecting multiple
model parameters. Finally, we provide some data experiments.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2016 17:06:30 GMT"
}
] | 1,467,331,200,000 | [
[
"Vacek",
"Thomas",
""
]
] |
1606.09594 | Ankit Anand | Ankit Anand, Aditya Grover, Mausam, Parag Singla | Contextual Symmetries in Probabilistic Graphical Models | 9 Pages, 4 figures | IJCAI, 2016 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important approach for efficient inference in probabilistic graphical
models exploits symmetries among objects in the domain. Symmetric variables
(states) are collapsed into meta-variables (meta-states) and inference
algorithms are run over the lifted graphical model instead of the flat one. Our
paper extends existing definitions of symmetry by introducing the novel notion
of contextual symmetry. Two states that are not globally symmetric, can be
contextually symmetric under some specific assignment to a subset of variables,
referred to as the context variables. Contextual symmetry subsumes previous
symmetry definitions and can represent a large class of symmetries not
representable earlier. We show how to compute contextual symmetries by reducing
it to the problem of graph isomorphism. We extend previous work on exploiting
symmetries in the MCMC framework to the case of contextual symmetries. Our
experiments on several domains of interest demonstrate that exploiting
contextual symmetries can result in significant computational gains.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2016 18:03:42 GMT"
}
] | 1,467,331,200,000 | [
[
"Anand",
"Ankit",
""
],
[
"Grover",
"Aditya",
""
],
[
"Mausam",
"",
""
],
[
"Singla",
"Parag",
""
]
] |
1606.09637 | David Smith | David Smith, Parag Singla, Vibhav Gogate | Lifted Region-Based Belief Propagation | Sixth International Workshop on Statistical Relational AI | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the intractable nature of exact lifted inference, research has
recently focused on the discovery of accurate and efficient approximate
inference algorithms in Statistical Relational Models (SRMs), such as Lifted
First-Order Belief Propagation. FOBP simulates propositional factor graph
belief propagation without constructing the ground factor graph by identifying
and lifting over redundant message computations. In this work, we propose a
generalization of FOBP called Lifted Generalized Belief Propagation, in which
both the region structure and the message structure can be lifted. This
approach allows more of the inference to be performed intra-region (in the
exact inference step of BP), thereby allowing simulation of propagation on a
graph structure with larger region scopes and fewer edges, while still
maintaining tractability. We demonstrate that the resulting algorithm converges
in fewer iterations to more accurate results on a variety of SRMs.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2016 19:50:33 GMT"
}
] | 1,467,331,200,000 | [
[
"Smith",
"David",
""
],
[
"Singla",
"Parag",
""
],
[
"Gogate",
"Vibhav",
""
]
] |
1607.00061 | I. Dan Melamed | I. Dan Melamed and Nobal B. Niraula | Towards A Virtual Assistant That Can Be Taught New Tasks In Any Domain
By Its End-Users | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The challenge stated in the title can be divided into two main problems. The
first problem is to reliably mimic the way that users interact with user
interfaces. The second problem is to build an instructible agent, i.e. one that
can be taught to execute tasks expressed as previously unseen natural language
commands. This paper proposes a solution to the second problem, a system we
call Helpa. End-users can teach Helpa arbitrary new tasks whose level of
complexity is similar to the tasks available from today's most popular virtual
assistants. Teaching Helpa does not involve any programming. Instead, users
teach Helpa by providing just one example of a command paired with a
demonstration of how to execute that command. Helpa does not rely on any
pre-existing domain-specific knowledge. It is therefore completely
domain-independent. Our usability study showed that end-users can teach Helpa
many new tasks in less than a minute each, often much less.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2016 22:04:26 GMT"
}
] | 1,467,590,400,000 | [
[
"Melamed",
"I. Dan",
""
],
[
"Niraula",
"Nobal B.",
""
]
] |
1607.00234 | Florentin Smarandache | Florentin Smarandache | Neutrosophic Overset, Neutrosophic Underset, and Neutrosophic Offset.
Similarly for Neutrosophic Over-/Under-/Off- Logic, Probability, and
Statistics | 170 pages | Pons Editions, Bruxelles, 2016 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neutrosophic Over-/Under-/Off-Set and -Logic were defined by the author in
1995 and published for the first time in 2007. We extended the neutrosophic set
respectively to Neutrosophic Overset {when some neutrosophic component is over
1}, Neutrosophic Underset {when some neutrosophic component is below 0}, and to
Neutrosophic Offset {when some neutrosophic components are off the interval [0,
1], i.e. some neutrosophic component over 1 and other neutrosophic component
below 0}. This is no surprise with respect to the classical fuzzy set/logic,
intuitionistic fuzzy set/logic, or classical/imprecise probability, where the
values are not allowed outside the interval [0, 1], since our real world has
numerous examples and applications of over-/under-/off-neutrosophic components.
For example, a person working overtime deserves a membership degree over 1, while
a person producing more damage than benefit to a company deserves a membership
below 0. Then, similarly, the Neutrosophic Logic/Measure/Probability/Statistics
etc. were extended to respectively Neutrosophic Over-/Under-/Off-Logic,
-Measure, -Probability, -Statistics etc. [Smarandache, 2007].
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2016 02:17:59 GMT"
}
] | 1,467,590,400,000 | [
[
"Smarandache",
"Florentin",
""
]
] |
1607.00428 | Haley Garrison | Haley Garrison and Sonia Chernova | Situated Structure Learning of a Bayesian Logic Network for Commonsense
Reasoning | International Joint Conference on Artificial Intelligence (IJCAI),
StarAI workshop | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper details the implementation of an algorithm for automatically
generating a high-level knowledge network to perform commonsense reasoning,
specifically with the application of robotic task repair. The network is
represented using a Bayesian Logic Network (BLN) (Jain, Waldherr, and Beetz
2009), which combines a set of directed relations between abstract concepts,
including IsA, AtLocation, HasProperty, and UsedFor, with a corresponding
probability distribution that models the uncertainty inherent in these
relations. Inference over this network enables reasoning over the abstract
concepts in order to perform appropriate object substitution or to locate
missing objects in the robot's environment. The structure of the network is
generated by combining information from two existing knowledge sources:
ConceptNet (Speer and Havasi 2012), and WordNet (Miller 1995). This is done in
a "situated" manner by only including information relevant a given context.
Results show that the generated network is able to accurately predict object
categories, locations, properties, and affordances in three different household
scenarios.
| [
{
"version": "v1",
"created": "Fri, 1 Jul 2016 22:52:57 GMT"
}
] | 1,467,676,800,000 | [
[
"Garrison",
"Haley",
""
],
[
"Chernova",
"Sonia",
""
]
] |
1607.00656 | Gavin Rens | Gavin Rens and Deshendran Moodley | A Hybrid POMDP-BDI Agent Architecture with Online Stochastic Planning
and Plan Caching | 26 pages, 3 figures, unpublished version | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article presents an agent architecture for controlling an autonomous
agent in stochastic environments. The architecture combines the partially
observable Markov decision process (POMDP) model with the
belief-desire-intention (BDI) framework. The Hybrid POMDP-BDI agent
architecture takes the best features from the two approaches, that is, the
online generation of reward-maximizing courses of action from POMDP theory, and
sophisticated multiple goal management from BDI theory. We introduce the
advances made since the introduction of the basic architecture, including (i)
the ability to pursue multiple goals simultaneously and (ii) a plan library for
storing pre-written plans and for storing recently generated plans for future
reuse. A version of the architecture without the plan library is implemented
and is evaluated using simulations. The results of the simulation experiments
indicate that the approach is feasible.
| [
{
"version": "v1",
"created": "Sun, 3 Jul 2016 17:11:52 GMT"
}
] | 1,467,676,800,000 | [
[
"Rens",
"Gavin",
""
],
[
"Moodley",
"Deshendran",
""
]
] |
1607.00715 | Sebastian Sardina | Davide Aversa and Sebastian Sardina and Stavros Vassos | Path planning with Inventory-driven Jump-Point-Search | null | In Proceedings of the AAAI Conference on Artificial Intelligence
and Interactive Digital Entertainment (AIIDE), pp. 2-8, 2015 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many navigational domains the traversability of cells is conditioned on
the path taken. This is often the case in video-games, in which a character may
need to acquire a certain object (i.e., a key or a flying suit) to be able to
traverse specific locations (e.g., doors or high walls). In order for
non-player characters to handle such scenarios we present invJPS, an
"inventory-driven" pathfinding approach based on the highly successful
grid-based Jump-Point-Search (JPS) algorithm. We show, formally and
experimentally, that the invJPS preserves JPS's optimality guarantees and its
symmetry breaking advantages in inventory-based variants of game maps.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2016 01:13:32 GMT"
}
] | 1,467,676,800,000 | [
[
"Aversa",
"Davide",
""
],
[
"Sardina",
"Sebastian",
""
],
[
"Vassos",
"Stavros",
""
]
] |
1607.00791 | Marcin Pietron | M. Pietron and M. Wielgosz and K. Wiatr | Formal analysis of HTM Spatial Pooler performance under predefined
operation conditions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a mathematical formalism for the Spatial Pooler (SP) of Hierarchical
Temporal Memory (HTM), with special consideration for its hardware
implementation. The performance of an HTM network and its ability to learn and
adjust to a problem at hand are governed by a large set of parameters. Most of
the parameters are codependent, which makes creating efficient HTM-based
solutions challenging. It requires profound knowledge of the settings and their
impact on the performance of the system. Consequently, this paper introduces a
set of formulas which facilitate the design process by enhancing the tedious
trial-and-error method with a tool for choosing initial parameters that enable
quick learning convergence. This is especially important in hardware
implementations, which are constrained by the limited resources of a platform.
The authors focus especially on the formalism of the Spatial Pooler and derive
formulas for the quality and convergence of the model. These may be considered
as recipes for designing efficient HTM models for given input patterns.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2016 09:20:29 GMT"
}
] | 1,467,676,800,000 | [
[
"Pietron",
"M.",
""
],
[
"Wielgosz",
"M.",
""
],
[
"Wiatr",
"K.",
""
]
] |
1607.00819 | Sylwia Polberg | Sylwia Polberg | Understanding the Abstract Dialectical Framework (Preliminary Report) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among the most general structures extending the framework by Dung are the
abstract dialectical frameworks (ADFs). They come equipped with various types
of semantics, with the most prominent - the labeling-based one - analyzed in
the context of computational complexity, signatures, instantiations and
software support. This makes the abstract dialectical frameworks valuable tools
for argumentation. However, there are fewer results available concerning the
relation between the ADFs and other argumentation frameworks. In this paper we
would like to address this issue by introducing a number of translations from
various formalisms into ADFs. The results of our study show the similarities
and differences between them, thus promoting the use and understanding of ADFs.
Moreover, our analysis also proves their capability to model many of the
existing frameworks, including those that go beyond the attack relation.
Finally, translations allow other structures to benefit from the research on
ADFs in general and from the existing software in particular.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2016 10:52:57 GMT"
}
] | 1,467,676,800,000 | [
[
"Polberg",
"Sylwia",
""
]
] |
1607.00869 | Vinu E V | Vinu E.V, Tahani Alsubait, P. Sreenivasa Kumar | Modeling of Item-Difficulty for Ontology-based MCQs | Under review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple choice questions (MCQs) that can be generated from a domain ontology
can significantly reduce human effort & time required for authoring &
administering assessments in an e-Learning environment. Even though here are
various methods for generating MCQs from ontologies, methods for determining
the difficulty-levels of such MCQs are less explored. In this paper, we study
various aspects and factors that are involved in determining the
difficulty-score of an MCQ, and propose an ontology-based model for the
prediction. This model characterizes the difficulty values associated with the
stem and choice set of the MCQs, and describes a measure which combines both
the scores. Furthermore, the notion of assigning difficulty-scores based on
the skill level of the test taker is utilized for predicting the difficulty-score
of a stem. We studied the effectiveness of the predicted difficulty-scores with
the help of a psychometric model from the Item Response Theory, by involving
real students and domain experts. Our results show that the predicted
difficulty-levels of the MCQs have a high correlation with their actual
difficulty-levels.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2016 13:05:55 GMT"
}
] | 1,467,676,800,000 | [
[
"E.",
"Vinu",
"V"
],
[
"Alsubait",
"Tahani",
""
],
[
"Kumar",
"P. Sreenivasa",
""
]
] |
1607.01254 | Jagannath Roy | Jagannath Roy, Ananta Ranjan, Animesh Debnath, Samarjit Kar | An extended MABAC for multi-attribute decision making using trapezoidal
interval type-2 fuzzy numbers | 14 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we attempt to extend Multi Attributive Border Approximation
area Comparison (MABAC) approach for multi-attribute decision making (MADM)
problems based on interval type-2 fuzzy sets (IT2FSs). As a special case of
IT2FSs, interval type-2 trapezoidal fuzzy numbers (IT2TrFNs) are adopted here to deal
with uncertainties present in many practical evaluation and selection problems.
A systematic description of MABAC based on IT2TrFNs is presented in the current
study. The validity and feasibility of the proposed method are illustrated by a
practical example of selecting the most suitable candidate for a software
company which is planning to hire a system analysis engineer based on a few
attributes. Finally, a comparison with two other existing MADM methods is
described.
| [
{
"version": "v1",
"created": "Tue, 5 Jul 2016 14:05:29 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Jul 2016 03:04:48 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Nov 2016 14:53:36 GMT"
},
{
"version": "v4",
"created": "Fri, 2 Dec 2016 08:50:29 GMT"
}
] | 1,480,896,000,000 | [
[
"Roy",
"Jagannath",
""
],
[
"Ranjan",
"Ananta",
""
],
[
"Debnath",
"Animesh",
""
],
[
"Kar",
"Samarjit",
""
]
] |
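For context, the crisp MABAC procedure that the paper lifts to IT2TrFNs proceeds as sketched below (standard method only; the numbers, weights and criteria in the example are invented, and none of the interval type-2 machinery is shown).

import numpy as np

def mabac(X, weights, benefit):
    """Classical (crisp) MABAC scores; higher means a better alternative."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    R = np.where(benefit, (X - lo) / span, (hi - X) / span)   # 1. normalise the decision matrix
    V = weights * (R + 1.0)                                   # 2. weighted matrix
    G = V.prod(axis=0) ** (1.0 / X.shape[0])                  # 3. border approximation area
    Q = V - G                                                 # 4. distances from the border area
    return Q.sum(axis=1)                                      # 5. criterion functions of alternatives

scores = mabac([[7, 3, 200], [9, 5, 180], [6, 4, 150]],
               weights=np.array([0.5, 0.3, 0.2]),
               benefit=np.array([True, True, False]))
print(np.argsort(-scores))                                    # ranking, best candidate first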
1607.01729 | Vikas Shivashankar | Vikas Shivashankar, Ron Alford, Mark Roberts and David W. Aha | Cost-Optimal Algorithms for Planning with Procedural Control Knowledge | To appear in the Proc. of ECAI 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is an impressive body of work on developing heuristics and other
reasoning algorithms to guide search in optimal and anytime planning algorithms
for classical planning. However, very little effort has been directed towards
developing analogous techniques to guide search towards high-quality solutions
in hierarchical planning formalisms like HTN planning, which allows using
additional domain-specific procedural control knowledge. In lieu of such
techniques, this control knowledge often needs to provide the necessary search
guidance to the planning algorithm, which imposes a substantial burden on the
domain author and can yield brittle or error-prone domain models. We address
this gap by extending recent work on a new hierarchical goal-based planning
formalism called Hierarchical Goal Network (HGN) Planning to develop the
Hierarchically-Optimal Goal Decomposition Planner (HOpGDP), an HGN planning
algorithm that computes hierarchically-optimal plans. HOpGDP is guided by
$h_{HL}$, a new HGN planning heuristic that extends existing admissible
landmark-based heuristics from classical planning to compute admissible cost
estimates for HGN planning problems. Our experimental evaluation across three
benchmark planning domains shows that HOpGDP compares favorably to both optimal
classical planners due to its ability to use domain-specific procedural
knowledge, and a blind-search version of HOpGDP due to the search guidance
provided by $h_{HL}$.
| [
{
"version": "v1",
"created": "Wed, 6 Jul 2016 18:02:33 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2016 02:07:22 GMT"
}
] | 1,467,936,000,000 | [
[
"Shivashankar",
"Vikas",
""
],
[
"Alford",
"Ron",
""
],
[
"Roberts",
"Mark",
""
],
[
"Aha",
"David W.",
""
]
] |
1607.02171 | Eric Nunes | Eric Nunes, Paulo Shakarian, Gerardo I. Simari, Andrew Ruef | Argumentation Models for Cyber Attribution | 8 pages paper to be presented at International Symposium on
Foundations of Open Source Intelligence and Security Informatics (FOSINT-SI)
2016 In conjunction with ASONAM 2016 San Francisco, CA, USA, August 19-20,
2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major challenge in cyber-threat analysis is combining information from
different sources to find the person or the group responsible for the
cyber-attack. It is one of the most important technical and policy challenges
in cyber-security. The lack of ground truth for an individual responsible for
an attack has limited previous studies. In this paper, we take a first step
towards overcoming this limitation by building a dataset from the
capture-the-flag event held at DEFCON, and propose an argumentation model based
on a formal reasoning framework called DeLP (Defeasible Logic Programming)
designed to aid an analyst in attributing a cyber-attack. We build models from
latent variables to reduce the search space of culprits (attackers), and show
that this reduction significantly improves the performance of
classification-based approaches from 37% to 62% in identifying the attacker.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2016 21:01:06 GMT"
}
] | 1,468,195,200,000 | [
[
"Nunes",
"Eric",
""
],
[
"Shakarian",
"Paulo",
""
],
[
"Simari",
"Gerardo I.",
""
],
[
"Ruef",
"Andrew",
""
]
] |
1607.03290 | Chih-Kuan Yeh | Chih-Kuan Yeh, Hsuan-Tien Lin | Automatic Bridge Bidding Using Deep Reinforcement Learning | 8 pages, 1 figure, 2016 ECAI accepted | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bridge is among the zero-sum games for which artificial intelligence has not
yet outperformed expert human players. The main difficulty lies in the bidding
phase of bridge, which requires cooperative decision making under partial
information. Existing artificial intelligence systems for bridge bidding rely
on and are thus restricted by human-designed bidding systems or features. In
this work, we propose a pioneering bridge bidding system without the aid of
human domain knowledge. The system is based on a novel deep reinforcement
learning model, which extracts sophisticated features and learns to bid
automatically based on raw card data. The model includes an
upper-confidence-bound algorithm and additional techniques to achieve a balance
between exploration and exploitation. Our experiments validate the promising
performance of our proposed model. In particular, the model advances from
having no knowledge about bidding to achieving superior performance when
compared with a champion-winning computer bridge program that implements a
human-designed bidding system.
| [
{
"version": "v1",
"created": "Tue, 12 Jul 2016 09:58:24 GMT"
}
] | 1,468,368,000,000 | [
[
"Yeh",
"Chih-Kuan",
""
],
[
"Lin",
"Hsuan-Tien",
""
]
] |
1607.03979 | Arshia Khaffaf | Mona Khaffaf and Arshia Khaffaf | Resource Planning For Rescue Operations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After an earthquake, disaster sites pose a multitude of health and safety
concerns. A rescue operation for people trapped in the ruins after an earthquake
requires a range of intelligent behaviors, including planning. Given a limited
number of available actions and regulations, the role of planning in rescue
operations is crucial. Fortunately, recent developments in automated planning by
the artificial intelligence community can help different organizations with this
crucial task. Due to the number of rules and regulations, we believe that a
rule-based system for planning can be helpful for this specific planning problem.
In this work, we use logic rules to represent rescue operations and related
regulations, together with a logic-based planner to solve this complicated
problem. Although this research is still at the prototyping and modeling stage,
it clearly shows that rule-based languages can be a good infrastructure for this
computational task. The
results of this research can be used by different organizations, such as
Iranian Red Crescent Society and International Institute of Seismology and
Earthquake Engineering (IISEE).
| [
{
"version": "v1",
"created": "Thu, 14 Jul 2016 02:21:14 GMT"
}
] | 1,468,540,800,000 | [
[
"Khaffaf",
"Mona",
""
],
[
"Khaffaf",
"Arshia",
""
]
] |
1607.04186 | Mathieu Acher | Mathieu Acher (DiverSe), Fran\c{c}ois Esnault (DiverSe) | Large-scale Analysis of Chess Games with Chess Engines: A Preliminary
Report | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The strength of chess engines together with the availability of numerous
chess games have attracted the attention of chess players, data scientists, and
researchers during the last decades. State-of-the-art engines now provide an
authoritative judgement that can be used in many applications like cheating
detection, intrinsic ratings computation, skill assessment, or the study of
human decision-making. A key issue for the research community is to gather a
large dataset of chess games together with the judgement of chess engines.
Unfortunately, the analysis of each move takes a lot of time. In this paper, we
report our effort to analyse almost 5 million chess games with a computing
grid. During summer 2015, we processed 270 million unique played positions
using the Stockfish engine at a rather high depth (20). We populated a
database of more than one terabyte of chess evaluations, representing an estimated time
of 50 years of computation on a single machine. Our effort is a first step
towards the replication of research results, the supply of open data and
procedures for exploring new directions, and the investigation of software
engineering/scalability issues when computing billions of moves.
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2016 08:37:43 GMT"
}
] | 1,468,540,800,000 | [
[
"Acher",
"Mathieu",
"",
"DiverSe"
],
[
"Esnault",
"François",
"",
"DiverSe"
]
] |
1607.04583 | Matthew Liberatore | Matthew J. Liberatore | A Counterexample to the Forward Recursion in Fuzzy Critical Path
Analysis Under Discrete Fuzzy Sets | 10 pages, 1 figure, 1 table, 22 references | International Journal of Fuzzy Logic Systems 6 (2016) 53-62 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fuzzy logic is an alternate approach for quantifying uncertainty relating to
activity duration. The fuzzy version of the backward recursion has been shown
to produce results that incorrectly amplify the level of uncertainty. However,
the fuzzy version of the forward recursion has been widely proposed as an
approach for determining the fuzzy set of critical path lengths. In this paper,
the direct application of the extension principle leads to a proposition that
must be satisfied in fuzzy critical path analysis. Using a counterexample, it is
demonstrated that the fuzzy forward recursion, when discrete fuzzy sets are used
to represent activity durations, produces results that are not consistent with
the theory presented. The problem is shown to be the application of the fuzzy
maximum. Several methods presented in the literature are described and shown to
provide results that are consistent with the extension principle.
| [
{
"version": "v1",
"created": "Mon, 9 May 2016 13:35:00 GMT"
}
] | 1,468,800,000,000 | [
[
"Liberatore",
"Matthew J.",
""
]
] |
1607.04809 | Michael Cochez | Michael Cochez, Stefan Decker, Eric Prud'hommeaux | Knowledge Representation on the Web revisited: Tools for Prototype Based
Ontologies | Related software available from
https://github.com/miselico/knowledgebase/ | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years RDF and OWL have become the most common knowledge
representation languages in use on the Web, propelled by the recommendation of
the W3C. In this paper we present a practical implementation of a different
kind of knowledge representation based on Prototypes. In detail, we present a
concrete syntax easily and effectively parsable by applications. We also
present extensible implementations of a prototype knowledge base, specifically
designed for storage of Prototypes. These implementations are written in Java
and can be extended by using the implementation as a library. Alternatively,
the software can be deployed as such. Further, results of benchmarks for both
local and web deployment are presented. This paper augments a research paper,
in which we describe the more theoretical aspects of our Prototype system.
| [
{
"version": "v1",
"created": "Sat, 16 Jul 2016 23:42:44 GMT"
}
] | 1,468,886,400,000 | [
[
"Cochez",
"Michael",
""
],
[
"Decker",
"Stefan",
""
],
[
"Prud'hommeaux",
"Eric",
""
]
] |
1607.05023 | Gabriella Panuccio | Gabriella Panuccio, Marianna Semprini, Lorenzo Natale, Michela
Chiappalone | Intelligent Biohybrid Neurotechnologies: Are They Really What They
Claim? | Number of pages: 15 Words in abstract: 49 Words in main text: 3265
Number of figures: 5 Number of references: 25 Keywords: artificial
intelligence, biohybrid system, closed-loop control, functional brain repair | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of intelligent biohybrid neurotechnologies for brain repair, new
fanciful terms are appearing in the scientific dictionary to define what has so
far been unimaginable. As the emerging neurotechnologies are becoming
increasingly polyhedral and sophisticated, should we talk about evolution and
rank the intelligence of these devices?
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2016 11:28:11 GMT"
}
] | 1,468,886,400,000 | [
[
"Panuccio",
"Gabriella",
""
],
[
"Semprini",
"Marianna",
""
],
[
"Natale",
"Lorenzo",
""
],
[
"Chiappalone",
"Michela",
""
]
] |
1607.05077 | Ionel Hosu | Ionel-Alexandru Hosu, Traian Rebedea | Playing Atari Games with Deep Reinforcement Learning and Human
Checkpoint Replay | 6 pages, 2 figures, EGPAI 2016 - Evaluating General Purpose AI,
workshop held in conjunction with ECAI 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a novel method for learning how to play the most
difficult Atari 2600 games from the Arcade Learning Environment using deep
reinforcement learning. The proposed method, human checkpoint replay, consists
in using checkpoints sampled from human gameplay as starting points for the
learning process. This is meant to compensate for the difficulties of current
exploration strategies, such as epsilon-greedy, to find successful control
policies in games with sparse rewards. Like other deep reinforcement learning
architectures, our model uses a convolutional neural network that receives only
raw pixel inputs to estimate the state value function. We tested our method on
Montezuma's Revenge and Private Eye, two of the most challenging games from the
Atari platform. The results we obtained show a substantial improvement compared
to previous learning approaches, as well as over a random player. We also
propose a method for training deep reinforcement learning agents using human
gameplay experience, which we call human experience replay.
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2016 13:55:54 GMT"
}
] | 1,468,886,400,000 | [
[
"Hosu",
"Ionel-Alexandru",
""
],
[
"Rebedea",
"Traian",
""
]
] |
1607.05810 | Emanuel Diamant | Emanuel Diamant | You want to survive the data deluge: Be careful, Computational
Intelligence will not serve you as a rescue boat | Oral presentation at the ICNSC 2016 Conference, Mexico City, April
2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are at the dawn of a new era, where advances in computer power, broadband
communication and digital sensor technologies have led to an unprecedented
flood of data inundating our surroundings. It is generally believed that means
such as Computational Intelligence will help us survive these tough times.
However, these hopes are unrealistically high. Computational Intelligence is a
surprising composition of two mutually exclusive and contradicting constituents
that could be coupled only if you disregard and neglect their controversies:
"Computational" implies reliance on data processing and "Intelligence" implies
reliance on information processing. Only those who are indifferent to
data-information discrepancy can believe that such a combination can be viable.
We do not believe in miracles, so we will try to share with you our
reservations.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 03:47:19 GMT"
}
] | 1,469,059,200,000 | [
[
"Diamant",
"Emanuel",
""
]
] |
1607.05845 | Uwe Aickelin | Jenna Reps, Zhaoyang Guo, Haoyue Zhu, Uwe Aickelin | Identifying Candidate Risk Factors for Prescription Drug Side Effects
using Causal Contrast Set Mining | Health Information Science (4th International Conference, HIS 2015,
Melbourne, Australia, May 28-30), pp. 45-55, Lecture Notes in Computer
Science, 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Big longitudinal observational databases present the opportunity to extract
new knowledge in a cost-effective manner. Unfortunately, the ability of these
databases to be used for causal inference is limited due to the passive way in
which the data are collected, resulting in various forms of bias. In this paper
we investigate a method that can overcome these limitations and determine
causal contrast set rules efficiently from big data. In particular, we present
a new methodology for the purpose of identifying risk factors that increase a
patient's likelihood of experiencing the known rare side effect of renal failure
after ingesting aminosalicylates. The results show that the methodology was
able to identify previously researched risk factors such as being prescribed
diuretics and highlighted that patients with a higher than average risk of
renal failure may be even more susceptible to experiencing it as a side effect
after ingesting aminosalicylates.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 07:42:52 GMT"
}
] | 1,469,059,200,000 | [
[
"Reps",
"Jenna",
""
],
[
"Guo",
"Zhaoyang",
""
],
[
"Zhu",
"Haoyue",
""
],
[
"Aickelin",
"Uwe",
""
]
] |
1607.05906 | Uwe Aickelin | Jenna M. Reps, Uwe Aickelin, Richard B. Hubbard | Refining adverse drug reaction signals by incorporating interaction
variables identified using emergent pattern mining | Computers in Biology and Medicine, 69 , pp. 61-70, 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: To develop a framework for identifying and incorporating candidate
confounding interaction terms into a regularised Cox regression analysis to
refine adverse drug reaction signals obtained via longitudinal observational
data. Methods: We considered six drug families that are commonly associated
with myocardial infarction in observational healthcare data, but where the
causal relationship ground truth is known (adverse drug reaction or not). We
applied emergent pattern mining to find itemsets of drugs and medical events
that are associated with the development of myocardial infarction. These are
the candidate confounding interaction terms. We then implemented a cohort study
design using regularised Cox regression that incorporated and accounted for the
candidate confounding interaction terms. Results: The methodology was able to
account for signals generated due to confounding, and a Cox regression with
elastic net regularisation correctly ranked the drug families known to be true
adverse drug reactions above those that are not.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 10:45:57 GMT"
}
] | 1,469,059,200,000 | [
[
"Reps",
"Jenna M.",
""
],
[
"Aickelin",
"Uwe",
""
],
[
"Hubbard",
"Richard B.",
""
]
] |
1607.05909 | Uwe Aickelin | Jiangang Ma, Le Sun, Hua Wang, Yanchun Zhang, Uwe Aickelin | Supervised Anomaly Detection in Uncertain Pseudoperiodic Data Streams | ACM Transactions on Internet Technology (TOIT), 16 (1 (4)), 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Uncertain data streams have been widely generated in many Web applications.
The uncertainty in data streams makes anomaly detection from sensor data
streams far more challenging. In this paper, we present a novel framework that
supports anomaly detection in uncertain data streams. The proposed framework
adopts an efficient uncertainty pre-processing procedure to identify and
eliminate uncertainties in data streams. Based on the corrected data streams,
we develop effective period pattern recognition and feature extraction
techniques to improve the computational efficiency. We use classification
methods for anomaly detection in the corrected data stream. We also empirically
show that the proposed approach shows a high accuracy of anomaly detection on a
number of real datasets.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 10:52:17 GMT"
}
] | 1,469,059,200,000 | [
[
"Ma",
"Jiangang",
""
],
[
"Sun",
"Le",
""
],
[
"Wang",
"Hua",
""
],
[
"Zhang",
"Yanchun",
""
],
[
"Aickelin",
"Uwe",
""
]
] |
1607.05913 | Uwe Aickelin | Polla Fattah, Uwe Aickelin and Christian Wagner | Optimising Rule-Based Classification in Temporal Data | ZANCO Journal of Pure and Applied Sciences, 28 (2), pp. 135-146,
2016, ISSN: 2412-3986 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study optimises manually derived rule-based expert system classification
of objects according to changes in their properties over time. One of the key
challenges that this study tries to address is how to classify objects that
exhibit changes in their behaviour over time, for example how to classify
companies' share price stability over a period of time or how to classify
students' preferences for subjects while they are progressing through school. A
specific case the paper considers is the strategy of players in public goods
games (as common in economics) across multiple consecutive games. Initial
classification starts from expert definitions specifying class allocation for
players based on aggregated attributes of the temporal data. Based on these
initial classifications, the optimisation process tries to find an improved
classifier which produces the best possible compact classes of objects
(players) for every time point in the temporal data. The compactness of the
classes is measured by a cost function based on internal cluster indices like
the Dunn Index, distance measures like Euclidean distance or statistically
derived measures like standard deviation. The paper discusses the approach in
the context of incorporating changing player strategies in the aforementioned
public good games, where common classification approaches so far do not
consider such changes in behaviour resulting from learning or in-game
experience. By using the proposed process for classifying temporal data and the
actual players' contribution during the games, we aim to produce a more refined
classification which in turn may inform the interpretation of public goods game
data.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 11:02:16 GMT"
}
] | 1,469,059,200,000 | [
[
"Fattah",
"Polla",
""
],
[
"Aickelin",
"Uwe",
""
],
[
"Wagner",
"Christian",
""
]
] |
1607.06186 | Uwe Aickelin | Javier Navarro, Christian Wagner, Uwe Aickelin | Applying Interval Type-2 Fuzzy Rule Based Classifiers Through a
Cluster-Based Class Representation | 2015 IEEE Symposium Series on Computational Intelligence, pp.
1816-1823, IEEE, 2015, ISBN: 978-1-4799-7560-0 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fuzzy Rule-Based Classification Systems (FRBCSs) have the potential to
provide so-called interpretable classifiers, i.e. classifiers which can be
introspected, understood, validated and augmented by human experts by relying
on fuzzy-set based rules. This paper builds on prior work for interval type-2
fuzzy set based FRBCs where the fuzzy sets and rules of the classifier are
generated using an initial clustering stage. By introducing Subtractive
Clustering in order to identify multiple cluster prototypes, the proposed
approach has the potential to deliver improved classification performance while
maintaining good interpretability, i.e. without resulting in an excessive
number of rules. The paper provides a detailed overview of the proposed FRBC
framework, followed by a series of exploratory experiments on both linearly and
non-linearly separable datasets, comparing results to existing rule-based and
SVM approaches. Overall, initial results indicate that the approach enables
comparable classification performance to non rule-based classifiers such as
SVM, while often achieving this with a very small number of rules.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 04:36:23 GMT"
}
] | 1,469,145,600,000 | [
[
"Navarro",
"Javier",
""
],
[
"Wagner",
"Christian",
""
],
[
"Aickelin",
"Uwe",
""
]
] |
1607.06187 | Uwe Aickelin | Javier Navarro, Christian Wagner, Uwe Aickelin, Lynsey Green and
Robert Ashford | Exploring Differences in Interpretation of Words Essential in Medical
Expert-Patient Communication | IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2016),
24-29 July 2016, Vancouver, Canada, 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of cancer treatment and surgery, quality of life assessment is
a crucial part of determining treatment success and viability. In order to
assess it, patient-completed questionnaires which employ words to capture
aspects of patients' well-being are the norm. As the results of these
questionnaires are often used to assess patient progress and to determine
future treatment options, it is important to establish that the words used are
interpreted in the same way by both patients and medical professionals. In this
paper, we capture and model patients' perceptions and associated uncertainty
about the words used to describe the level of their physical function in
the highly common (in Sarcoma Services) Toronto Extremity Salvage Score (TESS)
questionnaire. The paper provides detail about the interval-valued data capture
as well as the subsequent modelling of the data using fuzzy sets. Based on an
initial sample of participants, we use Jaccard similarity on the resulting
word models to show that there may be considerable differences in the
interpretation of commonly used questionnaire terms, thus presenting a very
real risk of miscommunication between patients and medical professionals as
well as within the group of medical professionals.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 04:40:14 GMT"
}
] | 1,469,145,600,000 | [
[
"Navarro",
"Javier",
""
],
[
"Wagner",
"Christian",
""
],
[
"Aickelin",
"Uwe",
""
],
[
"Green",
"Lynsey",
""
],
[
"Ashford",
"Robert",
""
]
] |
1607.06198 | Uwe Aickelin | Jenna Marie Reps, Jonathan M. Garibaldi, Uwe Aickelin, Jack E. Gibson,
Richard B. Hubbard | Supervised Adverse Drug Reaction Signalling Framework Imitating Bradford
Hill's Causality Considerations | null | Journal of Biomedical Informatics, 56 , pp. 356-368, 2015 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Big longitudinal observational medical data potentially hold a wealth of
information and have been recognised as potential sources for gaining new drug
safety knowledge. Unfortunately there are many complexities and underlying
issues when analysing longitudinal observational data. Due to these
complexities, existing methods for large-scale detection of negative side
effects using observational data all tend to have issues distinguishing between
association and causality. New methods that can better discriminate causal and
non-causal relationships need to be developed to fully utilise the data. In
this paper we propose using a set of causality considerations developed by the
epidemiologist Bradford Hill as a basis for engineering features that enable
the application of supervised learning for the problem of detecting negative
side effects. The Bradford Hill considerations look at various perspectives of
a drug and outcome relationship to determine whether it shows causal traits. We
taught a classifier to find patterns within these perspectives and it learned
to discriminate between association and causality. The novelty of this research
is the combination of supervised learning and Bradford Hill's causality
considerations to automate Bradford Hill's causality assessment. We
evaluated the framework on a drug safety gold standard known as the
Observational Medical Outcomes Partnership's non-specified association reference
set. The methodology obtained excellent discriminative ability, with areas under
the curve ranging between 0.792 and 0.940 (existing method optimal: 0.73) and a
mean average precision of 0.640 (existing method optimal: 0.141). The proposed
features can be calculated efficiently and be readily updated, making the
framework suitable for big observational data.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 05:31:04 GMT"
}
] | 1,469,145,600,000 | [
[
"Reps",
"Jenna Marie",
""
],
[
"Garibaldi",
"Jonathan M.",
""
],
[
"Aickelin",
"Uwe",
""
],
[
"Gibson",
"Jack E.",
""
],
[
"Hubbard",
"Richard B.",
""
]
] |
1607.06759 | Alexander Kott | Michael Ownby, Alexander Kott | Predicting Enemy's Actions Improves Commander Decision-Making | A version of this paper was presented at CCRTS'06 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Defense Advanced Research Projects Agency (DARPA) Real-time Adversarial
Intelligence and Decision-making (RAID) program is investigating the
feasibility of "reading the mind of the enemy" - to estimate and anticipate, in
real-time, the enemy's likely goals, deceptions, actions, movements and
positions. This program focuses specifically on urban battles at echelons of
battalion and below. The RAID program leverages approximate game-theoretic and
deception-sensitive algorithms to provide real-time enemy estimates to a
tactical commander. A key hypothesis of the program is that these predictions
and recommendations will make the commander more effective, i.e. he should be
able to achieve his operational goals more safely, quickly, and efficiently.
Realistic experimentation and evaluation drive the development process using
human-in-the-loop wargames to compare humans and the RAID system. Two
experiments were conducted in 2005 as part of Phase I to determine if the RAID
software could make predictions and recommendations as effectively and
accurately as a 4-person experienced staff. This report discusses the
intriguing and encouraging results of these first two experiments conducted by
the RAID program. It also provides details about the experiment environment and
methodology that were used to demonstrate and prove the research goals.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2016 17:37:24 GMT"
}
] | 1,469,404,800,000 | [
[
"Ownby",
"Michael",
""
],
[
"Kott",
"Alexander",
""
]
] |
1607.07027 | Vinu E V | E. V. Vinu, P Sreenivasa Kumar | Redundancy-free Verbalization of Individuals for Ontology Validation | Under review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the problem of verbalizing Web Ontology Language (OWL) axioms
of domain ontologies in this paper. The existing approaches address the problem
of fidelity of verbalized OWL texts to OWL semantics by exploring different
ways of expressing the same OWL axiom in various linguistic forms. They also
perform grouping and aggregating of the natural language (NL) sentences that
are generated corresponding to each OWL statement into a comprehensible
structure. However, no effort has been made to apply a semantic reduction
at the logical level to remove redundancies and repetitions, so that the reduced
set of axioms can be used for generating a more meaningful and
human-understandable (what we call redundancy-free) text. Our experiments show
that formal semantic reduction at the logical level is very helpful for generating
redundancy-free descriptions of ontology entities. In this paper, we
particularly focus on generating descriptions of individuals of SHIQ-based
ontologies. The details of a case study are provided to support the usefulness
of the redundancy-free NL descriptions of individuals in knowledge validation
applications.
| [
{
"version": "v1",
"created": "Sun, 24 Jul 2016 11:22:00 GMT"
}
] | 1,469,491,200,000 | [
[
"Vinu",
"E. V.",
""
],
[
"Kumar",
"P Sreenivasa",
""
]
] |
1607.07288 | Alexander Kott | Alexander Kott, Wes Milks | Validation of Information Fusion | This is a version of the paper presented at FUSION'09 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We motivate and offer a formal definition of validation as it applies to
information fusion systems. Common definitions of validation compare the actual
state of the world with that derived by the fusion process. This definition
conflates properties of the fusion system with properties of systems that
intervene between the world and the fusion system. We propose an alternative
definition where validation of an information fusion system references a
standard fusion device, such as recognized human experts. We illustrate the
approach by describing the validation process implemented in RAID, a program
conducted by DARPA and focused on information fusion in adversarial, deceptive
environments.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2016 17:18:05 GMT"
}
] | 1,469,491,200,000 | [
[
"Kott",
"Alexander",
""
],
[
"Milks",
"Wes",
""
]
] |
1607.07311 | Majd Hawasly | Majd Hawasly, Florian T. Pokorny and Subramanian Ramamoorthy | Estimating Activity at Multiple Scales using Spatial Abstractions | 16 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous robots operating in dynamic environments must maintain beliefs
over a hypothesis space that is rich enough to represent the activities of
interest at different scales. This is important both in order to accommodate
the availability of evidence at varying degrees of coarseness, such as when
interpreting and assimilating natural instructions, but also in order to make
subsequent reactive planning more efficient. We present an algorithm that
combines a topology-based trajectory clustering procedure that generates
hierarchically-structured spatial abstractions with a bank of particle filters
at each of these abstraction levels so as to produce probability estimates over
an agent's navigation activity that is kept consistent across the hierarchy. We
study the performance of the proposed method using a synthetic trajectory
dataset in 2D, as well as a dataset taken from AIS-based tracking of ships in
an extended harbour area. We show that, in comparison to a baseline which is a
particle filter that estimates activity without exploiting such structure, our
method achieves a lower normalised error in predicting the trajectory as well
as faster convergence to the true class when compared against ground
truth.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 15:17:06 GMT"
}
] | 1,469,491,200,000 | [
[
"Hawasly",
"Majd",
""
],
[
"Pokorny",
"Florian T.",
""
],
[
"Ramamoorthy",
"Subramanian",
""
]
] |
1607.07730 | Seth Baum | Anthony M. Barrett and Seth D. Baum | A Model of Pathways to Artificial Superintelligence Catastrophe for Risk
and Decision Analysis | null | null | 10.1080/0952813X.2016.1186228 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An artificial superintelligence (ASI) is artificial intelligence that is
significantly more intelligent than humans in all respects. While ASI does not
currently exist, some scholars propose that it could be created sometime in the
future, and furthermore that its creation could cause a severe global
catastrophe, possibly even resulting in human extinction. Given the high
stakes, it is important to analyze ASI risk and factor the risk into decisions
related to ASI research and development. This paper presents a graphical model
of major pathways to ASI catastrophe, focusing on ASI created via recursive
self-improvement. The model uses the established risk and decision analysis
modeling paradigms of fault trees and influence diagrams in order to depict
combinations of events and conditions that could lead to AI catastrophe, as
well as intervention options that could decrease risks. The events and
conditions include select aspects of the ASI itself as well as the human
process of ASI research, development, and management. Model structure is
derived from published literature on ASI risk. The model offers a foundation
for rigorous quantitative evaluation and decision making on the long-term risk
of ASI catastrophe.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 13:04:22 GMT"
}
] | 1,469,577,600,000 | [
[
"Barrett",
"Anthony M.",
""
],
[
"Baum",
"Seth D.",
""
]
] |
1607.07847 | Peter Sch\"uller | Gokhan Avci, Mustafa Mehuljic, Peter Sch\"uller | Technical Report: Giving Hints for Logic Programming Examples without
Revealing Solutions | 7 pages. This is an extended English version of "Gokhan Avci, Mustafa
Mehuljic, and Peter Schuller. Cozumu Aciga Cikarmadan Mantiksal Programlama
Orneklerine Ipucu Verme, Sinyal Isleme ve Iletisim Uygulamalari Kurultayi
(SIU), pages 513-516, 2016, DOI: 10.1109/SIU.2016.7495790" | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce a framework for supporting learning to program in the paradigm
of Answer Set Programming (ASP), which is a declarative logic programming
formalism. Based on the idea of teaching by asking the student to complete
small example ASP programs, we introduce a three-stage method for giving hints
to the student without revealing the correct solution of an example. We
categorize mistakes into (i) syntactic mistakes, (ii) unexpected but
syntactically correct input, and (iii) semantic mistakes, describe mathematical
definitions of these mistakes, and show how to compute hints from these
definitions.
| [
{
"version": "v1",
"created": "Tue, 26 Jul 2016 19:17:11 GMT"
}
] | 1,470,960,000,000 | [
[
"Avci",
"Gokhan",
""
],
[
"Mehuljic",
"Mustafa",
""
],
[
"Schüller",
"Peter",
""
]
] |
1607.08075 | Adrian Groza | Adrian Groza, Madalina Mand Nagy | Harmonization of conflicting medical opinions using argumentation
protocols and textual entailment - a case study on Parkinson disease | ICCP 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parkinson's disease is the second most common neurodegenerative disease,
affecting more than 1.2 million people in Europe. Medications are available for
the management of its symptoms, but the exact cause of the disease is unknown
and there is currently no cure on the market. To better understand the
relations between new findings and current medical knowledge, we need tools
able to analyse published medical papers based on natural language processing
and tools capable of identifying various relationships between new findings and
current medical knowledge. Our work aims to fill this technological gap.
To identify conflicting information in medical documents, we employ textual
entailment technology. To encapsulate existing medical knowledge, we rely on
ontologies. To connect the formal axioms in ontologies with natural text in
medical articles, we exploit ontology verbalisation techniques. To assess the
level of disagreement between human agents with respect to a medical issue, we
rely on fuzzy aggregation. To harmonize this disagreement, we design mediation
protocols within a multi-agent framework.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2016 13:13:41 GMT"
}
] | 1,469,664,000,000 | [
[
"Groza",
"Adrian",
""
],
[
"Nagy",
"Madalina Mand",
""
]
] |
1607.08131 | Larisa Safina | Alexander Tchitchigin, Max Talanov, Larisa Safina, Manuel Mazzara | Neuromorphic Robot Dream | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present the next step in our approach to neurobiologically
plausible implementation of emotional reactions and behaviors for real-time
autonomous robotic systems. The working metaphor we use is the "day" and the
"night" phases of mammalian life. During the "day phase" a robotic system
stores the inbound information and is controlled by a light-weight rule-based
system in real time. In contrast to that, during the "night phase" information
that has been stored is transferred to a supercomputing system to update the
realistic neural network: emotional and behavioral strategies.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2016 14:54:47 GMT"
}
] | 1,469,664,000,000 | [
[
"Tchitchigin",
"Alexander",
""
],
[
"Talanov",
"Max",
""
],
[
"Safina",
"Larisa",
""
],
[
"Mazzara",
"Manuel",
""
]
] |