id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1811.09231 | Anas Shrinah | Anas Shrinah, Kerstin Eder | Goal-constrained Planning Domain Model Verification of Safety Properties | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The verification of planning domain models is crucial to ensure the safety,
integrity and correctness of planning-based automated systems. This task is
usually performed using model checking techniques. However, unconstrained
application of model checkers to verify planning domain models can result in
false positives, i.e. counterexamples that are unreachable by a sound planner
when using the domain under verification during a planning task. In this paper,
we discuss the downside of unconstrained planning domain model verification. We
then introduce the notion of a valid planning counterexample, and demonstrate
how model checkers, as well as state trajectory constraints planning
techniques, should be used to verify planning domain models so that invalid
planning counterexamples are not returned.
| [
{
"version": "v1",
"created": "Thu, 22 Nov 2018 16:33:52 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Mar 2019 01:18:37 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Nov 2019 10:17:02 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Feb 2020 17:04:54 GMT"
}
] | 1,582,588,800,000 | [
[
"Shrinah",
"Anas",
""
],
[
"Eder",
"Kerstin",
""
]
] |
1811.09246 | David Manheim | David Manheim | Oversight of Unsafe Systems via Dynamic Safety Envelopes | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper reviews the reasons that Human-in-the-Loop is both critical for
preventing widely-understood failure modes for machine learning, and not a
practical solution. Following this, we review two current heuristic methods for
addressing this problem. The first is provable safety envelopes, which are
possible only when the dynamics of the system are fully known, but can provide
useful safety guarantees when optimal behavior is based on machine learning with
poorly-understood safety characteristics. The second is the simpler circuit
breaker model, which can forestall or prevent catastrophic outcomes by stopping
the system, without any specific model of the system. This paper proposes using
heuristic, dynamic safety envelopes, which are a plausible halfway point
between these approaches that allows human oversight without some of the more
difficult problems faced by Human-in-the-Loop systems. Finally, the paper
concludes with how this approach can be used for governance in settings where
otherwise unsafe systems are deployed.
| [
{
"version": "v1",
"created": "Thu, 22 Nov 2018 17:31:41 GMT"
}
] | 1,543,190,400,000 | [
[
"Manheim",
"David",
""
]
] |
1811.09722 | Tathagata Chakraborti | Tathagata Chakraborti, Anagha Kulkarni, Sarath Sreedharan, David E.
Smith, Subbarao Kambhampati | Explicability? Legibility? Predictability? Transparency? Privacy?
Security? The Emerging Landscape of Interpretable Agent Behavior | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been significant interest of late in generating behavior of agents
that is interpretable to the human (observer) in the loop. However, the work in
this area has typically lacked coherence on the topic, with proposed solutions
for "explicable", "legible", "predictable" and "transparent" planning with
overlapping, and sometimes conflicting, semantics all aimed at some notion of
understanding what intentions the observer will ascribe to an agent by
observing its behavior. This is also true for the recent works on "security"
and "privacy" of plans which are also trying to answer the same question, but
from the opposite point of view -- i.e. when the agent is trying to hide
instead of revealing its intentions. This paper attempts to provide a workable
taxonomy of relevant concepts in this exciting and emerging field of inquiry.
| [
{
"version": "v1",
"created": "Fri, 23 Nov 2018 22:38:49 GMT"
}
] | 1,543,276,800,000 | [
[
"Chakraborti",
"Tathagata",
""
],
[
"Kulkarni",
"Anagha",
""
],
[
"Sreedharan",
"Sarath",
""
],
[
"Smith",
"David E.",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1811.09900 | Sriram Gopalakrishnan | Sriram Gopalakrishnan, Subbarao Kambhampati | TGE-viz : Transition Graph Embedding for Visualization of Plan Traces
and Domains | Supplemental material follows the references of the main paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing work for plan trace visualization in automated planning uses
pipeline-style visualizations, similar to plans in Gantt charts. Such
visualizations do not capture the domain structure or dependencies between the
various fluents and actions. Additionally, plan traces in such visualizations
cannot be easily compared with one another without parsing the details of
individual actions, which imposes a higher cognitive load. We introduce
TGE-viz, a technique to visualize plan traces within an embedding of the entire
transition graph of a domain in low dimensional space. TGE-viz allows users to
visualize and criticize plans more intuitively for mixed-initiative planning.
It also allows users to visually appraise the structure of domains and the
dependencies within them.
| [
{
"version": "v1",
"created": "Sat, 24 Nov 2018 21:27:53 GMT"
}
] | 1,543,276,800,000 | [
[
"Gopalakrishnan",
"Sriram",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1811.09920 | Bo Zhang | Bo Zhang, Bin Chen, Jinyu Yang, Wenjing Yang, Jiankang Zhang | An Unified Intelligence-Communication Model for Multi-Agent System
Part-I: Overview | 12 pages, 5 figures, submitted for publication in IEEE Journals
Interactive Version @ https://uicm-mas.github.io/ | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Motivated by Shannon's model and recent rehabilitation of self-supervised
artificial intelligence having a "World Model", this paper proposes a unified
intelligence-communication (UIC) model for describing a single agent and any
multi-agent system.
Firstly, the environment is modelled as the generic communication channel
between agents. Secondly, the UIC model adopts a learning-agent model for
unifying several well-adopted agent architectures, e.g. the rule-based agent
model in complex adaptive systems, the layered model for describing human-level
intelligence, and the world-model based agent model. The model may also provide
a unified approach to investigate a multi-agent system (MAS) having multiple
action-perception modalities, e.g. explicit information transfer and implicit
information transfer.
This treatise would be divided into three parts, and this first part provides
an overview of the UIC model without introducing cumbersome mathematical
analysis and optimizations. In the second part of this treatise, case studies
with quantitative analysis driven by the UIC model would be provided,
exemplifying the adoption of the UIC model in multi-agent systems. Specifically,
two representative cases would be studied, namely the analysis of a natural
multi-agent system, as well as the co-design of communication, perception and
action in an artificial multi-agent system. In the third part of this treatise,
the paper provides further insights and future research directions motivated by
the UIC model, such as unification of single intelligence and collective
intelligence, a possible explanation of intelligence emergence and a dual model
for agent-environment intelligence hypothesis.
Note: this paper is a preview version; the extended full version will be
released after acceptance.
| [
{
"version": "v1",
"created": "Sun, 25 Nov 2018 01:31:38 GMT"
}
] | 1,543,276,800,000 | [
[
"Zhang",
"Bo",
""
],
[
"Chen",
"Bin",
""
],
[
"Yang",
"Jinyu",
""
],
[
"Yang",
"Wenjing",
""
],
[
"Zhang",
"Jiankang",
""
]
] |
1811.10433 | Buser Say | Buser Say, Scott Sanner | Compact and Efficient Encodings for Planning in Factored State and
Action Spaces with Learned Binarized Neural Network Transition Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we leverage the efficiency of Binarized Neural Networks (BNNs)
to learn complex state transition models of planning domains with discretized
factored state and action spaces. In order to directly exploit this transition
structure for planning, we present two novel compilations of the learned
factored planning problem with BNNs based on reductions to Weighted Partial
Maximum Boolean Satisfiability (FD-SAT-Plan+) as well as Binary Linear
Programming (FD-BLP-Plan+). Theoretically, we show that our SAT-based
Bi-Directional Neuron Activation Encoding is asymptotically the most compact
encoding relative to the current literature and supports Unit Propagation (UP)
-- an important property that facilitates efficiency in SAT solvers.
Experimentally, we validate the computational efficiency of our Bi-Directional
Neuron Activation Encoding in comparison to an existing neuron activation
encoding and demonstrate the ability to learn complex transition models with
BNNs. We test the runtime efficiency of both FD-SAT-Plan+ and FD-BLP-Plan+ on
the learned factored planning problem showing that FD-SAT-Plan+ scales better
with increasing BNN size and complexity. Finally, we present a finite-time
incremental constraint generation algorithm based on generalized landmark
constraints to improve the planning accuracy of our encodings through simulated
or real-world interaction.
| [
{
"version": "v1",
"created": "Mon, 26 Nov 2018 14:59:29 GMT"
},
{
"version": "v10",
"created": "Tue, 9 Apr 2019 00:23:16 GMT"
},
{
"version": "v11",
"created": "Thu, 3 Oct 2019 13:08:37 GMT"
},
{
"version": "v12",
"created": "Fri, 6 Mar 2020 17:47:58 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Nov 2018 18:48:51 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Nov 2018 02:08:22 GMT"
},
{
"version": "v4",
"created": "Thu, 29 Nov 2018 15:31:00 GMT"
},
{
"version": "v5",
"created": "Fri, 30 Nov 2018 17:15:31 GMT"
},
{
"version": "v6",
"created": "Thu, 6 Dec 2018 18:13:01 GMT"
},
{
"version": "v7",
"created": "Fri, 7 Dec 2018 14:38:28 GMT"
},
{
"version": "v8",
"created": "Mon, 10 Dec 2018 02:21:17 GMT"
},
{
"version": "v9",
"created": "Thu, 10 Jan 2019 11:25:20 GMT"
}
] | 1,583,712,000,000 | [
[
"Say",
"Buser",
""
],
[
"Sanner",
"Scott",
""
]
] |
1811.10656 | Alexey Ignatiev | Alexey Ignatiev, Nina Narodytska, Joao Marques-Silva | Abduction-Based Explanations for Machine Learning Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growing range of applications of Machine Learning (ML) in a multitude of
settings motivates the need to compute small explanations for the predictions
made. Small explanations are generally accepted as easier for human decision
makers to understand. Most earlier work on computing explanations is based on
heuristic approaches, providing no guarantees of quality, in terms of how close
such solutions are to cardinality- or subset-minimal explanations. This paper
develops a constraint-agnostic solution for computing explanations for any ML
model. The proposed solution exploits abductive reasoning, and imposes the
requirement that the ML model can be represented as sets of constraints using
some target constraint reasoning system for which the decision problem can be
answered with some oracle. The experimental results, obtained on well-known
datasets, validate the scalability of the proposed approach as well as the
quality of the computed solutions.
| [
{
"version": "v1",
"created": "Mon, 26 Nov 2018 19:27:26 GMT"
}
] | 1,543,363,200,000 | [
[
"Ignatiev",
"Alexey",
""
],
[
"Narodytska",
"Nina",
""
],
[
"Marques-Silva",
"Joao",
""
]
] |
1811.10728 | Hisao Katsumi | Hisao Katsumi, Takuya Hiraoka, Koichiro Yoshino, Kazeto Yamamoto,
Shota Motoura, Kunihiko Sadamasa and Satoshi Nakamura | Optimization of Information-Seeking Dialogue Strategy for
Argumentation-Based Dialogue System | Accepted by AAAI2019 DEEP-DIAL 2019 workshop | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Argumentation-based dialogue systems, which can handle and exchange arguments
through dialogue, have been widely researched. It is required that these
systems have sufficient supporting information to argue their claims
rationally; however, the systems often do not have enough of such information
in realistic situations. One way to fill in the gap is acquiring such missing
information from dialogue partners (information-seeking dialogue). Existing
information-seeking dialogue systems are based on handcrafted dialogue
strategies that exhaustively examine missing information. However, these
strategies are not specialized in collecting information for constructing
rational arguments. Moreover, the number of the system's inquiry candidates
grows with the size of the argument set that the system deals with. In
this paper, we formalize the process of information-seeking dialogue as Markov
decision processes (MDPs) and apply deep reinforcement learning (DRL) for
automatically optimizing a dialogue strategy. By utilizing DRL, our dialogue
strategy can successfully minimize the objective function, namely the number of
turns it takes for our system to collect the necessary information in a dialogue. We
conducted dialogue experiments using two datasets from different domains of
argumentative dialogue. Experimental results show that the proposed
formalization based on MDP works well, and the policy optimized by DRL
outperformed existing heuristic dialogue strategies.
| [
{
"version": "v1",
"created": "Mon, 26 Nov 2018 22:56:07 GMT"
}
] | 1,543,363,200,000 | [
[
"Katsumi",
"Hisao",
""
],
[
"Hiraoka",
"Takuya",
""
],
[
"Yoshino",
"Koichiro",
""
],
[
"Yamamoto",
"Kazeto",
""
],
[
"Motoura",
"Shota",
""
],
[
"Sadamasa",
"Kunihiko",
""
],
[
"Nakamura",
"Satoshi",
""
]
] |
1811.10928 | Laurent Orseau | Laurent Orseau, Levi H. S. Lelis, Tor Lattimore, Th\'eophane Weber | Single-Agent Policy Tree Search With Guarantees | null | 32nd Conference on Neural Information Processing Systems (NIPS
2018), Montr\'eal, Canada | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce two novel tree search algorithms that use a policy to guide
search. The first algorithm is a best-first enumeration that uses a cost
function that allows us to prove an upper bound on the number of nodes to be
expanded before reaching a goal state. We show that this best-first algorithm
is particularly well suited for `needle-in-a-haystack' problems. The second
algorithm is based on sampling and we prove an upper bound on the expected
number of nodes it expands before reaching a set of goal states. We show that
this algorithm is better suited for problems where many paths lead to a goal.
We validate these tree search algorithms on 1,000 computer-generated levels of
Sokoban, where the policy used to guide the search comes from a neural network
trained using A3C. Our results show that the policy tree search algorithms we
introduce are competitive with a state-of-the-art domain-independent planner
that uses heuristic search.
| [
{
"version": "v1",
"created": "Tue, 27 Nov 2018 11:53:33 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Nov 2018 10:32:36 GMT"
}
] | 1,543,449,600,000 | [
[
"Orseau",
"Laurent",
""
],
[
"Lelis",
"Levi H. S.",
""
],
[
"Lattimore",
"Tor",
""
],
[
"Weber",
"Théophane",
""
]
] |
1811.11064 | Nikhil Krishnaswamy | Nikhil Krishnaswamy, Scott Friedman, James Pustejovsky | Combining Deep Learning and Qualitative Spatial Reasoning to Learn
Complex Structures from Sparse Examples with Noise | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many modern machine learning approaches require vast amounts of training data
to learn new concepts; conversely, human learning often requires few
examples--sometimes only one--from which the learner can abstract structural
concepts. We present a novel approach to introducing new spatial structures to
an AI agent, combining deep learning over qualitative spatial relations with
various heuristic search algorithms. The agent extracts spatial relations from
a sparse set of noisy examples of block-based structures, and trains
convolutional and sequential models of those relation sets. To create novel
examples of similar structures, the agent begins placing blocks on a virtual
table, uses a CNN to predict the most similar complete example structure after
each placement, an LSTM to predict the most likely set of remaining moves
needed to complete it, and recommends one using heuristic search. We verify
that the agent learned the concept by observing its virtual block-building
activities, wherein it ranks each potential subsequent action toward building
its learned concept. We empirically assess this approach with human
participants' ratings of the block structures. Initial results and qualitative
evaluations of structures generated by the trained agent show where it has
generalized concepts from the training data, which heuristics perform best
within the search space, and how we might improve learning and execution.
| [
{
"version": "v1",
"created": "Tue, 27 Nov 2018 15:48:27 GMT"
}
] | 1,543,363,200,000 | [
[
"Krishnaswamy",
"Nikhil",
""
],
[
"Friedman",
"Scott",
""
],
[
"Pustejovsky",
"James",
""
]
] |
1811.11233 | Mark Schutera | Mark Schutera and Niklas Goby and Stefan Smolarek and Markus Reischl | Distributed traffic light control at uncoupled intersections with
real-world topology by deep reinforcement learning | 32nd Conference on Neural Information Processing Systems, within
Workshop on Machine Learning for Intelligent Transportation Systems | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work examines the implications of uncoupled intersections with local
real-world topology and sensor setup on traffic light control approaches.
Control approaches are evaluated with respect to traffic flow, fuel
consumption, and noise emission at intersections.
The real-world road network of Friedrichshafen is depicted and preprocessed,
and the existing traffic-light-controlled intersections are modeled with
respect to state space and action space.
Different strategies, containing fixed-time, gap-based and time-based control
approaches as well as our deep reinforcement learning based control approach,
are implemented and assessed. Our novel DRL approach allows for modeling the
TLC action space, with respect to phase selection as well as selection of
transition timings. It was found that real-world topologies, and thus
irregularly arranged intersections have an influence on the performance of
traffic light control approaches. This can be observed even within the same
intersection types (n-arm, m-phases). Moreover, we show that these influences
can be dealt with efficiently by our deep reinforcement learning based control
approach.
| [
{
"version": "v1",
"created": "Tue, 27 Nov 2018 20:08:28 GMT"
}
] | 1,543,449,600,000 | [
[
"Schutera",
"Mark",
""
],
[
"Goby",
"Niklas",
""
],
[
"Smolarek",
"Stefan",
""
],
[
"Reischl",
"Markus",
""
]
] |
1811.11273 | Henry Bendekgey | Henry Bendekgey | Clustering Player Strategies from Variable-Length Game Logs in Dominion | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for encoding game logs as numeric features in the card
game Dominion. We then run the manifold learning algorithm t-SNE on these
encodings to visualize the landscape of player strategies. By quantifying game
states as the relative prevalence of cards in a player's deck, we create
visualizations that capture qualitative differences in player strategies.
Different ways of deviating from the starting game state appear as different
rays in the visualization, giving it an intuitive explanation. This is a
promising new direction for understanding player strategies across games that
vary in length.
| [
{
"version": "v1",
"created": "Tue, 27 Nov 2018 21:48:42 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Dec 2018 07:30:02 GMT"
}
] | 1,544,659,200,000 | [
[
"Bendekgey",
"Henry",
""
]
] |
1811.11435 | Chiaki Sakama | Chiaki Sakama, Hien D. Nguyen, Taisuke Sato, Katsumi Inoue | Partial Evaluation of Logic Programs in Vector Spaces | Proceedings of the 11th Workshop on Answer Set Programming and Other
Computing Paradigms 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce methods of encoding propositional logic programs
in vector spaces. Interpretations are represented by vectors and programs are
represented by matrices. The least model of a definite program is computed by
multiplying an interpretation vector and a program matrix. To optimize
computation in vector spaces, we provide a method of partial evaluation of
programs using linear algebra. Partial evaluation is done by unfolding rules in
a program, and it is realized in a vector space by multiplying program
matrices. We perform experiments using randomly generated programs and show
that partial evaluation has potential for realizing efficient computation in
large-scale programs.
| [
{
"version": "v1",
"created": "Wed, 28 Nov 2018 08:24:03 GMT"
}
] | 1,543,449,600,000 | [
[
"Sakama",
"Chiaki",
""
],
[
"Nguyen",
"Hien D.",
""
],
[
"Sato",
"Taisuke",
""
],
[
"Inoue",
"Katsumi",
""
]
] |
1811.12083 | Nico Potyka | Nico Potyka | A Polynomial-time Fragment of Epistemic Probabilistic Argumentation
(Technical Report) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic argumentation allows reasoning about argumentation problems in
a way that is well-founded by probability theory. However, in practice, this
approach can be severely limited by the fact that probabilities are defined by
adding an exponential number of terms. We show that this exponential blowup can
be avoided in an interesting fragment of epistemic probabilistic argumentation
and that some computational problems that have been considered intractable can
be solved in polynomial time. We give efficient convex programming formulations
for these problems and explore how far our fragment can be extended without
losing tractability.
| [
{
"version": "v1",
"created": "Thu, 29 Nov 2018 11:52:21 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Mar 2019 15:25:25 GMT"
}
] | 1,551,916,800,000 | [
[
"Potyka",
"Nico",
""
]
] |
1811.12455 | Catarina Moreira | Catarina Moreira | Unifying Decision-Making: a Review on Evolutionary Theories on
Rationality and Cognitive Biases | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we review the concepts of rationality across several
different fields, namely economics, psychology, evolutionary biology and
behavioural ecology. We review how processes like natural selection can help us
understand the evolution of cognition and how cognitive biases might be a
consequence of this natural selection. In the end, we argue that humans are not
irrational, but rather boundedly rational, and we complement the discussion on
how quantum cognitive models can contribute to the modelling and prediction of
paradoxical human decisions.
| [
{
"version": "v1",
"created": "Thu, 29 Nov 2018 19:56:19 GMT"
}
] | 1,543,795,200,000 | [
[
"Moreira",
"Catarina",
""
]
] |
1811.12787 | Nico Potyka | Nico Potyka | A Tutorial for Weighted Bipolar Argumentation with Continuous Dynamical
Systems and the Java Library Attractor | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weighted bipolar argumentation frameworks allow modeling decision problems
and online discussions by defining arguments and their relationships. The
strength of arguments can be computed based on an initial weight and the
strength of attacking and supporting arguments. While previous approaches
assumed an acyclic argumentation graph and successively set arguments' strength
based on the strength of their parents, recently continuous dynamical systems
have been proposed as an alternative. Continuous models update arguments'
strength simultaneously and continuously. While there are currently no
analytical guarantees for convergence in general graphs, experiments show that
continuous models can converge quickly in large cyclic graphs with thousands of
arguments. Here, we focus on the high-level ideas of this approach and explain
key results and applications. We also introduce Attractor, a Java library that
can be used to solve weighted bipolar argumentation problems. Attractor
contains implementations of several discrete and continuous models and
numerical algorithms to compute solutions. It also provides base classes that
can be used to implement, to evaluate and to compare continuous models easily.
| [
{
"version": "v1",
"created": "Fri, 30 Nov 2018 13:31:04 GMT"
}
] | 1,543,795,200,000 | [
[
"Potyka",
"Nico",
""
]
] |
1811.12917 | Daniel Muller | Daniel Muller and Erez Karpas | Automated Tactical Decision Planning Model with Strategic Values
Guidance for Local Action-Value-Ambiguity | 9 pages, 4 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many real-world planning problems, an action's impact differs with the
place, time and context in which the action is applied. The same action with
the same effects can cause a different change in a different context or state.
For actions with an incomplete precondition list, which are applicable in
several states and circumstances, ambiguity regarding the impact of the action
is challenging even in small domains. To estimate an action's real impact, an
evaluation of the effect list is not enough; a relative estimation is more
informative and better suited to this purpose. Recent work on
Over-subscription Planning (OSP) defined the net utility of action as the net
change in the state's value caused by the action. The notion of net utility of
action allows for a broader perspective on value action impact and use for a
more accurate evaluation of achievements of the action, considering inter-state
and intra-state dependencies. Achieving value-rational decisions in complex
reality often requires strategic, high-level planning with a global
perspective and values, while many local tactical decisions require real-time
information to estimate the impact of actions. This paper proposes an offline
action-value structure analysis to exploit the compactly represented
informativeness of net utility of actions to extend the scope of planning to
value uncertainty scenarios and to provide a real-time value-rational decision
planning tool. The result of the offline pre-processing phase is a compact
decision planning model representation for flexible, local reasoning of net
utility of actions with (offline) value ambiguity. The obtained flexibility is
beneficial for the online planning phase and real-time execution of actions
with value ambiguity. Our empirical evaluation shows the effectiveness of this
approach in domains with value ambiguity in their action-value-structure.
| [
{
"version": "v1",
"created": "Fri, 30 Nov 2018 18:04:19 GMT"
}
] | 1,543,795,200,000 | [
[
"Muller",
"Daniel",
""
],
[
"Karpas",
"Erez",
""
]
] |
1812.00091 | Yixiu Zhao | Yixiu Zhao, Ziyin Liu | BlockPuzzle - A Challenge in Physical Reasoning and Generalization for
Robot Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we propose a novel task framework under which a variety of
physical reasoning puzzles can be constructed using very simple rules. Under
sparse reward settings, most of these tasks can be very challenging for a
reinforcement learning agent to learn. We build several simple environments
with this task framework in MuJoCo and OpenAI Gym and attempt to solve them. We
are able to solve the environments by designing curricula to guide the agent in
learning and using imitation learning methods to transfer knowledge from a
simpler environment. This is only a first step for the task framework, and
further research on how to solve the harder tasks and transfer knowledge
between tasks is needed.
| [
{
"version": "v1",
"created": "Fri, 30 Nov 2018 23:18:08 GMT"
}
] | 1,543,881,600,000 | [
[
"Zhao",
"Yixiu",
""
],
[
"Liu",
"Ziyin",
""
]
] |
1812.00136 | Yujian Li | Yujian Li | Theory of Cognitive Relativity: A Promising Paradigm for True AI | 38 pages (double spaced), 8 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rise of deep learning has brought artificial intelligence (AI) to the
forefront. The ultimate goal of AI is to realize machines with a human-like mind and
consciousness, but existing achievements mainly simulate intelligent behavior
on computer platforms. These achievements all belong to weak AI rather than
strong AI. How to achieve strong AI is not known yet in the field of
intelligence science. Currently, this field is calling for a new paradigm,
especially the Theory of Cognitive Relativity (TCR). The TCR aims to summarize a
simple and elegant set of first principles about the nature of intelligence, at
least including the Principle of World's Relativity and the Principle of
Symbol's Relativity. The Principle of World's Relativity states that the
subjective world an intelligent agent can observe is strongly constrained by
the way it perceives the objective world. The Principle of Symbol's Relativity
states that an intelligent agent can use any physical symbol system to express
what it observes in its subjective world. The two principles are derived from
scientific facts and life experience. Thought experiments show that they are
important to understand high-level intelligence and necessary to establish a
scientific theory of mind and consciousness. Rather than brain-like
intelligence, the TCR indeed advocates a promising change in direction to
realize true AI, i.e. artificial general intelligence or artificial
consciousness, particularly different from humans' and animals'. Furthermore, a
TCR creed has been presented and extended to reveal the secrets of
consciousness and to guide realization of conscious machines. In the sense that
true AI could be diversely implemented in a brain-different way, the TCR would
probably drive an intelligence revolution in combination with some additional
first principles.
| [
{
"version": "v1",
"created": "Sat, 1 Dec 2018 04:01:03 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Dec 2018 14:11:42 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Dec 2018 06:59:22 GMT"
}
] | 1,545,350,400,000 | [
[
"Li",
"Yujian",
""
]
] |
1812.00336 | Sijia Xu | Sijia Xu, Hongyu Kuang, Zhi Zhuang, Renjie Hu, Yang Liu, Huyang Sun | Macro action selection with deep reinforcement learning in StarCraft | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | StarCraft (SC) is one of the most popular and successful Real Time Strategy
(RTS) games. In recent years, SC is also widely accepted as a challenging
testbed for AI research because of its enormous state space, partially observed
information, multi-agent collaboration, and so on. With the help of annual
AIIDE and CIG competitions, a growing number of SC bots are proposed and
continuously improved. However, a large gap remains between the top-level bot
and the professional human player. One vital reason is that current SC bots
mainly rely on predefined rules to select macro actions during their games.
These rules are not scalable and efficient enough to cope with the enormous yet
partially observed state space in the game. In this paper, we propose a deep
reinforcement learning (DRL) framework to improve the selection of macro
actions. Our framework is based on the combination of the Ape-X DQN and the
Long Short-Term Memory (LSTM). We use this framework to build our bot, named
LastOrder. Our evaluation, based on training against all bots from the AIIDE
2017 StarCraft AI competition set, shows that LastOrder achieves an 83% winning
rate, outperforming 26 of the total 28 entrants.
| [
{
"version": "v1",
"created": "Sun, 2 Dec 2018 06:06:28 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Mar 2019 07:38:12 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Oct 2019 02:10:29 GMT"
}
] | 1,571,097,600,000 | [
[
"Xu",
"Sijia",
""
],
[
"Kuang",
"Hongyu",
""
],
[
"Zhuang",
"Zhi",
""
],
[
"Hu",
"Renjie",
""
],
[
"Liu",
"Yang",
""
],
[
"Sun",
"Huyang",
""
]
] |
1812.01144 | Philip Cohen | Philip R Cohen | Back to the Future for Dialogue Research: A Position Paper | AAAI Workshop 2019, Deep Dial | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This short position paper is intended to provide a critique of current
approaches to dialogue, as well as a roadmap for collaborative dialogue
research. It is unapologetically opinionated, but informed by 40 years of
dialogue research. No attempt is made to be comprehensive. The paper will
discuss current research into building so-called "chatbots", slot-filling
dialogue systems, and plan-based dialogue systems. For further discussion of
some of these issues, please see (Allen et al., in press).
| [
{
"version": "v1",
"created": "Tue, 4 Dec 2018 00:41:51 GMT"
}
] | 1,543,968,000,000 | [
[
"Cohen",
"Philip R",
""
]
] |
1812.01351 | Paulo Vitor Campos Souza | Paulo Vitor de Campos Souza, Augusto Junio Guimaraes, Vanessa Souza
Araujo, Thiago Silva Rezende, Vinicius Jonathan Silva Araujo | Regularized Fuzzy Neural Networks to Aid Effort Forecasting in the
Construction and Software Development | null | Volume 9, Number 6, 2018 | 10.5121/ijaia.2018.9602 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Predicting the time to build software is a very complex task for software
engineering managers. There are complex factors that can directly interfere
with the productivity of the development team. Factors directly related to the
complexity of the system to be developed drastically change the time necessary
for the completion of the works with the software factories. This work proposes
the use of a hybrid system based on artificial neural networks and fuzzy
systems to assist in the construction of an expert system based on rules to
support in the prediction of hours destined to the development of software
according to the complexity of the elements present in the same. The set of
fuzzy rules obtained by the system helps the management and control of software
development by providing a base of interpretable estimates based on fuzzy
rules. The model was submitted to tests on a real database, and its results
were promising for the construction of an aid mechanism for the predictability
of software construction.
| [
{
"version": "v1",
"created": "Tue, 4 Dec 2018 11:57:46 GMT"
}
] | 1,547,078,400,000 | [
[
"Souza",
"Paulo Vitor de Campos",
""
],
[
"Guimaraes",
"Augusto Junio",
""
],
[
"Araujo",
"Vanessa Souza",
""
],
[
"Rezende",
"Thiago Silva",
""
],
[
"Araujo",
"Vinicius Jonathan Silva",
""
]
] |
1812.01569 | Iris Seaman | Iris Rubi Seaman, Jan-Willem van de Meent, David Wingate | Nested Reasoning About Autonomous Agents Using Probabilistic Programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As autonomous agents become more ubiquitous, they will eventually have to
reason about the plans of other agents, which is known as theory of mind
reasoning. We develop a planning-as-inference framework in which agents perform
nested simulation to reason about the behavior of other agents in an online
manner. As a concrete application of this framework, we use probabilistic
programs to model a high-uncertainty variant of pursuit-evasion games in which
an agent must make inferences about the other agents' plans to craft
counter-plans. Our probabilistic programs incorporate a variety of complex
primitives such as field-of-view calculations and path planners, which enable
us to model quasi-realistic scenarios in a computationally tractable manner. We
perform extensive experimental evaluations which establish a variety of
rational behaviors and quantify how allocating computation across levels of
nesting affects the variance of our estimators.
| [
{
"version": "v1",
"created": "Tue, 4 Dec 2018 18:19:34 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Mar 2020 20:31:26 GMT"
}
] | 1,583,452,800,000 | [
[
"Seaman",
"Iris Rubi",
""
],
[
"van de Meent",
"Jan-Willem",
""
],
[
"Wingate",
"David",
""
]
] |
1812.01818 | Masataro Asai | Masataro Asai | Photo-Realistic Blocksworld Dataset | The dataset generator is available at
https://github.com/ibm/photorealistic-blocksworld | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this report, we introduce an artificial dataset generator for
Photo-realistic Blocksworld domain. Blocksworld is one of the oldest high-level
task planning domains; it is well defined but contains sufficient complexity,
e.g., the conflicting subgoals and the decomposability into subproblems. We aim
to make this dataset a benchmark for Neural-Symbolic integrated systems and
accelerate the research in this area. The key advantage of such systems is the
ability to obtain a symbolic model from the real-world input and perform a
fast, systematic, complete algorithm for symbolic reasoning, without any
supervision and the reward signal from the environment.
| [
{
"version": "v1",
"created": "Wed, 5 Dec 2018 05:04:15 GMT"
}
] | 1,544,054,400,000 | [
[
"Asai",
"Masataro",
""
]
] |
1812.01825 | Lu Pang | Peixi Peng, Junliang Xing | Cooperative Multi-Agent Policy Gradients with Sub-optimal Demonstration | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real-world tasks such as robot coordination can be naturally modelled as
a multi-agent cooperative system where the rewards are sparse. This paper focuses
on learning decentralized policies for such tasks using sub-optimal
demonstration. To learn the multi-agent cooperation effectively and tackle the
sub-optimality of demonstration, a self-improving learning method is proposed:
On the one hand, the centralized state-action values are initialized by the
demonstration and updated by the learned decentralized policy to improve the
sub-optimality. On the other hand, the Nash Equilibrium is found from the
current state-action values and is used as a guide to learn the policy. The
proposed method is evaluated on combat RTS games, which require a high
level of multi-agent cooperation. Extensive experimental results on various
combat scenarios demonstrate that the proposed method can learn multi-agent
cooperation effectively. It significantly outperforms many state-of-the-art
demonstration based approaches.
| [
{
"version": "v1",
"created": "Wed, 5 Dec 2018 05:47:43 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Aug 2021 00:49:28 GMT"
}
] | 1,629,417,600,000 | [
[
"Peng",
"Peixi",
""
],
[
"Xing",
"Junliang",
""
]
] |
1812.01893 | Mariam Zouari | Mariam Zouari, Nesrine Baklouti, Javier Sanchez Medina, Mounir Ben
Ayed and Adel M. Alimi | An Evolutionary Hierarchical Interval Type-2 Fuzzy Knowledge
Representation System (EHIT2FKRS) for Travel Route Assignment | 13 pages, 12 Tables, 18 figures, Journal paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Urban Traffic Networks are characterized by high dynamics of traffic flow and
increased travel time, including waiting times. This leads to more complex road
traffic management. The present research paper suggests an innovative advanced
traffic management system based on Hierarchical Interval Type-2 Fuzzy Logic
model optimized by the Particle Swarm Optimization (PSO) method. The aim of
designing this system is to perform dynamic route assignment to relieve traffic
congestion and limit the unexpected fluctuation effects on traffic flow. The
suggested system is executed and simulated using SUMO, a well-known microscopic
traffic simulator. For the present study, we have tested four large and
heterogeneous metropolitan areas located in the cities of Sfax, Luxembourg,
Bologna and Cologne. The experimental results proved the effectiveness of
learning the Hierarchical Interval type-2 Fuzzy logic using real time particle
swarm optimization technique PSO to accomplish multiobjective optimality
regarding two criteria: number of vehicles that reach their destination and
average travel time. The obtained results are encouraging, confirming the
efficiency of the proposed system.
| [
{
"version": "v1",
"created": "Wed, 5 Dec 2018 10:16:00 GMT"
}
] | 1,544,054,400,000 | [
[
"Zouari",
"Mariam",
""
],
[
"Baklouti",
"Nesrine",
""
],
[
"Medina",
"Javier Sanchez",
""
],
[
"Ayed",
"Mounir Ben",
""
],
[
"Alimi",
"Adel M.",
""
]
] |
1812.02217 | John Hooker | John Hooker | Truly Autonomous Machines Are Ethical | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While many see the prospect of autonomous machines as threatening, autonomy
may be exactly what we want in a superintelligent machine. There is a sense of
autonomy, deeply rooted in the ethical literature, in which an autonomous
machine is necessarily an ethical one. Development of the theory underlying
this idea not only reveals the advantages of autonomy, but it sheds light on a
number of issues in the ethics of artificial intelligence. It helps us to
understand what sort of obligations we owe to machines, and what obligations
they owe to us. It clears up the issue of assigning responsibility to machines
or their creators. More generally, a concept of autonomy that is adequate to
both human and artificial intelligence can lead to a more adequate ethical
theory for both.
| [
{
"version": "v1",
"created": "Wed, 5 Dec 2018 20:47:11 GMT"
}
] | 1,544,140,800,000 | [
[
"Hooker",
"John",
""
]
] |
1812.02471 | Giovanni Sileno | Giovanni Sileno, Alexander Boer, Tom van Engers | The Role of Normware in Trustworthy and Explainable AI | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Being potentially destructive, in practice incomprehensible and for the most
part unintelligible, contemporary technology poses serious challenges to our
society. New conception methods are urgently required. Reorganizing ideas and
discussions presented in AI and related fields, this position paper aims to
highlight the importance of normware--that is, computational artifacts
specifying norms--with respect to these issues, and argues for its
irreducibility with respect to software by making explicit its neglected
ecological dimension in the decision-making cycle.
| [
{
"version": "v1",
"created": "Thu, 6 Dec 2018 11:33:00 GMT"
}
] | 1,544,140,800,000 | [
[
"Sileno",
"Giovanni",
""
],
[
"Boer",
"Alexander",
""
],
[
"van Engers",
"Tom",
""
]
] |
1812.02534 | Florentin Smarandache | Florentin Smarandache | Improved Definition of NonStandard Neutrosophic Logic and Introduction
to Neutrosophic Hyperreals | 19 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this third version of our response paper to Imamura's criticism, we recall
that NonStandard Neutrosophic Logic has never been used by the neutrosophic
community in any application, that the quarter-century-old neutrosophic
operators (1995) criticized by Imamura were never utilized since they were
improved shortly afterwards (though he omits to mention their development), and
that in real-world
applications we need to convert/approximate the NonStandard Analysis
hyperreals, monads and binads to tiny intervals with the desired accuracy,
otherwise they would be inapplicable. We point out several errors and false
statements by Imamura with respect to the inf/sup of nonstandard subsets, also
Imamura's 'rigorous definition of neutrosophic logic' is wrong, as is his
definition of the nonstandard unit interval, and we prove that there is not a
total order on the set of hyperreals (because of the newly introduced
Neutrosophic Hyperreals that are indeterminate), whence the transfer principle
is questionable.
| [
{
"version": "v1",
"created": "Sat, 24 Nov 2018 23:25:04 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Feb 2019 17:35:00 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Sep 2022 19:02:47 GMT"
}
] | 1,663,200,000,000 | [
[
"Smarandache",
"Florentin",
""
]
] |
1812.02559 | Bo Shen | Bo Shen, Wei Zhang, Haiyan Zhao, Zhi Jin and Yanhong Wu | Solving Pictorial Jigsaw Puzzle by Stigmergy-inspired Internet-based
Human Collective Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The pictorial jigsaw (PJ) puzzle is a well-known leisure game for humans.
Usually, a PJ puzzle game is played by one or several human players
face-to-face in the physical space. In this paper, we focus on how to solve PJ
puzzles in the cyberspace by a group of physically distributed human players.
We propose an approach to solving PJ puzzle by stigmergy-inspired
Internet-based human collective intelligence. The core of the approach is a
continuously executing loop, named the EIF loop, which consists of three
activities: exploration, integration, and feedback. In exploration, each player
tries to solve the PJ puzzle alone, without direct interactions with other
players. At any time, the result of a player's exploration is a partial
solution to the PJ puzzle, and a set of rejected neighboring relation between
pieces. The results of all players' exploration are integrated in real time
through integration, with the output of a continuously updated collective
opinion graph (COG). And through feedback, each player is provided with
personalized feedback information based on the current COG and the player's
exploration result, in order to accelerate his/her puzzle-solving process.
Exploratory experiments show that: (1) supported by this approach, the time to
solve PJ puzzle is nearly linear to the reciprocal of the number of players,
and shows better scalability to puzzle size than that of face-to-face
collaboration for 10-player groups; (2) for groups with 2 to 10 players, the
puzzle-solving time decreases 31.36%-64.57% on average, compared with the best
single players in the experiments.
| [
{
"version": "v1",
"created": "Wed, 28 Nov 2018 12:07:12 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Dec 2018 08:58:10 GMT"
}
] | 1,544,572,800,000 | [
[
"Shen",
"Bo",
""
],
[
"Zhang",
"Wei",
""
],
[
"Zhao",
"Haiyan",
""
],
[
"Jin",
"Zhi",
""
],
[
"Wu",
"Yanhong",
""
]
] |
1812.02560 | Vincent Conitzer | Vincent Conitzer | Can Artificial Intelligence Do Everything That We Can? | A shorter version appeared as "Natural Intelligence Still Has Its
Advantages" in The Wall Street Journal on August 28, 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, I discuss what AI can and cannot yet do, and the
implications for humanity.
| [
{
"version": "v1",
"created": "Mon, 26 Nov 2018 23:26:36 GMT"
}
] | 1,544,140,800,000 | [
[
"Conitzer",
"Vincent",
""
]
] |
1812.02573 | Osbert Bastani | Osbert Bastani, Xin Zhang, Armando Solar-Lezama | Probabilistic Verification of Fairness Properties via Concentration | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As machine learning systems are increasingly used to make real world legal
and financial decisions, it is of paramount importance that we develop
algorithms to verify that these systems do not discriminate against minorities.
We design a scalable algorithm for verifying fairness specifications. Our
algorithm obtains strong correctness guarantees based on adaptive concentration
inequalities; such inequalities enable our algorithm to adaptively take samples
until it has enough data to make a decision. We implement our algorithm in a
tool called VeriFair, and show that it scales to large machine learning models,
including a deep recurrent neural network that is more than five orders of
magnitude larger than the largest previously-verified neural network. While our
technique only gives probabilistic guarantees due to the use of random samples,
we show that we can choose the probability of error to be extremely small.
| [
{
"version": "v1",
"created": "Sun, 2 Dec 2018 19:54:38 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Dec 2019 17:07:59 GMT"
}
] | 1,577,836,800,000 | [
[
"Bastani",
"Osbert",
""
],
[
"Zhang",
"Xin",
""
],
[
"Solar-Lezama",
"Armando",
""
]
] |
1812.02578 | Daniel Estrada | Daniel Estrada | Conscious enactive computation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper looks at recent debates in the enactivist literature on
computation and consciousness in order to assess major obstacles to building
artificial conscious agents. We consider a proposal from Villalobos and
Dewhurst (2018) for enactive computation on the basis of organizational
closure. We attempt to improve the argument by reflecting on the closed paths
through state space taken by finite state automata. This motivates a defense
against Clark's recent criticisms of "extended consciousness", and perhaps a
new perspective on living with machines.
| [
{
"version": "v1",
"created": "Mon, 3 Dec 2018 17:48:11 GMT"
}
] | 1,544,140,800,000 | [
[
"Estrada",
"Daniel",
""
]
] |
1812.02580 | Debasis Mitra Ph.D. | Debasis Mitra | Selected Qualitative Spatio-temporal Calculi Developed for Constraint
Reasoning: A Review | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article a few of the qualitative spatio-temporal knowledge
representation techniques developed by the constraint reasoning community
within artificial intelligence are reviewed. The objective is to provide a
broad exposure to any other interested group who may utilize these
representations. The author has a particular interest in applying these calculi
(in a broad sense) in topological data analysis, as these schemes are highly
qualitative in nature.
| [
{
"version": "v1",
"created": "Mon, 3 Dec 2018 23:49:37 GMT"
}
] | 1,544,140,800,000 | [
[
"Mitra",
"Debasis",
""
]
] |
1812.02850 | John Foley | John Foley, Emma Tosch, Kaleigh Clary, David Jensen | ToyBox: Better Atari Environments for Testing Reinforcement Learning
Agents | NeurIPS Systems for ML Workshop | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is a widely accepted principle that software without tests has bugs.
Testing reinforcement learning agents is especially difficult because of the
stochastic nature of both agents and environments, the complexity of
state-of-the-art models, and the sequential nature of their predictions.
Recently, the Arcade Learning Environment (ALE) has become one of the most
widely used benchmark suites for deep learning research, and state-of-the-art
Reinforcement Learning (RL) agents have been shown to routinely equal or exceed
human performance on many ALE tasks. Since ALE is based on emulation of
original Atari games, the environment does not provide semantically meaningful
representations of internal game state. This means that ALE has limited utility
as an environment for supporting testing or model introspection. We propose
ToyBox, a collection of reimplementations of these games that solves this
critical problem and enables robust testing of RL agents.
| [
{
"version": "v1",
"created": "Thu, 6 Dec 2018 23:15:41 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Dec 2018 16:58:36 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Jan 2019 16:39:37 GMT"
}
] | 1,548,633,600,000 | [
[
"Foley",
"John",
""
],
[
"Tosch",
"Emma",
""
],
[
"Clary",
"Kaleigh",
""
],
[
"Jensen",
"David",
""
]
] |
1812.02942 | Mieczys{\l}aw K{\l}opotek | Mieczys{\l}aw A. K{\l}opotek, and S{\l}awomir T. Wierzcho\'n | On Marginally Correct Approximations of Dempster-Shafer Belief Functions
from Data | M.A. K{\l}opotek, S.T. Wierzcho\'n: On Marginally Correct
Approximations of Dempster-Shafer Belief Functions from Data. Proc. IPMU'96
(Information Processing and Management of Uncertainty), Grenada (Spain),
Publisher: Universidad de Granada, 1-5 July 1996, Vol II, pp. 769-774 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematical Theory of Evidence (MTE), a foundation for reasoning under
partial ignorance, is blamed for leaving frequencies outside (or aside of) its
framework. The seriousness of this accusation is obvious: no experiment may be
run to compare the performance of MTE-based models of real world processes
against real world data.
In this paper we consider this problem from the point of view of conditioning
in the MTE. We describe the class of belief functions for which marginal
consistency with observed frequencies may be achieved and conditional belief
functions are proper belief functions, and deal with implications for
(marginal) approximation of general belief functions by this class of belief
functions and for inference models in MTE.
| [
{
"version": "v1",
"created": "Fri, 7 Dec 2018 08:33:26 GMT"
}
] | 1,544,400,000,000 | [
[
"Kłopotek",
"Mieczysław A.",
""
],
[
"Wierzchoń",
"Sławomir T.",
""
]
] |
1812.02953 | Han Yu | Han Yu, Zhiqi Shen, Chunyan Miao, Cyril Leung, Victor R. Lesser and
Qiang Yang | Building Ethics into Artificial Intelligence | null | H. Yu, Z. Shen, C. Miao, C. Leung, V. R. Lesser & Q. Yang,
"Building Ethics into Artificial Intelligence," in Proceedings of the 27th
International Joint Conference on Artificial Intelligence (IJCAI'18), pp.
5527-5533, 2018 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As artificial intelligence (AI) systems become increasingly ubiquitous, the
topic of AI governance for ethical decision-making by AI has captured public
imagination. Within the AI research community, this topic remains less familiar
to many researchers. In this paper, we complement existing surveys, which
largely focused on the psychological, social and legal discussions of the
topic, with an analysis of recent advances in technical solutions for AI
governance. By reviewing publications in leading AI conferences including AAAI,
AAMAS, ECAI and IJCAI, we propose a taxonomy which divides the field into four
areas: 1) exploring ethical dilemmas; 2) individual ethical decision
frameworks; 3) collective ethical decision frameworks; and 4) ethics in
human-AI interactions. We highlight the intuitions and key techniques used in
each approach, and discuss promising future research directions towards
successful integration of ethical AI systems into human societies.
| [
{
"version": "v1",
"created": "Fri, 7 Dec 2018 09:18:01 GMT"
}
] | 1,544,400,000,000 | [
[
"Yu",
"Han",
""
],
[
"Shen",
"Zhiqi",
""
],
[
"Miao",
"Chunyan",
""
],
[
"Leung",
"Cyril",
""
],
[
"Lesser",
"Victor R.",
""
],
[
"Yang",
"Qiang",
""
]
] |
1812.03007 | Yin Liang | Zecang Gu, Yin Liang, Zhaoxi Zhang | The Modeling of SDL Aiming at Knowledge Acquisition in Automatic Driving | 12 pages, 6 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In this paper we propose an ultimate theory to solve the multi-target
control problem through its introduction to the machine learning framework in
automatic driving, which explored the implementation of excellent drivers'
knowledge acquisition. Nowadays there exist some core problems that have not
been fully realized by the researchers in automatic driving, such as the
optimal way to control the multi-target objective functions of energy saving,
safe driving, headway distance control and comfort driving, as well as the
resolvability of the networks that automatic driving relied on and the
high-performance chips like GPUs in complex driving environments. To address
these problems, we developed a new theory to map multi-target objective
functions in different spaces into the same one and thus introduced a machine
learning framework of SDL (Super Deep Learning) for optimal multi-target control
based on knowledge acquisition. We will present in this paper the optimal
multi-target control by combining the fuzzy relationship of each multi-target
objective function and the implementation of excellent drivers' knowledge
acquired by machine learning. Theoretically, the impact of this method will
exceed that of the fuzzy control method used in automatic trains.
| [
{
"version": "v1",
"created": "Fri, 7 Dec 2018 12:50:47 GMT"
}
] | 1,544,400,000,000 | [
[
"Gu",
"Zecang",
""
],
[
"Liang",
"Yin",
""
],
[
"Zhang",
"Zhaoxi",
""
]
] |
1812.03075 | Franc Brglez | Franc Brglez | On Uncensored Mean First-Passage-Time Performance Experiments with
Multiwalk in $\mathbb{R}^p$: a New Stochastic Optimization Algorithm | 8 pages, 5 figures. Invited talk, IEEE Proc. 7th Int. Conf. on
Reliability, InfoCom Technologies and Optimization (ICRITO'2018); Aug.
29--31, 2018, Amity University, Noida, India, 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A rigorous empirical comparison of two stochastic solvers is important when
one of the solvers is a prototype of a new algorithm such as multiwalk (MWA).
When searching for global minima in $\mathbb{R}^p$, the key data structures of
MWA include: $p$ rulers with each ruler assigned $m$ marks and a set of $p$
neighborhood matrices of size up to $m(m-2)$, where each entry represents
absolute values of pairwise differences between $m$ marks. Before taking the
next step, a controller links the tableau of neighborhood matrices and computes
new and improved positions for each of the $m$ marks. The number of columns in
each neighborhood matrix is denoted as the neighborhood radius $r_n \le m-2$.
Any variant of the DEA (differential evolution algorithm) has an effective
population neighborhood of radius not larger than 1. Uncensored
first-passage-time performance experiments that vary the neighborhood radius of
a MW-solver can thus be readily compared to existing variants of DE-solvers.
The paper considers seven test cases of increasing complexity and demonstrates,
under uncensored first-passage-time performance experiments: (1) significant
variability in convergence rate for seven DE-based solver configurations, and
(2) consistent, monotonic, and significantly faster rate of convergence for the
MW-solver prototype as we increase the neighborhood radius from 4 to its
maximum value.
| [
{
"version": "v1",
"created": "Thu, 6 Dec 2018 03:31:29 GMT"
}
] | 1,544,400,000,000 | [
[
"Brglez",
"Franc",
""
]
] |
1812.03625 | Song Gao | Shaohua Wang, Song Gao, Xin Feng, Alan T. Murray, Yuan Zeng | A context-based geoprocessing framework for optimizing meetup location
of multiple moving objects along road networks | 34 pages, 8 figures | International Journal of Geographical Information Science, 32(7),
1368-1390 (2018) | 10.1080/13658816.2018.1431838. | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given different types of constraints on human life, people must make
decisions that satisfy social activity needs. Minimizing costs (i.e., distance,
time, or money) associated with travel plays an important role in perceived and
realized social quality of life. Identifying optimal interaction locations on
road networks when there are multiple moving objects (MMO) with space-time
constraints remains a challenge. In this research, we formalize the problem of
finding dynamic ideal interaction locations for MMO as a spatial optimization
model and introduce a context-based geoprocessing heuristic framework to
address this problem. As a proof of concept, a case study involving
identification of a meetup location for multiple people under traffic
conditions is used to validate the proposed geoprocessing framework. Five
heuristic methods with regard to efficient shortest-path search space have been
tested. We find that the R* tree-based algorithm performs the best with high
quality solutions and low computation time. This framework is implemented in a
GIS environment to facilitate integration with external geographic contextual
information, e.g., temporary road barriers, points of interest (POI), and
real-time traffic information, when dynamically searching for ideal meetup
sites. The proposed method can be applied in trip planning, carpooling
services, collaborative interaction, and logistics management.
| [
{
"version": "v1",
"created": "Mon, 10 Dec 2018 05:10:31 GMT"
}
] | 1,544,486,400,000 | [
[
"Wang",
"Shaohua",
""
],
[
"Gao",
"Song",
""
],
[
"Feng",
"Xin",
""
],
[
"Murray",
"Alan T.",
""
],
[
"Zeng",
"Yuan",
""
]
] |
1812.03789 | Sander Beckers | Sander Beckers and Joseph Y. Halpern | Abstracting Causal Models | Appears in AAAI-19 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a sequence of successively more restrictive definitions of
abstraction for causal models, starting with a notion introduced by Rubenstein
et al. (2017) called exact transformation that applies to probabilistic causal
models, moving to a notion of uniform transformation that applies to
deterministic causal models and does not allow differences to be hidden by the
"right" choice of distribution, and then to abstraction, where the
interventions of interest are determined by the map from low-level states to
high-level states, and strong abstraction, which takes more seriously all
potential interventions in a model, not just the allowed interventions. We show
that procedures for combining micro-variables into macro-variables are
instances of our notion of strong abstraction, as are all the examples
considered by Rubenstein et al.
| [
{
"version": "v1",
"created": "Mon, 10 Dec 2018 13:41:42 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Feb 2019 15:46:41 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Jun 2019 12:23:45 GMT"
},
{
"version": "v4",
"created": "Tue, 9 Jul 2019 18:32:39 GMT"
}
] | 1,562,803,200,000 | [
[
"Beckers",
"Sander",
""
],
[
"Halpern",
"Joseph Y.",
""
]
] |
1812.03868 | Naveen Sundar Govindarajulu | Naveen Sundar Govindarajulu, Selmer Bringsjord and Rikhiya Ghosh | Toward the Engineering of Virtuous Machines | To appear in the proceedings of AAAI/ACM Conference on AI, Ethics,
and Society (AIES) 2019 (http://www.aies-conference.com/accepted-papers/).
This subsumes and completes the earlier partial formalization described in
arXiv:1805.07797 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While various traditions under the 'virtue ethics' umbrella have been studied
extensively and advocated by ethicists, it has not been clear that there exists
a version of virtue ethics rigorous enough to be a target for machine ethics
(which we take to include the engineering of an ethical sensibility in a
machine or robot itself, not only the study of ethics in the humans who might
create artificial agents). We begin to address this by presenting an embryonic
formalization of a key part of any virtue-ethics theory: namely, the learning
of virtue by a focus on exemplars of moral virtue. Our work is based in part on
a computational formal logic previously used to formally model other ethical
theories and principles therein, and to implement these models in artificial
agents.
| [
{
"version": "v1",
"created": "Fri, 7 Dec 2018 16:30:20 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Dec 2018 05:37:19 GMT"
}
] | 1,546,300,800,000 | [
[
"Govindarajulu",
"Naveen Sundar",
""
],
[
"Bringsjord",
"Selmer",
""
],
[
"Ghosh",
"Rikhiya",
""
]
] |
1812.04128 | Xingyu Zhao | Xingyu Zhao, Valentin Robu, David Flynn, Fateme Dinmohammadi, Michael
Fisher, Matt Webster | Probabilistic Model Checking of Robots Deployed in Extreme Environments | Version accepted at the 33rd AAAI Conference on Artificial
Intelligence, Honolulu, Hawaii, 2019 | null | 10.1609/aaai.v33i01.33018066 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robots are increasingly used to carry out critical missions in extreme
environments that are hazardous for humans. This requires a high degree of
operational autonomy under uncertain conditions, and poses new challenges for
assuring the robot's safety and reliability. In this paper, we develop a
framework for probabilistic model checking on a layered Markov model to verify
the safety and reliability requirements of such robots, both at pre-mission
stage and during runtime. Two novel estimators based on conservative Bayesian
inference and imprecise probability model with sets of priors are introduced to
learn the unknown transition parameters from operational data. We demonstrate
our approach using data from a real-world deployment of unmanned underwater
vehicles in extreme environments.
| [
{
"version": "v1",
"created": "Mon, 10 Dec 2018 22:11:18 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Dec 2018 14:21:05 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Feb 2019 17:36:56 GMT"
}
] | 1,607,385,600,000 | [
[
"Zhao",
"Xingyu",
""
],
[
"Robu",
"Valentin",
""
],
[
"Flynn",
"David",
""
],
[
"Dinmohammadi",
"Fateme",
""
],
[
"Fisher",
"Michael",
""
],
[
"Webster",
"Matt",
""
]
] |
1812.04608 | Shane Mueller | Robert R. Hoffman, Shane T. Mueller, Gary Klein, Jordan Litman | Metrics for Explainable AI: Challenges and Prospects | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The question addressed in this paper is: If we present to a user an AI system
that explains how it works, how do we know whether the explanation works and
the user has achieved a pragmatic understanding of the AI? In other words, how
do we know that an explainable AI system (XAI) is any good? Our focus is on
the key concepts of measurement. We discuss specific methods for evaluating:
(1) the goodness of explanations, (2) whether users are satisfied by
explanations, (3) how well users understand the AI systems, (4) how curiosity
motivates the search for explanations, (5) whether the user's trust and
reliance on the AI are appropriate, and finally, (6) how the human-XAI work
system performs. The recommendations we present derive from our integration of
extensive research literatures and our own psychometric evaluations.
| [
{
"version": "v1",
"created": "Tue, 11 Dec 2018 18:50:02 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Feb 2019 09:31:08 GMT"
}
] | 1,549,238,400,000 | [
[
"Hoffman",
"Robert R.",
""
],
[
"Mueller",
"Shane T.",
""
],
[
"Klein",
"Gary",
""
],
[
"Litman",
"Jordan",
""
]
] |
1812.04741 | Marija Slavkovik | Beishui Liao, Pere Pardo, Marija Slavkovik, Leendert van der Torre | The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms
and Argumentation | Accepted for publication with JAIR | Journal of Artificial Intelligence Research 77: 737 - 792 (2023) | 10.1613/jair.1.14368 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An autonomous system is constructed by a manufacturer, operates in a society
subject to norms and laws, and interacts with end users. All of these actors
are stakeholders affected by the behavior of the autonomous system. We address
the challenge of how the ethical views of such stakeholders can be integrated
in the behavior of an autonomous system. We propose an ethical recommendation
component called Jiminy which uses techniques from normative systems and formal
argumentation to reach moral agreements among stakeholders. A Jiminy represents
the ethical views of each stakeholder by using normative systems, and has three
ways of resolving moral dilemmas that involve the opinions of the stakeholders.
First, the Jiminy considers how the arguments of the stakeholders relate to one
another, which may already resolve the dilemma. Secondly, the Jiminy combines
the normative systems of the stakeholders such that the combined expertise of
the stakeholders may resolve the dilemma. Thirdly, and only if these two other
methods have failed, the Jiminy uses context-sensitive rules to decide which of
the stakeholders take preference over the others. At the abstract level, these
three methods are characterized by adding arguments, adding attacks between
arguments, and revising attacks between arguments. We show how a Jiminy can be
used not only for ethical reasoning and collaborative decision-making, but also
to provide explanations about ethical behavior.
| [
{
"version": "v1",
"created": "Tue, 11 Dec 2018 23:16:16 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Mar 2019 15:23:15 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Jan 2022 13:16:01 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Apr 2023 10:17:14 GMT"
}
] | 1,689,552,000,000 | [
[
"Liao",
"Beishui",
""
],
[
"Pardo",
"Pere",
""
],
[
"Slavkovik",
"Marija",
""
],
[
"van der Torre",
"Leendert",
""
]
] |
1812.05070 | Ivan Amaya | I. Amaya, J. C. Ortiz-Bayliss, A. Rosales-P\'erez, A. E.
Guti\'errez-Rodr\'iguez, S. E. Conant-Pablos, H. Terashima-Mar\'in, C. A.
Coello Coello | Enhancing Selection Hyper-heuristics via Feature Transformations | Accepted version of the article published in the IEEE Computational
Intelligence Magazine. DOI: 10.1109/MCI.2018.2807018 \copyright 2018 IEEE | IEEE Comput Intell Mag. 2018, 13(2) | 10.1109/MCI.2018.2807018 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyper-heuristics are a novel tool. They deal with complex optimization
problems where standalone solvers exhibit varied performance. Among such tools
reside selection hyper-heuristics. By combining the strengths of each solver,
this kind of hyper-heuristic offers a more robust tool. However, their
effectiveness is highly dependent on the 'features' used to link them with the
problem that is being solved. Aiming at enhancing selection hyper-heuristics,
in this paper we propose two types of transformation: explicit and implicit.
The first one directly changes the distribution of critical points within the
feature domain while using a Euclidean distance to measure proximity. The
second one operates indirectly by preserving the distribution of critical
points but changing the distance metric through a kernel function. We focus on
analyzing the effect of each kind of transformation, and of their combinations.
We test our ideas in the domain of constraint satisfaction problems because of
their popularity and many practical applications. In this work, we compare the
performance of our proposals against those of previously published data.
Furthermore, we expand on previous research by increasing the number of
analyzed features. We found that, by incorporating transformations into the
model of selection hyper-heuristics, overall performance can be improved,
yielding more stable results. However, combining implicit and explicit
transformations was not as fruitful. Additionally, we ran some confirmatory
tests on the domain of knapsack problems. Again, we observed improved
stability, leading to the generation of hyper-heuristics whose profit had a
standard deviation between 20% and 30% smaller.
| [
{
"version": "v1",
"created": "Wed, 12 Dec 2018 18:14:06 GMT"
}
] | 1,544,659,200,000 | [
[
"Amaya",
"I.",
""
],
[
"Ortiz-Bayliss",
"J. C.",
""
],
[
"Rosales-Pérez",
"A.",
""
],
[
"Gutiérrez-Rodríguez",
"A. E.",
""
],
[
"Conant-Pablos",
"S. E.",
""
],
[
"Terashima-Marín",
"H.",
""
],
[
"Coello",
"C. A. Coello",
""
]
] |
1812.05362 | Beishui Liao | Beishui Liao, Michael Anderson and Susan Leigh Anderson | Representation, Justification and Explanation in a Value Driven Agent:
An Argumentation-Based Approach | 24 pages, 6 figures, submitted to JASSS | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ethical and explainable artificial intelligence is an interdisciplinary
research area involving computer science, philosophy, logic, the social
sciences, etc. For an ethical autonomous system, the ability to justify and
explain its decision making is a crucial aspect of transparency and
trustworthiness. This paper takes a Value Driven Agent (VDA) as an example,
explicitly representing implicit knowledge of a machine learning-based
autonomous agent and using this formalism to justify and explain the decisions
of the agent. For this purpose, we introduce a novel formalism to describe the
intrinsic knowledge and solutions of a VDA in each situation. Based on this
formalism, we formulate an approach to justify and explain the decision-making
process of a VDA, in terms of a typical argumentation formalism,
Assumption-based Argumentation (ABA). As a result, a VDA in a given situation
is mapped onto an argumentation framework in which arguments are defined by the
notion of deduction. Justified actions with respect to semantics from
argumentation correspond to solutions of the VDA. The acceptance (rejection) of
arguments and their premises in the framework provides an explanation for why
an action was selected (or not). Furthermore, we go beyond the existing version
of VDA, considering not only practical reasoning, but also epistemic reasoning,
such that the inconsistency of knowledge of the VDA can be identified, handled
and explained.
| [
{
"version": "v1",
"created": "Thu, 13 Dec 2018 11:04:24 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Oct 2019 09:11:34 GMT"
}
] | 1,571,702,400,000 | [
[
"Liao",
"Beishui",
""
],
[
"Anderson",
"Michael",
""
],
[
"Anderson",
"Susan Leigh",
""
]
] |
1812.05794 | Bo Zhang | Bo Zhang, Bin Chen, Jin-lin Peng | The Entropy of Artificial Intelligence and a Case Study of AlphaZero
from Shannon's Perspective | 8 pages, 4 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | The recently released AlphaZero algorithm achieves superhuman performance in
the games of chess, shogi and Go, which raises two open questions. Firstly, as
there is a finite number of possibilities in the game, is there a quantifiable
intelligence measurement for evaluating intelligent systems, e.g. AlphaZero?
Secondly, AlphaZero introduces sophisticated reinforcement learning and
self-play to efficiently encode the possible states, is there a simple
information-theoretic model to represent the learning process and offer more
insights in fostering strong AI systems?
This paper explores the above two questions by proposing a simple variant of
Shannon's communication model: the concept of intelligence entropy and the
Unified Intelligence-Communication Model are introduced, which provide an
information-theoretic metric for investigating the intelligence level and also
provide a bound for intelligent agents in the form of Shannon's capacity,
namely, the intelligence capacity. This paper then applies the concept and
model to AlphaZero as a case study and explains the learning process of
intelligent agent as turbo-like iterative decoding, so that the learning
performance of AlphaZero may be quantitatively evaluated. Finally, conclusions
are provided along with theoretical and practical remarks.
| [
{
"version": "v1",
"created": "Fri, 14 Dec 2018 06:06:29 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Dec 2018 08:49:34 GMT"
}
] | 1,545,091,200,000 | [
[
"Zhang",
"Bo",
""
],
[
"Chen",
"Bin",
""
],
[
"Peng",
"Jin-lin",
""
]
] |
1812.05795 | Tatsuji Takahashi | Akihiro Tamatsukuri and Tatsuji Takahashi | Guaranteed satisficing and finite regret: Analysis of a cognitive
satisficing value function | 16 pages, 3 figures, supplementary information (A, B, and C) included | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As reinforcement learning algorithms are being applied to increasingly
complicated and realistic tasks, it is becoming increasingly difficult to solve
such problems within a practical time frame. Hence, we focus on a
\textit{satisficing} strategy that looks for an action whose value is above the
aspiration level (analogous to the break-even point), rather than the optimal
action. In this paper, we introduce a simple mathematical model called
risk-sensitive satisficing ($RS$) that implements a satisficing strategy by
integrating risk-averse and risk-prone attitudes under the greedy policy. We
apply the proposed model to the $K$-armed bandit problems, which constitute the
most basic class of reinforcement learning tasks, and prove two propositions.
The first is that $RS$ is guaranteed to find an action whose value is above the
aspiration level. The second is that the regret (expected loss) of $RS$ is
upper bounded by a finite value, given that the aspiration level is set to an
"optimal level" so that satisficing implies optimizing. We confirm the results
through numerical simulations and compare the performance of $RS$ with that of
other representative algorithms for the $K$-armed bandit problems.
| [
{
"version": "v1",
"created": "Fri, 14 Dec 2018 06:26:50 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Feb 2019 11:11:14 GMT"
}
] | 1,551,139,200,000 | [
[
"Tamatsukuri",
"Akihiro",
""
],
[
"Takahashi",
"Tatsuji",
""
]
] |
1812.06015 | C. Maria Keet | C. Maria Keet and Kieren Davies and Agnieszka Lawrynowicz | More Effective Ontology Authoring with Test-Driven Development | 16 pages, 7 figures, extended tech report of ESWC17 demo paper and
extended version of a preprint of an article submitted for consideration in
IJAIT | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontology authoring is a complex process, where commonly the automated
reasoner is invoked for verification of newly introduced changes, therewith
amounting to a time-consuming test-last approach. Test-Driven Development (TDD)
for ontology authoring is a recent {\em test-first} approach that aims to
reduce authoring time and increase authoring efficiency. Current TDD testing
falls short on coverage of OWL features and possible test outcomes, the
rigorous foundation thereof, and evaluations to ascertain its effectiveness.
We aim to address these issues in one instantiation of TDD for ontology
authoring. We first propose a succinct, logic-based model of TDD testing and
present novel TDD algorithms so as to cover also any OWL 2 class expression for
the TBox and for the principal ABox assertions, and prove their correctness.
The algorithms use methods from the OWL API directly such that reclassification
is not necessary for test execution, therewith reducing ontology authoring
time. The algorithms were implemented in TDDonto2, a Prot\'eg\'e plugin.
TDDonto2 was evaluated on editing efficiency and by users. The editing
efficiency study demonstrated that it is faster than a typical ontology
authoring interface, especially for medium size and large ontologies. The user
evaluation demonstrated that modellers make significantly less errors with
TDDonto2 compared to the standard Prot\'eg\'e interface and complete their
tasks better using less time. Thus, the results indicate that Test-Driven
Development is a promising approach in an ontology development methodology.
| [
{
"version": "v1",
"created": "Fri, 14 Dec 2018 16:47:03 GMT"
}
] | 1,545,004,800,000 | [
[
"Keet",
"C. Maria",
""
],
[
"Davies",
"Kieren",
""
],
[
"Lawrynowicz",
"Agnieszka",
""
]
] |
1812.06028 | Mieczys{\l}aw K{\l}opotek | Andrzej Matuszewski, Mieczys{\l}aw A. K{\l}opotek | Factorization of Dempster-Shafer Belief Functions Based on Data | 15 pages | null | null | IPI PAN Report 798 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One important obstacle in applying Dempster-Shafer Theory (DST) is its
relationship to frequencies. In particular, there exist serious difficulties in
finding factorizations of belief functions from data.
In probability theory factorizations are usually related to the notion of
(conditional) independence and their possibility tested accordingly. However,
in DST conditional belief distributions prove to be non-proper belief functions
(that is, ones connected with negative "frequencies"). This makes statistical
testing of potential conditional independencies practically impossible, as no
coherent interpretation could be found so far for negative belief function
values.
In this paper a novel attempt is made to overcome this difficulty. In the
proposal no conditional beliefs are calculated, but instead a new measure F is
introduced within the framework of DST, closely related to conditional
independence, allowing to apply conventional statistical tests for detection of
dependence/independence.
| [
{
"version": "v1",
"created": "Fri, 14 Dec 2018 17:05:59 GMT"
}
] | 1,545,004,800,000 | [
[
"Matuszewski",
"Andrzej",
""
],
[
"Kłopotek",
"Mieczysław A.",
""
]
] |
1812.06510 | Tshilidzi Marwala | Tshilidzi Marwala | The limit of artificial intelligence: Can machines be rational? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the question on whether machines can be rational. It
observes the existing reasons why humans are not rational, which is due to
imperfect and limited information, limited and inconsistent processing power
through the brain and the inability to optimize decisions and achieve maximum
utility. It studies whether these limitations of humans are transferred to the
limitations of machines. The conclusion reached is that even though machines
are not rational, advances in technological developments make these machines
more rational. It also concludes that machines can be more rational than
humans.
| [
{
"version": "v1",
"created": "Sun, 16 Dec 2018 17:57:16 GMT"
}
] | 1,545,091,200,000 | [
[
"Marwala",
"Tshilidzi",
""
]
] |
1812.06873 | Boris Chidlovskii | Giorgio Giannone and Boris Chidlovskii | Learning Common Representation from RGB and Depth Images | 7 pages, 3 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new deep learning architecture for the tasks of semantic
segmentation and depth prediction from RGB-D images. We revise the state of the art
based on the RGB and depth feature fusion, where both modalities are assumed to
be available at train and test time. We propose a new architecture where the
feature fusion is replaced with a common deep representation. Combined with an
encoder-decoder type of the network, the architecture can jointly learn models
for semantic segmentation and depth estimation based on their common
representation. This representation, inspired by multi-view learning, offers
several important advantages, such as using one modality available at test time
to reconstruct the missing modality. In the RGB-D case, this enables the
cross-modality scenarios, such as using depth data for semantic
segmentation and the RGB images for depth estimation. We demonstrate the
effectiveness of the proposed network on two publicly available RGB-D datasets.
The experimental results show that the proposed method works well in both
semantic segmentation and depth estimation tasks.
| [
{
"version": "v1",
"created": "Mon, 17 Dec 2018 16:22:47 GMT"
}
] | 1,545,091,200,000 | [
[
"Giannone",
"Giorgio",
""
],
[
"Chidlovskii",
"Boris",
""
]
] |
1812.07297 | Peng Peng | Peng Peng, Liang Pang, Yufeng Yuan, Chao Gao | Continual Match Based Training in Pommerman: Technical Report | 8 pages, 7 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Continual learning is the ability of agents to improve their capacities
throughout multiple tasks continually. While recent works in the literature of
continual learning mostly focused on developing either particular loss
functions or specialized neural network structures explaining the episodic
memory or neural plasticity, we study continual learning from the perspective
of the training mechanism. Specifically, we propose a COntinual Match BAsed
Training (COMBAT) framework for training a population of advantage-actor-critic
(A2C) agents in Pommerman, a partially observable multi-agent environment with
no communication. Following the COMBAT framework, we trained an agent, namely,
Navocado, that won the title of the top 1 learning agent in the NeurIPS 2018
Pommerman Competition. Two critical features of our agent are worth mentioning.
Firstly, our agent did not learn from any demonstrations. Secondly, our agent
is highly reproducible. As a technical report, we articulate the design of
state space, action space, reward, and most importantly, the COMBAT framework
for our Pommerman agent. We show in the experiments that Pommerman is a perfect
environment for studying continual learning, and the agent can improve its
performance by continually learning new skills without forgetting the old ones.
Finally, the result in the Pommerman Competition verifies the robustness of our
agent when competing with various opponents.
| [
{
"version": "v1",
"created": "Tue, 18 Dec 2018 11:08:31 GMT"
}
] | 1,545,177,600,000 | [
[
"Peng",
"Peng",
""
],
[
"Pang",
"Liang",
""
],
[
"Yuan",
"Yufeng",
""
],
[
"Gao",
"Chao",
""
]
] |
1812.08390 | Giora Alexandron | Tanya Nazaretsky and Sara Hershkovitz and Giora Alexandron | Kappa Learning: A New Method for Measuring Similarity Between
Educational Items Using Performance Data | 9 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequencing items in adaptive learning systems typically relies on a large
pool of interactive assessment items (questions) that are analyzed into a
hierarchy of skills or Knowledge Components (KCs). Educational data mining
techniques can be used to analyze students' performance data in order to
optimize the mapping of items to KCs. Standard methods that map items into KCs
using item-similarity measures make the implicit assumption that students'
performance on items that depend on the same skill should be similar. This
assumption holds if the latent trait (mastery of the underlying skill) is
relatively fixed during students' activity, as in the context of testing, which
is the primary context in which these measures were developed and applied.
However, in adaptive learning systems that aim for learning, and address
subject matters such as K6 Math that consist of multiple sub-skills, this
assumption does not hold. In this paper we propose a new item-similarity
measure, termed Kappa Learning (KL), which aims to address this gap. KL
identifies similarity between items under the assumption of learning, namely,
that learners' mastery of the underlying skills changes as they progress through
the items. We evaluate Kappa Learning on data from a computerized tutor that
teaches Fractions for 4th grade, with experts tagging as ground truth, and on
simulated data. Our results show that clustering that is based on Kappa
Learning outperforms clustering that is based on commonly used similarity
measures (Cohen Kappa, Yule, and Pearson).
| [
{
"version": "v1",
"created": "Thu, 20 Dec 2018 07:12:45 GMT"
}
] | 1,545,350,400,000 | [
[
"Nazaretsky",
"Tanya",
""
],
[
"Hershkovitz",
"Sara",
""
],
[
"Alexandron",
"Giora",
""
]
] |
1812.08586 | Zhonghua Han | Zhonghua Han, Quan Zhang, Haibo Shi, Yuanwei Qi, Liangliang Sun | Research on Limited Buffer Scheduling Problems in Flexible Flow Shops
with Setup Times | Accepted for publication by International Journal of Modelling,
Identification and Control (IJMIC) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to solve the limited buffer scheduling problems in flexible flow
shops with setup times, this paper proposes an improved whale optimization
algorithm (IWOA) as a global optimization algorithm. Firstly, this paper
presents a mathematical programming model for limited buffer in flexible flow
shops with setup times, and applies the IWOA algorithm as the global
optimization algorithm. Based on the whale optimization algorithm (WOA), the
improved algorithm uses Levy flight, opposition-based learning strategy and
simulated annealing to expand the search range, enhance the ability to jump
out of local extrema, and improve the continuous evolution of the algorithm.
To verify the improvement of the proposed algorithm over the optimization ability
of the standard WOA algorithm, the IWOA algorithm is tested by verification
examples of small-scale and large-scale flexible flow shop scheduling problems,
and the imperialist competitive algorithm (ICA), bat algorithm (BA), and whale
optimization algorithm (WOA) are used for comparison. Based on the instance
data of a bus manufacturer, simulation tests are made on the four algorithms
under various practical evaluation scenarios. The simulation results show
that the IWOA algorithm can better solve this type of limited buffer scheduling
problem in flexible flow shops with setup times compared with the state of the
art algorithms.
| [
{
"version": "v1",
"created": "Fri, 7 Dec 2018 18:02:42 GMT"
}
] | 1,545,350,400,000 | [
[
"Han",
"Zhonghua",
""
],
[
"Zhang",
"Quan",
""
],
[
"Shi",
"Haibo",
""
],
[
"Qi",
"Yuanwei",
""
],
[
"Sun",
"Liangliang",
""
]
] |
1812.08597 | Prashan Madumal | Prashan Madumal, Ronal Singh, Joshua Newn, Frank Vetere | Interaction Design for Explainable AI: Workshop Proceedings | Workshop proceedings of Interaction Design for Explainable AI
workshop held at OzCHI 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As artificial intelligence (AI) systems become increasingly complex and
ubiquitous, these systems will be responsible for making decisions that
directly affect individuals and society as a whole. Such decisions will need to
be justified due to ethical concerns as well as trust, but achieving this has
become difficult due to the `black-box' nature many AI models have adopted.
Explainable AI (XAI) can potentially address this problem by explaining the
actions, decisions and behaviours of the system to users. However, much
research in XAI is done in a vacuum using only the researchers' intuition of
what constitutes a `good' explanation while ignoring the interaction and the
human aspect. This workshop invites researchers in the HCI community and
related fields to have a discourse about human-centred approaches to XAI rooted
in interaction and to shed light and spark discussion on interaction design
challenges in XAI.
| [
{
"version": "v1",
"created": "Thu, 13 Dec 2018 12:45:26 GMT"
}
] | 1,545,350,400,000 | [
[
"Madumal",
"Prashan",
""
],
[
"Singh",
"Ronal",
""
],
[
"Newn",
"Joshua",
""
],
[
"Vetere",
"Frank",
""
]
] |
1812.08960 | Hussein Abbass A | Hussein Abbass, John Harvey, Kate Yaxley | Lifelong Testing of Smart Autonomous Systems by Shepherding a Swarm of
Watchdog Artificial Intelligence Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) technologies could be broadly categorised into
Analytics and Autonomy. Analytics focuses on algorithms offering perception,
comprehension, and projection of knowledge gleaned from sensorial data.
Autonomy revolves around decision making, and influencing and shaping the
environment through action production. A smart autonomous system (SAS) combines
analytics and autonomy to understand, learn, decide and act autonomously. To be
useful, SAS must be trusted and that requires testing. Lifelong learning of a
SAS compounds the testing process. In the remote chance that it is possible to
fully test and certify the system pre-release, which is theoretically an
undecidable problem, it is near impossible to predict the future behaviours
that these systems, alone or collectively, will exhibit. While it may be
feasible to severely restrict such systems\textquoteright \ learning abilities
to limit the potential unpredictability of their behaviours, an undesirable
consequence may be severely limiting their utility. In this paper, we propose
the architecture for a watchdog AI (WAI) agent dedicated to lifelong functional
testing of SAS. We further propose system specifications including a level of
abstraction whereby humans shepherd a swarm of WAI agents to oversee an
ecosystem made of humans and SAS. The discussion extends to the challenges,
pros, and cons of the proposed concept.
| [
{
"version": "v1",
"created": "Fri, 21 Dec 2018 05:53:47 GMT"
}
] | 1,545,609,600,000 | [
[
"Abbass",
"Hussein",
""
],
[
"Harvey",
"John",
""
],
[
"Yaxley",
"Kate",
""
]
] |
1812.09044 | A. Adhikari | Ajaya Adhikari, D.M.J Tax, Riccardo Satta, Matthias Fath | LEAFAGE: Example-based and Feature importance-based Explanations for
Black-box ML models | Submitted to the 2019 Fuzz-IEEE conference (special session on
Advances on eXplainable Artificial Intelligence) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | As machine learning models become more accurate, they typically become more
complex and uninterpretable by humans. The black-box character of these models
holds back its acceptance in practice, especially in high-risk domains where
the consequences of failure could be catastrophic such as health-care or
defense. Providing understandable and useful explanations behind ML models or
predictions can increase the trust of the user. Example-based reasoning, which
entails leveraging previous experience with analogous tasks to make a decision,
is a well known strategy for problem solving and justification. This work
presents a new explanation extraction method called LEAFAGE, for a prediction
made by any black-box ML model. The explanation consists of the visualization
of similar examples from the training set and the importance of each feature.
Moreover, these explanations are contrastive which aims to take the
expectations of the user into account. LEAFAGE is evaluated in terms of
fidelity to the underlying black-box model and usefulness to the user. The
results showed that LEAFAGE performs overall better than the current
state-of-the-art method LIME in terms of fidelity, on ML models with non-linear
decision boundary. A user-study was conducted which focused on revealing the
differences between example-based and feature importance-based explanations. It
showed that example-based explanations performed significantly better than
feature importance-based explanation, in terms of perceived transparency,
information sufficiency, competence and confidence. Counter-intuitively, when
the gained knowledge of the participants was tested, it showed that they
learned less about the black-box model after seeing a feature importance-based
explanation than after seeing no explanation at all. The participants found
feature importance-based explanations vague and hard to generalize to other
instances.
| [
{
"version": "v1",
"created": "Fri, 21 Dec 2018 11:02:09 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Jan 2019 17:19:06 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Aug 2019 09:59:57 GMT"
}
] | 1,566,172,800,000 | [
[
"Adhikari",
"Ajaya",
""
],
[
"Tax",
"D. M. J",
""
],
[
"Satta",
"Riccardo",
""
],
[
"Fath",
"Matthias",
""
]
] |
1812.09086 | Mieczys{\l}aw K{\l}opotek | S.T. Wierzcho\'n and M.A. K{\l}opotek and M. Michalewicz | Reasoning and Facts Explanation in Valuation Based Systems | 12 pages | Fundamenta Informaticae 30(3/4)1997, pp. 359-371 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the literature, the optimization problem to identify a set of composite
hypotheses H, which will yield the $k$ largest $P(H|S_e)$, where a composite
hypothesis is an instantiation of all the nodes in the network except the
evidence nodes \cite{KSy:93} is of significant interest. This problem is called
"finding the $k$ Most Plausible Explanation (MPE) of a given evidence $S_e$ in
a Bayesian belief network".
The problem of finding $k$ most probable hypotheses is generally NP-hard
\cite{Cooper:90}. Therefore in the past various simplifications of the task by
restricting $k$ (to 1 or 2), restricting the structure (e.g. to singly
connected networks), or shifting the complexity to spatial domain have been
investigated.
A genetic algorithm is proposed in this paper to overcome some of these
restrictions. Stepping out from the probabilistic domain onto the general
Valuation Based System (VBS) framework is also proposed, by generalizing the
genetic algorithm approach to the realm of Dempster-Shafer belief calculus.
| [
{
"version": "v1",
"created": "Fri, 21 Dec 2018 12:41:00 GMT"
}
] | 1,545,609,600,000 | [
[
"Wierzchoń",
"S. T.",
""
],
[
"Kłopotek",
"M. A.",
""
],
[
"Michalewicz",
"M.",
""
]
] |
1812.09207 | Tias Guns | Tias Guns, Peter J. Stuckey and Guido Tack | Solution Dominance over Constraint Satisfaction Problems | Presented at the ModRef18 workshop at CP18 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constraint Satisfaction Problems (CSPs) typically have many solutions that
satisfy all constraints. Often though, some solutions are preferred over
others, that is, some solutions dominate other solutions. We present solution
dominance as a formal framework to reason about such settings. We define
Constraint Dominance Problems (CDPs) as CSPs with a dominance relation, that
is, a preorder over the solutions of the CSP. This framework captures many
well-known variants of constraint satisfaction, including optimization,
multi-objective optimization, Max-CSP, minimal models, minimum correction
subsets as well as optimization over CP-nets and arbitrary dominance relations.
We extend MiniZinc, a declarative language for modeling CSPs, to CDPs by
introducing dominance nogoods; these can be derived from dominance relations in
a principled way. A generic method for solving arbitrary CDPs incrementally
calls a CSP solver and is compatible with any existing solver that supports
MiniZinc. This encourages experimenting with different solution dominance
relations for a problem, as well as comparing different solvers without having
to modify their implementations.
| [
{
"version": "v1",
"created": "Fri, 21 Dec 2018 15:54:34 GMT"
}
] | 1,545,609,600,000 | [
[
"Guns",
"Tias",
""
],
[
"Stuckey",
"Peter J.",
""
],
[
"Tack",
"Guido",
""
]
] |
1812.09351 | Quang Minh Ha | Quang Minh Ha, Yves Deville, Quang Dung Pham, Minh Ho\`ang H\`a | A Hybrid Genetic Algorithm for the Traveling Salesman Problem with Drone | Technical Report. 34 pages, 5 figures | null | 10.1007/s10732-019-09431-y | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the Traveling Salesman Problem with Drone (TSP-D), in
which a truck and drone are used to deliver parcels to customers. The objective
of this problem is to either minimize the total operational cost (min-cost
TSP-D) or minimize the completion time for the truck and drone (min-time
TSP-D). This problem has gained a lot of attention in the last few years since
it is matched with the recent trends in a new delivery method among logistics
companies. To solve the TSP-D, we propose a hybrid genetic search with dynamic
population management and adaptive diversity control based on a split
algorithm, problem-tailored crossover and local search operators, a new restore
method to advance the convergence and an adaptive penalization mechanism to
dynamically balance the search between feasible/infeasible solutions. The
computational results show that the proposed algorithm outperforms existing
methods in terms of solution quality and improves best known solutions found in
the literature. Moreover, various analyses on the impacts of crossover choice
and heuristic components have been conducted to further analyze their
sensitivity to the performance of our method.
| [
{
"version": "v1",
"created": "Fri, 21 Dec 2018 19:42:56 GMT"
}
] | 1,574,208,000,000 | [
[
"Ha",
"Quang Minh",
""
],
[
"Deville",
"Yves",
""
],
[
"Pham",
"Quang Dung",
""
],
[
"Hà",
"Minh Hoàng",
""
]
] |
1812.09376 | Ravi Pandya | Ravi Pandya, Sandy H. Huang, Dylan Hadfield-Menell, Anca D. Dragan | Human-AI Learning Performance in Multi-Armed Bandits | Artificial Intelligence, Ethics and Society (AIES) 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People frequently face challenging decision-making problems in which outcomes
are uncertain or unknown. Artificial intelligence (AI) algorithms exist that
can outperform humans at learning such tasks. Thus, there is an opportunity for
AI agents to assist people in learning these tasks more effectively. In this
work, we use a multi-armed bandit as a controlled setting in which to explore
this direction. We pair humans with a selection of agents and observe how well
each human-agent team performs. We find that team performance can beat both
human and agent performance in isolation. Interestingly, we also find that an
agent's performance in isolation does not necessarily correlate with the
human-agent team's performance. A drop in agent performance can lead to a
disproportionately large drop in team performance, or in some settings can even
improve team performance. Pairing a human with an agent that performs slightly
better than them can make them perform much better, while pairing them with an
agent that performs the same can make them perform much worse. Further,
our results suggest that people have different exploration strategies and might
perform better with agents that match their strategy. Overall, optimizing
human-agent team performance requires going beyond optimizing agent
performance, to understanding how the agent's suggestions will influence human
decision-making.
| [
{
"version": "v1",
"created": "Fri, 21 Dec 2018 21:28:11 GMT"
}
] | 1,545,868,800,000 | [
[
"Pandya",
"Ravi",
""
],
[
"Huang",
"Sandy H.",
""
],
[
"Hadfield-Menell",
"Dylan",
""
],
[
"Dragan",
"Anca D.",
""
]
] |
1812.09421 | Zunjing Wang | Xiao-Feng Xie and Zun-Jing Wang | Exploiting Problem Structure in Combinatorial Landscapes: A Case Study
on Pure Mathematics Application | 7 pages, 2 figures, conference | International Joint Conference on Artificial Intelligence, New
York, 2016, pp.2683-2689 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, we present a method using AI techniques to solve a case of
pure mathematics applications for finding narrow admissible tuples. The
original problem is formulated into a combinatorial optimization problem. In
particular, we show how to exploit the local search structure to formulate the
problem landscape for dramatic reductions in search space and for non-trivial
elimination in search barriers, and then to realize intelligent search
strategies for effectively escaping from local minima. Experimental results
demonstrate that the proposed method is able to efficiently find best known
solutions. This research sheds light on exploiting the local problem structure
for an efficient search in combinatorial landscapes as an application of AI to
a new problem domain.
| [
{
"version": "v1",
"created": "Sat, 22 Dec 2018 00:33:59 GMT"
}
] | 1,545,868,800,000 | [
[
"Xie",
"Xiao-Feng",
""
],
[
"Wang",
"Zun-Jing",
""
]
] |
1812.09521 | Jacob Menashe | Jacob Menashe and Peter Stone | Escape Room: A Configurable Testbed for Hierarchical Reinforcement
Learning | 24 pages, 4 image figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent successes in Reinforcement Learning have encouraged a fast-growing
network of RL researchers and a number of breakthroughs in RL research. As the
RL community and the body of RL work grows, so does the need for widely
applicable benchmarks that can fairly and effectively evaluate a variety of RL
algorithms.
This need is particularly apparent in the realm of Hierarchical Reinforcement
Learning (HRL). While many existing test domains may exhibit hierarchical
action or state structures, modern RL algorithms still exhibit great difficulty
in solving domains that necessitate hierarchical modeling and action planning,
even when such domains are seemingly trivial. These difficulties highlight both
the need for more focus on HRL algorithms themselves, and the need for new
testbeds that will encourage and validate HRL research.
Existing HRL testbeds exhibit a Goldilocks problem; they are often either too
simple (e.g. Taxi) or too complex (e.g. Montezuma's Revenge from the Arcade
Learning Environment). In this paper we present the Escape Room Domain (ERD), a
new flexible, scalable, and fully implemented testing domain for HRL that
bridges the "moderate complexity" gap left behind by existing alternatives. ERD
is open-source and freely available through GitHub, and conforms to widely-used
public testing interfaces for simple integration and testing with a variety of
public RL agent implementations. We show that the ERD presents a suite of
challenges with scalable difficulty to provide a smooth learning gradient from
Taxi to the Arcade Learning Environment.
| [
{
"version": "v1",
"created": "Sat, 22 Dec 2018 12:29:20 GMT"
}
] | 1,545,868,800,000 | [
[
"Menashe",
"Jacob",
""
],
[
"Stone",
"Peter",
""
]
] |
1812.10097 | Morteza Haghir Chehreghani | Yuxin Chen and Morteza Haghir Chehreghani | Trip Prediction by Leveraging Trip Histories from Neighboring Users | This work is published by IEEE (ITSC) | IEEE 25th International Intelligent Transportation Systems
Conference (IEEE ITSC), pp. 967-973, 2022 | 10.1109/ITSC55140.2022.9922430 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel approach for trip prediction by analyzing user's trip
histories. We augment users' (self-) trip histories by adding 'similar' trips
from other users, which could be informative and useful for predicting future
trips for a given user. This also helps to cope with noisy or sparse trip
histories, where the self-history by itself does not provide a reliable
prediction of future trips. We show empirical evidence that by enriching the
users' trip histories with additional trips, one can improve the prediction
error by 15%-40%, evaluated on multiple subsets of the Nancy2012 dataset. This
real-world dataset is collected from public transportation ticket validations
in the city of Nancy, France. Our prediction tool is a central component of a
trip simulator system designed to analyze the functionality of public
transportation in the city of Nancy.
| [
{
"version": "v1",
"created": "Tue, 25 Dec 2018 12:37:32 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 11:04:32 GMT"
},
{
"version": "v3",
"created": "Thu, 29 Dec 2022 18:00:45 GMT"
}
] | 1,672,617,600,000 | [
[
"Chen",
"Yuxin",
""
],
[
"Chehreghani",
"Morteza Haghir",
""
]
] |
1812.10144 | Tshilidzi Marwala | Tshilidzi Marwala | Can rationality be measured? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies whether rationality can be computed. Rationality is
defined as the use of complete information, which is processed with a perfect
biological or physical brain, in an optimized fashion. To compute rationality
one needs to quantify how complete is the information, how perfect is the
physical or biological brain and how optimized is the entire decision making
system. The rationality of a model (i.e. physical or biological brain) is
measured by the expected accuracy of the model. The rationality of the
optimization procedure is measured as the ratio of the achieved objective (i.e.
utility) to the global objective. The overall rationality of a decision is
measured as the product of the rationality of the model and the rationality of
the optimization procedure. The conclusion reached is that rationality can be
computed for convex optimization problems.
| [
{
"version": "v1",
"created": "Tue, 25 Dec 2018 17:52:39 GMT"
}
] | 1,545,868,800,000 | [
[
"Marwala",
"Tshilidzi",
""
]
] |
1812.10607 | Ken Li | Hui Li, Kailiang Hu, Zhibang Ge, Tao Jiang, Yuan Qi, Le Song | Double Neural Counterfactual Regret Minimization | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Counterfactual Regret Minimization (CFR) is a fundamental and effective
technique for solving Imperfect Information Games (IIG). However, the original
CFR algorithm only works for discrete state and action spaces, and the
resulting strategy is maintained as a tabular representation. Such tabular
representation limits the method from being directly applied to large games and
continuing to improve from a poor strategy profile. In this paper, we propose a
double neural representation for the imperfect information games, where one
neural network represents the cumulative regret, and the other represents the
average strategy. Furthermore, we adopt the counterfactual regret minimization
algorithm to optimize this double neural representation. To make neural
learning efficient, we also developed several novel techniques including a
robust sampling method, mini-batch Monte Carlo Counterfactual Regret
Minimization (MCCFR) and Monte Carlo Counterfactual Regret Minimization Plus
(MCCFR+) which may be of independent interests. Experimentally, we demonstrate
that the proposed double neural algorithm converges significantly better than
the reinforcement learning counterpart.
| [
{
"version": "v1",
"created": "Thu, 27 Dec 2018 03:31:33 GMT"
}
] | 1,546,214,400,000 | [
[
"Li",
"Hui",
""
],
[
"Hu",
"Kailiang",
""
],
[
"Ge",
"Zhibang",
""
],
[
"Jiang",
"Tao",
""
],
[
"Qi",
"Yuan",
""
],
[
"Song",
"Le",
""
]
] |
1812.10851 | Pavel Surynek | Pavel Surynek | A Summary of Adaptation of Techniques from Search-based Optimal
Multi-Agent Path Finding Solvers to Compilation-based Approach | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the multi-agent path finding problem (MAPF) we are given a set of agents
each with respective start and goal positions. The task is to find paths for
all agents while avoiding collisions aiming to minimize an objective function.
Two such common objective functions are the sum-of-costs and the makespan. Many
optimal solvers were introduced in the past decade - two prominent categories
of solvers can be distinguished: search-based solvers and compilation-based
solvers.
Search-based solvers were developed and tested for the sum-of-costs objective
while the most prominent compilation-based solvers that are built around
Boolean satisfiability (SAT) were designed for the makespan objective. Very
little was known on the performance and relevance of the compilation-based
approach on the sum-of-costs objective. In this paper we show how to close the
gap between these cost functions in the compilation-based approach. Moreover we
study applicability of various techniques developed for search-based solvers in
the compilation-based approach.
A part of this paper introduces a SAT-solver that is directly aimed at solving
the sum-of-costs objective function. Using both a lower bound on the
sum-of-costs and an upper bound on the makespan, we are able to have a
reasonable number of variables in our SAT encoding. We then further improve the
encoding by borrowing ideas from ICTS, a search-based solver. Experimental
evaluation on several domains shows that there are many scenarios where our new
SAT-based methods outperform the best variants of previous sum-of-costs search
solvers - the ICTS, CBS, and ICBS algorithms.
| [
{
"version": "v1",
"created": "Fri, 28 Dec 2018 00:36:29 GMT"
}
] | 1,546,214,400,000 | [
[
"Surynek",
"Pavel",
""
]
] |
1812.11371 | Michal \v{C}ertick\'y | Mykyta Viazovskyi and Michal Certicky | StarAlgo: A Squad Movement Planning Library for StarCraft using Monte
Carlo Tree Search and Negamax | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-Time Strategy (RTS) games have recently become a popular testbed for
artificial intelligence research. They represent a complex adversarial domain
providing a number of interesting AI challenges. There exists a wide variety of
research-supporting software tools, libraries and frameworks for one RTS game
in particular -- StarCraft: Brood War. These tools are designed to address
various specific sub-problems, such as resource allocation or opponent
modelling so that researchers can focus exclusively on the tasks relevant to
them. We present one such tool -- a library called StarAlgo that produces plans
for the coordinated movement of squads (groups of combat units) within the game
world. StarAlgo library can solve the squad movement planning problem using one
of two algorithms: Monte Carlo Tree Search Considering Durations (MCTSCD) and a
slightly modified version of Negamax. We evaluate both the algorithms, compare
them, and demonstrate their usage. The library is implemented as a static C++
library that can be easily plugged into most StarCraft AI bots.
| [
{
"version": "v1",
"created": "Sat, 29 Dec 2018 14:21:19 GMT"
}
] | 1,546,300,800,000 | [
[
"Viazovskyi",
"Mykyta",
""
],
[
"Certicky",
"Michal",
""
]
] |
1812.11509 | Abhishek Gupta | Yew-Soon Ong, Abhishek Gupta | AIR5: Five Pillars of Artificial Intelligence Research | 5 pages, 0 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we provide an overview of what we consider to be some of
the most pressing research questions facing the fields of artificial
intelligence (AI) and computational intelligence (CI); with the latter focusing
on algorithms that are inspired by various natural phenomena. We demarcate
these questions using five unique Rs - namely, (i) rationalizability, (ii)
resilience, (iii) reproducibility, (iv) realism, and (v) responsibility.
Notably, just as air serves as the basic element of biological life, the term
AIR5 - cumulatively referring to the five aforementioned Rs - is introduced
herein to mark some of the basic elements of artificial life (supporting the
sustained growth of AI and CI). A brief summary of each of the Rs is presented,
highlighting their relevance as pillars of future research in this arena.
| [
{
"version": "v1",
"created": "Sun, 30 Dec 2018 11:00:48 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Jan 2019 06:46:39 GMT"
}
] | 1,546,473,600,000 | [
[
"Ong",
"Yew-Soon",
""
],
[
"Gupta",
"Abhishek",
""
]
] |
1901.00064 | Peter Eckersley | Peter Eckersley | Impossibility and Uncertainty Theorems in AI Value Alignment (or why
your AGI should not have a utility function) | Published in SafeAI 2019: Proceedings of the AAAI Workshop on
Artificial Intelligence Safety 2019 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Utility functions or their equivalents (value functions, objective functions,
loss functions, reward functions, preference orderings) are a central tool in
most current machine learning systems. These mechanisms for defining goals and
guiding optimization run into practical and conceptual difficulty when there
are independent, multi-dimensional objectives that need to be pursued
simultaneously and cannot be reduced to each other. Ethicists have proved
several impossibility theorems that stem from this origin; those results appear
to show that there is no way of formally specifying what it means for an
outcome to be good for a population without violating strong human ethical
intuitions (in such cases, the objective function is a social welfare
function). We argue that this is a practical problem for any machine learning
system (such as medical decision support systems or autonomous weapons) or
rigidly rule-based bureaucracy that will make high stakes decisions about human
lives: such systems should not use objective functions in the strict
mathematical sense.
We explore the alternative of using uncertain objectives, represented for
instance as partially ordered preferences, or as probability distributions over
total orders. We show that previously known impossibility theorems can be
transformed into uncertainty theorems in both of those settings, and prove
lower bounds on how much uncertainty is implied by the impossibility results.
We close by proposing two conjectures about the relationship between
uncertainty in objectives and severe unintended consequences from AI systems.
| [
{
"version": "v1",
"created": "Mon, 31 Dec 2018 23:51:27 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Feb 2019 02:57:13 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Mar 2019 03:12:49 GMT"
}
] | 1,551,830,400,000 | [
[
"Eckersley",
"Peter",
""
]
] |
1901.00270 | Luckeciano Melo | Luckeciano Carvalho Melo, Marcos Ricardo Omena Albuquerque Maximo, and
Adilson Marques da Cunha | Learning Humanoid Robot Motions Through Deep Neural Networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Controlling a high degrees of freedom humanoid robot is acknowledged as one
of the hardest problems in Robotics. Due to the lack of mathematical models, an
approach frequently employed is to rely on human intuition to design keyframe
movements by hand, usually aided by graphical tools. In this paper, we propose
a learning framework based on neural networks in order to mimic humanoid robot
movements. The developed technique does not make any assumption about the
underlying implementation of the movement; therefore, both keyframe and
model-based motions may be learned. The framework was applied in the RoboCup 3D
Soccer Simulation domain and promising results were obtained using the same
network architecture for several motions, even when copying motions from
other teams.
| [
{
"version": "v1",
"created": "Wed, 2 Jan 2019 05:46:52 GMT"
}
] | 1,546,473,600,000 | [
[
"Melo",
"Luckeciano Carvalho",
""
],
[
"Maximo",
"Marcos Ricardo Omena Albuquerque",
""
],
[
"da Cunha",
"Adilson Marques",
""
]
] |
1901.00298 | Han Yu | Han Yu, Chunyan Miao, Yongqing Zheng, Lizhen Cui, Simon Fauvel and
Cyril Leung | Ethically Aligned Opportunistic Scheduling for Productive Laziness | null | Proceedings of the 2nd AAAI/ACM Conference on Artificial
Intelligence, Ethics, and Society (AIES-19), 2019 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In artificial intelligence (AI) mediated workforce management systems (e.g.,
crowdsourcing), long-term success depends on workers accomplishing tasks
productively and resting well. This dual objective can be summarized by the
concept of productive laziness. Existing scheduling approaches mostly focus on
efficiency but overlook worker wellbeing through proper rest. In order to
enable workforce management systems to follow the IEEE Ethically Aligned Design
guidelines to prioritize worker wellbeing, we propose a distributed
Computational Productive Laziness (CPL) approach in this paper. It
intelligently recommends personalized work-rest schedules based on local data
concerning a worker's capabilities and situational factors to incorporate
opportunistic resting and achieve superlinear collective productivity without
the need for explicit coordination messages. Extensive experiments based on a
real-world dataset of over 5,000 workers demonstrate that CPL enables workers
to spend 70% of the effort to complete 90% of the tasks on average, providing
more ethically aligned scheduling than existing approaches.
| [
{
"version": "v1",
"created": "Wed, 2 Jan 2019 09:01:07 GMT"
}
] | 1,546,473,600,000 | [
[
"Yu",
"Han",
""
],
[
"Miao",
"Chunyan",
""
],
[
"Zheng",
"Yongqing",
""
],
[
"Cui",
"Lizhen",
""
],
[
"Fauvel",
"Simon",
""
],
[
"Leung",
"Cyril",
""
]
] |
1901.00365 | Karl Schlechta | Karl Schlechta | KI, Philosophie, Logik | in German | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is a short (and personal) introduction in German to the connections
between artificial intelligence, philosophy, and logic, and to the author's
work.
Dies ist eine kurze (und persoenliche) Einfuehrung in die Zusammenhaenge
zwischen Kuenstlicher Intelligenz, Philosophie, und Logik, und in die Arbeiten
des Autors.
| [
{
"version": "v1",
"created": "Thu, 27 Dec 2018 10:29:47 GMT"
}
] | 1,546,473,600,000 | [
[
"Schlechta",
"Karl",
""
]
] |
1901.00723 | Simon Lucas | Simon M. Lucas, Jialin Liu, Ivan Bravi, Raluca D. Gaina, John
Woodward, Vanessa Volz and Diego Perez-Liebana | Efficient Evolutionary Methods for Game Agent Optimisation: Model-Based
is Best | 8 pages, to appear in 2019 AAAI workshop on Games and Simulations for
Artificial Intelligence ( https://www.gamesim.ai/ ) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a simple and fast variant of Planet Wars as a test-bed
for statistical planning based Game AI agents, and for noisy hyper-parameter
optimisation. Planet Wars is a real-time strategy game with simple rules but
complex game-play. The variant introduced in this paper is designed for speed
to enable efficient experimentation, and also for a fixed action space to
enable practical inter-operability with General Video Game AI agents. If we
treat the game as a win-loss game (which is standard), then this leads to
challenging noisy optimisation problems both in tuning agents to play the game,
and in tuning game parameters. Here we focus on the problem of tuning an agent,
and report results using the recently developed N-Tuple Bandit Evolutionary
Algorithm and a number of other optimisers, including Sequential Model-based
Algorithm Configuration (SMAC). Results indicate that the N-Tuple Bandit
Evolutionary offers competitive performance as well as insight into the effects
of combinations of parameter choices.
| [
{
"version": "v1",
"created": "Thu, 3 Jan 2019 14:03:23 GMT"
}
] | 1,546,560,000,000 | [
[
"Lucas",
"Simon M.",
""
],
[
"Liu",
"Jialin",
""
],
[
"Bravi",
"Ivan",
""
],
[
"Gaina",
"Raluca D.",
""
],
[
"Woodward",
"John",
""
],
[
"Volz",
"Vanessa",
""
],
[
"Perez-Liebana",
"Diego",
""
]
] |
1901.00921 | Lantao Liu | Shoubhik Debnath, Lantao Liu, Gaurav Sukhatme | Reachability and Differential based Heuristics for Solving Markov
Decision Processes | The paper was published in 2017 International Symposium on Robotics
Research (ISRR) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The solution convergence of Markov Decision Processes (MDPs) can be
accelerated by prioritized sweeping of states ranked by their potential impacts
to other states. In this paper, we present new heuristics to speed up the
solution convergence of MDPs. First, we quantify the level of reachability of
every state using the Mean First Passage Time (MFPT) and show that such
reachability characterization assesses the importance of states very well, which
is then used for effective state prioritization. Then, we introduce the notion of
backup differentials as an extension to the prioritized sweeping mechanism, in
order to evaluate the impacts of states at an even finer scale. Finally, we
extend the state prioritization to the temporal process, where only partial
sweeping can be performed during certain intermediate value iteration stages.
To validate our design, we have performed numerical evaluations by comparing
the proposed new heuristics with corresponding classic baseline mechanisms. The
evaluation results showed that our reachability based framework and its
differential variants have outperformed the state-of-the-art solutions in terms
of both practical runtime and number of iterations.
| [
{
"version": "v1",
"created": "Thu, 3 Jan 2019 22:01:26 GMT"
}
] | 1,546,819,200,000 | [
[
"Debnath",
"Shoubhik",
""
],
[
"Liu",
"Lantao",
""
],
[
"Sukhatme",
"Gaurav",
""
]
] |
1901.00942 | Florian Richoux | Valentin Antuori and Florian Richoux | Constrained optimization under uncertainty for decision-making problems:
Application to Real-Time Strategy games | Published at the 2019 IEEE Congress on Evolutionary Computation
(CEC'19) | null | 10.1109/CEC.2019.8789922 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision-making problems can be modeled as combinatorial optimization
problems with Constraint Programming formalisms such as Constrained
Optimization Problems. However, few Constraint Programming formalisms can deal
with both optimization and uncertainty at the same time, and none of them are
convenient for modeling the problems we tackle in this paper.
Here, we propose a way to deal with combinatorial optimization problems under
uncertainty within the classical Constrained Optimization Problems formalism by
injecting the Rank Dependent Utility from decision theory. We also propose a
proof of concept of our method to show it is implementable and can solve
concrete decision-making problems using a regular constraint solver, and
propose a bot that won the partially observable track of the 2018 {\mu}RTS AI
competition.
Our result shows it is possible to handle uncertainty with regular Constraint
Programming solvers, without having to define a new formalism nor to
develop dedicated solvers. This brings a new perspective on tackling uncertainty in
Constraint Programming.
| [
{
"version": "v1",
"created": "Thu, 3 Jan 2019 23:45:00 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Jan 2019 08:47:41 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Apr 2019 05:09:11 GMT"
}
] | 1,653,350,400,000 | [
[
"Antuori",
"Valentin",
""
],
[
"Richoux",
"Florian",
""
]
] |
1901.00949 | Hussein Abbass A | Nicholas R. Clayton and Hussein Abbass | Machine Teaching in Hierarchical Genetic Reinforcement Learning:
Curriculum Design of Reward Functions for Swarm Shepherding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The design of reward functions in reinforcement learning is a human skill
that comes with experience. Unfortunately, there is no methodology in the
literature that could guide a human to design the reward function or to allow a
human to transfer the skills developed in designing reward functions to another
human and in a systematic manner. In this paper, we use Systematic
Instructional Design, an approach in human education, to engineer a machine
education methodology to design reward functions for reinforcement learning. We
demonstrate the methodology in designing a hierarchical genetic reinforcement
learner that adopts a neural network representation to evolve a swarm
controller for an agent shepherding a boids-based swarm. The results reveal
that the methodology is able to guide the design of hierarchical reinforcement
learners, with each model in the hierarchy learning incrementally through a
multi-part reward function. The hierarchy acts as a decision fusion function
that combines the individual behaviours and skills learnt by each instruction
to create a smart shepherd to control the swarm.
| [
{
"version": "v1",
"created": "Fri, 4 Jan 2019 00:10:46 GMT"
}
] | 1,546,819,200,000 | [
[
"Clayton",
"Nicholas R.",
""
],
[
"Abbass",
"Hussein",
""
]
] |
1901.01830 | Christophe Lecoutre | Christophe Lecoutre and Olivier Roussel | Proceedings of the 2018 XCSP3 Competition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This document represents the proceedings of the 2018 XCSP3 Competition. The
results of this competition of constraint solvers were presented at CP'18, the
24th International Conference on Principles and Practice of Constraint
Programming, held in Lille, France from 27th August 2018 to 31th August, 2018.
| [
{
"version": "v1",
"created": "Mon, 17 Dec 2018 14:29:45 GMT"
}
] | 1,546,905,600,000 | [
[
"Lecoutre",
"Christophe",
""
],
[
"Roussel",
"Olivier",
""
]
] |
1901.01834 | Baogang Hu | Baogang Hu, Weiming Dong | "Ge Shu Zhi Zhi": Towards Deep Understanding about Worlds | 10 pages, in Chinese. 5 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | "Ge Shu Zhi Zhi" is a novel saying in Chinese, stated as "To investigate
things from the underlying principle(s) and to acquire knowledge in the form of
mathematical representations". The saying is adopted and modified based on the
ideas from the Eastern and Western philosophers. This position paper discusses
the saying in the background of artificial intelligence (AI). Some related
subjects, such as the ultimate goals of AI and two levels of knowledge
representations, are discussed from the perspective of machine learning. A case
study on objective evaluations over multiple attributes, a typical problem in
the field of social computing, is given to support the saying for wide
applications. A methodology of meta rules is proposed for examining the
objectiveness of the evaluations. The possible problems of the saying are also
presented.
| [
{
"version": "v1",
"created": "Wed, 19 Dec 2018 05:18:20 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Mar 2019 01:25:59 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Jun 2019 01:32:04 GMT"
}
] | 1,559,779,200,000 | [
[
"Hu",
"Baogang",
""
],
[
"Dong",
"Weiming",
""
]
] |
1901.01851 | Roman Yampolskiy | Roman V. Yampolskiy | Personal Universes: A Solution to the Multi-Agent Value Alignment
Problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AI Safety researchers attempting to align values of highly capable
intelligent systems with those of humanity face a number of challenges
including personal value extraction, multi-agent value merger and finally
in-silico encoding. State-of-the-art research in value alignment shows
difficulties in every stage in this process, but merger of incompatible
preferences is a particularly difficult challenge to overcome. In this paper we
assume that the value extraction problem will be solved and propose a possible
way to implement an AI solution which optimally aligns with individual
preferences of each user. We conclude by analyzing benefits and limitations of
the proposed approach.
| [
{
"version": "v1",
"created": "Tue, 1 Jan 2019 18:05:43 GMT"
}
] | 1,546,905,600,000 | [
[
"Yampolskiy",
"Roman V.",
""
]
] |
1901.01855 | Xiaojie Gao | Xiaojie Gao, Shikui Tu, Lei Xu | A* Tree Search for Portfolio Management | The paper needs a major revision including the title | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a planning-based method to teach an agent to manage portfolio from
scratch. Our approach combines deep reinforcement learning techniques with
search techniques like AlphaGo. By uniting the advantages of the A* search
algorithm with Monte Carlo tree search, we come up with a new algorithm named
A* tree search, in which the best information is returned to guide the next search.
Also, the expansion mode of Monte Carlo tree is improved for a higher
utilization of the neural network. The suggested algorithm can also optimize
non-differentiable utility function by combinatorial search. This technique is
then used in our trading system. The major component is a neural network that
is trained by trading experiences from tree search and outputs prior
probability to guide search by pruning away branches in turn. Experimental
results on simulated and real financial data verify the robustness of the
proposed trading system and the trading system produces better strategies than
several approaches based on reinforcement learning.
| [
{
"version": "v1",
"created": "Mon, 7 Jan 2019 14:59:15 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Feb 2019 10:26:13 GMT"
}
] | 1,550,534,400,000 | [
[
"Gao",
"Xiaojie",
""
],
[
"Tu",
"Shikui",
""
],
[
"Xu",
"Lei",
""
]
] |
1901.01856 | Krishn Bera | Krishn Bera, Tejas Savalia and Bapi Raju | A Computational Framework for Motor Skill Acquisition | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | There have been numerous attempts in explaining the general learning
behaviours by various cognitive models. Multiple hypotheses have been put
forward to qualitatively argue the best-fit model for motor skill acquisition
task and its variations. In this context, for a discrete sequence production
(DSP) task, one of the most insightful models is Verwey's Dual Processor Model
(DPM). It largely explains the learning and behavioural phenomenon of skilled
discrete key-press sequences without providing any concrete computational basis
of reinforcement. Therefore, we propose a quantitative explanation for Verwey's
DPM hypothesis by experimentally establishing a general computational framework
for motor skill learning. We attempt combining the qualitative and quantitative
theories based on a best-fit model of the experimental simulations of
variations of dual processor models. The fundamental premise of sequential
decision making for skill learning is based on interacting model-based (MB) and
model-free (MF) reinforcement learning (RL) processes. Our unifying framework
shows the proposed idea agrees well with Verwey's DPM and Fitts' three phases of
skill learning. The accuracy of our model can further be validated by its
statistical fit with the human-generated data on simple environment tasks like
the grid-world.
| [
{
"version": "v1",
"created": "Thu, 3 Jan 2019 09:06:56 GMT"
}
] | 1,546,905,600,000 | [
[
"Bera",
"Krishn",
""
],
[
"Savalia",
"Tejas",
""
],
[
"Raju",
"Bapi",
""
]
] |
1901.02035 | Roi Ceren | Roi Ceren, Shannon Quinn, Glen Raines | Towards a Decentralized, Autonomous Multiagent Framework for Mitigating
Crop Loss | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a generalized decision-theoretic system for a heterogeneous team
of autonomous agents who are tasked with online identification of
phenotypically expressed stress in crop fields. This system employs four
distinct types of agents, specific to four available sensor modalities:
satellites (Layer 3), uninhabited aerial vehicles (L2), uninhabited ground
vehicles (L1), and static ground-level sensors (L0). Layers 3, 2, and 1 are
tasked with performing image processing at the available resolution of the
sensor modality and, along with data generated by layer 0 sensors, identify
erroneous differences that arise over time. Our goal is to limit the use of the
more computationally and temporally expensive subsequent layers. Therefore,
from layer 3 to 1, each layer only investigates areas that previous layers have
identified as potentially afflicted by stress. We introduce a reinforcement
learning technique based on Perkins' Monte Carlo Exploring Starts for a
generalized Markovian model for each layer's decision problem, and label the
system the Agricultural Distributed Decision Framework (ADDF). As our domain is
real-world and online, we illustrate implementations of the two major
components of our system: a clustering-based image processing methodology and a
two-layer POMDP implementation.
| [
{
"version": "v1",
"created": "Mon, 7 Jan 2019 19:44:44 GMT"
}
] | 1,546,992,000,000 | [
[
"Ceren",
"Roi",
""
],
[
"Quinn",
"Shannon",
""
],
[
"Raines",
"Glen",
""
]
] |
1901.02307 | Nikhil Bhargava | Nikhil Bhargava, Brian Williams | Complexity Bounds for the Controllability of Temporal Networks with
Conditions, Disjunctions, and Uncertainty | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In temporal planning, many different temporal network formalisms are used to
model real world situations. Each of these formalisms has different features
which affect how easy it is to determine whether the underlying network of
temporal constraints is consistent. While many of the simpler models have been
well-studied from a computational complexity perspective, the algorithms
developed for advanced models which combine features have very loose complexity
bounds. In this paper, we provide tight completeness bounds for strong, weak,
and dynamic controllability checking of temporal networks that have conditions,
disjunctions, and temporal uncertainty. Our work exposes some of the subtle
differences between these different structures and, remarkably, establishes a
guarantee that all of these problems are computable in PSPACE.
| [
{
"version": "v1",
"created": "Tue, 8 Jan 2019 13:47:12 GMT"
}
] | 1,546,992,000,000 | [
[
"Bhargava",
"Nikhil",
""
],
[
"Williams",
"Brian",
""
]
] |
1901.02412 | Ritwik Sinha | Ritwik Sinha, Dhruv Singal, Pranav Maneriker, Kushal Chawla, Yash
Shrivastava, Deepak Pai, Atanu R Sinha | Forecasting Granular Audience Size for Online Advertising | Published at AdKDD & TargetAd 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Orchestration of campaigns for online display advertising requires marketers
to forecast audience size at the granularity of specific attributes of web
traffic, characterized by the categorical nature of all attributes (e.g. {US,
Chrome, Mobile}). With each attribute taking many values, the very large
attribute combination set makes estimating audience size for any specific
attribute combination challenging. We modify Eclat, a frequent itemset mining
(FIM) algorithm, to accommodate categorical variables. For consequent frequent
and infrequent itemsets, we then provide forecasts using time series analysis
with conditional probabilities to aid approximation. An extensive simulation,
based on typical characteristics of audience data, is built to stress test our
modified-FIM approach. In two real datasets, comparison with baselines
including neural network models, shows that our method lowers computation time
of FIM for categorical data. On hold out samples we show that the proposed
forecasting method outperforms these baselines.
| [
{
"version": "v1",
"created": "Tue, 8 Jan 2019 17:13:51 GMT"
}
] | 1,546,992,000,000 | [
[
"Sinha",
"Ritwik",
""
],
[
"Singal",
"Dhruv",
""
],
[
"Maneriker",
"Pranav",
""
],
[
"Chawla",
"Kushal",
""
],
[
"Shrivastava",
"Yash",
""
],
[
"Pai",
"Deepak",
""
],
[
"Sinha",
"Atanu R",
""
]
] |
1901.02565 | Maxwell Crouse | Maxwell Crouse, Achille Fokoue, Maria Chang, Pavan Kapanipathi, Ryan
Musa, Constantine Nakos, Lingfei Wu, Kenneth Forbus, Michael Witbrock | High-Fidelity Vector Space Models of Structured Data | updated to reflect conference submission, new experiment added | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning systems regularly deal with structured data in real-world
applications. Unfortunately, such data has been difficult to faithfully
represent in a way that most machine learning techniques would expect, i.e. as
a real-valued vector of a fixed, pre-specified size. In this work, we introduce
a novel approach that compiles structured data into a satisfiability problem
which has in its set of solutions at least (and often only) the input data. The
satisfiability problem is constructed from constraints which are generated
automatically a priori from a given signature, thus trivially allowing for a
bag-of-words-esque vector representation of the input to be constructed. The
method is demonstrated in two areas, automated reasoning and natural language
processing, where it is shown to produce vector representations of
natural-language sentences and first-order logic clauses that can be precisely
translated back to their original, structured input forms.
| [
{
"version": "v1",
"created": "Wed, 9 Jan 2019 00:26:00 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Jan 2019 14:03:52 GMT"
}
] | 1,547,596,800,000 | [
[
"Crouse",
"Maxwell",
""
],
[
"Fokoue",
"Achille",
""
],
[
"Chang",
"Maria",
""
],
[
"Kapanipathi",
"Pavan",
""
],
[
"Musa",
"Ryan",
""
],
[
"Nakos",
"Constantine",
""
],
[
"Wu",
"Lingfei",
""
],
[
"Forbus",
"Kenneth",
""
],
[
"Witbrock",
"Michael",
""
]
] |
1901.02918 | Barry Smith | Jobst Landgrebe and Barry Smith | Making AI meaningful again | 23 pages, 1 Table | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence (AI) research enjoyed an initial period of enthusiasm
in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of
frustration when genuinely useful AI applications failed to be forthcoming.
Today, we are experiencing once again a period of enthusiasm, fired above all
by the successes of the technology of deep neural networks or deep machine
learning. In this paper we draw attention to what we take to be serious
problems underlying current views of artificial intelligence encouraged by
these successes, especially in the domain of language processing. We then show
an alternative approach to language-centric AI, in which we identify a role for
philosophy.
| [
{
"version": "v1",
"created": "Wed, 9 Jan 2019 20:16:44 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Feb 2019 11:07:26 GMT"
},
{
"version": "v3",
"created": "Sat, 23 Mar 2019 06:17:08 GMT"
}
] | 1,553,558,400,000 | [
[
"Landgrebe",
"Jobst",
""
],
[
"Smith",
"Barry",
""
]
] |
1901.04199 | Ana-Maria Olteteanu | Ana-Maria Olteteanu, Zoe Falomir | Proceedings of the 2nd Symposium on Problem-solving, Creativity and
Spatial Reasoning in Cognitive Systems, ProSocrates 2017 | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This book contains the accepted papers at ProSocrates 2017 Symposium:
Problem-solving, Creativity and Spatial Reasoning in Cognitive Systems.
ProSocrates 2017 symposium was held at the Hansewissenschaftkolleg (HWK) of
Advanced Studies in Delmenhorst, 20-21 July 2017. This was the second edition of
this symposium which aims to bring together researchers interested in spatial
reasoning, problem solving and creativity.
| [
{
"version": "v1",
"created": "Mon, 14 Jan 2019 09:16:11 GMT"
}
] | 1,547,510,400,000 | [
[
"Olteteanu",
"Ana-Maria",
""
],
[
"Falomir",
"Zoe",
""
]
] |
1901.04274 | Tobias Joppen | Tobias Joppen and Johannes F\"urnkranz | Ordinal Monte Carlo Tree Search | preview | IJCAI Workshop on Monte Carlo Tree Search, 2020 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many problem settings, most notably in game playing, an agent receives a
possibly delayed reward for its actions. Often, those rewards are handcrafted
and not naturally given. Even simple terminal-only rewards, like winning equals
1 and losing equals -1, cannot be seen as an unbiased statement, since these
values are chosen arbitrarily, and the behavior of the learner may change with
different encodings, such as setting the value of a loss to -0.5, which is
often done in practice to encourage learning. It is hard to argue about good
rewards and the performance of an agent often depends on the design of the
reward signal. In particular, in domains where states by nature only have an
ordinal ranking and where meaningful distance information between game state
values is not available, a numerical reward signal is necessarily biased. In
this paper, we take a look at Monte Carlo Tree Search (MCTS), a popular
algorithm to solve MDPs, highlight a reoccurring problem concerning its use of
rewards, and show that an ordinal treatment of the rewards overcomes this
problem. Using the General Video Game Playing framework we show a dominance of
our newly proposed ordinal MCTS algorithm over preference-based MCTS, vanilla
MCTS and various other MCTS variants.
| [
{
"version": "v1",
"created": "Mon, 14 Jan 2019 13:01:59 GMT"
}
] | 1,607,472,000,000 | [
[
"Joppen",
"Tobias",
""
],
[
"Fürnkranz",
"Johannes",
""
]
] |
1901.04626 | Liudmyla Nechepurenko | Liudmyla Nechepurenko, Viktor Voss, and Vyacheslav Gritsenko | Comparing Knowledge-based Reinforcement Learning to Neural Networks in a
Strategy Game | 7 pages, 6 figures | Hybrid Artificial Intelligent Systems. HAIS 2020. Lecture Notes in
Computer Science, vol 12344 | 10.1007/978-3-030-61705-9_26 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper reports on an experiment, in which a Knowledge-Based Reinforcement
Learning (KB-RL) method was compared to a Neural Network (NN) approach in
solving a classical Artificial Intelligence (AI) task. In contrast to NNs,
which require a substantial amount of data to learn a good policy, the KB-RL
method seeks to encode human knowledge into the solution, considerably reducing
the amount of data needed for a good policy. By means of Reinforcement Learning
(RL), KB-RL learns to optimize the model and improves the output of the system.
Furthermore, KB-RL offers the advantage of a clear explanation of the taken
decisions as well as transparent reasoning behind the solution.
The goal of the reported experiment was to examine the performance of the
KB-RL method in contrast to the Neural Network and to explore the capabilities
of KB-RL to deliver a strong solution for the AI tasks. The results show that,
within the designed settings, KB-RL outperformed the NN, and was able to learn
a better policy from the available amount of data. These results support the
opinion that Artificial Intelligence can benefit from the discovery and study
of alternative approaches, potentially extending the frontiers of AI.
| [
{
"version": "v1",
"created": "Tue, 15 Jan 2019 01:23:38 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jan 2020 11:01:33 GMT"
}
] | 1,605,052,800,000 | [
[
"Nechepurenko",
"Liudmyla",
""
],
[
"Voss",
"Viktor",
""
],
[
"Gritsenko",
"Vyacheslav",
""
]
] |
1901.04772 | Montaser Mohammedalamen | Montaser Mohammedalamen, Waleed D. Khamies, Benjamin Rosman | Transfer Learning for Prosthetics Using Imitation Learning | Workshop paper, Black in AI, NeurIPS 2018 | Black in AI Workshop, NeurIPS 2018 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we apply Reinforcement Learning (RL) techniques to train a
realistic biomechanical model to work with different people and on different
walking environments. We benchmark three RL algorithms: Deep Deterministic
Policy Gradient (DDPG), Trust Region Policy Optimization (TRPO) and Proximal
Policy Optimization (PPO) in the OpenSim environment. We also apply imitation
learning to a prosthetics domain to reduce the training time needed to design
customized prosthetics. We use DDPG algorithm to train an original expert
agent. We then propose a modification to the Dataset Aggregation (DAgger)
algorithm to reuse the expert knowledge and train a new target agent to
replicate that behaviour in fewer than 5 iterations, compared to the 100
iterations taken by the expert agent which means reducing training time by 95%.
Our modifications to the DAgger algorithm improve the balance between
exploiting the expert policy and exploring the environment. We show empirically
that these improve convergence time of the target agent, particularly when
there is some degree of variation between expert and naive agent.
| [
{
"version": "v1",
"created": "Tue, 15 Jan 2019 11:35:26 GMT"
}
] | 1,547,596,800,000 | [
[
"Mohammedalamen",
"Montaser",
""
],
[
"Khamies",
"Waleed D.",
""
],
[
"Rosman",
"Benjamin",
""
]
] |
1901.05322 | Saeid Amiri | Saeid Amiri, Mohammad Shokrolah Shirazi, Shiqi Zhang | Learning and Reasoning for Robot Sequential Decision Making under
Uncertainty | In proceedings of 34th AAAI conference on Artificial Intelligence,
2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robots frequently face complex tasks that require more than one action, where
sequential decision-making (SDM) capabilities become necessary. The key
contribution of this work is a robot SDM framework, called LCORPP, that
supports the simultaneous capabilities of supervised learning for passive state
estimation, automated reasoning with declarative human knowledge, and planning
under uncertainty toward achieving long-term goals. In particular, we use a
hybrid reasoning paradigm to refine the state estimator, and provide
informative priors for the probabilistic planner. In experiments, a mobile
robot is tasked with estimating human intentions using their motion
trajectories, declarative contextual knowledge, and human-robot interaction
(dialog-based and motion-based). Results suggest that, in efficiency and
accuracy, our framework performs better than its no-learning and no-reasoning
counterparts in an office environment.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2019 14:47:14 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Nov 2019 16:56:47 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Dec 2019 13:42:31 GMT"
}
] | 1,576,022,400,000 | [
[
"Amiri",
"Saeid",
""
],
[
"Shirazi",
"Mohammad Shokrolah",
""
],
[
"Zhang",
"Shiqi",
""
]
] |
1901.05431 | Michael Green | Michael Cerny Green, Benjamin Sergent, Pushyami Shandilya and Vibhor
Kumar | Evolutionarily-Curated Curriculum Learning for Deep Reinforcement
Learning Agents | 9 pages, 7 figures, accepted to the Reinforcement Learning in Games
workshop at AAAI 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a new training loop for deep reinforcement learning
agents with an evolutionary generator. Evolutionary procedural content
generation has been used in the creation of maps and levels for games before.
Our system incorporates an evolutionary map generator to construct a training
curriculum that is evolved to maximize loss within the state-of-the-art Double
Dueling Deep Q Network architecture with prioritized replay. We present a
case study in which we prove the efficacy of our new method on a game we made
called Attackers and Defenders, which has a large, discrete action space. Our
results demonstrate that training on an evolutionarily-curated curriculum
(directed sampling) of maps both expedites training and improves generalization
when compared to a network trained on an undirected sampling of maps.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2019 18:53:14 GMT"
}
] | 1,547,683,200,000 | [
[
"Green",
"Michael Cerny",
""
],
[
"Sergent",
"Benjamin",
""
],
[
"Shandilya",
"Pushyami",
""
],
[
"Kumar",
"Vibhor",
""
]
] |
1901.05437 | Zenna Tavares | Zenna Tavares, Javier Burroni, Edgar Minaysan, Armando Solar Lezama,
Rajesh Ranganath | Soft Constraints for Inference with Declarative Knowledge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a likelihood free inference procedure for conditioning a
probabilistic model on a predicate. A predicate is a Boolean valued function
which expresses a yes/no question about a domain. Our contribution, which we
call predicate exchange, constructs a softened predicate which takes value in
the unit interval [0, 1] as opposed to simply true or false. Intuitively, 1
corresponds to true, and a high value (such as 0.999) corresponds to "nearly
true" as determined by a distance metric. We define Boolean algebra for soft
predicates, such that they can be negated, conjoined and disjoined arbitrarily.
A softened predicate can serve as a tractable proxy to a likelihood function
for approximate posterior inference. However, to target exact inference, we
temper the relaxation by a temperature parameter, and add an accept/reject phase
using replica exchange Markov Chain Monte Carlo, which exchanges states between
a sequence of models conditioned on predicates at varying temperatures. We
describe a lightweight implementation of predicate exchange that provides a
language-independent layer that can be implemented on top of existing modeling
formalisms.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2019 18:59:38 GMT"
}
] | 1,547,683,200,000 | [
[
"Tavares",
"Zenna",
""
],
[
"Burroni",
"Javier",
""
],
[
"Minaysan",
"Edgar",
""
],
[
"Lezama",
"Armando Solar",
""
],
[
"Ranganath",
"Rajesh",
""
]
] |
1901.05506 | Konstantin Yakovlev S | Anton Andreychuk, Konstantin Yakovlev, Dor Atzmon, Roni Stern | Multi-Agent Pathfinding with Continuous Time | Camera-ready version of the paper as to appear in IJCAI'19
proceedings | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-Agent Pathfinding (MAPF) is the problem of finding paths for multiple
agents such that every agent reaches its goal and the agents do not collide.
Most prior work on MAPF was on grids, assumed agents' actions have uniform
duration, and that time is discretized into timesteps. We propose a MAPF
algorithm that does not rely on these assumptions, is complete, and provides
provably optimal solutions. This algorithm is based on a novel adaptation of
Safe interval path planning (SIPP), a continuous time single-agent planning
algorithm, and a modified version of Conflict-based search (CBS), a state of
the art multi-agent pathfinding algorithm. We analyze this algorithm, discuss
its pros and cons, and evaluate it experimentally on several standard
benchmarks.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2019 19:34:03 GMT"
},
{
"version": "v2",
"created": "Fri, 17 May 2019 21:08:24 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jun 2019 17:50:35 GMT"
}
] | 1,560,729,600,000 | [
[
"Andreychuk",
"Anton",
""
],
[
"Yakovlev",
"Konstantin",
""
],
[
"Atzmon",
"Dor",
""
],
[
"Stern",
"Roni",
""
]
] |
1901.05564 | Soheila Sadeghiram | Soheila Sadeghiram, Hui MA, Gang Chen | Distance-Guided GA-Based Approach to Distributed Data-Intensive Web
Service Composition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed computing, which uses Web services as fundamental elements,
enables high-speed development of software applications through composing many
interoperating, distributed, re-usable, and autonomous services. As a
fundamental challenge for service developers, service composition must fulfil
functional requirements and optimise Quality of Service (QoS) attributes,
simultaneously. On the other hand, huge amounts of data have been created by
advances in technologies, which may be exchanged between services.
Data-intensive Web services are of great interest to implement data-intensive
processes. However, current approaches to Web service composition have omitted
either the effect of data, or the distribution of services. Evolutionary
Computing (EC) techniques allow for the creation of compositions that meet all
the above factors. In this paper, we will develop Genetic Algorithm (GA)-based
approach for solving the problem of distributed data-intensive Web service
composition (DWSC). In particular, we will introduce two new heuristics, i.e.
Longest Common Subsequence (LCS) distance of services, in designing crossover
operators. Additionally, a new local search technique incorporating distance of
services will be proposed.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2019 23:48:57 GMT"
}
] | 1,547,769,600,000 | [
[
"Sadeghiram",
"Soheila",
""
],
[
"MA",
"Hui",
""
],
[
"Chen",
"Gang",
""
]
] |
1901.06343 | G\'erald Rocher | G\'erald Rocher, Jean-Yves Tigli, St\'ephane Lavirotte and Nhan Le
Thanh | Effectiveness Assessment of Cyber-Physical Systems | Preprint submitted to International Journal of Approximate Reasoning | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By achieving their purposes through interactions with the physical world,
Cyber-Physical Systems (CPS) pose new challenges in terms of dependability.
Indeed, the evolution of the physical systems they control with transducers can
be affected by surrounding physical processes over which they have no control
and which may potentially hamper the achievement of their purposes. While it is
illusory to hope for a comprehensive model of the physical environment at
design time to anticipate and remove faults that may occur once these systems
are deployed, it becomes necessary to evaluate their degree of effectiveness in
vivo. In this paper, the degree of effectiveness is formally defined and
generalized in the context of measure theory. The measure is developed in
the context of the Transferable Belief Model (TBM), an elaboration on the
Dempster-Shafer Theory (DST) of evidence so as to handle epistemic and aleatory
uncertainties respectively pertaining the users' expectations and the natural
variability of the physical environment. The TBM is used in conjunction with
the Input/Output Hidden Markov Modeling framework (which we denote Ev-IOHMM) to
specify the expected evolution of the physical system controlled by the CPS and
the tolerances towards uncertainties. The measure of effectiveness is then
obtained from the forward algorithm, leveraging the conflict entailed by the
successive combinations of the beliefs obtained from observations of the
physical system and the beliefs corresponding to its expected evolution. The
proposed approach is applied to autonomous vehicles and shows how the degree of
effectiveness can be used for benchmarking their controllers relative to the
highway code speed limitations and passengers' well-being constraints, both
modeled through an Ev-IOHMM.
| [
{
"version": "v1",
"created": "Thu, 10 Jan 2019 10:35:41 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Jan 2019 07:30:53 GMT"
},
{
"version": "v3",
"created": "Wed, 29 May 2019 14:40:51 GMT"
},
{
"version": "v4",
"created": "Fri, 13 Dec 2019 12:52:59 GMT"
}
] | 1,576,454,400,000 | [
[
"Rocher",
"Gérald",
""
],
[
"Tigli",
"Jean-Yves",
""
],
[
"Lavirotte",
"Stéphane",
""
],
[
"Thanh",
"Nhan Le",
""
]
] |
1901.06560 | Leilani Gilpin | Leilani H. Gilpin and Cecilia Testart and Nathaniel Fruchter and
Julius Adebayo | Explaining Explanations to Society | NeurIPS 2018 Workshop on Ethical, Social and Governance Issues in AI | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a disconnect between explanatory artificial intelligence (XAI)
methods and the types of explanations that are useful for and demanded by
society (policy makers, government officials, etc.) Questions that experts in
artificial intelligence (AI) ask opaque systems provide inside explanations,
focused on debugging, reliability, and validation. These are different from
those that society will ask of these systems to build trust and confidence in
their decisions. Although explanatory AI systems can answer many questions that
experts desire, they often don't explain why they made decisions in a way that
is precise (true to the model) and understandable to humans. These outside
explanations can be used to build trust, comply with regulatory and policy
changes, and act as external validation. In this paper, we focus on XAI methods
for deep neural networks (DNNs) because of DNNs' use in decision-making and
inherent opacity. We explore the types of questions that explanatory DNN
systems can answer and discuss challenges in building explanatory systems that
provide outside explanations for societal requirements and benefit.
| [
{
"version": "v1",
"created": "Sat, 19 Jan 2019 17:33:10 GMT"
}
] | 1,548,201,600,000 | [
[
"Gilpin",
"Leilani H.",
""
],
[
"Testart",
"Cecilia",
""
],
[
"Fruchter",
"Nathaniel",
""
],
[
"Adebayo",
"Julius",
""
]
] |