id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1903.07198 | Sarath Sreedharan | Sarath Sreedharan, Alberto Olmo, Aditya Prasad Mishra and Subbarao
Kambhampati | Model-Free Model Reconciliation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing agents capable of explaining complex sequential decisions remains a
significant open problem in automated decision-making. Recently, there has been
a lot of interest in developing approaches for generating such explanations for
various decision-making paradigms. One such approach has been the idea of {\em
explanation as model-reconciliation}. The framework hypothesizes that one of
the common reasons for the user's confusion could be the mismatch between the
user's model of the task and the one used by the system to generate the
decisions. While this is a general framework, most works that have been
explicitly built on this explanatory philosophy have focused on settings where
the model of the user's knowledge is available in a declarative form. Our goal in
this paper is to adapt the model reconciliation approach to the cases where
such user models are no longer explicitly provided. We present a simple and
easy-to-learn labeling model that can help an explainer decide what information
could help achieve model reconciliation between the user and the agent.
| [
{
"version": "v1",
"created": "Sun, 17 Mar 2019 23:30:52 GMT"
}
] | 1,552,953,600,000 | [
[
"Sreedharan",
"Sarath",
""
],
[
"Olmo",
"Alberto",
""
],
[
"Mishra",
"Aditya Prasad",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1903.07260 | Yaoting Huang | Yaoting Huang, Boyu Chen, Wenlian Lu, Zhong-Xiao Jin, Ren Zheng | Intelligent Solution System towards Parts Logistics Optimization | WCGO 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the complexity of the problem at hand, intelligent algorithms show
great power to solve the parts logistics optimization problem related to the
vehicle routing problem (VRP). However, most of the existing research on VRP
is not comprehensive and fails to solve real-world parts logistics problems.
In this work, for the SAIC logistics problem, we propose a systematic
solution to this 2-Dimensional Loading Capacitated Multi-Depot Heterogeneous
VRP with Time Windows by integrating diverse types of intelligent algorithms,
including, a heuristic algorithm to initialize feasible logistics planning
schemes by imitating manual planning, the core Tabu Search algorithm for global
optimization, accelerated by a novel bundle technique, heuristic algorithms
for the associated routing, packing, and queuing, and a heuristic post-optimization
process to promote the optimal solution.
Based on these algorithms, SAIC Motor has successfully established an
intelligent management system that gives a systematic solution for parts
logistics planning, superior to manual planning in its performance,
customizability and expandability.
| [
{
"version": "v1",
"created": "Mon, 18 Mar 2019 05:43:03 GMT"
}
] | 1,552,953,600,000 | [
[
"Huang",
"Yaoting",
""
],
[
"Chen",
"Boyu",
""
],
[
"Lu",
"Wenlian",
""
],
[
"Jin",
"Zhong-Xiao",
""
],
[
"Zheng",
"Ren",
""
]
] |
1903.07269 | Sarath Sreedharan | Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, Subbarao
Kambhampati | Expectation-Aware Planning: A Unifying Framework for Synthesizing and
Executing Self-Explaining Plans for Human-Aware Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present a new planning formalism called Expectation-Aware
planning for decision making with humans in the loop, where the human's
expectations about an agent may differ from the agent's own model. We show how
this formulation allows agents to not only leverage existing strategies for
handling model differences but can also exhibit novel behaviors that are
generated through the combination of these different strategies. Our
formulation also reveals a deep connection to existing approaches in epistemic
planning. Specifically, we show how we can leverage classical planning
compilations for epistemic planning to solve Expectation-Aware planning
problems. To the best of our knowledge, the proposed formulation is the first
complete solution to decision-making in the presence of diverging user
expectations that is amenable to a classical planning compilation while
successfully combining previous works on explanation and explicability. We
empirically show how our approach provides a computational advantage over
existing approximate approaches that unnecessarily try to search in the space
of models while also failing to facilitate the full gamut of behaviors enabled
by our framework.
| [
{
"version": "v1",
"created": "Mon, 18 Mar 2019 06:41:18 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Mar 2019 19:49:51 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Nov 2019 02:48:11 GMT"
}
] | 1,573,516,800,000 | [
[
"Sreedharan",
"Sarath",
""
],
[
"Chakraborti",
"Tathagata",
""
],
[
"Muise",
"Christian",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1903.08218 | Sarath Sreedharan | Sarath Sreedharan, Siddharth Srivastava, David Smith, Subbarao
Kambhampati | Why Couldn't You do that? Explaining Unsolvability of Classical Planning
Problems in the Presence of Plan Advice | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainable planning is widely accepted as a prerequisite for autonomous
agents to successfully work with humans. While there has been a lot of research
on generating explanations of solutions to planning problems, explaining the
absence of solutions remains an open and under-studied problem, even though
such situations can be the hardest to understand or debug. In this paper, we
show that hierarchical abstractions can be used to efficiently generate reasons
for unsolvability of planning problems. In contrast to related work on
computing certificates of unsolvability, we show that these methods can
generate compact, human-understandable reasons for unsolvability. Empirical
analysis and user studies show the validity of our methods as well as their
computational efficacy on a number of benchmark planning domains.
| [
{
"version": "v1",
"created": "Tue, 19 Mar 2019 19:08:32 GMT"
}
] | 1,553,126,400,000 | [
[
"Sreedharan",
"Sarath",
""
],
[
"Srivastava",
"Siddharth",
""
],
[
"Smith",
"David",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1903.08452 | Jerry Lonlac | Jerry Lonlac, Sa\"id Jabbour, Engelbert Mephu Nguifo, Lakhdar Sa\"is,
Badran Raddaoui | Extracting Frequent Gradual Patterns Using Constraints Modeling | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a constraint-based modeling approach for the
problem of discovering frequent gradual patterns in a numerical dataset. This
SAT-based declarative approach offers an additional possibility to benefit from
the recent progress in satisfiability testing and to exploit the efficiency of
modern SAT solvers for enumerating all frequent gradual patterns in a numerical
dataset. Our approach can easily be extended with extra constraints, such as
temporal constraints in order to extract more specific patterns in a broad
range of gradual patterns mining applications. We show the practical
feasibility of our SAT model by running experiments on two real world datasets.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2019 11:33:02 GMT"
}
] | 1,553,126,400,000 | [
[
"Lonlac",
"Jerry",
""
],
[
"Jabbour",
"Saïd",
""
],
[
"Nguifo",
"Engelbert Mephu",
""
],
[
"Saïs",
"Lakhdar",
""
],
[
"Raddaoui",
"Badran",
""
]
] |
1903.08495 | Elena Stamm | Andreas Christ, Franz Quint (eds.) | Artificial Intelligence : from Research to Application ; the Upper-Rhine
Artificial Intelligence Symposium (UR-AI 2019) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The TriRhenaTech alliance universities and their partners presented their
competences in the field of artificial intelligence and their cross-border
cooperations with the industry at the tri-national conference 'Artificial
Intelligence : from Research to Application' on March 13th, 2019 in Offenburg.
The TriRhenaTech alliance is a network of universities in the Upper Rhine
Trinational Metropolitan Region, comprising the German universities of
applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, and Offenburg, the
Baden-Wuerttemberg Cooperative State University Loerrach, the French university
network Alsace Tech (comprised of 14 'grandes \'ecoles' in the fields of
engineering, architecture and management) and the University of Applied
Sciences and Arts Northwestern Switzerland. The alliance's common goal is to
reinforce the transfer of knowledge, research, and technology, as well as the
cross-border mobility of students.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2019 13:18:15 GMT"
}
] | 1,553,126,400,000 | [
[
"Christ",
"Andreas",
"",
"eds."
],
[
"Quint",
"Franz",
"",
"eds."
]
] |
1903.08523 | Aaron Sterling | Aaron Sterling | Ontology of Card Sleights | 8 pages. Preprint. Final version appeared in ICSC 2019. Copyright of
final version is held by IEEE | IEEE 14th International Conference on Semantic Computing (ICSC),
February 2019, pp. 263-270 | 10.1109/ICOSC.2019.8665514 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a machine-readable movement writing for sleight-of-hand moves with
cards -- a "Labanotation of card magic." This scheme of movement writing
contains 440 categories of motion, and appears to taxonomize all card sleights
that have appeared in over 1500 publications. The movement writing is
axiomatized in $\mathcal{SROIQ}$(D) Description Logic, and collected formally
as an Ontology of Card Sleights, a computational ontology that extends the
Basic Formal Ontology and the Information Artifact Ontology. The Ontology of
Card Sleights is implemented in OWL DL, a Description Logic fragment of the Web
Ontology Language. While ontologies have historically been used to classify at
a less granular level, the algorithmic nature of card tricks allows us to
transcribe a performer's actions step by step. We conclude by discussing design
criteria we have used to ensure the ontology can be accessed and modified with
a simple click-and-drag interface. This may allow database searches and
performance transcriptions by users with card magic knowledge, but no ontology
background.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2019 14:35:16 GMT"
}
] | 1,553,126,400,000 | [
[
"Sterling",
"Aaron",
""
]
] |
1903.08606 | Yao Hengshuai | Nazmus Sakib, Hengshuai Yao, Hong Zhang, Shangling Jui | Single-step Options for Adversary Driving | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we use reinforcement learning for safe driving in adversarial
settings. In our work, the knowledge in state-of-the-art planning methods is reused
by single-step options whose action suggestions are compared in parallel with
primitive actions. We show two advantages by doing so. First, training this
reinforcement learning agent is easier and faster than training the
primitive-action agent. Second, our new agent outperforms the primitive-action
reinforcement learning agent, human testers, as well as the state-of-the-art
planning methods that our agent queries as skill options.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2019 16:39:28 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Nov 2019 18:56:07 GMT"
}
] | 1,575,244,800,000 | [
[
"Sakib",
"Nazmus",
""
],
[
"Yao",
"Hengshuai",
""
],
[
"Zhang",
"Hong",
""
],
[
"Jui",
"Shangling",
""
]
] |
1903.08772 | Jaroslav Vitku | Jaroslav V\'itk\r{u}, Petr Dluho\v{s}, Joseph Davidson, Mat\v{e}j
Nikl, Simon Andersson, P\v{r}emysl Pa\v{s}ka, Jan \v{S}inkora, Petr
Hlubu\v{c}ek, Martin Str\'ansk\'y, Martin Hyben, Martin Poliak, Jan
Feyereisl, Marek Rosa | ToyArchitecture: Unsupervised Learning of Interpretable Models of the
World | Revision: changed the pdftitle | null | 10.1371/journal.pone.0230432 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Research in Artificial Intelligence (AI) has focused mostly on two extremes:
either on small improvements in narrow AI domains, or on universal theoretical
frameworks which are usually uncomputable, incompatible with theories of
biological intelligence, or lacking practical implementations. The goal of this
work is to combine the main advantages of the two: to follow a big picture
view, while providing a particular theory and its implementation. In contrast
with purely theoretical approaches, the resulting architecture should be usable
in realistic settings, but also form the core of a framework containing all the
basic mechanisms, into which it should be easier to integrate additional
required functionality.
In this paper, we present a novel, purposely simple, and interpretable
hierarchical architecture which combines multiple different mechanisms into one
system: unsupervised learning of a model of the world, learning the influence
of one's own actions on the world, model-based reinforcement learning,
hierarchical planning and plan execution, and symbolic/sub-symbolic integration
in general. The learned model is stored in the form of hierarchical
representations with the following properties: 1) they are increasingly more
abstract, but can retain details when needed, and 2) they are easy to
manipulate in their local and symbolic-like form, thus also allowing one to
observe the learning process at each level of abstraction. On all levels of the
system, the representation of the data can be interpreted in both a symbolic
and a sub-symbolic manner. This enables the architecture to learn efficiently
using sub-symbolic methods and to employ symbolic inference.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2019 23:07:12 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Apr 2019 21:47:29 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Sep 2020 07:54:19 GMT"
}
] | 1,599,696,000,000 | [
[
"Vítků",
"Jaroslav",
""
],
[
"Dluhoš",
"Petr",
""
],
[
"Davidson",
"Joseph",
""
],
[
"Nikl",
"Matěj",
""
],
[
"Andersson",
"Simon",
""
],
[
"Paška",
"Přemysl",
""
],
[
"Šinkora",
"Jan",
""
],
[
"Hlubuček",
"Petr",
""
],
[
"Stránský",
"Martin",
""
],
[
"Hyben",
"Martin",
""
],
[
"Poliak",
"Martin",
""
],
[
"Feyereisl",
"Jan",
""
],
[
"Rosa",
"Marek",
""
]
] |
1903.09035 | Marie-El\'eonore Kessaci | Lucien Mousin, Marie-El\'eonore Kessaci, Clarisse Dhaenens | Exploiting Promising Sub-Sequences of Jobs to solve the No-Wait Flowshop
Scheduling Problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The no-wait flowshop scheduling problem is a variant of the classical
permutation flowshop problem, with the additional constraint that jobs have to
be processed by the successive machines without waiting time. To efficiently
address this NP-hard combinatorial optimization problem we conduct an analysis
of the structure of good quality solutions. This analysis shows that the
No-Wait specificity gives them a common structure: they share identical
sub-sequences of jobs, which we call super-jobs. After discussing how to
identify these super-jobs, we propose IG-SJ, an algorithm that exploits
super-jobs within the state-of-the-art algorithm for the classical permutation
flowshop, the well-known Iterated Greedy (IG) algorithm. An iterative approach
of IG-SJ is also proposed. Experiments are conducted on Taillard's instances.
The experimental results show that exploiting super-jobs is successful since
IG-SJ is able to find 64 new best solutions.
| [
{
"version": "v1",
"created": "Thu, 21 Mar 2019 14:48:00 GMT"
}
] | 1,553,212,800,000 | [
[
"Mousin",
"Lucien",
""
],
[
"Kessaci",
"Marie-Eléonore",
""
],
[
"Dhaenens",
"Clarisse",
""
]
] |
1903.09328 | Bharat Prakash | Bharat Prakash, Mohit Khatwani, Nicholas Waytowich, Tinoosh Mohsenin | Improving Safety in Reinforcement Learning Using Model-Based
Architectures and Human Intervention | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent progress in AI and Reinforcement learning has shown great success in
solving complex problems with high dimensional state spaces. However, most of
these successes have been primarily in simulated environments where failure is
of little or no consequence. In contrast, most real-world applications require
training solutions that are safe to operate as catastrophic failures are
inadmissible especially when there is human interaction involved. Currently,
Safe RL systems use human oversight during training and exploration in order to
make sure the RL agent does not go into a catastrophic state. These methods
require a large amount of human labor and are very difficult to scale up. We
present a hybrid method for reducing the human intervention time by combining
model-based approaches and training a supervised learner to improve sample
efficiency while also ensuring safety. We evaluate these methods on various
grid-world environments using both standard and visual representations and show
that our approach achieves better performance in terms of sample efficiency,
number of catastrophic states reached as well as overall task performance
compared to traditional model-free approaches.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2019 02:48:21 GMT"
}
] | 1,553,472,000,000 | [
[
"Prakash",
"Bharat",
""
],
[
"Khatwani",
"Mohit",
""
],
[
"Waytowich",
"Nicholas",
""
],
[
"Mohsenin",
"Tinoosh",
""
]
] |
1903.09569 | Li Zhang | Li Zhang, Wei Wang, Shijian Li, Gang Pan | Monte Carlo Neural Fictitious Self-Play: Approach to Approximate Nash
equilibrium of Imperfect-Information Games | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Researchers on artificial intelligence have achieved human-level intelligence
in large-scale perfect-information games, but it is still a challenge to
achieve (nearly) optimal results (in other words, an approximate Nash
Equilibrium) in large-scale imperfect-information games (e.g. war games,
football coaching, or business strategy). Neural Fictitious Self Play (NFSP) is
an effective algorithm for learning approximate Nash equilibrium of
imperfect-information games from self-play without prior domain knowledge.
However, it relies on Deep Q-Network, which is off-line and hard to converge
in online games with changing opponent strategies, so it cannot approach an
approximate Nash equilibrium in games with large search scale and deep search
depth. In this paper, we propose Monte Carlo Neural Fictitious Self Play
(MC-NFSP), an algorithm that combines Monte Carlo tree search with NFSP, which
greatly improves the performance on large-scale zero-sum imperfect-information
games. Experimentally, we demonstrate that the proposed Monte Carlo Neural
Fictitious Self Play can converge to approximate Nash equilibrium in games with
large-scale search depth while Neural Fictitious Self Play cannot.
Furthermore, we develop Asynchronous Neural Fictitious Self Play (ANFSP), which
uses an asynchronous and parallel architecture to collect game experience. In
experiments, we show that parallel actor-learners have a further accelerated
and stabilizing effect on training.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2019 15:58:35 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Apr 2019 09:12:25 GMT"
}
] | 1,554,768,000,000 | [
[
"Zhang",
"Li",
""
],
[
"Wang",
"Wei",
""
],
[
"Li",
"Shijian",
""
],
[
"Pan",
"Gang",
""
]
] |
1903.09604 | Christopher Solinas | Christopher Solinas, Douglas Rebstock, Michael Buro | Improving Search with Supervised Learning in Trick-Based Card Games | Accepted for publication at AAAI-19 | Vol 33 (2019): Proceedings of the Thirty-Third AAAI Conference on
Artificial Intelligence, Pages 1158-1165 | 10.1609/aaai.v33i01.33011158 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In trick-taking card games, a two-step process of state sampling and
evaluation is widely used to approximate move values. While the evaluation
component is vital, the accuracy of move value estimates is also fundamentally
linked to how well the sampling distribution corresponds to the true distribution.
Despite this, recent work in trick-taking card game AI has mainly focused on
improving evaluation algorithms with limited work on improving sampling. In
this paper, we focus on the effect of sampling on the strength of a player and
propose a novel method of sampling more realistic states given move history. In
particular, we use predictions about locations of individual cards made by a
deep neural network --- trained on data from human gameplay --- in order to
sample likely worlds for evaluation. This technique, used in conjunction with
Perfect Information Monte Carlo (PIMC) search, provides a substantial increase
in cardplay strength in the popular trick-taking card game of Skat.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2019 17:00:50 GMT"
}
] | 1,568,246,400,000 | [
[
"Solinas",
"Christopher",
""
],
[
"Rebstock",
"Douglas",
""
],
[
"Buro",
"Michael",
""
]
] |
1903.09820 | Pavel Surynek | Pavel Surynek | Multi-agent Path Finding with Continuous Time Viewed Through
Satisfiability Modulo Theories (SMT) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses a variant of multi-agent path finding (MAPF) in
continuous space and time. We present a new solving approach based on
satisfiability modulo theories (SMT) to obtain makespan optimal solutions. The
standard MAPF is a task of navigating agents in an undirected graph from given
starting vertices to given goal vertices so that agents do not collide with
each other in vertices of the graph. In the continuous version
(MAPF$^\mathcal{R}$) agents move in an $n$-dimensional Euclidean space along
straight lines that interconnect predefined positions. For simplicity, we work
with circular omni-directional agents having constant velocities in the 2D
plane. As agents can have different sizes and move smoothly along lines, a
non-colliding movement along certain lines with small agents can result in a
collision if the same movement is performed with larger agents. Our SMT-based
approach for MAPF$^\mathcal{R}$ called SMT-CBS$^\mathcal{R}$ reformulates the
Conflict-based Search (CBS) algorithm in terms of SMT concepts. We suggest lazy
generation of decision variables and constraints. Each time a new conflict is
discovered, the underlying encoding is extended with new variables and
constraints to eliminate the conflict. We compared SMT-CBS$^\mathcal{R}$ and
adaptations of CBS for the continuous variant of MAPF experimentally.
| [
{
"version": "v1",
"created": "Sat, 23 Mar 2019 13:27:32 GMT"
}
] | 1,553,558,400,000 | [
[
"Surynek",
"Pavel",
""
]
] |
1903.09850 | Marcello Balduccini | Marcello Balduccini and Emily LeBlanc | Action-Centered Information Retrieval | Under consideration in Theory and Practice of Logic Programming
(TPLP) | Theory and Practice of Logic Programming 20 (2020) 249-272 | 10.1017/S1471068419000097 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information Retrieval (IR) aims at retrieving documents that are most
relevant to a query provided by a user. Traditional techniques rely mostly on
syntactic methods. In some cases, however, links at a deeper semantic level
must be considered. In this paper, we explore a type of IR task in which
documents describe sequences of events, and queries are about the state of the
world after such events. In this context, successfully matching documents and
query requires considering the events' possibly implicit, uncertain effects and
side-effects. We begin by analyzing the problem, then propose an action
language based formalization, and finally automate the corresponding IR task
using Answer Set Programming.
| [
{
"version": "v1",
"created": "Sat, 23 Mar 2019 17:34:25 GMT"
}
] | 1,582,070,400,000 | [
[
"Balduccini",
"Marcello",
""
],
[
"LeBlanc",
"Emily",
""
]
] |
1903.10187 | Christoph Benzm\"uller | Christoph Benzm\"uller, Xavier Parent, Leendert van der Torre | Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy
Framework, Methodology, and Tool Support | 50 pages; 10 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A framework and methodology---termed LogiKEy---for the design and engineering
of ethical reasoners, normative theories and deontic logics is presented. The
overall motivation is the development of suitable means for the control and
governance of intelligent autonomous systems. LogiKEy's unifying formal
framework is based on semantical embeddings of deontic logics, logic
combinations and ethico-legal domain theories in expressive classical
higher-order logic (HOL). This meta-logical approach enables the provision of
powerful tool support in LogiKEy: off-the-shelf theorem provers and model
finders for HOL are assisting the LogiKEy designer of ethical intelligent
agents to flexibly experiment with underlying logics and their combinations,
with ethico-legal domain theories, and with concrete examples---all at the same
time. Continuous improvements of these off-the-shelf provers directly boost
the reasoning performance in LogiKEy. Case studies, in which the
LogiKEy framework and methodology have been applied and tested, give evidence
that HOL's undecidability often does not hinder efficient experimentation.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2019 09:01:27 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Aug 2019 13:05:11 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Aug 2019 09:46:18 GMT"
},
{
"version": "v4",
"created": "Sun, 18 Aug 2019 06:29:46 GMT"
},
{
"version": "v5",
"created": "Fri, 27 Mar 2020 12:24:57 GMT"
},
{
"version": "v6",
"created": "Sun, 24 May 2020 09:21:53 GMT"
}
] | 1,590,451,200,000 | [
[
"Benzmüller",
"Christoph",
""
],
[
"Parent",
"Xavier",
""
],
[
"van der Torre",
"Leendert",
""
]
] |
1903.10325 | Gary Merrill | Gary H. Merrill | Ontology, Ontologies, and Science | null | Topoi 30 (2011) 71-83 | 10.1007/s11245-011-9091-x | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Philosophers frequently struggle with the relation of metaphysics to the
everyday world, with its practical value, and with its relation to empirical
science. This paper distinguishes several different models of the relation
between philosophical ontology and applied (scientific) ontology that have been
advanced in the history of philosophy. Adoption of a strong participation model
for the philosophical ontologist in science is urged, and requirements and
consequences of the participation model are explored. This approach provides
both a principled view and justification of the role of the philosophical
ontologist in contemporary empirical science as well as guidelines for
integrating philosophers and philosophical contributions into the practice of
science.
| [
{
"version": "v1",
"created": "Sat, 9 Mar 2019 19:05:25 GMT"
}
] | 1,553,731,200,000 | [
[
"Merrill",
"Gary H.",
""
]
] |
1903.10559 | Luis A. Pineda | Luis A. Pineda | The Mode of Computing | 47 pages, 7 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Turing Machine is the paradigmatic case of computing machines, but there
are others such as analogical, connectionist, quantum and diverse forms of
unconventional computing, each based on a particular intuition of the
phenomenon of computing. This variety can be captured in terms of system
levels, re-interpreting and generalizing Newell's hierarchy, which includes the
knowledge level at the top and the symbol level immediately below it. In this
re-interpretation the knowledge level consists of human knowledge and the
symbol level is generalized into a new level that here is called The Mode of
Computing. Mental processes performed by natural brains are often informally
thought of as computing processes, and the brain is regarded as akin to
computing machinery. However, if natural computing does exist, it should be characterized
on its own. A proposal to this effect is that natural computing appeared
when interpretations were first made by biological entities, so natural
computing and interpreting are two aspects of the same phenomenon, or that
consciousness and experience are the manifestations of computing/interpreting.
By analogy with computing machinery, there must be a system level at the top of
the neural circuitry and directly below the knowledge level that is named here
The Mode of Natural Computing. If it turns out that such a putative object does
not exist the proposition that the mind is a computing process should be
dropped; but characterizing it would come with solving the hard problem of
consciousness.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2019 19:25:16 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Oct 2019 17:20:01 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Sep 2021 08:17:52 GMT"
},
{
"version": "v4",
"created": "Mon, 9 Oct 2023 04:18:55 GMT"
}
] | 1,696,896,000,000 | [
[
"Pineda",
"Luis A.",
""
]
] |
1903.10605 | Riley Simmons-Edler | Riley Simmons-Edler, Ben Eisner, Eric Mitchell, Sebastian Seung,
Daniel Lee | Q-Learning for Continuous Actions with Cross-Entropy Guided Policies | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Off-Policy reinforcement learning (RL) is an important class of methods for
many problem domains, such as robotics, where the cost of collecting data is
high and on-policy methods are consequently intractable. Standard methods for
applying Q-learning to continuous-valued action domains involve iteratively
sampling the Q-function to find a good action (e.g. via hill-climbing), or
learning a policy network at the same time as the Q-function (e.g. DDPG). Both
approaches make tradeoffs between stability, speed, and accuracy. We propose a
novel approach, called Cross-Entropy Guided Policies, or CGP, that draws
inspiration from both classes of techniques. CGP aims to combine the stability
and performance of iterative sampling policies with the low computational cost
of a policy network. Our approach trains the Q-function using iterative
sampling with the Cross-Entropy Method (CEM), while training a policy network
to imitate CEM's sampling behavior. We demonstrate that our method is more
stable to train than state-of-the-art policy network methods, while preserving
equivalent inference-time compute costs, and achieving competitive total reward
on standard benchmarks.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2019 21:46:58 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Mar 2019 21:52:02 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Jul 2019 21:03:09 GMT"
}
] | 1,562,112,000,000 | [
[
"Simmons-Edler",
"Riley",
""
],
[
"Eisner",
"Ben",
""
],
[
"Mitchell",
"Eric",
""
],
[
"Seung",
"Sebastian",
""
],
[
"Lee",
"Daniel",
""
]
] |
1903.11678 | Ahmed Khalifa | Debosmita Bhaumik, Ahmed Khalifa, Michael Cerny Green, Julian Togelius | Tree Search vs Optimization Approaches for Map Generation | 10 pages, 9 figures, published at AIIDE 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Search-based procedural content generation uses stochastic global
optimization algorithms to search for game content. However, standard tree
search algorithms can be competitive with evolution on some optimization
problems. We investigate the applicability of several tree search methods to
level generation and compare them systematically with several optimization
algorithms, including evolutionary algorithms. We compare them on three
different game level generation problems: Binary, Zelda, and Sokoban. We
introduce two new representations that can help tree search algorithms deal
with the large branching factor of the generation problem. We find that in
general, optimization algorithms clearly outperform tree search algorithms, but
given the right problem representation certain tree search algorithms perform
similarly to optimization algorithms, and in one particular problem, we see
surprisingly strong results from MCTS.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2019 19:53:29 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2020 22:00:00 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Aug 2020 02:34:56 GMT"
}
] | 1,597,363,200,000 | [
[
"Bhaumik",
"Debosmita",
""
],
[
"Khalifa",
"Ahmed",
""
],
[
"Green",
"Michael Cerny",
""
],
[
"Togelius",
"Julian",
""
]
] |
1903.11723 | Abdur Rakib | Abba Lawan and Abdur Rakib | The Semantic Web Rule Language Expressiveness Extensions-A Survey | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Semantic Web Rule Language (SWRL) is a direct extension of OWL 2 DL with
a subset of RuleML, and it is designed to be the rule language of the Semantic
Web. This paper explores the state-of-the-art of SWRL's expressiveness
extensions proposed over time. As a motivation, the effectiveness of the
SWRL/OWL combination in modeling domain facts is discussed while some of the
common expressive limitations of the combination are also highlighted. The
paper then classifies and presents the relevant language extensions of the SWRL
and their added expressive powers to the original SWRL definition. Furthermore,
it provides a comparative analysis of the syntax and semantics of the proposed
extensions. In conclusion, the decidability requirement and usability of each
expressiveness extension are evaluated with a view towards efficient inclusion
into OWL ontologies.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2019 23:03:48 GMT"
}
] | 1,553,817,600,000 | [
[
"Lawan",
"Abba",
""
],
[
"Rakib",
"Abdur",
""
]
] |
1903.11777 | Guang Hu | Guang Hu, Tim Miller and Nir Lipovetzky | What you get is what you see: Decomposing Epistemic Planning using
Functional STRIPS | 20 pages, 3 figures, 4 experiments, journal paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epistemic planning --- planning with knowledge and belief --- is essential in
many multi-agent and human-agent interaction domains. Most state-of-the-art
epistemic planners solve this problem by compiling to propositional classical
planning, for example, generating all possible knowledge atoms, or compiling
epistemic formula to normal forms. However, these methods become
computationally infeasible as problems grow. In this paper, we decompose
epistemic planning by delegating reasoning about epistemic formula to an
external solver. We do this by modelling the problem using \emph{functional
STRIPS}, which is more expressive than standard STRIPS and supports the use of
external, black-box functions within action models. Exploiting recent work that
demonstrates the relationship between what an agent `sees' and what it knows,
we allow modellers to provide new implementations of external functions. These
define what agents see in their environment, allowing new epistemic logics to
be defined without changing the planner. As a result, it increases the
capability and flexibility of the epistemic model itself, and avoids the
exponential pre-compilation step. We ran evaluations on well-known epistemic
planning benchmarks to compare with an existing state-of-the-art planner, and
on new scenarios based on different external functions. The results show that
our planner scales significantly better than the state-of-the-art planner
against which we compared, and can express problems more succinctly.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2019 03:34:45 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Apr 2019 06:11:43 GMT"
}
] | 1,554,249,600,000 | [
[
"Hu",
"Guang",
""
],
[
"Miller",
"Tim",
""
],
[
"Lipovetzky",
"Nir",
""
]
] |
1903.11857 | Lianmeng Jiao | Lianmeng Jiao and Xiaojiao Geng | Analysis and Extension of the Evidential Reasoning Algorithm for
Multiple Attribute Decision Analysis with Uncertainty | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In multiple attribute decision analysis (MADA) problems, one often needs to
deal with assessment information with uncertainty. The evidential reasoning
approach is one of the most effective methods to deal with such MADA problems.
As kernel of the evidential reasoning approach, an original evidential
reasoning (ER) algorithm was first proposed by Yang et al., and later they
modified the ER algorithm in order to satisfy the proposed four synthesis
axioms which a rational aggregation process needs to satisfy. However, up
to present, the essential difference of the two ER algorithms as well as the
rationality of the synthesis axioms are still unclear. In this paper, we
analyze the ER algorithms in Dempster-Shafer theory (DST) framework and prove
that the original ER algorithm follows the reliability discounting and
combination scheme, while the modified one follows the importance discounting
and combination scheme. Further we reveal that the four synthesis axioms are
not valid criteria to check the rationality of one attribute aggregation
algorithm. Based on these new findings, an extended ER algorithm is proposed to
take into account both the reliability and importance of different attributes,
which provides a more general attribute aggregation scheme for MADA with
uncertainty. A motorcycle performance assessment problem is examined to
illustrate the proposed algorithm.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2019 09:37:50 GMT"
}
] | 1,553,817,600,000 | [
[
"Jiao",
"Lianmeng",
""
],
[
"Geng",
"Xiaojiao",
""
]
] |
1903.12508 | Simon Lucas | Simon M. Lucas, Alexander Dockhorn, Vanessa Volz, Chris Bamford,
Raluca D. Gaina, Ivan Bravi, Diego Perez-Liebana, Sanaz Mostaghim, Rudolf
Kruse | A Local Approach to Forward Model Learning: Results on the Game of Life
Game | Submitted to IEEE Conference on Games 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the effect of learning a forward model on the
performance of a statistical forward planning agent. We transform Conway's Game
of Life simulation into a single-player game where the objective can be either
to preserve as much life as possible or to extinguish all life as quickly as
possible.
In order to learn the forward model of the game, we formulate the problem in
a novel way that learns the local cell transition function by creating a set of
supervised training data and predicting the next state of each cell in the grid
based on its current state and immediate neighbours. Using this method we are
able to harvest sufficient data to learn perfect forward models by observing
only a few complete state transitions, using either a look-up table, a decision
tree or a neural network.
In contrast, learning the complete state transition function is a much harder
task and our initial efforts to do this using deep convolutional auto-encoders
were less successful.
We also investigate the effects of imperfect learned models on prediction
errors and game-playing performance, and show that even models with significant
errors can provide good performance.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2019 13:17:15 GMT"
}
] | 1,554,076,800,000 | [
[
"Lucas",
"Simon M.",
""
],
[
"Dockhorn",
"Alexander",
""
],
[
"Volz",
"Vanessa",
""
],
[
"Bamford",
"Chris",
""
],
[
"Gaina",
"Raluca D.",
""
],
[
"Bravi",
"Ivan",
""
],
[
"Perez-Liebana",
"Diego",
""
],
[
"Mostaghim",
"Sanaz",
""
],
[
"Kruse",
"Rudolf",
""
]
] |
1903.12517 | Chen Jingye | Jieneng Chen, Jingye Chen, Ruiming Zhang, Xiaobin Hu | Towards Brain-inspired System: Deep Recurrent Reinforcement Learning for
Simulated Self-driving Agent | 8 pages, 5 figures, 1 table | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | An effective way to achieve intelligence is to simulate various intelligent
behaviors in the human brain. In recent years, bio-inspired learning methods
have emerged, and they are different from the classical mathematical
programming principle. From the perspective of brain inspiration, reinforcement
learning has gained additional interest in solving decision-making tasks as
increasing neuroscientific research demonstrates that significant links exist
between reinforcement learning and specific neural substrates. Because of the
tremendous research that focuses on human brains and reinforcement learning,
scientists have investigated how robots can autonomously tackle complex tasks
in the form of self-driving agent control in a human-like way. In this study,
we propose an end-to-end architecture using a novel deep-Q-network architecture
in conjunction with recurrence to resolve the problem in the field of
simulated self-driving. The main contribution of this study is that we trained
the driving agent using a brain-inspired trial-and-error technique, in line
with the real-world situation. Besides, there are three innovations in
the proposed learning network: raw screen outputs are the only information
which the driving agent can rely on, a weighted layer that enhances the
differences of the lengthy episode, and a modified replay mechanism that
overcomes the problem of sparsity and accelerates learning. The proposed
network was trained and tested in a third-party OpenAI Gym environment.
After training for several episodes, the resulting driving agent performed
advanced behaviors in the given scene. We hope that in the future, the proposed
brain-inspired learning system would inspire practicable self-driving control
solutions.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2019 13:31:44 GMT"
}
] | 1,554,076,800,000 | [
[
"Chen",
"Jieneng",
""
],
[
"Chen",
"Jingye",
""
],
[
"Zhang",
"Ruiming",
""
],
[
"Hu",
"Xiaobin",
""
]
] |
1904.00103 | Milo\v{s} Simi\'c | Milo\v{s} Simi\'c (University of Belgrade, Belgrade, Serbia) | How to Estimate the Ability of a Metaheuristic Algorithm to Guide
Heuristics During Optimization | 24 pages, 3 figures, submitted to Journal of Heuristics | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metaheuristics are general methods that guide application of concrete
heuristic(s) to problems that are too hard to solve using exact algorithms.
However, even though a growing body of literature has been devoted to their
statistical evaluation, the approaches proposed so far are able to assess only
coupled effects of metaheuristics and heuristics. They do not reveal
anything about how efficient the examined metaheuristic is at guiding its
subordinate heuristic(s), nor do they provide information about how much the
heuristic component of the combined algorithm contributes to the overall
performance. In this paper, we propose a simple yet effective methodology of
doing so by deriving a naive, placebo metaheuristic from the one being studied
and comparing the distributions of chosen performance metrics for the two
methods. We propose three measures of difference between the two distributions.
Those measures, which we call BER values (benefit, equivalence, risk) are based
on a preselected threshold of practical significance which represents the
minimal difference between two performance scores required for them to be
considered practically different. We illustrate usefulness of our methodology
on the example of Simulated Annealing, Boolean Satisfiability Problem, and the
Flip heuristic.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2019 22:06:40 GMT"
}
] | 1,554,163,200,000 | [
[
"Simić",
"Miloš",
"",
"University of Belgrade, Belgrade, Serbia"
]
] |
1904.00317 | Patrick Rodler | Patrick Rodler and Michael Eichholzer | A New Expert Questioning Approach to More Efficient Fault Localization
in Ontologies | This is a preprint of the article "Patrick Rodler. One step at a
time: An efficient approach to query-based ontology debugging.
Knowledge-Based Systems 108987, 2022." | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | When ontologies reach a certain size and complexity, faults such as
inconsistencies, unsatisfiable classes or wrong entailments are hardly
avoidable. Locating the incorrect axioms that cause these faults is a hard and
time-consuming task. Addressing this issue, several techniques for
semi-automatic fault localization in ontologies have been proposed. Often,
these approaches involve a human expert who provides answers to
system-generated questions about the intended (correct) ontology in order to
reduce the possible fault locations. To suggest questions that are as
informative as possible, existing methods draw on various algorithmic
optimizations as well as
heuristics. However, these computations are often based on certain assumptions
about the interacting user.
In this work, we characterize and discuss different user types and show that
existing approaches do not achieve optimal efficiency for all of them. As a
remedy, we suggest a new type of expert question which aims at fitting the
answering behavior of all analyzed experts. Moreover, we present an algorithm
to optimize this new query type which is fully compatible with the (tried and
tested) heuristics used in the field. Experiments on faulty real-world
ontologies show the potential of the new querying method for minimizing the
expert consultation time, independent of the expert type. Besides, the gained
insights can inform the design of interactive debugging tools towards better
meeting their users' needs.
| [
{
"version": "v1",
"created": "Sun, 31 Mar 2019 01:25:52 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Aug 2022 01:32:32 GMT"
}
] | 1,659,916,800,000 | [
[
"Rodler",
"Patrick",
""
],
[
"Eichholzer",
"Michael",
""
]
] |
1904.00441 | Uk Jo | Uk Jo, Taehyun Jo, Wanjun Kim, Iljoo Yoon, Dongseok Lee, Seungho Lee | Cooperative Multi-Agent Reinforcement Learning Framework for Scalping
Trading | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore deep Reinforcement Learning(RL) algorithms for scalping trading
and found that there are no appropriate trading gym or agent examples. Thus we
propose a gym and agent, like the OpenAI Gym, for finance. Moreover, we
introduce a new RL framework based on our hybrid algorithm, which combines
supervised learning and RL and uses meaningful observations such as order book
and settlement data, informed by experience watching scalpers trade. That is
crucial information for deciding trading behavior. To feed these data into our
model, we use a spatio-temporal convolution layer, called Conv3D, for order
book data and a temporal CNN, called Conv1D, for settlement data. Those are
preprocessed by an episode filter we developed. The agent consists of four
sub-agents, each with its own clear goal, to make the best decision. Also, we
adopted value- and policy-based algorithms in our framework. With these features,
we could make the agent mimic scalpers as much as possible. RL algorithms have
already begun to transcend human capabilities in many domains. This approach
could be a starting point to beat humans in the financial stock market, too,
and be a good reference for anyone who wants to design RL algorithms in
real-world domains. Finally, we experiment with our framework and report
experimental progress.
| [
{
"version": "v1",
"created": "Sun, 31 Mar 2019 16:15:42 GMT"
}
] | 1,554,163,200,000 | [
[
"Jo",
"Uk",
""
],
[
"Jo",
"Taehyun",
""
],
[
"Kim",
"Wanjun",
""
],
[
"Yoon",
"Iljoo",
""
],
[
"Lee",
"Dongseok",
""
],
[
"Lee",
"Seungho",
""
]
] |
1904.00512 | Yi Wang | Yi Wang, Joohyung Lee | Elaboration Tolerant Representation of Markov Decision Process via
Decision-Theoretic Extension of Probabilistic Action Language pBC+ | 31 pages, 3 figures; Under consideration in Theory and Practice of
Logic Programming (TPLP). arXiv admin note: text overlap with
arXiv:1805.00634 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We extend probabilistic action language pBC+ with the notion of utility as in
decision theory. The semantics of the extended pBC+ can be defined as a
shorthand notation for a decision-theoretic extension of the probabilistic
answer set programming language LPMLN. Alternatively, the semantics of pBC+ can
also be defined in terms of Markov Decision Process (MDP), which in turn allows
for representing MDP in a succinct and elaboration-tolerant way as well as for
leveraging an MDP solver to compute pBC+. The idea led to the design of the
system pbcplus2mdp, which can find an optimal policy of a pBC+ action
description using an MDP solver. This paper is under consideration in Theory
and Practice of Logic Programming (TPLP).
| [
{
"version": "v1",
"created": "Mon, 1 Apr 2019 00:14:01 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Oct 2020 17:15:20 GMT"
}
] | 1,601,856,000,000 | [
[
"Wang",
"Yi",
""
],
[
"Lee",
"Joohyung",
""
]
] |
1904.01484 | Patrick Rodler | Patrick Rodler, Dietmar Jannach, Konstantin Schekotihin, Philipp
Fleiss | Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers? | This is a preprint of the paper "Patrick Rodler, Dietmar Jannach,
Konstantin Schekotihin, Philipp Fleiss. Are query-based ontology debuggers
really helping knowledge engineers? Knowledge-Based Systems, 179 (2019):
92-107" | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Real-world semantic or knowledge-based systems, e.g., in the biomedical
domain, can become large and complex. Tool support for the localization and
repair of faults within knowledge bases of such systems can therefore be
essential for their practical success. Correspondingly, a number of knowledge
base debugging approaches, in particular for ontology-based systems, were
proposed throughout recent years. Query-based debugging is a comparably recent
interactive approach that localizes the true cause of an observed problem by
asking knowledge engineers a series of questions. Concrete implementations of
this approach exist, such as the OntoDebug plug-in for the ontology editor
Prot\'eg\'e.
To validate that a newly proposed method is favorable over an existing one,
researchers often rely on simulation-based comparisons. Such an evaluation
approach however has certain limitations and often cannot fully inform us about
a method's true usefulness. We therefore conducted different user studies to
assess the practical value of query-based ontology debugging. One main insight
from the studies is that the considered interactive approach is indeed more
efficient than an alternative algorithmic debugging based on test cases. We
also observed that users frequently made errors in the process, which
highlights the importance of a careful design of the queries that users need to
answer.
| [
{
"version": "v1",
"created": "Tue, 2 Apr 2019 15:17:56 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 22:10:51 GMT"
}
] | 1,659,916,800,000 | [
[
"Rodler",
"Patrick",
""
],
[
"Jannach",
"Dietmar",
""
],
[
"Schekotihin",
"Konstantin",
""
],
[
"Fleiss",
"Philipp",
""
]
] |
1904.01540 | Nadisha-Marie Aliman | Nadisha-Marie Aliman and Leon Kester | Augmented Utilitarianism for AGI Safety | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In light of ongoing progress in research on artificially intelligent
systems exhibiting a steadily increasing problem-solving ability, the
identification of practicable solutions to the value alignment problem in AGI
Safety is becoming a matter of urgency. In this context, one preeminent
challenge that has been addressed by multiple researchers is the adequate
formulation of utility functions or equivalents reliably capturing human
ethical conceptions. However, the specification of suitable utility functions
harbors the risk of "perverse instantiation" for which no final consensus on
responsible proactive countermeasures has been achieved so far. Amidst this
background, we propose a novel socio-technological ethical framework denoted
Augmented Utilitarianism which directly alleviates the perverse instantiation
problem. We elaborate on how augmented by AI and more generally science and
technology, it might allow a society to craft and update ethical utility
functions while jointly undergoing a dynamical ethical enhancement. Further, we
elucidate the need to consider embodied simulations in the design of utility
functions for AGIs aligned with human values. Finally, we discuss future
prospects regarding the usage of the presented scientifically grounded ethical
framework and mention possible challenges.
| [
{
"version": "v1",
"created": "Tue, 2 Apr 2019 16:54:38 GMT"
}
] | 1,554,249,600,000 | [
[
"Aliman",
"Nadisha-Marie",
""
],
[
"Kester",
"Leon",
""
]
] |
1904.01883 | Ivan Bravi | Ivan Bravi and Simon Lucas and Diego Perez-Liebana and Jialin Liu | Rinascimento: Optimising Statistical Forward Planning Agents for Playing
Splendor | Submitted to IEEE Conference on Games 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Game-based benchmarks have been playing an essential role in the development
of Artificial Intelligence (AI) techniques. Providing diverse challenges is
crucial to push research toward innovation and understanding in modern
techniques. Rinascimento provides a parameterised partially-observable
multiplayer card-based board game, these parameters can easily modify the
rules, objectives and items in the game. We describe the framework in all its
features and the game-playing challenge, providing baseline game-playing AIs
and analysis of their skills. We give agents' hyper-parameter tuning a central
role in the experiments, highlighting how it can heavily influence performance.
The baseline agents contain several additional contributions to Statistical
Forward Planning algorithms.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2019 09:53:10 GMT"
}
] | 1,554,336,000,000 | [
[
"Bravi",
"Ivan",
""
],
[
"Lucas",
"Simon",
""
],
[
"Perez-Liebana",
"Diego",
""
],
[
"Liu",
"Jialin",
""
]
] |
1904.03008 | Yunlong Liu | Yunlong Liu, Jianyang Zheng | Combining Offline Models and Online Monte-Carlo Tree Search for Planning
from Scratch | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning in stochastic and partially observable environments is a central
issue in artificial intelligence. One commonly used technique for solving such
a problem is to first construct an accurate model. Although some recent
approaches have been proposed for learning optimal behaviour under model
uncertainty, prior knowledge about the environment is still needed to guarantee
the performance of the proposed algorithms. With the benefits of the Predictive
State Representations~(PSRs) approach for state representation and model
prediction, in this paper, we introduce an approach for planning from scratch,
where an offline PSR model is first learned and then combined with online
Monte-Carlo tree search for planning with model uncertainty. By comparing with
the state-of-the-art approach of planning with model uncertainty, we
demonstrated the effectiveness of the proposed approaches along with the proof
of their convergence. The effectiveness and scalability of our proposed
approach are also tested on the RockSample problem, which is infeasible for
the state-of-the-art BA-POMDP based approaches.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2019 11:57:41 GMT"
}
] | 1,554,681,600,000 | [
[
"Liu",
"Yunlong",
""
],
[
"Zheng",
"Jianyang",
""
]
] |
1904.03606 | Mohannad Babli | Mohannad Babli, Eva Onaindia, Eliseo Marzal | Extending planning knowledge using ontologies for goal opportunities | 10 pages, 8 Figures, 31st
International-Business-Information-Management-Association Conference, Milan
ITALY, date: APR 25-26, 2018 | 31st IBIMA Conference (2018), INNOVATION MANAGEMENT AND EDUCATION
EXCELLENCE THROUGH VISION 2020, VOLS IV-VI (3199-3208) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Approaches to goal-directed behaviour including online planning and
opportunistic planning tackle a change in the environment by generating
alternative goals to avoid failures or seize opportunities. However, current
approaches only address unanticipated changes related to objects or object
types already defined in the planning task that is being solved. This article
describes a domain-independent approach that advances the state of the art by
extending the knowledge of a planning task with relevant objects of new types.
The approach draws upon the use of ontologies, semantic measures, and ontology
alignment to accommodate newly acquired data that trigger the formulation of
goal opportunities inducing a better-valued plan.
| [
{
"version": "v1",
"created": "Sun, 7 Apr 2019 08:39:10 GMT"
}
] | 1,554,768,000,000 | [
[
"Babli",
"Mohannad",
""
],
[
"Onaindia",
"Eva",
""
],
[
"Marzal",
"Eliseo",
""
]
] |
1904.05405 | Cogan Shimizu | Cogan Shimizu and Quinn Hirt and Pascal Hitzler | MODL: A Modular Ontology Design Library | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pattern-based, modular ontologies have several beneficial properties that
lend themselves to FAIR data practices, especially as it pertains to
Interoperability and Reusability. However, developing such ontologies has a
high upfront cost, e.g. reusing a pattern is predicated upon being aware of its
existence in the first place. Thus, to help overcome these barriers, we have
developed MODL: a modular ontology design library. MODL is a curated collection
of well-documented ontology design patterns, drawn from a wide variety of
interdisciplinary use-cases. In this paper we present MODL as a resource,
discuss its use, and provide some examples of its contents.
| [
{
"version": "v1",
"created": "Wed, 10 Apr 2019 19:36:36 GMT"
}
] | 1,555,027,200,000 | [
[
"Shimizu",
"Cogan",
""
],
[
"Hirt",
"Quinn",
""
],
[
"Hitzler",
"Pascal",
""
]
] |
1904.06317 | Tom Silver | Tom Silver, Kelsey R. Allen, Alex K. Lew, Leslie Pack Kaelbling, Josh
Tenenbaum | Few-Shot Bayesian Imitation Learning with Logical Program Policies | AAAI 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans can learn many novel tasks from a very small number (1--5) of
demonstrations, in stark contrast to the data requirements of nearly tabula
rasa deep learning methods. We propose an expressive class of policies, a
strong but general prior, and a learning algorithm that, together, can learn
interesting policies from very few examples. We represent policies as logical
combinations of programs drawn from a domain-specific language (DSL), define a
prior over policies with a probabilistic grammar, and derive an approximate
Bayesian inference algorithm to learn policies from demonstrations. In
experiments, we study five strategy games played on a 2D grid with one shared
DSL. After a few demonstrations of each game, the inferred policies generalize
to new game instances that differ substantially from the demonstrations. Our
policy learning is 20--1,000x more data efficient than convolutional and fully
convolutional policy learning and many orders of magnitude more computationally
efficient than vanilla program induction. We argue that the proposed method is
an apt choice for tasks that have scarce training data and feature significant,
structured variation between task instances.
| [
{
"version": "v1",
"created": "Fri, 12 Apr 2019 16:51:01 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Nov 2019 15:34:48 GMT"
}
] | 1,574,121,600,000 | [
[
"Silver",
"Tom",
""
],
[
"Allen",
"Kelsey R.",
""
],
[
"Lew",
"Alex K.",
""
],
[
"Kaelbling",
"Leslie Pack",
""
],
[
"Tenenbaum",
"Josh",
""
]
] |
1904.06736 | Dhruv Ramani | Dhruv Ramani | A Short Survey On Memory Based Reinforcement Learning | arXiv admin note: text overlap with arXiv:1803.10760,
arXiv:1803.01846, arXiv:1702.08360, arXiv:1805.12375, arXiv:1507.06527,
arXiv:1810.02274, arXiv:1711.06677 by other authors | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) is a branch of machine learning which is employed
to solve various sequential decision making problems without proper
supervision. Due to the recent advancement of deep learning, the newly proposed
Deep-RL algorithms have been able to perform extremely well in sophisticated
high-dimensional environments. However, even after successes in many domains,
one of the major challenges in these approaches is the high number of
interactions with the environment required for efficient decision making.
Seeking inspiration from the brain, this problem can be solved by incorporating
instance-based learning, biasing decision making towards the memories of highly
rewarding experiences. This paper reviews various recent reinforcement learning
methods which incorporate external memory for decision making, and a survey of
them is presented. We provide an overview of the different methods, along with
their advantages and disadvantages, applications and the standard
experimentation settings used for memory-based models. This review hopes to be
a helpful resource providing key insights into recent advances in the field
and aiding its further development.
| [
{
"version": "v1",
"created": "Sun, 14 Apr 2019 18:18:45 GMT"
}
] | 1,555,459,200,000 | [
[
"Ramani",
"Dhruv",
""
]
] |
1904.07091 | Miquel Junyent | Miquel Junyent, Anders Jonsson, Vicen\c{c} G\'omez | Deep Policies for Width-Based Planning in Pixel Domains | In Proceedings of the 29th International Conference on Automated
Planning and Scheduling (ICAPS 2019). arXiv admin note: text overlap with
arXiv:1806.05898 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Width-based planning has demonstrated great success in recent years due to
its ability to scale independently of the size of the state space. For example,
Bandres et al. (2018) introduced a rollout version of the Iterated Width
algorithm whose performance compares well with humans and learning methods in
the pixel setting of the Atari games suite. In this setting, planning is done
on-line using the "screen" states and selecting actions by looking ahead into
the future. However, this algorithm is purely exploratory and does not leverage
past reward information. Furthermore, it requires the state to be factored into
features that need to be pre-defined for the particular task, e.g., the B-PROST
pixel features. In this work, we extend width-based planning by incorporating
an explicit policy in the action selection mechanism. Our method, called
$\pi$-IW, interleaves width-based planning and policy learning using the
state-actions visited by the planner. The policy estimate takes the form of a
neural network and is in turn used to guide the planning step, thus reinforcing
promising paths. Surprisingly, we observe that the representation learned by
the neural network can be used as a feature space for the width-based planner
without degrading its performance, thus removing the requirement of pre-defined
features for the planner. We compare $\pi$-IW with previous width-based methods
and with AlphaZero, a method that also interleaves planning and learning, in
simple environments, and show that $\pi$-IW has superior performance. We also
show that the $\pi$-IW algorithm outperforms previous width-based methods in
the pixel setting of the Atari games suite.
| [
{
"version": "v1",
"created": "Fri, 12 Apr 2019 10:50:12 GMT"
},
{
"version": "v2",
"created": "Tue, 12 May 2020 09:32:49 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Oct 2021 14:14:19 GMT"
}
] | 1,633,478,400,000 | [
[
"Junyent",
"Miquel",
""
],
[
"Jonsson",
"Anders",
""
],
[
"Gómez",
"Vicenç",
""
]
] |
1904.07491 | Moyuru Kurita | Moyuru Kurita, Kunihito Hoki | Method for Constructing Artificial Intelligence Player with Abstraction
to Markov Decision Processes in Multiplayer Game of Mahjong | Copyright 2019 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method for constructing artificial intelligence (AI) of mahjong,
which is a multiplayer imperfect information game. Since the size of the game
tree is huge, constructing an expert-level AI player of mahjong is challenging.
We define multiple Markov decision processes (MDPs) as abstractions of mahjong
to construct effective search trees. We also introduce two methods of inferring
state values of the original mahjong using these MDPs. We evaluated the
effectiveness of our method using gameplays vis-\`{a}-vis the current strongest
AI player.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2019 06:43:05 GMT"
}
] | 1,555,459,200,000 | [
[
"Kurita",
"Moyuru",
""
],
[
"Hoki",
"Kunihito",
""
]
] |
1904.07786 | Kieran Greer Dr | Kieran Greer | A Pattern-Hierarchy Classifier for Reduced Teaching | null | WSEAS Transactions on Computers, ISSN / E-ISSN: 1109-2750 /
2224-2872, Volume 19, 2020, Art. #23, pp. 183-193 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a design that can be used for Explainable AI. The lower
level is a nested ensemble of patterns created by self-organisation. The upper
level is a hierarchical tree, where nodes are linked through individual
concepts, so there is a transition from mixed ensemble masses to specific
categories. Lower-level pattern ensembles are learned in an unsupervised
manner and then split into branches when it is clear that the category has
changed. Links between the two levels indicate that the concepts are learned,
and missing links indicate that they are only guessed. This paper proposes some
new clustering algorithms for producing the pattern ensembles, which are themselves
an ensemble which converges through aggregations. Multiple solutions are also
combined, to make the final result more robust. One measure of success is how
coherent these ensembles are, which means that every data row in the cluster
belongs to the same category. The total number of clusters is also important
and the teaching phase can correct the ensemble estimates with respect to both
of these. A teaching phase would then help the classifier to learn the true
category for each input row. During this phase, any classifier can learn or
infer correct classifications from some other classifier's knowledge, thereby
reducing the required number of presentations. As the information is added,
cross-referencing between the two structures allows it to be used more widely,
where a unique structure can build up that would not be possible by either
method separately.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2019 16:08:24 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Oct 2020 14:56:32 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Oct 2020 09:41:56 GMT"
}
] | 1,606,694,400,000 | [
[
"Greer",
"Kieran",
""
]
] |
1904.08123 | Avi Rosenfeld | Avi Rosenfeld, Ariella Richardson | Explainability in Human-Agent Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a taxonomy of explainability in Human-Agent Systems. We
consider fundamental questions about the Why, Who, What, When and How of
explainability. First, we define explainability, and its relationship to the
related terms of interpretability, transparency, explicitness, and
faithfulness. These definitions allow us to answer why explainability is needed
in the system, to whom it is geared, and what explanations can be generated to
meet this need. We then consider when the user should be presented with this
information. Last, we consider how objective and subjective measures can be
used to evaluate the entire system. This last question is the most encompassing
as it will need to evaluate all other issues regarding explainability.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2019 08:18:12 GMT"
}
] | 1,555,545,600,000 | [
[
"Rosenfeld",
"Avi",
""
],
[
"Richardson",
"Ariella",
""
]
] |
1904.08303 | Oleh Andriichuk | Oleh Andriichuk, Vitaliy Tsyganok, Dmitry Lande, Oleg Chertov,
Yaroslava Porplenko | Usage of Decision Support Systems for Conflicts Modelling during
Information Operations Recognition | 8 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Application of decision support systems for conflict modeling in information
operations recognition is presented. An information operation is considered as
a complex weakly structured system. The model of conflict between two subjects
is proposed based on the second-order rank reflexive model. A method is
described for constructing the design pattern for knowledge bases of decision
support systems. A methodology is then proposed for using decision support
systems to model conflicts in information operations recognition, based on
expert knowledge and content monitoring.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2019 16:58:51 GMT"
}
] | 1,555,545,600,000 | [
[
"Andriichuk",
"Oleh",
""
],
[
"Tsyganok",
"Vitaliy",
""
],
[
"Lande",
"Dmitry",
""
],
[
"Chertov",
"Oleg",
""
],
[
"Porplenko",
"Yaroslava",
""
]
] |
1904.08626 | Elena Camossi | Maximilian Zocholl, Elena Camossi, Anne-Laure Jousselme, Cyril Ray | Ontology-based Design of Experiments on Big Data Solutions | Pre-print and extended version of the poster paper presented at the
14th International Conference on Semantic Systems | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Big data solutions are designed to cope with data of huge Volume and wide
Variety, that need to be ingested at high Velocity and have potential Veracity
issues, challenging characteristics that are usually referred to as the "4Vs of
Big Data". In order to evaluate possibly complex big data solutions, stress
tests require to assess a large number of combinations of sub-components
jointly with the possible big data variations. A formalization of the Design of
Experiments (DoE) on big data solutions is aimed at ensuring the
reproducibility of the experiments, facilitating their partitioning into
sub-experiments and guaranteeing the consistency of their outcomes in a global
assessment. In this paper, an ontology-based approach is proposed to support
the evaluation of a big data system in two ways. Firstly, the approach
formalizes a decomposition and recombination of the big data solution, allowing
for the aggregation of component evaluation results at inter-component level.
Secondly, existing work on DoE is translated into an ontology for supporting
the selection of experiments. The proposed ontology-based approach offers the
possibility to combine knowledge from the evaluation domain and the application
domain. It exploits domain and inter-domain specific restrictions on the factor
combinations in order to reduce the number of experiments. Contrary to existing
approaches, the proposed use of ontologies is not limited to the assertional
description and exploitation of past experiments but offers richer
terminological descriptions for the development of a DoE from scratch. As an
application example, a maritime big data solution to the problem of detecting
and predicting vessel suspicious behaviour through mobility analysis is
selected. The article is concluded with a sketch of future works.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2019 07:52:54 GMT"
}
] | 1,555,632,000,000 | [
[
"Zocholl",
"Maximilian",
""
],
[
"Camossi",
"Elena",
""
],
[
"Jousselme",
"Anne-Laure",
""
],
[
"Ray",
"Cyril",
""
]
] |
1904.09134 | Marco Maratea | Martin Gebser, Marco Maratea, Francesco Ricca | The Seventh Answer Set Programming Competition: Design and Results | 28 pages | Theory and Practice of Logic Programming 20 (2020) 176-204 | 10.1017/S1471068419000061 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer Set Programming (ASP) is a prominent knowledge representation language
with roots in logic programming and non-monotonic reasoning. Biennial ASP
competitions are organized in order to furnish challenging benchmark
collections and assess the advancement of the state of the art in ASP solving.
In this paper, we report on the design and results of the Seventh ASP
Competition, jointly organized by the University of Calabria (Italy), the
University of Genova (Italy), and the University of Potsdam (Germany), in
affiliation with the 14th International Conference on Logic Programming and
Non-Monotonic Reasoning (LPNMR 2017). (Under consideration for acceptance in
TPLP).
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2019 09:51:42 GMT"
}
] | 1,582,070,400,000 | [
[
"Gebser",
"Martin",
""
],
[
"Maratea",
"Marco",
""
],
[
"Ricca",
"Francesco",
""
]
] |
1904.09366 | Buser Say | Buser Say, Scott Sanner, Sylvie Thi\'ebaux | Reward Potentials for Planning with Learned Neural Network Transition
Models | To appear in the proceedings of the 25th International Conference on
Principles and Practice of Constraint Programming | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimal planning with respect to learned neural network (NN) models in
continuous action and state spaces using mixed-integer linear programming
(MILP) is a challenging task for branch-and-bound solvers due to the poor
linear relaxation of the underlying MILP model. For a given set of features,
potential heuristics provide an efficient framework for computing bounds on
cost (reward) functions. In this paper, we model the problem of finding optimal
potential bounds for learned NN models as a bilevel program, and solve it using
a novel finite-time constraint generation algorithm. We then strengthen the
linear relaxation of the underlying MILP model by introducing constraints to
bound the reward function based on the precomputed reward potentials.
Experimentally, we show that our algorithm efficiently computes reward
potentials for learned NN models, and that the overhead of computing reward
potentials is justified by the overall strengthening of the underlying MILP
model for the task of planning over long horizons.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2019 23:15:59 GMT"
},
{
"version": "v2",
"created": "Tue, 7 May 2019 07:03:01 GMT"
},
{
"version": "v3",
"created": "Sun, 19 May 2019 11:01:30 GMT"
},
{
"version": "v4",
"created": "Fri, 26 Jul 2019 14:54:45 GMT"
}
] | 1,564,358,400,000 | [
[
"Say",
"Buser",
""
],
[
"Sanner",
"Scott",
""
],
[
"Thiébaux",
"Sylvie",
""
]
] |
1904.09422 | Ario Santoso | Ario Santoso, Michael Felderer | Specification-Driven Predictive Business Process Monitoring | This article significantly extends the previous work in
https://doi.org/10.1007/978-3-319-91704-7_7 which has a technical report in
arXiv:1804.00617. This article and the previous work have a coauthor in
common | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predictive analysis in business process monitoring aims at forecasting the
future information of a running business process. The prediction is typically
made based on the model extracted from historical process execution logs (event
logs). In practice, different business domains might require different kinds of
predictions. Hence, it is important to have a means for properly specifying the
desired prediction tasks, and a mechanism to deal with these various prediction
tasks. Although there have been many studies in this area, they mostly focus on
a specific prediction task. This work introduces a language for specifying the
desired prediction tasks, and this language allows us to express various kinds
of prediction tasks. This work also presents a mechanism for automatically
creating the corresponding prediction model based on the given specification.
Differently from previous studies, instead of focusing on a particular
prediction task, we present an approach to deal with various prediction tasks
based on the given specification of the desired prediction tasks. We also
provide an implementation of the approach which is used to conduct experiments
using real-life event logs.
| [
{
"version": "v1",
"created": "Sat, 20 Apr 2019 09:01:23 GMT"
}
] | 1,556,150,400,000 | [
[
"Santoso",
"Ario",
""
],
[
"Felderer",
"Michael",
""
]
] |
1904.09443 | Razieh Mehri | Razieh Mehri and Volker Haarslev and Hamidreza Chinaei | Learning the Right Expansion-ordering Heuristics for Satisfiability
Testing in OWL Reasoners | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Web Ontology Language (OWL) reasoners are used to infer new logical relations
from ontologies. While inferring new facts, these reasoners can be further
optimized, e.g., by properly ordering disjuncts in disjunction expressions of
ontologies for satisfiability testing of concepts. Different expansion-ordering
heuristics have been developed for this purpose. The built-in heuristics in
these reasoners determine the order for branches in search trees while each
heuristic choice causes different effects for various ontologies depending on
the ontologies' syntactic structure and probably other features as well. A
learning-based approach that takes into account the features aims to select an
appropriate expansion-ordering heuristic for each ontology. The proper choice
is expected to accelerate the reasoning process for the reasoners. In this
paper, the effect of our methodology is investigated on a well-known reasoner,
JFact. Our experiments show an average speedup of one to two orders of
magnitude for satisfiability testing after applying the learning methodology
for selecting the right expansion-ordering heuristics.
| [
{
"version": "v1",
"created": "Sat, 20 Apr 2019 12:58:56 GMT"
}
] | 1,555,977,600,000 | [
[
"Mehri",
"Razieh",
""
],
[
"Haarslev",
"Volker",
""
],
[
"Chinaei",
"Hamidreza",
""
]
] |
1904.09837 | Md. Noor-E-Alam | Md Mahmudul Hassan, Dizuo Jiang, A. M. M. Sharif Ullah and Md.
Noor-E-Alam | Resilient Supplier Selection in Logistics 4.0 with Heterogeneous
Information | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supplier selection problem has gained extensive attention in the prior
studies. However, research based on the Fuzzy Multi-Attribute Decision Making
(F-MADM) approach to ranking resilient suppliers in Logistics 4.0 is still in
its infancy. The traditional MADM approach fails to address the resilient
supplier selection problem in Logistics 4.0, primarily because of the large
amount of data concerning some attributes that are quantitative yet difficult
to process while making decisions. Besides, some qualitative attributes
prevalent in Logistics 4.0 entail imprecise perceptual or judgmental
decision-relevant information, and are substantially different from those
considered in traditional supplier selection problems. This study develops a
Decision Support System (DSS) that will help the decision maker to incorporate
and process such imprecise heterogeneous data in a unified framework to rank a
set of resilient suppliers in the Logistics 4.0 environment. The proposed framework induces a
triangular fuzzy number from large-scale temporal data using
probability-possibility consistency principle. Large number of non-temporal
data presented graphically are computed by extracting granular information that
are imprecise in nature. Fuzzy linguistic variables are used to map the
qualitative attributes. Finally, fuzzy based TOPSIS method is adopted to
generate the ranking score of alternative suppliers. These ranking scores are
used as input in a Multi-Choice Goal Programming (MCGP) model to determine
optimal order allocation for respective suppliers. Finally, a sensitivity
analysis assesses how the Suppliers Cost versus Resilience Index (SCRI) changes
when differential priorities are set for respective cost and resilience
attributes.
| [
{
"version": "v1",
"created": "Wed, 10 Apr 2019 03:33:37 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Jul 2019 21:18:53 GMT"
},
{
"version": "v3",
"created": "Sat, 13 Jul 2019 04:04:02 GMT"
}
] | 1,563,235,200,000 | [
[
"Hassan",
"Md Mahmudul",
""
],
[
"Jiang",
"Dizuo",
""
],
[
"Ullah",
"A. M. M. Sharif",
""
],
[
"Noor-E-Alam",
"Md.",
""
]
] |
1904.09845 | Mohannad Babli | Mohannad Babli and Eva Onaindia | A context-aware knowledge acquisition for planning applications using
ontologies | 13 pages, 11 Figures, conference. arXiv admin note: text overlap with
arXiv:1904.03606 | 33rd International Business Information Management (IBIMA),
INNOVATION MANAGEMENT AND EDUCATION EXCELLENCE THROUGH VISION 2020 (pp.
3199-3208) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated planning technology has developed significantly. Designing a
planning model that allows an automated agent to be capable of reacting
intelligently to unexpected events in a real execution environment yet remains
a challenge. This article describes a domain-independent approach to allow the
agent to be context-aware of its execution environment and the task it
performs, acquire new information that is guaranteed to be related and more
importantly manageable, and integrate such information into its model through
the use of ontologies and semantic operations to autonomously formulate new
objectives, resulting in a more human-like behaviour for handling unexpected
events in the context of opportunities.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2019 13:48:02 GMT"
}
] | 1,555,977,600,000 | [
[
"Babli",
"Mohannad",
""
],
[
"Onaindia",
"Eva",
""
]
] |
1904.11106 | Solimul Chowdhury | Md Solimul Chowdhury and Martin M\"uller and Jia-Huai You | Characterization of Glue Variables in CDCL SAT Solving | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A state-of-the-art criterion to evaluate the importance of a given learned
clause is called Literal Block Distance (LBD) score. It measures the number of
distinct decision levels in a given learned clause. The lower the LBD score of
a learned clause, the better is its quality. The learned clauses with LBD score
of 2, called glue clauses, are known to possess high pruning power which are
never deleted from the clause databases of the modern CDCL SAT solvers. In this
work, we relate glue clauses to decision variables. We call the variables that
appeared in at least one glue clause up to the current search state Glue
Variables. We first show experimentally, by running the state-of-the-art CDCL
SAT solver MapleLCMDist on benchmarks from SAT Competition-2017 and 2018, that
branching decisions with glue variables are categorically more inference and
conflict efficient than nonglue variables. Based on this observation, we
develop a structure-aware CDCL variable bumping scheme, which bumps the
activity score of a glue variable based on its appearance count in the glue
clauses learned so far by the search. Empirical evaluation shows the
effectiveness of the new method on the main track instances from SAT
Competition 2017 and 2018.
| [
{
"version": "v1",
"created": "Thu, 25 Apr 2019 00:52:06 GMT"
}
] | 1,556,236,800,000 | [
[
"Chowdhury",
"Md Solimul",
""
],
[
"Müller",
"Martin",
""
],
[
"You",
"Jia-Huai",
""
]
] |
1904.11739 | Ramon Fraga Pereira | Ramon Fraga Pereira, Nir Oren, and Felipe Meneguzzi | Landmark-Based Approaches for Goal Recognition as Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of recognizing goals and plans from missing and full observations
can be done efficiently by using automated planning techniques. In many
applications, it is important to recognize goals and plans not only accurately,
but also quickly. To address this challenge, we develop novel goal recognition
approaches based on planning techniques that rely on planning landmarks. In
automated planning, landmarks are properties (or actions) that cannot be
avoided to achieve a goal. We show the applicability of a number of planning
techniques with an emphasis on landmarks for goal and plan recognition tasks in
two settings: (1) we use the concept of landmarks to develop goal recognition
heuristics; and (2) we develop a landmark-based filtering method to refine
existing planning-based goal and plan recognition approaches. These recognition
approaches are empirically evaluated in experiments over several classical
planning domains. We show that our goal recognition approaches yield not only
accuracy comparable to (and often higher than) other state-of-the-art
techniques, but also substantially faster recognition time over such
techniques.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2019 09:40:37 GMT"
},
{
"version": "v2",
"created": "Thu, 23 May 2019 01:57:46 GMT"
}
] | 1,558,656,000,000 | [
[
"Pereira",
"Ramon Fraga",
""
],
[
"Oren",
"Nir",
""
],
[
"Meneguzzi",
"Felipe",
""
]
] |
1904.12178 | Maen Alzubi | Maen Alzubi, Zsolt Csaba Johany\'ak, Szilveszter Kov\'acs | Fuzzy Rule Interpolation Methods and Fri Toolbox | null | Journal of Theoretical and Applied Information Technology 15th
November 2018. Vol.96. No 21 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | FRI methods are less popular in the practical application domain. One
possible reason is the lack of a common framework. Many FRI methods have been
developed independently, with different interpolation concepts and features.
One attempt at setting up a common FRI framework was the MATLAB FRI Toolbox,
developed by Johany\'ak et. al. in 2006. The goals of this paper are divided as
follows: firstly, to present a brief introduction of the FRI methods. Secondly,
to introduce a brief description of the refreshed and extended version of the
original FRI Toolbox. And thirdly, to use different unified numerical benchmark
examples to evaluate and analyze the Fuzzy Rule Interpolation Techniques (FRI)
(KH, KH Stabilized, MACI, IMUL, CRF, VKK, GM, FRIPOC, LESFRI, and SCALEMOVE),
that will be classified and compared based on different features by following
the abnormality and linearity conditions [15].
| [
{
"version": "v1",
"created": "Sat, 27 Apr 2019 16:44:33 GMT"
}
] | 1,556,582,400,000 | [
[
"Alzubi",
"Maen",
""
],
[
"Johanyák",
"Zsolt Csaba",
""
],
[
"Kovács",
"Szilveszter",
""
]
] |
1904.13308 | Oleh Andriichuk | Oleh Dmytrenko, Dmitry Lande, Oleh Andriichuk | Method for Searching of an Optimal Scenario of Impact in Cognitive Maps
during Information Operations Recognition | 13 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the problem of choosing the optimal scenario of
the impact between nodes based on the introduced criteria for the optimality
of the impact. Two criteria for the optimality of the impact, which are called
the force of impact and the speed of implementation of the scenario, are
considered. To obtain a unique solution of the problem, a multi-criteria
assessment of the obtained scenarios using the Pareto principle was applied.
Based on the criteria of a force of impact and the speed of implementation of
the scenario, the choice of the optimal scenario of impact was justified. The
results and advantages of the proposed approach in comparison with the Kosko
model are presented.
| [
{
"version": "v1",
"created": "Thu, 25 Apr 2019 23:58:05 GMT"
}
] | 1,556,668,800,000 | [
[
"Dmytrenko",
"Oleh",
""
],
[
"Lande",
"Dmitry",
""
],
[
"Andriichuk",
"Oleh",
""
]
] |
1905.00517 | Yu Zhang | Yu Zhang | From Abstractions to Grounded Languages for Robust Coordination of Task
Planning Robots | A short version of this paper appears as an extended abstract at
AAMAS 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider a first step to bridge a gap in coordinating task
planning robots. Specifically, we study the automatic construction of languages
that are maximally flexible while being sufficiently explicative for
coordination. To this end, we view language as a machinery for specifying
temporal-state constraints of plans. Such a view enables us to reverse-engineer
a language from the ground up by mapping these composable constraints to words.
Our language expresses a plan for any given task as a "plan sketch" to convey
just-enough details while maximizing the flexibility to realize it, leading to
robust coordination with optimality guarantees among other benefits. We
formulate and analyze the problem, provide an approximate solution, and
validate the advantages of our approach under various scenarios to shed light
on its applications.
| [
{
"version": "v1",
"created": "Wed, 1 May 2019 22:05:42 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Mar 2020 02:16:19 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Feb 2024 23:07:35 GMT"
}
] | 1,708,905,600,000 | [
[
"Zhang",
"Yu",
""
]
] |
1905.00607 | Mohsen Annabestani | Mohsen Annabestani, Alireza Rowhanimanesh, Akram Rezaei, Ladan
Avazpour, Fatemeh Sheikhhasani | A knowledge-based intelligent system for control of dirt recognition
process in the smart washing machines | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose an intelligent approach based on fuzzy logic to
model human intelligence in washing clothes. First, an intelligent
feedback loop is designed for perception-based sensing of dirt inspired by
human color understanding. Then, when color stains leak out of some colored
clothes, human probabilistic decision making is computationally modeled to
detect this stain leakage, so that the problem of distinguishing dirt from
stains can be addressed in the washing process. Finally, we discuss the fuzzy control
of washing clothes and design and simulate a smart controller based on the
fuzzy intelligence feedback loop.
| [
{
"version": "v1",
"created": "Thu, 2 May 2019 08:05:59 GMT"
},
{
"version": "v2",
"created": "Tue, 7 May 2019 09:38:56 GMT"
}
] | 1,557,273,600,000 | [
[
"Annabestani",
"Mohsen",
""
],
[
"Rowhanimanesh",
"Alireza",
""
],
[
"Rezaei",
"Akram",
""
],
[
"Avazpour",
"Ladan",
""
],
[
"Sheikhhasani",
"Fatemeh",
""
]
] |
1905.02549 | Mohsen Annabestani | Mohsen Annabestani, Alireza Rowhanimanesh, Aylar Mizani, Akram Rezaei | Descriptive evaluation of students using fuzzy approximate reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, descriptive evaluation has been introduced as a new model
for educational evaluation of Iranian students. The current descriptive
evaluation method is based on four-valued logic. Assessing all students with
only four values is led to a lack of relative justice and the creation of
unrealistic equality. Also, the complexity of the evaluation process in the
current method increases teacher errors likelihood. As a suitable solution, in
this paper, a fuzzy descriptive evaluation system has been proposed. The
proposed method is based on fuzzy logic, which is an infinite-valued logic and
it can perform approximate reasoning on natural language propositions. By the
proposed fuzzy system, student assessment is performed over the school year
with infinite values instead of four values. But to eliminate the diversity of
assigned values to students, at the end of the school year, the calculated
values for each student will be rounded to the nearest value of the four
standard values of the current descriptive evaluation system. It can be
implemented easily in an appropriate smartphone app, which makes the
evaluation process much easier for teachers. In this paper, the
evaluation process of the elementary third-grade mathematics course in Iran
during the period from the beginning of the MEHR (The Seventh month of Iran) to
the end of BAHMAN (The Eleventh Month of Iran) is examined by the proposed
system. To evaluate the validity of this system, the proposed method has been
simulated in MATLAB software.
| [
{
"version": "v1",
"created": "Tue, 7 May 2019 13:25:22 GMT"
},
{
"version": "v2",
"created": "Sat, 11 May 2019 13:49:49 GMT"
}
] | 1,557,792,000,000 | [
[
"Annabestani",
"Mohsen",
""
],
[
"Rowhanimanesh",
"Alireza",
""
],
[
"Mizani",
"Aylar",
""
],
[
"Rezaei",
"Akram",
""
]
] |
1905.02940 | Yunyou Huang | Yunyou Huang, Zhifei Zhang, Nana Wang, Nengquan Li, Mengjia Du,
Tianshu Hao and Jianfeng Zhan | A new direction to promote the implementation of artificial intelligence
in natural clinical settings | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence (AI) researchers claim that they have made great
`achievements' in clinical realms. However, clinicians point out the so-called
`achievements' cannot be implemented in natural clinical settings. The
root cause for this huge gap is that many essential features of natural
clinical tasks are overlooked by AI system developers without medical
background. In this paper, we propose that the clinical benchmark suite is a
novel and promising direction to capture the essential features of the
real-world clinical tasks, hence qualifies itself for guiding the development
of AI systems, promoting the implementation of AI in real-world clinical
practice.
| [
{
"version": "v1",
"created": "Wed, 8 May 2019 07:26:27 GMT"
}
] | 1,557,360,000,000 | [
[
"Huang",
"Yunyou",
""
],
[
"Zhang",
"Zhifei",
""
],
[
"Wang",
"Nana",
""
],
[
"Li",
"Nengquan",
""
],
[
"Du",
"Mengjia",
""
],
[
"Hao",
"Tianshu",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
1905.03362 | Bin Yang | Bin Yang, Lin Yang, Xiaochun Li, Wenhan Zhang, Hua Zhou, Yequn Zhang,
Yongxiong Ren and Yinbo Shi | 2-bit Model Compression of Deep Convolutional Neural Network on ASIC
Engine for Image Retrieval | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image retrieval utilizes image descriptors to retrieve the most similar
images to a given query image. Convolutional neural network (CNN) is becoming
the dominant approach to extract image descriptors for image retrieval. For
low-power hardware implementation of image retrieval, the drawback of CNN-based
feature descriptor is that it requires hundreds of megabytes of storage. To
address this problem, this paper applies deep model quantization and
compression to CNN in ASIC chip for image retrieval. It is demonstrated that
the CNN-based feature descriptor can be extracted using as few as 2-bit
weight quantization to deliver performance similar to the floating-point model
for image retrieval. In addition, to implement CNN in ASIC, especially for
large scale images, the limited buffer size of chips should be considered. To
retrieve large scale images, we propose an improved pooling strategy, region
nested invariance pooling (RNIP), which uses cropped sub-images for CNN.
Testing results on chip show that integrating RNIP with the proposed 2-bit CNN
model compression approach is capable of retrieving large scale images.
| [
{
"version": "v1",
"created": "Wed, 8 May 2019 21:48:42 GMT"
}
] | 1,557,446,400,000 | [
[
"Yang",
"Bin",
""
],
[
"Yang",
"Lin",
""
],
[
"Li",
"Xiaochun",
""
],
[
"Zhang",
"Wenhan",
""
],
[
"Zhou",
"Hua",
""
],
[
"Zhang",
"Yequn",
""
],
[
"Ren",
"Yongxiong",
""
],
[
"Shi",
"Yinbo",
""
]
] |
1905.03398 | Yuanxin Wu | Qi Cai, Tsung-Ching Lin, Yuanxin Wu, Wenxian Yu and Trieu-Kien Truong | General Method for Prime-point Cyclic Convolution over the Real Field | 6 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A general and fast method is conceived for computing the cyclic convolution
of n points, where n is a prime number. This method fully exploits the internal
structure of the cyclic matrix, and hence leads to a significant reduction of
the multiplication complexity, cutting CPU time by 50% as compared with
Winograd's algorithm. In this paper, we only consider the real and complex
fields due to their most important applications, but in general, the idea
behind this method can be extended to any finite field of interest. Clearly, it
is well-known that the discrete Fourier transform (DFT) can be expressed in
terms of cyclic convolution, so it can be utilized to compute the DFT when the
block length is a prime.
| [
{
"version": "v1",
"created": "Thu, 9 May 2019 00:53:30 GMT"
}
] | 1,557,446,400,000 | [
[
"Cai",
"Qi",
""
],
[
"Lin",
"Tsung-Ching",
""
],
[
"Wu",
"Yuanxin",
""
],
[
"Yu",
"Wenxian",
""
],
[
"Truong",
"Trieu-Kien",
""
]
] |
1905.03592 | Vijay Gadepally | Vijay Gadepally, Justin Goodwin, Jeremy Kepner, Albert Reuther, Hayley
Reynolds, Siddharth Samsi, Jonathan Su, David Martinez | AI Enabling Technologies: A Survey | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) has the opportunity to revolutionize the way the
United States Department of Defense (DoD) and Intelligence Community (IC)
address the challenges of evolving threats, data deluge, and rapid courses of
action. Developing an end-to-end artificial intelligence system involves
parallel development of different pieces that must work together in order to
provide capabilities that can be used by decision makers, warfighters and
analysts. These pieces include data collection, data conditioning, algorithms,
computing, robust artificial intelligence, and human-machine teaming. While
much of the popular press today surrounds advances in algorithms and computing,
most modern AI systems leverage advances across numerous different fields.
Further, while certain components may not be as visible to end-users as others,
our experience has shown that each of these interrelated components plays a
major role in the success or failure of an AI system. This article is meant to
highlight many of these technologies that are involved in an end-to-end AI
system. The goal of this article is to provide readers with an overview of
terminology, technical details and recent highlights from academia, industry
and government. Where possible, we indicate relevant resources that can be used
for further reading and understanding.
| [
{
"version": "v1",
"created": "Wed, 8 May 2019 15:41:38 GMT"
}
] | 1,557,446,400,000 | [
[
"Gadepally",
"Vijay",
""
],
[
"Goodwin",
"Justin",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Reuther",
"Albert",
""
],
[
"Reynolds",
"Hayley",
""
],
[
"Samsi",
"Siddharth",
""
],
[
"Su",
"Jonathan",
""
],
[
"Martinez",
"David",
""
]
] |
1905.04020 | Thomy Phan | Thomy Phan, Lenz Belzner, Marie Kiermeier, Markus Friedrich, Kyrill
Schmid, Claudia Linnhoff-Popien | Memory Bounded Open-Loop Planning in Large POMDPs using Thompson
Sampling | Presented at AAAI 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art approaches to partially observable planning like POMCP are
based on stochastic tree search. While these approaches are computationally
efficient, they may still construct search trees of considerable size, which
could limit the performance due to restricted memory resources. In this paper,
we propose Partially Observable Stacked Thompson Sampling (POSTS), a memory
bounded approach to open-loop planning in large POMDPs, which optimizes a fixed
size stack of Thompson Sampling bandits. We empirically evaluate POSTS in four
large benchmark problems and compare its performance with different tree-based
approaches. We show that POSTS achieves competitive performance compared to
tree-based open-loop planning and offers a performance-memory tradeoff, making
it suitable for partially observable planning with highly restricted
computational and memory resources.
| [
{
"version": "v1",
"created": "Fri, 10 May 2019 09:06:50 GMT"
}
] | 1,557,705,600,000 | [
[
"Phan",
"Thomy",
""
],
[
"Belzner",
"Lenz",
""
],
[
"Kiermeier",
"Marie",
""
],
[
"Friedrich",
"Markus",
""
],
[
"Schmid",
"Kyrill",
""
],
[
"Linnhoff-Popien",
"Claudia",
""
]
] |
1905.04210 | Felipe Meneguzzi | Lu\'isa R. de A. Santos and Felipe Meneguzzi and Ramon Fraga Pereira
and Andr\'e Grahl Pereira | An LP-Based Approach for Goal Recognition as Planning | 8 pages, 4 tables, 3 figures. Published in AAAI 2021. Updated final
authorship and text | AAAI 2021: 11939-11946 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Goal recognition aims to recognize the set of candidate goals that are
compatible with the observed behavior of an agent. In this paper, we develop a
method based on the operator-counting framework that efficiently computes
solutions that satisfy the observations and uses the information generated to
solve goal recognition tasks. Our method reasons explicitly about both partial
and noisy observations: estimating uncertainty for the former, and satisfying
observations given the unreliability of the sensor for the latter. We evaluate
our approach empirically over a large data set, analyzing its components on how
each can impact the quality of the solutions. In general, our approach is
superior to previous methods in terms of agreement ratio, accuracy, and spread.
Finally, our approach paves the way for new research on combinatorial
optimization to solve goal recognition tasks.
| [
{
"version": "v1",
"created": "Fri, 10 May 2019 15:14:30 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Feb 2020 04:24:21 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Jun 2021 08:58:21 GMT"
}
] | 1,623,801,600,000 | [
[
"Santos",
"Luísa R. de A.",
""
],
[
"Meneguzzi",
"Felipe",
""
],
[
"Pereira",
"Ramon Fraga",
""
],
[
"Pereira",
"André Grahl",
""
]
] |
1905.04640 | Jianyi Wang | Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu, Shangtong
Zhang, Andrzej Wojcicki, Mai Xu | Mega-Reward: Achieving Human-Level Play without Extrinsic Rewards | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intrinsic rewards were introduced to simulate how human intelligence works;
they are usually evaluated by intrinsically-motivated play, i.e., playing games
without extrinsic rewards but evaluated with extrinsic rewards. However, none
of the existing intrinsic reward approaches can achieve human-level performance
under this very challenging setting of intrinsically-motivated play. In this
work, we propose a novel megalomania-driven intrinsic reward (called
mega-reward), which, to our knowledge, is the first approach that achieves
human-level performance in intrinsically-motivated play. Intuitively,
mega-reward comes from the observation that infants' intelligence develops when
they try to gain more control over entities in an environment; therefore,
mega-reward aims to maximize the control capabilities of agents over given
entities in a given environment. To formalize mega-reward, a relational
transition model is proposed to bridge the gaps between direct and latent
control. Experimental studies show that mega-reward (i) can greatly outperform
all state-of-the-art intrinsic reward approaches, (ii) generally achieves the
same level of performance as Ex-PPO and professional human-level scores, and
(iii) has also a superior performance when it is incorporated with extrinsic
rewards.
| [
{
"version": "v1",
"created": "Sun, 12 May 2019 03:48:06 GMT"
},
{
"version": "v2",
"created": "Sat, 25 May 2019 09:01:03 GMT"
},
{
"version": "v3",
"created": "Thu, 30 May 2019 03:24:05 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Nov 2019 04:05:44 GMT"
}
] | 1,574,899,200,000 | [
[
"Song",
"Yuhang",
""
],
[
"Wang",
"Jianyi",
""
],
[
"Lukasiewicz",
"Thomas",
""
],
[
"Xu",
"Zhenghua",
""
],
[
"Zhang",
"Shangtong",
""
],
[
"Wojcicki",
"Andrzej",
""
],
[
"Xu",
"Mai",
""
]
] |
1905.05013 | Dennis Soemers | \'Eric Piette, Dennis J.N.J. Soemers, Matthew Stephenson, Chiara F.
Sironi, Mark H.M. Winands, Cameron Browne | Ludii -- The Ludemic General Game System | Accepted at ECAI 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While current General Game Playing (GGP) systems facilitate useful research
in Artificial Intelligence (AI) for game-playing, they are often somewhat
specialised and computationally inefficient. In this paper, we describe the
"ludemic" general game system Ludii, which has the potential to provide an
efficient tool for AI researchers as well as game designers, historians,
educators and practitioners in related fields. Ludii defines games as
structures of ludemes -- high-level, easily understandable game concepts --
which allows for concise and human-understandable game descriptions. We
formally describe Ludii and outline its main benefits: generality,
extensibility, understandability and efficiency. Experimentally, Ludii
outperforms one of the most efficient Game Description Language (GDL)
reasoners, based on a propositional network, in all games available in the
Tiltyard GGP repository. Moreover, Ludii is also competitive in terms of
performance with the more recently proposed Regular Boardgames (RBG) system,
and has various advantages in qualitative aspects such as generality.
| [
{
"version": "v1",
"created": "Mon, 13 May 2019 12:39:39 GMT"
},
{
"version": "v2",
"created": "Thu, 16 May 2019 08:01:27 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Feb 2020 15:35:38 GMT"
}
] | 1,582,502,400,000 | [
[
"Piette",
"Éric",
""
],
[
"Soemers",
"Dennis J. N. J.",
""
],
[
"Stephenson",
"Matthew",
""
],
[
"Sironi",
"Chiara F.",
""
],
[
"Winands",
"Mark H. M.",
""
],
[
"Browne",
"Cameron",
""
]
] |
1905.05176 | Catarina Moreira | Catarina Moreira, Lauren Fell, Shahram Dehdashti, Peter Bruza, Andreas
Wichert | Towards a Quantum-Like Cognitive Architecture for Decision-Making | null | null | 10.1017/S0140525X19001687 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an alternative and unifying framework for decision-making that, by
using quantum mechanics, provides more generalised cognitive and decision
models with the ability to represent more information than classical models.
This framework can accommodate and predict several cognitive biases reported in
Lieder & Griffiths without heavy reliance on heuristics nor on assumptions of
the computational resources of the mind.
| [
{
"version": "v1",
"created": "Sat, 11 May 2019 11:12:23 GMT"
},
{
"version": "v2",
"created": "Sun, 8 Nov 2020 15:49:55 GMT"
}
] | 1,604,966,400,000 | [
[
"Moreira",
"Catarina",
""
],
[
"Fell",
"Lauren",
""
],
[
"Dehdashti",
"Shahram",
""
],
[
"Bruza",
"Peter",
""
],
[
"Wichert",
"Andreas",
""
]
] |
1905.05713 | Alessandro Umbrico | Alessandro Umbrico | Timeline-based Planning and Execution with Uncertainty: Theory, Modeling
Methodologies and Practice | PhD thesis, Information and Automation, Roma Tre University | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated Planning is one of the main research fields of Artificial
Intelligence since its beginnings. Research in Automated Planning aims at
developing general reasoners (i.e., planners) capable of automatically solving
complex problems. Broadly speaking, planners rely on a general model
characterizing the possible states of the world and the actions that can be
performed in order to change the status of the world. Given a model and an
initial known state, the objective of a planner is to synthesize a set of
actions needed to achieve a particular goal state. The classical approach to
planning roughly corresponds to the description given above. The timeline-based
approach is a particular planning paradigm capable of integrating causal and
temporal reasoning within a unified solving process. This approach has been
successfully applied in many real-world scenarios although a common
interpretation of the related planning concepts is missing. Indeed, there are
significant differences among the existing frameworks that apply this
technique. Each framework relies on its own interpretation of timeline-based
planning and therefore it is not easy to compare these systems. Thus, the
objective of this work is to investigate the timeline-based approach to
planning by addressing several aspects ranging from the semantics of the
related planning concepts to the modeling and solving techniques. Specifically,
the main contributions of this PhD work consist of: (i) the proposal of a
formal characterization of the timeline-based approach capable of dealing with
temporal uncertainty; (ii) the proposal of a hierarchical modeling and solving
approach; (iii) the development of a general purpose framework for planning and
execution with timelines; (iv) the validation of this approach in
real-world manufacturing scenarios.
| [
{
"version": "v1",
"created": "Tue, 14 May 2019 16:42:33 GMT"
}
] | 1,557,878,400,000 | [
[
"Umbrico",
"Alessandro",
""
]
] |
1905.06088 | Son Tran | Artur d'Avila Garcez, Marco Gori, Luis C. Lamb, Luciano Serafini,
Michael Spranger, Son N. Tran | Neural-Symbolic Computing: An Effective Methodology for Principled
Integration of Machine Learning and Reasoning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Current advances in Artificial Intelligence and machine learning in general,
and deep learning in particular have reached unprecedented impact not only
across research communities, but also over popular media channels. However,
concerns about interpretability and accountability of AI have been raised by
influential thinkers. In spite of the recent impact of AI, several works have
identified the need for principled knowledge representation and reasoning
mechanisms integrated with deep learning-based systems to provide sound and
explainable models for such systems. Neural-symbolic computing aims at
integrating, as foreseen by Valiant, two most fundamental cognitive abilities:
the ability to learn from the environment, and the ability to reason from what
has been learned. Neural-symbolic computing has been an active topic of
research for many years, reconciling the advantages of robust learning in
neural networks and reasoning and interpretability of symbolic representation.
In this paper, we survey recent accomplishments of neural-symbolic computing as
a principled methodology for integrated machine learning and reasoning. We
illustrate the effectiveness of the approach by outlining the main
characteristics of the methodology: principled integration of neural learning
with symbolic knowledge representation and reasoning allowing for the
construction of explainable AI systems. The insights provided by
neural-symbolic computing shed new light on the increasingly prominent need for
interpretable and accountable AI systems.
| [
{
"version": "v1",
"created": "Wed, 15 May 2019 11:00:48 GMT"
}
] | 1,557,964,800,000 | [
[
"Garcez",
"Artur d'Avila",
""
],
[
"Gori",
"Marco",
""
],
[
"Lamb",
"Luis C.",
""
],
[
"Serafini",
"Luciano",
""
],
[
"Spranger",
"Michael",
""
],
[
"Tran",
"Son N.",
""
]
] |
1905.06402 | Bence Cserna | Bence Cserna, Kevin C. Gall, Wheeler Ruml | Improved Safe Real-time Heuristic Search | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental concern in real-time planning is the presence of dead-ends in
the state space, from which no goal is reachable. Recently, the SafeRTS
algorithm was proposed for searching in such spaces. SafeRTS exploits a
user-provided predicate to identify safe states, from which a goal is likely
reachable, and attempts to maintain a backup plan for reaching a safe state at
all times. In this paper, we study the SafeRTS approach, identify certain
properties of its behavior, and design an improved framework for safe real-time
search. We prove that the new approach performs at least as well as SafeRTS and
present experimental results showing that its promise is fulfilled in practice.
| [
{
"version": "v1",
"created": "Wed, 15 May 2019 19:22:59 GMT"
}
] | 1,558,051,200,000 | [
[
"Cserna",
"Bence",
""
],
[
"Gall",
"Kevin C.",
""
],
[
"Ruml",
"Wheeler",
""
]
] |
1905.06413 | Mathieu Ritou | Mathieu Ritou (RoMas, IUT NANTES), Farouk Belkadi (IS3P, ECN), Zakaria
Yahouni (LS2N, IUT NANTES), Catherine Da Cunha (IS3P, ECN), Florent Laroche
(IS3P, ECN), Benoit Furet (RoMas, IUT NANTES) | Knowledge-based multi-level aggregation for decision aid in the
machining industry | CIRP Annals - Manufacturing Technology, Elsevier, 2019 | null | 10.1016/j.cirp.2019.03.009 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of Industry 4.0, data management is a key point for decision
aid approaches. Large amounts of manufacturing digital data are collected on
the shop floor. Their analysis can then require a large amount of computing
power. The Big Data issue can be solved by aggregation, generating smart and
meaningful data. This paper presents a new knowledge-based multi-level
aggregation strategy to support decision making. Manufacturing knowledge is
used at each level to design the monitoring criteria or aggregation operators.
The proposed approach has been implemented as a demonstrator and successfully
applied to a real machining database from the aeronautic industry.
| [
{
"version": "v1",
"created": "Tue, 14 May 2019 07:08:47 GMT"
}
] | 1,558,051,200,000 | [
[
"Ritou",
"Mathieu",
"",
"RoMas, IUT NANTES"
],
[
"Belkadi",
"Farouk",
"",
"IS3P, ECN"
],
[
"Yahouni",
"Zakaria",
"",
"LS2N, IUT NANTES"
],
[
"Da Cunha",
"Catherine",
"",
"IS3P, ECN"
],
[
"Laroche",
"Florent",
"",
"IS3P, ECN"
],
[
"Furet",
"Benoit",
"",
"RoMas, IUT NANTES"
]
] |
1905.07186 | Mark Keane | Mark T Keane and Eoin M Kenny | How Case Based Reasoning Explained Neural Networks: An XAI Survey of
Post-Hoc Explanation-by-Example in ANN-CBR Twins | 15 pages | Proceedings of the 27th International Conference on Case Based
Reasoning (ICCBR-19), 2019 | 10.1007/978-3-030-29249-2_11 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper surveys an approach to the XAI problem, using post-hoc explanation
by example, that hinges on twinning Artificial Neural Networks (ANNs) with
Case-Based Reasoning (CBR) systems, so-called ANN-CBR twins. A systematic
survey of 1100+ papers was carried out to identify the fragmented literature on
this topic and to trace it influence through to more recent work involving Deep
Neural Networks (DNNs). The paper argues that this twin-system approach,
especially using ANN-CBR twins, presents one possible coherent, generic
solution to the XAI problem (and, indeed, XCBR problem). The paper concludes by
road-mapping some future directions for this XAI solution involving (i) further
tests of feature-weighting techniques, (ii) explorations of how explanatory
cases might best be deployed (e.g., in counterfactuals, near-miss cases, a
fortiori cases), and (iii) the raising of the unwelcome and much-ignored issue
of human user evaluation.
| [
{
"version": "v1",
"created": "Fri, 17 May 2019 10:14:29 GMT"
}
] | 1,618,963,200,000 | [
[
"Keane",
"Mark T",
""
],
[
"Kenny",
"Eoin M",
""
]
] |
1905.08069 | Mark Keane | Mark T. Keane and Eoin M. Kenny | The Twin-System Approach as One Generic Solution for XAI: An Overview of
ANN-CBR Twins for Explaining Deep Learning | 5 pages | IJCAI 2019 Workshop on Explainable Artificial Intelligence (XAI) | null | http://hdl.handle.net/10197/11071 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The notion of twin systems is proposed to address the eXplainable AI (XAI)
problem, where an uninterpretable black-box system is mapped to a white-box
'twin' that is more interpretable. In this short paper, we overview very recent
work that advances a generic solution to the XAI problem, the so called twin
system approach. The most popular twinning in the literature is that between an
Artificial Neural Network (ANN) as a black box and a Case-Based Reasoning (CBR)
system as a white-box, where the latter acts as an interpretable proxy for the
former. We outline how recent work reviving this idea has applied it to deep
learning methods. Furthermore, we detail the many fruitful directions in which
this work may be taken; such as, determining the most (i) accurate
feature-weighting methods to be used, (ii) appropriate deployments for
explanatory cases, (iii) useful cases of explanatory value to users.
| [
{
"version": "v1",
"created": "Mon, 20 May 2019 12:57:34 GMT"
}
] | 1,618,963,200,000 | [
[
"Keane",
"Mark T.",
""
],
[
"Kenny",
"Eoin M.",
""
]
] |
1905.08222 | Xiou Ge | Xiou Ge, Richard T. Goodwin, Jeremy R. Gregory, Randolph E. Kirchain,
Joana Maria, Lav R. Varshney | Accelerated Discovery of Sustainable Building Materials | Presented at AAAI 2019 Spring Symposium, Towards AI for Collaborative
Open Science (TACOS) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Concrete is the most widely used engineered material in the world with more
than 10 billion tons produced annually. Unfortunately, with that scale comes a
significant burden in terms of energy, water, and release of greenhouse gases
and other pollutants. As such, there is interest in creating concrete formulas
that minimize this environmental burden, while satisfying engineering
performance requirements. Recent advances in artificial intelligence have
enabled machines to generate highly plausible artifacts, such as images of
realistic looking faces. Semi-supervised generative models allow generation of
artifacts with specific, desired characteristics. In this work, we use
Conditional Variational Autoencoders (CVAE), a type of semi-supervised
generative model, to discover concrete formulas with desired properties. Our
model is trained using open data from the UCI Machine Learning Repository
joined with environmental impact data computed using a web-based tool. We
demonstrate CVAEs can design concrete formulas with lower emissions and natural
resource usage while meeting design requirements. To ensure fair comparison
between extant and generated formulas, we also train regression models to
predict the environmental impacts and strength of discovered formulas. With
these results, a construction engineer may create a formula that meets
structural needs and best addresses local environmental concerns.
| [
{
"version": "v1",
"created": "Mon, 20 May 2019 17:21:39 GMT"
}
] | 1,558,396,800,000 | [
[
"Ge",
"Xiou",
""
],
[
"Goodwin",
"Richard T.",
""
],
[
"Gregory",
"Jeremy R.",
""
],
[
"Kirchain",
"Randolph E.",
""
],
[
"Maria",
"Joana",
""
],
[
"Varshney",
"Lav R.",
""
]
] |
1905.08347 | Kai Sauerwald | Kai Sauerwald and Christoph Beierle | Decrement Operators in Belief Change | null | null | 10.1007/978-3-030-29765-7_21 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While research on iterated revision is predominant in the field of iterated
belief change, the class of iterated contraction operators received more
attention in recent years. In this article, we examine a non-prioritized
generalisation of iterated contraction. In particular, the class of weak
decrement operators is introduced: operators that achieve, over multiple steps,
the same result as a contraction. Inspired by Darwiche and Pearl's work on
iterated revision the subclass of decrement operators is defined. For both,
decrement and weak decrement operators, postulates are presented and for each
of them a representation theorem in the framework of total preorders is given.
Furthermore, we present two sub-types of decrement operators.
| [
{
"version": "v1",
"created": "Mon, 20 May 2019 21:09:55 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Jul 2019 12:20:37 GMT"
}
] | 1,565,654,400,000 | [
[
"Sauerwald",
"Kai",
""
],
[
"Beierle",
"Christoph",
""
]
] |
1905.08581 | Deepika Verma | Deepika Verma, Kerstin Bach, Paul Jarle Mork | Similarity Measure Development for Case-Based Reasoning- A Data-driven
Approach | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we demonstrate a data-driven methodology for modelling the
local similarity measures of various attributes in a dataset. We analyse the
spread in the numerical attributes and estimate their distribution using
polynomial function to showcase an approach for deriving strong initial value
ranges of numerical attributes and use a non-overlapping distribution for
categorical attributes such that the entire similarity range [0,1] is utilized.
We use an open source dataset for demonstrating modelling and development of
the similarity measures and will present a case-based reasoning (CBR) system
that can be used to search for the most relevant similar cases.
| [
{
"version": "v1",
"created": "Tue, 21 May 2019 12:33:42 GMT"
}
] | 1,558,483,200,000 | [
[
"Verma",
"Deepika",
""
],
[
"Bach",
"Kerstin",
""
],
[
"Mork",
"Paul Jarle",
""
]
] |
1905.09103 | Andrea Galassi | Andrea Galassi, Kristian Kersting, Marco Lippi, Xiaoting Shao, Paolo
Torroni | Neural-Symbolic Argumentation Mining: an Argument in Favor of Deep
Learning and Reasoning | null | Frontiers in Big Data 2 (2020) 52 | 10.3389/fdata.2019.00052 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep learning is bringing remarkable contributions to the field of
argumentation mining, but the existing approaches still need to fill the gap
toward performing advanced reasoning tasks. In this position paper, we posit
that neural-symbolic and statistical relational learning could play a crucial
role in the integration of symbolic and sub-symbolic methods to achieve this
goal.
| [
{
"version": "v1",
"created": "Wed, 22 May 2019 12:31:08 GMT"
},
{
"version": "v2",
"created": "Fri, 31 May 2019 08:55:00 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jan 2020 16:59:48 GMT"
}
] | 1,580,256,000,000 | [
[
"Galassi",
"Andrea",
""
],
[
"Kersting",
"Kristian",
""
],
[
"Lippi",
"Marco",
""
],
[
"Shao",
"Xiaoting",
""
],
[
"Torroni",
"Paolo",
""
]
] |
1905.09355 | Sandhya Saisubramanian | Sandhya Saisubramanian and Shlomo Zilberstein | Minimizing the Negative Side Effects of Planning with Reduced Models | AAAI Workshop on Artificial Intelligence Safety (2019) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reduced models of large Markov decision processes accelerate planning by
considering a subset of outcomes for each state-action pair. This reduction in
reachable states leads to replanning when the agent encounters states without a
precomputed action during plan execution. However, not all states are suitable
for replanning. In the worst case, the agent may not be able to reach the goal
from the newly encountered state. Agents should be better prepared to handle
such risky situations and avoid replanning in risky states. Hence, we consider
replanning in states that are unsafe for deliberation as a negative side effect
of planning with reduced models. While the negative side effects can be
minimized by always using the full model, this defeats the purpose of using
reduced models. The challenge is to plan with reduced models, but somehow
account for the possibility of encountering risky situations. An agent should
thus only replan in states that the user has approved as safe for replanning.
To that end, we propose planning using a portfolio of reduced models, a
planning paradigm that minimizes the negative side effects of planning using
reduced models by alternating between different outcome selection approaches.
We empirically demonstrate the effectiveness of our approach on three domains:
an electric vehicle charging domain using real-world data from a university
campus and two benchmark planning problems.
| [
{
"version": "v1",
"created": "Wed, 22 May 2019 20:36:28 GMT"
}
] | 1,558,656,000,000 | [
[
"Saisubramanian",
"Sandhya",
""
],
[
"Zilberstein",
"Shlomo",
""
]
] |
1905.09519 | C. Maria Keet | C Maria Keet | The African Wildlife Ontology tutorial ontologies: requirements, design,
and content | 8 pages, 2 figures; submitted to an international journal | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Background. Most tutorial ontologies focus on illustrating one aspect of
ontology development, notably language features and automated reasoners, but
ignore ontology development factors, such as emergent modelling guidelines and
ontological principles. Yet, novices replicate examples from the exercises they
carry out. Failing to provide holistically good examples causes the propagation of
sub-optimal ontology development, which may negatively affect the quality of a
real domain ontology. Results. We identified 22 requirements that a good
tutorial ontology should satisfy regarding subject domain, logics and
reasoning, and engineering aspects. We developed a set of ontologies about
African Wildlife to serve as tutorial ontologies. A majority of the
requirements have been met with the set of African Wildlife Ontology tutorial
ontologies, which are introduced in this paper. The African Wildlife Ontology
is mature and has been used yearly in an ontology engineering course or
tutorial since 2010 and is included in a recent ontology engineering textbook
with relevant examples and exercises. Conclusion. The African Wildlife Ontology
provides a wide range of options concerning examples and exercises for ontology
engineering well beyond illustrating only language features and automated
reasoning. It assists in demonstrating tasks about ontology quality, such as
alignment to a foundational ontology and satisfying competency questions,
versioning, and multilingual ontologies.
| [
{
"version": "v1",
"created": "Thu, 23 May 2019 07:59:30 GMT"
}
] | 1,558,656,000,000 | [
[
"Keet",
"C Maria",
""
]
] |
1905.09565 | Zarathustra Amadeus Goertzel | Zarathustra Goertzel, Jan Jakub\r{u}v, Josef Urban | ENIGMAWatch: ProofWatch Meets ENIGMA | 12 pages, 5 tables, 3 figures, submitted to TABLEAUX 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we describe a new learning-based proof guidance -- ENIGMAWatch
-- for saturation-style first-order theorem provers. ENIGMAWatch combines two
guiding approaches for the given-clause selection implemented for the E ATP
system: ProofWatch and ENIGMA. ProofWatch is motivated by the watchlist (hints)
method and based on symbolic matching of multiple related proofs, while ENIGMA
is based on statistical machine learning. The two methods are combined by using
the evolving information about symbolic proof matching as an additional
information that characterizes the saturation-style proof search for the
statistical learning methods. The new system is experimentally evaluated on a
large set of problems from the Mizar Library. We show that the added
proof-matching information is considered important by the statistical machine
learners, and that it leads to improvements in E's performance over ProofWatch
and ENIGMA.
| [
{
"version": "v1",
"created": "Thu, 23 May 2019 10:05:55 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Aug 2019 13:07:32 GMT"
}
] | 1,566,777,600,000 | [
[
"Goertzel",
"Zarathustra",
""
],
[
"Jakubův",
"Jan",
""
],
[
"Urban",
"Josef",
""
]
] |
1905.10621 | Jorge Fandinno | Pedro Cabalar, Jorge Fandinno and Luis Fari\~nas del Cerro | Dynamic Epistemic Logic with ASP Updates: Application to Conditional
Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic Epistemic Logic (DEL) is a family of multimodal logics that has
proved to be very successful for epistemic reasoning in planning tasks. In this
logic, the agent's knowledge is captured by modal epistemic operators whereas
the system evolution is described in terms of (some subset of) dynamic logic
modalities in which actions are usually represented as semantic objects called
event models. In this paper, we study a variant of DEL, which we call DEL[ASP],
where actions are syntactically described by using an Answer Set Programming
(ASP) representation instead of event models. This representation directly
inherits high level expressive features like indirect effects, qualifications,
state constraints, defaults, or recursive fluents that are common in ASP
descriptions of action domains. Besides, we illustrate how this approach can be
applied for obtaining conditional plans in single-agent, partially observable
domains where knowledge acquisition may be represented as indirect effects of
actions.
| [
{
"version": "v1",
"created": "Sat, 25 May 2019 15:52:13 GMT"
}
] | 1,559,001,600,000 | [
[
"Cabalar",
"Pedro",
""
],
[
"Fandinno",
"Jorge",
""
],
[
"del Cerro",
"Luis Fariñas",
""
]
] |
1905.10672 | Anagha Kulkarni | Anagha Kulkarni, Siddharth Srivastava, Subbarao Kambhampati | Signaling Friends and Head-Faking Enemies Simultaneously: Balancing Goal
Obfuscation and Goal Legibility | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to be useful in the real world, AI agents need to plan and act in
the presence of others, who may include adversarial and cooperative entities.
In this paper, we consider the problem where an autonomous agent needs to act
in a manner that clarifies its objectives to cooperative entities while
preventing adversarial entities from inferring those objectives. We show that
this problem is solvable when cooperative entities and adversarial entities use
different types of sensors and/or prior knowledge. We develop two new solution
approaches for computing such plans. One approach provides an optimal solution
to the problem by using an IP solver to provide maximum obfuscation for
adversarial entities while providing maximum legibility for cooperative
entities in the environment, whereas the other approach provides a satisficing
solution using heuristic-guided forward search to achieve preset levels of
obfuscation and legibility for adversarial and cooperative entities
respectively. We show the feasibility and utility of our algorithms through
extensive empirical evaluation on problems derived from planning benchmarks.
| [
{
"version": "v1",
"created": "Sat, 25 May 2019 20:56:07 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Jan 2020 00:06:56 GMT"
}
] | 1,580,083,200,000 | [
[
"Kulkarni",
"Anagha",
""
],
[
"Srivastava",
"Siddharth",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1905.10792 | Damien Anderson Mr | Damien Anderson, Cristina Guerrero-Romero, Diego Perez-Liebana, Philip
Rodgers and John Levine | Ensemble Decision Systems for General Video Game Playing | 8 Pages, Accepted at COG2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensemble Decision Systems offer a unique form of decision making that allows
a collection of algorithms to reason together about a problem. Each individual
algorithm has its own inherent strengths and weaknesses, and often it is
difficult to overcome the weaknesses while retaining the strengths. Instead of
altering the properties of the algorithm, the Ensemble Decision System augments
the performance with other algorithms that have complementing strengths. This
work outlines different options for building an Ensemble Decision System as
well as providing analysis on its performance compared to the individual
components of the system with interesting results, showing an increase in the
generality of the algorithms without significantly impeding performance.
| [
{
"version": "v1",
"created": "Sun, 26 May 2019 12:11:37 GMT"
}
] | 1,559,001,600,000 | [
[
"Anderson",
"Damien",
""
],
[
"Guerrero-Romero",
"Cristina",
""
],
[
"Perez-Liebana",
"Diego",
""
],
[
"Rodgers",
"Philip",
""
],
[
"Levine",
"John",
""
]
] |
1905.10863 | Maurizio Parton | Francesco Morandin, Gianluca Amato, Marco Fantozzi, Rosa Gini, Carlo
Metta, Maurizio Parton | SAI: a Sensible Artificial Intelligence that plays with handicap and
targets high scores in 9x9 Go (extended version) | Added Section 4.4 on minimization of suboptimal moves. Improved
Section 5 on future developments. Minor corrections | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a new model that can be applied to any perfect information
two-player zero-sum game to target a high score, and thus a perfect play. We
integrate this model into the Monte Carlo tree search-policy iteration learning
pipeline introduced by Google DeepMind with AlphaGo. Training this model on 9x9
Go produces a superhuman Go player, thus proving that it is stable and robust.
We show that this model can be used to effectively play with both positional
and score handicap, and to minimize suboptimal moves. We develop a family of
agents that can target high scores against any opponent, and recover from very
severe disadvantage against weak opponents. To the best of our knowledge, these
are the first effective achievements in this direction.
| [
{
"version": "v1",
"created": "Sun, 26 May 2019 19:29:59 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jun 2019 10:18:33 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Nov 2019 23:22:46 GMT"
}
] | 1,574,899,200,000 | [
[
"Morandin",
"Francesco",
""
],
[
"Amato",
"Gianluca",
""
],
[
"Fantozzi",
"Marco",
""
],
[
"Gini",
"Rosa",
""
],
[
"Metta",
"Carlo",
""
],
[
"Parton",
"Maurizio",
""
]
] |
1905.10907 | Douglas Rebstock | Douglas Rebstock, Christopher Solinas, Michael Buro | Learning Policies from Human Data for Skat | accepted by IEEE Conference on Games 2019 (CoG-2019) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision-making in large imperfect information games is difficult. Thanks to
recent success in Poker, Counterfactual Regret Minimization (CFR) methods have
been at the forefront of research in these games. However, most of the success
in large games comes with the use of a forward model and powerful state
abstractions. In trick-taking card games like Bridge or Skat, large information
sets and an inability to advance the simulation without fully determinizing the
state make forward search problematic. Furthermore, state abstractions can be
especially difficult to construct because the precise holdings of each player
directly impact move values.
In this paper we explore learning model-free policies for Skat from human
game data using deep neural networks (DNN). We produce a new state-of-the-art
system for bidding and game declaration by introducing methods to a) directly
vary the aggressiveness of the bidder and b) declare games based on expected
value while mitigating issues with rarely observed state-action pairs. Although
cardplay policies learned through imitation are slightly weaker than the
current best search-based method, they run orders of magnitude faster. We also
explore how these policies could be learned directly from experience in a
reinforcement learning setting and discuss the value of incorporating human
data for this task.
| [
{
"version": "v1",
"created": "Mon, 27 May 2019 00:05:44 GMT"
}
] | 1,559,001,600,000 | [
[
"Rebstock",
"Douglas",
""
],
[
"Solinas",
"Christopher",
""
],
[
"Buro",
"Michael",
""
]
] |
1905.10911 | Douglas Rebstock | Douglas Rebstock, Christopher Solinas, Michael Buro, Nathan R.
Sturtevant | Policy Based Inference in Trick-Taking Card Games | accepted to IEEE Conference on Games 2019 (CoG-2019) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trick-taking card games feature a large amount of private information that
slowly gets revealed through a long sequence of actions. This makes the number
of histories exponentially large in the action sequence length, as well as
creating extremely large information sets. As a result, these games become too
large to solve. To deal with these issues many algorithms employ inference, the
estimation of the probability of states within an information set. In this
paper, we demonstrate a Policy Based Inference (PI) algorithm that uses player
modelling to infer the probability of being in a given state. We perform
experiments in the German trick-taking card game Skat, in which we show that
this method vastly improves the inference as compared to previous work, and
increases the performance of the state-of-the-art Skat AI system Kermit when it
is employed in its determinized search algorithm.
| [
{
"version": "v1",
"created": "Mon, 27 May 2019 00:25:22 GMT"
}
] | 1,559,001,600,000 | [
[
"Rebstock",
"Douglas",
""
],
[
"Solinas",
"Christopher",
""
],
[
"Buro",
"Michael",
""
],
[
"Sturtevant",
"Nathan R.",
""
]
] |
1905.10985 | Jeff Clune | Jeff Clune | AI-GAs: AI-generating algorithms, an alternate paradigm for producing
general artificial intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Perhaps the most ambitious scientific quest in human history is the creation
of general artificial intelligence, which roughly means AI that is as smart or
smarter than humans. The dominant approach in the machine learning community is
to attempt to discover each of the pieces required for intelligence, with the
implicit assumption that some future group will complete the Herculean task of
figuring out how to combine all of those pieces into a complex thinking
machine. I call this the "manual AI approach". This paper describes another
exciting path that ultimately may be more successful at producing general AI.
It is based on the clear trend in machine learning that hand-designed solutions
eventually are replaced by more effective, learned solutions. The idea is to
create an AI-generating algorithm (AI-GA), which automatically learns how to
produce general AI. Three Pillars are essential for the approach: (1)
meta-learning architectures, (2) meta-learning the learning algorithms
themselves, and (3) generating effective learning environments. I argue that
either approach could produce general AI first, and both are scientifically
worthwhile irrespective of which is the fastest path. Because both are
promising, yet the ML community is currently committed to the manual approach,
I argue that our community should increase its research investment in the AI-GA
approach. To encourage such research, I describe promising work in each of the
Three Pillars. I also discuss AI-GA-specific safety and ethical considerations.
Because it may be the fastest path to general AI and because it is
inherently scientifically interesting to understand the conditions in which a
simple algorithm can produce general AI (as happened on Earth where Darwinian
evolution produced human intelligence), I argue that the pursuit of AI-GAs
should be considered a new grand challenge of computer science research.
| [
{
"version": "v1",
"created": "Mon, 27 May 2019 06:05:16 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Feb 2020 04:46:25 GMT"
}
] | 1,580,774,400,000 | [
[
"Clune",
"Jeff",
""
]
] |
1905.11346 | Alberto Pozanco | Robert C. Holte, Ruben Majadas, Alberto Pozanco, Daniel Borrajo | Error Analysis and Correction for Weighted A*'s Suboptimality (Extended
Version) | Published as a short paper in the 12th Annual Symposium on
Combinatorial Search, SoCS 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weighted A* (wA*) is a widely used algorithm for rapidly, but suboptimally,
solving planning and search problems. The cost of the solution it produces is
guaranteed to be at most W times the optimal solution cost, where W is the
weight wA* uses in prioritizing open nodes. W is therefore a suboptimality
bound for the solution produced by wA*. There is broad consensus that this
bound is not very accurate, that the actual suboptimality of wA*'s solution is
often much less than W times optimal. However, there is very little published
evidence supporting that view, and no existing explanation of why W is a poor
bound. This paper fills in these gaps in the literature. We begin with a
large-scale experiment demonstrating that, across a wide variety of domains and
heuristics for those domains, W is indeed very often far from the true
suboptimality of wA*'s solution. We then analytically identify the potential
sources of error. Finally, we present a practical method for correcting for two
of these sources of error and experimentally show that the correction
frequently eliminates much of the error.
| [
{
"version": "v1",
"created": "Mon, 27 May 2019 17:08:08 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jun 2019 15:04:48 GMT"
},
{
"version": "v3",
"created": "Fri, 26 May 2023 08:56:36 GMT"
}
] | 1,685,318,400,000 | [
[
"Holte",
"Robert C.",
""
],
[
"Majadas",
"Ruben",
""
],
[
"Pozanco",
"Alberto",
""
],
[
"Borrajo",
"Daniel",
""
]
] |
1905.11807 | Andrew Powell | Andrew Powell | Artificial Consciousness and Security | 7 pages, no figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a possible way to improve computer security by
implementing a program that provides the following three features related to
a weak notion of artificial consciousness: (partial) self-monitoring, ability
to compute the truth of quantifier-free propositions and the ability to
communicate with the user. The integrity of the program could be enhanced by
using a trusted computing approach, that is to say a hardware module that is at
the root of a chain of trust. This paper outlines a possible approach but does
not refer to an implementation (which would need further work), but the author
believes that an implementation using current processors, a debugger, a
monitoring program and a trusted processing module is currently possible.
| [
{
"version": "v1",
"created": "Sat, 11 May 2019 11:21:05 GMT"
}
] | 1,559,088,000,000 | [
[
"Powell",
"Andrew",
""
]
] |
1905.12186 | Michael Cohen | Michael K Cohen, Badri Vellambi, Marcus Hutter | Asymptotically Unambitious Artificial General Intelligence | 9 pages with 5 figures; 10 page Appendix with 2 figures | Proc.AAAI. 34 (2020) 2467-2476 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | General intelligence, the ability to solve arbitrary solvable problems, is
supposed by many to be artificially constructible. Narrow intelligence, the
ability to solve a given particularly difficult problem, has seen impressive
recent development. Notable examples include self-driving cars, Go engines,
image classifiers, and translators. Artificial General Intelligence (AGI)
presents dangers that narrow intelligence does not: if something smarter than
us across every domain were indifferent to our concerns, it would be an
existential threat to humanity, just as we threaten many species despite no ill
will. Even the theory of how to maintain the alignment of an AGI's goals with
our own has proven highly elusive. We present the first algorithm we are aware
of for asymptotically unambitious AGI, where "unambitiousness" includes not
seeking arbitrary power. Thus, we identify an exception to the Instrumental
Convergence Thesis, which is roughly that by default, an AGI would seek power,
including over us.
| [
{
"version": "v1",
"created": "Wed, 29 May 2019 02:48:15 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Sep 2019 03:17:31 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Dec 2019 12:17:45 GMT"
},
{
"version": "v4",
"created": "Tue, 21 Jul 2020 13:27:38 GMT"
}
] | 1,595,376,000,000 | [
[
"Cohen",
"Michael K",
""
],
[
"Vellambi",
"Badri",
""
],
[
"Hutter",
"Marcus",
""
]
] |
1905.12389 | Frank Van Harmelen | Frank van Harmelen and Annette ten Teije | A Boxology of Design Patterns for Hybrid Learning and Reasoning Systems | 12 pages,55 references | Journal of Web Engineering, Vol. 18 1-3, pgs. 97-124, 2019 | 10.13052/jwe1540-9589.18133 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a set of compositional design patterns to describe a large variety
of systems that combine statistical techniques from machine learning with
symbolic techniques from knowledge representation. As in other areas of
computer science (knowledge engineering, software engineering, ontology
engineering, process mining and others), such design patterns help to
systematize the literature, clarify which combinations of techniques serve
which purposes, and encourage re-use of software components. We have validated
our set of compositional design patterns against a large body of recent
literature.
| [
{
"version": "v1",
"created": "Wed, 29 May 2019 12:53:10 GMT"
}
] | 1,559,174,400,000 | [
[
"van Harmelen",
"Frank",
""
],
[
"Teije",
"Annette ten",
""
]
] |
1905.12464 | Luigi Portinale | Luigi Portinale | Approaching Adaptation Guided Retrieval in Case-Based Reasoning through
Inference in Undirected Graphical Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Case-Based Reasoning, when the similarity assumption does not hold, the
retrieval of a set of cases structurally similar to the query does not
guarantee a reusable or revisable solution. Knowledge about the
adaptability of solutions has to be exploited, in order to define a method for
adaptation-guided retrieval. We propose a novel approach to address this
problem, where knowledge about the adaptability of the solutions is captured
inside a metric Markov Random Field (MRF). Nodes of the MRF represent cases and
edges connect nodes whose solutions are close in the solution space. States of
the nodes represent different adaptation levels with respect to the potential
query. Metric-based potentials enforce connected nodes to share the same state,
since cases having similar solutions should have the same adaptability level
with respect to the query. The main goal is to enlarge the set of potentially
adaptable cases that are retrieved without significantly sacrificing the
precision and accuracy of retrieval. We will report on some experiments
concerning a retrieval architecture where a simple kNN retrieval (on the
problem description) is followed by a further retrieval step based on MRF
inference.
| [
{
"version": "v1",
"created": "Wed, 29 May 2019 14:00:26 GMT"
}
] | 1,559,174,400,000 | [
[
"Portinale",
"Luigi",
""
]
] |
1905.12877 | Tommy Liu | Tommy Liu and Jochen Renz and Peng Zhang and Matthew Stephenson | Using Restart Heuristics to Improve Agent Performance in Angry Birds | To appear: IEEE Conference on Games 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past few years the Angry Birds AI competition has been held in an
attempt to develop intelligent agents that can successfully and efficiently
solve levels for the video game Angry Birds. Many different agents and
strategies have been developed to solve the complex and challenging physical
reasoning problems associated with such a game. However none of these agents
attempt one of the key strategies which humans employ to solve Angry Birds
levels, which is restarting levels. Restarting is important in Angry Birds
because sometimes the level is no longer solvable or a shot that has been made has
little to no benefit towards the ultimate goal of the game. This paper proposes
a framework and experimental evaluation for when to restart levels in Angry
Birds. We demonstrate that restarting is a viable strategy to improve agent
performance in many cases.
| [
{
"version": "v1",
"created": "Thu, 30 May 2019 06:54:46 GMT"
}
] | 1,559,260,800,000 | [
[
"Liu",
"Tommy",
""
],
[
"Renz",
"Jochen",
""
],
[
"Zhang",
"Peng",
""
],
[
"Stephenson",
"Matthew",
""
]
] |
1905.12941 | Nicolas Perrin-Gilbert | Thomas Pierrot, Guillaume Ligner, Scott Reed, Olivier Sigaud, Nicolas
Perrin, Alexandre Laterre, David Kas, Karim Beguir, Nando de Freitas | Learning Compositional Neural Programs with Recursive Tree Search and
Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel reinforcement learning algorithm, AlphaNPI, that
incorporates the strengths of Neural Programmer-Interpreters (NPI) and
AlphaZero. NPI contributes structural biases in the form of modularity,
hierarchy and recursion, which are helpful to reduce sample complexity, improve
generalization and increase interpretability. AlphaZero contributes powerful
neural network guided search algorithms, which we augment with recursion.
AlphaNPI only assumes a hierarchical program specification with sparse rewards:
1 when the program execution satisfies the specification, and 0 otherwise.
Using this specification, AlphaNPI is able to train NPI models effectively with
RL for the first time, completely eliminating the need for strong supervision
in the form of execution traces. The experiments show that AlphaNPI can sort as
well as previous strongly supervised NPI variants. The AlphaNPI agent is also
trained on a Tower of Hanoi puzzle with two disks and is shown to generalize to
puzzles with an arbitrary number of disks.
| [
{
"version": "v1",
"created": "Thu, 30 May 2019 10:08:00 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Apr 2021 12:25:49 GMT"
}
] | 1,618,358,400,000 | [
[
"Pierrot",
"Thomas",
""
],
[
"Ligner",
"Guillaume",
""
],
[
"Reed",
"Scott",
""
],
[
"Sigaud",
"Olivier",
""
],
[
"Perrin",
"Nicolas",
""
],
[
"Laterre",
"Alexandre",
""
],
[
"Kas",
"David",
""
],
[
"Beguir",
"Karim",
""
],
[
"de Freitas",
"Nando",
""
]
] |
1905.12966 | Zhengui Xue | Zhengui Xue, Zhiwei Lin, Hui Wang and Sally McClean | Quantifying consensus of rankings based on q-support patterns | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rankings, representing preferences over a set of candidates, are widely used
in many information systems, e.g., group decision making and information
retrieval. It is of great importance to evaluate the consensus of the obtained
rankings from multiple agents. An overall measure of the consensus degree
provides an insight into the ranking data. Moreover, it could provide a
quantitative indicator for consensus comparison between groups and further
improvement of a ranking system. Existing studies are insufficient in assessing
the overall consensus of a ranking set. They did not provide an evaluation of
the consensus degree of preference patterns in most rankings. In this paper, a
novel consensus quantifying approach, without the need for any correlation or
distance functions as in existing studies of consensus, is proposed based on a
concept of q-support patterns of rankings. The q-support patterns represent the
commonality embedded in a set of rankings. A method for detecting outliers in a
set of rankings is naturally derived from the proposed consensus quantifying
approach. Experimental studies are conducted to demonstrate the effectiveness
of the proposed approach.
| [
{
"version": "v1",
"created": "Thu, 30 May 2019 11:21:22 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Jul 2019 16:45:45 GMT"
}
] | 1,564,358,400,000 | [
[
"Xue",
"Zhengui",
""
],
[
"Lin",
"Zhiwei",
""
],
[
"Wang",
"Hui",
""
],
[
"McClean",
"Sally",
""
]
] |
1905.13516 | Dennis Soemers | Cameron Browne, Dennis J. N. J. Soemers, \'Eric Piette, Matthew
Stephenson, Michael Conrad, Walter Crist, Thierry Depaulis, Eddie Duggan,
Fred Horn, Steven Kelk, Simon M. Lucas, Jo\~ao Pedro Neto, David Parlett,
Abdallah Saffidine, Ulrich Sch\"adler, Jorge Nuno Silva, Alex de Voogt, Mark
H. M. Winands | Foundations of Digital Arch{\ae}oludology | Report on Dagstuhl Research Meeting. Authored/edited by all
participants. Appendices by Thierry Depaulis | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital Archaeoludology (DAL) is a new field of study involving the analysis
and reconstruction of ancient games from incomplete descriptions and
archaeological evidence using modern computational techniques. The aim is to
provide digital tools and methods to help game historians and other researchers
better understand traditional games, their development throughout recorded
human history, and their relationship to the development of human culture and
mathematical knowledge. This work is being explored in the ERC-funded Digital
Ludeme Project.
The aim of this inaugural international research meeting on DAL is to gather
together leading experts in relevant disciplines - computer science, artificial
intelligence, machine learning, computational phylogenetics, mathematics,
history, archaeology, anthropology, etc. - to discuss the key themes and
establish the foundations for this new field of research, so that it may
continue beyond the lifetime of its initiating project.
| [
{
"version": "v1",
"created": "Fri, 31 May 2019 11:22:00 GMT"
}
] | 1,559,520,000,000 | [
[
"Browne",
"Cameron",
""
],
[
"Soemers",
"Dennis J. N. J.",
""
],
[
"Piette",
"Éric",
""
],
[
"Stephenson",
"Matthew",
""
],
[
"Conrad",
"Michael",
""
],
[
"Crist",
"Walter",
""
],
[
"Depaulis",
"Thierry",
""
],
[
"Duggan",
"Eddie",
""
],
[
"Horn",
"Fred",
""
],
[
"Kelk",
"Steven",
""
],
[
"Lucas",
"Simon M.",
""
],
[
"Neto",
"João Pedro",
""
],
[
"Parlett",
"David",
""
],
[
"Saffidine",
"Abdallah",
""
],
[
"Schädler",
"Ulrich",
""
],
[
"Silva",
"Jorge Nuno",
""
],
[
"de Voogt",
"Alex",
""
],
[
"Winands",
"Mark H. M.",
""
]
] |
1905.13521 | Li-Cheng Lan | Li-Cheng Lan, Wei Li, Ting-Han Wei, and I-Chen Wu | Multiple Policy Value Monte Carlo Tree Search | Proceedings of the 28th International Joint Conference on Artificial
Intelligence (IJCAI-19) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many of the strongest game playing programs use a combination of Monte Carlo
tree search (MCTS) and deep neural networks (DNN), where the DNNs are used as
policy or value evaluators. Given a limited budget, such as online playing or
during the self-play phase of AlphaZero (AZ) training, a balance needs to be
reached between accurate state estimation and more MCTS simulations, both of
which are critical for a strong game playing agent. Typically, larger DNNs are
better at generalization and accurate evaluation, while smaller DNNs are less
costly, and therefore can lead to more MCTS simulations and bigger search trees
with the same budget. This paper introduces a new method called the multiple
policy value MCTS (MPV-MCTS), which combines multiple policy value neural
networks (PV-NNs) of various sizes to retain advantages of each network, where
two PV-NNs f_S and f_L are used in this paper. We show through experiments on
the game NoGo that a combined f_S and f_L MPV-MCTS outperforms a single PV-NN
with policy value MCTS, called PV-MCTS. Additionally, MPV-MCTS also outperforms
PV-MCTS for AZ training.
| [
{
"version": "v1",
"created": "Fri, 31 May 2019 11:33:06 GMT"
}
] | 1,559,520,000,000 | [
[
"Lan",
"Li-Cheng",
""
],
[
"Li",
"Wei",
""
],
[
"Wei",
"Ting-Han",
""
],
[
"Wu",
"I-Chen",
""
]
] |
1906.00131 | Arsh Javed Rehman | Arsh Javed Rehman, Pradeep Tomar | Decision-Making in Reinforcement Learning | 4 pages, 1 figure | null | 10.13140/RG.2.2.12367.33443 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this research work, probabilistic decision-making approaches are studied,
e.g. Bayesian and Boltzmann strategies, along with various deterministic
exploration strategies, e.g. greedy, epsilon-Greedy and random approaches. In
this research work, a comparative study has been done between probabilistic and
deterministic decision-making approaches, the experiments are performed in
OpenAI gym environment, solving the Cart Pole problem. This research work discusses
the Bayesian approach to decision-making in deep reinforcement learning, and how
dropout can reduce the computational cost. All the
exploration approaches are compared. It also discusses the importance of
exploration in deep reinforcement learning, and how improving exploration
strategies may help in science and technology. This research work shows how
probabilistic decision-making approaches are better in the long run as compared
to the deterministic approaches. When there is uncertainty, the Bayesian dropout
approach proved to be better than all other approaches in this research work.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 2019 02:36:42 GMT"
}
] | 1,559,606,400,000 | [
[
"Rehman",
"Arsh Javed",
""
],
[
"Tomar",
"Pradeep",
""
]
] |
1906.00163 | Mukund Raghothaman | Xujie Si, Mukund Raghothaman, Kihong Heo, Mayur Naik | Synthesizing Datalog Programs Using Numerical Relaxation | Per editor's instructions, this is only an early preprint of the
paper which will be presented at IJCAI 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of learning logical rules from examples arises in diverse fields,
including program synthesis, logic programming, and machine learning. Existing
approaches either involve solving computationally difficult combinatorial
problems, or performing parameter estimation in complex statistical models.
In this paper, we present Difflog, a technique to extend the logic
programming language Datalog to the continuous setting. By attaching
real-valued weights to individual rules of a Datalog program, we naturally
associate numerical values with individual conclusions of the program.
Analogous to the strategy of numerical relaxation in optimization problems, we
can now first determine the rule weights which cause the best agreement between
the training labels and the induced values of output tuples, and subsequently
recover the classical discrete-valued target program from the continuous
optimum.
We evaluate Difflog on a suite of 34 benchmark problems from recent
literature in knowledge discovery, formal verification, and database
query-by-example, and demonstrate significant improvements in learning complex
programs with recursive rules, invented predicates, and relations of arbitrary
arity.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 2019 06:42:05 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jun 2019 08:34:46 GMT"
}
] | 1,561,507,200,000 | [
[
"Si",
"Xujie",
""
],
[
"Raghothaman",
"Mukund",
""
],
[
"Heo",
"Kihong",
""
],
[
"Naik",
"Mayur",
""
]
] |
1906.00317 | Elif Surer | Sinan Ariyurek, Aysu Betin-Can, Elif Surer | Automated Video Game Testing Using Synthetic and Human-Like Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a new methodology that employs tester agents to
automate video game testing. We introduce two types of agents (synthetic and
human-like) and two distinct approaches to create them. Our agents are derived
from Reinforcement Learning (RL) and Monte Carlo Tree Search (MCTS) agents, but
focus on finding defects. The synthetic agent uses test goals generated from
game scenarios, and these goals are further modified to examine the effects of
unintended game transitions. The human-like agent uses test goals extracted by
our proposed multiple greedy-policy inverse reinforcement learning (MGP-IRL)
algorithm from tester trajectories. MGP-IRL captures multiple policies executed
by human testers. These testers' aims are finding defects while interacting
with the game to break it, which is considerably different from game playing.
We present interaction states to model such interactions. We use our agents to
produce test sequences, run the game with these sequences, and check the game
for each run with an automated test oracle. We analyze the proposed method in
two parts: we compare the success of human-like and synthetic agents in bug
finding, and we evaluate the similarity between human-like agents and human
testers. We collected 427 trajectories from human testers using the General
Video Game Artificial Intelligence (GVG-AI) framework and created three games
with 12 levels that contain 45 bugs. Our experiments reveal that human-like and
synthetic agents are competitive with human testers in bug-finding performance.
Moreover, we show that MGP-IRL increases the human-likeness of agents while
improving bug-finding performance.
| [
{
"version": "v1",
"created": "Sun, 2 Jun 2019 00:19:00 GMT"
}
] | 1,559,606,400,000 | [
[
"Ariyurek",
"Sinan",
""
],
[
"Betin-Can",
"Aysu",
""
],
[
"Surer",
"Elif",
""
]
] |
1906.00657 | Andreas Holzinger | Heimo Mueller and Andreas Holzinger | Kandinsky Patterns | 13 pages, 13 Figures | Artificial Intelligence, 2021 | 10.1016/j.artint.2021.103546 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kandinsky Figures and Kandinsky Patterns are mathematically describable,
simple, self-contained, and hence controllable test data sets for the
development, validation and training of explainability in artificial
intelligence. Whilst Kandinsky Patterns have these computationally manageable
properties, they are at the same time easily distinguishable by human
observers. Consequently,
controlled patterns can be described by both humans and computers. We define a
Kandinsky Pattern as a set of Kandinsky Figures, where for each figure an
"infallible authority" defines that the figure belongs to the Kandinsky
Pattern. With this simple principle we build training and validation data sets
for automatic interpretability and context learning. In this paper we describe
the basic idea and some underlying principles of Kandinsky Patterns and provide
a GitHub repository inviting the international machine learning research
community to experiment with our Kandinsky Patterns, in order to advance the
field of explainable AI and to contribute to the emerging field of
explainability and causability.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2019 09:22:33 GMT"
}
] | 1,623,369,600,000 | [
[
"Mueller",
"Heimo",
""
],
[
"Holzinger",
"Andreas",
""
]
] |
1906.01820 | Chris Van Merwijk | Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, Scott
Garrabrant | Risks from Learned Optimization in Advanced Machine Learning Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the type of learned optimization that occurs when a learned model
(such as a neural network) is itself an optimizer - a situation we refer to as
mesa-optimization, a neologism we introduce in this paper. We believe that the
possibility of mesa-optimization raises two important questions for the safety
and transparency of advanced machine learning systems. First, under what
circumstances will learned models be optimizers, including when they should not
be? Second, when a learned model is an optimizer, what will its objective be -
how will it differ from the loss function it was trained under - and how can it
be aligned? In this paper, we provide an in-depth analysis of these two primary
questions and provide an overview of topics for future research.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2019 04:43:25 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jun 2019 21:44:27 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Dec 2021 11:22:52 GMT"
}
] | 1,638,403,200,000 | [
[
"Hubinger",
"Evan",
""
],
[
"van Merwijk",
"Chris",
""
],
[
"Mikulik",
"Vladimir",
""
],
[
"Skalse",
"Joar",
""
],
[
"Garrabrant",
"Scott",
""
]
] |