id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1709.06772 | Angelo Impedovo | Angelo Impedovo, Corrado Loglisci, Michelangelo Ceci | Temporal Pattern Mining from Evolving Networks | 4 pages, to be presented at the PhD forum of ECML-PKDD 2017 (The
European Conference on Machine Learning & Principles and Practice of
Knowledge Discovery in Databases) in Skopje, 22 September 2017 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolving networks have recently become a suitable way to model many
real-world complex systems, owing to their ability to represent the systems
and their constituent entities, the interactions between the entities, and the
time-variability of their structure and properties. Designing computational
models able to analyze evolving networks is relevant in many applications.
The goal of this research project is to evaluate the possible contribution of
temporal pattern mining techniques to the analysis of evolving networks. In
particular, we aim at exploiting available snapshots for the recognition of
valuable and potentially useful knowledge about the temporal dynamics exhibited
by the network over time, without making any prior assumption about the
underlying evolutionary schema. Pattern-based approaches of temporal pattern
mining can be exploited to detect and characterize changes exhibited by a
network over time, starting from observed snapshots.
| [
{
"version": "v1",
"created": "Wed, 20 Sep 2017 08:54:28 GMT"
}
] | 1,505,952,000,000 | [
[
"Impedovo",
"Angelo",
""
],
[
"Loglisci",
"Corrado",
""
],
[
"Ceci",
"Michelangelo",
""
]
] |
1709.06908 | Chao Zhao | Chao Zhao, Jingchi Jiang, Yi Guan | EMR-based medical knowledge representation and inference via Markov
random fields and distributed representation learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: Electronic medical records (EMRs) contain a wealth of medical
knowledge that can be used for clinical decision support (CDS). Our objective
is a general system that can extract and represent the knowledge contained in
EMRs to support three CDS tasks: test recommendation, initial diagnosis, and
treatment plan recommendation, given the condition of a patient.
Methods: We extracted four kinds of medical entities from records and
constructed an EMR-based medical knowledge network (EMKN), in which nodes are
entities and edges reflect their co-occurrence in a single record. Three
bipartite subgraphs (bi-graphs) were extracted from the EMKN to support each
task. One part of the bi-graph was the given condition (e.g., symptoms), and
the other was the condition to be inferred (e.g., diseases). Each bi-graph was
regarded as a Markov random field to support the inference. Three lazy energy
functions and one parameter-based energy function were proposed, as well as two
knowledge representation learning-based energy functions, which can provide a
distributed representation of medical entities. Three measures were utilized
for performance evaluation. Results: On the initial diagnosis task, 80.11% of
the test records identified at least one correct disease from top 10
candidates. Test and treatment recommendation results were 87.88% and 92.55%,
respectively. These results altogether indicate that the proposed system
outperformed the baseline methods. The distributed representation of medical
entities does reflect similarity relationships at the knowledge level.
Conclusion: Combining EMKN and MRF is an effective approach for general medical
knowledge representation and inference. Different tasks, however, require
designing their energy functions individually.
| [
{
"version": "v1",
"created": "Wed, 20 Sep 2017 14:45:21 GMT"
}
] | 1,505,952,000,000 | [
[
"Zhao",
"Chao",
""
],
[
"Jiang",
"Jingchi",
""
],
[
"Guan",
"Yi",
""
]
] |
1709.07092 | Umut Oztok | Umut Oztok and Adnan Darwiche | On Compiling DNNFs without Determinism | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art knowledge compilers generate deterministic subsets of DNNF,
which have been recently shown to be exponentially less succinct than DNNF. In
this paper, we propose a new method to compile DNNFs without enforcing
determinism necessarily. Our approach is based on compiling deterministic DNNFs
with the addition of auxiliary variables to the input formula. These variables
are then existentially quantified from the deterministic structure in linear
time, which would lead to a DNNF that is equivalent to the input formula and
not necessarily deterministic. On the theoretical side, we show that the new
method could generate exponentially smaller DNNFs than deterministic ones, even
by adding a single auxiliary variable. Further, we show that various existing
techniques that introduce auxiliary variables to the input formulas can be
employed in our framework. On the practical side, we empirically demonstrate
that our new method can significantly advance DNNF compilation on certain
benchmarks.
| [
{
"version": "v1",
"created": "Wed, 20 Sep 2017 21:45:29 GMT"
}
] | 1,506,038,400,000 | [
[
"Oztok",
"Umut",
""
],
[
"Darwiche",
"Adnan",
""
]
] |
1709.07114 | Peter Henderson | Peter Henderson, Matthew Vertescher, David Meger, Mark Coates | Cost Adaptation for Robust Decentralized Swarm Behaviour | Accepted to IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS), 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized receding horizon control (D-RHC) provides a mechanism for
coordination in multi-agent settings without a centralized command center.
However, combining a set of different goals, costs, and constraints to form an
efficient optimization objective for D-RHC can be difficult. To alleviate this
problem, we use a meta-learning process -- cost adaptation -- which generates
the optimization objective for D-RHC to solve based on a set of human-generated
priors (cost and constraint functions) and an auxiliary heuristic. We use this
adaptive D-RHC method for control of mesh-networked swarm agents. This
formulation allows a wide range of tasks to be encoded and can account for
network delays, heterogeneous capabilities, and increasingly large swarms
through the adaptation mechanism. We leverage the Unity3D game engine to build
a simulator capable of introducing artificial networking failures and delays in
the swarm. Using the simulator we validate our method on an example coordinated
exploration task. We demonstrate that cost adaptation allows for more efficient
and safer task completion under varying environment conditions and increasingly
large swarm sizes. We release our simulator and code to the community for
future work.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2017 00:50:23 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Sep 2018 01:23:53 GMT"
}
] | 1,538,438,400,000 | [
[
"Henderson",
"Peter",
""
],
[
"Vertescher",
"Matthew",
""
],
[
"Meger",
"David",
""
],
[
"Coates",
"Mark",
""
]
] |
1709.07255 | Christian Stra{\ss}er | Jesse Heyninck and Christian Stra{\ss}er and Pere Pardo | Assumption-Based Approaches to Reasoning with Priorities | Forthcoming in the proceedings of AI^3 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper maps out the relation between different approaches for handling
preferences in argumentation with strict rules and defeasible assumptions by
offering translations between them. The systems we compare are: non-prioritized
defeats, i.e., attacks; preference-based defeats; and preference-based defeats
extended with reverse defeat.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2017 10:46:00 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2017 15:49:26 GMT"
}
] | 1,506,902,400,000 | [
[
"Heyninck",
"Jesse",
""
],
[
"Straßer",
"Christian",
""
],
[
"Pardo",
"Pere",
""
]
] |
1709.07511 | Mark Lewis | Mark Lewis, Gary Kochenberger, John Metcalfe | Robust Optimization of Unconstrained Binary Quadratic Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we focus on the unconstrained binary quadratic optimization
model, maximize x^t Qx, x binary, and consider the problem of identifying
optimal solutions that are robust with respect to perturbations in the Q
matrix. We are motivated to find robust, or stable, solutions because of the
uncertainty inherent in the big data origins of Q and limitations in computer
numerical precision, particularly in a new class of quantum annealing
computers. Experimental design techniques are used to generate a diverse subset
of possible scenarios, from which robust solutions are identified. An
illustrative example with practical application to business decision making is
examined. The approach presented also generates a surface response equation
which is used to estimate upper bounds in constant time for Q instantiations
within the scenario extremes. In addition, a theoretical framework for the
robustness of individual x_i variables is considered by examining the range of
Q values over which the x_i are predetermined.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2017 20:36:21 GMT"
}
] | 1,506,297,600,000 | [
[
"Lewis",
"Mark",
""
],
[
"Kochenberger",
"Gary",
""
],
[
"Metcalfe",
"John",
""
]
] |
1709.07576 | Jialong Shi | Jialong Shi, Qingfu Zhang, Edward Tsang | EB-GLS: An Improved Guided Local Search Based on the Big Valley
Structure | null | Memetic Computing, 2017: 1-18 | 10.1007/s12293-017-0242-5 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Local search is a basic building block in memetic algorithms. Guided Local
Search (GLS) can improve the efficiency of local search. By changing the guide
function, GLS guides a local search to escape from locally optimal solutions
and find better solutions. The key component of GLS is its penalizing mechanism
which determines which feature is selected to penalize when the search is
trapped in a locally optimal solution. The original GLS penalizing mechanism
only makes use of the cost and the current penalty value of each feature. It is
well known that many combinatorial optimization problems have a big valley
structure, i.e., the better a solution is, the greater the chance that it is
close to a globally optimal solution. This paper proposes to use the big valley
structure assumption to improve the GLS penalizing mechanism. An improved GLS algorithm
called Elite Biased GLS (EB-GLS) is proposed. EB-GLS records and maintains an
elite solution as an estimate of the globally optimal solutions, and reduces
the chance of penalizing the features in this solution. We have systematically
tested the proposed algorithm on the symmetric traveling salesman problem.
Experimental results show that EB-GLS is significantly better than GLS.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2017 02:43:25 GMT"
}
] | 1,506,297,600,000 | [
[
"Shi",
"Jialong",
""
],
[
"Zhang",
"Qingfu",
""
],
[
"Tsang",
"Edward",
""
]
] |
1709.07597 | Mohit Sharma | Mohit Sharma, Kris M. Kitani, and Joachim Groeger | Inverse Reinforcement Learning with Conditional Choice Probabilities | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We make an important connection to existing results in econometrics to
describe an alternative formulation of inverse reinforcement learning (IRL). In
particular, we describe an algorithm using Conditional Choice Probabilities
(CCP), which are maximum likelihood estimates of the policy estimated from
expert demonstrations, to solve the IRL problem. Using the language of
structural econometrics, we re-frame the optimal decision problem and introduce
an alternative representation of value functions due to Hotz and Miller (1993).
In addition to presenting the theoretical connections that bridge the IRL
literature between Economics and Robotics, the use of CCPs also has the
practical benefit of reducing the computational cost of solving the IRL
problem. Specifically, under the CCP representation, we show how one can avoid
repeated calls to the dynamic programming subroutine typically used in IRL. We
show via extensive experimentation on standard IRL benchmarks that CCP-IRL is
able to outperform MaxEnt-IRL, with as much as a 5x speedup and without
compromising on the quality of the recovered reward function.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2017 05:12:04 GMT"
}
] | 1,506,297,600,000 | [
[
"Sharma",
"Mohit",
""
],
[
"Kitani",
"Kris M.",
""
],
[
"Groeger",
"Joachim",
""
]
] |
1709.07604 | Vincent Zheng | Hongyun Cai, Vincent W. Zheng, Kevin Chen-Chuan Chang | A Comprehensive Survey of Graph Embedding: Problems, Techniques and
Applications | A 20-page comprehensive survey of graph/network embedding for over
150+ papers till year 2018. It provides systematic categorization of
problems, techniques and applications. Accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE). Comments and suggestions are welcomed
for continuously improving this survey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The graph is an important data representation that appears in a wide variety
of real-world scenarios. Effective graph analytics provides users with a deeper
understanding of what is behind the data, and thus can benefit many useful
applications such as node classification, node recommendation, and link
prediction. However, most graph analytics methods suffer from high computation
and space costs. Graph embedding is an effective yet efficient way to solve the
graph analytics problem. It converts the graph data into a low dimensional
space in which the graph structural information and graph properties are
maximally preserved. In this survey, we conduct a comprehensive review of the
literature in graph embedding. We first introduce the formal definition of
graph embedding as well as the related concepts. After that, we propose two
taxonomies of graph embedding which correspond to what challenges exist in
different graph embedding problem settings and how existing works address
these challenges in their solutions. Finally, we summarize the applications
that graph embedding enables and suggest four promising future research
directions in terms of computation efficiency, problem settings, techniques and
application scenarios.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2017 05:54:16 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Jan 2018 02:09:40 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Feb 2018 07:01:22 GMT"
}
] | 1,517,788,800,000 | [
[
"Cai",
"Hongyun",
""
],
[
"Zheng",
"Vincent W.",
""
],
[
"Chang",
"Kevin Chen-Chuan",
""
]
] |
1709.07791 | Benjamin Goertzel | Ben Goertzel, Julia Mossbridge, Eddie Monroe, David Hanson, Gino Yu | Humanoid Robots as Agents of Human Consciousness Expansion | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The "Loving AI" project involves developing software enabling humanoid robots
to interact with people in loving and compassionate ways, and to promote
people's self-understanding and self-transcendence. Currently, the project
centers on the Hanson Robotics robot "Sophia" -- specifically, on supplying
Sophia with personality content and cognitive, linguistic, perceptual and
behavioral content aimed at enabling loving interactions supportive of human
self-transcendence. In September 2017 a small pilot study was conducted,
involving the Sophia robot leading human subjects through dialogues and
exercises focused on meditation, visualization and relaxation. The pilot was an
apparent success, qualitatively demonstrating the viability of the approach and
the ability of appropriate human-robot interaction to increase human well-being
and advance human consciousness.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2017 14:52:23 GMT"
}
] | 1,506,297,600,000 | [
[
"Goertzel",
"Ben",
""
],
[
"Mossbridge",
"Julia",
""
],
[
"Monroe",
"Eddie",
""
],
[
"Hanson",
"David",
""
],
[
"Yu",
"Gino",
""
]
] |
1709.08024 | Yuanfang Chen | Yuanfang Chen, Mohsen Guizani, Yan Zhang, Lei Wang, Noel Crespi, Gyu
Myoung Lee | When Traffic Flow Prediction Meets Wireless Big Data Analytics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic flow prediction is an important research issue for solving the
traffic congestion problem in an Intelligent Transportation System (ITS).
Traffic congestion, one of the most serious problems in a city, can be
predicted in advance by analyzing traffic flow patterns. Such prediction is
possible by analyzing the real-time transportation data from correlative roads
and vehicles. This article first gives a brief introduction to the
transportation data, and surveys the state-of-the-art prediction methods. Then,
we verify whether or not the prediction performance is able to be improved by
fitting actual data to optimize the parameters of the prediction model which is
used to predict the traffic flow. Such verification is conducted by comparing
the optimized time series prediction model with the normal time series
prediction model. This means that in the era of big data, accurate use of the
data becomes the focus of traffic flow prediction research aimed at solving the
congestion problem. Finally, experimental results of a case study are provided
to verify the existence of such performance improvement, while the research
challenges of this data-analytics-based prediction are presented and discussed.
| [
{
"version": "v1",
"created": "Sat, 23 Sep 2017 08:54:25 GMT"
}
] | 1,506,384,000,000 | [
[
"Chen",
"Yuanfang",
""
],
[
"Guizani",
"Mohsen",
""
],
[
"Zhang",
"Yan",
""
],
[
"Wang",
"Lei",
""
],
[
"Crespi",
"Noel",
""
],
[
"Lee",
"Gyu Myoung",
""
]
] |
1709.08027 | Dmytro Terletskyi | Dmytro Terletskyi | Object-Oriented Knowledge Representation and Data Storage Using
Inhomogeneous Classes | 2 figures | Information and Software Technologies, Volume 756 of the series
Communications in Computer and Information Science, 2017, pp. 48-61 | 10.1007/978-3-319-67642-5_5 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper contains an analysis of the concept of a class within different
object-oriented knowledge representation models. The main attention is paid to
the structure of the class and its efficiency in the context of data storage
using object-relational mapping. The main achievement of the paper is the
extension of the concept of a homogeneous class of objects by introducing the
concepts of single-core and multi-core inhomogeneous classes of objects, which
allow the simultaneous definition of several different types within one class
of objects, avoiding duplication of properties and methods in the
representation of types, decreasing the size of program code, and providing
more efficient information storage in databases. In addition, the paper
contains the results of an experiment showing that data storage in a relational
database using the proposed extensions of the class is in some cases more
efficient than the use of homogeneous classes of objects.
| [
{
"version": "v1",
"created": "Sat, 23 Sep 2017 09:09:04 GMT"
}
] | 1,541,116,800,000 | [
[
"Terletskyi",
"Dmytro",
""
]
] |
1709.08034 | Beishui Liao | Beishui Liao, Nir Oren, Leendert van der Torre and Serena Villata | Prioritized Norms in Formal Argumentation | Accepted by the Journal of Logic and Computation on November 2nd,
2017 | null | 10.1093/logcom/exy009 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To resolve conflicts among norms, various nonmonotonic formalisms can be used
to perform prioritized normative reasoning. Meanwhile, formal argumentation
provides a way to represent nonmonotonic logics. In this paper, we propose a
representation of prioritized normative reasoning by argumentation. Using
hierarchical abstract normative systems, we define three kinds of prioritized
normative reasoning approaches, called Greedy, Reduction, and Optimization.
Then, after formulating an argumentation theory for a hierarchical abstract
normative system, we show that for a totally ordered hierarchical abstract
normative system, Greedy and Reduction can be represented in argumentation by
applying the weakest link and the last link principles respectively, and
Optimization can be represented by introducing additional defeats capturing the
idea that any argument containing a norm not belonging to the maximal obeyable
set should be rejected.
| [
{
"version": "v1",
"created": "Sat, 23 Sep 2017 10:21:56 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Feb 2018 11:44:26 GMT"
}
] | 1,520,294,400,000 | [
[
"Liao",
"Beishui",
""
],
[
"Oren",
"Nir",
""
],
[
"van der Torre",
"Leendert",
""
],
[
"Villata",
"Serena",
""
]
] |
1709.08693 | Xiaojun Xu | Xiaojun Xu, Xinyun Chen, Chang Liu, Anna Rohrbach, Trevor Darrell and
Dawn Song | Fooling Vision and Language Models Despite Localization and Attention
Mechanism | CVPR 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial attacks are known to succeed on classifiers, but it has been an
open question whether more complex vision systems are vulnerable. In this
paper, we study adversarial examples for vision and language models, which
incorporate natural language understanding and complex structures such as
attention, localization, and modular architectures. In particular, we
investigate attacks on a dense captioning model and on two visual question
answering (VQA) models. Our evaluation shows that we can generate adversarial
examples with a high success rate (i.e., > 90%) for these models. Our work
sheds new light on understanding adversarial attacks on vision systems which
have a language component and shows that attention, bounding box localization,
and compositional internal structures are vulnerable to adversarial attacks.
These observations will inform future work towards building effective defenses.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2017 19:32:49 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Apr 2018 01:56:16 GMT"
}
] | 1,523,232,000,000 | [
[
"Xu",
"Xiaojun",
""
],
[
"Chen",
"Xinyun",
""
],
[
"Liu",
"Chang",
""
],
[
"Rohrbach",
"Anna",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Song",
"Dawn",
""
]
] |
1709.08982 | Aisha Blfgeh | Aisha Blfgeh and Phillip Lord | User and Developer Interaction with Editable and Readable Ontologies | 5 pages, 5 figures, accepted at ICBO 2017, License updated | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The process of building ontologies is a difficult task that involves
collaboration between ontology developers and domain experts and requires an
ongoing interaction between them. This collaboration is made more difficult,
because they tend to use different tool sets, which can hamper this
interaction. In this paper, we propose to decrease this distance between domain
experts and ontology developers by creating more readable forms of ontologies,
and further to enable editing in normal office environments. Building on a
programmatic ontology development environment, such as Tawny-OWL, we are now
able to generate these readable/editable forms from the raw ontological source
and its embedded comments. We have implemented this translation to HTML for
reading; this environment provides rich hyperlinking as well as active features
such as hiding the source code in favour of comments. We are now working on a
translation to a Word document that also enables editing. Taken together, this should
provide a significant new route for collaboration between the ontologist and
domain specialist.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2017 12:48:33 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2017 14:13:47 GMT"
}
] | 1,506,643,200,000 | [
[
"Blfgeh",
"Aisha",
""
],
[
"Lord",
"Phillip",
""
]
] |
1709.09131 | Felix H\"ulsmann | Felix H\"ulsmann, Stefan Kopp, Mario Botsch | Automatic Error Analysis of Human Motor Performance for Interactive
Coaching in Virtual Reality | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of fitness coaching or for rehabilitation purposes, the motor
actions of a human participant must be observed and analyzed for errors in
order to provide effective feedback. This task is normally carried out by human
coaches, and it needs to be solved automatically in technical applications that
are to provide automatic coaching (e.g. training environments in VR). However,
most coaching systems only provide coarse information on movement quality, such
as a scalar value per body part that describes the overall deviation from the
correct movement. Further, they are often limited to static body postures or
rather simple movements of single body parts. While there are many approaches
to distinguish between different types of movements (e.g., between walking and
jumping), the detection of more subtle errors in a motor performance is less
investigated. We propose a novel approach to classify errors in sports or
rehabilitation exercises such that feedback can be delivered in a rapid and
detailed manner: Homogeneous sub-sequences of exercises are first temporally
aligned via Dynamic Time Warping. Next, we extract a feature vector from the
aligned sequences, which serves as a basis for feature selection using Random
Forests. The selected features are used as input for Support Vector Machines,
which finally classify the movement errors. We compare our algorithm to a
well-established state-of-the-art approach in time series classification, 1-Nearest
Neighbor combined with Dynamic Time Warping, and show our algorithm's
superiority regarding classification quality as well as computational cost.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2017 17:01:32 GMT"
}
] | 1,506,470,400,000 | [
[
"Hülsmann",
"Felix",
""
],
[
"Kopp",
"Stefan",
""
],
[
"Botsch",
"Mario",
""
]
] |
1709.09433 | Fulvio Mastrogiovanni | Luca Buoncompagni, Fulvio Mastrogiovanni, Alessandro Saffiotti | Scene learning, recognition and similarity detection in a fuzzy ontology
via human examples | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a Fuzzy Logic framework for scene learning, recognition
and similarity detection, where scenes are taught via human examples. The
framework allows a robot to: (i) deal with the intrinsic vagueness associated
with determining spatial relations among objects; (ii) infer similarities and
dissimilarities in a set of scenes, and represent them in a hierarchical
structure encoded in a Fuzzy ontology. In this paper, we briefly formalize
our approach and provide a few use cases by way of illustration. We also
discuss how the framework can be used in real-world scenarios.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2017 10:19:38 GMT"
}
] | 1,506,556,800,000 | [
[
"Buoncompagni",
"Luca",
""
],
[
"Mastrogiovanni",
"Fulvio",
""
],
[
"Saffiotti",
"Alessandro",
""
]
] |
1709.09585 | Xingyi Cheng | Xingyi Cheng, Ruiqing Zhang, Jie Zhou, Wei Xu | DeepTransport: Learning Spatial-Temporal Dependency for Traffic
Condition Forecasting | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting traffic conditions has been recently explored as a way to relieve
traffic congestion. Several pioneering approaches have been proposed based on
traffic observations of the target location as well as its adjacent regions,
but they obtain somewhat limited accuracy due to a lack of mining road
topology. To address the effect attenuation problem, we suggest taking into
account the traffic of surrounding locations (wider than the adjacent range). We
propose an end-to-end framework called DeepTransport, in which Convolutional
Neural Networks (CNN) and Recurrent Neural Networks (RNN) are utilized to
obtain spatial-temporal traffic information within a transport network
topology. In addition, an attention mechanism is introduced to align spatial
and temporal information. Moreover, we constructed and released a real-world
large traffic condition dataset with a 5-minute resolution. Our experiments on
this dataset demonstrate our method captures the complex relationship in the
temporal and spatial domains. It significantly outperforms traditional
statistical methods and a state-of-the-art deep learning method.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2017 15:39:49 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Oct 2019 02:49:09 GMT"
},
{
"version": "v3",
"created": "Sun, 29 May 2022 14:46:13 GMT"
},
{
"version": "v4",
"created": "Sun, 20 Aug 2023 02:36:27 GMT"
}
] | 1,692,662,400,000 | [
[
"Cheng",
"Xingyi",
""
],
[
"Zhang",
"Ruiqing",
""
],
[
"Zhou",
"Jie",
""
],
[
"Xu",
"Wei",
""
]
] |
1709.09611 | Xiao Li | Xiao Li, Yao Ma and Calin Belta | A Policy Search Method For Temporal Logic Specified Reinforcement
Learning Tasks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reward engineering is an important aspect of reinforcement learning. Whether
or not the user's intentions can be correctly encapsulated in the reward
function can significantly impact the learning outcome. Current methods rely on
manually crafted reward functions that often require parameter tuning to obtain
the desired behavior. This operation can be expensive when exploration requires
systems to interact with the physical world. In this paper, we explore the use
of temporal logic (TL) to specify tasks in reinforcement learning. A TL formula
can be translated into a real-valued function that measures its level of
satisfaction against a trajectory. We take advantage of this function and
propose temporal logic policy search (TLPS), a model-free learning technique
that finds a policy that satisfies the TL specification. A set of simulated
experiments are conducted to evaluate the proposed approach.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2017 16:37:51 GMT"
}
] | 1,506,556,800,000 | [
[
"Li",
"Xiao",
""
],
[
"Ma",
"Yao",
""
],
[
"Belta",
"Calin",
""
]
] |
1709.09839 | Mor Vered | Mor Vered and Gal A. Kaminka | Heuristic Online Goal Recognition in Continuous Domains | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Goal recognition is the problem of inferring the goal of an agent, based on
its observed actions. An inspiring approach - plan recognition by planning
(PRP) - uses off-the-shelf planners to dynamically generate plans for given
goals, eliminating the need for the traditional plan library. However, the
existing PRP formulation is inherently inefficient in online recognition, and cannot be
used with motion planners for continuous spaces. In this paper, we utilize a
different PRP formulation which allows for online goal recognition, and for
application in continuous spaces. We present an online recognition algorithm,
where two heuristic decision points may be used to improve run-time
significantly over existing work. We specify heuristics for continuous domains,
prove guarantees on their use, and empirically evaluate the algorithm over
hundreds of experiments in both a 3D navigational environment and a cooperative
robotic team task.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2017 07:58:59 GMT"
}
] | 1,506,643,200,000 | [
[
"Vered",
"Mor",
""
],
[
"Kaminka",
"Gal A.",
""
]
] |
1709.09972 | Andr\'e Hottung | Andr\'e Hottung, Shunji Tanaka, Kevin Tierney | Deep Learning Assisted Heuristic Tree Search for the Container
Pre-marshalling Problem | null | Computers & Operations Research 113 (2020) 104781 | 10.1016/j.cor.2019.104781 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The container pre-marshalling problem (CPMP) is concerned with the
re-ordering of containers in container terminals during off-peak times so that
containers can be quickly retrieved when the port is busy. The problem has
received significant attention in the literature and is addressed by a large
number of exact and heuristic methods. Existing methods for the CPMP heavily
rely on problem-specific components (e.g., proven lower bounds) that need to be
developed by domain experts with knowledge of optimization techniques and a
deep understanding of the problem at hand. With the goal of automating the costly
and time-intensive design of heuristics for the CPMP, we propose a new method
called Deep Learning Heuristic Tree Search (DLTS). It uses deep neural networks
to learn solution strategies and lower bounds customized to the CPMP solely
through analyzing existing (near-) optimal solutions to CPMP instances. The
networks are then integrated into a tree search procedure to decide which
branch to choose next and to prune the search tree. DLTS produces the highest
quality heuristic solutions to the CPMP to date with gaps to optimality below
2% on real-world sized instances.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2017 14:06:28 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Sep 2019 15:16:38 GMT"
}
] | 1,568,851,200,000 | [
[
"Hottung",
"André",
""
],
[
"Tanaka",
"Shunji",
""
],
[
"Tierney",
"Kevin",
""
]
] |
1709.10242 | Liu Feng | Feng Liu, Yong Shi, Ying Liu | Intelligence Quotient and Intelligence Grade of Artificial Intelligence | null | Annals of Data Science, June 2017, Volume 4, Issue 2, pp 179-191 | 10.1007/s40745-017-0109-0 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although artificial intelligence is currently one of the most interesting
areas in scientific research, the potential threats posed by emerging AI
systems remain a source of persistent controversy. To address the issue of AI
threat, this study proposes a standard intelligence model that unifies AI and
human characteristics in terms of four aspects of knowledge, i.e., input,
output, mastery, and creation. Using this model, we observe three challenges,
namely, the expansion of the von Neumann architecture; testing and ranking the
intelligence quotient of naturally and artificially intelligent systems,
including humans, Google, Bing, Baidu, and Siri; and finally, the division of
artificially intelligent systems into seven grades from robots to Google Brain.
Based on this, we conclude that AlphaGo belongs to the third grade.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2017 05:43:39 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2017 16:33:07 GMT"
}
] | 1,507,161,600,000 | [
[
"Liu",
"Feng",
""
],
[
"Shi",
"Yong",
""
],
[
"Liu",
"Ying",
""
]
] |
1709.10256 | Daniele Magazzeni | Maria Fox, Derek Long, Daniele Magazzeni | Explainable Planning | Presented at the IJCAI-17 workshop on Explainable AI
(http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/). Melbourne,
August 2017 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As AI is increasingly being adopted into application solutions, the challenge
of supporting interaction with humans is becoming more apparent. Partly this is
to support integrated working styles, in which humans and intelligent systems
cooperate in problem-solving, but also it is a necessary step in the process of
building trust as humans migrate greater responsibility to such systems. The
challenge is to find effective ways to communicate the foundations of AI-driven
behaviour, when the algorithms that drive it are far from transparent to
humans. In this paper we consider the opportunities that arise in AI planning,
exploiting the model-based representations that form a familiar and common
basis for communication with users, while acknowledging the gap between
planning algorithms and human problem-solving.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2017 07:05:38 GMT"
}
] | 1,506,902,400,000 | [
[
"Fox",
"Maria",
""
],
[
"Long",
"Derek",
""
],
[
"Magazzeni",
"Daniele",
""
]
] |
1709.10482 | Andrea Marrella | Andrea Marrella | What Automated Planning can do for Business Process Management | Preprint of a paper to be published in BPAI 2017, Workshop on BP
Innovation with Artificial Intelligence | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Business Process Management (BPM) is a central element of today's
organizations. Although its main focus over the years has been the support of
processes in highly controlled domains, nowadays many domains of interest to
the BPM community are characterized by ever-changing requirements,
unpredictable environments and increasing amounts of data that influence the
execution of process instances. Under such dynamic conditions, BPM systems must
increase their level of automation to provide the reactivity and flexibility
necessary for process management. On the other hand, the Artificial
Intelligence (AI) community has concentrated its efforts on investigating
dynamic domains that involve active control of computational entities and
physical devices (e.g., robots, software agents, etc.). In this context,
Automated Planning, which is one of the oldest areas in AI, is conceived as a
model-based approach to synthesize autonomous behaviours in an automated way from
a model. In this paper, we discuss how automated planning techniques can be
leveraged to enable new levels of automation and support for business
processing, and we show some concrete examples of their successful application
to the different stages of the BPM life cycle.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2017 16:18:18 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Oct 2017 15:19:29 GMT"
}
] | 1,508,716,800,000 | [
[
"Marrella",
"Andrea",
""
]
] |
1710.00336 | Xiangxiang Chu | Xiangxiang Chu, Hangjun Ye | Parameter Sharing Deep Deterministic Policy Gradient for Cooperative
Multi-agent Reinforcement Learning | 12 pages, 6 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep reinforcement learning for multi-agent cooperation and competition has
been a hot topic recently. This paper focuses on a cooperative multi-agent
problem based on actor-critic methods under local observation settings. Multi-agent
deep deterministic policy gradient obtained state-of-the-art results for some
multi-agent games; however, it cannot scale well with a growing number of agents.
In order to boost scalability, we propose a parameter sharing deterministic
policy gradient method with three variants based on neural networks, including
actor-critic sharing, actor sharing and actor sharing with partially shared
critic. Benchmarks from rllab show that the proposed method has advantages in
learning speed and memory efficiency, scales well with a growing number of
agents, and moreover can make full use of reward sharing and
exchangeability if possible.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2017 11:43:10 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2017 00:47:58 GMT"
}
] | 1,507,075,200,000 | [
[
"Chu",
"Xiangxiang",
""
],
[
"Ye",
"Hangjun",
""
]
] |
1710.00675 | Martin Chmel\'ik | Krishnendu Chatterjee, Martin Chmelik, Ufuk Topcu | Sensor Synthesis for POMDPs with Reachability Objectives | arXiv admin note: text overlap with arXiv:1511.08456 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partially observable Markov decision processes (POMDPs) are widely used in
probabilistic planning problems in which an agent interacts with an environment
using noisy and imprecise sensors. We study a setting in which the sensors are
only partially defined and the goal is to synthesize "weakest" additional
sensors, such that in the resulting POMDP, there is a small-memory policy for
the agent that almost-surely (with probability 1) satisfies a reachability
objective. We show that the problem is NP-complete, and present a symbolic
algorithm by encoding the problem into SAT instances. We illustrate trade-offs
between the amount of memory of the policy and the number of additional sensors
on a simple example. We have implemented our approach and consider three
classical POMDP examples from the literature, and show that in all the examples
the number of sensors can be significantly decreased (as compared to the
existing solutions in the literature) without increasing the complexity of the
policies.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2017 08:27:24 GMT"
}
] | 1,506,988,800,000 | [
[
"Chatterjee",
"Krishnendu",
""
],
[
"Chmelik",
"Martin",
""
],
[
"Topcu",
"Ufuk",
""
]
] |
1710.00794 | Derek Doran | Derek Doran, Sarah Schulz, Tarek R. Besold | What Does Explainable AI Really Mean? A New Conceptualization of
Perspectives | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We characterize three notions of explainable AI that cut across research
fields: opaque systems that offer no insight into their algorithmic mechanisms;
interpretable systems where users can mathematically analyze their algorithmic
mechanisms; and comprehensible systems that emit symbols enabling user-driven
explanations of how a conclusion is reached. The paper is motivated by a corpus
analysis of NIPS, ACL, COGSCI, and ICCV/ECCV paper titles showing differences
in how work on explainable AI is positioned in various fields. We close by
introducing a fourth notion: truly explainable systems, where automated
reasoning is central to producing crafted explanations without requiring human
post-processing as the final step of the generative process.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2017 17:09:38 GMT"
}
] | 1,506,988,800,000 | [
[
"Doran",
"Derek",
""
],
[
"Schulz",
"Sarah",
""
],
[
"Besold",
"Tarek R.",
""
]
] |
1710.01275 | Stefano Bromuri Dr | Stefano Bromuri and Albert Brugues de la Torre and Fabien Duboisson
and Michael Schumacher | Indexing the Event Calculus with Kd-trees to Monitor Diabetes | 24 pages, preliminary results calculated on an implementation of
CECKD, precursor to Journal paper being submitted in 2017, with further
indexing and results possibilities, put here for reference and chronological
purposes to remember how the idea evolved | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personal Health Systems (PHS) are mobile solutions tailored to monitoring
patients affected by chronic non communicable diseases. A patient affected by a
chronic disease can generate large amounts of events. Type 1 Diabetic patients
generate several glucose events per day, ranging from at least 6 events per day
(under normal monitoring) to 288 per day when wearing a continuous glucose
monitor (CGM) that samples the blood every 5 minutes for several days. This is
a large number of events to monitor for medical doctors, in particular when
considering that they may have to take decisions concerning adjusting the
treatment, which may impact the life of the patients for a long time. Given the
need to analyse such a large stream of data, doctors need a simple approach
towards physiological time series that allows them to promptly transfer their
knowledge into queries to identify interesting patterns in the data. Achieving
this with current technology is not an easy task, as on one hand it cannot be
expected that medical doctors have the technical knowledge to query databases
and on the other hand these time series include thousands of events, which
requires re-thinking the way data is indexed. In order to tackle the knowledge
representation and efficiency problem, this contribution presents the kd-tree
cached event calculus (\ceckd) an event calculus extension for knowledge
engineering of temporal rules capable of handling many thousands of events produced
by a diabetic patient. \ceckd\ is built as a support to a graphical interface
to represent monitoring rules for diabetes type 1. In addition, the paper
evaluates the \ceckd\ with respect to the cached event calculus (CEC) to show
how indexing events using kd-trees improves scalability with respect to the
current state of the art.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2017 17:01:54 GMT"
}
] | 1,507,075,200,000 | [
[
"Bromuri",
"Stefano",
""
],
[
"de la Torre",
"Albert Brugues",
""
],
[
"Duboisson",
"Fabien",
""
],
[
"Schumacher",
"Michael",
""
]
] |
1710.01823 | James O' Neill | C\'ecile Robin, James O'Neill, Paul Buitelaar | Automatic Taxonomy Generation - A Use-Case in the Legal Domain | 9 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key challenge in the legal domain is the adaptation and representation of
the legal knowledge expressed through texts, in order for legal practitioners
and researchers to access this information more easily and quickly to help with
compliance related issues. One way to approach this goal is in the form of a
taxonomy of legal concepts. While this task usually requires a manual
construction of terms and their relations by domain experts, this paper
describes a methodology to automatically generate a taxonomy of legal noun
concepts. We apply and compare two approaches on a corpus consisting of
statutory instruments for UK, Wales, Scotland and Northern Ireland laws.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2017 23:00:08 GMT"
}
] | 1,507,248,000,000 | [
[
"Robin",
"Cécile",
""
],
[
"O'Neill",
"James",
""
],
[
"Buitelaar",
"Paul",
""
]
] |
1710.02210 | Suraj Narayanan Sasikumar | Suraj Narayanan Sasikumar | Exploration in Feature Space for Reinforcement Learning | Masters thesis. Australian National University, May 2017. 65 pp | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The infamous exploration-exploitation dilemma is one of the oldest and most
important problems in reinforcement learning (RL). Deliberate and effective
exploration is necessary for RL agents to succeed in most environments.
However, until very recently even very sophisticated RL algorithms employed
simple, undirected exploration strategies in large-scale RL tasks.
We introduce a new optimistic count-based exploration algorithm for RL that
is feasible in high-dimensional MDPs. The success of RL algorithms in these
domains depends crucially on generalization from limited training experience.
Function approximation techniques enable RL agents to generalize in order to
estimate the value of unvisited states, but at present few methods have
achieved generalization about the agent's uncertainty regarding unvisited
states. We present a new method for computing a generalized state visit-count,
which allows the agent to estimate the uncertainty associated with any state.
In contrast to existing exploration techniques, our
$\phi$-$\textit{pseudocount}$ achieves generalization by exploiting the feature
representation of the state space that is used for value function
approximation. States that have less frequently observed features are deemed
more uncertain. The resulting $\phi$-$\textit{Exploration-Bonus}$ algorithm
rewards the agent for exploring in feature space rather than in the original
state space. This method is simpler and less computationally expensive than
some previous proposals, and achieves near state-of-the-art results on
high-dimensional RL benchmarks. In particular, we report world-class results on
several notoriously difficult Atari 2600 video games, including Montezuma's
Revenge.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2017 20:46:47 GMT"
}
] | 1,507,507,200,000 | [
[
"Sasikumar",
"Suraj Narayanan",
""
]
] |
1710.02511 | Hao Li | Hao Li and Zhijian Liu | Performance Prediction and Optimization of Solar Water Heater via a
Knowledge-Based Machine Learning Method | 20 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Measuring the performance of solar energy and heat transfer systems requires
a lot of time, economic cost and manpower. Meanwhile, directly predicting their
performance is challenging due to the complicated internal structures.
Fortunately, a knowledge-based machine learning method can provide a promising
prediction and optimization strategy for the performance of energy systems. In
this Chapter, the authors will show how they utilize the machine learning
models trained from a large experimental database to perform precise prediction
and optimization on a solar water heater (SWH) system. A new energy system
optimization strategy based on a high-throughput screening (HTS) process is
proposed. This Chapter consists of: i) Comparative studies on varieties of
machine learning models (artificial neural networks (ANNs), support vector
machine (SVM) and extreme learning machine (ELM)) to predict the performances
of SWHs; ii) Development of an ANN-based software to assist the quick
prediction and iii) Introduction of a computational HTS method to design a
high-performance SWH system.
| [
{
"version": "v1",
"created": "Fri, 6 Oct 2017 17:39:32 GMT"
}
] | 1,507,507,200,000 | [
[
"Li",
"Hao",
""
],
[
"Liu",
"Zhijian",
""
]
] |
1710.02648 | Yujian Li | Yujian Li | Can Machines Think in Radio Language? | 4 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People can think in auditory, visual and tactile forms of language, and so, in
principle, can machines. But is it possible for them to think in radio language?
According to a first principle presented for general intelligence, i.e. the
principle of language's relativity, the answer may give an exceptional solution
for robot astronauts to talk with each other in space exploration.
| [
{
"version": "v1",
"created": "Sat, 7 Oct 2017 08:03:58 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Oct 2017 08:49:37 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Dec 2017 12:39:53 GMT"
}
] | 1,513,641,600,000 | [
[
"Li",
"Yujian",
""
]
] |
1710.02714 | Qiaozi Gao | Qiaozi Gao, Lanbo She, and Joyce Y. Chai | Interactive Learning of State Representation through Natural Language
Instruction and Explanation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One significant simplification in most previous work on robot learning is the
closed-world assumption where the robot is assumed to know ahead of time a
complete set of predicates describing the state of the physical world. However,
robots are not likely to have a complete model of the world especially when
learning a new task. To address this problem, this extended abstract gives a
brief introduction to our on-going work that aims to enable the robot to
acquire new state representations through language communication with humans.
| [
{
"version": "v1",
"created": "Sat, 7 Oct 2017 17:45:14 GMT"
}
] | 1,507,593,600,000 | [
[
"Gao",
"Qiaozi",
""
],
[
"She",
"Lanbo",
""
],
[
"Chai",
"Joyce Y.",
""
]
] |
1710.03131 | Huikai Wu | Huikai Wu, Yanqi Zong, Junge Zhang, Kaiqi Huang | MSC: A Dataset for Macro-Management in StarCraft II | Homepage: https://github.com/wuhuikai/MSC | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Macro-management is an important problem in StarCraft, which has been studied
for a long time. Various datasets together with assorted methods have been
proposed in the last few years. But these datasets have some defects for
boosting academic and industrial research: 1) There are neither standard
preprocessing, parsing and feature extraction procedures nor predefined
training, validation and test set in some datasets. 2) Some datasets are only
specified for certain tasks in macro-management. 3) Some datasets are either
too small or don't have enough labeled data for modern machine learning
algorithms such as deep neural networks. So most previous methods are trained
with various features, evaluated on different test sets from the same or
different datasets, making them difficult to compare directly. To boost the
research of macro-management in StarCraft, we release a new dataset MSC based
on the platform SC2LE. MSC consists of well-designed feature vectors,
pre-defined high-level actions and final result of each match. We also split
MSC into training, validation and test set for the convenience of evaluation
and comparison. Besides the dataset, we propose a baseline model and present
initial baseline results for global state evaluation and build order
prediction, which are two of the key tasks in macro-management. Various
downstream tasks and analyses of the dataset are also described for the sake of
research on macro-management in StarCraft II. Homepage:
https://github.com/wuhuikai/MSC.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2017 14:59:11 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Feb 2019 12:06:34 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Apr 2023 11:56:53 GMT"
}
] | 1,680,566,400,000 | [
[
"Wu",
"Huikai",
""
],
[
"Zong",
"Yanqi",
""
],
[
"Zhang",
"Junge",
""
],
[
"Huang",
"Kaiqi",
""
]
] |
1710.03392 | EPTCS | Marco Bozzano | Causality and Temporal Dependencies in the Design of Fault Management
Systems | In Proceedings CREST 2017, arXiv:1710.02770 | EPTCS 259, 2017, pp. 39-46 | 10.4204/EPTCS.259.4 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning about causes and effects naturally arises in the engineering of
safety-critical systems. A classical example is Fault Tree Analysis, a
deductive technique used for system safety assessment, whereby an undesired
state is reduced to the set of its immediate causes. The design of fault
management systems also requires reasoning on causality relationships. In
particular, a fail-operational system needs to ensure timely detection and
identification of faults, i.e. recognize the occurrence of run-time faults
through their observable effects on the system. Even more complex scenarios
arise when multiple faults are involved and may interact in subtle ways.
In this work, we propose a formal approach to fault management for complex
systems. We first introduce the notions of fault tree and minimal cut sets. We
then present a formal framework for the specification and analysis of
diagnosability, and for the design of fault detection and identification (FDI)
components. Finally, we review recent advances in fault propagation analysis,
based on the Timed Failure Propagation Graphs (TFPG) formalism.
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2017 03:51:47 GMT"
}
] | 1,507,680,000,000 | [
[
"Bozzano",
"Marco",
""
]
] |
1710.03592 | Kun Li | Kun Li, Joel W. Burdick | Meta Inverse Reinforcement Learning via Maximum Reward Sharing for Human
Motion Analysis | arXiv admin note: text overlap with arXiv:1707.09394 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work handles the inverse reinforcement learning (IRL) problem where only
a small number of demonstrations are available from a demonstrator for each
high-dimensional task, insufficient to estimate an accurate reward function.
Observing that each demonstrator has an inherent reward for each state and the
task-specific behaviors mainly depend on a small number of key states, we
propose a meta IRL algorithm that first models the reward function for each
task as a distribution conditioned on a baseline reward function shared by all
tasks and dependent only on the demonstrator, and then finds the most likely
reward function in the distribution that explains the task-specific behaviors.
We test the method in a simulated environment on path planning tasks with
limited demonstrations, and show that the accuracy of the learned reward
function is significantly improved. We also apply the method to analyze the
motion of a patient under rehabilitation.
| [
{
"version": "v1",
"created": "Sat, 7 Oct 2017 20:22:32 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Oct 2017 20:42:35 GMT"
}
] | 1,508,112,000,000 | [
[
"Li",
"Kun",
""
],
[
"Burdick",
"Joel W.",
""
]
] |
1710.03748 | Trapit Bansal | Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, Igor
Mordatch | Emergent Complexity via Multi-Agent Competition | Published as a conference paper at ICLR 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning algorithms can train agents that solve problems in
complex, interesting environments. Normally, the complexity of the trained
agent is closely related to the complexity of the environment. This suggests
that a highly capable agent requires a complex environment for training. In
this paper, we point out that a competitive multi-agent environment trained
with self-play can produce behaviors that are far more complex than the
environment itself. We also point out that such environments come with a
natural curriculum, because for any skill level, an environment full of agents
of this level will have the right level of difficulty. This work introduces
several competitive multi-agent environments where agents compete in a 3D world
with simulated physics. The trained agents learn a wide variety of complex and
interesting skills, even though the environments themselves are relatively
simple. The skills include behaviors such as running, blocking, ducking,
tackling, fooling opponents, kicking, and defending using both arms and legs. A
highlight of the learned behaviors can be found here: https://goo.gl/eR7fbX
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2017 17:59:41 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Oct 2017 21:49:55 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Mar 2018 21:09:49 GMT"
}
] | 1,521,158,400,000 | [
[
"Bansal",
"Trapit",
""
],
[
"Pachocki",
"Jakub",
""
],
[
"Sidor",
"Szymon",
""
],
[
"Sutskever",
"Ilya",
""
],
[
"Mordatch",
"Igor",
""
]
] |
1710.03792 | Hongjia Li | Hongjia Li, Tianshu Wei, Ao Ren, Qi Zhu, Yanzhi Wang | Deep Reinforcement Learning: Framework, Applications, and Embedded
Implementations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent breakthroughs of deep reinforcement learning (DRL) technique in
Alpha Go and playing Atari have set a good example in handling large state and
actions spaces of complicated control problems. The DRL technique is comprised
of (i) an offline deep neural network (DNN) construction phase, which derives
the correlation between each state-action pair of the system and its value
function, and (ii) an online deep Q-learning phase, which adaptively derives
the optimal action and updates value estimates. In this paper, we first present
the general DRL framework, which can be widely utilized in many applications
with different optimization objectives. This is followed by the introduction of
three specific applications: the cloud computing resource allocation problem,
the residential smart grid task scheduling problem, and building HVAC system
optimal control problem. The effectiveness of the DRL technique in these three
cyber-physical applications has been validated. Finally, this paper
investigates the stochastic computing-based hardware implementations of the DRL
framework, which achieves a significant improvement in area efficiency and
power consumption compared with binary-based implementation counterparts.
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2017 19:22:50 GMT"
}
] | 1,507,766,400,000 | [
[
"Li",
"Hongjia",
""
],
[
"Wei",
"Tianshu",
""
],
[
"Ren",
"Ao",
""
],
[
"Zhu",
"Qi",
""
],
[
"Wang",
"Yanzhi",
""
]
] |
1710.04157 | Jacob Devlin | Jacob Devlin, Rudy Bunel, Rishabh Singh, Matthew Hausknecht, Pushmeet
Kohli | Neural Program Meta-Induction | 8 Pages + 1 page appendix | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most recently proposed methods for Neural Program Induction work under the
assumption of having a large set of input/output (I/O) examples for learning
any underlying input-output mapping. This paper aims to address the problem of
data and computation efficiency of program induction by leveraging information
from related tasks. Specifically, we propose two approaches for cross-task
knowledge transfer to improve program induction in limited-data scenarios. In
our first proposal, portfolio adaptation, a set of induction models is
pretrained on a set of related tasks, and the best model is adapted towards the
new task using transfer learning. In our second approach, meta program
induction, a $k$-shot learning approach is used to make a model generalize to
new tasks without additional training. To test the efficacy of our methods, we
constructed a new benchmark of programs written in the Karel programming
language. Using an extensive experimental evaluation on the Karel benchmark, we
demonstrate that our proposals dramatically outperform the baseline induction
method that does not use knowledge transfer. We also analyze the relative
performance of the two approaches and study conditions in which they perform
best. In particular, meta induction outperforms all existing approaches under
extreme data sparsity (when a very small number of examples are available),
i.e., fewer than ten. As the number of available I/O examples increases (i.e. a
thousand or more), portfolio adapted program induction becomes the best
approach. For intermediate data sizes, we demonstrate that the combined method
of adapted meta program induction has the strongest performance.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2017 16:29:38 GMT"
}
] | 1,507,766,400,000 | [
[
"Devlin",
"Jacob",
""
],
[
"Bunel",
"Rudy",
""
],
[
"Singh",
"Rishabh",
""
],
[
"Hausknecht",
"Matthew",
""
],
[
"Kohli",
"Pushmeet",
""
]
] |
1710.04161 | Naveen Sundar Govindarajulu | Naveen Sundar Govindarajulu and Selmer Bringsjord | Counterfactual Conditionals in Quantified Modal Logic | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel formalization of counterfactual conditionals in a
quantified modal logic. Counterfactual conditionals play a vital role in
ethical and moral reasoning. Prior work has shown that moral reasoning systems
(and more generally, theory-of-mind reasoning systems) should be at least as
expressive as first-order (quantified) modal logic (QML) to be well-behaved.
While existing work on moral reasoning has focused on counterfactual-free QML
moral reasoning, we present a fully specified and implemented formal system
that includes counterfactual conditionals. We validate our model with two
projects. In the first project, we demonstrate that our system can be used to
model a complex moral principle, the doctrine of double effect. In the second
project, we use the system to build a data-set with true and false
counterfactuals as licensed by our theory, which we believe can be useful for
other researchers. This project also shows that our model can be
computationally feasible.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2017 16:32:30 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Nov 2017 23:04:57 GMT"
}
] | 1,509,926,400,000 | [
[
"Govindarajulu",
"Naveen Sundar",
""
],
[
"Bringsjord",
"Selmer",
""
]
] |
1710.04324 | Md Kamruzzaman Sarker | Md Kamruzzaman Sarker, Ning Xie, Derek Doran, Michael Raymer, Pascal
Hitzler | Explaining Trained Neural Networks with Semantic Web Technologies: First
Steps | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ever increasing prevalence of publicly available structured data on the
World Wide Web enables new applications in a variety of domains. In this paper,
we provide a conceptual approach that leverages such data in order to explain
the input-output behavior of trained artificial neural networks. We apply
existing Semantic Web technologies in order to provide an experimental proof of
concept.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2017 22:32:51 GMT"
}
] | 1,507,852,800,000 | [
[
"Sarker",
"Md Kamruzzaman",
""
],
[
"Xie",
"Ning",
""
],
[
"Doran",
"Derek",
""
],
[
"Raymer",
"Michael",
""
],
[
"Hitzler",
"Pascal",
""
]
] |
1710.04805 | Santiago Ontanon | Santiago Onta\~n\'on | Combinatorial Multi-armed Bandits for Real-Time Strategy Games | null | (2017) Journal of Artificial Intelligence Research (JAIR). Volume
58, pp 665-702 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Games with large branching factors pose a significant challenge for game tree
search algorithms. In this paper, we address this problem with a sampling
strategy for Monte Carlo Tree Search (MCTS) algorithms called {\em na\"{i}ve
sampling}, based on a variant of the Multi-armed Bandit problem called {\em
Combinatorial Multi-armed Bandits} (CMAB). We analyze the theoretical
properties of several variants of {\em na\"{i}ve sampling}, and empirically
compare it against the other existing strategies in the literature for CMABs.
We then evaluate these strategies in the context of real-time strategy (RTS)
games, a genre of computer games characterized by their very large branching
factors. Our results show that as the branching factor grows, {\em na\"{i}ve
sampling} outperforms the other sampling strategies.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2017 05:08:14 GMT"
}
] | 1,508,112,000,000 | [
[
"Ontañón",
"Santiago",
""
]
] |
1710.05060 | Nate Soares | Eliezer Yudkowsky and Nate Soares | Functional Decision Theory: A New Theory of Instrumental Rationality | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes and motivates a new decision theory known as functional
decision theory (FDT), as distinct from causal decision theory and evidential
decision theory. Functional decision theorists hold that the normative
principle for action is to treat one's decision as the output of a fixed
mathematical function that answers the question, "Which output of this very
function would yield the best outcome?" Adhering to this principle delivers a
number of benefits, including the ability to maximize wealth in an array of
traditional decision-theoretic and game-theoretic problems where CDT and EDT
perform poorly. Using one simple and coherent decision rule, functional
decision theorists (for example) achieve more utility than CDT on Newcomb's
problem, more utility than EDT on the smoking lesion problem, and more utility
than both in Parfit's hitchhiker problem. In this paper, we define FDT, explore
its prescriptions in a number of different decision problems, compare it to CDT
and EDT, and give philosophical justifications for FDT as a normative theory of
decision-making.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2017 19:51:38 GMT"
},
{
"version": "v2",
"created": "Tue, 22 May 2018 21:07:53 GMT"
}
] | 1,527,120,000,000 | [
[
"Yudkowsky",
"Eliezer",
""
],
[
"Soares",
"Nate",
""
]
] |
1710.05207 | Ivan Brugere | Ivan Brugere and Tanya Y. Berger-Wolf | Network Model Selection Using Task-Focused Minimum Description Length | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Networks are fundamental models for data used in practically every
application domain. In most instances, several implicit or explicit choices
about the network definition impact the translation of underlying data to a
network representation, and the subsequent question(s) about the underlying
system being represented. Users of downstream network data may not even be
aware of these choices or their impacts. We propose a task-focused network
model selection methodology which addresses several key challenges. Our
approach constructs network models from underlying data and uses minimum
description length (MDL) criteria for selection. Our methodology measures
efficiency, a general and comparable measure of the network's performance of a
local (i.e. node-level) predictive task of interest. Selection on efficiency
favors parsimonious (e.g. sparse) models to avoid overfitting and can be
applied across arbitrary tasks and representations. We show stability,
sensitivity, and significance testing in our methodology.
| [
{
"version": "v1",
"created": "Sat, 14 Oct 2017 16:27:51 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Jan 2018 02:26:28 GMT"
}
] | 1,515,715,200,000 | [
[
"Brugere",
"Ivan",
""
],
[
"Berger-Wolf",
"Tanya Y.",
""
]
] |
1710.05426 | Tong Wang | Tong Wang and Cynthia Rudin | Causal Rule Sets for Identifying Subgroups with Enhanced Treatment
Effect | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key question in causal inference analyses is how to find subgroups with
elevated treatment effects. This paper takes a machine learning approach and
introduces a generative model, Causal Rule Sets (CRS), for interpretable
subgroup discovery. A CRS model uses a small set of short decision rules to
capture a subgroup where the average treatment effect is elevated. We present a
Bayesian framework for learning a causal rule set. The Bayesian model consists
of a prior that favors simple models for better interpretability as well as
avoiding overfitting, and a Bayesian logistic regression that captures the
likelihood of data, characterizing the relation between outcomes, attributes,
and subgroup membership. The Bayesian model has tunable parameters that can
characterize subgroups with various sizes, providing users with more flexible
choices of models from the \emph{treatment efficient frontier}. We find maximum
a posteriori models using iterative discrete Monte Carlo steps in the joint
solution space of rule sets and parameters. To improve search efficiency, we
provide theoretically grounded heuristics and bounding strategies to prune and
confine the search space. Experiments show that the search algorithm can
efficiently recover true underlying subgroups. We apply CRS on public and
real-world datasets from domains where interpretability is indispensable. We
compare CRS with state-of-the-art rule-based subgroup discovery models. Results
show that CRS achieved consistently competitive performance on datasets from
various domains, represented by high treatment efficient frontiers.
| [
{
"version": "v1",
"created": "Mon, 16 Oct 2017 00:30:43 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Jul 2018 13:43:28 GMT"
},
{
"version": "v3",
"created": "Thu, 20 May 2021 04:30:54 GMT"
}
] | 1,621,555,200,000 | [
[
"Wang",
"Tong",
""
],
[
"Rudin",
"Cynthia",
""
]
] |
1710.05627 | Wei Gao | Wei Gao and David Hsu and Wee Sun Lee and Shengmei Shen and Karthikk
Subramanian | Intention-Net: Integrating Planning and Deep Learning for Goal-Directed
Autonomous Navigation | Published in 1st Annual Conference on Robot Learning (CoRL 2017) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can a delivery robot navigate reliably to a destination in a new office
building, with minimal prior information? To tackle this challenge, this paper
introduces a two-level hierarchical approach, which integrates model-free deep
learning and model-based path planning. At the low level, a neural-network
motion controller, called the intention-net, is trained end-to-end to provide
robust local navigation. The intention-net maps images from a single monocular
camera and "intentions" directly to robot controls. At the high level, a path
planner uses a crude map, e.g., a 2-D floor plan, to compute a path from the
robot's current location to the goal. The planned path provides intentions to
the intention-net. Preliminary experiments suggest that the learned motion
controller is robust against perceptual uncertainty and by integrating with a
path planner, it generalizes effectively to new environments and goals.
| [
{
"version": "v1",
"created": "Mon, 16 Oct 2017 11:22:32 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Oct 2017 02:24:06 GMT"
}
] | 1,508,284,800,000 | [
[
"Gao",
"Wei",
""
],
[
"Hsu",
"David",
""
],
[
"Lee",
"Wee Sun",
""
],
[
"Shen",
"Shengmei",
""
],
[
"Subramanian",
"Karthikk",
""
]
] |
1710.05733 | Sobhan Moosavi | Sobhan Moosavi, Behrooz Omidvar-Tehrani, R. Bruce Craig, Arnab Nandi,
Rajiv Ramnath | Characterizing Driving Context from Driver Behavior | Accepted to be published at The 25th ACM SIGSPATIAL International
Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL
2017) | null | 10.1145/3139958.3139992 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Because of the increasing availability of spatiotemporal data, a variety of
data-analytic applications have become possible. Characterizing driving
context, where context may be thought of as a combination of location and time,
is a new challenging application. An example of such a characterization is
finding the correlation between driving behavior and traffic conditions. This
contextual information enables analysts to validate observation-based
hypotheses about the driving of an individual. In this paper, we present
DriveContext, a novel framework to find the characteristics of a context, by
extracting significant driving patterns (e.g., a slow-down), and then
identifying the set of potential causes behind patterns (e.g., traffic
congestion). Our experimental results confirm the feasibility of the framework
in identifying meaningful driving patterns, with improvements in comparison
with the state-of-the-art. We also demonstrate how the framework derives
interesting characteristics for different contexts, through real-world
examples.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2017 17:34:11 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Nov 2017 23:42:05 GMT"
}
] | 1,511,222,400,000 | [
[
"Moosavi",
"Sobhan",
""
],
[
"Omidvar-Tehrani",
"Behrooz",
""
],
[
"Craig",
"R. Bruce",
""
],
[
"Nandi",
"Arnab",
""
],
[
"Ramnath",
"Rajiv",
""
]
] |
1710.07075 | Spyros Gkezerlis | Spyros Gkezerlis and Dimitris Kalles | Decision Trees for Helpdesk Advisor Graphs | null | Bulletin of the Technical Committee on Learning Technology, Volume
18, Issue 2-3, April 2016 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use decision trees to build a helpdesk agent reference network to
facilitate the on-the-job advising of junior or less experienced staff on how
to better address telecommunication customer fault reports. Such reports
generate field measurements and remote measurements which, when coupled with
location data and client attributes, and fused with organization-level
statistics, can produce models of how support should be provided. Beyond
decision support, these models can help identify staff who can act as advisors,
based on the quality, consistency and predictability of dealing with complex
troubleshooting reports. Advisor staff models are then used to guide less
experienced staff in their decision making; thus, we advocate the deployment of
a simple mechanism which exploits the availability of staff with a sound track
record at the helpdesk to act as dormant tutors.
| [
{
"version": "v1",
"created": "Thu, 19 Oct 2017 10:48:52 GMT"
}
] | 1,508,457,600,000 | [
[
"Gkezerlis",
"Spyros",
""
],
[
"Kalles",
"Dimitris",
""
]
] |
1710.07214 | Georgios Feretzakis | Georgios Feretzakis, Dimitris Kalles and Vassilios S. Verykios | On Using Linear Diophantine Equations to Tune the extent of Look Ahead
while Hiding Decision Tree Rules | 10 pages, 5 figures. arXiv admin note: substantial text overlap with
arXiv:1706.05733 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on preserving the privacy of sensitive patterns when
inducing decision trees. We adopt a record augmentation approach for hiding
sensitive classification rules in binary datasets. Such a hiding methodology is
preferred over other heuristic solutions like output perturbation or
cryptographic techniques - which restrict the usability of the data - since
the raw data itself is readily available for public use. In this paper, we
propose a look ahead approach using linear Diophantine equations in order to
add the appropriate number of instances while minimally disturbing the initial
entropy of the nodes.
| [
{
"version": "v1",
"created": "Wed, 18 Oct 2017 04:12:59 GMT"
}
] | 1,508,457,600,000 | [
[
"Feretzakis",
"Georgios",
""
],
[
"Kalles",
"Dimitris",
""
],
[
"Verykios",
"Vassilios S.",
""
]
] |
1710.07360 | Matias Alvarado Dr | Mat\'ias Alvarado, Arturo Yee, Carlos Villarreal | Go game formal revealing by Ising model | 19 pages, 9 figures some of them composition of 2 - 5 small ones. 42
references | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Go gaming is a struggle for territory control between rival, black and white,
stones on a board. We model the Go dynamics in a game by means of the Ising
model whose interaction coefficients reflect essential rules and tactics
employed in Go to build long-term strategies. At any step of the game, the
energy functional of the model provides the control degree (strength) of a
player over the board. A close fit between the model's predictions and actual
games is obtained.
| [
{
"version": "v1",
"created": "Thu, 19 Oct 2017 21:36:09 GMT"
}
] | 1,508,716,800,000 | [
[
"Alvarado",
"Matías",
""
],
[
"Yee",
"Arturo",
""
],
[
"Villarreal",
"Carlos",
""
]
] |
1710.07983 | Weichao Zhou | Weichao Zhou, Wenchao Li | Safety-Aware Apprenticeship Learning | Accepted by International Conference on Computer Aided Verification
(CAV) 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Apprenticeship learning (AL) is a kind of Learning from Demonstration
technique in which the reward function of a Markov Decision Process (MDP) is
unknown to the learning agent and the agent has to derive a good policy by
observing an expert's demonstrations. In this paper, we study the problem of
how to make AL algorithms inherently safe while still meeting their learning
objective. We consider a setting where the unknown reward function is assumed
to be a linear combination of a set of state features, and the safety property
is specified in Probabilistic Computation Tree Logic (PCTL). By embedding
probabilistic model checking inside AL, we propose a novel
counterexample-guided approach that can ensure safety while retaining
performance of the learnt policy. We demonstrate the effectiveness of our
approach on several challenging AL scenarios where safety is essential.
| [
{
"version": "v1",
"created": "Sun, 22 Oct 2017 17:29:16 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Dec 2017 20:48:50 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Feb 2018 18:58:32 GMT"
},
{
"version": "v4",
"created": "Sat, 28 Apr 2018 14:25:44 GMT"
}
] | 1,525,132,800,000 | [
[
"Zhou",
"Weichao",
""
],
[
"Li",
"Wenchao",
""
]
] |
1710.08191 | Fabio Massimo Zanzotto | Fabio Massimo Zanzotto | Human-in-the-loop Artificial Intelligence | null | Journal of Artificial Intelligence Research, 2019 | 10.1613/jair.1.11345 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Little by little, newspapers are revealing the bright future that Artificial
Intelligence (AI) is building. Intelligent machines will help everywhere.
However, this bright future has a dark side: a dramatic job market contraction
before its unpredictable transformation. Hence, in a near future, large numbers
of job seekers will need financial support while catching up with these novel
unpredictable jobs. This possible job market crisis has an antidote inside. In
fact, the rise of AI is sustained by the biggest knowledge theft of recent
years. Learning AI machines are extracting knowledge from unaware skilled or
unskilled workers by analyzing their interactions. By passionately doing their
jobs, these workers are digging their own graves.
In this paper, we propose Human-in-the-loop Artificial Intelligence (HIT-AI)
as a fairer paradigm for Artificial Intelligence systems. HIT-AI will reward
aware and unaware knowledge producers with a different scheme: decisions of AI
systems generating revenues will repay the legitimate owners of the knowledge
used for taking those decisions. As modern Robin Hoods, HIT-AI researchers
should fight for a fairer Artificial Intelligence that gives back what it
steals.
| [
{
"version": "v1",
"created": "Mon, 23 Oct 2017 10:37:50 GMT"
}
] | 1,555,459,200,000 | [
[
"Zanzotto",
"Fabio Massimo",
""
]
] |
1710.09788 | Alessandro Checco | Alessandro Checco, Gianluca Demartini, Alexander Loeser, Ines Arous,
Mourad Khayati, Matthias Dantone, Richard Koopmanschap, Svetlin Stalinov,
Martin Kersten, Ying Zhang | FashionBrain Project: A Vision for Understanding Europe's Fashion Data
Universe | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A core business in the fashion industry is the understanding and prediction
of customer needs and trends. Search engines and social networks are at the
same time a fundamental bridge and a costly middleman between the customer's
purchase intention and the retailer. To better exploit Europe's distinctive
characteristics, e.g., multiple languages, fashion and cultural differences, it
is pivotal to reduce retailers' dependence on search engines. This goal can be
achieved by harnessing various data channels (manufacturers and distribution
networks, online shops, large retailers, social media, market observers, call
centers, press/magazines etc.) that retailers can leverage in order to gain
more insight about potential buyers, and on the industry trends as a whole.
This can enable the creation of novel on-line shopping experiences, the
detection of influencers, and the prediction of upcoming fashion trends.
In this paper, we provide an overview of the main research challenges and an
analysis of the most promising technological solutions that we are
investigating in the FashionBrain project.
| [
{
"version": "v1",
"created": "Thu, 26 Oct 2017 16:18:31 GMT"
}
] | 1,509,062,400,000 | [
[
"Checco",
"Alessandro",
""
],
[
"Demartini",
"Gianluca",
""
],
[
"Loeser",
"Alexander",
""
],
[
"Arous",
"Ines",
""
],
[
"Khayati",
"Mourad",
""
],
[
"Dantone",
"Matthias",
""
],
[
"Koopmanschap",
"Richard",
""
],
[
"Stalinov",
"Svetlin",
""
],
[
"Kersten",
"Martin",
""
],
[
"Zhang",
"Ying",
""
]
] |
1710.09952 | Renato Fabbri | Renato Fabbri | Enhancements of linked data expressiveness for ontologies | null | Anais do XX ENMC - Encontro Nacional de Modelagem Computacional e
VIII ECTM - Encontro de Ci\^encias e Tecnologia de Materiais, Nova Friburgo,
RJ - 16 a 19 Outubro 2017 | null | ISSN 2527-2357, ISBN 978-85-5676-019-7 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The semantic web has received many contributions from researchers in the form of ontologies
which, in this context, i.e. within RDF linked data, are formalized
conceptualizations that might use different protocols, such as RDFS, OWL DL and
OWL FULL. In this article, we describe new expressive techniques which were
found necessary after elaborating dozens of OWL ontologies for the scientific
academy, the State and the civil society. They consist in: 1) stating possible
uses a property might have without incurring into axioms or restrictions; 2)
assigning a level of priority for an element (class, property, triple); 3)
correct depiction in diagrams of relations between classes, between individuals
which are imperative, and between individuals which are optional; 4) a
convenient association between OWL classes and SKOS concepts. We propose
specific rules to accomplish these enhancements and exemplify both their use and
the difficulties that arise because these techniques are currently not
established as standards for the ontology designer.
| [
{
"version": "v1",
"created": "Fri, 27 Oct 2017 00:16:04 GMT"
}
] | 1,509,321,600,000 | [
[
"Fabbri",
"Renato",
""
]
] |
1710.10093 | Alejandro Ramos Soto | A. Ramos-Soto and M. Pereira-Fari\~na | On modeling vagueness and uncertainty in data-to-text systems through
fuzzy sets | 31 pages including references (in a review-friendly format), 4
figures, 1 table | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vagueness and uncertainty management is counted among one of the challenges
that remain unresolved in systems that generate texts from non-linguistic data,
known as data-to-text systems. In the last decade, work in fuzzy linguistic
summarization and description of data has raised the interest of using fuzzy
sets to model and manage the imprecision of human language in data-to-text
systems. However, despite some research in this direction, there has not been
an actual clear discussion and justification on how fuzzy sets can contribute
to data-to-text for modeling vagueness and uncertainty in words and
expressions. This paper intends to bridge this gap by answering the following
questions: What does vagueness mean in fuzzy sets theory? What does vagueness
mean in data-to-text contexts? In what ways can fuzzy sets theory contribute to
improve data-to-text systems? What are the challenges that researchers from
both disciplines need to address for a successful integration of fuzzy sets
into data-to-text systems? In what cases should the use of fuzzy sets be
avoided in D2T? For this, we review and discuss the state of the art of
vagueness modeling in natural language generation and data-to-text, describe
potential and actual usages of fuzzy sets in data-to-text contexts, and provide
some additional insights about the engineering of data-to-text systems that
make use of fuzzy set-based techniques.
| [
{
"version": "v1",
"created": "Fri, 27 Oct 2017 11:56:08 GMT"
}
] | 1,509,321,600,000 | [
[
"Ramos-Soto",
"A.",
""
],
[
"Pereira-Fariña",
"M.",
""
]
] |
1710.10098 | Vincent Mousseau | K. Belahc\`ene, C. Labreuche, N. Maudet, V. Mousseau, W. Ouerdane | An efficient SAT formulation for learning multiple criteria
non-compensatory sorting rules from examples | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The literature on Multiple Criteria Decision Analysis (MCDA) proposes several
methods in order to sort alternatives evaluated on several attributes into
ordered classes. Non Compensatory Sorting models (NCS) assign alternatives to
classes based on the way they compare to multicriteria profiles separating the
consecutive classes. Previous works have proposed approaches to learn the
parameters of an NCS model based on a learning set. Exact approaches based on
mixed integer linear programming ensure that the learning set is best
restored, but can only handle datasets of limited size. Heuristic approaches
can handle large learning sets, but do not provide any guarantee about the
inferred model. In this paper, we propose an alternative formulation to learn an
NCS model. This formulation, based on a SAT problem, guarantees finding a model
fully consistent with the learning set (whenever it exists), and is
computationally much more efficient than existing exact MIP approaches.
| [
{
"version": "v1",
"created": "Fri, 27 Oct 2017 12:07:55 GMT"
}
] | 1,509,321,600,000 | [
[
"Belahcène",
"K.",
""
],
[
"Labreuche",
"C.",
""
],
[
"Maudet",
"N.",
""
],
[
"Mousseau",
"V.",
""
],
[
"Ouerdane",
"W.",
""
]
] |
1710.10164 | Fulvio Mastrogiovanni | Luca Buoncompagni, Barbara Bruno, Antonella Giuni, Fulvio
Mastrogiovanni, Renato Zaccaria | Towards a new paradigm for assistive technology at home: research
challenges, design issues and performance assessment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Providing the elderly and people with special needs, including those suffering
from physical disabilities and chronic diseases, with the possibility of
retaining their independence as much as possible is one of the most important challenges
our society is expected to face. Assistance models based on the home care
paradigm are being adopted rapidly in almost all industrialized and emerging
countries. Such paradigms hypothesize that it is necessary to ensure that the
so-called Activities of Daily Living are correctly and regularly performed by
the assisted person to increase the perception of an improved quality of life.
This chapter describes the computational inference engine at the core of
Arianna, a system able to understand whether an assisted person performs a
given set of ADL and to motivate him/her in performing them through a
speech-mediated motivational dialogue, using a set of nearables to be installed
in an apartment, plus a wearable to be worn or fit in garments.
| [
{
"version": "v1",
"created": "Fri, 27 Oct 2017 14:36:44 GMT"
}
] | 1,509,321,600,000 | [
[
"Buoncompagni",
"Luca",
""
],
[
"Bruno",
"Barbara",
""
],
[
"Giuni",
"Antonella",
""
],
[
"Mastrogiovanni",
"Fulvio",
""
],
[
"Zaccaria",
"Renato",
""
]
] |
1710.10538 | Ramanathan Guha | Ramanathan V. Guha | Partial Knowledge In Embeddings | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Representing domain knowledge is crucial for any task. There has been a wide
range of techniques developed to represent this knowledge, from older
logic-based approaches to the more recent deep-learning-based techniques (i.e.
embeddings). In this paper, we discuss some of these methods, focusing on the
representational expressiveness tradeoffs that are often made. In particular,
we focus on the ability of various techniques to encode `partial knowledge'
- a key component of successful knowledge systems. We introduce and describe
the concepts of `ensembles of embeddings' and `aggregate embeddings' and
demonstrate how they allow for partial knowledge.
| [
{
"version": "v1",
"created": "Sat, 28 Oct 2017 23:55:33 GMT"
}
] | 1,509,408,000,000 | [
[
"Guha",
"Ramanathan V.",
""
]
] |
1711.00054 | Lei Lin | Zhenhua Zhang, Lei Lin | Abnormal Spatial-Temporal Pattern Analysis for Niagara Frontier Border
Wait Times | submitted to ITS World Congress 2017 Montreal | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Border crossing delays cause problems such as huge economic losses and heavy
environmental pollution. To understand more about the nature of border
crossing delay, this study applies a dictionary-based compression algorithm to
process the historical Niagara Frontier border wait times data. It can identify
the abnormal spatial-temporal patterns for both passenger vehicles and trucks
at three bridges connecting US and Canada. Furthermore, it provides a
quantitative anomaly score to rank the wait times patterns across the three
bridges for each vehicle type and each direction. By analyzing the top three
most abnormal patterns, we find that there are at least two factors
contributing to the anomaly of the patterns. Weekends and holidays may cause
unusual heavy congestion at the three bridges at the same time, and the
freight transportation demand may be uneven from Canada to the USA at Peace
Bridge and Lewiston-Queenston Bridge, which may lead to a high anomaly score.
By calculating the frequency of the top 5% abnormal patterns by hour of the
day, the results show that for cars from the USA to Canada, the frequency of
abnormal waiting time patterns is highest around noon, while for trucks in
the same direction, it is the highest during the afternoon peak hours. For
Canada to US direction, the frequency of abnormal border wait time patterns for
both cars and trucks reaches its peak during the afternoon. The analysis of
abnormal spatial-temporal wait times patterns is promising for improving
border crossing management.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2017 18:53:26 GMT"
}
] | 1,509,580,800,000 | [
[
"Zhang",
"Zhenhua",
""
],
[
"Lin",
"Lei",
""
]
] |
1711.00129 | Xiao Li | Xiao Li, Yao Ma and Calin Belta | Automata-Guided Hierarchical Reinforcement Learning for Skill
Composition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skills learned through (deep) reinforcement learning often generalize poorly
across domains and re-training is necessary when presented with a new task. We
present a framework that combines techniques in \textit{formal methods} with
\textit{reinforcement learning} (RL). The methods we provide allow for
convenient specification of tasks with logical expressions, learn hierarchical
policies (meta-controller and low-level controllers) with well-defined
intrinsic rewards, and construct new skills from existing ones with little to
no additional exploration. We evaluate the proposed methods in a simple grid
world simulation as well as a more complicated kitchen environment in AI2Thor.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2017 22:21:02 GMT"
},
{
"version": "v2",
"created": "Mon, 21 May 2018 01:38:04 GMT"
}
] | 1,526,947,200,000 | [
[
"Li",
"Xiao",
""
],
[
"Ma",
"Yao",
""
],
[
"Belta",
"Calin",
""
]
] |
1711.00138 | Sam Greydanus | Sam Greydanus, Anurag Koul, Jonathan Dodge, Alan Fern | Visualizing and Understanding Atari Agents | ICML 2018 conference paper. Code:
https://github.com/greydanus/visualize_atari Blog:
https://greydanus.github.io/2017/11/01/visualize-atari/ | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While deep reinforcement learning (deep RL) agents are effective at
maximizing rewards, it is often unclear what strategies they use to do so. In
this paper, we take a step toward explaining deep RL agents through a case
study using Atari 2600 environments. In particular, we focus on using saliency
maps to understand how an agent learns and executes a policy. We introduce a
method for generating useful saliency maps and use it to show 1) what strong
agents attend to, 2) whether agents are making decisions for the right or wrong
reasons, and 3) how agents evolve during learning. We also test our method on
non-expert human subjects and find that it improves their ability to reason
about these agents. Overall, our results show that saliency information can
provide significant insight into an RL agent's decisions and learning behavior.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2017 23:03:17 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Nov 2017 19:35:42 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Nov 2017 21:34:02 GMT"
},
{
"version": "v4",
"created": "Fri, 23 Mar 2018 00:37:12 GMT"
},
{
"version": "v5",
"created": "Mon, 10 Sep 2018 18:42:40 GMT"
}
] | 1,536,710,400,000 | [
[
"Greydanus",
"Sam",
""
],
[
"Koul",
"Anurag",
""
],
[
"Dodge",
"Jonathan",
""
],
[
"Fern",
"Alan",
""
]
] |
1711.00150 | Anna Korhonen | Yiding Lu, Yufan Guo, Anna Korhonen | Erratum: Link prediction in drug-target interactions network using
similarity indices | 10 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Background: In silico drug-target interaction (DTI) prediction plays an
integral role in drug repositioning: the discovery of new uses for existing
drugs. One popular method of drug repositioning is network-based DTI
prediction, which uses complex network theory to predict DTIs from a
drug-target network. Currently, most network-based DTI prediction is based on
machine learning methods such as Restricted Boltzmann Machines (RBM) or Support
Vector Machines (SVM). These methods require additional information about the
characteristics of drugs, targets and DTIs, such as chemical structure, genome
sequence, binding types, causes of interactions, etc., and do not perform
satisfactorily when such information is unavailable. We propose a new,
alternative method for DTI prediction that makes use of only network topology
information, attempting to solve this problem.
Results: We compare our method for DTI prediction against the well-known RBM
approach. We show that when applied to the MATADOR database, our approach based
on node neighborhoods yields higher precision for high-ranking predictions than
RBM when no information regarding DTI types is available.
Conclusion: This demonstrates that approaches purely based on network
topology provide a more suitable approach to DTI prediction in the many
real-life situations where little or no prior knowledge is available about the
characteristics of drugs, targets, or their interactions.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2017 00:21:48 GMT"
}
] | 1,509,580,800,000 | [
[
"Lu",
"Yiding",
""
],
[
"Guo",
"Yufan",
""
],
[
"Korhonen",
"Anna",
""
]
] |
1711.00363 | Andrew Critch PhD | Andrew Critch and Stuart Russell | Servant of Many Masters: Shifting priorities in Pareto-optimal
sequential decision-making | 10 pages. arXiv admin note: substantial text overlap with
arXiv:1701.01302 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is often argued that an agent making decisions on behalf of two or more
principals who have different utility functions should adopt a {\em
Pareto-optimal} policy, i.e., a policy that cannot be improved upon for one
agent without making sacrifices for another. A famous theorem of Harsanyi shows
that, when the principals have a common prior on the outcome distributions of
all policies, a Pareto-optimal policy for the agent is one that maximizes a
fixed, weighted linear combination of the principals' utilities.
In this paper, we show that Harsanyi's theorem does not hold for principals
with different priors, and derive a more precise generalization which does
hold, which constitutes our main result. In this more general case, the
relative weight given to each principal's utility should evolve over time
according to how well the agent's observations conform with that principal's
prior. The result has implications for the design of contracts, treaties, joint
ventures, and robots.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2017 05:09:13 GMT"
}
] | 1,509,580,800,000 | [
[
"Critch",
"Andrew",
""
],
[
"Russell",
"Stuart",
""
]
] |
1711.00399 | Brent Mittelstadt | Sandra Wachter, Brent Mittelstadt, Chris Russell | Counterfactual Explanations without Opening the Black Box: Automated
Decisions and the GDPR | null | Harvard Journal of Law & Technology, 2018 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been much discussion of the right to explanation in the EU General
Data Protection Regulation, and its existence, merits, and disadvantages.
Implementing a right to explanation that opens the black box of algorithmic
decision-making faces major legal and technical barriers. Explaining the
functionality of complex algorithmic decision-making systems and their
rationale in specific cases is a technically challenging problem. Some
explanations may offer little meaningful information to data subjects, raising
questions around their value. Explanations of automated decisions need not
hinge on the general public understanding how algorithmic systems function.
Even though such interpretability is of great importance and should be pursued,
explanations can, in principle, be offered without opening the black box.
Looking at explanations as a means to help a data subject act rather than
merely understand, one could gauge the scope and content of explanations
according to the specific goal or action they are intended to support. From the
perspective of individuals affected by automated decision-making, we propose
three aims for explanations: (1) to inform and help the individual understand
why a particular decision was reached, (2) to provide grounds to contest the
decision if the outcome is undesired, and (3) to understand what would need to
change in order to receive a desired result in the future, based on the current
decision-making model. We assess how each of these goals finds support in the
GDPR. We suggest data controllers should offer a particular type of
explanation, unconditional counterfactual explanations, to support these three
aims. These counterfactual explanations describe the smallest change to the
world that can be made to obtain a desirable outcome, or to arrive at the
closest possible world, without needing to explain the internal logic of the
system.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2017 15:39:23 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Dec 2017 12:26:47 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Mar 2018 11:43:46 GMT"
}
] | 1,521,676,800,000 | [
[
"Wachter",
"Sandra",
""
],
[
"Mittelstadt",
"Brent",
""
],
[
"Russell",
"Chris",
""
]
] |
1711.00694 | Smitha Milli | Smitha Milli, Pieter Abbeel, Igor Mordatch | Interpretable and Pedagogical Examples | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Teachers intentionally pick the most informative examples to show their
students. However, if the teacher and student are neural networks, the examples
that the teacher network learns to give, although effective at teaching the
student, are typically uninterpretable. We show that training the student and
teacher iteratively, rather than jointly, can produce interpretable teaching
strategies. We evaluate interpretability by (1) measuring the similarity of the
teacher's emergent strategies to intuitive strategies in each domain and (2)
conducting human experiments to evaluate how effective the teacher's strategies
are at teaching humans. We show that the teacher network learns to select or
generate interpretable, pedagogical examples to teach rule-based,
probabilistic, boolean, and hierarchical concepts.
| [
{
"version": "v1",
"created": "Thu, 2 Nov 2017 11:40:08 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Feb 2018 15:41:23 GMT"
}
] | 1,518,652,800,000 | [
[
"Milli",
"Smitha",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Mordatch",
"Igor",
""
]
] |
1711.00909 | Robert Woodward | Robert J. Woodward and Berthe Y. Choueiry | Weight-Based Variable Ordering in the Context of High-Level
Consistencies | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dom/wdeg is one of the best performing heuristics for dynamic variable
ordering in backtrack search [Boussemart et al., 2004]. As originally defined,
this heuristic increments the weight of the constraint that causes a domain
wipeout (i.e., a dead-end) when enforcing arc consistency during search. "The
process of weighting constraints with dom/wdeg is not defined when more than
one constraint lead to a domain wipeout [Vion et al., 2011]." In this paper, we
investigate how weights should be updated in the context of two high-level
consistencies, namely, singleton (POAC) and relational consistencies (RNIC). We
propose, analyze, and empirically evaluate several strategies for updating the
weights. We statistically compare the proposed strategies and conclude with our
recommendations.
| [
{
"version": "v1",
"created": "Thu, 2 Nov 2017 19:55:18 GMT"
}
] | 1,509,926,400,000 | [
[
"Woodward",
"Robert J.",
""
],
[
"Choueiry",
"Berthe Y.",
""
]
] |
1711.01503 | Richard Liaw | Richard Liaw, Sanjay Krishnan, Animesh Garg, Daniel Crankshaw, Joseph
E. Gonzalez, Ken Goldberg | Composing Meta-Policies for Autonomous Driving Using Hierarchical Deep
Reinforcement Learning | 8 pages, 11 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rather than learning new control policies for each new task, it is possible,
when tasks share some structure, to compose a "meta-policy" from previously
learned policies. This paper reports results from experiments using Deep
Reinforcement Learning on a continuous-state, discrete-action autonomous
driving simulator. We explore how Deep Neural Networks can represent
meta-policies that switch among a set of previously learned policies,
specifically in settings where the dynamics of a new scenario are composed of a
mixture of previously learned dynamics and where the state observation is
possibly corrupted by sensing noise. We also report the results of experiments
varying dynamics mixes, distractor policies, magnitudes/distributions of
sensing noise, and obstacles. In a fully observed experiment, the meta-policy
learning algorithm achieves 2.6x the reward achieved by the next best policy
composition technique with 80% less exploration. In a partially observed
experiment, the meta-policy learning algorithm converges after 50 iterations
while a direct application of RL fails to converge even after 200 iterations.
| [
{
"version": "v1",
"created": "Sat, 4 Nov 2017 22:37:25 GMT"
}
] | 1,510,012,800,000 | [
[
"Liaw",
"Richard",
""
],
[
"Krishnan",
"Sanjay",
""
],
[
"Garg",
"Animesh",
""
],
[
"Crankshaw",
"Daniel",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Goldberg",
"Ken",
""
]
] |
1711.01518 | Rivindu Perera | Rivindu Perera, Parma Nand, Boris Bacic, Wen-Hsin Yang, Kazuhiro Seki,
and Radek Burget | Semantic Web Today: From Oil Rigs to Panama Papers | 21 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The next leap on the internet has already started as Semantic Web. At its
core, Semantic Web transforms the document-oriented web into a data-oriented web
enriched with semantics embedded as metadata. This change in perspective
towards the web offers numerous benefits for a vast number of data-intensive
industries that are bound to the web and its related applications. The
industries are diverse, as they range from Oil & Gas exploration to
investigative journalism, and everything in between. This paper discusses eight
different industries which currently reap the benefits of Semantic Web. The
paper also offers a future outlook into Semantic Web applications and discusses
the areas in which Semantic Web would play a key role in the future.
| [
{
"version": "v1",
"created": "Sun, 5 Nov 2017 01:52:17 GMT"
}
] | 1,510,012,800,000 | [
[
"Perera",
"Rivindu",
""
],
[
"Nand",
"Parma",
""
],
[
"Bacic",
"Boris",
""
],
[
"Yang",
"Wen-Hsin",
""
],
[
"Seki",
"Kazuhiro",
""
],
[
"Burget",
"Radek",
""
]
] |
1711.03087 | Jonathan C. Campbell | Jonathan C. Campbell (1) and Clark Verbrugge (1) ((1) McGill
University) | Exploration in NetHack With Secret Discovery | 11 pages, 11 figures. Accepted in IEEE Transactions on Games.
Revision adds BotHack comparison, result breakdown by num. map rooms, and
improved optimal solution | null | 10.1109/TG.2018.2861759 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Roguelike games generally feature exploration problems as a critical, yet
often repetitive element of gameplay. Automated approaches, however, face
challenges in terms of optimality, as well as due to incomplete information,
such as from the presence of secret doors. This paper presents an algorithmic
approach to exploration of roguelike dungeon environments. Our design aims to
minimize exploration time, balancing coverage and discovery of secret areas
with resource cost. Our algorithm is based on the concept of occupancy maps
popular in robotics, adapted to encourage efficient discovery of secret access
points. Through extensive experimentation on NetHack maps we show that this
technique is significantly more efficient than simpler greedy approaches and an
existing automated player. We further investigate optimized parameterization
for the algorithm through a comprehensive data analysis. These results point
towards better automation for players as well as heuristics applicable to fully
automated gameplay.
| [
{
"version": "v1",
"created": "Wed, 8 Nov 2017 18:40:00 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Aug 2018 21:12:48 GMT"
}
] | 1,533,686,400,000 | [
[
"Campbell",
"Jonathan C.",
""
],
[
"Verbrugge",
"Clark",
""
]
] |
1711.03237 | James Wu | Dr. W. A. Rivera and James C. Wu | CogSciK: Clustering for Cognitive Science Motivated Decision Making | 5 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational models of decision-making must contend with the variance of
context and any number of possible decisions that a defined strategic actor can
make at a given time. Relying on cognitive science theory, the authors have
created an algorithm that captures the orientation of the actor towards an
object and arrays the possible decisions available to that actor based on their
given intersubjective orientation. This algorithm, like a traditional K-means
clustering algorithm, relies on a core-periphery structure that gives the
likelihood of moves as those closest to the cluster's centroid. The result is
an algorithm that enables unsupervised classification of an array of decision
points belonging to an actor's present state and deeply rooted in cognitive
science theory.
| [
{
"version": "v1",
"created": "Thu, 9 Nov 2017 02:28:59 GMT"
}
] | 1,510,272,000,000 | [
[
"Rivera",
"Dr. W. A.",
""
],
[
"Wu",
"James C.",
""
]
] |
1711.03243 | Yewen Pu | Yewen Pu, Zachery Miranda, Armando Solar-Lezama, Leslie Pack Kaelbling | Selecting Representative Examples for Program Synthesis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Program synthesis is a class of regression problems where one seeks a
solution, in the form of a source-code program, mapping the inputs to their
corresponding outputs exactly. Due to its precise and combinatorial nature,
program synthesis is commonly formulated as a constraint satisfaction problem,
where input-output examples are encoded as constraints and solved with a
constraint solver. A key challenge of this formulation is scalability: while
constraint solvers work well with a few well-chosen examples, a large set of
examples can incur significant overhead in both time and memory. We describe a
method to discover a subset of examples that is both small and representative:
the subset is constructed iteratively, using a neural network to predict the
probability of unchosen examples conditioned on the chosen examples in the
subset, and greedily adding the least probable example. We empirically evaluate
the representativeness of the subsets constructed by our method, and
demonstrate such subsets can significantly improve synthesis time and
stability.
| [
{
"version": "v1",
"created": "Thu, 9 Nov 2017 03:38:15 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Feb 2018 00:34:06 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jun 2018 04:06:10 GMT"
}
] | 1,528,416,000,000 | [
[
"Pu",
"Yewen",
""
],
[
"Miranda",
"Zachery",
""
],
[
"Solar-Lezama",
"Armando",
""
],
[
"Kaelbling",
"Leslie Pack",
""
]
] |
1711.03430 | Nicolas Troquard | Nicolas Troquard, Roberto Confalonieri, Pietro Galliani, Rafael
Penaloza, Daniele Porello, Oliver Kutz | Repairing Ontologies via Axiom Weakening | To appear AAAI 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontology engineering is a hard and error-prone task, in which small changes
may lead to errors, or even produce an inconsistent ontology. As ontologies
grow in size, the need for automated methods for repairing inconsistencies
while preserving as much of the original knowledge as possible increases. Most
previous approaches to this task are based on removing a few axioms from the
ontology to regain consistency. We propose a new method based on weakening
these axioms to make them less restrictive, employing the use of refinement
operators. We introduce the theoretical framework for weakening DL ontologies,
propose algorithms to repair ontologies based on the framework, and provide an
analysis of the computational complexity. Through an empirical analysis made
over real-life ontologies, we show that our approach preserves significantly
more of the original knowledge of the ontology than removing axioms.
| [
{
"version": "v1",
"created": "Thu, 9 Nov 2017 15:39:41 GMT"
}
] | 1,510,272,000,000 | [
[
"Troquard",
"Nicolas",
""
],
[
"Confalonieri",
"Roberto",
""
],
[
"Galliani",
"Pietro",
""
],
[
"Penaloza",
"Rafael",
""
],
[
"Porello",
"Daniele",
""
],
[
"Kutz",
"Oliver",
""
]
] |
1711.03580 | Kananat Suwanviwatana | Kananat Suwanviwatana, Hiroyuki Iida | First Results from Using Game Refinement Measure and Learning
Coefficient in Scrabble | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the entertainment experience and learning experience in
Scrabble. It proposes a new measure from the educational point of view, which
we call learning coefficient, based on the balance between the learner's skill
and the challenge in Scrabble. Scrabble variants, generated using different
size of board and dictionary, are analyzed with two measures of game refinement
and learning coefficient. The results show that 13x13 Scrabble yields the best
entertainment experience and 15x15 (standard) Scrabble with 4% of original
dictionary size yields the most effective environment for language learners.
Moreover, 15x15 Scrabble with 10% of original dictionary size has a good
balance between entertainment and learning experience.
| [
{
"version": "v1",
"created": "Tue, 7 Nov 2017 10:39:42 GMT"
}
] | 1,510,531,200,000 | [
[
"Suwanviwatana",
"Kananat",
""
],
[
"Iida",
"Hiroyuki",
""
]
] |
1711.03817 | Anna Harutyunyan | Anna Harutyunyan, Peter Vrancx, Pierre-Luc Bacon, Doina Precup, Ann
Nowe | Learning with Options that Terminate Off-Policy | AAAI 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A temporally abstract action, or an option, is specified by a policy and a
termination condition: the policy guides option behavior, and the termination
condition roughly determines its length. Generally, learning with longer
options (like learning with multi-step returns) is known to be more efficient.
However, if the option set for the task is not ideal, and cannot express the
primitive optimal policy exactly, shorter options offer more flexibility and
can yield a better solution. Thus, the termination condition puts learning
efficiency at odds with solution quality. We propose to resolve this dilemma by
decoupling the behavior and target terminations, just like it is done with
policies in off-policy learning. To this end, we give a new algorithm,
Q(\beta), that learns the solution with respect to any termination condition,
regardless of how the options actually terminate. We derive Q(\beta) by casting
learning with options into a common framework with well-studied multi-step
off-policy learning. We validate our algorithm empirically, and show that it
holds up to its motivating claims.
| [
{
"version": "v1",
"created": "Fri, 10 Nov 2017 13:49:47 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Dec 2017 12:57:35 GMT"
}
] | 1,512,432,000,000 | [
[
"Harutyunyan",
"Anna",
""
],
[
"Vrancx",
"Peter",
""
],
[
"Bacon",
"Pierre-Luc",
""
],
[
"Precup",
"Doina",
""
],
[
"Nowe",
"Ann",
""
]
] |
1711.03902 | Tarek Richard Besold | Tarek R. Besold, Artur d'Avila Garcez, Sebastian Bader, Howard Bowman,
Pedro Domingos, Pascal Hitzler, Kai-Uwe Kuehnberger, Luis C. Lamb, Daniel
Lowd, Priscila Machado Vieira Lima, Leo de Penning, Gadi Pinkas, Hoifung
Poon, Gerson Zaverucha | Neural-Symbolic Learning and Reasoning: A Survey and Interpretation | 58 pages, work in progress | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study and understanding of human behaviour is relevant to computer
science, artificial intelligence, neural computation, cognitive science,
philosophy, psychology, and several other areas. Presupposing cognition as
basis of behaviour, among the most prominent tools in the modelling of
behaviour are computational-logic systems, connectionist models of cognition,
and models of uncertainty. Recent studies in cognitive science, artificial
intelligence, and psychology have produced a number of cognitive models of
reasoning, learning, and language that are underpinned by computation. In
addition, efforts in computer science research have led to the development of
cognitive computational systems integrating machine learning and automated
reasoning. Such systems have shown promise in a range of applications,
including computational biology, fault diagnosis, training and assessment in
simulators, and software verification. This joint survey reviews the personal
ideas and views of several researchers on neural-symbolic learning and
reasoning. The article is organised in three parts: Firstly, we frame the scope
and goals of neural-symbolic computation and have a look at the theoretical
foundations. We then proceed to describe the realisations of neural-symbolic
computation, systems, and applications. Finally we present the challenges
facing the area and avenues for further research.
| [
{
"version": "v1",
"created": "Fri, 10 Nov 2017 16:14:22 GMT"
}
] | 1,510,531,200,000 | [
[
"Besold",
"Tarek R.",
""
],
[
"Garcez",
"Artur d'Avila",
""
],
[
"Bader",
"Sebastian",
""
],
[
"Bowman",
"Howard",
""
],
[
"Domingos",
"Pedro",
""
],
[
"Hitzler",
"Pascal",
""
],
[
"Kuehnberger",
"Kai-Uwe",
""
],
[
"Lamb",
"Luis C.",
""
],
[
"Lowd",
"Daniel",
""
],
[
"Lima",
"Priscila Machado Vieira",
""
],
[
"de Penning",
"Leo",
""
],
[
"Pinkas",
"Gadi",
""
],
[
"Poon",
"Hoifung",
""
],
[
"Zaverucha",
"Gerson",
""
]
] |
1711.04309 | Joshua Gans | Joshua S. Gans | Self-Regulating Artificial General Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here we examine the paperclip apocalypse concern for artificial general
intelligence (or AGI) whereby a superintelligent AI with a simple goal (i.e.,
producing paperclips) accumulates power so that all resources are devoted
towards that simple goal and are unavailable for any other use. We provide
conditions under which a paperclip apocalypse can arise but also show that, under
certain architectures for recursive self-improvement of AIs, that a paperclip
AI may refrain from allowing power capabilities to be developed. The reason is
that such developments pose the same control problem for the AI as they do for
humans (over AIs) and hence, threaten to deprive it of resources for its
primary goal.
| [
{
"version": "v1",
"created": "Sun, 12 Nov 2017 15:19:56 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Feb 2018 21:00:42 GMT"
}
] | 1,518,998,400,000 | [
[
"Gans",
"Joshua S.",
""
]
] |
1711.04438 | Zongyi Li | Brendan Juba, Zongyi Li, Evan Miller | Learning Abduction under Partial Observability | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Juba recently proposed a formulation of learning abductive reasoning from
examples, in which both the relative plausibility of various explanations, as
well as which explanations are valid, are learned directly from data. The main
shortcoming of this formulation of the task is that it assumes access to
full-information (i.e., fully specified) examples; relatedly, it offers no role
for declarative background knowledge, as such knowledge is rendered redundant
in the abduction task by complete information. In this work, we extend the
formulation to utilize such partially specified examples, along with
declarative background knowledge about the missing data. We show that it is
possible to use implicitly learned rules together with the explicitly given
declarative knowledge to support hypotheses in the course of abduction. We
observe that when a small explanation exists, it is possible to obtain a
much-improved guarantee in the challenging exception-tolerant setting. Such
small, human-understandable explanations are of particular interest for
potential applications of the task.
| [
{
"version": "v1",
"created": "Mon, 13 Nov 2017 06:51:40 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Nov 2017 22:35:49 GMT"
},
{
"version": "v3",
"created": "Sat, 25 Nov 2017 00:21:16 GMT"
}
] | 1,511,827,200,000 | [
[
"Juba",
"Brendan",
""
],
[
"Li",
"Zongyi",
""
],
[
"Miller",
"Evan",
""
]
] |
1711.04994 | Mikael Henaff | Mikael Henaff, Junbo Zhao and Yann LeCun | Prediction Under Uncertainty with Error-Encoding Networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we introduce a new framework for performing temporal predictions
in the presence of uncertainty. It is based on a simple idea of disentangling
components of the future state which are predictable from those which are
inherently unpredictable, and encoding the unpredictable components into a
low-dimensional latent variable which is fed into a forward model. Our method
uses a supervised training objective which is fast and easy to train. We
evaluate it in the context of video prediction on multiple datasets and show
that it is able to consistently generate diverse predictions without the need
for alternating minimization over a latent space or adversarial training.
| [
{
"version": "v1",
"created": "Tue, 14 Nov 2017 08:32:43 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Nov 2017 07:32:36 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Nov 2017 23:11:58 GMT"
}
] | 1,512,345,600,000 | [
[
"Henaff",
"Mikael",
""
],
[
"Zhao",
"Junbo",
""
],
[
"LeCun",
"Yann",
""
]
] |
1711.05105 | Mehdi Sadeqi | Mehdi Sadeqi, Robert C. Holte and Sandra Zilles | An Empirical Study of the Effects of Spurious Transitions on
Abstraction-based Heuristics | 38 pages, 9 figures, appendix with 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The efficient solution of state space search problems is often attempted by
guiding search algorithms with heuristics (estimates of the distance from any
state to the goal). A popular way for creating heuristic functions is by using
an abstract version of the state space. However, the quality of
abstraction-based heuristic functions, and thus the speed of search, can suffer
from spurious transitions, i.e., state transitions in the abstract state space
for which no corresponding transitions in the reachable component of the
original state space exist. Our first contribution is a quantitative study
demonstrating that the harmful effects of spurious transitions on heuristic
functions can be substantial, in terms of both the increase in the number of
abstract states and the decrease in the heuristic values, which may slow down
search. Our second contribution is an empirical study on the benefits of
removing a certain kind of spurious transition, namely those that involve
states with a pair of mutually exclusive (mutex) variable-value assignments. In
the context of state space planning, a mutex pair is a pair of variable-value
assignments that does not occur in any reachable state. Detecting mutex pairs
is a problem that has been addressed frequently in the planning literature. Our
study shows that there are cases in which mutex detection helps to eliminate
harmful spurious transitions to a large extent and thus to speed up search
substantially.
| [
{
"version": "v1",
"created": "Tue, 14 Nov 2017 14:27:05 GMT"
}
] | 1,510,704,000,000 | [
[
"Sadeqi",
"Mehdi",
""
],
[
"Holte",
"Robert C.",
""
],
[
"Zilles",
"Sandra",
""
]
] |
1711.05216 | Francesco Scarcello | Georg Gottlob, Gianluigi Greco, Francesco Scarcello | Tree Projections and Constraint Optimization Problems: Fixed-Parameter
Tractability and Parallel Algorithms | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tree projections provide a unifying framework to deal with most structural
decomposition methods of constraint satisfaction problems (CSPs). Within this
framework, a CSP instance is decomposed into a number of sub-problems, called
views, whose solutions are either already available or can be computed
efficiently. The goal is to arrange portions of these views in a tree-like
structure, called tree projection, which determines an efficiently solvable CSP
instance equivalent to the original one. Deciding whether a tree projection
exists is NP-hard. Solution methods have therefore been proposed in the
literature that do not require a tree projection to be given, and that either
correctly decide whether the given CSP instance is satisfiable, or return that
a tree projection actually does not exist. These approaches had not been
generalized so far on CSP extensions for optimization problems, where the goal
is to compute a solution of maximum value/minimum cost. The paper fills the
gap, by exhibiting a fixed-parameter polynomial-time algorithm that either
disproves the existence of tree projections or computes an optimal solution,
with the parameter being the size of the expression of the objective function
to be optimized over all possible solutions (and not the size of the whole
constraint formula, used in related works). Tractability results are also
established for the problem of returning the best K solutions. Finally,
parallel algorithms for such optimization problems are proposed and analyzed.
Given that the classes of acyclic hypergraphs, hypergraphs of bounded
treewidth, and hypergraphs of bounded generalized hypertree width are all
covered as special cases of the tree projection framework, the results in this
paper directly apply to these classes. These classes are extensively considered
in the CSP setting, as well as in conjunctive database query evaluation and
optimization.
| [
{
"version": "v1",
"created": "Tue, 14 Nov 2017 17:30:08 GMT"
}
] | 1,510,704,000,000 | [
[
"Gottlob",
"Georg",
""
],
[
"Greco",
"Gianluigi",
""
],
[
"Scarcello",
"Francesco",
""
]
] |
1711.05227 | Boris Motik | Michael Benedikt and Boris Motik and Efthymia Tsamoura | Goal-Driven Query Answering for Existential Rules with Equality | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by the magic sets for Datalog, we present a novel goal-driven
approach for answering queries over terminating existential rules with equality
(aka TGDs and EGDs). Our technique improves the performance of query answering
by pruning the consequences that are not relevant for the query. This is
challenging in our setting because equalities can potentially affect all
predicates in a dataset. We address this problem by combining the existing
singularization technique with two new ingredients: an algorithm for
identifying the rules relevant to a query and a new magic sets algorithm. We
show empirically that our technique can significantly improve the performance
of query answering, and that it can mean the difference between answering a
query in a few seconds or not being able to process the query at all.
| [
{
"version": "v1",
"created": "Tue, 14 Nov 2017 18:00:38 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Nov 2017 20:09:27 GMT"
}
] | 1,511,308,800,000 | [
[
"Benedikt",
"Michael",
""
],
[
"Motik",
"Boris",
""
],
[
"Tsamoura",
"Efthymia",
""
]
] |
1711.05435 | Takuma Ebisu | Takuma Ebisu and Ryutaro Ichise | TorusE: Knowledge Graph Embedding on a Lie Group | accepted for AAAI-18 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graphs are useful for many artificial intelligence (AI) tasks.
However, knowledge graphs often have missing facts. To populate the graphs,
knowledge graph embedding models have been developed. Knowledge graph embedding
models map entities and relations in a knowledge graph to a vector space and
predict unknown triples by scoring candidate triples. TransE is the first
translation-based method and it is well known because of its simplicity and
efficiency for knowledge graph completion. It employs the principle that the
differences between entity embeddings represent their relations. The principle
seems very simple, but it can effectively capture the rules of a knowledge
graph. However, TransE has a problem with its regularization. TransE forces
entity embeddings to be on a sphere in the embedding vector space. This
regularization warps the embeddings and makes it difficult for them to fulfill
the abovementioned principle. The regularization also adversely affects the
accuracy of link prediction. On the other hand, regularization is
important because entity embeddings diverge by negative sampling without it.
This paper proposes a novel embedding model, TorusE, to solve the
regularization problem. The principle of TransE can be defined on any Lie
group. A torus, which is one of the compact Lie groups, can be chosen for the
embedding space to avoid regularization. To the best of our knowledge, TorusE
is the first model that embeds objects on other than a real or complex vector
space, and this paper is the first to formally discuss the problem of
regularization of TransE. Our approach outperforms other state-of-the-art
approaches such as TransE, DistMult and ComplEx on a standard link prediction
task. We show that TorusE is scalable to large-size knowledge graphs and is
faster than the original TransE.
| [
{
"version": "v1",
"created": "Wed, 15 Nov 2017 07:44:22 GMT"
}
] | 1,510,790,400,000 | [
[
"Ebisu",
"Takuma",
""
],
[
"Ichise",
"Ryutaro",
""
]
] |
1711.05508 | Patrick Rodler | Patrick Rodler, Wolfgang Schmid, Konstantin Schekotihin | A Generally Applicable, Highly Scalable Measurement Computation and
Optimization Approach to Sequential Model-Based Diagnosis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model-Based Diagnosis deals with the identification of the real cause of a
system's malfunction based on a formal system model and observations of the
system behavior. When a malfunction is detected, there is usually not enough
information available to pinpoint the real cause and one needs to discriminate
between multiple fault hypotheses (called diagnoses). To this end, Sequential
Diagnosis approaches ask an oracle for additional system measurements.
This work presents strategies for (optimal) measurement selection in
model-based sequential diagnosis. In particular, assuming a set of leading
diagnoses being given, we show how queries (sets of measurements) can be
computed and optimized along two dimensions: expected number of queries and
cost per query. By means of a suitable decoupling of two optimizations and a
clever search space reduction the computations are done without any inference
engine calls. For the full search space, we give a method requiring only a
polynomial number of inferences and show how query properties can be guaranteed
which existing methods do not provide. Evaluation results using real-world
problems indicate that the new method computes (virtually) optimal queries
instantly independently of the size and complexity of the considered diagnosis
problems and outperforms equally general methods not exploiting the proposed
theory by orders of magnitude.
| [
{
"version": "v1",
"created": "Wed, 15 Nov 2017 11:44:03 GMT"
}
] | 1,510,790,400,000 | [
[
"Rodler",
"Patrick",
""
],
[
"Schmid",
"Wolfgang",
""
],
[
"Schekotihin",
"Konstantin",
""
]
] |
1711.05541 | Stuart Armstrong | Stuart Armstrong, Xavier O'Rorke | Good and safe uses of AI Oracles | 11 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is possible that powerful and potentially dangerous artificial
intelligence (AI) might be developed in the future. An Oracle is a design which
aims to restrain the impact of a potentially dangerous AI by restricting the
agent to no actions besides answering questions. Unfortunately, most Oracles
will be motivated to gain more control over the world by manipulating users
through the content of their answers, and Oracles of potentially high
intelligence might be very successful at this
\citep{DBLP:journals/corr/AlfonsecaCACAR16}. In this paper we present two
designs for Oracles which, even under pessimistic assumptions, will not
manipulate their users into releasing them and yet will still be incentivised
to provide their users with helpful answers. The first design is the
counterfactual Oracle -- which chooses its answer as if it expected nobody to
ever read it. The second design is the low-bandwidth Oracle -- which is limited
by the quantity of information it can transmit.
| [
{
"version": "v1",
"created": "Wed, 15 Nov 2017 12:47:17 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Nov 2017 11:01:01 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Nov 2017 17:17:11 GMT"
},
{
"version": "v4",
"created": "Tue, 13 Mar 2018 16:06:38 GMT"
},
{
"version": "v5",
"created": "Tue, 5 Jun 2018 11:13:48 GMT"
}
] | 1,528,243,200,000 | [
[
"Armstrong",
"Stuart",
""
],
[
"O'Rorke",
"Xavier",
""
]
] |
1711.05738 | C Lee Giles | G.Z. Sun, C.L. Giles, H.H. Chen, Y.C. Lee | The Neural Network Pushdown Automaton: Model, Stack and Learning
Simulations | null | null | null | UMIACS-TR-93-77 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order for neural networks to learn complex languages or grammars, they
must have sufficient computational power or resources to recognize or generate
such languages. Though many approaches have been discussed, one obvious
approach to enhancing the processing power of a recurrent neural network is to
couple it with an external stack memory - in effect creating a neural network
pushdown automata (NNPDA). This paper discusses in detail this NNPDA - its
construction, how it can be trained and how useful symbolic information can be
extracted from the trained network.
In order to couple the external stack to the neural network, an optimization
method is developed which uses an error function that connects the learning of
the state automaton of the neural network to the learning of the operation of
the external stack. To minimize the error function using gradient descent
learning, an analog stack is designed such that the action and storage of
information in the stack are continuous. One interpretation of a continuous
stack is the probabilistic storage of and action on data. After training on
sample strings of an unknown source grammar, a quantization procedure extracts
from the analog stack and neural network a discrete pushdown automata (PDA).
Simulations show that in learning deterministic context-free grammars - the
balanced parenthesis language, 1^n0^n, and the deterministic Palindrome - the
extracted PDA is correct in the sense that it can correctly recognize unseen
strings of arbitrary length. In addition, the extracted PDAs can be shown to be
identical or equivalent to the PDAs of the source grammars which were used to
generate the training strings.
| [
{
"version": "v1",
"created": "Wed, 15 Nov 2017 18:26:49 GMT"
}
] | 1,510,876,800,000 | [
[
"Sun",
"G. Z.",
""
],
[
"Giles",
"C. L.",
""
],
[
"Chen",
"H. H.",
""
],
[
"Lee",
"Y. C.",
""
]
] |
1711.05767 | Avinash Achar | Avinash Achar, Venkatesh Sarangan, R Rohith, Anand Sivasubramaniam | Predicting vehicular travel times by modeling heterogeneous influences
between arterial roads | 13 pages, conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting travel times of vehicles in urban settings is a useful and
tangible quantity of interest in the context of intelligent transportation
systems. We address the problem of travel time prediction in arterial roads
using data sampled from probe vehicles. There is only a limited literature on
methods using data input from probe vehicles. The spatio-temporal dependencies
captured by existing data driven approaches are either too detailed or very
simplistic. We strike a balance between the existing data-driven approaches to
account for varying degrees of influence a given road may experience from its
neighbors, while controlling the number of parameters to be learnt.
Specifically, we use a NoisyOR conditional probability distribution (CPD) in
conjunction with a dynamic bayesian network (DBN) to model state transitions of
various roads. We propose an efficient algorithm to learn model parameters. We
propose an algorithm for predicting travel times on trips of arbitrary
durations. Using synthetic and real world data traces we demonstrate the
superior performance of the proposed method under different traffic conditions.
| [
{
"version": "v1",
"created": "Wed, 15 Nov 2017 19:31:55 GMT"
}
] | 1,510,876,800,000 | [
[
"Achar",
"Avinash",
""
],
[
"Sarangan",
"Venkatesh",
""
],
[
"Rohith",
"R",
""
],
[
"Sivasubramaniam",
"Anand",
""
]
] |
1711.05788 | Huaiyang Zhong | Xiaocheng Li, Huaiyang Zhong, Margaret L. Brandeau | Quantile Markov Decision Process | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of a traditional Markov decision process (MDP) is to maximize
expected cumulative reward over a defined horizon (possibly infinite). In many
applications, however, a decision maker may be interested in optimizing a
specific quantile of the cumulative reward instead of its expectation. In this
paper we consider the problem of optimizing the quantiles of the cumulative
rewards of a Markov decision process (MDP), which we refer to as a quantile
Markov decision process (QMDP). We provide analytical results characterizing
the optimal QMDP value function and present a dynamic programming-based
algorithm to solve for the optimal policy. The algorithm also extends to the
MDP problem with a conditional value-at-risk (CVaR) objective. We illustrate
the practical relevance of our model by evaluating it on an HIV treatment
initiation problem, where patients aim to balance the potential benefits and
risks of the treatment.
| [
{
"version": "v1",
"created": "Wed, 15 Nov 2017 20:24:51 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jan 2018 22:46:28 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Sep 2019 23:47:35 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Aug 2020 08:33:36 GMT"
}
] | 1,596,585,600,000 | [
[
"Li",
"Xiaocheng",
""
],
[
"Zhong",
"Huaiyang",
""
],
[
"Brandeau",
"Margaret L.",
""
]
] |
1711.05900 | Dhanya Sridhar | Dhanya Sridhar, Jay Pujara, Lise Getoor | Using Noisy Extractions to Discover Causal Knowledge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge bases (KB) constructed through information extraction from text
play an important role in query answering and reasoning. In this work, we study
a particular reasoning task, the problem of discovering causal relationships
between entities, known as causal discovery. There are two contrasting types of
approaches to discovering causal knowledge. One approach attempts to identify
causal relationships from text using automatic extraction techniques, while the
other approach infers causation from observational data. However, extractions
alone are often insufficient to capture complex patterns and full observational
data is expensive to obtain. We introduce a probabilistic method for fusing
noisy extractions with observational data to discover causal knowledge. We
propose a principled approach that uses the probabilistic soft logic (PSL)
framework to encode well-studied constraints to recover long-range patterns and
consistent predictions, while cheaply acquired extractions provide a proxy for
unseen observations. We apply our method to gene regulatory networks and show the
promise of exploiting KB signals in causal discovery, suggesting a critical,
new area of research.
| [
{
"version": "v1",
"created": "Thu, 16 Nov 2017 02:57:00 GMT"
}
] | 1,510,876,800,000 | [
[
"Sridhar",
"Dhanya",
""
],
[
"Pujara",
"Jay",
""
],
[
"Getoor",
"Lise",
""
]
] |
1711.05905 | Yijia Wang | Yijia Wang, Yan Wan and Zhijian Wang | Using experimental game theory to transit human values to ethical AI | 6 pages, 8 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowing the reflection of game theory and ethics, we develop a mathematical
representation to bridge the gap between the concepts in moral philosophy
(e.g., Kantian and Utilitarian) and AI ethics industry technology standard
(e.g., IEEE P7000 standard series for Ethical AI). As an application, we
demonstrate how human value can be obtained from the experimental game theory
(e.g., trust game experiment) so as to build an ethical AI. Moreover, an
approach to test the ethics (rightness or wrongness) of a given AI algorithm by
using an iterated Prisoner's Dilemma Game experiment is discussed as an
example. Compared with existing mathematical frameworks and testing method on
AI ethics technology, the advantages of the proposed approach are analyzed.
| [
{
"version": "v1",
"created": "Thu, 16 Nov 2017 03:30:29 GMT"
}
] | 1,510,876,800,000 | [
[
"Wang",
"Yijia",
""
],
[
"Wan",
"Yan",
""
],
[
"Wang",
"Zhijian",
""
]
] |
1711.06035 | Martijn Van Otterlo | Martijn van Otterlo | From Algorithmic Black Boxes to Adaptive White Boxes: Declarative
Decision-Theoretic Ethical Programs as Codes of Ethics | 7 pages, 1 figure, submitted | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ethics of algorithms is an emerging topic in various disciplines such as
social science, law, and philosophy, but also artificial intelligence (AI). The
value alignment problem expresses the challenge of (machine) learning values
that are, in some way, aligned with human requirements or values. In this paper
I argue for looking at how humans have formalized and communicated values, in
professional codes of ethics, and for exploring declarative decision-theoretic
ethical programs (DDTEP) to formalize codes of ethics. This renders machine
ethical reasoning and decision-making, as well as learning, more transparent
and hopefully more accountable. The paper includes proof-of-concept examples of
known toy dilemmas and gatekeeping domains such as archives and libraries.
| [
{
"version": "v1",
"created": "Thu, 16 Nov 2017 11:29:54 GMT"
}
] | 1,510,876,800,000 | [
[
"van Otterlo",
"Martijn",
""
]
] |
1711.06301 | Yuan Yang | Yuan Yang | One Model for the Learning of Language | This is a draft write-up of an undergraduate project. A full journal
version is still under preparation | null | 10.1073/pnas.2021865119 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major target of linguistics and cognitive science has been to understand
what class of learning systems can acquire the key structures of natural
language. Until recently, the computational requirements of language have been
used to argue that learning is impossible without a highly constrained
hypothesis space. Here, we describe a learning system that is maximally
unconstrained, operating over the space of all computations, and is able to
acquire several of the key structures present in natural language from positive
evidence alone. The model successfully acquires regular (e.g. $(ab)^n$),
context-free (e.g. $a^n b^n$, $x x^R$), and context-sensitive (e.g.
$a^nb^nc^n$, $a^nb^mc^nd^m$, $xx$) formal languages. Our approach develops the
concept of factorized programs in Bayesian program induction in order to help
manage the complexity of representation. We show that, in learning, the model
predicts several phenomena empirically observed in human grammar acquisition
experiments.
| [
{
"version": "v1",
"created": "Thu, 16 Nov 2017 19:41:15 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Nov 2017 18:15:06 GMT"
}
] | 1,643,241,600,000 | [
[
"Yang",
"Yuan",
""
]
] |
1711.06362 | David Narv\'aez | David E. Narv\'aez | Exploring the Use of Shatter for AllSAT Through Ramsey-Type Problems | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In the context of SAT solvers, Shatter is a popular tool for symmetry
breaking on CNF formulas. Nevertheless, little has been said about its use in
the context of AllSAT problems: problems where we are interested in listing all
the models of a Boolean formula. AllSAT has gained much popularity in recent
years due to its many applications in domains like model checking, data mining,
etc. One example of a particularly transparent application of AllSAT to other
fields of computer science is computational Ramsey theory. In this paper we
study the effect of incorporating Shatter to the workflow of using Boolean
formulas to generate all possible edge colorings of a graph avoiding prescribed
monochromatic subgraphs. Generating complete sets of colorings is an important
building block in computational Ramsey theory. We identify two drawbacks in the
na\"ive use of Shatter to break the symmetries of Boolean formulas encoding
Ramsey-type problems for graphs: a "blow-up" in the number of models and the
generation of incomplete sets of colorings. The issues presented in this work
are not intended to discourage the use of Shatter as a preprocessing tool for
AllSAT problems in combinatorial computing but to help researchers properly use
this tool by avoiding these potential pitfalls. To this end, we provide
strategies and additional tools to cope with the negative effects of using
Shatter for AllSAT. While the specific application addressed in this paper is
that of Ramsey-type problems, the analysis we carry out applies to many other
areas in which highly-symmetrical Boolean formulas arise and we wish to find
all of their models.
| [
{
"version": "v1",
"created": "Fri, 17 Nov 2017 00:50:36 GMT"
}
] | 1,511,136,000,000 | [
[
"Narváez",
"David E.",
""
]
] |
1711.06498 | Victoria Hodge | Victoria Hodge, Sam Devlin, Nick Sephton, Florian Block, Anders
Drachen and Peter Cowling | Win Prediction in Esports: Mixed-Rank Match Prediction in Multi-player
Online Battle Arena Games | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Esports has emerged as a popular genre for players as well as spectators,
supporting a global entertainment industry. Esports analytics has evolved to
address the requirement for data-driven feedback, and is focused on
cyber-athlete evaluation, strategy and prediction. Towards the latter, previous
work has used match data from a variety of player ranks from hobbyist to
professional players. However, professional players have been shown to behave
differently than lower ranked players. Given the comparatively limited supply
of professional data, a key question is thus whether mixed-rank match datasets
can be used to create data-driven models which predict winners in professional
matches and provide a simple in-game statistic for viewers and broadcasters.
Here we show that, although there is a slightly reduced accuracy, mixed-rank
datasets can be used to predict the outcome of professional matches, with
suitably optimized configurations.
| [
{
"version": "v1",
"created": "Fri, 17 Nov 2017 11:18:31 GMT"
}
] | 1,511,136,000,000 | [
[
"Hodge",
"Victoria",
""
],
[
"Devlin",
"Sam",
""
],
[
"Sephton",
"Nick",
""
],
[
"Block",
"Florian",
""
],
[
"Drachen",
"Anders",
""
],
[
"Cowling",
"Peter",
""
]
] |
1711.06517 | Moshe BenBassat Professor | Moshe BenBassat | Wikipedia for Smart Machines and Double Deep Machine Learning | 10 pages, 2 Figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Very important breakthroughs in data centric deep learning algorithms led to
impressive performance in transactional point applications of Artificial
Intelligence (AI) such as Face Recognition, or EKG classification. With all due
appreciation, however, knowledge blind data only machine learning algorithms
have severe limitations for non-transactional AI applications, such as medical
diagnosis beyond the EKG results. Such applications require deeper and broader
knowledge in their problem solving capabilities, e.g. integrating anatomy and
physiology knowledge with EKG results and other patient findings. Following a
review and illustrations of such limitations for several real life AI
applications, we point at ways to overcome them. The proposed Wikipedia for
Smart Machines initiative aims at building repositories of software structures
that represent humanity science & technology knowledge in various parts of
life; knowledge that we all learn in schools, universities and during our
professional life. Target readers for these repositories are smart machines;
not human. AI software developers will have these Reusable Knowledge structures
readily available, hence, the proposed name ReKopedia. Big Data is by now a
mature technology, it is time to focus on Big Knowledge. Some will be derived
from data, some will be obtained from mankind gigantic repository of knowledge.
Wikipedia for smart machines along with the new Double Deep Learning approach
offer a paradigm for integrating datacentric deep learning algorithms with
algorithms that leverage deep knowledge, e.g. evidential reasoning and
causality reasoning. For illustration, a project is described to produce
ReKopedia knowledge modules for medical diagnosis of about 1,000 disorders.
Data is important, but knowledge -- deep, basic, and commonsense -- is equally
important.
| [
{
"version": "v1",
"created": "Fri, 17 Nov 2017 12:59:22 GMT"
},
{
"version": "v2",
"created": "Tue, 22 May 2018 05:54:17 GMT"
}
] | 1,527,033,600,000 | [
[
"BenBassat",
"Moshe",
""
]
] |
1711.06892 | Falk Lieder | Frederick Callaway and Sayan Gul and Paul M. Krueger and Thomas L.
Griffiths and Falk Lieder | Learning to select computations | null | Proceedings of the 34th Conference of Uncertainty in Artificial
Intelligence (2018) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The efficient use of limited computational resources is an essential
ingredient of intelligence. Selecting computations optimally according to
rational metareasoning would achieve this, but this is computationally
intractable. Inspired by psychology and neuroscience, we propose the first
concrete and domain-general learning algorithm for approximating the optimal
selection of computations: Bayesian metalevel policy search (BMPS). We derive
this general, sample-efficient search algorithm for a computation-selecting
metalevel policy based on the insight that the value of information lies
between the myopic value of information and the value of perfect information.
We evaluate BMPS on three increasingly difficult metareasoning problems: when
to terminate computation, how to allocate computation between competing
options, and planning. Across all three domains, BMPS achieved near-optimal
performance and compared favorably to previously proposed metareasoning
heuristics. Finally, we demonstrate the practical utility of BMPS in an
emergency management scenario, even accounting for the overhead of
metareasoning.
| [
{
"version": "v1",
"created": "Sat, 18 Nov 2017 16:42:48 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Feb 2018 22:12:17 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Aug 2018 22:13:18 GMT"
}
] | 1,533,772,800,000 | [
[
"Callaway",
"Frederick",
""
],
[
"Gul",
"Sayan",
""
],
[
"Krueger",
"Paul M.",
""
],
[
"Griffiths",
"Thomas L.",
""
],
[
"Lieder",
"Falk",
""
]
] |
1711.07071 | Evgeny Ivanko | Evgeny Ivanko | The destiny of constant structure discrete time closed semantic systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constant structure closed semantic systems are the systems each element of
which receives its definition through the correspondent unchangeable set of
other elements of the system. Discrete time means here that the definitions of
the elements change iteratively and simultaneously based on the "neighbor
portraits" from the previous iteration. I prove that the iterative redefinition
process in such class of systems will quickly degenerate into a series of
pairwise isomorphic states and discuss some directions of further research.
| [
{
"version": "v1",
"created": "Sun, 19 Nov 2017 20:15:35 GMT"
}
] | 1,511,222,400,000 | [
[
"Ivanko",
"Evgeny",
""
]
] |
1711.07111 | Marisa Vasconcelos | Marisa Vasconcelos, Carlos Cardonha, Bernardo Gon\c{c}alves | Modeling Epistemological Principles for Bias Mitigation in AI Systems:
An Illustration in Hiring Decisions | null | 2018 AAAI/ACM Conference on AI, Ethics, and Society | 10.1145/3278721.3278751 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) has been used extensively in automatic decision
making in a broad variety of scenarios, ranging from credit ratings for loans
to recommendations of movies. Traditional design guidelines for AI models focus
essentially on accuracy maximization, but recent work has shown that
economically irrational and socially unacceptable scenarios of discrimination
and unfairness are likely to arise unless these issues are explicitly
addressed. This undesirable behavior has several possible sources, such as
biased datasets used for training that may not be detected in black-box models.
After pointing out connections between such bias of AI and the problem of
induction, we focus on Popper's contributions after Hume's, which offer a
logical theory of preferences. An AI model can be preferred over others on
purely rational grounds after one or more attempts at refutation based on
accuracy and fairness. Inspired by such epistemological principles, this paper
proposes a structured approach to mitigate discrimination and unfairness caused
by bias in AI systems. In the proposed computational framework, models are
selected and enhanced after attempts at refutation. To illustrate our
discussion, we focus on hiring decision scenarios where an AI system filters in
which job applicants should go to the interview phase.
| [
{
"version": "v1",
"created": "Mon, 20 Nov 2017 00:27:57 GMT"
}
] | 1,538,006,400,000 | [
[
"Vasconcelos",
"Marisa",
""
],
[
"Cardonha",
"Carlos",
""
],
[
"Gonçalves",
"Bernardo",
""
]
] |
1711.07273 | Phillip Lord Dr | Phillip Lord, Robert Stevens | Facets, Tiers and Gems: Ontology Patterns for Hypernormalisation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | There are many methodologies and techniques for easing the task of ontology
building. Here we describe the intersection of two of these: ontology
normalisation and fully programmatic ontology development. The first of these
describes a standardized organisation for an ontology, with singly inherited
self-standing entities, and a number of small taxonomies of refining entities.
The former are described and defined in terms of the latter and used to manage
the polyhierarchy of the self-standing entities. Fully programmatic development
is a technique where an ontology is developed using a domain-specific language
within a programming language, meaning that as well as defining ontological
entities, it is possible to add arbitrary patterns or new syntax within the
same environment. We describe how new patterns can be used to enable a new
style of ontology development that we call hypernormalisation.
| [
{
"version": "v1",
"created": "Mon, 20 Nov 2017 12:05:18 GMT"
}
] | 1,511,222,400,000 | [
[
"Lord",
"Phillip",
""
],
[
"Stevens",
"Robert",
""
]
] |
1711.07321 | Guangming Lang | Guangming Lang | Related family-based attribute reduction of covering information systems
when varying attribute sets | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In practical situations, there are many dynamic covering information systems
with variations of attributes, but there are few studies on related
family-based attribute reduction of dynamic covering information systems. In
this paper, we first investigate updated mechanisms of constructing attribute
reducts for consistent and inconsistent covering information systems when
varying attribute sets by using related families. Then we employ examples to
illustrate how to compute attribute reducts of dynamic covering information
systems with variations of attribute sets. Finally, the experimental results
illustrate that the related family-based methods are effective for performing
attribute reduction of dynamic covering information systems when attribute sets
are varying with time.
| [
{
"version": "v1",
"created": "Thu, 16 Nov 2017 08:54:28 GMT"
}
] | 1,511,222,400,000 | [
[
"Lang",
"Guangming",
""
]
] |
1711.07832 | Daniel J Mankowitz | Daniel J. Mankowitz, Aviv Tamar, Shie Mannor | Situationally Aware Options | arXiv admin note: substantial text overlap with arXiv:1610.02847 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical abstractions, also known as options, are a type of temporally
extended action (Sutton et al. 1999) that enables a reinforcement learning
agent to plan at a higher level, abstracting away from the lower-level details.
In this work, we learn reusable options whose parameters can vary, encouraging
different behaviors, based on the current situation. In principle, these
behaviors can include vigor, defence or even risk-averseness. These are some
examples of what we refer to in the broader context as Situational Awareness
(SA). We incorporate SA, in the form of vigor, into hierarchical RL by defining
and learning situationally aware options in a Probabilistic Goal Semi-Markov
Decision Process (PG-SMDP). This is achieved using our Situationally Aware
oPtions (SAP) policy gradient algorithm which comes with a theoretical
convergence guarantee. We learn reusable options in different scenarios in a
RoboCup soccer domain (i.e., winning/losing). These options learn to execute
with different levels of vigor resulting in human-like behaviours such as
`time-wasting' in the winning scenario. We show the potential of the agent to
exit bad local optima using reusable options in RoboCup. Finally, using SAP,
the agent mitigates feature-based model misspecification in a Bottomless Pit of
Death domain.
| [
{
"version": "v1",
"created": "Mon, 20 Nov 2017 08:11:12 GMT"
}
] | 1,511,308,800,000 | [
[
"Mankowitz",
"Daniel J.",
""
],
[
"Tamar",
"Aviv",
""
],
[
"Mannor",
"Shie",
""
]
] |