id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
1808.07004 | J. G. Wolff | J Gerard Wolff | Mathematics as information compression via the matching and unification
of patterns | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a novel perspective on the foundations of mathematics:
how mathematics may be seen to be largely about 'information compression via
the matching and unification of patterns' (ICMUP). ICMUP is itself a novel
approach to information compression, couched in terms of non-mathematical
primitives, as is necessary in any investigation of the foundations of
mathematics. This new perspective on the foundations of mathematics has grown
out of an extensive programme of research developing the "SP Theory of
Intelligence" and its realisation in the "SP Computer Model", a system in which
a generalised version of ICMUP -- the powerful concept of SP-multiple-alignment
-- plays a central role. These ideas may be seen to be part of a "Big Picture"
comprising six areas of interest, with information compression as a unifying
theme. The paper describes the close relation between mathematics and
information compression, and describes examples showing how variants of ICMUP
may be seen in widely-used structures and operations in mathematics. Examples
are also given to show how the mathematics-related disciplines of logic and
computing may be understood as ICMUP. There are many potential benefits and
applications of these ideas.
| [
{
"version": "v1",
"created": "Sun, 5 Aug 2018 09:17:06 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Oct 2018 13:42:48 GMT"
}
] | 1,539,129,600,000 | [
[
"Wolff",
"J Gerard",
""
]
] |
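Each row follows the record layout given in the table header, with two nested fields: `versions` (a list of `{version, created}` entries, one per revision) and `authors_parsed` (a list of name triples). As a sketch, assuming the rows are available as JSON objects with these field names (as in the public arXiv metadata dumps), the nested fields can be unpacked like this; the values are taken from the row above (arXiv 1808.07004), with fields such as `comments` and `doi` omitted for brevity:

```python
import json
from email.utils import parsedate_to_datetime

# A minimal record using the schema from the table header; values come
# from the first row above (arXiv 1808.07004).
record = json.loads("""
{
  "id": "1808.07004",
  "submitter": "J. G. Wolff",
  "categories": "cs.AI",
  "versions": [
    {"version": "v1", "created": "Sun, 5 Aug 2018 09:17:06 GMT"},
    {"version": "v2", "created": "Tue, 9 Oct 2018 13:42:48 GMT"}
  ],
  "update_date": 1539129600000,
  "authors_parsed": [["Wolff", "J Gerard", ""]]
}
""")

# "created" uses RFC 2822-style timestamps, which the stdlib can parse.
created = [parsedate_to_datetime(v["created"]) for v in record["versions"]]
latest = max(created)  # timestamp of the most recent revision (v2 here)

# "authors_parsed" entries are [surname, given names, suffix] triples.
surnames = [author[0] for author in record["authors_parsed"]]
print(latest.date(), surnames)
```

`parsedate_to_datetime` is used rather than a hand-written format string because the `created` timestamps follow the RFC 2822 convention, including the `GMT` zone suffix.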
1808.07050 | Yuanlin Zhang | Michael Gelfond and Yuanlin Zhang | Vicious Circle Principle and Logic Programs with Aggregates | arXiv admin note: text overlap with arXiv:1405.3637 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper presents a knowledge representation language $\mathcal{A}log$ which
extends ASP with aggregates. The goal is to have a language based on simple
syntax and clear intuitive and mathematical semantics. We give some properties
of $\mathcal{A}log$, an algorithm for computing its answer sets, and a
comparison with other approaches.
| [
{
"version": "v1",
"created": "Tue, 21 Aug 2018 04:16:03 GMT"
}
] | 1,534,982,400,000 | [
[
"Gelfond",
"Michael",
""
],
[
"Zhang",
"Yuanlin",
""
]
] |
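The `update_date` column stores a Unix epoch timestamp in milliseconds. As a small sanity-check sketch, the value from the row above (1,534,982,400,000) converts back to a calendar date as follows:

```python
from datetime import datetime, timezone

# update_date values are epoch milliseconds; divide by 1000 for seconds.
update_ms = 1_534_982_400_000
update_dt = datetime.fromtimestamp(update_ms / 1000, tz=timezone.utc)
print(update_dt.date())  # 2018-08-23
```

This lands two days after the row's v1 `created` timestamp (Tue, 21 Aug 2018), which is consistent with `update_date` recording when the listing was last updated.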
1808.07302 | Sergey Paramonov | Sergey Paramonov, Daria Stepanova, Pauli Miettinen | Hybrid ASP-based Approach to Pattern Mining | 29 pages, 7 figures, 5 tables | Theory and Practice of Logic Programming 19 (2019) 505-535 | 10.1017/S1471068418000467 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting small sets of relevant patterns from a given dataset is a central
challenge in data mining. The relevance of a pattern is based on user-provided
criteria; typically, all patterns that satisfy certain criteria are considered
relevant. Rule-based languages like Answer Set Programming (ASP) seem
well-suited for specifying such criteria in the form of constraints. Although
progress has been made, on the one hand, on solving individual mining problems
and, on the other hand, developing generic mining systems, the existing methods
either focus on scalability or on generality. In this paper we make steps
towards combining local (frequency, size, cost) and global (various condensed
representations like maximal, closed, skyline) constraints in a generic and
efficient way. We present a hybrid approach for itemset, sequence and graph
mining which exploits dedicated highly optimized mining systems to detect
frequent patterns and then filters the results using declarative ASP. To
further demonstrate the generic nature of our hybrid framework we apply it to a
problem of approximately tiling a database. Experiments on real-world datasets
show the effectiveness of the proposed method and computational gains for
itemset, sequence and graph mining, as well as approximate tiling.
Under consideration in Theory and Practice of Logic Programming (TPLP).
| [
{
"version": "v1",
"created": "Wed, 22 Aug 2018 10:21:13 GMT"
}
] | 1,582,070,400,000 | [
[
"Paramonov",
"Sergey",
""
],
[
"Stepanova",
"Daria",
""
],
[
"Miettinen",
"Pauli",
""
]
] |
1808.07621 | Chenchen Li | Chenchen Li, Xiang Yan, Xiaotie Deng, Yuan Qi, Wei Chu, Le Song,
Junlong Qiao, Jianshan He, Junwu Xiong | Latent Dirichlet Allocation for Internet Price War | 22 pages, 8 figures, Draft | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Internet market makers are always facing an intensely competitive environment,
where personalized price reductions or discounted coupons are provided for
attracting more customers. Participants in such a price war scenario have to
invest a lot to catch up with other competitors. However, such a huge cost of
money may not always lead to an improvement of market share. This is mainly due
to a lack of information about others' strategies or customers' willingness
when participants develop their strategies.
In order to obtain this hidden information through observable data, we study
the relationship between companies and customers in the Internet price war.
Theoretically, we provide a formalization of the problem as a stochastic game
with imperfect and incomplete information. Then we develop a variant of Latent
Dirichlet Allocation (LDA) to infer latent variables under the current market
environment, which represents the preferences of customers and strategies of
competitors. To the best of our knowledge, this is the first time that LDA has
been applied to a game scenario.
We conduct simulated experiments where our LDA model exhibits a significant
improvement on finding strategies in the Internet price war by including all
available market information of the market maker's competitors. And the model
is applied to an open dataset for real business. Through comparisons on the
likelihood of prediction for users' behavior and distribution distance between
inferred opponent's strategy and the real one, our model is shown to be able to
provide a better understanding for the market environment.
Our work marks a successful learning method to infer latent information in
the environment of price war by the LDA modeling, and sets an example for
related competitive applications to follow.
| [
{
"version": "v1",
"created": "Thu, 23 Aug 2018 03:39:52 GMT"
}
] | 1,535,068,800,000 | [
[
"Li",
"Chenchen",
""
],
[
"Yan",
"Xiang",
""
],
[
"Deng",
"Xiaotie",
""
],
[
"Qi",
"Yuan",
""
],
[
"Chu",
"Wei",
""
],
[
"Song",
"Le",
""
],
[
"Qiao",
"Junlong",
""
],
[
"He",
"Jianshan",
""
],
[
"Xiong",
"Junwu",
""
]
] |
1808.07980 | Thomas Lukasiewicz | Patrick Hohenecker, Thomas Lukasiewicz | Ontology Reasoning with Deep Neural Networks | null | J. Artif. Intell. Res. 68:503-540 (2020) | 10.1613/jair.1.11661 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to conduct logical reasoning is a fundamental aspect of
intelligent human behavior, and thus an important problem along the way to
human-level artificial intelligence. Traditionally, logic-based symbolic
methods from the field of knowledge representation and reasoning have been used
to equip agents with capabilities that resemble human logical reasoning
qualities. More recently, however, there has been an increasing interest in
using machine learning rather than logic-based symbolic formalisms to tackle
these tasks. In this paper, we employ state-of-the-art methods for training
deep neural networks to devise a novel model that is able to learn how to
effectively perform logical reasoning in the form of basic ontology reasoning.
This is an important and at the same time very natural logical reasoning task,
which is why the presented approach is applicable to a plethora of important
real-world problems. We present the outcomes of several experiments, which show
that our model is able to learn to perform highly accurate ontology reasoning
on very large, diverse, and challenging benchmarks. Furthermore, it turned out
that the suggested approach suffers much less from different obstacles that
prohibit logic-based symbolic reasoning, and, at the same time, is surprisingly
plausible from a biological point of view.
| [
{
"version": "v1",
"created": "Fri, 24 Aug 2018 01:44:37 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Sep 2018 18:14:04 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Dec 2018 15:25:16 GMT"
},
{
"version": "v4",
"created": "Fri, 8 Jan 2021 12:35:36 GMT"
}
] | 1,610,323,200,000 | [
[
"Hohenecker",
"Patrick",
""
],
[
"Lukasiewicz",
"Thomas",
""
]
] |
1808.08213 | Arquimedes Canedo | Jiang Wan, Blake S. Pollard, Sujit Rokka Chhetri, Palash Goyal,
Mohammad Abdullah Al Faruque, Arquimedes Canedo | Future Automation Engineering using Structural Graph Convolutional
Neural Networks | ICCAD 2018 | null | 10.1145/3240765.3243477 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The digitalization of automation engineering generates large quantities of
engineering data that is interlinked in knowledge graphs. Classifying and
clustering subgraphs according to their functionality is useful to discover
functionally equivalent engineering artifacts that exhibit different graph
structures. This paper presents a new graph learning algorithm designed to
classify engineering data artifacts -- represented in the form of graphs --
according to their structure and neighborhood features. Our Structural Graph
Convolutional Neural Network (SGCNN) is capable of learning graphs and
subgraphs with a novel graph invariant convolution kernel and
downsampling/pooling algorithm. On a realistic engineering-related dataset, we
show that SGCNN is capable of achieving ~91% classification accuracy.
| [
{
"version": "v1",
"created": "Fri, 24 Aug 2018 17:07:05 GMT"
}
] | 1,535,328,000,000 | [
[
"Wan",
"Jiang",
""
],
[
"Pollard",
"Blake S.",
""
],
[
"Chhetri",
"Sujit Rokka",
""
],
[
"Goyal",
"Palash",
""
],
[
"Faruque",
"Mohammad Abdullah Al",
""
],
[
"Canedo",
"Arquimedes",
""
]
] |
1808.08433 | Pascal Hitzler | Pascal Hitzler, Adila Krisnadhi | A Tutorial on Modular Ontology Modeling with Ontology Design Patterns:
The Cooking Recipes Ontology | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide a detailed example for modular ontology modeling based on ontology
design patterns.
| [
{
"version": "v1",
"created": "Sat, 25 Aug 2018 14:36:00 GMT"
}
] | 1,535,414,400,000 | [
[
"Hitzler",
"Pascal",
""
],
[
"Krisnadhi",
"Adila",
""
]
] |
1808.08441 | Mark Law | Mark Law, Alessandra Russo and Krysia Broda | Inductive Learning of Answer Set Programs from Noisy Examples | To appear in Advances in Cognitive Systems | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, non-monotonic Inductive Logic Programming has received
growing interest. Specifically, several new learning frameworks and algorithms
have been introduced for learning under the answer set semantics, allowing the
learning of common-sense knowledge involving defaults and exceptions, which are
essential aspects of human reasoning. In this paper, we present a
noise-tolerant generalisation of the learning from answer sets framework. We
evaluate our ILASP3 system, both on synthetic and on real datasets, represented
in the new framework. In particular, we show that on many of the datasets
ILASP3 achieves a higher accuracy than other ILP systems that have previously
been applied to the datasets, including a recently proposed differentiable
learning framework.
| [
{
"version": "v1",
"created": "Sat, 25 Aug 2018 15:30:17 GMT"
}
] | 1,535,414,400,000 | [
[
"Law",
"Mark",
""
],
[
"Russo",
"Alessandra",
""
],
[
"Broda",
"Krysia",
""
]
] |
1808.08497 | Qibing Li | Xiaolin Zheng, Mengying Zhu, Qibing Li, Chaochao Chen, Yanchao Tan | FinBrain: When Finance Meets AI 2.0 | 11 pages | Frontiers of Information Technology & Electronic Engineering 2018 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence (AI) is the core technology of technological
revolution and industrial transformation. As one of the new intelligent needs
in the AI 2.0 era, financial intelligence has elicited much attention from
academia and industry. In our current dynamic capital market, financial
intelligence demonstrates a fast and accurate machine learning capability to
handle complex data and has gradually acquired the potential to become a
"financial brain". In this work, we survey existing studies on financial
intelligence. First, we describe the concept of financial intelligence and
elaborate on its position in the financial technology field. Second, we
introduce the development of financial intelligence and review state-of-the-art
techniques in wealth management, risk management, financial security, financial
consulting, and blockchain. Finally, we propose a research framework called
FinBrain and summarize four open issues, namely, explainable financial agents
and causality, perception and prediction under uncertainty, risk-sensitive and
robust decision making, and multi-agent game and mechanism design. We believe
that these research directions can lay the foundation for the development of AI
2.0 in the finance field.
| [
{
"version": "v1",
"created": "Sun, 26 Aug 2018 03:12:50 GMT"
}
] | 1,535,414,400,000 | [
[
"Zheng",
"Xiaolin",
""
],
[
"Zhu",
"Mengying",
""
],
[
"Li",
"Qibing",
""
],
[
"Chen",
"Chaochao",
""
],
[
"Tan",
"Yanchao",
""
]
] |
1808.08794 | Juliao Braga | Juliao Braga, Joao Nuno Silva, Patricia Takako Endo, Nizam Omar | Theoretical Foundations of the A2RD Project: Part I | 9 pages | null | 10.13140/RG.2.2.22156.97923 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This article identifies and discusses the theoretical foundations that were
considered in the design of the A2RD model. In addition to the points
considered, references are made to the studies available and considered in the
approach.
| [
{
"version": "v1",
"created": "Mon, 27 Aug 2018 11:46:13 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Aug 2018 02:48:14 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Aug 2018 15:23:53 GMT"
}
] | 1,535,673,600,000 | [
[
"Braga",
"Juliao",
""
],
[
"Silva",
"Joao Nuno",
""
],
[
"Endo",
"Patricia Takako",
""
],
[
"Omar",
"Nizam",
""
]
] |
1808.09293 | Juliao Braga | Juliao Braga, Joao Nuno Silva, Patricia Takako Endo, Nizam Omar | A Summary Description of the A2RD Project | arXiv admin note: text overlap with arXiv:1805.02241,
arXiv:1805.05250 | null | 10.13140/RG.2.2.33386.57281 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper describes the Autonomous Architecture Over Restricted Domains
project. It begins with the description of the context upon which the project
is focused, and then describes the project and implementation models. It
finishes by presenting the environment conceptual model, showing where the
components, inputs and facilities required for interaction among the
intelligent agents of the various implementations stand in their respective
restricted routing domains (Autonomous Systems), which together make the
Internet work.
| [
{
"version": "v1",
"created": "Sun, 26 Aug 2018 15:02:23 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Aug 2018 03:01:34 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Sep 2018 07:45:25 GMT"
}
] | 1,536,537,600,000 | [
[
"Braga",
"Juliao",
""
],
[
"Silva",
"Joao Nuno",
""
],
[
"Endo",
"Patricia Takako",
""
],
[
"Omar",
"Nizam",
""
]
] |
1808.09847 | Özgür Akgün | Özgür Akgün, Ian Miguel | Modelling Langford's Problem: A Viewpoint for Search | null | ModRef 2018 - The 17th workshop on Constraint Modelling and
Reformulation | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The performance of enumerating all solutions to an instance of Langford's
Problem is sensitive to the model and the search strategy. In this paper we
compare the performance of a large variety of models, all derived from two base
viewpoints. We empirically show that a channelled model with a static branching
order on one of the viewpoints offers the best performance out of all the
options we consider. Surprisingly, one of the base models proves very effective
for propagation, while the other provides an effective means of stating a
static search order.
| [
{
"version": "v1",
"created": "Wed, 29 Aug 2018 14:25:55 GMT"
}
] | 1,535,587,200,000 | [
[
"Akgün",
"Özgür",
""
],
[
"Miguel",
"Ian",
""
]
] |
1808.10012 | Niket Tandon | Niket Tandon, Bhavana Dalvi Mishra, Joel Grus, Wen-tau Yih, Antoine
Bosselut, Peter Clark | Reasoning about Actions and State Changes by Injecting Commonsense
Knowledge | Accepted at EMNLP 2018. Niket Tandon and Bhavana Dalvi Mishra
contributed equally to this work | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comprehending procedural text, e.g., a paragraph describing photosynthesis,
requires modeling actions and the state changes they produce, so that questions
about entities at different timepoints can be answered. Although several recent
systems have shown impressive progress in this task, their predictions can be
globally inconsistent or highly improbable. In this paper, we show how the
predicted effects of actions in the context of a paragraph can be improved in
two ways: (1) by incorporating global, commonsense constraints (e.g., a
non-existent entity cannot be destroyed), and (2) by biasing reading with
preferences from large-scale corpora (e.g., trees rarely move). Unlike earlier
methods, we treat the problem as a neural structured prediction task, allowing
hard and soft constraints to steer the model away from unlikely predictions. We
show that the new model significantly outperforms earlier systems on a
benchmark dataset for procedural text comprehension (+8% relative gain), and
that it also avoids some of the nonsensical predictions that earlier systems
make.
| [
{
"version": "v1",
"created": "Wed, 29 Aug 2018 18:53:53 GMT"
}
] | 1,535,673,600,000 | [
[
"Tandon",
"Niket",
""
],
[
"Mishra",
"Bhavana Dalvi",
""
],
[
"Grus",
"Joel",
""
],
[
"Yih",
"Wen-tau",
""
],
[
"Bosselut",
"Antoine",
""
],
[
"Clark",
"Peter",
""
]
] |
1808.10104 | Md Kamruzzaman Sarker | Md. Kamruzzaman Sarker, David Carral, Adila A. Krisnadhi, Pascal
Hitzler | Modeling OWL with Rules: The ROWL Protege Plugin | Accepted at ISWC 2016 | Sarker, Md. K., Carral, D., Krisnadhi, A., and Hitzler, P.,
Modeling OWL with Rules: The ROWL Protege Plugin, Kobe, Japan, 2016 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In our experience, some ontology users find it much easier to convey logical
statements using rules rather than OWL (or description logic) axioms. Based on
recent theoretical developments on transformations between rules and
description logics, we develop ROWL, a Protege plugin that allows users to
enter OWL axioms by way of rules; the plugin then automatically converts these
rules into OWL DL axioms if possible, and prompts the user in case such a
conversion is not possible without weakening the semantics of the rule.
| [
{
"version": "v1",
"created": "Thu, 30 Aug 2018 03:55:11 GMT"
}
] | 1,535,673,600,000 | [
[
"Sarker",
"Md. Kamruzzaman",
""
],
[
"Carral",
"David",
""
],
[
"Krisnadhi",
"Adila A.",
""
],
[
"Hitzler",
"Pascal",
""
]
] |
1808.10750 | Andrew Brockmann | Andrew Brockmann | Victory Probability in the Fire Emblem Arena | 14 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We demonstrate how to efficiently compute the probability of victory in Fire
Emblem arena battles. The probability can be expressed in terms of a
multivariate recurrence relation which lends itself to a straightforward
dynamic programming solution. Some implementation issues are addressed, and a
full implementation is provided in code.
| [
{
"version": "v1",
"created": "Wed, 29 Aug 2018 01:21:22 GMT"
}
] | 1,535,932,800,000 | [
[
"Brockmann",
"Andrew",
""
]
] |
1809.00858 | Anthony Hunter | Anthony Hunter | Non-monotonic Reasoning in Deductive Argumentation | 24 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Argumentation is a non-monotonic process. This reflects the fact that
argumentation involves uncertain information, and so new information can cause
a change in the conclusions drawn. However, the base logic does not need to be
non-monotonic. Indeed, most proposals for structured argumentation use a
monotonic base logic (e.g. some form of modus ponens with a rule-based
language, or classical logic). Nonetheless, there are issues in capturing
defeasible reasoning in argumentation including choice of base logic and
modelling of defeasible knowledge. And there are insights and tools to be
harnessed for research in non-monotonic logics. We consider some of these
issues in this paper.
| [
{
"version": "v1",
"created": "Tue, 4 Sep 2018 09:29:37 GMT"
}
] | 1,536,105,600,000 | [
[
"Hunter",
"Anthony",
""
]
] |
1809.01036 | Lê Nguyên Hoang | Lê Nguyên Hoang | A Roadmap for Robust End-to-End Alignment | 21 pages, 2 figures | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | This paper discussed the {\it robust alignment} problem, that is, the problem
of aligning the goals of algorithms with human preferences. It presented a
general roadmap to tackle this issue. Interestingly, this roadmap identifies 5
critical steps, as well as many relevant aspects of these 5 steps. In other
words, we have presented a large number of hopefully more tractable subproblems
that readers are highly encouraged to tackle. Hopefully, this combination
allows us to better highlight the most pressing problems, how each area of
expertise can best be used, and how combining the solutions to subproblems
might add up to solving robust alignment.
| [
{
"version": "v1",
"created": "Tue, 4 Sep 2018 15:19:44 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Oct 2018 11:01:41 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Feb 2019 09:32:09 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Feb 2020 08:45:45 GMT"
}
] | 1,582,675,200,000 | [
[
"Hoang",
"Lê Nguyên",
""
]
] |
1809.01220 | Benjamin Ayton | Benjamin J Ayton, Brian C Williams | Vulcan: A Monte Carlo Algorithm for Large Chance Constrained MDPs with
Risk Bounding Functions | 33 pages, 12 figures. In review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chance Constrained Markov Decision Processes maximize reward subject to a
bounded probability of failure, and have been frequently applied for planning
with potentially dangerous outcomes or unknown environments. Solution
algorithms have required strong heuristics or have been limited to relatively
small problems with up to millions of states, because the optimal action to
take from a given state depends on the probability of failure in the rest of
the policy, leading to a coupled problem that is difficult to solve. In this
paper we examine a generalization of a CCMDP that trades off probability of
failure against reward through a functional relationship. We derive a
constraint that can be applied to each state history in a policy individually,
and which guarantees that the chance constraint will be satisfied. The approach
decouples states in the CCMDP, so that large problems can be solved
efficiently. We then introduce Vulcan, which uses our constraint in order to
apply Monte Carlo Tree Search to CCMDPs. Vulcan can be applied to problems
where it is unfeasible to generate the entire state space, and policies must be
returned in an anytime manner. We show that Vulcan and its variants run tens to
hundreds of times faster than linear programming methods, and over ten times
faster than heuristic based methods, all without the need for a heuristic, and
returning solutions with a mean suboptimality on the order of a few percent.
Finally, we use Vulcan to solve for a chance constrained policy in a CCMDP with
over $10^{13}$ states in 3 minutes.
| [
{
"version": "v1",
"created": "Tue, 4 Sep 2018 19:42:22 GMT"
}
] | 1,536,192,000,000 | [
[
"Ayton",
"Benjamin J",
""
],
[
"Williams",
"Brian C",
""
]
] |
1809.02031 | Joan Bruna | David Folqu\'e, Sainbayar Sukhbaatar, Arthur Szlam, Joan Bruna | Planning with Arithmetic and Geometric Attributes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A desirable property of an intelligent agent is its ability to understand its
environment to quickly generalize to novel tasks and compose simpler tasks into
more complex ones. If the environment has geometric or arithmetic structure,
the agent should exploit these for faster generalization. Building on recent
work that augments the environment with user-specified attributes, we show that
further equipping these attributes with the appropriate geometric and
arithmetic structure brings substantial gains in sample complexity.
| [
{
"version": "v1",
"created": "Thu, 6 Sep 2018 15:03:13 GMT"
}
] | 1,536,278,400,000 | [
[
"Folqué",
"David",
""
],
[
"Sukhbaatar",
"Sainbayar",
""
],
[
"Szlam",
"Arthur",
""
],
[
"Bruna",
"Joan",
""
]
] |
1809.02193 | Andres Campero | Andres Campero and Aldo Pareja and Tim Klinger and Josh Tenenbaum and
Sebastian Riedel | Logical Rule Induction and Theory Learning Using Neural Theorem Proving | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A hallmark of human cognition is the ability to continually acquire and
distill observations of the world into meaningful, predictive theories. In this
paper we present a new mechanism for logical theory acquisition which takes a
set of observed facts and learns to extract from them a set of logical rules
and a small set of core facts which together entail the observations. Our
approach is neuro-symbolic in the sense that the rule predicates and core
facts are given dense vector representations. The rules are applied to the core
facts using a soft unification procedure to infer additional facts. After k
steps of forward inference, the consequences are compared to the initial
observations and the rules and core facts are then encouraged towards
representations that more faithfully generate the observations through
inference. Our approach is based on a novel neural forward-chaining
differentiable rule induction network. The rules are interpretable and learned
compositionally from their predicates, which may be invented. We demonstrate
the efficacy of our approach on a variety of ILP rule induction and domain
theory learning datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Sep 2018 19:49:20 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Sep 2018 18:46:21 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Sep 2018 21:34:59 GMT"
}
] | 1,536,883,200,000 | [
[
"Campero",
"Andres",
""
],
[
"Pareja",
"Aldo",
""
],
[
"Klinger",
"Tim",
""
],
[
"Tenenbaum",
"Josh",
""
],
[
"Riedel",
"Sebastian",
""
]
] |
1809.02232 | Matthew Guzdial | Matthew Guzdial and Mark Riedl | Automated Game Design via Conceptual Expansion | 7 pages, 3 figures, Artificial Intelligence and Interactive Digital
Entertainment | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated game design has remained a key challenge within the field of Game
AI. In this paper, we introduce a method for recombining existing games to
create new games through a process called conceptual expansion. Prior automated
game design approaches have relied on hand-authored or crowd-sourced knowledge,
which limits the scope and applications of such systems. Our approach instead
relies on machine learning to learn approximate representations of games. Our
approach recombines knowledge from these learned representations to create new
games via conceptual expansion. We evaluate this approach by demonstrating the
ability for the system to recreate existing games. To the best of our
knowledge, this represents the first machine learning-based automated game
design system.
| [
{
"version": "v1",
"created": "Thu, 6 Sep 2018 21:53:39 GMT"
}
] | 1,536,537,600,000 | [
[
"Guzdial",
"Matthew",
""
],
[
"Riedl",
"Mark",
""
]
] |
1809.02260 | Brian Shay | Brian Shay, Patrick Brazil | The Force of Proof by Which Any Argument Prevails | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Jakob Bernoulli, working in the late 17th century, identified a gap in
contemporary probability theory. He cautioned that it was inadequate to specify
force of proof (probability of provability) for some kinds of uncertain
arguments. After 300 years, this gap remains in present-day probability theory.
We present axioms analogous to Kolmogorov's axioms for probability, specifying
uncertainty that lies in an argument's inference/implication itself rather than
in its premise and conclusion. The axioms focus on arguments spanning two
Boolean algebras, but generalize the obligatory: "force of proof of A implies B
is the probability of B or not A" in the case that the Boolean algebras are
identical. We propose a categorical framework that relies on generalized
probabilities (objects) to express uncertainty in premises, to mix with
arguments (morphisms) to express uncertainty embedded directly in
inference/implication. There is a direct application to Shafer's evidence
theory (Dempster-Shafer theory), greatly expanding its scope for applications.
Therefore, we can offer this framework not only as an optimal solution to a
difficult historical puzzle, but also to advance the frontiers of contemporary
artificial intelligence.
Keywords: force of proof, probability of provability, Ars Conjectandi,
non-additive probabilities, evidence theory.
| [
{
"version": "v1",
"created": "Fri, 7 Sep 2018 00:24:29 GMT"
}
] | 1,536,537,600,000 | [
[
"Shay",
"Brian",
""
],
[
"Brazil",
"Patrick",
""
]
] |
1809.02317 | Soumi Chattopadhyay | Soumi Chattopadhyay, Ansuman Banerjee | QoS aware Automatic Web Service Composition with Multiple objectives | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With an increasing number of web services, providing an end-to-end Quality of
Service (QoS) guarantee in responding to user queries is becoming an important
concern. Multiple QoS parameters (e.g., response time, latency, throughput,
reliability, availability, success rate) are associated with a service,
therefore, service composition with a large number of candidate services is a
challenging multi-objective optimization problem. In this paper, we study the
multi-constrained multi-objective QoS aware web service composition problem and
propose three different approaches to solve it: one optimal, based on Pareto
front construction, and two others based on heuristically traversing the
solution space. We compare the performance of the heuristics against the
optimal, and show the effectiveness of our proposals over other classical
approaches for the same problem setting, with experiments on WSC-2009 and
ICEBE-2005 datasets.
| [
{
"version": "v1",
"created": "Fri, 7 Sep 2018 05:47:39 GMT"
}
] | 1,536,537,600,000 | [
[
"Chattopadhyay",
"Soumi",
""
],
[
"Banerjee",
"Ansuman",
""
]
] |
1809.02378 | Seydou Ba | Seydou Ba, Takuya Hiraoka, Takashi Onishi, Toru Nakata, Yoshimasa
Tsuruoka | Monte Carlo Tree Search with Scalable Simulation Periods for
Continuously Running Tasks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monte Carlo Tree Search (MCTS) is particularly adapted to domains where the
potential actions can be represented as a tree of sequential decisions. For an
effective action selection, MCTS performs many simulations to build a reliable
tree representation of the decision space. As such, a bottleneck to MCTS
appears when enough simulations cannot be performed between action selections.
This is particularly highlighted in continuously running tasks, for which the
time available to perform simulations between actions tends to be limited due
to the environment's state constantly changing. In this paper, we present an
approach that takes advantage of the anytime characteristic of MCTS to increase
the simulation time when allowed. Our approach is to effectively balance the
prospect of selecting an action with the time that can be spared to perform
MCTS simulations before the next action selection. For that, we considered the
simulation time as a decision variable to be selected alongside an action. We
extended the Hierarchical Optimistic Optimization applied to Tree (HOOT) method
to adapt our approach to environments with a continuous decision space. We
evaluated our approach for environments with a continuous decision space
through OpenAI gym's Pendulum and Continuous Mountain Car environments and for
environments with discrete action space through the arcade learning environment
(ALE) platform. The evaluation results show that, with variable simulation
times, the proposed approach outperforms the conventional MCTS in the evaluated
continuous decision space tasks and improves the performance of MCTS in most of
the ALE tasks.
| [
{
"version": "v1",
"created": "Fri, 7 Sep 2018 09:56:21 GMT"
}
] | 1,536,537,600,000 | [
[
"Ba",
"Seydou",
""
],
[
"Hiraoka",
"Takuya",
""
],
[
"Onishi",
"Takashi",
""
],
[
"Nakata",
"Toru",
""
],
[
"Tsuruoka",
"Yoshimasa",
""
]
] |
1809.02904 | Matthew Stephenson | Matthew Stephenson, Damien Anderson, Ahmed Khalifa, John Levine,
Jochen Renz, Julian Togelius, Christoph Salge | A Continuous Information Gain Measure to Find the Most Discriminatory
Problems for AI Benchmarking | 8 pages, 1 figure, 2 tables | IEEE Congress on Evolutionary Computation (IEEE CEC), Special
Session on Games, Glasgow, UK, 2020 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces an information-theoretic method for selecting a subset
of problems which gives the most information about a group of problem-solving
algorithms. This method was tested on the games in the General Video Game AI
(GVGAI) framework, allowing us to identify a smaller set of games that still
gives a large amount of information about the abilities of different
game-playing agents. This approach can be used to make agent testing more
efficient. We can achieve almost as good discriminatory accuracy when testing
on only a handful of games as when testing on more than a hundred games,
something which is often computationally infeasible. Furthermore, this method
can be extended to study the dimensions of the effective variance in game
design between these games, allowing us to identify which games differentiate
between agents in the most complementary ways.
| [
{
"version": "v1",
"created": "Sun, 9 Sep 2018 00:56:20 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Sep 2018 04:16:15 GMT"
},
{
"version": "v3",
"created": "Mon, 18 May 2020 10:21:26 GMT"
}
] | 1,589,846,400,000 | [
[
"Stephenson",
"Matthew",
""
],
[
"Anderson",
"Damien",
""
],
[
"Khalifa",
"Ahmed",
""
],
[
"Levine",
"John",
""
],
[
"Renz",
"Jochen",
""
],
[
"Togelius",
"Julian",
""
],
[
"Salge",
"Christoph",
""
]
] |
1809.02909 | Chuancun Yin | Xiuyan Sha, Zeshui Xu, Chuancun Yin | Elliptical Distributions-Based Weights-Determining Method for OWA
Operators | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ordered weighted averaging (OWA) operators play a crucial role in
aggregating multiple criteria evaluations into an overall assessment supporting
the decision makers' choice. One key step is to determine the associated
weights. In this paper, we first briefly review some main methods for
determining the weights by using distribution functions. Then we propose a new
approach for determining OWA weights by using the RIM quantifier. Motivated by
the idea of normal distribution-based method to determine the OWA weights, we
develop a method based on elliptical distributions for determining the OWA
weights, and some of its desirable properties have been investigated.
| [
{
"version": "v1",
"created": "Sun, 9 Sep 2018 01:40:45 GMT"
}
] | 1,536,624,000,000 | [
[
"Sha",
"Xiuyan",
""
],
[
"Xu",
"Zeshui",
""
],
[
"Yin",
"Chuancun",
""
]
] |
1809.03260 | Diptikalyan Saha | Aniya Agarwal, Pranay Lohia, Seema Nagar, Kuntal Dey, Diptikalyan Saha | Automated Test Generation to Detect Individual Discrimination in AI
Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dependability on AI models is of utmost importance to ensure full acceptance
of AI systems. One key aspect of a dependable AI system is to
ensure that all its decisions are fair and not biased towards any individual.
In this paper, we address the problem of detecting whether a model has an
individual discrimination. Such a discrimination exists when two individuals
who differ only in the values of their protected attributes (such as,
gender/race) while the values of their non-protected ones are exactly the same,
get different decisions. Measuring individual discrimination requires an
exhaustive testing, which is infeasible for a non-trivial system. In this
paper, we present an automated technique to generate test inputs, which is
geared towards finding individual discrimination. Our technique combines the
well-known technique called symbolic execution along with the local
explainability for generation of effective test cases. Our experimental results
clearly demonstrate that our technique produces 3.72 times more successful test
cases than the existing state-of-the-art across all our chosen benchmarks.
| [
{
"version": "v1",
"created": "Mon, 10 Sep 2018 12:11:21 GMT"
}
] | 1,536,624,000,000 | [
[
"Agarwal",
"Aniya",
""
],
[
"Lohia",
"Pranay",
""
],
[
"Nagar",
"Seema",
""
],
[
"Dey",
"Kuntal",
""
],
[
"Saha",
"Diptikalyan",
""
]
] |
1809.03359 | Quentin Cappart | Quentin Cappart, Emmanuel Goutierre, David Bergman, Louis-Martin
Rousseau | Improving Optimization Bounds using Machine Learning: Decision Diagrams
meet Deep Reinforcement Learning | Accepted and presented at AAAI'19 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding tight bounds on the optimal solution is a critical element of
practical solution methods for discrete optimization problems. In the last
decade, decision diagrams (DDs) have brought a new perspective on obtaining
upper and lower bounds that can be significantly better than classical bounding
mechanisms, such as linear relaxations. It is well known that the quality of
the bounds achieved through this flexible bounding method is highly reliant on
the ordering of variables chosen for building the diagram, and finding an
ordering that optimizes standard metrics is an NP-hard problem. In this paper,
we propose an innovative and generic approach based on deep reinforcement
learning for obtaining an ordering for tightening the bounds obtained with
relaxed and restricted DDs. We apply the approach to both the Maximum
Independent Set Problem and the Maximum Cut Problem. Experimental results on
synthetic instances show that the deep reinforcement learning approach, by
achieving tighter objective function bounds, generally outperforms ordering
methods commonly used in the literature when the distribution of instances is
known. To the best knowledge of the authors, this is the first paper to apply
machine learning to directly improve relaxation bounds obtained by
general-purpose bounding mechanisms for combinatorial optimization problems.
| [
{
"version": "v1",
"created": "Mon, 10 Sep 2018 14:41:17 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Feb 2019 18:27:35 GMT"
}
] | 1,551,312,000,000 | [
[
"Cappart",
"Quentin",
""
],
[
"Goutierre",
"Emmanuel",
""
],
[
"Bergman",
"David",
""
],
[
"Rousseau",
"Louis-Martin",
""
]
] |
1809.03406 | Erik Peterson | Erik J Peterson, Necati Alp M\"uyesser, Timothy Verstynen, Kyle
Dunovan | Combining imagination and heuristics to learn strategies that generalize | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deep reinforcement learning can match or exceed human performance in stable
contexts, but with minor changes to the environment artificial networks, unlike
humans, often cannot adapt. Humans rely on a combination of heuristics to
simplify computational load and imagination to extend experiential learning to
new and more challenging environments. Motivated by theories of the
hierarchical organization of the human prefrontal networks, we have developed a
model of hierarchical reinforcement learning that combines both heuristics and
imagination into a stumbler-strategist network. We test performance of this
network using Wythoff's game, a gridworld environment with a known optimal
strategy. We show that a heuristic labeling of each position as hot or cold,
combined with imagined play, both accelerates learning and promotes transfer to
novel games, while also improving model interpretability.
| [
{
"version": "v1",
"created": "Mon, 10 Sep 2018 15:43:57 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Jun 2020 20:40:35 GMT"
}
] | 1,592,179,200,000 | [
[
"Peterson",
"Erik J",
""
],
[
"Müyesser",
"Necati Alp",
""
],
[
"Verstynen",
"Timothy",
""
],
[
"Dunovan",
"Kyle",
""
]
] |
1809.03916 | Maarten Bieshaar | Maarten Bieshaar, G\"unther Reitberger, Stefan Zernetsch, Bernhard
Sick, Erich Fuchs, Konrad Doll | Detecting Intentions of Vulnerable Road Users Based on Collective
Intelligence | 20 pages, published at Automatisiertes und vernetztes Fahren (AAET),
Braunschweig, Germany, 2017 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vulnerable road users (VRUs, i.e. cyclists and pedestrians) will play an
important role in future traffic. To avoid accidents and achieve a highly
efficient traffic flow, it is important to detect VRUs and to predict their
intentions. In this article a holistic approach for detecting intentions of
VRUs by cooperative methods is presented. The intention detection consists of
basic movement primitive prediction, e.g. standing, moving, turning, and a
forecast of the future trajectory. Vehicles equipped with sensors, data
processing systems and communication abilities, referred to as intelligent
vehicles, acquire and maintain a local model of their surrounding traffic
environment, e.g. crossing cyclists. Heterogeneous, open sets of agents
(cooperating and interacting vehicles, infrastructure, e.g. cameras and laser
scanners, and VRUs equipped with smart devices and body-worn sensors) exchange
information forming a multi-modal sensor system with the goal to reliably and
robustly detect VRUs and their intentions under consideration of real time
requirements and uncertainties. The resulting model makes it possible to extend
the perceptual horizon of the individual agent beyond its own sensory
capabilities, enabling a longer forecast horizon. Concealments,
implausibilities and inconsistencies are resolved by the collective
intelligence of cooperating agents. Novel techniques of signal processing and
modelling in combination with analytical and learning based approaches of
pattern and activity recognition are used for detection, as well as intention
prediction of VRUs. Cooperation, by means of probabilistic sensor and knowledge
fusion, takes place on the level of perception and intention recognition. Based
on the requirements of the cooperative approach for the communication a new
strategy for an ad hoc network is proposed.
| [
{
"version": "v1",
"created": "Tue, 11 Sep 2018 14:18:49 GMT"
}
] | 1,536,710,400,000 | [
[
"Bieshaar",
"Maarten",
""
],
[
"Reitberger",
"Günther",
""
],
[
"Zernetsch",
"Stefan",
""
],
[
"Sick",
"Bernhard",
""
],
[
"Fuchs",
"Erich",
""
],
[
"Doll",
"Konrad",
""
]
] |
1809.03928 | Maurizio Parton | Francesco Morandin and Gianluca Amato and Rosa Gini and Carlo Metta
and Maurizio Parton and Gian-Carlo Pascutto | SAI, a Sensible Artificial Intelligence that plays Go | Updated for IJCNN 2019 conference | 2019 International Joint Conference on Neural Networks (IJCNN) | 10.1109/IJCNN.2019.8852266 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a multiple-komi modification of the AlphaGo Zero/Leela Zero
paradigm. The winrate as a function of the komi is modeled with a
two-parameters sigmoid function, so that the neural network must predict just
one more variable to assess the winrate for all komi values. A second novel
feature is that training is based on self-play games that occasionally branch
-- with changed komi -- when the position is uneven. With this setting,
reinforcement learning is shown to work on 7x7 Go, obtaining very strong
playing agents. As a useful byproduct, the sigmoid parameters given by the
network allow one to estimate the score difference on the board, and to
evaluate to what extent the game is decided.
| [
{
"version": "v1",
"created": "Tue, 11 Sep 2018 14:30:01 GMT"
},
{
"version": "v2",
"created": "Wed, 1 May 2019 08:16:29 GMT"
}
] | 1,574,899,200,000 | [
[
"Morandin",
"Francesco",
""
],
[
"Amato",
"Gianluca",
""
],
[
"Gini",
"Rosa",
""
],
[
"Metta",
"Carlo",
""
],
[
"Parton",
"Maurizio",
""
],
[
"Pascutto",
"Gian-Carlo",
""
]
] |
1809.04106 | Christoph Trattner | Christoph Trattner (University of Bergen), Vanessa Murdock (Amazon),
Steven Chang (Quora) | ACM RecSys 2018 Late-Breaking Results Proceedings | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ACM RecSys'18 Late-Breaking Results track (previously known as the Poster
track) is part of the main program of the 2018 ACM Conference on Recommender
Systems in Vancouver, Canada. The track attracted 48 submissions this year out
of which 18 papers could be accepted, resulting in an acceptance rate of 37.5%.
| [
{
"version": "v1",
"created": "Tue, 11 Sep 2018 18:52:56 GMT"
}
] | 1,536,796,800,000 | [
[
"Trattner",
"Christoph",
"",
"University of Bergen"
],
[
"Murdock",
"Vanessa",
"",
"Amazon"
],
[
"Chang",
"Steven",
"",
"Quora"
]
] |
1809.04113 | Tianxing He | Tianxing He and James Glass | Detecting egregious responses in neural sequence-to-sequence models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we attempt to answer a critical question: whether there exists
some input sequence that will cause a well-trained discrete-space neural
network sequence-to-sequence (seq2seq) model to generate egregious outputs
(aggressive, malicious, attacking, etc.). And if such inputs exist, how to find
them efficiently. We adopt an empirical methodology, in which we first create
lists of egregious output sequences, and then design a discrete optimization
algorithm to find input sequences that will cause the model to generate them.
Moreover, the optimization algorithm is enhanced for large vocabulary search
and constrained to search for input sequences that are likely to be input by
real-world users. In our experiments, we apply this approach to dialogue
response generation models trained on three real-world dialogue data-sets:
Ubuntu, Switchboard and OpenSubtitles, testing whether the model can generate
malicious responses. We demonstrate that given the trigger inputs our algorithm
finds, a significant number of malicious sentences are assigned large
probability by the model, which reveals an undesirable consequence of standard
seq2seq training.
| [
{
"version": "v1",
"created": "Tue, 11 Sep 2018 19:11:51 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Oct 2018 17:45:04 GMT"
}
] | 1,538,611,200,000 | [
[
"He",
"Tianxing",
""
],
[
"Glass",
"James",
""
]
] |
1809.04232 | Akifumi Wachi | Akifumi Wachi, Hiroshi Kajino, Asim Munawar | Safe Exploration in Markov Decision Processes with Time-Variant Safety
using Spatio-Temporal Gaussian Process | 12 pages, 7 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many real-world applications (e.g., planetary exploration, robot
navigation), an autonomous agent must be able to explore a space with
guaranteed safety. Most safe exploration algorithms in the field of
reinforcement learning and robotics have been based on the assumption that the
safety features are a priori known and time-invariant. This paper presents a
learning algorithm called ST-SafeMDP for exploring Markov decision processes
(MDPs) that is based on the assumption that the safety features are a priori
unknown and time-variant. In this setting, the agent explores MDPs while
constraining the probability of entering unsafe states defined by a safety
function being below a threshold. The unknown and time-variant safety values
are modeled using a spatio-temporal Gaussian process. However, there remains an
issue that an agent may have no viable action in a shrinking true safe space.
To address this issue, we formulate a problem maximizing the cumulative number
of safe states in the worst case scenario with respect to future observations.
The effectiveness of this approach was demonstrated in two simulation settings,
including one using real lunar terrain data.
| [
{
"version": "v1",
"created": "Wed, 12 Sep 2018 02:43:19 GMT"
}
] | 1,536,796,800,000 | [
[
"Wachi",
"Akifumi",
""
],
[
"Kajino",
"Hiroshi",
""
],
[
"Munawar",
"Asim",
""
]
] |
1809.04234 | Liheng Chen | Liheng Chen, Yanru Qu, Zhenghui Wang, Lin Qiu, Weinan Zhang, Ken Chen,
Shaodian Zhang, Yong Yu | Sampled in Pairs and Driven by Text: A New Graph Embedding Framework | Accepted by WWW 2019 (The World Wide Web Conference. ACM, 2019) | Proceedings of the 2019 World Wide Web Conference | 10.1145/3308558.3313520 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In graphs with rich texts, incorporating textual information with structural
information would benefit constructing expressive graph embeddings. Among
various graph embedding models, random walk (RW)-based is one of the most
popular and successful groups. However, it is challenged by two issues when
applied on graphs with rich texts: (i) sampling efficiency: deriving from the
training objective of RW-based models (e.g., DeepWalk and node2vec), we show
that RW-based models are likely to generate large amounts of redundant training
samples due to three main drawbacks. (ii) text utilization: these models have
difficulty in dealing with zero-shot scenarios where graph embedding models
have to infer graph structures directly from texts. To solve these problems, we
propose a novel framework, namely Text-driven Graph Embedding with Pairs
Sampling (TGE-PS). TGE-PS uses Pairs Sampling (PS) to improve the sampling
strategy of RW, reducing training samples by ~99% while preserving
competitive performance. TGE-PS uses Text-driven Graph Embedding (TGE), an
inductive graph embedding approach, to generate node embeddings from texts.
Since each node contains rich texts, TGE is able to generate high-quality
embeddings and provide reasonable predictions on existence of links to unseen
nodes. We evaluate TGE-PS on several real-world datasets, and experiment
results demonstrate that TGE-PS produces state-of-the-art results on both
traditional and zero-shot link prediction tasks.
| [
{
"version": "v1",
"created": "Wed, 12 Sep 2018 02:53:00 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Oct 2019 05:29:41 GMT"
}
] | 1,571,097,600,000 | [
[
"Chen",
"Liheng",
""
],
[
"Qu",
"Yanru",
""
],
[
"Wang",
"Zhenghui",
""
],
[
"Qiu",
"Lin",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Chen",
"Ken",
""
],
[
"Zhang",
"Shaodian",
""
],
[
"Yu",
"Yong",
""
]
] |
1809.04258 | Zeheng Wang | Yuanzhe Yao, Zeheng Wang, Liang Li, Kun Lu, Runyu Liu, Zhiyuan Liu,
Jing Yan | An Ontology-Based Artificial Intelligence Model for Medicine Side-Effect
Prediction: Taking Traditional Chinese Medicine as An Example | null | null | 10.1155/2019/8617503 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, an ontology-based model for AI-assisted medicine side-effect
(SE) prediction is developed, where three main components, including the drug
model, the treatment model, and the AI-assisted prediction model, of proposed
model are presented. To validate the proposed model, an ANN structure is
established and trained by two hundred and forty-two TCM prescriptions. These
data are gathered and classified from the most famous ancient TCM book and more
than one thousand SE reports, in which two ontology-based attributions, hot and
cold, are introduced to evaluate whether the prescription will cause SE or not.
The results preliminarily reveal that there is a relationship between the
ontology-based attributions and the corresponding predicted indicator that can
be learnt by AI for predicting the SE, which suggests the proposed model has
potential in AI-assisted SE prediction. However, it should be noted that the
proposed model depends heavily on sufficient clinical data, and hence much
deeper exploration is important for enhancing the accuracy of the prediction.
| [
{
"version": "v1",
"created": "Wed, 12 Sep 2018 05:04:58 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Jul 2019 07:02:37 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Oct 2019 05:02:36 GMT"
}
] | 1,569,974,400,000 | [
[
"Yao",
"Yuanzhe",
""
],
[
"Wang",
"Zeheng",
""
],
[
"Li",
"Liang",
""
],
[
"Lu",
"Kun",
""
],
[
"Liu",
"Runyu",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Yan",
"Jing",
""
]
] |
1809.04343 | Giovanni Iacca Dr. | Giovanni Iacca and Fabio Caraffini | Compact Optimization Algorithms with Re-sampled Inheritance | null | null | 10.1007/978-3-030-16692-2_35 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compact optimization algorithms are a class of Estimation of Distribution
Algorithms (EDAs) characterized by extremely limited memory requirements (hence
they are called "compact"). As all EDAs, compact algorithms build and update a
probabilistic model of the distribution of solutions within the search space,
as opposed to population-based algorithms that instead make use of an explicit
population of solutions. In addition to that, to keep their memory consumption
low, compact algorithms purposely employ simple probabilistic models that can
be described with a small number of parameters. Despite their simplicity,
compact algorithms have shown good performances on a broad range of benchmark
functions and real-world problems. However, compact algorithms also come with
some drawbacks, i.e. they tend to premature convergence and show poorer
performance on non-separable problems. To overcome these limitations, here we
investigate a possible algorithmic scheme obtained by combining compact
algorithms with a non-disruptive restart mechanism taken from the literature,
named Re-Sampled Inheritance (RI). The resulting compact algorithms with RI are
tested on the CEC 2014 benchmark functions. The numerical results show on the
one hand that the use of RI consistently enhances the performances of compact
algorithms, still keeping a limited usage of memory. On the other hand, our
experiments show that among the tested algorithms, the best performance is
obtained by compact Differential Evolution with RI.
| [
{
"version": "v1",
"created": "Wed, 12 Sep 2018 10:11:20 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Jan 2019 10:50:51 GMT"
},
{
"version": "v3",
"created": "Sun, 7 Apr 2019 15:47:05 GMT"
}
] | 1,554,940,800,000 | [
[
"Iacca",
"Giovanni",
""
],
[
"Caraffini",
"Fabio",
""
]
] |
1809.04362 | Bruno Escoffier | Bruno Escoffier, Hugo Gilbert, Ad\`ele Pass-Lanneau | Iterative Delegations in Liquid Democracy with Restricted Preferences | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study liquid democracy, a collective decision making
paradigm which lies between direct and representative democracy. One main
feature of liquid democracy is that voters can delegate their votes in a
transitive manner so that: A delegates to B and B delegates to C leads to A
delegates to C. Unfortunately, this process may not converge as there may not
even exist a stable state (also called equilibrium). In this paper, we
investigate the stability of the delegation process in liquid democracy when
voters have restricted types of preference on the agent representing them
(e.g., single-peaked preferences). We show that various natural structures of
preferences guarantee the existence of an equilibrium and we obtain both
tractability and hardness results for the problem of computing several
equilibria with some desirable properties.
| [
{
"version": "v1",
"created": "Wed, 12 Sep 2018 11:30:54 GMT"
},
{
"version": "v2",
"created": "Thu, 16 May 2019 15:26:12 GMT"
}
] | 1,558,051,200,000 | [
[
"Escoffier",
"Bruno",
""
],
[
"Gilbert",
"Hugo",
""
],
[
"Pass-Lanneau",
"Adèle",
""
]
] |
1809.04861 | Christian Stra{\ss}er | AnneMarie Borg, Christian Stra{\ss}er | Relevance in Structured Argumentation | Extended version of the paper with the same name published in the
main track of IJCAI 2018. It countains additionally a treatment of credulous
and weak skeptical semantics | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study properties related to relevance in non-monotonic consequence
relations obtained by systems of structured argumentation. Relevance desiderata
concern the robustness of a consequence relation under the addition of
irrelevant information. For an account of what (ir)relevance amounts to we use
syntactic and semantic considerations. Syntactic criteria have been proposed in
the domain of relevance logic and were recently used in argumentation theory
under the names of non-interference and crash-resistance. The basic idea is
that the conclusions of a given argumentative theory should be robust under
adding information that shares no propositional variables with the original
database. Some semantic relevance criteria are known from non-monotonic logic.
For instance, cautious monotony states that if we obtain certain conclusions
from an argumentation theory, we may expect to still obtain the same
conclusions if we add some of them to the given database. In this paper we
investigate properties of structured argumentation systems that warrant
relevance desiderata.
| [
{
"version": "v1",
"created": "Thu, 13 Sep 2018 09:52:03 GMT"
},
{
"version": "v2",
"created": "Thu, 14 May 2020 06:28:12 GMT"
}
] | 1,589,500,800,000 | [
[
"Borg",
"AnneMarie",
""
],
[
"Straßer",
"Christian",
""
]
] |
1809.05001 | Son-Il Kwak | Son-il Kwak, Gum-ju Kim, Michio Sugeno, Gwang-chol Li, Myong-suk Son,
Hyok-chol Kim, Un-ha Kim | Reductive property of new fuzzy reasoning method based on distance
measure | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | First, in this paper we propose a new criterion function for evaluating
the reductive property of the fuzzy reasoning result for fuzzy modus ponens
and fuzzy modus tollens. Second, unlike fuzzy reasoning methods based on the
similarity measure, we propose a new fuzzy reasoning method based on a distance
measure. Third, the reductive property of five fuzzy reasoning methods is
checked with respect to fuzzy modus ponens and fuzzy modus tollens. Through
experiments, we show that the proposed method is better than previous methods,
in accordance with human thinking.
| [
{
"version": "v1",
"created": "Fri, 7 Sep 2018 10:37:22 GMT"
}
] | 1,536,883,200,000 | [
[
"Kwak",
"Son-il",
""
],
[
"Kim",
"Gum-ju",
""
],
[
"Sugeno",
"Michio",
""
],
[
"Li",
"Gwang-chol",
""
],
[
"Son",
"Myong-suk",
""
],
[
"Kim",
"Hyok-chol",
""
],
[
"Kim",
"Un-ha",
""
]
] |
1809.05676 | Prabhat Nagarajan | Prabhat Nagarajan, Garrett Warnell, Peter Stone | Deterministic Implementations for Reproducibility in Deep Reinforcement
Learning | 17 Pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While deep reinforcement learning (DRL) has led to numerous successes in
recent years, reproducing these successes can be extremely challenging. One
reproducibility challenge particularly relevant to DRL is nondeterminism in the
training process, which can substantially affect the results. Motivated by this
challenge, we study the positive impacts of deterministic implementations in
eliminating nondeterminism in training. To do so, we consider the particular
case of the deep Q-learning algorithm, for which we produce a deterministic
implementation by identifying and controlling all sources of nondeterminism in
the training process. One by one, we then allow individual sources of
nondeterminism to affect our otherwise deterministic implementation, and
measure the impact of each source on the variance in performance. We find that
individual sources of nondeterminism can substantially impact the performance
of the agent, illustrating the benefits of deterministic implementations. In
addition, we also discuss the important role of deterministic implementations
in achieving exact replicability of results.
| [
{
"version": "v1",
"created": "Sat, 15 Sep 2018 08:53:28 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Sep 2018 11:13:05 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Dec 2018 04:39:18 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Jan 2019 15:55:22 GMT"
},
{
"version": "v5",
"created": "Sun, 9 Jun 2019 12:56:34 GMT"
}
] | 1,560,211,200,000 | [
[
"Nagarajan",
"Prabhat",
""
],
[
"Warnell",
"Garrett",
""
],
[
"Stone",
"Peter",
""
]
] |
1809.05762 | John Kingston | John KC Kingston | Using Artificial Intelligence to Support Compliance with the General
Data Protection Regulation | null | Artificial Intelligence and Law (2017) 25, 429 - 443 | 10.1007/s10506-017-9206-9 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The General Data Protection Regulation (GDPR) is a European Union regulation
that will replace the existing Data Protection Directive on 25 May 2018. The
most significant change is a huge increase in the maximum fine that can be
levied for breaches of the regulation. Yet fewer than half of UK companies are
fully aware of GDPR - and a number of those who were preparing for it stopped
doing so when the Brexit vote was announced. A last-minute rush to become
compliant is therefore expected, and numerous companies are starting to offer
advice, checklists and consultancy on how to comply with GDPR. In such an
environment, artificial intelligence technologies ought to be able to assist by
providing best advice; asking all and only the relevant questions; monitoring
activities; and carrying out assessments. The paper considers four areas of
GDPR compliance where rule based technologies and/or machine learning
techniques may be relevant: * Following compliance checklists and codes of
conduct; * Supporting risk assessments; * Complying with the new regulations
regarding technologies that perform automatic profiling; * Complying with the
new regulations concerning recognising and reporting breaches of security. It
concludes that AI technology can support each of these four areas. The
requirements that GDPR (or organisations that need to comply with GDPR) state
for explanation and justification of reasoning imply that rule-based approaches
are likely to be more helpful than machine learning approaches. However, there
may be good business reasons to take a different approach in some
circumstances.
| [
{
"version": "v1",
"created": "Sat, 15 Sep 2018 19:57:02 GMT"
}
] | 1,537,228,800,000 | [
[
"Kingston",
"John KC",
""
]
] |
1809.05763 | Anton Wiehe | Anton Orell Wiehe, Nil Stolt Ans\'o, Madalina M. Drugan, Marco A.
Wiering | Sampled Policy Gradient for Learning to Play the Game Agar.io | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | In this paper, a new offline actor-critic learning algorithm is introduced:
Sampled Policy Gradient (SPG). SPG samples in the action space to calculate an
approximated policy gradient by using the critic to evaluate the samples. This
sampling allows SPG to search the action-Q-value space more globally than
deterministic policy gradient (DPG), enabling it to theoretically avoid more
local optima. SPG is compared to Q-learning and the actor-critic algorithms
CACLA and DPG in a pellet collection task and a self play environment in the
game Agar.io. The online game Agar.io has become massively popular on the
internet due to intuitive game design and the ability to instantly compete
against players around the world. From the point of view of artificial
intelligence this game is also very intriguing: the game has a continuous input
and action space and allows diverse agents with complex strategies to compete
against each other. The experimental results show that Q-Learning and
CACLA outperform a pre-programmed greedy bot in the pellet collection task, but
all algorithms fail to outperform this bot in a fighting scenario. The SPG
algorithm is analyzed to have great extendability through offline exploration
and it matches DPG in performance even in its basic form without extensive
sampling.
| [
{
"version": "v1",
"created": "Sat, 15 Sep 2018 20:01:06 GMT"
}
] | 1,537,228,800,000 | [
[
"Wiehe",
"Anton Orell",
""
],
[
"Ansó",
"Nil Stolt",
""
],
[
"Drugan",
"Madalina M.",
""
],
[
"Wiering",
"Marco A.",
""
]
] |
1809.05959 | Pavel Surynek | Pavel Surynek | Lazy Modeling of Variants of Token Swapping Problem and Multi-agent Path
Finding through Combination of Satisfiability Modulo Theories and
Conflict-based Search | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address item relocation problems in graphs in this paper. We assume items
are placed in vertices of an undirected graph with at most one item per vertex.
Items can be moved across edges while various constraints depending on the type
of relocation problem must be satisfied. We introduce a general problem
formulation that encompasses known types of item relocation problems such as
multi-agent path finding (MAPF) and token swapping (TSWAP). In this formulation
we express two new types of relocation problems derived from token swapping
that we call token rotation (TROT) and token permutation (TPERM). Our solving
approach for item relocation combines satisfiability modulo theory (SMT) with
conflict-based search (CBS). We interpret CBS in the SMT framework where we
start with the basic model and refine the model with a collision resolution
constraint whenever a collision between items occurs in the current solution.
The key difference between the standard CBS and our SMT-based modification of
CBS (SMT-CBS) is that the standard CBS branches the search to resolve the
collision while in SMT-CBS we iteratively add a single disjunctive collision
resolution constraint. Experimental evaluation on several benchmarks shows that
the SMT-CBS algorithm significantly outperforms the standard CBS. We also
compared SMT-CBS with a modification of the SAT-based MDD-SAT solver that uses
an eager modeling of item relocation in which all potential collisions are
eliminated by constraints in advance. Experiments show that the lazy approach in
SMT-CBS produces fewer constraints than MDD-SAT and also achieves faster solving
run-times.
| [
{
"version": "v1",
"created": "Sun, 16 Sep 2018 21:19:35 GMT"
}
] | 1,537,228,800,000 | [
[
"Surynek",
"Pavel",
""
]
] |
1809.06180 | Riccardo Zese | Riccardo Zese, Giuseppe Cota, Evelina Lamma, Elena Bellodi, Fabrizio
Riguzzi | Probabilistic DL Reasoning with Pinpointing Formulas: A Prolog-based
Approach | null | Theory and Practice of Logic Programming, 19 (3), 449-476, 2019 | 10.1017/S1471068418000480 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When modeling real world domains we have to deal with information that is
incomplete or that comes from sources with different trust levels. This
motivates the need for managing uncertainty in the Semantic Web. To this
purpose, we introduced a probabilistic semantics, named DISPONTE, in order to
combine description logics with probability theory. The probability of a query
can be then computed from the set of its explanations by building a Binary
Decision Diagram (BDD). The set of explanations can be found using the tableau
algorithm, which has to handle non-determinism. Prolog, with its efficient
handling of non-determinism, is suitable for implementing the tableau
algorithm. TRILL and TRILLP are systems offering a Prolog implementation of the
tableau algorithm. TRILLP builds a pinpointing formula, that compactly
represents the set of explanations and can be directly translated into a BDD.
Both reasoners were shown to outperform state-of-the-art DL reasoners. In this
paper, we present an improvement of TRILLP, named TORNADO, in which the BDD is
directly built during the construction of the tableau, further speeding up the
overall inference process. An experimental comparison shows the effectiveness
of TORNADO. All systems can be tried online in the TRILL on SWISH web
application at http://trill.ml.unife.it/.
| [
{
"version": "v1",
"created": "Mon, 17 Sep 2018 13:13:02 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Jan 2019 09:15:01 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Apr 2019 11:44:58 GMT"
}
] | 1,554,163,200,000 | [
[
"Zese",
"Riccardo",
""
],
[
"Cota",
"Giuseppe",
""
],
[
"Lamma",
"Evelina",
""
],
[
"Bellodi",
"Elena",
""
],
[
"Riguzzi",
"Fabrizio",
""
]
] |
1809.06205 | Aristotelis Charalampous | Aristotelis Charalampous, Sotirios Chatzis | Quantum Statistics-Inspired Neural Attention | Submitted to The 23rd Pacific-Asia Conference on Knowledge Discovery
and Data Mining (PAKDD 2019) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Sequence-to-sequence (encoder-decoder) models with attention constitute a
cornerstone of deep learning research, as they have enabled unprecedented
sequential data modeling capabilities. This effectiveness largely stems from
the capacity of these models to infer salient temporal dynamics over long
horizons; these are encoded into the obtained neural attention (NA)
distributions. However, existing NA formulations essentially constitute
point-wise selection mechanisms over the observed source sequences; that is,
attention weights computation relies on the assumption that each source
sequence element is independent of the rest. Unfortunately, although
convenient, this assumption fails to account for higher-order dependencies
which might be prevalent in real-world data. This paper addresses these
limitations by leveraging Quantum-Statistical modeling arguments. Specifically,
our work broadens the notion of NA, by attempting to account for the case that
the NA model becomes inherently incapable of discerning between individual
source elements; this is assumed to be the case due to higher-order temporal
dynamics. On the contrary, we postulate that in some cases selection may be
feasible only at the level of pairs of source sequence elements. To this end,
we cast NA into inference of an attention density matrix (ADM) approximation.
We derive effective training and inference algorithms, and evaluate our
approach in the context of a machine translation (MT) application. We perform
experiments with challenging benchmark datasets. As we show, our approach
yields favorable outcomes in terms of several evaluation metrics.
| [
{
"version": "v1",
"created": "Mon, 17 Sep 2018 13:58:13 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Oct 2018 13:31:44 GMT"
}
] | 1,540,944,000,000 | [
[
"Charalampous",
"Aristotelis",
""
],
[
"Chatzis",
"Sotirios",
""
]
] |
1809.06260 | Jun Feng | Jun Feng, Heng Li, Minlie Huang, Shichen Liu, Wenwu Ou, Zhirong Wang
and Xiaoyan Zhu | Learning to Collaborate: Multi-Scenario Ranking via Multi-Agent
Reinforcement Learning | WWW2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ranking is a fundamental and widely studied problem in scenarios such as
search, advertising, and recommendation. However, joint optimization for
multi-scenario ranking, which aims to improve the overall performance of
several ranking strategies in different scenarios, is rather untouched.
Separately optimizing each individual strategy has two limitations. The first
one is a lack of collaboration between scenarios, meaning that each strategy
maximizes its own objective but ignores the goals of other strategies, leading
to a sub-optimal overall performance. The second limitation is the inability to
model the correlation between scenarios, meaning that independent
optimization in one scenario only uses its own user data but ignores the
context in other scenarios.
In this paper, we formulate multi-scenario ranking as a fully cooperative,
partially observable, multi-agent sequential decision problem. We propose a
novel model named Multi-Agent Recurrent Deterministic Policy Gradient (MA-RDPG)
which has a communication component for passing messages, several private
actors (agents) for making actions for ranking, and a centralized critic for
evaluating the overall performance of the co-working actors. Each scenario is
treated as an agent (actor). Agents collaborate with each other by sharing a
global action-value function (the critic) and passing messages that encode
historical information across scenarios. The model is evaluated with online
settings on a large E-commerce platform. Results show that the proposed model
exhibits significant improvements against baselines in terms of the overall
performance.
| [
{
"version": "v1",
"created": "Mon, 17 Sep 2018 14:45:21 GMT"
}
] | 1,537,228,800,000 | [
[
"Feng",
"Jun",
""
],
[
"Li",
"Heng",
""
],
[
"Huang",
"Minlie",
""
],
[
"Liu",
"Shichen",
""
],
[
"Ou",
"Wenwu",
""
],
[
"Wang",
"Zhirong",
""
],
[
"Zhu",
"Xiaoyan",
""
]
] |
1809.06305 | Xiao Li | Xiao Li, Yao Ma and Calin Belta | Automata Guided Reinforcement Learning With Demonstrations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tasks with complex temporal structures and long horizons pose a challenge for
reinforcement learning agents due to the difficulty in specifying the tasks in
terms of reward functions as well as large variances in the learning signals.
We propose to address these problems by combining temporal logic (TL) with
reinforcement learning from demonstrations. Our method automatically generates
intrinsic rewards that align with the overall task goal given a TL task
specification. The policy resulting from our framework has an interpretable and
hierarchical structure. We validate the proposed method experimentally on a set
of robotic manipulation tasks.
| [
{
"version": "v1",
"created": "Mon, 17 Sep 2018 16:17:28 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Sep 2018 22:10:42 GMT"
}
] | 1,538,006,400,000 | [
[
"Li",
"Xiao",
""
],
[
"Ma",
"Yao",
""
],
[
"Belta",
"Calin",
""
]
] |
1809.06481 | Sahin Geyik | Sahin Cem Geyik, Qi Guo, Bo Hu, Cagri Ozcaglar, Ketan Thakkar, Xianren
Wu, Krishnaram Kenthapadi | Talent Search and Recommendation Systems at LinkedIn: Practical
Challenges and Lessons Learned | This paper has been accepted for publication at ACM SIGIR 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LinkedIn Talent Solutions business contributes to around 65% of LinkedIn's
annual revenue, and provides tools for job providers to reach out to potential
candidates and for job seekers to find suitable career opportunities.
LinkedIn's job ecosystem has been designed as a platform to connect job
providers and job seekers, and to serve as a marketplace for efficient matching
between potential candidates and job openings. A key mechanism to help achieve
these goals is the LinkedIn Recruiter product, which enables recruiters to
search for relevant candidates and obtain candidate recommendations for their
job postings. In this work, we highlight a set of unique information retrieval,
system, and modeling challenges associated with talent search and
recommendation systems.
| [
{
"version": "v1",
"created": "Tue, 18 Sep 2018 00:03:15 GMT"
}
] | 1,537,315,200,000 | [
[
"Geyik",
"Sahin Cem",
""
],
[
"Guo",
"Qi",
""
],
[
"Hu",
"Bo",
""
],
[
"Ozcaglar",
"Cagri",
""
],
[
"Thakkar",
"Ketan",
""
],
[
"Wu",
"Xianren",
""
],
[
"Kenthapadi",
"Krishnaram",
""
]
] |
1809.06488 | Sahin Geyik | Sahin Cem Geyik, Vijay Dialani, Meng Meng, Ryan Smith | In-Session Personalization for Talent Search | This paper has been accepted for publication at ACM CIKM 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous efforts in recommendation of candidates for talent search followed
the general pattern of receiving an initial search criteria and generating a
set of candidates utilizing a pre-trained model. Traditionally, the generated
recommendations are final, that is, the list of potential candidates is not
modified unless the user explicitly changes his/her search criteria. In this
paper, we are proposing a candidate recommendation model which takes into
account the immediate feedback of the user, and updates the candidate
recommendations at each step. This setting also allows for very uninformative
initial search queries, since we pinpoint the user's intent based on feedback
during the search session. To achieve our goal, we employ an intent clustering
method based on topic modeling which separates the candidate space into
meaningful, possibly overlapping, subsets (which we call intent clusters) for
each position. On top of the candidate segments, we apply a multi-armed bandit
approach to choose which intent cluster is more appropriate for the current
session. We also present an online learning scheme which updates the intent
clusters within the session, due to user feedback, to achieve further
personalization. Our offline experiments as well as the results from the online
deployment of our solution demonstrate the benefits of our proposed
methodology.
| [
{
"version": "v1",
"created": "Tue, 18 Sep 2018 00:24:23 GMT"
}
] | 1,537,315,200,000 | [
[
"Geyik",
"Sahin Cem",
""
],
[
"Dialani",
"Vijay",
""
],
[
"Meng",
"Meng",
""
],
[
"Smith",
"Ryan",
""
]
] |
1809.06625 | Chengwei Zhang | Chengwei Zhang and Xiaohong Li and Jianye Hao and Siqi Chen and Karl
Tuyls and Zhiyong Feng and Wanli Xue and Rong Chen | SCC-rFMQ Learning in Cooperative Markov Games with Continuous Actions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although many reinforcement learning methods have been proposed for learning
the optimal solutions in single-agent continuous-action domains, multiagent
coordination domains with continuous actions have received relatively little
investigation. In this paper, we propose an independent-learner hierarchical
method, named Sample Continuous Coordination with recursive Frequency Maximum
Q-Value (SCC-rFMQ), which divides the cooperative problem with continuous
actions into two layers. The first layer samples a finite set of actions from
the continuous action spaces by a re-sampling mechanism with variable
exploratory rates, and the second layer evaluates the actions in the sampled
action set and updates the policy using a reinforcement learning cooperative
method. By constructing cooperative mechanisms at both levels, SCC-rFMQ can
handle cooperative problems in continuous action cooperative Markov games
effectively. The effectiveness of SCC-rFMQ is experimentally demonstrated on
two well-designed games, i.e., a continuous version of the climbing game and a
cooperative version of the boat problem. Experimental results show that
SCC-rFMQ outperforms other reinforcement learning algorithms.
| [
{
"version": "v1",
"created": "Tue, 18 Sep 2018 10:19:35 GMT"
}
] | 1,537,315,200,000 | [
[
"Zhang",
"Chengwei",
""
],
[
"Li",
"Xiaohong",
""
],
[
"Hao",
"Jianye",
""
],
[
"Chen",
"Siqi",
""
],
[
"Tuyls",
"Karl",
""
],
[
"Feng",
"Zhiyong",
""
],
[
"Xue",
"Wanli",
""
],
[
"Chen",
"Rong",
""
]
] |
1809.06723 | Biplav Srivastava | Biplav Srivastava | Decision-support for the Masses by Enabling Conversations with Open Data | 6 pages. arXiv admin note: text overlap with arXiv:1803.09789 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open data refers to data that is freely available for reuse. Although there
has been a rapid increase in the availability of open data to the public in the last
decade, this has not translated into better decision-support tools for them. We
propose intelligent conversation generators as a grand challenge that would
automatically create data-driven conversation interfaces (CIs), also known as
chatbots or dialog systems, from open data and deliver personalized analytical
insights to users based on their contextual needs. Such generators will not
only help bring Artificial Intelligence (AI)-based solutions for important
societal problems to the masses but also advance AI by providing an integrative
testbed for human-centric AI and filling gaps in the state-of-art towards this
aim.
| [
{
"version": "v1",
"created": "Sun, 16 Sep 2018 17:59:43 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Jan 2019 14:18:14 GMT"
}
] | 1,547,424,000,000 | [
[
"Srivastava",
"Biplav",
""
]
] |
1809.06775 | Norberto Ritzmann J\'unior | Norberto Ritzmann Junior and Julio Cesar Nievola | A generalized financial time series forecasting model based on automatic
feature engineering using genetic algorithms and support vector machine | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the genetic algorithm for time window optimization, which is an
embedded genetic algorithm (GA), to optimize the time window (TW) of the
attributes using feature selection and support vector machine. This GA is
evolved using the results of a trading simulation, and it determines the best
TW for each technical indicator. An appropriate evaluation was conducted using
a walk-forward trading simulation, and the trained model was verified to be
generalizable for forecasting other stock data. The results show that using the
GA to determine the TW can improve the rate of return, leading to better
prediction models than those resulting from using the default TW.
| [
{
"version": "v1",
"created": "Tue, 18 Sep 2018 14:40:19 GMT"
}
] | 1,537,315,200,000 | [
[
"Junior",
"Norberto Ritzmann",
""
],
[
"Nievola",
"Julio Cesar",
""
]
] |
1809.07027 | Ville Vakkuri | Ville Vakkuri and Pekka Abrahamsson | The Key Concepts of Ethics of Artificial Intelligence - A Keyword based
Systematic Mapping Study | This is the author's version of the work. The copyright holder's
version can be found at http://dx.doi.org/10.1109/ICE.2018.8436265 | 2018 IEEE International Conference on Engineering, Technology and
Innovation (ICE/ITMC), Stuttgart, 2018 | 10.1109/ICE.2018.8436265 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growing influence and decision-making capacities of Autonomous systems
and Artificial Intelligence in our lives force us to consider the values
embedded in these systems. But how should ethics be implemented in these
systems? In this study, the solution is seen in philosophical conceptualization
as a framework for forming a practical implementation model for the ethics of
AI. To take the first steps towards conceptualization, the main concepts used in
the field need to be identified. A keyword-based Systematic Mapping Study (SMS)
on the keywords used in AI and ethics was conducted to help in identifying,
defining and comparing the main concepts used in current AI ethics discourse.
Out of the 1062 papers retrieved, the SMS discovered 37 recurring keywords in 83
academic papers. We
suggest that the focus on finding keywords is the first step in guiding and
providing direction for future research in the AI ethics field.
| [
{
"version": "v1",
"created": "Wed, 19 Sep 2018 07:01:53 GMT"
}
] | 1,537,401,600,000 | [
[
"Vakkuri",
"Ville",
""
],
[
"Abrahamsson",
"Pekka",
""
]
] |
1809.07045 | Soumi Chattopadhyay | Soumi Chattopadhyay, Ansuman Banerjee | A Methodology for Search Space Reduction in QoS Aware Semantic Web
Service Composition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The semantic information regulates the expressiveness of a web service.
State-of-the-art approaches in web services research have used the semantics of
a web service for different purposes, mainly for service discovery,
composition, and execution. In this paper, our main focus is on semantic-driven
Quality of Service (QoS) aware service composition. Most of the contemporary
approaches on service composition have used the semantic information to combine
the services appropriately to generate the composition solution. However, in
this paper, our intention is to use the semantic information to expedite the
service composition algorithm. Here, we present a service composition framework
that uses semantic information of a web service to generate different clusters,
where the services are semantically related within a cluster. Our final aim is
to construct a composition solution using these clusters that can efficiently
scale to large service spaces, while ensuring solution quality. Experimental
results show the efficiency of our proposed method.
| [
{
"version": "v1",
"created": "Wed, 19 Sep 2018 07:53:29 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Mar 2019 06:00:30 GMT"
}
] | 1,553,126,400,000 | [
[
"Chattopadhyay",
"Soumi",
""
],
[
"Banerjee",
"Ansuman",
""
]
] |
1809.07133 | Nico Potyka | Nico Potyka | Extending Modular Semantics for Bipolar Weighted Argumentation
(Technical Report) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weighted bipolar argumentation frameworks offer a tool for decision support
and social media analysis. Arguments are evaluated by an iterative procedure
that takes initial weights and attack and support relations into account. Until
recently, convergence of these iterative procedures was not very well
understood in cyclic graphs. Mossakowski and Neuhaus recently introduced a
unification of different approaches and proved first convergence and divergence
results. We build up on this work, simplify and generalize convergence results
and complement them with runtime guarantees. As it turns out, there is a
tradeoff between semantics' convergence guarantees and their ability to move
strength values away from the initial weights. We demonstrate that divergence
problems can be avoided without this tradeoff by continuizing semantics.
Semantically, we extend the framework with a Duality property that assures a
symmetric impact of attack and support relations. We also present a Java
implementation of modular semantics and explain the practical usefulness of the
theoretical ideas.
| [
{
"version": "v1",
"created": "Wed, 19 Sep 2018 11:54:46 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Mar 2019 15:17:02 GMT"
}
] | 1,551,657,600,000 | [
[
"Potyka",
"Nico",
""
]
] |
1809.07141 | Patrick Kahl | Anthony P. Leclerc and Patrick Thor Kahl | A survey of advances in epistemic logic program solvers | Proceedings of the 11th Workshop on Answer Set Programming and Other
Computing Paradigms 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research in extensions of Answer Set Programming has included a
renewed interest in the language of Epistemic Specifications, which adds modal
operators K ("known") and M ("may be true") to provide for more powerful
introspective reasoning and enhanced capability, particularly when reasoning
with incomplete information. An epistemic logic program is a set of rules in
this language. Infused with the research has been the desire for an efficient
solver to enable the practical use of such programs for problem solving. In
this paper, we report on the current state of development of epistemic logic
program solvers.
| [
{
"version": "v1",
"created": "Wed, 19 Sep 2018 12:18:10 GMT"
}
] | 1,537,401,600,000 | [
[
"Leclerc",
"Anthony P.",
""
],
[
"Kahl",
"Patrick Thor",
""
]
] |
1809.07193 | Peng Sun | Peng Sun, Xinghai Sun, Lei Han, Jiechao Xiong, Qing Wang, Bo Li, Yang
Zheng, Ji Liu, Yongsheng Liu, Han Liu, Tong Zhang | TStarBots: Defeating the Cheating Level Builtin AI in StarCraft II in
the Full Game | add link for source code | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Starcraft II (SC2) is widely considered as the most challenging Real Time
Strategy (RTS) game. The underlying challenges include a large observation
space, a huge (continuous and infinite) action space, partial observations,
simultaneous move for all players, and long horizon delayed rewards for local
decisions. To push the frontier of AI research, Deepmind and Blizzard jointly
developed the StarCraft II Learning Environment (SC2LE) as a testbench of
complex decision making systems. SC2LE provides a few mini games such as
MoveToBeacon, CollectMineralShards, and DefeatRoaches, where some AI agents
have achieved the performance level of human professional players. However, for
full games, the current AI agents are still far from achieving human
professional level performance. To bridge this gap, we present two full game AI
agents in this paper - the AI agent TStarBot1 is based on deep reinforcement
learning over a flat action structure, and the AI agent TStarBot2 is based on
hard-coded rules over a hierarchical action structure. Both TStarBot1 and
TStarBot2 are able to defeat the built-in AI agents from level 1 to level 10 in
a full game (1v1 Zerg-vs-Zerg game on the AbyssalReef map), noting that level
8, level 9, and level 10 are cheating agents with unfair advantages such as
full vision on the whole map and resource harvest boosting. To the best of our
knowledge, this is the first public work to investigate AI agents that can
defeat the built-in AI in the StarCraft II full game.
| [
{
"version": "v1",
"created": "Wed, 19 Sep 2018 13:45:47 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Nov 2018 03:33:01 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Dec 2018 09:29:31 GMT"
}
] | 1,546,214,400,000 | [
[
"Sun",
"Peng",
""
],
[
"Sun",
"Xinghai",
""
],
[
"Han",
"Lei",
""
],
[
"Xiong",
"Jiechao",
""
],
[
"Wang",
"Qing",
""
],
[
"Li",
"Bo",
""
],
[
"Zheng",
"Yang",
""
],
[
"Liu",
"Ji",
""
],
[
"Liu",
"Yongsheng",
""
],
[
"Liu",
"Han",
""
],
[
"Zhang",
"Tong",
""
]
] |
1809.07614 | Chaluka Salgado | Chaluka Salgado (1), Muhammad Aamir Cheema (1), David Taniar (1) ((1)
Monash University, Clayton, Australia) | An Efficient Approximation Algorithm for Multi-criteria Indoor Route
Planning Queries | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A route planning query has many real-world applications and has been studied
extensively in outdoor spaces such as road networks or Euclidean space. Despite
its many applications in indoor venues (e.g., shopping centres, libraries,
airports), almost all existing studies are specifically designed for outdoor
spaces and do not take into account unique properties of the indoor spaces such
as hallways, stairs, escalators, rooms etc. We identify this research gap and
formally define the problem of category aware multi-criteria route planning
query, denoted by CAM, which returns the optimal route from an indoor source
point to an indoor target point that passes through at least one indoor point
from each given category while minimizing the total cost of the route in terms
of travel distance and other relevant attributes. We show that CAM query is
NP-hard. Based on a novel dominance-based pruning, we propose an efficient
algorithm which generates high-quality results. We provide an extensive
experimental study conducted on the largest shopping centre in Australia and
compare our algorithm with alternative approaches. The experiments demonstrate
that our algorithm is highly efficient and produces quality results.
| [
{
"version": "v1",
"created": "Tue, 18 Sep 2018 03:14:31 GMT"
}
] | 1,537,488,000,000 | [
[
"Salgado",
"Chaluka",
""
],
[
"Cheema",
"Muhammad Aamir",
""
],
[
"Taniar",
"David",
""
]
] |
1809.07842 | Kirsten Lloyd | Kirsten Lloyd | Bias Amplification in Artificial Intelligence Systems | Presented at AAAI FSS-18: Artificial Intelligence in Government and
Public Sector, Arlington, Virginia, USA | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As Artificial Intelligence (AI) technologies proliferate, concern has
centered around the long-term dangers of job loss or threats of machines
causing harm to humans. All of this concern, however, detracts from the more
pertinent and already existing threats posed by AI today: its ability to
amplify bias found in training datasets, and swiftly impact marginalized
populations at scale. Government and public sector institutions have a
responsibility to citizens to establish a dialogue with technology developers
and release thoughtful policy around data standards to ensure diverse
representation in datasets to prevent bias amplification and ensure that AI
systems are built with inclusion in mind.
| [
{
"version": "v1",
"created": "Thu, 20 Sep 2018 20:29:56 GMT"
}
] | 1,537,747,200,000 | [
[
"Lloyd",
"Kirsten",
""
]
] |
1809.07882 | Lance Kaplan | Lance Kaplan, Federico Cerutti, Murat Sensoy, Alun Preece, Paul
Sullivan | Uncertainty Aware AI ML: Why and How | Presented at AAAI FSS-18: Artificial Intelligence in Government and
Public Sector, Arlington, Virginia, USA | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper argues the need for research to realize uncertainty-aware
artificial intelligence and machine learning (AI\&ML) systems for decision
support by describing a number of motivating scenarios. Furthermore, the paper
defines uncertainty-awareness and lays out the challenges along with surveying
some promising research directions. A theoretical demonstration illustrates how
two emerging uncertainty-aware ML and AI technologies could be integrated and
be of value for a route planning operation.
| [
{
"version": "v1",
"created": "Thu, 20 Sep 2018 22:15:06 GMT"
}
] | 1,537,747,200,000 | [
[
"Kaplan",
"Lance",
""
],
[
"Cerutti",
"Federico",
""
],
[
"Sensoy",
"Murat",
""
],
[
"Preece",
"Alun",
""
],
[
"Sullivan",
"Paul",
""
]
] |
1809.07888 | Federico Cerutti | Federico Cerutti and Lance Kaplan and Angelika Kimmig and Murat Sensoy | Probabilistic Logic Programming with Beta-Distributed Random Variables | Accepted for presentation at AAAI 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We enable aProbLog---a probabilistic logical programming approach---to reason
in the presence of uncertain probabilities represented as Beta-distributed
random variables. We achieve the same performance as state-of-the-art
algorithms for highly specified and engineered domains, while simultaneously
maintaining the flexibility offered by aProbLog in handling complex relational
domains. Our
motivation is that faithfully capturing the distribution of probabilities is
necessary to compute an expected utility for effective decision making under
uncertainty: unfortunately, these probability distributions can be highly
uncertain due to sparse data. To understand and accurately manipulate such
probability distributions we need a well-defined theoretical framework that is
provided by the Beta distribution, which specifies a distribution of
probabilities representing all the possible values of a probability when the
exact value is unknown.
| [
{
"version": "v1",
"created": "Thu, 20 Sep 2018 23:01:58 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Oct 2018 19:37:15 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Nov 2018 10:43:18 GMT"
}
] | 1,542,326,400,000 | [
[
"Cerutti",
"Federico",
""
],
[
"Kaplan",
"Lance",
""
],
[
"Kimmig",
"Angelika",
""
],
[
"Sensoy",
"Murat",
""
]
] |
1809.08034 | Jorge Fandinno | Jorge Fandinno and Claudia Schulz | Answering the "why" in Answer Set Programming - A Survey of Explanation
Approaches | Under consideration in Theory and Practice of Logic Programming
(TPLP) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) approaches to problem-solving and
decision-making are becoming more and more complex, leading to a decrease in
the understandability of solutions. The European Union's new General Data
Protection Regulation tries to tackle this problem by stipulating a "right to
explanation" for decisions made by AI systems. One of the AI paradigms that may
be affected by this new regulation is Answer Set Programming (ASP). Thanks to
the emergence of efficient solvers, ASP has recently been used for
problem-solving in a variety of domains, including medicine, cryptography, and
biology. To ensure the successful application of ASP as a problem-solving
paradigm in the future, explanations of ASP solutions are crucial. In this
survey, we give an overview of approaches that provide an answer to the
question of why an answer set is a solution to a given problem, notably
off-line justifications, causal graphs, argumentative explanations and why-not
provenance, and highlight their similarities and differences. Moreover, we
review methods explaining why a set of literals is not an answer set or why no
solution exists at all.
| [
{
"version": "v1",
"created": "Fri, 21 Sep 2018 10:52:08 GMT"
}
] | 1,537,747,200,000 | [
[
"Fandinno",
"Jorge",
""
],
[
"Schulz",
"Claudia",
""
]
] |
1809.08059 | John Kingston | John Kingston | Conducting Feasibility Studies for Knowledge Based Systems | Presented at ES 2003, the annual conference of the BCS Specialist
Group on Artificial Intelligence, December 2003 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes how to carry out a feasibility study for a potential
knowledge based system application. It discusses factors to be considered under
three headings: the business case, the technical feasibility, and stakeholder
issues. It concludes with a case study of a feasibility study for a KBS to
guide surgeons in diagnosis and treatment of thyroid conditions.
| [
{
"version": "v1",
"created": "Fri, 21 Sep 2018 12:29:27 GMT"
}
] | 1,537,747,200,000 | [
[
"Kingston",
"John",
""
]
] |
1809.08208 | Syed Yusha Kareem | Syed Yusha Kareem, Luca Buoncompagni, Fulvio Mastrogiovanni | Arianna+: Scalable Human Activity Recognition by Reasoning with a
Network of Ontologies | 13 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aging population ratios are rising significantly. Meanwhile, smart home based
health monitoring services are evolving rapidly to become a viable alternative
to traditional healthcare solutions. Such services can augment qualitative
analyses done by gerontologists with quantitative data. Hence, the recognition
of Activities of Daily Living (ADL) has become an active domain of research in
recent times. For a system to perform human activity recognition in a
real-world environment, multiple requirements exist, such as scalability,
robustness, ability to deal with uncertainty (e.g., missing sensor data), to
operate with multi-occupants and to take into account their privacy and
security. This paper attempts to address the requirements of scalability and
robustness, by describing a reasoning mechanism based on modular spatial and/or
temporal context models as a network of ontologies. The reasoning mechanism has
been implemented in a smart home system referred to as Arianna+. The paper
presents and discusses a use case, and experiments are performed on a simulated
dataset, to showcase Arianna+'s modularity feature, internal working, and
computational performance. Results indicate scalability and robustness for
human activity recognition processes.
| [
{
"version": "v1",
"created": "Fri, 21 Sep 2018 17:00:56 GMT"
}
] | 1,537,747,200,000 | [
[
"Kareem",
"Syed Yusha",
""
],
[
"Buoncompagni",
"Luca",
""
],
[
"Mastrogiovanni",
"Fulvio",
""
]
] |
1809.08304 | Yuanlin Zhang | Elias Marcopoulos and Yuanlin Zhang | onlineSPARC: a Programming Environment for Answer Set Programming | Under consideration in Theory and Practice of Logic Programming
(TPLP) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent progress in logic programming (e.g., the development of the Answer Set
Programming paradigm) has made it possible to teach it to general undergraduate
and even middle/high school students. Given the limited exposure of these
students to computer science, the complexity of downloading, installing and
using tools for writing logic programs could be a major barrier for logic
programming to reach a much wider audience. We developed onlineSPARC, an online
answer set programming environment with a self-contained file system and a
simple interface. It allows users to type/edit logic programs and perform
several tasks over programs, including asking a query to a program, getting the
answer sets of a program, and producing a drawing/animation based on the answer
sets of a program.
| [
{
"version": "v1",
"created": "Fri, 21 Sep 2018 20:38:17 GMT"
}
] | 1,537,833,600,000 | [
[
"Marcopoulos",
"Elias",
""
],
[
"Zhang",
"Yuanlin",
""
]
] |
1809.08422 | Jingchi Jiang | Jingchi Jiang, Huanzheng Wang, Jing Xie, Xitong Guo, Yi Guan, Qiubin
Yu | Medical Knowledge Embedding Based on Recursive Neural Network for
Multi-Disease Diagnosis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The representation of knowledge based on first-order logic captures the
richness of natural language and supports multiple probabilistic inference
models. Although symbolic representation enables quantitative reasoning with
statistical probability, it is difficult to utilize with machine learning
models as they perform numerical operations. In contrast, knowledge embedding
(i.e., high-dimensional and continuous vectors) is a feasible approach to
complex reasoning that can not only retain the semantic information of
knowledge but also establish the quantifiable relationship among them. In this
paper, we propose recursive neural knowledge network (RNKN), which combines
medical knowledge based on first-order logic with recursive neural network for
multi-disease diagnosis. After RNKN is efficiently trained from manually
annotated Chinese Electronic Medical Records (CEMRs), diagnosis-oriented
knowledge embeddings and weight matrixes are learned. Experimental results
verify that the diagnostic accuracy of RNKN is superior to that of some
classical machine learning models and Markov logic network (MLN). The results
also demonstrate that the more explicit the evidence extracted from CEMRs is,
the better is the performance achieved. RNKN gradually exhibits the
interpretation of knowledge embeddings as the number of training epochs
increases.
| [
{
"version": "v1",
"created": "Sat, 22 Sep 2018 10:07:46 GMT"
}
] | 1,537,833,600,000 | [
[
"Jiang",
"Jingchi",
""
],
[
"Wang",
"Huanzheng",
""
],
[
"Xie",
"Jing",
""
],
[
"Guo",
"Xitong",
""
],
[
"Guan",
"Yi",
""
],
[
"Yu",
"Qiubin",
""
]
] |
1809.08509 | Biplav Srivastava | Himadri Mishra, Ramashish Gaurav, Biplav Srivastava | A Train Status Assistant for Indian Railways | 2 pages, demonstration chatbot, learning, train delay | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trains are part and parcel of everyday life in countries with large,
diverse, multi-lingual populations like India. Consequently, an assistant which
can accurately predict and explain train delays will help people and businesses
alike. We present a novel conversation agent which can engage with people about
train status and inform them about its delay at in-line stations. It is trained
on past delay data from a subset of trains and generalizes to others.
| [
{
"version": "v1",
"created": "Sun, 23 Sep 2018 01:48:50 GMT"
}
] | 1,537,833,600,000 | [
[
"Mishra",
"Himadri",
""
],
[
"Gaurav",
"Ramashish",
""
],
[
"Srivastava",
"Biplav",
""
]
] |
1809.08713 | Sein Minn | Sein Minn, Yi Yu, Michel C. Desmarais, Feida Zhu, Jill Jenn Vie | Deep Knowledge Tracing and Dynamic Student Classification for Knowledge
Tracing | IEEE International Conference on Data Mining, 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Intelligent Tutoring System (ITS), tracing the student's knowledge state
during learning has been studied for several decades in order to provide more
supportive learning instructions. In this paper, we propose a novel model for
knowledge tracing that i) captures students' learning ability and dynamically
assigns students into distinct groups with similar ability at regular time
intervals, and ii) combines this information with a Recurrent Neural Network
architecture known as Deep Knowledge Tracing. Experimental results confirm that
the proposed model is significantly better at predicting student performance
than well known state-of-the-art techniques for student modelling.
| [
{
"version": "v1",
"created": "Mon, 24 Sep 2018 01:11:45 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jan 2021 14:18:38 GMT"
}
] | 1,610,064,000,000 | [
[
"Minn",
"Sein",
""
],
[
"Yu",
"Yi",
""
],
[
"Desmarais",
"Michel C.",
""
],
[
"Zhu",
"Feida",
""
],
[
"Vie",
"Jill Jenn",
""
]
] |
1809.08751 | Frank Dignum | Frank Dignum | Interactions as Social Practices: towards a formalization | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-agent models are a suitable starting point to model complex social
interactions. However, as the complexity of the systems increase, we argue that
novel modeling approaches are needed that can deal with inter-dependencies at
different levels of society, where many heterogeneous parties (software agents,
robots, humans) are interacting and reacting to each other. In this paper, we
present a formalization of a social framework for agents based on the concept
of Social Practices as high level specifications of normal (expected) behavior
in a given social context. We argue that social practices facilitate the
practical reasoning of agents in standard social interactions.
| [
{
"version": "v1",
"created": "Mon, 24 Sep 2018 04:32:17 GMT"
}
] | 1,537,833,600,000 | [
[
"Dignum",
"Frank",
""
]
] |
1809.08823 | Douglas Summers Stay | Douglas Summers-Stay, Peter Sutor, Dandan Li | Representing Sets as Summed Semantic Vectors | In Biologically Inspired Cognitive Architectures 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Representing meaning in the form of high dimensional vectors is a common and
powerful tool in biologically inspired architectures. While the meaning of a
set of concepts can be summarized by taking a (possibly weighted) sum of their
associated vectors, this has generally been treated as a one-way operation. In
this paper we show how a technique built to aid sparse vector decomposition
allows in many cases the exact recovery of the inputs and weights to such a
sum, allowing a single vector to represent an entire set of vectors from a
dictionary. We characterize the number of vectors that can be recovered under
various conditions, and explore several ways such a tool can be used for
vector-based reasoning.
| [
{
"version": "v1",
"created": "Mon, 24 Sep 2018 09:55:37 GMT"
}
] | 1,537,833,600,000 | [
[
"Summers-Stay",
"Douglas",
""
],
[
"Sutor",
"Peter",
""
],
[
"Li",
"Dandan",
""
]
] |
1809.09414 | Shengbin Jia | Shengbin Jia and Yang Xiang and Xiaojun Chen | Triple Trustworthiness Measurement for Knowledge Graph | This paper has been accepted by WWW 2019 | null | 10.1145/3308558.3313586 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Knowledge graph (KG) uses the triples to describe the facts in the real
world. It has been widely used in intelligent analysis and applications.
However, noise and conflicts are inevitably introduced in the process
of construction, and KG-based tasks or applications assume that the
knowledge in the KG is completely correct, which inevitably brings about potential
deviations. In this paper, we establish a knowledge graph triple
trustworthiness measurement model that quantifies the semantic correctness of
triples and the degree to which the expressed facts are true. The model is a crisscrossing neural
network structure. It synthesizes the internal semantic information in the
triples and the global inference information of the KG to achieve the
trustworthiness measurement and fusion in the three levels of entity level,
relationship level, and KG global level. We analyzed the validity of the model
output confidence values, and conducted experiments in the real-world dataset
FB15K (from Freebase) for the knowledge graph error detection task. The
experimental results showed that compared with other models, our model achieved
significant and consistent improvements.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2018 11:37:27 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Nov 2018 06:21:40 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Feb 2019 07:57:27 GMT"
}
] | 1,550,620,800,000 | [
[
"Jia",
"Shengbin",
""
],
[
"Xiang",
"Yang",
""
],
[
"Chen",
"Xiaojun",
""
]
] |
1809.09419 | Matthew Guzdial | Matthew Guzdial, Joshua Reno, Jonathan Chen, Gillian Smith, and Mark
Riedl | Explainable PCGML via Game Design Patterns | 8 pages, 3 figures, Fifth Experimental AI in Games Workshop | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Procedural content generation via Machine Learning (PCGML) is the umbrella
term for approaches that generate content for games via machine learning. One
of the benefits of PCGML is that, unlike search or grammar-based PCG, it does
not require hand authoring of initial content or rules. Instead, PCGML relies
on existing content and black box models, which can be difficult to tune or
tweak without expert knowledge. This is especially problematic when a human
designer needs to understand how to manipulate their data or models to achieve
desired results. We present an approach to Explainable PCGML via Design
Patterns in which the design patterns act as a vocabulary and mode of
interaction between user and model. We demonstrate that our technique
outperforms non-explainable versions of our system in interactions with five
expert designers, four of whom lack any machine learning expertise.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2018 11:54:46 GMT"
}
] | 1,537,920,000,000 | [
[
"Guzdial",
"Matthew",
""
],
[
"Reno",
"Joshua",
""
],
[
"Chen",
"Jonathan",
""
],
[
"Smith",
"Gillian",
""
],
[
"Riedl",
"Mark",
""
]
] |
1809.09424 | Matthew Guzdial | Matthew Guzdial, Shukan Shah and Mark Riedl | Towards Automated Let's Play Commentary | 5 pages, 2 figures, Fifth Experimental AI in Games Workshop | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the problem of generating Let's Play-style commentary of
gameplay video via machine learning. We propose an analysis of Let's Play
commentary and a framework for building such a system. To test this framework
we build an initial, naive implementation, which we use to interrogate the
assumptions of the framework. We demonstrate promising results towards future
Let's Play commentary generation.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2018 12:09:52 GMT"
}
] | 1,537,920,000,000 | [
[
"Guzdial",
"Matthew",
""
],
[
"Shah",
"Shukan",
""
],
[
"Riedl",
"Mark",
""
]
] |
1809.09762 | Rodrigo Canaan | Rodrigo Canaan, Stefan Menzel, Julian Togelius and Andy Nealen | Towards Game-based Metrics for Computational Co-creativity | IEEE Computational Intelligence and Games (CIG) conference, 2018,
Maastricht. 8 pages, 2 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the following question: what game-like interactive system would
provide a good environment for measuring the impact and success of a
co-creative, cooperative agent? Creativity is often formulated in terms of
novelty, value, surprise and interestingness. We review how these concepts are
measured in current computational intelligence research and provide a mapping
from modern electronic and tabletop games to open research problems in
mixed-initiative systems and computational co-creativity. We propose
application scenarios for future research, and a number of metrics under which
the performance of cooperative agents in these environments will be evaluated.
| [
{
"version": "v1",
"created": "Wed, 26 Sep 2018 00:05:47 GMT"
}
] | 1,538,006,400,000 | [
[
"Canaan",
"Rodrigo",
""
],
[
"Menzel",
"Stefan",
""
],
[
"Togelius",
"Julian",
""
],
[
"Nealen",
"Andy",
""
]
] |
1809.09764 | Rodrigo Canaan | Rodrigo Canaan, Haotian Shen, Ruben Rodriguez Torrado, Julian
Togelius, Andy Nealen and Stefan Menzel | Evolving Agents for the Hanabi 2018 CIG Competition | IEEE Computational Intelligence and Games (CIG) conference, 2018,
Maastricht. 8 pages, 1 figure, 8 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hanabi is a cooperative card game with hidden information that has won
important awards in the industry and received some recent academic attention. A
two-track competition of agents for the game will take place in the 2018 CIG
conference. In this paper, we develop a genetic algorithm that builds
rule-based agents by determining the best sequence of rules from a fixed rule
set to use as strategy. In three separate experiments, we remove human
assumptions regarding the ordering of rules, add new, more expressive rules to
the rule set and independently evolve agents specialized at specific game
sizes. As a result, we achieve scores superior to previously published research
for the mirror and mixed evaluation of agents.
| [
{
"version": "v1",
"created": "Wed, 26 Sep 2018 00:12:03 GMT"
}
] | 1,538,006,400,000 | [
[
"Canaan",
"Rodrigo",
""
],
[
"Shen",
"Haotian",
""
],
[
"Torrado",
"Ruben Rodriguez",
""
],
[
"Togelius",
"Julian",
""
],
[
"Nealen",
"Andy",
""
],
[
"Menzel",
"Stefan",
""
]
] |
1809.10436 | Daniel P. Lupp | Henrik Forssell, Christian Kindermann, Daniel P. Lupp, Uli Sattler,
Evgenij Thorstensen | Generating Ontologies from Templates: A Rule-Based Approach for
Capturing Regularity | Technical report, extended version of paper accepted to DL Workshop
2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a second-order language that can be used to succinctly specify
ontologies in a consistent and transparent manner. This language is based on
ontology templates (OTTR), a framework for capturing recurring patterns of
axioms in ontological modelling. The language and our results are independent
of any specific DL. We define the language and its semantics, including the
case of negation-as-failure, investigate reasoning over ontologies specified
using our language, and show results about the decidability of useful reasoning
tasks about the language itself. We also state and discuss some open problems
that we believe to be of interest.
| [
{
"version": "v1",
"created": "Thu, 27 Sep 2018 10:10:20 GMT"
}
] | 1,538,092,800,000 | [
[
"Forssell",
"Henrik",
""
],
[
"Kindermann",
"Christian",
""
],
[
"Lupp",
"Daniel P.",
""
],
[
"Sattler",
"Uli",
""
],
[
"Thorstensen",
"Evgenij",
""
]
] |
1809.10441 | Dmitry Maximov | Dmitry Maximov and Yury Legovich and Vladimir Goncharenko | A Way to Facilitate Decision Making in a Mixed Group of Manned and
Unmanned Aerial Vehicles | 18 pages total, 12 ones of the text, appendix, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A mixed group of manned and unmanned aerial vehicles is considered as a
distributed system. A lattice of tasks which may be fulfilled by the system
corresponds to it. An external multiplication operation is defined on the lattice,
which in turn defines linear logic operations. Linear implication and
tensor product are used to choose a system reconfiguration variant, i.e., to
determine a new task executor choice. The task lattice structure (i.e., the
system purpose) and the operation definitions largely define the choice. Thus,
the choice is mainly the system purpose consequence. Such a method of the
behavior variant choice facilitates the decision making by the pilot
controlling the group. The suggested approach is illustrated using an example
of mixed group control during forest fire suppression.
| [
{
"version": "v1",
"created": "Thu, 27 Sep 2018 10:28:10 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Nov 2018 08:15:04 GMT"
}
] | 1,542,326,400,000 | [
[
"Maximov",
"Dmitry",
""
],
[
"Legovich",
"Yury",
""
],
[
"Goncharenko",
"Vladimir",
""
]
] |
1809.10595 | Jinyuan Yu Mr. | Zheng Xie, XingYu Fu and JinYuan Yu | AlphaGomoku: An AlphaGo-based Gomoku Artificial Intelligence using
Curriculum Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this project, we combine AlphaGo algorithm with Curriculum Learning to
crack the game of Gomoku. Modifications like Double Networks Mechanism and
Winning Value Decay are implemented to solve the intrinsic asymmetry and
short-sightedness of Gomoku. Our final AI, AlphaGomoku, after two days of training on
a single GPU, has reached human-level play.
| [
{
"version": "v1",
"created": "Thu, 27 Sep 2018 16:10:01 GMT"
}
] | 1,538,092,800,000 | [
[
"Xie",
"Zheng",
""
],
[
"Fu",
"XingYu",
""
],
[
"Yu",
"JinYuan",
""
]
] |
1809.11074 | Keting Lu | Keting Lu, Shiqi Zhang, Peter Stone, Xiaoping Chen | Robot Representation and Reasoning with Knowledge from Reinforcement
Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) agents aim at learning by interacting with an
environment, and are not designed for representing or reasoning with
declarative knowledge. Knowledge representation and reasoning (KRR) paradigms
are strong in declarative KRR tasks, but are ill-equipped to learn from such
experiences. In this work, we integrate logical-probabilistic KRR with
model-based RL, enabling agents to simultaneously reason with declarative
knowledge and learn from interaction experiences. The knowledge from humans and
RL is unified and used for dynamically computing task-specific planning models
under potentially new environments. Experiments were conducted using a mobile
robot working on dialog, navigation, and delivery tasks. Results show
significant improvements, in comparison to existing model-based RL methods.
| [
{
"version": "v1",
"created": "Fri, 28 Sep 2018 15:02:21 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Oct 2018 07:38:48 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Nov 2018 13:56:47 GMT"
}
] | 1,543,190,400,000 | [
[
"Lu",
"Keting",
""
],
[
"Zhang",
"Shiqi",
""
],
[
"Stone",
"Peter",
""
],
[
"Chen",
"Xiaoping",
""
]
] |
1809.11089 | Gavin Pearson | Gavin Pearson (1), Phil Jolley (2) and Geraint Evans (3) ((1) Dstl,
(2) IBM, (3) Defence Academy) | A Systems Approach to Achieving the Benefits of Artificial Intelligence
in UK Defence | Presented at AAAI FSS-18: Artificial Intelligence in Government and
Public Sector, Arlington, Virginia, USA | null | null | Dstl/CP111074 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to exploit the opportunities offered by AI within UK Defence
calls for an understanding of systemic issues required to achieve an effective
operational capability. This paper provides the authors' views of issues which
currently block UK Defence from fully benefitting from AI technology. These are
situated within a reference model for the AI Value Train, so enabling the
community to address the exploitation of such data and software intensive
systems in a systematic, end to end manner. The paper sets out the conditions
for success including: Researching future solutions to known problems and
clearly defined use cases; Addressing achievable use cases to show benefit;
Enhancing the availability of Defence-relevant data; Enhancing Defence 'know
how' in AI; Operating Software Intensive supply chain eco-systems at required
breadth and pace; Governance; and the integration of software and platform
supply chains and operating models.
| [
{
"version": "v1",
"created": "Fri, 28 Sep 2018 15:32:21 GMT"
}
] | 1,538,352,000,000 | [
[
"Pearson",
"Gavin",
""
],
[
"Jolley",
"Phil",
""
],
[
"Evans",
"Geraint",
""
]
] |
1810.00177 | Takuya Hiraoka | Takuya Hiraoka, Takashi Onishi, Takahisa Imagawa, Yoshimasa Tsuruoka | Refining Manually-Designed Symbol Grounding and High-Level Planning by
Policy Gradients | presented at the IJCAI-ICAI 2018 workshop on Learning & Reasoning
(L&R 2018) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical planners that produce interpretable and appropriate plans are
desired, especially in their application to supporting human decision making. In
the typical development of the hierarchical planners, higher-level planners and
symbol grounding functions are manually created, and this manual creation
requires much human effort. In this paper, we propose a framework that can
automatically refine symbol grounding functions and a high-level planner to
reduce human effort for designing these modules. In our framework, symbol
grounding and high-level planning, which are based on manually-designed
knowledge bases, are modeled with semi-Markov decision processes. A policy
gradient method is then applied to refine the modules, in which two terms for
updating the modules are considered. The first term, called a reinforcement
term, contributes to updating the modules to improve the overall performance of
a hierarchical planner to produce appropriate plans. The second term, called a
penalty term, contributes to keeping refined modules consistent with the
manually-designed original modules. Namely, it keeps the planner, which uses
the refined modules, producing interpretable plans. We perform preliminary
experiments to solve the Mountain car problem, and its results show that a
manually-designed high-level planner and symbol grounding function were
successfully refined by our framework.
| [
{
"version": "v1",
"created": "Sat, 29 Sep 2018 09:15:27 GMT"
}
] | 1,538,438,400,000 | [
[
"Hiraoka",
"Takuya",
""
],
[
"Onishi",
"Takashi",
""
],
[
"Imagawa",
"Takahisa",
""
],
[
"Tsuruoka",
"Yoshimasa",
""
]
] |
1810.00184 | Alun Preece | Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett and Supriyo
Chakraborty | Stakeholders in Explainable AI | Presented at AAAI FSS-18: Artificial Intelligence in Government and
Public Sector, Arlington, Virginia, USA | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is general consensus that it is important for artificial intelligence
(AI) and machine learning systems to be explainable and/or interpretable.
However, there is no general consensus over what is meant by 'explainable' and
'interpretable'. In this paper, we argue that this lack of consensus is due to
there being several distinct stakeholder communities. We note that, while the
concerns of the individual communities are broadly compatible, they are not
identical, which gives rise to different intents and requirements for
explainability/interpretability. We use the software engineering distinction
between validation and verification, and the epistemological distinctions
between knowns/unknowns, to tease apart the concerns of the stakeholder
communities and highlight the areas where their foci overlap or diverge. It is
not the purpose of the authors of this paper to 'take sides' - we count
ourselves as members, to varying degrees, of multiple communities - but rather
to help disambiguate what stakeholders mean when they ask 'Why?' of an AI.
| [
{
"version": "v1",
"created": "Sat, 29 Sep 2018 10:15:18 GMT"
}
] | 1,538,438,400,000 | [
[
"Preece",
"Alun",
""
],
[
"Harborne",
"Dan",
""
],
[
"Braines",
"Dave",
""
],
[
"Tomsett",
"Richard",
""
],
[
"Chakraborty",
"Supriyo",
""
]
] |
1810.00445 | Daniela Inclezan | Qinglin Zhang and Chris Benton and Daniela Inclezan | An Application of ASP Theories of Intentions to Understanding Restaurant
Scenarios: Insights and Narrative Corpus | Under consideration in Theory and Practice of Logic Programming
(TPLP) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a practical application of Answer Set Programming to the
understanding of narratives about restaurants. While this task was investigated
in depth by Erik Mueller, exceptional scenarios remained a serious challenge
for his script-based story comprehension system. We present a methodology that
remedies this issue by modeling characters in a restaurant episode as
intentional agents. We focus especially on the refinement of certain components
of this methodology in order to increase coverage and performance. We present a
restaurant story corpus that we created to design and evaluate our methodology.
Under consideration in Theory and Practice of Logic Programming (TPLP).
| [
{
"version": "v1",
"created": "Sun, 30 Sep 2018 18:39:23 GMT"
}
] | 1,538,438,400,000 | [
[
"Zhang",
"Qinglin",
""
],
[
"Benton",
"Chris",
""
],
[
"Inclezan",
"Daniela",
""
]
] |
1810.00685 | Arnaud Martin | Kuang Zhou (NPU), Arnaud Martin (DRUID), Quan Pan (NPU) | A belief combination rule for a large number of sources | arXiv admin note: substantial text overlap with arXiv:1707.07999 | Journal of Advances in Information Fusion, 2018, 13 (2) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The theory of belief functions is widely used for data from multiple sources.
Different evidence combination rules have been proposed in this framework
according to the properties of the sources to combine. However, most of these
combination rules are not efficient when there are a large number of sources.
This is due to either the complexity or the existence of an absorbing element
such as the total conflict mass function for conjunctive-based rules when
applied to unreliable evidence. In this paper, based on the assumption that the
majority of sources are reliable, a combination rule for a large number of
sources is proposed using a simple idea: the more common ideas the sources
share, the more reliable these sources are supposed to be. This rule is
adaptable for aggregating a large number of sources which may not all be
reliable. It will keep the spirit of the conjunctive rule to reinforce the
belief on the focal elements with which the sources are in agreement. The mass
on the emptyset will be kept as an indicator of the conflict. The proposed
rule, called LNS-CR (Conjunctive Combination Rule for a Large Number of
Sources), is evaluated on synthetic mass functions. The experimental results
verify that the rule can be effectively used to combine a large number of mass
functions and to elicit the major opinion.
| [
{
"version": "v1",
"created": "Fri, 28 Sep 2018 08:24:26 GMT"
}
] | 1,538,438,400,000 | [
[
"Zhou",
"Kuang",
"",
"NPU"
],
[
"Martin",
"Arnaud",
"",
"DRUID"
],
[
"Pan",
"Quan",
"",
"NPU"
]
] |
1810.00694 | Fabio Massimo Zennaro | Fabio Massimo Zennaro, Magdalena Ivanovska | Counterfactually Fair Prediction Using Multiple Causal Models | 18 pages, 5 figures, conference paper. arXiv admin note: text overlap
with arXiv:1805.09866 | null | 10.1007/978-3-030-14174-5_17 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we study the problem of making predictions using multiple
structural casual models defined by different agents, under the constraint that
the prediction satisfies the criterion of counterfactual fairness. Relying on
the frameworks of causality, fairness and opinion pooling, we build upon and
extend previous work focusing on the qualitative aggregation of causal Bayesian
networks and causal models. In order to complement previous qualitative
results, we devise a method based on Monte Carlo simulations. This method
enables a decision-maker to aggregate the outputs of the causal models provided
by different experts while guaranteeing the counterfactual fairness of the
result. We demonstrate our approach on a simple, yet illustrative, toy case
study.
| [
{
"version": "v1",
"created": "Mon, 1 Oct 2018 13:11:27 GMT"
}
] | 1,621,900,800,000 | [
[
"Zennaro",
"Fabio Massimo",
""
],
[
"Ivanovska",
"Magdalena",
""
]
] |
1810.00748 | Vasile Patrascu | Vasile Patrascu | Shannon Entropy for Neutrosophic Information | Submitted for publication | null | 10.13140/RG.2.2.32352.74244 | R.C.E.I.T-1.9.18 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper presents an extension of Shannon entropy for neutrosophic
information. This extension uses a new formula for distance between two
neutrosophic triplets. In addition, the obtained results are particularized for
bifuzzy, intuitionistic and paraconsistent fuzzy information.
| [
{
"version": "v1",
"created": "Mon, 24 Sep 2018 03:42:53 GMT"
}
] | 1,538,438,400,000 | [
[
"Patrascu",
"Vasile",
""
]
] |
1810.00916 | Volker Haarslev | Humaira Farid and Volker Haarslev | Handling Nominals and Inverse Roles using Algebraic Reasoning | 23 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel SHOI tableau calculus which incorporates
algebraic reasoning for deciding ontology consistency. Numerical restrictions
imposed by nominals, existential and universal restrictions are encoded into a
set of linear inequalities. Column generation and branch-and-price algorithms
are used to solve these inequalities. Our preliminary experiments indicate that
this calculus performs better on SHOI ontologies than standard tableau methods.
| [
{
"version": "v1",
"created": "Mon, 1 Oct 2018 18:40:57 GMT"
}
] | 1,538,524,800,000 | [
[
"Farid",
"Humaira",
""
],
[
"Haarslev",
"Volker",
""
]
] |
1810.01127 | Andrea Martin | Andrea E. Martin, Leonidas A. A. Doumas | Predicate learning in neural systems: Discovering latent generative
structures | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Humans learn complex latent structures from their environments (e.g., natural
language, mathematics, music, social hierarchies). In cognitive science and
cognitive neuroscience, models that infer higher-order structures from sensory
or first-order representations have been proposed to account for the complexity
and flexibility of human behavior. But how do the structures that these models
invoke arise in neural systems in the first place? To answer this question, we
explain how a system can learn latent representational structures (i.e.,
predicates) from experience with wholly unstructured data. During the process
of predicate learning, an artificial neural network exploits the naturally
occurring dynamic properties of distributed computing across neuronal
assemblies in order to learn predicates, but also to combine them
compositionally, two computational aspects which appear to be necessary for
human behavior as per formal theories in multiple domains. We describe how
predicates can be combined generatively using neural oscillations to achieve
human-like extrapolation and compositionality in an artificial neural network.
The ability to learn predicates from experience, to represent structures
compositionally, and to extrapolate to unseen data offers an inroads to
understanding and modeling the most complex human behaviors.
| [
{
"version": "v1",
"created": "Tue, 2 Oct 2018 09:15:00 GMT"
}
] | 1,538,524,800,000 | [
[
"Martin",
"Andrea E.",
""
],
[
"Doumas",
"Leonidas A. A.",
""
]
] |
1810.01257 | Ofir Nachum | Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine | Near-Optimal Representation Learning for Hierarchical Reinforcement
Learning | ICLR 2019 Conference Paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of representation learning in goal-conditioned
hierarchical reinforcement learning. In such hierarchical structures, a
higher-level controller solves tasks by iteratively communicating goals which a
lower-level policy is trained to reach. Accordingly, the choice of
representation -- the mapping of observation space to goal space -- is crucial.
To study this problem, we develop a notion of sub-optimality of a
representation, defined in terms of expected reward of the optimal hierarchical
policy using this representation. We derive expressions which bound the
sub-optimality and show how these expressions can be translated to
representation learning objectives which may be optimized in practice. Results
on a number of difficult continuous-control tasks show that our approach to
representation learning yields qualitatively better representations as well as
quantitatively better hierarchical policies, compared to existing methods (see
videos at https://sites.google.com/view/representation-hrl).
| [
{
"version": "v1",
"created": "Tue, 2 Oct 2018 14:00:14 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Jan 2019 16:00:49 GMT"
}
] | 1,547,078,400,000 | [
[
"Nachum",
"Ofir",
""
],
[
"Gu",
"Shixiang",
""
],
[
"Lee",
"Honglak",
""
],
[
"Levine",
"Sergey",
""
]
] |
1810.01541 | Mihai Boicu | Mihai Boicu, Dorin Marcu, Gheorghe Tecuci, Lou Kaiser, Chirag
Uttamsingh, Navya Kalale | Co-Arg: Cogent Argumentation with Crowd Elicitation | Presented at AAAI FSS-18: Artificial Intelligence in Government and
Public Sector, Arlington, Virginia, USA | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents Co-Arg, a new type of cognitive assistant to an
intelligence analyst that enables the synergistic integration of analyst
imagination and expertise, computer knowledge and critical reasoning, and crowd
wisdom, to draw defensible and persuasive conclusions from masses of evidence
of all types, in a world that is changing all the time. Co-Arg's goal is to
improve the quality of the analytic results and enhance their understandability
for both experts and novices. The performed analysis is based on a sound and
transparent argumentation that links evidence to conclusions in a way that
shows very clearly how the conclusions have been reached, what evidence was
used and how, what is not known, and what assumptions have been made. The
analytic results are presented in a report that describes the analytic conclusion
and its probability, the main favoring and disfavoring arguments, the
justification of the key judgments and assumptions, and the missing information
that might increase the accuracy of the solution.
| [
{
"version": "v1",
"created": "Tue, 2 Oct 2018 23:41:43 GMT"
}
] | 1,538,611,200,000 | [
[
"Boicu",
"Mihai",
""
],
[
"Marcu",
"Dorin",
""
],
[
"Tecuci",
"Gheorghe",
""
],
[
"Kaiser",
"Lou",
""
],
[
"Uttamsingh",
"Chirag",
""
],
[
"Kalale",
"Navya",
""
]
] |
1810.01560 | Debarpita Santra | Debarpita Santra, Swapan Kumar Basu, Jyotsna Kumar Mandal, Subrata
Goswami | Rough set based lattice structure for knowledge representation in
medical expert systems: low back pain management case study | 34 pages, 2 figures, International Journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of medical knowledge representation is to capture the detailed domain
knowledge in a clinically efficient manner and to offer a reliable resolution
with the acquired knowledge. The knowledge base to be used by a medical expert
system should allow incremental growth with inclusion of updated knowledge over
the time. As knowledge is gathered from a variety of knowledge sources by
different knowledge engineers, the problem of redundancy is an important
concern here due to increased processing time of knowledge and occupancy of
large computational storage to accommodate all the gathered knowledge. Also
there may exist much inconsistent knowledge in the knowledge base. In this
paper, we have proposed a rough set based lattice structure for knowledge
representation in medical expert systems which overcomes the problem of
redundancy and inconsistency in knowledge and offers computational efficiency
with respect to both time and space. We have also generated an optimal set of
decision rules that would be used directly by the inference engine. The
reliability of each rule has been measured using a new metric called
credibility factor, and the certainty and coverage factors of a decision rule
have been re-defined. With a set of decision rules arranged in descending
order according to their reliability measures, the medical expert system will
consider the highly reliable and certain rules at first, then it would search
for the possible and uncertain rules at later stage, if recommended by
physicians. The proposed knowledge representation technique has been
illustrated using an example from the domain of low back pain. The proposed
scheme ensures completeness, consistency, integrity, non-redundancy, and ease
of access.
| [
{
"version": "v1",
"created": "Tue, 2 Oct 2018 17:44:16 GMT"
}
] | 1,538,611,200,000 | [
[
"Santra",
"Debarpita",
""
],
[
"Basu",
"Swapan Kumar",
""
],
[
"Mandal",
"Jyotsna Kumar",
""
],
[
"Goswami",
"Subrata",
""
]
] |
1810.01836 | Roberto Alonso | Roberto Alonso and Stephan G\"unnemann | Mining Contrasting Quasi-Clique Patterns | 10 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining dense quasi-cliques is a well-known clustering task with applications
ranging from social networks over collaboration graphs to document analysis.
Recent work has extended this task to multiple graphs; i.e. the goal is to find
groups of vertices highly dense among multiple graphs. In this paper, we argue
that in a multi-graph scenario the sparsity is valuable for knowledge
extraction as well. We introduce the concept of contrasting quasi-clique
patterns: a collection of vertices highly dense in one graph but highly sparse
(i.e. less connected) in a second graph. Thus, these patterns specifically
highlight the difference/contrast between the considered graphs. Based on our
novel model, we propose an algorithm that enables fast computation of
contrasting patterns by exploiting intelligent traversal and pruning
techniques. We showcase the potential of contrasting patterns on a variety of
synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 3 Oct 2018 16:42:33 GMT"
}
] | 1,538,611,200,000 | [
[
"Alonso",
"Roberto",
""
],
[
"Günnemann",
"Stephan",
""
]
] |
1810.01943 | Michael Hind | Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman,
Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep
Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy,
John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R.
Varshney, Yunfeng Zhang | AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and
Mitigating Unwanted Algorithmic Bias | 20 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fairness is an increasingly important concern as machine learning models are
used to support decision making in high-stakes applications such as mortgage
lending, hiring, and prison sentencing. This paper introduces a new open source
Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released
under an Apache v2.0 license {https://github.com/ibm/aif360). The main
objectives of this toolkit are to help facilitate the transition of fairness
research algorithms to use in an industrial setting and to provide a common
framework for fairness researchers to share and evaluate algorithms.
The package includes a comprehensive set of fairness metrics for datasets and
models, explanations for these metrics, and algorithms to mitigate bias in
datasets and models. It also includes an interactive Web experience
(https://aif360.mybluemix.net) that provides a gentle introduction to the
concepts and capabilities for line-of-business users, as well as extensive
documentation, usage guidance, and industry-specific tutorials to enable data
scientists and practitioners to incorporate the most appropriate tool for their
problem into their work products. The architecture of the package has been
engineered to conform to a standard paradigm used in data science, thereby
further improving usability for practitioners. Such architectural design and
abstractions enable researchers and developers to extend the toolkit with their
new algorithms and improvements, and to use it for performance benchmarking. A
built-in testing infrastructure maintains code quality.
| [
{
"version": "v1",
"created": "Wed, 3 Oct 2018 20:18:35 GMT"
}
] | 1,538,697,600,000 | [
[
"Bellamy",
"Rachel K. E.",
""
],
[
"Dey",
"Kuntal",
""
],
[
"Hind",
"Michael",
""
],
[
"Hoffman",
"Samuel C.",
""
],
[
"Houde",
"Stephanie",
""
],
[
"Kannan",
"Kalapriya",
""
],
[
"Lohia",
"Pranay",
""
],
[
"Martino",
"Jacquelyn",
""
],
[
"Mehta",
"Sameep",
""
],
[
"Mojsilovic",
"Aleksandra",
""
],
[
"Nagar",
"Seema",
""
],
[
"Ramamurthy",
"Karthikeyan Natesan",
""
],
[
"Richards",
"John",
""
],
[
"Saha",
"Diptikalyan",
""
],
[
"Sattigeri",
"Prasanna",
""
],
[
"Singh",
"Moninder",
""
],
[
"Varshney",
"Kush R.",
""
],
[
"Zhang",
"Yunfeng",
""
]
] |
1810.01982 | Junxuan Li | Junxuan Li and Yung-wen Liu and Yuting Jia and Jay Nanduri | Discriminative Data-driven Self-adaptive Fraud Control Decision System
with Incomplete Information | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While E-commerce has been growing explosively and online shopping has become
popular and even dominant in the present era, online transaction fraud control
has drawn considerable attention in business practice and academic research.
Conventional fraud control considers mainly the interactions of two major
involved decision parties, i.e. merchants and fraudsters, to make fraud
classification decision without paying much attention to dynamic looping effect
arose from the decisions made by other profit-related parties. This paper
proposes a novel fraud control framework that can quantify interactive effects
of decisions made by different parties and can adjust fraud control strategies
using data analytics, artificial intelligence, and dynamic optimization
techniques. Three control models, Naive, Myopic and Prospective Controls, were
developed based on the availability of data attributes and levels of label
maturity. The proposed models are purely data-driven and self-adaptive in a
real-time manner. The field test on Microsoft real online transaction data
suggested that new systems could sizably improve the company's profit.
| [
{
"version": "v1",
"created": "Wed, 3 Oct 2018 21:40:32 GMT"
},
{
"version": "v2",
"created": "Sat, 27 Jul 2019 23:20:55 GMT"
}
] | 1,564,444,800,000 | [
[
"Li",
"Junxuan",
""
],
[
"Liu",
"Yung-wen",
""
],
[
"Jia",
"Yuting",
""
],
[
"Nanduri",
"Jay",
""
]
] |
1810.02612 | Brian Paden | Brian Paden, Peng Liu, Schuyler Cullen | Accelerated Labeling of Discrete Abstractions for Autonomous Driving
Subject to LTL Specifications | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linear temporal logic and automaton-based run-time verification provide a
powerful framework for designing task and motion planning algorithms for
autonomous agents. The drawback to this approach is the computational cost of
operating on high resolution discrete abstractions of continuous dynamical
systems. In particular, the computational bottleneck that arises is converting
perceived environment variables into a labeling function on the states of a
Kripke structure or analogously the transitions of a labeled transition system.
This paper presents the design and empirical evaluation of an approach to
constructing the labeling function that exposes a large degree of parallelism
in the operation as well as efficient memory access patterns. The approach is
implemented on a commodity GPU and empirical results demonstrate the efficacy
of the labeling technique for real-time planning and decision-making.
| [
{
"version": "v1",
"created": "Fri, 5 Oct 2018 11:15:27 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Nov 2018 18:07:51 GMT"
}
] | 1,541,376,000,000 | [
[
"Paden",
"Brian",
""
],
[
"Liu",
"Peng",
""
],
[
"Cullen",
"Schuyler",
""
]
] |
1810.02869 | In\`es Osman | In\`es Osman | A New Method for the Semantic Integration of Multiple OWL Ontologies
using Alignments | supervised by Marouen Kachroudi and Sadok Ben Yahia, in French | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work is done as part of a master's thesis project. The goal is to
integrate two or more ontologies (of the same or closely related domains) into a new
consistent and coherent OWL ontology to insure semantic interoperability
between them. To do this, we have chosen to create a bridge ontology that
includes all source ontologies and their bridging axioms in a customized way.
In addition, we introduced a new criterion for obtaining an ontology of better
quality (having the minimum of semantic/logical conflicts). We have also
proposed new terminology and definitions that clarify the unclear and misplaced
"integration" and "merging" notions that are randomly used in state-of-the-art
works. Finally, we tested and evaluated our OIA2R tool using ontologies and
reference alignments of the OAEI campaign. The results show that it is
sufficiently generic, efficient and powerful.
| [
{
"version": "v1",
"created": "Fri, 5 Oct 2018 20:03:00 GMT"
}
] | 1,539,043,200,000 | [
[
"Osman",
"Inès",
""
]
] |
1810.03151 | Yimin Tang | Yimin Tang, Tian Jiang, Yanpeng Hu | A Minesweeper Solver Using Logic Inference, CSP and Sampling | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Minesweeper is a puzzle video game that has been proved to be an NP-complete
problem. We build a Minesweeper solver using CSP, logic inference and sampling,
limiting each selection to 5 seconds.
| [
{
"version": "v1",
"created": "Sun, 7 Oct 2018 14:26:11 GMT"
}
] | 1,539,043,200,000 | [
[
"Tang",
"Yimin",
""
],
[
"Jiang",
"Tian",
""
],
[
"Hu",
"Yanpeng",
""
]
] |
1810.03981 | Minh Ho\`ang H\`a | Hoa Nguyen Phuong, Huyen Tran Ngoc Nhat, Minh Ho\`ang H\`a, Andr\'e
Langevin, Martin Tr\'epanier | Solving the clustered traveling salesman problem with d-relaxed priority
rule | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Clustered Traveling Salesman Problem with a Prespecified Order on the
Clusters, a variant of the well-known traveling salesman problem, is studied in
the literature. In this problem, delivery locations are divided into clusters with
different urgency levels and more urgent locations must be visited before less
urgent ones. However, this could lead to an inefficient route in terms of
traveling cost. This priority-oriented constraint can be relaxed by a rule
called d-relaxed priority that provides a trade-off between transportation cost
and emergency level. Our research proposes two approaches to solve the problem
with d-relaxed priority rule. We improve the mathematical formulation proposed
in the literature to construct an exact solution method. A meta-heuristic
method based on the framework of Iterated Local Search with problem-tailored
operators is also introduced to find approximate solutions. Experimental
results show the effectiveness of our methods.
| [
{
"version": "v1",
"created": "Sat, 6 Oct 2018 17:24:11 GMT"
}
] | 1,539,129,600,000 | [
[
"Phuong",
"Hoa Nguyen",
""
],
[
"Nhat",
"Huyen Tran Ngoc",
""
],
[
"Hà",
"Minh Hoàng",
""
],
[
"Langevin",
"André",
""
],
[
"Trépanier",
"Martin",
""
]
] |
1810.04053 | J.-M. Chauvet | Jean-Marie Chauvet | The 30-Year Cycle In The AI Debate | 31 pages, 5 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the last couple of years, the rise of Artificial Intelligence and the
successes of academic breakthroughs in the field have been inescapable. Vast
sums of money have been thrown at AI start-ups. Many existing tech companies --
including the giants like Google, Amazon, Facebook, and Microsoft -- have
opened new research labs. The rapid changes in these everyday work and
entertainment tools have fueled a rising interest in the underlying technology
itself; journalists write about AI tirelessly, and companies -- of tech nature
or not -- brand themselves with AI, Machine Learning or Deep Learning whenever
they get a chance. Confronting squarely this media coverage, several analysts
are starting to voice concerns about over-interpretation of AI's blazing
successes and the sometimes poor public reporting on the topic. This paper
reviews briefly the track-record in AI and Machine Learning and finds this
pattern of early dramatic successes, followed by philosophical critique and
unexpected difficulties, if not downright stagnation, returning almost like
clockwork in 30-year cycles since 1958.
| [
{
"version": "v1",
"created": "Mon, 8 Oct 2018 16:35:06 GMT"
}
] | 1,539,129,600,000 | [
[
"Chauvet",
"Jean-Marie",
""
]
] |