id (string, 9-10 chars) | submitter (string, 5-47 chars, nullable) | authors (string, 5-1.72k chars) | title (string, 11-234 chars) | comments (string, 1-491 chars, nullable) | journal-ref (string, 4-396 chars, nullable) | doi (string, 13-97 chars, nullable) | report-no (string, 4-138 chars, nullable) | categories (string, 1 class) | license (string, 9 classes) | abstract (string, 29-3.66k chars) | versions (list, 1-21 items) | update_date (int64, 1,180B-1,718B) | authors_parsed (sequence, 1-98 items) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cs/0605055 | Pierre Bessiere | David Bellot (INRIA Rh\^one-Alpes / Gravir-Imag), Pierre Bessiere
(INRIA Rh\^one-Alpes / Gravir-Imag) | Approximate Discrete Probability Distribution Representation using a
Multi-Resolution Binary Tree | null | International Conference on Tools for Artificial Intelligence -
ITCAI 2003, Sacramento, (2003) -- | null | null | cs.AI | null | Computing and storing probabilities is a hard problem as soon as one has to
deal with complex distributions over multiple random variables. The problem of
efficiently representing probability distributions is central to computational
efficiency in the field of probabilistic reasoning. The main problem arises
when dealing with joint probability distributions over a set of random
variables: they are always represented using huge probability arrays. In this
paper, a new method based on a binary-tree representation is introduced in
order to efficiently store very large joint distributions. Our approach
approximates any multidimensional joint distribution using an adaptive
discretization of the space. We make the assumption that the lower the
probability mass of a particular region of feature space, the larger the
discretization step. This assumption leads to a representation that is highly
optimized in terms of time and memory. Further advantages of our approach are
the ability to dynamically refine the distribution whenever needed, leading to
a more accurate representation of the probability distribution, and an anytime
representation of the distribution.
| [
{
"version": "v1",
"created": "Fri, 12 May 2006 13:32:50 GMT"
}
] | 1,471,305,600,000 | [
[
"Bellot",
"David",
"",
"INRIA Rhône-Alpes / Gravir-Imag"
],
[
"Bessiere",
"Pierre",
"",
"INRIA Rhône-Alpes / Gravir-Imag"
]
] |
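
A minimal Python sketch of the adaptive binary-tree discretization idea summarized in the record above, under the assumption stated in the abstract that low-mass regions get a coarser step; the 1-D setting, the function name `build_tree` and the mass threshold are illustrative choices of this sketch, not details taken from the paper.

```python
# Hypothetical sketch, not the authors' implementation: adaptive binary-tree
# discretization of a 1-D probability array. Low-mass regions are kept as
# single leaves (coarse step); high-mass regions are split further (fine step).

def build_tree(p, lo, hi, mass_threshold=0.1):
    """Return a nested dict approximating p[lo:hi] at adaptive resolution."""
    mass = sum(p[lo:hi])
    if hi - lo <= 1 or mass <= mass_threshold:
        # one leaf covers the whole region with a single averaged value
        return {"range": (lo, hi), "avg_prob": mass / (hi - lo)}
    mid = (lo + hi) // 2
    return {"range": (lo, hi),
            "left": build_tree(p, lo, mid, mass_threshold),
            "right": build_tree(p, mid, hi, mass_threshold)}

if __name__ == "__main__":
    # a peaked distribution: most of the mass sits in the middle cells
    p = [0.01, 0.01, 0.02, 0.06, 0.25, 0.30, 0.25, 0.06, 0.02, 0.01, 0.005, 0.005]
    print(build_tree(p, 0, len(p)))
```
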
cs/0605108 | Daowen Qiu | Fuchun Liu, Daowen Qiu, Hongyan Xing, and Zhujun Fan | Diagnosability of Fuzzy Discrete Event Systems | 14 pages; revisions have been made | null | 10.1007/978-3-540-74205-0_73 | null | cs.AI | null | In order to more effectively cope with the real-world problems of vagueness,
{\it fuzzy discrete event systems} (FDESs) were proposed recently, and the
supervisory control theory of FDESs was developed. In view of the importance of
failure diagnosis, in this paper, we present an approach of the failure
diagnosis in the framework of FDESs. More specifically: (1) We formalize the
definition of diagnosability for FDESs, in which the observable set and failure
set of events are {\it fuzzy}, that is, each event has a certain degree of
being observable and unobservable, and also each event may possess a different
possibility of failure occurring. (2) Through the construction of
observability-based diagnosers of FDESs, we investigate some of their basic
properties. In particular, we present a necessary and sufficient condition for
diagnosability of FDESs. (3) Some examples serving to illuminate the
applications of the diagnosability of FDESs are described. To conclude, some
related issues are raised for further consideration.
| [
{
"version": "v1",
"created": "Wed, 24 May 2006 15:49:06 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Dec 2006 04:04:13 GMT"
}
] | 1,435,190,400,000 | [
[
"Liu",
"Fuchun",
""
],
[
"Qiu",
"Daowen",
""
],
[
"Xing",
"Hongyan",
""
],
[
"Fan",
"Zhujun",
""
]
] |
cs/0605123 | Jaime Cardoso | Jaime S. Cardoso | Classification of Ordinal Data | 62 pages, MSc thesis | null | null | null | cs.AI | null | Classification of ordinal data is one of the most important tasks of relation
learning. In this thesis a novel framework for ordered classes is proposed. The
technique reduces the problem of classifying ordered classes to the standard
two-class problem. The introduced method is then mapped into support vector
machines and neural networks. Compared with a well-known approach using
pairwise objects as training samples, the new algorithm has a reduced
complexity and training time. A second novel model, the unimodal model, is also
introduced and a parametric version is mapped into neural networks. Several
case studies are presented to assert the validity of the proposed models.
| [
{
"version": "v1",
"created": "Fri, 26 May 2006 09:44:44 GMT"
}
] | 1,179,878,400,000 | [
[
"Cardoso",
"Jaime S.",
""
]
] |
cs/0606020 | Vadim Astakhov | Vadim Astakhov, Tamara Astakhova, Brian Sanders | Imagination as Holographic Processor for Text Animation | 10 pages, 10 figures, prototype presented at 4th International
Conference on Computer Science and its Applications (ICCSA-2006), paper
submitted to SIGCHI 2007 | null | null | null | cs.AI | null | Imagination is the critical point in developing realistic artificial
intelligence (AI) systems. One way to approach imagination would be to simulate
its properties and operations. We developed two models, AI-Brain Network
Hierarchy of Languages and Semantical Holographic Calculus, as well as a
simulation system, ScriptWriter, that emulates the process of imagination
through the automatic animation of English texts. The purpose of this paper is
to demonstrate the model and to present the ScriptWriter system
http://nvo.sdsc.edu/NVO/JCSG/get_SRB_mime_file2.cgi//home/tamara.sdsc/test/demo.zip?F=/home/tamara.sdsc/test/demo.zip&M=application/x-gtar
for simulation of the imagination.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2006 23:55:37 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Jan 2007 23:47:23 GMT"
}
] | 1,179,878,400,000 | [
[
"Astakhov",
"Vadim",
""
],
[
"Astakhova",
"Tamara",
""
],
[
"Sanders",
"Brian",
""
]
] |
cs/0606029 | Audun Josang | Audun Josang | Belief Calculus | 22 pages, 10 figures | null | null | null | cs.AI | null | In Dempster-Shafer belief theory, general beliefs are expressed as belief
mass distribution functions over frames of discernment. In Subjective Logic
beliefs are expressed as belief mass distribution functions over binary frames
of discernment. Belief representations in Subjective Logic, which are called
opinions, also contain a base rate parameter which expresses the a priori belief
in the absence of evidence. Philosophically, beliefs are quantitative
representations of evidence as perceived by humans or by other intelligent
agents. The basic operators of classical probability calculus, such as addition
and multiplication, can be applied to opinions, thereby making belief calculus
practical. Through the equivalence between opinions and Beta probability
density functions, this also provides a calculus for Beta probability density
functions. This article explains the basic elements of belief calculus.
| [
{
"version": "v1",
"created": "Wed, 7 Jun 2006 14:32:55 GMT"
}
] | 1,179,878,400,000 | [
[
"Josang",
"Audun",
""
]
] |
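
A hedged sketch of the opinion/Beta correspondence the record above refers to, following the mapping commonly presented in Josang's papers; the prior weight of 2 and the variable names are assumptions of this sketch rather than details quoted from the record.

```python
# An opinion is (belief b, disbelief d, uncertainty u, base rate a) with
# b + d + u = 1 and u > 0. The mapping below (prior weight 2 is an assumed
# convention) turns it into the parameters of a Beta distribution.

def opinion_to_beta(b, d, u, a, prior_weight=2.0):
    r = prior_weight * b / u          # amount of positive evidence
    s = prior_weight * d / u          # amount of negative evidence
    alpha = r + prior_weight * a
    beta = s + prior_weight * (1.0 - a)
    return alpha, beta

def expected_probability(b, d, u, a):
    return b + a * u                  # equals alpha / (alpha + beta)

alpha, beta = opinion_to_beta(0.6, 0.2, 0.2, 0.5)
print(alpha, beta)                                      # 7.0 3.0
print(expected_probability(0.6, 0.2, 0.2, 0.5))          # 0.7 == 7 / (7 + 3)
```
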
cs/0606066 | Audun Josang | Audun Josang | The Cumulative Rule for Belief Fusion | null | null | null | null | cs.AI | null | The problem of combining beliefs in the Dempster-Shafer belief theory has
attracted considerable attention over the last two decades. The classical
Dempster's Rule has often been criticised, and many alternative rules for
belief combination have been proposed in the literature. The consensus operator
for combining beliefs has nice properties and produces more intuitive results
than Dempster's rule, but has the limitation that it can only be applied to
belief distribution functions on binary state spaces. In this paper we present
a generalisation of the consensus operator that can be applied to Dirichlet
belief functions on state spaces of arbitrary size. This rule, called the
cumulative rule of belief combination, can be derived from classical
statistical theory, and corresponds well with human intuition.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2006 11:36:06 GMT"
}
] | 1,179,878,400,000 | [
[
"Josang",
"Audun",
""
]
] |
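
An illustrative sketch, under the assumption that cumulative fusion of Dirichlet belief functions amounts to adding the sources' evidence vectors state by state (the classical statistical reading the abstract points to); the belief-mass form of the operator and its exact notation are not reproduced here, and the evidence counts and base rates below are invented.

```python
# Hypothetical sketch of cumulative fusion over a 3-state frame: evidence
# counts from two independent sources are added per state. The prior weight
# of 2 and uniform base rates are illustrative assumptions.

def cumulative_fuse(evidence_a, evidence_b):
    return [ra + rb for ra, rb in zip(evidence_a, evidence_b)]

def expected_probabilities(evidence, base_rates, prior_weight=2.0):
    total = sum(evidence) + prior_weight
    return [(r + prior_weight * a) / total for r, a in zip(evidence, base_rates)]

fused = cumulative_fuse([4.0, 1.0, 0.0], [2.0, 2.0, 1.0])
print(fused)                                             # [6.0, 3.0, 1.0]
print(expected_probabilities(fused, [1/3, 1/3, 1/3]))    # sums to 1
```
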
cs/0606081 | Juergen Schmidhuber | Juergen Schmidhuber | New Millennium AI and the Convergence of History | Speed Prior: clarification / 15 pages, to appear in "Challenges to
Computational Intelligence" | null | null | IDSIA-14-06 | cs.AI | null | Artificial Intelligence (AI) has recently become a real formal science: the
new millennium brought the first mathematically sound, asymptotically optimal,
universal problem solvers, providing a new, rigorous foundation for the
previously largely heuristic field of General AI and embedded agents. At the
same time there has been rapid progress in practical methods for learning true
sequence-processing programs, as opposed to traditional methods limited to
stationary pattern association. Here we will briefly review some of the new
results, and speculate about future developments, pointing out that the time
intervals between the most notable events in over 40,000 years or 2^9 lifetimes
of human history have sped up exponentially, apparently converging to zero
within the next few decades. Or is this impression just a by-product of the way
humans allocate memory space to past events?
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2006 09:13:43 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jun 2006 14:35:37 GMT"
},
{
"version": "v3",
"created": "Thu, 29 Jun 2006 10:05:19 GMT"
}
] | 1,179,878,400,000 | [
[
"Schmidhuber",
"Juergen",
""
]
] |
cs/0607005 | Florentin Smarandache | Florentin Smarandache, Jean Dezert | Belief Conditioning Rules (BCRs) | 26 pages | null | null | null | cs.AI | null | In this paper we propose a new family of Belief Conditioning Rules (BCRs) for
belief revision. These rules are not directly related to the fusion of
several sources of evidence but to the revision of a belief assignment
available at a given time according to the new truth (i.e. conditioning
constraint) one has about the space of solutions of the problem.
| [
{
"version": "v1",
"created": "Sun, 2 Jul 2006 14:54:54 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Nov 2006 19:14:21 GMT"
}
] | 1,179,878,400,000 | [
[
"Smarandache",
"Florentin",
""
],
[
"Dezert",
"Jean",
""
]
] |
cs/0607071 | Peter J. Stuckey | H. Fang, Y. Kilani, J.H.M. Lee, and P.J. Stuckey | Islands for SAT | 7 pages | null | null | null | cs.AI | null | In this note we introduce the notion of islands for restricting local search.
We show how we can construct islands for CNF SAT problems, and how much search
space can be eliminated by restricting search to the island.
| [
{
"version": "v1",
"created": "Fri, 14 Jul 2006 12:44:24 GMT"
}
] | 1,179,878,400,000 | [
[
"Fang",
"H.",
""
],
[
"Kilani",
"Y.",
""
],
[
"Lee",
"J. H. M.",
""
],
[
"Stuckey",
"P. J.",
""
]
] |
cs/0607084 | Farid Nouioua | Daniel Kayser (LIPN), Farid Nouioua (LIPN) | About Norms and Causes | null | The 17th FLAIRS'04 Conference (2004) 502-507 | null | null | cs.AI | null | Knowing the norms of a domain is crucial, but there exists no repository of
norms. We propose a method to extract them from texts: texts generally do not
describe a norm, but rather how a state-of-affairs differs from it. Answers
concerning the cause of the state-of-affairs described often reveal the
implicit norm. We apply this idea to the domain of driving, and validate it by
designing algorithms that identify, in a text, the "basic" norms to which it
refers implicitly.
| [
{
"version": "v1",
"created": "Tue, 18 Jul 2006 07:46:07 GMT"
}
] | 1,179,878,400,000 | [
[
"Kayser",
"Daniel",
"",
"LIPN"
],
[
"Nouioua",
"Farid",
"",
"LIPN"
]
] |
cs/0607086 | Farid Nouioua | Daniel Kayser (LIPN), Farid Nouioua (LIPN) | Representing Knowledge about Norms | null | The 16th European Conference on Artificial Intelligence (ECAI'04)
(2004) 363-367 | null | null | cs.AI | null | Norms are essential to extend inference: inferences based on norms are far
richer than those based on logical implications. In the recent decades, much
effort has been devoted to reason on a domain, once its norms are represented.
How to extract and express those norms has received far less attention.
Extraction is difficult: as the readers are supposed to know them, the norms of
a domain are seldom made explicit. For one thing, extracting norms requires a
language to represent them, and this is the topic of this paper. We apply this
language to represent norms in the domain of driving, and show that it is
adequate to reason on the causes of accidents, as described by car-crash
reports.
| [
{
"version": "v1",
"created": "Tue, 18 Jul 2006 08:15:04 GMT"
}
] | 1,179,878,400,000 | [
[
"Kayser",
"Daniel",
"",
"LIPN"
],
[
"Nouioua",
"Farid",
"",
"LIPN"
]
] |
cs/0607143 | Florentin Smarandache | Jean Dezert, Albena Tchamova, Florentin Smarandache, Pavlina
Konstantinova | Target Type Tracking with PCR5 and Dempster's rules: A Comparative
Analysis | 10 pages, 5 diagrams. Presented to Fusion 2006 International
Conference, Florence, Italy, July 2006 | Proceedings of Fusion 2006 International Conference, Florence,
Italy, July 2006 | null | null | cs.AI | null | In this paper we consider and analyze the behavior of two combinational rules
for temporal (sequential) attribute data fusion for target type estimation. Our
comparative analysis is based on Dempster's fusion rule proposed in
Dempster-Shafer Theory (DST) and on the Proportional Conflict Redistribution
rule no. 5 (PCR5) recently proposed in Dezert-Smarandache Theory (DSmT). We
show, through a very simple scenario and Monte-Carlo simulation, how PCR5
allows very efficient Target Type Tracking and drastically reduces the latency
delay for correct Target Type decisions with respect to Dempster's rule. For
cases presenting some short Target Type switches, Dempster's rule is proved to
be unable to detect the switches and thus to correctly track the Target Type
changes. The approach proposed here is totally new, efficient and promising to
be incorporated in real-time Generalized Data Association - Multi Target
Tracking systems (GDA-MTT) and provides an important result on the behavior of
PCR5 with respect to Dempster's rule. The MatLab source code is provided in
| [
{
"version": "v1",
"created": "Mon, 31 Jul 2006 15:32:44 GMT"
}
] | 1,179,878,400,000 | [
[
"Dezert",
"Jean",
""
],
[
"Tchamova",
"Albena",
""
],
[
"Smarandache",
"Florentin",
""
],
[
"Konstantinova",
"Pavlina",
""
]
] |
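
For reference, a small Python sketch of the classical Dempster combination rule that the record above uses as its baseline; PCR5 itself is not reproduced here, and the two belief assignments below are invented for illustration.

```python
# Classical Dempster combination of two basic belief assignments, each given
# as a dict from a frozenset of hypotheses to its mass.

from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m1 = {frozenset({"friend"}): 0.8, frozenset({"friend", "hostile"}): 0.2}
m2 = {frozenset({"hostile"}): 0.6, frozenset({"friend", "hostile"}): 0.4}
print(dempster_combine(m1, m2))
```
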
cs/0607147 | Florentin Smarandache | Florentin Smarandache, Jean Dezert | Fusion of qualitative beliefs using DSmT | 13 pages. To appear in "Advances and Applications of DSmT for
Information Fusion", collected works, second volume, 2006 | Presented as an extended version (Tutorial MO2) to the Fusion 2006
International Conference, Florence, Italy, July 10-13, 2006 | null | null | cs.AI | null | This paper introduces the notion of qualitative belief assignment to model
beliefs of human experts expressed in natural language (with linguistic
labels). We show how qualitative beliefs can be efficiently combined using an
extension of Dezert-Smarandache Theory (DSmT) of plausible and paradoxical
quantitative reasoning to qualitative reasoning. We propose a new arithmetic on
linguistic labels which allows a direct extension of classical DSm fusion rule
or DSm Hybrid rules. An approximate qualitative PCR5 rule is also proposed
jointly with a Qualitative Average Operator. We also show how crisp or interval
mappings can be used to deal indirectly with linguistic labels. A very simple
example is provided to illustrate our qualitative fusion rules.
| [
{
"version": "v1",
"created": "Mon, 31 Jul 2006 17:16:57 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Nov 2006 19:16:48 GMT"
}
] | 1,179,878,400,000 | [
[
"Smarandache",
"Florentin",
""
],
[
"Dezert",
"Jean",
""
]
] |
cs/0608002 | Florentin Smarandache | Florentin Smarandache, Jean Dezert | An Introduction to the DSm Theory for the Combination of Paradoxical,
Uncertain, and Imprecise Sources of Information | 21 pages, many tables, figures. To appear in Information&Security
International Journal, 2006 | Presented at 13th International Congress of Cybernetics and
Systems, Maribor, Slovenia, July 6-10, 2005. | null | null | cs.AI | null | The management and combination of uncertain, imprecise, fuzzy and even
paradoxical or high conflicting sources of information has always been, and
still remains today, of primal importance for the development of reliable
modern information systems involving artificial reasoning. In this
introduction, we present a survey of our recent theory of plausible and
paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT) in the
literature, developed for dealing with imprecise, uncertain and paradoxical
sources of information. We focus our presentation here on the foundations of
DSmT and on the two important new rules of combination, rather than on browsing
specific applications of DSmT available in the literature. Several simple
examples are given throughout the presentation to show the efficiency and the
generality of this new approach.
| [
{
"version": "v1",
"created": "Tue, 1 Aug 2006 15:31:13 GMT"
}
] | 1,179,878,400,000 | [
[
"Smarandache",
"Florentin",
""
],
[
"Dezert",
"Jean",
""
]
] |
cs/0608019 | Sebastian Brand | Sebastian Brand | Relation Variables in Qualitative Spatial Reasoning | 14 pages; 27th German Conference on Artificial Intelligence (KI'04) | null | null | null | cs.AI | null | We study an alternative to the prevailing approach to modelling qualitative
spatial reasoning (QSR) problems as constraint satisfaction problems. In the
standard approach, a relation between objects is a constraint whereas in the
alternative approach it is a variable. The relation-variable approach greatly
simplifies integration and implementation of QSR. To substantiate this point,
we discuss several QSR algorithms from the literature which in the
relation-variable approach reduce to the customary constraint propagation
algorithm enforcing generalised arc-consistency.
| [
{
"version": "v1",
"created": "Thu, 3 Aug 2006 03:24:24 GMT"
}
] | 1,179,878,400,000 | [
[
"Brand",
"Sebastian",
""
]
] |
cs/0608028 | Joseph Y. Halpern | Joseph Y. Halpern | Using Sets of Probability Measures to Represent Uncertainty | null | null | null | null | cs.AI | null | I explore the use of sets of probability measures as a representation of
uncertainty.
| [
{
"version": "v1",
"created": "Fri, 4 Aug 2006 20:26:25 GMT"
}
] | 1,179,878,400,000 | [
[
"Halpern",
"Joseph Y.",
""
]
] |
cs/0609111 | Tran Cao Son | Le-Chi Tuan, Chitta Baral, Tran Cao Son | A State-Based Regression Formulation for Domains with Sensing
Actions and Incomplete Information | 34 pages, 7 Figures | Logical Methods in Computer Science, Volume 2, Issue 4 (October 2,
2006) lmcs:2238 | 10.2168/LMCS-2(4:2)2006 | null | cs.AI | null | We present a state-based regression function for planning domains where an
agent does not have complete information and may have sensing actions. We
consider binary domains and employ a three-valued characterization of domains
with sensing actions to define the regression function. We prove the soundness
and completeness of our regression formulation with respect to the definition
of progression. More specifically, we show that (i) a plan obtained through
regression for a planning problem is indeed a progression solution of that
planning problem, and that (ii) for each plan found through progression, using
regression one obtains that plan or an equivalent one.
| [
{
"version": "v1",
"created": "Tue, 19 Sep 2006 21:33:07 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2006 23:22:12 GMT"
}
] | 1,484,092,800,000 | [
[
"Tuan",
"Le-Chi",
""
],
[
"Baral",
"Chitta",
""
],
[
"Son",
"Tran Cao",
""
]
] |
cs/0609132 | Jochen Gruber | Jochen Gruber | Semantic Description of Parameters in Web Service Annotations | 7 pages, 3 figures | null | null | null | cs.AI | null | A modification of OWL-S regarding parameter description is proposed. It is
strictly based on Description Logic. In addition to class description of
parameters it also allows the modelling of relations between parameters and the
precise description of the size of data to be supplied to a service. In
particular, it solves two major issues identified within current proposals for
a Semantic Web Service annotation standard.
| [
{
"version": "v1",
"created": "Sun, 24 Sep 2006 17:29:49 GMT"
}
] | 1,179,878,400,000 | [
[
"Gruber",
"Jochen",
""
]
] |
cs/0609136 | Adeline Nazarenko | Adeline Nazarenko (LIPN), Erick Alphonse (LIPN), Julien Derivi\`ere
(LIPN), Thierry Hamon (LIPN), Guillaume Vauvert (LIPN), Davy Weissenbacher
(LIPN) | The ALVIS Format for Linguistically Annotated Documents | null | Proceedings of the fifth international conference on Language
Resources and Evaluation, LREC 2006 (2006) 1782-1786 | null | null | cs.AI | null | The paper describes the ALVIS annotation format designed for the indexing of
large collections of documents in topic-specific search engines. The paper is
exemplified on the biological domain and on MedLine abstracts, as developing a
specialized search engine for biologists is one of the ALVIS case studies. The
ALVIS principle for linguistic annotations is based on existing work and
standard proposals. We made the choice of stand-off annotations rather than
inserted mark-up. Annotations are encoded as XML elements which form the
linguistic subsection of the document record.
| [
{
"version": "v1",
"created": "Sun, 24 Sep 2006 20:04:01 GMT"
}
] | 1,471,305,600,000 | [
[
"Nazarenko",
"Adeline",
"",
"LIPN"
],
[
"Alphonse",
"Erick",
"",
"LIPN"
],
[
"Derivière",
"Julien",
"",
"LIPN"
],
[
"Hamon",
"Thierry",
"",
"LIPN"
],
[
"Vauvert",
"Guillaume",
"",
"LIPN"
],
[
"Weissenbacher",
"Davy",
"",
"LIPN"
]
] |
cs/0609142 | Bruno Scherrer | Bruno Scherrer (INRIA Lorraine - LORIA) | Modular self-organization | null | null | null | null | cs.AI | null | The aim of this paper is to provide a sound framework for addressing a
difficult problem: the automatic construction of an autonomous agent's modular
architecture. We combine results from two apparently uncorrelated domains:
Autonomous planning through Markov Decision Processes and a General Data
Clustering Approach using a kernel-like method. Our fundamental idea is that
the former is a good framework for addressing autonomy whereas the latter
makes it possible to tackle self-organization problems.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2006 07:52:54 GMT"
}
] | 1,179,878,400,000 | [
[
"Scherrer",
"Bruno",
"",
"INRIA Lorraine - LORIA"
]
] |
cs/0610006 | Adrian Paschke | Adrian Paschke | A Typed Hybrid Description Logic Programming Language with Polymorphic
Order-Sorted DL-Typed Unification for Semantic Web Type Systems | Full technical report 12/05. Published inn: Proc. of 2nd Int.
Workshop on OWL: Experiences and Directions 2006 (OWLED'06) at ISWC'06,
Athens, Georgia, USA, 2006 | In: Proc. of 2nd Int. Workshop on OWL: Experiences and Directions
2006 (OWLED'06) at ISWC'06, Athens, Georgia, USA, 2006 | null | null | cs.AI | null | In this paper we elaborate on a specific application in the context of hybrid
description logic programs (hybrid DLPs), namely description logic Semantic Web
type systems (DL-types) which are used for term typing of LP rules based on a
polymorphic, order-sorted, hybrid DL-typed unification as procedural semantics
of hybrid DLPs. Using Semantic Web ontologies as type systems facilitates
interchange of domain-independent rules over domain boundaries via dynamic
typing and mapping of explicitly defined type ontologies.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2006 08:57:54 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Apr 2007 21:36:44 GMT"
}
] | 1,179,878,400,000 | [
[
"Paschke",
"Adrian",
""
]
] |
cs/0610015 | Farid Nouioua | Farid Nouioua (LIPN) | Why did the accident happen? A norm-based reasoning approach | null | Logical Aspects of Computational Linguistics, student
session, Universit\'{e} de Bordeaux (Ed.) (2005) 31-34 | null | null | cs.AI | null | In this paper we describe the architecture of a system that answers the
question "Why did the accident happen?" from the textual description of an
accident. We briefly present the different parts of the architecture and then
describe in more detail the semantic part of the system, i.e. the part in
which the norm-based reasoning is performed on the explicit knowledge extracted
from the text.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2006 11:32:22 GMT"
}
] | 1,179,878,400,000 | [
[
"Nouioua",
"Farid",
"",
"LIPN"
]
] |
cs/0610023 | Farid Nouioua | Farid Nouioua (LIPN), Daniel Kayser (LIPN) | Une exp\'{e}rience de s\'{e}mantique inf\'{e}rentielle | null | Actes de TALN'06, UCL Presses Universitaires de Louvain (Ed.) (2006)
246-255 | null | null | cs.AI | null | We develop a system which must be able to perform the same inferences that a
human reader of an accident report can draw, and more particularly to determine
the apparent causes of the accident. We describe the general framework in which
we work, the linguistic and semantic levels of the analysis, and the inference
rules used by the system.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2006 05:18:03 GMT"
}
] | 1,179,878,400,000 | [
[
"Nouioua",
"Farid",
"",
"LIPN"
],
[
"Kayser",
"Daniel",
"",
"LIPN"
]
] |
cs/0610043 | Zengyou He | Zengyou He | Farthest-Point Heuristic based Initialization Methods for K-Modes
Clustering | 7 pages | null | null | null | cs.AI | null | The k-modes algorithm has become a popular technique in solving categorical
data clustering problems in different application domains. However, the
algorithm requires random selection of initial points for the clusters.
Different initial points often lead to considerably distinct clustering
results. In this paper we present an experimental study on applying a
farthest-point heuristic based initialization method to k-modes clustering to
improve its performance. Experiments show that the new initialization method
leads to better clustering accuracy than the random selection initialization
method for k-modes clustering.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2006 12:12:45 GMT"
}
] | 1,179,878,400,000 | [
[
"He",
"Zengyou",
""
]
] |
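
A minimal sketch of the farthest-point heuristic for choosing initial modes on categorical data, assuming the simple matching (mismatch count) dissimilarity and an arbitrary first centre; the paper's exact variant and its experimental evaluation are not reproduced.

```python
# Farthest-point initialization for k-modes: repeatedly pick the record
# whose distance to its nearest already-chosen centre is largest.

def mismatch(x, y):
    return sum(a != b for a, b in zip(x, y))

def farthest_point_init(data, k):
    centres = [data[0]]                       # assumed (arbitrary) first centre
    while len(centres) < k:
        nxt = max(data, key=lambda rec: min(mismatch(rec, c) for c in centres))
        centres.append(nxt)
    return centres

data = [("red", "s", "x"), ("red", "m", "x"), ("blue", "l", "y"), ("green", "l", "y")]
print(farthest_point_init(data, 2))
```
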
cs/0610060 | Mark Levene | Mark Levene and Judit Bar-Ilan | Comparing Typical Opening Move Choices Made by Humans and Chess Engines | 12 pages, 1 figure, 6 tables | null | null | null | cs.AI | null | The opening book is an important component of a chess engine, and thus
computer chess programmers have been developing automated methods to improve
the quality of their books. For chess, which has a very rich opening theory,
large databases of high-quality games can be used as the basis of an opening
book, from which statistics relating to move choices from given positions can
be collected. In order to find out whether the opening books used by modern
chess engines in machine versus machine competitions are ``comparable'' to
those used by chess players in human versus human competitions, we carried out
analysis on 26 test positions using statistics from two opening books, one
compiled from humans' games and the other from machines' games. Our analysis,
using several nonparametric measures, shows that, overall, there is a strong
association between humans' and machines' choices of opening moves when using a
book to guide their choices.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2006 10:26:40 GMT"
}
] | 1,179,878,400,000 | [
[
"Levene",
"Mark",
""
],
[
"Bar-Ilan",
"Judit",
""
]
] |
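
A small illustrative computation of the kind of nonparametric association the record above describes: Spearman's rank correlation between the frequencies with which humans and engines choose the candidate moves of one position. The move counts below are invented, and the tie-free shortcut formula is an assumption of this sketch, not the paper's exact set of measures.

```python
# Spearman's rho via the tie-free shortcut 1 - 6*sum(d^2) / (n*(n^2 - 1)).

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

human_counts = [120, 80, 40, 15, 5]    # hypothetical move frequencies, human book
engine_counts = [150, 30, 60, 20, 2]   # hypothetical move frequencies, engine book
print(spearman_rho(human_counts, engine_counts))   # 0.9
```
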
cs/0610111 | Devavrat Shah | Kyomin Jung and Devavrat Shah | Local approximate inference algorithms | 21 pages, 10 figures | null | null | null | cs.AI | null | We present a new local approximation algorithm for computing Maximum a
Posteriori (MAP) estimates and the log-partition function for an arbitrary
exponential family distribution represented by a finite-valued pair-wise Markov
random field (MRF), say $G$. Our algorithm is based on decomposing $G$ into
{\em appropriately} chosen small components, computing estimates locally in
each of these components, and then producing a {\em good} global solution. We
show that if the underlying graph $G$ either excludes some finite-sized graph
as its minor (e.g. planar graphs) or has low doubling dimension (e.g. any graph
with {\em geometry}), then our algorithm will produce solutions for both
questions within {\em arbitrary accuracy}. We present a message-passing
implementation of our algorithm for MAP computation using self-avoiding walk of
graph. In order to evaluate the computational cost of this implementation, we
derive novel tight bounds on the size of self-avoiding walk tree for arbitrary
graph.
As a consequence of our algorithmic result, we show that the normalized
log-partition function (also known as free-energy) for a class of {\em regular}
MRFs will converge to a limit, that is computable to an arbitrary accuracy.
| [
{
"version": "v1",
"created": "Wed, 18 Oct 2006 21:51:44 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Feb 2007 13:27:12 GMT"
},
{
"version": "v3",
"created": "Wed, 3 Oct 2007 00:04:51 GMT"
}
] | 1,191,369,600,000 | [
[
"Jung",
"Kyomin",
""
],
[
"Shah",
"Devavrat",
""
]
] |
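
For orientation, a brute-force computation of the exact log-partition function of a tiny binary pairwise MRF; this is only the quantity the record above approximates locally, not the proposed algorithm, and the potentials below are made up.

```python
# Exact log-partition function of a small binary pairwise MRF by enumeration.

import math
from itertools import product

def log_partition(nodes, node_pot, edge_pot):
    """node_pot[i][x_i] and edge_pot[(i, j)][x_i][x_j] are positive potentials."""
    z = 0.0
    for assignment in product([0, 1], repeat=len(nodes)):
        w = 1.0
        for i in nodes:
            w *= node_pot[i][assignment[i]]
        for (i, j), pot in edge_pot.items():
            w *= pot[assignment[i]][assignment[j]]
        z += w
    return math.log(z)

nodes = [0, 1, 2]
node_pot = {0: [1.0, 2.0], 1: [1.5, 0.5], 2: [1.0, 1.0]}
edge_pot = {(0, 1): [[2.0, 1.0], [1.0, 2.0]], (1, 2): [[2.0, 1.0], [1.0, 2.0]]}
print(log_partition(nodes, node_pot, edge_pot))
```
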
cs/0610140 | Leonid Makarov | Leonid Makarov and Peter Komarov | Constant for associative patterns ensemble | 6 pages | null | null | null | cs.AI | null | A procedure for creating an ensemble of associative patterns in terms of
formal logic, using a neural network (NN) model, is formulated. It is shown
that the set of associative patterns is created by means of a unique NN
procedure that has individual parameters for transforming the input stimulus.
It is ascertained that the number of selected associative patterns is a
constant.
| [
{
"version": "v1",
"created": "Tue, 24 Oct 2006 16:27:09 GMT"
}
] | 1,179,878,400,000 | [
[
"Makarov",
"Leonid",
""
],
[
"Komarov",
"Peter",
""
]
] |
cs/0610156 | Fadi Badra | Mathieu D'Aquin (INRIA Lorraine - LORIA, KMI), Fadi Badra (INRIA
Lorraine - LORIA), Sandrine Lafrogne (INRIA Lorraine - LORIA), Jean Lieber
(INRIA Lorraine - LORIA), Amedeo Napoli (INRIA Lorraine - LORIA), Laszlo
Szathmary (INRIA Lorraine - LORIA) | Adaptation Knowledge Discovery from a Case Base | null | Proceedings of the 17th European Conference on Artificial
Intelligence (ECAI-06), Trento G. Brewka (Ed.) (2006) 795--796 | null | null | cs.AI | null | In case-based reasoning, the adaptation step depends in general on
domain-dependent knowledge, which motivates studies on adaptation knowledge
acquisition (AKA). CABAMAKA is an AKA system based on principles of knowledge
discovery from databases. This system explores the variations within the case
base to elicit adaptation knowledge. It has been successfully tested in an
application of case-based decision support to breast cancer treatment.
| [
{
"version": "v1",
"created": "Fri, 27 Oct 2006 10:08:32 GMT"
}
] | 1,179,878,400,000 | [
[
"D'Aquin",
"Mathieu",
"",
"INRIA Lorraine - LORIA, KMI"
],
[
"Badra",
"Fadi",
"",
"INRIA\n Lorraine - LORIA"
],
[
"Lafrogne",
"Sandrine",
"",
"INRIA Lorraine - LORIA"
],
[
"Lieber",
"Jean",
"",
"INRIA Lorraine - LORIA"
],
[
"Napoli",
"Amedeo",
"",
"INRIA Lorraine - LORIA"
],
[
"Szathmary",
"Laszlo",
"",
"INRIA Lorraine - LORIA"
]
] |
cs/0610165 | Daowen Qiu | Fuchun Liu, Daowen Qiu, Hongyan Xing, and Zhujun Fan | Decentralized Failure Diagnosis of Stochastic Discrete Event Systems | 25 pages. Comments and criticisms are welcome | IEEE Transactions on Automatic Control, 53 (2) (2008) 535-546. | null | null | cs.AI | null | Recently, the diagnosability of {\it stochastic discrete event systems}
(SDESs) was investigated in the literature, and the failure diagnosis
considered was {\it centralized}. In this paper, we propose an approach to {\it
decentralized} failure diagnosis of SDESs, where the stochastic system uses
multiple local diagnosers to detect failures and each local diagnoser possesses
its own information. In a way, the centralized failure diagnosis of SDESs can
be viewed as a special case of the decentralized failure diagnosis presented in
this paper with only one projection. The main contributions are as follows: (1)
We formalize the notion of codiagnosability for stochastic automata, which
means that a failure can be detected by at least one local stochastic diagnoser
within a finite delay. (2) We construct a codiagnoser from a given stochastic
automaton with multiple projections, and the codiagnoser associated with the
local diagnosers is used to test codiagnosability condition of SDESs. (3) We
deal with a number of basic properties of the codiagnoser. In particular, a
necessary and sufficient condition for the codiagnosability of SDESs is
presented. (4) We give a computing method in detail to check whether
codiagnosability is violated. And (5) some examples are described to illustrate
the applications of the codiagnosability and its computing method.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2006 09:59:31 GMT"
}
] | 1,268,179,200,000 | [
[
"Liu",
"Fuchun",
""
],
[
"Qiu",
"Daowen",
""
],
[
"Xing",
"Hongyan",
""
],
[
"Fan",
"Zhujun",
""
]
] |
cs/0610175 | Florentin Smarandache | Jean Dezert, Florentin Smarandache | DSmT: A new paradigm shift for information fusion | 11 pages. Presented to Cogis06 International Conference, Paris,
France, 2006 | null | null | null | cs.AI | null | The management and combination of uncertain, imprecise, fuzzy and even
paradoxical or high conflicting sources of information has always been and
still remains of primal importance for the development of reliable information
fusion systems. In this short survey paper, we present the theory of plausible
and paradoxical reasoning, known as DSmT (Dezert-Smarandache Theory) in
literature, developed for dealing with imprecise, uncertain and potentially
highly conflicting sources of information. DSmT is a new paradigm shift for
information fusion and recent publications have shown the interest and the
potential ability of DSmT to solve fusion problems where Dempster's rule used
in Dempster-Shafer Theory (DST) provides counter-intuitive results or fails to
provide useful results at all. This paper is focused on the foundations of DSmT
and on its main rules of combination (classic, hybrid and Proportional Conflict
Redistribution rules). Shafer's model, on which DST is based, appears as a
particular and specific case of the DSm hybrid model, which can be easily
handled by DSmT as well. Several simple but illustrative examples are given throughout
this paper to show the interest and the generality of this new theory.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2006 14:50:06 GMT"
}
] | 1,179,878,400,000 | [
[
"Dezert",
"Jean",
""
],
[
"Smarandache",
"Florentin",
""
]
] |
cs/0611047 | Adrian Paschke | Adrian Paschke | The Reaction RuleML Classification of the Event / Action / State
Processing and Reasoning Space | The Reaction RuleML Classification of the Event / Action / State
Processing and Reasoning Space extracted from Paschke, A.: ECA-RuleML: An
Approach combining ECA Rules with temporal interval-based KR Event/Action
Logics and Transactional Update Logics, Internet-based Information Systems,
Technical University Munich, Technical Report 11 / 2005 | null | null | Paschke, A.: The Reaction RuleML Classification of the Event /
Action / State Processing and Reasoning Space, White Paper, October, 2006 | cs.AI | null | Reaction RuleML is a general, practical, compact and user-friendly
XML-serialized language for the family of reaction rules. In this white paper
we give a review of the history of event / action / state processing and
reaction rule approaches and systems in different domains, define basic
concepts, and give a classification of the event, action, state processing and
reasoning space, as well as a discussion of relevant / related work.
| [
{
"version": "v1",
"created": "Fri, 10 Nov 2006 22:03:11 GMT"
}
] | 1,179,878,400,000 | [
[
"Paschke",
"Adrian",
""
]
] |
cs/0611085 | Timothy McJunkin | Timothy R. McJunkin and Jill R. Scott | Fuzzy Logic Classification of Imaging Laser Desorption Fourier Transform
Mass Spectrometry Data | null | null | null | null | cs.AI | null | A fuzzy logic based classification engine has been developed for classifying
mass spectra obtained with an imaging internal source Fourier transform mass
spectrometer (I^2LD-FTMS). Traditionally, an operator uses the relative
abundance of ions with specific mass-to-charge (m/z) ratios to categorize
spectra. An operator does this by comparing the spectrum of m/z versus
abundance of an unknown sample against a library of spectra from known samples.
Automated positioning and acquisition allow I^2LD-FTMS to acquire data from
very large grids; this would require classification of up to 3600 spectra per
hour to keep pace with the acquisition. The tedious job of classifying numerous
spectra generated in an I^2LD-FTMS imaging application can be replaced by a
fuzzy rule base if the cues an operator uses can be encapsulated. We present
the translation of linguistic rules to a fuzzy classifier for mineral phases in
basalt. This paper also describes a method for gathering statistics on ions,
which are not currently used in the rule base, but which may be candidates for
making the rule base more accurate and complete or to form new rule bases based
on data obtained from known samples. A spatial method for classifying spectra
with low membership values, based on neighboring sample classifications, is
also presented.
| [
{
"version": "v1",
"created": "Fri, 17 Nov 2006 19:14:47 GMT"
}
] | 1,179,878,400,000 | [
[
"McJunkin",
"Timothy R.",
""
],
[
"Scott",
"Jill R.",
""
]
] |
cs/0611118 | Florentin Smarandache | Haibin Wang, Andre Rogatko, Florentin Smarandache, Rajshekhar
Sunderraman | A Neutrosophic Description Logic | 18 pages. Presented at the IEEE International Conference on Granular
Computing, Georgia State University, Atlanta, USA, May 2006 | Proceedings of 2006 IEEE International Conference on Granular
Computing, edited by Yan-Qing Zhang and Tsau Young Lin, Georgia State
University, Atlanta, pp. 305-308, 2006 | 10.1142/S1793005708001100 | null | cs.AI | null | Description Logics (DLs) are appropriate, widely used, logics for managing
structured knowledge. They allow reasoning about individuals and concepts, i.e.
sets of individuals with common properties. Typically, DLs are limited to
dealing with crisp, well-defined concepts, that is, concepts for which the
question whether an individual is an instance of them is a yes/no question.
More often than not, the concepts encountered in the real world do not have
precisely defined criteria of membership: we may say that an individual is an
instance of a concept only to a certain degree, depending on the individual's
properties. The DLs that deal with such fuzzy concepts are called fuzzy DLs. In
order to deal with fuzzy, incomplete, indeterminate and inconsistent concepts,
we need to extend the fuzzy DLs, combining the neutrosophic logic with a
classical DL. In particular, concepts become neutrosophic (here neutrosophic
means fuzzy, incomplete, indeterminate, and inconsistent), thus reasoning about
neutrosophic concepts is supported. We'll define its syntax, its semantics, and
describe its properties.
| [
{
"version": "v1",
"created": "Wed, 22 Nov 2006 20:04:21 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Mar 2008 00:49:49 GMT"
}
] | 1,479,340,800,000 | [
[
"Wang",
"Haibin",
""
],
[
"Rogatko",
"Andre",
""
],
[
"Smarandache",
"Florentin",
""
],
[
"Sunderraman",
"Rajshekhar",
""
]
] |
cs/0611135 | Marc Schoenauer | Christian Gagn\'e (INRIA Futurs, ISI), Marc Schoenauer (INRIA Futurs,
LRI), Mich\`ele Sebag (LRI), Marco Tomassini (ISI) | Genetic Programming for Kernel-based Learning with Co-evolving Subsets
Selection | null | Dans PPSN'06, 4193 (2006) 1008-1017 | null | null | cs.AI | null | Support Vector Machines (SVMs) are well-established Machine Learning (ML)
algorithms. They rely on the fact that i) linear learning can be formalized as
a well-posed optimization problem; ii) non-linear learning can be brought into
linear learning thanks to the kernel trick and the mapping of the initial
search space onto a high dimensional feature space. The kernel is designed by
the ML expert and it governs the efficiency of the SVM approach. In this paper,
a new approach for the automatic design of kernels by Genetic Programming,
called the Evolutionary Kernel Machine (EKM), is presented. EKM combines a
well-founded fitness function inspired from the margin criterion, and a
co-evolution framework ensuring the computational scalability of the approach.
Empirical validation on standard ML benchmarks demonstrates that EKM is
competitive with state-of-the-art SVMs with tuned hyper-parameters.
| [
{
"version": "v1",
"created": "Mon, 27 Nov 2006 14:38:44 GMT"
}
] | 1,471,305,600,000 | [
[
"Gagné",
"Christian",
"",
"INRIA Futurs, ISI"
],
[
"Schoenauer",
"Marc",
"",
"INRIA Futurs,\n LRI"
],
[
"Sebag",
"Michèle",
"",
"LRI"
],
[
"Tomassini",
"Marco",
"",
"ISI"
]
] |
cs/0611138 | Marc Schoenauer | Vojtech Krmicek (INRIA Futurs, LRI), Mich\`ele Sebag (INRIA Futurs,
LRI) | Functional Brain Imaging with Multi-Objective Multi-Modal Evolutionary
Optimization | null | Dans PPSN'06, 4193 (2006) 382-391 | null | null | cs.AI | null | Functional brain imaging is a source of spatio-temporal data mining problems.
A new framework hybridizing multi-objective and multi-modal optimization is
proposed to formalize these data mining problems, and addressed through
Evolutionary Computation (EC). The merits of EC for spatio-temporal data mining
are demonstrated as the approach facilitates the modelling of the experts'
requirements, and flexibly accommodates their changing goals.
| [
{
"version": "v1",
"created": "Tue, 28 Nov 2006 00:54:43 GMT"
}
] | 1,471,305,600,000 | [
[
"Krmicek",
"Vojtech",
"",
"INRIA Futurs, LRI"
],
[
"Sebag",
"Michèle",
"",
"INRIA Futurs,\n LRI"
]
] |
cs/0611141 | Peter Tiedemann | Peter Tiedemann, Henrik Reif Andersen and Rasmus Pagh | A Generic Global Constraint based on MDDs | Tech report, 31 pages, 3 figures | null | null | null | cs.AI | null | The paper suggests the use of Multi-Valued Decision Diagrams (MDDs) as the
supporting data structure for a generic global constraint. We give an algorithm
for maintaining generalized arc consistency (GAC) on this constraint that
amortizes the cost of the GAC computation over a root-to-terminal path in the
search tree. The technique used is an extension of the GAC algorithm for the
regular language constraint on finite length input. Our approach adds support
for skipped variables, maintains the reduced property of the MDD dynamically
and provides domain entailment detection. Finally we also show how to adapt the
approach to constraint types that are closely related to MDDs, such as AOMDDs
and Case DAGs.
| [
{
"version": "v1",
"created": "Tue, 28 Nov 2006 14:23:23 GMT"
}
] | 1,179,878,400,000 | [
[
"Tiedemann",
"Peter",
""
],
[
"Andersen",
"Henrik Reif",
""
],
[
"Pagh",
"Rasmus",
""
]
] |
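
A hedged sketch of the filtering idea behind an MDD-based global constraint, as summarized in the record above: a value is generalized-arc-consistent for a variable iff some edge labelled with it lies on a root-to-terminal path. The layered-dict encoding, the single root and the terminal name "T" are assumptions of this sketch; the paper's incremental, amortized machinery is not modelled.

```python
# layers[i] maps each node of layer i to {value: child_node}; every edge out
# of the last layer must point to the terminal node "T".

def gac_values(layers):
    # forward pass: nodes reachable from the (assumed single) root
    forward = [{next(iter(layers[0]))}]
    for layer in layers:
        forward.append({child for node in forward[-1] if node in layer
                        for child in layer[node].values()})
    # backward pass: keep edges whose head can still reach the terminal
    alive, supported = {"T"}, []
    for i in range(len(layers) - 1, -1, -1):
        new_alive, values = set(), set()
        for node, edges in layers[i].items():
            if node not in forward[i]:
                continue
            for value, child in edges.items():
                if child in alive:
                    new_alive.add(node)
                    values.add(value)
        supported.append(values)
        alive = new_alive
    return list(reversed(supported))     # supported[i] = GAC domain of variable i

layers = [
    {"r": {0: "a", 1: "b"}},                 # variable x0
    {"a": {0: "T"}, "b": {1: "T", 2: "c"}},  # variable x1; node "c" never reaches T
]
print(gac_values(layers))                    # value 2 of x1 is filtered out
```
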
cs/0612056 | Gayathree U | U. Gayathree | Conscious Intelligent Systems - Part 1 : I X I | null | null | null | null | cs.AI | null | Did natural consciousness and intelligent systems arise out of a path that
was co-evolutionary to evolution? Can we explain human self-consciousness as
having risen out of such an evolutionary path? If so how could it have been?
In this first part of a two-part paper (titled IXI), we take a learning
system perspective to the problem of consciousness and intelligent systems, an
approach that may look unseasonable in this age of fMRI's and high tech
neuroscience.
We posit conscious intelligent systems in natural environments and wonder how
natural factors influence their design paths. Such a perspective allows us to
explain seamlessly a variety of natural factors, factors ranging from the rise
and presence of the human mind, man's sense of I, his self-consciousness and
his looping thought processes to factors like reproduction, incubation,
extinction, sleep, the richness of natural behavior, etc. It even allows us to
speculate on a possible human evolution scenario and other natural phenomena.
| [
{
"version": "v1",
"created": "Sat, 9 Dec 2006 17:18:20 GMT"
}
] | 1,179,878,400,000 | [
[
"Gayathree",
"U.",
""
]
] |
cs/0612057 | Gayathree U | U. Gayathree | Conscious Intelligent Systems - Part II - Mind, Thought, Language and
Understanding | null | null | null | null | cs.AI | null | This is the second part of a paper on Conscious Intelligent Systems. We use
the understanding gained in the first part (Conscious Intelligent Systems Part
1: IXI (arxiv id cs.AI/0612056)) to look at understanding. We see how the
presence of mind affects understanding and intelligent systems; we see that the
presence of mind necessitates language. The rise of language in turn has
important effects on understanding. We discuss the humanoid question and how
the question of self-consciousness (and by association mind/thought/language)
would affect humanoids too.
| [
{
"version": "v1",
"created": "Sat, 9 Dec 2006 17:28:24 GMT"
}
] | 1,179,878,400,000 | [
[
"Gayathree",
"U.",
""
]
] |
cs/0612068 | Esben Rune Hansen | Esben Rune Hansen and Henrik Reif Andersen | Interactive Configuration by Regular String Constraints | Tech Report | null | null | null | cs.AI | null | A product configurator which is complete, backtrack free and able to compute
the valid domains at any state of the configuration can be constructed by
building a Binary Decision Diagram (BDD). Despite the fact that the size of the
BDD is exponential in the number of variables in the worst case, BDDs have
proved to work very well in practice. Current BDD-based techniques can only
handle interactive configuration with small finite domains. In this paper we
extend the approach to handle string variables constrained by regular
expressions. The user is allowed to change the strings by adding letters at the
end of the string. We show how to make a data structure that can perform fast
valid domain computations given some assignment on the set of string variables.
We first show how to do this by using one large DFA. Since this approach is
too space consuming to be of practical use, we construct a data structure that
simulates the large DFA and in most practical cases are much more space
efficient. As an example, for a configuration problem on $n$ string variables
with only one solution, in which each string variable is assigned a value of
length $k$, the former structure will use $\Omega(k^n)$ space whereas the
latter only needs $O(kn)$. We also show how this framework can easily be
combined with the recent BDD techniques to allow both boolean, integer and
string variables in the configuration problem.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2006 16:21:16 GMT"
}
] | 1,179,878,400,000 | [
[
"Hansen",
"Esben Rune",
""
],
[
"Andersen",
"Henrik Reif",
""
]
] |
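
A minimal sketch of the valid-domain computation the record above describes for one string variable: after the letters typed so far, a next letter is valid iff it leads to a DFA state from which an accepting state is still reachable. The explicit-DFA representation used here is the naive version; the paper's space-efficient simulation is not attempted.

```python
# delta maps (state, letter) -> state; accepting is the set of final states.

def live_states(delta, accepting):
    live, changed = set(accepting), True
    while changed:
        changed = False
        for (state, _), target in delta.items():
            if target in live and state not in live:
                live.add(state)
                changed = True
    return live

def valid_next_letters(delta, accepting, start, prefix):
    state = start
    for ch in prefix:
        state = delta[(state, ch)]           # assumes the prefix is consistent
    live = live_states(delta, accepting)
    return {ch for (s, ch), t in delta.items() if s == state and t in live}

# hypothetical DFA for the regular expression a(b|c)d
delta = {("q0", "a"): "q1", ("q1", "b"): "q2", ("q1", "c"): "q2", ("q2", "d"): "q3"}
print(valid_next_letters(delta, {"q3"}, "q0", "a"))   # {'b', 'c'}
```
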
cs/0612109 | Vicen G\'omez Cerd\`a | Vicenc Gomez, J. M. Mooij, H. J. Kappen | Truncating the loop series expansion for Belief Propagation | 31 pages, 12 figures, submitted to Journal of Machine Learning
Research | The Journal of Machine Learning Research, 8(Sep):1987--2016, 2007 | null | null | cs.AI | null | Recently, M. Chertkov and V.Y. Chernyak derived an exact expression for the
partition sum (normalization constant) corresponding to a graphical model,
which is an expansion around the Belief Propagation solution. By adding
correction terms to the BP free energy, one for each "generalized loop" in the
factor graph, the exact partition sum is obtained. However, the usually
enormous number of generalized loops generally prohibits summation over all
correction terms. In this article we introduce Truncated Loop Series BP
(TLSBP), a particular way of truncating the loop series of M. Chertkov and V.Y.
Chernyak by considering generalized loops as compositions of simple loops. We
analyze the performance of TLSBP in different scenarios, including the Ising
model, regular random graphs and on Promedas, a large probabilistic medical
diagnostic system. We show that TLSBP often improves upon the accuracy of the
BP solution, at the expense of increased computation time. We also show that
the performance of TLSBP strongly depends on the degree of interaction between
the variables. For weak interactions, truncating the series leads to
significant improvements, whereas for strong interactions it can be
ineffective, even if a high number of terms is considered.
| [
{
"version": "v1",
"created": "Thu, 21 Dec 2006 17:29:28 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jul 2007 08:59:01 GMT"
}
] | 1,320,883,200,000 | [
[
"Gomez",
"Vicenc",
""
],
[
"Mooij",
"J. M.",
""
],
[
"Kappen",
"H. J.",
""
]
] |
cs/0701013 | Zengyou He | Zengyou He, Xaiofei Xu, Shengchun Deng | Attribute Value Weighting in K-Modes Clustering | 15 pages | null | null | Tr-06-0615 | cs.AI | null | In this paper, the traditional k-modes clustering algorithm is extended by
weighting attribute value matches in dissimilarity computation. The use of
attribute value weighting technique makes it possible to generate clusters with
stronger intra-similarities, and therefore achieve better clustering
performance. Experimental results on real life datasets show that these value
weighting based k-modes algorithms are superior to the standard k-modes
algorithm with respect to clustering accuracy.
| [
{
"version": "v1",
"created": "Wed, 3 Jan 2007 09:06:03 GMT"
}
] | 1,179,878,400,000 | [
[
"He",
"Zengyou",
""
],
[
"Xu",
"Xaiofei",
""
],
[
"Deng",
"Shengchun",
""
]
] |
cs/0701184 | Joerg Hoffmann | Joerg Hoffmann and Carla Gomes and Bart Selman | Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in
SAT-Based Planning | null | Logical Methods in Computer Science, Volume 3, Issue 1 (February
26, 2007) lmcs:2228 | 10.2168/LMCS-3(1:6)2007 | null | cs.AI | null | In Verification and in (optimal) AI Planning, a successful method is to
formulate the application as boolean satisfiability (SAT), and solve it with
state-of-the-art DPLL-based procedures. There is a lack of understanding of why
this works so well. Focussing on the Planning context, we identify a form of
problem structure concerned with the symmetrical or asymmetrical nature of the
cost of achieving the individual planning goals. We quantify this sort of
structure with a simple numeric parameter called AsymRatio, ranging between 0
and 1. We run experiments in 10 benchmark domains from the International
Planning Competitions since 2000; we show that AsymRatio is a good indicator of
SAT solver performance in 8 of these domains. We then examine carefully crafted
synthetic planning domains that allow control of the amount of structure, and
that are clean enough for a rigorous analysis of the combinatorial search
space. The domains are parameterized by size, and by the amount of structure.
The CNFs we examine are unsatisfiable, encoding one planning step less than the
length of the optimal plan. We prove upper and lower bounds on the size of the
best possible DPLL refutations, under different settings of the amount of
structure, as a function of size. We also identify the best possible sets of
branching variables (backdoors). With minimum AsymRatio, we prove exponential
lower bounds, and identify minimal backdoors of size linear in the number of
variables. With maximum AsymRatio, we identify logarithmic DPLL refutations
(and backdoors), showing a doubly exponential gap between the two structural
extreme cases. The reasons for this behavior -- the proof arguments --
illuminate the prototypical patterns of structure causing the empirical
behavior observed in the competition benchmarks.
| [
{
"version": "v1",
"created": "Mon, 29 Jan 2007 12:47:08 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Feb 2007 11:38:45 GMT"
}
] | 1,484,092,800,000 | [
[
"Hoffmann",
"Joerg",
""
],
[
"Gomes",
"Carla",
""
],
[
"Selman",
"Bart",
""
]
] |
cs/0702028 | Florentin Smarandache | Florentin Smarandache, Jean Dezert | Uniform and Partially Uniform Redistribution Rules | 4 pages; "Advances and Applications of DSmT for Plausible and
Paradoxical reasoning for Information Fusion", International Workshop
organized by the Bulgarian IST Centre of Competence in 21st Century, December
14, 2006, Bulg. Acad. of Sciences, Sofia, Bulgaria | International Journal of Uncertainty, Fuzziness and
Knowledge-Based Systems (IJUFKS), World Scientific, Vol. 19, No. 6, 921-937,
2011 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This short paper introduces two new fusion rules for combining quantitative
basic belief assignments. These rules, although very simple, have not been
proposed in the literature so far and could serve as useful alternatives
because of
their low computation cost with respect to the recent advanced Proportional
Conflict Redistribution rules developed in the DSmT framework.
| [
{
"version": "v1",
"created": "Mon, 5 Feb 2007 14:56:49 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Jul 2011 14:06:14 GMT"
}
] | 1,323,043,200,000 | [
[
"Smarandache",
"Florentin",
""
],
[
"Dezert",
"Jean",
""
]
] |
cs/0702170 | Peter Tiedemann | Peter Tiedemann, Henrik Reif Andersen, Rasmus Pagh | Generic Global Constraints based on MDDs | Preliminary 15 pages version of the tech-report cs.AI/0611141 | null | null | null | cs.AI | null | Constraint Programming (CP) has been successfully applied to both constraint
satisfaction and constraint optimization problems. A wide variety of
specialized global constraints provide critical assistance in achieving a good
model that can take advantage of the structure of the problem in the search for
a solution. However, a key outstanding issue is the representation of 'ad-hoc'
constraints that do not have an inherent combinatorial nature, and hence are
not modeled well using narrowly specialized global constraints. We attempt to
address this issue by considering a hybrid of search and compilation.
Specifically we suggest the use of Reduced Ordered Multi-Valued Decision
Diagrams (ROMDDs) as the supporting data structure for a generic global
constraint. We give an algorithm for maintaining generalized arc consistency
(GAC) on this constraint that amortizes the cost of the GAC computation over a
root-to-leaf path in the search tree without requiring asymptotically more
space than used for the MDD. Furthermore we present an approach for
incrementally maintaining the reduced property of the MDD during the search,
and show how this can be used for providing domain entailment detection.
Finally we discuss how to apply our approach to other similar data structures
such as AOMDDs and Case DAGs. The technique used can be seen as an extension of
the GAC algorithm for the regular language constraint on finite length input.
| [
{
"version": "v1",
"created": "Wed, 28 Feb 2007 15:32:48 GMT"
}
] | 1,179,878,400,000 | [
[
"Tiedemann",
"Peter",
""
],
[
"Andersen",
"Henrik Reif",
""
],
[
"Pagh",
"Rasmus",
""
]
] |
cs/0703060 | Florentin Smarandache | Jose L. Salmeron, Florentin Smarandache | Redesigning Decision Matrix Method with an indeterminacy-based inference
process | 12 pages, 4 figures, one table | A short version published in Advances in Fuzzy Sets and Systems,
Vol. 1(2), 263-271, 2006 | null | null | cs.AI | null | For academics and practitioners concerned with computers, business and
mathematics, one central issue is supporting decision makers. In this paper, we
propose a generalization of Decision Matrix Method (DMM), using Neutrosophic
logic. It emerges as an alternative to the existing logics and it represents a
mathematical model of uncertainty and indeterminacy. This paper proposes the
Neutrosophic Decision Matrix Method as a more realistic tool for decision
making. In addition, a de-neutrosophication process is included.
| [
{
"version": "v1",
"created": "Tue, 13 Mar 2007 02:18:09 GMT"
}
] | 1,179,878,400,000 | [
[
"Salmeron",
"Jose L.",
""
],
[
"Smarandache",
"Florentin",
""
]
] |
cs/0703124 | Cheng-Yuan Liou | Cheng-Yuan Liou, Tai-Hei Wu, Chia-Ying Lee | Modelling Complexity in Musical Rhythm | 21 pages, 13 figures, 2 tables | Complexity 15(4) (2010) 19~30 final form at
http://www3.interscience.wiley.com/cgi-bin/fulltext/123191810/PDFSTART | null | null | cs.AI | null | This paper constructs a tree structure for the music rhythm using the
L-system. It models the structure as an automaton and derives its complexity. It
also solves the complexity for the L-system. This complexity can resolve the
similarity between trees. This complexity serves as a measure of psychological
complexity for rhythms. It resolves the music complexity of various
compositions including the Mozart effect K488.
Keywords: music perception, psychological complexity, rhythm, L-system,
automata, temporal associative memory, inverse problem, rewriting rule,
bracketed string, tree similarity
| [
{
"version": "v1",
"created": "Mon, 26 Mar 2007 07:37:11 GMT"
}
] | 1,268,179,200,000 | [
[
"Liou",
"Cheng-Yuan",
""
],
[
"Wu",
"Tai-Hei",
""
],
[
"Lee",
"Chia-Ying",
""
]
] |
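The record above represents rhythm as a tree generated by an L-system and written out as a bracketed string. The snippet below is a generic illustration of that representation only, not the paper's grammar: a toy rewriting rule is iterated a few times, and the bracketed string is then read as a tree whose length and nesting depth give crude complexity proxies. The axiom, rule, and symbols are invented for the example.

```python
# Generic bracketed L-system, iterated a few steps. The axiom and rule are
# purely illustrative; the paper derives its own rules for musical rhythm.
RULES = {'A': 'A[B]A', 'B': 'B'}          # hypothetical rewriting rules

def rewrite(s, rules):
    return ''.join(rules.get(ch, ch) for ch in s)

def depth(bracketed):
    # Maximum bracket nesting = depth of the derived tree.
    d = best = 0
    for ch in bracketed:
        if ch == '[':
            d += 1
            best = max(best, d)
        elif ch == ']':
            d -= 1
    return best

s = 'A'                                    # axiom
for _ in range(3):
    s = rewrite(s, RULES)
print(s)                  # bracketed string after three rewriting steps
print(len(s), depth(s))   # string length and tree depth as simple complexity proxies
```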
cs/0703130 | Robert Jeansoulin | Omar Doukari (LSIS), Robert Jeansoulin (IGM-LabInfo) | Space-contained conflict revision, for geographic information | 14 pages | Proc. of 10th AGILE International Conference on Geographic
Information Science, AGILE 2007. (07/05/2007) 1-14 | null | null | cs.AI | null | Using qualitative reasoning with geographic information, in contrast with,
for instance, robotics, is not only tedious (i.e., encoding knowledge in
Propositional Logic, PL), but appears to be computationally complex and, most
of the time, not tractable at all. However, knowledge fusion or revision is a
common operation performed when users merge several different data sets in a
single decision-making process, without much support. Introducing logics would
be a great improvement, and we propose in this paper means for deciding, a
priori, whether an application can benefit from a complete revision, under the
sole assumption of a conjecture that we name the "containment conjecture",
which limits the size of the minimal conflicts to revise. We demonstrate that
this conjecture brings the interesting computational property of performing a
not-provable but global revision, made of many local revisions, at a tractable
size. We illustrate this approach on an application.
| [
{
"version": "v1",
"created": "Mon, 26 Mar 2007 12:18:32 GMT"
}
] | 1,179,878,400,000 | [
[
"Doukari",
"Omar",
"",
"LSIS"
],
[
"Jeansoulin",
"Robert",
"",
"IGM-LabInfo"
]
] |
cs/0703156 | Fadi Badra | Mathieu D'Aquin (KMI), Fadi Badra (INRIA Lorraine - LORIA), Sandrine
Lafrogne (INRIA Lorraine - LORIA), Jean Lieber (INRIA Lorraine - LORIA),
Amedeo Napoli (INRIA Lorraine - LORIA), Laszlo Szathmary (INRIA Lorraine -
LORIA) | Case Base Mining for Adaptation Knowledge Acquisition | null | Dans Twentieth International Joint Conference on Artificial
Intelligence - IJCAI'07 (2007) 750--755 | null | null | cs.AI | null | In case-based reasoning, the adaptation of a source case in order to solve
the target problem is at the same time crucial and difficult to implement. The
reason for this difficulty is that, in general, adaptation strongly depends on
domain-dependent knowledge. This fact motivates research on adaptation
knowledge acquisition (AKA). This paper presents an approach to AKA based on
the principles and techniques of knowledge discovery from databases and
data-mining. It is implemented in CABAMAKA, a system that explores the
variations within the case base to elicit adaptation knowledge. This system has
been successfully tested in an application of case-based reasoning to decision
support in the domain of breast cancer treatment.
| [
{
"version": "v1",
"created": "Fri, 30 Mar 2007 16:16:11 GMT"
}
] | 1,179,878,400,000 | [
[
"D'Aquin",
"Mathieu",
"",
"KMI"
],
[
"Badra",
"Fadi",
"",
"INRIA Lorraine - LORIA"
],
[
"Lafrogne",
"Sandrine",
"",
"INRIA Lorraine - LORIA"
],
[
"Lieber",
"Jean",
"",
"INRIA Lorraine - LORIA"
],
[
"Napoli",
"Amedeo",
"",
"INRIA Lorraine - LORIA"
],
[
"Szathmary",
"Laszlo",
"",
"INRIA Lorraine -\n LORIA"
]
] |
cs/9308101 | null | M. L. Ginsberg | Dynamic Backtracking | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 1, (1993), 25-46 | null | null | cs.AI | null | Because of their occasional need to return to shallow points in a search
tree, existing backtracking methods can sometimes erase meaningful progress
toward solving a search problem. In this paper, we present a method by which
backtrack points can be moved deeper in the search space, thereby avoiding this
difficulty. The technique developed is a variant of dependency-directed
backtracking that uses only polynomial space while still providing useful
control information and retaining the completeness guarantees provided by
earlier approaches.
| [
{
"version": "v1",
"created": "Sun, 1 Aug 1993 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Ginsberg",
"M. L.",
""
]
] |
cs/9308102 | null | M. P. Wellman | A Market-Oriented Programming Environment and its Application to
Distributed Multicommodity Flow Problems | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 1, (1993), 1-23 | null | null | cs.AI | null | Market price systems constitute a well-understood class of mechanisms that
under certain conditions provide effective decentralization of decision making
with minimal communication overhead. In a market-oriented programming approach
to distributed problem solving, we derive the activities and resource
allocations for a set of computational agents by computing the competitive
equilibrium of an artificial economy. WALRAS provides basic constructs for
defining computational market structures, and protocols for deriving their
corresponding price equilibria. In a particular realization of this approach
for a form of multicommodity flow problem, we see that careful construction of
the decision process according to economic principles can lead to efficient
distributed resource allocation, and that the behavior of the system can be
meaningfully analyzed in economic terms.
| [
{
"version": "v1",
"created": "Sun, 1 Aug 1993 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Wellman",
"M. P.",
""
]
] |
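The WALRAS abstract above centres on computing competitive price equilibria for an artificial economy. As a hedged illustration of the general mechanism only (a textbook tatonnement loop, not WALRAS's actual bidding protocol), the sketch below raises the price of a good while excess demand is positive and lowers it otherwise, until excess demand is roughly zero. The demand and supply curves are invented.

```python
# Textbook tatonnement price adjustment; the demand/supply functions below
# stand in for the bids that computational agents would submit.
def excess_demand(price):
    demand = 10.0 / price        # hypothetical aggregate demand curve
    supply = 2.0 * price         # hypothetical aggregate supply curve
    return demand - supply

price, step = 1.0, 0.05
for _ in range(200):
    z = excess_demand(price)
    if abs(z) < 1e-6:
        break
    price += step * z            # raise the price when demand exceeds supply
print(round(price, 3))           # converges near the equilibrium sqrt(5) ~ 2.236
```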
cs/9309101 | null | I. P. Gent, T. Walsh | An Empirical Analysis of Search in GSAT | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 1, (1993), 47-59 | null | null | cs.AI | null | We describe an extensive study of search in GSAT, an approximation procedure
for propositional satisfiability. GSAT performs greedy hill-climbing on the
number of satisfied clauses in a truth assignment. Our experiments provide a
more complete picture of GSAT's search than previous accounts. We describe in
detail the two phases of search: rapid hill-climbing followed by a long plateau
search. We demonstrate that when applied to randomly generated 3SAT problems,
there is a very simple scaling with problem size for both the mean number of
satisfied clauses and the mean branching rate. Our results allow us to make
detailed numerical conjectures about the length of the hill-climbing phase, the
average gradient of this phase, and to conjecture that both the average score
and average branching rate decay exponentially during plateau search. We end by
showing how these results can be used to direct future theoretical analysis.
This work provides a case study of how computer experiments can be used to
improve understanding of the theoretical properties of algorithms.
| [
{
"version": "v1",
"created": "Wed, 1 Sep 1993 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Gent",
"I. P.",
""
],
[
"Walsh",
"T.",
""
]
] |
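Since the paper above studies the two phases of GSAT's search empirically, a compact restatement of the procedure itself may help. The sketch follows the standard published description of GSAT (greedy flips on the number of satisfied clauses, with random restarts); the parameter names and the tiny formula are illustrative.

```python
import random

def num_satisfied(clauses, assign):
    # A clause is a list of literals; literal v means var v is True, -v means False.
    return sum(any((lit > 0) == assign[abs(lit)] for lit in c) for c in clauses)

def gsat(clauses, n_vars, max_tries=10, max_flips=1000):
    for _ in range(max_tries):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_satisfied(clauses, assign) == len(clauses):
                return assign
            # Greedy step: flip the variable giving the best score (ties broken at random).
            scores = []
            for v in range(1, n_vars + 1):
                assign[v] = not assign[v]
                scores.append((num_satisfied(clauses, assign), v))
                assign[v] = not assign[v]
            best = max(s for s, _ in scores)
            _, v = random.choice([sv for sv in scores if sv[0] == best])
            assign[v] = not assign[v]
    return None   # no model found within the flip/try budget

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(gsat([[1, 2], [-1, 3], [-2, -3]], 3))
```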
cs/9311101 | null | F. Bergadano, D. Gunetti, U. Trinchero | The Difficulties of Learning Logic Programs with Cut | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 1, (1993), 91-107 | null | null | cs.AI | null | As real logic programmers normally use cut (!), an effective learning
procedure for logic programs should be able to deal with it. Because the cut
predicate has only a procedural meaning, clauses containing cut cannot be
learned using an extensional evaluation method, as is done in most learning
systems. On the other hand, searching a space of possible programs (instead of
a space of independent clauses) is unfeasible. An alternative solution is to
generate first a candidate base program which covers the positive examples, and
then make it consistent by inserting cut where appropriate. The problem of
learning programs with cut has not been investigated before and this seems to
be a natural and reasonable approach. We generalize this scheme and investigate
the difficulties that arise. Some of the major shortcomings are actually
caused, in general, by the need for intensional evaluation. As a conclusion,
the analysis of this paper suggests, on precise and technical grounds, that
learning cut is difficult, and current induction techniques should probably be
restricted to purely declarative logic languages.
| [
{
"version": "v1",
"created": "Mon, 1 Nov 1993 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Bergadano",
"F.",
""
],
[
"Gunetti",
"D.",
""
],
[
"Trinchero",
"U.",
""
]
] |
cs/9311102 | null | J. C. Schlimmer, L. A. Hermens | Software Agents: Completing Patterns and Constructing User Interfaces | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 1, (1993), 61-89 | null | null | cs.AI | null | To support the goal of allowing users to record and retrieve information,
this paper describes an interactive note-taking system for pen-based computers
with two distinctive features. First, it actively predicts what the user is
going to write. Second, it automatically constructs a custom, button-box user
interface on request. The system is an example of a learning-apprentice
software agent. A machine learning component characterizes the syntax and
semantics of the user's information. A performance system uses this learned
information to generate completion strings and construct a user interface.
Description of Online Appendix: People like to record information. Doing this
on paper is initially efficient, but lacks flexibility. Recording information
on a computer is less efficient but more powerful. In our new note taking
software, the user records information directly on a computer. Behind the
interface, an agent acts for the user. To help, it provides defaults and
constructs a custom user interface. The demonstration is a QuickTime movie of
the note taking agent in action. The file is a binhexed self-extracting
archive. Macintosh utilities for binhex are available from
mac.archive.umich.edu. QuickTime is available from ftp.apple.com in the
dts/mac/sys.soft/quicktime.
| [
{
"version": "v1",
"created": "Mon, 1 Nov 1993 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Schlimmer",
"J. C.",
""
],
[
"Hermens",
"L. A.",
""
]
] |
cs/9312101 | null | M. Buchheit, F. M. Donini, A. Schaerf | Decidable Reasoning in Terminological Knowledge Representation Systems | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 1, (1993),
109-138 | null | null | cs.AI | null | Terminological knowledge representation systems (TKRSs) are tools for
designing and using knowledge bases that make use of terminological languages
(or concept languages). We analyze from a theoretical point of view a TKRS
whose capabilities go beyond the ones of presently available TKRSs. The new
features studied, often required in practical applications, can be summarized
in three main points. First, we consider a highly expressive terminological
language, called ALCNR, including general complements of concepts, number
restrictions and role conjunction. Second, we allow the expression of inclusion
statements between general concepts, with terminological cycles as a particular
case. Third, we prove the decidability of a number of desirable TKRS-deduction
services (like satisfiability, subsumption and instance checking) through a
sound, complete and terminating calculus for reasoning in ALCNR-knowledge
bases. Our calculus extends the general technique of constraint systems. As a
byproduct of the proof, we also obtain the result that inclusion statements in
ALCNR can be simulated by terminological cycles, if descriptive semantics is
adopted.
| [
{
"version": "v1",
"created": "Wed, 1 Dec 1993 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Buchheit",
"M.",
""
],
[
"Donini",
"F. M.",
""
],
[
"Schaerf",
"A.",
""
]
] |
cs/9401101 | null | N. Nilsson | Teleo-Reactive Programs for Agent Control | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 1, (1994),
139-158 | null | null | cs.AI | null | A formalism is presented for computing and organizing actions for autonomous
agents in dynamic environments. We introduce the notion of teleo-reactive (T-R)
programs whose execution entails the construction of circuitry for the
continuous computation of the parameters and conditions on which agent action
is based. In addition to continuous feedback, T-R programs support parameter
binding and recursion. A primary difference between T-R programs and many other
circuit-based systems is that the circuitry of T-R programs is more compact; it
is constructed at run time and thus does not have to anticipate all the
contingencies that might arise over all possible runs. In addition, T-R
programs are intuitive and easy to write and are written in a form that is
compatible with automatic planning and learning methods. We briefly describe
some experimental applications of T-R programs in the control of simulated and
actual mobile robots.
| [
{
"version": "v1",
"created": "Sat, 1 Jan 1994 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Nilsson",
"N.",
""
]
] |
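The abstract above describes teleo-reactive programs as ordered condition-action rules that are continuously re-evaluated against the current world state. The loop below is a hedged, highly simplified rendering of that control regime (no circuit construction, parameter binding, or recursion); the toy world and rules are invented for the example.

```python
# Simplified teleo-reactive loop: on every tick, execute the action of the
# first rule whose condition currently holds. The rules below move a toy
# agent toward position 5 and are purely illustrative.
world = {'pos': 0}

program = [
    (lambda w: w['pos'] == 5, lambda w: None),                       # goal holds: do nothing
    (lambda w: w['pos'] < 5,  lambda w: w.update(pos=w['pos'] + 1)), # move up
    (lambda w: True,          lambda w: w.update(pos=w['pos'] - 1)), # default: move down
]

for tick in range(10):
    for condition, action in program:          # first true condition wins
        if condition(world):
            action(world)
            break
print(world)   # {'pos': 5}
```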
cs/9402101 | null | C. X. Ling | Learning the Past Tense of English Verbs: The Symbolic Pattern
Associator vs. Connectionist Models | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 1, (1994),
209-229 | null | null | cs.AI | null | Learning the past tense of English verbs - a seemingly minor aspect of
language acquisition - has generated heated debates since 1986, and has become
a landmark task for testing the adequacy of cognitive modeling. Several
artificial neural networks (ANNs) have been implemented, and a challenge for
better symbolic models has been posed. In this paper, we present a
general-purpose Symbolic Pattern Associator (SPA) based upon the decision-tree
learning algorithm ID3. We conduct extensive head-to-head comparisons on the
generalization ability between ANN models and the SPA under different
representations. We conclude that the SPA generalizes the past tense of unseen
verbs better than ANN models by a wide margin, and we offer insights as to why
this should be the case. We also discuss a new default strategy for
decision-tree learning algorithms.
| [
{
"version": "v1",
"created": "Tue, 1 Feb 1994 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Ling",
"C. X.",
""
]
] |
cs/9402102 | null | D. J. Cook, L. B. Holder | Substructure Discovery Using Minimum Description Length and Background
Knowledge | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 1, (1994),
231-255 | null | null | cs.AI | null | The ability to identify interesting and repetitive substructures is an
essential component to discovering knowledge in structural data. We describe a
new version of our SUBDUE substructure discovery system based on the minimum
description length principle. The SUBDUE system discovers substructures that
compress the original data and represent structural concepts in the data. By
replacing previously-discovered substructures in the data, multiple passes of
SUBDUE produce a hierarchical description of the structural regularities in the
data. SUBDUE uses a computationally-bounded inexact graph match that identifies
similar, but not identical, instances of a substructure and finds an
approximate measure of closeness of two substructures when under computational
constraints. In addition to the minimum description length principle, other
background knowledge can be used by SUBDUE to guide the search towards more
appropriate substructures. Experiments in a variety of domains demonstrate
SUBDUE's ability to find substructures capable of compressing the original data
and to discover structural concepts important to the domain. Description of
Online Appendix: This is a compressed tar file containing the SUBDUE discovery
system, written in C. The program accepts as input databases represented in
graph form, and will output discovered substructures with their corresponding
value.
| [
{
"version": "v1",
"created": "Tue, 1 Feb 1994 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Cook",
"D. J.",
""
],
[
"Holder",
"L. B.",
""
]
] |
cs/9402103 | null | M. Koppel, R. Feldman, A. M. Segre | Bias-Driven Revision of Logical Domain Theories | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 1, (1994),
159-208 | null | null | cs.AI | null | The theory revision problem is the problem of how best to go about revising a
deficient domain theory using information contained in examples that expose
inaccuracies. In this paper we present our approach to the theory revision
problem for propositional domain theories. The approach described here, called
PTR, uses probabilities associated with domain theory elements to numerically
track the ``flow'' of proof through the theory. This allows us to measure the
precise role of a clause or literal in allowing or preventing a (desired or
undesired) derivation for a given example. This information is used to
efficiently locate and repair flawed elements of the theory. PTR is proved to
converge to a theory which correctly classifies all examples, and shown
experimentally to be fast and accurate even for deep theories.
| [
{
"version": "v1",
"created": "Tue, 1 Feb 1994 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Koppel",
"M.",
""
],
[
"Feldman",
"R.",
""
],
[
"Segre",
"A. M.",
""
]
] |
cs/9403101 | null | P. M. Murphy, M. J. Pazzani | Exploring the Decision Forest: An Empirical Investigation of Occam's
Razor in Decision Tree Induction | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 1, (1994),
257-275 | null | null | cs.AI | null | We report on a series of experiments in which all decision trees consistent
with the training data are constructed. These experiments were run to gain an
understanding of the properties of the set of consistent decision trees and the
factors that affect the accuracy of individual trees. In particular, we
investigated the relationship between the size of a decision tree consistent
with some training data and the accuracy of the tree on test data. The
experiments were performed on a massively parallel Maspar computer. The results
of the experiments on several artificial and two real world problems indicate
that, for many of the problems investigated, smaller consistent decision trees
are on average less accurate than the average accuracy of slightly larger
trees.
| [
{
"version": "v1",
"created": "Tue, 1 Mar 1994 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Murphy",
"P. M.",
""
],
[
"Pazzani",
"M. J.",
""
]
] |
cs/9406101 | null | A. Borgida, P. F. Patel-Schneider | A Semantics and Complete Algorithm for Subsumption in the CLASSIC
Description Logic | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 1, (1994),
277-308 | null | null | cs.AI | null | This paper analyzes the correctness of the subsumption algorithm used in
CLASSIC, a description logic-based knowledge representation system that is
being used in practical applications. In order to deal efficiently with
individuals in CLASSIC descriptions, the developers have had to use an
algorithm that is incomplete with respect to the standard, model-theoretic
semantics for description logics. We provide a variant semantics for
descriptions with respect to which the current implementation is complete, and
which can be independently motivated. The soundness and completeness of the
polynomial-time subsumption algorithm is established using description graphs,
which are an abstracted version of the implementation structures used in
CLASSIC, and are of independent interest.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 1994 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Borgida",
"A.",
""
],
[
"Patel-Schneider",
"P. F.",
""
]
] |
cs/9406102 | null | R. Sebastiani | Applying GSAT to Non-Clausal Formulas | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 1, (1994),
309-314 | null | null | cs.AI | null | In this paper we describe how to modify GSAT so that it can be applied to
non-clausal formulas. The idea is to use a particular ``score'' function which
gives the number of clauses of the CNF conversion of a formula which are false
under a given truth assignment. Its value is computed in linear time, without
constructing the CNF conversion itself. The proposed methodology applies to
most of the variants of GSAT proposed so far.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 1994 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Sebastiani",
"R.",
""
]
] |
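The note above hinges on a score that counts the false clauses of the CNF conversion of a formula without ever building that conversion. The sketch below implements one natural reading of the idea, assuming the distributive (non-Tseitin) CNF conversion: conjunction adds false-clause counts, disjunction multiplies them, and a literal contributes 0 or 1. The recursion is my reconstruction from the abstract, and the formula syntax is invented.

```python
# Count the false clauses of CNF(F) under an assignment, without building
# CNF(F). Formulas are nested tuples in negation normal form:
# ('var', name), ('not', ('var', name)), ('and', f, g), ('or', f, g).
def false_clauses(f, assign):
    op = f[0]
    if op == 'var':
        return 0 if assign[f[1]] else 1
    if op == 'not':                       # argument must be a variable in NNF
        return 1 if assign[f[1][1]] else 0
    if op == 'and':                       # clauses(A and B) = clauses(A) + clauses(B)
        return false_clauses(f[1], assign) + false_clauses(f[2], assign)
    if op == 'or':                        # a clause of (A or B) is false iff both parts are
        return false_clauses(f[1], assign) * false_clauses(f[2], assign)
    raise ValueError(op)

# (x and y) or z has CNF (x or z) and (y or z):
f = ('or', ('and', ('var', 'x'), ('var', 'y')), ('var', 'z'))
print(false_clauses(f, {'x': False, 'y': False, 'z': False}))   # 2
print(false_clauses(f, {'x': False, 'y': False, 'z': True}))    # 0
```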
cs/9408101 | null | A. J. Grove, J. Y. Halpern, D. Koller | Random Worlds and Maximum Entropy | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1994), 33-88 | null | null | cs.AI | null | Given a knowledge base KB containing first-order and statistical facts, we
consider a principled method, called the random-worlds method, for computing a
degree of belief that some formula Phi holds given KB. If we are reasoning
about a world or system consisting of N individuals, then we can consider all
possible worlds, or first-order models, with domain {1,...,N} that satisfy KB,
and compute the fraction of them in which Phi is true. We define the degree of
belief to be the asymptotic value of this fraction as N grows large. We show
that when the vocabulary underlying Phi and KB uses constants and unary
predicates only, we can naturally associate an entropy with each world. As N
grows larger, there are many more worlds with higher entropy. Therefore, we can
use a maximum-entropy computation to compute the degree of belief. This result
is in a similar spirit to previous work in physics and artificial intelligence,
but is far more general. Of equal interest to the result itself are the
limitations on its scope. Most importantly, the restriction to unary predicates
seems necessary. Although the random-worlds method makes sense in general, the
connection to maximum entropy seems to disappear in the non-unary case. These
observations suggest unexpected limitations to the applicability of
maximum-entropy methods.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 1994 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Grove",
"A. J.",
""
],
[
"Halpern",
"J. Y.",
""
],
[
"Koller",
"D.",
""
]
] |
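Since the record above defines the degree of belief only in words, it may help to restate it symbolically. The display below simply transcribes the abstract's definition (models over domain {1,...,N}, the fraction satisfying Phi among those satisfying KB, and the limit as N grows); the notation is mine.

```latex
% Degree of belief by the random-worlds method, restating the abstract's definition.
% worlds_N(psi) = set of first-order models with domain {1,...,N} satisfying psi.
\[
  \Pr{}_N(\varphi \mid KB) \;=\;
    \frac{|\mathrm{worlds}_N(\varphi \wedge KB)|}{|\mathrm{worlds}_N(KB)|},
  \qquad
  \Pr{}_\infty(\varphi \mid KB) \;=\; \lim_{N \to \infty} \Pr{}_N(\varphi \mid KB).
\]
% For vocabularies with only constants and unary predicates, the paper shows this
% limit can be obtained by a maximum-entropy computation over the worlds.
```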
cs/9408102 | null | T. Kitani, Y. Eriguchi, M. Hara | Pattern Matching and Discourse Processing in Information Extraction from
Japanese Text | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1994), 89-110 | null | null | cs.AI | null | Information extraction is the task of automatically picking up information of
interest from an unconstrained text. Information of interest is usually
extracted in two steps. First, sentence level processing locates relevant
pieces of information scattered throughout the text; second, discourse
processing merges coreferential information to generate the output. In the
first step, pieces of information are locally identified without recognizing
any relationships among them. A key word search or simple pattern search can
achieve this purpose. The second step requires deeper knowledge in order to
understand relationships among separately identified pieces of information.
Previous information extraction systems focused on the first step, partly
because they were not required to link up each piece of information with other
pieces. To link the extracted pieces of information and map them onto a
structured output format, complex discourse processing is essential. This paper
reports on a Japanese information extraction system that merges information
using a pattern matcher and discourse processor. Evaluation results show a high
level of system performance which approaches human performance.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 1994 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Kitani",
"T.",
""
],
[
"Eriguchi",
"Y.",
""
],
[
"Hara",
"M.",
""
]
] |
cs/9408103 | null | S. K. Murthy, S. Kasif, S. Salzberg | A System for Induction of Oblique Decision Trees | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 2, (1994), 1-32 | null | null | cs.AI | null | This article describes a new system for induction of oblique decision trees.
This system, OC1, combines deterministic hill-climbing with two forms of
randomization to find a good oblique split (in the form of a hyperplane) at
each node of a decision tree. Oblique decision tree methods are tuned
especially for domains in which the attributes are numeric, although they can
be adapted to symbolic or mixed symbolic/numeric attributes. We present
extensive empirical studies, using both real and artificial data, that analyze
OC1's ability to construct oblique trees that are smaller and more accurate
than their axis-parallel counterparts. We also examine the benefits of
randomization for the construction of oblique decision trees.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 1994 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Murthy",
"S. K.",
""
],
[
"Kasif",
"S.",
""
],
[
"Salzberg",
"S.",
""
]
] |
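The OC1 abstract above combines deterministic hill-climbing with randomization to find an oblique (hyperplane) split at each tree node. The sketch below shows only an evaluation-and-perturbation core in two dimensions: score a hyperplane by the impurity of the split it induces, perturb one coefficient at a time, and keep improvements. It is a loose illustration, not OC1's actual coefficient-update rule, and the toy data are invented.

```python
import random

def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n          # assumes 0/1 class labels
    return 2 * p * (1 - p)

def split_impurity(X, y, w):
    # w = (w1, w2, bias); points with w1*x1 + w2*x2 + bias > 0 go left.
    left  = [yi for xi, yi in zip(X, y) if w[0]*xi[0] + w[1]*xi[1] + w[2] > 0]
    right = [yi for xi, yi in zip(X, y) if w[0]*xi[0] + w[1]*xi[1] + w[2] <= 0]
    n = len(y)
    return len(left)/n * gini(left) + len(right)/n * gini(right)

def search_oblique_split(X, y, iters=500):
    w = [random.uniform(-1, 1) for _ in range(3)]
    best = split_impurity(X, y, w)
    for _ in range(iters):
        candidate = list(w)
        candidate[random.randrange(3)] += random.gauss(0, 0.5)   # perturb one coefficient
        score = split_impurity(X, y, candidate)
        if score <= best:                                        # keep improvements
            w, best = candidate, score
    return w, best

# Toy data that no single axis-parallel split separates, but x1 + x2 > 0.9 does.
X = [(0, 0), (1, 0), (0, 1), (1, 1), (0.2, 0.3), (0.9, 0.8)]
y = [0, 1, 1, 1, 0, 1]
print(search_oblique_split(X, y))   # returned impurity often reaches 0.0 here
```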
cs/9409101 | null | S. Safra, M. Tennenholtz | On Planning while Learning | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1994),
111-129 | null | null | cs.AI | null | This paper introduces a framework for Planning while Learning where an agent
is given a goal to achieve in an environment whose behavior is only partially
known to the agent. We discuss the tractability of various plan-design
processes. We show that for a large natural class of Planning while Learning
systems, a plan can be presented and verified in a reasonable time. However,
coming up algorithmically with a plan, even for simple classes of systems is
apparently intractable. We emphasize the role of off-line plan-design
processes, and show that, in most natural cases, the verification (projection)
part can be carried out in an efficient algorithmic manner.
| [
{
"version": "v1",
"created": "Thu, 1 Sep 1994 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Safra",
"S.",
""
],
[
"Tennenholtz",
"M.",
""
]
] |
cs/9412101 | null | S. Soderland, W. Lehnert | Wrap-Up: a Trainable Discourse Module for Information Extraction | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1994),
131-158 | null | null | cs.AI | null | The vast amounts of on-line text now available have led to renewed interest
in information extraction (IE) systems that analyze unrestricted text,
producing a structured representation of selected information from the text.
This paper presents a novel approach that uses machine learning to acquire
knowledge for some of the higher level IE processing. Wrap-Up is a trainable IE
discourse component that makes intersentential inferences and identifies
logical relations among information extracted from the text. Previous
corpus-based approaches were limited to lower level processing such as
part-of-speech tagging, lexical disambiguation, and dictionary construction.
Wrap-Up is fully trainable, and not only automatically decides what classifiers
are needed, but even derives the feature set for each classifier automatically.
Performance equals that of a partially trainable discourse module requiring
manual customization for each domain.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 1994 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Soderland",
"S.",
""
],
[
"W",
"Lehnert.",
""
]
] |
cs/9412102 | null | W. L. Buntine | Operations for Learning with Graphical Models | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1994),
159-225 | null | null | cs.AI | null | This paper is a multidisciplinary review of empirical, statistical learning
from a graphical model perspective. Well-known examples of graphical models
include Bayesian networks, directed graphs representing a Markov chain, and
undirected networks representing a Markov field. These graphical models are
extended to model data analysis and empirical learning using the notation of
plates. Graphical operations for simplifying and manipulating a problem are
provided including decomposition, differentiation, and the manipulation of
probability models from the exponential family. Two standard algorithm schemas
for learning are reviewed in a graphical framework: Gibbs sampling and the
expectation maximization algorithm. Using these operations and schemas, some
popular algorithms can be synthesized from their graphical specification. This
includes versions of linear regression, techniques for feed-forward networks,
and learning Gaussian and discrete Bayesian networks from data. The paper
concludes by sketching some implications for data analysis and summarizing how
some popular algorithms fall within the framework presented. The main original
contributions here are the decomposition techniques and the demonstration that
graphical models provide a framework for understanding and developing complex
learning algorithms.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 1994 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Buntine",
"W. L.",
""
]
] |
cs/9412103 | null | S. Minton, J. Bresina, M. Drummond | Total-Order and Partial-Order Planning: A Comparative Analysis | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 2, (1994),
227-262 | null | null | cs.AI | null | For many years, the intuitions underlying partial-order planning were largely
taken for granted. Only in the past few years has there been renewed interest
in the fundamental principles underlying this paradigm. In this paper, we
present a rigorous comparative analysis of partial-order and total-order
planning by focusing on two specific planners that can be directly compared. We
show that there are some subtle assumptions that underlie the widespread
intuitions regarding the supposed efficiency of partial-order planning. For
instance, the superiority of partial-order planning can depend critically upon
the search strategy and the structure of the search space. Understanding the
underlying assumptions is crucial for constructing efficient planners.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 1994 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Minton",
"S.",
""
],
[
"Bresina",
"J.",
""
],
[
"Drummond",
"M.",
""
]
] |
cs/9501101 | null | T. G. Dietterich, G. Bakiri | Solving Multiclass Learning Problems via Error-Correcting Output Codes | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1995),
263-286 | null | null | cs.AI | null | Multiclass learning problems involve finding a definition for an unknown
function f(x) whose range is a discrete set containing k > 2 values (i.e., k
``classes''). The definition is acquired by studying collections of training
examples of the form [x_i, f (x_i)]. Existing approaches to multiclass learning
problems include direct application of multiclass algorithms such as the
decision-tree algorithms C4.5 and CART, application of binary concept learning
algorithms to learn individual binary functions for each of the k classes, and
application of binary concept learning algorithms with distributed output
representations. This paper compares these three approaches to a new technique
in which error-correcting codes are employed as a distributed output
representation. We show that these output representations improve the
generalization performance of both C4.5 and backpropagation on a wide range of
multiclass learning tasks. We also demonstrate that this approach is robust
with respect to changes in the size of the training sample, the assignment of
distributed representations to particular classes, and the application of
overfitting avoidance techniques such as decision-tree pruning. Finally, we
show that---like the other methods---the error-correcting code technique can
provide reliable class probability estimates. Taken together, these results
demonstrate that error-correcting output codes provide a general-purpose method
for improving the performance of inductive learning programs on multiclass
problems.
| [
{
"version": "v1",
"created": "Sun, 1 Jan 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Dietterich",
"T. G.",
""
],
[
"Bakiri",
"G.",
""
]
] |
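The abstract above contrasts one-per-class and distributed output codes. As a small, hedged illustration of the error-correcting-output-code idea (with a hand-picked code and a trivial stand-in binary learner, not the paper's C4.5/backpropagation setup), the sketch below trains one binary classifier per code bit and decodes a prediction to the class whose codeword is nearest in Hamming distance.

```python
# Error-correcting output codes with k = 3 classes and a 5-bit code.
# The codewords and the memorizing "binary learner" are invented for the example.
CODE = {0: (0, 0, 0, 1, 1),
        1: (0, 1, 1, 0, 0),
        2: (1, 0, 1, 0, 1)}

def train_bit(X, y, bit):
    # Stand-in for a real binary learner: memorize each example's target bit.
    table = {tuple(x): CODE[label][bit] for x, label in zip(X, y)}
    return lambda x: table.get(tuple(x), 0)

def train_ecoc(X, y, n_bits=5):
    return [train_bit(X, y, b) for b in range(n_bits)]

def predict(classifiers, x):
    bits = tuple(clf(x) for clf in classifiers)
    # Decode to the class whose codeword is closest in Hamming distance.
    return min(CODE, key=lambda c: sum(b != cb for b, cb in zip(bits, CODE[c])))

X = [(0, 0), (0, 1), (1, 0)]
y = [0, 1, 2]
clfs = train_ecoc(X, y)
print([predict(clfs, x) for x in X])   # [0, 1, 2]
```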
cs/9501102 | null | S. Hanks, D. S. Weld | A Domain-Independent Algorithm for Plan Adaptation | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1995),
319-360 | null | null | cs.AI | null | The paradigms of transformational planning, case-based planning, and plan
debugging all involve a process known as plan adaptation - modifying or
repairing an old plan so it solves a new problem. In this paper we provide a
domain-independent algorithm for plan adaptation, demonstrate that it is sound,
complete, and systematic, and compare it to other adaptation algorithms in the
literature. Our approach is based on a view of planning as searching a graph of
partial plans. Generative planning starts at the graph's root and moves from
node to node using plan-refinement operators. In planning by adaptation, a
library plan - an arbitrary node in the plan graph - is the starting point for
the search, and the plan-adaptation algorithm can apply both the same
refinement operators available to a generative planner and can also retract
constraints and steps from the plan. Our algorithm's completeness ensures that
the adaptation algorithm will eventually search the entire graph and its
systematicity ensures that it will do so without redundantly searching any
parts of the graph.
| [
{
"version": "v1",
"created": "Sun, 1 Jan 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Hanks",
"S.",
""
],
[
"Weld",
"D. S.",
""
]
] |
cs/9501103 | null | P. Cichosz | Truncating Temporal Differences: On the Efficient Implementation of
TD(lambda) for Reinforcement Learning | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1995),
287-318 | null | null | cs.AI | null | Temporal difference (TD) methods constitute a class of methods for learning
predictions in multi-step prediction problems, parameterized by a recency
factor lambda. Currently the most important application of these methods is to
temporal credit assignment in reinforcement learning. Well known reinforcement
learning algorithms, such as AHC or Q-learning, may be viewed as instances of
TD learning. This paper examines the issues of the efficient and general
implementation of TD(lambda) for arbitrary lambda, for use with reinforcement
learning algorithms optimizing the discounted sum of rewards. The traditional
approach, based on eligibility traces, is argued to suffer from both
inefficiency and lack of generality. The TTD (Truncated Temporal Differences)
procedure is proposed as an alternative, that indeed only approximates
TD(lambda), but requires very little computation per action and can be used
with arbitrary function representation methods. The idea from which it is
derived is fairly simple and not new, but probably unexplored so far.
Encouraging experimental results are presented, suggesting that using lambda
> 0 with the TTD procedure allows one to obtain a significant learning
speedup at essentially the same cost as usual TD(0) learning.
| [
{
"version": "v1",
"created": "Sun, 1 Jan 1995 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Cichosz",
"P.",
""
]
] |
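The TTD abstract above replaces eligibility traces with a return truncated to a window of recent steps. The sketch below is a hedged reading of that idea for tabular state values: keep a small buffer of the last m transitions and, when a state is about to leave the buffer, update it toward the standard truncated lambda-return computed backwards through the buffer. The exact return and bookkeeping used by TTD may differ; everything here is a reconstruction for illustration.

```python
from collections import deque, defaultdict

def ttd_update(V, buffer, alpha=0.1, gamma=0.9, lam=0.8):
    # `buffer` holds (state, reward, next_state) triples, oldest first.
    # Compute the truncated lambda-return backwards, then update the oldest state.
    G = V[buffer[-1][2]]                      # bootstrap from the newest next_state
    for state, reward, next_state in reversed(buffer):
        G = reward + gamma * ((1 - lam) * V[next_state] + lam * G)
    oldest_state = buffer[0][0]
    V[oldest_state] += alpha * (G - V[oldest_state])

# Toy 4-state chain 0 -> 1 -> 2 -> 3 with reward 1 on the final step.
V = defaultdict(float)
m = 3
for episode in range(200):
    buffer = deque()
    for s in range(3):
        reward = 1.0 if s == 2 else 0.0
        if len(buffer) == m:                  # oldest state is about to leave the window
            ttd_update(V, buffer)
            buffer.popleft()
        buffer.append((s, reward, s + 1))
    while buffer:                             # flush remaining states at episode end
        ttd_update(V, buffer)
        buffer.popleft()
print({s: round(V[s], 2) for s in range(4)})  # approx {0: 0.81, 1: 0.9, 2: 1.0, 3: 0.0}
```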
cs/9503102 | null | P. D. Turney | Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic
Decision Tree Induction Algorithm | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1995),
369-409 | null | null | cs.AI | null | This paper introduces ICET, a new algorithm for cost-sensitive
classification. ICET uses a genetic algorithm to evolve a population of biases
for a decision tree induction algorithm. The fitness function of the genetic
algorithm is the average cost of classification when using the decision tree,
including both the costs of tests (features, measurements) and the costs of
classification errors. ICET is compared here with three other algorithms for
cost-sensitive classification - EG2, CS-ID3, and IDX - and also with C4.5,
which classifies without regard to cost. The five algorithms are evaluated
empirically on five real-world medical datasets. Three sets of experiments are
performed. The first set examines the baseline performance of the five
algorithms on the five datasets and establishes that ICET performs
significantly better than its competitors. The second set tests the robustness
of ICET under a variety of conditions and shows that ICET maintains its
advantage. The third set looks at ICET's search in bias space and discovers a
way to improve the search.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 1995 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Turney",
"P. D.",
""
]
] |
cs/9504101 | null | S. K. Donoho, L. A. Rendell | Rerepresenting and Restructuring Domain Theories: A Constructive
Induction Approach | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 2, (1995),
411-446 | null | null | cs.AI | null | Theory revision integrates inductive learning and background knowledge by
combining training examples with a coarse domain theory to produce a more
accurate theory. There are two challenges that theory revision and other
theory-guided systems face. First, a representation language appropriate for
the initial theory may be inappropriate for an improved theory. While the
original representation may concisely express the initial theory, a more
accurate theory forced to use that same representation may be bulky,
cumbersome, and difficult to reach. Second, a theory structure suitable for a
coarse domain theory may be insufficient for a fine-tuned theory. Systems that
produce only small, local changes to a theory have limited value for
accomplishing complex structural alterations that may be required.
Consequently, advanced theory-guided learning systems require flexible
representation and flexible structure. An analysis of various theory revision
systems and theory-guided learning systems reveals specific strengths and
weaknesses in terms of these two desired properties. Designed to capture the
underlying qualities of each system, a new system uses theory-guided
constructive induction. Experiments in three domains show improvement over
previous theory-guided systems. This leads to a study of the behavior,
limitations, and potential of theory-guided constructive induction.
| [
{
"version": "v1",
"created": "Sat, 1 Apr 1995 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Donoho",
"S. K.",
""
],
[
"Rendell",
"L. A.",
""
]
] |
cs/9505101 | null | P. David | Using Pivot Consistency to Decompose and Solve Functional CSPs | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1995),
447-474 | null | null | cs.AI | null | Many studies have been carried out in order to increase the search efficiency
of constraint satisfaction problems; among them, some make use of structural
properties of the constraint network; others take into account semantic
properties of the constraints, generally assuming that all the constraints
possess the given property. In this paper, we propose a new decomposition
method benefiting from both semantic properties of functional constraints (not
bijective constraints) and structural properties of the network; furthermore,
not all the constraints need to be functional. We show that under some
conditions, the existence of solutions can be guaranteed. We first characterize
a particular subset of the variables, which we name a root set. We then
introduce pivot consistency, a new local consistency which is a weak form of
path consistency and can be achieved in O(n^2d^2) complexity (instead of
O(n^3d^3) for path consistency), and we present associated properties; in
particular, we show that any consistent instantiation of the root set can be
linearly extended to a solution, which leads to the presentation of the
aforementioned new method for solving by decomposing functional CSPs.
| [
{
"version": "v1",
"created": "Mon, 1 May 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"David",
"P.",
""
]
] |
cs/9505102 | null | A. Schaerf, Y. Shoham, M. Tennenholtz | Adaptive Load Balancing: A Study in Multi-Agent Learning | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1995),
475-500 | null | null | cs.AI | null | We study the process of multi-agent reinforcement learning in the context of
load balancing in a distributed system, without use of either central
coordination or explicit communication. We first define a precise framework in
which to study adaptive load balancing, important features of which are its
stochastic nature and the purely local information available to individual
agents. Given this framework, we show illuminating results on the interplay
between basic adaptive behavior parameters and their effect on system
efficiency. We then investigate the properties of adaptive load balancing in
heterogeneous populations, and address the issue of exploration vs.
exploitation in that context. Finally, we show that naive use of communication
may not improve, and might even harm system efficiency.
| [
{
"version": "v1",
"created": "Mon, 1 May 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Schaerf",
"A.",
""
],
[
"Shoham",
"Y.",
""
],
[
"Tennenholtz",
"M.",
""
]
] |
cs/9505103 | null | S. J. Russell, D. Subramanian | Provably Bounded-Optimal Agents | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1995),
575-609 | null | null | cs.AI | null | Since its inception, artificial intelligence has relied upon a theoretical
foundation centered around perfect rationality as the desired property of
intelligent systems. We argue, as others have done, that this foundation is
inadequate because it imposes fundamentally unsatisfiable requirements. As a
result, there has arisen a wide gap between theory and practice in AI,
hindering progress in the field. We propose instead a property called bounded
optimality. Roughly speaking, an agent is bounded-optimal if its program is a
solution to the constrained optimization problem presented by its architecture
and the task environment. We show how to construct agents with this property
for a simple class of machine architectures in a broad class of real-time
environments. We illustrate these results using a simple model of an automated
mail sorting facility. We also define a weaker property, asymptotic bounded
optimality (ABO), that generalizes the notion of optimality in classical
complexity theory. We then construct universal ABO programs, i.e., programs
that are ABO no matter what real-time constraints are applied. Universal ABO
programs can be used as building blocks for more complex systems. We conclude
with a discussion of the prospects for bounded optimality as a theoretical
basis for AI, and relate it to similar trends in philosophy, economics, and
game theory.
| [
{
"version": "v1",
"created": "Mon, 1 May 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Russell",
"S. J.",
""
],
[
"Subramanian",
"D.",
""
]
] |
cs/9505104 | null | W. W. Cohen | Pac-Learning Recursive Logic Programs: Efficient Algorithms | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1995),
501-539 | null | null | cs.AI | null | We present algorithms that learn certain classes of function-free recursive
logic programs in polynomial time from equivalence queries. In particular, we
show that a single k-ary recursive constant-depth determinate clause is
learnable. Two-clause programs consisting of one learnable recursive clause and
one constant-depth determinate non-recursive clause are also learnable, if an
additional ``basecase'' oracle is assumed. These results immediately imply the
pac-learnability of these classes. Although these classes of learnable
recursive programs are very constrained, it is shown in a companion paper that
they are maximally general, in that generalizing either class in any natural
way leads to a computationally difficult learning problem. Thus, taken together
with its companion paper, this paper establishes a boundary of efficient
learnability for recursive logic programs.
| [
{
"version": "v1",
"created": "Mon, 1 May 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Cohen",
"W. W.",
""
]
] |
cs/9505105 | null | W. W. Cohen | Pac-learning Recursive Logic Programs: Negative Results | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1995),
541-573 | null | null | cs.AI | null | In a companion paper it was shown that the class of constant-depth
determinate k-ary recursive clauses is efficiently learnable. In this paper we
present negative results showing that any natural generalization of this class
is hard to learn in Valiant's model of pac-learnability. In particular, we show
that the following program classes are cryptographically hard to learn:
programs with an unbounded number of constant-depth linear recursive clauses;
programs with one constant-depth determinate clause containing an unbounded
number of recursive calls; and programs with one linear recursive clause of
constant locality. These results immediately imply the non-learnability of any
more general class of programs. We also show that learning a constant-depth
determinate program with either two linear recursive clauses or one linear
recursive clause and one non-recursive clause is as hard as learning boolean
DNF. Together with positive results from the companion paper, these negative
results establish a boundary of efficient learnability for recursive
function-free clauses.
| [
{
"version": "v1",
"created": "Mon, 1 May 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Cohen",
"W. W.",
""
]
] |
cs/9506101 | null | M. Veloso, P. Stone | FLECS: Planning with a Flexible Commitment Strategy | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 3, (1995), 25-52 | null | null | cs.AI | null | There has been evidence that least-commitment planners can efficiently handle
planning problems that involve difficult goal interactions. This evidence has
led to the common belief that delayed-commitment is the "best" possible
planning strategy. However, we recently found evidence that eager-commitment
planners can handle a variety of planning problems more efficiently, in
particular those with difficult operator choices. Resigned to the futility of
trying to find a universally successful planning strategy, we devised a planner
that can be used to study which domains and problems are best for which
planning strategies. In this article we introduce this new planning algorithm,
FLECS, which uses a FLExible Commitment Strategy with respect to plan-step
orderings. It is able to use any strategy from delayed-commitment to
eager-commitment. The combination of delayed and eager operator-ordering
commitments allows FLECS to take advantage of the benefits of explicitly using
a simulated execution state and reasoning about planning constraints. FLECS can
vary its commitment strategy across different problems and domains, and also
during the course of a single planning problem. FLECS represents a novel
contribution to planning in that it explicitly provides the choice of which
commitment strategy to use while planning. FLECS provides a framework to
investigate the mapping from planning domains and problems to efficient
planning strategies.
| [
{
"version": "v1",
"created": "Thu, 1 Jun 1995 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Veloso",
"M.",
""
],
[
"Stone",
"P.",
""
]
] |
cs/9506102 | null | R. J. Mooney, M. E. Califf | Induction of First-Order Decision Lists: Results on Learning the Past
Tense of English Verbs | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995), 1-24 | null | null | cs.AI | null | This paper presents a method for inducing logic programs from examples that
learns a new class of concepts called first-order decision lists, defined as
ordered lists of clauses each ending in a cut. The method, called FOIDL, is
based on FOIL (Quinlan, 1990) but employs intensional background knowledge and
avoids the need for explicit negative examples. It is particularly useful for
problems that involve rules with specific exceptions, such as learning the
past-tense of English verbs, a task widely studied in the context of the
symbolic/connectionist debate. FOIDL is able to learn concise, accurate
programs for this problem from significantly fewer examples than previous
methods (both connectionist and symbolic).
| [
{
"version": "v1",
"created": "Thu, 1 Jun 1995 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Mooney",
"R. J.",
""
],
[
"Califf",
"M. E.",
""
]
] |
cs/9507101 | null | R. Bergmann, W. Wilke | Building and Refining Abstract Planning Cases by Change of
Representation Language | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 3, (1995), 53-118 | null | null | cs.AI | null | Abstraction is one of the most promising approaches to improve the performance of
problem solvers. In several domains abstraction by dropping sentences of a
domain description -- as used in most hierarchical planners -- has proven
useful. In this paper we present examples which illustrate significant
drawbacks of abstraction by dropping sentences. To overcome these drawbacks, we
propose a more general view of abstraction involving the change of
representation language. We have developed a new abstraction methodology and a
related sound and complete learning algorithm that allows the complete change
of representation language of planning cases from concrete to abstract.
However, to achieve a powerful change of the representation language, the
abstract language itself as well as rules which describe admissible ways of
abstracting states must be provided in the domain model. This new abstraction
approach is the core of Paris (Plan Abstraction and Refinement in an Integrated
System), a system in which abstract planning cases are automatically learned
from given concrete cases. An empirical study in the domain of process planning
in mechanical engineering shows significant advantages of the proposed
reasoning from abstract cases over classical hierarchical planning.
| [
{
"version": "v1",
"created": "Sat, 1 Jul 1995 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Bergmann",
"R.",
""
],
[
"Wilke",
"W.",
""
]
] |
cs/9508101 | null | Q. Zhao, T. Nishida | Using Qualitative Hypotheses to Identify Inaccurate Data | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
119-145 | null | null | cs.AI | null | Identifying inaccurate data has long been regarded as a significant and
difficult problem in AI. In this paper, we present a new method for identifying
inaccurate data on the basis of qualitative correlations among related data.
First, we introduce the definitions of related data and qualitative
correlations among related data. Then we put forward a new concept called
support coefficient function (SCF). SCF can be used to extract, represent, and
calculate qualitative correlations among related data within a dataset. We
propose an approach to determining dynamic shift intervals of inaccurate data,
and an approach to calculating possibility of identifying inaccurate data,
respectively. Both of the approaches are based on SCF. Finally we present an
algorithm for identifying inaccurate data by using qualitative correlations
among related data as confirmatory or disconfirmatory evidence. We have
developed a practical system for interpreting infrared spectra by applying the
method, and have fully tested the system against several hundred real spectra.
The experimental results show that the method is significantly better than the
conventional methods used in many similar systems.
| [
{
"version": "v1",
"created": "Tue, 1 Aug 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Zhao",
"Q.",
""
],
[
"Nishida",
"T.",
""
]
] |
cs/9508102 | null | C. G. Giraud-Carrier, T. R. Martinez | An Integrated Framework for Learning and Reasoning | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 3, (1995),
147-185 | null | null | cs.AI | null | Learning and reasoning are both aspects of what is considered to be
intelligence. Their studies within AI have been separated historically,
learning being the topic of machine learning and neural networks, and reasoning
falling under classical (or symbolic) AI. However, learning and reasoning are
in many ways interdependent. This paper discusses the nature of some of these
interdependencies and proposes a general framework called FLARE, that combines
inductive learning using prior knowledge together with reasoning in a
propositional setting. Several examples that test the framework are presented,
including classical induction, many important reasoning protocols and two
simple expert systems.
| [
{
"version": "v1",
"created": "Tue, 1 Aug 1995 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Giraud-Carrier",
"C. G.",
""
],
[
"Martinez",
"T. R.",
""
]
] |
cs/9510101 | null | Y. Bengio, P. Frasconi | Diffusion of Context and Credit Information in Markovian Models | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
249-270 | null | null | cs.AI | null | This paper studies the problem of ergodicity of transition probability
matrices in Markovian models, such as hidden Markov models (HMMs), and how it
makes the task of learning to represent long-term context for sequential data
very difficult. This phenomenon hurts the forward propagation of long-term
context information, as well as learning a hidden state representation to
represent long-term context, which depends on propagating credit information
backwards in time. Using results from Markov chain theory, we show that this
problem of diffusion of context and credit is reduced when the transition
probabilities approach 0 or 1, i.e., the transition probability matrices are
sparse and the model essentially deterministic. The results found in this paper
apply to learning approaches based on continuous optimization, such as gradient
descent and the Baum-Welch algorithm.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Bengio",
"Y.",
""
],
[
"Frasconi",
"P.",
""
]
] |
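The abstract above argues that ergodic transition matrices wash out long-term context, whereas near-deterministic (sparse) ones preserve it. The numpy snippet below illustrates exactly that contrast on two 2-state chains: after many steps, the rows of the ergodic matrix power become identical (the starting state is forgotten), while the near-deterministic matrix keeps its rows distinct. The matrices are invented for the demonstration.

```python
import numpy as np

steps = 50
ergodic = np.array([[0.6, 0.4],
                    [0.3, 0.7]])           # well-mixed chain
near_det = np.array([[0.99, 0.01],
                     [0.01, 0.99]])        # almost deterministic chain

# Row i of T**steps is the state distribution after `steps` transitions when
# starting in state i; identical rows mean the start state no longer matters.
print(np.round(np.linalg.matrix_power(ergodic, steps), 3))   # rows ~ [0.429, 0.571]
print(np.round(np.linalg.matrix_power(near_det, steps), 3))  # rows remain distinct
```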
cs/9510102 | null | G. Pinkas, R. Dechter | Improving Connectionist Energy Minimization | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
223-248 | null | null | cs.AI | null | Symmetric networks designed for energy minimization such as Boltzman machines
and Hopfield nets are frequently investigated for use in optimization,
constraint satisfaction and approximation of NP-hard problems. Nevertheless,
finding a global solution (i.e., a global minimum for the energy function) is
not guaranteed and even a local solution may take an exponential number of
steps. We propose an improvement to the standard local activation function used
for such networks. The improved algorithm guarantees that a global minimum is
found in linear time for tree-like subnetworks. The algorithm, called activate,
is uniform and does not assume that the network is tree-like. It can identify
tree-like subnetworks even in cyclic topologies (arbitrary networks) and avoid
local minima along these trees. For acyclic networks, the algorithm is
guaranteed to converge to a global minimum from any initial state of the system
(self-stabilization) and remains correct under various types of schedulers. On
the negative side, we show that in the presence of cycles, no uniform algorithm
exists that guarantees optimality even under a sequential asynchronous
scheduler. An asynchronous scheduler can activate only one unit at a time while
a synchronous scheduler can activate any number of units in a single time step.
In addition, no uniform algorithm exists to optimize even acyclic networks when
the scheduler is synchronous. Finally, we show how the algorithm can be
improved using the cycle-cutset scheme. The general algorithm, called
activate-with-cutset, improves over activate and has some performance
guarantees that are related to the size of the network's cycle-cutset.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Pinkas",
"G.",
""
],
[
"Dechter",
"R.",
""
]
] |
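As background for the energy-minimization setting above, the sketch below implements the standard Hopfield energy function and a greedy asynchronous update rule, which never increases the energy for symmetric zero-diagonal weights but can stall in a local minimum; it is this limitation that motivates improved activation schemes such as the paper's activate algorithm. The network size, weights, and schedule here are arbitrary illustrative choices, not a reimplementation of activate.

import numpy as np

def energy(W, theta, s):
    # Standard Hopfield energy: E(s) = -1/2 * s^T W s - theta . s
    return -0.5 * s @ W @ s - theta @ s

def greedy_minimize(W, theta, s, sweeps=10):
    # Asynchronous sign-rule updates; each flip can only lower (or keep)
    # the energy, so the dynamics converge, possibly to a local minimum.
    s = s.copy()
    for _ in range(sweeps):
        changed = False
        for i in range(len(s)):
            new_si = 1 if W[i] @ s + theta[i] >= 0 else -1
            if new_si != s[i]:
                s[i] = new_si
                changed = True
        if not changed:
            break
    return s

rng = np.random.default_rng(0)
n = 6
W = rng.normal(size=(n, n))
W = (W + W.T) / 2.0           # symmetric weights
np.fill_diagonal(W, 0.0)      # no self-connections
theta = rng.normal(size=n)
s0 = rng.choice([-1, 1], size=n)

s_final = greedy_minimize(W, theta, s0)
print("initial energy:", energy(W, theta, s0))
print("final   energy:", energy(W, theta, s_final))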
cs/9510103 | null | K. Woods, D. Cook, L. Hall, K. Bowyer, L. Stark | Learning Membership Functions in a Function-Based Object Recognition
System | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
187-222 | null | null | cs.AI | null | Functionality-based recognition systems recognize objects at the category
level by reasoning about how well the objects support the expected function.
Such systems naturally associate a "measure of goodness" or "membership value" with a recognized object. This measure of goodness is the result of
combining individual measures, or membership values, from potentially many
primitive evaluations of different properties of the object's shape. A
membership function is used to compute the membership value when evaluating a
primitive of a particular physical property of an object. In previous versions
of a recognition system known as Gruff, the membership function for each of the
primitive evaluations was hand-crafted by the system designer. In this paper,
we provide a learning component for the Gruff system, called Omlet, which
automatically learns membership functions given a set of example objects
labeled with their desired category measure. The learning algorithm is
generally applicable to any problem in which low-level membership values are
combined through an and-or tree structure to give a final overall membership
value.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 1995 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Woods",
"K.",
""
],
[
"Cook",
"D.",
""
],
[
"Hall",
"L.",
""
],
[
"Bowyer",
"K.",
""
],
[
"Stark",
"L.",
""
]
] |
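To make the and-or combination scheme mentioned in the abstract concrete, the sketch below evaluates a small and-or tree of membership values using the common fuzzy-logic convention of min for AND nodes and max for OR nodes. This is only one possible combination rule, chosen for illustration; the actual Gruff/Omlet combination functions and learned membership functions are not reproduced here.

# Each node is either ("leaf", value) holding a primitive membership value
# in [0, 1], or an internal node ("and"/"or", [children]).
def combine(node):
    kind = node[0]
    if kind == "leaf":
        return node[1]
    values = [combine(child) for child in node[1]]
    return min(values) if kind == "and" else max(values)

# A toy "sittable surface AND (stable OR wide base)" evaluation.
tree = ("and", [
    ("leaf", 0.9),                          # sittable surface
    ("or", [("leaf", 0.4), ("leaf", 0.7)])  # stable / wide base
])

print(combine(tree))   # -> 0.7: the weakest required property limits the score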
cs/9511101 | null | S. B. Huffman, J. E. Laird | Flexibly Instructable Agents | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
271-324 | null | null | cs.AI | null | This paper presents an approach to learning from situated, interactive
tutorial instruction within an ongoing agent. Tutorial instruction is a
flexible (and thus powerful) paradigm for teaching tasks because it allows an
instructor to communicate whatever types of knowledge an agent might need in
whatever situations might arise. To support this flexibility, however, the
agent must be able to learn multiple kinds of knowledge from a broad range of
instructional interactions. Our approach, called situated explanation, achieves
such learning through a combination of analytic and inductive techniques. It
combines a form of explanation-based learning that is situated for each
instruction with a full suite of contextually guided responses to incomplete
explanations. The approach is implemented in an agent called Instructo-Soar
that learns hierarchies of new tasks and other domain knowledge from
interactive natural language instructions. Instructo-Soar meets three key
requirements of flexible instructability that distinguish it from previous
systems: (1) it can take known or unknown commands at any instruction point;
(2) it can handle instructions that apply to either its current situation or to
a hypothetical situation specified in language (as in, for instance,
conditional instructions); and (3) it can learn, from instructions, each class
of knowledge it uses to perform tasks.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Huffman",
"S. B.",
""
],
[
"Laird",
"J. E.",
""
]
] |
cs/9512101 | null | G. I. Webb | OPUS: An Efficient Admissible Algorithm for Unordered Search | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
431-465 | null | null | cs.AI | null | OPUS is a branch and bound search algorithm that enables efficient admissible
search through spaces for which the order of search operator application is not
significant. The algorithm's search efficiency is demonstrated with respect to
very large machine learning search spaces. The use of admissible search is of
potential value to the machine learning community as it means that the exact
learning biases to be employed for complex learning tasks can be precisely
specified and manipulated. OPUS also has potential for application in other
areas of artificial intelligence, notably, truth maintenance.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Webb",
"G. I.",
""
]
] |
cs/9512102 | null | A. Broggi, S. Berte | Vision-Based Road Detection in Automotive Systems: A Real-Time
Expectation-Driven Approach | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
325-348 | null | null | cs.AI | null | The main aim of this work is the development of a vision-based road detection
system fast enough to cope with the difficult real-time constraints imposed by
moving vehicle applications. The hardware platform, a special-purpose massively
parallel system, has been chosen to minimize system production and operational
costs. This paper presents a novel approach to expectation-driven low-level
image segmentation, which can be mapped naturally onto mesh-connected massively
parallel SIMD architectures capable of handling hierarchical data structures.
The input image is assumed to contain a distorted version of a given template;
a multiresolution stretching process is used to reshape the original template
in accordance with the acquired image content, minimizing a potential function.
The distorted template is the process output.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 1995 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Broggi",
"A.",
""
],
[
"Berte",
"S.",
""
]
] |
cs/9512103 | null | P. Idestam-Almquist | Generalization of Clauses under Implication | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
467-489 | null | null | cs.AI | null | In the area of inductive learning, generalization is a central operation, and the usual definition of induction is based on logical implication. Recently there has been rising interest in clausal representation of knowledge in
machine learning. Almost all inductive learning systems that perform
generalization of clauses use the relation theta-subsumption instead of
implication. The main reason is that there is a well-known and simple technique
to compute least general generalizations under theta-subsumption, but not under
implication. However generalization under theta-subsumption is inappropriate
for learning recursive clauses, which is a crucial problem since recursion is
the basic program structure of logic programs. We note that implication between
clauses is undecidable, and we therefore introduce a stronger form of
implication, called T-implication, which is decidable between clauses. We show
that for every finite set of clauses there exists a least general
generalization under T-implication. We describe a technique to reduce
generalizations under implication of a clause to generalizations under
theta-subsumption of what we call an expansion of the original clause. Moreover
we show that for every non-tautological clause there exists a T-complete
expansion, which means that every generalization under T-implication of the
clause is reduced to a generalization under theta-subsumption of the expansion.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Idestam-Almquist",
"P.",
""
]
] |
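The "well-known and simple technique" for least general generalizations under theta-subsumption referred to in the abstract is, at the level of atoms, Plotkin-style anti-unification: identical function symbols are generalized recursively, and each distinct pair of mismatching subterms is consistently replaced by the same fresh variable. The sketch below implements this for atoms represented as nested tuples; it is a simplified illustration only and does not handle full clauses, T-implication, or the expansions introduced in the paper.

def lgg(t1, t2, mapping):
    # Least general generalization of two terms/atoms. Terms are strings
    # (constants/variables) or tuples (functor, arg1, ..., argn). `mapping`
    # remembers which pair of subterms was already replaced by which
    # variable, so the replacement is consistent across argument positions.
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, mapping)
                                for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in mapping:
        mapping[(t1, t2)] = f"X{len(mapping)}"
    return mapping[(t1, t2)]

# lgg of p(f(a), a) and p(f(b), b) is p(f(X0), X0): the mismatch (a, b)
# is mapped to the same variable in both positions.
atom1 = ("p", ("f", "a"), "a")
atom2 = ("p", ("f", "b"), "b")
print(lgg(atom1, atom2, {}))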
cs/9512104 | null | D. Heckerman, R. Shachter | Decision-Theoretic Foundations for Causal Reasoning | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
405-430 | null | null | cs.AI | null | We present a definition of cause and effect in terms of decision-theoretic
primitives and thereby provide a principled foundation for causal reasoning.
Our definition departs from the traditional view of causation in that causal
assertions may vary with the set of decisions available. We argue that this
approach provides added clarity to the notion of cause. Also in this paper, we
examine the encoding of causal relationships in directed acyclic graphs. We
describe a special class of influence diagrams, those in canonical form, and
show its relationship to Pearl's representation of cause and effect. Finally,
we show how canonical form facilitates counterfactual reasoning.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Heckerman",
"D.",
""
],
[
"Shachter",
"R.",
""
]
] |
cs/9512105 | null | R. Khardon | Translating between Horn Representations and their Characteristic Models | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
349-372 | null | null | cs.AI | null | Characteristic models are an alternative, model based, representation for
Horn expressions. It has been shown that these two representations are
incomparable and each has its advantages over the other. It is therefore
natural to ask what is the cost of translating, back and forth, between these
representations. Interestingly, the same translation questions arise in
database theory, where it has applications to the design of relational
databases. This paper studies the computational complexity of these problems.
Our main result is that the two translation problems are equivalent under
polynomial reductions, and that they are equivalent to the corresponding
decision problem. Namely, translating is equivalent to deciding whether a given
set of models is the set of characteristic models for a given Horn expression.
We also relate these problems to the hypergraph transversal problem, a well
known problem which is related to other applications in AI and for which no
polynomial time algorithm is known. It is shown that in general our translation
problems are at least as hard as the hypergraph transversal problem, and in a
special case they are equivalent to it.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Khardon",
"R.",
""
]
] |
cs/9512106 | null | M. Buro | Statistical Feature Combination for the Evaluation of Game Positions | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
373-382 | null | null | cs.AI | null | This article describes an application of three well-known statistical methods
in the field of game-tree search: using a large number of classified Othello
positions, feature weights for evaluation functions with a
game-phase-independent meaning are estimated by means of logistic regression,
Fisher's linear discriminant, and the quadratic discriminant function for
normally distributed features. Thereafter, the playing strengths are compared
by means of tournaments between the resulting versions of a world-class Othello
program. In this application, logistic regression - which is used here for the
first time in the context of game playing - leads to better results than the
other approaches.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Buro",
"M.",
""
]
] |
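The first of the three estimation methods named in the abstract, logistic regression, can be sketched as follows: each training example is a feature vector extracted from a labeled position, the label is the game outcome, and the fitted coefficients serve as evaluation-function weights. Everything below (the synthetic features, the generated labels, and the use of scikit-learn) is an illustrative stand-in, not the paper's Othello features or training data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-ins for position features (e.g., mobility, corner control).
n_positions, n_features = 5000, 4
X = rng.normal(size=(n_positions, n_features))

# Synthetic win/loss outcomes generated from hidden "true" weights, purely
# so that the recovered coefficients have something to be compared against.
true_w = np.array([1.5, -0.8, 0.3, 0.0])
y = (X @ true_w + rng.logistic(size=n_positions) > 0).astype(int)

model = LogisticRegression()
model.fit(X, y)
print("estimated feature weights:", model.coef_.round(2))

# The fitted linear score X @ coef_ + intercept_ can then serve as an
# evaluation function whose output is monotone in the estimated winning
# probability.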
cs/9512107 | null | S. M. Weiss, N. Indurkhya | Rule-based Machine Learning Methods for Functional Prediction | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 3, (1995),
383-403 | null | null | cs.AI | null | We describe a machine learning method for predicting the value of a
real-valued function, given the values of multiple input variables. The method
induces solutions from samples in the form of ordered disjunctive normal form
(DNF) decision rules. A central objective of the method and representation is
the induction of compact, easily interpretable solutions. This rule-based
decision model can be extended to search efficiently for similar cases prior to
approximating function values. Experimental results on real-world data
demonstrate that the new techniques are competitive with existing machine
learning and statistical methods and can sometimes yield superior regression
performance.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 1995 00:00:00 GMT"
}
] | 1,416,182,400,000 | [
[
"Weiss",
"S. M.",
""
],
[
"Indurkhya",
"N.",
""
]
] |
cs/9601101 | null | P. vanBeek, D. W. Manchak | The Design and Experimental Analysis of Algorithms for Temporal
Reasoning | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996), 1-18 | null | null | cs.AI | null | Many applications -- from planning and scheduling to problems in molecular
biology -- rely heavily on a temporal reasoning component. In this paper, we
discuss the design and empirical analysis of algorithms for a temporal
reasoning system based on Allen's influential interval-based framework for
representing temporal information. At the core of the system are algorithms for
determining whether the temporal information is consistent, and, if so, finding
one or more scenarios that are consistent with the temporal information. Two
important algorithms for these tasks are a path consistency algorithm and a
backtracking algorithm. For the path consistency algorithm, we develop
techniques that can result in up to a ten-fold speedup over an already highly
optimized implementation. For the backtracking algorithm, we develop variable
and value ordering heuristics that are shown empirically to dramatically
improve the performance of the algorithm. As well, we show that a previously
suggested reformulation of the backtracking search problem can reduce the time
and space requirements of the backtracking search. Taken together, the
techniques we develop allow a temporal reasoning component to solve problems
that are of practical size.
| [
{
"version": "v1",
"created": "Mon, 1 Jan 1996 00:00:00 GMT"
}
] | 1,472,601,600,000 | [
[
"vanBeek",
"P.",
""
],
[
"Manchak",
"D. W.",
""
]
] |
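The path consistency computation at the core of such a temporal reasoner repeatedly tightens the relation between every pair of time points using composition through a third point: R_ij := R_ij intersected with (R_ik composed with R_kj). The sketch below runs this scheme on the simple point algebra over {<, =, >} rather than Allen's thirteen interval relations, and without any of the optimizations studied in the paper; it is meant only to show the shape of the algorithm.

from itertools import product

BASE = ("<", "=", ">")
# Composition of base point relations; composing through "=" is the identity,
# and "<" followed by ">" (or vice versa) gives no information.
COMP = {("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
        ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
        (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"}}

def compose(r1, r2):
    out = set()
    for a, b in product(r1, r2):
        out |= COMP[(a, b)]
    return out

def path_consistency(n, rel):
    # rel[(i, j)] is the set of base relations allowed between points i and j.
    changed = True
    while changed:
        changed = False
        for k, i, j in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            tightened = rel[(i, j)] & compose(rel[(i, k)], rel[(k, j)])
            if tightened != rel[(i, j)]:
                if not tightened:
                    return None           # inconsistent network
                rel[(i, j)] = tightened
                rel[(j, i)] = {{"<": ">", ">": "<", "=": "="}[r]
                               for r in tightened}
                changed = True
    return rel

# Three points with A < B and B < C; path consistency infers A < C.
rel = {(i, j): set(BASE) for i in range(3) for j in range(3) if i != j}
rel[(0, 1)], rel[(1, 0)] = {"<"}, {">"}
rel[(1, 2)], rel[(2, 1)] = {"<"}, {">"}
print(path_consistency(3, rel)[(0, 2)])   # -> {'<'}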
cs/9602101 | null | G. Brewka | Well-Founded Semantics for Extended Logic Programs with Dynamic
Preferences | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996), 19-36 | null | null | cs.AI | null | The paper describes an extension of well-founded semantics for logic programs
with two types of negation. In this extension information about preferences
between rules can be expressed in the logical language and derived dynamically.
This is achieved by using a reserved predicate symbol and a naming technique.
Conflicts among rules are resolved whenever possible on the basis of derived
preference information. The well-founded conclusions of prioritized logic
programs can be computed in polynomial time. A legal reasoning example
illustrates the usefulness of the approach.
| [
{
"version": "v1",
"created": "Thu, 1 Feb 1996 00:00:00 GMT"
}
] | 1,201,996,800,000 | [
[
"Brewka",
"G.",
""
]
] |
cs/9602102 | null | A. L. Delcher, A. J. Grove, S. Kasif, J. Pearl | Logarithmic-Time Updates and Queries in Probabilistic Networks | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996), 37-59 | null | null | cs.AI | null | Traditional databases commonly support efficient query and update procedures
that operate in time sublinear in the size of the database. Our goal
in this paper is to take a first step toward dynamic reasoning in probabilistic
databases with comparable efficiency. We propose a dynamic data structure that
supports efficient algorithms for updating and querying singly connected
Bayesian networks. In the conventional algorithm, new evidence is absorbed in
O(1) time and queries are processed in time O(N), where N is the size of the
network. We propose an algorithm which, after a preprocessing phase, allows us
to answer queries in time O(log N) at the expense of O(log N) time per evidence
absorption. The usefulness of sub-linear processing time manifests itself in
applications requiring (near) real-time response over large probabilistic
databases. We briefly discuss a potential application of dynamic probabilistic
reasoning in computational biology.
| [
{
"version": "v1",
"created": "Thu, 1 Feb 1996 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Delcher",
"A. L.",
""
],
[
"Grove",
"A. J.",
""
],
[
"Kasif",
"S.",
""
],
[
"Pearl",
"J.",
""
]
] |
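The trade-off highlighted in the abstract, giving up O(1) updates and O(N) queries for O(log N) on both operations, is the same one exhibited by familiar balanced aggregation structures. The sketch below uses a Fenwick tree over running sums purely to illustrate that trade-off; it is emphatically not the paper's data structure for singly connected Bayesian networks.

class FenwickTree:
    # Running-sum structure with O(log N) point update and prefix query:
    # both absorbing a change and answering an aggregate query touch only
    # O(log N) tree cells.
    def __init__(self, n):
        self.n = n
        self.tree = [0.0] * (n + 1)

    def add(self, i, delta):          # absorb a new value at position i
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)

    def prefix_sum(self, i):          # query the aggregate over positions 0..i
        i += 1
        total = 0.0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)
        return total

ft = FenwickTree(8)
for pos, val in enumerate([1, 2, 3, 4, 5, 6, 7, 8]):
    ft.add(pos, val)
print(ft.prefix_sum(3))   # -> 10.0 (1+2+3+4), both operations in O(log N)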
cs/9603101 | null | T. Hogg | Quantum Computing and Phase Transitions in Combinatorial Search | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 4, (1996), 91-128 | null | null | cs.AI | null | We introduce an algorithm for combinatorial search on quantum computers that
is capable of significantly concentrating amplitude into solutions for some NP
search problems, on average. This is done by exploiting the same aspects of
problem structure as used by classical backtrack methods to avoid unproductive
search choices. This quantum algorithm is much more likely to find solutions
than the simple direct use of quantum parallelism. Furthermore, empirical
evaluation on small problems shows this quantum algorithm displays the same
phase transition behavior, and at the same location, as seen in many previously
studied classical search methods. Specifically, difficult problem instances are
concentrated near the abrupt change from underconstrained to overconstrained
problems.
| [
{
"version": "v1",
"created": "Fri, 1 Mar 1996 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Hogg",
"T.",
""
]
] |
cs/9603102 | null | L. K. Saul, T. Jaakkola, M. I. Jordan | Mean Field Theory for Sigmoid Belief Networks | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996), 61-76 | null | null | cs.AI | null | We develop a mean field theory for sigmoid belief networks based on ideas
from statistical mechanics. Our mean field theory provides a tractable
approximation to the true probability distribution in these networks; it also
yields a lower bound on the likelihood of evidence. We demonstrate the utility
of this framework on a benchmark problem in statistical pattern
recognition---the classification of handwritten digits.
| [
{
"version": "v1",
"created": "Fri, 1 Mar 1996 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Saul",
"L. K.",
""
],
[
"Jaakkola",
"T.",
""
],
[
"Jordan",
"M. I.",
""
]
] |
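The lower-bound property mentioned in the abstract is the standard variational inequality log P(v) >= E_Q[log P(h, v)] + H(Q), valid for any distribution Q over the hidden units. The sketch below checks this numerically on a two-unit sigmoid belief network with a Bernoulli mean-field Q; for readability it evaluates the expectation by enumerating the single hidden unit rather than using the paper's tractable approximation, and all parameter values are arbitrary.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny sigmoid belief network: hidden h -> visible v, both binary.
b_h, w, b_v = 0.3, 2.0, -1.0        # arbitrary parameters
v = 1                                # observed evidence

def p_h(h):
    return sigmoid(b_h) if h == 1 else 1.0 - sigmoid(b_h)

def p_v_given_h(v, h):
    p1 = sigmoid(w * h + b_v)
    return p1 if v == 1 else 1.0 - p1

# Exact log-likelihood of the evidence (tractable here by enumeration).
log_p_v = math.log(sum(p_h(h) * p_v_given_h(v, h) for h in (0, 1)))

def elbo(q):
    # Mean-field lower bound with Q(h=1) = q.
    bound = 0.0
    for h, qh in ((1, q), (0, 1.0 - q)):
        if qh > 0.0:
            bound += qh * (math.log(p_h(h)) + math.log(p_v_given_h(v, h))
                           - math.log(qh))
    return bound

for q in (0.1, 0.5, 0.9):
    print(f"Q(h=1)={q}:  ELBO={elbo(q):.4f}  <=  log P(v)={log_p_v:.4f}")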
cs/9603103 | null | J. R. Quinlan | Improved Use of Continuous Attributes in C4.5 | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 4, (1996), 77-90 | null | null | cs.AI | null | A reported weakness of C4.5 in domains with continuous attributes is
addressed by modifying the formation and evaluation of tests on continuous
attributes. An MDL-inspired penalty is applied to such tests, eliminating some
of them from consideration and altering the relative desirability of all tests.
Empirical trials show that the modifications lead to smaller decision trees
with higher predictive accuracies. Results also confirm that a new version of
C4.5 incorporating these changes is superior to recent approaches that use
global discretization and that construct small trees with multi-interval
splits.
| [
{
"version": "v1",
"created": "Fri, 1 Mar 1996 00:00:00 GMT"
}
] | 1,253,836,800,000 | [
[
"Quinlan",
"J. R.",
""
]
] |
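The MDL-inspired penalty described in the abstract can be illustrated with a small sketch: the apparent information gain of a threshold test on a continuous attribute is reduced by a charge for having selected one of the many candidate thresholds. The specific charge below, log2(N - 1) / |D| where N is the number of distinct attribute values among the |D| cases, is the form commonly attributed to the revised C4.5; treat the whole snippet as an illustration under that assumption rather than a faithful reimplementation.

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def penalized_gain(values, labels, threshold):
    # Information gain of a binary split on a continuous attribute, minus an
    # MDL-style charge of log2(N - 1) / |D| bits for the threshold choice.
    D = len(values)
    left = [y for x, y in zip(values, labels) if x <= threshold]
    right = [y for x, y in zip(values, labels) if x > threshold]
    gain = entropy(labels) - (len(left) / D) * entropy(left) \
                           - (len(right) / D) * entropy(right)
    n_distinct = len(set(values))
    return gain - math.log2(n_distinct - 1) / D

values = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
labels = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(round(penalized_gain(values, labels, 4.0), 3))   # gain 1.0 minus penalty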