corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-675901 | cs/9308101 | Dynamic Backtracking | <|reference_start|>Dynamic Backtracking: Because of their occasional need to return to shallow points in a search tree, existing backtracking methods can sometimes erase meaningful progress toward solving a search problem. In this paper, we present a method by which backtrack points can be moved deeper in the search space, thereby avoiding this difficulty. The technique developed is a variant of dependency-directed backtracking that uses only polynomial space while still providing useful control information and retaining the completeness guarantees provided by earlier approaches.<|reference_end|> | arxiv | @article{ginsberg1993dynamic,
title={Dynamic Backtracking},
author={M. L. Ginsberg},
journal={Journal of Artificial Intelligence Research, Vol 1, (1993), 25-46},
year={1993},
archivePrefix={arXiv},
eprint={cs/9308101},
primaryClass={cs.AI}
} | ginsberg1993dynamic |
arxiv-675902 | cs/9308102 | A Market-Oriented Programming Environment and its Application to Distributed Multicommodity Flow Problems | <|reference_start|>A Market-Oriented Programming Environment and its Application to Distributed Multicommodity Flow Problems: Market price systems constitute a well-understood class of mechanisms that under certain conditions provide effective decentralization of decision making with minimal communication overhead. In a market-oriented programming approach to distributed problem solving, we derive the activities and resource allocations for a set of computational agents by computing the competitive equilibrium of an artificial economy. WALRAS provides basic constructs for defining computational market structures, and protocols for deriving their corresponding price equilibria. In a particular realization of this approach for a form of multicommodity flow problem, we see that careful construction of the decision process according to economic principles can lead to efficient distributed resource allocation, and that the behavior of the system can be meaningfully analyzed in economic terms.<|reference_end|> | arxiv | @article{wellman1993a,
title={A Market-Oriented Programming Environment and its Application to
Distributed Multicommodity Flow Problems},
author={M. P. Wellman},
journal={Journal of Artificial Intelligence Research, Vol 1, (1993), 1-23},
year={1993},
archivePrefix={arXiv},
eprint={cs/9308102},
primaryClass={cs.AI}
} | wellman1993a |
arxiv-675903 | cs/9309101 | An Empirical Analysis of Search in GSAT | <|reference_start|>An Empirical Analysis of Search in GSAT: We describe an extensive study of search in GSAT, an approximation procedure for propositional satisfiability. GSAT performs greedy hill-climbing on the number of satisfied clauses in a truth assignment. Our experiments provide a more complete picture of GSAT's search than previous accounts. We describe in detail the two phases of search: rapid hill-climbing followed by a long plateau search. We demonstrate that when applied to randomly generated 3SAT problems, there is a very simple scaling with problem size for both the mean number of satisfied clauses and the mean branching rate. Our results allow us to make detailed numerical conjectures about the length of the hill-climbing phase, the average gradient of this phase, and to conjecture that both the average score and average branching rate decay exponentially during plateau search. We end by showing how these results can be used to direct future theoretical analysis. This work provides a case study of how computer experiments can be used to improve understanding of the theoretical properties of algorithms.<|reference_end|> | arxiv | @article{gent1993an,
title={An Empirical Analysis of Search in GSAT},
author={I. P. Gent, T. Walsh},
journal={Journal of Artificial Intelligence Research, Vol 1, (1993), 47-59},
year={1993},
archivePrefix={arXiv},
eprint={cs/9309101},
primaryClass={cs.AI}
} | gent1993an |
arxiv-675904 | cs/9311101 | The Difficulties of Learning Logic Programs with Cut | <|reference_start|>The Difficulties of Learning Logic Programs with Cut: As real logic programmers normally use cut (!), an effective learning procedure for logic programs should be able to deal with it. Because the cut predicate has only a procedural meaning, clauses containing cut cannot be learned using an extensional evaluation method, as is done in most learning systems. On the other hand, searching a space of possible programs (instead of a space of independent clauses) is unfeasible. An alternative solution is to generate first a candidate base program which covers the positive examples, and then make it consistent by inserting cut where appropriate. The problem of learning programs with cut has not been investigated before and this seems to be a natural and reasonable approach. We generalize this scheme and investigate the difficulties that arise. Some of the major shortcomings are actually caused, in general, by the need for intensional evaluation. As a conclusion, the analysis of this paper suggests, on precise and technical grounds, that learning cut is difficult, and current induction techniques should probably be restricted to purely declarative logic languages.<|reference_end|> | arxiv | @article{bergadano1993the,
title={The Difficulties of Learning Logic Programs with Cut},
author={F. Bergadano, D. Gunetti, U. Trinchero},
journal={Journal of Artificial Intelligence Research, Vol 1, (1993), 91-107},
year={1993},
archivePrefix={arXiv},
eprint={cs/9311101},
primaryClass={cs.AI}
} | bergadano1993the |
arxiv-675905 | cs/9311102 | Software Agents: Completing Patterns and Constructing User Interfaces | <|reference_start|>Software Agents: Completing Patterns and Constructing User Interfaces: To support the goal of allowing users to record and retrieve information, this paper describes an interactive note-taking system for pen-based computers with two distinctive features. First, it actively predicts what the user is going to write. Second, it automatically constructs a custom, button-box user interface on request. The system is an example of a learning-apprentice software agent. A machine learning component characterizes the syntax and semantics of the user's information. A performance system uses this learned information to generate completion strings and construct a user interface. Description of Online Appendix: People like to record information. Doing this on paper is initially efficient, but lacks flexibility. Recording information on a computer is less efficient but more powerful. In our new note taking software, the user records information directly on a computer. Behind the interface, an agent acts for the user. To help, it provides defaults and constructs a custom user interface. The demonstration is a QuickTime movie of the note taking agent in action. The file is a binhexed self-extracting archive. Macintosh utilities for binhex are available from mac.archive.umich.edu. QuickTime is available from ftp.apple.com in the dts/mac/sys.soft/quicktime.<|reference_end|> | arxiv | @article{schlimmer1993software,
title={Software Agents: Completing Patterns and Constructing User Interfaces},
author={J. C. Schlimmer, L. A. Hermens},
journal={Journal of Artificial Intelligence Research, Vol 1, (1993), 61-89},
year={1993},
archivePrefix={arXiv},
eprint={cs/9311102},
primaryClass={cs.AI}
} | schlimmer1993software |
arxiv-675906 | cs/9311103 | Set Theory for Verification: I From Foundations to Functions | <|reference_start|>Set Theory for Verification: I From Foundations to Functions: A logic for specification and verification is derived from the axioms of Zermelo-Fraenkel set theory. The proofs are performed using the proof assistant Isabelle. Isabelle is generic, supporting several different logics. Isabelle has the flexibility to adapt to variants of set theory. Its higher-order syntax supports the definition of new binding operators. Unknowns in subgoals can be instantiated incrementally. The paper describes the derivation of rules for descriptions, relations and functions, and discusses interactive proofs of Cantor's Theorem, the Composition of Homomorphisms challenge [9], and Ramsey's Theorem [5]. A generic proof assistant can stand up against provers dedicated to particular logics.<|reference_end|> | arxiv | @article{paulson2000set,
title={Set Theory for Verification: I. From Foundations to Functions},
author={Lawrence C. Paulson},
journal={Journal of Automated Reasoning 11 (1993),
353-389},
year={2000},
archivePrefix={arXiv},
eprint={cs/9311103},
primaryClass={cs.LO}
} | paulson2000set |
arxiv-675907 | cs/9312101 | Decidable Reasoning in Terminological Knowledge Representation Systems | <|reference_start|>Decidable Reasoning in Terminological Knowledge Representation Systems: Terminological knowledge representation systems (TKRSs) are tools for designing and using knowledge bases that make use of terminological languages (or concept languages). We analyze from a theoretical point of view a TKRS whose capabilities go beyond the ones of presently available TKRSs. The new features studied, often required in practical applications, can be summarized in three main points. First, we consider a highly expressive terminological language, called ALCNR, including general complements of concepts, number restrictions and role conjunction. Second, we allow to express inclusion statements between general concepts, and terminological cycles as a particular case. Third, we prove the decidability of a number of desirable TKRS-deduction services (like satisfiability, subsumption and instance checking) through a sound, complete and terminating calculus for reasoning in ALCNR-knowledge bases. Our calculus extends the general technique of constraint systems. As a byproduct of the proof, we get also the result that inclusion statements in ALCNR can be simulated by terminological cycles, if descriptive semantics is adopted.<|reference_end|> | arxiv | @article{buchheit1993decidable,
title={Decidable Reasoning in Terminological Knowledge Representation Systems},
author={M. Buchheit, F. M. Donini, A. Schaerf},
journal={Journal of Artificial Intelligence Research, Vol 1, (1993),
109-138},
year={1993},
archivePrefix={arXiv},
eprint={cs/9312101},
primaryClass={cs.AI}
} | buchheit1993decidable |
arxiv-675908 | cs/9401101 | Teleo-Reactive Programs for Agent Control | <|reference_start|>Teleo-Reactive Programs for Agent Control: A formalism is presented for computing and organizing actions for autonomous agents in dynamic environments. We introduce the notion of teleo-reactive (T-R) programs whose execution entails the construction of circuitry for the continuous computation of the parameters and conditions on which agent action is based. In addition to continuous feedback, T-R programs support parameter binding and recursion. A primary difference between T-R programs and many other circuit-based systems is that the circuitry of T-R programs is more compact; it is constructed at run time and thus does not have to anticipate all the contingencies that might arise over all possible runs. In addition, T-R programs are intuitive and easy to write and are written in a form that is compatible with automatic planning and learning methods. We briefly describe some experimental applications of T-R programs in the control of simulated and actual mobile robots.<|reference_end|> | arxiv | @article{nilsson1994teleo-reactive,
title={Teleo-Reactive Programs for Agent Control},
author={N. Nilsson},
journal={Journal of Artificial Intelligence Research, Vol 1, (1994),
139-158},
year={1994},
archivePrefix={arXiv},
eprint={cs/9401101},
primaryClass={cs.AI}
} | nilsson1994teleo-reactive |
arxiv-675909 | cs/9401102 | Mini-indexes for literate programs | <|reference_start|>Mini-indexes for literate programs: This paper describes how to implement a documentation technique that helps readers to understand large programs or collections of programs, by providing local indexes to all identifiers that are visible on every two-page spread. A detailed example is given for a program that finds all Hamiltonian circuits in an undirected graph.<|reference_end|> | arxiv | @article{knuth1994mini-indexes,
title={Mini-indexes for literate programs},
author={Donald E. Knuth},
journal={Software -- Concepts and Tools 15 (1994), 2--11},
year={1994},
number={Knuth migration 11/2004},
archivePrefix={arXiv},
eprint={cs/9401102},
primaryClass={cs.PL}
} | knuth1994mini-indexes |
arxiv-675910 | cs/9402101 | Learning the Past Tense of English Verbs: The Symbolic Pattern Associator vs Connectionist Models | <|reference_start|>Learning the Past Tense of English Verbs: The Symbolic Pattern Associator vs Connectionist Models: Learning the past tense of English verbs - a seemingly minor aspect of language acquisition - has generated heated debates since 1986, and has become a landmark task for testing the adequacy of cognitive modeling. Several artificial neural networks (ANNs) have been implemented, and a challenge for better symbolic models has been posed. In this paper, we present a general-purpose Symbolic Pattern Associator (SPA) based upon the decision-tree learning algorithm ID3. We conduct extensive head-to-head comparisons on the generalization ability between ANN models and the SPA under different representations. We conclude that the SPA generalizes the past tense of unseen verbs better than ANN models by a wide margin, and we offer insights as to why this should be the case. We also discuss a new default strategy for decision-tree learning algorithms.<|reference_end|> | arxiv | @article{ling1994learning,
title={Learning the Past Tense of English Verbs: The Symbolic Pattern
Associator vs. Connectionist Models},
author={C. X. Ling},
journal={Journal of Artificial Intelligence Research, Vol 1, (1994),
209-229},
year={1994},
archivePrefix={arXiv},
eprint={cs/9402101},
primaryClass={cs.AI}
} | ling1994learning |
arxiv-675911 | cs/9402102 | Substructure Discovery Using Minimum Description Length and Background Knowledge | <|reference_start|>Substructure Discovery Using Minimum Description Length and Background Knowledge: The ability to identify interesting and repetitive substructures is an essential component to discovering knowledge in structural data. We describe a new version of our SUBDUE substructure discovery system based on the minimum description length principle. The SUBDUE system discovers substructures that compress the original data and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. SUBDUE uses a computationally-bounded inexact graph match that identifies similar, but not identical, instances of a substructure and finds an approximate measure of closeness of two substructures when under computational constraints. In addition to the minimum description length principle, other background knowledge can be used by SUBDUE to guide the search towards more appropriate substructures. Experiments in a variety of domains demonstrate SUBDUE's ability to find substructures capable of compressing the original data and to discover structural concepts important to the domain. Description of Online Appendix: This is a compressed tar file containing the SUBDUE discovery system, written in C. The program accepts as input databases represented in graph form, and will output discovered substructures with their corresponding value.<|reference_end|> | arxiv | @article{cook1994substructure,
title={Substructure Discovery Using Minimum Description Length and Background
Knowledge},
author={D. J. Cook, L. B. Holder},
journal={Journal of Artificial Intelligence Research, Vol 1, (1994),
231-255},
year={1994},
archivePrefix={arXiv},
eprint={cs/9402102},
primaryClass={cs.AI}
} | cook1994substructure |
arxiv-675912 | cs/9402103 | Bias-Driven Revision of Logical Domain Theories | <|reference_start|>Bias-Driven Revision of Logical Domain Theories: The theory revision problem is the problem of how best to go about revising a deficient domain theory using information contained in examples that expose inaccuracies. In this paper we present our approach to the theory revision problem for propositional domain theories. The approach described here, called PTR, uses probabilities associated with domain theory elements to numerically track the ``flow'' of proof through the theory. This allows us to measure the precise role of a clause or literal in allowing or preventing a (desired or undesired) derivation for a given example. This information is used to efficiently locate and repair flawed elements of the theory. PTR is proved to converge to a theory which correctly classifies all examples, and shown experimentally to be fast and accurate even for deep theories.<|reference_end|> | arxiv | @article{koppel1994bias-driven,
title={Bias-Driven Revision of Logical Domain Theories},
author={M. Koppel, R. Feldman, A. M. Segre},
journal={Journal of Artificial Intelligence Research, Vol 1, (1994),
159-208},
year={1994},
archivePrefix={arXiv},
eprint={cs/9402103},
primaryClass={cs.AI}
} | koppel1994bias-driven |
arxiv-675913 | cs/9403101 | Exploring the Decision Forest: An Empirical Investigation of Occam's Razor in Decision Tree Induction | <|reference_start|>Exploring the Decision Forest: An Empirical Investigation of Occam's Razor in Decision Tree Induction: We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. In particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. The experiments were performed on a massively parallel Maspar computer. The results of the experiments on several artificial and two real world problems indicate that, for many of the problems investigated, smaller consistent decision trees are on average less accurate than the average accuracy of slightly larger trees.<|reference_end|> | arxiv | @article{murphy1994exploring,
title={Exploring the Decision Forest: An Empirical Investigation of Occam's
Razor in Decision Tree Induction},
author={P. M. Murphy, M. J. Pazzani},
journal={Journal of Artificial Intelligence Research, Vol 1, (1994),
257-275},
year={1994},
archivePrefix={arXiv},
eprint={cs/9403101},
primaryClass={cs.AI}
} | murphy1994exploring |
arxiv-675914 | cs/9406101 | A Semantics and Complete Algorithm for Subsumption in the CLASSIC Description Logic | <|reference_start|>A Semantics and Complete Algorithm for Subsumption in the CLASSIC Description Logic: This paper analyzes the correctness of the subsumption algorithm used in CLASSIC, a description logic-based knowledge representation system that is being used in practical applications. In order to deal efficiently with individuals in CLASSIC descriptions, the developers have had to use an algorithm that is incomplete with respect to the standard, model-theoretic semantics for description logics. We provide a variant semantics for descriptions with respect to which the current implementation is complete, and which can be independently motivated. The soundness and completeness of the polynomial-time subsumption algorithm is established using description graphs, which are an abstracted version of the implementation structures used in CLASSIC, and are of independent interest.<|reference_end|> | arxiv | @article{borgida1994a,
title={A Semantics and Complete Algorithm for Subsumption in the CLASSIC
Description Logic},
author={A. Borgida, P. F. Patel-Schneider},
journal={Journal of Artificial Intelligence Research, Vol 1, (1994),
277-308},
year={1994},
archivePrefix={arXiv},
eprint={cs/9406101},
primaryClass={cs.AI}
} | borgida1994a |
arxiv-675915 | cs/9406102 | Applying GSAT to Non-Clausal Formulas | <|reference_start|>Applying GSAT to Non-Clausal Formulas: In this paper we describe how to modify GSAT so that it can be applied to non-clausal formulas. The idea is to use a particular ``score'' function which gives the number of clauses of the CNF conversion of a formula which are false under a given truth assignment. Its value is computed in linear time, without constructing the CNF conversion itself. The proposed methodology applies to most of the variants of GSAT proposed so far.<|reference_end|> | arxiv | @article{sebastiani1994applying,
title={Applying GSAT to Non-Clausal Formulas},
author={R. Sebastiani},
journal={Journal of Artificial Intelligence Research, Vol 1, (1994),
309-314},
year={1994},
archivePrefix={arXiv},
eprint={cs/9406102},
primaryClass={cs.AI}
} | sebastiani1994applying |
arxiv-675916 | cs/9408101 | Random Worlds and Maximum Entropy | <|reference_start|>Random Worlds and Maximum Entropy: Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula Phi holds given KB. If we are reasoning about a world or system consisting of N individuals, then we can consider all possible worlds, or first-order models, with domain {1,...,N} that satisfy KB, and compute the fraction of them in which Phi is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying Phi and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum-entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics and artificial intelligence, but is far more general. Of equal interest to the result itself are the limitations on its scope. Most importantly, the restriction to unary predicates seems necessary. Although the random-worlds method makes sense in general, the connection to maximum entropy seems to disappear in the non-unary case. These observations suggest unexpected limitations to the applicability of maximum-entropy methods.<|reference_end|> | arxiv | @article{grove1994random,
title={Random Worlds and Maximum Entropy},
author={A. J. Grove, J. Y. Halpern, D. Koller},
journal={Journal of Artificial Intelligence Research, Vol 2, (1994), 33-88},
year={1994},
archivePrefix={arXiv},
eprint={cs/9408101},
primaryClass={cs.AI}
} | grove1994random |
arxiv-675917 | cs/9408102 | Pattern Matching and Discourse Processing in Information Extraction from Japanese Text | <|reference_start|>Pattern Matching and Discourse Processing in Information Extraction from Japanese Text: Information extraction is the task of automatically picking up information of interest from an unconstrained text. Information of interest is usually extracted in two steps. First, sentence level processing locates relevant pieces of information scattered throughout the text; second, discourse processing merges coreferential information to generate the output. In the first step, pieces of information are locally identified without recognizing any relationships among them. A key word search or simple pattern search can achieve this purpose. The second step requires deeper knowledge in order to understand relationships among separately identified pieces of information. Previous information extraction systems focused on the first step, partly because they were not required to link up each piece of information with other pieces. To link the extracted pieces of information and map them onto a structured output format, complex discourse processing is essential. This paper reports on a Japanese information extraction system that merges information using a pattern matcher and discourse processor. Evaluation results show a high level of system performance which approaches human performance.<|reference_end|> | arxiv | @article{kitani1994pattern,
title={Pattern Matching and Discourse Processing in Information Extraction from
Japanese Text},
author={T. Kitani, Y. Eriguchi, M. Hara},
journal={Journal of Artificial Intelligence Research, Vol 2, (1994), 89-110},
year={1994},
archivePrefix={arXiv},
eprint={cs/9408102},
primaryClass={cs.AI}
} | kitani1994pattern |
arxiv-675918 | cs/9408103 | A System for Induction of Oblique Decision Trees | <|reference_start|>A System for Induction of Oblique Decision Trees: This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees.<|reference_end|> | arxiv | @article{murthy1994a,
title={A System for Induction of Oblique Decision Trees},
author={S. K. Murthy, S. Kasif, S. Salzberg},
journal={Journal of Artificial Intelligence Research, Vol 2, (1994), 1-32},
year={1994},
archivePrefix={arXiv},
eprint={cs/9408103},
primaryClass={cs.AI}
} | murthy1994a |
arxiv-675919 | cs/9409101 | On Planning while Learning | <|reference_start|>On Planning while Learning: This paper introduces a framework for Planning while Learning where an agent is given a goal to achieve in an environment whose behavior is only partially known to the agent. We discuss the tractability of various plan-design processes. We show that for a large natural class of Planning while Learning systems, a plan can be presented and verified in a reasonable time. However, coming up algorithmically with a plan, even for simple classes of systems is apparently intractable. We emphasize the role of off-line plan-design processes, and show that, in most natural cases, the verification (projection) part can be carried out in an efficient algorithmic manner.<|reference_end|> | arxiv | @article{safra1994on,
title={On Planning while Learning},
author={S. Safra, M. Tennenholtz},
journal={Journal of Artificial Intelligence Research, Vol 2, (1994),
111-129},
year={1994},
archivePrefix={arXiv},
eprint={cs/9409101},
primaryClass={cs.AI}
} | safra1994on |
arxiv-675920 | cs/9412101 | Wrap-Up: a Trainable Discourse Module for Information Extraction | <|reference_start|>Wrap-Up: a Trainable Discourse Module for Information Extraction: The vast amounts of on-line text now available have led to renewed interest in information extraction (IE) systems that analyze unrestricted text, producing a structured representation of selected information from the text. This paper presents a novel approach that uses machine learning to acquire knowledge for some of the higher level IE processing. Wrap-Up is a trainable IE discourse component that makes intersentential inferences and identifies logical relations among information extracted from the text. Previous corpus-based approaches were limited to lower level processing such as part-of-speech tagging, lexical disambiguation, and dictionary construction. Wrap-Up is fully trainable, and not only automatically decides what classifiers are needed, but even derives the feature set for each classifier automatically. Performance equals that of a partially trainable discourse module requiring manual customization for each domain.<|reference_end|> | arxiv | @article{soderland1994wrap-up:,
title={Wrap-Up: a Trainable Discourse Module for Information Extraction},
author={S. Soderland, W. Lehnert},
journal={Journal of Artificial Intelligence Research, Vol 2, (1994),
131-158},
year={1994},
archivePrefix={arXiv},
eprint={cs/9412101},
primaryClass={cs.AI}
} | soderland1994wrap-up: |
arxiv-675921 | cs/9412102 | Operations for Learning with Graphical Models | <|reference_start|>Operations for Learning with Graphical Models: This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms.<|reference_end|> | arxiv | @article{buntine1994operations,
title={Operations for Learning with Graphical Models},
author={W. L. Buntine},
journal={Journal of Artificial Intelligence Research, Vol 2, (1994),
159-225},
year={1994},
archivePrefix={arXiv},
eprint={cs/9412102},
primaryClass={cs.AI}
} | buntine1994operations |
arxiv-675922 | cs/9412103 | Total-Order and Partial-Order Planning: A Comparative Analysis | <|reference_start|>Total-Order and Partial-Order Planning: A Comparative Analysis: For many years, the intuitions underlying partial-order planning were largely taken for granted. Only in the past few years has there been renewed interest in the fundamental principles underlying this paradigm. In this paper, we present a rigorous comparative analysis of partial-order and total-order planning by focusing on two specific planners that can be directly compared. We show that there are some subtle assumptions that underlie the wide-spread intuitions regarding the supposed efficiency of partial-order planning. For instance, the superiority of partial-order planning can depend critically upon the search strategy and the structure of the search space. Understanding the underlying assumptions is crucial for constructing efficient planners.<|reference_end|> | arxiv | @article{minton1994total-order,
title={Total-Order and Partial-Order Planning: A Comparative Analysis},
author={S. Minton, J. Bresina, M. Drummond},
journal={Journal of Artificial Intelligence Research, Vol 2, (1994),
227-262},
year={1994},
archivePrefix={arXiv},
eprint={cs/9412103},
primaryClass={cs.AI}
} | minton1994total-order |
arxiv-675923 | cs/9501101 | Solving Multiclass Learning Problems via Error-Correcting Output Codes | <|reference_start|>Solving Multiclass Learning Problems via Error-Correcting Output Codes: Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k ``classes''). The definition is acquired by studying collections of training examples of the form [x_i, f (x_i)]. Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that---like the other methods---the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.<|reference_end|> | arxiv | @article{dietterich1995solving,
title={Solving Multiclass Learning Problems via Error-Correcting Output Codes},
author={T. G. Dietterich, G. Bakiri},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
263-286},
year={1995},
archivePrefix={arXiv},
eprint={cs/9501101},
primaryClass={cs.AI}
} | dietterich1995solving |
arxiv-675924 | cs/9501102 | A Domain-Independent Algorithm for Plan Adaptation | <|reference_start|>A Domain-Independent Algorithm for Plan Adaptation: The paradigms of transformational planning, case-based planning, and plan debugging all involve a process known as plan adaptation - modifying or repairing an old plan so it solves a new problem. In this paper we provide a domain-independent algorithm for plan adaptation, demonstrate that it is sound, complete, and systematic, and compare it to other adaptation algorithms in the literature. Our approach is based on a view of planning as searching a graph of partial plans. Generative planning starts at the graph's root and moves from node to node using plan-refinement operators. In planning by adaptation, a library plan - an arbitrary node in the plan graph - is the starting point for the search, and the plan-adaptation algorithm can apply both the same refinement operators available to a generative planner and can also retract constraints and steps from the plan. Our algorithm's completeness ensures that the adaptation algorithm will eventually search the entire graph and its systematicity ensures that it will do so without redundantly searching any parts of the graph.<|reference_end|> | arxiv | @article{hanks1995a,
title={A Domain-Independent Algorithm for Plan Adaptation},
author={S. Hanks, D. S. Weld},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
319-360},
year={1995},
archivePrefix={arXiv},
eprint={cs/9501102},
primaryClass={cs.AI}
} | hanks1995a |
arxiv-675925 | cs/9501103 | Truncating Temporal Differences: On the Efficient Implementation of TD(lambda) for Reinforcement Learning | <|reference_start|>Truncating Temporal Differences: On the Efficient Implementation of TD(lambda) for Reinforcement Learning: Temporal difference (TD) methods constitute a class of methods for learning predictions in multi-step prediction problems, parameterized by a recency factor lambda. Currently the most important application of these methods is to temporal credit assignment in reinforcement learning. Well known reinforcement learning algorithms, such as AHC or Q-learning, may be viewed as instances of TD learning. This paper examines the issues of the efficient and general implementation of TD(lambda) for arbitrary lambda, for use with reinforcement learning algorithms optimizing the discounted sum of rewards. The traditional approach, based on eligibility traces, is argued to suffer from both inefficiency and lack of generality. The TTD (Truncated Temporal Differences) procedure is proposed as an alternative, that indeed only approximates TD(lambda), but requires very little computation per action and can be used with arbitrary function representation methods. The idea from which it is derived is fairly simple and not new, but probably unexplored so far. Encouraging experimental results are presented, suggesting that using lambda > 0 with the TTD procedure allows one to obtain a significant learning speedup at essentially the same cost as usual TD(0) learning.<|reference_end|> | arxiv | @article{cichosz1995truncating,
title={Truncating Temporal Differences: On the Efficient Implementation of
TD(lambda) for Reinforcement Learning},
author={P. Cichosz},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
287-318},
year={1995},
archivePrefix={arXiv},
eprint={cs/9501103},
primaryClass={cs.AI}
} | cichosz1995truncating |
arxiv-675926 | cs/9503101 | On the Informativeness of the DNA Promoter Sequences Domain Theory | <|reference_start|>On the Informativeness of the DNA Promoter Sequences Domain Theory: The DNA promoter sequences domain theory and database have become popular for testing systems that integrate empirical and analytical learning. This note reports a simple change and reinterpretation of the domain theory in terms of M-of-N concepts, involving no learning, that results in an accuracy of 93.4% on the 106 items of the database. Moreover, an exhaustive search of the space of M-of-N domain theory interpretations indicates that the expected accuracy of a randomly chosen interpretation is 76.5%, and that a maximum accuracy of 97.2% is achieved in 12 cases. This demonstrates the informativeness of the domain theory, without the complications of understanding the interactions between various learning algorithms and the theory. In addition, our results help characterize the difficulty of learning using the DNA promoters theory.<|reference_end|> | arxiv | @article{ortega1995on,
title={On the Informativeness of the DNA Promoter Sequences Domain Theory},
author={J. Ortega},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
361-367},
year={1995},
archivePrefix={arXiv},
eprint={cs/9503101},
primaryClass={cs.AI q-bio}
} | ortega1995on |
arxiv-675927 | cs/9503102 | Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic Decision Tree Induction Algorithm | <|reference_start|>Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic Decision Tree Induction Algorithm: This paper introduces ICET, a new algorithm for cost-sensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for cost-sensitive classification - EG2, CS-ID3, and IDX - and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five real-world medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICET's search in bias space and discovers a way to improve the search.<|reference_end|> | arxiv | @article{turney1995cost-sensitive,
title={Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic
Decision Tree Induction Algorithm},
author={P. D. Turney},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
369-409},
year={1995},
archivePrefix={arXiv},
eprint={cs/9503102},
primaryClass={cs.AI}
} | turney1995cost-sensitive |
arxiv-675928 | cs/9504101 | Rerepresenting and Restructuring Domain Theories: A Constructive Induction Approach | <|reference_start|>Rerepresenting and Restructuring Domain Theories: A Constructive Induction Approach: Theory revision integrates inductive learning and background knowledge by combining training examples with a coarse domain theory to produce a more accurate theory. There are two challenges that theory revision and other theory-guided systems face. First, a representation language appropriate for the initial theory may be inappropriate for an improved theory. While the original representation may concisely express the initial theory, a more accurate theory forced to use that same representation may be bulky, cumbersome, and difficult to reach. Second, a theory structure suitable for a coarse domain theory may be insufficient for a fine-tuned theory. Systems that produce only small, local changes to a theory have limited value for accomplishing complex structural alterations that may be required. Consequently, advanced theory-guided learning systems require flexible representation and flexible structure. An analysis of various theory revision systems and theory-guided learning systems reveals specific strengths and weaknesses in terms of these two desired properties. Designed to capture the underlying qualities of each system, a new system uses theory-guided constructive induction. Experiments in three domains show improvement over previous theory-guided systems. This leads to a study of the behavior, limitations, and potential of theory-guided constructive induction.<|reference_end|> | arxiv | @article{donoho1995rerepresenting,
title={Rerepresenting and Restructuring Domain Theories: A Constructive
Induction Approach},
author={S. K. Donoho, L. A. Rendell},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
411-446},
year={1995},
archivePrefix={arXiv},
eprint={cs/9504101},
primaryClass={cs.AI}
} | donoho1995rerepresenting |
arxiv-675929 | cs/9505101 | Using Pivot Consistency to Decompose and Solve Functional CSPs | <|reference_start|>Using Pivot Consistency to Decompose and Solve Functional CSPs: Many studies have been carried out in order to increase the search efficiency of constraint satisfaction problems; among them, some make use of structural properties of the constraint network; others take into account semantic properties of the constraints, generally assuming that all the constraints possess the given property. In this paper, we propose a new decomposition method benefiting from both semantic properties of functional constraints (not bijective constraints) and structural properties of the network; furthermore, not all the constraints need to be functional. We show that under some conditions, the existence of solutions can be guaranteed. We first characterize a particular subset of the variables, which we name a root set. We then introduce pivot consistency, a new local consistency which is a weak form of path consistency and can be achieved in O(n^2d^2) complexity (instead of O(n^3d^3) for path consistency), and we present associated properties; in particular, we show that any consistent instantiation of the root set can be linearly extended to a solution, which leads to the presentation of the aforementioned new method for solving by decomposing functional CSPs.<|reference_end|> | arxiv | @article{david1995using,
title={Using Pivot Consistency to Decompose and Solve Functional CSPs},
author={P. David},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
447-474},
year={1995},
archivePrefix={arXiv},
eprint={cs/9505101},
primaryClass={cs.AI}
} | david1995using |
arxiv-675930 | cs/9505102 | Adaptive Load Balancing: A Study in Multi-Agent Learning | <|reference_start|>Adaptive Load Balancing: A Study in Multi-Agent Learning: We study the process of multi-agent reinforcement learning in the context of load balancing in a distributed system, without use of either central coordination or explicit communication. We first define a precise framework in which to study adaptive load balancing, important features of which are its stochastic nature and the purely local information available to individual agents. Given this framework, we show illuminating results on the interplay between basic adaptive behavior parameters and their effect on system efficiency. We then investigate the properties of adaptive load balancing in heterogeneous populations, and address the issue of exploration vs. exploitation in that context. Finally, we show that naive use of communication may not improve, and might even harm system efficiency.<|reference_end|> | arxiv | @article{schaerf1995adaptive,
title={Adaptive Load Balancing: A Study in Multi-Agent Learning},
author={A. Schaerf, Y. Shoham, M. Tennenholtz},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
475-500},
year={1995},
archivePrefix={arXiv},
eprint={cs/9505102},
primaryClass={cs.AI}
} | schaerf1995adaptive |
arxiv-675931 | cs/9505103 | Provably Bounded-Optimal Agents | <|reference_start|>Provably Bounded-Optimal Agents: Since its inception, artificial intelligence has relied upon a theoretical foundation centered around perfect rationality as the desired property of intelligent systems. We argue, as others have done, that this foundation is inadequate because it imposes fundamentally unsatisfiable requirements. As a result, there has arisen a wide gap between theory and practice in AI, hindering progress in the field. We propose instead a property called bounded optimality. Roughly speaking, an agent is bounded-optimal if its program is a solution to the constrained optimization problem presented by its architecture and the task environment. We show how to construct agents with this property for a simple class of machine architectures in a broad class of real-time environments. We illustrate these results using a simple model of an automated mail sorting facility. We also define a weaker property, asymptotic bounded optimality (ABO), that generalizes the notion of optimality in classical complexity theory. We then construct universal ABO programs, i.e., programs that are ABO no matter what real-time constraints are applied. Universal ABO programs can be used as building blocks for more complex systems. We conclude with a discussion of the prospects for bounded optimality as a theoretical basis for AI, and relate it to similar trends in philosophy, economics, and game theory.<|reference_end|> | arxiv | @article{russell1995provably,
title={Provably Bounded-Optimal Agents},
author={S. J. Russell, D. Subramanian},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
575-609},
year={1995},
archivePrefix={arXiv},
eprint={cs/9505103},
primaryClass={cs.AI}
} | russell1995provably |
arxiv-675932 | cs/9505104 | Pac-Learning Recursive Logic Programs: Efficient Algorithms | <|reference_start|>Pac-Learning Recursive Logic Programs: Efficient Algorithms: We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional ``basecase'' oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs.<|reference_end|> | arxiv | @article{cohen1995pac-learning,
title={Pac-Learning Recursive Logic Programs: Efficient Algorithms},
author={W. W. Cohen},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
501-539},
year={1995},
archivePrefix={arXiv},
eprint={cs/9505104},
primaryClass={cs.AI}
} | cohen1995pac-learning |
arxiv-675933 | cs/9505105 | Pac-learning Recursive Logic Programs: Negative Results | <|reference_start|>Pac-learning Recursive Logic Programs: Negative Results: In a companion paper it was shown that the class of constant-depth determinate k-ary recursive clauses is efficiently learnable. In this paper we present negative results showing that any natural generalization of this class is hard to learn in Valiant's model of pac-learnability. In particular, we show that the following program classes are cryptographically hard to learn: programs with an unbounded number of constant-depth linear recursive clauses; programs with one constant-depth determinate clause containing an unbounded number of recursive calls; and programs with one linear recursive clause of constant locality. These results immediately imply the non-learnability of any more general class of programs. We also show that learning a constant-depth determinate program with either two linear recursive clauses or one linear recursive clause and one non-recursive clause is as hard as learning boolean DNF. Together with positive results from the companion paper, these negative results establish a boundary of efficient learnability for recursive function-free clauses.<|reference_end|> | arxiv | @article{cohen1995pac-learning,
title={Pac-learning Recursive Logic Programs: Negative Results},
author={W. W. Cohen},
journal={Journal of Artificial Intelligence Research, Vol 2, (1995),
541-573},
year={1995},
archivePrefix={arXiv},
eprint={cs/9505105},
primaryClass={cs.AI}
} | cohen1995pac-learning |
arxiv-675934 | cs/9506101 | FLECS: Planning with a Flexible Commitment Strategy | <|reference_start|>FLECS: Planning with a Flexible Commitment Strategy: There has been evidence that least-commitment planners can efficiently handle planning problems that involve difficult goal interactions. This evidence has led to the common belief that delayed-commitment is the "best" possible planning strategy. However, we recently found evidence that eager-commitment planners can handle a variety of planning problems more efficiently, in particular those with difficult operator choices. Resigned to the futility of trying to find a universally successful planning strategy, we devised a planner that can be used to study which domains and problems are best for which planning strategies. In this article we introduce this new planning algorithm, FLECS, which uses a FLExible Commitment Strategy with respect to plan-step orderings. It is able to use any strategy from delayed-commitment to eager-commitment. The combination of delayed and eager operator-ordering commitments allows FLECS to take advantage of the benefits of explicitly using a simulated execution state and reasoning about planning constraints. FLECS can vary its commitment strategy across different problems and domains, and also during the course of a single planning problem. FLECS represents a novel contribution to planning in that it explicitly provides the choice of which commitment strategy to use while planning. FLECS provides a framework to investigate the mapping from planning domains and problems to efficient planning strategies.<|reference_end|> | arxiv | @article{veloso1995flecs:,
title={FLECS: Planning with a Flexible Commitment Strategy},
author={M. Veloso, P. Stone},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995), 25-52},
year={1995},
archivePrefix={arXiv},
eprint={cs/9506101},
primaryClass={cs.AI}
} | veloso1995flecs: |
arxiv-675935 | cs/9506102 | Induction of First-Order Decision Lists: Results on Learning the Past Tense of English Verbs | <|reference_start|>Induction of First-Order Decision Lists: Results on Learning the Past Tense of English Verbs: This paper presents a method for inducing logic programs from examples that learns a new class of concepts called first-order decision lists, defined as ordered lists of clauses each ending in a cut. The method, called FOIDL, is based on FOIL (Quinlan, 1990) but employs intensional background knowledge and avoids the need for explicit negative examples. It is particularly useful for problems that involve rules with specific exceptions, such as learning the past-tense of English verbs, a task widely studied in the context of the symbolic/connectionist debate. FOIDL is able to learn concise, accurate programs for this problem from significantly fewer examples than previous methods (both connectionist and symbolic).<|reference_end|> | arxiv | @article{mooney1995induction,
title={Induction of First-Order Decision Lists: Results on Learning the Past
Tense of English Verbs},
author={R. J. Mooney, M. E. Califf},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995), 1-24},
year={1995},
archivePrefix={arXiv},
eprint={cs/9506102},
primaryClass={cs.AI}
} | mooney1995induction |
arxiv-675936 | cs/9507101 | Building and Refining Abstract Planning Cases by Change of Representation Language | <|reference_start|>Building and Refining Abstract Planning Cases by Change of Representation Language: Abstraction is one of the most promising approaches to improve the performance of problem solvers. In several domains abstraction by dropping sentences of a domain description -- as used in most hierarchical planners -- has proven useful. In this paper we present examples which illustrate significant drawbacks of abstraction by dropping sentences. To overcome these drawbacks, we propose a more general view of abstraction involving the change of representation language. We have developed a new abstraction methodology and a related sound and complete learning algorithm that allows the complete change of representation language of planning cases from concrete to abstract. However, to achieve a powerful change of the representation language, the abstract language itself as well as rules which describe admissible ways of abstracting states must be provided in the domain model. This new abstraction approach is the core of Paris (Plan Abstraction and Refinement in an Integrated System), a system in which abstract planning cases are automatically learned from given concrete cases. An empirical study in the domain of process planning in mechanical engineering shows significant advantages of the proposed reasoning from abstract cases over classical hierarchical planning.<|reference_end|> | arxiv | @article{bergmann1995building,
title={Building and Refining Abstract Planning Cases by Change of
Representation Language},
author={R. Bergmann, W. Wilke},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995), 53-118},
year={1995},
archivePrefix={arXiv},
eprint={cs/9507101},
primaryClass={cs.AI}
} | bergmann1995building |
arxiv-675937 | cs/9508101 | Using Qualitative Hypotheses to Identify Inaccurate Data | <|reference_start|>Using Qualitative Hypotheses to Identify Inaccurate Data: Identifying inaccurate data has long been regarded as a significant and difficult problem in AI. In this paper, we present a new method for identifying inaccurate data on the basis of qualitative correlations among related data. First, we introduce the definitions of related data and qualitative correlations among related data. Then we put forward a new concept called support coefficient function (SCF). SCF can be used to extract, represent, and calculate qualitative correlations among related data within a dataset. We propose an approach to determining dynamic shift intervals of inaccurate data, and an approach to calculating possibility of identifying inaccurate data, respectively. Both of the approaches are based on SCF. Finally we present an algorithm for identifying inaccurate data by using qualitative correlations among related data as confirmatory or disconfirmatory evidence. We have developed a practical system for interpreting infrared spectra by applying the method, and have fully tested the system against several hundred real spectra. The experimental results show that the method is significantly better than the conventional methods used in many similar systems.<|reference_end|> | arxiv | @article{zhao1995using,
title={Using Qualitative Hypotheses to Identify Inaccurate Data},
author={Q. Zhao, T. Nishida},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
119-145},
year={1995},
archivePrefix={arXiv},
eprint={cs/9508101},
primaryClass={cs.AI}
} | zhao1995using |
arxiv-675938 | cs/9508102 | An Integrated Framework for Learning and Reasoning | <|reference_start|>An Integrated Framework for Learning and Reasoning: Learning and reasoning are both aspects of what is considered to be intelligence. Their studies within AI have been separated historically, learning being the topic of machine learning and neural networks, and reasoning falling under classical (or symbolic) AI. However, learning and reasoning are in many ways interdependent. This paper discusses the nature of some of these interdependencies and proposes a general framework called FLARE, that combines inductive learning using prior knowledge together with reasoning in a propositional setting. Several examples that test the framework are presented, including classical induction, many important reasoning protocols and two simple expert systems.<|reference_end|> | arxiv | @article{giraud-carrier1995an,
title={An Integrated Framework for Learning and Reasoning},
author={C. G. Giraud-Carrier, T. R. Martinez},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
147-185},
year={1995},
archivePrefix={arXiv},
eprint={cs/9508102},
primaryClass={cs.AI}
} | giraud-carrier1995an |
arxiv-675939 | cs/9510101 | Diffusion of Context and Credit Information in Markovian Models | <|reference_start|>Diffusion of Context and Credit Information in Markovian Models: This paper studies the problem of ergodicity of transition probability matrices in Markovian models, such as hidden Markov models (HMMs), and how it makes very difficult the task of learning to represent long-term context for sequential data. This phenomenon hurts the forward propagation of long-term context information, as well as learning a hidden state representation to represent long-term context, which depends on propagating credit information backwards in time. Using results from Markov chain theory, we show that this problem of diffusion of context and credit is reduced when the transition probabilities approach 0 or 1, i.e., the transition probability matrices are sparse and the model essentially deterministic. The results found in this paper apply to learning approaches based on continuous optimization, such as gradient descent and the Baum-Welch algorithm.<|reference_end|> | arxiv | @article{bengio1995diffusion,
title={Diffusion of Context and Credit Information in Markovian Models},
author={Y. Bengio, P. Frasconi},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
249-270},
year={1995},
archivePrefix={arXiv},
eprint={cs/9510101},
primaryClass={cs.AI}
} | bengio1995diffusion |
arxiv-675940 | cs/9510102 | Improving Connectionist Energy Minimization | <|reference_start|>Improving Connectionist Energy Minimization: Symmetric networks designed for energy minimization such as Boltzmann machines and Hopfield nets are frequently investigated for use in optimization, constraint satisfaction and approximation of NP-hard problems. Nevertheless, finding a global solution (i.e., a global minimum for the energy function) is not guaranteed and even a local solution may take an exponential number of steps. We propose an improvement to the standard local activation function used for such networks. The improved algorithm guarantees that a global minimum is found in linear time for tree-like subnetworks. The algorithm, called activate, is uniform and does not assume that the network is tree-like. It can identify tree-like subnetworks even in cyclic topologies (arbitrary networks) and avoid local minima along these trees. For acyclic networks, the algorithm is guaranteed to converge to a global minimum from any initial state of the system (self-stabilization) and remains correct under various types of schedulers. On the negative side, we show that in the presence of cycles, no uniform algorithm exists that guarantees optimality even under a sequential asynchronous scheduler. An asynchronous scheduler can activate only one unit at a time while a synchronous scheduler can activate any number of units in a single time step. In addition, no uniform algorithm exists to optimize even acyclic networks when the scheduler is synchronous. Finally, we show how the algorithm can be improved using the cycle-cutset scheme. The general algorithm, called activate-with-cutset, improves over activate and has some performance guarantees that are related to the size of the network's cycle-cutset.<|reference_end|> | arxiv | @article{pinkas1995improving,
title={Improving Connectionist Energy Minimization},
author={G. Pinkas, R. Dechter},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
223-248},
year={1995},
archivePrefix={arXiv},
eprint={cs/9510102},
primaryClass={cs.AI}
} | pinkas1995improving |
arxiv-675941 | cs/9510103 | Learning Membership Functions in a Function-Based Object Recognition System | <|reference_start|>Learning Membership Functions in a Function-Based Object Recognition System: Functionality-based recognition systems recognize objects at the category level by reasoning about how well the objects support the expected function. Such systems naturally associate a ``measure of goodness'' or ``membership value'' with a recognized object. This measure of goodness is the result of combining individual measures, or membership values, from potentially many primitive evaluations of different properties of the object's shape. A membership function is used to compute the membership value when evaluating a primitive of a particular physical property of an object. In previous versions of a recognition system known as Gruff, the membership function for each of the primitive evaluations was hand-crafted by the system designer. In this paper, we provide a learning component for the Gruff system, called Omlet, that automatically learns membership functions given a set of example objects labeled with their desired category measure. The learning algorithm is generally applicable to any problem in which low-level membership values are combined through an and-or tree structure to give a final overall membership value.<|reference_end|> | arxiv | @article{woods1995learning,
title={Learning Membership Functions in a Function-Based Object Recognition
System},
author={K. Woods, D. Cook, L. Hall, K. Bowyer, L. Stark},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
187-222},
year={1995},
archivePrefix={arXiv},
eprint={cs/9510103},
primaryClass={cs.AI}
} | woods1995learning |
arxiv-675942 | cs/9511101 | Flexibly Instructable Agents | <|reference_start|>Flexibly Instructable Agents: This paper presents an approach to learning from situated, interactive tutorial instruction within an ongoing agent. Tutorial instruction is a flexible (and thus powerful) paradigm for teaching tasks because it allows an instructor to communicate whatever types of knowledge an agent might need in whatever situations might arise. To support this flexibility, however, the agent must be able to learn multiple kinds of knowledge from a broad range of instructional interactions. Our approach, called situated explanation, achieves such learning through a combination of analytic and inductive techniques. It combines a form of explanation-based learning that is situated for each instruction with a full suite of contextually guided responses to incomplete explanations. The approach is implemented in an agent called Instructo-Soar that learns hierarchies of new tasks and other domain knowledge from interactive natural language instructions. Instructo-Soar meets three key requirements of flexible instructability that distinguish it from previous systems: (1) it can take known or unknown commands at any instruction point; (2) it can handle instructions that apply to either its current situation or to a hypothetical situation specified in language (as in, for instance, conditional instructions); and (3) it can learn, from instructions, each class of knowledge it uses to perform tasks.<|reference_end|> | arxiv | @article{huffman1995flexibly,
title={Flexibly Instructable Agents},
author={S. B. Huffman, J. E. Laird},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
271-324},
year={1995},
archivePrefix={arXiv},
eprint={cs/9511101},
primaryClass={cs.AI}
} | huffman1995flexibly |
arxiv-675943 | cs/9511102 | Set Theory for Verification: II Induction and Recursion | <|reference_start|>Set Theory for Verification: II Induction and Recursion: A theory of recursive definitions has been mechanized in Isabelle's Zermelo-Fraenkel (ZF) set theory. The objective is to support the formalization of particular recursive definitions for use in verification, semantics proofs and other computational reasoning. Inductively defined sets are expressed as least fixedpoints, applying the Knaster-Tarski Theorem over a suitable set. Recursive functions are defined by well-founded recursion and its derivatives, such as transfinite recursion. Recursive data structures are expressed by applying the Knaster-Tarski Theorem to a set, such as V[omega], that is closed under Cartesian product and disjoint sum. Worked examples include the transitive closure of a relation, lists, variable-branching trees and mutually recursive trees and forests. The Schr\"oder-Bernstein Theorem and the soundness of propositional logic are proved in Isabelle sessions.<|reference_end|> | arxiv | @article{paulson2000set,
title={Set Theory for Verification: II. Induction and Recursion},
author={Lawrence C. Paulson},
journal={published in Journal of Automated Reasoning 15 (1995),
167-215},
year={2000},
archivePrefix={arXiv},
eprint={cs/9511102},
primaryClass={cs.LO}
} | paulson2000set |
arxiv-675944 | cs/9511103 | A Concrete Final Coalgebra Theorem for ZF Set Theory | <|reference_start|>A Concrete Final Coalgebra Theorem for ZF Set Theory: A special final coalgebra theorem, in the style of Aczel's, is proved within standard Zermelo-Fraenkel set theory. Aczel's Anti-Foundation Axiom is replaced by a variant definition of function that admits non-well-founded constructions. Variant ordered pairs and tuples, of possibly infinite length, are special cases of variant functions. Analogues of Aczel's Solution and Substitution Lemmas are proved in the style of Rutten and Turi. The approach is less general than Aczel's, but the treatment of non-well-founded objects is simple and concrete. The final coalgebra of a functor is its greatest fixedpoint. The theory is intended for machine implementation and a simple case of it is already implemented using the theorem prover Isabelle.<|reference_end|> | arxiv | @article{paulson2001a,
title={A Concrete Final Coalgebra Theorem for ZF Set Theory},
author={Lawrence C. Paulson},
journal={published in P. Dybjer, B. Nordström and J. Smith (editors), Types
for Proofs and Programs '94 (Springer LNCS 996, published 1995), 120-139},
year={2001},
archivePrefix={arXiv},
eprint={cs/9511103},
primaryClass={cs.LO}
} | paulson2001a |
arxiv-675945 | cs/9512101 | OPUS: An Efficient Admissible Algorithm for Unordered Search | <|reference_start|>OPUS: An Efficient Admissible Algorithm for Unordered Search: OPUS is a branch and bound search algorithm that enables efficient admissible search through spaces for which the order of search operator application is not significant. The algorithm's search efficiency is demonstrated with respect to very large machine learning search spaces. The use of admissible search is of potential value to the machine learning community as it means that the exact learning biases to be employed for complex learning tasks can be precisely specified and manipulated. OPUS also has potential for application in other areas of artificial intelligence, notably, truth maintenance.<|reference_end|> | arxiv | @article{webb1995opus:,
title={OPUS: An Efficient Admissible Algorithm for Unordered Search},
author={G. I. Webb},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
431-465},
year={1995},
archivePrefix={arXiv},
eprint={cs/9512101},
primaryClass={cs.AI}
} | webb1995opus: |
arxiv-675946 | cs/9512102 | Vision-Based Road Detection in Automotive Systems: A Real-Time Expectation-Driven Approach | <|reference_start|>Vision-Based Road Detection in Automotive Systems: A Real-Time Expectation-Driven Approach: The main aim of this work is the development of a vision-based road detection system fast enough to cope with the difficult real-time constraints imposed by moving vehicle applications. The hardware platform, a special-purpose massively parallel system, has been chosen to minimize system production and operational costs. This paper presents a novel approach to expectation-driven low-level image segmentation, which can be mapped naturally onto mesh-connected massively parallel SIMD architectures capable of handling hierarchical data structures. The input image is assumed to contain a distorted version of a given template; a multiresolution stretching process is used to reshape the original template in accordance with the acquired image content, minimizing a potential function. The distorted template is the process output.<|reference_end|> | arxiv | @article{broggi1995vision-based,
title={Vision-Based Road Detection in Automotive Systems: A Real-Time
Expectation-Driven Approach},
author={A. Broggi, S. Berte},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
325-348},
year={1995},
archivePrefix={arXiv},
eprint={cs/9512102},
primaryClass={cs.AI}
} | broggi1995vision-based |
arxiv-675947 | cs/9512103 | Generalization of Clauses under Implication | <|reference_start|>Generalization of Clauses under Implication: In the area of inductive learning, generalization is a main operation, and the usual definition of induction is based on logical implication. Recently there has been a rising interest in clausal representation of knowledge in machine learning. Almost all inductive learning systems that perform generalization of clauses use the relation theta-subsumption instead of implication. The main reason is that there is a well-known and simple technique to compute least general generalizations under theta-subsumption, but not under implication. However generalization under theta-subsumption is inappropriate for learning recursive clauses, which is a crucial problem since recursion is the basic program structure of logic programs. We note that implication between clauses is undecidable, and we therefore introduce a stronger form of implication, called T-implication, which is decidable between clauses. We show that for every finite set of clauses there exists a least general generalization under T-implication. We describe a technique to reduce generalizations under implication of a clause to generalizations under theta-subsumption of what we call an expansion of the original clause. Moreover we show that for every non-tautological clause there exists a T-complete expansion, which means that every generalization under T-implication of the clause is reduced to a generalization under theta-subsumption of the expansion.<|reference_end|> | arxiv | @article{idestam-almquist1995generalization,
title={Generalization of Clauses under Implication},
author={P. Idestam-Almquist},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
467-489},
year={1995},
archivePrefix={arXiv},
eprint={cs/9512103},
primaryClass={cs.AI}
} | idestam-almquist1995generalization |
arxiv-675948 | cs/9512104 | Decision-Theoretic Foundations for Causal Reasoning | <|reference_start|>Decision-Theoretic Foundations for Causal Reasoning: We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause and effect. Finally, we show how canonical form facilitates counterfactual reasoning.<|reference_end|> | arxiv | @article{heckerman1995decision-theoretic,
title={Decision-Theoretic Foundations for Causal Reasoning},
author={D. Heckerman, R. Shachter},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
405-430},
year={1995},
archivePrefix={arXiv},
eprint={cs/9512104},
primaryClass={cs.AI}
} | heckerman1995decision-theoretic |
arxiv-675949 | cs/9512105 | Translating between Horn Representations and their Characteristic Models | <|reference_start|>Translating between Horn Representations and their Characteristic Models: Characteristic models are an alternative, model based, representation for Horn expressions. It has been shown that these two representations are incomparable and each has its advantages over the other. It is therefore natural to ask what is the cost of translating, back and forth, between these representations. Interestingly, the same translation questions arise in database theory, where it has applications to the design of relational databases. This paper studies the computational complexity of these problems. Our main result is that the two translation problems are equivalent under polynomial reductions, and that they are equivalent to the corresponding decision problem. Namely, translating is equivalent to deciding whether a given set of models is the set of characteristic models for a given Horn expression. We also relate these problems to the hypergraph transversal problem, a well known problem which is related to other applications in AI and for which no polynomial time algorithm is known. It is shown that in general our translation problems are at least as hard as the hypergraph transversal problem, and in a special case they are equivalent to it.<|reference_end|> | arxiv | @article{khardon1995translating,
title={Translating between Horn Representations and their Characteristic Models},
author={R. Khardon},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
349-372},
year={1995},
archivePrefix={arXiv},
eprint={cs/9512105},
primaryClass={cs.AI}
} | khardon1995translating |
arxiv-675950 | cs/9512106 | Statistical Feature Combination for the Evaluation of Game Positions | <|reference_start|>Statistical Feature Combination for the Evaluation of Game Positions: This article describes an application of three well-known statistical methods in the field of game-tree search: using a large number of classified Othello positions, feature weights for evaluation functions with a game-phase-independent meaning are estimated by means of logistic regression, Fisher's linear discriminant, and the quadratic discriminant function for normally distributed features. Thereafter, the playing strengths are compared by means of tournaments between the resulting versions of a world-class Othello program. In this application, logistic regression - which is used here for the first time in the context of game playing - leads to better results than the other approaches.<|reference_end|> | arxiv | @article{buro1995statistical,
title={Statistical Feature Combination for the Evaluation of Game Positions},
author={M. Buro},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
373-382},
year={1995},
archivePrefix={arXiv},
eprint={cs/9512106},
primaryClass={cs.AI}
} | buro1995statistical |
arxiv-675951 | cs/9512107 | Rule-based Machine Learning Methods for Functional Prediction | <|reference_start|>Rule-based Machine Learning Methods for Functional Prediction: We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance.<|reference_end|> | arxiv | @article{weiss1995rule-based,
title={Rule-based Machine Learning Methods for Functional Prediction},
author={S. M. Weiss, N. Indurkhya},
journal={Journal of Artificial Intelligence Research, Vol 3, (1995),
383-403},
year={1995},
archivePrefix={arXiv},
eprint={cs/9512107},
primaryClass={cs.AI}
} | weiss1995rule-based |
arxiv-675952 | cs/9601101 | The Design and Experimental Analysis of Algorithms for Temporal Reasoning | <|reference_start|>The Design and Experimental Analysis of Algorithms for Temporal Reasoning: Many applications -- from planning and scheduling to problems in molecular biology -- rely heavily on a temporal reasoning component. In this paper, we discuss the design and empirical analysis of algorithms for a temporal reasoning system based on Allen's influential interval-based framework for representing temporal information. At the core of the system are algorithms for determining whether the temporal information is consistent, and, if so, finding one or more scenarios that are consistent with the temporal information. Two important algorithms for these tasks are a path consistency algorithm and a backtracking algorithm. For the path consistency algorithm, we develop techniques that can result in up to a ten-fold speedup over an already highly optimized implementation. For the backtracking algorithm, we develop variable and value ordering heuristics that are shown empirically to dramatically improve the performance of the algorithm. As well, we show that a previously suggested reformulation of the backtracking search problem can reduce the time and space requirements of the backtracking search. Taken together, the techniques we develop allow a temporal reasoning component to solve problems that are of practical size.<|reference_end|> | arxiv | @article{vanbeek1996the,
title={The Design and Experimental Analysis of Algorithms for Temporal
Reasoning},
author={P. vanBeek, D. W. Manchak},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996), 1-18},
year={1996},
archivePrefix={arXiv},
eprint={cs/9601101},
primaryClass={cs.AI}
} | vanbeek1996the |
arxiv-675953 | cs/9602101 | Well-Founded Semantics for Extended Logic Programs with Dynamic Preferences | <|reference_start|>Well-Founded Semantics for Extended Logic Programs with Dynamic Preferences: The paper describes an extension of well-founded semantics for logic programs with two types of negation. In this extension information about preferences between rules can be expressed in the logical language and derived dynamically. This is achieved by using a reserved predicate symbol and a naming technique. Conflicts among rules are resolved whenever possible on the basis of derived preference information. The well-founded conclusions of prioritized logic programs can be computed in polynomial time. A legal reasoning example illustrates the usefulness of the approach.<|reference_end|> | arxiv | @article{brewka1996well-founded,
title={Well-Founded Semantics for Extended Logic Programs with Dynamic
Preferences},
author={G. Brewka},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996), 19-36},
year={1996},
archivePrefix={arXiv},
eprint={cs/9602101},
primaryClass={cs.AI}
} | brewka1996well-founded |
arxiv-675954 | cs/9602102 | Logarithmic-Time Updates and Queries in Probabilistic Networks | <|reference_start|>Logarithmic-Time Updates and Queries in Probabilistic Networks: Traditional databases commonly support efficient query and update procedures that operate in time which is sublinear in the size of the database. Our goal in this paper is to take a first step toward dynamic reasoning in probabilistic databases with comparable efficiency. We propose a dynamic data structure that supports efficient algorithms for updating and querying singly connected Bayesian networks. In the conventional algorithm, new evidence is absorbed in O(1) time and queries are processed in time O(N), where N is the size of the network. We propose an algorithm which, after a preprocessing phase, allows us to answer queries in time O(log N) at the expense of O(log N) time per evidence absorption. The usefulness of sub-linear processing time manifests itself in applications requiring (near) real-time response over large probabilistic databases. We briefly discuss a potential application of dynamic probabilistic reasoning in computational biology.<|reference_end|> | arxiv | @article{delcher1996logarithmic-time,
title={Logarithmic-Time Updates and Queries in Probabilistic Networks},
author={A. L. Delcher, A. J. Grove, S. Kasif, J. Pearl},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996), 37-59},
year={1996},
archivePrefix={arXiv},
eprint={cs/9602102},
primaryClass={cs.AI}
} | delcher1996logarithmic-time |
arxiv-675955 | cs/9603101 | Quantum Computing and Phase Transitions in Combinatorial Search | <|reference_start|>Quantum Computing and Phase Transitions in Combinatorial Search: We introduce an algorithm for combinatorial search on quantum computers that is capable of significantly concentrating amplitude into solutions for some NP search problems, on average. This is done by exploiting the same aspects of problem structure as used by classical backtrack methods to avoid unproductive search choices. This quantum algorithm is much more likely to find solutions than the simple direct use of quantum parallelism. Furthermore, empirical evaluation on small problems shows this quantum algorithm displays the same phase transition behavior, and at the same location, as seen in many previously studied classical search methods. Specifically, difficult problem instances are concentrated near the abrupt change from underconstrained to overconstrained problems.<|reference_end|> | arxiv | @article{hogg1996quantum,
title={Quantum Computing and Phase Transitions in Combinatorial Search},
author={T. Hogg},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996), 91-128},
year={1996},
archivePrefix={arXiv},
eprint={cs/9603101},
primaryClass={cs.AI}
} | hogg1996quantum |
arxiv-675956 | cs/9603102 | Mean Field Theory for Sigmoid Belief Networks | <|reference_start|>Mean Field Theory for Sigmoid Belief Networks: We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition---the classification of handwritten digits.<|reference_end|> | arxiv | @article{saul1996mean,
title={Mean Field Theory for Sigmoid Belief Networks},
author={L. K. Saul, T. Jaakkola, M. I. Jordan},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996), 61-76},
year={1996},
archivePrefix={arXiv},
eprint={cs/9603102},
primaryClass={cs.AI}
} | saul1996mean |
arxiv-675957 | cs/9603103 | Improved Use of Continuous Attributes in C45 | <|reference_start|>Improved Use of Continuous Attributes in C45: A reported weakness of C4.5 in domains with continuous attributes is addressed by modifying the formation and evaluation of tests on continuous attributes. An MDL-inspired penalty is applied to such tests, eliminating some of them from consideration and altering the relative desirability of all tests. Empirical trials show that the modifications lead to smaller decision trees with higher predictive accuracies. Results also confirm that a new version of C4.5 incorporating these changes is superior to recent approaches that use global discretization and that construct small trees with multi-interval splits.<|reference_end|> | arxiv | @article{quinlan1996improved,
title={Improved Use of Continuous Attributes in C4.5},
author={J. R. Quinlan},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996), 77-90},
year={1996},
archivePrefix={arXiv},
eprint={cs/9603103},
primaryClass={cs.AI}
} | quinlan1996improved |
arxiv-675958 | cs/9603104 | Active Learning with Statistical Models | <|reference_start|>Active Learning with Statistical Models: For many types of machine learning algorithms, one can compute the statistically `optimal' way to select training data. In this paper, we review how optimal data selection techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are computationally expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate. Empirically, we observe that the optimality criterion sharply decreases the number of training examples the learner needs in order to achieve good performance.<|reference_end|> | arxiv | @article{cohn1996active,
title={Active Learning with Statistical Models},
author={D. A. Cohn, Z. Ghahramani, M. I. Jordan},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
129-145},
year={1996},
archivePrefix={arXiv},
eprint={cs/9603104},
primaryClass={cs.AI}
} | cohn1996active |
arxiv-675959 | cs/9604101 | A Divergence Critic for Inductive Proof | <|reference_start|>A Divergence Critic for Inductive Proof: Inductive theorem provers often diverge. This paper describes a simple critic, a computer program which monitors the construction of inductive proofs attempting to identify diverging proof attempts. Divergence is recognized by means of a ``difference matching'' procedure. The critic then proposes lemmas and generalizations which ``ripple'' these differences away so that the proof can go through without divergence. The critic enables the theorem prover Spike to prove many theorems completely automatically from the definitions alone.<|reference_end|> | arxiv | @article{walsh1996a,
title={A Divergence Critic for Inductive Proof},
author={T. Walsh},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
209-235},
year={1996},
archivePrefix={arXiv},
eprint={cs/9604101},
primaryClass={cs.AI}
} | walsh1996a |
arxiv-675960 | cs/9604102 | Practical Methods for Proving Termination of General Logic Programs | <|reference_start|>Practical Methods for Proving Termination of General Logic Programs: Termination of logic programs with negated body atoms (here called general logic programs) is an important topic. One reason is that many computational mechanisms used to process negated atoms, like Clark's negation as failure and Chan's constructive negation, are based on termination conditions. This paper introduces a methodology for proving termination of general logic programs w.r.t. the Prolog selection rule. The idea is to distinguish parts of the program depending on whether or not their termination depends on the selection rule. To this end, the notions of low-, weakly up-, and up-acceptable program are introduced. We use these notions to develop a methodology for proving termination of general logic programs, and show how interesting problems in non-monotonic reasoning can be formalized and implemented by means of terminating general logic programs.<|reference_end|> | arxiv | @article{marchiori1996practical,
title={Practical Methods for Proving Termination of General Logic Programs},
author={E. Marchiori},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
179-208},
year={1996},
archivePrefix={arXiv},
eprint={cs/9604102},
primaryClass={cs.AI}
} | marchiori1996practical |
arxiv-675961 | cs/9604103 | Iterative Optimization and Simplification of Hierarchical Clusterings | <|reference_start|>Iterative Optimization and Simplification of Hierarchical Clusterings: Clustering is often used for discovering structure in data. Clustering systems differ in the objective function used to evaluate clustering quality and the control strategy used to search the space of clusterings. Ideally, the search strategy should consistently construct clusterings of high quality, but be computationally inexpensive as well. In general, we cannot have it both ways, but we can partition the search so that a system inexpensively constructs a `tentative' clustering for initial examination, followed by iterative optimization, which continues to search in background for improved clusterings. Given this motivation, we evaluate an inexpensive strategy for creating initial clusterings, coupled with several control strategies for iterative optimization, each of which repeatedly modifies an initial clustering in search of a better one. One of these methods appears novel as an iterative optimization strategy in clustering contexts. Once a clustering has been constructed it is judged by analysts -- often according to task-specific criteria. Several authors have abstracted these criteria and posited a generic performance task akin to pattern completion, where the error rate over completed patterns is used to `externally' judge clustering utility. Given this performance task, we adapt resampling-based pruning strategies used by supervised learning systems to the task of simplifying hierarchical clusterings, thus promising to ease post-clustering analysis. Finally, we propose a number of objective functions, based on attribute-selection measures for decision-tree induction, that might perform well on the error rate and simplicity dimensions.<|reference_end|> | arxiv | @article{fisher1996iterative,
title={Iterative Optimization and Simplification of Hierarchical Clusterings},
author={D. Fisher},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
147-178},
year={1996},
archivePrefix={arXiv},
eprint={cs/9604103},
primaryClass={cs.AI}
} | fisher1996iterative |
arxiv-675962 | cs/9605101 | Further Experimental Evidence against the Utility of Occam's Razor | <|reference_start|>Further Experimental Evidence against the Utility of Occam's Razor: This paper presents new experimental evidence against the utility of Occam's razor. A systematic procedure is presented for post-processing decision trees produced by C4.5. This procedure was derived by rejecting Occam's razor and instead attending to the assumption that similar objects are likely to belong to the same class. It increases a decision tree's complexity without altering the performance of that tree on the training data from which it is inferred. The resulting more complex decision trees are demonstrated to have, on average, for a variety of common learning tasks, higher predictive accuracy than the less complex original decision trees. This result raises considerable doubt about the utility of Occam's razor as it is commonly applied in modern machine learning.<|reference_end|> | arxiv | @article{webb1996further,
title={Further Experimental Evidence against the Utility of Occam's Razor},
author={G. I. Webb},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
397-417},
year={1996},
archivePrefix={arXiv},
eprint={cs/9605101},
primaryClass={cs.AI}
} | webb1996further |
arxiv-675963 | cs/9605102 | Least Generalizations and Greatest Specializations of Sets of Clauses | <|reference_start|>Least Generalizations and Greatest Specializations of Sets of Clauses: The main operations in Inductive Logic Programming (ILP) are generalization and specialization, which only make sense in a generality order. In ILP, the three most important generality orders are subsumption, implication and implication relative to background knowledge. The two languages used most often are languages of clauses and languages of only Horn clauses. This gives a total of six different ordered languages. In this paper, we give a systematic treatment of the existence or non-existence of least generalizations and greatest specializations of finite sets of clauses in each of these six ordered sets. We survey results already obtained by others and also contribute some answers of our own. Our main new results are, firstly, the existence of a computable least generalization under implication of every finite set of clauses containing at least one non-tautologous function-free clause (among other, not necessarily function-free clauses). Secondly, we show that such a least generalization need not exist under relative implication, not even if both the set that is to be generalized and the background knowledge are function-free. Thirdly, we give a complete discussion of existence and non-existence of greatest specializations in each of the six ordered languages.<|reference_end|> | arxiv | @article{nienhuys-cheng1996least,
title={Least Generalizations and Greatest Specializations of Sets of Clauses},
author={S. H. Nienhuys-Cheng, R. deWolf},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
341-363},
year={1996},
archivePrefix={arXiv},
eprint={cs/9605102},
primaryClass={cs.AI}
} | nienhuys-cheng1996least |
arxiv-675964 | cs/9605103 | Reinforcement Learning: A Survey | <|reference_start|>Reinforcement Learning: A Survey: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.<|reference_end|> | arxiv | @article{kaelbling1996reinforcement,
title={Reinforcement Learning: A Survey},
author={L. P. Kaelbling, M. L. Littman, A. W. Moore},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
237-285},
year={1996},
archivePrefix={arXiv},
eprint={cs/9605103},
primaryClass={cs.AI}
} | kaelbling1996reinforcement |
arxiv-675965 | cs/9605104 | Adaptive Problem-solving for Large-scale Scheduling Problems: A Case Study | <|reference_start|>Adaptive Problem-solving for Large-scale Scheduling Problems: A Case Study: Although most scheduling problems are NP-hard, domain specific techniques perform well in practice but are quite expensive to construct. In adaptive problem-solving, domain specific knowledge is acquired automatically for a general problem solver with a flexible control architecture. In this approach, a learning system explores a space of possible heuristic methods for one well-suited to the eccentricities of the given domain and problem distribution. In this article, we discuss an application of the approach to scheduling satellite communications. Using problem distributions based on actual mission requirements, our approach identifies strategies that not only decrease the amount of CPU time required to produce schedules, but also increase the percentage of problems that are solvable within computational resource limitations.<|reference_end|> | arxiv | @article{gratch1996adaptive,
title={Adaptive Problem-solving for Large-scale Scheduling Problems: A Case
Study},
author={J. Gratch, S. Chien},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
365-396},
year={1996},
archivePrefix={arXiv},
eprint={cs/9605104},
primaryClass={cs.AI}
} | gratch1996adaptive |
arxiv-675966 | cs/9605105 | A Formal Framework for Speedup Learning from Problems and Solutions | <|reference_start|>A Formal Framework for Speedup Learning from Problems and Solutions: Speedup learning seeks to improve the computational efficiency of problem solving with experience. In this paper, we develop a formal framework for learning efficient problem solving from random problems and their solutions. We apply this framework to two different representations of learned knowledge, namely control rules and macro-operators, and prove theorems that identify sufficient conditions for learning in each representation. Our proofs are constructive in that they are accompanied with learning algorithms. Our framework captures both empirical and explanation-based speedup learning in a unified fashion. We illustrate our framework with implementations in two domains: symbolic integration and Eight Puzzle. This work integrates many strands of experimental and theoretical work in machine learning, including empirical learning of control rules, macro-operator learning, Explanation-Based Learning (EBL), and Probably Approximately Correct (PAC) Learning.<|reference_end|> | arxiv | @article{tadepalli1996a,
title={A Formal Framework for Speedup Learning from Problems and Solutions},
author={P. Tadepalli, B. K. Natarajan},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
445-475},
year={1996},
archivePrefix={arXiv},
eprint={cs/9605105},
primaryClass={cs.AI}
} | tadepalli1996a |
arxiv-675967 | cs/9605106 | Planning for Contingencies: A Decision-based Approach | <|reference_start|>Planning for Contingencies: A Decision-based Approach: A fundamental assumption made by classical AI planners is that there is no uncertainty in the world: the planner has full knowledge of the conditions under which the plan will be executed and the outcome of every action is fully predictable. These planners cannot therefore construct contingency plans, i.e., plans in which different actions are performed in different circumstances. In this paper we discuss some issues that arise in the representation and construction of contingency plans and describe Cassandra, a partial-order contingency planner. Cassandra uses explicit decision-steps that enable the agent executing the plan to decide which plan branch to follow. The decision-steps in a plan result in subgoals to acquire knowledge, which are planned for in the same way as any other subgoals. Cassandra thus distinguishes the process of gathering information from the process of making decisions. The explicit representation of decisions in Cassandra allows a coherent approach to the problems of contingent planning, and provides a solid base for extensions such as the use of different decision-making procedures.<|reference_end|> | arxiv | @article{pryor19962planning,
title={Planning for Contingencies: A Decision-based Approach},
author={L. Pryor, G. Collins},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
287-339},
year={1996},
archivePrefix={arXiv},
eprint={cs/9605106},
primaryClass={cs.AI}
} | pryor19962planning |
arxiv-675968 | cs/9606101 | A Principled Approach Towards Symbolic Geometric Constraint Satisfaction | <|reference_start|>A Principled Approach Towards Symbolic Geometric Constraint Satisfaction: An important problem in geometric reasoning is to find the configuration of a collection of geometric bodies so as to satisfy a set of given constraints. Recently, it has been suggested that this problem can be solved efficiently by symbolically reasoning about geometry. This approach, called degrees of freedom analysis, employs a set of specialized routines called plan fragments that specify how to change the configuration of a set of bodies to satisfy a new constraint while preserving existing constraints. A potential drawback, which limits the scalability of this approach, is concerned with the difficulty of writing plan fragments. In this paper we address this limitation by showing how these plan fragments can be automatically synthesized using first principles about geometric bodies, actions, and topology.<|reference_end|> | arxiv | @article{bhansali1996a,
title={A Principled Approach Towards Symbolic Geometric Constraint Satisfaction},
author={S. Bhansali, G. A. Kramer, T. J. Hoar},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
419-443},
year={1996},
archivePrefix={arXiv},
eprint={cs/9606101},
primaryClass={cs.AI}
} | bhansali1996a |
arxiv-675969 | cs/9606102 | On Partially Controlled Multi-Agent Systems | <|reference_start|>On Partially Controlled Multi-Agent Systems: Motivated by the control theoretic distinction between controllable and uncontrollable events, we distinguish between two types of agents within a multi-agent system: controllable agents, which are directly controlled by the system's designer, and uncontrollable agents, which are not under the designer's direct control. We refer to such systems as partially controlled multi-agent systems, and we investigate how one might influence the behavior of the uncontrolled agents through appropriate design of the controlled agents. In particular, we wish to understand which problems are naturally described in these terms, what methods can be applied to influence the uncontrollable agents, the effectiveness of such methods, and whether similar methods work across different domains. Using a game-theoretic framework, this paper studies the design of partially controlled multi-agent systems in two contexts: in one context, the uncontrollable agents are expected utility maximizers, while in the other they are reinforcement learners. We suggest different techniques for controlling agents' behavior in each domain, assess their success, and examine their relationship.<|reference_end|> | arxiv | @article{brafman1996on,
title={On Partially Controlled Multi-Agent Systems},
author={R. I. Brafman, M. Tennenholtz},
journal={Journal of Artificial Intelligence Research, Vol 4, (1996),
477-507},
year={1996},
archivePrefix={arXiv},
eprint={cs/9606102},
primaryClass={cs.AI}
} | brafman1996on |
arxiv-675970 | cs/9608103 | Spatial Aggregation: Theory and Applications | <|reference_start|>Spatial Aggregation: Theory and Applications: Visual thinking plays an important role in scientific reasoning. Based on the research in automating diverse reasoning tasks about dynamical systems, nonlinear controllers, kinematic mechanisms, and fluid motion, we have identified a style of visual thinking, imagistic reasoning. Imagistic reasoning organizes computations around image-like, analogue representations so that perceptual and symbolic operations can be brought to bear to infer structure and behavior. Programs incorporating imagistic reasoning have been shown to perform at an expert level in domains that defy current analytic or numerical methods. We have developed a computational paradigm, spatial aggregation, to unify the description of a class of imagistic problem solvers. A program written in this paradigm has the following properties. It takes a continuous field and optional objective functions as input, and produces high-level descriptions of structure, behavior, or control actions. It computes a multi-layer of intermediate representations, called spatial aggregates, by forming equivalence classes and adjacency relations. It employs a small set of generic operators such as aggregation, classification, and localization to perform bidirectional mapping between the information-rich field and successively more abstract spatial aggregates. It uses a data structure, the neighborhood graph, as a common interface to modularize computations. To illustrate our theory, we describe the computational structure of three implemented problem solvers -- KAM, MAPS, and HIPAIR --- in terms of the spatial aggregation generic operators by mixing and matching a library of commonly used routines.<|reference_end|> | arxiv | @article{yip1996spatial,
title={Spatial Aggregation: Theory and Applications},
author={K. Yip, F. Zhao},
journal={Journal of Artificial Intelligence Research, Vol 5, (1996), 1-26},
year={1996},
archivePrefix={arXiv},
eprint={cs/9608103},
primaryClass={cs.AI}
} | yip1996spatial |
arxiv-675971 | cs/9608104 | A Hierarchy of Tractable Subsets for Computing Stable Models | <|reference_start|>A Hierarchy of Tractable Subsets for Computing Stable Models: Finding the stable models of a knowledge base is a significant computational problem in artificial intelligence. This task is at the computational heart of truth maintenance systems, autoepistemic logic, and default logic. Unfortunately, it is NP-hard. In this paper we present a hierarchy of classes of knowledge bases, Omega_1,Omega_2,..., with the following properties: first, Omega_1 is the class of all stratified knowledge bases; second, if a knowledge base Pi is in Omega_k, then Pi has at most k stable models, and all of them may be found in time O(lnk), where l is the length of the knowledge base and n the number of atoms in Pi; third, for an arbitrary knowledge base Pi, we can find the minimum k such that Pi belongs to Omega_k in time polynomial in the size of Pi; and, last, where K is the class of all knowledge bases, it is the case that union{i=1 to infty} Omega_i = K, that is, every knowledge base belongs to some class in the hierarchy.<|reference_end|> | arxiv | @article{ben-eliyahu1996a,
title={A Hierarchy of Tractable Subsets for Computing Stable Models},
author={R. Ben-Eliyahu},
journal={Journal of Artificial Intelligence Research, Vol 5, (1996), 27-52},
year={1996},
archivePrefix={arXiv},
eprint={cs/9608104},
primaryClass={cs.AI}
} | ben-eliyahu1996a |
arxiv-675972 | cs/9608105 | Shellsort with three increments | <|reference_start|>Shellsort with three increments: A perturbation technique can be used to simplify and sharpen A. C. Yao's theorems about the behavior of shellsort with increments $(h,g,1)$. In particular, when $h=\Theta(n^{7/15})$ and $g=\Theta(h^{1/5})$, the average running time is $O(n^{23/15})$. The proof involves interesting properties of the inversions in random permutations that have been $h$-sorted and $g$-sorted.<|reference_end|> | arxiv | @article{janson1996shellsort,
title={Shellsort with three increments},
author={Svante Janson and Donald E. Knuth},
journal={Random Structures Algorithms 10 (1997), no. 1-2, 125--142},
year={1996},
number={Knuth migration 11/2004},
archivePrefix={arXiv},
eprint={cs/9608105},
primaryClass={cs.DS}
} | janson1996shellsort |
arxiv-675973 | cs/9609101 | Accelerating Partial-Order Planners: Some Techniques for Effective Search Control and Pruning | <|reference_start|>Accelerating Partial-Order Planners: Some Techniques for Effective Search Control and Pruning: We propose some domain-independent techniques for bringing well-founded partial-order planners closer to practicality. The first two techniques are aimed at improving search control while keeping overhead costs low. One is based on a simple adjustment to the default A* heuristic used by UCPOP to select plans for refinement. The other is based on preferring ``zero commitment'' (forced) plan refinements whenever possible, and using LIFO prioritization otherwise. A more radical technique is the use of operator parameter domains to prune search. These domains are initially computed from the definitions of the operators and the initial and goal conditions, using a polynomial-time algorithm that propagates sets of constants through the operator graph, starting in the initial conditions. During planning, parameter domains can be used to prune nonviable operator instances and to remove spurious clobbering threats. In experiments based on modifications of UCPOP, our improved plan and goal selection strategies gave speedups by factors ranging from 5 to more than 1000 for a variety of problems that are nontrivial for the unmodified version. Crucially, the hardest problems gave the greatest improvements. The pruning technique based on parameter domains often gave speedups by an order of magnitude or more for difficult problems, both with the default UCPOP search strategy and with our improved strategy. The Lisp code for our techniques and for the test problems is provided in on-line appendices.<|reference_end|> | arxiv | @article{gerevini1996accelerating,
title={Accelerating Partial-Order Planners: Some Techniques for Effective
Search Control and Pruning},
author={A. Gerevini, L. Schubert},
journal={Journal of Artificial Intelligence Research, Vol 5, (1996), 95-137},
year={1996},
archivePrefix={arXiv},
eprint={cs/9609101},
primaryClass={cs.AI}
} | gerevini1996accelerating |
arxiv-675974 | cs/9609102 | Cue Phrase Classification Using Machine Learning | <|reference_start|>Cue Phrase Classification Using Machine Learning: Cue phrases may be used in a discourse sense to explicitly signal discourse structure, but also in a sentential sense to convey semantic rather than structural information. Correctly classifying cue phrases as discourse or sentential is critical in natural language processing systems that exploit discourse structure, e.g., for performing tasks such as anaphora resolution and plan recognition. This paper explores the use of machine learning for classifying cue phrases as discourse or sentential. Two machine learning programs (Cgrendel and C4.5) are used to induce classification models from sets of pre-classified cue phrases and their features in text and speech. Machine learning is shown to be an effective technique for not only automating the generation of classification models, but also for improving upon previous results. When compared to manually derived classification models already in the literature, the learned models often perform with higher accuracy and contain new linguistic insights into the data. In addition, the ability to automatically construct classification models makes it easier to comparatively analyze the utility of alternative feature representations of the data. Finally, the ease of retraining makes the learning approach more scalable and flexible than manual methods.<|reference_end|> | arxiv | @article{litman1996cue,
title={Cue Phrase Classification Using Machine Learning},
author={D. J. Litman},
journal={Journal of Artificial Intelligence Research, Vol 5, (1996), 53-94},
year={1996},
archivePrefix={arXiv},
eprint={cs/9609102},
primaryClass={cs.AI}
} | litman1996cue |
arxiv-675975 | cs/9610101 | Mechanisms for Automated Negotiation in State Oriented Domains | <|reference_start|>Mechanisms for Automated Negotiation in State Oriented Domains: This paper lays part of the groundwork for a domain theory of negotiation, that is, a way of classifying interactions so that it is clear, given a domain, which negotiation mechanisms and strategies are appropriate. We define State Oriented Domains, a general category of interaction. Necessary and sufficient conditions for cooperation are outlined. We use the notion of worth in an altered definition of utility, thus enabling agreements in a wider class of joint-goal reachable situations. An approach is offered for conflict resolution, and it is shown that even in a conflict situation, partial cooperative steps can be taken by interacting agents (that is, agents in fundamental conflict might still agree to cooperate up to a certain point). A Unified Negotiation Protocol (UNP) is developed that can be used in all types of encounters. It is shown that in certain borderline cooperative situations, a partial cooperative agreement (i.e., one that does not achieve all agents' goals) might be preferred by all agents, even though there exists a rational agreement that would achieve all their goals. Finally, we analyze cases where agents have incomplete information on the goals and worth of other agents. First we consider the case where agents' goals are private information, and we analyze what goal declaration strategies the agents might adopt to increase their utility. Then, we consider the situation where the agents' goals (and therefore stand-alone costs) are common knowledge, but the worth they attach to their goals is private information. We introduce two mechanisms, one 'strict', the other 'tolerant', and analyze their affects on the stability and efficiency of negotiation outcomes.<|reference_end|> | arxiv | @article{zlotkin1996mechanisms,
title={Mechanisms for Automated Negotiation in State Oriented Domains},
author={G. Zlotkin, J. S. Rosenschein},
journal={Journal of Artificial Intelligence Research, Vol 5, (1996),
163-238},
year={1996},
archivePrefix={arXiv},
eprint={cs/9610101},
primaryClass={cs.AI}
} | zlotkin1996mechanisms |
arxiv-675976 | cs/9610102 | Learning First-Order Definitions of Functions | <|reference_start|>Learning First-Order Definitions of Functions: First-order learning involves finding a clause-form definition of a relation from examples of the relation and relevant background information. In this paper, a particular first-order learning system is modified to customize it for finding definitions of functional relations. This restriction leads to faster learning times and, in some cases, to definitions that have higher predictive accuracy. Other first-order learning systems might benefit from similar specialization.<|reference_end|> | arxiv | @article{quinlan1996learning,
title={Learning First-Order Definitions of Functions},
author={J. R. Quinlan},
journal={Journal of Artificial Intelligence Research, Vol 5, (1996),
139-161},
year={1996},
archivePrefix={arXiv},
eprint={cs/9610102},
primaryClass={cs.AI}
} | quinlan1996learning |
arxiv-675977 | cs/9611101 | MUSE CSP: An Extension to the Constraint Satisfaction Problem | <|reference_start|>MUSE CSP: An Extension to the Constraint Satisfaction Problem: This paper describes an extension to the constraint satisfaction problem (CSP) called MUSE CSP (MUltiply SEgmented Constraint Satisfaction Problem). This extension is especially useful for those problems which segment into multiple sets of partially shared variables. Such problems arise naturally in signal processing applications including computer vision, speech processing, and handwriting recognition. For these applications, it is often difficult to segment the data in only one way given the low-level information utilized by the segmentation algorithms. MUSE CSP can be used to compactly represent several similar instances of the constraint satisfaction problem. If multiple instances of a CSP have some common variables which have the same domains and constraints, then they can be combined into a single instance of a MUSE CSP, reducing the work required to apply the constraints. We introduce the concepts of MUSE node consistency, MUSE arc consistency, and MUSE path consistency. We then demonstrate how MUSE CSP can be used to compactly represent lexically ambiguous sentences and the multiple sentence hypotheses that are often generated by speech recognition algorithms so that grammar constraints can be used to provide parses for all syntactically correct sentences. Algorithms for MUSE arc and path consistency are provided. Finally, we discuss how to create a MUSE CSP from a set of CSPs which are labeled to indicate when the same variable is shared by more than a single CSP.<|reference_end|> | arxiv | @article{helzerman1996muse,
title={MUSE CSP: An Extension to the Constraint Satisfaction Problem},
  author={R. A. Helzerman, M. P. Harper},
journal={Journal of Artificial Intelligence Research, Vol 5, (1996),
239-288},
year={1996},
archivePrefix={arXiv},
eprint={cs/9611101},
primaryClass={cs.AI}
} | helzerman1996muse |
arxiv-675978 | cs/9612101 | Exploiting Causal Independence in Bayesian Network Inference | <|reference_start|>Exploiting Causal Independence in Bayesian Network Inference: A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as ``or'', ``sum'' or ``max'', on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.<|reference_end|> | arxiv | @article{zhang1996exploiting,
title={Exploiting Causal Independence in Bayesian Network Inference},
author={N. L. Zhang, D. Poole},
journal={Journal of Artificial Intelligence Research, Vol 5, (1996),
301-328},
year={1996},
archivePrefix={arXiv},
eprint={cs/9612101},
primaryClass={cs.AI}
} | zhang1996exploiting |
arxiv-675979 | cs/9612102 | Quantitative Results Comparing Three Intelligent Interfaces for Information Capture: A Case Study Adding Name Information into an Electronic Personal Organizer | <|reference_start|>Quantitative Results Comparing Three Intelligent Interfaces for Information Capture: A Case Study Adding Name Information into an Electronic Personal Organizer: Efficiently entering information into a computer is key to enjoying the benefits of computing. This paper describes three intelligent user interfaces: handwriting recognition, adaptive menus, and predictive fillin. In the context of adding a person's name and address to an electronic organizer, tests show handwriting recognition is slower than typing on an on-screen, soft keyboard, while adaptive menus and predictive fillin can be twice as fast. This paper also presents strategies for applying these three interfaces to other information collection domains.<|reference_end|> | arxiv | @article{schlimmer1996quantitative,
title={Quantitative Results Comparing Three Intelligent Interfaces for
Information Capture: A Case Study Adding Name Information into an Electronic
Personal Organizer},
author={J. C. Schlimmer, P. C. Wells},
journal={Journal of Artificial Intelligence Research, Vol 5, (1996),
329-349},
year={1996},
archivePrefix={arXiv},
eprint={cs/9612102},
primaryClass={cs.AI}
} | schlimmer1996quantitative |
arxiv-675980 | cs/9612103 | Characterizations of Decomposable Dependency Models | <|reference_start|>Characterizations of Decomposable Dependency Models: Decomposable dependency models possess a number of interesting and useful properties. This paper presents new characterizations of decomposable models in terms of independence relationships, which are obtained by adding a single axiom to the well-known set characterizing dependency models that are isomorphic to undirected graphs. We also briefly discuss a potential application of our results to the problem of learning graphical models from data.<|reference_end|> | arxiv | @article{decampos1996characterizations,
title={Characterizations of Decomposable Dependency Models},
author={L. M. deCampos},
journal={Journal of Artificial Intelligence Research, Vol 5, (1996),
289-300},
year={1996},
archivePrefix={arXiv},
eprint={cs/9612103},
primaryClass={cs.AI}
} | decampos1996characterizations |
arxiv-675981 | cs/9612104 | Mechanizing Set Theory: Cardinal Arithmetic and the Axiom of Choice | <|reference_start|>Mechanizing Set Theory: Cardinal Arithmetic and the Axiom of Choice: Fairly deep results of Zermelo-Fraenkel (ZF) set theory have been mechanized using the proof assistant Isabelle. The results concern cardinal arithmetic and the Axiom of Choice (AC). A key result about cardinal multiplication is K*K = K, where K is any infinite cardinal. Proving this result required developing theories of orders, order-isomorphisms, order types, ordinal arithmetic, cardinals, etc.; this covers most of Kunen, Set Theory, Chapter I. Furthermore, we have proved the equivalence of 7 formulations of the Well-ordering Theorem and 20 formulations of AC; this covers the first two chapters of Rubin and Rubin, Equivalents of the Axiom of Choice, and involves highly technical material. The definitions used in the proofs are largely faithful in style to the original mathematics.<|reference_end|> | arxiv | @article{paulson2001mechanizing,
  title={Mechanizing Set Theory: Cardinal Arithmetic and the Axiom of Choice},
author={Lawrence C. Paulson and Krzysztof Grabczewski},
journal={Journal of Automated Reasoning 17 (1996), 291-323},
year={2001},
archivePrefix={arXiv},
eprint={cs/9612104},
primaryClass={cs.LO}
} | paulson2001mechanizing |
arxiv-675982 | cs/9701101 | Improved Heterogeneous Distance Functions | <|reference_start|>Improved Heterogeneous Distance Functions: Instance-based learning techniques typically handle continuous and linear input values well, but often do not handle nominal input attributes appropriately. The Value Difference Metric (VDM) was designed to find reasonable distance values between nominal attribute values, but it largely ignores continuous attributes, requiring discretization to map continuous values into nominal values. This paper proposes three new heterogeneous distance functions, called the Heterogeneous Value Difference Metric (HVDM), the Interpolated Value Difference Metric (IVDM), and the Windowed Value Difference Metric (WVDM). These new distance functions are designed to handle applications with nominal attributes, continuous attributes, or both. In experiments on 48 applications the new distance metrics achieve higher classification accuracy on average than three previous distance functions on those datasets that have both nominal and continuous attributes.<|reference_end|> | arxiv | @article{wilson1997improved,
title={Improved Heterogeneous Distance Functions},
author={D. R. Wilson, T. R. Martinez},
journal={Journal of Artificial Intelligence Research, Vol 6, (1997), 1-34},
year={1997},
archivePrefix={arXiv},
eprint={cs/9701101},
primaryClass={cs.AI}
} | wilson1997improved |
arxiv-675983 | cs/9701102 | SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks | <|reference_start|>SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks: Previous approaches of analyzing spontaneously spoken language often have been based on encoding syntactic and semantic knowledge manually and symbolically. While there has been some progress using statistical or connectionist language models, many current spoken- language systems still use a relatively brittle, hand-coded symbolic grammar or symbolic semantic component. In contrast, we describe a so-called screening approach for learning robust processing of spontaneously spoken language. A screening approach is a flat analysis which uses shallow sequences of category representations for analyzing an utterance at various syntactic, semantic and dialog levels. Rather than using a deeply structured symbolic analysis, we use a flat connectionist analysis. This screening approach aims at supporting speech and language processing by using (1) data-driven learning and (2) robustness of connectionist networks. In order to test this approach, we have developed the SCREEN system which is based on this new robust, learned and flat analysis. In this paper, we focus on a detailed description of SCREEN's architecture, the flat syntactic and semantic analysis, the interaction with a speech recognizer, and a detailed evaluation analysis of the robustness under the influence of noisy or incomplete input. The main result of this paper is that flat representations allow more robust processing of spontaneous spoken language than deeply structured representations. In particular, we show how the fault-tolerance and learning capability of connectionist networks can support a flat analysis for providing more robust spoken-language processing within an overall hybrid symbolic/connectionist framework.<|reference_end|> | arxiv | @article{wermter1997screen:,
title={SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis
Using Artificial Neural Networks},
author={S. Wermter, V. Weber},
journal={Journal of Artificial Intelligence Research, Vol 6, (1997), 35-85},
year={1997},
archivePrefix={arXiv},
eprint={cs/9701102},
primaryClass={cs.AI}
} | wermter1997screen: |
arxiv-675984 | cs/9703101 | A Uniform Framework for Concept Definitions in Description Logics | <|reference_start|>A Uniform Framework for Concept Definitions in Description Logics: Most modern formalisms used in Databases and Artificial Intelligence for describing an application domain are based on the notions of class (or concept) and relationship among classes. One interesting feature of such formalisms is the possibility of defining a class, i.e., providing a set of properties that precisely characterize the instances of the class. Many recent articles point out that there are several ways of assigning a meaning to a class definition containing some sort of recursion. In this paper, we argue that, instead of choosing a single style of semantics, we achieve better results by adopting a formalism that allows for different semantics to coexist. We demonstrate the feasibility of our argument, by presenting a knowledge representation formalism, the description logic muALCQ, with the above characteristics. In addition to the constructs for conjunction, disjunction, negation, quantifiers, and qualified number restrictions, muALCQ includes special fixpoint constructs to express (suitably interpreted) recursive definitions. These constructs enable the usual frame-based descriptions to be combined with definitions of recursive data structures such as directed acyclic graphs, lists, streams, etc. We establish several properties of muALCQ, including the decidability and the computational complexity of reasoning, by formulating a correspondence with a particular modal logic of programs called the modal mu-calculus.<|reference_end|> | arxiv | @article{degiacomo1997a,
title={A Uniform Framework for Concept Definitions in Description Logics},
author={G. DeGiacomo, M. Lenzerini},
journal={Journal of Artificial Intelligence Research, Vol 6, (1997), 87-110},
year={1997},
archivePrefix={arXiv},
eprint={cs/9703101},
primaryClass={cs.AI}
} | degiacomo1997a |
arxiv-675985 | cs/9704101 | Lifeworld Analysis | <|reference_start|>Lifeworld Analysis: We argue that the analysis of agent/environment interactions should be extended to include the conventions and invariants maintained by agents throughout their activity. We refer to this thicker notion of environment as a lifeworld and present a partial set of formal tools for describing structures of lifeworlds and the ways in which they computationally simplify activity. As one specific example, we apply the tools to the analysis of the Toast system and show how versions of the system with very different control structures in fact implement a common control structure together with different conventions for encoding task state in the positions or states of objects in the environment.<|reference_end|> | arxiv | @article{agre1997lifeworld,
title={Lifeworld Analysis},
author={P. Agre, I. Horswill},
journal={Journal of Artificial Intelligence Research, Vol 6, (1997),
111-145},
year={1997},
archivePrefix={arXiv},
eprint={cs/9704101},
primaryClass={cs.AI}
} | agre1997lifeworld |
arxiv-675986 | cs/9705101 | Query DAGs: A Practical Paradigm for Implementing Belief-Network Inference | <|reference_start|>Query DAGs: A Practical Paradigm for Implementing Belief-Network Inference: We describe a new paradigm for implementing inference in belief networks, which consists of two steps: (1) compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG); and (2) answering queries using a simple evaluation algorithm. Each node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the standard algorithms for exact inference in belief networks (we show how they can be generated using clustering and conditioning algorithms). The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based. The complexity of a Q-DAG evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The intended value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on different software and hardware platforms due to the simplicity of the Q-DAG evaluation algorithm. Interestingly enough, Q-DAGs were found to serve other purposes: simple techniques for reducing Q-DAGs tend to subsume relatively complex optimization techniques for belief-network inference, such as network-pruning and computation-caching.<|reference_end|> | arxiv | @article{darwiche1997query,
title={Query DAGs: A Practical Paradigm for Implementing Belief-Network
Inference},
author={A. Darwiche, G. Provan},
journal={Journal of Artificial Intelligence Research, Vol 6, (1997),
147-176},
year={1997},
archivePrefix={arXiv},
eprint={cs/9705101},
primaryClass={cs.AI}
} | darwiche1997query |
arxiv-675987 | cs/9705102 | Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies | <|reference_start|>Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies: An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven to be effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness occurs because they lack the ability to refine the topology of the neural networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the REGENT algorithm which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm is able to significantly increase generalization compared to a standard connectionist theory-refinement system, as well as our previous algorithm for growing knowledge-based networks.<|reference_end|> | arxiv | @article{opitz1997connectionist,
title={Connectionist Theory Refinement: Genetically Searching the Space of
Network Topologies},
author={D. W. Opitz, J. W. Shavlik},
journal={Journal of Artificial Intelligence Research, Vol 6, (1997),
177-209},
year={1997},
archivePrefix={arXiv},
eprint={cs/9705102},
primaryClass={cs.AI}
} | opitz1997connectionist |
arxiv-675988 | cs/9706101 | Flaw Selection Strategies for Partial-Order Planning | <|reference_start|>Flaw Selection Strategies for Partial-Order Planning: Several recent studies have compared the relative efficiency of alternative flaw selection strategies for partial-order causal link (POCL) planning. We review this literature, and present new experimental results that generalize the earlier work and explain some of the discrepancies in it. In particular, we describe the Least-Cost Flaw Repair (LCFR) strategy developed and analyzed by Joslin and Pollack (1994), and compare it with other strategies, including Gerevini and Schubert's (1996) ZLIFO strategy. LCFR and ZLIFO make very different, and apparently conflicting claims about the most effective way to reduce search-space size in POCL planning. We resolve this conflict, arguing that much of the benefit that Gerevini and Schubert ascribe to the LIFO component of their ZLIFO strategy is better attributed to other causes. We show that for many problems, a strategy that combines least-cost flaw selection with the delay of separable threats will be effective in reducing search-space size, and will do so without excessive computational overhead. Although such a strategy thus provides a good default, we also show that certain domain characteristics may reduce its effectiveness.<|reference_end|> | arxiv | @article{pollack1997flaw,
title={Flaw Selection Strategies for Partial-Order Planning},
author={M. E. Pollack, D. Joslin, M. Paolucci},
journal={Journal of Artificial Intelligence Research, Vol 6, (1997),
223-262},
year={1997},
archivePrefix={arXiv},
eprint={cs/9706101},
primaryClass={cs.AI}
} | pollack1997flaw |
arxiv-675989 | cs/9706102 | A Complete Classification of Tractability in RCC-5 | <|reference_start|>A Complete Classification of Tractability in RCC-5: We investigate the computational properties of the spatial algebra RCC-5 which is a restricted version of the RCC framework for spatial reasoning. The satisfiability problem for RCC-5 is known to be NP-complete but not much is known about its approximately four billion subclasses. We provide a complete classification of satisfiability for all these subclasses into polynomial and NP-complete respectively. In the process, we identify all maximal tractable subalgebras which are four in total.<|reference_end|> | arxiv | @article{jonsson1997a,
title={A Complete Classification of Tractability in RCC-5},
author={P. Jonsson, T. Drakengren},
journal={Journal of Artificial Intelligence Research, Vol 6, (1997),
211-221},
year={1997},
archivePrefix={arXiv},
eprint={cs/9706102},
primaryClass={cs.AI}
} | jonsson1997a |
arxiv-675990 | cs/9707101 | A New Look at the Easy-Hard-Easy Pattern of Combinatorial Search Difficulty | <|reference_start|>A New Look at the Easy-Hard-Easy Pattern of Combinatorial Search Difficulty: The easy-hard-easy pattern in the difficulty of combinatorial search problems as constraints are added has been explained as due to a competition between the decrease in number of solutions and increased pruning. We test the generality of this explanation by examining one of its predictions: if the number of solutions is held fixed by the choice of problems, then increased pruning should lead to a monotonic decrease in search cost. Instead, we find the easy-hard-easy pattern in median search cost even when the number of solutions is held constant, for some search methods. This generalizes previous observations of this pattern and shows that the existing theory does not explain the full range of the peak in search cost. In these cases the pattern appears to be due to changes in the size of the minimal unsolvable subproblems, rather than changing numbers of solutions.<|reference_end|> | arxiv | @article{mammen1997a,
title={A New Look at the Easy-Hard-Easy Pattern of Combinatorial Search
Difficulty},
author={D. L. Mammen, T. Hogg},
journal={Journal of Artificial Intelligence Research, Vol 7, (1997), 47-66},
year={1997},
archivePrefix={arXiv},
eprint={cs/9707101},
primaryClass={cs.AI}
} | mammen1997a |
arxiv-675991 | cs/9707102 | Eight Maximal Tractable Subclasses of Allen's Algebra with Metric Time | <|reference_start|>Eight Maximal Tractable Subclasses of Allen's Algebra with Metric Time: This paper combines two important directions of research in temporal reasoning: that of finding maximal tractable subclasses of Allen's interval algebra, and that of reasoning with metric temporal information. Eight new maximal tractable subclasses of Allen's interval algebra are presented, some of them subsuming previously reported tractable algebras. The algebras allow for metric temporal constraints on interval starting or ending points, using the recent framework of Horn DLRs. Two of the algebras can express the notion of sequentiality between intervals, being the first such algebras admitting both qualitative and metric time.<|reference_end|> | arxiv | @article{drakengren1997eight,
title={Eight Maximal Tractable Subclasses of Allen's Algebra with Metric Time},
author={T. Drakengren, P. Jonsson},
journal={Journal of Artificial Intelligence Research, Vol 7, (1997), 25-45},
year={1997},
archivePrefix={arXiv},
eprint={cs/9707102},
primaryClass={cs.AI}
} | drakengren1997eight |
arxiv-675992 | cs/9707103 | Defining Relative Likelihood in Partially-Ordered Preferential Structures | <|reference_start|>Defining Relative Likelihood in Partially-Ordered Preferential Structures: Starting with a likelihood or preference order on worlds, we extend it to a likelihood ordering on sets of worlds in a natural way, and examine the resulting logic. Lewis earlier considered such a notion of relative likelihood in the context of studying counterfactuals, but he assumed a total preference order on worlds. Complications arise when examining partial orders that are not present for total orders. There are subtleties involving the exact approach to lifting the order on worlds to an order on sets of worlds. In addition, the axiomatization of the logic of relative likelihood in the case of partial orders gives insight into the connection between relative likelihood and default reasoning.<|reference_end|> | arxiv | @article{halpern1997defining,
title={Defining Relative Likelihood in Partially-Ordered Preferential
Structures},
author={J. Y. Halpern},
journal={Journal of Artificial Intelligence Research, Vol 7, (1997), 1-24},
year={1997},
archivePrefix={arXiv},
eprint={cs/9707103},
primaryClass={cs.AI}
} | halpern1997defining |
arxiv-675993 | cs/9709101 | Towards Flexible Teamwork | <|reference_start|>Towards Flexible Teamwork: Many AI researchers are today striving to build agent teams for complex, dynamic multi-agent domains, with intended applications in arenas such as education, training, entertainment, information integration, and collective robotics. Unfortunately, uncertainties in these complex, dynamic domains obstruct coherent teamwork. In particular, team members often encounter differing, incomplete, and possibly inconsistent views of their environment. Furthermore, team members can unexpectedly fail in fulfilling responsibilities or discover unexpected opportunities. Highly flexible coordination and communication is key in addressing such uncertainties. Simply fitting individual agents with precomputed coordination plans will not do, for their inflexibility can cause severe failures in teamwork, and their domain-specificity hinders reusability. Our central hypothesis is that the key to such flexibility and reusability is providing agents with general models of teamwork. Agents exploit such models to autonomously reason about coordination and communication, providing requisite flexibility. Furthermore, the models enable reuse across domains, both saving implementation effort and enforcing consistency. This article presents one general, implemented model of teamwork, called STEAM. The basic building block of teamwork in STEAM is joint intentions (Cohen & Levesque, 1991b); teamwork in STEAM is based on agents' building up a (partial) hierarchy of joint intentions (this hierarchy is seen to parallel Grosz & Kraus's partial SharedPlans, 1996). Furthermore, in STEAM, team members monitor the team's and individual members' performance, reorganizing the team as necessary. Finally, decision-theoretic communication selectivity in STEAM ensures reduction in communication overheads of teamwork, with appropriate sensitivity to the environmental conditions. This article describes STEAM's application in three different complex domains, and presents detailed empirical results.<|reference_end|> | arxiv | @article{tambe1997towards,
title={Towards Flexible Teamwork},
author={M. Tambe},
journal={Journal of Artificial Intelligence Research, Vol 7, (1997), 83-124},
year={1997},
archivePrefix={arXiv},
eprint={cs/9709101},
primaryClass={cs.AI}
} | tambe1997towards |
arxiv-675994 | cs/9709102 | Identifying Hierarchical Structure in Sequences: A linear-time algorithm | <|reference_start|>Identifying Hierarchical Structure in Sequences: A linear-time algorithm: SEQUITUR is an algorithm that infers a hierarchical structure from a sequence of discrete symbols by replacing repeated phrases with a grammatical rule that generates the phrase, and continuing this process recursively. The result is a hierarchical representation of the original sequence, which offers insights into its lexical structure. The algorithm is driven by two constraints that reduce the size of the grammar, and produce structure as a by-product. SEQUITUR breaks new ground by operating incrementally. Moreover, the method's simple structure permits a proof that it operates in space and time that is linear in the size of the input. Our implementation can process 50,000 symbols per second and has been applied to an extensive range of real world sequences.<|reference_end|> | arxiv | @article{nevill-manning1997identifying,
title={Identifying Hierarchical Structure in Sequences: A linear-time algorithm},
author={C. G. Nevill-Manning, I. H. Witten},
journal={Journal of Artificial Intelligence Research, Vol 7, (1997), 67-82},
year={1997},
archivePrefix={arXiv},
eprint={cs/9709102},
primaryClass={cs.AI}
} | nevill-manning1997identifying |
arxiv-675995 | cs/9710101 | Analysis of Three-Dimensional Protein Images | <|reference_start|>Analysis of Three-Dimensional Protein Images: A fundamental goal of research in molecular biology is to understand protein structure. Protein crystallography is currently the most successful method for determining the three-dimensional (3D) conformation of a protein, yet it remains labor intensive and relies on an expert's ability to derive and evaluate a protein scene model. In this paper, the problem of protein structure determination is formulated as an exercise in scene analysis. A computational methodology is presented in which a 3D image of a protein is segmented into a graph of critical points. Bayesian and certainty factor approaches are described and used to analyze critical point graphs and identify meaningful substructures, such as alpha-helices and beta-sheets. Results of applying the methodologies to protein images at low and medium resolution are reported. The research is related to approaches to representation, segmentation and classification in vision, as well as to top-down approaches to protein structure prediction.<|reference_end|> | arxiv | @article{leherte1997analysis,
title={Analysis of Three-Dimensional Protein Images},
author={L. Leherte, J. Glasgow, K. Baxter, E. Steeg, S. Fortier},
journal={Journal of Artificial Intelligence Research, Vol 7, (1997),
125-159},
year={1997},
archivePrefix={arXiv},
eprint={cs/9710101},
primaryClass={cs.AI q-bio}
} | leherte1997analysis |
arxiv-675996 | cs/9711102 | Storing and Indexing Plan Derivations through Explanation-based Analysis of Retrieval Failures | <|reference_start|>Storing and Indexing Plan Derivations through Explanation-based Analysis of Retrieval Failures: Case-Based Planning (CBP) provides a way of scaling up domain-independent planning to solve large problems in complex domains. It replaces the detailed and lengthy search for a solution with the retrieval and adaptation of previous planning experiences. In general, CBP has been demonstrated to improve performance over generative (from-scratch) planning. However, the performance improvements it provides are dependent on adequate judgements as to problem similarity. In particular, although CBP may substantially reduce planning effort overall, it is subject to a mis-retrieval problem. The success of CBP depends on these retrieval errors being relatively rare. This paper describes the design and implementation of a replay framework for the case-based planner DERSNLP+EBL. DERSNLP+EBL extends current CBP methodology by incorporating explanation-based learning techniques that allow it to explain and learn from the retrieval failures it encounters. These techniques are used to refine judgements about case similarity in response to feedback when a wrong decision has been made. The same failure analysis is used in building the case library, through the addition of repairing cases. Large problems are split and stored as single goal subproblems. Multi-goal problems are stored only when these smaller cases fail to be merged into a full solution. An empirical evaluation of this approach demonstrates the advantage of learning from experienced retrieval failure.<|reference_end|> | arxiv | @article{ihrig1997storing,
title={Storing and Indexing Plan Derivations through Explanation-based Analysis
of Retrieval Failures},
author={L. H. Ihrig, S. Kambhampati},
journal={Journal of Artificial Intelligence Research, Vol 7, (1997),
161-198},
year={1997},
archivePrefix={arXiv},
eprint={cs/9711102},
primaryClass={cs.AI}
} | ihrig1997storing |
arxiv-675997 | cs/9711103 | A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains | <|reference_start|>A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains: Partially observable Markov decision processes (POMDPs) are a natural model for planning problems where effects of actions are nondeterministic and the state of the world is not completely observable. It is difficult to solve POMDPs exactly. This paper proposes a new approximation scheme. The basic idea is to transform a POMDP into another one where additional information is provided by an oracle. The oracle informs the planning agent that the current state of the world is in a certain region. The transformed POMDP is consequently said to be region observable. It is easier to solve than the original POMDP. We propose to solve the transformed POMDP and use its optimal policy to construct an approximate policy for the original POMDP. By controlling the amount of additional information that the oracle provides, it is possible to find a proper tradeoff between computational time and approximation quality. In terms of algorithmic contributions, we study in details how to exploit region observability in solving the transformed POMDP. To facilitate the study, we also propose a new exact algorithm for general POMDPs. The algorithm is conceptually simple and yet is significantly more efficient than all previous exact algorithms.<|reference_end|> | arxiv | @article{zhang1997a,
title={A Model Approximation Scheme for Planning in Partially Observable
Stochastic Domains},
author={N. L. Zhang, W. Liu},
journal={Journal of Artificial Intelligence Research, Vol 7, (1997),
199-230},
year={1997},
archivePrefix={arXiv},
eprint={cs/9711103},
primaryClass={cs.AI}
} | zhang1997a |
arxiv-675998 | cs/9711104 | Dynamic Non-Bayesian Decision Making | <|reference_start|>Dynamic Non-Bayesian Decision Making: The model of a non-Bayesian agent who faces a repeated game with incomplete information against Nature is an appropriate tool for modeling general agent-environment interactions. In such a model the environment state (controlled by Nature) may change arbitrarily, and the feedback/reward function is initially unknown. The agent is not Bayesian, that is he does not form a prior probability neither on the state selection strategy of Nature, nor on his reward function. A policy for the agent is a function which assigns an action to every history of observations and actions. Two basic feedback structures are considered. In one of them -- the perfect monitoring case -- the agent is able to observe the previous environment state as part of his feedback, while in the other -- the imperfect monitoring case -- all that is available to the agent is the reward obtained. Both of these settings refer to partially observable processes, where the current environment state is unknown. Our main result refers to the competitive ratio criterion in the perfect monitoring case. We prove the existence of an efficient stochastic policy that ensures that the competitive ratio is obtained at almost all stages with an arbitrarily high probability, where efficiency is measured in terms of rate of convergence. It is further shown that such an optimal policy does not exist in the imperfect monitoring case. Moreover, it is proved that in the perfect monitoring case there does not exist a deterministic policy that satisfies our long run optimality criterion. In addition, we discuss the maxmin criterion and prove that a deterministic efficient optimal strategy does exist in the imperfect monitoring case under this criterion. Finally we show that our approach to long-run optimality can be viewed as qualitative, which distinguishes it from previous work in this area.<|reference_end|> | arxiv | @article{monderer1997dynamic,
title={Dynamic Non-Bayesian Decision Making},
author={D. Monderer, M. Tennenholtz},
journal={Journal of Artificial Intelligence Research, Vol 7, (1997),
231-248},
year={1997},
archivePrefix={arXiv},
eprint={cs/9711104},
primaryClass={cs.AI}
} | monderer1997dynamic |
arxiv-675999 | cs/9711105 | Mechanizing Coinduction and Corecursion in Higher-order Logic | <|reference_start|>Mechanizing Coinduction and Corecursion in Higher-order Logic: A theory of recursive and corecursive definitions has been developed in higher-order logic (HOL) and mechanized using Isabelle. Least fixedpoints express inductive data types such as strict lists; greatest fixedpoints express coinductive data types, such as lazy lists. Well-founded recursion expresses recursive functions over inductive data types; corecursion expresses functions that yield elements of coinductive data types. The theory rests on a traditional formalization of infinite trees. The theory is intended for use in specification and verification. It supports reasoning about a wide range of computable functions, but it does not formalize their operational semantics and can express noncomputable functions also. The theory is illustrated using finite and infinite lists. Corecursion expresses functions over infinite lists; coinduction reasons about such functions.<|reference_end|> | arxiv | @article{paulson2000mechanizing,
title={Mechanizing Coinduction and Corecursion in Higher-order Logic},
author={Lawrence C. Paulson},
journal={published in Journal of Logic and Computation 7 (March 1997),
175-204},
year={2000},
archivePrefix={arXiv},
eprint={cs/9711105},
primaryClass={cs.LO}
} | paulson2000mechanizing |
arxiv-676000 | cs/9711106 | Generic Automatic Proof Tools | <|reference_start|>Generic Automatic Proof Tools: This book chapter establishes connections between the interactive proof tool Isabelle and classical tableau and resolution technology. Isabelle's classical reasoner is described and demonstrated by an extended case study: the Church-Rosser theorem for combinators. Compared with other interactive theorem provers, Isabelle's classical reasoner achieves a high degree of automation.<|reference_end|> | arxiv | @article{paulson2001generic,
title={Generic Automatic Proof Tools},
author={Lawrence C. Paulson},
journal={published in Robert Veroff (editor), Automated Reasoning and its
Applications: Essays in Honor of Larry Wos (MIT Press, 1997), 23-47},
year={2001},
archivePrefix={arXiv},
eprint={cs/9711106},
primaryClass={cs.LO}
} | paulson2001generic |