Dataset columns (field name, type, and observed range of values):

    id               string       (length 9 to 10)
    submitter        string       (length 5 to 47)
    authors          string       (length 5 to 1.72k)
    title            string       (length 11 to 234)
    comments         string       (length 1 to 491)
    journal-ref      string       (length 4 to 396)
    doi              string       (length 13 to 97)
    report-no        string       (length 4 to 138)
    categories       categorical  (1 distinct value)
    license          categorical  (9 distinct values)
    abstract         string       (length 29 to 3.66k)
    versions         list         (length 1 to 21)
    update_date      int64        (range 1,180B to 1,718B)
    authors_parsed   sequence     (length 1 to 98)
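For working with these records programmatically, here is a minimal sketch that loads the dataset and inspects one record. It assumes the records are stored as JSON lines in a file named arxiv_cs_ai.jsonl; both the filename and the storage format are assumptions, so substitute the actual path and loader for your copy.

```python
import json

# Minimal sketch: read the records from a JSON-lines file.
# "arxiv_cs_ai.jsonl" is a hypothetical filename -- substitute your own copy.
records = []
with open("arxiv_cs_ai.jsonl", encoding="utf-8") as f:
    for line in f:
        records.append(json.loads(line))

print(len(records), "records loaded")
first = records[0]
print(first["id"], "-", first["title"])  # e.g. 1102.0714 - An architecture ...
print(len(first["versions"]), "version(s)")  # the versions field is a list
```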
1102.0714
Jose Hernandez-Orallo
Javier Insa-Cabrera, Jose Hernandez-Orallo
An architecture for the evaluation of intelligent systems
112 pages. In Spanish. Final Project Thesis
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the main research areas in Artificial Intelligence is the coding of agents (programs) which are able to learn by themselves in any situation. This means that agents must be useful for purposes other than those they were created for, such as playing chess. In this way we try to get closer to the original goal of Artificial Intelligence. One of the problems in deciding whether an agent is really intelligent is the measurement of its intelligence, since there is currently no reliable way to measure it. The purpose of this project is to create an interpreter that allows for the execution of several environments, including those which are generated randomly, so that an agent (a person or a program) can interact with them. Once the interaction between the agent and the environment is over, the interpreter measures the intelligence of the agent according to the actions, states and rewards the agent has undergone inside the environment during the test. As a result we are able to measure agents' intelligence in any possible environment, and to make comparisons between several agents in order to determine which of them is the most intelligent. In order to perform the tests, the interpreter must be able to randomly generate environments that are really useful for measuring agents' intelligence, since not every randomly generated environment will serve that purpose.
[ { "version": "v1", "created": "Thu, 3 Feb 2011 15:58:18 GMT" } ]
1,296,777,600,000
[ [ "Insa-Cabrera", "Javier", "" ], [ "Hernandez-Orallo", "Jose", "" ] ]
1102.0831
Madhu G
G.Madhu (1), Dr.A.Govardhan (2), Dr.T.V.Rajinikanth (3)
Intelligent Semantic Web Search Engines: A Brief Survey
null
International journal of Web & Semantic Technology (IJWesT) Vol.2, No.1, January 2011
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The World Wide Web (WWW) allows people to share information (data) from large database repositories globally. The amount of information grows across billions of databases, so specialized tools, known generically as search engines, are needed to search it. Although many search engines are available today, retrieving meaningful information with them is difficult. To overcome this problem and retrieve meaningful information intelligently, semantic web technologies are playing a major role. In this paper we present a survey on the generations of search engines and the role of search engines in the intelligent web and semantic search technologies.
[ { "version": "v1", "created": "Fri, 4 Feb 2011 03:56:09 GMT" } ]
1,297,036,800,000
[ [ "Madhu", "G.", "" ], [ "Govardhan", "Dr. A.", "" ], [ "Rajinikanth", "Dr. T. V.", "" ] ]
1102.2670
Yasin Abbasi-Yadkori
Yasin Abbasi-Yadkori, David Pal, Csaba Szepesvari
Online Least Squares Estimation with Self-Normalized Processes: An Application to Bandit Problems
Submitted to the 24th Annual Conference on Learning Theory (COLT 2011)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The analysis of online least squares estimation is at the heart of many stochastic sequential decision making problems. We employ tools from the theory of self-normalized processes to provide a simple and self-contained proof of a tail bound for a vector-valued martingale. We use the bound to construct new, tighter confidence sets for the least squares estimate. We apply the confidence sets to several online decision problems, such as the multi-armed and the linearly parametrized bandit problems. The confidence sets are potentially applicable to other problems such as sleeping bandits, generalized linear bandits, and other linear control problems. We improve the regret bound of the Upper Confidence Bound (UCB) algorithm of Auer et al. (2002) and show that its regret is, with high probability, a problem dependent constant. In the case of linear bandits (Dani et al., 2008), we improve the problem dependent bound in the dimension and number of time steps. Furthermore, as opposed to the previous result, we prove that our bound holds for small sample sizes, and at the same time the worst case bound is improved by a logarithmic factor and the constant is improved.
[ { "version": "v1", "created": "Mon, 14 Feb 2011 04:06:31 GMT" } ]
1,297,728,000,000
[ [ "Abbasi-Yadkori", "Yasin", "" ], [ "Pal", "David", "" ], [ "Szepesvari", "Csaba", "" ] ]
1102.2984
Andreas Baldi
Rjab Hajlaoui, Mariem Gzara, Abdelaziz Dammak
Hybrid Model for Solving Multi-Objective Problems Using Evolutionary Algorithm and Tabu Search
5 pages
World of Computer Science and Information Technology Journal (WCSIT),ISSN: 2221-0741,Vol. 1, No. 1, 5-9, Feb. 2011
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new multi-objective hybrid model that combines the neighborhood-search strength of tabu search (TS) with the strong exploration capacity of an evolutionary algorithm. This model was implemented and tested on benchmark functions (ZDT1, ZDT2, and ZDT3), using a network of computers.
[ { "version": "v1", "created": "Tue, 15 Feb 2011 07:43:03 GMT" } ]
1,297,814,400,000
[ [ "Hajlaoui", "Rjab", "" ], [ "Gzara", "Mariem", "" ], [ "Dammak", "Abdelaziz", "" ] ]
1102.4924
Minghao Yin
Junping Zhou, Minghao Yin
New Worst-Case Upper Bound for #XSAT
submitted to AAAI-10
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An algorithm running in O(1.1995^n) time is presented for counting models of exact satisfiability formulae (#XSAT). This is faster than the previously best algorithm, which runs in O(1.2190^n). In order to improve the efficiency of the algorithm, a new principle, the common literals principle, is introduced to simplify formulae. This allows us to eliminate more common literals. In addition, we are the first to inject resolution principles into solving the #XSAT problem, which further improves the efficiency of the algorithm.
[ { "version": "v1", "created": "Thu, 24 Feb 2011 08:16:59 GMT" } ]
1,298,592,000,000
[ [ "Zhou", "Junping", "" ], [ "Yin", "Minghao", "" ] ]
1102.5385
Martin Slota
Martin Slota and Jo\~ao Leite
Back and Forth Between Rules and SE-Models (Extended Version)
25 pages; extended version of the paper accepted for LPNMR 2011
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rules in logic programming encode information about mutual interdependencies between literals that is not captured by any of the commonly used semantics. This information becomes essential as soon as a program needs to be modified or further manipulated. We argue that, in these cases, a program should not be viewed solely as the set of its models. Instead, it should be viewed and manipulated as the set of sets of models of each rule inside it. With this in mind, we investigate and highlight relations between the SE-model semantics and individual rules. We identify a set of representatives of rule equivalence classes induced by SE-models, and so pinpoint the exact expressivity of this semantics with respect to a single rule. We also characterise the class of sets of SE-interpretations representable by a single rule. Finally, we discuss the introduction of two notions of equivalence, both stronger than strong equivalence [1] and weaker than strong update equivalence [2], which seem more suitable whenever the dependency information found in rules is of interest.
[ { "version": "v1", "created": "Sat, 26 Feb 2011 03:06:55 GMT" }, { "version": "v2", "created": "Tue, 1 Mar 2011 18:08:20 GMT" } ]
1,299,024,000,000
[ [ "Slota", "Martin", "" ], [ "Leite", "João", "" ] ]
1102.5635
Martin Josef Geiger
Martin Josef Geiger, Marc Sevaux
Practical inventory routing: A problem definition and an optimization method
null
Proceedings of the EU/MEeting 2011 - Workshop on Client-Centered Logistics and International Aid, February 21-22, 2011, pages 32-35
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The global objective of this work is to provide practical optimization methods to companies involved in inventory routing problems, taking into account this new type of data. Also, companies are sometimes not able to deal with changing plans every period and would like to adopt regular structures for serving customers.
[ { "version": "v1", "created": "Mon, 28 Feb 2011 10:42:29 GMT" } ]
1,298,937,600,000
[ [ "Geiger", "Martin Josef", "" ], [ "Sevaux", "Marc", "" ] ]
1103.0127
Shobha Shankar
Shobha Shankar, Dr. T. Ananthapadmanabha
Fuzzy Approach to Critical Bus Ranking under Normal and Line Outage Contingencies
12 pages, 7 figures, CCSIT Conference
Advanced Computing, CCSIT Proceedings, Part III, pp. 400-406, Jan 2011
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identification of critical or weak buses for a given operating condition is an important task in the load dispatch centre. It has become more vital in view of the threat of voltage instability leading to voltage collapse. This paper presents a fuzzy approach for ranking critical buses in a power system under normal and network contingencies based on a Line Flow index and the voltage profiles at load buses. The Line Flow index determines the maximum load that can be connected to a bus in order to maintain stability before the system reaches its bifurcation point. The Line Flow index (LF index), along with the voltage profiles at the load buses, is represented in fuzzy set notation. These are then evaluated using fuzzy rules to compute a Criticality Index, and the critical buses are ranked by this index. The bus with the highest rank is the weakest bus, as it can withstand only a small amount of load before causing voltage collapse. The proposed method is tested on a five-bus test system.
[ { "version": "v1", "created": "Tue, 1 Mar 2011 10:35:44 GMT" } ]
1,299,024,000,000
[ [ "Shankar", "Shobha", "" ], [ "Ananthapadmanabha", "Dr. T.", "" ] ]
1103.0632
Hioual Ouassila
Hioual Ouassila and Boufaida Zizette
An Agent Based Architecture (Using Planning) for Dynamic and Semantic Web Services Composition in an EBXML Context
22 pages, 11 figures, 1 table
International Journal of Database Management Systems ( IJDMS ), Vol.3, No.1, February 2011 110-131
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The process-based semantic composition of Web Services is gaining considerable momentum as an approach for the effective integration of distributed, heterogeneous, and autonomous applications. To compose Web Services semantically, we need an ontology. There are several ways of inserting semantics in Web Services. One of them consists of using description languages like OWL-S. In this paper, we introduce our work, which consists in the proposition of a new model and the use of semantic matching technology for semantic and dynamic composition of ebXML business processes.
[ { "version": "v1", "created": "Thu, 3 Mar 2011 09:44:06 GMT" } ]
1,433,116,800,000
[ [ "Ouassila", "Hioual", "" ], [ "Zizette", "Boufaida", "" ] ]
1103.0697
Adrian Walker
Adrian Walker
A Wiki for Business Rules in Open Vocabulary, Executable English
9 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of business-IT alignment is of widespread economic concern. As one way of addressing the problem, this paper describes an online system that functions as a kind of Wiki -- one that supports the collaborative writing and running of business and scientific applications, as rules in open vocabulary, executable English, using a browser. Since the rules are in English, they are indexed by Google and other search engines. This is useful when looking for rules for a task that one has in mind. The design of the system integrates the semantics of data, with a semantics of an inference method, and also with the meanings of English sentences. As such, the system has functionality that may be useful for the Rules, Logic, Proof and Trust requirements of the Semantic Web. The system accepts rules, and small numbers of facts, typed or copy-pasted directly into a browser. One can then run the rules, again using a browser. For larger amounts of data, the system uses information in the rules to automatically generate and run SQL over networked databases. From a few highly declarative rules, the system typically generates SQL that would be too complicated to write reliably by hand. However, the system can explain its results in step-by-step hypertexted English, at the business or scientific level. As befits a Wiki, shared use of the system is free.
[ { "version": "v1", "created": "Thu, 3 Mar 2011 14:31:32 GMT" } ]
1,299,196,800,000
[ [ "Walker", "Adrian", "" ] ]
1103.1003
Eray Ozkural
Eray \"Ozkural
Teraflop-scale Incremental Machine Learning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-using previous solutions, learning programming idioms and discovery of frequent subprograms. Experiments with two training sequences demonstrate that our approach to incremental learning is effective.
[ { "version": "v1", "created": "Sat, 5 Mar 2011 03:41:30 GMT" } ]
1,426,723,200,000
[ [ "Özkural", "Eray", "" ] ]
1103.1157
Nicola Di Mauro
Nicola Di Mauro, Teresa M.A. Basile, Stefano Ferilli, Floriana Esposito
GRASP and path-relinking for Coalition Structure Generation
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Artificial Intelligence, Coalition Structure Generation (CSG) refers to those complex cooperative problems that require finding an optimal partition, maximising a social welfare, of a set of entities involved in a system into exhaustive and disjoint coalitions. The solution of the CSG problem finds applications in many fields such as Machine Learning (covering machines, clustering), Data Mining (decision tree, discretization), Graph Theory, Natural Language Processing (aggregation), Semantic Web (service composition), and Bioinformatics. The problem of finding the optimal coalition structure is NP-complete. In this paper we present a greedy adaptive search procedure (GRASP) with path-relinking to efficiently search the space of coalition structures. Experiments and comparisons to other algorithms prove the validity of the proposed method in solving this hard combinatorial problem.
[ { "version": "v1", "created": "Sun, 6 Mar 2011 18:54:04 GMT" }, { "version": "v2", "created": "Wed, 9 Mar 2011 10:55:28 GMT" } ]
1,299,715,200,000
[ [ "Di Mauro", "Nicola", "" ], [ "Basile", "Teresa M. A.", "" ], [ "Ferilli", "Stefano", "" ], [ "Esposito", "Floriana", "" ] ]
1103.1205
Minal Tomar
Minal Tomar and Pratibha Singh
A Directional Feature with Energy based Offline Signature Verification Network
10 pages, 6 figures
International Journal on Soft Computing ( IJSC ), Vol.2, No.1, February 2011
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A signature used as a biometric is implemented in various systems, and every signature signed by each person is distinct. So, it is very important to have a computerized signature verification system. In an offline signature verification system, dynamic features are obviously not available, but one can treat a signature as an image and apply image processing techniques to build an effective offline signature verification system. The authors propose an intelligent network that uses both a directional feature and energy density as inputs to the same network and classifies the signature. A neural network is used as the classifier for this system. The results are compared with both the very basic energy density method and a simple directional feature method of offline signature verification, and the proposed network is found to be very effective compared to these two methods, especially for a small number of training samples, which makes it practical to implement.
[ { "version": "v1", "created": "Mon, 7 Mar 2011 07:17:13 GMT" } ]
1,299,542,400,000
[ [ "Tomar", "Minal", "" ], [ "Singh", "Pratibha", "" ] ]
1103.1711
D. Bryce
D. Bryce, S. Kambhampati, D. E. Smith
Planning Graph Heuristics for Belief Space Search
null
Journal Of Artificial Intelligence Research, Volume 26, pages 35-99, 2006
10.1613/jair.1869
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Some recent works in conditional planning have proposed reachability heuristics to improve planner scalability, but many lack a formal description of the properties of their distance estimates. To place previous work in context and extend work on heuristics for conditional planning, we provide a formal basis for distance estimates between belief states. We give a definition for the distance between belief states that relies on aggregating underlying state distance measures. We give several techniques to aggregate state distances and their associated properties. Many existing heuristics exhibit a subset of the properties, but in order to provide a standardized comparison we present several generalizations of planning graph heuristics that are used in a single planner. We complement our belief state distance estimate framework by also investigating efficient planning graph data structures that incorporate BDDs to compute the most effective heuristics. We developed two planners to serve as test-beds for our investigation. The first, CAltAlt, is a conformant regression planner that uses A* search. The second, POND, is a conditional progression planner that uses AO* search. We show the relative effectiveness of our heuristic techniques within these planners. We also compare the performance of these planners with several state-of-the-art approaches in conditional planning.
[ { "version": "v1", "created": "Wed, 9 Mar 2011 06:43:55 GMT" } ]
1,305,244,800,000
[ [ "Bryce", "D.", "" ], [ "Kambhampati", "S.", "" ], [ "Smith", "D. E.", "" ] ]
1103.2091
Tejbanta Singh Chingtham Mr
Tejbanta Singh Chingtham, G. Sahoo and M.K. Ghose
An Artificial Immune System Model for Multi-Agents Resource Sharing in Distributed Environments
null
International Journal on Computer Science and Engineering (IJCSE), Vol. 02, No. 05, 2010, pp 1813-1818
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The natural immune system plays a vital role in the survival of all living beings. It provides a mechanism for an organism to defend itself from external predators, making it a consistent system capable of adapting for survival when changes occur. The human immune system has motivated scientists and engineers to find powerful information processing algorithms that have solved complex engineering tasks. This paper explores one of the various possibilities for solving problems in a multiagent scenario wherein multiple robots are deployed to achieve a goal collectively. The final goal depends on the performance of the individual robots and their survival without losing energy beyond a predetermined threshold value, by deploying an evolutionary computational technique, otherwise called an artificial immune system, that imitates the biological immune system.
[ { "version": "v1", "created": "Thu, 24 Feb 2011 09:12:17 GMT" } ]
1,299,801,600,000
[ [ "Chingtham", "Tejbanta Singh", "" ], [ "Sahoo", "G.", "" ], [ "Ghose", "M. K.", "" ] ]
1103.2342
Tiago Silva
Tiago Silva and In\^es Dutra
SPPAM - Statistical PreProcessing AlgorithM
Submited to IJCAI11 conference on January 25, 2011
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most machine learning tools work with a single table where each row is an instance and each column is an attribute. Each cell of the table contains an attribute value for an instance. This representation prevents one important form of learning, namely classification based on groups of correlated records, such as multiple exams of a single patient, internet customer preferences, weather forecasts, or predictions of sea conditions for a given day. To some extent, relational learning methods, such as inductive logic programming, can capture this correlation through the use of intensional predicates added to the background knowledge. In this work, we propose SPPAM, an algorithm that aggregates past observations into one single record. We show that applying SPPAM to the original correlated data, before the learning task, can produce classifiers that are better than the ones trained using all records.
[ { "version": "v1", "created": "Fri, 11 Mar 2011 18:58:40 GMT" } ]
1,300,060,800,000
[ [ "Silva", "Tiago", "" ], [ "Dutra", "Inês", "" ] ]
1103.2376
Leonid Perlovsky
Leonid Perlovsky (Harvard University and the AFRL)
Language, Emotions, and Cultures: Emotional Sapir-Whorf Hypothesis
16p, 2 figs
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An emotional version of the Sapir-Whorf hypothesis suggests that differences in language emotionalities influence differences among cultures no less than conceptual differences do. The conceptual contents of languages and cultures are to a significant extent determined by words and their semantic differences; these could be borrowed among languages and exchanged among cultures. Emotional differences, as suggested in the paper, are related to grammar and mostly cannot be borrowed. Conceptual and emotional mechanisms of languages are considered here along with their functions in the mind and in cultural evolution. A fundamental contradiction in the human mind is considered: language evolution requires reduced emotionality, but "too low" emotionality makes language "irrelevant to life," disconnected from sensory-motor experience. Neural mechanisms of these processes are suggested, as well as their mathematical models: the knowledge instinct, the language instinct, the dual model connecting language and cognition, dynamic logic, and neural modeling fields. Mathematical results are related to cognitive science, linguistics, and psychology. Experimental evidence and theoretical arguments are discussed. Approximate equations for the evolution of human minds and cultures are obtained. Their solutions identify three types of cultures: "conceptual"-pragmatic cultures, in which emotionality of language is reduced and differentiation overtakes synthesis, resulting in fast evolution at the price of uncertainty of values, self-doubts, and internal crises; "traditional-emotional" cultures, where differentiation lags behind synthesis, resulting in cultural stability at the price of stagnation; and "multi-cultural" societies combining fast cultural evolution and stability. Unsolved problems and future theoretical and experimental directions are discussed.
[ { "version": "v1", "created": "Fri, 11 Mar 2011 21:13:38 GMT" } ]
1,300,147,200,000
[ [ "Perlovsky", "Leonid", "", "Harvard University and the AFRL" ] ]
1103.3123
Yong Lai
Yong Lai, Dayou Liu, Shengsheng Wang
Reduced Ordered Binary Decision Diagram with Implied Literals: A New knowledge Compilation Approach
18 pages, 13 figures
null
10.1007/s10115-012-0525-6
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge compilation is an approach to tackle the computational intractability of general reasoning problems. According to this approach, knowledge bases are converted off-line into a target compilation language which is tractable for on-line querying. The reduced ordered binary decision diagram (ROBDD) is one of the most influential target languages. We generalize ROBDD by associating some implied literals with each node, and the new language is called reduced ordered binary decision diagram with implied literals (ROBDD-L). Then we discuss a kind of subset of ROBDD-L called ROBDD-i with precisely i implied literals (0 \leq i \leq \infty). In particular, ROBDD-0 is isomorphic to ROBDD; ROBDD-\infty requires that each node be associated with as many implied literals as possible. We show that ROBDD-i has uniqueness over a given variable order, and that ROBDD-\infty is the most succinct subset of ROBDD-L and can meet most of the querying requirements involved in the knowledge compilation map. Finally, we propose an ROBDD-i compilation algorithm for any i and an ROBDD-\infty compilation algorithm. Based on them, we implement an ROBDD-L package called BDDjLu and draw some conclusions from preliminary experimental results: ROBDD-\infty is obviously smaller than ROBDD for all benchmarks; ROBDD-\infty is smaller than the d-DNNF for the benchmarks whose compilation results are relatively small; and it seems that it is better to transform ROBDDs-\infty into FBDDs and ROBDDs rather than to compile the benchmarks directly.
[ { "version": "v1", "created": "Wed, 16 Mar 2011 08:12:05 GMT" }, { "version": "v2", "created": "Thu, 24 Mar 2011 04:23:05 GMT" } ]
1,368,489,600,000
[ [ "Lai", "Yong", "" ], [ "Liu", "Dayou", "" ], [ "Wang", "Shengsheng", "" ] ]
1103.3223
Piero Giacomelli
Piero Giacomelli, Giulia Munaro and Roberto Rosso
Using Soft Computer Techniques on Smart Devices for Monitoring Chronic Diseases: the CHRONIOUS case
presented at "The Third International Conference on eHealth, Telemedicine, and Social Medicine (eTELEMED 2011)"
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
CHRONIOUS is an Open, Ubiquitous and Adaptive Chronic Disease Management Platform for Chronic Obstructive Pulmonary Disease (COPD), Chronic Kidney Disease (CKD) and Renal Insufficiency. It consists of several modules: an ontology based literature search engine, a rule based decision support system, remote sensors interacting with lifestyle interfaces (PDA, monitor touchscreen) and a machine learning module. All these modules interact with each other to allow the monitoring of two types of chronic diseases and to help clinicians make treatment decisions. This paper illustrates how some machine learning algorithms and a rule based decision support system can be used on smart devices to monitor chronic patients. We will analyse how a set of machine learning algorithms can be used on smart devices to alert the clinician when a patient's health condition shows a worsening trend.
[ { "version": "v1", "created": "Wed, 16 Mar 2011 16:28:00 GMT" } ]
1,300,320,000,000
[ [ "Giacomelli", "Piero", "" ], [ "Munaro", "Giulia", "" ], [ "Rosso", "Roberto", "" ] ]
1103.3240
Ken Duffy
K. R. Duffy and C. Bordenave and D. J. Leith
Decentralized Constraint Satisfaction
null
IEEE/ACM Transactions on Networking, 21 (4), 1298-1308, 2013
10.1109/TNET.2012.2222923
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that several important resource allocation problems in wireless networks fit within the common framework of Constraint Satisfaction Problems (CSPs). Inspired by the requirements of these applications, where variables are located at distinct network devices that may not be able to communicate but may interfere, we define natural criteria that a CSP solver must possess in order to be practical. We term these algorithms decentralized CSP solvers. The best known CSP solvers were designed for centralized problems and do not meet these criteria. We introduce a stochastic decentralized CSP solver and prove that it will find a solution in almost surely finite time, should one exist, also showing it has many practically desirable properties. We benchmark the algorithm's performance on a well-studied class of CSPs, random k-SAT, illustrating that the time the algorithm takes to find a satisfying assignment is competitive with stochastic centralized solvers on problems with order a thousand variables despite its decentralized nature. We demonstrate the solver's practical utility for the problems that motivated its introduction by using it to find a non-interfering channel allocation for a network formed from data from downtown Manhattan.
[ { "version": "v1", "created": "Wed, 2 Mar 2011 15:00:09 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2011 14:44:16 GMT" }, { "version": "v3", "created": "Wed, 7 Sep 2011 11:00:47 GMT" }, { "version": "v4", "created": "Tue, 9 Oct 2012 07:46:22 GMT" } ]
1,379,548,800,000
[ [ "Duffy", "K. R.", "" ], [ "Bordenave", "C.", "" ], [ "Leith", "D. J.", "" ] ]
1103.3417
Andreas Baldi
Hazim A. Farhan, Hussein H. Owaied, Suhaib I. Al-Ghazi
Finding Shortest Path for Developed Cognitive Map Using Medial Axis
9 pages
World of Computer Science and Information Technology Journal (WCSIT), ISSN: 2221-0741, Vol. 1, No. 2, 17-25, 2011
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an enhancement of the medial axis algorithm to be used for finding the optimal shortest path for a developed cognitive map. The cognitive map has been developed based on architectural blueprint maps. The idea of using the medial axis is to find the main path's central pixels; each center pixel represents the center distance between two side border pixels. The algorithm needs these pixels to build a network of nodes for the path, where each node represents a turning in the real world (left, right, critical left, critical right...). The algorithm also ignores center-pixel paths that are too small for intelligent robot navigation. The idea of this algorithm is to find the possible shortest path between start and end points. The goal of this research is to extract a simple, robust representation of the shape of the cognitive map together with the optimal shortest path between start and end points. The intelligent robot will use this algorithm in order to decrease the time needed for sweeping the targeted building.
[ { "version": "v1", "created": "Thu, 17 Mar 2011 14:02:50 GMT" } ]
1,300,406,400,000
[ [ "Farhan", "Hazim A.", "" ], [ "Owaied", "Hussein H.", "" ], [ "Al-Ghazi", "Suhaib I.", "" ] ]
1103.3420
Sofiene Haboubi
Sofiene Haboubi and Samia Maddouri
Extraction of handwritten areas from colored image of bank checks by an hybrid method
International Conference on Machine Intelligence (ACIDCA-ICIM), Tozeur, Tunisia, November 2005
null
null
null
cs.AI
http://creativecommons.org/licenses/publicdomain/
One of the first steps in the realization of an automatic check recognition system is the extraction of the handwritten areas. We propose in this paper a hybrid method to extract these areas. This method is based on digit recognition by Fourier descriptors and different steps of color image processing. It requires recognizing the bank from its code, which is located in the check's marking band, as well as recognizing the handwriting color by the method of difference of histograms. The extraction of the areas is then carried out using some mathematical morphology tools.
[ { "version": "v1", "created": "Thu, 17 Mar 2011 14:13:36 GMT" } ]
1,300,406,400,000
[ [ "Haboubi", "Sofiene", "" ], [ "Maddouri", "Samia", "" ] ]
1103.3687
Subbarao Kambhampati
William Cushing and J. Benton and Subbarao Kambhampati
Cost Based Satisficing Search Considered Harmful
Longer version of an extended abstract from SOCS 2010
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, several researchers have found that cost-based satisficing search with A* often runs into problems. Although some "work arounds" have been proposed to ameliorate the problem, there has not been any concerted effort to pinpoint its origin. In this paper, we argue that the origins can be traced back to the wide variance in action costs that is observed in most planning domains. We show that such cost variance misleads A* search, and that this is no trifling detail or accidental phenomenon, but a systemic weakness of the very concept of "cost-based evaluation functions + systematic search + combinatorial graphs". We show that satisficing search with sized-based evaluation functions is largely immune to this problem.
[ { "version": "v1", "created": "Fri, 18 Mar 2011 18:57:46 GMT" } ]
1,300,665,600,000
[ [ "Cushing", "William", "" ], [ "Benton", "J.", "" ], [ "Kambhampati", "Subbarao", "" ] ]
1103.3745
Nina Narodytska
Christian Bessiere, Nina Narodytska, Claude-Guy Quimper, Toby Walsh
The AllDifferent Constraint with Precedences
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose AllDiffPrecedence, a new global constraint that combines together an AllDifferent constraint with precedence constraints that strictly order given pairs of variables. We identify a number of applications for this global constraint including instruction scheduling and symmetry breaking. We give an efficient propagation algorithm that enforces bounds consistency on this global constraint. We show how to implement this propagator using a decomposition that extends the bounds consistency enforcing decomposition proposed for the AllDifferent constraint. Finally, we prove that enforcing domain consistency on this global constraint is NP-hard in general.
[ { "version": "v1", "created": "Sat, 19 Mar 2011 03:50:45 GMT" } ]
1,300,752,000,000
[ [ "Bessiere", "Christian", "" ], [ "Narodytska", "Nina", "" ], [ "Quimper", "Claude-Guy", "" ], [ "Walsh", "Toby", "" ] ]
1103.3949
Ana Sofia Gomes
Ana Sofia Gomes, Jose Julio Alferes, Terrance Swift
A Goal-Directed Implementation of Query Answering for Hybrid MKNF Knowledge Bases
null
Theory and Practice of Logic Programming 14 (2014) 239-264
10.1017/S1471068412000439
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ontologies and rules are usually loosely coupled in knowledge representation formalisms. In fact, ontologies use open-world reasoning while the leading semantics for rules use non-monotonic, closed-world reasoning. One exception is the tightly-coupled framework of Minimal Knowledge and Negation as Failure (MKNF), which allows statements about individuals to be jointly derived via entailment from an ontology and inferences from rules. Nonetheless, the practical usefulness of MKNF has not always been clear, although recent work has formalized a general resolution-based method for querying MKNF when rules are taken to have the well-founded semantics, and the ontology is modeled by a general oracle. That work leaves open what algorithms should be used to relate the entailments of the ontology and the inferences of rules. In this paper we provide such algorithms, and describe the implementation of a query-driven system, CDF-Rules, for hybrid knowledge bases combining both (non-monotonic) rules under the well-founded semantics and a (monotonic) ontology, represented by a CDF Type-1 (ALQ) theory. To appear in Theory and Practice of Logic Programming (TPLP)
[ { "version": "v1", "created": "Mon, 21 Mar 2011 09:51:36 GMT" }, { "version": "v2", "created": "Thu, 1 Nov 2012 14:03:07 GMT" } ]
1,582,070,400,000
[ [ "Gomes", "Ana Sofia", "" ], [ "Alferes", "Jose Julio", "" ], [ "Swift", "Terrance", "" ] ]
1103.3954
Olivier Bailleux
Olivier Bailleux
BoolVar/PB v1.0, a java library for translating pseudo-Boolean constraints into CNF formulae
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BoolVar/PB is an open source java library dedicated to the translation of pseudo-Boolean constraints into CNF formulae. Input constraints can be categorized with tags. Several encoding schemes are implemented in such a way that each input constraint can be translated using one or several encoders, according to the related tags. The library can be easily extended by adding new encoders and/or new output formats.
[ { "version": "v1", "created": "Mon, 21 Mar 2011 10:14:40 GMT" } ]
1,300,752,000,000
[ [ "Bailleux", "Olivier", "" ] ]
1103.5034
Tong Chern
Tong Chern
On Understanding and Machine Understanding
due to some serious errors on pages 2, 3 and 5
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the present paper, we propose a self-similar network theory for basic understanding. By extending natural languages to a kind of so-called ideally sufficient language, we proceed a few steps toward the investigation of language searching and language understanding in AI. Image understanding, and the familiarity of the brain with its surrounding environment, are also discussed. Group effects are discussed by addressing the essence of the power of influence and by constructing the influence network of a society. We also give a discussion of inspirations.
[ { "version": "v1", "created": "Thu, 24 Mar 2011 03:35:24 GMT" }, { "version": "v2", "created": "Thu, 1 Feb 2018 14:02:38 GMT" } ]
1,517,529,600,000
[ [ "Chern", "Tong", "" ] ]
1104.0843
Minghao Yin
Jian Gao, Minghao Yin, and Ke Xu
Phase Transitions in Knowledge Compilation: an Experimental Study
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phase transitions in many complex combinatorial problems have been widely studied in the past decade. In this paper, we investigate phase transitions in knowledge compilation empirically, where DFA, OBDD and d-DNNF are chosen as the target languages into which random k-SAT instances are compiled. We perform intensive experiments to analyze the sizes of the compilation results and draw the following conclusions: there exists an easy-hard-easy pattern in compilations; the peak point of sizes in the pattern is related only to the ratio of the number of clauses to that of variables when k is fixed, regardless of the target language; and most sizes of compilation results increase exponentially as the number of variables grows, but there also exists a phase transition that separates a polynomial-increment region from the exponential-increment region. Moreover, we explain why the phase transition in compilations occurs by analyzing microstructures of DFAs, and conclude that a kind of solution interchangeability with more than 2 variables has a sharp transition near the peak point of the easy-hard-easy pattern, and thus has a great impact on the sizes of DFAs.
[ { "version": "v1", "created": "Tue, 5 Apr 2011 13:25:43 GMT" }, { "version": "v2", "created": "Sun, 17 Apr 2011 12:41:23 GMT" }, { "version": "v3", "created": "Fri, 3 Jun 2011 07:05:11 GMT" } ]
1,307,318,400,000
[ [ "Gao", "Jian", "" ], [ "Yin", "Minghao", "" ], [ "Xu", "Ke", "" ] ]
1104.1677
Muhammad Zaheer Aslam
Bashir Ahmad, Shakeel Ahmad, Shahid Hussain, Muhammad Zaheer Aslam and Zafar Abbas
Automatic Vehicle Checking Agent (VCA)
5 pages, 2 figures
Control Theory and Informatics,ISSN 2224-5774 (print) ISSN 2225-0492 (online),Vol 1, No.2, 2011
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
A definition of intelligence is given in terms of performance that can be quantitatively measured. In this study, we have presented a conceptual model of an Intelligent Agent System for an Automatic Vehicle Checking Agent (VCA). To achieve this goal, we have introduced several kinds of agents that exhibit intelligent features: the Management agent, Internal agent, External agent, Watcher agent and Report agent. Metrics and measurements are suggested for evaluating the performance of the Automatic Vehicle Checking Agent (VCA). Calibrated data and test facilities are suggested to facilitate the development of intelligent systems.
[ { "version": "v1", "created": "Sat, 9 Apr 2011 06:31:24 GMT" }, { "version": "v2", "created": "Sat, 3 Dec 2011 17:22:50 GMT" } ]
1,323,129,600,000
[ [ "Ahmad", "Bashir", "" ], [ "Ahmad", "Shakeel", "" ], [ "Hussain", "Shahid", "" ], [ "Aslam", "Muhammad Zaheer", "" ], [ "Abbas", "Zafar", "" ] ]
1104.1678
Muhammad Zaheer Aslam
Muhammad Zaheer Aslam, Nasimullah, Abdur Rashid Khan
A Proposed Decision Support System/Expert System for Guiding Fresh Students in Selecting a Faculty in Gomal University, Pakistan
I have withdrawn for some changes
Industrial Engineering Letters www.iiste.org ISSN 2224-6096 (Print) ISSN 2225-0581(Online) Vol 1, No.4, 2011
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the design and development of a proposed rule based Decision Support System that will help students in selecting the most suitable faculty/major when taking admission in Gomal University, Dera Ismail Khan, Pakistan. The basic idea of our approach is to design a model for testing and measuring student capabilities such as intelligence, understanding, comprehension, and mathematical concepts, together with the student's past academic record, and to apply the module results to a rule-based decision support system to determine the compatibility of those capabilities with the available faculties/majors in Gomal University. The result is shown as a list of suggested faculties/majors matched to the student's capabilities and abilities.
[ { "version": "v1", "created": "Sat, 9 Apr 2011 06:32:13 GMT" }, { "version": "v2", "created": "Fri, 20 Jan 2012 06:55:33 GMT" }, { "version": "v3", "created": "Thu, 8 Mar 2012 04:18:26 GMT" } ]
1,331,251,200,000
[ [ "Aslam", "Muhammad Zaheer", "" ], [ "Nasimullah", "", "" ], [ "Khan", "Abdur Rashid", "" ] ]
1104.1924
David Tolpin
David Tolpin, Solomon Eyal Shimony
Rational Deployment of CSP Heuristics
7 pages, 2 figures, to appear in IJCAI-2011, http://www.ijcai.org/
IJCAI-2011
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heuristics are crucial tools in decreasing search effort in varied fields of AI. In order to be effective, a heuristic must be efficient to compute, as well as provide useful information to the search algorithm. However, some well-known heuristics which do well in reducing backtracking are so heavy that the gain of deploying them in a search algorithm might be outweighed by their overhead. We propose a rational metareasoning approach to decide when to deploy heuristics, using CSP backtracking search as a case study. In particular, a value of information approach is taken to adaptive deployment of solution-count estimation heuristics for value ordering. Empirical results show that indeed the proposed mechanism successfully balances the tradeoff between decreasing backtracking and heuristic computational overhead, resulting in a significant overall search time reduction.
[ { "version": "v1", "created": "Mon, 11 Apr 2011 12:12:14 GMT" } ]
1,302,566,400,000
[ [ "Tolpin", "David", "" ], [ "Shimony", "Solomon Eyal", "" ] ]
1104.3250
Salah Rifai
Salah Rifai, Xavier Glorot, Yoshua Bengio, Pascal Vincent
Adding noise to the input of a model trained with a regularized objective
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Regularization is a well-studied problem in the context of neural networks. It is usually used to improve generalization performance when the number of input samples is relatively small or heavily contaminated with noise. The regularization of a parametric model can be achieved in different manners, some of which are early stopping (Morgan and Bourlard, 1990), weight decay, and output smoothing, which are used to avoid overfitting during training of the considered model. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters (Krogh and Hertz, 1991). Using Bishop's approximation (Bishop, 1995) of the objective function when a restricted type of noise is added to the input of a parametric function, we derive the higher order terms of the Taylor expansion and analyze the coefficients of the regularization terms induced by the noisy input. In particular we study the effect of penalizing the Hessian of the mapping function with respect to the input in terms of generalization performance. We also show how we can independently control this coefficient by explicitly penalizing the Jacobian of the mapping function on corrupted inputs.
[ { "version": "v1", "created": "Sat, 16 Apr 2011 18:09:13 GMT" } ]
1,303,171,200,000
[ [ "Rifai", "Salah", "" ], [ "Glorot", "Xavier", "" ], [ "Bengio", "Yoshua", "" ], [ "Vincent", "Pascal", "" ] ]
1104.3927
Christian Drescher
Christian Drescher and Toby Walsh
Translation-based Constraint Answer Set Solving
Self-archived version for IJCAI'11 Best Paper Track submission
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We solve constraint satisfaction problems through translation to answer set programming (ASP). Our reformulations have the property that unit-propagation in the ASP solver achieves well defined local consistency properties like arc, bound and range consistency. Experiments demonstrate the computational value of this approach.
[ { "version": "v1", "created": "Wed, 20 Apr 2011 02:31:07 GMT" } ]
1,303,344,000,000
[ [ "Drescher", "Christian", "" ], [ "Walsh", "Toby", "" ] ]
1104.4053
Maurizio Lenzerini
Maurizio Lenzerini, Domenico Fabio Savo
On the evolution of the instance level of DL-lite knowledge bases
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent papers address the issue of updating the instance level of knowledge bases expressed in Description Logic following a model-based approach. One of the outcomes of these papers is that the result of updating a knowledge base K is generally not expressible in the Description Logic used to express K. In this paper we introduce a formula-based approach to this problem, by revisiting some research work on formula-based updates developed in the '80s, in particular the WIDTIO (When In Doubt, Throw It Out) approach. We show that our operator enjoys desirable properties, including that both insertions and deletions according to such an operator can be expressed in the DL used for the original KB. Also, we present polynomial time algorithms for the evolution of the instance level of knowledge bases expressed in the most expressive Description Logics of the DL-lite family.
[ { "version": "v1", "created": "Wed, 20 Apr 2011 15:19:14 GMT" } ]
1,303,344,000,000
[ [ "Lenzerini", "Maurizio", "" ], [ "Savo", "Domenico Fabio", "" ] ]
1104.4153
Salah Rifai
Salah Rifai, Xavier Muller, Xavier Glorot, Gregoire Mesnil, Yoshua Bengio and Pascal Vincent
Learning invariant features through local space contraction
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction which in turn yields robust features on the activation layer. Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize a MLP, we achieve state of the art classification error on a range of datasets, surpassing other methods of pre-training.
[ { "version": "v1", "created": "Thu, 21 Apr 2011 01:39:25 GMT" } ]
1,303,430,400,000
[ [ "Rifai", "Salah", "" ], [ "Muller", "Xavier", "" ], [ "Glorot", "Xavier", "" ], [ "Mesnil", "Gregoire", "" ], [ "Bengio", "Yoshua", "" ], [ "Vincent", "Pascal", "" ] ]
1104.4290
Sebastian Ordyniak
Eun Jung Kim, Sebastian Ordyniak, Stefan Szeider
Algorithms and Complexity Results for Persuasive Argumentation
null
Artificial Intelligence 175 (2011) pp. 1722-1736
10.1016/j.artint.2011.03.001
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The study of arguments as abstract entities and their interaction as introduced by Dung (Artificial Intelligence 77, 1995) has become one of the most active research branches within Artificial Intelligence and Reasoning. A main issue for abstract argumentation systems is the selection of acceptable sets of arguments. Value-based argumentation, as introduced by Bench-Capon (J. Logic Comput. 13, 2003), extends Dung's framework. It takes into account the relative strength of arguments with respect to some ranking representing an audience: an argument is subjectively accepted if it is accepted with respect to some audience, and it is objectively accepted if it is accepted with respect to all audiences. Deciding whether an argument is subjectively or objectively accepted, respectively, are computationally intractable problems. In fact, the problems remain intractable under structural restrictions that render the main computational problems for non-value-based argumentation systems tractable. In this paper we identify nontrivial classes of value-based argumentation systems for which the acceptance problems are polynomial-time tractable. The classes are defined by means of structural restrictions in terms of the underlying graphical structure of the value-based system. Furthermore we show that the acceptance problems are intractable for two classes of value-based systems that were conjectured to be tractable by Dunne (Artificial Intelligence 171, 2007).
[ { "version": "v1", "created": "Thu, 21 Apr 2011 15:22:36 GMT" } ]
1,305,504,000,000
[ [ "Kim", "Eun Jung", "" ], [ "Ordyniak", "Sebastian", "" ], [ "Szeider", "Stefan", "" ] ]
1104.4910
Minghao Yin
Jian Gao, Minghao Yin, Junping Zhou
Hybrid Tractable Classes of Binary Quantified Constraint Satisfaction Problems
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate the hybrid tractability of binary Quantified Constraint Satisfaction Problems (QCSPs). First, a basic tractable class of binary QCSPs is identified by using the broken-triangle property. In this class, the variable ordering for the broken-triangle property must be the same as that in the prefix of the QCSP. Second, we break this restriction to allow existentially quantified variables to be shifted within or out of their blocks, and thus identify some novel tractable classes by introducing the broken-angle property. Finally, we identify a more generalized tractable class, i.e., the min-of-max extendable class for QCSPs.
[ { "version": "v1", "created": "Tue, 26 Apr 2011 13:08:48 GMT" } ]
1,303,862,400,000
[ [ "Gao", "Jian", "" ], [ "Yin", "Minghao", "" ], [ "Zhou", "Junping", "" ] ]
1104.5069
Tuan Nguyen
Tuan Nguyen and Subbarao Kambhampati and Minh Do
Synthesizing Robust Plans under Incomplete Domain Models
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In such cases, the goal should be to generate plans that are robust with respect to any known incompleteness of the domain. In this paper, we first introduce annotations expressing the knowledge of the domain incompleteness, and formalize the notion of plan robustness with respect to an incomplete domain model. We then propose an approach to compiling the problem of finding robust plans to the conformant probabilistic planning problem. We present experimental results with Probabilistic-FF, a state-of-the-art planner, showing the promise of our approach.
[ { "version": "v1", "created": "Wed, 27 Apr 2011 04:05:19 GMT" } ]
1,303,948,800,000
[ [ "Nguyen", "Tuan", "" ], [ "Kambhampati", "Subbarao", "" ], [ "Do", "Minh", "" ] ]
1105.0288
Martin Slota
Martin Slota and Jo\~ao Leite and Terrance Swift
Splitting and Updating Hybrid Knowledge Bases (Extended Version)
64 pages; extended version of the paper accepted for ICLP 2011
Theory and Practice of Logic Programming, 11(4-5), 801-819, 2011
10.1017/S1471068411000317
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the years, nonmonotonic rules have proven to be a very expressive and useful knowledge representation paradigm. They have recently been used to complement the expressive power of Description Logics (DLs), leading to the study of integrative formal frameworks, generally referred to as hybrid knowledge bases, where both DL axioms and rules can be used to represent knowledge. The need to use these hybrid knowledge bases in dynamic domains has called for the development of update operators, which, given the substantially different way Description Logics and rules are usually updated, has turned out to be an extremely difficult task. In [SL10], a first step towards addressing this problem was taken, and an update operator for hybrid knowledge bases was proposed. Despite its significance -- not only for being the first update operator for hybrid knowledge bases in the literature, but also because it has some applications -- this operator was defined for a restricted class of problems where only the ABox was allowed to change, which considerably diminished its applicability. Many applications that use hybrid knowledge bases in dynamic scenarios require both DL axioms and rules to be updated. In this paper, motivated by real world applications, we introduce an update operator for a large class of hybrid knowledge bases where both the DL component as well as the rule component are allowed to dynamically change. We introduce splitting sequences and splitting theorem for hybrid knowledge bases, use them to define a modular update semantics, investigate its basic properties, and illustrate its use on a realistic example about cargo imports.
[ { "version": "v1", "created": "Mon, 2 May 2011 10:12:59 GMT" } ]
1,311,724,800,000
[ [ "Slota", "Martin", "" ], [ "Leite", "João", "" ], [ "Swift", "Terrance", "" ] ]
1105.0650
Miroslaw Truszczynski
Yuliya Lierler and Miroslaw Truszczynski
Transition Systems for Model Generators - A Unifying Approach
30 pages; Accepted for presentation at ICLP 2011 and for publication in Theory and Practice of Logic Programming; contains the appendix with proofs
Theory and Practice of Logic Programming, volume 11, issue 4-5, pp 629 - 646, 2011
10.1017/S1471068411000214
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental task for propositional logic is to compute models of propositional formulas. Programs developed for this task are called satisfiability solvers. We show that transition systems introduced by Nieuwenhuis, Oliveras, and Tinelli to model and analyze satisfiability solvers can be adapted for solvers developed for two other propositional formalisms: logic programming under the answer-set semantics, and the logic PC(ID). We show that in each case the task of computing models can be seen as "satisfiability modulo answer-set programming," where the goal is to find a model of a theory that also is an answer set of a certain program. The unifying perspective we develop shows, in particular, that the solvers CLASP and MINISATID are closely related despite being developed for different formalisms, the former for answer-set programming and the latter for the logic PC(ID).
[ { "version": "v1", "created": "Tue, 3 May 2011 18:29:58 GMT" } ]
1,412,726,400,000
[ [ "Lierler", "Yuliya", "" ], [ "Truszczynski", "Miroslaw", "" ] ]
1105.0974
Seyed Salim Tabatabaei
Seyed Salim Tabatabaei and Mark Coates and Michael Rabbat
GANC: Greedy Agglomerative Normalized Cut
Submitted to Pattern Recognition. 27 pages, 5 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a graph clustering algorithm that aims to minimize the normalized cut criterion and has a model order selection procedure. The performance of the proposed algorithm is comparable to spectral approaches in terms of minimizing normalized cut. However, unlike spectral approaches, the proposed algorithm scales to graphs with millions of nodes and edges. The algorithm consists of three components that are processed sequentially: a greedy agglomerative hierarchical clustering procedure, model order selection, and a local refinement. For a graph of n nodes and O(n) edges, the computational complexity of the algorithm is O(n log^2 n), a major improvement over the O(n^3) complexity of spectral methods. Experiments are performed on real and synthetic networks to demonstrate the scalability of the proposed approach, the effectiveness of the model order selection procedure, and the performance of the proposed algorithm in terms of minimizing the normalized cut metric.
[ { "version": "v1", "created": "Thu, 5 May 2011 04:55:53 GMT" } ]
1,304,640,000,000
[ [ "Tabatabaei", "Seyed Salim", "" ], [ "Coates", "Mark", "" ], [ "Rabbat", "Michael", "" ] ]
1105.1247
Manojit Chattopadhyay Mr.
Manojit Chattopadhyay (Pailan College of Management & Technology), Surajit Chattopadhyay (Pailan College of Management & Technology), Pranab K. Dan (West Bengal University of Technology)
Machine-Part cell formation through visual decipherable clustering of Self Organizing Map
18 pages,3 table, 4 figures
The International Journal of Advanced Manufacturing Technology, 2011, Volume 52, Numbers 9-12, 1019-1030
10.1007/s00170-010-2802-4
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine-part cell formation is used in cellular manufacturing in order to process a large product variety with high quality and lower work-in-process levels, reducing manufacturing lead-time and customer response time while retaining flexibility for new products. This paper presents a new and novel approach for obtaining machine cells and part families. In cellular manufacturing, the fundamental problem is the formation of part families and machine cells. The present paper applies the Self-Organizing Map (SOM), an unsupervised learning algorithm from Artificial Intelligence, as a visually decipherable clustering tool for machine-part cell formation. The objective of the paper is to cluster the binary machine-part matrix through visually decipherable clusters of SOM color-coding and labelling via the SOM map nodes, in such a way that the part families are processed in those machine cells. The U-matrix, component plane, principal component projection, scatter plot and histogram of the SOM are reported in the present work for the successful visualization of the machine-part cell formation. Computational results with the proposed algorithm on a set of group technology problems available in the literature are also presented. The proposed SOM approach produced solutions with a grouping efficacy that is at least as good as any results earlier reported in the literature, improved the grouping efficacy for 70% of the problems, and was found to be immensely useful to both industry practitioners and researchers.
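For readers unfamiliar with the technique, here is a minimal numpy sketch of training a SOM on a small binary machine-part incidence matrix; the toy matrix, lattice size and learning schedule are assumptions for illustration, not the authors' actual setup.

```python
import numpy as np

# Hypothetical 5-machine x 6-part binary incidence matrix (1 = part visits
# machine). This toy data is an assumption, not the paper's benchmark set.
data = np.array([
    [1, 0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 1],
], dtype=float)

rng = np.random.default_rng(0)
grid_w, grid_h = 3, 3                       # small SOM lattice
weights = rng.random((grid_w * grid_h, data.shape[1]))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)],
                  dtype=float)

epochs = 200
for t in range(epochs):
    lr = 0.5 * (1 - t / epochs)             # linearly decaying learning rate
    sigma = 1.5 * (1 - t / epochs) + 0.5    # shrinking neighborhood radius
    for x in data[rng.permutation(len(data))]:
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # lattice distances
        h = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian neighborhood
        weights += lr * h[:, None] * (x - weights)

# Machines mapping to the same (or adjacent) nodes suggest a machine cell.
for m, x in enumerate(data):
    print(f"machine {m} -> node {np.argmin(((weights - x) ** 2).sum(axis=1))}")
```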
[ { "version": "v1", "created": "Fri, 6 May 2011 09:27:49 GMT" } ]
1,304,899,200,000
[ [ "Chattopadhyay", "Manojit", "", "Pailan College of Management & Technology" ], [ "Chattopadhyay", "Surajit", "", "Pailan College of Management & Technology" ], [ "Dan", "Pranab K.", "", "West Bengal University of Technology" ] ]
1105.1436
Jingchao Chen
Jingchao Chen
Solving Rubik's Cube Using SAT Solvers
13 pages
SPA 2011: SAT for Practical Applications
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
Rubik's Cube is an easily-understood puzzle, originally called the "magic cube". It is a well-known planning problem that has been studied for a long time, yet many simple properties remain unknown. This paper studies whether modern SAT solvers are applicable to this puzzle. To the best of our knowledge, we are the first to translate Rubik's Cube to a SAT problem. To reduce the number of variables and clauses needed for the encoding, we replace a naive approach of 6 Boolean variables to represent each color on each facelet with a new approach of 3 or 2 Boolean variables. In order to solve Rubik's Cube quickly, we replace the direct encoding of 18 turns with the layer encoding of 18-subtype turns based on 6-type turns. To speed up the solving further, we encode some properties of the two-phase algorithm as an additional constraint, and restrict some move sequences by adding some constraint clauses. Using only an efficient encoding cannot solve this puzzle. For this reason, we improve the existing SAT solvers, and develop a new SAT solver based on PrecoSAT, though it is suited only for Rubik's Cube. The new SAT solver replaces the lookahead solving strategy with an ALO (\emph{at-least-one}) solving strategy, and decomposes the original problem into sub-problems, each of which is solved by PrecoSAT. The empirical results demonstrate that both our SAT translation and our new solving technique are efficient. Without the efficient SAT encoding and the new solving technique, Rubik's Cube still cannot be solved by any SAT solver. Using the improved SAT solver, we can always find a solution of length 20 in a reasonable time. Our solver is slower than Kociemba's algorithm, which uses lookup tables, but it does not require a huge lookup table.
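To make the variable-reduction idea concrete, the sketch below shows one way to encode a facelet's 6 possible colors with 3 Boolean variables, generating blocking clauses for the two unused bit patterns. This is a plausible binary encoding written for illustration; the paper's exact encoding may differ.

```python
from itertools import product

# Encode one facelet's color (6 possibilities) with 3 Boolean variables
# b2 b1 b0: colors 0..5 use patterns 000..101, so patterns 110 and 111
# must be forbidden. Illustrative only, not necessarily the paper's encoding.

def color_clauses(var_ids):
    """var_ids: three DIMACS variable ids for one facelet.
    Returns clauses (lists of signed ints) blocking the 2 unused patterns."""
    clauses = []
    for bits in product([0, 1], repeat=3):
        value = bits[0] * 4 + bits[1] * 2 + bits[2]
        if value >= 6:  # forbid patterns 110 (=6) and 111 (=7)
            # A blocking clause is the negation of the forbidden assignment.
            clauses.append([v if b == 0 else -v for v, b in zip(var_ids, bits)])
    return clauses

print(color_clauses([1, 2, 3]))  # [[-1, -2, 3], [-1, -2, -3]]
```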
[ { "version": "v1", "created": "Sat, 7 May 2011 12:07:49 GMT" } ]
1,304,985,600,000
[ [ "Chen", "Jingchao", "" ] ]
1105.1929
Fabian Suchanek
Fabian Suchanek (INRIA Saclay - Ile de France), Aparna Varde, Richi Nayak (QUT), Pierre Senellart
The Hidden Web, XML and Semantic Web: A Scientific Data Management Perspective
EDBT - Tutorial (2011)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The World Wide Web no longer consists just of HTML pages. Our work sheds light on a number of trends on the Internet that go beyond simple Web pages. The hidden Web provides a wealth of data in semi-structured form, accessible through Web forms and Web services. These services, as well as numerous other applications on the Web, commonly use XML, the eXtensible Markup Language. XML has become the lingua franca of the Internet that allows customized markups to be defined for specific domains. On top of XML, the Semantic Web grows as a common structured data source. In this work, we first explain each of these developments in detail. Using real-world examples from scientific domains of great interest today, we then demonstrate how these new developments can assist in the management, harvesting, and organization of data on the Web. Along the way, we also illustrate the current research avenues in these domains. We believe that this effort would help bridge multiple database tracks, thereby attracting researchers with a view to extending database technology.
[ { "version": "v1", "created": "Tue, 10 May 2011 12:33:41 GMT" } ]
1,305,072,000,000
[ [ "Suchanek", "Fabian", "", "INRIA Saclay - Ile de France" ], [ "Varde", "Aparna", "", "QUT" ], [ "Nayak", "Richi", "", "QUT" ], [ "Senellart", "Pierre", "" ] ]
1105.2902
Zahra Forootan Jahromi
Zahra Forootan Jahromi, Amir Rajabzadeh and Ali Reza Manashty
A Multi-Purpose Scenario-based Simulator for Smart House Environments
null
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 1, January 2011
null
null
cs.AI
http://creativecommons.org/licenses/by/3.0/
Developing smart house systems has been a great challenge for researchers and engineers in this area, because the implementation and evaluation of these systems is costly and very time-consuming. Testing a designed smart house before actually building it is considered an obstacle towards an efficient smart house project, because of the variety of sensors, home appliances and devices available for a real smart environment. In this paper, we present the design and implementation of a multi-purpose smart house simulation system for designing and simulating all aspects of a smart house environment. This simulator provides the ability to design the house plan and different virtual sensors and appliances in a two-dimensional model of the virtual house environment. This simulator can connect to any external smart house remote-controlling system, making the evaluation of such systems much easier than before. It also supports the detailed addition of new emerging sensors and devices, helping to maintain its compatibility with future simulation needs. Scenarios can also be defined for testing various possible combinations of device states, so different criteria and variables can be evaluated simply, without the need to experiment on a real environment.
[ { "version": "v1", "created": "Sat, 14 May 2011 15:11:00 GMT" } ]
1,305,590,400,000
[ [ "Jahromi", "Zahra Forootan", "" ], [ "Rajabzadeh", "Amir", "" ], [ "Manashty", "Ali Reza", "" ] ]
1105.3486
Ladislau B\"ol\"oni
Ladislau B\"ol\"oni
Xapagy: a cognitive architecture for narrative reasoning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the Xapagy cognitive architecture: a software system designed to perform narrative reasoning. The architecture has been designed from scratch to model and mimic the activities performed by humans when witnessing, reading, recalling, narrating and talking about stories.
[ { "version": "v1", "created": "Tue, 17 May 2011 20:28:31 GMT" } ]
1,426,723,200,000
[ [ "Bölöni", "Ladislau", "" ] ]
1105.3635
M. C. Garrido
M. C. Garrido, P. E. Lopez-de-Teruel, A. Ruiz
Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians
null
Journal Of Artificial Intelligence Research, Volume 9, pages 167-217, 1998
10.1613/jair.533
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be directly obtained without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is consistently handled in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
[ { "version": "v1", "created": "Wed, 18 May 2011 14:06:49 GMT" } ]
1,305,763,200,000
[ [ "Garrido", "M. C.", "" ], [ "Lopez-de-Teruel", "P. E.", "" ], [ "Ruiz", "A.", "" ] ]
1105.3821
Peter de Blanc
Peter de Blanc
Ontological Crises in Artificial Agents' Value Systems
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decision-theoretic agents predict and evaluate the results of their actions using a model, or ontology, of their environment. An agent's goal, or utility function, may also be specified in terms of the states of, or entities within, its ontology. If the agent may upgrade or replace its ontology, it faces a crisis: the agent's original goal may not be well-defined with respect to its new ontology. This crisis must be resolved before the agent can make plans towards achieving its goals. We discuss in this paper which sorts of agents will undergo ontological crises and why we may want to create such agents. We present some concrete examples, and argue that a well-defined procedure for resolving ontological crises is needed. We point to some possible approaches to solving this problem, and evaluate these methods on our examples.
[ { "version": "v1", "created": "Thu, 19 May 2011 09:32:46 GMT" } ]
1,305,849,600,000
[ [ "de Blanc", "Peter", "" ] ]
1105.3833
Eliezer Lozinskii
Eliezer L. Lozinskii
Typical models: minimizing false beliefs
null
Journal of Experimental & Theoretical Artificial Intelligence, vol. 22, no.4, December 2010, 321-340
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A knowledge system S describing a part of the real world does not, in general, contain complete information. Reasoning with incomplete information is prone to errors, since any belief derived from S may be false in the present state of the world. A false belief may suggest wrong decisions and lead to harmful actions, so an important goal is to make false beliefs as unlikely as possible. This work introduces the notions of "typical atoms" and "typical models", and shows that reasoning with typical models minimizes the expected number of false beliefs over all ways of using incomplete information. Various properties of typical models are studied, in particular, correctness and stability of beliefs suggested by typical models, and their connection to oblivious reasoning.
[ { "version": "v1", "created": "Thu, 19 May 2011 10:00:39 GMT" } ]
1,305,849,600,000
[ [ "Lozinskii", "Eliezer L.", "" ] ]
1105.5440
J. M. Ahuactzin
J. M. Ahuactzin, P. Bessiere, E. Mazer
The Ariadne's Clew Algorithm
null
Journal Of Artificial Intelligence Research, Volume 9, pages 295-316, 1998
10.1613/jair.468
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new approach to path planning, called the "Ariadne's clew algorithm". It is designed to find paths in high-dimensional continuous spaces and applies to robots with many degrees of freedom in static, as well as dynamic environments -- ones where obstacles may move. The Ariadne's clew algorithm comprises two sub-algorithms, called Search and Explore, applied in an interleaved manner. Explore builds a representation of the accessible space while Search looks for the target. Both are posed as optimization problems. We describe a real implementation of the algorithm to plan paths for a six degrees of freedom arm in a dynamic environment where another six degrees of freedom arm is used as a moving obstacle. Experimental results show that a path is found in about one second without any pre-processing.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:44:34 GMT" } ]
1,306,713,600,000
[ [ "Ahuactzin", "J. M.", "" ], [ "Bessiere", "P.", "" ], [ "Mazer", "E.", "" ] ]
1105.5441
C. Backstrom
C. Backstrom
Computational Aspects of Reordering Plans
null
Journal Of Artificial Intelligence Research, Volume 9, pages 99-137, 1998
10.1613/jair.477
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article studies the problem of modifying the action ordering of a plan in order to optimise the plan according to various criteria. One of these criteria is to make a plan less constrained and the other is to minimize its parallel execution time. Three candidate definitions are proposed for the first of these criteria, constituting a sequence of increasing optimality guarantees. Two of these are based on deordering plans, which means that ordering relations may only be removed, not added, while the third one uses reordering, where arbitrary modifications to the ordering are allowed. It is shown that only the weakest one of the three criteria is tractable to achieve, the other two being NP-hard and even difficult to approximate. Similarly, optimising the parallel execution time of a plan is studied both for deordering and reordering of plans. In the general case, both of these computations are NP-hard. However, it is shown that optimal deorderings can be computed in polynomial time for a class of planning languages based on the notions of producers, consumers and threats, which includes most of the commonly used planning languages. Computing optimal reorderings can potentially lead to even faster parallel executions, but this problem remains NP-hard and difficult to approximate even under quite severe restrictions.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:44:57 GMT" } ]
1,306,713,600,000
[ [ "Backstrom", "C.", "" ] ]
1105.5442
O. Ledeniov
O. Ledeniov, S. Markovitch
The Divide-and-Conquer Subgoal-Ordering Algorithm for Speeding up Logic Inference
null
Journal Of Artificial Intelligence Research, Volume 9, pages 37-97, 1998
10.1613/jair.509
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is common to view programs as a combination of logic and control: the logic part defines what the program must do, the control part -- how to do it. The Logic Programming paradigm was developed with the intention of separating the logic from the control. Recently, extensive research has been conducted on automatic generation of control for logic programs. Only a few of these works considered the issue of automatic generation of control for improving the efficiency of logic programs. In this paper we present a novel algorithm for automatically finding lowest-cost subgoal orderings. The algorithm works using the divide-and-conquer strategy. The given set of subgoals is partitioned into smaller sets, based on co-occurrence of free variables. The subsets are ordered recursively and merged, yielding a provably optimal order. We experimentally demonstrate the utility of the algorithm by testing it in several domains, and discuss the possibilities of its cooperation with other existing methods.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:45:23 GMT" } ]
1,306,713,600,000
[ [ "Ledeniov", "O.", "" ], [ "Markovitch", "S.", "" ] ]
1105.5443
J. Culberson
J. Culberson, B. Vandegriend
The Gn,m Phase Transition is Not Hard for the Hamiltonian Cycle Problem
null
Journal Of Artificial Intelligence Research, Volume 9, pages 219-245, 1998
10.1613/jair.512
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using an improved backtrack algorithm with sophisticated pruning techniques, we revise previous observations correlating a high frequency of hard-to-solve Hamiltonian Cycle instances with the Gn,m phase transition between Hamiltonicity and non-Hamiltonicity. Instead, all tested graphs of 100 to 1500 vertices are easily solved. When we artificially restrict the degree sequence with a bounded maximum degree, there is some increase in difficulty, but the frequency of hard graphs is still low. When we consider more regular graphs based on a generalization of knight's tours, we observe frequent instances of really hard graphs, but on these the average degree is bounded by a constant. We design a set of graphs with a feature our algorithm is unable to detect, which are therefore very hard for our algorithm; in these we can vary the average degree from O(1) to O(n). We have so far found no class of graphs correlated with the Gn,m phase transition which asymptotically produces a high frequency of hard instances.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:45:52 GMT" } ]
1,306,713,600,000
[ [ "Culberson", "J.", "" ], [ "Vandegriend", "B.", "" ] ]
1105.5444
P. Resnik
P. Resnik
Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language
null
Journal Of Artificial Intelligence Research, Volume 11, pages 95-130, 1999
10.1613/jair.514
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents a measure of semantic similarity in an IS-A taxonomy based on the notion of shared information content. Experimental evaluation against a benchmark set of human similarity judgments demonstrates that the measure performs better than the traditional edge-counting approach. The article presents algorithms that take advantage of taxonomic similarity in resolving syntactic and semantic ambiguity, along with experimental results demonstrating their effectiveness.
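A minimal sketch of the information-content similarity measure on a toy IS-A taxonomy is given below; the taxonomy and frequency counts are invented for illustration and are not Resnik's WordNet statistics.

```python
import math

# Toy IS-A taxonomy and corpus frequencies; all numbers here are assumptions
# made up for illustration.
parent = {"dog": "mammal", "cat": "mammal", "mammal": "animal",
          "trout": "fish", "fish": "animal", "animal": None}
freq = {"dog": 30, "cat": 20, "trout": 10}
total = sum(freq.values())

def ancestors(c):
    """Strict ancestors of concept c in the IS-A hierarchy."""
    out = set()
    while parent[c] is not None:
        c = parent[c]
        out.add(c)
    return out

def ic(c):
    # Information content -log p(c), where p(c) is the probability mass of
    # the subtree rooted at c (the concept plus all its descendants).
    mass = sum(f for w, f in freq.items() if c == w or c in ancestors(w))
    return math.log(total / mass)

def resnik_sim(c1, c2):
    # Similarity = IC of the most informative (lowest) common subsumer.
    common = (ancestors(c1) | {c1}) & (ancestors(c2) | {c2})
    return max(ic(c) for c in common)

print(resnik_sim("dog", "cat"))    # IC("mammal") = log(60/50) ~ 0.18
print(resnik_sim("dog", "trout"))  # IC("animal") = log(60/60) = 0.0
```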
[ { "version": "v1", "created": "Fri, 27 May 2011 01:46:05 GMT" } ]
1,306,713,600,000
[ [ "Resnik", "P.", "" ] ]
1105.5446
A. Artale
A. Artale, E. Franconi
A Temporal Description Logic for Reasoning about Actions and Plans
null
Journal Of Artificial Intelligence Research, Volume 9, pages 463-506, 1998
10.1613/jair.516
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A class of interval-based temporal languages for uniformly representing and reasoning about actions and plans is presented. Actions are represented by describing what is true while the action itself is occurring, and plans are constructed by temporally relating actions and world states. The temporal languages are members of the family of Description Logics, which are characterized by high expressivity combined with good computational properties. The subsumption problem for a class of temporal Description Logics is investigated and sound and complete decision procedures are given. The basic language TL-F is considered first: it is the composition of a temporal logic TL -- able to express interval temporal networks -- together with the non-temporal logic F -- a Feature Description Logic. It is proven that subsumption in this language is an NP-complete problem. Then it is shown how to reason with the more expressive languages TLU-FU and TL-ALCF. The former adds disjunction both at the temporal and non-temporal sides of the language, the latter extends the non-temporal side with set-valued features (i.e., roles) and a propositionally complete language.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:46:39 GMT" } ]
1,306,713,600,000
[ [ "Artale", "A.", "" ], [ "Franconi", "E.", "" ] ]
1105.5447
D. J. Cook
D. J. Cook, R. C. Varnell
Adaptive Parallel Iterative Deepening Search
null
Journal Of Artificial Intelligence Research, Volume 9, pages 139-165, 1998
10.1613/jair.518
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many of the artificial intelligence techniques developed to date rely on heuristic search through large spaces. Unfortunately, the size of these spaces and the corresponding computational effort reduce the applicability of otherwise novel and effective algorithms. A number of parallel and distributed approaches to search have considerably improved the performance of the search process. Our goal is to develop an architecture that automatically selects parallel search strategies for optimal performance on a variety of search problems. In this paper we describe one such architecture realized in the Eureka system, which combines the benefits of many different approaches to parallel heuristic search. Through empirical and theoretical analyses we observe that features of the problem space directly affect the choice of optimal parallel search strategy. We then employ machine learning techniques to select the optimal parallel search strategy for a given problem space. When a new search task is input to the system, Eureka uses features describing the search space and the chosen architecture to automatically select the appropriate search strategy. Eureka has been tested on a MIMD parallel processor, a distributed network of workstations, and a single workstation using multithreading. Results generated from fifteen-puzzle problems, robot arm motion problems, artificial search spaces, and planning problems indicate that Eureka outperforms any of the tested strategies used exclusively for all problem instances and is able to greatly reduce the search time for these applications.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:47:18 GMT" } ]
1,306,713,600,000
[ [ "Cook", "D. J.", "" ], [ "Varnell", "R. C.", "" ] ]
1105.5448
E. Davis
E. Davis
Order of Magnitude Comparisons of Distance
null
Journal Of Artificial Intelligence Research, Volume 10, pages 1-38, 1999
10.1613/jair.520
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Order of magnitude reasoning - reasoning by rough comparisons of the sizes of quantities - is often called 'back of the envelope calculation', with the implication that the calculations are quick though approximate. This paper exhibits an interesting class of constraint sets in which order of magnitude reasoning is demonstrably fast. Specifically, we present a polynomial-time algorithm that can solve a set of constraints of the form 'Points a and b are much closer together than points c and d.' We prove that this algorithm can be applied if `much closer together' is interpreted either as referring to an infinite difference in scale or as referring to a finite difference in scale, as long as the difference in scale is greater than the number of variables in the constraint set. We also prove that the first-order theory over such constraints is decidable.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:47:48 GMT" } ]
1,306,713,600,000
[ [ "Davis", "E.", "" ] ]
1105.5449
G. Di Caro
G. Di Caro, M. Dorigo
AntNet: Distributed Stigmergetic Control for Communications Networks
null
Journal Of Artificial Intelligence Research, Volume 9, pages 317-365, 1998
10.1613/jair.530
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces AntNet, a novel approach to the adaptive learning of routing tables in communications networks. AntNet is a distributed, mobile agents based Monte Carlo system that was inspired by recent work on the ant colony metaphor for solving optimization problems. AntNet's agents concurrently explore the network and exchange collected information. The communication among the agents is indirect and asynchronous, mediated by the network itself. This form of communication is typical of social insects and is called stigmergy. We compare our algorithm with six state-of-the-art routing algorithms coming from the telecommunications and machine learning fields. The algorithms' performance is evaluated over a set of realistic testbeds. We run many experiments over real and artificial IP datagram networks with an increasing number of nodes and under several paradigmatic spatial and temporal traffic distributions. The results are very encouraging: AntNet showed superior performance under all the experimental conditions with respect to its competitors. We analyze the main characteristics of the algorithm and try to explain the reasons for its superiority.
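The sketch below illustrates the kind of stigmergetic routing-table update a backward ant performs: the probability of the link it used is reinforced and the alternatives are renormalized downward. The reinforcement value and topology here are illustrative assumptions; in the paper the reinforcement is derived from observed trip times.

```python
# Minimal sketch of a stigmergetic routing-table update in the spirit of
# AntNet. The fixed reinforcement r is a simplified stand-in for the paper's
# trip-time-based rule; the topology and numbers are illustrative.

def reinforce(table, dest, neighbor, r):
    """Increase the probability of `neighbor` for `dest` by reinforcement r,
    renormalizing the other entries so the row still sums to 1."""
    probs = table[dest]
    for n in probs:
        if n == neighbor:
            probs[n] += r * (1 - probs[n])  # push the used link up
        else:
            probs[n] -= r * probs[n]        # push the alternatives down

# Routing table at node A for destination D over neighbors B and C.
table = {"D": {"B": 0.5, "C": 0.5}}
reinforce(table, "D", "B", r=0.3)  # a backward ant reports a good trip via B
print(table)                       # e.g. {'D': {'B': 0.65, 'C': 0.35}}
```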
[ { "version": "v1", "created": "Fri, 27 May 2011 01:48:39 GMT" } ]
1,306,713,600,000
[ [ "Di Caro", "G.", "" ], [ "Dorigo", "M.", "" ] ]
1105.5450
J. Y. Halpern
J. Y. Halpern
A Counter Example to Theorems of Cox and Fine
null
Journal Of Artificial Intelligence Research, Volume 10, pages 67-85, 1999
10.1613/jair.536
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:49:04 GMT" } ]
1,306,713,600,000
[ [ "Halpern", "J. Y.", "" ] ]
1105.5451
M. Fox
M. Fox, D. Long
The Automatic Inference of State Invariants in TIM
null
Journal Of Artificial Intelligence Research, Volume 9, pages 367-421, 1998
10.1613/jair.544
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As planning is applied to larger and richer domains the effort involved in constructing domain descriptions increases and becomes a significant burden on the human application designer. If general planners are to be applied successfully to large and complex domains it is necessary to provide the domain designer with some assistance in building correctly encoded domains. One way of doing this is to provide domain-independent techniques for extracting, from a domain description, knowledge that is implicit in that description and that can assist domain designers in debugging domain descriptions. This knowledge can also be exploited to improve the performance of planners: several researchers have explored the potential of state invariants in speeding up the performance of domain-independent planners. In this paper we describe a process by which state invariants can be extracted from the automatically inferred type structure of a domain. These techniques are being developed for exploitation by STAN, a Graphplan based planner that employs state analysis techniques to enhance its performance.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:49:44 GMT" } ]
1,306,713,600,000
[ [ "Fox", "M.", "" ], [ "Long", "D.", "" ] ]
1105.5452
D. Calvanese
D. Calvanese, M. Lenzerini, D. Nardi
Unifying Class-Based Representation Formalisms
null
Journal Of Artificial Intelligence Research, Volume 11, pages 199-240, 1999
10.1613/jair.548
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The notion of class is ubiquitous in computer science and is central in many formalisms for the representation of structured knowledge used both in knowledge representation and in databases. In this paper we study the basic issues underlying such representation formalisms and single out both their common characteristics and their distinguishing features. Such investigation leads us to propose a unifying framework in which we are able to capture the fundamental aspects of several representation languages used in different contexts. The proposed formalism is expressed in the style of description logics, which have been introduced in knowledge representation as a means to provide a semantically well-founded basis for the structural aspects of knowledge representation systems. The description logic considered in this paper is a subset of first order logic with nice computational characteristics. It is quite expressive and features a novel combination of constructs that has not been studied before. The distinguishing constructs are number restrictions, which generalize existence and functional dependencies, inverse roles, which allow one to refer to the inverse of a relationship, and possibly cyclic assertions, which are necessary for capturing real world domains. We are able to show that it is precisely such combination of constructs that makes our logic powerful enough to model the essential set of features for defining class structures that are common to frame systems, object-oriented database languages, and semantic data models. As a consequence of the established correspondences, several significant extensions of each of the above formalisms become available. The high expressiveness of the logic we propose and the need for capturing the reasoning in different contexts force us to distinguish between unrestricted and finite model reasoning. A notable feature of our proposal is that reasoning in both cases is decidable. We argue that, by virtue of the high expressive power and of the associated reasoning capabilities on both unrestricted and finite models, our logic provides a common core for class-based representation formalisms.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:49:59 GMT" } ]
1,306,713,600,000
[ [ "Calvanese", "D.", "" ], [ "Lenzerini", "M.", "" ], [ "Nardi", "D.", "" ] ]
1105.5453
J. Rintanen
J. Rintanen
Complexity of Prioritized Default Logics
null
Journal Of Artificial Intelligence Research, Volume 9, pages 423-461, 1998
10.1613/jair.554
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In default reasoning, usually not all possible ways of resolving conflicts between default rules are acceptable. Criteria expressing acceptable ways of resolving the conflicts may be hardwired in the inference mechanism, for example specificity in inheritance reasoning can be handled this way, or they may be given abstractly as an ordering on the default rules. In this article we investigate formalizations of the latter approach in Reiter's default logic. Our goal is to analyze and compare the computational properties of three such formalizations in terms of their computational complexity: the prioritized default logics of Baader and Hollunder, and Brewka, and a prioritized default logic that is based on lexicographic comparison. The analysis locates the propositional variants of these logics on the second and third levels of the polynomial hierarchy, and identifies the boundary between tractable and intractable inference for restricted classes of prioritized default theories.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:50:12 GMT" } ]
1,306,713,600,000
[ [ "Rintanen", "J.", "" ] ]
1105.5454
D. P. Clements
D. P. Clements, D. E. Joslin
Squeaky Wheel Optimization
null
Journal Of Artificial Intelligence Research, Volume 10, pages 353-373, 1999
10.1613/jair.561
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a general approach to optimization which we term `Squeaky Wheel' Optimization (SWO). In SWO, a greedy algorithm is used to construct a solution which is then analyzed to find the trouble spots, i.e., those elements that, if improved, are likely to improve the objective function score. The results of the analysis are used to generate new priorities that determine the order in which the greedy algorithm constructs the next solution. This Construct/Analyze/Prioritize cycle continues until some limit is reached, or an acceptable solution is found. SWO can be viewed as operating on two search spaces: solutions and prioritizations. Successive solutions are only indirectly related, via the re-prioritization that results from analyzing the prior solution. Similarly, successive prioritizations are generated by constructing and analyzing solutions. This `coupled search' has some interesting properties, which we discuss. We report encouraging experimental results on two domains, scheduling problems that arise in fiber-optic cable manufacturing, and graph coloring problems. The fact that these domains are very different supports our claim that SWO is a general technique for optimization.
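A minimal sketch of the Construct/Analyze/Prioritize cycle on a toy single-machine tardiness problem might look as follows; the greedy constructor, the analyzer and the priority bump are simplified assumptions, not the paper's domain-specific components.

```python
import random

# Toy one-machine scheduling problem: minimize total tardiness. All problem
# data and the specific components below are illustrative assumptions.
random.seed(0)
jobs = [{"id": i, "dur": random.randint(1, 5), "due": random.randint(3, 20)}
        for i in range(8)]

def construct(order):
    """Greedy constructor: schedule jobs back to back in priority order."""
    t, tardiness = 0, {}
    for j in order:
        t += j["dur"]
        tardiness[j["id"]] = max(0, t - j["due"])
    return tardiness

def analyze(tardiness):
    """Trouble spots: the jobs that ended up tardy."""
    return {jid for jid, late in tardiness.items() if late > 0}

priority = {j["id"]: 0.0 for j in jobs}
best = float("inf")
for _ in range(50):                      # the Construct/Analyze/Prioritize loop
    order = sorted(jobs, key=lambda j: -priority[j["id"]])
    tardiness = construct(order)
    best = min(best, sum(tardiness.values()))
    for jid in analyze(tardiness):       # squeaky wheels get more grease:
        priority[jid] += 1.0             # tardy jobs move earlier next round
print("best total tardiness:", best)
```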
[ { "version": "v1", "created": "Fri, 27 May 2011 01:51:22 GMT" } ]
1,306,713,600,000
[ [ "Clements", "D. P.", "" ], [ "Joslin", "D. E.", "" ] ]
1105.5455
D. Barber
D. Barber, P. van de Laar
Variational Cumulant Expansions for Intractable Distributions
null
Journal Of Artificial Intelligence Research, Volume 10, pages 435-455, 1999
10.1613/jair.567
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intractable distributions present a common difficulty in inference within the probabilistic knowledge representation framework and variational methods have recently been popular in providing an approximate solution. In this article, we describe a perturbational approach in the form of a cumulant expansion which, to lowest order, recovers the standard Kullback-Leibler variational bound. Higher-order terms describe corrections on the variational approach without incurring much further computational cost. The relationship to other perturbational approaches such as TAP is also elucidated. We demonstrate the method on a particular class of undirected graphical models, Boltzmann machines, for which our simulation results confirm improved accuracy and enhanced stability during learning.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:51:46 GMT" } ]
1,306,713,600,000
[ [ "Barber", "D.", "" ], [ "de van Laar", "P.", "" ] ]
1105.5457
M. Fox
M. Fox, D. Long
Efficient Implementation of the Plan Graph in STAN
null
Journal Of Artificial Intelligence Research, Volume 10, pages 87-115, 1999
10.1613/jair.570
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
STAN is a Graphplan-based planner, so-called because it uses a variety of STate ANalysis techniques to enhance its performance. STAN competed in the AIPS-98 planning competition where it compared well with the other competitors in terms of speed, finding solutions fastest to many of the problems posed. Although the domain analysis techniques STAN exploits are an important factor in its overall performance, we believe that the speed at which STAN solved the competition problems is largely due to the implementation of its plan graph. The implementation is based on two insights: that many of the graph construction operations can be implemented as bit-level logical operations on bit vectors, and that the graph should not be explicitly constructed beyond the fix point. This paper describes the implementation of STAN's plan graph and provides experimental results which demonstrate the circumstances under which advantages can be obtained from using this implementation.
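The first insight, that graph construction operations reduce to bit-level logical operations, can be illustrated with plain Python integers used as bit vectors; the toy facts and actions below are assumptions for illustration, not STAN's internals.

```python
# Sketch of the bit-vector idea: represent a plan-graph fact layer as an
# integer whose i-th bit says whether fact i is present, so checking an
# action's preconditions is a single AND. Facts and actions are toy examples.

facts = ["at-A", "at-B", "have-key", "door-open"]
bit = {f: 1 << i for i, f in enumerate(facts)}

def mask(names):
    """Bit mask with the bits of the named facts set."""
    m = 0
    for n in names:
        m |= bit[n]
    return m

layer = mask(["at-A", "have-key"])    # current fact layer

def applicable(pre, layer):
    # Preconditions hold iff every precondition bit is set in the layer.
    return layer & pre == pre

unlock = mask(["at-A", "have-key"])   # preconditions of an "unlock" action
move = mask(["at-B"])                 # preconditions of a "move" action
print(applicable(unlock, layer))      # True
print(applicable(move, layer))        # False

layer |= mask(["door-open"])          # applying add effects is a single OR
```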
[ { "version": "v1", "created": "Fri, 27 May 2011 01:52:09 GMT" } ]
1,306,713,600,000
[ [ "Fox", "M.", "" ], [ "Long", "D.", "" ] ]
1105.5458
M. Fuchs
M. Fuchs, D. Fuchs
Cooperation between Top-Down and Bottom-Up Theorem Provers
null
Journal Of Artificial Intelligence Research, Volume 10, pages 169-198, 1999
10.1613/jair.573
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Top-down and bottom-up theorem proving approaches each have specific advantages and disadvantages. Bottom-up provers profit from strong redundancy control but suffer from the lack of goal-orientation, whereas top-down provers are goal-oriented but often have weak calculi when their proof lengths are considered. In order to integrate both approaches, we try to achieve cooperation between a top-down and a bottom-up prover in two different ways: The first technique aims at supporting a bottom-up with a top-down prover. A top-down prover generates subgoal clauses, they are then processed by a bottom-up prover. The second technique deals with the use of bottom-up generated lemmas in a top-down prover. We apply our concept to the areas of model elimination and superposition. We discuss the ability of our techniques to shorten proofs as well as to reorder the search space in an appropriate manner. Furthermore, in order to identify subgoal clauses and lemmas which are actually relevant for the proof task, we develop methods for a relevancy-based filtering. Experiments with the provers SETHEO and SPASS performed in the problem library TPTP reveal the high potential of our cooperation approaches.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:52:28 GMT" } ]
1,306,713,600,000
[ [ "Fuchs", "M.", "" ], [ "Fuchs", "D.", "" ] ]
1105.5459
T. Hogg
T. Hogg
Solving Highly Constrained Search Problems with Quantum Computers
null
Journal Of Artificial Intelligence Research, Volume 10, pages 39-66, 1999
10.1613/jair.574
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A previously developed quantum search algorithm for solving 1-SAT problems in a single step is generalized to apply to a range of highly constrained k-SAT problems. We identify a bound on the number of clauses in satisfiability problems for which the generalized algorithm can find a solution in a constant number of steps as the number of variables increases. This performance contrasts with the linear growth in the number of steps required by the best classical algorithms, and the exponential number required by classical and quantum methods that ignore the problem structure. In some cases, the algorithm can also guarantee that insoluble problems in fact have no solutions, unlike previously proposed quantum search algorithms.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:52:46 GMT" } ]
1,306,713,600,000
[ [ "Hogg", "T.", "" ] ]
1105.5460
C. Boutilier
C. Boutilier, T. Dean, S. Hanks
Decision-Theoretic Planning: Structural Assumptions and Computational Leverage
null
Journal Of Artificial Intelligence Research, Volume 11, pages 1-94, 1999
10.1613/jair.575
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to describe performance criteria, in the functions used to describe state transitions and observations, and in the relationships among features used to describe states, actions, rewards, and observations. Specialized representations, and algorithms employing these representations, can achieve computational leverage by exploiting these various forms of structure. Certain AI techniques -- in particular those based on the use of structured, intensional representations -- can be viewed in this way. This paper surveys several types of representations for both classical and decision-theoretic planning problems, and planning algorithms that exploit these representations in a number of different ways to ease the computational burden of constructing policies or plans. It focuses primarily on abstraction, aggregation and decomposition techniques based on AI-style representations.
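To make the basic object of study concrete, here is standard value iteration on a tiny MDP; the two-state, two-action model is an illustrative assumption, not an example from the survey.

```python
import numpy as np

# A 2-state, 2-action MDP (made up for illustration).
P = np.array([  # P[a, s, s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.0, 1.0]],   # action 1
])
R = np.array([[0.0, 1.0],       # R[a, s] expected immediate reward
              [0.5, 2.0]])
gamma = 0.95

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * P @ V       # Q[a, s] = R[a, s] + gamma * sum_s' P[a,s,s'] V[s']
    V_new = Q.max(axis=0)       # Bellman optimality backup
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new
print("V* =", V, "greedy policy =", Q.argmax(axis=0))
```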
[ { "version": "v1", "created": "Fri, 27 May 2011 01:53:02 GMT" } ]
1,306,713,600,000
[ [ "Boutilier", "C.", "" ], [ "Dean", "T.", "" ], [ "Hanks", "S.", "" ] ]
1105.5461
T. Lukasiewicz
T. Lukasiewicz
Probabilistic Deduction with Conditional Constraints over Basic Events
null
Journal Of Artificial Intelligence Research, Volume 10, pages 199-241, 1999
10.1613/jair.577
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of probabilistic deduction with conditional constraints over basic events. We show that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. We then concentrate on the special case of probabilistic deduction in conditional constraint trees. We elaborate very efficient techniques for globally complete probabilistic deduction. In detail, for conditional constraint trees with point probabilities, we present a local approach to globally complete probabilistic deduction, which runs in linear time in the size of the conditional constraint trees. For conditional constraint trees with interval probabilities, we show that globally complete probabilistic deduction can be done in a global approach by solving nonlinear programs. We show how these nonlinear programs can be transformed into equivalent linear programs, which are solvable in polynomial time in the size of the conditional constraint trees.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:53:20 GMT" } ]
1,306,713,600,000
[ [ "Lukasiewicz", "T.", "" ] ]
1105.5462
T. S. Jaakkola
T. S. Jaakkola, M. I. Jordan
Variational Probabilistic Inference and the QMR-DT Network
null
Journal Of Artificial Intelligence Research, Volume 10, pages 291-322, 1999
10.1613/jair.583
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a variational approximation method for efficient inference in large-scale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference methods based on stochastic sampling or search. We describe a variational approach to the problem of diagnostic inference in the `Quick Medical Reference' (QMR) network. The QMR network is a large-scale probabilistic graphical model built on statistical and expert knowledge. Exact probabilistic inference is infeasible in this model for all but a small set of cases. We evaluate our variational inference algorithm on a large set of diagnostic test cases, comparing the algorithm to a state-of-the-art stochastic sampling method.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:53:36 GMT" } ]
1,306,713,600,000
[ [ "Jaakkola", "T. S.", "" ], [ "Jordan", "M. I.", "" ] ]
1105.5463
A. Borgida
A. Borgida
Extensible Knowledge Representation: the Case of Description Reasoners
null
Journal Of Artificial Intelligence Research, Volume 10, pages 399-434, 1999
10.1613/jair.584
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper offers an approach to extensible knowledge representation and reasoning for a family of formalisms known as Description Logics. The approach is based on the notion of adding new concept constructors, and includes a heuristic methodology for specifying the desired extensions, as well as a modularized software architecture that supports implementing extensions. The architecture detailed here falls in the normalize-compare paradigm, and supports both intensional reasoning (subsumption) involving concepts, and extensional reasoning involving individuals after incremental updates to the knowledge base. The resulting approach can be used to extend the reasoner with specialized notions that are motivated by specific problems or application areas, such as reasoning about dates, plans, etc. In addition, it provides an opportunity to implement constructors that are not currently yet sufficiently well understood theoretically, but are needed in practice. Also, for constructors that are provably hard to reason with (e.g., ones whose presence would lead to undecidability), it allows the implementation of incomplete reasoners where the incompleteness is tailored to be acceptable for the application at hand.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:53:50 GMT" } ]
1,306,713,600,000
[ [ "Borgida", "A.", "" ] ]
1105.5465
J. Rintanen
J. Rintanen
Constructing Conditional Plans by a Theorem-Prover
null
Journal Of Artificial Intelligence Research, Volume 10, pages 323-352, 1999
10.1613/jair.591
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The research on conditional planning rejects the assumptions that there is no uncertainty or incompleteness of knowledge with respect to the state and changes of the system the plans operate on. Without these assumptions the sequences of operations that achieve the goals depend on the initial state and the outcomes of nondeterministic changes in the system. This setting raises the questions of how to represent the plans and how to perform plan search. The answers are quite different from those in the simpler classical framework. In this paper, we approach conditional planning from a new viewpoint that is motivated by the use of satisfiability algorithms in classical planning. Translating conditional planning to formulae in the propositional logic is not feasible because of inherent computational limitations. Instead, we translate conditional planning to quantified Boolean formulae. We discuss three formalizations of conditional planning as quantified Boolean formulae, and present experimental results obtained with a theorem-prover.
[ { "version": "v1", "created": "Fri, 27 May 2011 01:54:30 GMT" } ]
1,306,713,600,000
[ [ "Rintanen", "J.", "" ] ]
1105.5466
K. M. Ting
K. M. Ting, I. H. Witten
Issues in Stacked Generalization
null
Journal Of Artificial Intelligence Research, Volume 10, pages 271-289, 1999
10.1613/jair.594
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In this paper we address two crucial issues which have been considered a `black art' in classification tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input. We find that the best results are obtained when the higher-level model combines the confidence (and not just the predictions) of the lower-level ones. We demonstrate the effectiveness of stacked generalization for combining three different types of learning algorithms for classification tasks. We also compare the performance of stacked generalization with majority vote and published results of arcing and bagging.
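A minimal scikit-learn sketch of the variant the paper recommends, stacking on the level-0 models' class probabilities rather than their bare predictions, might look as follows; the base learners and the data set are stand-ins, not those used in the article.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Level-0 learners and data are illustrative assumptions.
X, y = load_iris(return_X_y=True)
level0 = [GaussianNB(), KNeighborsClassifier(),
          DecisionTreeClassifier(random_state=0)]

# Out-of-fold class probabilities (the "confidences") form the level-1
# attributes; cross-validation avoids leaking training labels to level 1.
meta_X = np.hstack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba") for m in level0
])
level1 = LogisticRegression(max_iter=1000).fit(meta_X, y)
print("level-1 training accuracy:", level1.score(meta_X, y))
```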
[ { "version": "v1", "created": "Fri, 27 May 2011 01:54:47 GMT" } ]
1,306,713,600,000
[ [ "Ting", "K. M.", "" ], [ "Witten", "I. H.", "" ] ]
1105.5516
Fabian Suchanek
Fabian Suchanek (INRIA Saclay - Ile de France), Serge Abiteboul (INRIA Saclay - Ile de France), Pierre Senellart
Ontology Alignment at the Instance and Schema Level
Technical Report at INRIA RT-0408
N° RT-0408 (2011)
null
RT-0408
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present PARIS, an approach for the automatic alignment of ontologies. PARIS aligns not only instances, but also relations and classes. Alignments at the instance-level cross-fertilize with alignments at the schema-level. Thereby, our system provides a truly holistic solution to the problem of ontology alignment. The heart of the approach is probabilistic. This allows PARIS to run without any parameter tuning. We demonstrate the efficiency of the algorithm and its precision through extensive experiments. In particular, we obtain a precision of around 90% in experiments with two of the world's largest ontologies.
[ { "version": "v1", "created": "Fri, 27 May 2011 10:18:08 GMT" }, { "version": "v2", "created": "Wed, 1 Jun 2011 12:38:05 GMT" }, { "version": "v3", "created": "Thu, 18 Aug 2011 13:00:27 GMT" } ]
1,313,712,000,000
[ [ "Suchanek", "Fabian", "", "INRIA Saclay - Ile de France" ], [ "Abiteboul", "Serge", "", "INRIA\n Saclay - Ile de France" ], [ "Senellart", "Pierre", "" ] ]
1105.5667
Nina Narodytska
Jessica Davies, George Katsirelos, Nina Narodytska, Toby Walsh
Complexity of and Algorithms for Borda Manipulation
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that it is NP-hard for a coalition of two manipulators to compute how to manipulate the Borda voting rule. This resolves one of the last open problems in the computational complexity of manipulating common voting rules. Because of this NP-hardness, we treat computing a manipulation as an approximation problem where we try to minimize the number of manipulators. Based on ideas from bin packing and multiprocessor scheduling, we propose two new approximation methods to compute manipulations of the Borda rule. Experiments show that these methods significantly outperform the previous best known approximation method. We are able to find optimal manipulations in almost all the randomly generated elections tested. Our results suggest that, whilst computing a manipulation of the Borda rule by a coalition is NP-hard, computational complexity may provide only a weak barrier against manipulation in practice.
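For concreteness, the sketch below computes Borda scores and builds manipulator ballots with a simple greedy rule in the spirit of the scheduling-based methods: the preferred candidate is ranked first, and the currently strongest rivals receive the fewest points. This is a simplified illustration, not the paper's exact algorithms.

```python
def borda_scores(profile, m):
    """profile: list of ballots; each ballot ranks candidates 0..m-1, most
    preferred first. A candidate at position p earns m-1-p points."""
    scores = [0] * m
    for ballot in profile:
        for pos, cand in enumerate(ballot):
            scores[cand] += m - 1 - pos
    return scores

def manipulator_ballot(profile, m, preferred):
    """Greedy ballot: preferred candidate first; remaining candidates ordered
    so that currently stronger rivals end up with fewer points."""
    scores = borda_scores(profile, m)
    rivals = sorted((c for c in range(m) if c != preferred),
                    key=lambda c: scores[c])   # weakest rival ranked highest
    return [preferred] + rivals

# Three sincere voters over candidates 0..3; two manipulators back candidate 2.
profile = [[0, 1, 2, 3], [1, 0, 3, 2], [0, 2, 1, 3]]
for _ in range(2):
    profile.append(manipulator_ballot(profile, 4, preferred=2))
print(borda_scores(profile, 4))  # [8, 8, 9, 5] -> candidate 2 now wins
```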
[ { "version": "v1", "created": "Fri, 27 May 2011 23:11:40 GMT" } ]
1,306,800,000,000
[ [ "Davies", "Jessica", "" ], [ "Katsirelos", "George", "" ], [ "Narodytska", "Nina", "" ], [ "Walsh", "Toby", "" ] ]
1105.6124
F. Barber
F. Barber
Reasoning on Interval and Point-based Disjunctive Metric Constraints in Temporal Contexts
null
Journal Of Artificial Intelligence Research, Volume 12, pages 35-86, 2000
10.1613/jair.693
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a temporal model for reasoning on disjunctive metric constraints on intervals and time points in temporal contexts. This temporal model is composed of a labeled temporal algebra and its reasoning algorithms. The labeled temporal algebra defines labeled disjunctive metric point-based constraints, where each disjunct in each input disjunctive constraint is univocally associated to a label. Reasoning algorithms manage labeled constraints, associated label lists, and sets of mutually inconsistent disjuncts. These algorithms guarantee consistency and obtain a minimal network. Additionally, constraints can be organized in a hierarchy of alternative temporal contexts. Therefore, we can reason on context-dependent disjunctive metric constraints on intervals and points. Moreover, the model is able to represent non-binary constraints, such that logical dependencies on disjuncts in constraints can be handled. The computational cost of reasoning algorithms is exponential in accordance with the underlying problem complexity, although some improvements are proposed.
[ { "version": "v1", "created": "Mon, 30 May 2011 22:09:11 GMT" } ]
1,306,886,400,000
[ [ "Barber", "F.", "" ] ]
1105.6148
Mohammed El-Dosuky
M. A. El-Dosuky, T. T. Hamza, M. Z. Rashad and A. H. Naguib
Overcoming Misleads In Logic Programs by Redefining Negation
8 pages, 1 figure
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
Negation as failure and incomplete information in logic programs have been studied by many researchers. In order to explain HOW a negated conclusion was reached, we introduce and prove a different way of negating facts, overcoming misleads in logic programs. Negating facts can be achieved by asking the user for constants that do not appear elsewhere in the knowledge base.
[ { "version": "v1", "created": "Tue, 31 May 2011 02:19:21 GMT" }, { "version": "v2", "created": "Mon, 4 Mar 2013 23:40:11 GMT" } ]
1,362,528,000,000
[ [ "El-Dosuky", "M. A.", "" ], [ "Hamza", "T. T.", "" ], [ "Rashad", "M. Z.", "" ], [ "Naguib", "A. H.", "" ] ]
1106.0171
Gilberto de Paiva
Gilberto de Paiva
Proposal of Pattern Recognition as a necessary and sufficient Principle to Cognitive Science
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the prevalence of the Computational Theory of Mind and the Connectionist Model, the establishment of the key principles of Cognitive Science is still controversial and inconclusive. This paper proposes the concept of Pattern Recognition as a Necessary and Sufficient Principle for general cognitive science modeling, in a very ambitious scientific proposal. A formal physical definition of the pattern recognition concept is also proposed, to close many key conceptual gaps in the field.
[ { "version": "v1", "created": "Tue, 31 May 2011 06:40:52 GMT" } ]
1,306,972,800,000
[ [ "de Paiva", "Gilberto", "" ] ]
1106.0218
E. Birnbaum
E. Birnbaum, E. L. Lozinskii
The Good Old Davis-Putnam Procedure Helps Counting Models
null
Journal Of Artificial Intelligence Research, Volume 10, pages 457-477, 1999
10.1613/jair.601
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As was shown recently, many important AI problems require counting the number of models of propositional formulas. The problem of counting models of such formulas is, according to present knowledge, computationally intractable in the worst case. Based on the Davis-Putnam procedure, we present an algorithm, CDP, that computes the exact number of models of a propositional CNF or DNF formula F. Let m and n be the number of clauses and variables of F, respectively, and let p denote the probability that a literal l of F occurs in a clause C of F. Then the average running time of CDP is shown to be O(nm^d), where d = -1/log(1-p). The practical performance of CDP has been estimated in a series of experiments on a wide variety of CNF formulas.
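A compact Davis-Putnam-style exact model counter conveys the flavor of CDP; the recursion below is a textbook-style counter written for illustration, not the paper's optimized implementation.

```python
# Exact model counting for CNF by DPLL-style splitting. Clauses are lists of
# signed integers (DIMACS-style literals); n_vars counts unassigned variables.

def count_models(clauses, n_vars):
    if any(len(c) == 0 for c in clauses):
        return 0                                  # empty clause: unsatisfiable
    if not clauses:
        return 2 ** n_vars                        # remaining variables are free
    v = abs(clauses[0][0])                        # branch on some variable
    total = 0
    for lit in (v, -v):
        reduced = [[l for l in c if l != -lit]    # drop falsified literals
                   for c in clauses if lit not in c]  # drop satisfied clauses
        total += count_models(reduced, n_vars - 1)
    return total

print(count_models([[1, 2], [-1, 3]], 3))  # (x1 or x2) and (-x1 or x3): 4 models
```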
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:14:46 GMT" } ]
1,306,972,800,000
[ [ "Birnbaum", "E.", "" ], [ "Lozinskii", "E. L.", "" ] ]
1106.0219
C. E. Brodley
C. E. Brodley, M. A. Friedl
Identifying Mislabeled Training Data
null
Journal Of Artificial Intelligence Research, Volume 11, pages 131-167, 1999
10.1613/jair.606
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new approach to identifying and eliminating mislabeled training instances for supervised learning. The goal of this approach is to improve classification accuracies produced by learning algorithms by improving the quality of the training data. Our approach uses a set of learning algorithms to create classifiers that serve as noise filters for the training data. We evaluate single algorithm, majority vote and consensus filters on five datasets that are prone to labeling errors. Our experiments illustrate that filtering significantly improves classification accuracy for noise levels up to 30 percent. An analytical and empirical evaluation of the precision of our approach shows that consensus filters are conservative at throwing away good data at the expense of retaining bad data and that majority filters are better at detecting bad data at the expense of throwing away good data. This suggests that for situations in which there is a paucity of data, consensus filters are preferable, whereas majority vote filters are preferable for situations with an abundance of data.
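The sketch below implements majority-vote and consensus filters with out-of-fold predictions on a small data set with artificially injected label noise; the base learners and the 10% noise rate are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Inject 10% label noise into a clean data set (an assumption for the demo).
rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
noisy = y.copy()
flip = rng.choice(len(y), size=len(y) // 10, replace=False)
noisy[flip] = (noisy[flip] + 1) % 3

# Filter classifiers predict each instance's label out-of-fold.
filters = [GaussianNB(), DecisionTreeClassifier(random_state=0),
           LogisticRegression(max_iter=1000)]
preds = np.array([cross_val_predict(m, X, noisy, cv=10) for m in filters])

wrong = preds != noisy                      # per-filter disagreement with label
majority = wrong.sum(axis=0) >= 2           # flagged by most filters
consensus = wrong.all(axis=0)               # flagged by all filters (conservative)
print("majority flags:", majority.sum(), "consensus flags:", consensus.sum())
print("true noise caught by consensus:", consensus[flip].sum(), "of", len(flip))
```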
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:15:28 GMT" } ]
1,306,972,800,000
[ [ "Brodley", "C. E.", "" ], [ "Friedl", "M. A.", "" ] ]
1106.0220
S. Argamon-Engelson
S. Argamon-Engelson, I. Dagan
Committee-Based Sample Selection for Probabilistic Classifiers
null
Journal Of Artificial Intelligence Research, Volume 11, pages 335-360, 1999
10.1613/jair.612
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many real-world learning tasks, it is expensive to acquire a sufficient number of labeled examples for training. This paper investigates methods for reducing annotation cost by `sample selection'. In this approach, during training the learning program examines many unlabeled examples and selects for labeling only those that are most informative at each stage. This avoids redundantly labeling examples that contribute little new information. Our work follows on previous research on Query By Committee, extending the committee-based paradigm to the context of probabilistic classification. We describe a family of empirical methods for committee-based sample selection in probabilistic classification models, which evaluate the informativeness of an example by measuring the degree of disagreement between several model variants. These variants (the committee) are drawn randomly from a probability distribution conditioned by the training set labeled so far. The method was applied to the real-world natural language processing task of stochastic part-of-speech tagging. We find that all variants of the method achieve a significant reduction in annotation cost, although their computational efficiency differs. In particular, the simplest variant, a two member committee with no parameters to tune, gives excellent results. We also show that sample selection yields a significant reduction in the size of the model used by the tagger.
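One common way to quantify committee disagreement is vote entropy; the sketch below computes it for made-up part-of-speech votes. Note that in the paper the committee members are model variants drawn from a distribution conditioned on the labeled data, and its disagreement measures may differ from this simple one.

```python
import math
from collections import Counter

# Vote entropy over the labels a committee proposes for one unlabeled example;
# high entropy -> high disagreement -> informative example worth labeling.

def vote_entropy(votes):
    counts = Counter(votes)
    k = len(votes)
    return -sum((c / k) * math.log(c / k, 2) for c in counts.values())

# Hypothetical part-of-speech votes from a 4-member committee.
print(vote_entropy(["NN", "NN", "NN", "NN"]))  # 0.0 -> committee agrees, skip
print(vote_entropy(["NN", "VB", "NN", "VB"]))  # 1.0 -> maximal disagreement
```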
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:15:56 GMT" } ]
1,306,972,800,000
[ [ "Argamon-Engelson", "S.", "" ], [ "Dagan", "I.", "" ] ]
1106.0224
R. Rosati
R. Rosati
Reasoning about Minimal Belief and Negation as Failure
null
Journal Of Artificial Intelligence Research, Volume 11, pages 277-300, 1999
10.1613/jair.637
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the problem of reasoning in the propositional fragment of MBNF, the logic of minimal belief and negation as failure introduced by Lifschitz, which can be considered as a unifying framework for several nonmonotonic formalisms, including default logic, autoepistemic logic, circumscription, epistemic queries, and logic programming. We characterize the complexity and provide algorithms for reasoning in propositional MBNF. In particular, we show that entailment in propositional MBNF lies at the third level of the polynomial hierarchy, hence it is harder than reasoning in all the above mentioned propositional formalisms for nonmonotonic reasoning. We also prove the exact correspondence between negation as failure in MBNF and negative introspection in Moore's autoepistemic logic.
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:17:18 GMT" } ]
1,306,972,800,000
[ [ "Rosati", "R.", "" ] ]
1106.0225
R. Bar-Yehuda
R. Bar-Yehuda, A. Becker, D. Geiger
Randomized Algorithms for the Loop Cutset Problem
null
Journal Of Artificial Intelligence Research, Volume 12, pages 219-234, 2000
10.1613/jair.638
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show how to find a minimum weight loop cutset in a Bayesian network with high probability. Finding such a loop cutset is the first step in the method of conditioning for inference. Our randomized algorithm for finding a loop cutset outputs a minimum loop cutset after O(c * 6^k * k * n) steps with probability at least 1 - (1 - 1/6^k)^(c * 6^k), where c > 1 is a constant specified by the user, k is the minimal size of a minimum weight loop cutset, and n is the number of vertices. We also show empirically that a variant of this algorithm often finds a loop cutset that is closer to the minimum weight loop cutset than the ones found by the best deterministic algorithms known.
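The stated success probability is a standard repeated-trials bound. Assuming, as the bound's form suggests, that each independent randomized trial returns a minimum loop cutset with probability at least 6^{-k}, running c * 6^k trials fails only if every trial fails:

```latex
\Pr[\text{all } c \cdot 6^k \text{ trials fail}]
  \le \left(1 - 6^{-k}\right)^{c \cdot 6^k}
  \le e^{-c},
\qquad\text{so}\qquad
\Pr[\text{success}] \ge 1 - \left(1 - 6^{-k}\right)^{c \cdot 6^k} \ge 1 - e^{-c}.
```

Increasing the user-chosen constant c therefore drives the failure probability down exponentially.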
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:17:38 GMT" } ]
1,306,972,800,000
[ [ "Bar-Yehuda", "R.", "" ], [ "Becker", "A.", "" ], [ "Geiger", "D.", "" ] ]
1106.0229
R. M. Jensen
R. M. Jensen, M. M. Veloso
OBDD-based Universal Planning for Synchronized Agents in Non-Deterministic Domains
null
Journal Of Artificial Intelligence Research, Volume 13, pages 189-226, 2000
10.1613/jair.649
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, model-checking representation and search techniques were shown to be efficiently applicable to planning, in particular to non-deterministic planning. Such planning approaches use Ordered Binary Decision Diagrams (OBDDs) to encode a planning domain as a non-deterministic finite automaton and then apply fast algorithms from model checking to search for a solution. OBDDs can effectively scale and can provide universal plans for complex planning domains. We are particularly interested in addressing the complexities arising in non-deterministic, multi-agent domains. In this article, we present UMOP, a new universal OBDD-based planning framework for non-deterministic, multi-agent domains. We introduce a new planning domain description language, NADL, to specify non-deterministic, multi-agent domains. A key contribution of the language is the explicit definition of controllable agents and uncontrollable environment agents. We describe the syntax and semantics of NADL and show how to build an efficient OBDD-based representation of an NADL description. The UMOP planning system uses NADL and different OBDD-based universal planning algorithms. It includes the previously developed strong and strong cyclic planning algorithms. In addition, we introduce our new optimistic planning algorithm that relaxes optimality guarantees and generates plausible universal plans in some domains where neither a strong nor a strong cyclic solution exists. We present empirical results applying UMOP to domains ranging from deterministic and single-agent with no environment actions to non-deterministic and multi-agent with complex environment actions. UMOP is shown to be a rich and efficient planning system.
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:22:36 GMT" } ]
1,306,972,800,000
[ [ "Jensen", "R. M.", "" ], [ "Veloso", "M. M.", "" ] ]
1106.0230
S. Kambhampati
S. Kambhampati
Planning Graph as a (Dynamic) CSP: Exploiting EBL, DDB and other CSP Search Techniques in Graphplan
null
Journal Of Artificial Intelligence Research, Volume 12, pages 1-34, 2000
10.1613/jair.655
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper reviews the connections between Graphplan's planning-graph and the dynamic constraint satisfaction problem and motivates the need for adapting CSP search techniques to the Graphplan algorithm. It then describes how explanation-based learning, dependency-directed backtracking, dynamic variable ordering, forward checking, sticky values and random-restart search strategies can be adapted to Graphplan. Empirical results are provided to demonstrate that these augmentations improve Graphplan's performance significantly (up to 1000x speedups) on several benchmark problems. Special attention is paid to the explanation-based learning and dependency-directed backtracking techniques, as they are empirically found to be most useful in improving the performance of Graphplan.
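For readers unfamiliar with the adapted techniques, here is a generic CSP sketch of two of them, dynamic variable ordering and forward checking (plain backtracking search, not the augmented Graphplan; the interfaces are illustrative):

```python
def solve(domains, constraints, assignment=None):
    """Backtracking CSP search. `domains`: {var: set(values)};
    `constraints`: {(v1, v2): callable} with an entry for each ordered
    pair it constrains. Returns a complete assignment or None."""
    assignment = assignment or {}
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        return assignment
    var = min(unassigned, key=lambda v: len(domains[v]))  # dynamic ordering (MRV)
    for value in list(domains[var]):
        pruned = {v: set(d) for v, d in domains.items()}
        pruned[var] = {value}
        consistent = True
        for other in unassigned:
            if other == var:
                continue
            check = constraints.get((var, other))
            if check:                                     # forward checking:
                pruned[other] = {w for w in pruned[other] if check(value, w)}
                if not pruned[other]:                     # neighbour wiped out,
                    consistent = False                    # fail before recursing
                    break
        if consistent:
            result = solve(pruned, constraints, {**assignment, var: value})
            if result is not None:
                return result
    return None                                           # backtrack

# usage: 3-colour a triangle
doms = {v: {"r", "g", "b"} for v in "ABC"}
diff = lambda a, b: a != b
cons = {(x, y): diff for x in "ABC" for y in "ABC" if x != y}
print(solve(doms, cons))   # e.g. {'A': 'b', 'B': 'g', 'C': 'r'}
```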
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:22:50 GMT" } ]
1,306,972,800,000
[ [ "Kambhampati", "S.", "" ] ]
1106.0233
M. Cadoli
M. Cadoli, F. M. Donini, P. Liberatore, M. Schaerf
Space Efficiency of Propositional Knowledge Representation Formalisms
null
Journal Of Artificial Intelligence Research, Volume 13, pages 1-31, 2000
10.1613/jair.664
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the space efficiency of a Propositional Knowledge Representation (PKR) formalism. Intuitively, the space efficiency of a formalism F in representing a certain piece of knowledge A, is the size of the shortest formula of F that represents A. In this paper we assume that knowledge is either a set of propositional interpretations (models) or a set of propositional formulae (theorems). We provide a formal way of talking about the relative ability of PKR formalisms to compactly represent a set of models or a set of theorems. We introduce two new compactness measures, the corresponding classes, and show that the relative space efficiency of a PKR formalism in representing models/theorems is directly related to such classes. In particular, we consider formalisms for nonmonotonic reasoning, such as circumscription and default logic, as well as belief revision operators and the stable model semantics for logic programs with negation. One interesting result is that formalisms with the same time complexity do not necessarily belong to the same space efficiency class.
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:24:29 GMT" } ]
1,306,972,800,000
[ [ "Cadoli", "M.", "" ], [ "Donini", "F. M.", "" ], [ "Liberatore", "P.", "" ], [ "Schaerf", "M.", "" ] ]
1106.0234
M. Hauskrecht
M. Hauskrecht
Value-Function Approximations for Partially Observable Markov Decision Processes
null
Journal Of Artificial Intelligence Research, Volume 13, pages 33-94, 2000
10.1613/jair.678
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Partially observable Markov decision processes (POMDPs) provide an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in which states of the system are observable only indirectly, via a set of imperfect or noisy observations. The modeling advantage of POMDPs, however, comes at a price -- exact methods for solving them are computationally very expensive and thus applicable in practice only to very simple problems. We focus on efficient approximation (heuristic) methods that attempt to alleviate the computational problem and trade off accuracy for speed. We have two objectives here. First, we survey various approximation methods, analyze their properties and relations and provide some new insights into their differences. Second, we present a number of new approximation methods and novel refinements of existing techniques. The theoretical results are supported by experiments on a problem from the agent navigation domain.
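For concreteness (standard POMDP notation, not the paper's): the belief state is updated by Bayes' rule after each action-observation pair, and one of the simplest approximations in the family the paper surveys, the MDP-based Q_MDP heuristic, scores actions with the Q-function of the underlying fully observable MDP:

```latex
b'(s') \;=\; \frac{O(o \mid s', a)\sum_{s} T(s' \mid s, a)\, b(s)}{\Pr(o \mid b, a)},
\qquad
\hat{V}(b) \;=\; \max_{a} \sum_{s} b(s)\, Q_{\mathrm{MDP}}(s, a).
```

Here T and O are the transition and observation models. The Q_MDP approximation is cheap but ignores the value of information-gathering actions, which is the kind of accuracy-for-speed trade-off the abstract describes.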
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:24:43 GMT" } ]
1,306,972,800,000
[ [ "Hauskrecht", "M.", "" ] ]
1106.0237
R. M. Neal
R. M. Neal
On Deducing Conditional Independence from d-Separation in Causal Graphs with Feedback (Research Note)
null
Journal Of Artificial Intelligence Research, Volume 12, pages 87-91, 2000
10.1613/jair.689
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pearl and Dechter (1996) claimed that the d-separation criterion for conditional independence in acyclic causal networks also applies to networks of discrete variables that have feedback cycles, provided that the variables of the system are uniquely determined by the random disturbances. I show by example that this is not true in general. Some condition stronger than uniqueness is needed, such as the existence of a causal dynamics guaranteed to lead to the unique solution.
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:36:47 GMT" } ]
1,306,972,800,000
[ [ "Neal", "R. M.", "" ] ]
1106.0238
A. Borgida
A. Borgida, R. Kusters
What's in an Attribute? Consequences for the Least Common Subsumer
null
Journal Of Artificial Intelligence Research, Volume 14, pages 167-203, 2001
10.1613/jair.702
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional relationships between objects, called `attributes', are of considerable importance in knowledge representation languages, including Description Logics (DLs). A study of the literature indicates that papers have made, often implicitly, different assumptions about the nature of attributes: whether they are always required to have a value, or whether they can be partial functions. The work presented here is the first explicit study of this difference for subclasses of the CLASSIC DL, involving the same-as concept constructor. It is shown that although determining subsumption between concept descriptions has the same complexity (though requiring different algorithms), the story is different in the case of determining the least common subsumer (lcs). For attributes interpreted as partial functions, the lcs exists and can be computed relatively easily; even in this case our results correct and extend three previous papers about the lcs of DLs. In the case where attributes must have a value, the lcs may not exist, and even if it exists it may be of exponential size. Interestingly, it is possible to decide in polynomial time if the lcs exists.
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:36:59 GMT" } ]
1,306,972,800,000
[ [ "Borgida", "A.", "" ], [ "Kusters", "R.", "" ] ]
1106.0239
S. Tobies
S. Tobies
The Complexity of Reasoning with Cardinality Restrictions and Nominals in Expressive Description Logics
null
Journal Of Artificial Intelligence Research, Volume 12, pages 199-217, 2000
10.1613/jair.705
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the complexity of the combination of the Description Logics ALCQ and ALCQI with a terminological formalism based on cardinality restrictions on concepts. These combinations can naturally be embedded into C^2, the two variable fragment of predicate logic with counting quantifiers, which yields decidability in NExpTime. We show that this approach leads to an optimal solution for ALCQI, as ALCQI with cardinality restrictions has the same complexity as C^2 (NExpTime-complete). In contrast, we show that for ALCQ, the problem can be solved in ExpTime. This result is obtained by a reduction of reasoning with cardinality restrictions to reasoning with the (in general weaker) terminological formalism of general axioms for ALCQ extended with nominals. Using the same reduction, we show that, for the extension of ALCQI with nominals, reasoning with general axioms is a NExpTime-complete problem. Finally, we sharpen this result and show that pure concept satisfiability for ALCQI with nominals is NExpTime-complete. Without nominals, this problem is known to be PSpace-complete.
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:37:11 GMT" } ]
1,306,972,800,000
[ [ "Tobies", "S.", "" ] ]
1106.0240
I. P. Gent
I. P. Gent, J. Singer, A. Smaill
Backbone Fragility and the Local Search Cost Peak
null
Journal Of Artificial Intelligence Research, Volume 12, pages 235-270, 2000
10.1613/jair.711
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The local search algorithm WSat is one of the most successful algorithms for solving the satisfiability (SAT) problem. It is notably effective at solving hard Random 3-SAT instances near the so-called `satisfiability threshold', but still shows a peak in search cost near the threshold and large variations in cost over different instances. We make a number of significant contributions to the analysis of WSat on high-cost random instances, using the recently-introduced concept of the backbone of a SAT instance. The backbone is the set of literals which are entailed by an instance. We find that the number of solutions predicts the cost well for small-backbone instances but is much less relevant for the large-backbone instances which appear near the threshold and dominate in the overconstrained region. We show a very strong correlation between search cost and the Hamming distance to the nearest solution early in WSat's search. This pattern leads us to introduce a measure of the backbone fragility of an instance, which indicates how persistent the backbone is as clauses are removed. We propose that high-cost random instances for local search are those with very large backbones which are also backbone-fragile. We suggest that the decay in cost beyond the satisfiability threshold is due to increasing backbone robustness (the opposite of backbone fragility). Our hypothesis makes three correct predictions. First, that the backbone robustness of an instance is negatively correlated with the local search cost when other factors are controlled for. Second, that backbone-minimal instances (which are 3-SAT instances altered so as to be more backbone-fragile) are unusually hard for WSat. Third, that the clauses most often unsatisfied during search are those whose deletion has the most effect on the backbone. In understanding the pathologies of local search methods, we hope to contribute to the development of new and better techniques.
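A simplified sketch of the WSat procedure under analysis may help (this collapses the variants: the Selman-Kautz-Cohen version, for instance, takes a zero-break flip whenever one exists and applies the noise step only otherwise):

```python
import random

def wsat(clauses, n_vars, max_flips=100_000, noise=0.5):
    """Simplified WalkSAT. `clauses`: list of tuples of nonzero ints,
    negative = negated variable. Returns a satisfying assignment dict
    or None if none is found within `max_flips`."""
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)

    def break_count(var):
        # currently satisfied clauses that flipping `var` would break
        before = [c for c in clauses if any(sat(l) for l in c)]
        assign[var] = not assign[var]
        broken = sum(1 for c in before if not any(sat(l) for l in c))
        assign[var] = not assign[var]
        return broken

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                      # model found
        clause = random.choice(unsat)
        if random.random() < noise:
            var = abs(random.choice(clause))   # noise step: random walk
        else:
            var = min({abs(l) for l in clause}, key=break_count)
        assign[var] = not assign[var]
    return None

print(wsat([(1, 2), (-1, 2)], n_vars=2))  # e.g. {1: False, 2: True}
```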
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:37:25 GMT" } ]
1,306,972,800,000
[ [ "Gent", "I. P.", "" ], [ "Singer", "J.", "" ], [ "Smaill", "A.", "" ] ]
1106.0241
M. A. Walker
M. A. Walker
An Application of Reinforcement Learning to Dialogue Strategy Selection in a Spoken Dialogue System for Email
null
Journal Of Artificial Intelligence Research, Volume 12, pages 387-416, 2000
10.1613/jair.713
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a novel method by which a spoken dialogue system can learn to choose an optimal dialogue strategy from its experience interacting with human users. The method is based on a combination of reinforcement learning and performance modeling of spoken dialogue systems. The reinforcement learning component applies Q-learning (Watkins, 1989), while the performance modeling component applies the PARADISE evaluation framework (Walker et al., 1997) to learn the performance function (reward) used in reinforcement learning. We illustrate the method with a spoken dialogue system named ELVIS (EmaiL Voice Interactive System) that supports access to email over the phone. We conduct a set of experiments for training an optimal dialogue strategy on a corpus of 219 dialogues in which human users interact with ELVIS over the phone. We then test that strategy on a corpus of 18 dialogues. We show that ELVIS can learn to optimize its strategy selection for agent initiative, for reading messages, and for summarizing email folders.
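The cited reinforcement-learning component is Watkins' Q-learning, whose tabular update is compact enough to show. The state and action names below are hypothetical stand-ins for the paper's dialogue contexts and strategy choices; in the paper the reward comes from the PARADISE performance function at dialogue end.

```python
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=1.0):
    """One tabular Q-learning step (Watkins, 1989).
    Q maps (state, action) -> estimated value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])

Q = defaultdict(float)
q_update(Q, "reading_prompt", "summarize_folder", reward=0.0,
         next_state="folder_summarized",
         actions=["read", "summarize_folder"])
```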
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:37:37 GMT" } ]
1,306,972,800,000
[ [ "Walker", "M. A.", "" ] ]
1106.0242
J. Goldsmith
J. Goldsmith, C. Lusena, M. Mundhenk
Nonapproximability Results for Partially Observable Markov Decision Processes
null
Journal Of Artificial Intelligence Research, Volume 14, pages 83-103, 2001
10.1613/jair.714
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that for several variations of partially observable Markov decision processes, polynomial-time algorithms for finding control policies are unlikely to have, or simply do not have, guarantees of finding policies within a constant factor or a constant summand of optimal. Here "unlikely" means "unless some complexity classes collapse," where the collapses considered are P=NP, P=PSPACE, or P=EXP. Until or unless these collapses are shown to hold, any control-policy designer must choose between such performance guarantees and efficient computation.
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:37:53 GMT" } ]
1,306,972,800,000
[ [ "Goldsmith", "J.", "" ], [ "Lusena", "C.", "" ], [ "Mundhenk", "M.", "" ] ]
1106.0243
J. Hoffmann
J. Hoffmann, J. Koehler
On Reasonable and Forced Goal Orderings and their Use in an Agenda-Driven Planning Algorithm
null
Journal Of Artificial Intelligence Research, Volume 12, pages 338-386, 2000
10.1613/jair.715
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper addresses the problem of computing goal orderings, which is one of the longstanding issues in AI planning. It makes two new contributions. First, it formally defines and discusses two different goal orderings, which are called the reasonable and the forced ordering. Both orderings are defined for simple STRIPS operators as well as for more complex ADL operators supporting negation and conditional effects. The complexity of these orderings is investigated and their practical relevance is discussed. Second, two different methods to compute reasonable goal orderings are developed. One of them is based on planning graphs, while the other investigates the set of actions directly. Finally, it is shown how the ordering relations, which have been derived for a given set of goals G, can be used to compute a so-called goal agenda that divides G into an ordered set of subgoals. Any planner can then, in principle, use the goal agenda to plan for increasing sets of subgoals. This can lead to an exponential complexity reduction, as the solution to a complex planning problem is found by solving easier subproblems. Since only a polynomial overhead is caused by the goal agenda computation, there is the potential to speed up planning algorithms dramatically, as we demonstrate in the empirical evaluation, where we use this method in the IPP planner.
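Operationally, the goal-agenda idea can be sketched as follows (an illustrative reading, not the IPP implementation; `base_planner` is a hypothetical interface):

```python
def plan_with_goal_agenda(initial_state, goal_layers, base_planner):
    """`goal_layers`: list of goal sets already ordered by the computed
    reasonable/forced orderings (earlier layers achieved first).
    `base_planner(state, goals)` is any planner returning
    (plan, resulting_state)."""
    state, full_plan, goals_so_far = initial_state, [], set()
    for layer in goal_layers:
        goals_so_far |= layer              # plan for a growing goal set
        plan, state = base_planner(state, goals_so_far)
        full_plan += plan
    return full_plan
```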
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:38:06 GMT" } ]
1,306,972,800,000
[ [ "Hoffmann", "J.", "" ], [ "Koehler", "J.", "" ] ]
1106.0244
D. F. Gordon
D. F. Gordon
Asimovian Adaptive Agents
null
Journal Of Artificial Intelligence Research, Volume 13, pages 95-153, 2000
10.1613/jair.720
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this research is to develop agents that are adaptive, predictable, and timely. At first blush, these three requirements seem contradictory. For example, adaptation risks introducing undesirable side effects, thereby making agents' behavior less predictable. Furthermore, although formal verification can assist in ensuring behavioral predictability, it is known to be time-consuming. Our solution to the challenge of satisfying all three requirements is the following. Agents have finite-state automaton plans, which are adapted online via evolutionary learning (perturbation) operators. To ensure that critical behavioral constraints are always satisfied, agents' plans are first formally verified. They are then reverified after every adaptation. If reverification concludes that constraints are violated, the plans are repaired. The main objective of this paper is to improve the efficiency of reverification after learning, so that agents have a sufficiently rapid response time. We present two solutions: positive results that certain learning operators are a priori guaranteed to preserve useful classes of behavioral assurance constraints (which implies that no reverification is needed for these operators), and efficient incremental reverification algorithms for those learning operators that have negative a priori results.
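The adapt-then-reverify architecture can be sketched abstractly. The reachability test below stands in for the paper's formal verification and says nothing about their incremental reverification algorithms; all callables are hypothetical stand-ins.

```python
def reaches_bad_state(transitions, start, bad_states):
    """Safety check by reachability: can the finite-state plan reach a
    forbidden state? `transitions(state)` yields successor states."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in bad_states:
            return True
        if s not in seen:
            seen.add(s)
            stack.extend(transitions(s))
    return False

def adapt_and_reverify(plan, mutate, verify, repair, steps):
    """Adapt a plan online with a learning operator `mutate`,
    reverifying after every change and repairing on violation."""
    for _ in range(steps):
        candidate = mutate(plan)
        plan = candidate if verify(candidate) else repair(candidate)
    return plan
```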
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:38:25 GMT" } ]
1,306,972,800,000
[ [ "Gordon", "D. F.", "" ] ]
1106.0245
J. Baxter
J. Baxter
A Model of Inductive Bias Learning
null
Journal Of Artificial Intelligence Research, Volume 12, pages 149-198, 2000
10.1613/jair.731
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task.
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:38:38 GMT" } ]
1,306,972,800,000
[ [ "Baxter", "J.", "" ] ]
1106.0246
C. Bhattacharyya
C. Bhattacharyya, S. S. Keerthi
Mean Field Methods for a Special Class of Belief Networks
null
Journal Of Artificial Intelligence Research, Volume 15, pages 91-114, 2001
10.1613/jair.734
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The chief aim of this paper is to propose mean-field approximations for a broad class of belief networks, of which sigmoid and noisy-or networks can be seen as special cases. The approximations are based on a powerful mean-field theory suggested by Plefka. We show that Saul, Jaakkola and Jordan's approach is the first-order approximation in Plefka's approach, via a variational derivation. The application of Plefka's theory to belief networks is not computationally tractable. To tackle this problem, we propose new approximations based on Taylor series. Small-scale experiments show that the proposed schemes are attractive.
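As background for these claims (standard variational notation, not the paper's): mean-field methods lower-bound the log likelihood of the visible units v using a factorized distribution q over the hidden units h,

```latex
\ln p(v) \;\ge\; \mathbb{E}_{q}\!\left[\ln p(h, v)\right] + H(q),
\qquad
q(h) = \prod_i q_i(h_i),
```

and the abstract's point is that optimizing this bound corresponds to the first-order term of Plefka's expansion, with the paper's Taylor-series schemes approximating the intractable higher-order corrections.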
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:38:54 GMT" } ]
1,306,972,800,000
[ [ "Bhattacharyya", "C.", "" ], [ "Keerthi", "S. S.", "" ] ]
1106.0247
B. Nebel
B. Nebel
On the Compilability and Expressive Power of Propositional Planning Formalisms
null
Journal Of Artificial Intelligence Research, Volume 12, pages 271-315, 2000
10.1613/jair.735
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent approaches of extending the GRAPHPLAN algorithm to handle more expressive planning formalisms raise the question of what the formal meaning of "expressive power" is. We formalize the intuition that expressive power is a measure of how concisely planning domains and plans can be expressed in a particular formalism by introducing the notion of "compilation schemes" between planning formalisms. Using this notion, we analyze the expressiveness of a large family of propositional planning formalisms, ranging from basic STRIPS to a formalism with conditional effects, partial state specifications, and propositional formulae in the preconditions. One of the results is that conditional effects cannot be compiled away if plan size should grow only linearly but can be compiled away if we allow for polynomial growth of the resulting plans. This result confirms that the recently proposed extensions to the GRAPHPLAN algorithm concerning conditional effects are optimal with respect to the "compilability" framework. Another result is that general propositional formulae cannot be compiled into conditional effects if the plan size should be preserved linearly. This implies that allowing general propositional formulae in preconditions and effect conditions adds another level of difficulty in generating a plan.
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:39:07 GMT" } ]
1,306,972,800,000
[ [ "Nebel", "B.", "" ] ]
1106.0249
C. Boutilier
C. Boutilier, R. I. Brafman
Partial-Order Planning with Concurrent Interacting Actions
null
Journal Of Artificial Intelligence Research, Volume 14, pages 105-136, 2001
10.1613/jair.740
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to generate plans for agents with multiple actuators, agent teams, or distributed controllers, we must be able to represent and plan using concurrent actions with interacting effects. This has historically been considered a challenging task requiring a temporal planner with the ability to reason explicitly about time. We show that with simple modifications, the STRIPS action representation language can be used to represent interacting actions. Moreover, algorithms for partial-order planning require only small modifications in order to be applied in such multiagent domains. We demonstrate this fact by developing a sound and complete partial-order planner for planning with concurrent interacting actions, POMP, that extends existing partial-order planners in a straightforward way. These results open the way to the use of partial-order planners for the centralized control of cooperative multiagent systems.
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:39:53 GMT" } ]
1,306,972,800,000
[ [ "Boutilier", "C.", "" ], [ "Brafman", "R. I.", "" ] ]
1106.0250
J. L. Ambite
J. L. Ambite, C. A. Knoblock
Planning by Rewriting
null
Journal Of Artificial Intelligence Research, Volume 15, pages 207-261, 2001
10.1613/jair.754
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Domain-independent planning is a hard combinatorial problem. Taking into account plan quality makes the task even more difficult. This article introduces Planning by Rewriting (PbR), a new paradigm for efficient high-quality domain-independent planning. PbR exploits declarative plan-rewriting rules and efficient local search techniques to transform an easy-to-generate, but possibly suboptimal, initial plan into a high-quality plan. In addition to addressing the issues of planning efficiency and plan quality, this framework offers a new anytime planning algorithm. We have implemented this planner and applied it to several existing domains. The experimental results show that the PbR approach provides significant savings in planning effort while generating high-quality plans.
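The PbR control loop is easy to sketch generically. This is a greedy first-improvement variant with hypothetical interfaces; the paper's rule language, cost model, and search strategies are richer.

```python
import random

def plan_by_rewriting(initial_plan, rewrite_rules, cost, iterations=1000):
    """Local search over plans: start from an easy-to-generate plan and
    accept rewrites that lower cost. Each rule maps a plan to a list of
    candidate rewritten plans. Anytime: the best plan so far can be
    returned whenever planning must stop."""
    best = initial_plan
    for _ in range(iterations):
        rule = random.choice(rewrite_rules)
        candidates = rule(best)
        if not candidates:
            continue
        candidate = min(candidates, key=cost)
        if cost(candidate) < cost(best):   # accept only improving rewrites
            best = candidate
    return best
```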
[ { "version": "v1", "created": "Wed, 1 Jun 2011 16:40:10 GMT" } ]
1,306,972,800,000
[ [ "Ambite", "J. L.", "" ], [ "Knoblock", "C. A.", "" ] ]