# A Collection of Definitions of Intelligence

**Shane Legg**
IDSIA, Galleria 2, Manno-Lugano CH-6928, Switzerland
[email protected] www.idsia.ch/∼shane

**Marcus Hutter**
IDSIA, Galleria 2, Manno-Lugano CH-6928, Switzerland
RSISE/ANU/NICTA, Canberra, ACT, 0200, Australia
[email protected] www.hutter1.net

15 June 2007 · arXiv:0706.3639v1 [cs.AI] 25 Jun 2007

**Abstract.** This paper is a survey of a large number of informal definitions of "intelligence" that the authors have collected over the years. Naturally, compiling a complete list would be impossible, as many definitions of intelligence are buried deep inside articles and books. Nevertheless, the 70-odd definitions presented here are, to the authors' knowledge, the largest and most well-referenced collection there is.

**Keywords:** intelligence definitions, collective, psychologist, artificial, universal.

**Contents:** 1 Introduction · 2 Collective definitions · 3 Psychologist definitions · 4 AI researcher definitions · 5 Is a single definition possible? · References
[22] S. Legg and M. Hutter. A formal measure of machine intelligence. In *Proc. 15th Annual Machine Learning Conference of Belgium and The Netherlands (Benelearn'06)*, pages 73–80, Ghent, 2006.

[23] D. Lenat and E. Feigenbaum. On the thresholds of knowledge. *Artificial Intelligence*, 47:185–250, 1991.

[24] H. Masum, S. Christensen, and F. Oppacher. The Turing ratio: Metrics for open-ended tasks. In *GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference*, pages 973–980, New York, 2002. Morgan Kaufmann Publishers.

[25] J. McCarthy. What is artificial intelligence? www-formal.stanford.edu/jmc/whatisai/whatisai.html, 2004.

[26] M. Minsky. *The Society of Mind*. Simon and Schuster, New York, 1985.

[27] H. Nakashima. AI as complex information processing. *Minds and Machines*, 9:57–80, 1999.

[28] U. Neisser, G. Boodoo, T. J. Bouchard, Jr., A. W. Boykin, N. Brody, S. J. Ceci, D. F. Halpern, J. C. Loehlin, R. Perloff, R. J. Sternberg, and S. Urbina. Intelligence: Knowns and unknowns. *American Psychologist*, 51(2):77–101, 1996.

[29] A. Newell and H. A. Simon. Computer science as empirical enquiry: Symbols and search. *Communications of the ACM*, 19(3):113–126, 1976.

[30] J. Piaget. *The Psychology of Intelligence*. Routledge, New York, 1963.

[31] D. Poole, A. Mackworth, and R. Goebel. *Computational Intelligence: A Logical Approach*. Oxford University Press, New York, NY, USA, 1998.

[32] R. Schank. Where's the AI? *AI Magazine*, 12(4):38–49, 1991.

[33] D. K. Simonton. An interview with Dr. Simonton. In J. A. Plucker, editor, *Human Intelligence: Historical Influences, Current Controversies, Teaching Resources*. http://www.indiana.edu/∼intell, 2003.

[34] J. Sattler. *Assessment of Children: Cognitive Applications*. Jerome M. Sattler Publisher Inc., San Diego, 4th edition, 2001.

[35] R. J. Sternberg, editor. *Handbook of Intelligence*. Cambridge University Press, 2000.

[36] R. J. Sternberg. An interview with Dr. Sternberg. In J. A. Plucker, editor, *Human Intelligence: Historical Influences, Current Controversies, Teaching Resources*. http://www.indiana.edu/∼intell, 2003.

[37] L. L. Thurstone. *The Nature of Intelligence*. Routledge, London, 1924.

[38] P. Voss. Essentials of general intelligence: The direct path to AGI. In B. Goertzel and C. Pennachin, editors, *Artificial General Intelligence*. Springer-Verlag, 2005.

[39] P. Wang. On the working definition of intelligence. Technical Report 94, Center for Research on Concepts and Cognition, Indiana University, 1995.

[40] D. Wechsler. *The Measurement and Appraisal of Adult Intelligence*. Williams & Wilkins, Baltimore, 4th edition, 1958.

[41] R. M. Yerkes and A. W. Yerkes. *The Great Apes: A Study of Anthropoid Life*. Yale University Press, New Haven, 1929.
## 1 Introduction

"Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it." - R. J. Sternberg quoted in [14]

Despite a long history of research and debate, there is still no standard definition of intelligence. This has led some to believe that intelligence may be approximately described, but cannot be fully defined. We believe that this degree of pessimism is too strong. Although there is no single standard definition, if one surveys the many definitions that have been proposed, strong similarities between many of them quickly become obvious. In many cases different definitions, suitably interpreted, actually say the same thing but in different words. This observation led us to believe that a single general and encompassing definition for arbitrary systems is possible. Indeed, we have constructed a formal definition of intelligence, called universal intelligence [22], which has strong connections to the theory of optimal learning agents [19].

Rather than exploring very general formal definitions of intelligence, here we will instead take the opportunity to present the many informal definitions that we have collected over the years. Naturally, compiling a complete list would be impossible, as many definitions of intelligence are buried deep inside articles and books. Nevertheless, the 70-odd definitions presented below are, to the best of our knowledge, the largest and most well-referenced collection there is. We continue to add to this collection as we discover further definitions, and keep the most up-to-date version of the collection available online [21]. If you know of additional definitions that we could add, please send us an email.

## 2 Collective Definitions

In this section we present definitions that have been proposed by groups or organisations.
In many cases definitions of intelligence given in encyclopedias have either been contributed by an individual psychologist or quote an earlier definition given by a psychologist. In these cases we have chosen to attribute the quote to the psychologist, and have placed it in the next section. In this section we only list those definitions that either cannot be attributed to a specific individual, or represent a collective definition agreed upon by many individuals. As many dictionaries source their definitions from other dictionaries, we have endeavoured to always list the original source.

1. "The ability to use memory, knowledge, experience, understanding, reasoning, imagination and judgement in order to solve problems and adapt to new situations." AllWords Dictionary, 2006
2. "The capacity to acquire and apply knowledge." The American Heritage Dictionary, fourth edition, 2000
3. "Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought." American Psychological Association [28]
4. "The ability to learn, understand and make judgments or have opinions that are based on reason" Cambridge Advanced Learner's Dictionary, 2006
5. "Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience." Common statement with 52 expert signatories [13]
6. "The ability to learn facts and skills and apply them, especially when this ability is highly developed." Encarta World English Dictionary, 2006
7. ". . . ability to adapt effectively to the environment, either by making a change in oneself or by changing the environment or finding a new one . . . intelligence is not a single mental process, but rather a combination of many mental processes directed toward effective adaptation to the environment." Encyclopedia Britannica, 2006
8. "the general mental ability involved in calculating, reasoning, perceiving relationships and analogies, learning quickly, storing and retrieving information, using language fluently, classifying, generalizing, and adjusting to new situations." Columbia Encyclopedia, sixth edition, 2006
9. "Capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc." Random House Unabridged Dictionary, 2006
10. "The ability to learn, understand, and think about things." Longman Dictionary of Contemporary English, 2006
11. ": the ability to learn or understand or to deal with new or trying situations : . . . the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)" Merriam-Webster Online Dictionary, 2006
12. "The ability to acquire and apply knowledge and skills." Compact Oxford English Dictionary, 2006
13. ". . . the ability to adapt to the environment." World Book Encyclopedia, 2006
14. "Intelligence is a property of mind that encompasses many related mental abilities, such as the capacities to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn." Wikipedia, 4 October 2006
15. "Capacity of mind, especially to understand principles, truths, facts or meanings, acquire knowledge, and apply it to practise; the ability to learn and comprehend." Wiktionary, 4 October 2006
16. "The ability to learn and understand or to deal with problems." Word Central Student Dictionary, 2006
17. "The ability to comprehend; to understand and profit from experience." Wordnet 2.1, 2006
18. "The capacity to learn, reason, and understand." Wordsmyth Dictionary, 2006

## 3 Psychologist Definitions

This section contains definitions from psychologists. In some cases we have not yet managed to locate the exact reference and would appreciate any help in doing so.

1. "Intelligence is not a single, unitary ability, but rather a composite of several functions. The term denotes that combination of abilities required for survival and advancement within a particular culture." A. Anastasi [2]
2. ". . . that facet of mind underlying our capacity to think, to solve novel problems, to reason and to have knowledge of the world." M. Anderson [3]
3. "It seems to us that in intelligence there is a fundamental faculty, the alteration or the lack of which, is of the utmost importance for practical life. This faculty is judgement, otherwise called good sense, practical sense, initiative, the faculty of adapting one's self to circumstances." A. Binet [5]
4. "We shall use the term 'intelligence' to mean the ability of an organism to solve new problems . . . " W. V. Bingham [6]
5. "Intelligence is what is measured by intelligence tests." E. Boring [7]
6. ". . . a quality that is intellectual and not emotional or moral: in measuring it we try to rule out the effects of the child's zeal, interest, industry, and the like. Secondly, it denotes a general capacity, a capacity that enters into everything the child says or does or thinks; any want of 'intelligence' will therefore be revealed to some degree in almost all that he attempts;" C. L. Burt [8]
7. "A person possesses intelligence insofar as he has learned, or can learn, to adjust himself to his environment." S. S. Colvin quoted in [35]
8. ". . . the ability to plan and structure one's behavior with an end in view." J. P. Das
9. "The capacity to learn or to profit by experience." W. F. Dearborn quoted in [35]
10. ". . . in its lowest terms intelligence is present where the individual animal, or human being, is aware, however dimly, of the relevance of his behaviour to an objective. Many definitions of what is indefinable have been attempted by psychologists, of which the least unsatisfactory are 1. the capacity to meet novel situations, or to learn to do so, by new adaptive responses and 2. the ability to perform tests or tasks, involving the grasping of relationships, the degree of intelligence being proportional to the complexity, or the abstractness, or both, of the relationship." J. Drever [9]
11. "Intelligence A: the biological substrate of mental ability, the brain's neuroanatomy and physiology; Intelligence B: the manifestation of intelligence A, and everything that influences its expression in real life behavior; Intelligence C: the level of performance on psychometric tests of cognitive ability." H. J. Eysenck
12. "Sensory capacity, capacity for perceptual recognition, quickness, range or flexibility of association, facility and imagination, span of attention, quickness or alertness in response." F. N. Freeman quoted in [35]
13. ". . . adjustment or adaptation of the individual to his total environment, or limited aspects thereof . . . the capacity to reorganize one's behavior patterns so as to act more effectively and more appropriately in novel situations . . . the ability to learn . . . the extent to which a person is educable . . . the ability to carry on abstract thinking . . . the effective use of concepts and symbols in dealing with a problem to be solved . . . " W. Freeman
14. "An intelligence is the ability to solve problems, or to create products, that are valued within one or more cultural settings." H. Gardner [11]
15. ". . . performing an operation on a specific type of content to produce a particular product." J. P. Guilford
16. "Sensation, perception, association, memory, imagination, discrimination, judgement and reasoning." N. E. Haggerty quoted in [35]
17. "The capacity for knowledge, and knowledge possessed." V. A. C. Henmon [16]
18. ". . . cognitive ability." R. J. Herrnstein and C. Murray [17]
29. "The ability to carry on abstract thinking." L. M. Terman quoted in [35]
30. "Intelligence, considered as a mental trait, is the capacity to make impulses focal at their early, unfinished stage of formation. Intelligence is therefore the capacity for abstraction, which is an inhibitory process." L. L. Thurstone [37]
31. "The capacity to inhibit an instinctive adjustment, the capacity to redefine the inhibited instinctive adjustment in the light of imaginally experienced trial and error, and the capacity to realise the modified instinctive adjustment in overt behavior to the advantage of the individual as a social animal." L. L. Thurstone quoted in [35]
32. "A global concept that involves an individual's ability to act purposefully, think rationally, and deal effectively with the environment." D. Wechsler [40]
33. "The capacity to acquire capacity." H. Woodrow quoted in [35]
34. ". . . the term intelligence designates a complexly interrelated assemblage of functions, no one of which is completely or accurately known in man . . . " R. M. Yerkes and A. W. Yerkes [41]
35. ". . . that faculty of mind by which order is perceived in a situation previously considered disordered." R. W. Young quoted in [20]

## 4 AI Researcher Definitions

This section lists definitions from researchers in artificial intelligence.

1. ". . . the ability of a system to act appropriately in an uncertain environment, where appropriate action is that which increases the probability of success, and success is the achievement of behavioral subgoals that support the system's ultimate goal." J. S. Albus [1]
2. "Any system . . . that generates adaptive behaviour to meet goals in a range of environments can be said to be intelligent." D. Fogel [10]
3. "Achieving complex goals in complex environments" B. Goertzel [12]
4. "Intelligent systems are expected to work, and work well, in many different environments. Their property of intelligence allows them to maximize the probability of success even if full knowledge of the situation is not available. Functioning of intelligent systems cannot be considered separately from the environment and the concrete situation including the goal." R. R. Gudwin [15]
5. "[Performance intelligence is] the successful (i.e., goal-achieving) performance of the system in a complicated environment." J. A. Horst [18]
6. "Intelligence is the ability to use optimally limited resources - including time - to achieve goals." R. Kurzweil [20]
7. "Intelligence is the power to rapidly find an adequate solution in what appears a priori (to observers) to be an immense search space." D. Lenat and E. Feigenbaum [23]
8. "Intelligence measures an agent's ability to achieve goals in a wide range of environments." S. Legg and M. Hutter [22]
9. ". . . doing well at a broad range of tasks is an empirical definition of 'intelligence'" H. Masum [24]
10. "Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines." J. McCarthy [25]
11. ". . . the ability to solve hard problems." M. Minsky [26]
12. "Intelligence is the ability to process information properly in a complex environment. The criteria of properness are not predefined and hence not available beforehand. They are acquired as a result of the information processing." H. Nakashima [27]
13. ". . . in any real situation behavior appropriate to the ends of the system and adaptive to the demands of the environment can occur, within some limits of speed and complexity." A. Newell and H. A. Simon [29]
14. "[An intelligent agent does what] is appropriate for its circumstances and its goal, it is flexible to changing environments and changing goals, it learns from experience, and it makes appropriate choices given perceptual limitations and finite computation." D. Poole [31]
15. "Intelligence means getting better over time." R. Schank [32]
16. "Intelligence is the ability for an information processing system to adapt to its environment with insufficient knowledge and resources." P. Wang [39]
17. ". . . the mental ability to sustain successful life." K. Warwick quoted in [4]
18. ". . . the essential, domain-independent skills necessary for acquiring a wide range of domain-specific knowledge - the ability to learn anything. Achieving this with 'artificial general intelligence' (AGI) requires a highly adaptive, general-purpose system that can autonomously acquire an extremely wide range of specific knowledge and skills and can improve its own cognitive ability through self-directed learning." P. Voss [38]
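Definition 8 above is the informal counterpart of the authors' formal measure of universal intelligence [22]. As a sketch (notation follows that paper), the intelligence of an agent $\pi$ is its expected performance $V_\mu^\pi$, summed over the space $E$ of computable environments $\mu$ and weighted by their Kolmogorov complexity $K(\mu)$:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

Simple environments thus receive exponentially more weight, and an agent scores highly only by performing well across a wide range of environments, which is exactly what the informal definition says.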
# A Robust Linguistic Platform for Efficient and Domain-Specific Web Content Analysis

Thierry Hamon, Adeline Nazarenko, Thierry Poibeau, Sophie Aubin, Julien Derivière
LIPN - UMR CNRS 7030
99, avenue J.B. Clément, F-93430 Villetaneuse, France
[email protected]

**Abstract.** Web semantic access in specific domains calls for specialized search engines with enhanced semantic querying and indexing capacities, which pertain both to information retrieval (IR) and to information extraction (IE). A rich linguistic analysis is required either to identify the relevant semantic units to index and weight them according to their specific statistical distribution, or as the basis of an information extraction process. Recent developments make Natural Language Processing (NLP) techniques reliable enough to process large collections of documents and to enrich them with semantic annotations. This paper focuses on the design and development of a text processing platform, Ogmios, which has been developed in the ALVIS project. The Ogmios platform exploits existing NLP modules and resources, which may be tuned to specific domains, and produces linguistically annotated documents. We show how the three constraints of genericity, domain semantic awareness and performance can be handled all together.

## 1. Introduction

Search engines like Google or Yahoo offer access to billions of textual web pages. These tools are very popular and seem to be sufficient for a large number of general user queries on the Internet. However, some other queries are more complex, requiring specific knowledge or processing strategies: no really satisfactory solution exists for these requests. There is thus a need for more specific search engines dedicated to specialized domains or users. Let us consider the case of text mining in microbiology, for example. Given the specificity and the reliability of the information that is sought by scientists, it is clear that one needs more than existing search engines.
Even if recent developments in biology and biomedicine are reported in large bibliographical databases (e.g. Flybase, specialized on Drosophila melanogaster, or Medline), such databases and the associated search functionalities are not sufficient to satisfy biologists' specific information needs, such as finding information on gene interactions in order to progressively figure out a whole interaction network. We previously argued that looking for this kind of relational information requires a domain-specific linguistic analysis and parsing of the documents (Alphonse et al., 2004). The ALVIS project aims at developing an open-source search engine with extended semantic search facilities. Compared to state-of-the-art search engines (like Google, the most popular one), the ALVIS search engine is domain-specific. It relies on a specialized crawler, which selects web pages on terminological grounds. Indexing exploits various types of linguistic and domain-specific annotation. Through a dedicated user interface, the ALVIS search engine processes the query more accurately, taking into account the topic and the context of search to refine both the query and the document analysis.
time, we added about 500 words of the biological domain to the LP lexicon in different classes, mainly nouns, adjectives and verbs.

## Specific Constructions

Some words already defined in the LP lexicon present a specific usage in biological texts, which implied some modifications, including moving words from one class to another and adapting or creating rules. The main motivation for moving words from one class to another is that the abstracts are written by non-native English speakers. This point was also raised by (Pyysalo et al., 2004). One way to allow the parsing of such ungrammatical sentences is to relax constraints, by moving some words from the countable to the mass-countable class for instance. Some very frequent words present idiosyncratic uses (particular valency of verbs, for instance), which induced the modification or creation of rules. Numbers and measure units are omnipresent in the corpus and were not necessarily well described, or even present, in the lexicon/grammar.

## Structural Ambiguity

We identified two cases of ambiguity that can be partially resolved by exploiting terminological information. Prepositional attachment is a tricky point that is often fixed using statistical information from the text itself (Hindle & Rooth, 1993), a larger corpus (Bourigault & Frérot, 2004), the web (Volk, 2002) or external resources such as WordNet (Stetina & Nagao, 1997). The second major ambiguity factor is the attachment of series of more than two nouns, as in *two-component signal transduction systems*. We noticed that such cases often appear inside larger nominal phrases, often corresponding to domain-specific terms. For this reason, we decided to identify terms in a pre-processing step and to reduce them to their syntactic head. If needed, the internal analysis of terms is added to the parsing result for the simplified sentence.
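The pre-processing step described above can be sketched as follows. This is a minimal, hypothetical illustration (the term list and head choices are invented for the example, not the actual Ogmios terminological resources): each known multi-word term is collapsed to its syntactic head before parsing, and the substitutions are recorded so the internal analysis of each term can later be re-attached to the parse of the simplified sentence.

```python
import re

# Hypothetical term-to-head mapping; in practice this would come from a
# domain terminology extracted in a prior step.
TERMS = {
    "two-component signal transduction systems": "systems",
    "sporulation process": "process",
}

def simplify_terms(sentence: str):
    """Replace each known multi-word term with its syntactic head.

    Returns the simplified sentence and the list of substitutions made,
    so term-internal analyses can be re-inserted after parsing.
    """
    substitutions = []
    for term, head in TERMS.items():
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(sentence):
            sentence = pattern.sub(head, sentence)
            substitutions.append((term, head))
    return sentence, substitutions

simplified, subs = simplify_terms(
    "Bacteria use two-component signal transduction systems to sense stress."
)
# The parser now sees a shorter sentence whose noun attachments are
# unambiguous: "Bacteria use systems to sense stress."
print(simplified)
```

The benefit reported in the evaluation below comes precisely from this shortening: fewer nouns in sequence means far fewer candidate linkages for the parser to consider.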
The strategy proposed by (Sutcliffe et al., 1995), which consists in linking the words contained in a compound (for instance *sporulation_process*), was excluded, as it increases the lexicon size without reducing the parsing complexity. Before practically integrating the use of terminology in our processing suite, we made a simulation of this simplification of terms.

## 6.3. Evaluation

We performed a two-stage evaluation of the modifications in order to measure the respective contributions of the LP adaptation on the one hand and of the term simplification on the other hand.

## Corpus and Criteria

We used a subset (10 files) of the MED-TEST corpus but, contrary to the first evaluation designed for choosing a parser, we wanted to measure the quality of the whole parse and not only of specific relations. Table 1 shows, for the MED-TEST subset, how out-of-lexicon words (OoL), i.e. unknown (UW) and guessed (GW) words, are handled, giving the number of such words (a) and the percentage of incorrect morpho-syntactic category assignments (b) with the original resources (lp), those adapted to biology (lp-bio), and finally the latter associated with the simplification of terms (lp-bio-t).

|     | lp (a) | lp (b) | lp-bio (a) | lp-bio (b) | lp-bio-t (a) | lp-bio-t (b) |
|-----|--------|--------|------------|------------|--------------|--------------|
| UW  | 244    | 41.1%  | 53         | 52.8%      | 26           | 19.2%        |
| GW  | 24     | 4.2%   | 72         | 0%         | 31           | 0%           |
| OoL | 268    | 38%    | 125        | 22.4%      | 57           | 8.8%         |

In Table 2, five criteria inform on the parsing time and quality for each sentence: the number of linkages (NbL), the parsing time (PT) in seconds, whether a complete linkage is found or not (CLF), the number of erroneous links (EL) and the quality of the constituency parse (CQ). NbW is the average number of words in a sentence, which varies with term simplification. The results are given for each of the three versions of the parser.

| Crit. | lp (Avg) | lp-bio (Avg) | lp-bio (%/lp) | lp-bio-t (Avg) | lp-bio-t (%/lp) |
|-------|----------|--------------|---------------|----------------|-----------------|
| NbW   | 24.05    | 24.05        | 100%          | 18.9           | 78.6%           |
| NbL   | 190,306  | 232,622      | 122.2%        | 1,431          | 0.75%           |
| PT    | 37.83    | 29.4         | 77.7%         | 0.53           | 1.4%            |
| CLF   | 0.54     | 0.72         | 133%          | 0.77           | 142.6%          |
| EL    | 2.87     | 1.91         | 66.5%         | 1.15           | 40.1%           |
| CQ    | 0.54     | 0.7          | 129.6%        | 0.8            | 148.1%          |

UW, GW, NbL, PT and CLF are objective data, while EL and CQ necessitate linguistic expertise. The CQ evaluation consisted in the assignment of a general quality score to the sentence.

## Results and Comments

The extension of the MG module reduced the number of erroneous morpho-syntactic category assignments (see Table 1) from 38% to 22.4%. 61% of the sentences where one or more assignment errors were corrected by the MG module actually have better parsing results (15% have been degraded). More generally, guessing more forms makes category assignment more reliable. The extension of the lexicon relieved the two modules of 143 assignments out of 268 (50 of which were wrong).
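The relative figures in Table 2 can be recomputed directly from the per-sentence averages; a minimal sketch of the arithmetic, using the EL (erroneous links) and NbW (words per sentence) values for lp versus lp-bio-t:

```python
def reduction(before: float, after: float) -> float:
    """Relative reduction between two averages, as a percentage."""
    return 100.0 * (1.0 - after / before)

# Averages taken from Table 2: lp vs. lp-bio-t.
el_reduction = reduction(2.87, 1.15)    # erroneous links per sentence
nbw_reduction = reduction(24.05, 18.9)  # words per sentence

print(f"EL reduced by {el_reduction:.0f}%")    # EL reduced by 60%
print(f"NbW reduced by {nbw_reduction:.1f}%")  # NbW reduced by 21.4%
```

The error reduction (about 60%) being nearly three times the word reduction (about 21.4%) is what supports the claim that the gain is not merely an artefact of shorter sentences.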
64% of the sentences where one or more assignment errors were corrected by the extension of the lexicon have better parsing results (18% of the sentences were degraded). The effect of rule modification and creation is difficult to evaluate precisely, though it actually improves the parsing, especially by relaxing the constraints on determiners and inserts. The most obvious contribution to the better parsing quality is that of term simplification. The drastic reduction in parsing time and number of linkages gives an idea of the reduction in complexity. It is not only due to the smaller number of words, since the number of erroneous links is reduced by 60% while the number of words is reduced by only 21.4%. This confirms previous similar studies that showed a 40% reduction of the error rate on the main syntactic relations with a French corpus.

The remaining errors are due to four different phenomena. First, the normalization step, prior to parsing, needs to be enhanced. Concerning LP, there are still lexicon gaps, wrong class assignments and a still unsatisfactory handling of numerical expressions. In addition, like (Sutcliffe et al., 1995), we identified a weakness of LP regarding coordination. A specific study of the coordination system in LP and in the biological texts may be necessary. Finally, some ambiguous nominal and prepositional attachments still remain in spite of term simplification. These may be resolved in a post-processing step, as in ExtrAns, which uses a corpus-based approach to retrieve the correct attachment from the different linkages given by LP for a sentence. Thus, the parser adaptation relies on three methods: the exploitation of a small base of morphological rules, the modification of the grammar, and an adequate integration that relieves the parser of everything that does not directly deal with structural ambiguity (POS and term tagging, especially).

## 7. Conclusion

We have presented in this paper a platform that has been designed to enrich specialized domain documents with linguistic annotations. While developments and experiments have been performed on biomedical texts, we assume that this architecture is generic enough to process other specialized documents. The platform is designed as a framework using existing NLP tools, which can be substituted by others if necessary. Several NLP modules have been integrated: named entity tagging, word and sentence segmentation, POS tagging, lemmatization, term tagging, and syntactic parsing. Semantic type tagging and anaphora resolution are currently under development. We also focused on the system performance, since this point is crucial for most Internet applications.
We have experimented a distributed design of the platform, by splitting the corpus in equal parts: this strategy dramatically increased the overall performance (see (Ravichandran et al., 2004). We have also shown that Ogmios is a robust NLP platform with respect to the high heterogeneity of the document sizes and types. These first experiments show that a deep analysis of web documents is possible. Besides the necessary improvement the Ogmios platform, our next goal is to assess the impact of NLP on IR performance. Our hypothesis is that this impact should be higher in the case of a specialized search engines than for a generic IR framework, on which the IR-NLP cooperation has mainly been tested until now. Specific experiments are currently carried out in the ALVIS project to test the potential resulting enhanced functionalities on a microbiological search engine. ## Acknowledgements This work is supported by the EU 6th Framework Program in the IST Priority under the ALVIS project. The material benefits from interactions with the ALVIS partners, especially with INRA-MIG. ## References Alphonse, E., Aubin, S., Bessieres, P., Bisson, G., Hamon, T., Laguarrigue, S., Manine, A.P., Nazarenko, A., Nedellec, C., Vetah, M.O.A., Poibeau, T., Weissenbacher, D.: Event-based
information extraction for the biomedical domain: the CADERIGE project. In: Workshop BioNLP (Biology and Natural language Processing), Conference Computational Linguistics (Coling 2004), Geneva (2004) Aubin, S., Nazarenko, A., Nédellec, C.: Adapting a General Parser to a Sublanguage. In: Proceedings of the International Conference on Recent Advances in Natural *Language* Processing (RANLP'05), Borovets, Bulgaria (2005) 89–93 Berroyer, J.F., Poibeau, T.: TagEN, un analyseur d'entitées nommées. LIPN Internal *Report*, Université Paris-Nord (2004) Bontcheva, K., Tablan, V., Maynard, D., Cunningham, H.: Evolving GATE to meet new challenges in language engineering. Natural Language *Engineering* 10 (2004) 349–374 Bourigault, D., Fr ́erot, C.: Ambiguïté de rattachement prépositionnel : introduction de ressources exogènes de sous-catégorisation dans un analyseur syntaxique de corpus endogène. In: Actes des 11mes journées sur le Traitement Automatique des *Langues* Naturelles, F`es, Maroc. (2004) Consortium, T.G.O.: Creating the Gene Ontology Resource: Design and Implementation. Genome Res. 11 (2001) 1425–1433 Cunningham, H., Bontcheva, K., Tablan, V., Wilks, Y.: Software infrastructure for language resources: a taxonomy of previous work and a requirements analysis. In: Proceedings of the 2nd International Conference on Language Resources *and Evaluation (LREC-2)*, Athens (2000) Ding, J., Berleant, D., Xu, J., Fulmer, A.W.: Extracting Biochemical Interactions from MEDLINE Using a Link Grammar Parser. In: 15th IEEE International Conference on Tools with Artificial Intelligence *(ICTAI'03)*. (2003) 467–471 Ferrucci, D., Lally, A.: UIMA: an architecture approach to unstructured information processing in a corporate research environment. Natural Language *Engineering* 10 (2004) 327–348 Grefenstette, G., Tapanainen P.: What is a word, what is a sentence? Problems of tokenization. In: Proceedings of the 3rd International Conference on Computational *Lexicography*. 
(1994) 79–87

Grishman, R.: Tipster architecture design document version 2.3. Technical report, DARPA (1997)

Grover, C., Lapata, M., Lascarides, A.: A Comparison of Parsing Technologies for the Biomedical Domain. Journal of Natural Language Engineering (2004)

Hindle, D., Rooth, M.: Structural Ambiguity and Lexical Relations. In: Meeting of the Association for Computational Linguistics (1993) 229–236

Lewis, D.D.: An Evaluation of Phrasal and Clustered Representations on a Text Categorization Task. In: Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Copenhagen, Denmark (1992)

MeSH: Medical Subject Headings. http://www.nlm.nih.gov/mesh/meshhome.html, National Library of Medicine, Bethesda, Maryland (1998)

Molla, D., Schneider, G., Schwitter, R., Hess, M.: Answer Extraction Using a Dependency Grammar in ExtrAns. Traitement Automatique des Langues, Special Issue on Dependency Grammars (2000) 145–178

Moreau, F.: Revisiter le couplage traitement automatique des langues et recherche d'information. Thèse d'informatique, Université de Rennes 1 (2006)

Muller, H.M., Kenny, E.E., Sternberg, P.W.: Textpresso: an ontology-based information retrieval and extraction system for biological literature. PLoS Biology 2 (2004) 1984–1998

National Library of Medicine, ed.: UMLS Knowledge Source. 13th edn. (2003)

Nazarenko, A., Alphonse, E., Derivière, J., Hamon, T., Vauvert, G., Weissenbacher, D.: The ALVIS Format for Linguistically Annotated Documents. In: Proceedings of the Language Resources and Evaluation Conference (LREC 2006), Genoa, Italy (2006) 1782–1786

Nédellec, C., Ould Abdel Vetah, M., Bessières, P.: Sentence Filtering for Information Extraction in Genomics: A Classification Problem. In: Proceedings of the International Conference on Practical Knowledge Discovery in Databases (PKDD'2001), LNAI 2167, Springer Verlag, Freiburg (2001) 326–338

Neff, M.S., Byrd, R.J., Boguraev, B.K.: The Talent system: TEXTRACT architecture and data model. Natural Language Engineering 10 (2004) 307–326

Popov, B., Kiryakov, A., Ognyanoff, D., Manov, D., Kirilov, A.: KIM – a semantic platform for information extraction and retrieval. Natural Language Engineering 10 (2004) 375–392

Pyysalo, S., Ginter, F., Pahikkala, T., Boberg, J., Jarvinen, J., Salakoski, T., Koivula, J.: Analysis of link grammar on biomedical dependency corpus targeted at protein-protein interactions. In: Proceedings of the International Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA) (2004) 15–21

Schmid, H.: Probabilistic part-of-speech tagging using decision trees. In: Jones, D., Somers, H., eds.: New Methods in Language Processing. Studies in Computational Linguistics (1997)

Sleator, D., Temperley, D.: Parsing English with a Link Grammar. In: Third International Workshop on Parsing Technologies (1993)

Stetina, J., Nagao, M.: Corpus Based PP Attachment Ambiguity Resolution with a Semantic Dictionary. In: Zhou, J., Church, K.W., eds.: Proceedings of the Fifth Workshop on Very Large Corpora, Beijing, China (1997) 66–80

Sutcliffe, R.F.E., Brehony, T., McElligott, A.: The Grammatical Analysis of Technical Texts using a Link Parser. In: Second Conference of the Pacific Association for Computational Linguistics (PACLING'95) (1995)

Volk, M.: Using the Web as Corpus for Linguistic Research. In: Pajusalu, R., Hennoste, T., eds.: Tähendusepüüdja. Catcher of the Meaning. A Festschrift for Professor Haldur Õim. Publications of the Department of General Linguistics 3, University of Tartu, Estonia (2002)
<image>

This paper focuses on the design and development of the text processing platform, Ogmios, which has been developed in the ALVIS project. The challenges were:

- to handle rather large domain-specific collections of documents (typical specialized collections gather hundreds of thousands of documents, rather than hundreds of millions),
- to analyze documents from the web using a single platform, however heterogeneous they may be,
- to enrich documents with domain-specific semantic information to allow semantic querying.

The present paper shows how the three constraints of genericity, domain semantic awareness and performance can be handled together. The Ogmios platform is a generic one. It is instantiated using existing NLP modules and resources, which can be tuned to specific domains. Figure 1 shows the role of NLP annotation and resource acquisition in the whole IR process. For processing texts in the biological domain, we exploited specific named entity dictionaries and terminologies, and we adapted a generic syntactic analyzer. In this paper, we focus on parsing, which is the most challenging NLP step when one wants to annotate a large document collection.

Section 2 gives an overview of the existing platforms designed for document annotation. Sections 3 and 4 describe the global architecture of the platform and its various NLP modules. Section 5 describes the performance of our system on a collection of crawled documents related to microbiology. The last section presents our NLP adaptation strategy, showing as an example how a specialized parser can be derived from a generic one.

## 2. Background

Several text engineering architectures have been proposed to manage text processing over the last decade (Cunningham et al., 2000). GATE (General Architecture for Text Engineering) (Bontcheva et al., 2004) has been essentially designed for information extraction tasks. It aims at reusing NLP tools as built-in components.
Its interchange annotation format (CPSL, Common Pattern Specification Language) is based on the TIPSTER annotation format (Grishman, 1997).

Built on top of an external linguistic annotation platform, namely GATE, the KIM platform (Popov et al., 2004) can be considered a "meta-platform". It is designed for ontology population, semantic indexing and information retrieval. KIM has been integrated in massive semantic annotation projects such as the SWAN clusters¹ and SEKT². The authors identify scalability as a critical parameter for two reasons: (1) the platform has to be able to process large amounts of data in order to build and train statistical models for information extraction; (2) it has to support its own use as an online public service.

UIMA (Ferrucci & Lally, 2004), a new implementation architecture of TEXTRACT (Neff et al., 2004), is similar to GATE. It mainly differs from GATE in its data representation model. UIMA is a framework for the development of analysis engines. It offers components for the analysis of unstructured information streams such as HTML web pages. These components are supposed to range from lightweight to highly scalable implementations. The UIMA SDK is a collection of Java classes. The UIMA annotation format is called CAS (Common Analysis Structure). It is mainly based on the TIPSTER format (Grishman, 1997). CAS annotations are stand-off for the sake of flexibility. Documents can be processed either at the single document level or at the collection level. Collections are handled in UIMA by the Collection Processing Engine, which has some interesting features such as filtering, performance monitoring and parallelization.

The Textpresso system (Muller et al., 2004) has been specifically developed to mine biological documents, abstracts as well as articles. For instance, it has been used to process 16,000 abstracts and 3,000 full text articles related to Caenorhabditis *elegans*. It is designed as a curation system extracting gene-gene interactions that is also used as a search engine.
It integrates the following NLP modules: tokenizer, sentence segmentation, Part-of-Speech (POS) tagging, and semantic tagging based on the Gene Ontology (GOConsortium, 2001). While Textpresso is specifically designed for biomedical texts, our platform is more similar to GATE in its aim: proposing a generic platform to process large document collections.

Generally, very little information is given to evaluate the behavior of such systems on a collection of documents, whereas from our point of view this aspect is crucial. Our first tests showed that GATE is not suited to processing large collections of documents. GATE has been designed as a powerful environment for the conception and development of NLP applications in information extraction. Scalability is not central in its design, and information extraction usually deals with small sets of documents. However, we observed problems even on small sets of documents. We therefore chose to propose a platform able to analyze large amounts of documents, with a focus on processing efficiency.

## 3. A Modular And Tunable Platform

In the development of Ogmios, we focused on tool integration. Our initial goal was to exploit existing NLP tools rather than developing new ones³, but integrating heterogeneous tools while nevertheless achieving good performance in document annotation was challenging. The Ogmios platform was designed to test various combinations of annotations in order to identify which ones have a significant impact on information retrieval, information extraction or even extraction rule learning. In that respect, the platform can be viewed as a modular software architecture that can be configured to achieve various tasks.

¹ http://deri.ie/projects/swan
² http://sekt.semanticweb.org
³ We developed NLP systems only when no other solution was available. We preferably chose GPL or free-licence software when possible.

## 3.1. Specific Constraints

The reuse of NLP tools imposes specific constraints regarding software engineering, and processing domain-specific documents requires tuning resources to better fit the data.

From the software engineering point of view, the constraints mainly concern the input/output formats of the integrated NLP tools. Each tool has its own input and output format. Linking several tools together requires defining an interchange format. This engineering point of view is important for testing various combinations of annotations. The second type of constraint is the cost of linguistic analysis in terms of processing time. The main pitfall is deep syntactic dependency parsing, which is time-consuming and which led us to design a distributed architecture.

A domain-specific annotation platform also requires lexical and ontological resources, or the tuning of NLP tools such as the Part-of-Speech tagger or the parser. For instance, we have argued in (Alphonse et al., 2004) that the identification of gene interactions requires gene name tagging, which relates to traditional named entity recognition, term recognition and a reliable syntactic analysis.

## 3.2. General Architecture

The different processing steps are traditionally separated into modules (Bontcheva et al., 2004). Each module carries out a specific processing step: named entity recognition, word segmentation, POS tagging, parsing, semantic tagging or anaphora resolution. It wraps an NLP tool to ensure the conformity of the input/output format with the DTD. Annotations are recorded in an XML stand-off format to deal with the heterogeneity of NLP tool input/output formats (the DTD is fully described in (Nazarenko et al., 2006)).
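The stand-off convention can be illustrated with a short Python sketch: the raw text is stored once, and each annotation points into it by character offsets rather than wrapping the text inline. The element and attribute names below are invented for illustration; the actual format is the ALVIS DTD of (Nazarenko et al., 2006).

```python
import xml.etree.ElementTree as ET

def standoff_annotations(text, spans):
    """Build a stand-off annotation document: annotations reference the raw
    text by character offsets instead of enclosing it (element names are
    illustrative, not the actual ALVIS DTD)."""
    root = ET.Element("document")
    ET.SubElement(root, "text").text = text
    layer = ET.SubElement(root, "annotations")
    for start, end, atype in spans:
        ann = ET.SubElement(layer, "annotation",
                            {"start": str(start), "end": str(end), "type": atype})
        ann.text = text[start:end]  # redundant copy, kept here for readability
    return root

doc = standoff_annotations("B. subtilis expresses the gene.",
                           [(0, 11, "species"), (26, 30, "word")])
print(ET.tostring(doc, encoding="unicode"))
```

Because the offsets live outside the text, several heterogeneous tools can each add their own annotation layer without rewriting the document.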
The modularity of the architecture simplifies the substitution of one tool by another. Tuning to a specific field is ensured by the exploitation of specialized resources by each module. For instance, a targeted species or gene list can be added to the biology-specific named entity recognizer to process Medline abstracts. The ALVIS project also addresses the problem of automatically acquiring these specialized resources from a training corpus (see Figure 1 and (Alphonse et al., 2004)), but this question falls outside the scope of the present paper.

Figure 2 gives an overview of the architecture. The various modules composing the NLP line are represented as boxes. The description of these modules is given in Section 4. The arrows represent the data processing flow. Intermediary levels of annotation can be produced if the complete NLP line is not used. For instance, anaphora resolution is seldom activated.
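As a rough illustration of this modular design, the sketch below shows modules wrapping (here trivial, stand-in) tools and sharing one in-memory annotation structure. All names and the wrapper interface are hypothetical, not the platform's actual API.

```python
class Module:
    """Wraps one NLP tool; reads and writes a shared annotation structure."""
    def __init__(self, name, func):
        self.name, self.func = name, func
    def run(self, doc):
        doc[self.name] = self.func(doc)   # add one annotation layer
        return doc

# Illustrative stand-ins for real tools (TagEN, TreeTagger, ...).
pipeline = [
    Module("named_entities", lambda d: []),          # NE tagging runs first
    Module("words", lambda d: d["text"].split()),    # naive word segmentation
    Module("pos", lambda d: ["UNK"] * len(d["words"])),
]

def annotate(text, modules):
    doc = {"text": text}
    for m in modules:        # modules are called sequentially
        m.run(doc)           # outputs are kept in memory until the end
    return doc               # serialized to stand-off XML in the real platform

result = annotate("genes are expressed", pipeline)
```

Swapping one tool for another then amounts to replacing a single `Module` entry, which is the point of the modular architecture described above.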
- Named Entity tagging takes place very early in the NLP line because, in many sublanguages, unrecognized named entities hinder most NLP steps;
- Terminological tagging is used as such but is also considered an aid for syntactic parsing. As this latter step is time-consuming, we exploit the fact that terminological analysis reduces the parsing cost.

For each document, the NLP modules are called sequentially. The outputs of the modules are stored in memory until the end of the processing. The XML output is recorded at the end of the document processing.

## 4. Description Of The NLP Modules

This section describes the different NLP modules. It also explains the expected impact of each linguistic annotation step on IR or IE performance.

## Named Entity Tagging

The Named Entity tagging module aims at annotating semantic units with syntactic and semantic types. Each text sequence corresponding to a named entity is tagged with a unique tag corresponding to its semantic value (for example a "gene" type for gene names, a "species" type for species names, etc.). We use the TagEN Named Entity tagger (Berroyer & Poibeau, 2004), which is based on a set of linguistic resources and grammars. Named entity tagging has a direct impact on search performance when the query contains one or two named entities, as those semantic units have a high discriminative power.

## Word And Sentence Segmentation

This module identifies sentence and word boundaries. We use simple regular expressions, based on the algorithm proposed in (Grefenstette & Tapanainen, 1994). Part of the segmentation has been implicitly performed during the Named Entity tagging to solve some ambiguities, such as the abbreviation dot in the sequence "B. subtilis", which could be understood as a full stop if it were not analyzed beforehand.

## Morpho-Syntactic Tagging

This module aims at associating a part-of-speech (POS) tag with each word. It assumes that word and sentence segmentation has been performed.
We use a probabilistic part-of-speech tagger, TreeTagger (Schmid, 1997). The POS tags are not used as such for IR, but POS tagging facilitates the rest of the linguistic processing.

## Lemmatization

This module associates each word with its lemma, i.e. its canonical form. The experiments presented in (Moreau, 2006) show that this morphological normalization increases the performance of search engines. If a word cannot be lemmatized (for instance a number or a foreign word), the information is omitted. This module assumes that word segmentation and morpho-syntactic information are provided. Even though it is a distinct module, we currently exploit the TreeTagger output, which provides lemmas as well as POS tags.

## Terminology Tagging

This module aims at recognizing domain-specific phrases in a document, like *gene expression* or *spore coat cell*. These phrases are considered the most relevant terminological items. They can be provided through terminological resources such as the Gene Ontology (GOConsortium, 2001), the MeSH (MeSH, 1998) or, more widely, the UMLS (UMLS, 2003). They can also be acquired through corpus analysis (see Figure 1). Providing a given terminology tunes the term
tagging to the corresponding domain. Previous annotation levels such as lemmatization and word segmentation, but also named entities, are required.

The goal of identifying domain-specific phrases in documents is the same as for named entity recognition, *i.e.* to identify the relevant semantic units. Even though previous experiments (see (Lewis, 1992) among others) have shown that phrases have little impact on IR performance, we argue that terminology should have a more significant impact on specialized search engines, as a terminology is relevant for a specific domain. In addition, a normalization procedure can associate a canonical form with any phrase occurrence (e.g. *gene expression*, *expression of gene*, *gene expressed*…). This normalization step is similar to the lemmatization of words. Gathering associated variants under a single form modifies the phrase frequencies and thus affects IR.

## Parsing

The parsing module aims at exhibiting the graph of syntactic dependency relations between the words of a sentence. Parsing is a time- and resource-consuming NLP task, especially when compared to other NLP tasks like named entity recognition or part-of-speech tagging. As mentioned above, syntactic analysis is especially important for tasks that involve relations between entities (either information extraction or relational queries such as *X's speeches* as opposed to *speeches on* or *relative to X*). However, this technology is not yet fully compatible with Information Retrieval or Extraction. Even though processing time is a critical point for syntactic parsing, we argue that it may enhance semantic access to web documents. On the one hand, it is usually not necessary to parse entire documents. A good filtering procedure may select the most relevant sections to parse. We still have to develop a method for pre-filtering the textual segments that are worth parsing, as proposed in (Nédellec et al., 2001).
On the other hand, as we will show in Section 5, a good recognition of terms can significantly reduce the number of possible parses and consequently the parsing time. In Ogmios, the word level of annotation is required in the parser input. Depending on the choice of the parser, the morpho-syntactic level may also be needed. The Link Grammar Parser (Sleator & Temperley, 1993) is integrated.

## Semantic Type Tagging And Anaphora Resolution

The last modules are currently under test and should be integrated in the next release of the platform. Semantic type tagging associates with the previously identified semantic units tags referring to ontological concepts. This allows a semantic querying of the document base. The anaphora resolution module establishes coreference links between anaphoric pronoun occurrences and the antecedents they refer to. Even though solving anaphora has only a small impact on frequency counts, and therefore on IR, it increases IE recall: for instance, *it inhibits Y* may stand for *X inhibits Y* and must be interpreted as such in an extraction engine dealing with gene interactions.

## 5. Performance Analysis

We carried out an experiment on a collection of 55,329 web documents from the biological domain. All the documents went through all the NLP modules, up to term tagging (as mentioned before, the goal is not to parse whole documents but only some filtered parts of them). A list of 400,000 named entities, including species and gene names, and a list of 375,000 terms, issued from the MeSH and the Gene Ontology, have been used.
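Before turning to the measurements, the kind of per-step timing instrumentation used in this experiment can be sketched in Python (the platform itself relies on the Time::HiRes Perl functions; the step names and stand-in functions below are purely illustrative):

```python
import time

def timed_pipeline(doc, steps):
    """Run each annotation step and record how long it takes, then report
    each step as a percentage of the total (as in Table 4). Step names and
    functions are illustrative stand-ins for the real NLP modules."""
    timings = {}
    for name, func in steps:
        t0 = time.perf_counter()
        doc = func(doc)
        timings[name] = time.perf_counter() - t0
    total = sum(timings.values()) or 1.0   # guard against a zero total
    report = {name: 100.0 * t / total for name, t in timings.items()}
    return doc, timings, report

steps = [("tokenization", lambda d: d.split()),
         ("term_tagging", lambda d: d)]
doc, timings, report = timed_pipeline("gene expression in B. subtilis", steps)
```

In the real platform the per-step durations are recorded inside the annotated XML documents themselves, so timing statistics can be aggregated over the whole collection afterwards.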
Figure 3 shows the distribution of the input document sizes (both axes are on a log scale).

<image>

Most documents have an XML size between 1 KB and 100 KB. The size of the biggest document is about 5.7 MB.

We used 20 machines to annotate these documents. Most of these machines were standard personal computers with 1 GB of RAM and a 2.9 or 3.1 GHz processor. We also used a computer with 8 GB of RAM and two 2.8 GHz Xeon (dual-core) processors. Their operating systems were either Debian Linux or Mandrake Linux. The server and three NLP clients were running on the 8 GB biprocessor machine. Only one NLP client was running on each standard personal computer.

Even though a real benchmark would require several runs to evaluate the performance, we consider these figures an interesting indication of the platform's processing time. Timers are run between each function call in order to measure how long each step takes (in user time). We used the functions provided by the Time::HiRes Perl package. All the time results are recorded in the annotated XML documents.

|                                | Average number of units per document | Total number of units in the collection |
|--------------------------------|--------------------------------------|-----------------------------------------|
| Tokens                         | 5,021.9                              | 277,846,470                             |
| Named entities                 | 81.88                                | 4,530,368                               |
| Words                          | 1,912.65                             | 105,821,243                             |
| Sentences                      | 85.41                                | 4,726,003                               |
| Part-of-speech tags and lemmas | 1,883.5                              | 104,208,536                             |
| Terms                          | 250.76                               | 13,874,089                              |

Table 3: Average and total numbers of linguistic units.

The annotation of the documents was completed in 35 hours. Table 3 shows the total number of entities found in the document collection. 106 million words and 4.72 million sentences
were processed; 4.53 million named entities and 13.9 million domain-specific phrases were identified. Each document contains, on average, 1,913 words, 85 sentences, 82 named entities and 251 domain-specific phrases. 147 documents contained no words at all; they therefore underwent the tokenization step only. One of our NLP clients processed a 414,995-word document.

Table 4 shows the average processing time for each document. Each document was processed in 37 seconds on average. Due to the size of the exploited resources, the most time-consuming steps are term tagging (56% of the overall processing time) and named entity recognition (16%).

|                                          | Average processing time (s) | Percentage |
|------------------------------------------|-----------------------------|------------|
| Loading XML input doc.                   | 0.38                        | 1.02       |
| Tokenization                             | 0.7                         | 1.88       |
| Named entity recognition                 | 6.12                        | 16.42      |
| Word segmentation                        | 5.19                        | 13.92      |
| Sentence segmentation                    | 0.18                        | 0.48       |
| Part-of-speech tagging and lemmatization | 1.84                        | 4.94       |
| Term tagging                             | 20.83                       | 55.89      |
| Rendering XML output doc.                | 2.03                        | 5.45       |
| Total                                    | 37.27                       | 100        |

Table 4: Average processing time for one document (in seconds).

The whole document collection, except two documents, has been analyzed. Thanks to the distribution of the processing, problems occurring on a specific document had no consequence on the whole process: the clients in charge of those documents were simply restarted. The performance obtained on this collection shows the robustness of the NLP platform and its ability to analyze large and heterogeneous collections of documents in a reasonable time. We have demonstrated the efficiency of the overall process for semantic crawlers and its adequacy for a precise indexing of web documents.

## 6. Tuning A Syntactic Analyzer To The Biological Domain

This section presents our strategy to tune NLP tools to a given specialized domain.
We take the parser as an example, as its adaptation is the richest and most complex one. In order to extract structured pieces of information from texts, one needs to link isolated chunks of text together. Most of the time, chunks of text correspond to named entities, and relations are expressed through verbs or predicative nouns. We thus need a reliable and precise analysis of the syntactic relations between phrases. For those reasons, we chose to integrate a symbolic dependency-based parser (in contrast with a constituent-based one). Instead of redeveloping a new parser for each sublanguage, we try to define a method for adapting a general parser to a specific sublanguage. This section presents a strategy to adapt the Link Parser (LP) (Sleator & Temperley, 1993) to parse Medline abstracts dealing with genomics. More details are given in (Aubin et al., 2005).
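One simple way terminological tagging can ease parsing, in the spirit of the strategy above, is to collapse each recognized multiword term into a single token before the parser runs, so the parser sees one noun instead of an ambiguous word sequence. The sketch below is an illustration of this idea under a greedy longest-match assumption, not the exact method of (Aubin et al., 2005):

```python
def collapse_terms(words, terminology):
    """Rewrite each recognized multiword term as one underscore-joined token.
    Greedy longest match: at each position, try the longest phrase first.
    (An illustration of term-aware preprocessing, not the actual method.)"""
    max_len = max((len(t) for t in terminology), default=1)
    out, i = [], 0
    while i < len(words):
        for n in range(min(max_len, len(words) - i), 1, -1):
            if tuple(words[i:i + n]) in terminology:
                out.append("_".join(words[i:i + n]))
                i += n
                break
        else:
            out.append(words[i])
            i += 1
    return out

terms = {("spore", "coat", "cell")}
print(collapse_terms("the spore coat cell is formed".split(), terms))
# → ['the', 'spore_coat_cell', 'is', 'formed']
```

Fewer tokens and fewer attachment sites mean fewer candidate parses, which is consistent with the parsing-time reduction reported in Section 5.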
# Mixed Integer Linear Programming For Exact Finite-Horizon Planning In Decentralized POMDPs

Raghav Aras, Alain Dutech, François Charpillet

INRIA-Lorraine / Loria, 615 rue du Jardin Botanique, 54602 Villers-lès-Nancy, France {aras, dutech, charp}@loria.fr

October 28, 2018

## Abstract

We consider the problem of finding an n-agent joint-policy for the optimal finite-horizon control of a decentralized Pomdp (Dec-Pomdp). This is a problem of very high complexity (NEXP-hard for n ≥ 2). In this paper, we propose a new mathematical programming approach to the problem. Our approach is based on two ideas: first, we represent each agent's policy in the sequence-form and not in the tree-form, thereby obtaining a very compact representation of the set of joint-policies. Second, using this compact representation, we solve the problem as an instance of combinatorial optimization, for which we formulate a mixed integer linear program (MILP). The optimal solution of the MILP directly yields an optimal joint-policy for the Dec-Pomdp. Computational experience shows that formulating and solving the MILP requires significantly less time to solve benchmark Dec-Pomdp problems than existing algorithms. For example, the multi-agent tiger problem for horizon 4 is solved in 72 seconds with the MILP, whereas existing algorithms require several hours.

## 1 Introduction

In a *finite-horizon* Dec-Pomdp [1], a set of n agents cooperate to control a Markov decision process for κ steps under two constraints: *partial observability* and *decentralization*. Partial observability signifies that the agents are imperfectly informed about the state of the process during control. Decentralization signifies that the agents are differently imperfectly informed during the control. The agents begin the control of the process with the same, possibly imperfect, information about the state.
During the control each agent receives *private* information about the state of the process, which he

arXiv:0707.2506v1 [cs.AI] 17 Jul 2007
## 5.1 Linearization Of f(x): Step 1

The simple idea in linearizing a nonlinear function is to use a variable for each nonlinear term that appears in the function. In the case of f(x), the nonlinear terms are, for each joint-sequence q of length κ, $\prod_{i=1}^{n}\overline{x}_{i}[q_{i}]$. Therefore, to replace the nonlinear terms in f(x), we need to use a variable for every joint-sequence q of length κ. Let $\overline{y}[q] \geq 0$ be the variable for q and let

$$f(\overline{y})\equiv\sum_{q\in\mathcal{S}^{\kappa}}\nu(q)\overline{y}[q]\qquad(17)$$

So the first step in linearizing f(x) is to change the objective in MP-Dec to f(y) and to introduce the $|\mathcal{S}^{\kappa}|$-vector $\overline{y} \geq 0$ of variables into it. We denote this modified MP by MP1-Dec.

## 5.2 Linearization Of f(x): Step 2

Once the objective is changed to f(y), we need to relate the variables representing joint-sequences (the vector $\overline{y}$) to those representing the agents' sequences (the $\overline{x}_i$ vectors). In other words, we need to add the following constraints to MP1-Dec:

$$\prod_{i=1}^{n}\overline{x}_{i}[q_{i}]=\overline{y}[q],\quad\forall\,q\in\mathcal{S}^{\kappa}\qquad(18)$$

But the constraints (18) are *nonconvex*. So, if they were added to MP1-Dec, it would amount to maximizing a linear function under nonconvex, nonlinear constraints, and again we would have no guarantee of finding the globally optimal solution. We therefore must also linearize these constraints. We shall do this in this step and the next.

Suppose that $(\overline{x}_1, \overline{x}_2, \ldots, \overline{x}_n)$ is a solution to MP1-Dec. Then, for each joint-sequence q of length κ, $\prod_{i=1}^{n}\overline{x}_{i}[q_{i}]$ takes a value in [0, 1]. In other words, it can take an infinite number of values. We can limit the values it can take by requiring that the vectors $\overline{x}_i$ be vectors of *binary* variables, 0 or 1. Moreover, since we want $\prod_{i=1}^{n}\overline{x}_{i}[q_{i}]$ to equal $\overline{y}[q]$, but want to avoid the constraints (18), we should also require that each $\overline{y}$ variable be a binary variable.
Thus, the second step in linearizing f(x) is to add the following constraints to MP1-Dec:

$$\overline{x}_{i}[p]\in\{0,1\},\quad\forall\,i\in N,\ \forall\,p\in\mathcal{S}_{i}\qquad(19)$$

$$\overline{y}[q]\in\{0,1\},\quad\forall\,q\in\mathcal{S}^{\kappa}\qquad(20)$$

Note that with these constraints in MP1-Dec, $\overline{x}_i$ would represent a deterministic κ-policy of the ith agent. Constraints (19)-(20) are called *integer constraints*. We denote the MP formed by adding the integer constraints to MP1-Dec by MP2-Dec.

## 5.3 Linearization Of f(x): Step 3

This is the key step in the linearization. The number of sequences of length κ in a κ-policy of the ith agent is $\tau_i = |\Omega_i|^{\kappa-1}$. Hence the number of joint-sequences of length κ
in a κ-joint-policy is $\tau = \prod_{i=1}^{n}\tau_i$. Let $\tau_{-i} = \tau/\tau_i$. Now suppose $(\overline{x}_1, \overline{x}_2, \ldots, \overline{x}_n)$ is a solution to MP2-Dec. Each $\overline{x}_i$ is a κ-step deterministic policy of the ith agent. The κ-joint-policy formed by them is also deterministic. If, for a sequence p of length κ, $\overline{x}_i[p] = 1$, then for exactly $\tau_{-i}$ joint-sequences q of length κ in which the sequence of the ith agent is p, $\prod_{j=1}^{n}\overline{x}_j[q_j] = 1$. On the other hand, if $\overline{x}_i[p] = 0$, then for each joint-sequence q in which the sequence of the ith agent is p, $\prod_{j=1}^{n}\overline{x}_j[q_j] = 0$. This can be represented mathematically as

$$\sum_{q\in\mathcal{S}^{\kappa}:q_{i}=p}\prod_{j=1}^{n}\overline{x}_{j}[q_{j}]=\tau_{-i}\overline{x}_{i}[p],\quad\forall\,i\in N,\ \forall\,p\in\mathcal{S}_{i}^{\kappa}\qquad(21)$$

The set of equations (21) is true for every κ-step deterministic joint-policy, and it allows us to linearize the constraints (18). All we have to do is add the following set of linear constraints to MP2-Dec:

$$\sum_{q\in{\mathcal{S}}^{\kappa}:q_{i}=p}\overline{y}[q]=\tau_{-i}\overline{x}_{i}[p],\quad\forall\,i\in N,\ \forall\,p\in\mathcal{S}_{i}^{\kappa}\qquad(22)$$

If these constraints are added to MP2-Dec, then the following holds,

$$\prod_{j=1}^{n}\overline{x}_{j}[q_{j}]=\overline{y}[q],\quad\forall\,q\in\mathcal{S}^{\kappa}\qquad(23)$$

because the right-hand sides of their corresponding equations are equal. Thus, we have achieved the linearization of the constraints (18) and therefore of f(x). We shall call the constraints (22) the *joint-policy constraints*. The MP obtained by adding the joint-policy constraints to MP2-Dec gives us the integer linear program ILP-Dec, on which the mixed ILP (MILP), the main contribution of this paper, is based. We give ILP-Dec below for the sake of completeness.

## 5.4 Integer Linear Program ILP-Dec

1. Variables:
   (a) A $|\mathcal{S}^{\kappa}|$-vector of variables, $\overline{y}$.
   (b) For each agent i ∈ N, an $|\mathcal{S}_i|$-vector of variables, $\overline{x}_i$.
2.
Objective:

$$\mathrm{maximize}\quad f(\overline{y})\equiv\sum_{q\in\mathcal{S}^{\kappa}}\nu(q)\overline{y}[q]\qquad(24)$$

3. Constraints, for each agent i ∈ N:
   (a) Policy constraints:

$$\sum_{a_{i}\in A_{i}}\overline{x}_{i}[a_{i}]=1\qquad(25)$$
$$\forall\,t\in\{1,2,\ldots,\kappa-1\},\ \forall\,p\in\mathcal{S}_{i}^{t},\ \forall\,o_{i}\in\Omega_{i}:\quad\overline{x}_{i}[p]-\sum_{a\in A_{i}}\overline{x}_{i}[p\,o_{i}\,a]=0\qquad(26)$$

   (b) Joint-policy constraints, for each $p\in\mathcal{S}_{i}^{\kappa}$:

$$\sum_{q\in{\mathcal{S}}^{\kappa}:q_{i}=p}\overline{y}[q]=\tau_{-i}\overline{x}_{i}[p]\qquad(27)$$

4. Integer constraints:

$$\overline{x}_{i}[p]\in\{0,1\},\quad\forall\,i\in N,\ \forall\,p\in\mathcal{S}_{i}\qquad(28)$$

$$\overline{y}[q]\in\{0,1\},\quad\forall\,q\in\mathcal{S}^{\kappa}\qquad(29)$$

## 5.5 Mixed Integer Linear Program MILP-Dec

We thus have the following result.

Theorem 1 *An optimal solution* $(\overline{x}_1, \overline{x}_2, \ldots, \overline{x}_n)$ *to* ILP-Dec *yields an optimal κ-joint-policy for the given Dec-Pomdp.* (Proof omitted.)

An ILP is so called because it is an LP whose variables are constrained to take integer values. In ILP-Dec, each variable can be either 0 or 1. The principal method for solving an integer linear program is *branch and bound*. So when solving ILP-Dec, a tree of LPs is solved in which each LP is identical to ILP-Dec except that the integer constraints are replaced by non-negativity constraints (i.e., all the variables are allowed to take real values greater than or equal to 0). In general, the fewer the integer variables in an LP, the faster a solution is obtained. Therefore it is desirable to minimize the number of integer variables in an LP. An LP in which some variables are allowed to take real values while the remaining ones are constrained to be integers is called a mixed ILP (MILP). Thus, an MILP may be solved faster than an ILP of the same size. We say that an MILP is equivalent to an ILP if every solution to the MILP is also a solution to the ILP. An MILP that is equivalent to ILP-Dec can be conceived as follows. Let this MILP be denoted by MILP-Dec.
Let MILP-Dec be identical to ILP-Dec in all respects except the following: in each vector $\overline{x}_i$, only the variables representing sequences of length κ are constrained to take integer values 0 or 1; all the other variables in each $\overline{x}_i$ and all the variables in the vector $\overline{y}$ are allowed to take real values greater than or equal to 0. Due to the equivalence, we have the following result.

**Theorem 2** *An optimal solution $(\overline{x}_1, \overline{x}_2, \ldots, \overline{x}_n)$ to MILP-Dec yields an optimal κ-joint-policy for the given Dec-Pomdp.*

The proof of this theorem (and of the claim that MILP-Dec is equivalent to ILP-Dec) is omitted due to lack of space. The discussion henceforth applies to ILP-Dec as well.
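To make the semantics of ILP-Dec concrete, here is a toy sketch (ours, not from the paper) for two agents and horizon κ = 1, where a sequence is a single action and a joint-sequence is a joint-action. It brute-forces every 0-1 assignment satisfying the policy constraints (each agent deterministically picks one action), links $\overline{y}$ to the $\overline{x}_i$, and maximizes the objective (24). The values ν(q) are made up; a real implementation hands the (M)ILP to a solver such as CPLEX, as the paper does.

```python
from itertools import product

# Toy instance (ours, for illustration): 2 agents, horizon kappa = 1.
A1, A2 = ["a", "b"], ["x", "y"]
nu = {("a", "x"): 1.0, ("a", "y"): 4.0,          # hypothetical nu(q) values
      ("b", "x"): 2.0, ("b", "y"): 3.0}

best_val, best_policy = float("-inf"), None
# Policy constraint (25): each x_i is 0-1 with exactly one action set to 1,
# i.e. each agent deterministically picks a single action.
for a1, a2 in product(A1, A2):
    # Joint-policy link: y[q] = 1 iff q is the chosen joint-action.
    y = {q: int(q == (a1, a2)) for q in nu}
    val = sum(nu[q] * y[q] for q in nu)          # objective (24)
    if val > best_val:
        best_val, best_policy = val, (a1, a2)

print(best_policy, best_val)  # ('a', 'y') 4.0
```

The solver explores this same feasible set, but via branch and bound on the LP relaxations rather than exhaustive enumeration.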
## 6 Improving MILP-Dec

We now discuss two heuristics for improving the space and time requirements of formulating and solving MILP-Dec.

## 6.1 Identifying Dominated Sequences

The number of variables required in MILP-Dec can be reduced by using variables only for those sequences of each agent that are not *dominated*. Dominated sequences need not be represented in MILP-Dec because there always exists an optimal κ-joint-policy in which none of the policies contains a dominated sequence. We first define dominated sequences of length κ. Given sequences p and p′ of length κ of the ith agent, p′ shall be called a *co-sequence* of p if it is identical to p except for its last action. Let C(p) denote the set of co-sequences of p. Then, p is said to be dominated if there exists a probability distribution θ over C(p) such that for every joint-sequence q of length κ in which the sequence of the ith agent is p, the following holds:

$$\nu(q)\leq\sum_{p^{\prime}\in\mathcal{C}(p)}\theta(p^{\prime})\,\nu(q^{\prime})\qquad(30)$$

in which q′ = (q_1, . . ., q_{i−1}, p′, q_{i+1}, . . ., q_n). Dominated sequences of length κ can be identified through *iterated elimination*. Identifying dominated sequences of lengths less than κ is easier. A sequence p of length t is a *descendant* of a sequence p′′ of length j < t if the first j actions and j − 1 observations in p are identical to the j actions and j − 1 observations in p′′. A sequence p′′ of length j is dominated if every descendant of p′′ is dominated. So, for each agent, we first identify dominated sequences of length κ, and then, working backwards, we identify dominated sequences of lengths less than κ. Note that if dominated sequences are not represented by variables in MILP-Dec, then in each joint-policy constraint the = sign must be replaced by the ≤ sign.
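Condition (30) is a small linear-programming feasibility problem over θ. As an illustration (our sketch, not the paper's implementation), the special case of exactly two co-sequences can be checked exactly without an LP solver: with θ = (t, 1 − t), t ∈ [0, 1], every joint-sequence q yields one linear inequality in t, and p is dominated iff the resulting intervals intersect. All numbers below are hypothetical.

```python
# Special case of condition (30) with exactly two co-sequences (our sketch).
def dominated_two_cosequences(rows, eps=1e-9):
    """rows: list of (nu_q, nu_q1, nu_q2), where nu_q is the value of a
    joint-sequence q containing p, and nu_q1 / nu_q2 are the values of q
    with p replaced by its two co-sequences.  Returns True iff some
    t in [0, 1] satisfies t*nu_q1 + (1 - t)*nu_q2 >= nu_q for every row."""
    lo, hi = 0.0, 1.0
    for nu_q, nu_q1, nu_q2 in rows:
        a, b = nu_q1 - nu_q2, nu_q - nu_q2   # inequality: a * t >= b
        if abs(a) < eps:
            if b > eps:                      # constraint 0 >= b is violated
                return False
        elif a > 0:
            lo = max(lo, b / a)
        else:
            hi = min(hi, b / a)
    return lo <= hi + eps

# Hypothetical values: p is covered by mixing its co-sequences half-and-half.
print(dominated_two_cosequences([(1.0, 0.0, 2.0), (1.0, 2.0, 0.0)]))  # True
```

For more than two co-sequences the same test is a generic LP feasibility check; the interval form above is just the one-dimensional case.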
The MILP that results when the dominated sequences of all the agents are not represented by variables in MILP-Dec and the above modifications are made shall be denoted by MILP-Pr-Dec.

## 6.2 Adding Bounds Into MILP-Dec

The MILP solver can be guided in its path selection in the tree of LP problems, or made to terminate as early as possible, by providing lower and/or upper bounds on the objective function. In this paper we wish to illustrate the importance of integrating bounds into MILP-Dec, and so we have used rather loose bounds. Given $\mathcal{V}(t)$, the value of an optimal t-joint-policy, a lower bound on the value of the optimal (t + 1)-joint-policy is

$$\ell=\mathcal{V}(t)+\max_{a\in A}\min_{s\in S}R_{s}^{a}\qquad(31)$$

For an upper bound, the value u of an optimal κ-step policy of the Pomdp corresponding to the Dec-Pomdp can be used. This value can be determined by the linear program (32)-(35), which also finds the optimal κ-step policy for the Pomdp. Let $\mathcal{S}^t$ denote the set of joint-sequences of length t.
Table 1: Runtime in seconds on the MABC and MA-tiger problems.

| Algorithm | MABC, κ=3 | MABC, κ=4 | MABC, κ=5 | MA-tiger, κ=3 | MA-tiger, κ=4 |
|----------------|------|------|------|------|------|
| MILP-Dec | 0.86 | 900 | − | 3.7 | · |
| MILP-Dec(u) | 1.03 | 907 | − | 3.5 | · |
| MILP-Dec(ℓ) | 0.93 | 900 | − | 4.9 | 72 |
| MILP-Pr-Dec | 0.84 | 80 | · | 6.4 | · |
| MILP-Pr-Dec(u) | 0.93 | 10.2 | 25 | 6.2 | · |
| MILP-Pr-Dec(ℓ) | 0.84 | 120 | · | 7.6 | 175 |
| DP | 5 | 103 | | | |
| MAA∗ | t3 | t4 | t3 | t4 | |
| PBDP | 1.0 | 2.0 | 105 | t3 | t4 |
| DP-JESP | 0 | 0.02 | | | |
| Approx-DP | 0.05 | 1.0 | | | |
| MBDP | 0.01 | 0.01 | 0.02 | 0.46 | 0.72 |

Let qoa denote the joint-sequence obtained on appending the joint-observation o and the joint-action a to the joint-sequence q.

$$\mathrm{maximize}\quad u=\sum_{q\in\mathcal{S}^{\kappa}}\nu(q)\,y[q]\quad\mathrm{s.t.}\qquad(32)$$

$$\sum_{a\in A}y[a]=1\qquad(33)$$

$$y[q]-\sum_{a\in A}y[qoa]=0,\qquad\forall\,t<\kappa,\ q\in\mathcal{S}^{t},\ o\in\Omega\qquad(34)$$

$$y\geq0\qquad(35)$$

A bound is added to MILP-Dec by adding a constraint: the constraint $f(\overline{y}) \geq \ell$ adds the lower bound, and the constraint $f(\overline{y}) \leq u$ adds the upper bound.

## 7 Experiments

We formulated the MABC and MA-tiger problems as MILPs, and solved them using the ILOG CPLEX 10 solver on an Intel P4 machine with a 3.40 GHz processor and 2.0 GB of RAM. The runtime in seconds of MILP-Dec and MILP-Pr-Dec for different values of κ is shown in Table 1. In the first column, a parenthesized symbol, if present, indicates which bound is used. The runtime includes the time taken to identify dominated sequences and to compute the bound (e.g., to solve the LP for the Pomdp),
where applicable. We have also listed the runtimes of existing exact and approximate dynamic programming Dec-Pomdp algorithms as reported in the literature. The three exact algorithms are DP, MAA∗ and PBDP. The approximate algorithms are DP-JESP [8], Approx-DP and MBDP. As far as dominated sequences are concerned, the MABC problem had about 75% dominated sequences per agent for κ = 5, while MA-tiger had no dominated sequences for any horizon.

## 8 Discussion And Future Directions

In this paper we have introduced a new exact algorithm for solving finite-horizon Dec-Pomdps. The results in Table 1 show a clear advantage of the MILP algorithms over existing exact algorithms for the longest horizons considered in each problem. We now point out three directions in which this work can be extended.

Approximate algorithm: Our approach could be a good candidate for constructing an approximate algorithm. For instance, if MILP-Dec or one of its variants is able to solve a problem optimally for horizon κ very quickly, then it can be used as a ratchet for solving approximately for longer horizons in divisions of κ steps. Our initial experiments with this simple method on the MABC and MA-tiger problems indicate that it may be comparable, in runtime and in the value of the joint-policy found, with current approximate algorithms for solving long horizons (50, 100). This is particularly useful when the Dec-Pomdp problem cycles back to the original state in a few steps. In the MA-tiger problem, for example, upon the execution of the optimal 3-step joint-policy, denoted by $\sigma^3$, the process returns to its initial belief state. The value of $\sigma^3$ is 5.19. So we can perpetually execute $\sigma^3$ to obtain, in m steps, a total expected reward of 5.19m/3. Now, the value of $\sigma^2$, the optimal 2-step joint-policy, is −2. For controlling the MA-tiger problem for m steps, we may either (a) execute $\sigma^3$ m/3 times or (b) execute $\sigma^2$ m/2 times. The loss for doing (b) instead of (a) is 2.73 per step.
This loss can be made arbitrarily high by changing the reward function. In other words, finding $\sigma^3$ is much more important than finding $\sigma^2$. We can arrange for a similar difference in quality between $\sigma^4$ and $\sigma^3$; and MILP-Dec is able to find $\sigma^4$ in 72 seconds while other algorithms take hours. Thus, an exact, fast algorithm such as ours may prove crucial even for very small problems.

Dynamic programming: In formulating MILP-Dec we are required to first generate the set $\mathcal{S}_i^{\kappa}$ for each agent i. The size of this set is exponential in κ, and its generation is the major bottleneck for formulating MILP-Dec in memory. However, we can use dynamic programming to create each set $\mathcal{S}_i^{\kappa}$ incrementally in a backward fashion. Such a procedure does not require knowledge of $b_0$, and it is based on the same principle as the DP algorithm. In brief, the procedure is as follows. For each nonzero t ≤ κ, we generate for each agent a set of sequences of length t by doing a *backup* of a previously generated set of sequences of length t − 1 of the agent. We then compute for
each joint-sequence of length t an |S|-vector containing the values of the joint-sequence when the initial belief state is one of the states in S. We then *prune*, for each agent, sequences of length t that are dominated over the belief space formed by the cross-product of S and the set of joint-sequences of length t. By starting out with the set $\mathcal{S}_i^1$ (which is in fact just the set $A_i$) for each agent i, we can incrementally build the set $\mathcal{S}_i^{\kappa}$. Note that a backup of the set $\mathcal{S}_i^t$ creates $|A_i||\Omega_i||\mathcal{S}_i^t|$ new sequences; i.e., the growth is linear. In contrast, the backup of a set of policies represents an exponential growth. The merit of this procedure is that we may be able to compute an optimal joint-policy for a slightly longer horizon. More importantly, due to the linear growth of sequences in each iteration, it may be possible to solve for the infinite horizon by iterating until some stability or convergence in the values of joint-sequences is realized.

Pomdps: Finally, the approach consisting of the use of the sequence-form and mathematical programming could be applied to Pomdps. We have already shown in this paper how a finite-horizon Pomdp can be solved. In conjunction with a dynamic programming approach analogous to the one described above, it may be possible to compute the infinite-horizon discounted value function of a Pomdp.

## Acknowledgements

We are grateful to the anonymous reviewers for providing valuable comments on this work and suggestions for improving the paper.

## References

[1] Bernstein, D.; Givan, R.; Immerman, N.; and Zilberstein, S. 2002. The complexity of decentralized control of Markov decision processes. *Mathematics of Operations Research* 27(4):819–840.

[2] Blair, J. R.; Mutchler, D.; and van Lent, M. 1996. Perfect recall and pruning in games with imperfect information. *Computational Intelligence* 12:131–154.

[3] Emery-Montemerlo, R.; Gordon, G.; Schneider, J.; and Thrun, S. 2004.
Approximate solutions for partially observable stochastic games with common payoffs. In *Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS)*.

[4] Hansen, E.; Bernstein, D.; and Zilberstein, S. 2004. Dynamic programming for partially observable stochastic games. In *Proceedings of the 19th National Conference on Artificial Intelligence*, 709–715.

[5] Koller, D., and Megiddo, N. 1992. The complexity of zero-sum games in extensive form. *Games and Economic Behavior* 4(4):528–552.
[6] Koller, D.; Megiddo, N.; and von Stengel, B. 1994. Fast algorithms for finding randomized strategies in game trees. In *Proceedings of the 26th ACM Symposium on Theory of Computing*, 750–759.

[7] Kuhn, H. 1953. Extensive games and the problem of information. *Contributions to the Theory of Games* II:193–216.

[8] Nair, R.; Pynadath, D.; Yokoo, M.; and Tambe, M. 2003. Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings. In *Proceedings of the 18th International Joint Conference on Artificial Intelligence*, 705–711.

[9] Seuken, S., and Zilberstein, S. 2007. Memory-bounded dynamic programming for Dec-POMDPs. In *Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI)*.

[10] Szer, D., and Charpillet, F. 2006. Point-based dynamic programming for Dec-POMDPs. In *Proceedings of the 21st National Conference on Artificial Intelligence*.

[11] Szer, D.; Charpillet, F.; and Zilberstein, S. 2005. MAA*: A heuristic search algorithm for solving decentralized POMDPs. In *Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence*.

[12] Wilson, R. 1972. Computing equilibria of two-person games from the extensive form. *Management Science* 18:448–460.
of Koller's approach (and therefore of our approach) is that the size of the set of sequences from which each subset is drawn is only exponential in the horizon, and not doubly exponential in it, as is the case with the size of the set of policy-trees. This allows us to formulate an MILP whose size is exponential in κ and n. For small problems such as MA-tiger and MABC, it is feasible to represent the MILP in memory. Furthermore, and equally importantly, the constraints matrix of the MILP is *sparse*. The consequence is that in practice the MILP is solved very quickly (in the order of seconds). Thus, we have an effective method to compute an optimal deterministic finite-horizon joint-policy. Restricting attention to deterministic joint-policies does not limit the applicability of our approach in any way, since in every finite-horizon Dec-Pomdp there exists at least one optimal joint-policy that is deterministic. It is also not evident that relaxing this restriction has any benefit. Implicitly, existing algorithms also restrict attention to deterministic joint-policies. In this paper 'policy' and 'joint-policy' shall mean deterministic policy and deterministic joint-policy respectively, unless otherwise specified.

## 2 The Finite-Horizon Dec-Pomdp Problem

A finite-horizon Dec-Pomdp problem is defined by the following elements. We are given N, a set of n agents, and S, a set of states. The agents in N are numbered from 1 to n; the states are numbered from 1 to |S|. For each agent i, we are given $A_i$, the agent's set of actions, and $\Omega_i$, his set of observations. The cross-product $A_1 \times A_2 \times \ldots \times A_n$ is called the set of *joint-actions* and is denoted by A. Similarly, the cross-product $\Omega_1 \times \Omega_2 \times \ldots \times \Omega_n$ is called the set of *joint-observations* and is denoted by Ω. The joint-actions are numbered from 1 to |A| and the joint-observations from 1 to |Ω|.
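The joint-action and joint-observation sets are plain cross-products, so they can be materialized directly; the two-agent sets below are hypothetical, invented for illustration.

```python
from itertools import product

# Hypothetical two-agent example: per-agent action and observation sets.
A = [["listen", "open-left"], ["listen", "open-right"]]   # A_1, A_2
Omega = [["hear-left", "hear-right"]] * 2                 # Omega_1, Omega_2

# Joint-actions A_1 x A_2 and joint-observations Omega_1 x Omega_2.
joint_actions = list(product(*A))
joint_observations = list(product(*Omega))

print(len(joint_actions), len(joint_observations))  # 4 4
```

|A| is the product of the |A_i|, so both sets grow exponentially in the number of agents n.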
Then, for each ath joint-action, we are given the matrices $T^a$, $Z^a$ and the vector $R^a$: (a) $T^a_{ss'}$ is the probability of transitioning to the s′th state if the agents take the ath joint-action in the sth state. (b) $Z^a_{s'o}$ is the probability of the agents receiving the oth joint-observation and transitioning to the s′th state if they take the ath joint-action. (c) $R^a_s$ is the real-valued reward the agents obtain if they take the ath joint-action in the sth state. We are given $b_0$, the initial *belief state*, which is common knowledge amongst the agents. A belief state is a probability distribution over S; in a belief state b, the probability of the sth state is denoted by b[s]. Finally, we are given κ ≥ 1, a finite number that is the *horizon* of the control. The control of the Dec-Pomdp proceeds as follows. At each step t of the κ steps, the agents take a joint-action, they receive a joint-observation and a common reward $r_t$, and the process transitions to a new belief state as a function of the previous belief state, the joint-action and the joint-observation. However, at each step, agents do not reveal to one another the actions they take and the observations they receive at that step or at previous steps. Since an agent does not know the actions taken by the other agents and the observations received by the
<image>

other agents during the κ steps, at each step he takes actions strictly as a function of the actions he has taken previously and the observations he has received previously. This function is called his *policy*. To control the Dec-Pomdp for κ steps, each agent requires a κ-step policy, henceforth written as κ-policy. The tuple of the agents' policies forms a *joint-policy*. An optimal joint-policy is one which maximizes $E(\sum_{t=1}^{\kappa} r_t)$, the sum of expected rewards the agents obtain over the κ steps.

## 2.1 Policy In The Tree-Form

The canonical representation of a policy, used in existing Dec-Pomdp algorithms, is the tree-form. In this form, a κ-policy of the ith agent can be represented as a rooted tree with κ levels in which each non-terminal node has $|\Omega_i|$ children. This tree is called a κ-policy-tree. Each node is labeled by an action to take and each edge is labeled by an observation that may occur. Using a policy-tree, during the control the agent follows a path from the root to a leaf depending on the observations he receives. An example of a policy-tree is shown in Figure 1. The number of nodes in a κ-policy-tree of the ith agent is $\frac{|\Omega_i|^{\kappa}-1}{|\Omega_i|-1}$, which is exponential in κ. For example, with $|\Omega_i| = 2$, a 3-policy-tree, such as the one shown in Figure 1, has $\frac{2^3-1}{2-1} = 7$ nodes. The set of κ-policy-trees of the ith agent is the set of all $\frac{|\Omega_i|^{\kappa}-1}{|\Omega_i|-1}$-sized permutations of the actions in $A_i$. Therefore, the size of the set of κ-policy-trees of the ith agent is $|A_i|^{\frac{|\Omega_i|^{\kappa}-1}{|\Omega_i|-1}}$, doubly exponential in κ.

## 3 Policy In The Sequence-Form

The double exponentiality associated with the set of policy-trees can be avoided by using the *sequence-form* representation of a policy. We begin our description of this representation by defining a sequence.
**Definition 1** *A sequence of length t of the ith agent is an ordered list of 2t − 1 elements, t ≥ 1, in which the elements in odd positions are actions from $A_i$ and those in even positions are observations from $\Omega_i$.*

Thus, in a sequence of length t there are t actions and t − 1 observations. The shortest possible sequence is of length 1, and consists of just an action and no observations.
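Definition 1 translates directly into an enumeration: interleave every choice of t actions with every choice of t − 1 observations. The sketch below (ours, with made-up two-action, two-observation sets) also checks the resulting count, $|A_i|^t |\Omega_i|^{t-1}$.

```python
from itertools import product

def sequences(actions, observations, t):
    """All length-t sequences per Definition 1: t actions interleaved
    with t - 1 observations (odd positions actions, even observations)."""
    seqs = []
    for acts in product(actions, repeat=t):
        for obs in product(observations, repeat=t - 1):
            seq = [acts[0]]
            for o, a in zip(obs, acts[1:]):
                seq += [o, a]
            seqs.append(tuple(seq))
    return seqs

A_i, O_i = ["c", "f"], ["u", "v"]      # hypothetical agent-i sets
for t in (1, 2, 3):
    S_t = sequences(A_i, O_i, t)
    # count check: |A_i|^t * |Omega_i|^(t-1)
    assert len(S_t) == len(A_i) ** t * len(O_i) ** (t - 1)

print(len(sequences(A_i, O_i, 3)))     # 2^3 * 2^2 = 32
```

The total over t = 1..κ is therefore only exponential in κ, which is the advantage developed in Section 3.3.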
We denote the set of all possible sequences of length t which can be conceived from $A_i$ and $\Omega_i$ by $\mathcal{S}_i^t$. We denote the set $\mathcal{S}_i^1 \cup \mathcal{S}_i^2 \cup \ldots \cup \mathcal{S}_i^{\kappa}$ by $\mathcal{S}_i$. We shall now see how a κ-policy can be represented as a set of sequences, or more precisely as a subset of $\mathcal{S}_i$. Assume that κ = 3 and that the policy-tree ψ shown in Figure 1 belongs to the ith agent. Starting from the root-node and descending down the edges of the tree, we can enumerate the sequences of this tree. The first sequence we obtain is in the root-node itself: the sequence consisting of the action c and no observations. This is a sequence of length 1. Then, going down the edge labeled u from the root-node, we come to the node labeled with the action f. At this point, we obtain a second sequence cuf, which is of length 2; it has two actions and one observation. Similarly, taking the other edge from the root-node, we come to the node labeled d and obtain a third sequence cvd, also of length 2. When all the leaves of the tree have been visited, the set of sequences we obtain is

$$\mathcal{S}(\psi)=\{c,\;cuf,\;cvd,\;cufuc,\;cufvf,\;cvdud,\;cvdvc\}$$

This set contains 1 sequence of length 1, 2 sequences of length 2 and 4 sequences of length 3, giving a total of 7 sequences corresponding to the 7 nodes in ψ. The set S(ψ) is evidently equivalent to the policy-tree ψ; that is, *given* the set S(ψ), the agent can use it as a 3-step policy. As this simple exercise shows, any finite-step policy can be written as a finite set of sequences. Now, S(ψ) *is a subset of* $\mathcal{S}_i$, the set of all possible sequences of lengths less than or equal to 3, and so is every 3-policy of the ith agent. Thus, for any given value of κ, every κ-policy of the ith agent is a subset of $\mathcal{S}_i$. This is the main idea of the sequence-form representation of a policy.

## 3.1 Policy As A Vector

We can streamline the subset relationship between a κ-policy and $\mathcal{S}_i$ by representing the former as a vector of binary values.
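The enumeration of S(ψ) described above is a root-to-leaf walk. In a hypothetical encoding where a tree node is an (action, {observation: child}) pair (the tree below is ψ reconstructed from the sequence set listed above), it is a few lines:

```python
# Figure 1's 3-policy-tree psi, reconstructed from its sequence set S(psi).
# Our encoding (not the paper's): a node is (action, {observation: child}).
psi = ("c", {"u": ("f", {"u": ("c", {}), "v": ("f", {})}),
             "v": ("d", {"u": ("d", {}), "v": ("c", {})})})

def tree_to_sequences(node, prefix=""):
    """Enumerate S(psi): every root-to-node path yields one sequence."""
    action, children = node
    seq = prefix + action
    yield seq
    for obs, child in children.items():
        yield from tree_to_sequences(child, seq + obs)

print(sorted(tree_to_sequences(psi)))
# ['c', 'cuf', 'cufuc', 'cufvf', 'cvd', 'cvdud', 'cvdvc']
```

One sequence per node, so the walk reproduces the 7-element set from the text.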
Let the sequences in $\mathcal{S}_i$ be numbered from 1 to $|\mathcal{S}_i|$. Since every κ-policy of the ith agent is a subset of $\mathcal{S}_i$, every sequence in $\mathcal{S}_i$ is either in the policy or not. Thus a κ-policy of the ith agent can be represented as a $|\mathcal{S}_i|$-vector of binary values 0 or 1, such that the jth element of the vector equals 1 if the jth sequence in $\mathcal{S}_i$ is in the policy, and 0 otherwise. Let the set of $|\mathcal{S}_i|$-vectors of binary values 0 or 1 be denoted by $X_i$. Thus every κ-policy of the ith agent is a member of the set $X_i$. Let p be the jth sequence in $\mathcal{S}_i$. For a vector $x_i \in X_i$, the value of the jth element of $x_i$ shall be conveniently written as $x_i[p]$.

## 3.2 Policy Constraints Of The ith Agent

Thus, every κ-policy of the ith agent is a member of $X_i$. The converse is of course untrue; not every member of $X_i$ is a κ-policy. We therefore need to define which vectors in $X_i$ can represent a κ-policy. We shall give a more general definition, one that includes stochastic policies as well as deterministic ones. We shall in fact define which vectors in $\mathbb{R}^{|\mathcal{S}_i|}$ represent a κ-step policy, be it stochastic or deterministic. The definition takes the form of a system of linear equations which must be
satisfied by a vector in $\mathbb{R}^{|\mathcal{S}_i|}$ if it is to represent a κ-policy. Given a sequence p, an action a and an observation o, let poa denote the sequence obtained on appending o and a to the end of p. Let $\mathcal{S}'_i$ denote the set $\mathcal{S}_i^1 \cup \mathcal{S}_i^2 \cup \ldots \cup \mathcal{S}_i^{\kappa-1}$.

**Definition 2** *Let $|\mathcal{S}_i| = z$. A vector $w \in \mathbb{R}^z$ is a κ-step, possibly stochastic, policy of the ith agent if*

$$\sum_{a\in A_{i}}w[a]=1\qquad(1)$$

$$w[p]-\sum_{a\in A_{i}}w[poa]=0,\qquad\forall\;p\in\mathcal{S}^{\prime}_{i},\,o\in\Omega_{i}\qquad(2)$$

$$w\geq0\qquad(3)$$

We call the system of linear equations (1)-(3) the policy constraints of the ith agent. Policy constraints recreate the tree structure of a policy. They appear, in a slightly different form, as Lemma 5.1 in [6]. We can write the policy constraints in matrix form as $C_i w = b_i$, $w \geq 0$, where $C_i$ is the matrix of the coefficients of the variables in equations (1)-(2) and $b_i$ is a vector of appropriate length, representing the r.h.s. of the equations, whose first element is 1 and whose remaining elements are 0. Note that it is implicit in the above definition that the value of each element of w is constrained to be in the interval [0, 1]. Hence, we can define a deterministic κ-policy of the ith agent as follows.

**Definition 3** *A vector $x_i \in X_i$ is a κ-policy of the ith agent if $C_i x_i = b_i$.*

We shall call a policy represented as a vector a *policy-vector*, just to distinguish it from a policy-tree. The representation of a policy as a policy-vector is in fact the sequence-form representation we have been alluding to. Given a vector $x_i \in X_i$ which satisfies the policy constraints, the agent can use it just as he would use a policy-tree, without requiring any additional book-keeping. Let choosing a sequence mean taking the last action in the sequence. In using $x_i$, at the first step the agent chooses the action a such that $x_i[a] = 1$; there will be only one such action. Then, on receiving an observation, say o, he chooses the sequence aoa′ such that $x_i[aoa'] = 1$.
Again, there will be only one such sequence. In general, if at step t he has chosen the sequence p and then received the observation o, then at the (t + 1)th step he chooses the unique sequence poa′′ such that $x_i[poa''] = 1$. Thus, at each step, the agent must know the sequence of actions he has taken and the sequence of observations he has received up to that step in order to know which action to take according to $x_i$. This requirement is called *perfect recall* in game theory, and it is implicit in the use of a policy-tree.

## 3.3 Advantage Of The Sequence-Form Representation

The size of $\mathcal{S}_i^t$ is $|A_i|^t|\Omega_i|^{t-1}$. The size of $\mathcal{S}_i$ is thus $\sum_{t=1}^{\kappa}|A_i|^t|\Omega_i|^{t-1}$, exponential in κ. Since every κ-policy is in theory available if the set $\mathcal{S}_i$ is available, the latter serves as a *search space* for κ-policies of the ith agent. The good news is of course that this search space is only exponential in κ. This compares favorably with the search space represented by the set of κ-policy-trees, which is doubly exponential in κ. We thus
have at our disposal an exponentially smaller space in which to search for an agent's policy. More precisely, to find a κ-policy of the ith agent, we need to set up and solve the system of policy constraints. The number of equations in this system is $c_i = 1 + \sum_{t=1}^{\kappa-1}|A_i|^t|\Omega_i|^t$. $C_i$ is thus a $c_i \times |\mathcal{S}_i|$ matrix. Now notice that $C_i$ is a *sparse* matrix: it has only a very small number of nonzero entries per row or column, while most of its entries are 0s. In $C_i$, the number of nonzero entries per row is only $1 + |A_i|$, constant per row. Sparse matrices are typically easier to solve than dense matrices of the same size. The relatively small size of $C_i$ and its sparsity combine to give a relatively efficient method for finding a κ-policy of the ith agent.

## 4 Value Of A Joint-Policy

The agents control the finite-horizon Dec-Pomdp by a κ-step joint-policy, henceforth written as a κ-joint-policy. A joint-policy is just the tuple formed by the agents' individual policies; thus, a κ-joint-policy is an n-tuple of κ-policies. A κ-joint-policy may be an n-tuple of κ-policy-trees or an n-tuple of κ-policy-vectors. Given a joint-policy π in either representation, the policy of the ith agent in it shall be denoted by $\pi_i$. A joint-policy is evaluated by computing its *value*: the sum of expected rewards the agents obtain if it is executed starting from the given initial belief state $b_0$. The value of a joint-policy π shall be denoted by $\mathcal{V}(\pi)$.

## 4.1 Value Of A Joint-Policy As An n-Tuple Of Policy-Trees

Given a t-policy σ of an agent, t ≤ κ, let a(σ) denote the action in the root-node of σ and let σ(o′) denote the sub-tree attached to the root-node of σ into which the edge labeled by the observation o′ enters. Furthermore, given a t-joint-policy π, let a(π) denote the joint-action (a(π_1), a(π_2), . . ., a(π_n)), and given a joint-observation o, let π(o) denote the (t − 1)-joint-policy (π_1(o_1), π_2(o_2),
. . ., π_n(o_n)). Now let π be a κ-joint-policy which is an n-tuple of κ-policy-trees. The value of π is expressed in terms of the κ-step *value-function* of the Dec-Pomdp, denoted $V^{\kappa}$, as follows:

$$\mathcal{V}(\pi)=\sum_{s\in S}b_{0}[s]\,V^{\kappa}(s,\pi)\qquad(4)$$

in which $V^{\kappa}$ is expressed recursively as

$$V^{\kappa}(s,\pi)=R_{s}^{a(\pi)}+\sum_{o\in\Omega}\sum_{s^{\prime}\in S}T_{ss^{\prime}}^{a(\pi)}Z_{s^{\prime}o}^{a(\pi)}V^{\kappa-1}(s^{\prime},\pi(o))\qquad(5)$$

For t = 1, $V^t(s, a) = R_s^a$. An *optimal* κ-joint-policy is one whose value is maximal.

## 4.2 Value Of A Joint-Policy As An n-Tuple Of Policy-Vectors

The value of a κ-joint-policy that is an n-tuple of policy-vectors is expressed in terms of the values of its *joint-sequences*. A joint-sequence is defined analogously to a sequence.
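The tree-form recursion (4)-(5) can be evaluated directly on a toy instance. The sketch below is ours: two agents, two states, horizon 2, a single observation per agent so that Z is identically 1, and all transition, reward and belief numbers invented for illustration.

```python
# Toy evaluation of equations (4)-(5) (our sketch; all numbers made up).
S = [0, 1]
b0 = [1.0, 0.0]                               # initial belief state
# Joint-actions are (agent-1 action, agent-2 action).
T = {("a", "x"): [[0.0, 1.0], [1.0, 0.0]],    # T^a_{ss'}
     ("b", "x"): [[1.0, 0.0], [0.0, 1.0]]}
R = {("a", "x"): [1.0, 0.0],                  # R^a_s
     ("b", "x"): [0.0, 2.0]}
Z = lambda a, s2, o: 1.0                      # single joint-observation

# A policy-tree is (action, {observation: subtree}); pi is the agents' tuple.
def value(s, pi, t):
    a = tuple(tree[0] for tree in pi)                  # a(pi)
    if t == 1:
        return R[a][s]                                 # V^1(s, a) = R^a_s
    total = R[a][s]
    for o in ["o"]:                                    # joint-observations
        sub = tuple(tree[1][o] for tree in pi)         # pi(o)
        for s2 in S:
            total += T[a][s][s2] * Z(a, s2, o) * value(s2, sub, t - 1)
    return total                                       # equation (5)

pi = (("a", {"o": ("b", {})}), ("x", {"o": ("x", {})}))
V = sum(b0[s] * value(s, pi, 2) for s in S)            # equation (4)
print(V)  # 3.0
```

By hand: the first step yields reward 1 in state 0 and moves to state 1, where the second step's joint-action (b, x) yields reward 2, giving 3.0 in total.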
# A Leaf Recognition Algorithm For Plant Classification Using Probabilistic Neural Network

Stephen Gang Wu 1 , Forrest Sheng Bao 2 , Eric You Xu 3 , Yu-Xuan Wang 4 , Yi-Fan Chang 5 and Qiao-Liang Xiang 4

1 Institute of Applied Chemistry, Chinese Academy of Science, P. R. China
2 Dept. of Computer Science, Texas Tech University, USA
3 Dept. of Computer Science & Engineering, Washington University in St. Louis, USA
4 School of Information & Telecommunications Eng., Nanjing Univ. of P & T, P. R. China
5 Dept. of Electronic Engineering, National Taiwan Univ. of Science & Technology, Taiwan, R. O. China

Corresponding author's E-mail: [email protected]

Abstract—In this paper, we employ a Probabilistic Neural Network (PNN) with image and data processing techniques to implement general-purpose automated leaf recognition for plant classification. 12 leaf features are extracted and orthogonalized into 5 principal variables which constitute the input vector of the PNN. The PNN is trained by 1800 leaves to classify 32 kinds of plants with an accuracy greater than 90%. Compared with other approaches, our algorithm is an accurate artificial intelligence approach which is fast in execution and easy in implementation.

Index Terms**—Probabilistic Neural Network, feature extraction, leaf recognition, plant classification**

## I. Introduction

Plants exist everywhere we live, as well as in places without us. Many of them carry significant information for the development of human society. The urgent situation is that many plants are at risk of extinction, so it is very necessary to set up a database for plant protection [1]–[4]. We believe that the first step is to teach a computer how to classify plants. Compared with other methods, such as cell and molecular biology methods, classification based on leaf images is the first choice for plant classification. Sampling leaves and photographing them is low-cost and convenient.
One can easily transfer a leaf image to a computer, and a computer can extract features automatically using image processing techniques. Some systems employ descriptions used by botanists [5]–[8], but it is not easy to extract and transfer those features to a computer automatically. This paper tries to avoid human interference in feature extraction. How to extract or measure leaf features is also a long-discussed topic [9]–[15]. That makes the application of pattern recognition in this field a new challenge [1] [16]. According to [1], automatic data acquisition from living plants by computer has not been implemented. Several other approaches used their own pre-defined features. *Miao et al.* proposed an evidence-theory-based rose classification [3] based on many features of roses. *Gu et al.* tried leaf recognition using skeleton segmentation by wavelet transform and Gaussian interpolation [17]. *Wang et al.* used a moving median center (MMC) hypersphere classifier [18]. A similar method was proposed by *Du et al.* [1]; another of their papers proposed a modified dynamic programming algorithm for leaf shape matching [19]. *Ye et al.* compared the similarity between features to classify plants [2]. Many of the approaches above employ the k-nearest neighbor (k-NN) classifier [1] [17] [18], while some papers adopted Artificial Neural Networks (ANN). *Saitoh et al.* combined flower and leaf information to classify wild flowers [20]. *Heymans et al.* proposed an application of ANN to classify opuntia species [21]. *Du et al.* introduced shape recognition based on a radial basis probabilistic neural network trained by the orthogonal least squares algorithm (OLSA) and optimized by recursive OLSA [22]; it performs plant recognition through modified Fourier descriptors of leaf shape. Previous work has some disadvantages. Some methods are only applicable to certain species [3] [16] [21]. As expert systems, some methods compare the similarity between features [2] [8].
These require pre-processing work by humans, who must enter keys manually. The same problem affects methods extracting the features used by botanists [7] [16]. Among all approaches, ANN has the fastest speed and best accuracy for classification work. [22] indicates that ANN classifiers (MLPN, BPNN, RBFNN and RBPNN) run faster than k-NN (k = 1, 4) and the MMC hypersphere classifier, while also surpassing the other classifiers in accuracy. So this paper adopts an ANN approach. This paper implements a leaf recognition algorithm using easy-to-extract features and a highly efficient recognition algorithm. Our main improvements are in feature extraction and the classifier. All features are extracted from the digital leaf image and, except for one feature, all can be extracted automatically. The 12 features are orthogonalized by Principal Components Analysis (PCA) [23]. As to the classifier, we use PNN [24] for its fast speed and simple structure. The whole algorithm is easy to implement, using common approaches. The rest of this paper is organized as follows. Sec. II discusses image pre-processing. Sec. III introduces how the 12 leaf features are extracted. PCA and PNN are discussed in Sec. IV. Experimental results are given in Sec. V. Future work on improving our algorithm is mentioned in Sec. VI. Sec. VII concludes this paper.
<image>

## II. Image Pre-Processing

## A. Converting RGB Image to Binary Image

The leaf image is acquired by scanners or digital cameras. Since we have not found any digitizing device that saves the image in a lossless compression format, the image format here is JPEG. All leaf images are in 800 × 600 resolution. There is no restriction on the orientation of the leaves when photographing.

An RGB image is first converted into a grayscale image. Eq. 1 is the formula used to convert the RGB value of a pixel into its grayscale value:

$$\mathrm{gray} = 0.2989 \cdot R + 0.5870 \cdot G + 0.1140 \cdot B \qquad (1)$$

where R, G, B correspond to the red, green and blue components of the pixel, respectively.

The level used to convert the grayscale image into a binary image is determined from the RGB histogram. We accumulate the pixel values of the R, G, B channels over 3000 leaves and divide by 3000, the number of leaves. The average histogram over RGB of the 3000 leaf images is shown in Fig. 2.

<image>

There are two peaks in every color's histogram. The left peak corresponds to pixels belonging to the leaf, while the right peak corresponds to pixels of the white background. The lowest point between the two peaks lies around the value 242 on average, so we choose the level as 0.95 (242/255 = 0.949). The output image replaces all pixels in the input image with luminance greater than the level by the value 1 and all other pixels by the value 0. A rectangular averaging filter of size 3 × 3 is then applied to filter noise, and the pixel values are rounded to 0 or 1.

## B. Boundary Enhancement

When mentioning the leaf shape, the first thing that comes to mind may be the margin of the leaf. Convolving the image with a Laplacian filter with the following 3 × 3 spatial mask:

$$\begin{array}{rrr} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{array}$$

we obtain the margin of the leaf image. An example of image pre-processing is illustrated in Fig. 3. To render the boundary as a black curve on a white background, the "0" and "1" pixel values are swapped.

<image>
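The pre-processing steps described in Section II (grayscale conversion, thresholding at 0.95, 3 × 3 averaging, Laplacian boundary enhancement) can be sketched as follows in NumPy. This is only an illustration based on the formulas quoted in the text; the function names are ours, not the authors' released MATLAB code, and the zero-padded convolution at the image border is a simplifying assumption.

```python
import numpy as np

def rgb_to_gray(img):
    """Eq. 1: weighted sum of R, G, B channels (img is H x W x 3, values in [0, 255])."""
    return 0.2989 * img[..., 0] + 0.5870 * img[..., 1] + 0.1140 * img[..., 2]

def to_binary(gray, level=0.95):
    """Pixels brighter than level*255 (white background) -> 1, leaf pixels -> 0."""
    return (gray > level * 255).astype(float)

def convolve3x3(img, kernel):
    """Plain 3x3 'same' convolution with zero padding (sketch, not optimized)."""
    h, w = img.shape
    padded = np.pad(img, 1)          # zero padding by default
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out

AVG_3x3 = np.ones((3, 3)) / 9.0      # rectangular averaging filter
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def preprocess(img):
    gray = rgb_to_gray(img)
    binary = np.rint(convolve3x3(to_binary(gray), AVG_3x3))  # smooth, round to 0/1
    margin = convolve3x3(binary, LAPLACIAN)                  # boundary enhancement
    return binary, margin
```

Since both kernels are symmetric, correlation and convolution coincide here, so no kernel flip is needed.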
## III. Feature Extraction

In this paper, 12 commonly used digital morphological features (DMFs), derived from 5 basic features, are extracted so that a computer can obtain the feature values quickly and automatically (with only one exception).

## A. Basic Geometric Features

First, we obtain the 5 basic geometric features.

1) Diameter: The diameter is defined as the longest distance between any two points on the margin of the leaf. It is denoted as D.

2) Physiological Length: The only human-assisted part of our algorithm is marking the two terminals of the main vein of the leaf via mouse clicks. The distance between the two terminals is defined as the physiological length, denoted as Lp.

3) Physiological Width: Drawing a line through the two terminals of the main vein, one can plot infinitely many lines orthogonal to it, so the number of intersection pairs between those lines and the leaf margin is also infinite. The longest distance between the points of those intersection pairs is defined as the physiological width, denoted as Wp. Since pixel coordinates are discrete, we consider two lines orthogonal if the angle between them is 90° ± 0.5°. The relationship between physiological length and physiological width is illustrated in Fig. 4.
<image>

4) Leaf Area: The leaf area is easy to evaluate: simply count the number of pixels with binary value 1 in the smoothed leaf image. It is denoted as A.

5) Leaf Perimeter: Denoted as P, the leaf perimeter is calculated by counting the number of pixels that make up the leaf margin.

## B. 12 Digital Morphological Features

Based on the 5 basic features introduced above, we define 12 digital morphological features used for leaf recognition.

1) Smooth factor: We use the effect of noise on the image area to describe the smoothness of the leaf image. In this paper, the smooth factor is defined as the ratio between the area of the leaf image smoothed by a 5 × 5 rectangular averaging filter and that smoothed by a 2 × 2 rectangular averaging filter.

2) Aspect ratio: The aspect ratio is defined as the ratio of the physiological length Lp to the physiological width Wp, i.e. Lp/Wp.

3) Form factor: This feature describes the difference between a leaf and a circle. It is defined as 4πA/P², where A is the leaf area and P is the perimeter of the leaf margin.

4) Rectangularity: Rectangularity describes the similarity between a leaf and a rectangle. It is defined as LpWp/A, where Lp is the physiological length, Wp is the physiological width and A is the leaf area.

5) Narrow factor: The narrow factor is defined as the ratio of the diameter D to the physiological length Lp, i.e. D/Lp.

6) Perimeter ratio of diameter: This feature, the ratio of the leaf perimeter P to the leaf diameter D, is calculated as P/D.

7) Perimeter ratio of physiological length and physiological width: This feature is defined as the ratio of the leaf perimeter P to the sum of the physiological length Lp and physiological width Wp, i.e. P/(Lp + Wp).

8) Vein features: We perform morphological opening [25] on the grayscale image with flat, disk-shaped structuring elements of radius 1, 2, 3 and 4, and subtract the margin from each resulting image. The results resemble the leaf veins.
That is why the following 5 features are called vein features. The areas of the remaining pixels are denoted as Av1, Av2, Av3 and Av4 respectively. We then obtain the last 5 features: Av1/A, Av2/A, Av3/A, Av4/A and Av4/Av1.

This completes the feature acquisition step; we now move on to the data analysis section.

## IV. Proposed Scheme

## A. Principal Component Analysis (PCA)

To reduce the dimension of the input vector of the neural network, PCA is used to orthogonalize the 12 features. The purpose of PCA is to represent the information of the original data as a linear combination of linearly uncorrelated variables. Mathematically, PCA transforms the data to a new coordinate system such that the greatest variance of any projection of the data lies on the first coordinate, the second greatest variance on the second coordinate, and so on. Each coordinate is called a principal component. In this paper, the first 5 principal components contribute 93.6% of the variance. To balance computational complexity and accuracy, we adopt 5 principal components. When using our algorithm, one applies the mapping $f: \mathbb{R}^{12} \rightarrow \mathbb{R}^{5}$ to obtain the component values in the new coordinate system.

## B. Introduction to Probabilistic Neural Network

An artificial neural network (ANN) is an interconnected group of artificial neurons simulating the thinking process of the human brain. One can consider an ANN as a "magical" black box trained to achieve the expected intelligent processing of the input and output information streams. Thus, there is no need for a specified algorithm on how to identify different plants. The PNN is derived from the Radial Basis Function (RBF) network, an ANN using RBFs. An RBF is a bell-shaped function that scales its variable nonlinearly. The PNN is adopted because it has many advantages [26]: its training speed is many times faster than that of a BP network; it can approach a Bayes-optimal result under certain easily met conditions [24]; and it is robust to noisy examples.
We choose it also for its simple structure and training manner. The most important advantage of the PNN is that training is easy and instantaneous [24]: weights are not "trained" but assigned. Existing weights are never altered; only new vectors are inserted into the weight matrices during training, so the network can be used in real time. Since the training and running procedures can be implemented by matrix manipulation, the PNN is very fast. The network classifies an input vector into the class that has the maximum probability of being correct.

In this paper, the PNN has three layers: the Input Layer, the Radial Basis Layer and the Competitive Layer. The Radial Basis Layer evaluates the vector distances between the input vector and the row weight vectors of the weight matrix, and scales these distances nonlinearly by the radial basis function. The Competitive Layer then finds the shortest of these distances, and thereby the training pattern closest to the input pattern.
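A minimal NumPy sketch of the scheme in Sections IV-A and IV-B as we read them: PCA projection of the 12 features onto 5 components, followed by a PNN whose radial basis layer uses the bias rule of Sec. IV-D. All names are our own, and the element-wise bias vector is simplified to a single scalar (all biases equal), so this is an illustration rather than the authors' implementation.

```python
import numpy as np

def fit_pca(X, n_components=5):
    """Fit PCA on a feature matrix X (n_samples x 12); return mean and projection basis."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = np.cov(Xc, rowvar=False)          # 12 x 12 covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return mu, vecs[:, order]               # project with (x - mu) @ basis

def pnn_classify(p, W, M, spread=0.03):
    """Classify one 5-d vector p. W: Q x 5 training vectors, M: K x Q class matrix."""
    b = np.sqrt(np.log(2)) / spread         # bias so radbas(spread * b) = 0.5
    n = np.linalg.norm(W - p, axis=1) * b   # scaled distances ||W_i - p|| * b
    a = np.exp(-n ** 2)                     # radbas transfer function (Eq. 2)
    d = M @ a                               # competitive layer input
    return int(np.argmax(d))                # index of the winning class
```

Note that training a PNN amounts to storing the training vectors in W and the class indicators in M, which is why the text calls training "instantaneous".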
<image>

## C. Network Structure

The network structure of our proposed scheme is illustrated in Fig. 5. We adopt the symbols and notation used in the book Neural Network Design [27], which are also used by the MATLAB Neural Network Toolbox [28]. The dimensions of the arrays are marked under their names.

1) Input Layer: The input vector, denoted as p, is presented as the black vertical bar in Fig. 5. Its dimension is R × 1; in this paper, R = 5.

2) Radial Basis Layer: In the Radial Basis Layer, the vector distances between the input vector p and the weight vectors formed by the rows of the weight matrix W are calculated. Here, the vector distance is defined as the dot product between two vectors [29]. Assume the dimension of W is Q × R. The dot product between p and the i-th row of W produces the i-th element of the distance vector ||W − p||, whose dimension is Q × 1, as shown in Fig. 5. The minus symbol, "−", indicates that it is the distance between vectors.

Then, the bias vector b is combined with ||W − p|| by an element-by-element multiplication, represented as "·∗" in Fig. 5. The result is denoted as n = ||W − p|| ·∗ b. The transfer function of the PNN has a built-in distance criterion with respect to a center. In this paper, we define it as

$$radbas(n) = e^{-n^{2}} \qquad (2)$$

Each element of n is substituted into Eq. 2 to produce the corresponding element of a, the output vector of the Radial Basis Layer. The i-th element of a can be represented as

$$a_{i} = radbas(||\mathbf{W}_{i} - \mathbf{p}|| \cdot\!* b_{i}) \qquad (3)$$

where Wi is the i-th row of W and bi is the i-th element of the bias vector b.

3) Some characteristics of the Radial Basis Layer: The i-th element of a equals 1 if the input p is identical to the i-th row of the input weight matrix W.
A radial basis neuron whose weight vector is close to the input vector p produces a value near 1, and its output weights in the Competitive Layer then pass their values to the competitive function, discussed below. Several elements of a may be close to 1 simultaneously if the input pattern is close to several training patterns.

4) Competitive Layer: There is no bias in the Competitive Layer. In this layer, the vector a is first multiplied by the layer weight matrix M, producing an output vector d. The competitive function, denoted as C in Fig. 5, produces a 1 corresponding to the largest element of d and 0's elsewhere. The output vector of the competitive function is denoted as c. The index of the 1 in c is the number of the plant that our system recognizes, and it can be used as the index to look up the scientific name of the plant. The dimension of the output vector, K, is 32 in this paper.

## D. Network Training

In total, 1800 pure leaves were sampled to train this network. The leaves were sampled on the campus of Nanjing University and at the Sun Yat-Sen arboretum, Nanking, China. Most of them are plants common in the Yangtze Delta, China. Details about the numbers of leaves for the different kinds of plants are given in Table I. The numbers of sampled leaves differ between plants because the difficulty of sampling leaves varies from plant to plant.

1) Radial Basis Layer Weights: W is set to the transpose of the R × Q matrix of the Q training vectors. Each row of W consists of the 5 principal variables of one training sample. Since 1800 samples are used for training, Q = 1800 in this paper.

2) Radial Basis Layer Biases: All biases in the Radial Basis Layer are set to √(−ln 0.5)/s, resulting in radial basis functions that cross 0.5 at weighted inputs of ±s. Here s is called the spread constant of the PNN. The value of s cannot be selected arbitrarily: each neuron in the Radial Basis Layer will respond with 0.5 or more to any input vector within a vector distance s of its weight vector.
Too small an s value can result in a solution that does not generalize from the input/target vectors used in the design. In contrast, if the spread constant is large enough, the radial basis neurons will output large values (near 1.0) for all the inputs used to design the network. In this paper, s is set to 0.03 (≈ 1/32) based on our experience.

3) Competitive Layer Weights: M is set to the K × Q matrix of the Q target class vectors. The target class vectors are converted from the class indices corresponding to the input vectors. This process generates a sparse matrix of vectors with a single 1 in each column, positioned as indicated by the indices. For example, if the i-th
sample in the training set is the j-th kind of plant, then there is a single 1 in the j-th row of the i-th column of M.

TABLE I: DETAILS ABOUT THE LEAF NUMBERS OF DIFFERENT TYPES OF PLANTS

| Scientific Name (in Latin) | Common Name | Training samples | Incorrect recognitions |
|---|---|---|---|
| Phyllostachys edulis (Carr.) Houz. | pubescent bamboo | 58 | 0 |
| Aesculus chinensis | Chinese horse chestnut | 63 | 0 |
| Populus ×*canadensis* Moench | Carolina poplar | 58 | 3 |
| Liriodendron chinense (Hemsl.) Sarg. | Chinese tulip tree | 50 | 0 |
| Citrus reticulata Blanco | tangerine | 51 | 0 |

<image>

## V. Experimental Results

For each kind of plant, 10 leaves from the testing set are used to test the accuracy of our algorithm. The numbers of incorrect recognitions are listed in the last column of Table I. The average accuracy is 90.312%.

Some species obtain a low accuracy in Table I. Due to the simplicity of our algorithm framework, more features can be added to boost the accuracy. We compared the accuracy of our algorithm with that of other general-purpose (not only applicable to certain species) classification algorithms that use only leaf-shape information. According to Table II, the accuracy of our algorithm is very similar to that of the other schemes. Considering our advantages with respect to other automated/semi-automated general-purpose schemes, the easy-to-implement framework and the fast speed of the PNN, the performance is very good. The MATLAB source code can be downloaded from http://flavia.sf.net.

## VI. Future Work

Since the essence of the competitive function is to output the index of the maximum value of an array, we plan to let our algorithm output not only the index of the maximum value but also the indices of the second and third greatest values, on the consideration that the index of the second greatest value corresponds to the second-best-matched plant, and likewise for the third. Sometimes the correct plant may be the second or third most probable one.

<image>

<image>
We are going to provide all three of these candidate answers to users. Furthermore, users can choose the answer they consider correct, so that our algorithm can learn from the feedback and improve its accuracy.

Other features are also under consideration. Daniel Drucker from the Department of Psychology, University of Pennsylvania, suggested that we use Fourier descriptors so that further mathematical manipulation is possible later. We are also trying to use other features with psychological evidence of being useful to humans for recognizing objects such as leaves, for example surface qualities [30].

Our plant database is under construction. The number of

TABLE II: ACCURACY COMPARISON

| Scheme | Accuracy |
|---|---|
| proposed in [2] | 71% |
| 1-NN in [17] | 93% |
| k-NN (k = 5) in [17] | 86% |
| RBPNN in [17] | 91% |
| MMC in [1] | 91% |
| k-NN (k = 4) in [1] | 92% |
| MMC in [18] | 92% |
| BPNN in [18] | 92% |
| RBFNN in [22] | 94% |
| MLNN in [22] | 94% |
| Our algorithm | 90% |
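The top-three ranking proposed in Sec. VI amounts to taking the three largest elements of the competitive-layer input d instead of only the maximum. A minimal sketch (the function name is ours, not from the paper's code):

```python
import numpy as np

def top_k_plants(d, k=3):
    """Return the indices of the k largest elements of d, best match first."""
    return list(np.argsort(d)[::-1][:k])
```

For example, with `d = np.array([0.1, 0.0, 0.8, 0.2, 0.05, 0.6, 0.3, 0.9])`, `top_k_plants(d)` returns `[7, 2, 5]`: the best match and the two runners-up.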
## VII. Conclusion

This paper introduces a neural network approach to plant leaf recognition. The computer can automatically classify 32 kinds of plants via leaf images loaded from digital cameras or scanners. A PNN is adopted for its fast training speed and simple structure. 12 features are extracted and processed by PCA to form the input vector of the PNN. Experimental results indicate that our algorithm works with an accuracy greater than 90% on 32 kinds of plants. Compared with other methods, this algorithm is fast in execution, efficient in recognition and easy to implement. Future work to improve it is under consideration.

## Acknowledgements

Prof. Xin-Jun Tian, Department of Botany, School of Life Sciences, Nanjing University, provided the lab and advice for this research. Yue Zhu, a master's student in the Department of Botany, School of Life Sciences, Nanjing University, helped us sample plant leaves. Ang Li and Bing Chen from the Institute of Botany, Chinese Academy of Sciences, advised us on plant taxonomy and looked up the scientific names of the plants. Shi Chen, a PhD student at the School of Agriculture, Pennsylvania State University, initiated another project that inspired this research. The authors also wish to thank secretary Crystal Hwan-Ming Chan for her assistance with our project.

## References

[1] J.-X. Du, X.-F. Wang, and G.-J. Zhang, "Leaf shape based plant species recognition," *Applied Mathematics and Computation*, vol. 185, 2007.

[2] Y. Ye, C. Chen, C.-T. Li, H. Fu, and Z. Chi, "A computerized plant species recognition system," in *Proceedings of 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing*, Hong Kong, October 2004.

[3] Z. Miao, M.-H. Gandelin, and B. Yuan, "An OOPR-based rose variety recognition system," *Engineering Applications of Artificial Intelligence*, vol. 19, 2006.

[4] R. de Oliveira Plotze, M. Falvo, J. G. Pdua, L. C. Bernacci, M. L. C. Vieira, G. C. X.
Oliveira, and O. M. Bruno, "Leaf shape analysis using the multiscale Minkowski fractal dimension, a new morphometric method: a study with Passiflora (Passifloraceae)," *Canadian Journal of Botany*, vol. 83, 2005.

[5] M. J. Dallwitz, "A general system for coding taxonomic descriptions," *Taxon*, vol. 29, 1980.

[6] H. Fu, Z. Chi, D. Feng, and J. Song, "Machine learning techniques for ontology-based leaf classification," in *IEEE 2004 8th International Conference on Control, Automation, Robotics and Vision*, Kunming, China, 2004.

[7] D. Warren, "Automated leaf shape description for variety testing in chrysanthemums," in *Proceedings of IEE 6th International Conference on Image Processing and Its Applications*, 1997.

[8] T. Brendel, J. Schwanke, P. Jensch, and R. Megnet, "Knowledge-based object recognition for different morphological classes of plants," *Proceedings of SPIE*, vol. 2345, 1995.

[9] Y. Li, Q. Zhu, Y. Cao, and C. Wang, "A leaf vein extraction method based on snakes technique," in *Proceedings of IEEE International Conference on Neural Networks and Brain*, 2005.

[10] H. Fu and Z. Chi, "Combined thresholding and neural network approach for vein pattern extraction from leaf images," *IEE Proceedings - Vision, Image and Signal Processing*, vol. 153, no. 6, December 2006.

[11] Y. Nam, E. Hwang, and K. Byeon, "ELIS: An efficient leaf image retrieval system," in *Proceedings of International Conference on Advances in Pattern Recognition 2005*, ser. LNCS 3687. Springer, 2005.

[12] H. Fu and Z. Chi, "A two-stage approach for leaf vein extraction," in *Proceedings of IEEE International Conference on Neural Networks and Signal Processing*, Nanjing, China, 2003.

[13] Z. Wang, Z. Chi, and D. Feng, "Shape based leaf image retrieval," *IEE Proceedings - Vision, Image and Signal Processing*, vol. 150, no. 1, February 2003.

[14] H. Qi and J.-G.
Yang, "Sawtooth feature extraction of leaf edge based on support vector machine," in *Proceedings of the Second International Conference on Machine Learning and Cybernetics*, November 2003.

[15] S. M. Hong, B. Simpson, and G. V. G. Baranoski, "Interactive venation-based leaf shape modeling," *Computer Animation and Virtual Worlds*, vol. 16, 2005.

[16] F. Gouveia, V. Filipe, M. Reis, C. Couto, and J. Bulas-Cruz, "Biometry: the characterisation of chestnut-tree leaves using computer vision," in *Proceedings of IEEE International Symposium on Industrial Electronics*, Guimarães, Portugal, 1997.

[17] X. Gu, J.-X. Du, and X.-F. Wang, "Leaf recognition based on the combination of wavelet transform and Gaussian interpolation," in *Proceedings of International Conference on Intelligent Computing 2005*, ser. LNCS 3644. Springer, 2005.

[18] X.-F. Wang, J.-X. Du, and G.-J. Zhang, "Recognition of leaf images based on shape features using a hypersphere classifier," in *Proceedings of International Conference on Intelligent Computing 2005*, ser. LNCS 3644. Springer, 2005.

[19] J.-X. Du, D.-S. Huang, X.-F. Wang, and X. Gu, "Computer-aided plant species identification (CAPSI) based on leaf shape matching technique," *Transactions of the Institute of Measurement and Control*, vol. 28, 2006.

[20] T. Saitoh and T. Kaneko, "Automatic recognition of wild flowers," in *Proceedings of 15th International Conference on Pattern Recognition (ICPR'00)*, vol. 2, 2000.

[21] B. C. Heymans, J. P. Onema, and J. O. Kuti, "A neural network for opuntia leaf-form recognition," in *Proceedings of IEEE International Joint Conference on Neural Networks*, 1991.

[22] J. Du, D. Huang, X. Wang, and X. Gu, "Shape recognition based on radial basis probabilistic neural network and application to plant species identification," in *Proceedings of 2005 International Symposium of Neural Networks*, ser. LNCS 3497. Springer, 2005.

[23] J. Shlens. (2005, December) A tutorial on principal component analysis. [Online].
Available: http://www.cs.cmu.edu/∼elaw/papers/pca.pdf

[24] D. F. Specht, "Probabilistic neural networks," *Neural Networks*, vol. 3, 1990.

[25] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, *Digital Image Processing Using MATLAB*. Prentice Hall, 2004.

[26] T. Masters, *Practical Neural Network Recipes*. New York: John Wiley, 1993.

[27] M. T. Hagan, H. B. Demuth, and M. H. Beale, *Neural Network Design*, 2002.

[28] (2007) MATLAB Neural Network Toolbox documentation. MathWorks, Inc. [Online]. Available: http://www.mathworks.com/access/helpdesk/help/toolbox/nnet/radial10.html#8378

[29] D. F. Specht, "Probabilistic neural networks for classification, mapping, or associative memory," in *Proceedings of IEEE International Conference on Neural Networks*, vol. 1, 1988.

[30] I. Motoyoshi, S. Nishida, L. Sharan, and E. H. Adelson, "Image statistics and the perception of surface qualities," *Nature*, vol. 447, May 2007.
# 2006: Celebrating 75 Years of AI - History and Outlook: The Next 25 Years∗

Jürgen Schmidhuber
TU Munich, Boltzmannstr. 3, 85748 Garching bei München, Germany &
IDSIA, Galleria 2, 6928 Manno (Lugano), Switzerland
[email protected] - http://www.idsia.ch/∼juergen

## Abstract

When Kurt Gödel laid the foundations of theoretical computer science in 1931, he also introduced essential concepts of the theory of Artificial Intelligence (AI). Although much of subsequent AI research has focused on heuristics, which still play a major role in many practical AI applications, in the new millennium AI theory has finally become a full-fledged formal science, with important optimality results for embodied agents living in unknown environments, obtained through a combination of theory *à la* Gödel and probability theory. Here we look back at important milestones of AI history, mention essential recent results, and speculate about what we may expect from the next 25 years, emphasizing the significance of the ongoing dramatic hardware speedups, and discussing Gödel-inspired, self-referential, self-improving universal problem solvers.

## 1 Highlights of AI History—From Gödel to 2006

**Gödel and Lilienfeld.** In 1931, 75 years ago and just a few years after Julius Lilienfeld patented the transistor, Kurt Gödel laid the foundations of theoretical computer science (CS) with his work on universal formal languages and the limits of proof and computation [5]. He constructed formal systems allowing for self-referential statements that talk about themselves, in particular, about whether they can be derived from a set of given axioms through a computational theorem proving procedure. Gödel went on to construct statements that claim their own unprovability, to demonstrate that traditional math is either flawed in a certain algorithmic sense or contains unprovable but true statements.
Gödel's incompleteness result is widely regarded as the most remarkable achievement of 20th century mathematics, although some mathematicians say it is logic, not math, and others call it the fundamental result of theoretical computer science, a discipline that did not yet officially exist back then but was effectively created through
Truly nontrivial predictions are those that most will not believe until they come true. We will mostly restrict ourselves to trivial predictions like those above and refrain from too much speculation in the form of nontrivial ones. However, we may have a look at previous unexpected scientific breakthroughs and try to discern a pattern, one that may not allow us to precisely predict the details of the next revolution, but at least its timing.

## 3.1 A Pattern in the History of Revolutions?

Let us put the AI-oriented developments [27] discussed above in a broader context, and look at the history of major scientific revolutions and essential historic developments (that is, the subjects of the major chapters in history books) since the beginnings of modern man over 40,000 years ago [30, 31]. Amazingly, they seem to match a binary logarithmic scale marking exponentially declining temporal intervals [31], each half the size of the previous one, and measurable in terms of powers of 2 multiplied by a human lifetime (roughly 80 years—throughout recorded history many individuals have reached this age, although the average lifetime was often shorter, mostly due to high child mortality). It looks as if history itself will *converge* in a historic singularity or Omega point Ω around 2040 (the term *historic singularity* is apparently due to Stanislaw Ulam (1950s) and was popularized by Vernor Vinge [39] in the 1990s). To convince yourself of history's convergence, associate an error bar of not much more than 10 percent with each date below:

1. Ω − 2⁹ lifetimes: modern humans start colonizing the world from Africa

2. Ω − 2⁸ lifetimes: bow and arrow invented; hunting revolution

3. Ω − 2⁷ lifetimes: invention of agriculture; first permanent settlements; beginnings of civilization

4. Ω − 2⁶ lifetimes: first high civilizations (Sumeria, Egypt), and the most important invention of recorded history, namely, the one that made recorded history possible: writing
5. Ω − 2⁵ lifetimes: the ancient Greeks invent democracy and lay the foundations of Western science, art and philosophy, from algorithmic procedures and formal proofs to anatomically perfect sculptures, harmonic music, and organized sports. Old Testament written (the basis of Judaism, Christianity and Islam); major Asian religions founded. High civilizations in China, origin of the first calculation tools, and India, origin of alphabets and the zero

6. Ω − 2⁴ lifetimes: bookprint (often called the most important invention of the past 2000 years) invented in China. Islamic science and culture start spreading across large parts of the known world (this has sometimes been called the most important event between Antiquity and the age of discoveries)

7. Ω − 2³ lifetimes: the Mongolian Empire, the largest and most dominant empire ever (possibly including most of humanity and the world economy), stretches
across Asia from Korea all the way to Germany. Chinese fleets and later also European vessels start exploring the world. Gunpowder and guns invented in China. Renaissance and Western bookprint (often called the most influential invention of the past 1000 years) and subsequent Reformation in Europe. Beginning of the Scientific Revolution

8. Ω − 2² lifetimes: Age of Enlightenment and rational thought in Europe. Massive progress in the sciences; first flying machines; first steam engines prepare the industrial revolution

9. Ω − 2 lifetimes: second industrial revolution based on combustion engines, cheap electricity, and modern chemistry. Birth of modern medicine through the germ theory of disease; genetic and evolution theory. European colonialism at its short-lived peak

10. Ω − 1 lifetime: modern post-World War II society and pop culture emerges; superpower stalemate based on nuclear deterrence. The 20th century super-exponential population explosion (from 1.6 billion to 6 billion people, mainly due to the Haber-Bosch process [34]) is at its peak. First spacecraft and commercial computers; DNA structure unveiled

11. Ω − 1/2 lifetime (now): for the first time in history most of the most destructive weapons are dismantled, after the Cold War's peaceful end. Third industrial revolution based on personal computers and the World Wide Web. A mathematical theory of universal AI emerges (see sections above) - will this be considered a milestone in the future?

12. Ω − 1/4 lifetime: this point will be reached around 2020. By then many computers will have substantially more raw computing power than human brains.

13. Ω − 1/8 lifetime (100 years after Gödel's paper): will practical variants of Gödel machines start a runaway evolution of continually self-improving superminds far beyond human imagination, causing far more unpredictable revolutions in the final decade before Ω than during all the millennia before?

14. ...
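Under the assumptions stated above (Ω ≈ 2040, one lifetime ≈ 80 years), the calendar dates implied by the binary logarithmic scale can be checked directly; this is only a sketch of the arithmetic behind the list, with negative years denoting dates BC:

```python
# Date implied by the Omega-point scale: date(n) = Omega - 80 * 2**n years.
OMEGA = 2040
LIFETIME = 80

def omega_date(exponent):
    """Calendar year at Omega minus 2**exponent lifetimes (negative = BC)."""
    return OMEGA - LIFETIME * 2 ** exponent

# Item 1 (2**9 lifetimes) lands about 40,000 years ago, matching "over 40,000 years";
# item 13 (2**-3 lifetimes) lands at 2030, roughly 100 years after Goedel's 1931 paper.
for n in [9, 7, 5, 0, -2, -3]:
    print(n, omega_date(n))
```

For instance, `omega_date(7)` gives −8200, consistent with the conventional date for the invention of agriculture, and `omega_date(-2)` gives 2020, the "around 2020" of item 12.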
The following disclosure should help the reader to take this list with a grain of salt though. The author, who admits being very interested in witnessing Ω, was born in 1963, and therefore perhaps should not expect to live long past 2040. This may motivate him to uncover certain historic patterns that fit his desires, while ignoring other patterns that do not. Perhaps there even is a general rule for both the individual memory of single humans and the collective memory of entire societies and their history books: constant amounts of memory space get allocated to exponentially larger, adjacent time intervals further and further into the past. Maybe that's why there has never been a shortage of prophets predicting that the end is near - the important events according to one's own view of the past always seem to accelerate exponentially. See [31] for a more thorough discussion of this possibility.
## References

[1] C. M. Bishop. *Neural networks for pattern recognition*. Oxford University Press, 1995.

[2] R. A. Brooks. Intelligence without reason. In *Proceedings of the Twelfth International Joint Conference on Artificial Intelligence*, pages 569–595, 1991.

[3] E. D. Dickmanns, R. Behringer, D. Dickmanns, T. Hildebrandt, M. Maurer, F. Thomanek, and J. Schiehlen. The seeing passenger car 'VaMoRs-P'. In *Proc. Int. Symp. on Intelligent Vehicles '94, Paris*, pages 68–73, 1994.

[4] M. Dorigo, G. Di Caro, and L. M. Gambardella. Ant algorithms for discrete optimization. *Artificial Life*, 5(2):137–172, 1999.

[5] K. Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. *Monatshefte für Mathematik und Physik*, 38:173–198, 1931.

[6] F. Gomez, J. Schmidhuber, and R. Miikkulainen. Efficient non-linear control through neuroevolution. In *ECML 2006: Proceedings of the 17th European Conference on Machine Learning*. Springer, 2006.

[7] F. J. Gomez and R. Miikkulainen. Active guidance for a finless rocket using neuroevolution. In *Proc. GECCO 2003, Chicago*, 2003. *Winner of Best Paper Award in Real World Applications.* Gomez is working at IDSIA on a CSEM grant to J. Schmidhuber.

[8] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural nets. In *ICML '06: Proceedings of the International Conference on Machine Learning*, 2006.

[9] S. Hochreiter and J. Schmidhuber. Long short-term memory. *Neural Computation*, 9(8):1735–1780, 1997.

[10] J. H. Holland. *Adaptation in Natural and Artificial Systems*. University of Michigan Press, Ann Arbor, 1975.

[11] M. Hutter. *Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability*. Springer, Berlin, 2004. (On J. Schmidhuber's SNF grant 20-61847).

[12] T. Kohonen. *Self-Organization and Associative Memory*. Springer, second edition, 1988.

[13] A. N. Kolmogorov.
*Grundbegriffe der Wahrscheinlichkeitsrechnung*. Springer, Berlin, 1933. [14] A. N. Kolmogorov. Three approaches to the quantitative definition of information. *Problems of Information Transmission*, 1:1–11, 1965.
[15] L. A. Levin. Universal sequential search problems. *Problems of Information Transmission*, 9(3):265–266, 1973. [16] S. Lohmeier, K. Loeffler, M. Gienger, H. Ulbrich, and F. Pfeiffer. Sensor system and trajectory control of a biped robot. In *Proc. 8th IEEE International Workshop on Advanced Motion Control (AMC'04), Kawasaki, Japan*, pages 393–398, 2004. [17] M. Minsky and S. Papert. *Perceptrons*. Cambridge, MA: MIT Press, 1969. [18] N. J. Nilsson. *Principles of artificial intelligence*. Morgan Kaufmann, San Francisco, CA, USA, 1980. [19] B. A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: A survey. *IEEE Transactions on Neural Networks*, 6(5):1212–1228, 1995. [20] R. Pfeifer and C. Scheier. *Understanding Intelligence*. MIT Press, 2001. [21] K. R. Popper. *All Life Is Problem Solving*. Routledge, London, 1999. [22] I. Rechenberg. *Evolutionsstrategie - Optimierung technischer Systeme nach Prinzipien der biologischen Evolution*. Dissertation, 1971. Published 1973 by Fromman-Holzboog. [23] J. Rissanen. Modeling by shortest data description. *Automatica*, 14:465–471, 1978. [24] P. S. Rosenbloom, J. E. Laird, and A. Newell. *The SOAR Papers*. MIT Press, 1993. [25] J. Schmidhuber. Curious model-building control systems. In *Proceedings of the International Joint Conference on Neural Networks, Singapore*, volume 2, pages 1458–1463. IEEE press, 1991. [26] J. Schmidhuber. The Speed Prior: a new simplicity measure yielding near-optimal computable predictions. In J. Kivinen and R. H. Sloan, editors, *Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002)*, Lecture Notes in Artificial Intelligence, pages 216–228. Springer, Sydney, Australia, 2002. [27] J. Schmidhuber. Artificial Intelligence - history highlights and outlook: AI maturing and becoming a real formal science, 2006. http://www.idsia.ch/~juergen/ai.html. [28] J. Schmidhuber.
Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. *Connection Science*, 18(2):173–187, 2006. [29] J. Schmidhuber. Gödel machines: fully self-referential optimal universal problem solvers. In B. Goertzel and C. Pennachin, editors, *Artificial General Intelligence*, pages 199–226. Springer Verlag, 2006.
[30] J. Schmidhuber. Is history converging? Again?, 2006. http://www.idsia.ch/~juergen/history.html. [31] J. Schmidhuber. New millennium AI and the convergence of history. In W. Duch and J. Mandziuk, editors, *Challenges to Computational Intelligence*. Springer, in press, 2006. Also available as TR IDSIA-04-03, cs.AI/0302012. [32] J. Schmidhuber, D. Wierstra, M. Gagliolo, and F. Gomez. Training recurrent networks by EVOLINO. *Neural Computation*, 19(3):757–779, 2007. [33] C. E. Shannon. A mathematical theory of communication (parts I and II). *Bell System Technical Journal*, XXVII:379–423, 1948. [34] V. Smil. Detonator of the population explosion. *Nature*, 400:415, 1999. [35] R. J. Solomonoff. Complexity-based induction systems. *IEEE Transactions on Information Theory*, IT-24(5):422–432, 1978. [36] R. Sutton and A. Barto. *Reinforcement learning: An introduction*. Cambridge, MA, MIT Press, 1998. [37] A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. *Proceedings of the London Mathematical Society, Series 2*, 41:230–267, 1936. [38] V. Vapnik. *The Nature of Statistical Learning Theory*. Springer, New York, 1995. [39] V. Vinge. The coming technological singularity, 1993. VISION-21 Symposium sponsored by NASA Lewis Research Center, and Whole Earth Review, Winter issue. [40] P. J. Werbos. *Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences*. PhD thesis, Harvard University, 1974. [41] Xin Yao. A review of evolutionary artificial neural networks. *International Journal of Intelligent Systems*, 4:203–222, 1993.
Gödel's work. It had enormous impact not only on computer science but also on philosophy and other fields. In particular, since humans can "see" the truth of Gödel's unprovable statements, some researchers mistakenly thought that his results show that machines and Artificial Intelligences (AIs) will always be inferior to humans. Given the tremendous impact of Gödel's results on AI theory, it does make sense to date AI's beginnings back to his 1931 publication 75 years ago.

Zuse and Turing. In 1936 Alan Turing [37] introduced the *Turing machine* to reformulate Gödel's results and Alonzo Church's extensions thereof. TMs are often more convenient than Gödel's integer-based formal systems, and later became a central tool of CS theory. Simultaneously Konrad Zuse built the first working program-controlled computers (1935-1941), using the binary arithmetic and the *bits* of Gottfried Wilhelm von Leibniz (1701) instead of the more cumbersome decimal system used by Charles Babbage, who pioneered the concept of program-controlled computers in the 1840s, and tried to build one, although without success. By 1941, all the main ingredients of 'modern' computer science were in place, a decade after Gödel's paper, a century after Babbage, and roughly three centuries after Wilhelm Schickard, who started the history of automatic computing hardware by constructing the first non-program-controlled computer in 1623. In the 1940s Zuse went on to devise the first high-level programming language (Plankalkül), which he used to write the first chess program. Back then chess-playing was considered an intelligent activity, hence one might call this chess program the first design of an AI program, although Zuse did not really implement it back then. Soon afterwards, in 1948, Claude Shannon [33] published information theory, recycling several older ideas such as Ludwig Boltzmann's entropy from 19th century statistical mechanics, and the *bit of information* (Leibniz, 1701).
Relays, Tubes, Transistors. Alternative instances of transistors, the concept pioneered and patented by Julius Edgar Lilienfeld (1920s) and Oskar Heil (1935), were built by William Shockley, Walter H. Brattain & John Bardeen (1948: point contact transistor) as well as Herbert F. Mataré & Heinrich Welker (1948, exploiting transconductance effects of germanium diodes observed in the *Luftwaffe* during WW-II). Today most transistors are of the field-effect type à la Lilienfeld & Heil. In principle a switch remains a switch no matter whether it is implemented as a relay or a tube or a transistor, but transistors switch faster than relays (Zuse, 1941) and tubes (Colossus, 1943; ENIAC, 1946). This eventually led to significant speedups of computer hardware, which was essential for many subsequent AI applications.

The I in AI. In 1950, some 56 years ago, Turing invented a famous subjective test to decide whether a machine or something else is intelligent. 6 years later, and 25 years after Gödel's paper, John McCarthy finally coined the term "AI". 50 years later, in 2006, this prompted some to celebrate the 50th birthday of AI, but this chapter's title should make clear that its author cannot agree with this view: it is the thing that counts, not its name.

Roots of Probability-Based AI. In the 1960s and 1970s Ray Solomonoff combined theoretical CS and probability theory to establish a general theory of universal inductive inference and predictive AI [35] closely related to the concept of Kolmogorov complexity [14]. His theoretically optimal predictors and their Bayesian learning algorithms only assume that the observable reactions of the environment in response to certain action sequences are sampled from an unknown probability distribution contained in a set M of all enumerable distributions. That is, given an observation sequence, we only assume there exists a computer program that can compute the probabilities of the next possible observations. This includes all scientific theories of physics, of course. Since we typically do not know this program, we predict using a weighted sum ξ of all distributions in M, where the sum of the weights does not exceed 1. It turns out that this is indeed the best one can possibly do, in a very general sense [11, 35]. Although the universal approach is practically infeasible since M contains infinitely many distributions, it does represent the first sound and general theory of optimal prediction based on experience, identifying the limits of both human and artificial predictors, and providing a yardstick for all prediction machines to come.

AI vs Astrology? Unfortunately, failed prophecies of human-level AI with just a tiny fraction of the brain's computing power discredited some of the AI research in the 1960s and 70s. Many theoretical computer scientists actually regarded much of the field with contempt for its perceived lack of hard theoretical results. ETH Zurich's Turing award winner and creator of the PASCAL programming language, Niklaus Wirth, did not hesitate to link AI to astrology. Practical AI of that era was dominated by rule-based expert systems and Logic Programming. That is, despite Solomonoff's fundamental results, a main focus of that time was on logical, deterministic deduction of facts from previously known facts, as opposed to (probabilistic) induction of hypotheses from experience.

Evolution, Neurons, Ants.
Largely unnoticed by mainstream AI gurus of that era, a biology-inspired type of AI emerged in the 1960s when Ingo Rechenberg pioneered the method of artificial evolution to solve complex optimization tasks [22], such as the design of optimal airplane wings or combustion chambers of rocket nozzles. Such methods (and later variants thereof, e.g., Holland [10], 1970s) often gave better results than classical approaches. In the following decades, other types of "subsymbolic" AI also became popular, especially neural networks. Early neural net papers include those of McCulloch & Pitts, 1940s (linking certain simple neural nets to old and well-known, simple mathematical concepts such as linear regression); Minsky & Papert [17] (temporarily discouraging neural network research); Kohonen [12] and Amari, 1960s; Werbos [40], 1970s; and many others in the 1980s. Orthogonal approaches included fuzzy logic (Zadeh, 1960s), Rissanen's practical variants [23] of Solomonoff's universal method, "representation-free" AI (Brooks [2]), Artificial Ants (Dorigo & Gambardella [4], 1990s), and statistical learning theory (in less general settings than those studied by Solomonoff) & support vector machines (Vapnik [38] and others). As of 2006, this alternative type of AI research is receiving more attention than "Good Old-Fashioned AI" (GOFAI).

Mainstream AI Marries Statistics. A dominant theme of the 1980s and 90s was the marriage of mainstream AI and old concepts from probability theory. Bayes networks, Hidden Markov Models, and numerous other probabilistic models found wide applications in pattern recognition, medical diagnosis, data mining, machine translation, robotics, etc.

Hardware Outshining Software: Humanoids, Robot Cars, Etc. In the 1990s and 2000s, much of the progress in practical AI was due to better hardware, getting roughly 1000 times faster per Euro per decade. In 1995, a fast vision-based robot car
by Ernst Dickmanns (whose team built the world's first reliable robot cars in the early 1980s with the help of Mercedes-Benz, e. g., [3]) autonomously drove 1000 miles from Munich to Denmark and back, in traffic at up to 120 mph, automatically passing other cars (a safety driver took over only rarely in critical situations). Japanese labs (Honda, Sony) and Pfeiffer's lab at TU Munich built famous humanoid walking robots. Engineering problems often seemed more challenging than AI-related problems. Another source of progress was the dramatically improved access to all kinds of data through the WWW, created by Tim Berners-Lee at the European particle collider CERN (Switzerland) in 1990. This greatly facilitated and encouraged all kinds of "intelligent" data mining applications. However, there were few if any obvious fundamental algorithmic breakthroughs; improvements / extensions of already existing algorithms seemed less impressive and less crucial than hardware advances. For example, chess world champion Kasparov was beaten by a fast IBM computer running a fairly standard algorithm. Rather simple but computationally expensive probabilistic methods for speech recognition, statistical machine translation, computer vision, optimization, virtual realities etc. started to become feasible on PCs, mainly because PCs had become 1000 times more powerful within a decade or so. 2006. As noted by Stefan Artmann (personal communication, 2006), today's AI textbooks seem substantially more complex and less unified than those of several decades ago, e. g., [18], since they have to cover so many apparently quite different subjects. There seems to be a need for a new unifying view of intelligence. In the author's opinion this view already exists, as will be discussed below. ## 2 Subjective Selected Highlights Of Present Ai The more recent some event, the harder it is to judge its long-term significance. 
But this biased author thinks that the most important thing that happened recently in AI is the beginning of a transition from a heuristics-dominated science (e.g., [24]) to a real formal science. Let us elaborate on this topic.

## 2.1 The Two Ways Of Making A Dent In AI Research

There are at least two convincing ways of doing AI research: (1) construct a (possibly heuristic) machine or algorithm that somehow (it does not really matter how) solves a previously unsolved interesting problem, such as beating the best human player of Go (success will outshine any lack of theory). Or (2) prove that a particular novel algorithm is optimal for an important class of AI problems. It is the nature of heuristics (case (1)) that they lack staying power, as they may soon get replaced by next year's even better heuristics. Theorems (case (2)), however, are for eternity. That's why formal sciences prefer theorems. For example, probability theory became a formal science centuries ago, and totally formal in 1933 with Kolmogorov's axioms [13], shortly after Gödel's paper [5]. Old but provably optimal techniques of probability theory are still in everyday use, and in fact highly significant for modern AI, while many initially successful heuristic approaches eventually became unfashionable, of interest mainly to historians of the field.
## 2.2 No Brain Without A Body / AI Becoming A Formal Science

Heuristic approaches will continue to play an important role in many AI applications, to the extent they empirically outperform competing methods. But as with all young sciences at the transition point between an early intuition-dominated and a later formal era, the importance of mathematical optimality theorems is growing quickly. Progress in the formal era, however, is and will be driven by a different breed of researchers, a fact that is not necessarily universally enjoyed and welcomed by all the earlier pioneers. Today the importance of embodied, embedded AI is almost universally acknowledged (e.g., [20]), as obvious from frequently overheard remarks such as "let the physics compute" and "no brain without a body." Many present AI researchers focus on real robots living in real physical environments. To some of them the title of this subsection may seem oxymoronic: the extension of AI into the realm of the physical body seems to be a step away from formalism. But the new millennium's formal point of view is actually taking this step into account in a very general way, through the first mathematical theory of universal embedded AI, combining "old" theoretical computer science and "ancient" probability theory to derive optimal behavior for embedded, embodied rational agents living in unknown but learnable environments. More on this below.

## 2.3 What's The I In AI? What Is Life? Etc.

Before we proceed, let us clarify what we are talking about. Shouldn't researchers on Artificial Intelligence (AI) and Artificial Life (AL) agree on basic questions such as: What is Intelligence? What is Life? Interestingly, they don't.

Are Cars Alive? For example, AL researchers often offer definitions of life such as: it must reproduce, evolve, etc. Cars are alive, too, according to most of these definitions. For example, cars evolve and multiply.
They need complex environments with car factories to do so, but living animals also need complex environments full of chemicals and other animals to reproduce - the DNA information by itself does not suffice. There is no obvious fundamental difference between an organism whose self-replication information is stored in its DNA, and a car whose self-replication information is stored in a car builder's manual in the glove compartment. To copy itself, the organism needs its mother's womb plus numerous other objects and living beings in its environment (such as trillions of bacteria inside and outside of the mother's body). The car needs iron mines and car part factories and human workers.

What is Intelligence? If we cannot agree on what life is, or, for that matter, love, or consciousness (another fashionable topic), how can there be any hope of defining intelligence? Turing's definition (1950, 19 years after Gödel's paper) was totally subjective: intelligent is what convinces me that it is intelligent while I am interacting with it. Fortunately, however, there are more formal and less subjective definitions.

## 2.4 Formal AI Definitions

Popper said: all life is problem solving [21]. Instead of defining intelligence in Turing's rather vague and subjective way, we define intelligence with respect to the abilities of
universal optimal problem solvers. Consider a learning robotic agent with a single life which consists of discrete cycles or time steps $t = 1, 2, \ldots, T$. Its total lifetime $T$ may or may not be known in advance. In what follows, the value of any time-varying variable $Q$ at time $t$ ($1 \leq t \leq T$) will be denoted by $Q(t)$, the ordered sequence of values $Q(1), \ldots, Q(t)$ by $Q(\leq t)$, and the (possibly empty) sequence $Q(1), \ldots, Q(t-1)$ by $Q(< t)$. At any given $t$ the robot receives a real-valued input vector $x(t)$ from the environment and executes a real-valued action $y(t)$ which may affect future inputs; at times $t < T$ its goal is to maximize future success or *utility*

$$u(t)=E_{\mu}\left[\sum_{\tau=t+1}^{T} r(\tau) \;\Bigg|\; h(\leq t)\right], \qquad (1)$$

where $r(t)$ is an additional real-valued reward input at time $t$, $h(t)$ the ordered triple $[x(t), y(t), r(t)]$ (hence $h(\leq t)$ is the known history up to $t$), and $E_{\mu}(\cdot \mid \cdot)$ denotes the conditional expectation operator with respect to some possibly unknown distribution $\mu$ from a set $M$ of possible distributions. Here $M$ reflects whatever is known about the possibly probabilistic reactions of the environment. For example, $M$ may contain all computable distributions [11, 35]. Note that unlike in most previous work by others [36], there is just one life, no need for predefined repeatable trials, no restriction to Markovian interfaces between sensors and environment, and the utility function implicitly takes into account the expected remaining lifespan $E_{\mu}(T \mid h(\leq t))$ and thus the possibility to extend it through appropriate actions [29]. Any formal problem or sequence of problems can be encoded in the reward function. For example, the reward functions of many living or robotic beings cause occasional hunger or pain or pleasure signals etc. At time $t$ an optimal AI will make the best possible use of experience $h(\leq t)$ to maximize $u(t)$. But how?
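As a toy illustration of the expectation in equation (1), the following sketch estimates $u(t)$ by Monte-Carlo sampling, under the unrealistic assumption that the environment distribution $\mu$ is known and can be sampled from (which the agent of equation (1) does not assume). The names `estimate_utility`, `simulate_step`, and `coin_env` are illustrative, not from the paper.

```python
import random

def estimate_utility(simulate_step, history, T, n_rollouts=10_000, seed=0):
    """Toy Monte-Carlo estimate of u(t) = E_mu[ sum_{tau=t+1..T} r(tau) | h(<=t) ].

    simulate_step(h, rng) returns a sampled reward for the next cycle and the
    extended history; it plays the role of a known, samplable distribution mu.
    """
    rng = random.Random(seed)
    t = len(history)
    total = 0.0
    for _ in range(n_rollouts):
        h = list(history)
        ret = 0.0
        for _tau in range(t + 1, T + 1):
            r, h = simulate_step(h, rng)
            ret += r
        total += ret
    return total / n_rollouts

# A trivial environment: reward is 1 with probability 0.5 in each remaining
# cycle, so with 10 cycles left u(t) should come out near 5.0.
def coin_env(h, rng):
    r = 1.0 if rng.random() < 0.5 else 0.0
    return r, h + [r]

u = estimate_utility(coin_env, history=[0.0] * 3, T=13)  # 10 remaining cycles
```

This merely makes the definition concrete; the hard part, addressed next, is acting optimally when $\mu$ itself is unknown.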
## 2.5 Universal, Mathematically Optimal, But Incomputable AI

Unbeknownst to many traditional AI researchers, there is indeed an extremely general "best" way of exploiting previous experience. At any time $t$, the recent theoretically optimal yet practically infeasible reinforcement learning (RL) algorithm AIXI [11] uses Solomonoff's above-mentioned universal prediction scheme to select those action sequences that promise maximal future reward up to some horizon, given the current data $h(\leq t)$. Using a variant of Solomonoff's universal probability mixture $\xi$, in cycle $t+1$, AIXI selects as its next action the first action of an action sequence maximizing $\xi$-predicted reward up to the horizon. Hutter's recent work [11] demonstrated AIXI's optimal use of observations as follows. The Bayes-optimal policy $p^{\xi}$ based on the mixture $\xi$ is self-optimizing in the sense that its average utility value converges asymptotically for all $\mu \in M$ to the optimal value achieved by the (infeasible) Bayes-optimal policy $p^{\mu}$ which knows $\mu$ in advance. The necessary condition that $M$ admits self-optimizing policies is also sufficient. Of course one cannot claim the old AI is devoid of formal research! The recent approach above, however, goes far beyond previous formally justified but very limited AI-related approaches ranging from linear perceptrons [17] to the A∗-algorithm [18].
It provides, for the first time, a mathematically sound theory of general AI and optimal decision making based on experience, identifying the limits of both human and artificial intelligence, and a yardstick for any future, scaled-down, practically feasible approach to general AI.

## 2.6 Optimal Curiosity And Creativity

No theory of AI will be convincing if it does not explain curiosity and creativity, which many consider as important ingredients of intelligence. We can provide an explanation in the framework of optimal reward maximizers such as those from the previous subsection. It is possible to come up with theoretically optimal ways of improving the predictive world model of a curious robotic agent [28], extending earlier ideas on how to implement artificial curiosity [25]: the rewards of an optimal reinforcement learner are the predictor's improvements on the observation history so far. They encourage the reinforcement learner to produce action sequences that cause the creation and the learning of new, previously unknown regularities in the sensory input stream. It turns out that art and creativity can be explained as by-products of such intrinsic curiosity rewards: good observer-dependent art deepens the observer's insights about this world or possible worlds, connecting previously disconnected patterns in an initially surprising way that eventually becomes known and boring. While previous attempts at describing what is satisfactory art or music were informal, this work permits the first technical, formal approach to understanding the nature of art and creativity [28].

## 2.7 Computable, Asymptotically Optimal General Problem Solver

Using the Speed Prior [26] one can scale down the universal approach above such that it becomes computable. In what follows we will mention general methods whose optimality criteria explicitly take into account the computational costs of prediction and decision making (compare [15]).
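To give a concrete feel for search procedures that budget computation time explicitly, here is a toy sketch in the spirit of Levin's universal search [15]: in phase $k$, candidate program $i$ receives a time budget proportional to $2^k/2^i$, so total work per phase stays near $2^k$ while earlier (shorter) programs get more time. This is an illustrative simplification, not the asymptotically optimal algorithm of [11]; all names and the toy "programs" are invented for the example.

```python
def toy_levin_search(programs, solves, max_phase=20):
    """Toy Levin-style search: in phase k, program i (i = 1..k) runs with a
    budget of 2**k // 2**i steps until some program's output satisfies `solves`.
    Returns (result, program_index, phase) or None."""
    for k in range(1, max_phase + 1):
        for i in range(1, min(k, len(programs)) + 1):
            budget = 2 ** k // 2 ** i
            result = programs[i - 1](budget)          # run with this budget
            if result is not None and solves(result):
                return result, i, k
    return None

# Hypothetical candidates: program i needs 10*i steps to output i*i.
def make_program(i):
    def run(budget):
        return i * i if budget >= 10 * i else None   # not enough time yet
    return run

progs = [make_program(i) for i in range(1, 6)]
found = toy_levin_search(progs, solves=lambda x: x == 9)   # seeking 3*3
```

The scheme finds the solution via program 3 once that program's per-phase budget first reaches its required runtime, mirroring how such searches are dominated by the fastest sufficient program plus a constant factor.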
The recent asymptotically optimal search algorithm for all well-defined problems [11] allocates part of the total search time to searching the space of proofs for provably correct candidate programs with provable upper runtime bounds; at any given time it focuses resources on those programs with the currently best proven time bounds. The method is as fast as the initially unknown fastest problem solver for the given problem class, save for a constant slowdown factor of at most $1+\epsilon$, $\epsilon > 0$, and an additive constant that does not depend on the problem instance! Is this algorithm then the *holy grail* of computer science? Unfortunately not quite, since the additive constant (which disappears in the O()-notation of theoretical CS) may be huge, and practical applications may not ignore it. This motivates the next section, which addresses all kinds of formal optimality (not just asymptotic optimality).

## 2.8 Fully Self-Referential, Self-Improving Gödel Machine

We may use Gödel's self-reference trick to build a general, fully self-referential, self-improving, optimally efficient problem solver [29]. A Gödel Machine is a computer whose original software includes axioms describing the hardware and the original software (this is possible without circularity) plus whatever is known about the (probabilistic) environment plus some formal goal in the form of an arbitrary user-defined utility function, e.g., cumulative future expected reward in a sequence of optimization tasks - see equation (1). The original software also includes a proof searcher which uses the axioms (and possibly an online variant of Levin's universal search [15]) to systematically make pairs ("proof", "program") until it finds a proof that a rewrite of the original software through "program" will increase utility. The machine can be designed such that each self-rewrite is necessarily globally optimal in the sense of the utility function, even those rewrites that destroy the proof searcher [29].

## 2.9 Practical Algorithms For Program Learning

The theoretically optimal universal methods above are optimal in ways that do not (yet) immediately yield practically feasible general problem solvers, due to possibly large initial overhead costs. Which are today's practically most promising extensions of traditional machine learning? Since virtually all realistic sensory inputs of robots and other cognitive systems are sequential by nature, the future of machine learning and AI in general depends on progress in sequence processing, as opposed to the traditional processing of stationary input patterns. To narrow the gap between the learning abilities of humans and machines, we will have to study how to learn general algorithms instead of such reactive mappings. Most traditional methods for learning time series and mappings from sequences to sequences, however, are based on simple time windows: one of the numerous feedforward ML techniques such as feedforward neural nets (NN) [1] or support vector machines [38] is used to map a restricted, fixed time window of sequential input values to desired target values.
Of course such approaches are bound to fail if there are temporal dependencies exceeding the time window size. Large time windows, on the other hand, yield unacceptable numbers of free parameters. Presently studied, rather general sequence learners include certain probabilistic approaches and especially recurrent neural networks (RNNs), e.g., [19]. RNNs have adaptive feedback connections that allow them to learn mappings from input sequences to output sequences. They can implement any sequential, algorithmic behavior implementable on a personal computer. In gradient-based RNNs, however, we can *differentiate our wishes with respect to programs,* to obtain a search direction in algorithm space. RNNs are biologically more plausible and computationally more powerful than other adaptive models such as Hidden Markov Models (HMMs - no continuous internal states), feedforward networks & Support Vector Machines (no internal states at all). For several reasons, however, the first RNNs could not learn to look far back into the past. This problem was overcome by RNNs of the *Long Short-Term Memory* type (LSTM), currently the most powerful and practical supervised RNN architecture for many applications, trainable either by gradient descent [9] or evolutionary methods [32], occasionally profiting from a marriage with probabilistic approaches [8]. Unsupervised RNNs that learn without a teacher to control physical processes or robots frequently use evolutionary algorithms [10, 22] to learn appropriate programs (RNN weight matrices) through trial and error [41]. Recent work brought progress
through a focus on reducing search spaces by co-evolving the comparatively small weight vectors of individual recurrent neurons [7]. Such RNNs can learn to create memories of important events, solving numerous RL / optimization tasks unsolvable by traditional RL methods [6, 7]. They are among the most promising methods for practical program learning, and are currently being applied to the control of sophisticated robots such as the walking biped of TU Munich [16].

## 3 The Next 25 Years

Where will AI research stand in 2031, 25 years from now, 100 years after Gödel's ground-breaking paper [5], some 200 years after Babbage's first designs, some 400 years after the first automatic calculator by Schickard (and some 2000 years after the crucifixion of the man whose birth year anchors the Western calendar)? Trivial predictions are those that just naively extrapolate the current trends, such as: computers will continue to get faster by a factor of roughly 1000 per decade; hence they will be at least a million times faster by 2031. According to frequent estimates, current supercomputers achieve roughly 1 percent of the raw computational power of a human brain, hence those of 2031 will have 10,000 "brain powers"; and even cheap devices will achieve many brain powers. Many tasks that are hard for today's software on present machines will become easy without even fundamentally changing the algorithms. This includes numerous pattern recognition and control tasks arising in factories of many industries, currently still employing humans instead of robots. Will theoretical advances and practical software keep up with the hardware development? We are convinced they will. As discussed above, the new millennium has already brought fundamental new insights into the problem of constructing theoretically optimal rational agents or universal AIs, even if those do not yet immediately translate into practically feasible methods.
On the other hand, on a more practical level, there has been rapid progress in learning algorithms for agents interacting with a dynamic environment, autonomously discovering true sequence-processing, problem-solving programs, as opposed to the reactive mappings from stationary inputs to outputs studied in most of traditional machine learning research. In the author's opinion the above-mentioned theoretical and practical strands are going to converge. In conjunction with the ongoing hardware advances, this will yield non-universal but nevertheless rather general artificial problem-solvers whose capabilities will exceed those of most if not all humans in many domains of commercial interest. This may seem like a bold prediction to some, but it is actually a trivial one as there are so many experts who would agree with it. Nontrivial predictions are those that anticipate truly unexpected, revolutionary breakthroughs. By definition, these are hard to predict. For example, in 1985 only very few scientists and science fiction authors predicted the WWW revolution of the 1990s. The few who did were not influential enough to make a significant part of humanity believe in their predictions and prepare for their coming true. Similarly, after the latest stock market crash one can always find with high probability some "prophet in the desert" who predicted it in advance, but had few if any followers until the crash really occurred.
# Qualitative Belief Conditioning Rules (QBCR)

Florentin Smarandache Department of Mathematics University of New Mexico Gallup, NM 87301, U.S.A. [email protected] Jean Dezert ONERA 29 Av. de la Division Leclerc 92320 Châtillon, France. [email protected]

Abstract - In this paper we extend the new family of (quantitative) Belief Conditioning Rules (BCR) recently developed in the Dezert-Smarandache Theory (DSmT) to their qualitative counterpart for belief revision. Since the revision of quantitative as well as qualitative belief assignments given the occurrence of a new event (the conditioning constraint) can be done in many possible ways, we present here only what we consider the most appealing Qualitative Belief Conditioning Rules (QBCR), which allow one to revise beliefs directly with words and linguistic labels and thus avoid the introduction of ad-hoc translations of qualitative beliefs into quantitative ones for solving the problem.

Keywords: qualitative belief, belief conditioning rules (BCRs), computing with words, Dezert-Smarandache Theory (DSmT), reasoning under uncertainty.

## 1 Introduction

In this paper, we propose a simple arithmetic of linguistic labels which allows a direct extension of the quantitative Belief Conditioning Rules (BCR) proposed in the DSmT [3, 4] framework to their qualitative counterpart. Qualitative belief assignments are well adapted for manipulating information expressed in natural language and usually reported by human experts or AI-based expert systems. A new method for computing directly with words (CW) for combining and conditioning qualitative information is presented.
CW, more precisely computing with linguistic labels, is usually more vague and less precise than computing with numbers, but it is expected to offer better robustness and flexibility for combining uncertain and conflicting human reports, because in most cases human experts are less efficient at providing (and justifying) precise quantitative beliefs than qualitative beliefs. Before extending the quantitative DSmT-based conditioning rules to their qualitative counterparts, it is necessary to define a few new important operators on linguistic labels and to define what a qualitative belief assignment is. We will then show through simple examples how the combination of qualitative beliefs can be obtained in the DSmT framework.

## 2 Qualitative Operators And Belief Assignments

Since one wants to compute directly with words (CW) instead of numbers, we define without loss of generality a finite set of linguistic labels L̃ = {L1, L2, . . . , Ln} where n ≥ 2 is an integer. L̃ is endowed with a total order relationship ≺, so that L1 ≺ L2 ≺ . . . ≺ Ln. To work on a linguistic set closed under the linguistic addition and multiplication operators, one extends L̃ with two extreme values L0 and Ln+1, where L0 corresponds to the minimal qualitative value and Ln+1 corresponds to the maximal qualitative value, in such a way that

L0 ≺ L1 ≺ L2 ≺ . . . ≺ Ln ≺ Ln+1

where ≺ means inferior to, or less, or smaller (in quality) than, etc. Therefore, one will work on the extended ordered set L of qualitative values L = {L0, L1, L2, . . . , Ln, Ln+1}. The qualitative addition and multiplication of linguistic labels, which are commutative, associative, and unitary operators, are defined as follows (see Chapter 10 in [4] for details and examples):

- Addition: if i + j < n + 1, Li + Lj = Li+j, otherwise Li + Lj = Ln+1.

arXiv:0709.0522v1 [cs.AI] 4 Sep 2007
b) Using QBCR2: one gets

qmQBCR2(A|D̄) = L3
qmQBCR2(C|D̄) = L3

The same concluding remarks as for case 1 can be drawn for case 2. Note that in this case there is uncertainty in the decision to bomb zone A or zone C because they have the same supporting belief. The only difference with respect to case 1 is that the zone to be bombed (whichever one is chosen, A or C) will remain larger than in case 1, because D has no intersection with A, B and C in this model.

## 6.3 Example 3

Let's modify the previous example to examine what happens when an unconventional bombing strategy is used. Here we still consider four zones under surveillance, i.e. Θ = {A, B, C, D} and L = {L0, L1, L2, L3, L4, L5, L6}, but with the following prior quasi-normalized qualitative basic belief mass qm(.):

qm(A) = L1, qm(C) = L3, qm(D) = L2

All other qualitative masses take the value L0. Such a prior normally/rationally suggests bombing zone C first, since it is the one carrying the highest belief on the location of enemies. But for some unknown reasons (military, political or whatever) let's assume that the headquarters has finally decided to bomb D first. Let's examine how the prior qm(.) will be revised with QBCR1 and QBCR2 in such a situation for the two cases:

- **Case 1**: D̄ ≠ A ∪ B ∪ C.

a) Using QBCR1: qm(A) = L1 is transferred to A ∩ D̄, since A ∩ D̄ is the largest element from D̄ which is included in A, so we get qmQBCR1(A ∩ D̄|D̄) = L1; similarly qm(C) = L3 is transferred to C ∩ D̄, since C ∩ D̄ is the largest element from D̄ which is included in C, so we get qmQBCR1(C ∩ D̄|D̄) = L3. Also, qm(D) = L2 is transferred to D̄ since no element from D̄ is included in D, therefore qmQBCR1(D̄|D̄) = L2. This qualitative conditioned mass qmQBCR1(.) is quasi-normalized since L1 + L3 + L2 = L6 = Lmax.
In summary, with QBCR1 one gets in this case:

qmQBCR1(A ∩ D̄|D̄) = L1
qmQBCR1(C ∩ D̄|D̄) = L3
qmQBCR1(D̄|D̄) = L2

b) Using QBCR2: qm(A) = L1 is transferred to A ∩ D̄, and qm(C) = L3 is transferred to C ∩ D̄. Since no qualitative focal element exists in D̄, qm(D) = L2 is transferred to D̄, and we get the same result as for QBCR1.
- **Case 2**: D̄ = A ∪ B ∪ C.

a) Using QBCR1: the qualitative masses of A, B, C do not change since they are included in A ∪ B ∪ C, where the truth is. The qualitative mass of D becomes zero (i.e. it takes the linguistic value L0) since D is outside the truth, and qm(D) = L2 is transferred to A ∪ B ∪ C. Hence:

qmQBCR1(A|D̄) = L1
qmQBCR1(C|D̄) = L3
qmQBCR1(A ∪ B ∪ C|D̄) = L2

This resulting qualitative conditional mass is also quasi-normalized.

b) Using QBCR2: the qualitative mass of D becomes (linguistically) zero since D is outside the truth, but now qm(D) = L2 is equally split between A and C since they are the only qualitative focal elements from D1, which contains all parts of A ∪ B ∪ C; therefore A and C each receive (1/2)L2 = L1. Hence:

qmQBCR2(A|D̄) = L1 + (1/2)L2 = L1 + L2/2 = L1 + L1 = L2
qmQBCR2(C|D̄) = L3 + (1/2)L2 = L3 + L2/2 = L3 + L1 = L4

Again, the resulting qualitative conditional mass is quasi-normalized. As a concluding remark, we see that even if an unconventional bombing strategy is chosen first, the results obtained by QBCR rules 1 and 2 are legitimate and coherent with intuition, since they commit the highest belief to either C ∩ D̄ (case 1) or C (case 2), which is normal because the prior belief mass of C was the highest one before bombing D.

## 6.4 Example 4

Let's complicate the previous example a bit by working directly with a prior qm(.) defined on the super-power set S^Θ (see the previous Footnote 3), i.e. the complement is allowed among the set of propositions to deal with. As previously, we consider four zones under surveillance, i.e. Θ = {A, B, C, D} and L = {L0, L1, L2, L3, L4, L5, L6}. The following prior qualitative basic belief mass qm(.) is extended from the hyper-power set to the super-power set, i.e. qm(.) : S^Θ → L:

qm(A) = L1, qm(C) = L1, qm(D) = L2
qm(C ∪ D) = L1, qm(C ∩ D̄) = L1

All other qualitative masses take the value L0.
This qualitative mass is quasi-normalized since

L1 + L1 + L2 + L1 + L1 = L1+1+2+1+1 = L6 = Lmax

We assume that the military headquarters has decided to bomb region D in priority because there was a high qualitative belief on the presence of enemies in D according to the prior qbba qm(.). But after bombing and verification, it turns out that the enemies were not in D (the same scenario as for example 2). Let's examine the results of the conditioning by the rules QBCR1 and QBCR2 for cases 1 and 2:

- **Case 1**: D̄ ≠ A ∪ B ∪ C.

a) Using QBCR1: qm(A) = L1 is transferred to A ∩ D̄, since A ∩ D̄ is the largest element (with respect to inclusion) from D̄ which is included in A. qm(C) = L1 is similarly transferred to C ∩ D̄, since C ∩ D̄ is the largest element from D̄ which is included in C. qm(C ∪ D) = L1 is also transferred to C ∩ D̄, since C ∩ D̄ is the largest element from D̄ which is included in C ∪ D. qm(D) = L2 is transferred to D̄ since no element from D̄ is included in D. In summary, we get:

qmQBCR1(A ∩ D̄|D̄) = L1
qmQBCR1(C ∩ D̄|D̄) = qm(C ∩ D̄) + qm(C) + qm(C ∪ D) = L1 + L1 + L1 = L3
qmQBCR1(D̄|D̄) = L2

All others are equal to L0. The resulting qualitative conditioned mass is quasi-normalized since L1 + L3 + L2 = L6 = Lmax.
- Multiplication¹: Li × Lj = Lmin{i,j}

Let's consider a finite and discrete frame of discernment Θ = {θ1, . . . , θn} for the given problem under consideration, in which the true solution must lie; its model M(Θ), defined by the set of integrity constraints on elements of Θ (i.e. free-DSm model, hybrid model or Shafer's model); and its corresponding hyper-power set, denoted DΘ, that is, the Dedekind lattice on Θ [3], which is nothing but the space of propositions generated with the ∩ and ∪ operators and the elements of Θ, taking into account the integrity constraints (if any) of the model. A qualitative basic belief assignment (qbba), also called a qualitative belief mass, is a mapping function qm(.) : DΘ → L. In the sequel, all qualitative masses not explicitly specified in the examples are by default (and for notation convenience) assumed to take the minimal linguistic value L0.

## 3 Quasi-Normalization Of Qualitative Masses

There is no way to define a normalized qm(.), but a qualitative quasi-normalization [4] is nevertheless possible if needed, as follows:

a) If the previously defined labels L0, L1, L2, . . ., Ln, Ln+1 from the set L are equidistant, i.e. the (linguistic) distance between any two consecutive labels Lj and Lj+1 is the same for any j ∈ {0, 1, 2, . . ., n}, then one can make an isomorphism between L and a set of sub-unitary numbers from the interval [0, 1] in the following way: Li = i/(n + 1) for all i ∈ {0, 1, 2, . . ., n + 1}, and therefore the interval [0, 1] is divided into n + 1 equal parts.
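The two label operators above can be sketched in a few lines of code. The helper names below are illustrative, not from the paper; a label Li is represented simply by its integer index i, with addition saturating at L_{n+1} and multiplication taking the minimum index.

```python
# Sketch of the label operators of Section 2 (hypothetical helper names).
# A label Li is represented by its index i in {0, ..., n+1}.

def make_label_ops(n):
    """Return the saturated addition and min-multiplication on {0, ..., n+1}."""
    top = n + 1

    def add(i, j):
        # Li + Lj = L_{i+j} if i + j < n + 1, otherwise L_{n+1}
        return min(i + j, top)

    def mul(i, j):
        # Li x Lj = L_{min{i,j}}
        return min(i, j)

    return add, mul

add, mul = make_label_ops(5)   # L = {L0, ..., L6}, as in the later examples
assert add(1, 3) == 4          # L1 + L3 = L4
assert add(4, 5) == 6          # saturates at L6
assert mul(2, 5) == 2          # L2 x L5 = L2
```

Representing labels by their indices makes both operators trivially commutative and associative, matching the properties claimed in the text.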
Hence, a qualitative mass qm(Xi) = Li is equivalent to a quantitative mass m(Xi) = i/(n + 1), which is normalized if

$$\sum_{X\in D^{\Theta}}m(X)=\sum_{k}i_{k}/(n+1)=1$$

but this is equivalent to

$$\sum_{X\in D^{\Theta}}qm(X)=\sum_{k}L_{i_{k}}=L_{n+1}$$

In this case we have a qualitative normalization, similar to the (classical) numerical normalization.

b) But if the previously defined labels L0, L1, L2, . . ., Ln, Ln+1 from the set L are not equidistant, so that the interval [0, 1] cannot be split into equal parts according to the distribution of the labels, then it makes sense to consider a qualitative quasi-normalization, i.e. an approximation of the (classical) numerical normalization for the qualitative masses in the same way:

$$\sum_{X\in D^{\Theta}}qm(X)=L_{n+1}$$

In general, if we don't know whether the labels are equidistant or not, we say that a qualitative mass is quasi-normalized when the above summation holds.

## 4 Quantitative Belief Conditioning Rules (BCR)

Before presenting the new Qualitative Belief Conditioning Rules (QBCR) in the next section, it is important to briefly recall what the (quantitative) Belief Conditioning Rules (BCR) are, what the motivation for their development in the DSmT framework was, and the fundamental difference between BCR and Shafer's Conditioning Rule (SCR) proposed in [2].

So, let's suppose one has a prior basic belief assignment (bba) m(.) defined on the hyper-power set DΘ, and one finds out (or one assumes) that the truth is in a given element A ∈ DΘ, i.e. A has really occurred or is supposed to have occurred. The problem of belief conditioning is how to properly revise the prior bba m(.) with the knowledge about the occurrence of A. Simply stated: how to compute m(.|A) from the knowledge available, that is, from any prior bba m(.) and A?
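The quasi-normalization condition can be checked mechanically. In the sketch below (the function name is illustrative), labels are again represented by their indices and summed with the saturated qualitative addition; a qbba is quasi-normalized when the qualitative sum reaches the maximal label.

```python
# Quasi-normalization check (hypothetical helper): a qbba is quasi-normalized
# when the qualitative sum of its labels equals the maximal label L_{n+1}.

def quasi_normalized(qm, n):
    top = n + 1
    total = 0
    for label in qm.values():
        total = min(total + label, top)   # saturated label addition
    return total == top

# The prior of Example 2: qm(A) = L1, qm(C) = L1, qm(D) = L4 with Lmax = L6.
assert quasi_normalized({"A": 1, "C": 1, "D": 4}, 5)
assert not quasi_normalized({"A": 1, "C": 1}, 5)
```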
¹A more precise multiplication operator has been proposed in [1].
## 4.1 Shafer's Conditioning Rule (SCR)

Until very recently, the most commonly used conditioning rule for belief revision was the one proposed by Shafer [2], referred to here as Shafer's Conditioning Rule (SCR). The SCR consists in combining the prior bba m(.) with a specific bba focused on A using Dempster's rule of combination, transferring the conflicting mass to non-empty sets in order to provide the revised bba. In other words, the conditioning by a proposition A is obtained by SCR as follows:

$$m_{SCR}(.|A)=[m\oplus m_{S}](.) \qquad (1)$$

where m(.) is the prior bba to update, A is the conditioning event, mS(.) is the bba focused on A defined by mS(A) = 1 and mS(X) = 0 for all X ≠ A, and ⊕ denotes Dempster's rule of combination [2].

The SCR approach, based on Dempster's rule of combination of the prior bba with the bba focused on the conditioning event, remains subjective, since in such a belief revision process both sources are actually subjective, and SCR doesn't properly manage the objective nature/absolute truth carried by the conditioning term. Indeed, conditioning a prior mass m(.) knowing (or assuming) that the truth is in A means that we have in hand an absolute (not subjective) knowledge: the truth in A has occurred (or is assumed to have occurred), thus A is realized (or is assumed to be realized) and this is (or at least must be interpreted as) an absolute truth. The conditioning term "given A" must therefore be considered as an absolute truth, while mS(A) = 1 introduced in SCR cannot actually refer to an absolute truth, but only to a subjective certainty of the possible occurrence of A from a virtual second source of evidence. The advantage of SCR remains undoubtedly its simplicity, and the main argument in its favor is its coherence with conditional probability when manipulating Bayesian belief assignments.
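Under Shafer's model, equation (1) reduces to Dempster conditioning: the mass m(Y) of each focal set moves to Y ∩ A, and the conflicting mass (focal sets disjoint from A) is normalized away. The sketch below is a minimal illustration with a made-up prior, chosen so that it reproduces the SCR values quoted later in the text; it is not the paper's own example data.

```python
from fractions import Fraction

# SCR sketch under Shafer's model: combining the prior bba with the bba focused
# on A via Dempster's rule gives m_SCR(X|A) = sum_{Y : Y∩A=X} m(Y) / (1 - k),
# where k is the total mass of the focal sets disjoint from A.
def scr(m, A):
    k = sum((v for Y, v in m.items() if not (Y & A)), Fraction(0))
    out = {}
    for Y, v in m.items():
        X = Y & A
        if X:
            out[X] = out.get(X, Fraction(0)) + v / (1 - k)
    return out

# Hypothetical prior on Theta = {t1, t2, t3}, conditioning on t2 ∪ t3.
m = {frozenset({"t1"}): Fraction(1, 2),
     frozenset({"t2"}): Fraction(1, 8),
     frozenset({"t3"}): Fraction(1, 8),
     frozenset({"t2", "t3"}): Fraction(1, 4)}
cond = scr(m, frozenset({"t2", "t3"}))
assert cond[frozenset({"t2"})] == Fraction(1, 4)        # 0.25
assert cond[frozenset({"t3"})] == Fraction(1, 4)        # 0.25
assert cond[frozenset({"t2", "t3"})] == Fraction(1, 2)  # 0.50
```

Using `Fraction` keeps the normalization exact; the conflicting mass here is the 1/2 assigned to {t1}, which is discarded and renormalized away.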
But in our opinion, SCR is better interpreted as the fusion of m(.) with a particular subjective bba mS(A) = 1 than as an objective belief conditioning rule. This fundamental remark motivated us to develop a new family of BCR [4] based on the hyper-power set decomposition (HPSD) explained briefly in the next section. It turns out that many BCR are possible, because the redistribution of the masses of elements outside of A (the conditioning event) to those inside A can be done in n ways. This will be briefly presented right after the next section.

## 4.2 Hyper-Power Set Decomposition (HPSD)

Let Θ = {θ1, θ2, . . . , θn}, n ≥ 2, be a frame with a model M(Θ) (free DSm model, hybrid or Shafer's model) and its corresponding hyper-power set DΘ. Let's consider a (quantitative) basic belief assignment (bba) m(.) : DΘ → [0, 1] such that ∑_{X∈DΘ} m(X) = 1. Suppose one finds out that the truth is in the set A ∈ DΘ \ {∅}. Let PD(A) = 2^A ∩ DΘ \ {∅}, i.e. all non-empty parts (subsets) of A which are included in DΘ. Let's consider the normal case when A ≠ ∅ and ∑_{Y∈PD(A)} m(Y) > 0. For the degenerate case when the truth is in A = ∅, we consider Smets' open world, which means that there are other hypotheses Θ′ = {θn+1, θn+2, . . . , θn+m}, m ≥ 1, and the truth is in A ∈ DΘ′ \ {∅}. If A = ∅ and we consider a closed world, then the problem is impossible. For another degenerate case, when ∑_{Y∈PD(A)} m(Y) = 0, i.e. when the source gave us totally (100%) wrong information m(.), we define m(A|A) ≜ 1 and, as a consequence, m(X|A) = 0 for any X ≠ A. Let s(A) = {θi1, θi2, . . . , θip}, 1 ≤ p ≤ n, be the singletons/atoms that compose A (for example, if A = θ1 ∪ (θ3 ∩ θ4) then s(A) = {θ1, θ3, θ4}).
The Hyper-Power Set Decomposition (HPSD) of DΘ \ {∅} consists in its decomposition into the three following subsets generated by A:

- D1 = PD(A), the parts of A which are included in the hyper-power set, except the empty set;
- D2 = {(Θ \ s(A)), ∪, ∩} \ {∅}, i.e. the sub-hyper-power set generated by Θ \ s(A) under ∪ and ∩, without the empty set;
- D3 = (DΘ \ {∅}) \ (D1 ∪ D2); each set from D3 has in its formula singletons from both s(A) and Θ \ s(A), in the case when Θ \ s(A) is different from the empty set.

D1, D2 and D3 are pairwise disjoint and their union is DΘ \ {∅}.

Simple example of HPSD: Let's consider Θ = {θ1, θ2, θ3} with Shafer's model (i.e. all elements of Θ are exclusive) and let's assume that the truth is in θ2 ∪ θ3, i.e. the conditioning term is θ2 ∪ θ3. Then one has the following:
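For the simple example just given, the decomposition can be computed directly. The sketch below assumes Shafer's model, under which the hyper-power set reduces to the power set of Θ minus the empty set; the function names are illustrative.

```python
from itertools import combinations

# HPSD sketch under Shafer's model: split 2^Theta \ {∅} into D1 (parts of A),
# D2 (parts of Theta \ s(A)) and D3 (everything else, mixing both sides).
def powerset(s):
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(sorted(s), r)]

def hpsd(theta, A):
    D = powerset(theta)
    D1 = [X for X in D if X <= A]
    D2 = [X for X in D if X <= theta - A]
    D3 = [X for X in D if not (X <= A or X <= theta - A)]
    return D1, D2, D3

theta = frozenset({"t1", "t2", "t3"})
D1, D2, D3 = hpsd(theta, frozenset({"t2", "t3"}))   # truth in t2 ∪ t3
assert set(D1) == {frozenset({"t2"}), frozenset({"t3"}), frozenset({"t2", "t3"})}
assert set(D2) == {frozenset({"t1"})}
assert len(D3) == 3    # the sets mixing t1 with t2 and/or t3
```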
mSCR(θ2|θ2 ∪ θ3) = 0.25
mSCR(θ3|θ2 ∪ θ3) = 0.25
mSCR(θ2 ∪ θ3|θ2 ∪ θ3) = 0.50

More complex and detailed examples can be found in [3].

## 5 Qualitative Belief Conditioning Rules (QBCR)

In this section we propose two Qualitative Belief Conditioning Rules (QBCR) which extend the principles of the quantitative BCR to the qualitative domain, using the operators on linguistic labels defined in Section 2. We consider from now on a general frame Θ = {θ1, θ2, . . . , θn}, a given model M(Θ) with its hyper-power set DΘ, and a given extended ordered set L of qualitative values L = {L0, L1, L2, . . . , Lm, Lm+1}. The prior qualitative basic belief assignment (qbba) taking its values in L is denoted qm(.). We assume in the sequel that the conditioning event is A ≠ ∅, A ∈ DΘ, i.e. the absolute truth is in A.

## 5.1 Qualitative Belief Conditioning Rule No 1 (QBCR1)

The first QBCR, denoted QBCR1, redistributes masses in a pessimistic/prudent way, as follows:

- transfer the mass of each element Y in D2 ∪ D3 to the largest element X in D1 which is contained by Y;
- if no such element X exists, then the mass of Y is transferred to A.

The mathematical formula for QBCR1 is then given by:

- If X ∉ D1,

$$qm_{QBCR1}(X|A)=L_{\min}\equiv L_{0} \qquad (3)$$

- If X ∈ D1,

$$qm_{QBCR1}(X|A)=qm(X)+qS_{1}(X,A)+qS_{2}(X,A) \qquad (4)$$

where the addition operator involved in (4) corresponds to the addition operator on linguistic labels defined in Section 2, and where the qualitative summations qS1(X, A) and qS2(X, A) are defined by:

$$qS_{1}(X,A)\triangleq\sum_{\substack{Y\in D_{2}\cup D_{3}\\ X\subset Y,\ X=\max}}qm(Y) \qquad (5)$$

$$qS_{2}(X,A)\triangleq\sum_{\substack{Y\in D_{2}\cup D_{3}\\ Y\cap A=\emptyset,\ X=A}}qm(Y) \qquad (6)$$

qS1(X, A) corresponds to the transfer of the qualitative mass of each element Y in D2 ∪ D3 to the largest element X in D1 contained in Y, and qS2(X, A) corresponds to the transfer of the mass of Y to A when no such largest element X in D1 exists.
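Restricted to Shafer's model, QBCR1 takes a particularly compact form: the largest element of D1 contained in a focal set Y is simply Y ∩ A, and when that intersection is empty the mass goes to A itself. The sketch below uses illustrative names and integer label indices with saturated addition; it reproduces the QBCR1 result of Example 3, case 2.

```python
# QBCR1 sketch under Shafer's model (hypothetical helper names). Each focal
# set Y transfers its label to Y ∩ A when that intersection is non-empty,
# and to A otherwise; labels are indices added with saturation at L_{n+1}.
def qbcr1(qm, A, n):
    top = n + 1
    out = {}
    for Y, label in qm.items():
        X = Y & A if Y & A else A   # largest part of A inside Y, else A itself
        out[X] = min(out.get(X, 0) + label, top)
    return out

# Example 3, case 2: qm(A) = L1, qm(C) = L3, qm(D) = L2, truth in A ∪ B ∪ C.
A, C, D = frozenset("A"), frozenset("C"), frozenset("D")
truth = frozenset("ABC")
cond = qbcr1({A: 1, C: 3, D: 2}, truth, 5)
assert cond == {A: 1, C: 3, truth: 2}   # matches the qmQBCR1 values in the text
```

Note that the output is again quasi-normalized: the label indices sum to 6 = Lmax.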
The addition and multiplication tables for L = {L0, . . . , L6} are:

Table 1: Addition table

| +  | L0 | L1 | L2 | L3 | L4 | L5 | L6 |
|----|----|----|----|----|----|----|----|
| L0 | L0 | L1 | L2 | L3 | L4 | L5 | L6 |
| L1 | L1 | L2 | L3 | L4 | L5 | L6 | L6 |
| L2 | L2 | L3 | L4 | L5 | L6 | L6 | L6 |
| L3 | L3 | L4 | L5 | L6 | L6 | L6 | L6 |
| L4 | L4 | L5 | L6 | L6 | L6 | L6 | L6 |
| L5 | L5 | L6 | L6 | L6 | L6 | L6 | L6 |
| L6 | L6 | L6 | L6 | L6 | L6 | L6 | L6 |

Table 2: Multiplication table

| ×  | L0 | L1 | L2 | L3 | L4 | L5 | L6 |
|----|----|----|----|----|----|----|----|
| L0 | L0 | L0 | L0 | L0 | L0 | L0 | L0 |
| L1 | L0 | L1 | L1 | L1 | L1 | L1 | L1 |
| L2 | L0 | L1 | L2 | L2 | L2 | L2 | L2 |
| L3 | L0 | L1 | L2 | L3 | L3 | L3 | L3 |
| L4 | L0 | L1 | L2 | L3 | L4 | L4 | L4 |
| L5 | L0 | L1 | L2 | L3 | L4 | L5 | L5 |
| L6 | L0 | L1 | L2 | L3 | L4 | L5 | L6 |

## 6.1 Example 1

Let's consider Θ = {A, B, C, D}, the labels L = {L0, L1, L2, L3, L4, L5, L6}, and the prior qualitative mass defined by qm(A) = L1, qm(C) = L1, qm(D) = L4, while the qualitative masses of all other elements of GΘ take the minimal value L0. This qualitative mass is quasi-normalized since L1 + L1 + L4 = L1+1+4 = L6 = Lmax.

If we assume that the conditioning event is the proposition A ∪ B, i.e. the absolute truth is in A ∪ B, the hyper-power set decomposition (HPSD) is obtained as follows: D1 is formed by all parts included in A ∪ B, i.e. D1 = {A ∩ B, A, B, A ∪ B, B ∩ D, A ∪ (B ∩ D), (A ∩ B) ∪ (B ∩ D)}, D2 is the set generated by {(C, D), ∪, ∩} \ ∅ = {C, D, C ∪ D, C ∩ D}, and D3 = {A ∪ C, A ∪ D, B ∪ C, B ∪ D, A ∪ B ∪ C, A ∪ (C ∩ D), . . .}.

The qualitative mass of element D is transferred to D ∩ (A ∪ B) = B ∩ D according to the model, since D is in the set D2 ∪ D3 and the largest element X in D1 which is contained by element D is B ∩ D. Whence qmQBCR1(B ∩ D|A ∪ B) = L4, while qmQBCR1(D|A ∪ B) = L0. The qualitative mass of element C, which is in D2 ∪ D3 but has no intersection with A ∪ B (i.e. the intersection is empty), is transferred to the whole A ∪ B. Whence qmQBCR1(A ∪ B|A ∪ B) = L1, while qmQBCR1(C|A ∪ B) = L0. Since the truth is in A ∪ B, the qualitative masses of the elements A and B, which are included in A ∪ B, are not changed in this example, i.e. qmQBCR1(A|A ∪ B) = L1 and qmQBCR1(B|A ∪ B) = L0. One sees that the resulting qualitative
conditional mass qmQBCR1(.) is also quasi-normalized since

L4 + L0 + L1 + L0 + L1 + L0 = L6 = Lmax

In summary, one gets the following qualitative conditioned masses with QBCR1:

qmQBCR1(B ∩ D|A ∪ B) = L4
qmQBCR1(A ∪ B|A ∪ B) = L1
qmQBCR1(A|A ∪ B) = L1

Analogously to QBCR1, with QBCR2 the qualitative mass of the element D is transferred to D ∩ (A ∪ B) = B ∩ D according to the model, since D is in D2 ∪ D3 and the largest element X in D1 which is contained by D is B ∩ D. Whence qmQBCR2(B ∩ D|A ∪ B) = L4, while qmQBCR2(D|A ∪ B) = L0. But, differently from QBCR1, the qualitative mass of C, which is in D2 ∪ D3 but has no intersection with A ∪ B (i.e. the intersection is empty), is transferred to A only, since A ∈ A ∪ B and qm(A) is different from zero (while the other sets included in A ∪ B have qualitative mass equal to L0). Whence qmQBCR2(A|A ∪ B) = L1 + L1 = L2, while qmQBCR2(C|A ∪ B) = L0. Similarly, the resulting qualitative conditional mass qmQBCR2(.) is also quasi-normalized since L4 + L0 + L2 + L0 = L6 = Lmax. Therefore the result obtained with QBCR2 is:

qmQBCR2(B ∩ D|A ∪ B) = L4
qmQBCR2(A|A ∪ B) = L2

## 6.2 Example 2

Let's consider a more complex example related to military decision support. We assume that the frame Θ = {A, B, C, D} corresponds to a set of four regions under surveillance, because these regions are known to potentially shelter some dangerous enemies. The linguistic labels used for specifying qualitative masses belong to L = {L0, L1, L2, L3, L4, L5, L6}. Let's consider the following prior qualitative mass qm(.) defined by:

qm(A) = L1, qm(C) = L1, qm(D) = L4

All other masses take the value L0. This qualitative mass is quasi-normalized since L1 + L1 + L4 = L1+1+4 = L6 = Lmax.

We assume that the military headquarters has decided to bomb region D in priority because there was a high qualitative belief on the presence of enemies in zone D according to the prior qbba qm(.).
But let's suppose that after bombing and verification, it turns out that the enemies were not in D. The important question the headquarters now faces is how to revise its prior qualitative belief qm(.), knowing that the absolute truth is now not in D, i.e. that D̄ (the complement of D) is absolutely true. The problem is a bit different from the previous one, since the conditioning term D̄ in this example does not belong to the hyper-power set DΘ. In such a case, one actually has to work directly on the super-power set³, as proposed in [4] (Chap. 8). D̄ belongs to DΘ only if Shafer's model (or some other specific hybrid model - see case 2 below) is adopted, i.e. when region D has no overlap with regions A, B or C. "The truth is not in D" is in general (except with Shafer's model or some specific hybrid models) not equivalent to "the truth is in A ∪ B ∪ C" but to "the truth is in D̄". That's why the following two cases need to be analyzed:

- **Case 1**: D̄ ≠ A ∪ B ∪ C. If we consider the model represented in Figure 2, then it is clear that D̄ ≠ A ∪ B ∪ C. The Super-Power Set Decomposition (SPSD) is the following:

- if the truth is in A, then D1 is formed by all non-empty parts of A;
- D2 is formed by all non-empty parts of Ā;
- D3 is formed by what's left, i.e. D3 = (S^Θ \ {∅}) \ (D1 ∪ D2); thus D3 is formed by all elements from S^Θ which have the form of unions of some element(s) from D1 and some element(s) from D2, or by all elements from S^Θ that overlap both A and Ā.

In our particular example: D1 is formed by all non-empty parts of D̄; D2 is formed by all non-empty parts of D; D3 = {A, B, C, A ∪ D, B ∪ D, A ∪ B, . . .}.

a) Using QBCR1: one gets

qmQBCR1(A ∩ D̄|D̄) = L1
qmQBCR1(C ∩ D̄|D̄) = L1
qmQBCR1(D̄|D̄) = L4

b) Using QBCR2: one gets

qmQBCR2(A ∩ D̄|D̄) = L1 + (1/2)L4 = L1 + L2 = L3
qmQBCR2(C ∩ D̄|D̄) = L1 + (1/2)L4 = L3

Note that with both conditioning rules one gets quasi-normalized qualitative belief masses. The results indicate that zones A and C have the same level of qualitative belief after the conditioning, which is normal. QBCR1, however, which is more prudent, commits the highest belief to the whole zone D̄, which actually represents the less specific information, while QBCR2 commits equal beliefs to the restricted zones A ∩ D̄ and C ∩ D̄ only. As far as only the minimal surface of the zone to bomb is concerned (and if zones A ∩ D̄ and C ∩ D̄ have the same surface), a random decision has to be taken between the two possibilities. Of course, other military constraints may need to be taken into account in the decision process if the random choice is not preferred.

- **Case 2**: D̄ = A ∪ B ∪ C. This case occurs only when D ∩ (A ∪ B ∪ C) = ∅, as for example in the following model⁴. In this second case, "the truth is not in D" is equivalent to "the truth is in A ∪ B ∪ C".
The decomposition is the following: D1 is formed by all non-empty parts of A ∪ B ∪ C; D2 = {D}; D3 = {A ∪ D, B ∪ D, C ∪ D, A ∪ B ∪ D, A ∪ C ∪ D, B ∪ C ∪ D, A ∪ B ∪ C ∪ D, (A ∩ B) ∪ D, (A ∩ B ∩ C) ∪ D, . . .}.

a) Using QBCR1: one gets
# Using RDF To Model The Structure And Process Of Systems∗

Marko A. Rodriguez, Jennifer H. Watkins, Johan Bollen
Los Alamos National Laboratory
{marko,jhw,jbollen}@lanl.gov

Carlos Gershenson
New England Complex Systems Institute
[email protected]

October 28, 2018

## Abstract

Many systems can be described in terms of networks of discrete elements and their various relationships to one another. A semantic network, or multi-relational network, is a directed labeled graph consisting of a heterogeneous set of entities connected by a heterogeneous set of relationships. Semantic networks serve as a promising general-purpose modeling substrate for complex systems. Various standardized formats and tools are now available to support practical, large-scale semantic network models. First, the Resource Description Framework (RDF) offers a standardized semantic network data model that can be further formalized by ontology modeling languages such as RDF Schema (RDFS) and the Web Ontology Language (OWL). Second, the recent introduction of highly performant triple-stores (i.e. semantic network databases) allows semantic network models on the order of 10^9 edges to be efficiently stored and manipulated. RDF and its related technologies are currently used extensively in the domains of computer science, digital library science, and the biological sciences. This article will provide an introduction to RDF/RDFS/OWL and an examination of its suitability for modeling discrete-element complex systems.

∗Rodriguez, M.A., Watkins, J.H., Bollen, J., Gershenson, C., "Using RDF to Model the Structure and Process of Systems", International Conference on Complex Systems, Boston, Massachusetts, October 2007.

arXiv:0709.1167v2 [cs.AI] 15 Oct 2007
## 1 Introduction

The figurehead of the Semantic Web initiative, Tim Berners-Lee, describes the Semantic Web as

> ... an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation [2].

However, Berners-Lee's definition assumes an application space that is specific to the "web" and to the interaction between humans and machines. More generally, the Semantic Web is actually a conglomeration of standards and technologies that can be used in various disparate application spaces. The Semantic Web is simply a highly-distributed, standardized semantic network (i.e. directed labeled network) data model and a set of tools to operate on that data model. With respect to the purpose of this article, the Semantic Web and its associated technologies can be leveraged to model and manipulate any system that can be represented as a heterogeneous set of discrete elements connected to one another by a set of heterogeneous relationships, whether those elements are web pages, automata, cells, people, cities, etc. This article will introduce complexity science researchers to a collection of standards designed for modeling the heterogeneous relationships that compose systems, and to technologies that support large-scale data sets on the order of 10^9 edges.

This article has the following outline. Section 2 presents a review of the Resource Description Framework (RDF). RDF is the standardized data model for representing a semantic network and is the foundational technology of the Semantic Web. Section 3 presents a review of both RDF Schema (RDFS) and the Web Ontology Language (OWL). RDFS and OWL are languages for abstractly defining the topological features of an RDF network and are analogous, in some ways, to the database schemas of relational databases (e.g. MySQL and Oracle). Section 4 presents a review of triple-store technology and its similarities to and differences from the relational database.
Finally, Section 5 presents the semantic network programming language Neno and the RDF virtual machine Fhat.

## 2 The Resource Description Framework

The Resource Description Framework (RDF) is a standardized data model for representing a semantic network [5]. RDF is not a syntax (i.e. data format). There exist various RDF syntaxes and, depending on the application space, one syntax may be more appropriate than another. An RDF-based semantic network is called an RDF network. An RDF network differs from the directed network of common knowledge because the edges in the network are qualified. For instance, in a directed network, an edge is represented as an ordered pair (i, j). This relationship states that i is related to j by some unspecified type of relationship. Because edges are not qualified, all edges have a homogeneous meaning in a directed network (e.g. a coauthorship network, a friendship network, a transportation network). On the other hand, in an RDF network, edges are qualified such that a relationship is represented by an ordered triple ⟨i, ω, j⟩. A triple
can be interpreted as a statement composed of a subject, a predicate, and an object. The subject i is related to the object j by the predicate ω. For instance, a scholarly network can be represented as an RDF network where an article cites an article, an author collaborates with an author, and an author is affiliated with an institution. Because edges are qualified, a heterogeneous set of elements can interact in multiple different ways within the same RDF network representation. It is the labeled edge that makes the Semantic Web, and the semantic network in general, an appropriate data model for systems that require this level of description.

In an RDF network, elements (i.e. vertices, nodes) are called resources, and resources are identified by Uniform Resource Identifiers (URIs) [1]. The purpose of the URI is to provide a standardized, globally-unique naming convention for identifying any type of resource, where a "resource" can be anything (e.g. physical, virtual, conceptual, etc.). The URI allows every vertex and edge label in a semantic network to be uniquely identified, such that RDF networks from disparate organizations can be unioned to form larger, and perhaps more complete, models. The Semantic Web can span institutional boundaries to support a world-scale model. The generic syntax for a URI is

<scheme name> : <hierarchical part> [ # <fragment> ]

Examples of entities that can be denoted by a URI include:

- a physical object (e.g. http://www.lanl.gov/people#marko)
- a physical component (e.g. http://www.lanl.gov/people#markos_arm)
- a virtual object (e.g. http://www.lanl.gov/index.html)
- an abstract class (e.g. http://www.lanl.gov/people#Human).

Even though each of the URIs presented above has an http scheme name, only one is a Uniform Resource Locator (URL) [9] of popular knowledge: namely, http://www.lanl.gov/index.html. The URL is a subclass of the URI. The URL is an address to a particular harvestable resource.
While URIs can point to harvestable resources, in general it is best to think of the URI as an address (i.e. pointer) to a particular concept. With respect to the previously presented URIs: Marko, his arm, and the class of humans are all concepts that are uniquely identified by some prescribed globally-unique URI. Along with URI resources, RDF supports the concept of a literal. Example literals include the integer 1, the string "marko", the float (or double) 1.034, the date 2007-11-30, etc. Refer to the XML Schema and Datatypes (XSD) specification for the complete classification of literals [3]. If U is the set of all URIs and L is the set of all literals, then an RDF network (or the Semantic Web in general) can be formally defined as¹

$$G \subseteq \langle U \times U \times (U \cup L) \rangle. \qquad (1)$$

To ease readability and creation, schemas and hierarchies are usually prefixed (i.e. abbreviated). For example, in the following two triples, lanl is the prefix for http://www.lanl.gov/people#:

¹Note that there also exists the concept of a blank node (i.e. anonymous node). Blank nodes are important for creating n-ary relationships in RDF networks. Please refer to the official RDF specification for more information on the role of blank nodes.
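The formal definition above, together with the prefix convention, can be sketched as a plain set of expanded triples; the helper names are illustrative and no RDF library is assumed.

```python
# Minimal sketch of an RDF network as a set of (subject, predicate, object)
# triples, G ⊆ U × U × (U ∪ L), with lanl: prefix expansion.
PREFIXES = {"lanl": "http://www.lanl.gov/people#"}

def expand(term):
    prefix, sep, local = term.partition(":")
    if sep and prefix in PREFIXES:
        return PREFIXES[prefix] + local
    return term  # already a full URI or a literal

def triple(s, p, o):
    return (expand(s), expand(p), expand(o))

G = {
    triple("lanl:marko", "lanl:worksWith", "lanl:jhw"),
    triple("lanl:marko", "lanl:hasBodyPart", "lanl:markos_arm"),
}

# Because edges are qualified, one can query by predicate:
marko = expand("lanl:marko")
assert (marko, expand("lanl:worksWith"), expand("lanl:jhw")) in G
assert {p for s, p, o in G if s == marko} == {expand("lanl:worksWith"),
                                              expand("lanl:hasBodyPart")}
```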
<lanl:marko, lanl:worksWith, lanl:jhw>
<lanl:marko, lanl:hasBodyPart, lanl:markos_arm>

These triples are diagrammed in Figure 1. The union of all RDF triples is the Semantic Web. <image> The benefit of RDF, and perhaps what is not generally appreciated, is that with RDF it is possible to represent anything in relation to anything by any type of qualified relationship. In many cases, this generality can lead to an uncontrolled soup of relationships; however, thanks to ontology languages such as RDFS and OWL, it is possible to formally constrain the topological features of an RDF network and thus subsets of the larger Semantic Web.

## 3 The RDF Schema and Web Ontology Language

The RDF Schema (RDFS) [4] and the Web Ontology Language (OWL) [6] are both RDF languages used to abstractly define resources in an RDF network. RDFS is simpler than OWL and is useful for creating class hierarchies and for specifying how instances of those classes can relate to one another. It provides three important constructs: rdfs:domain, rdfs:range, and rdfs:subClassOf2. While other constructs exist, these three tend to be the most frequently used when developing an RDFS ontology. Figure 2 provides an example of how these constructs are used. With RDFS (and OWL), there is a sharp distinction between the ontological and instance levels of an RDF network. The ontological level defines abstract classes (e.g. lanl:Human) and how they are related to one another. The instance level is tied to the ontological level using the rdf:type predicate3. For example, any lanl:Human can be the rdfs:domain (subject) of a lanl:worksFor triple that has a lanl:Institution as its rdfs:range (object). Note that lanl:Laboratory is an rdfs:subClassOf lanl:Institution. According to the property of subsumption in RDFS reasoning, subclasses inherit their parent class restrictions. Thus, lanl:marko can have a lanl:worksFor relationship with lanl:LANL.
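Equation (1) and the two triples above can be sketched in plain Python, modeling an RDF network as a set of (subject, predicate, object) tuples together with a SPARQL-like pattern match; the prefixed names are the ones used in this section, while the `match` helper is an illustrative stand-in for a real query engine.

```python
# An RDF network as a set of (subject, predicate, object) triples, per Equation (1).
G = {
    ("lanl:marko", "lanl:worksWith", "lanl:jhw"),
    ("lanl:marko", "lanl:hasBodyPart", "lanl:markos_arm"),
}

def match(graph, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard,
    analogous to a SPARQL variable."""
    return {t for t in graph if (s is None or t[0] == s)
                             and (p is None or t[1] == p)
                             and (o is None or t[2] == o)}

# Who works with lanl:jhw?  (analogous to: SELECT ?x WHERE { ?x lanl:worksWith lanl:jhw })
workers = {s for (s, p, o) in match(G, p="lanl:worksWith", o="lanl:jhw")}
```

Because triples are plain tuples in a set, unioning two RDF networks from disparate organizations is literally Python set union, which mirrors the world-scale merging argument made above.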
Note that RDFS is not intended to constrain relationships, but instead to infer new relationships based on restrictions. For instance, if lanl:marko lanl:worksFor some

2 rdfs is a prefix for http://www.w3.org/2000/01/rdf-schema#
3 rdf is a prefix for http://www.w3.org/1999/02/22-rdf-syntax-ns#
<image> other organization denoted X, it is inferred that X is an rdf:type of lanl:Institution. While this is not intuitive for those familiar with constraint-based database schemas, such inferencing of new relationships is the norm in the RDFS and OWL world. Beyond the previously presented RDFS constructs, OWL has one primary construct that is used repeatedly: owl:Restriction4. Example owl:Restrictions include, but are not limited to, owl:maxCardinality, owl:minCardinality, owl:cardinality, owl:hasValue, etc. With OWL, it is possible to state that a lanl:Human can work for no more than 1 lanl:Institution. In such cases, the owl:maxCardinality restriction would be specified on the lanl:worksFor predicate. If there exist the triples

<lanl:marko, lanl:worksFor, lanl:LANL>
<lanl:marko, lanl:worksFor, lanl:LosAlamos>,

an OWL reasoner would assume that lanl:LANL and lanl:LosAlamos are the same entity. This reasoning is due to the cardinality restriction on the lanl:worksFor predicate. There are two popular tools for creating RDFS and OWL ontologies: Protégé5 (open source) and TopBraid Composer6 (proprietary).

## 4 The Triple-Store

There are many ways in which RDF networks are stored and distributed. In the simplest situation, an RDF network is encoded in one of the many RDF syntaxes and made available through a web server (i.e. as a web document). In other situations, where RDF networks are large, a triple-store is used. A triple-store is to an RDF network what a relational database is to a data table. Other names for triple-stores include semantic repository, RDF store, graph store, and RDF database. There are many different
proprietary and open-source triple-store providers. The most popular proprietary solutions include AllegroGraph7, Oracle RDF Spatial8, and the OWLIM semantic repository9. The most popular open-source solution is Sesame10. The primary interface to a triple-store is SPARQL [7]. SPARQL is analogous to the relational database query language SQL; however, SPARQL is perhaps more similar to the query model employed by logic languages such as Prolog. The example query

SELECT ?x WHERE { ?x lanl:worksWith lanl:jhw . }

returns all resources that work with lanl:jhw. The variable ?x is a binding variable that must hold true for the duration of the query. A more complicated example is

SELECT ?x ?y WHERE {
  ?x lanl:worksWith ?y .
  ?x rdf:type lanl:Human .
  ?y rdf:type lanl:Human .
  ?y lanl:worksFor lanl:LANL .
  ?x lanl:worksFor necsi:NECSI . }

The above query returns all collaborators such that one collaborator works for the Los Alamos National Laboratory (LANL) and the other collaborator works for the New England Complex Systems Institute (NECSI). An example return would be

-------------------------------
|     ?x       |      ?y      |
-------------------------------
| lanl:marko   | necsi:carlos |
| lanl:jhw     | necsi:carlos |
| lanl:jbollen | necsi:carlos |
-------------------------------

The previous query would require a complex joining of tables in the relational database model to yield the same information. Unlike the relational database index, the triple-store index is optimized for such semantic network queries (i.e. multi-relational queries). The triple-store is a useful tool for storing, querying, and manipulating an RDF network.

## 5 A Semantic Network Programming Language and an RDF Virtual Machine

Neno/Fhat is a semantic network programming language and RDF virtual machine (RVM) specification [8]. Neno is an object-oriented language similar to C++ and Java. However, instead of Neno code compiling down to machine code or Java byte-code, Neno compiles to Fhat triple-code.
An example Neno class is

7 AllegroGraph available at: http://www.franz.com/products/allegrograph/
8 Oracle RDF Spatial available at: http://www.oracle.com/technology/tech/semantic technologies/
9 OWLIM available at: http://www.ontotext.com/owlim/
10 Sesame available at: http://www.openrdf.org/
owl:Thing lanl:Human {
  lanl:Institution lanl:worksFor[0..1];
  xsd:nil lanl:quit(lanl:Institution x) {
    this.worksFor =- x;
  }
}

The above code defines the class lanl:Human. Any instance of lanl:Human can have either 0 or 1 lanl:worksFor relationships (i.e. an owl:maxCardinality of 1). Furthermore, when the method lanl:quit is executed, it will destroy any lanl:worksFor triple from that lanl:Human instance to the provided lanl:Institution x. Fhat is a virtual machine encoded in an RDF network that processes Fhat triple-code. This means that a Fhat's program counter, operand stack, variable frames, etc. are RDF sub-networks. Figure 3 denotes a Fhat processor (A) processing Neno triple-code (B) and other RDF data (C). <image> With Neno it is possible to represent both the system model and its algorithmic processes in a single RDF network. Furthermore, with Fhat, it is possible to include the virtual machine that executes those algorithms in the same substrate. Given that the Semantic Web is a distributed data structure, where sub-networks of the larger Semantic Web RDF network exist in different triple-stores or RDF documents around the world, it is possible to leverage Neno/Fhat to allow for distributed computing across these various data sets. If a particular model exists at domain X and a researcher located at domain Y needs to utilize that model for a computation, it is not necessary for the researcher at domain Y to download the data set from X. Instead, a Fhat processor and its associated Neno code can move to domain X to utilize the data and return with results. In Neno/Fhat, the data doesn't move to the process; the process moves to the data.
## 6 Conclusion

This article presented a review of the standards and technologies associated with the Semantic Web that can be used for complex systems modeling. The World Wide Web provides a common, standardized substrate whereby researchers can easily publish and distribute documents (e.g. web pages, scholarly articles, etc.). Now, with the Semantic Web, researchers can easily publish and distribute models and processes (e.g. data sets, algorithms, computing machines, etc.).

## References

[1] Tim Berners-Lee, R. Fielding, and L. Masinter. Uniform Resource Identifier (URI): Generic Syntax, January 2005.

[2] Tim Berners-Lee, James A. Hendler, and Ora Lassila. The Semantic Web. *Scientific American*, pages 34–43, May 2001.

[3] Paul V. Biron and Ashok Malhotra. XML Schema Part 2: Datatypes Second Edition. Technical report, World Wide Web Consortium, 2004.

[4] Dan Brickley and R. V. Guha. RDF Vocabulary Description Language 1.0: RDF Schema. Technical report, World Wide Web Consortium, 2004.

[5] Frank Manola and Eric Miller. RDF Primer: W3C Recommendation, February 2004.

[6] Deborah L. McGuinness and Frank van Harmelen. OWL Web Ontology Language Overview, February 2004.

[7] Eric Prud'hommeaux and Andy Seaborne. SPARQL Query Language for RDF. Technical report, World Wide Web Consortium, October 2004.

[8] Marko A. Rodriguez. General-purpose computing on a semantic network substrate. Technical Report LA-UR-07-2885, Los Alamos National Laboratory, 2007.

[9] W3C/IETF. URIs, URLs, and URNs: Clarifications and Recommendations 1.0, September 2001.
|         | A      | B      | A ∪ B  |
|---------|--------|--------|--------|
| qm1(·)  | L1(NB) | L2(PS) | L3(NS) |
| qm2(·)  | L4(NM) | L2(NS) | L0(O)  |

Table 2: qm1(·), qm2(·) with qualitative enriched labels

## 6.2 Qualitative Masses With Qualitative Enriched Labels

Using qualitative supporting degrees (i.e. enriched labels of Type 2) taking their values in the linguistic set X = {NB, NM, NS, O, PS, PM, PB}, with NB ≺ NM ≺ NS ≺ O ≺ PS ≺ PM ≺ PB, we get a similar result for this example. So, let's consider a frame Θ = {A, B} with Shafer's model and qm1(·) and qm2(·) chosen as in Table 2. The qualitative conjunctive and PCR5 fusion rules are obtained with derivations identical to the previous ones, since NB ≺ NM ≺ NS ≺ O ≺ PS ≺ PM ≺ PB and we associated NB = 0.3 or less, NM = [0.5, 0.6], NS = [0.7, 0.8], O = 1 and PS = 1.1. The minimum operator on X (qualitative degrees) works similarly as on R+ (quantitative degrees). Thus, one finally gets the results given in Table 3.

|            | A      | B      | A ∪ B  | A ∩ B  |
|------------|--------|--------|--------|--------|
| qm12(·)    | L3(NB) | L2(NS) | L0(NS) | L1(NB) |
| qmPCR5(·)  | L4(NB) | L2(NB) | L0(NS) | L0(O)  |

Table 3: Results obtained with the qualitative conjunctive and PCR5 fusion rules

## 7 Conclusion

With the recent development of qualitative methods for reasoning under uncertainty in Artificial Intelligence, more and more experts and scholars have taken a great interest in qualitative information fusion, especially those working on modern multi-source systems for defense, robot navigation, mapping, localization, path planning and so on. In this paper, we have proposed two possible enrichments (quantitative and/or qualitative) of linguistic labels and a simple and direct extension of the q-operators developed in the DSmT framework. We have also shown how to fuse qualitative-enriched belief assignments, which can be expressed in natural language by human experts.
Two illustrating examples have been presented in detail to explain how our qualitative-enriched operators (qe-operators) and the qualitative PCR5 rule of combination work. Some research on the application of qe-operators (with quantitative or qualitative supporting degrees) in robotics is in progress and will be presented in a forthcoming publication.

## Acknowledgment

This work is partially supported by the National Nature Science Foundation of China (No. 60675028).

## References

[1] S. Badaloni and M. Giacomin, "The algebra IAfuz: a framework for qualitative fuzzy temporal reasoning," Artificial Intelligence, Vol. 170, No. 10, pp. 872–908, July 2006.

[2] G. Brewka, S. Benferhat and D. L. Berre, "Qualitative choice logic," Artificial Intelligence, Vol. 157, No. 1-2, pp. 203–237, August 2004.

[3] D. Dubois and H. Prade, "Representation and combination of uncertainty with belief functions and possibility measures," Computational Intelligence, Vol. 4, pp. 244–264, 1988.

7 The confidence level/degree in the labels does not matter in the definition of quasi-normalization.
[4] M. Duckham, J. Lingham, K. Mason and M. Worboys, "Qualitative reasoning about consistency in geographic information," Information Sciences, Vol. 176, No. 6, pp. 601–627, 2006.

[5] X. Li, X. Huang, J. Dezert and F. Smarandache, "Enrichment of Qualitative Belief for Reasoning under Uncertainty," Proceedings of the International Conference on Information Fusion, Fusion 2007, Québec, Canada, 9-12 July 2007.

[6] R. Moratz and M. Ragni, "Qualitative spatial reasoning about relative point position," Journal of Visual Languages and Computing, in press, available online since January 25th, 2007.

[7] S. Parsons and E. Mamdani, "Qualitative Dempster-Shafer theory," Proc. of the 3rd EMACS Int. Workshop on Qualitative Reasoning and Decision Technologies, Barcelona, Spain, 1993.

[8] S. Parsons, "Some qualitative approaches to applying Dempster-Shafer theory," Information and Decision Technologies, Vol. 19, pp. 321–337, 1994.

[9] S. Parsons, "A proof theoretic approach to qualitative probabilistic reasoning," Int. J. of Approx. Reasoning, Vol. 19, No. 3-4, pp. 265–297, 1998.

[10] G. Polya, Patterns of Plausible Inference, Princeton University Press, Princeton, NJ, 1954.

[11] P. Ranganathan, J. B. Hayet, M. Devy et al., "Topological navigation and qualitative localization for indoor environment using multi-sensory perception," Robotics and Autonomous Systems, Vol. 49, No. 1-2, pp. 25–42, 2004.

[12] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ, 1976.

[13] F. Smarandache and J. Dezert (Editors), Applications and Advances of DSmT for Information Fusion (Collected Works), American Research Press, Rehoboth, 2004. http://www.gallup.unm.edu/~smarandache/DSmT-book1.pdf

[14] F. Smarandache and J.
Dezert (Editors), Applications and Advances of DSmT for Information Fusion (Collected Works), Vol. 2, American Research Press, Rehoboth, 2006. http://www.gallup.unm.edu/~smarandache/DSmT-book2.pdf

[15] F. Smarandache and J. Dezert, "Qualitative Belief Conditioning Rules (QBCR)," Proceedings of the International Conference on Information Fusion, Fusion 2007, Québec, Canada, 9-12 July 2007.

[16] T. Wagner, U. Visser and O. Herzog, "Egocentric qualitative spatial knowledge representation for physical robots," Robotics and Autonomous Systems, Vol. 49, No. 1-2, pp. 25–42, 2004.

[17] M. P. Wellman, "Some varieties of qualitative probability," Proc. of the 5th Int. Conf. on Information Processing and the Management of Uncertainty (IPMU), Paris, July 1994.

[18] S. K. M. Wong and P. Lingras, "Representation of qualitative user preference by quantitative belief functions," IEEE Trans. on Knowledge and Data Engineering, Vol. 6, No. 1, pp. 72–78, 1994.

[19] L. Zadeh, "A theory of approximate reasoning," Machine Intelligence, Vol. 9, pp. 149–194, 1979.

[20] L. Zadeh, "Fuzzy logic = Computing with words," IEEE Transactions on Fuzzy Systems, Vol. 4, No. 2, pp. 103–111, 1996.
a) ∅, θ1, θ2, · · · , θn ∈ D^Θ.
b) If A, B ∈ D^Θ, then A ∩ B ∈ D^Θ and A ∪ B ∈ D^Θ.
c) No other elements belong to D^Θ, except those obtained by using rules a) or b).

A (quantitative) basic belief assignment (bba) expressing the belief committed to the elements of D^Θ by a given source/body of evidence S is a mapping function m(·) : D^Θ → [0, 1] such that:

$$m(\varnothing)=0\qquad\text{and}\qquad\sum_{A\in D^{\Theta}}m(A)=1\tag{1}$$

Elements A ∈ D^Θ having m(A) > 0 are called focal elements of the bba m(·). The general belief and plausibility functions are defined in almost the same manner as within the DST [12], i.e.

$$Bel(A)=\sum_{B\in D^{\Theta},\,B\subseteq A}m(B)\tag{2}$$

$$Pl(A)=\sum_{B\in D^{\Theta},\,B\cap A\neq\emptyset}m(B)\tag{3}$$

The main concern in information fusion is the combination of sources of evidence and the efficient management of conflicting and uncertain information. DSmT offers several fusion rules, denoted by the generic symbol ⊕, for combining basic belief assignments. The simplest one, well adapted when working with the free-DSm1 model Mf(Θ) and called DSmC (standing for DSm Classical rule), is nothing but the conjunctive fusion operator of bba's defined over the hyper-power set D^Θ. Mathematically, DSmC for the fusion of k ≥ 2 sources of evidence is defined by m_{Mf(Θ)}(∅) = 0 and ∀A ≠ ∅ ∈ D^Θ,

$$m_{\mathcal{M}^{f}(\Theta)}(A)\triangleq[m_{1}\oplus\cdots\oplus m_{k}](A)=\sum_{\substack{X_{1},\cdots,X_{k}\in D^{\Theta}\\X_{1}\cap\cdots\cap X_{k}=A}}\;\prod_{s=1}^{k}m_{s}(X_{s})\tag{4}$$

When working with hybrid models and/or Shafer's model M0(Θ), other rules of combination must be used to take into account the integrity constraints of the model (i.e. some exclusivity constraints, and sometimes even non-existential constraints in dynamical fusion problems where the model and the frame can change with time).
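Under Shafer's model (exclusive singletons, so plain set intersection plays the role of X ∩ Y), the conjunctive rule (4) for two sources reduces to a short sketch; the frame and mass values below are illustrative, not taken from the paper.

```python
from itertools import product

def dsmc(m1, m2):
    """Conjunctive combination sketch of rule (4) for k = 2 sources under
    Shafer's model: m12(A) = sum over X ∩ Y = A of m1(X) * m2(Y)."""
    m12 = {}
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        a = x & y  # frozenset intersection stands in for X ∩ Y
        m12[a] = m12.get(a, 0.0) + mx * my
    return m12

A, B = frozenset({"a"}), frozenset({"b"})
m1 = {A: 0.6, A | B: 0.4}
m2 = {B: 0.3, A | B: 0.7}
m12 = dsmc(m1, m2)
# m12[frozenset()] is the conflicting mass m1(A)*m2(B), handled by DSmH/PCR5 below
```

Note that the total mass is conserved; the mass landing on the empty set is exactly the partial conflict that the redistribution rules discussed next must reassign.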
For managing the conflicts between sources of evidence efficiently, DSmT proposes mainly two alternatives to the classical Dempster's rule of combination [12] for working with (possibly) highly conflicting sources. The first rule, proposed in [13], was the DSm hybrid rule of combination (DSmH), which offers a prudent/pessimistic way of redistributing partial conflicting mass. The basic idea of DSmH is to redistribute the partial conflicting mass to the corresponding partial ignorance. For example, let's consider only two sources with two bba's m1(·) and m2(·): if A ∩ B = ∅ is an integrity constraint of the model of Θ and if m1(A)m2(B) > 0, then m1(A)m2(B) will be transferred to A ∪ B through DSmH. The general formula for DSmH is quite complicated; it can be found in [13] and is not reported here due to space limitations. DSmH is actually a natural extension of Dubois & Prade's rule of combination [3], which also allows one to work with dynamical changes of the frame and its model. A much more precise fusion rule, called the Proportional Conflict Redistribution rule no. 5 (PCR5), has been developed recently in [14] for transferring all partial conflicting masses more efficiently. Basically, the idea of PCR5 is to transfer the conflicting mass only to the elements involved in the conflict, and proportionally to their individual masses. For example, let's assume as before only two sources with bba's m1(·) and m2(·), A ∩ B = ∅ for the model of Θ, and m1(A) = 0.6 and m2(B) = 0.3.
Then with PCR5, the partial conflicting mass m1(A)m2(B) = 0.6 · 0.3 = 0.18 is redistributed to A and B only, with the proportions xA = 0.12 and xB = 0.06 respectively, because the proportionalization requires

$${\frac{x_{A}}{m_{1}(A)}}={\frac{x_{B}}{m_{2}(B)}}={\frac{m_{1}(A)m_{2}(B)}{m_{1}(A)+m_{2}(B)}}={\frac{0.18}{0.9}}=0.2$$

The general PCR5 fusion formula for the combination of k ≥ 2 sources of evidence can be found in [14].

1 We call it free because no integrity constraint is introduced in such a model.
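The proportional redistribution above is simple arithmetic; a minimal sketch of this single PCR5 step, using the worked example's numbers:

```python
# PCR5 redistribution of the partial conflict m1(A)*m2(B), per the worked example.
m1_A, m2_B = 0.6, 0.3
conflict = m1_A * m2_B              # 0.18, the partial conflicting mass
ratio = conflict / (m1_A + m2_B)    # 0.18 / 0.9 = 0.2, the common proportion
x_A = m1_A * ratio                  # mass returned to A: 0.12
x_B = m2_B * ratio                  # mass returned to B: 0.06
```

The two shares sum back to the conflict (0.12 + 0.06 = 0.18), so no mass is lost in the transfer.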
## 3 Extension of DSmT for Qualitative Beliefs

In order to compute with words (i.e. linguistic labels) and qualitative belief assignments instead of quantitative belief assignments2 over G^Θ, Smarandache and Dezert have defined in [14] a qualitative basic belief assignment qm(·) as a mapping function from G^Θ into a set of linguistic labels L = {L0, L̃, Ln+1}, where L̃ = {L1, · · · , Ln} is a finite set of linguistic labels and where n ≥ 2 is an integer. For example, L1 can take the linguistic value "poor", L2 the linguistic value "good", etc. L̃ is endowed with a total order relationship ≺, so that L1 ≺ L2 ≺ · · · ≺ Ln. To work on a true closed linguistic set L under the linguistic addition and multiplication operators, Smarandache and Dezert naturally extended L̃ with two extreme values L0 = Lmin and Ln+1 = Lmax, where L0 corresponds to the minimal qualitative value and Ln+1 corresponds to the maximal qualitative value, in such a way that L0 ≺ L1 ≺ L2 ≺ · · · ≺ Ln ≺ Ln+1, where ≺ means inferior to, or less (in quality) than, or smaller than, etc. The labels L0, L1, L2, . . . , Ln, Ln+1 are said to be linguistically equidistant if Li+1 − Li = Li − Li−1 for all i = 1, 2, . . . , n, where the definition of the subtraction of labels is given in the sequel by (11). In the sequel, the Li ∈ L are assumed to be linguistically equidistant3 labels, such that we can make an isomorphism between L = {L0, L1, L2, . . . , Ln, Ln+1} and {0, 1/(n + 1), 2/(n + 1), . . . , n/(n + 1), 1}, defined as Li = i/(n + 1) for all i = 0, 1, 2, . . . , n, n + 1. Using this isomorphism, and making an analogy to the classical operations on real numbers, we are able to define the following qualitative operators (or q-operators for short):

- q-addition of linguistic labels:

$$L_{i}+L_{j}=\frac{i}{n+1}+\frac{j}{n+1}=\frac{i+j}{n+1}=L_{i+j}\tag{5}$$

but of course with the restriction that i + j < n + 1; in the case where i + j ≥ n + 1 we restrict Li+j = Ln+1.
So this is the justification of the qualitative addition we have defined.

- q-multiplication of linguistic labels4:

a) Since Li × Lj = (i/(n+1)) · (j/(n+1)) = ((i · j)/(n + 1))/(n + 1), the best approximation is L_{[(i·j)/(n+1)]}, where [x] means the closest integer to x, i.e.

$$L_{i}\times L_{j}=L_{[(i\cdot j)/(n+1)]}\tag{6}$$

For example, if we have L0, L1, L2, L3, L4, L5, corresponding respectively to 0, 0.2, 0.4, 0.6, 0.8, 1, then L2 · L3 = L[(2·3)/5] = L[6/5] = L[1.2] = L1; using numbers: 0.4 · 0.6 = 0.24 ≈ 0.2 = L1. Also L3 · L3 = L[(3·3)/5] = L[9/5] = L[1.8] = L2; using numbers: 0.6 · 0.6 = 0.36 ≈ 0.4 = L2.

b) A simpler, but less accurate, approximation of the multiplication (as proposed in [14]) is

$$L_{i}\times L_{j}=L_{\min\{i,j\}}\tag{7}$$

- Scalar multiplication of a linguistic label: Let a be a real number. We define the multiplication of a linguistic label by a scalar as follows:

$$a\cdot L_{i}={\frac{a\cdot i}{n+1}}\approx{\begin{cases}L_{[a\cdot i]}&{\text{if }}[a\cdot i]\geq0,\\ L_{-[a\cdot i]}&{\text{otherwise.}}\end{cases}}\tag{8}$$

- Division of linguistic labels:
a) Division as an internal operator: / : L × L → L. Let j ≠ 0, then

$$L_{i}/L_{j}=\begin{cases}L_{[(i/j)\cdot(n+1)]}&\text{if }[(i/j)\cdot(n+1)]<n+1,\\ L_{n+1}&\text{otherwise.}\end{cases}\tag{9}$$

The first case in (9) is well justified because, when [(i/j) · (n + 1)] < n + 1, one has

$$L_{i}/L_{j}={\frac{i/(n+1)}{j/(n+1)}}={\frac{(i/j)\cdot(n+1)}{n+1}}=L_{[(i/j)\cdot(n+1)]}$$

For example, if we have L0, L1, L2, L3, L4, L5, corresponding respectively to 0, 0.2, 0.4, 0.6, 0.8, 1, then: L1/L3 = L[(1/3)·5] = L[5/3] = L[1.66] ≈ L2, and L4/L2 = L[(4/2)·5] = L[2·5] = Lmax = L5 since 10 > 5.

b) Division as an external operator: ⊘ : L × L → R+. Let j ≠ 0. Since Li ⊘ Lj = (i/(n+1))/(j/(n+1)) = i/j, we simply define

$$L_{i}\oslash L_{j}=i/j\tag{10}$$

Justification of b): when we divide, say, L4/L1 in the above example, we get 0.8/0.2 = 4, but no label corresponds to the number 4, which is not even in the interval [0, 1]; hence, for the division as an internal operator, we need a label as the response, so in our example we approximate it to Lmax = L5, which is a very rough approximation. So, depending on the fusion combination rules, it might be better to consider the qualitative division as an external operator, which gives the exact result.

- q-subtraction of linguistic labels: − : L × L → {L, −L}:

$$L_{i}-L_{j}=\begin{cases}L_{i-j}&\text{if}\quad i\geq j,\\ -L_{j-i}&\text{if}\quad i<j,\end{cases}\tag{11}$$

where −L = {−L1, −L2, · · · , −Ln, −Ln+1}. The q-subtraction above is well justified since, when i ≥ j, one has Li − Lj = i/(n+1) − j/(n+1) = (i−j)/(n+1).

The above qualitative operators are logical, justified by the isomorphism between the set of linguistically equidistant labels and a set of equidistant numbers in the interval [0, 1].
These qualitative operators are built exactly on the track of their corresponding numerical operators, so they are more mathematical than the ad-hoc definitions of qualitative operators proposed so far in the literature. They play a role similar to that of the PCR5 numerical combination rule with respect to the other numerical fusion rules based on the conjunctive rule. When moving to the enriched-label qualitative operators, however, the accuracy decreases.

Remark about doing multi-operations on labels: When working with labels, no matter how many operations we have, the best (most accurate) result is obtained if we do only one approximation, and that one should be done at the very end. For example, if we have to compute terms like LiLjLk/(Lp + Lq) as for qPCR5 (see the example in section 6), we compute all operations as defined above, but without any approximations (i.e. without calculating the integer part of indexes, and without replacing by n + 1 if an intermediate result is bigger than n + 1), so:

$$\frac{L_{i}L_{j}L_{k}}{L_{p}+L_{q}}=\frac{L_{(ijk)/(n+1)^{2}}}{L_{p+q}}=L_{\frac{(ijk)/(n+1)^{2}}{p+q}\cdot(n+1)}=L_{\frac{ijk}{(n+1)(p+q)}}\tag{12}$$

Only when all the work is done do we compute the integer part of the index, i.e. [ijk/((n + 1)(p + q))], or replace it by n + 1 if the final result is bigger than n + 1. Therefore, the term LiLjLk/(Lp + Lq) takes the linguistic value Ln+1 whenever [ijk/((n + 1)(p + q))] > n + 1. This method also ensures a unique result, and it is mathematically closer to the result that would be obtained when working with the corresponding numerical masses. Otherwise, if one does approximations either at the beginning, or after each operation, or in the middle of the calculations, the inaccuracy propagates (becomes bigger and bigger) and we obtain different results, depending on where the approximations were done.
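Via the isomorphism Li = i/(n+1), the q-operators (5), (6) and (9) reduce to integer arithmetic on label indices. A minimal sketch, reproducing the worked examples with n = 4 (labels L0 . . . L5); `closest_int` is an illustrative helper implementing the [x] notation with ties rounded up:

```python
def closest_int(x):
    # [x] = closest integer to x (ties rounded up)
    return int(x + 0.5)

def q_add(i, j, n):
    # (5): L_i + L_j = L_{i+j}, capped at L_{n+1}
    return min(i + j, n + 1)

def q_mul(i, j, n):
    # (6): L_i x L_j = L_{[(i*j)/(n+1)]}
    return min(closest_int(i * j / (n + 1)), n + 1)

def q_div(i, j, n):
    # (9): internal division L_{[(i/j)*(n+1)]}, capped at L_{n+1}
    return min(closest_int((i / j) * (n + 1)), n + 1)

n = 4  # labels L0..L5 correspond to 0, 0.2, 0.4, 0.6, 0.8, 1
```

With these definitions, q_mul(2, 3, 4) reproduces L2 · L3 = L1, q_mul(3, 3, 4) gives L2, q_div(1, 3, 4) gives L2, and q_div(4, 2, 4) is capped at Lmax = L5, exactly as in the worked examples above.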
## 4 Quasi-Normalization of qm(·)

There is no way to define a normalized qm(·), but a qualitative quasi-normalization [14, 15] is nevertheless possible when considering equidistant linguistic labels, because in that case qm(Xi) = Li is equivalent to a quantitative mass m(Xi) = i/(n + 1), which is normalized if

$$\sum_{X\in D^{\Theta}}m(X)=\sum_{k}i_{k}/(n+1)=1$$

and this is equivalent to

$$\sum_{X\in D^{\Theta}}qm(X)=\sum_{k}L_{i_{k}}=L_{n+1}$$

In this case, we have a qualitative normalization, similar to the (classical) numerical normalization. But if the labels L0, L1, L2, . . ., Ln, Ln+1 from the set L are not equidistant, so that the interval [0, 1] cannot be split into equal parts according to the distribution of the labels, then it makes sense to consider a qualitative quasi-normalization, i.e. an approximation of the (classical) numerical normalization for the qualitative masses in the same way:

$$\sum_{X\in D^{\Theta}}qm(X)=L_{n+1}$$

In general, if we don't know whether the labels are equidistant or not, we say that a qualitative mass is quasi-normalized when the above summation holds. In the sequel, for simplicity, one assumes to work with quasi-normalized qualitative basic belief assignments.

From these very simple qualitative operators, it is thus possible to extend the DSmH fusion rule directly for combining qualitative basic belief assignments, by replacing the classical addition and multiplication operators on numbers with those for linguistic labels in the DSmH formula. In a similar way, it is also possible to extend the PCR5 formula, as shown with detailed examples in [14] and in section 6 of this paper. In the next section, we propose new qualitative-enriched (qe) operators for dealing with enriched linguistic labels which mix the linguistic value with either a quantitative/numerical supporting degree or a qualitative supporting degree.
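With equidistant labels, the quasi-normalization condition is just a sum over label indices; a one-line check (the qm below is an illustrative assignment, not taken from the paper):

```python
def quasi_normalized(qm, n):
    # qm maps focal elements to label indices i (standing for L_i);
    # quasi-normalized iff the indices sum to n + 1, i.e. sum_k L_{i_k} = L_{n+1}
    return sum(qm.values()) == n + 1

qm = {"A": 1, "B": 2, "A u B": 2}  # L1 + L2 + L2 with n = 4 -> 1 + 2 + 2 = 5 = n + 1
```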
The direct qualitative discounting (or reinforcement) is motivated by the fact that, in general, human experts provide qualitative values more easily than quantitative values when analyzing complex situations. In this paper, both quantitative enrichments and qualitative enrichments of linguistic labels are considered and unified through the same general qe-operators. The quantitative enrichment is based directly on the percentage of discounting (or reinforcement) of any linguistic label; this is what we call Type 1 enriched labels. The qualitative enrichment comes from the idea of direct qualitative discounting (or reinforcement) and constitutes Type 2 enriched labels.

## 5 qe-Operators

We propose to improve the previous q-operators for dealing with enriched qualitative beliefs provided by human experts. We call these operators the qe-operators. The basic idea is to use "enriched" linguistic labels denoted Li(ǫi), where ǫi can be either a numerical supporting degree in [0, ∞) or a qualitative supporting degree taking its value in a given (ordered) set X of linguistic labels. Li(ǫi) is called the qualitative enrichment5 of Li. When ǫi ∈ [0, ∞), Li(ǫi) is called an enriched label of Type 1, whereas when ǫi ∈ X, Li(ǫi) is called an enriched label of Type 2. The (quantitative or qualitative) quantity ǫi characterizes the weight of reinforcing or discounting expressed by the source when using label Li for committing its qualitative belief to a given proposition A ∈ G^Θ. It can be interpreted as a second-order type of linguistic label which includes both the linguistic value itself and the associated degree of confidence expressed by the source. The values of ǫi express the expert's attitude (reinforcement, neutral, or discounting) toward a certain proposition when using a given linguistic label for expressing its qualitative belief assignment.
For example, with enriched labels of Type 1, if the label L1 ≜ L1(1) represents the linguistic variable Good, then L1(ǫ1) represents either the reinforced or the discounted L1 value, depending on the value taken by ǫ1. In this example, ǫ1 represents the (numerical) supporting degree of the linguistic value L1 = Good. If ǫ1 = 1.2, then we say that the linguistic value L1 = Good has been reinforced by 20% with respect to its nominal/neutral supporting degree. If ǫ1 = 0.4, then it means that the linguistic value L1 is discounted by 60% by the source.

With enriched labels of Type 2, one chooses for example X = {NB, NM, NS, O, PS, PM, PB}, where the elements of X have the following meaning: NB ≜ "negative big", NM ≜ "negative medium", NS ≜ "negative small", O ≜ "neutral" (i.e. no discounting and no reinforcement), PS ≜ "positive small", PM ≜ "positive medium" and PB ≜ "positive big". Then, if the label L1 ≜ L1(O) represents the linguistic variable Good, then L1(ǫ1), ǫ1 ∈ X, represents either the qualitatively reinforced or discounted L1 value, depending on the value taken by ǫ1 in X. ǫ1 = O means a neutral qualitative supporting degree, corresponding to ǫ1 = 1 for an enriched label of Type 1. Here ǫ1 represents the qualitative supporting degree of the linguistic value L1 = Good. If ǫ1 = PS, then we say that the linguistic value L1 = Good has been reinforced a little bit positively with respect to its nominal/neutral supporting degree. If ǫ1 = NB, then it means that the linguistic value L1 is strongly discounted by the source.

We denote by L̃(ǫ) any given set of (classical/pure) linguistic labels L̃ = {L1, L2, . . . , Ln} endowed with the supporting degree property (i.e. discounting, neutral and/or reinforcement). In other words,

L̃(ǫ) = {L1(ǫ1), L2(ǫ2), . . . , Ln(ǫn)}

represents a given set of enriched linguistic labels6. We assume the same order relationship ≺ on L̃(ǫ) as the one defined on L̃.
Moreover, we extend L˜(ǫ) with two extreme (minimal and maximal) enriched qualitative values L0(ǫ) and Ln+1(ǫ) in order to be closed under the qe-operators on L(ǫ) ≜ {L0(ǫ), L˜(ǫ), Ln+1(ǫ)}. For working with enriched labels (and hence with qualitatively enriched basic belief assignments), it is necessary to extend the previous q-operators in a consistent way. This is the purpose of our new qe-operators.

An enriched label Li(ǫi) means that the source has discounted (or reinforced) the label Li by a quantitative or qualitative factor ǫi; similarly for Lj(ǫj). So we use the q-operators for the labels Li, Lj, but for the confidences we propose three possible versions. If the confidence in Li is ǫi and the confidence in Lj is ǫj, then the confidence in combining Li with Lj can be:

a) the average, i.e. (ǫi + ǫj)/2;

b) min{ǫi, ǫj};

c) a confidence interval, as in statistics: [ǫmin, ǫmax], where ǫmin ≜ min{ǫi, ǫj} and ǫmax ≜ max{ǫi, ǫj}; if ǫi = ǫj, the confidence interval reduces to a single point, ǫi.

In the sequel, we denote by "c" any of the above resulting confidences of combined enriched labels. All these versions coincide when ǫi = ǫj = 1 (for Type 1) or when ǫi = ǫj = O (for Type 2), i.e. when there is neither reinforcement nor discounting of the linguistic label. However, the confidence average operator (case a) is not associative, so in many cases it is inconvenient to use. The best among these three, being more prudent and easier to use, is the min operator. The confidence interval operator provides both a lower and an upper confidence level; optimistically, one may take at the end the midpoint of this confidence interval as a confidence level.

The new extended operators allowing one to work with enriched labels of Type 1 or Type 2 are then defined as follows:

- qe-addition of enriched labels
$$L_{i}(\epsilon_{i})+L_{j}(\epsilon_{j})=\begin{cases}L_{n+1}(c)&\text{if }i+j\geq n+1,\\ L_{i+j}(c)&\text{otherwise.}\end{cases}\tag{13}$$

- qe-multiplication of enriched labels

a) As a direct extension of (6), the multiplication of enriched labels is defined by

$$L_{i}(\epsilon_{i})\times L_{j}(\epsilon_{j})=L_{[(i\cdot j)/(n+1)]}(c)\tag{14}$$

b) As another multiplication of labels, easier but less exact:

$$L_{i}(\epsilon_{i})\times L_{j}(\epsilon_{j})=L_{\min\{i,j\}}(c)\tag{15}$$

- Scalar multiplication of an enriched label

Let a be a real number. We define the multiplication of an enriched linguistic label by a scalar as follows:

$$a\cdot L_{i}(\epsilon_{i})\approx\begin{cases}L_{[a\cdot i]}(\epsilon_{i})&\text{if }[a\cdot i]\geq 0,\\ L_{-[a\cdot i]}(\epsilon_{i})&\text{otherwise.}\end{cases}\tag{16}$$

- qe-division of enriched labels

a) Division as an internal operator: let j ≠ 0, then

$${\frac{L_{i}(\epsilon_{i})}{L_{j}(\epsilon_{j})}}=\begin{cases}L_{n+1}(c)&\text{if }[(i/j)\cdot(n+1)]\geq n+1,\\ L_{[(i/j)\cdot(n+1)]}(c)&\text{otherwise.}\end{cases}\tag{17}$$

b) Division as an external operator: let j ≠ 0, then we can also consider the division of enriched labels as an external operator:

$$L_{i}(\epsilon_{i})\oslash L_{j}(\epsilon_{j})=(i/j)_{\text{supp}(c)}\tag{18}$$

The notation (i/j)supp(c) means that the numerical value (i/j) is supported with the degree c.
- qe-subtraction of enriched labels

$$L_{i}(\epsilon_{i})-L_{j}(\epsilon_{j})=\begin{cases}L_{i-j}(c)&\text{if }i\geq j,\\ -L_{j-i}(c)&\text{if }i<j.\end{cases}\tag{19}$$

These qe-operators with numerical confidence degrees are consistent with the classical qualitative operators when ǫi = ǫj = 1, since c = 1 and Li(1) = Li for all i; the qe-operators with qualitative confidence degrees are also consistent with the classical qualitative operators when ǫi = ǫj = O (this is the letter "O", not zero, hence the neutral qualitative confidence degree), since c = O (neutral).

## 6 Examples Of Qpcr5 Fusion Of Qualitative Belief Assignments

## 6.1 Qualitative Masses Using Quantitative Enriched Labels

Let's consider a simple frame Θ = {A, B} with Shafer's model (i.e. A ∩ B = ∅), two qualitative belief assignments qm1(·) and qm2(·), and the set of ordered linguistic labels L = {L0, L1, L2, L3, L4, L5, L6}, n = 5, enriched with quantitative supporting degrees (i.e. enriched labels of Type 1). For this example the (prudent) min operator for combining confidences proposed in Section 5 (case b) is used, but the other methods a) and c) can also be applied. We consider the following qbba summarized in Table 1:
|         | A       | B       | A ∪ B   | A ∩ B   |
|---------|---------|---------|---------|---------|
| qm1(·)  | L1(0.3) | L2(1.1) | L3(0.8) |         |
| qm2(·)  | L4(0.6) | L2(0.7) | L0(1)   |         |
| qm12(·) | L3(0.3) | L2(0.7) | L0(0.8) | L1(0.3) |

Table 1: qm1(·), qm2(·) and qm12(·) with quantitative enriched labels

Note that qm1(·) and qm2(·) are quasi-normalized since L1 + L2 + L3 = L4 + L2 + L0 = L6 = Lmax. The last row of Table 1 corresponds to the result qm12(·) obtained when applying the qualitative conjunctive rule. The values for qm12(·) are obtained using intermediate approximations as follows:

qm12(A) = qm1(A)qm2(A) + qm1(A)qm2(A ∪ B) + qm2(A)qm1(A ∪ B)
= L1(0.3)L4(0.6) + L1(0.3)L0(1) + L4(0.6)L3(0.8)
≈ L[(1·4)/6](min{0.3, 0.6}) + L[(1·0)/6](min{0.3, 1}) + L[(4·3)/6](min{0.6, 0.8})
= L1(0.3) + L0(0.3) + L2(0.6)
= L1+0+2(min{0.3, 0.3, 0.6}) = L3(0.3)

qm12(B) = qm1(B)qm2(B) + qm1(B)qm2(A ∪ B) + qm2(B)qm1(A ∪ B)
= L2(1.1)L2(0.7) + L2(1.1)L0(1) + L2(0.7)L3(0.8)
≈ L[(2·2)/6](min{1.1, 0.7}) + L[(2·0)/6](min{1.1, 1}) + L[(2·3)/6](min{0.7, 0.8})
= L1(0.7) + L0(1) + L1(0.7)
= L1+0+1(min{0.7, 1, 0.7}) = L2(0.7)

qm12(A ∪ B) = qm1(A ∪ B)qm2(A ∪ B) = L3(0.8)L0(1) ≈ L[(3·0)/6](min{0.8, 1}) = L0(0.8)

and the conflicting qualitative mass is

qm12(∅) = qm12(A ∩ B) = qm1(A)qm2(B) + qm2(A)qm1(B)
= L1(0.3)L2(0.7) + L4(0.6)L2(1.1)
≈ L[(1·2)/6](min{0.3, 0.7}) + L[(4·2)/6](min{0.6, 1.1})
= L0(0.3) + L1(0.6)
= L0+1(min{0.3, 0.6}) = L1(0.3)

The resulting qualitative mass qm12(·) is (using intermediate approximations) quasi-normalized since L3 + L2 + L0 + L1 = L6 = Lmax.

Note that, when the derivation of qm12(·) is carried out with the approximations done at the end (i.e.
the best way to carry out derivations), one gets for qm12(A), in a similar way as in (12):

qm12(A) = qm1(A)qm2(A) + qm1(A)qm2(A ∪ B) + qm2(A)qm1(A ∪ B)
= L1(0.3)L4(0.6) + L1(0.3)L0(1) + L4(0.6)L3(0.8)
= (L1·4/6 + L1·0/6 + L4·3/6)(min{0.3, 0.6, 0.3, 1, 0.6, 0.8})
= L4/6+0/6+12/6(0.3) = L16/6(0.3) ≈ L[16/6](0.3) = L3(0.3)

Similarly:

qm12(B) = qm1(B)qm2(B) + qm1(B)qm2(A ∪ B) + qm2(B)qm1(A ∪ B)
= L2(1.1)L2(0.7) + L2(1.1)L0(1) + L2(0.7)L3(0.8)
= L2·2/6+2·0/6+2·3/6(min{1.1, 0.7, 1.1, 1, 0.7, 0.8})
= L4/6+0/6+6/6(0.7) = L10/6(0.7) ≈ L[10/6](0.7) = L2(0.7)
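To make the computation above concrete, here is a minimal sketch (function and variable names are ours) of qe-addition and qe-multiplication under the min-confidence convention of case b), reproducing the qm12(·) row of Table 1 with intermediate approximations:

```python
# Sketch (ours) of qe-addition and qe-multiplication with the min-confidence
# convention (case b of Section 5). A label L_i(eps_i) is a pair (i, eps_i);
# n = 5, so the label set is L0 .. L6.
from functools import reduce

N = 5  # labels L0 .. L_{N+1} = L6

def qe_mul(a, b):
    """L_i(ei) x L_j(ej) = L_[(i*j)/(n+1)](min{ei, ej}); [.] = closest integer."""
    (i, ei), (j, ej) = a, b
    return (round(i * j / (N + 1)), min(ei, ej))

def qe_add(a, b):
    """L_i(ei) + L_j(ej) = L_{min(i+j, n+1)}(min{ei, ej})."""
    (i, ei), (j, ej) = a, b
    return (min(i + j, N + 1), min(ei, ej))

def qe_sum(terms):
    return reduce(qe_add, terms)

# Table 1 inputs (Type 1 enriched labels) on the focal elements A, B, A u B.
qm1 = {"A": (1, 0.3), "B": (2, 1.1), "AuB": (3, 0.8)}
qm2 = {"A": (4, 0.6), "B": (2, 0.7), "AuB": (0, 1.0)}

# Qualitative conjunctive rule, approximating after each product.
qm12_A = qe_sum([qe_mul(qm1["A"], qm2["A"]),
                 qe_mul(qm1["A"], qm2["AuB"]),
                 qe_mul(qm2["A"], qm1["AuB"])])
qm12_B = qe_sum([qe_mul(qm1["B"], qm2["B"]),
                 qe_mul(qm1["B"], qm2["AuB"]),
                 qe_mul(qm2["B"], qm1["AuB"])])
qm12_AuB = qe_mul(qm1["AuB"], qm2["AuB"])
qm12_conflict = qe_sum([qe_mul(qm1["A"], qm2["B"]),
                        qe_mul(qm2["A"], qm1["B"])])
```

This reproduces the last row of Table 1: L3(0.3), L2(0.7), L0(0.8) and the conflicting mass L1(0.3).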
In our model, the unconscious domain contains (besides the processing domain and the unconscious control center UC) a special collector for repressed ideas. In Freud's theory it is also a part of the Ego (but an unconscious part). After a few attempts to transform an idea-attractor belonging to the domain of doubtful ideas into some non-doubtful idea, SCC sends such a doubtful idea-attractor to the collector for repressed ideas.

What can one say about the further evolution of a doubtful idea in the collector for repressed ideas? It depends on the cognitive system (in particular, the human individual). In principle, this collector might play just the role of a churchyard for doubtful ideas. Such a collector would not have output connections, and a doubtful idea (a hidden forbidden desire, wish, impulse, or experience) would disappear after some period of time.

However, Freud demonstrated that advanced cognitive systems (such as human individuals) cannot completely isolate a hidden forbidden wish. They cannot perform the complete interment of doubtful ideas in the collector for repressed ideas. In our model, this collector has an output connection with the unconscious control center UC. At first sight, the existence of such a connection seems to be just a disadvantage in the mental architecture of a cognitive system: it seems that such a cognitive system was simply not able to develop a neuronal structure for 100%-isolation of the collector for repressed ideas. However, later we shall see that the cyclic pathway (from SCC to the collector for repressed ideas, then to UC, and, finally, again to SCC) has important cognitive functions. We might speculate that such a connection was specially created in the process of evolution. But we start with the discussion of the negative consequences of the existence of this cyclic pathway.
Starting with an initial idea J0, a processor PR0 produces an attractor J; the analyzer computes the measures of interest and interdiction for this idea-attractor and considers it a doubtful idea (both measures are too high, larger than the maximal thresholds); SCC sends this idea-attractor to the collector of repressed ideas, where it becomes a hidden forbidden wish; it then moves from the collector of repressed ideas to the unconscious control center UC; UC sends it to some processor PR that produces a new idea-attractor JS. The analyzer may decide that this idea-attractor can be realized (depending on the distances from JS to the databases of interesting and forbidden ideas and on the magnitude of the realization threshold). In this case the analyzer sends JS through the collector (of ideas waiting for realization) to performance. Such an idea-attractor JS is a symptom induced by the original idea-attractor J (in fact, by the initial idea J0). Our present considerations can be interpreted as the creation of AI-models for Freud's theory of the subconscious/unconscious mind.

In our model, an idea belonging to the collector for repressed ideas has the possibility to move to UC. The unconscious control center UC sends this idea to one of the thinking processors. This processor performs iterations starting with this hidden forbidden wish as an initial idea, producing an idea-attractor. In the simplest case the processor sends its output, an idea-attractor, to the subconscious domain. The subconscious analyzer performs analysis of this idea. If the idea does not belong to the domain of doubts, then the analyzer sends it to the collector of ideas waiting for realization. After some period of waiting the idea will be sent to realization. By such a realization SCC removes this idea from the collector of ideas waiting for realization.
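The route just described can be sketched as a toy pipeline; everything here (function names, the string-based "ideas", the realization test) is our own illustrative assumption, not part of the paper's formal model:

```python
# Toy pipeline (ours) for the route of a repressed idea described above:
# collector of repressed ideas -> UC -> processor -> analyzer -> realization,
# or back to repression. Strings stand in for ideas; all names are illustrative.
def repressed_idea_cycle(hidden_wish, processor, interest, interdiction,
                         realization_threshold):
    route = ["collector_of_repressed_ideas", "UC"]
    attractor = processor(hidden_wish)        # new idea-attractor J_S
    route.append("attractor:" + attractor)
    # Assumed analyzer rule: realize J_S if its interest is high enough and
    # its interdiction low enough relative to the realization threshold.
    if interest(attractor) >= realization_threshold > interdiction(attractor):
        route += ["collector_of_waiting_ideas", "realization"]  # a symptom
    else:
        route.append("collector_of_repressed_ideas")            # repressed again
    return route
```

With a processor that merely disguises the wish and an analyzer that finds the disguise interesting and non-forbidden, the cycle ends in realization, i.e. a symptom.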
However, SCC does not remove the root of the idea (the complex), namely the original hidden forbidden wish, because the latter is now located in the unconscious domain, and SCC is not able to control anything in this domain. A new idea-attractor generated by this forbidden wish is nothing other than its new (unusual) performance. Such unconscious transformations of forbidden wishes were studied in Freud, 1962a,b, 1900. In general, a new wish (the final idea-attractor) has no direct relation to the original forbidden wish. This is nothing but a symptom of a cognitive system, cf. Freud, 1962b:

"But the repressed wishful impulse continues to exist in the unconscious. It is on the look-out for an opportunity of being activated, and when that happens it succeeds in sending into consciousness a disguised and unrecognized substitute for what has been repressed, and to this there soon become attached the same feelings of unpleasure which it was hoped had been saved by repression. This substitute for the repressed idea—the symptom—is proof against further attacks of the defensive ego; and in place of a short conflict an ailment now appears which is not brought to an end by the passage of time."

A cognitive system wants to prevent a new appearance of forbidden wishes (which were expelled into the collector of repressed ideas) in the subconscious (and then conscious) domain. In our model, the brain has an additional analyzer, the unconscious one (located in the unconscious domain), that must analyze the nearness of an idea-attractor produced by some processor to the ideas which have already been gathered in the collector of repressed ideas.
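A minimal sketch of such a gate follows, under our own assumptions: the interdiction rule below, which maps small distances (an idea-attractor close to a repressed wish) to interdiction values near one, is illustrative and not the paper's formula.

```python
# Minimal sketch (ours) of the unconscious analyzer's gate. The interdiction
# rule below (distance near 0 -> interdiction near 1) is an illustrative
# assumption, not the paper's formula.
def unconscious_interdiction(distance, scale=1.0):
    return 1.0 / (1.0 + distance / scale)

def passes_to_subconscious(distance, blocking_threshold):
    # Transmit the idea-attractor only if its unconscious interdiction
    # stays below the individual's blocking threshold.
    return unconscious_interdiction(distance) < blocking_threshold
```

An idea-attractor lying very close to a repressed wish (small distance) is then blocked for an individual with a small blocking threshold but may still pass for an individual with a large one.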
The unconscious analyzer contains a comparator that measures the distance between an idea-attractor produced by a thinking block and the database of hidden forbidden wishes. It then calculates the corresponding measure of interdiction by using the same rule as was used for the database of forbidden ideas in the subconsciousness. If such an unconscious interdiction is large (approximately one), then this idea-attractor is too close to one of the former hidden forbidden wishes. This idea should not be transmitted to the subconscious (and then conscious) domain.

Each individual has its own blocking threshold: if the measure of unconscious interdiction (based on the comparison with the database of hidden forbidden wishes) is less than the blocking threshold, then such an idea-attractor is transmitted into the subconscious and then conscious domains; if this measure is larger than the threshold, then such an idea-attractor is deleted directly in the unconscious domain. In the latter case the idea-attractor will never reach the conscious domain.

This blocking threshold determines the degree of blocking of some thinking processors by forbidden wishes. Thresholds can depend on processors. For some individuals (having rather small values of blocking thresholds), a forbidden wish may completely stop the flow of information from some processors to the subconscious domain. The same hidden forbidden wish may play a negligible role for individuals having rather large blocking thresholds. Therefore the blocking thresholds are important characteristics which can be used to distinguish normal and abnormal behaviors. In our mental cybernetic model, blocking thresholds play the
role of sources of the resistance force which does not permit the reappearance of the hidden forbidden wishes, desires and wild impulses that were repressed. We note again that blocking thresholds depend on thinking processors. Thus the same individual can have a normal threshold for one thinking block and an abnormal degree of blocking for another thinking block.

The unconscious analyzer computes the distance between an idea-attractor (produced by a thinking block PR1) and the database of hidden forbidden wishes. If this distance is relatively small, i.e., the measure of unconscious interdiction is relatively large, then such an idea-attractor does not go to the subconscious domain.

## The Pleasure Principle

The pleasure principle is a psychoanalytical term coined by Sigmund Freud. It refers to the desire for immediate gratification versus the deferral of that gratification. Quite simply, the pleasure principle drives one to seek pleasure and to avoid pain. We shall present an AI-justification of this principle. It is convenient to consider the evolution of the pleasure-function through the development from Model 1 to Model 4.

We start with Model 2. In this model pleasure is identified with the interest-measure. This mental quantity takes its values in the segment [k, 1]; thus we have quantified pleasure. The value k corresponds to minimal pleasure and the value 1 to maximal pleasure. This quantity of interest-pleasure is the basis for the ordering of idea-attractors for their realization. The brain wants most the ideas having the highest magnitude of pleasure; such ideas have the highest priority in realization. Moreover, ideas inducing not so much pleasure (e.g., pleasure only slightly exceeding the value k) might never be realized, because they might simply disappear from the collector of ideas waiting for realization. Thus a brain based on Model 2 would like to
maximize the pleasure-function which is defined on the space of ideas. We recall that the interest-measure increases as the distance from an idea-attractor to the interest-database decreases.

We now consider Model 3. Here a brain can calculate not only the interest-function on the space of ideas, but also the interdiction-function. The purpose of the latter is to prevent such a brain from conflicts with reality. Here the pleasure-reality function is identified with the consistency-function. As was mentioned, the simplest form of the consistency function is simply the difference between the functions of interest and interdiction:

PLEASURE-REALITY = INTEREST − INTERDICTION.

Thus the greatest pleasure and, at the same time, the greatest consistency with reality is approached in the case of the highest interest and the lowest interdiction, e.g., interest-measure = 1 and interdiction-measure = k, so that the pleasure-reality function takes the value 1 − k. It is a good point to remark that the pleasure-reality function (given by the measure of consistency) depends on the individual. In general it is an arbitrary linear combination of interest and interdiction:

PLEASURE-REALITY = a · INTEREST + b · INTERDICTION,

where a and b are some coefficients. In Model 4 the pleasure-reality function is the same as in Model 3.

Finally, we consider Model 1. Here we have only dynamical systems which process external and internal stimuli and produce idea-attractors which play the role of reactions to those stimuli. Pleasure is approached by realizations of those idea-attractors. Here the Id totally dominates.

We now analyze more deeply the structure of the pleasure function. As everything in our models, this function is based on the mental distance. Therefore the pleasure principle, as well as the reality principle, is based on the metric structure of mental space.

## Other Approaches To Psycho-Robots

We do not plan to present here a detailed review of other approaches to psycho-robots.
To emphasize the differences of our approach from other developments of psycho-robots, we present a citation from the work of Potkonjak et al., 2002: "Man–machine communication had been recognized a long time ago as a significant issue in the implementation of automation. It influences the machine effectiveness through direct costs for operator training and through more or less comfortable working conditions. The solution for the increased effectiveness might be found in a user-friendly human–machine interface. In robotics, the question of communication and its user-friendliness is becoming even more significant. It is no longer satisfactory that a communication can be called "human–machine interface", since one must see robots as future collaborators, service workers, and probably personal helpers."

In contrast, the main aim of our modeling is not at all the creation of friendly helpers or increasing their effectiveness. We would like to create AI-systems which would really have essential elements of the human psyche. We have shown that even psycho-robots with a rather simple AI-psyche (two emotions and two corresponding databases) would exhibit, if one really wants to simulate the human psyche, very complicated psychological behavior. In particular, they would create various psychical complexes which would be exhibited via symptoms.

We also point out the crucial difference of our "Freudian psycho-robots" from psycho-robots created for computer games ("psycho-automata"). Our aims are similar to those formulated for humanoid robots, see e.g. Brooks et al., 1998a,b, 1999, 2002. However, we jump directly to high-level psyche (without creating, e.g., the visual representation of reality). The idea of Luc Steels to create a robot culture via societies of self-educating robots, Manuel, 2003, is also very attractive to us. It is clear that a real humanoid psyche (including complexes and symptoms) could be created only in a society of interacting Psychots and people.
Moreover, such AI-societies of Psychots can be used for modeling psychoanalytic problems and for the development of new methodologies of treatment of such problems.

## Conclusion

We have proposed a series of AI-type models for advanced psychological behavior. Our approach is based on the geometrization of psychological processes via the introduction of a mental metric space, dynamical processing of mental states, and emotional-type decision making based on quantitative measures of interest and interdiction and the corresponding databases of ideas. Increasing the complexity of AI-modeling necessarily implies the appearance of psychological features such as complexes and symptoms, which have been handled by psychoanalysis during the last hundred years. Such a
complicated behavior has not only negative consequences (e.g. hysteric reactions)16, but it also plays an important controlling role.

The presented AI-models can be used for the creation of AI-systems, which we call psycho-robots (Psychots), exhibiting important elements of the human psyche. At the moment domestic robots are merely simple working devices. However, in the future one can expect demand for systems which would be able not only to perform simple work tasks, but would also have elements of a human self-developing psyche. Such AI-psyche could play an important role both in relations between psycho-robots and their owners and in relations between psycho-robots. Since the presence of a huge number of psycho-complexes is an essential characteristic of human psychology, it would be interesting to model them in the AI-framework. As was already pointed out, complex psychological behavior induces important controlling structures which are self-developing (in the process of interaction of a psycho-robot with human beings or other psycho-robots). However, psycho-robots would pay the same price for the complexity of their psyche as was paid by people: some psycho-robots would exhibit elements of psychopathic behavior.

One of the major contributions of AI and cognitive science to psychology has been the information-processing view of human thinking, in which the brain-as-computer metaphor is taken literally. In the present paper we extended the AI-approach to the modeling of human psychology. We created the computer architecture for modeling very delicate features of human psychological behavior.

## References

Albeverio, S., Khrennikov, A. Yu., Kloeden, P., 1999. Memory retrieval as a p-adic dynamical system. Biosystems 49, 105-115.

Ashby, R., 1952. Design of a brain. Chapman-Hall, London.

Baars, B. J., 1997. In the theater of consciousness. The workspace of mind. Oxford University Press, Oxford.

Boden, M. A., 2006. Mind as Machine: A History of Cognitive Science. Oxford University Press, Oxford, UK.

Boden, M.
A., 1998. Creativity and Artificial Intelligence. Artificial Intelligence 103(1-2), 347-356.

Boden, M. A., 1996. Artificial Genius. Discover 17, 104-107.

Brooks, R., Breazeal, C., Marjanovic, M., Scassellati, B., Williamson, M., 1998a. The Cog Project: Building a Humanoid Robot. In: Computation for Metaphors, Analogy and Agents. Springer Lecture Notes in Artificial Intelligence 1562, Springer-Verlag, pp. 8-13.

Brooks, R. A., Breazeal (Ferrell), C., Irie, R., Kemp, C. C., Marjanovic, M., Scassellati, B., Williamson, M., 1998b. Alternate Essences of Intelligence. AAAI-98.

Brooks, R. A., 1999. Cambrian Intelligence: The Early History of the New AI. The MIT Press, Cambridge, MA, pp. 8-9.

Brooks, R. A., 2002. Flesh and Machines: How Robots Will Change Us. Pantheon Books, New York, p. 65.

Chomsky, N., 1963. Formal properties of grammars. In: Luce, R. D., Bush, R. R., Galanter, E. (Eds.), Handbook of Mathematical Psychology, 2. Wiley, New York, pp. 323-418.

Churchland, P. S., Sejnowski, T., 1992. The computational brain. MIT Press, Cambridge.

Collings, R. J., Jefferson, D. R., 1992. AntFarm: Towards simulated evolution. In: Langton, C. G., Taylor, C., Farmer, J. D., Rasmussen, S. (Eds.), Artificial Life II, pp. 579-601. Addison Wesley, Redwood City, CA.

Donnart, J. Y., Meyer, J. A., 1996. Learning reactive and planning rules in a motivationally autonomous animat. IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics 26(3), 381-395.

Edelman, G. M., 1989. The remembered present: a biological theory of consciousness. Basic Books, New York.

Eliasmith, C., 1996. The third contender: a critical examination of the dynamicist theory of cognition. Phil. Psychology 9(4), 441-463.

Fodor, J. A., Pylyshyn, Z. W., 1988. Connectionism and cognitive architecture: a critical analysis. Cognition 28, 3-71.

Freud, S., 1900. The interpretation of dreams. Standard Edition, 4 and 5.

Freud, S., 1962a. New introductory lectures on psychoanalysis. Penguin Books, New York.
Freud, S., 1962b. Two short accounts of psycho-analysis. Penguin Books, New York.

Gay, P., 1988. Freud: A life for our time. W. W. Norton, New York.

Green, V., 2003. Emotional Development in Psychoanalysis, Attachment Theory and Neuroscience: Creating Connections. Routledge.

Khrennikov, A. Yu., 1997. Non-Archimedean analysis: quantum paradoxes, dynamical systems and biological models. Kluwer, Dordrecht.

16 A hysteric reaction of a domestic robot? Why not?!
Khrennikov, A. Yu., 1998a. Human subconscious as the p-adic dynamical system. J. of Theor. Biology 193, 179-196.

Khrennikov, A. Yu., 1998b. p-adic dynamical systems: description of concurrent struggle in a biological population with limited growth. Dokl. Akad. Nauk 361, 752.

Khrennikov, A. Yu., 1999a. Description of the operation of the human subconscious by means of p-adic dynamical systems. Dokl. Akad. Nauk 365, 458-460.

Khrennikov, A. Yu., 2000a. p-adic discrete dynamical systems and collective behaviour of information states in cognitive models. Discrete Dynamics in Nature and Society 5, 59-69.

Khrennikov, A. Yu., 2000b. Classical and quantum mechanics on p-adic trees of ideas. BioSystems 56, 95-120.

Khrennikov, A. Yu., 2002a. Classical and quantum mental models and Freud's theory of unconscious mind. Series Math. Modelling in Phys., Engineering and Cognitive Sciences, 1. Växjö Univ. Press, Växjö.

Khrennikov, A. Yu., 2004a. Information dynamics in cognitive, psychological, social, and anomalous phenomena. Kluwer, Dordrecht.

Khrennikov, A. Yu., 2004b. Probabilistic pathway representation of cognitive information. J. Theor. Biology 231, 597-613.

Langton, C. G., Taylor, C., Farmer, J. D., Rasmussen, S. (Eds.), 1992. Artificial Life II, pp. 41-91. Addison Wesley, Redwood City, CA.

Langton, C. G., 1992. Life at the edge of chaos. In: Langton, C. G., Taylor, C., Farmer, J. D., Rasmussen, S. (Eds.), Artificial Life II. Addison Wesley, Redwood City, CA.

Macmillan, M., 1997. The Completed Arc: Freud Evaluated. MIT Press, Cambridge, MA.

Manuel, T. L., 2003. Creating a Robot Culture: An Interview with Luc Steels. IEEE Intelligent Systems 18(3), 59-61, May/June.

Meyer, J.-A., Guillot, A., 1994. From SAB90 to SAB94: Four years of Animat research. In: Proc. of the Third International Conference on Adaptive Behavior. The MIT Press, Cambridge.
Potkonjak, V., Radojicic, J., Tzafestas, S., 2002. Modeling Robot "Psycho-Physical" State and Reactions: A New Option in Human–Robot Communication. Part 1: Concept and Background. Journal of Intelligent and Robotic Systems 35, 339-352.

Solms, M., Turnbull, O., 2003. The Brain and the Inner World: An Introduction to the Neuroscience of Subjective Experience. Other Press, New York.

Solms, M., 2002. An Introduction to the Neuroscientific Works of Freud. In: van der Vijver, G., Geerardyn, F. (Eds.), The Pre-Psychoanalytic Writings of Sigmund Freud. Karnac, London, 25-26.

Solms, M., 2006a. Putting the psyche into neuropsychology. Psychologist 19(9), 538-539.

Solms, M., 2006b. Sigmund Freud today. Psychoanalysis and neuroscience in dialogue. Psyche-Zeitschrift fur Psychoanalyse und ihre Anwendungen 60(9-10), 829-859.

Stein, D. J., Solms, M., van Honk, J., 2006. The cognitive-affective neuroscience of the unconscious. CNS Spectrums 11(8), 580-583.

Strogatz, S. H., 1994. Nonlinear dynamics and chaos with applications to physics, biology, chemistry, and engineering. Addison Wesley, Reading, MA.

van Gelder, T., Port, R., 1995. It's about time: Overview of the dynamical approach to cognition. In: van Gelder, T., Port, R. (Eds.), Mind as motion: Explorations in the dynamics of cognition. MIT Press, Cambridge, MA, 1-43.

van Gelder, T., 1995. What might cognition be, if not computation? J. of Philosophy 91, 345-381.

Voronkov, G. S., 2002a. Information and brain: viewpoint of a neurophysiologist. Neurocomputers: development and applications N 1-2, 79-88.

Young-Bruehl, E., 1998. Subject to Biography. Harvard University Press, Boston.

Yaeger, L., 1994. Computational genetics, physiology, metabolism, neural systems, learning, vision, and behavior or Polyworld: Life in a new context. In: Artificial Life III, pp. 263-298. Addison Wesley, Redwood City, CA.
## Introduction

The aim of this paper is to present a mental AI-model based on a geometric representation of mental processes. This model can be considered as a first step toward a coming AI-formalization of the foundations of psychoanalytic research. Mathematical foundations for the present AI-model were developed in a series of works: Khrennikov, 1997, 1998a,b, 1999a,b, 2000a,b, 2002a, Albeverio et al., 1999, and Dubischar et al., 1999. Unfortunately, the high level of mathematical presentation in these works makes them hardly readable for people working in AI, computer science, and psychology. In the present article we would like to present the main distinguishing AI-features of our model without using the formal mathematical apparatus.

Another important difference of this paper from the mentioned works is that now we do not try to specify the set-theoretic and topological structure of mental space. In previous works we developed one special mathematical model of mental space given by hierarchical trees (so-called ultrametric spaces); see Khrennikov, 2004a,b, for the neurophysiological basis of such spaces. Although such encoding of hierarchy into space topology is very promising (especially taking into account the role of hierarchical structures in psychology), we find it possible to proceed in modeling flows of conscious/unconscious mind in the most general framework of an arbitrary metric mental space. However, from the very beginning we emphasize that we cannot exclude that the practical creation of AI-psyche would be based on the hierarchical encoding of information by using trees equipped with an ultrametric distance.

Our basic idea is to repeat in psychology and cognitive science the program of geometrization which has been performed in physics. And we hope that through such geometrization we shall be able to represent some elements of human psyche in the AI-framework.
We recall that in physics the starting point of the mathematical formalization was the creation of an adequate mathematical model of physical space. It was not an easy task; it took about three hundred years. Finally, however, physicists got a well-established model of space: the infinitely divisible real continuum. Physical systems were embedded in this space. The evolution of a physical system was represented by a dynamical system (continuous, a differential equation, or discrete, iterations of some map from physical space into itself). The basic dynamical law, the second Newton law, was given in a simple differential form. We would like to do the same with mind: a) to introduce mental space, a "space of ideas"; b) to consider dynamics in mental space, flows of ideas.

After performing such a nontrivial task, we apply our approach to the modeling of Freud's psychoanalysis. Here we propose a mental AI-model describing flows of mind in the unconscious, the subconsciousness, and the consciousness, as well as mental flows between these domains. The main attention will be paid to dynamics in the unconscious and the subconsciousness and their feedback coupling. Our model does not say much about consciousness; moreover, we are not sure that a mental AI-model could be applied to the problem of consciousness at all. Our model describes (and can even be used for a mathematical simulation of) such basic features of psychoanalysis as the repression of forbidden wishes, desires and impulses (coming to the subconsciousness from the unconscious and going to the consciousness), complexes, and the corresponding symptoms.

In a series of Models 1-4, we consider AI-modeling of cognitive systems with increasing complexity of psychological behavior determined by the structure of flows of ideas. One immediately recognizes that our models can be used for the creation of AI-systems, which we call psycho-robots, exhibiting important elements of human psyche.
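The pair of tasks a) and b) can be illustrated by the simplest possible discrete dynamics (a toy construction of ours, not the paper's model): iterate a map on a set of "ideas" until it stops at a fixed point, the idea-attractor:

```python
# Toy illustration (ours): an idea-attractor as a fixed point of iterated dynamics.
def iterate_to_attractor(f, idea, max_steps=1000):
    """Iterate idea -> f(idea) until a fixed point (idea-attractor) is reached."""
    for _ in range(max_steps):
        nxt = f(idea)
        if nxt == idea:
            return idea
        idea = nxt
    raise RuntimeError("no attractor reached within max_steps")

# Example dynamics on integers: repeated halving flows every idea to 0.
attractor = iterate_to_attractor(lambda n: n // 2, 13)
```

In the paper's framework the map acts on a metric mental space rather than on integers, but the flow-to-attractor structure is the same.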
The creation of such psycho-robots may be a useful improvement of domestic robots. At the moment domestic robots are merely simple working devices (e.g. vacuum cleaners or lawn mowers). However, in the future one can expect demand for systems which would be able not only to perform simple work tasks, but would also have elements of a human self-developing psyche. Such an AI-psyche could play an important role both in relations between psycho-robots and their owners as well as between psycho-robots themselves. Since the presence of a huge number of psycho-complexes (results of repression of forbidden desires) is an essential characteristic of human psychology, it would be interesting to model them in the AI-framework.
Our approach can be considered as an extension of the artificial intelligence approach, Chomsky, 1963, Churchland and Sejnowski, 1992, to the simulation of psychological behavior, cf. Boden, 1996, 1998, 2006. An especially close relation can be found with models of artificial life, see Langton et al., 1992 (and especially the article of Langton, 1992), Yaeger, 1994, Collins and Jefferson, 1992. We extend the modeling of AI-life to psychological processes. On the basis of the presented models, we can create AI-societies of psycho-robots interacting with real people and observe the evolution of the psyche of psycho-robots (and even of people interacting with them). We also mention the development of the theory of animats, see e.g. Meyer and Guillot, 1994 and Donnart and Meyer, 1996. By similarity with animats, we call our systems psycho-robots. Finally, as a motivation of our activity, we cite Herbert Simon: "AI can have two purposes. One is to use the power of computers to augment human thinking, just as we use motors to augment human or horse power. Robots and expert systems are major branches of this. The other is to use a computer's AI to understand how humans think, in a humanoid way… You are using AI to understand the human mind." Our aim is precisely to understand the human mind and psychology via AI-modeling.

## Metric Spaces

The notion of a metric space is used in many applications for describing distances between objects. We are given a set of objects of any sort, called points, and a distance (metric) defined between any two points, which is nonnegative and has the following properties: 1) the distance between two points equals zero if and only if these points coincide; 2) the distance between two points does not depend on the order in which the points are taken; 3) (the triangle inequality) take three points and consider the corresponding triangle; each side of this triangle is less than or equal to the sum of the two other sides. The main examples of metric spaces used in physics are Euclidean spaces and their generalizations.
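As an illustration (ours, not from the original works), the three metric axioms can be checked numerically on a finite set of points; the Euclidean distance used below is an assumed example metric:

```python
from itertools import product
import math

def is_metric(points, d, tol=1e-12):
    """Check the three metric-space axioms on a finite set of points."""
    for x, y, z in product(points, repeat=3):
        if d(x, y) < -tol:                      # nonnegativity
            return False
        if (d(x, y) <= tol) != (x == y):        # zero distance iff points coincide
            return False
        if abs(d(x, y) - d(y, x)) > tol:        # symmetry (order does not matter)
            return False
        if d(x, z) > d(x, y) + d(y, z) + tol:   # triangle inequality
            return False
    return True

euclid = lambda p, q: math.dist(p, q)
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(is_metric(pts, euclid))  # → True
```

Note that squared Euclidean distance fails the triangle inequality on collinear points, so `is_metric` correctly rejects it.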
However, as we have seen in Khrennikov, 1997, 1998a,b, 1999a,b, 2000a,b, 2002a, Albeverio et al., 1999, and Dubischar et al., 1999, another class of metric spaces might be essentially more adequate for applications to psychology and cognitive sciences: so-called ultrametric spaces (in the mathematical literature they are also called non-Archimedean spaces, Khrennikov, 1997). These spaces have geometries which differ crucially from the geometries of physical spaces. However, we are not able to go into detail in the present communication.

## Mental Space

We shall use the following mathematical model for mental space: mental space is a metric space whose points represent ideas (mental states).

## Dynamical Thinking

Dynamical thinking is performed via the following procedure: a) an initial mental state (e.g. an external sensory input) is sent to the unconscious domain; b) it is iterated by some dynamical system which is given by a map from the mental metric space into itself; c) if the iterations converge (with respect to the mental space metric) to an attractor, then this attractor is communicated to the subconsciousness; this is the solution of the initial problem. In the simplest model, see Model 1 in section 4.1, this attractor is sent directly to the consciousness. Thus in our model the unconscious functioning of the brain is automatic dynamical processing. The unconsciousness is a collection of dynamical systems (thinking processors) which produce new mental states
practically automatically. The consciousness only uses and controls the results (attractors in spaces of ideas) of the functioning of unconscious processors.

## Models of the Information Architecture of Conscious Systems

We present a few mathematical models of the information architecture of conscious systems, cf., e.g., Fodor and Pylyshyn, 1988, Edelman, 1989, Voronkov, 2002a. We start with a quite simple model (Model 1). This model will be developed into more complex models which describe some essential features of human cognitive behavior. The following sequence of cognitive models is related to the process of evolution of the mental architecture of cognitive systems.

## Model 1

A) The brain of a cognitive system is split into three domains: the unconscious, the subconscious, and the conscious. B) There are two control centers, namely, the subconscious control center SCC and the unconscious control center UC. C) The main part of the unconscious domain is a processing domain. Dynamical thinking processors are located in this domain. In our mathematical models such processors are represented by maps from the mental space into itself. In the simplest case the outputs of one group of thinking processors are always sent to the unconscious control center UC and the outputs of another group are always sent to the subconscious control center SCC. The brain of such a cognitive system works in the following way, see section 3: a) external information (e.g., a sensory stimulus) is transformed by the SCC into some initial idea-problem; the SCC sends this idea to a thinking processor which is located in the processing domain; b) starting with this initial idea, the processor produces via iterations an idea-attractor. We consider two possibilities: c1) If the thinking processor under consideration is one of the processors with UC-output, then the idea-attractor is transmitted to the control center UC. This center sends it either as an initial idea to the processing domain or to an unconscious performance. c11)
In the first case some processor (it can have either conscious or unconscious output) performs iterations starting with this idea and produces a new idea-attractor. c12) In the second case some unconscious reaction is produced. c2) If the thinking processor under consideration is one of the processors with SCC-output, then the idea-attractor is transmitted directly to the control center SCC. This center sends it either again to the processing domain (as an initial idea), or to physical or mental performance (speech, writing), or to memory. These performances can be conscious as well as unconscious. In the first case the idea-attractor should be transmitted by the SCC to the conscious domain. In this primitive model there is no additional analysis of the idea-attractor which was produced in the unconscious domain. Each attractor is recognized by the control center SCC as the solution of an initial problem; compare with Models 2-4. These attractors are wishes, desires and impulses produced by the unconsciousness. Moreover, it is natural to assume that some group of thinking processors have their outputs only inside the processing domain. Thus they do not send outputs to the control centers. An idea-attractor produced by such a processor is transmitted neither to the SCC nor to the UC; it is directly used as the initial condition by some processor.
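The "dynamical thinking" step above, iterating a map on a metric space until it settles on an attractor, can be sketched as follows (a hypothetical illustration, not the authors' implementation; the map and metric are assumed examples):

```python
def iterate_to_attractor(f, x0, dist, eps=1e-9, max_iter=10_000):
    """Iterate a map f on a metric space from x0 until successive states
    are eps-close (a fixed-point attractor), or give up."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if dist(x, x_next) < eps:
            return x_next          # attractor reached: the "solution"
        x = x_next
    return None                    # no convergence: no idea-attractor produced

# Example: a contracting map on the real line with metric |x - y|
f = lambda x: 0.5 * x + 1.0        # fixed point (attractor) at x = 2
attractor = iterate_to_attractor(f, x0=10.0, dist=lambda a, b: abs(a - b))
print(round(attractor, 6))  # → 2.0
```

A non-contracting map (e.g. `x + 1`) never converges, modeling a processor that fails to produce an idea-attractor.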
should depend on time and should decrease with time. The rate at which the measure of interest decreases can depend on the idea. Finally, if the measure of interest becomes less than the realization threshold, such an idea-attractor is deleted from the collector. It is natural to assume the presence of a preserving threshold: if an idea has an extremely high value of interest, larger than the preserving threshold, then such an idea must be realized in any case. In our model we postulate that for such an idea the measure of interest does not change with time. We now describe one possible model for finding the value of interest of ideas-attractors. It is based on the fundamental assumption that the brain is able to measure the distance between ideas. The subconscious domain of the brain of a cognitive system contains a database of interesting ideas, the interest-database. The interest-database is continuously created on the basis of mental experiences. It is the cornerstone of the Ego (in the coming models the Ego will be essentially extended). The subconscious domain contains a special block, the comparator, that measures the distance between two ideas, and the distance between an idea and the set of interesting ideas. At the present level of development of neurophysiology we cannot specify mental distance. Moreover, the neural realization of mental distance may depend on the cognitive system or class of cognitive systems. However, neurophysiology gives some reasons to suppose that the functioning of a brain might be based on neuronal trees which induce an ultrametric mental space, see Khrennikov 2002a, 2004a. Our present considerations are given for an arbitrary metric. It is only important that the brain is able to measure the distance between ideas and between an idea and a collection of ideas. We recall that the distance between a point (in our case an idea) and a finite set (in our case the collection of interesting ideas) is defined as the minimum of the distances between this point and the points of the finite set.
If an idea-attractor is close to some idea from the interest-database, then the distance between this idea-attractor and the database is also small. If an idea-attractor is far from all interesting ideas, then the distance between this idea-attractor and the database is large. We now define mathematically the measure of interest of an idea-attractor as one over the sum of the distance (between this idea-attractor and the interest-database) and one:

measure of interest = 1/(distance + 1).

Thus, if the distance is small, the measure of interest is large; if the distance is large, it is small. We can now determine the value of the parameter k (the lowest possible value of the measure of interest). Denote by the symbol L the maximum of the distances between all possible pairs of mental points. Then the minimal possible value of the measure of interest is k = 1/(L + 1). Here L can be finite as well as infinite; in the latter case k = 0. Finally, we remark that, since the minimal distance equals zero, the maximal value of the measure of interest is 1. Thus it takes values in the segment [k, 1].
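A minimal numeric sketch of this construction (the one-dimensional "ideas" and the metric are assumed toy examples, not from the paper):

```python
def measure_of_interest(idea, interest_db, dist):
    """Interest = 1 / (distance to the interest-database + 1), where the
    distance from a point to a finite set is the minimum over its points."""
    d = min(dist(idea, x) for x in interest_db)
    return 1.0 / (d + 1.0)

dist = lambda a, b: abs(a - b)          # assumed metric on a toy 1-D mental space
interest_db = [2.0, 5.0, 9.0]           # hypothetical database of interesting ideas

print(measure_of_interest(5.0, interest_db, dist))   # → 1.0 (idea is in the database)
print(measure_of_interest(7.0, interest_db, dist))   # distance 2, interest 1/3
```

The output always lies in [k, 1] as stated above: distance 0 gives interest 1, and the largest possible distance L gives k = 1/(L + 1).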
The measure of consistency is a linear combination of the measures of interest and interdiction:

consistency = a interest + b interdiction,

where a and b are some real coefficients. Such a linear combination depends on the cognitive system. In the simplest case it can be just the difference between these measures:

consistency = interest - interdiction.

Such a functional describes "normal behaviour." A risky person may have, e.g., the functional

consistency = a interest - interdiction,

where the coefficient a is sufficiently large. Such a person would neglect danger and interdiction and would be extremely stimulated even by a minimal interest. We can even consider an "adrenalin-guy" having

consistency = interest + interdiction.

For him danger and interdiction are no less exciting than interest. We now modify Model 2 and consider, instead of the realization threshold based on the measure of interest, a realization threshold based on the measure of consistency. The presence of such a threshold plays the role of a filter against "inconsistent ideas-attractors." If for some idea-attractor the measure of consistency is larger than the consistency threshold, then such an idea will stay in the collector of ideas waiting for realization. In the opposite case such an idea will be deleted without any further analysis. It is convenient to consider a special block in the subconscious domain, an analyzer. This block contains: a) the comparator, which measures the distances from an idea-attractor to the databases of interesting and forbidden ideas; b) a computation device, which calculates the measures of interest, interdiction and consistency; this device also checks the consistency of an idea (by comparing it with the realization threshold); c) a transmission device, which sends an idea-attractor to the collector or to the trash. In the model under consideration the order in the queue of ideas in the collector is based on the measure of consistency. It is also convenient to introduce one more special block in the subconscious domain. <image>
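The analyzer described above can be sketched as follows (a hypothetical illustration with assumed toy databases and metric; the coefficients default to the "normal behaviour" case a = 1, b = -1):

```python
def measure(idea, database, dist):
    """1 / (min-distance to database + 1), as for the measure of interest."""
    return 1.0 / (min(dist(idea, x) for x in database) + 1.0)

def analyzer(idea, interest_db, interdiction_db, dist,
             a=1.0, b=-1.0, realization_threshold=0.0):
    """Consistency = a*interest + b*interdiction; the idea goes to the
    collector only if consistency exceeds the realization threshold."""
    interest = measure(idea, interest_db, dist)
    interdiction = measure(idea, interdiction_db, dist)
    consistency = a * interest + b * interdiction
    return ("collector" if consistency > realization_threshold else "trash",
            consistency)

dist = lambda x, y: abs(x - y)
decision, c = analyzer(4.0, interest_db=[4.0], interdiction_db=[10.0], dist=dist)
print(decision)  # → collector (interesting idea, far from forbidden ones)
```

Changing the coefficient a models the "risky person" of the text: a large a lets even weakly interesting ideas pass the filter.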
,- - - \# A cognitive system described by Model 3 has complex cognitive behavior. However, this complexity does not imply "mental problems". The use of the consistency functional -- a linear combination of the measures of interest and interdiction -- solves the contradiction between interest and interdiction for an idea-attractor. We can again assume that there exists a preserving threshold such that ideas-attractors having the consistency larger than this threshold must be realized in any case. It is natural to assume that the preserving threshold is essentially larger than the realization one. This threshold plays the important role in the process of the time evolution of consistency of an idea-attractor in the collector. We can assume that the consistency-measure decreases exponentially with time (thus this quantity will very quickly become less than the realization threshold and after that this idea will disappear from the collector without any trace and hence it will be never realized). But it will be assumed that if the consistency-measure is larger than the preserving threshold then this measure will not be changed. The main disadvantage of the cognitive system described by Model 3 is that the analyzer permits the realization of ideas which have at the same time very high levels of interest and interdiction (if the measures of interest and interdiction compensate each other in the consistency function). For example, let the consistency function be equal to the difference between the measure of interest and the measure of interdiction. Assume that the realization threshold is equal to zero. For such a brain the analyzer sends to the collector totally forbidden ideas (with measure of interdiction which is approximately equal to one) having extremely high interest (with measure of interest which is approximately equal to one) Such a behavior (a storm of cravings) can be dangerous, especially in a group of cognitive systems with a social structure. 
Therefore the functioning of the analyzer must be based on a more complex analysis of ideas-attractors, not reduced to the calculation of the consistency functional and its comparison with the realization threshold.

## Model 4

Suppose that a cognitive system described by Model 3 improves its brain by introducing two new thresholds: the maximal interest threshold and the maximal interdiction threshold. If for some idea-attractor the measure of interest is larger than the maximal interest threshold, then such an idea is extremely interesting. The cognitive system cannot simply delete this attractor. If for some idea-attractor the measure of interdiction is larger than the maximal interdiction threshold, then such an idea is strongly forbidden. The cognitive system cannot simply send this idea to the collector to wait for realization. We now introduce the domain of doubts. These are ideas such that both the measure of interest and the measure of interdiction are larger than the corresponding maximal thresholds. If an idea-attractor belongs to the domain of doubts, then the cognitive system cannot automatically (on the basis of the value of consistency) take the decision to realize this idea.

## The Domain of Doubts and Mental Problems

On the one hand, the creation of an additional block in the analyzer to perform analysis of ideas-attractors by comparing them with the maximal-interest and maximal-interdiction thresholds plays a positive role. Such a brain does not proceed automatically to the realization of dangerous ideas-attractors, despite their high attraction. We recall that a brain described by Model 3 would proceed totally automatically by comparing the measure of consistency with the realization threshold. On the other hand, this step in cognitive evolution induces mental problems for a cognitive system. In fact, the appearance of the domain of doubts in the mental space is the origin of some psychical problems and mental diseases.
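The Model 4 decision rule can be sketched as follows (a hypothetical illustration; the numeric threshold values are assumed, and the simplest consistency functional interest - interdiction is used):

```python
def classify(interest, interdiction,
             max_interest=0.8, max_interdiction=0.8,
             realization_threshold=0.0):
    """Model-4-style decision: ideas that are both extremely interesting
    and strongly forbidden fall into the domain of doubts and cannot be
    decided automatically via consistency."""
    if interest > max_interest and interdiction > max_interdiction:
        return "domain of doubts"
    consistency = interest - interdiction     # simplest consistency functional
    return "collector" if consistency > realization_threshold else "trash"

print(classify(0.95, 0.95))  # → domain of doubts
print(classify(0.90, 0.10))  # → collector
print(classify(0.10, 0.90))  # → trash
```

The first case is exactly the "storm of cravings" that Model 3 would have sent to the collector automatically.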
Let the analyzer find that an idea-attractor belongs to the domain of doubts: a forbidden wish (desire, impulse, experience). The brain is able neither to realize such an idea nor simply to delete it. What could a brain do in this situation? The answer to this question was given in Freud, 1962a,b: such a forbidden wish is repressed into the unconscious domain.
where E[f(s_t)] and var(f(s_t)) are the expected value and the variance of f(s_t). Estimates r(k) of the autocorrelation coefficients ρ(k) can be calculated from a time series (s_1, s_2, …, s_L) of length L:

$$r(k)={\frac{\sum_{j=1}^{L-k}(f(s_{j})-{\bar{f}})(f(s_{j+k})-{\bar{f}})}{\sum_{j=1}^{L}(f(s_{j})-{\bar{f}})^{2}}}$$

where $\bar{f} = \frac{1}{L}\sum_{j=1}^{L} f(s_j)$ and $L \gg 0$. A random walk is representative of the entire landscape when the landscape is statistically isotropic. In this case, whatever the starting point of the random walks and the neighbors selected during the walks, the estimates r(n) must be nearly the same. The estimation error diminishes with the walk length. The correlation length τ measures how the autocorrelation function decreases, and it summarizes the ruggedness of the landscape: the larger the correlation length, the smoother the landscape. Weinberger's definition $\tau = -\frac{1}{\ln(\rho(1))}$ makes the assumption that the autocorrelation function decreases exponentially. Here we will use another definition that comes from a more general analysis of time series, the Box-Jenkins approach [32], introduced in the field of fitness landscapes by Hordijk [33]. The time series of fitness values will be approximated by an autoregressive moving-average (ARMA) model. In an ARMA(p, q) model, the current value depends linearly on the p previous values and the q previous white noises:

$$f(s_{t})=c+\sum_{i=1}^{p}\alpha_{i}f(s_{t-i})+\epsilon_{t}+\sum_{i=1}^{q}\beta_{i}\epsilon_{t-i}\quad\text{where the }\epsilon_{t}\text{ are white noises.}$$

The approach consists in iterating three stages [32]. The identification stage determines the values of p and q using the autocorrelation function (acf) and the partial autocorrelation function (pacf) of the time series. The estimation stage determines the values of c, α_i and β_i using the pacf. The significance of these values is tested by a t-test.
A value is not significant if its t-statistic is below 2. The diagnostic checking stage is composed of two parts. The first one checks the adequacy between data and estimated data. We use the squared correlation R² between the observed data of the time series and the estimated data produced by the model, and the Akaike information criterion AIC:

$$AIC(p,q)=\log(\hat{\sigma}^{2})+2(p+q)/L\quad\text{where }\hat{\sigma}^{2}=L^{-1}\sum_{j=1}^{L}(y_{j}-\hat{y}_{j})^{2}.$$

The second part checks the whiteness of the residuals, the differences between observed and estimated values. For this, the autocorrelation of the residuals and the p-value of the Ljung-Box test are computed.
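The autocorrelation estimate r(k) above can be computed directly; a minimal sketch (the toy fitness series stands in for values gathered along a hypothetical random walk):

```python
def autocorrelation(values, k):
    """Estimate r(k) for a fitness time series gathered along a random walk."""
    L = len(values)
    mean = sum(values) / L
    num = sum((values[j] - mean) * (values[j + k] - mean) for j in range(L - k))
    den = sum((v - mean) ** 2 for v in values)
    return num / den

# Toy fitness series from an assumed random walk on a smooth landscape
series = [0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2]
print(round(autocorrelation(series, 1), 3))  # → 0.5
```

A smooth landscape yields slowly decaying r(k) (large correlation length τ); a rugged one yields r(k) that drops quickly toward zero.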
<image>

$$nsc=\sum_{i=1}^{m-1}c_{i},\quad\text{where }\forall i\in[1,m)\;\;c_{i}=\min(P_{i},0)$$

We hypothesize that nsc can give some indication of problem difficulty in the following sense: if nsc = 0, the problem is easy; if nsc < 0, the problem is difficult, and the value of nsc quantifies this difficulty: the smaller its value, the more difficult the problem. In other words, according to our hypothesis, a problem is difficult if at least one of the segments S_1, S_2, …, S_{m-1} has a negative slope, and the sum of all the negative slopes gives a measure of problem hardness. The idea is that the presence of a segment with negative slope indicates bad evolvability for individuals having fitness values contained in that segment.

## 4 Analysis of the Majority Problem Fitness Landscape

### 4.1 Definition of the fitness landscape

As in Mitchell [4], we use CA of radius r = 3 and configurations of length N = 149. The set S of potential solutions of the Majority fitness landscape is the set of binary strings which represent the possible CA rules. The size of S is $2^{2^{2r+1}} = 2^{128}$, and each automaton should be tested on the $2^{149}$ possible different ICs. This gives $2^{277}$ possibilities, a size far too large to be searched exhaustively. Since performance can be defined in several ways, the consequence is that for each feasible CA in the search space, the associated fitness can be
different, and thus effectively inducing different landscapes. In this work we will use one type of performance measure, based on the fraction of n initial configurations that are correctly classified from one sample. We call standard performance (see also section 2.3) the performance when the sample is drawn from a binomial distribution (i.e., each bit is independently drawn with probability 1/2 of being 0). Standard performance is a hard measure because of the predominance in the sample of ICs with density close to 0.5, and it has been typically employed to measure a CA's capability on the density task. The standard performance cannot be known perfectly due to the random variation of samples of ICs. The fitness function of the landscape is thus a stochastic one, which allows populations of solutions to drift around neutral networks. This evaluation error leads us to define the neutrality of the landscape. The ICs are chosen independently, so the fitness value f of a solution follows a normal law $\mathcal{N}(f, \frac{\sigma(f)}{\sqrt{n}})$, where σ is the standard deviation of a sample of fitness f, and n is the sample size. For a binomial sample, $\sigma^2(f) = f(1-f)$, the variance of a Bernoulli trial. Then two neighbors s and s′ are neutral neighbors (isNeutral(s, s′) is true) if a t-test accepts the hypothesis of equality of f(s) and f(s′) with 95 percent confidence (fig. 3). The maximum number of fitness values statistically different for standard performance is 113 for $n = 10^4$, 36 for $n = 10^3$, and 12 for $n = 10^2$.

<image>

The DOS of the Majority problem landscape was computed using the uniform random sampling technique. The number of sampled points is 4000 and, among them, the number of solutions with a fitness value equal to 0 is 3979. Clearly, the space appears to be a difficult one to search, since the tail of the distribution to the right is non-existent. Figure 4 shows the DOS obtained using the Metropolis-Hastings technique.
This time, over the 4000 solutions sampled, only 176 have a fitness equal to zero, and the DOS clearly shows a more uniform distribution of rules over many different fitness values.
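The neutrality test described above can be sketched as follows (a hypothetical two-sample test on binomial performance estimates; the critical value 1.96 approximates the 95% level, and the sample sizes are assumed):

```python
import math

def is_neutral(f1, f2, n, z_crit=1.96):
    """Two estimated performances f1, f2 (fractions of n ICs correctly
    classified) are neutral neighbors if a test at the 95% level cannot
    distinguish them; Bernoulli variance sigma^2(f) = f(1 - f)."""
    se = math.sqrt(f1 * (1 - f1) / n + f2 * (1 - f2) / n)
    if se == 0.0:
        return f1 == f2
    return abs(f1 - f2) / se <= z_crit

print(is_neutral(0.500, 0.505, n=100))    # → True  (difference within noise)
print(is_neutral(0.500, 0.760, n=100))    # → False
```

Larger IC samples shrink the standard error, which is why the number of statistically distinguishable fitness values grows with n as reported above.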
<image>

(a) (b)

It is important to remark that a considerable number of sampled solutions have a fitness approximately equal to 0.5. Furthermore, no individual with a fitness value above 0.514 has been sampled. For the details of the techniques used to sample the space, see [36,37]. The autocorrelation along random walks is not significant, due to the large number of zero-fitness points, and is thus not reported here. The FDC, calculated over a sample of 4000 individuals generated using the Metropolis-Hastings technique, is shown in table 1. Each value has been obtained using one of the best local optima known to date (see section 4.4). The FDC value is approximately zero for the Das optimum. For the ABK optimum, the FDC value is near -0.15, the value identified by Jones as the threshold between difficult and straightforward problems. For all the other optima, the FDC is close to -0.10. So, the FDC does not provide information about problem difficulty.

Table 1: FDC values for the best known rules, over a sample generated with the Metropolis-Hastings sampling technique.

| Rules | GLK [38] | Davis [7] | Das [39] | ABK [7] | Coe1 [40] | Coe2 [40] |
|-------|----------|-----------|----------|---------|-----------|-----------|
| FDC   | -0.1072  | -0.0809   | -0.0112  | -0.1448 | -0.1076   | -0.1105   |
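Fitness-distance correlation, as used in table 1, is the Pearson correlation between fitness values and distances to a reference optimum; a minimal sketch (toy data, not the Majority-problem sample):

```python
import math

def fdc(fitnesses, distances):
    """Fitness-distance correlation: Pearson correlation between the
    fitness of sampled solutions and their distance to a known optimum."""
    n = len(fitnesses)
    mf = sum(fitnesses) / n
    md = sum(distances) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fitnesses, distances)) / n
    sf = math.sqrt(sum((f - mf) ** 2 for f in fitnesses) / n)
    sd = math.sqrt(sum((d - md) ** 2 for d in distances) / n)
    return cov / (sf * sd)

# Perfectly "straightforward" toy landscape: fitness decreases with distance
fit = [1.0, 0.8, 0.6, 0.4]
dst = [0, 1, 2, 3]
print(round(fdc(fit, dst), 3))  # → -1.0
```

Values near -1 indicate a landscape where improving fitness reliably leads toward the optimum; values near 0, as in table 1, carry little information about difficulty.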
Table 2

<image> <image>

Computational costs do not allow us to analyze many neutral networks. In this section we analyze two important large neutral networks (NN). A large number of CAs solve the majority density problem on only half of the ICs, because they converge nearly always to the final configuration $(0)^N$ or $(1)^N$ and thus have performance about 0.5. Mitchell [5] calls these "default strategies" and notices that they are the first stage in the evolution of the population before jumping to higher performance values associated with "block-expanding" strategies (see section 2.3). We will study this large NN, denoted $NN_{0.5}$, around standard performance 0.5 to understand the link between NN properties and GA evolution. The other NN, denoted $NN_{0.76}$, is the NN around fitness 0.7645 which contains one neighbor of a CA found by Mitchell et al. The description of this "high" NN could give clues as to how to "escape" from an NN toward even higher fitness values. In our experiments, we perform 5 neutral walks on $NN_{0.5}$ and 19 on $NN_{0.76}$. Each neutral walk has the same starting point on each NN. The solution with performance 0.5 is a randomly chosen solution and the solution with performance
<image> <image>

The innovation rate and the number of new better fitnesses found along the longest neutral random walk for each NN are given in figure 8. The majority of new fitness values found along a random walk are deleterious; very few solutions are fitter. This study gives us a better description of the neutrality of the Majority fitness landscape, which has important consequences for metaheuristic design. The neutral degree is high. Therefore, the selection operator should take into account the case of equality of fitness values. Likewise, the mutation rate and population size should fit this neutral degree in order to find the rare good solutions outside the NN [42]. For two potential solutions x and y on an NN, the probability p that at least one solution escapes from the NN is $P(x \notin NN \cup y \notin NN) =$