id: string, length 9-10
submitter: string, length 5-47
authors: string, length 5-1.72k
title: string, length 11-234
comments: string, length 1-491
journal-ref: string, length 4-396
doi: string, length 13-97
report-no: string, length 4-138
categories: string, 1 value
license: string, 9 values
abstract: string, length 29-3.66k
versions: list, length 1-21
update_date: int64, range 1,180B-1,718B
authors_parsed: sequence, length 1-98
cs/0405009
Ajith Abraham
Ajith Abraham
Intelligent Systems: Architectures and Perspectives
null
Recent Advances in Intelligent Paradigms and Applications, Abraham A., Jain L. and Kacprzyk J. (Eds.), Studies in Fuzziness and Soft Computing, Springer Verlag Germany, ISBN 3790815381, Chapter 1, pp. 1-35, 2002
null
null
cs.AI
null
The integration of different learning and adaptation techniques to overcome individual limitations and achieve synergetic effects through the hybridization or fusion of these techniques has, in recent years, contributed to a large number of new intelligent system designs. Computational intelligence is an innovative framework for constructing intelligent hybrid architectures involving Neural Networks (NN), Fuzzy Inference Systems (FIS), Probabilistic Reasoning (PR) and derivative-free optimization techniques such as Evolutionary Computation (EC). Most of these hybridization approaches, however, follow an ad hoc design methodology, justified by success in certain application domains. Due to the lack of a common framework, it often remains difficult to compare the various hybrid systems conceptually and to evaluate their performance comparatively. This chapter introduces the different generic architectures for integrating intelligent systems. The design aspects and perspectives of different hybrid architectures such as NN-FIS, EC-FIS, EC-NN, FIS-PR and NN-FIS-EC systems are presented. Some conclusions are provided towards the end.
[ { "version": "v1", "created": "Tue, 4 May 2004 23:48:39 GMT" } ]
1,179,878,400,000
[ [ "Abraham", "Ajith", "" ] ]
cs/0405010
Ajith Abraham
Ajith Abraham and Baikunth Nath
A Neuro-Fuzzy Approach for Modelling Electricity Demand in Victoria
null
Applied Soft Computing Journal, Elsevier Science, Volume 1&2, pp. 127-138, 2001
null
null
cs.AI
null
Neuro-fuzzy systems have attracted the growing interest of researchers in various scientific and engineering areas due to the increasing need for intelligent systems. This paper evaluates the use of two popular soft computing techniques and a conventional statistical approach based on the Box-Jenkins autoregressive integrated moving average (ARIMA) model to predict electricity demand in the State of Victoria, Australia. The soft computing methods considered are an evolving fuzzy neural network (EFuNN) and an artificial neural network (ANN) trained using the scaled conjugate gradient (CGA) and backpropagation (BP) algorithms. The forecast accuracy is compared with the forecasts used by the Victorian Power Exchange (VPX) and the actual energy demand. For evaluation, we considered load demand patterns for 10 consecutive months, sampled every 30 minutes, for training the different prediction models. Test results show that the neuro-fuzzy system performed better than the neural networks, the ARIMA model and the VPX forecasts.
[ { "version": "v1", "created": "Wed, 5 May 2004 00:27:53 GMT" } ]
1,179,878,400,000
[ [ "Abraham", "Ajith", "" ], [ "Nath", "Baikunth", "" ] ]
cs/0405011
Ajith Abraham
Ajith Abraham
Neuro Fuzzy Systems: State-of-the-Art Modeling Techniques
null
Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, Lecture Notes in Computer Science. Volume. 2084, Springer Verlag Germany, Jose Mira and Alberto Prieto (Eds.), ISBN 3540422358, Spain, pp. 269-276, 2001
null
null
cs.AI
null
The fusion of Artificial Neural Networks (ANN) and Fuzzy Inference Systems (FIS) has attracted the growing interest of researchers in various scientific and engineering areas due to the growing need for adaptive intelligent systems to solve real-world problems. An ANN learns from scratch by adjusting the interconnections between layers. An FIS is a popular computing framework based on the concepts of fuzzy set theory, fuzzy if-then rules, and fuzzy reasoning. The advantages of a combination of ANN and FIS are obvious. There are several approaches to integrating ANN and FIS, and very often the choice depends on the application. We broadly classify the integration of ANN and FIS into three categories, namely the concurrent model, the cooperative model and the fully fused model. This paper starts with a discussion of the features of each model and generalizes the advantages and deficiencies of each. We further focus the review on the different types of fused neuro-fuzzy systems, citing the advantages and disadvantages of each model.
[ { "version": "v1", "created": "Wed, 5 May 2004 00:32:52 GMT" } ]
1,179,878,400,000
[ [ "Abraham", "Ajith", "" ] ]
cs/0405012
Ajith Abraham
Ajith Abraham & Dan Steinberg
Is Neural Network a Reliable Forecaster on Earth? A MARS Query!
null
Bio-Inspired Applications of Connectionism, Lecture Notes in Computer Science. Volume. 2085, Springer Verlag Germany, Jose Mira and Alberto Prieto (Eds.), ISBN 3540422374, Spain, pp.679-686, 2001
null
null
cs.AI
null
Long-term rainfall prediction is a challenging task, especially in the modern world where we are facing the major environmental problem of global warming. In general, climate and rainfall are highly non-linear phenomena in nature, exhibiting what is known as the butterfly effect. While some regions of the world are noticing a systematic decrease in annual rainfall, others notice increases in flooding and severe storms. The global nature of this phenomenon is very complicated and requires sophisticated computer modeling and simulation to predict accurately. In this paper, we report a performance analysis of Multivariate Adaptive Regression Splines (MARS) and artificial neural networks for one-month-ahead prediction of rainfall. To evaluate the prediction efficiency, we made use of 87 years of rainfall data for Kerala state, the southern part of the Indian peninsula, situated at latitude-longitude (8°29'N, 76°57'E). We used an artificial neural network trained using the scaled conjugate gradient algorithm. The neural network and MARS were trained with 40 years of rainfall data. For performance evaluation, network-predicted outputs were compared with the actual rainfall data. Simulation results reveal that MARS is a good forecasting tool and performed better than the considered neural network.
[ { "version": "v1", "created": "Wed, 5 May 2004 00:36:17 GMT" } ]
1,179,878,400,000
[ [ "Abraham", "Ajith", "" ], [ "Steinberg", "Dan", "" ] ]
cs/0405013
Ajith Abraham
Golam Sorwar and Ajith Abraham
DCT Based Texture Classification Using Soft Computing Approach
null
Malaysian Journal of Computer Science, 2004 (forthcoming)
null
null
cs.AI
null
Classification of texture patterns is one of the most important problems in pattern recognition. In this paper, we present a classification method based on the Discrete Cosine Transform (DCT) coefficients of texture images. As the DCT works on gray-level images, the color scheme of each image is transformed into gray levels. For classifying the images using DCT coefficients we used two popular soft computing techniques, namely neurocomputing and neuro-fuzzy computing. We used a feedforward neural network trained using backpropagation learning and an evolving fuzzy neural network to classify the textures. The soft computing models were trained using 80% of the texture data and the remainder was used for testing and validation purposes. A performance comparison was made among the soft computing models for the texture classification problem. We also analyzed the effects of prolonged training of neural networks. It is observed that the proposed neuro-fuzzy model performed better than the neural network.
[ { "version": "v1", "created": "Wed, 5 May 2004 00:44:12 GMT" } ]
1,179,878,400,000
[ [ "Sorwar", "Golam", "" ], [ "Abraham", "Ajith", "" ] ]
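The abstract above describes feeding DCT coefficients of gray-level texture images into a classifier. As an illustrative sketch only (not the paper's actual pipeline; the function names, 8x8 block size and the number of retained coefficients are assumptions), block-wise low-frequency DCT features might be computed like this:

```python
import numpy as np

def dct2(block):
    # 2-D DCT-II computed directly from the definition (small blocks only)
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return C @ block @ C.T

def dct_features(gray, size=8, keep=4):
    # keep the low-frequency (top-left) DCT coefficients of each
    # size x size block and flatten them into one feature vector
    feats = []
    for i in range(0, gray.shape[0] - size + 1, size):
        for j in range(0, gray.shape[1] - size + 1, size):
            coeffs = dct2(gray[i:i + size, j:j + size].astype(float))
            feats.append(coeffs[:keep, :keep].ravel())
    return np.concatenate(feats)
```

Low-frequency coefficients are kept because they carry most of the energy of natural textures; a vector like this would then be the input to the neural or neuro-fuzzy classifier.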
cs/0405014
Ajith Abraham
Andy AuYeung and Ajith Abraham
Estimating Genome Reversal Distance by Genetic Algorithm
null
2003 IEEE Congress on Evolutionary Computation (CEC2003), Australia, IEEE Press, ISBN 0780378040, pp. 1157-1161, 2003
null
null
cs.AI
null
Sorting by reversals is an important problem in inferring the evolutionary relationship between two genomes. The problem of sorting an unsigned permutation has been proven to be NP-hard, and the best guaranteed error bound is achieved by a 3/2-approximation algorithm. The problem of sorting a signed permutation, however, can be solved easily: fast algorithms have been developed both for finding the sorting sequence and for finding the reversal distance of a signed permutation. In this paper, we present a way to view the problem of sorting an unsigned permutation as one of sorting a signed permutation; the problem can then be seen as searching for an optimal signed permutation among all 2^n corresponding signed permutations. We use a genetic algorithm to conduct the search. Our experimental results show that the proposed method outperforms the 3/2-approximation algorithm.
[ { "version": "v1", "created": "Wed, 5 May 2004 00:57:34 GMT" } ]
1,179,878,400,000
[ [ "AuYeung", "Andy", "" ], [ "Abraham", "Ajith", "" ] ]
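The record above frames unsigned sorting-by-reversals as a search over the 2^n sign assignments of the permutation. A minimal sketch of that idea follows; note that the exact signed reversal distance (Hannenhalli-Pevzner) is replaced here by a simple breakpoint count as an assumed surrogate fitness, and all names and GA parameters are illustrative:

```python
import random

def breakpoints(signed):
    # breakpoint count of a signed permutation, extended with 0 and n+1;
    # a cheap stand-in for the exact reversal distance
    ext = [0] + list(signed) + [len(signed) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if b - a != 1)

def ga_best_signing(perm, pop=30, gens=50, seed=0):
    rng = random.Random(seed)
    n = len(perm)

    def fitness(signs):
        return breakpoints([s * p for s, p in zip(signs, perm)])

    population = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]          # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)                  # point mutation: flip a sign
            child[i] = -child[i]
            children.append(child)
        population = parents + children
    best = min(population, key=fitness)
    return best, fitness(best)
```

A fuller implementation would evaluate candidates with the true signed reversal distance, which is what makes the signed relaxation attractive in the first place.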
cs/0405016
Ajith Abraham
Srinivas Mukkamala, Andrew H. Sung, Ajith Abraham and Vitorino Ramos
Intrusion Detection Systems Using Adaptive Regression Splines
null
6th International Conference on Enterprise Information Systems, ICEIS'04, Portugal, I. Seruca, J. Filipe, S. Hammoudi and J. Cordeiro (Eds.), ISBN 972-8865-00-7, Vol. 3, pp. 26-33, 2004
null
null
cs.AI
null
The past few years have witnessed a growing recognition of intelligent techniques for the construction of efficient and reliable intrusion detection systems. Due to increasing incidents of cyber attacks, building effective intrusion detection systems (IDS) is essential for protecting information systems security, yet it remains an elusive goal and a great challenge. In this paper, we report a performance analysis of Multivariate Adaptive Regression Splines (MARS), neural networks and support vector machines. The MARS procedure builds flexible regression models by fitting separate splines to distinct intervals of the predictor variables. A brief comparison of different neural network learning algorithms is also given.
[ { "version": "v1", "created": "Wed, 5 May 2004 02:22:16 GMT" } ]
1,179,878,400,000
[ [ "Mukkamala", "Srinivas", "" ], [ "Sung", "Andrew H.", "" ], [ "Abraham", "Ajith", "" ], [ "Ramos", "Vitorino", "" ] ]
cs/0405017
Ajith Abraham
Marcin Paprzycki, Ajith Abraham and Ruiyuan Guo
Data Mining Approach for Analyzing Call Center Performance
null
The 17th International Conference on Industrial & Engineering Applications of Artificial Intelligence and Expert Systems, Canada, Springer Verlag, Germany, 2004 (forthcoming)
null
null
cs.AI
null
The aim of our research was to apply well-known data mining techniques (such as linear neural networks, multi-layered perceptrons, probabilistic neural networks, classification and regression trees, support vector machines and finally a hybrid decision tree neural network approach) to the problem of predicting the quality of service in call centers, based on performance data actually collected in a call center of a large insurance company. Our aim was two-fold. First, to compare the performance of models built using the above-mentioned techniques and, second, to analyze the characteristics of the input sensitivity in order to better understand the relationship between the performance evaluation process and the actual performance, and in this way help improve the performance of call centers. In this paper we summarize our findings.
[ { "version": "v1", "created": "Wed, 5 May 2004 02:27:43 GMT" } ]
1,179,878,400,000
[ [ "Paprzycki", "Marcin", "" ], [ "Abraham", "Ajith", "" ], [ "Guo", "Ruiyuan", "" ] ]
cs/0405018
Ajith Abraham
Ajith Abraham, Ninan Sajith Philip and P. Saratchandran
Modeling Chaotic Behavior of Stock Indices Using Intelligent Paradigms
null
International Journal of Neural, Parallel & Scientific Computations, USA, Volume 11, Issue (1&2), pp. 143-160, 2003
null
null
cs.AI
null
The use of intelligent systems for stock market predictions has been widely established. In this paper, we investigate how the seemingly chaotic behavior of stock markets could be well represented using several connectionist paradigms and soft computing techniques. To demonstrate the different techniques, we considered the Nasdaq-100 index of the Nasdaq Stock Market and the S&P CNX NIFTY stock index. We analyzed seven years of Nasdaq-100 main index values and four years of NIFTY index values. This paper investigates the development of a reliable and efficient technique to model the seemingly chaotic behavior of stock markets. We considered an artificial neural network trained using the Levenberg-Marquardt algorithm, a Support Vector Machine (SVM), a Takagi-Sugeno neuro-fuzzy model and a Difference Boosting Neural Network (DBNN). This paper briefly explains how the different connectionist paradigms could be formulated using different learning methods and then investigates whether they can provide the required level of performance, sufficiently good and robust to provide a reliable forecast model for stock market indices. Experimental results reveal that all the connectionist paradigms considered could represent the behavior of the stock indices very accurately.
[ { "version": "v1", "created": "Wed, 5 May 2004 02:38:25 GMT" } ]
1,179,878,400,000
[ [ "Abraham", "Ajith", "" ], [ "Philip", "Ninan Sajith", "" ], [ "Saratchandran", "P.", "" ] ]
cs/0405019
Ajith Abraham
Sonja Petrovic-Lazarevic and Ajith Abraham
Hybrid Fuzzy-Linear Programming Approach for Multi Criteria Decision Making Problems
null
International Journal of Neural, Parallel & Scientific Computations, USA, Volume 11, Issues (1&2), pp. 53-68, 2003
null
null
cs.AI
null
The purpose of this paper is to point to the usefulness of applying a linear mathematical formulation of fuzzy multiple criteria objective decision methods in organising business activities. In this respect, fuzzy parameters of linear programming are modelled by preference-based membership functions. This paper begins with an introduction and some related research, followed by fundamentals of fuzzy set theory and technical concepts of fuzzy multiple objective decision models. Further, a real case study of a manufacturing plant and the implementation of the proposed technique are presented. Empirical results clearly show the superiority of the fuzzy technique in optimising individual objective functions when compared to a non-fuzzy approach. Furthermore, for the problem considered, the optimal solution helps to infer that incorporating fuzziness in a linear programming model, either in the constraints alone or in both the objective functions and the constraints, provides a similar (or even better) level of satisfaction in the obtained results compared to non-fuzzy linear programming.
[ { "version": "v1", "created": "Wed, 5 May 2004 02:44:41 GMT" } ]
1,179,878,400,000
[ [ "Petrovic-Lazarevic", "Sonja", "" ], [ "Abraham", "Ajith", "" ] ]
cs/0405024
Ajith Abraham
Ajith Abraham
Meta-Learning Evolutionary Artificial Neural Networks
null
Neurocomputing Journal, Elsevier Science, Netherlands, Vol. 56c, pp. 1-38, 2004
null
null
cs.AI
null
In this paper, we present MLEANN (Meta-Learning Evolutionary Artificial Neural Network), an automatic computational framework for the adaptive optimization of artificial neural networks, wherein the neural network architecture, activation function, connection weights, learning algorithm and its parameters are adapted according to the problem. We explored the performance of MLEANN and conventionally designed artificial neural networks on function approximation problems. To evaluate the comparative performance, we used three different well-known chaotic time series. We also present the state-of-the-art popular neural network learning algorithms and some experimental results related to convergence speed and generalization performance. We explored the performance of the backpropagation, conjugate gradient, quasi-Newton and Levenberg-Marquardt algorithms on the three chaotic time series. Performances of the different learning algorithms were evaluated as the activation functions and architecture were changed. We further present the theoretical background, algorithm and design strategy, and demonstrate how effective the proposed MLEANN framework is for designing a neural network that is smaller, faster and has better generalization performance.
[ { "version": "v1", "created": "Thu, 6 May 2004 13:44:20 GMT" } ]
1,179,878,400,000
[ [ "Abraham", "Ajith", "" ] ]
cs/0405025
Ajith Abraham
Andy Auyeung and Ajith Abraham
The Largest Compatible Subset Problem for Phylogenetic Data
null
Genetic and Evolutionary Computation 2004 Conference (GECCO-2004), Bird-of-a-feather Workshop On Application of Hybrid Evolutionary Algorithms to Complex Optimization Problems, Springer Verlag Germany, 2004 (forthcoming)
null
null
cs.AI
null
Phylogenetic tree construction is the task of inferring the evolutionary relationship between species from experimental data. However, the experimental data are often imperfect and conflict with each other. Therefore, it is important to extract the motif from the imperfect data. The largest compatible subset problem is that, given a set of experimental data, we want to discard the minimum amount such that the remainder is compatible. The largest compatible subset problem can be viewed as the vertex cover problem in graph theory, which has been proven to be NP-hard. In this paper, we propose a hybrid Evolutionary Computing (EC) method for this problem. The proposed method combines the EC approach and the algorithmic approach for specially structured graphs. As a result, the complexity of the problem is dramatically reduced. Experiments were performed on randomly generated graphs with different edge densities. The vertex covers produced by the proposed method were then compared to the vertex covers produced by a 2-approximation algorithm. The experimental results showed that the proposed method consistently outperformed the classical 2-approximation algorithm. Furthermore, a significant improvement was found when the graph density was small.
[ { "version": "v1", "created": "Thu, 6 May 2004 13:52:23 GMT" } ]
1,179,878,400,000
[ [ "Auyeung", "Andy", "" ], [ "Abraham", "Ajith", "" ] ]
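The 2-approximation baseline mentioned in the abstract above is a standard algorithm. One common variant, based on greedily building a maximal matching and taking both endpoints of each matched edge, can be sketched as follows (the function name and example graph are illustrative):

```python
def vertex_cover_2approx(edges):
    # repeatedly pick an edge not yet covered and add BOTH endpoints;
    # the picked edges form a matching, so the cover is at most twice
    # the size of a minimum vertex cover
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# example: a 4-cycle, whose minimum vertex cover has size 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = vertex_cover_2approx(edges)
```

The guarantee follows because any cover must contain at least one endpoint of every matched edge, while this algorithm takes two.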
cs/0405026
Ajith Abraham
Cong Tran, Ajith Abraham and Lakhmi Jain
A Concurrent Fuzzy-Neural Network Approach for Decision Support Systems
null
The IEEE International Conference on Fuzzy Systems, FUZZ-IEEE'03, IEEE Press, ISBN 0780378113, pp. 1092-1097, 2003
10.1109/FUZZ.2003.1206584
null
cs.AI
null
Decision-making is a process of choosing among alternative courses of action for solving complicated problems where multi-criteria objectives are involved. The past few years have witnessed a growing recognition of Soft Computing technologies that underlie the conception, design and utilization of intelligent systems. Several works have been done in which engineers and scientists applied intelligent techniques and heuristics to obtain optimal decisions from imprecise information. In this paper, we present a concurrent fuzzy-neural network approach combining unsupervised and supervised learning techniques to develop the Tactical Air Combat Decision Support System (TACDSS). Experimental results clearly demonstrate the efficiency of the proposed technique.
[ { "version": "v1", "created": "Thu, 6 May 2004 13:58:41 GMT" } ]
1,479,340,800,000
[ [ "Tran", "Cong", "" ], [ "Abraham", "Ajith", "" ], [ "Jain", "Lakhmi", "" ] ]
cs/0405028
Ajith Abraham
Ajith Abraham
Analysis of Hybrid Soft and Hard Computing Techniques for Forex Monitoring Systems
null
IEEE International Conference on Fuzzy Systems (IEEE FUZZ'02), 2002 IEEE World Congress on Computational Intelligence, Hawaii, ISBN 0780372808, IEEE Press pp. 1616 -1622, 2002
10.1109/FUZZ.2002.1006749
null
cs.AI
null
In a universe with a single currency, there would be no foreign exchange market, no foreign exchange rates, and no foreign exchange. Over the past twenty-five years, the way the market has performed those tasks has changed enormously. The need for intelligent monitoring systems has become a necessity to keep track of the complex forex market. The vast currency market is a foreign concept to the average individual. However, once it is broken down into simple terms, the average individual can begin to understand the foreign exchange market and use it as a financial instrument for future investing. In this paper, we attempt to compare the performance of hybrid soft computing and hard computing techniques to predict the average monthly forex rates one month ahead. The soft computing models considered are a neural network trained by the scaled conjugate gradient algorithm and a neuro-fuzzy model implementing a Takagi-Sugeno fuzzy inference system. We also considered Multivariate Adaptive Regression Splines (MARS), Classification and Regression Trees (CART) and a hybrid CART-MARS technique. We considered the exchange rates of the Australian dollar with respect to the US dollar, Singapore dollar, New Zealand dollar, Japanese yen and United Kingdom pound. The models were trained using 70% of the data and the remainder was used for testing and validation purposes. It is observed that the proposed hybrid models could predict the forex rates more accurately than any of the techniques applied individually. Empirical results also reveal that the hybrid hard computing approach improved on some of our previous work using a neuro-fuzzy approach.
[ { "version": "v1", "created": "Fri, 7 May 2004 00:10:07 GMT" } ]
1,479,340,800,000
[ [ "Abraham", "Ajith", "" ] ]
cs/0405030
Ajith Abraham
Ajith Abraham
Business Intelligence from Web Usage Mining
null
Journal of Information & Knowledge Management (JIKM), World Scientific Publishing Co., Singapore, Vol. 2, No. 4, pp. 375-390, 2003
null
null
cs.AI
null
The rapid e-commerce growth has made both the business community and customers face a new situation. Due to intense competition on the one hand, and the customers' option to choose from several alternatives on the other, the business community has realized the necessity of intelligent marketing strategies and relationship management. Web usage mining attempts to discover useful knowledge from the secondary data obtained from the interactions of the users with the Web. Web usage mining has become very critical for effective Web site management, creating adaptive Web sites, business and support services, personalization, network traffic flow analysis and so on. In this paper, we present the important concepts of Web usage mining and its various practical applications. We further present a novel approach, 'intelligent-miner' (i-Miner), to optimize the concurrent architecture of a fuzzy clustering algorithm (to discover web data clusters) and a fuzzy inference system to analyze the Web site visitor trends. A hybrid evolutionary fuzzy clustering algorithm is proposed in this paper to optimally segregate similar user interests. The clustered data is then used to analyze the trends using a Takagi-Sugeno fuzzy inference system learned using a combination of evolutionary algorithm and neural network learning. The proposed approach is compared with self-organizing maps (to discover patterns) and several function approximation techniques like neural networks, linear genetic programming and a Takagi-Sugeno fuzzy inference system (to analyze the clusters). The results are graphically illustrated and the practical significance is discussed in detail. Empirical results clearly show that the proposed Web usage-mining framework is efficient.
[ { "version": "v1", "created": "Thu, 6 May 2004 23:54:39 GMT" } ]
1,179,878,400,000
[ [ "Abraham", "Ajith", "" ] ]
cs/0405031
Ajith Abraham
Cong Tran, Lakhmi Jain, Ajith Abraham
Adaptation of Mamdani Fuzzy Inference System Using Neuro - Genetic Approach for Tactical Air Combat Decision Support System
null
15th Australian Joint Conference on Artificial Intelligence (AI'02) Australia, LNAI 2557, Springer Verlag, Germany, pp. 672-679, 2002
null
null
cs.AI
null
Normally a decision support system is built to solve problems where multi-criteria decisions are involved. The knowledge base is the vital part of the decision support system, containing the information or data that is used in the decision-making process. This is the field where engineers and scientists have applied several intelligent techniques and heuristics to obtain optimal decisions from imprecise information. In this paper, we present a hybrid neuro-genetic learning approach for the adaptation of a Mamdani fuzzy inference system for the Tactical Air Combat Decision Support System (TACDSS). Some simulation results demonstrating the differences between the learning techniques are also provided.
[ { "version": "v1", "created": "Thu, 6 May 2004 23:58:46 GMT" } ]
1,179,878,400,000
[ [ "Tran", "Cong", "" ], [ "Jain", "Lakhmi", "" ], [ "Abraham", "Ajith", "" ] ]
cs/0405032
Ajith Abraham
Ajith Abraham
EvoNF: A Framework for Optimization of Fuzzy Inference Systems Using Neural Network Learning and Evolutionary Computation
null
The 17th IEEE International Symposium on Intelligent Control, ISIC'02, IEEE Press, ISBN 0780376218, pp 327-332, 2002
10.1109/ISIC.2002.1157784
null
cs.AI
null
Several adaptation techniques have been investigated to optimize fuzzy inference systems. Neural network learning algorithms have been used to determine the parameters of fuzzy inference systems; such models are often called integrated neuro-fuzzy models. In an integrated neuro-fuzzy model there is no guarantee that the neural network learning algorithm will converge and that the tuning of the fuzzy inference system will be successful. The success of evolutionary search procedures for the optimization of fuzzy inference systems is well proven and established in many application areas. In this paper, we explore how the optimization of fuzzy inference systems could be further improved using a meta-heuristic approach combining neural network learning and evolutionary computation. The proposed technique could be considered a methodology to integrate neural networks, fuzzy inference systems and evolutionary search procedures. We present the theoretical framework and some experimental results to demonstrate the efficiency of the proposed technique.
[ { "version": "v1", "created": "Fri, 7 May 2004 00:01:54 GMT" } ]
1,479,168,000,000
[ [ "Abraham", "Ajith", "" ] ]
cs/0405033
Ajith Abraham
Ajith Abraham
Optimization of Evolutionary Neural Networks Using Hybrid Learning Algorithms
null
IEEE International Joint Conference on Neural Networks (IJCNN'02), 2002 IEEE World Congress on Computational Intelligence, Hawaii, ISBN 0780372786, IEEE Press, Volume 3, pp. 2797-2802, 2002
10.1109/IJCNN.2002.1007591
null
cs.AI
null
Evolutionary artificial neural networks (EANNs) refer to a special class of artificial neural networks (ANNs) in which evolution is another fundamental form of adaptation in addition to learning. Evolutionary algorithms are used to adapt the connection weights, network architecture and learning algorithms according to the problem environment. Even though evolutionary algorithms are well known as efficient global search algorithms, very often they miss the best local solutions in the complex solution space. In this paper, we propose a hybrid meta-heuristic learning approach combining evolutionary learning and local search methods (using first- and second-order error information) to improve learning and achieve faster convergence than a direct evolutionary approach. The proposed technique is tested on three different chaotic time series and the test results are compared with some popular neuro-fuzzy systems and a recently developed cutting angle method of global optimization. Empirical results reveal that the proposed technique is efficient in spite of the computational complexity.
[ { "version": "v1", "created": "Fri, 7 May 2004 00:08:16 GMT" } ]
1,479,340,800,000
[ [ "Abraham", "Ajith", "" ] ]
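The abstract above describes the memetic pattern of wrapping local search inside an evolutionary loop. A toy sketch of that pattern on a one-dimensional function follows (the objective, rates and population sizes are illustrative assumptions, not the paper's actual setup, which applies second-order methods to network weights):

```python
import random

def memetic_minimize(f, grad, lo=-5.0, hi=5.0, pop=20, gens=30, lr=0.1, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        # evolutionary step: keep the better half, mutate copies to refill
        xs.sort(key=f)
        half = xs[: pop // 2]
        xs = half + [x + rng.gauss(0, 0.5) for x in half]
        # local search step: one gradient descent update per individual,
        # the "first-order error information" of the hybrid
        xs = [x - lr * grad(x) for x in xs]
    return min(xs, key=f)

# minimize (x - 2)^2; the hybrid should settle near x = 2
best = memetic_minimize(lambda x: (x - 2.0) ** 2, lambda x: 2 * (x - 2.0))
```

The evolutionary half explores globally while the gradient step polishes each candidate, which is the division of labor the paper exploits at network-weight scale.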
cs/0405049
Ajith Abraham
Ron Edwards, Ajith Abraham and Sonja Petrovic-Lazarevic
Export Behaviour Modeling Using EvoNF Approach
null
The International Conference on Computational Science 2003 (ICCS 2003), Springer Verlag, Lecture Notes in Computer Science Volume 2660, Sloot P.M.A. et al (Eds.), pp. 169-178, 2003
null
null
cs.AI
null
The academic literature suggests that the extent of exporting by multinational corporation subsidiaries (MCS) depends on their product manufactured, resources, tax protection, customers and markets, involvement strategy, financial independence and suppliers' relationship with a multinational corporation (MNC). The aim of this paper is to model the complex export pattern behaviour using a Takagi-Sugeno fuzzy inference system in order to determine the actual volume of MCS export output (sales exported). The proposed fuzzy inference system is optimised by using neural network learning and evolutionary computation. Empirical results clearly show that the proposed approach could model the export behaviour reasonably well compared to a direct neural network approach.
[ { "version": "v1", "created": "Sun, 16 May 2004 03:24:55 GMT" } ]
1,179,878,400,000
[ [ "Edwards", "Ron", "" ], [ "Abraham", "Ajith", "" ], [ "Petrovic-Lazarevic", "Sonja", "" ] ]
cs/0405050
Ajith Abraham
Miao M. Chong, Ajith Abraham, Marcin Paprzycki
Traffic Accident Analysis Using Decision Trees and Neural Networks
null
IADIS International Conference on Applied Computing, Portugal, IADIS Press, Pedro Isaias et al. (Eds.), ISBN: 9729894736, Volume 2, pp. 39-42, 2004
null
null
cs.AI
null
The costs of fatalities and injuries due to traffic accidents have a great impact on society. This paper presents our research on modeling the severity of injury resulting from traffic accidents using artificial neural networks and decision trees. We have applied them to an actual data set obtained from the National Automotive Sampling System (NASS) General Estimates System (GES). Experimental results reveal that in all the cases the decision tree outperforms the neural network. Our analysis also shows that the three most important factors in fatal injury are: driver's seat belt usage, light condition of the roadway, and driver's alcohol usage.
[ { "version": "v1", "created": "Sun, 16 May 2004 03:33:20 GMT" } ]
1,179,878,400,000
[ [ "Chong", "Miao M.", "" ], [ "Abraham", "Ajith", "" ], [ "Paprzycki", "Marcin", "" ] ]
cs/0405051
Ajith Abraham
Muhammad Riaz Khan and Ajith Abraham
Short Term Load Forecasting Models in Czech Republic Using Soft Computing Paradigms
null
International Journal of Knowledge-Based Intelligent Engineering Systems, IOS Press Netherlands, Volume 7, Number 4, pp. 172-179, 2003
null
null
cs.AI
null
This paper presents a comparative study of six soft computing models, namely multilayer perceptron networks, Elman recurrent neural network, radial basis function network, Hopfield model, fuzzy inference system and hybrid fuzzy neural network, for the hourly electricity demand forecast of the Czech Republic. The soft computing models were trained and tested using the actual hourly load data for seven years. A comparison of the proposed techniques is presented for predicting 2-day-ahead electricity demand. Simulation results indicate that the hybrid fuzzy neural network and radial basis function networks are the best candidates for the analysis and forecasting of electricity demand.
[ { "version": "v1", "created": "Sun, 16 May 2004 03:44:06 GMT" } ]
1,179,878,400,000
[ [ "Khan", "Muhammad Riaz", "" ], [ "Abraham", "Ajith", "" ] ]
cs/0405052
Ajith Abraham
Cong Tran, Ajith Abraham and Lakhmi Jain
Decision Support Systems Using Intelligent Paradigms
null
International Journal of American Romanian Academy of Arts and Sciences, 2004 (forthcoming)
null
null
cs.AI
null
Decision-making is a process of choosing among alternative courses of action for solving complicated problems where multi-criteria objectives are involved. The past few years have witnessed a growing recognition of Soft Computing (SC) technologies that underlie the conception, design and utilization of intelligent systems. In this paper, we present different SC paradigms involving an artificial neural network trained using the scaled conjugate gradient algorithm, two different fuzzy inference methods optimised using neural network learning/evolutionary algorithms and regression trees for developing intelligent decision support systems. We demonstrate the efficiency of the different algorithms by developing a decision support system for a Tactical Air Combat Environment (TACE). Some empirical comparisons between the different algorithms are also provided.
[ { "version": "v1", "created": "Sun, 16 May 2004 03:50:05 GMT" } ]
1,179,878,400,000
[ [ "Tran", "Cong", "" ], [ "Abraham", "Ajith", "" ], [ "Jain", "Lakhmi", "" ] ]
cs/0405071
Tuan Le Mr.
Le-Chi Tuan, Chitta Baral, and Tran Cao Son
Regression with respect to sensing actions and partial states
38 pages
null
null
null
cs.AI
null
In this paper, we present a state-based regression function for planning domains where an agent does not have complete information and may have sensing actions. We consider binary domains and employ the 0-approximation [Son & Baral 2001] to define the regression function. In binary domains, the use of 0-approximation means using 3-valued states. Although planning using this approach is incomplete with respect to the full semantics, we adopt it to have a lower complexity. We prove the soundness and completeness of our regression formulation with respect to the definition of progression. More specifically, we show that (i) a plan obtained through regression for a planning problem is indeed a progression solution of that planning problem, and that (ii) for each plan found through progression, using regression one obtains that plan or an equivalent one. We then develop a conditional planner that utilizes our regression function. We prove the soundness and completeness of our planning algorithm and present experimental results with respect to several well known planning problems in the literature.
[ { "version": "v1", "created": "Fri, 21 May 2004 12:43:19 GMT" } ]
1,179,878,400,000
[ [ "Tuan", "Le-Chi", "" ], [ "Baral", "Chitta", "" ], [ "Son", "Tran Cao", "" ] ]
cs/0405090
Jiang Qiu
Michael J. Maher
Propositional Defeasible Logic has Linear Complexity
Appeared in Theory and Practice of Logic Programming, vol. 1, no. 6, 2001
Theory and Practice of Logic Programming, vol. 1, no. 6, 2001
null
null
cs.AI
null
Defeasible logic is a rule-based nonmonotonic logic, with both strict and defeasible rules, and a priority relation on rules. We show that inference in the propositional form of the logic can be performed in linear time. This contrasts markedly with most other propositional nonmonotonic logics, in which inference is intractable.
[ { "version": "v1", "created": "Mon, 24 May 2004 15:45:59 GMT" } ]
1,254,182,400,000
[ [ "Maher", "Michael J.", "" ] ]
cs/0405106
Carlos Chesñevar
Carlos Iván Chesñevar and Guillermo Ricardo Simari and Alejandro Javier García
Pruning Search Space in Defeasible Argumentation
11 pages
Proc. of the Workshop on Advances and Trends in Search in Artificial Intelligence, pp.40-47. International Conf. of the Chilean Society in Computer Science, Santiago, Chile, 2000
null
null
cs.AI
null
Defeasible argumentation has experienced a considerable growth in AI in the last decade. Theoretical results have been combined with development of practical applications in AI & Law, Case-Based Reasoning and various knowledge-based systems. However, the dialectical process associated with inference is computationally expensive. This paper focuses on speeding up this inference process by pruning the involved search space. Our approach is twofold. On one hand, we identify distinguished literals for computing defeat. On the other hand, we restrict ourselves to a subset of all possible conflicting arguments by introducing dialectical constraints.
[ { "version": "v1", "created": "Thu, 27 May 2004 18:43:39 GMT" } ]
1,471,305,600,000
[ [ "Chesñevar", "Carlos Iván", "" ], [ "Simari", "Guillermo Ricardo", "" ], [ "García", "Alejandro Javier", "" ] ]
cs/0405113
Andrea Severe
Andrea Severe
A proposal to design expert system for the calculations in the domain of QFT
null
null
null
null
cs.AI
null
The main purposes of the paper are the following: 1) to show examples of calculations in the domain of QFT via the ``derivative rules'' of an expert system; 2) to consider the advantages and disadvantages of that technology of calculation; 3) to reflect on how one would develop new physical theories, what knowledge would be useful in such investigations, and how this problem can be connected with designing an expert system.
[ { "version": "v1", "created": "Mon, 31 May 2004 10:50:23 GMT" }, { "version": "v2", "created": "Tue, 1 Jun 2004 11:59:07 GMT" } ]
1,179,878,400,000
[ [ "Severe", "Andrea", "" ] ]
cs/0406038
Vladan Vuckovic V.
Vladan Vuckovic, Djordje Vidanovic
A New Approach to Draw Detection by Move Repetition in Computer Chess Programming
15 pages, 4 figures
null
null
null
cs.AI
null
We will try to tackle both the theoretical and practical aspects of a very important problem in chess programming, as stated in the title of this article: the issue of draw detection by move repetition. The standard approach that has so far been employed in most chess programs is based on utilising positional matrices in original and compressed format, as well as on the implementation of the so-called bitboard format. The new approach that we introduce is based on using variant strings generated by the search algorithm (searcher) during the tree expansion in decision making. We hope to prove that this approach is more efficient than the standard treatment of the issue, especially in positions with few pieces (endgames). To illustrate what we have in mind, a machine language routine that implements our theoretical assumptions is attached. The routine is part of the Axon chess program, developed by the authors. Axon, in its current incarnation, plays chess at master strength (ca. 2400-2450 Elo, based on both Axon vs. computer programs and Axon vs. human masters, in over 3000 games altogether).
[ { "version": "v1", "created": "Mon, 21 Jun 2004 13:42:03 GMT" } ]
1,179,878,400,000
[ [ "Vuckovic", "Vladan", "" ], [ "Vidanovic", "Djordje", "" ] ]
cs/0407008
Karthik Narayanaswami
S. Ravichandran and M.N. Karthik
Autogenic Training With Natural Language Processing Modules: A Recent Tool For Certain Neuro Cognitive Studies
2 Pages. Proceedings of 11th International Congress on Biological & Medical Engineering, Singapore (IEEE-EMBS & IFMBE endorsed)
null
null
null
cs.AI
null
Learning to respond to voice-text input involves the subject's ability to understand the phonetic and text-based contents and his/her ability to communicate based on his/her experience. The neuro-cognitive facility of the subject has to support two important domains in order to make the learning process complete. In many cases, though the understanding is complete, the response is partial. This is one valid reason why we need to support the information from the subject with scalable techniques such as Natural Language Processing (NLP) for abstraction of the contents from the output. This paper explores the feasibility of using NLP modules interlaced with neural networks to perform the required task in autogenic training related to medical applications.
[ { "version": "v1", "created": "Fri, 2 Jul 2004 20:15:02 GMT" } ]
1,179,878,400,000
[ [ "Ravichandran", "S.", "" ], [ "Karthik", "M. N.", "" ] ]
cs/0407037
Ambedkar Dukkipati
Ambedkar Dukkipati, M. Narasimha Murty and Shalabh Bhatnagar
Generalized Evolutionary Algorithm based on Tsallis Statistics
Submitted to Physical Review E, 5 pages, 6 figures
null
null
null
cs.AI
null
A generalized evolutionary algorithm based on the Tsallis canonical distribution is proposed. The algorithm uses the Tsallis generalized canonical distribution to weigh the configurations for `selection' instead of the Gibbs-Boltzmann distribution. Our simulation results show that, for an appropriate choice of the non-extensive index offered by Tsallis statistics, evolutionary algorithms based on this generalization outperform algorithms based on the Gibbs-Boltzmann distribution.
[ { "version": "v1", "created": "Fri, 16 Jul 2004 06:08:22 GMT" } ]
1,179,878,400,000
[ [ "Dukkipati", "Ambedkar", "" ], [ "Murty", "M. Narasimha", "" ], [ "Bhatnagar", "Shalabh", "" ] ]
cs/0407040
W. J. van Hoeve
W.J. van Hoeve and M. Milano
Decomposition Based Search - A theoretical and experimental evaluation
16 pages, 8 figures. LIA Technical Report LIA00203, University of Bologna, 2003
null
null
null
cs.AI
null
In this paper we present and evaluate a search strategy called Decomposition Based Search (DBS), which is based on two steps: subproblem generation and subproblem solution. The generation of subproblems is done through value ranking and domain splitting. Subdomains are explored so as to generate, according to the chosen heuristic, promising subproblems first. We show that two well known search strategies, Limited Discrepancy Search (LDS) and Iterative Broadening (IB), can be seen as special cases of DBS. First we present a tuning of DBS that visits the same search nodes as IB, but avoids restarts. Then we compare DBS and LDS, both theoretically and computationally, using the same heuristic. We prove that DBS has a higher probability of being successful than LDS on a comparable number of nodes, under realistic assumptions. Experiments on a constraint satisfaction problem and an optimization problem show that DBS is indeed very effective compared to LDS.
[ { "version": "v1", "created": "Fri, 16 Jul 2004 13:38:19 GMT" } ]
1,179,878,400,000
[ [ "van Hoeve", "W. J.", "" ], [ "Milano", "M.", "" ] ]
cs/0407042
W. J. van Hoeve
Willem Jan van Hoeve and Michela Milano
Postponing Branching Decisions
11 pages, 3 figures
null
null
null
cs.AI
null
Solution techniques for Constraint Satisfaction and Optimisation Problems often make use of backtrack search methods, exploiting variable and value ordering heuristics. In this paper, we propose and analyse a very simple method to apply when the value ordering heuristic produces ties: postponing the branching decision. To this end, we group together values in a tie, branch on this sub-domain, and defer the decision among them to lower levels of the search tree. We show theoretically and experimentally that this simple modification can dramatically improve the efficiency of the search strategy. Although similar methods may already have been applied in practice, to our knowledge no empirical or theoretical study has been proposed in the literature to identify when and to what extent this strategy should be used.
[ { "version": "v1", "created": "Fri, 16 Jul 2004 14:37:11 GMT" } ]
1,179,878,400,000
[ [ "van Hoeve", "Willem Jan", "" ], [ "Milano", "Michela", "" ] ]
cs/0407044
W. J. van Hoeve
M. Milano and W.J. van Hoeve
Reduced cost-based ranking for generating promising subproblems
15 pages, 1 figure. Accepted at CP 2002
null
null
null
cs.AI
null
In this paper, we propose an effective search procedure that interleaves two steps: subproblem generation and subproblem solution. We mainly focus on the first part. It consists of a ranking of variable domain values based on reduced costs. Exploiting the ranking, we generate, in a Limited Discrepancy Search tree, the most promising subproblems first. An interesting result is that reduced costs provide a very precise ranking that almost always allows finding the optimal solution in the first generated subproblem, even if its dimension is significantly smaller than that of the original problem. Concerning the proof of optimality, we exploit a way to increase the lower bound for subproblems at higher discrepancies. We show experimental results on the TSP and its time constrained variant to demonstrate the effectiveness of the proposed approach, though the technique can be generalized to other problems.
[ { "version": "v1", "created": "Fri, 16 Jul 2004 14:53:21 GMT" } ]
1,179,878,400,000
[ [ "Milano", "M.", "" ], [ "van Hoeve", "W. J.", "" ] ]
cs/0408010
Florentin Smarandache
Florentin Smarandache, Jean Dezert
A Simple Proportional Conflict Redistribution Rule
21 pages
International Journal of Applied Mathematics and Statistics, Vol. 3, No. J05, 1-36, 2005.
null
null
cs.AI
null
We propose a first alternative combination rule to the WAO (Weighted Average Operator) proposed recently by Josang, Daniel and Vannoorenberghe, called the Proportional Conflict Redistribution rule (denoted PCR1). PCR1 and WAO are particular cases of the WO (Weighted Operator), because the conflicting mass is redistributed with respect to some weighting factors. In this first PCR rule, the proportionalization is done for each non-empty set with respect to the non-zero sum of its corresponding column in the mass matrix, instead of its mass column average as in WAO, but the results are the same, as Ph. Smets has pointed out. We also extend WAO (which herein gives no solution) to the degenerate case when all column sums of all non-empty sets are zero; the conflicting mass is then transferred to the non-empty disjunctive form of all non-empty sets together, but if this disjunctive form happens to be empty, then one considers an open world (i.e. the frame of discernment might contain new hypotheses) and thus all conflicting mass is transferred to the empty set. In addition to WAO, we propose a general formula for PCR1 (which coincides with WAO in non-degenerate cases).
[ { "version": "v1", "created": "Tue, 3 Aug 2004 16:08:37 GMT" }, { "version": "v2", "created": "Sun, 15 Aug 2004 00:30:31 GMT" }, { "version": "v3", "created": "Wed, 18 Aug 2004 19:46:40 GMT" }, { "version": "v4", "created": "Fri, 20 Aug 2004 17:38:59 GMT" }, { "version": "v5", "created": "Sun, 19 Sep 2004 16:28:09 GMT" } ]
1,179,878,400,000
[ [ "Smarandache", "Florentin", "" ], [ "Dezert", "Jean", "" ] ]
cs/0408021
Florentin Smarandache
Florentin Smarandache, Jean Dezert
An Algorithm for Quasi-Associative and Quasi-Markovian Rules of Combination in Information Fusion
9 pages
International Journal of Applied Mathematics & Statistics, Vol. 22, No. S11 (Special Issue on Soft Computing), 33-42, 2011
null
null
cs.AI
null
In this paper we propose a simple algorithm for combining fusion rules, those rules which first apply the conjunctive rule and then transfer the conflicting mass to non-empty sets, in such a way that they gain the property of associativity and fulfill the Markovian requirement for dynamic fusion. A new rule, SDL-improved, is also presented.
[ { "version": "v1", "created": "Sun, 8 Aug 2004 19:41:23 GMT" }, { "version": "v2", "created": "Sat, 14 Aug 2004 16:59:51 GMT" } ]
1,284,422,400,000
[ [ "Smarandache", "Florentin", "" ], [ "Dezert", "Jean", "" ] ]
cs/0408044
Michael Thielscher
Michael Thielscher
FLUX: A Logic Programming Method for Reasoning Agents
null
null
null
null
cs.AI
null
FLUX is a programming method for the design of agents that reason logically about their actions and sensor information in the presence of incomplete knowledge. The core of FLUX is a system of Constraint Handling Rules, which enables agents to maintain an internal model of their environment by which they control their own behavior. The general action representation formalism of the fluent calculus provides the formal semantics for the constraint solver. FLUX exhibits excellent computational behavior due to both a carefully restricted expressiveness and the inference paradigm of progression.
[ { "version": "v1", "created": "Thu, 19 Aug 2004 14:47:51 GMT" } ]
1,179,878,400,000
[ [ "Thielscher", "Michael", "" ] ]
cs/0408055
Ambedkar Dukkipati
Ambedkar Dukkipati, M. Narasimha Murty and Shalabh Bhatnagar
Cauchy Annealing Schedule: An Annealing Schedule for Boltzmann Selection Scheme in Evolutionary Algorithms
null
Dukkipati, A., M. N. Murty, and S. Bhatnagar, 2004, in Proceedings of the Congress on Evolutionary Computation (CEC'2004), IEEE Press, pp. 55-62
10.1109/CEC.2004.1330837
null
cs.AI
null
Boltzmann selection is an important selection mechanism in evolutionary algorithms, as it has theoretical properties which help in theoretical analysis. However, Boltzmann selection is not used in practice because a good annealing schedule for the `inverse temperature' parameter is lacking. In this paper we propose a Cauchy annealing schedule for the Boltzmann selection scheme, based on the hypothesis that selection strength should increase as the evolutionary process goes on and that the distance between two selection strengths should decrease for the process to converge. To formalize these aspects, we develop a formalism for selection mechanisms using fitness distributions and give an appropriate measure of selection strength. We prove an important result, from which we derive the annealing schedule called the Cauchy annealing schedule. We demonstrate the novelty of the proposed annealing schedule using simulations in the framework of genetic algorithms.
[ { "version": "v1", "created": "Tue, 24 Aug 2004 11:21:06 GMT" } ]
1,479,168,000,000
[ [ "Dukkipati", "Ambedkar", "" ], [ "Murty", "M. Narasimha", "" ], [ "Bhatnagar", "Shalabh", "" ] ]
cs/0408064
Florentin Smarandache
Florentin Smarandache, Jean Dezert
Proportional Conflict Redistribution Rules for Information Fusion
41 pages
Proceedings of the 8th International Conference on Information Fusion, Philadelphia, 25-29 July, 2005; IEEE Catalog Number: 05EX1120C, ISBN: 0-7803-9287-6.
null
null
cs.AI
null
In this paper we propose five versions of a Proportional Conflict Redistribution rule (PCR) for information fusion, together with several examples. From PCR1 to PCR2, PCR3, PCR4 and PCR5, one increases the complexity of the rules and also the exactitude of the redistribution of conflicting masses. PCR1, restricted from the hyper-power set to the power set and without degenerate cases, gives the same result as the Weighted Average Operator (WAO) proposed recently by Jøsang, Daniel and Vannoorenberghe, but does not satisfy the neutrality property of the vacuous belief assignment. That is why improved PCR rules are proposed in this paper. PCR4 is an improvement of the minC and Dempster's rules. The PCR rules redistribute the conflicting mass, after the conjunctive rule has been applied, proportionally with some functions depending on the masses assigned to their corresponding columns in the mass matrix. There are infinitely many ways these functions (weighting factors) can be chosen, depending on the complexity one wants to deal with in specific applications and fusion systems. Any fusion combination rule is to some degree ad hoc.
[ { "version": "v1", "created": "Sat, 28 Aug 2004 03:08:39 GMT" }, { "version": "v2", "created": "Sat, 18 Dec 2004 21:23:53 GMT" }, { "version": "v3", "created": "Fri, 25 Mar 2005 15:34:43 GMT" } ]
1,472,601,600,000
[ [ "Smarandache", "Florentin", "" ], [ "Dezert", "Jean", "" ] ]
cs/0409007
Florentin Smarandache
Jean Dezert, Florentin Smarandache, Milan Daniel
The Generalized Pignistic Transformation
8 pages, 3 graphs, many tables. The Seventh International Conference on Information Fusion, Stockholm, Sweden, 28 June - 1 July 2004
Proceedings of the Seventh International Conference on Information Fusion, International Society for Information Fusion, Stockholm, Sweden, 384-391, 2004
null
null
cs.AI
null
This paper presents in detail the generalized pignistic transformation (GPT), succinctly developed in the Dezert-Smarandache Theory (DSmT) framework as a tool for the decision process. The GPT makes it possible to derive a subjective probability measure from any generalized basic belief assignment given by any corpus of evidence. We mainly focus our presentation on the 3D case and provide the complete result obtained by the GPT, together with its validation drawn from probability theory.
[ { "version": "v1", "created": "Mon, 6 Sep 2004 17:47:06 GMT" } ]
1,179,878,400,000
[ [ "Dezert", "Jean", "" ], [ "Smarandache", "Florentin", "" ], [ "Daniel", "Milan", "" ] ]
cs/0409040
Florentin Smarandache
Florentin Smarandache
Unification of Fusion Theories
14 pages
Presented at NATO Advanced Study Institute, Albena, Bulgaria, 16-27 May 2005. International Journal of Applied Mathematics & Statistics, Vol. 2, 1-14, 2004.
null
null
cs.AI
null
Since no fusion theory or rule fully satisfies all needed applications, the author proposes a Unification of Fusion Theories and a combination of fusion rules for solving problems/applications. For each particular application, one selects the most appropriate model, rule(s), and algorithm of implementation. We are working on the unification of the fusion theories and rules, which looks like a cooking recipe, or better, like a logical flowchart for a computer programmer, but we do not see another method to comprise/unify all things. The unification scenario presented herein, which is now in an incipient form, should periodically be updated to incorporate new discoveries from fusion and engineering research.
[ { "version": "v1", "created": "Thu, 23 Sep 2004 02:02:44 GMT" }, { "version": "v2", "created": "Fri, 24 Sep 2004 13:50:15 GMT" }, { "version": "v3", "created": "Fri, 29 Oct 2004 17:01:36 GMT" } ]
1,179,878,400,000
[ [ "Smarandache", "Florentin", "" ] ]
cs/0410014
Stefania Costantini
Stefania Costantini and Alessandro Provetti
Normal forms for Answer Sets Programming
15 pages, To appear in Theory and Practice of Logic Programming (TPLP)
null
null
null
cs.AI
null
Normal forms for logic programs under the stable/answer set semantics are introduced. We argue that these forms can simplify the study of program properties, mainly consistency. The first normal form, called the {\em kernel} of the program, is useful for studying the existence and number of answer sets. A kernel program is composed of the atoms which are undefined in the Well-founded semantics, which are those that directly affect the existence of answer sets. The bodies of rules are composed of negative literals only. Thus, the kernel form tends to be significantly more compact than other formulations. Also, it is possible to check the consistency of kernel programs in terms of colorings of the Extended Dependency Graph program representation which we previously developed. The second normal form is called the {\em 3-kernel}. A 3-kernel program is composed of the atoms which are undefined in the Well-founded semantics. Rules in 3-kernel programs have at most two conditions, and each rule either belongs to a cycle or defines a connection between cycles. 3-kernel programs may have positive conditions. The 3-kernel normal form is very useful for the static analysis of program consistency, i.e., the syntactic characterization of the existence of answer sets. This result can be obtained thanks to a novel graph-like representation of programs, called the Cycle Graph, which is presented in the companion article \cite{Cos04b}.
[ { "version": "v1", "created": "Wed, 6 Oct 2004 15:01:50 GMT" } ]
1,472,601,600,000
[ [ "Costantini", "Stefania", "" ], [ "Provetti", "Alessandro", "" ] ]
cs/0410033
Florentin Smarandache
Florentin Smarandache
An In-Depth Look at Information Fusion Rules & the Unification of Fusion Theories
27 pages. To be presented at NASA Langley Research Center (Hampton, Virginia), on November 5th, 2004
Partially published in Review of the Air Force Academy (The Scientific Informative Review), Brasov, No. 2, pp. 31-40, 2006.
null
null
cs.AI
null
This paper may look like a glossary of fusion rules; we also introduce new ones, presenting their formulas and examples: the Conjunctive, Disjunctive, Exclusive Disjunctive and Mixed Conjunctive-Disjunctive rules, the Conditional rule, Dempster's, Yager's, Smets' TBM rule, Dubois-Prade's, the Dezert-Smarandache classical and hybrid rules, Murphy's average rule, the Inagaki-Lefevre-Colot-Vannoorenberghe Unified Combination rules [and, as particular cases: Inagaki's parameterized rule, the Weighted Average Operator, minC (M. Daniel), and the new Proportional Conflict Redistribution rules (Smarandache-Dezert), among which PCR5 is the most exact way of redistributing the conflicting mass to non-empty sets following the path of the conjunctive rule], Zhang's Center Combination rule, Convolutive x-Averaging, the Consensus Operator (Josang), the Cautious Rule (Smets), the alpha-junction rules (Smets), etc., and three new T-norm & T-conorm rules adjusted from fuzzy and neutrosophic sets to information fusion (Tchamova-Smarandache). By introducing the degree of union and the degree of inclusion with respect to the cardinal of sets (not from the fuzzy set point of view), besides that of intersection, many fusion rules can be improved. There are corner cases where each rule might have difficulties working or may not give an expected result.
[ { "version": "v1", "created": "Thu, 14 Oct 2004 22:53:46 GMT" }, { "version": "v2", "created": "Wed, 27 Oct 2004 17:13:04 GMT" } ]
1,233,187,200,000
[ [ "Smarandache", "Florentin", "" ] ]
cs/0410049
Joseph Y. Halpern
Joseph Y. Halpern
Intransitivity and Vagueness
A preliminary version of this paper appears in Principles of Knowledge Representation and Reasoning: Proceedings of the Ninth International Conference (KR 2004)
null
null
null
cs.AI
null
There are many examples in the literature that suggest that indistinguishability is intransitive, despite the fact that the indistinguishability relation is typically taken to be an equivalence relation (and thus transitive). It is shown that if the uncertainty perception and the question of when an agent reports that two things are indistinguishable are both carefully modeled, the problems disappear, and indistinguishability can indeed be taken to be an equivalence relation. Moreover, this model also suggests a logic of vagueness that seems to solve many of the problems related to vagueness discussed in the philosophical literature. In particular, it is shown here how the logic can handle the sorites paradox.
[ { "version": "v1", "created": "Tue, 19 Oct 2004 17:31:11 GMT" } ]
1,179,878,400,000
[ [ "Halpern", "Joseph Y.", "" ] ]
cs/0410050
Joseph Y. Halpern
Joseph Y. Halpern
Sleeping Beauty Reconsidered: Conditioning and Reflection in Asynchronous Systems
A preliminary version of this paper appears in Principles of Knowledge Representation and Reasoning: Proceedings of the Ninth International Conference (KR 2004). This version will appear in Oxford Studies in Epistemology
null
null
null
cs.AI
null
A careful analysis of conditioning in the Sleeping Beauty problem is done, using the formal model for reasoning about knowledge and probability developed by Halpern and Tuttle. While the Sleeping Beauty problem has been viewed as revealing problems with conditioning in the presence of imperfect recall, the analysis done here reveals that the problems are not so much due to imperfect recall as to asynchrony. The implications of this analysis for van Fraassen's Reflection Principle and Savage's Sure-Thing Principle are considered.
[ { "version": "v1", "created": "Tue, 19 Oct 2004 17:31:44 GMT" } ]
1,179,878,400,000
[ [ "Halpern", "Joseph Y.", "" ] ]
cs/0411015
Ziny Flikop
Ziny Flikop
Bounded Input Bounded Predefined Control Bounded Output
8 pages, 6 figures
null
null
null
cs.AI
null
The paper is an attempt to generalize a methodology similar to the bounded-input bounded-output method currently widely used in system stability studies. The methodology, presented earlier, allows decomposition of the input space into bounded subspaces and the definition of a bounding surface for each subspace. It also defines a corresponding predefined control, which maps any point of a bounded input subspace into a desired bounded output subspace. This methodology was improved by providing a mechanism for quickly defining a bounding surface. This paper presents an enhanced bounded-input bounded-predefined-control bounded-output approach, which adds adaptability to the control and allows transferring a controlled system along a suboptimal trajectory.
[ { "version": "v1", "created": "Mon, 8 Nov 2004 01:52:58 GMT" } ]
1,179,878,400,000
[ [ "Flikop", "Ziny", "" ] ]
cs/0411034
Balaram Das
Balaram Das
Generating Conditional Probabilities for Bayesian Networks: Easing the Knowledge Acquisition Problem
24pages, 2figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The number of probability distributions required to populate a conditional probability table (CPT) in a Bayesian network, grows exponentially with the number of parent-nodes associated with that table. If the table is to be populated through knowledge elicited from a domain expert then the sheer magnitude of the task forms a considerable cognitive barrier. In this paper we devise an algorithm to populate the CPT while easing the extent of knowledge acquisition. The input to the algorithm consists of a set of weights that quantify the relative strengths of the influences of the parent-nodes on the child-node, and a set of probability distributions the number of which grows only linearly with the number of associated parent-nodes. These are elicited from the domain expert. The set of probabilities are obtained by taking into consideration the heuristics that experts use while arriving at probabilistic estimations. The algorithm is used to populate the CPT by computing appropriate weighted sums of the elicited distributions. We invoke the methods of information geometry to demonstrate how these weighted sums capture the expert's judgemental strategy.
[ { "version": "v1", "created": "Fri, 12 Nov 2004 00:42:55 GMT" }, { "version": "v2", "created": "Mon, 4 Aug 2008 06:36:49 GMT" } ]
1,217,808,000,000
[ [ "Das", "Balaram", "" ] ]
cs/0411071
Pontus Svenson
Hedvig Sidenbladh, Pontus Svenson, Johan Schubert
Comparing Multi-Target Trackers on Different Force Unit Levels
9 pages
Proc SPIE Vol 5429, p 306-314 (2004)
10.1117/12.542024
null
cs.AI
null
Consider the problem of tracking a set of moving targets. Apart from the tracking result, it is often important to know where the tracking fails, either to steer sensors to that part of the state-space, or to inform a human operator about the status and quality of the obtained information. An intuitive quality measure is the correlation between two tracking results based on uncorrelated observations. In the case of Bayesian trackers such a correlation measure could be the Kullback-Leibler difference. We focus on a scenario with a large number of military units moving in some terrain. The units are observed by several types of sensors and "meta-sensors" with force aggregation capabilities. The sensors register units of different size. Two separate multi-target probability hypothesis density (PHD) particle filters are used to track some type of units (e.g., companies) and their sub-units (e.g., platoons), respectively, based on observations of units of those sizes. Each observation is used in one filter only. Although the state-space may well be the same in both filters, the posterior PHD distributions are not directly comparable -- one unit might correspond to three or four spatially distributed sub-units. Therefore, we introduce a mapping function between distributions for different unit size, based on doctrine knowledge of unit configuration. The mapped distributions can now be compared -- locally or globally -- using some measure, which gives the correlation between two PHD distributions in a bounded volume of the state-space. To locate areas where the tracking fails, a discretized quality map of the state-space can be generated by applying the measure locally to different parts of the space.
[ { "version": "v1", "created": "Fri, 19 Nov 2004 13:12:40 GMT" } ]
1,257,811,200,000
[ [ "Sidenbladh", "Hedvig", "" ], [ "Svenson", "Pontus", "" ], [ "Schubert", "Johan", "" ] ]
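A minimal sketch of the comparison idea described in the abstract above: a company-level PHD is mapped to the platoon level through a doctrine template, and the two intensity functions are then compared with a discrete Kullback-Leibler measure. The 1-D grid, the three-platoons-per-company template and all function names are illustrative assumptions, not the authors' implementation.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence KL(p || q) over a common grid."""
    zp, zq = sum(p), sum(q)
    total = 0.0
    for pi, qi in zip(p, q):
        pi, qi = pi / zp + eps, qi / zq + eps
        total += pi * math.log(pi / qi)
    return total

def map_company_to_platoon(phd, platoons_per_company=3, spread=1):
    """Hypothetical doctrine mapping: each company-level intensity peak is
    spread over neighbouring cells to mimic its spatially distributed
    sub-units (3-4 platoons per company, as in the abstract)."""
    n = len(phd)
    mapped = [0.0] * n
    for i, mass in enumerate(phd):
        for d in range(-spread, spread + 1):
            j = min(max(i + d, 0), n - 1)
            mapped[j] += platoons_per_company * mass / (2 * spread + 1)
    return mapped

# Two independently tracked PHDs on a 1-D grid of state-space cells.
company_phd = [0, 0, 2.0, 0, 0, 0, 1.0, 0]
platoon_phd = [0, 1.5, 2.0, 1.5, 0, 0.8, 1.0, 0.8]

expected = map_company_to_platoon(company_phd)
disagreement = kl_divergence(platoon_phd, expected)
```

On this toy grid the measure is zero only when the mapped company picture agrees with the independently tracked platoon picture, so applying it locally flags the regions of the state-space where the two trackers disagree.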
cs/0411072
Pontus Svenson
Pontus Svenson
Extremal optimization for sensor report pre-processing
10 pages
Proc SPIE Vol 5429, p 162-171 (2004)
10.1117/12.542027
null
cs.AI
null
We describe the recently introduced extremal optimization algorithm and apply it to target detection and association problems arising in pre-processing for multi-target tracking. Here we consider the problem of pre-processing for multiple target tracking when the number of sensor reports received is very large and arrives in large bursts. In this case, it is sometimes necessary to pre-process reports before sending them to tracking modules in the fusion system. The pre-processing step associates reports to known tracks (or initializes new tracks for reports on objects that have not been seen before). It could also be used as a pre-processing step before clustering, e.g., in order to test how many clusters to use. The pre-processing is done by solving an approximate version of the original problem. In this approximation, not all pair-wise conflicts are calculated. The approximation relies on knowing how many such pair-wise conflicts are necessary to compute. To determine this, results on phase transitions occurring when coloring (or clustering) large random instances of a particular graph ensemble are used.
[ { "version": "v1", "created": "Fri, 19 Nov 2004 13:37:40 GMT" } ]
1,257,811,200,000
[ [ "Svenson", "Pontus", "" ] ]
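The extremal optimization heuristic mentioned above is easy to state: rank the solution components from worst to best fitness, replace one chosen with a power-law bias towards the worst, and keep the best configuration seen. A toy tau-EO sketch for graph coloring (the problem class the abstract's phase-transition results refer to) follows; the 6-cycle instance, the tau value and the step budget are illustrative assumptions, not the paper's sensor-report setup.

```python
import random

def conflicts(vertex, coloring, adj):
    """Number of neighbours sharing this vertex's colour (negated fitness)."""
    return sum(coloring[u] == coloring[vertex] for u in adj[vertex])

def extremal_optimization(adj, n_colors, tau=1.4, steps=20000, seed=1):
    """tau-EO: rank vertices from worst to best fitness, pick one with a
    power-law bias towards the worst, recolour it at random, keep the best."""
    rng = random.Random(seed)
    n = len(adj)
    coloring = [rng.randrange(n_colors) for _ in range(n)]
    best = list(coloring)
    best_cost = sum(conflicts(v, coloring, adj) for v in range(n)) // 2
    for _ in range(steps):
        if best_cost == 0:
            break
        ranked = sorted(range(n), key=lambda v: -conflicts(v, coloring, adj))
        # Rank k is selected with probability proportional to k**(-tau).
        k = min(int((1.0 - rng.random()) ** (-1.0 / (tau - 1.0))), n)
        v = ranked[k - 1]
        coloring[v] = rng.choice([c for c in range(n_colors) if c != coloring[v]])
        cost = sum(conflicts(u, coloring, adj) for u in range(n)) // 2
        if cost < best_cost:
            best_cost, best = cost, list(coloring)
    return best, best_cost

ring6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}  # a 6-cycle
colours, cost = extremal_optimization(ring6, n_colors=3)
```

In the report-association setting the vertices would be sensor reports and an edge a pair-wise conflict between assignments, but the move structure is the same.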
cs/0412091
Florentin Smarandache
Florentin Smarandache, Jean Dezert
The Combination of Paradoxical, Uncertain, and Imprecise Sources of Information based on DSmT and Neutro-Fuzzy Inference
20 pages
A version of this paper published in Proceedings of 10th International Conference on Fuzzy Theory and Technology (FT&T 2005), Salt Lake City, Utah, USA, July 21-26, 2005.
null
null
cs.AI
null
The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information has always been, and still remains today, of primary importance for the development of reliable modern information systems involving artificial reasoning. In this chapter, we present a survey of our recent theory of plausible and paradoxical reasoning, known in the literature as Dezert-Smarandache Theory (DSmT), developed for dealing with imprecise, uncertain and paradoxical sources of information. We focus our presentation here on the foundations of DSmT and on the two important new rules of combination, rather than on browsing specific applications of DSmT available in the literature. Several simple examples are given throughout the presentation to show the efficiency and the generality of this new approach. The last part of this chapter concerns the presentation of neutrosophic logic, the neutro-fuzzy inference and its connection with DSmT. Fuzzy logic and neutrosophic logic are useful tools in decision making after fusing the information using the DSm hybrid rule of combination of masses.
[ { "version": "v1", "created": "Sun, 19 Dec 2004 14:56:11 GMT" } ]
1,179,878,400,000
[ [ "Smarandache", "Florentin", "" ], [ "Dezert", "Jean", "" ] ]
cs/0501068
Jean-Francois Mari
Olivier Aycard (GRAVIR - Imag, Orpailleur Loria), Jean-Francois Mari (ORPAILLEUR Loria), Richard Washington
Learning to automatically detect features for mobile robots using second-order Hidden Markov Models
2004
null
null
null
cs.AI
null
In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantages over other methods (such as neural networks) are their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.
[ { "version": "v1", "created": "Mon, 24 Jan 2005 11:05:36 GMT" } ]
1,179,878,400,000
[ [ "Aycard", "Olivier", "", "GRAVIR - Imag, Orpailleur Loria" ], [ "Mari", "Jean-Francois", "", "ORPAILLEUR Loria" ], [ "Washington", "Richard", "" ] ]
cs/0501072
Thierry Poibeau
Dominique Dutoit, Thierry Poibeau (LIPN)
Inferring knowledge from a large semantic network
null
Inferring knowledge from a large semantic network (2002) 232-238
null
null
cs.AI
null
In this paper, we present a rich semantic network based on a differential analysis. We then detail implemented measures that take into account common and differential features between words. In a last section, we describe some industrial applications.
[ { "version": "v1", "created": "Tue, 25 Jan 2005 16:09:11 GMT" } ]
1,179,878,400,000
[ [ "Dutoit", "Dominique", "", "LIPN" ], [ "Poibeau", "Thierry", "", "LIPN" ] ]
cs/0501084
Axel Polleres
Thomas Eiter and Axel Polleres
Towards Automated Integration of Guess and Check Programs in Answer Set Programming: A Meta-Interpreter and Applications
To appear in Theory and Practice of Logic Programming (TPLP)
null
null
1843-04-01
cs.AI
null
Answer set programming (ASP) with disjunction offers a powerful tool for declaratively representing and solving hard problems. Many NP-complete problems can be encoded in the answer set semantics of logic programs in a very concise and intuitive way, where the encoding reflects the typical "guess and check" nature of NP problems: The property is encoded in a way such that polynomial size certificates for it correspond to stable models of a program. However, the problem-solving capacity of full disjunctive logic programs (DLPs) is beyond NP, and captures a class of problems at the second level of the polynomial hierarchy. While these problems also have a clear "guess and check" structure, finding an encoding in a DLP reflecting this structure may sometimes be a non-obvious task, in particular if the "check" itself is a coNP-complete problem; usually, such problems are solved by interleaving separate guess and check programs, where the check is expressed by inconsistency of the check program. In this paper, we present general transformations of head-cycle free (extended) disjunctive logic programs into stratified and positive (extended) disjunctive logic programs based on meta-interpretation techniques. The answer sets of the original and the transformed program are in simple correspondence, and, moreover, inconsistency of the original program is indicated by a designated answer set of the transformed program. Our transformations facilitate the integration of separate "guess" and "check" programs, which are often easy to obtain, automatically into a single disjunctive logic program. Our results complement recent results on meta-interpretation in ASP, and extend methods and techniques for a declarative "guess and check" problem solving paradigm through ASP.
[ { "version": "v1", "created": "Fri, 28 Jan 2005 20:19:12 GMT" } ]
1,179,878,400,000
[ [ "Eiter", "Thomas", "" ], [ "Polleres", "Axel", "" ] ]
cs/0501086
Manuela Kunze
Peter M. Kruse and Andre Naujoks and Dietmar Roesner and Manuela Kunze
Clever Search: A WordNet Based Wrapper for Internet Search Engines
null
Proceedings of 2nd GermaNet Workshop 2005
null
null
cs.AI
null
This paper presents an approach to enhance search engines with information about word senses available in WordNet. The approach exploits information about the conceptual relations within the lexical-semantic net. In the wrapper for search engines presented here, WordNet information is used to refine the user's request or to classify the results of a publicly available web search engine, such as Google, Yahoo, etc.
[ { "version": "v1", "created": "Mon, 31 Jan 2005 16:00:22 GMT" } ]
1,179,878,400,000
[ [ "Kruse", "Peter M.", "" ], [ "Naujoks", "Andre", "" ], [ "Roesner", "Dietmar", "" ], [ "Kunze", "Manuela", "" ] ]
cs/0501089
Manuela Kunze
Manuela Kunze and Dietmar Roesner
Issues in Exploiting GermaNet as a Resource in Real Applications
10 pages, 3 figures
null
null
null
cs.AI
null
This paper reports on experiments with GermaNet as a resource within domain-specific document analysis. The main question to be answered is: what is the coverage of GermaNet in a specific domain? We report on the results of a field test of GermaNet for analyses of autopsy protocols and present a sketch of the integration of GermaNet inside XDOC. Our remarks will contribute to a GermaNet user's wish list.
[ { "version": "v1", "created": "Mon, 31 Jan 2005 10:07:52 GMT" } ]
1,179,878,400,000
[ [ "Kunze", "Manuela", "" ], [ "Roesner", "Dietmar", "" ] ]
cs/0501093
Manuela Kunze
Manuela Kunze and Dietmar Roesner
Transforming Business Rules Into Natural Language Text
3 pages
in Proceedings of IWCS-6, 2005
null
null
cs.AI
null
The aim of the project presented in this paper is to design a system for an NLG architecture, which supports the documentation process of eBusiness models. A major task is to enrich the formal description of an eBusiness model with additional information needed in an NLG task.
[ { "version": "v1", "created": "Mon, 31 Jan 2005 07:59:14 GMT" } ]
1,179,878,400,000
[ [ "Kunze", "Manuela", "" ], [ "Roesner", "Dietmar", "" ] ]
cs/0501094
Manuela Kunze
Manuela Kunze and Dietmar Roesner
Corpus based Enrichment of GermaNet Verb Frames
4 pages
in Proceedings of LREC 2004
null
null
cs.AI
null
Lexical semantic resources, like WordNet, are often used in real applications of natural language document processing. For example, we integrated GermaNet into our document suite XDOC for processing German forensic autopsy protocols. In addition to the hypernymy and synonymy relations, we want to adapt GermaNet's verb frames for our analysis. In this paper we outline an approach for the domain-related enrichment of GermaNet verb frames by corpus-based syntactic and co-occurrence analyses of real documents.
[ { "version": "v1", "created": "Mon, 31 Jan 2005 08:36:39 GMT" }, { "version": "v2", "created": "Tue, 1 Feb 2005 08:06:00 GMT" } ]
1,179,878,400,000
[ [ "Kunze", "Manuela", "" ], [ "Roesner", "Dietmar", "" ] ]
cs/0501095
Manuela Kunze
Manuela Kunze and Dietmar Roesner
Context Related Derivation of Word Senses
5 pages, 2 figures
in Proceedings of Ontolex- Workshop 2004
null
null
cs.AI
null
Real applications of natural language document processing are very often confronted with domain specific lexical gaps during the analysis of documents of a new domain. This paper describes an approach for the derivation of domain specific concepts for the extension of an existing ontology. As resources we need an initial ontology and a partially processed corpus of a domain. We exploit the specific characteristic of the sublanguage in the corpus. Our approach is based on syntactical structures (noun phrases) and compound analyses to extract information required for the extension of GermaNet's lexical resources.
[ { "version": "v1", "created": "Mon, 31 Jan 2005 09:25:29 GMT" } ]
1,179,878,400,000
[ [ "Kunze", "Manuela", "" ], [ "Roesner", "Dietmar", "" ] ]
cs/0501096
Manuela Kunze
Dietmar Roesner and Manuela Kunze and Sylke Kroetzsch
Transforming and Enriching Documents for the Semantic Web
10 pages, 1 figure
KI (1), 2004
null
null
cs.AI
null
We suggest employing techniques from Natural Language Processing (NLP) and Knowledge Representation (KR) to transform existing documents into documents amenable to the Semantic Web. Semantic Web documents have at least part of their semantics and pragmatics marked up explicitly in both a machine-processable and a human-readable manner. XML and its related standards (XSLT, RDF, Topic Maps etc.) are the unifying platform for the tools and methodologies developed for different application scenarios.
[ { "version": "v1", "created": "Mon, 31 Jan 2005 09:48:46 GMT" } ]
1,179,878,400,000
[ [ "Roesner", "Dietmar", "" ], [ "Kunze", "Manuela", "" ], [ "Kroetzsch", "Sylke", "" ] ]
cs/0502060
Jean-Philippe Rennard
J.-Ph Rennard
Perspectives for Strong Artificial Life
19 pages, 5 figures
Rennard, J.-Ph., (2004), Perspective for Strong Artificial Life in De Castro, L.N. & von Zuben F.J. (Eds), Recent Developments in Biologically Inspired Computing, Hershey:IGP, 301-318
null
null
cs.AI
null
This text introduces the twin deadlocks of strong artificial life. Conceptualization of life is a deadlock both because of the existence of a continuum between the inert and the living, and because we only know one instance of life. Computationalism is a second deadlock since it remains a matter of faith. Nevertheless, artificial life realizations quickly progress and recent constructions embed an always growing set of the intuitive properties of life. This growing gap between theory and realizations should sooner or later crystallize in some kind of paradigm shift and then give clues to break the twin deadlocks.
[ { "version": "v1", "created": "Sun, 13 Feb 2005 18:20:48 GMT" } ]
1,179,878,400,000
[ [ "Rennard", "J. -Ph", "" ] ]
cs/0504064
Vitaly Schetinin
Vitaly Schetinin, Joachim Schult and Anatoly Brazhnikov
Neural-Network Techniques for Visual Mining Clinical Electroencephalograms
null
null
null
null
cs.AI
null
In this chapter we describe new neural-network techniques developed for visual mining of clinical electroencephalograms (EEGs), the weak electrical potentials invoked by brain activity. These techniques exploit fruitful ideas of the Group Method of Data Handling (GMDH). Section 2 briefly describes the standard neural-network techniques which are able to learn well-suited classification models from data presented by relevant features. Section 3 introduces an evolving cascade neural network technique which adds new input nodes as well as new neurons to the network while the training error decreases. This algorithm is applied to recognize artifacts in clinical EEGs. Section 4 presents the GMDH-type polynomial networks learnt from data. We applied this technique to distinguish the EEGs recorded from an Alzheimer patient and a healthy patient, as well as to recognize EEG artifacts. Section 5 describes a new neural-network technique developed to induce multi-class concepts from data. We used this technique to induce a 16-class concept from the large-scale clinical EEG data. Finally we discuss perspectives of applying the neural-network techniques to clinical EEGs.
[ { "version": "v1", "created": "Thu, 14 Apr 2005 10:27:55 GMT" } ]
1,179,878,400,000
[ [ "Schetinin", "Vitaly", "" ], [ "Schult", "Joachim", "" ], [ "Brazhnikov", "Anatoly", "" ] ]
cs/0504065
Vitaly Schetinin
Vitaly Schetinin, Jonathan E. Fieldsend, Derek Partridge, Wojtek J. Krzanowski, Richard M. Everson, Trevor C. Bailey and Adolfo Hernandez
Estimating Classification Uncertainty of Bayesian Decision Tree Technique on Financial Data
null
null
null
null
cs.AI
null
Bayesian averaging over classification models allows the uncertainty of classification outcomes to be evaluated, which is of crucial importance for making reliable decisions in applications such as finance, in which risks have to be estimated. The uncertainty of classification is determined by a trade-off between the amount of data available for training, the diversity of a classifier ensemble and the required performance. The interpretability of classification models can also give useful information for experts responsible for making reliable classifications. For this reason Decision Trees (DTs) seem to be attractive classification models. The required diversity of the DT ensemble can be achieved by using Bayesian model averaging over all possible DTs. In practice, the Bayesian approach can be implemented on the basis of a Markov Chain Monte Carlo (MCMC) technique of random sampling from the posterior distribution. For sampling large DTs, the MCMC method is extended by the Reversible Jump technique which allows inducing DTs under given priors. For the case when prior information on the DT size is unavailable, the sweeping technique, which defines the prior implicitly, reveals a better performance. Within this chapter we explore the classification uncertainty of the Bayesian MCMC techniques on some datasets from the StatLog Repository and real financial data. The classification uncertainty is compared within an Uncertainty Envelope technique dealing with the class posterior distribution and a given confidence probability. This technique provides realistic estimates of the classification uncertainty which can be easily interpreted in statistical terms with the aim of risk evaluation.
[ { "version": "v1", "created": "Thu, 14 Apr 2005 10:30:54 GMT" } ]
1,179,878,400,000
[ [ "Schetinin", "Vitaly", "" ], [ "Fieldsend", "Jonathan E.", "" ], [ "Partridge", "Derek", "" ], [ "Krzanowski", "Wojtek J.", "" ], [ "Everson", "Richard M.", "" ], [ "Bailey", "Trevor C.", "" ], [ "Hernandez", "Adolfo", "" ] ]
cs/0504066
Vitaly Schetinin
Vitaly Schetinin, Jonathan E. Fieldsend, Derek Partridge, Wojtek J. Krzanowski, Richard M. Everson, Trevor C. Bailey, and Adolfo Hernandez
Comparison of the Bayesian and Randomised Decision Tree Ensembles within an Uncertainty Envelope Technique
null
Journal of Mathematical Modelling and Algorithms, 2005
null
null
cs.AI
null
Multiple Classifier Systems (MCSs) allow evaluation of the uncertainty of classification outcomes, which is of crucial importance for safety-critical applications. The uncertainty of classification is determined by a trade-off between the amount of data available for training, the classifier diversity and the required performance. The interpretability of MCSs can also give useful information for experts responsible for making reliable classifications. For this reason Decision Trees (DTs) seem to be attractive classification models for experts. The required diversity of MCSs exploiting such classification models can be achieved by using two techniques, Bayesian model averaging and the randomised DT ensemble. Both techniques have revealed promising results when applied to real-world problems. In this paper we experimentally compare the classification uncertainty of Bayesian model averaging with a restarting strategy and the randomised DT ensemble on a synthetic dataset and some domain problems commonly used in the machine learning community. To make the Bayesian DT averaging feasible, we use a Markov Chain Monte Carlo technique. The classification uncertainty is evaluated within an Uncertainty Envelope technique dealing with the class posterior distribution and a given confidence probability. Exploring a full posterior distribution, this technique produces realistic estimates which can be easily interpreted in statistical terms. In our experiments we found that the Bayesian DTs are superior to the randomised DT ensembles within the Uncertainty Envelope technique.
[ { "version": "v1", "created": "Thu, 14 Apr 2005 10:33:33 GMT" } ]
1,179,878,400,000
[ [ "Schetinin", "Vitaly", "" ], [ "Fieldsend", "Jonathan E.", "" ], [ "Partridge", "Derek", "" ], [ "Krzanowski", "Wojtek J.", "" ], [ "Everson", "Richard M.", "" ], [ "Bailey", "Trevor C.", "" ], [ "Hernandez", "Adolfo", "" ] ]
cs/0504071
Byeong Kang Dr
Byeong Ho Kang, Achim Hoffmann, Takahira Yamaguchi, Wai Kiang Yeap
Proceedings of the Pacific Knowledge Acquisition Workshop 2004
null
null
null
null
cs.AI
null
Artificial intelligence (AI) research has evolved over the last few decades and knowledge acquisition research is at the core of AI research. PKAW-04 is one of three international knowledge acquisition workshops held in the Pacific Rim, Canada and Europe over the last two decades. PKAW-04 has a strong emphasis on incremental knowledge acquisition, machine learning, neural nets and active mining. The proceedings contain 19 papers that were selected by the program committee among 24 submitted papers. All papers were peer reviewed by at least two reviewers. The papers in these proceedings cover the methods and tools as well as the applications related to developing expert systems or knowledge-based systems.
[ { "version": "v1", "created": "Thu, 14 Apr 2005 13:14:53 GMT" } ]
1,179,878,400,000
[ [ "Kang", "Byeong Ho", "" ], [ "Hoffmann", "Achim", "" ], [ "Yamaguchi", "Takahira", "" ], [ "Yeap", "Wai Kiang", "" ] ]
cs/0505018
Jean-Francois Mari
Jean-Francois Mari (INRIA Lorraine - LORIA), Florence Le Ber (CEVH)
Temporal and Spatial Data Mining with Second-Order Hidden Models
null
null
10.1007/s00500-005-0501-0
null
cs.AI
null
In the frame of designing a knowledge discovery system, we have developed stochastic models based on high-order hidden Markov models. These models are capable of mapping sequences of data into a Markov chain in which the transitions between the states depend on the n previous states, according to the order of the model. We study the process of achieving information extraction from spatial and temporal data by means of an unsupervised classification. We use a French national database related to the land use of a region, named Teruti, which describes the land use in both the spatial and temporal domains. Land-use categories (wheat, corn, forest, ...) are logged every year on each site regularly spaced in the region. They constitute a temporal sequence of images in which we look for spatial and temporal dependencies. The temporal segmentation of the data is done by means of a second-order Hidden Markov Model (HMM2) that appears to have very good capabilities to locate stationary segments, as shown in our previous work in speech recognition. The spatial classification is performed by defining a fractal scanning of the images with the help of a Hilbert-Peano curve that introduces a total order on the sites, preserving the relation of neighborhood between the sites. We show that the HMM2 performs a classification that is meaningful for the agronomists. Spatial and temporal classification may be achieved simultaneously by means of a two-level HMM2 that measures the a posteriori probability of mapping a temporal sequence of images onto a set of hidden classes.
[ { "version": "v1", "created": "Mon, 9 May 2005 06:54:57 GMT" } ]
1,179,878,400,000
[ [ "Mari", "Jean-Francois", "", "INRIA Lorraine - LORIA" ], [ "Ber", "Florence Le", "", "CEVH" ] ]
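The fractal scanning step in the abstract above relies on a Hilbert-Peano curve to impose a total order on the image sites while preserving neighborhood. A sketch of the standard Hilbert index-to-coordinate conversion (the classic textbook algorithm, not the authors' code) is:

```python
def hilbert_d2xy(n, d):
    """Convert a 1-D Hilbert index d into (x, y) on an n x n grid
    (n must be a power of two). Consecutive indices map to cells that
    are 4-neighbours, which is what makes the scan suitable for
    feeding image sites to a sequence model such as an HMM."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Scan an 8x8 "image" in Hilbert order instead of row order.
order = [hilbert_d2xy(8, d) for d in range(64)]
```

Visiting the sites of each yearly land-use image in this order turns the spatial classification into a sequence problem while keeping spatially close sites close in the sequence.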
cs/0505081
Gilles Kassel
Sabine Bruaux (LaRIA), Gilles Kassel (LaRIA), Gilles Morel (LaRIA)
An ontological approach to the construction of problem-solving models
null
null
null
LRR 2005-03
cs.AI
null
Our ongoing work aims at defining an ontology-centered approach for building expertise models for the CommonKADS methodology. This approach (which we have named "OntoKADS") is founded on a core problem-solving ontology which distinguishes between two conceptualization levels: at an object level, a set of concepts enable us to define classes of problem-solving situations, and at a meta level, a set of meta-concepts represent modeling primitives. In this article, our presentation of OntoKADS will focus on the core ontology and, in particular, on roles - the primitive situated at the interface between domain knowledge and reasoning, and whose ontological status is still much debated. We first propose a coherent, global, ontological framework which enables us to account for this primitive. We then show how this novel characterization of the primitive allows definition of new rules for the construction of expertise models.
[ { "version": "v1", "created": "Mon, 30 May 2005 13:42:02 GMT" } ]
1,179,878,400,000
[ [ "Bruaux", "Sabine", "", "LaRIA" ], [ "Kassel", "Gilles", "", "LaRIA" ], [ "Morel", "Gilles", "", "LaRIA" ] ]
cs/0506031
Laurent Henocque
Patrick Albert, Laurent Henocque, Mathias Kleiner
A Constrained Object Model for Configuration Based Workflow Composition
This is an extended version of the article published at BPM'05, Third International Conference on Business Process Management, Nancy France
null
null
null
cs.AI
null
Automatic or assisted workflow composition is a field of intense research for applications to the world wide web or to business process modeling. Workflow composition is traditionally addressed in various ways, generally via theorem proving techniques. Recent research observed that building a composite workflow bears strong relationships with finite model search, and that some workflow languages can be defined as constrained object metamodels. This led to considering the viability of applying configuration techniques to this problem, which was proven feasible. Constraint-based configuration expects a constrained object model as input. The purpose of this document is to formally specify the constrained object model involved in ongoing experiments and research using the Z specification language.
[ { "version": "v1", "created": "Thu, 9 Jun 2005 14:57:53 GMT" } ]
1,179,878,400,000
[ [ "Albert", "Patrick", "" ], [ "Henocque", "Laurent", "" ], [ "Kleiner", "Mathias", "" ] ]
cs/0507010
Jiayang Wang
Jiayang Wang
A Study for the Feature Core of Dynamic Reduct
9 pages
null
null
null
cs.AI
null
For the reduct problems of decision systems, this paper proposes the notion of the dynamic core based on the dynamic reduct model. It gives various formal definitions of the dynamic core and discusses some of its properties. All of these show that the dynamic core possesses the essential characteristics of the feature core.
[ { "version": "v1", "created": "Tue, 5 Jul 2005 13:02:02 GMT" } ]
1,179,878,400,000
[ [ "Wang", "Jiayang", "" ] ]
cs/0507023
Valmir Barbosa
Luis O. Rigo Jr., Valmir C. Barbosa
Two-dimensional cellular automata and the analysis of correlated time series
null
Pattern Recognition Letters 27 (2006), 1353-1360
10.1016/j.patrec.2006.01.005
null
cs.AI
null
Correlated time series are time series that, by virtue of the underlying process to which they refer, are expected to influence each other strongly. We introduce a novel approach to handle such time series, one that models their interaction as a two-dimensional cellular automaton and therefore allows them to be treated as a single entity. We apply our approach to the problems of filling gaps and predicting values in rainfall time series. Computational results show that the new approach compares favorably to Kalman smoothing and filtering.
[ { "version": "v1", "created": "Fri, 8 Jul 2005 12:47:38 GMT" } ]
1,179,878,400,000
[ [ "Rigo", "Luis O.", "Jr." ], [ "Barbosa", "Valmir C.", "" ] ]
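A minimal sketch of the idea of treating correlated series as a single two-dimensional lattice: rows are series, columns are time steps, and a cellular-automaton-style local update fills missing cells from their neighbours. The plain neighbour-averaging rule used here is an illustrative assumption; the paper derives its own automaton rules from the data rather than using a fixed average.

```python
def fill_gaps(grid, passes=50):
    """grid[i][j]: value of series i at time j, or None for a gap.
    Repeatedly replace each gap by the mean of its known von Neumann
    neighbours (left/right in time, up/down across series)."""
    rows, cols = len(grid), len(grid[0])
    g = [row[:] for row in grid]
    for _ in range(passes):
        updated = [row[:] for row in g]
        changed = False
        for i in range(rows):
            for j in range(cols):
                if g[i][j] is not None:
                    continue
                neigh = [g[x][y]
                         for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < rows and 0 <= y < cols
                         and g[x][y] is not None]
                if neigh:
                    updated[i][j] = sum(neigh) / len(neigh)
                    changed = True
        g = updated
        if not changed:
            break
    return g

# Two strongly correlated rainfall-like series with a gap in the second.
series = [
    [1.0, 2.0, 3.0, 4.0],
    [1.1, None, 3.1, 4.1],
]
filled = fill_gaps(series)
```

The point of the lattice view is that the gap in the second series is repaired using the first series as well as the second series' own past and future, i.e. the set is treated as one entity.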
cs/0507029
Samuel Landau
Samuel Landau (INRIA Futurs), Olivier Sigaud (LIP6), Marc Schoenauer (INRIA Futurs)
ATNoSFERES revisited
null
Dans Proceedings of the Genetic and Evolutionary Computation Conference, GECCO-2005 [OAI: oai:hal.inria.fr:inria-00000158_v1] - http://hal.inria.fr/inria-00000158
null
null
cs.AI
null
ATNoSFERES is a Pittsburgh style Learning Classifier System (LCS) in which the rules are represented as edges of an Augmented Transition Network. Genotypes are strings of tokens of a stack-based language, whose execution builds the labeled graph. The original ATNoSFERES, using a bitstring to represent the language tokens, has been favorably compared in previous work to several Michigan style LCS architectures in the context of Non Markov problems. Several modifications of ATNoSFERES are proposed here, the most important one conceptually being a representational change: each token is now represented by an integer, hence the genotype is a string of integers; several other modifications of the underlying grammar language are also proposed. The resulting ATNoSFERES-II is validated on several standard animat Non Markov problems, on which it outperforms all previously published results in the LCS literature. The reasons for these improvements are carefully analyzed, and some assumptions are proposed on the underlying mechanisms in order to explain these good results.
[ { "version": "v1", "created": "Mon, 11 Jul 2005 13:11:25 GMT" } ]
1,556,668,800,000
[ [ "Landau", "Samuel", "", "INRIA Futurs" ], [ "Sigaud", "Olivier", "", "LIP6" ], [ "Schoenauer", "Marc", "", "INRIA Futurs" ] ]
cs/0508132
Tran Cao Son
Tran Cao Son and Enrico Pontelli
Planning with Preferences using Logic Programming
47 pages, to appear in TPLP
null
null
null
cs.AI
null
We present a declarative language, PP, for the high-level specification of preferences between possible solutions (or trajectories) of a planning problem. This novel language allows users to elegantly express non-trivial, multi-dimensional preferences and priorities over such preferences. The semantics of PP allows the identification of most preferred trajectories for a given goal. We also provide an answer set programming implementation of planning problems with PP preferences.
[ { "version": "v1", "created": "Wed, 31 Aug 2005 14:50:22 GMT" } ]
1,254,182,400,000
[ [ "Son", "Tran Cao", "" ], [ "Pontelli", "Enrico", "" ] ]
cs/0509011
Zengyou He
Zengyou He, Xiaofei Xu, Shengchun Deng
Clustering Mixed Numeric and Categorical Data: A Cluster Ensemble Approach
14 pages
null
null
Tr-2002-10
cs.AI
null
Clustering is a widely used technique in data mining applications for discovering patterns in underlying data. Most traditional clustering algorithms are limited to handling datasets that contain either numeric or categorical attributes. However, datasets with mixed types of attributes are common in real life data mining applications. In this paper, we propose a novel divide-and-conquer technique to solve this problem. First, the original mixed dataset is divided into two sub-datasets: the pure categorical dataset and the pure numeric dataset. Next, existing well established clustering algorithms designed for different types of datasets are employed to produce corresponding clusters. Last, the clustering results on the categorical and numeric dataset are combined as a categorical dataset, on which the categorical data clustering algorithm is used to get the final clusters. Our contribution in this paper is to provide an algorithm framework for the mixed attributes clustering problem, in which existing clustering algorithms can be easily integrated, the capabilities of different kinds of clustering algorithms and characteristics of different types of datasets could be fully exploited. Comparisons with other clustering algorithms on real life datasets illustrate the superiority of our approach.
[ { "version": "v1", "created": "Mon, 5 Sep 2005 02:47:12 GMT" } ]
1,179,878,400,000
[ [ "He", "Zengyou", "" ], [ "Xu", "Xiaofei", "" ], [ "Deng", "Shengchun", "" ] ]
cs/0509033
Zengyou He
Zengyou He, Xiaofei Xu, Shengchun Deng, Bin Dong
K-Histograms: An Efficient Clustering Algorithm for Categorical Dataset
11 pages
null
null
Tr-2003-08
cs.AI
null
Clustering categorical data is an integral part of data mining and has attracted much attention recently. In this paper, we present k-histograms, a new efficient algorithm for clustering categorical data. The k-histograms algorithm extends the k-means algorithm to the categorical domain by replacing the means of clusters with histograms, and dynamically updates histograms in the clustering process. Experimental results on real datasets show that the k-histograms algorithm can produce better clustering results than the k-modes algorithm, the algorithm most closely related to our work.
[ { "version": "v1", "created": "Tue, 13 Sep 2005 06:33:08 GMT" } ]
1,179,878,400,000
[ [ "He", "Zengyou", "" ], [ "Xu", "Xiaofei", "" ], [ "Deng", "Shengchun", "" ], [ "Dong", "Bin", "" ] ]
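The core of the algorithm described above — k-means with per-attribute frequency histograms as cluster centres — can be sketched as follows. The dissimilarity (one minus the relative frequency of each attribute value) and the deterministic seed choice are illustrative assumptions; the paper updates histograms dynamically during assignment rather than once per sweep as done here.

```python
def dissimilarity(record, hist, size):
    """Distance of a categorical record to a cluster histogram: for each
    attribute, 1 minus the relative frequency of the record's value."""
    size = size or 1  # an emptied cluster is maximally distant
    return sum(1.0 - hist[a].get(v, 0) / size for a, v in enumerate(record))

def k_histograms(data, k, seed_idx, max_iter=100):
    """k-means-style clustering for categorical data: cluster centres are
    per-attribute value histograms instead of means (seed_idx picks the
    initial records; a real implementation would choose them randomly)."""
    n_attrs = len(data[0])
    labels = [None] * len(data)
    hists = [[{data[s][a]: 1} for a in range(n_attrs)] for s in seed_idx]
    sizes = [1] * k
    for _ in range(max_iter):
        new_labels = [min(range(k),
                          key=lambda c: dissimilarity(rec, hists[c], sizes[c]))
                      for rec in data]
        if new_labels == labels:
            break
        labels = new_labels
        # Rebuild the histograms from the current assignment.
        hists = [[{} for _ in range(n_attrs)] for _ in range(k)]
        sizes = [0] * k
        for rec, c in zip(data, labels):
            sizes[c] += 1
            for a, v in enumerate(rec):
                hists[c][a][v] = hists[c][a].get(v, 0) + 1
    return labels

records = [("a", "x"), ("a", "y"), ("a", "x"),
           ("b", "p"), ("b", "q"), ("b", "p")]
labels = k_histograms(records, k=2, seed_idx=[0, 3])
```

On this toy dataset the two groups separate after one sweep: a record's distance to a cluster drops as the cluster accumulates the record's attribute values, exactly the role the mean plays in ordinary k-means.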
cs/0510050
Gilles Kassel
Gilles Kassel (LaRIA)
Integration of the DOLCE top-level ontology into the OntoSpec methodology
null
null
null
LRR-2005-08
cs.AI
null
This report describes a new version of the OntoSpec methodology for ontology building. Defined by the LaRIA Knowledge Engineering Team (University of Picardie Jules Verne, Amiens, France), OntoSpec aims at helping builders to model ontological knowledge (upstream of formal representation). The methodology relies on a set of rigorously-defined modelling primitives and principles. Its application leads to the elaboration of a semi-informal ontology, which is independent of knowledge representation languages. We recently enriched the OntoSpec methodology by endowing it with a new resource, the DOLCE top-level ontology defined at the LOA (IST-CNR, Trento, Italy). The goal of this integration is to provide modellers with additional help in structuring application ontologies, while maintaining independence vis-à-vis formal representation languages. In this report, we first provide an overview of the OntoSpec methodology's general principles and then describe the DOLCE re-engineering process. A complete version of DOLCE-OS (i.e. a specification of DOLCE in the semi-informal OntoSpec language) is presented in an appendix.
[ { "version": "v1", "created": "Tue, 18 Oct 2005 08:32:38 GMT" } ]
1,179,878,400,000
[ [ "Kassel", "Gilles", "", "LaRIA" ] ]
cs/0510062
Jamal Saboune
Jamal Saboune (INRIA Lorraine - LORIA), Fran\c{c}ois Charpillet (INRIA Lorraine - LORIA)
Using Interval Particle Filtering for Markerless 3D Human Motion Capture
null
null
null
null
cs.AI
null
In this paper we present a new approach for markerless human motion capture from conventional camera feeds. The aim of our study is to recover the 3D positions of key points of the body that can serve for gait analysis. Our approach is based on foreground segmentation, an articulated body model and particle filters. In order to be generic and simple, no restrictive dynamic modelling was used. A new modified particle filtering algorithm was introduced; it is used to search the model configuration space efficiently. This new algorithm, which we call Interval Particle Filtering, reorganises the configuration search space in an optimal deterministic way and proved to be efficient in tracking natural human movement. Results for human motion capture from a single camera are presented and compared to results obtained from a marker-based system. The system proved able to track motion successfully even under partial occlusion.
[ { "version": "v1", "created": "Fri, 21 Oct 2005 13:45:15 GMT" } ]
1,179,878,400,000
[ [ "Saboune", "Jamal", "", "INRIA Lorraine - LORIA" ], [ "Charpillet", "François", "", "INRIA\n Lorraine - LORIA" ] ]
cs/0510063
Jamal Saboune
Jamal Saboune (INRIA Lorraine - LORIA), Fran\c{c}ois Charpillet (INRIA Lorraine - LORIA)
Markerless Human Motion Capture for Gait Analysis
null
null
null
null
cs.AI
null
The aim of our study is to detect balance disorders and a tendency towards falls in the elderly, given gait parameters. In this paper we present a new tool for gait analysis based on markerless human motion capture from camera feeds. The system introduced here recovers the 3D positions of several key points of the human body while walking. Foreground segmentation, an articulated body model and particle filtering are the basic elements of our approach. No dynamic model is used, so this system can be described as generic and simple to implement. A modified particle filtering algorithm, which we call Interval Particle Filtering, is used to reorganise and search through the model's configuration search space in a deterministic optimal way. This algorithm was able to perform human movement tracking with success. Results from the processing of a single camera feed are shown and compared to results obtained using a marker-based human motion capture system.
[ { "version": "v1", "created": "Fri, 21 Oct 2005 13:45:49 GMT" } ]
1,179,878,400,000
[ [ "Saboune", "Jamal", "", "INRIA Lorraine - LORIA" ], [ "Charpillet", "François", "", "INRIA\n Lorraine - LORIA" ] ]
cs/0510079
Riccardo Pucella
Joseph Y. Halpern, Riccardo Pucella
Evidence with Uncertain Likelihoods
21 pages. A preliminary version appeared in the Proceedings of UAI'05
null
null
null
cs.AI
null
An agent often has a number of hypotheses, and must choose among them based on observations, or outcomes of experiments. Each of these observations can be viewed as providing evidence for or against various hypotheses. All the attempts to formalize this intuition up to now have assumed that associated with each hypothesis h there is a likelihood function \mu_h, which is a probability measure that intuitively describes how likely each observation is, conditional on h being the correct hypothesis. We consider an extension of this framework where there is uncertainty as to which of a number of likelihood functions is appropriate, and discuss how one formal approach to defining evidence, which views evidence as a function from priors to posteriors, can be generalized to accommodate this uncertainty.
[ { "version": "v1", "created": "Tue, 25 Oct 2005 21:15:31 GMT" }, { "version": "v2", "created": "Thu, 3 Aug 2006 17:34:43 GMT" } ]
1,179,878,400,000
[ [ "Halpern", "Joseph Y.", "" ], [ "Pucella", "Riccardo", "" ] ]
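The abstract above views evidence as a function from priors to posteriors, mediated by per-hypothesis likelihood functions \mu_h; in the single-likelihood-per-hypothesis case this map is just Bayes' rule, which a short sketch makes concrete (the hypothesis names and numbers are illustrative, not from the paper):

```python
def update(prior, likelihood, obs):
    """Map a prior over hypotheses to a posterior, given one likelihood
    function per hypothesis for the observation (plain Bayes' rule)."""
    unnorm = {h: prior[h] * likelihood[h](obs) for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# two hypotheses about a coin: fair vs biased towards heads
lik = {"fair": lambda o: 0.5,
       "biased": lambda o: 0.9 if o == "H" else 0.1}
post = update({"fair": 0.5, "biased": 0.5}, lik, "H")
```

The paper's setting generalises this: instead of a single \mu_h per hypothesis there is a set of candidate likelihood functions, so the prior-to-posterior map is no longer unique.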
cs/0510083
Nizar Kerkeni
Nizar Kerkeni (TIM), Frederic Alexandre (CORTEX), Mohamed Hedi Bedoui (TIM), Laurent Bougrain (CORTEX), Mohamed Dogui (SAHLOUL)
Neuronal Spectral Analysis of EEG and Expert Knowledge Integration for Automatic Classification of Sleep Stages
null
null
null
null
cs.AI
null
Being able to analyze and interpret the signal coming from electroencephalogram (EEG) recordings can be of high interest for many applications, including medical diagnosis and Brain-Computer Interfaces. Indeed, human experts are today able to extract from this signal many hints related to the physiological as well as cognitive states of the recorded subject, and it would be very interesting to perform such a task automatically, but today no completely automatic system exists. In previous studies, we have compared human expertise and automatic processing tools, including artificial neural networks (ANN), to better understand the competences of each and determine which aspects are difficult to integrate in a fully automatic system. In this paper, we bring more elements to that study by reporting the main results of a practical experiment which was carried out in a hospital for sleep pathology study. An EEG recording was studied and labeled by a human expert and an ANN. We describe the characteristics of the experiment, both the human and the neuronal analysis procedures, compare their performances and point out the main limitations which arise from this study.
[ { "version": "v1", "created": "Wed, 26 Oct 2005 14:47:07 GMT" } ]
1,179,878,400,000
[ [ "Kerkeni", "Nizar", "", "TIM" ], [ "Alexandre", "Frederic", "", "CORTEX" ], [ "Bedoui", "Mohamed Hedi", "", "TIM" ], [ "Bougrain", "Laurent", "", "CORTEX" ], [ "Dogui", "Mohamed", "", "SAHLOUL" ] ]
cs/0510091
Marc Schoenauer
Marc Schoenauer (INRIA Futurs), Yann Semet (INRIA Futurs)
An efficient memetic, permutation-based evolutionary algorithm for real-world train timetabling
null
null
null
null
cs.AI
null
Train timetabling is a difficult and very tightly constrained combinatorial problem that deals with the construction of train schedules. We focus on the particular problem of local reconstruction of the schedule following a small perturbation, seeking minimisation of the total accumulated delay by adapting the times of departure and arrival for each train and the allocation of resources (tracks, routing nodes, etc.). We describe a permutation-based evolutionary algorithm that relies on a semi-greedy heuristic to gradually reconstruct the schedule by inserting trains one after the other following the permutation. This algorithm can be hybridised with ILOG's commercial MIP tool CPLEX in a coarse-grained manner: the evolutionary part is used to quickly obtain a good but suboptimal solution, and this intermediate solution is then refined using CPLEX. Experimental results are presented on a large real-world case involving more than one million variables and two million constraints. Results are surprisingly good, as the evolutionary algorithm, alone or hybridised, produces excellent solutions much faster than CPLEX alone.
[ { "version": "v1", "created": "Mon, 31 Oct 2005 06:06:57 GMT" } ]
1,179,878,400,000
[ [ "Schoenauer", "Marc", "", "INRIA Futurs" ], [ "Semet", "Yann", "", "INRIA Futurs" ] ]
cs/0511004
Marc Schoenauer
Aguston E. Eiben (VU), Marc Schoenauer (FRACTALES)
Evolutionary Computing
null
null
null
null
cs.AI
null
Evolutionary computing (EC) is an exciting development in Computer Science. It amounts to building, applying and studying algorithms based on the Darwinian principles of natural selection. In this paper we briefly introduce the main concepts behind evolutionary computing. We present the main components of all evolutionary algorithms (EAs), sketch the differences between different types of EAs and survey application areas ranging from optimization, modeling and simulation to entertainment.
[ { "version": "v1", "created": "Tue, 1 Nov 2005 19:46:18 GMT" } ]
1,179,878,400,000
[ [ "Eiben", "Aguston E.", "", "VU" ], [ "Schoenauer", "Marc", "", "FRACTALES" ] ]
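As a concrete illustration of the components the survey above mentions (a population, fitness-based parent selection, variation operators, survivor replacement), here is a minimal generational EA on the classic OneMax toy problem; the bit-string encoding, the operators and the parameter values are illustrative choices, not anything prescribed by the paper:

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, gens=100, p_mut=0.05, seed=1):
    """Minimal generational EA: binary tournament selection, one-point
    crossover, per-bit flip mutation, full generational replacement."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt                                    # generational replacement
    return max(pop, key=fitness)

# OneMax: maximise the number of ones in the bit string
best = evolve(fitness=sum)
```

Swapping the encoding, the variation operators or the selection scheme yields the different EA dialects (genetic algorithms, evolution strategies, genetic programming, ...) the paper distinguishes.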
cs/0511015
Prashant Singh
Prashant
Towards a Hierarchical Model of Consciousness, Intelligence, Mind and Body
12 pages, 2 figures
null
null
null
cs.AI
null
This article has been withdrawn.
[ { "version": "v1", "created": "Thu, 3 Nov 2005 16:28:05 GMT" }, { "version": "v2", "created": "Sat, 13 Jan 2007 00:00:56 GMT" } ]
1,179,878,400,000
[ [ "Prashant", "", "" ] ]
cs/0511091
Marc Schoenauer
Carlos Kavka (INRIA Futurs, UNSL-DI), Patricia Roggero (UNSL-DI), Marc Schoenauer (INRIA Futurs)
Evolution of Voronoi based Fuzzy Recurrent Controllers
null
Dans GECCO 2005
null
null
cs.AI
null
A fuzzy controller is usually designed by formulating the knowledge of a human expert as a set of linguistic variables and fuzzy rules. Among the most successful methods to automate the fuzzy controller development process are evolutionary algorithms. In this work, we propose the Recurrent Fuzzy Voronoi (RFV) model, a representation for recurrent fuzzy systems. It is an extension of the FV model proposed by Kavka and Schoenauer that extends the application domain to include temporal problems. The FV model is a representation for fuzzy controllers based on Voronoi diagrams that can represent fuzzy systems with synergistic rules, fulfilling the $\epsilon$-completeness property and providing a simple way to introduce a priori knowledge. In the proposed representation, the temporal relations are embedded by including internal units that provide feedback by connecting outputs to inputs; these internal units act as memory elements. In the RFV model, the semantics of the internal units can be specified together with the a priori rules. The geometric interpretation of the rules allows the use of geometric variational operators during the evolution. The representation and the algorithms are validated on two problems in the areas of system identification and evolutionary robotics.
[ { "version": "v1", "created": "Mon, 28 Nov 2005 07:14:18 GMT" } ]
1,179,878,400,000
[ [ "Kavka", "Carlos", "", "INRIA Futurs, UNSL-DI" ], [ "Roggero", "Patricia", "", "UNSL-DI" ], [ "Schoenauer", "Marc", "", "INRIA Futurs" ] ]
cs/0512045
Xuan-Ha Vu
Xuan-Ha Vu, Marius-Calin Silaghi, Djamila Sam-Haroud and Boi Faltings
Branch-and-Prune Search Strategies for Numerical Constraint Solving
43 pages, 11 figures
null
null
LIA-REPORT-2006-007
cs.AI
null
When solving numerical constraints such as nonlinear equations and inequalities, solvers often exploit pruning techniques, which remove redundant value combinations from the domains of variables, at pruning steps. To find the complete solution set, most of these solvers alternate the pruning steps with branching steps, which split each problem into subproblems. This forms the so-called branch-and-prune framework, well known among the approaches for solving numerical constraints. The basic branch-and-prune search strategy that uses domain bisections in place of the branching steps is called the bisection search. In general, the bisection search works well in case (i) the solutions are isolated, but it can be improved further in case (ii) there are continuums of solutions (this often occurs when inequalities are involved). In this paper, we propose a new branch-and-prune search strategy along with several variants, which not only allow yielding better branching decisions in the latter case, but also work as well as the bisection search does in the former case. These new search algorithms enable us to employ various pruning techniques in the construction of inner and outer approximations of the solution set. Our experiments show that these algorithms speed up the solving process often by one order of magnitude or more when solving problems with continuums of solutions, while keeping the same performance as the bisection search when the solutions are isolated.
[ { "version": "v1", "created": "Sun, 11 Dec 2005 19:47:42 GMT" }, { "version": "v2", "created": "Tue, 8 May 2007 22:59:01 GMT" } ]
1,179,878,400,000
[ [ "Vu", "Xuan-Ha", "" ], [ "Silaghi", "Marius-Calin", "" ], [ "Sam-Haroud", "Djamila", "" ], [ "Faltings", "Boi", "" ] ]
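The bisection search baseline described in the abstract above — alternate a pruning step that discards value combinations with a branching step that bisects the domain — can be sketched for one variable; the function names and the interval test below are an illustrative toy, far simpler than the paper's algorithms:

```python
def branch_and_prune(f_range, lo, hi, eps=1e-6):
    """Tiny branch-and-prune: keep bisecting boxes, discarding any box
    whose interval evaluation of f excludes zero (illustrative sketch)."""
    boxes, out = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        fa, fb = f_range(a, b)        # interval enclosure of f over [a, b]
        if fa > 0 or fb < 0:          # pruning step: no root possible
            continue
        if b - a < eps:               # small enough: report the enclosure
            out.append((a, b))
        else:                         # branching step: bisect the box
            m = (a + b) / 2
            boxes += [(a, m), (m, b)]
        # a real solver interleaves much stronger pruning operators here
    return out

# enclose the root of f(x) = x^2 - 2 on [0, 2]; on non-negative boxes
# the range of x^2 - 2 is simply [a^2 - 2, b^2 - 2]
sols = branch_and_prune(lambda a, b: (a * a - 2, b * b - 2), 0.0, 2.0)
```

The paper's contribution is precisely about doing better than this blind bisection when the solution set contains continuums rather than isolated points.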
cs/0512047
Florentin Smarandache
Jose L. Salmeron, Florentin Smarandache
Processing Uncertainty and Indeterminacy in Information Systems success mapping
13 pages, 2 figures
null
null
null
cs.AI
null
IS success is a complex concept, and its evaluation is complicated, unstructured and not readily quantifiable. Numerous scientific publications address the issue of success in the IS field as well as in other fields, but little effort has been devoted to processing indeterminacy and uncertainty in success research. This paper presents a formal method for mapping success using the Neutrosophic Success Map, an emerging tool for processing indeterminacy and uncertainty in success research. EIS success has been analyzed using this tool.
[ { "version": "v1", "created": "Tue, 13 Dec 2005 01:21:58 GMT" }, { "version": "v2", "created": "Thu, 15 Dec 2005 18:46:43 GMT" } ]
1,320,796,800,000
[ [ "Salmeron", "Jose L.", "" ], [ "Smarandache", "Florentin", "" ] ]
cs/0512099
Mark Burgin
Mark Burgin
Mathematical Models in Schema Theory
null
null
null
null
cs.AI
null
In this paper, a mathematical schema theory is developed. This theory has three roots: brain theory schemas, grid automata, and block-schemas. In Section 2 of this paper, elements of the theory of grid automata necessary for the mathematical schema theory are presented. In Section 3, elements of brain theory necessary for the mathematical schema theory are presented. In Section 4, other types of schemas are considered. In Section 5, the mathematical schema theory is developed. The achieved level of schema representation allows one to model by mathematical tools virtually any type of schema considered before, including schemas in neurophysiology, psychology, computer science, Internet technology, databases, logic, and mathematics.
[ { "version": "v1", "created": "Tue, 27 Dec 2005 21:29:16 GMT" } ]
1,179,878,400,000
[ [ "Burgin", "Mark", "" ] ]
cs/0601001
Jens Oehlschl\"agel
Jens Oehlschl\"agel
Truecluster: robust scalable clustering with model selection
Article (10 figures). Changes in 2nd version: dropped supplements in favor of better integrated presentation, better literature coverage, put into proper English. Author's website available via http://www.truecluster.com
null
null
null
cs.AI
null
Data-based classification is fundamental to most branches of science. While recent years have brought enormous progress in various areas of statistical computing and clustering, some general challenges in clustering remain: model selection, robustness, and scalability to large datasets. We consider the important problem of deciding on the optimal number of clusters, given an arbitrary definition of space and clusteriness. We show how to construct a cluster information criterion that allows objective model selection. Differing from other approaches, our truecluster method does not require specific assumptions about underlying distributions, dissimilarity definitions or cluster models. Truecluster puts arbitrary clustering algorithms into a generic unified (sampling-based) statistical framework. It is scalable to big datasets and provides robust cluster assignments and case-wise diagnostics. Truecluster will make clustering more objective, allow for automation, and save time and costs. Free R software is available.
[ { "version": "v1", "created": "Mon, 2 Jan 2006 13:17:09 GMT" }, { "version": "v2", "created": "Mon, 28 May 2007 17:18:09 GMT" } ]
1,181,692,800,000
[ [ "Oehlschlägel", "Jens", "" ] ]
cs/0601031
Marc Schoenauer
Marc Schoenauer (INRIA Futurs), Pierre Sav\'eant (TRT), Vincent Vidal (CRIL)
Divide-and-Evolve: a New Memetic Scheme for Domain-Independent Temporal Planning
null
Dans EvoCOP2006
null
null
cs.AI
null
An original approach, termed Divide-and-Evolve, is proposed to hybridize Evolutionary Algorithms (EAs) with Operational Research (OR) methods in the domain of Temporal Planning Problems (TPPs). Whereas standard Memetic Algorithms use local search methods to improve the evolutionary solutions, and thus fail when the local method stops working on the complete problem, the Divide-and-Evolve approach splits the problem at hand into several, hopefully easier, sub-problems, and can thus globally solve problems that are intractable when directly fed into deterministic OR algorithms. But the most prominent advantage of the Divide-and-Evolve approach is that it immediately opens up an avenue for multi-objective optimization, even though the OR method that is used is single-objective. A proof of concept on the standard (single-objective) Zeno transportation benchmark is given, and a small original multi-objective benchmark is proposed in the same Zeno framework to assess the multi-objective capabilities of the proposed methodology, a breakthrough in Temporal Planning.
[ { "version": "v1", "created": "Mon, 9 Jan 2006 16:57:08 GMT" } ]
1,471,305,600,000
[ [ "Schoenauer", "Marc", "", "INRIA Futurs" ], [ "Savéant", "Pierre", "", "TRT" ], [ "Vidal", "Vincent", "", "CRIL" ] ]
cs/0601052
Subhash Kak
Subhash Kak
Artificial and Biological Intelligence
16 pages
ACM Ubiquity, vol. 6, number 42, 2005, pp. 1-20
null
null
cs.AI
null
This article considers evidence from physical and biological sciences to show machines are deficient compared to biological systems at incorporating intelligence. Machines fall short on two counts: firstly, unlike brains, machines do not self-organize in a recursive manner; secondly, machines are based on classical logic, whereas Nature's intelligence may depend on quantum mechanics.
[ { "version": "v1", "created": "Fri, 13 Jan 2006 19:01:42 GMT" } ]
1,179,878,400,000
[ [ "Kak", "Subhash", "" ] ]
cs/0601109
Neil Yorke-Smith
Neil Yorke-Smith and Carmen Gervet
Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data
Revised version
ACM Transactions on Computational Logic, volume 10, number 1, article 3, 2009
10.1145/1459010.1459013
null
cs.AI
null
Constraint Programming (CP) has proved an effective paradigm to model and solve difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising due to incomplete and erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields such as reliable computation offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both model and solutions, to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approaches for uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms that give generic templates, and their associated operational semantics, to derive practical solution methods for reliable solutions.
[ { "version": "v1", "created": "Wed, 25 Jan 2006 20:11:11 GMT" }, { "version": "v2", "created": "Wed, 25 Jan 2006 21:33:44 GMT" }, { "version": "v3", "created": "Thu, 30 Nov 2006 16:12:03 GMT" } ]
1,533,686,400,000
[ [ "Yorke-Smith", "Neil", "" ], [ "Gervet", "Carmen", "" ] ]
cs/0602022
Marc Schoenauer
Alain Ratle (LMS), Mich\`ele Sebag (LMS)
Avoiding the Bloat with Stochastic Grammar-based Genetic Programming
null
null
null
null
cs.AI
null
The application of Genetic Programming to the discovery of empirical laws is often impaired by the huge size of the search space, and consequently by the computer resources needed. In many cases, the extreme demand for memory and CPU is due to the massive growth of non-coding segments, the introns. The paper presents a new program evolution framework which combines distribution-based evolution in the PBIL spirit with grammar-based genetic programming; the information is stored as a probability distribution on the grammar rules, rather than in a population. Experiments on a real-world-like problem show that this approach gives a practical solution to the problem of intron growth.
[ { "version": "v1", "created": "Tue, 7 Feb 2006 07:48:27 GMT" } ]
1,471,305,600,000
[ [ "Ratle", "Alain", "", "LMS" ], [ "Sebag", "Michèle", "", "LMS" ] ]
cs/0602031
Wit Jakuczun
Wit Jakuczun
Classifying Signals with Local Classifiers
null
null
null
null
cs.AI
null
This paper deals with the problem of classifying signals. A new method for building so-called local classifiers and local features is presented. The method is a combination of the lifting scheme and support vector machines. Its main aim is to produce effective and yet comprehensible classifiers that would help in understanding the processes hidden behind the classified signals. To illustrate the method, we present the results obtained on an artificial and a real dataset.
[ { "version": "v1", "created": "Wed, 8 Feb 2006 11:38:44 GMT" } ]
1,179,878,400,000
[ [ "Jakuczun", "Wit", "" ] ]
cs/0603025
Stijn Heymans
Stijn Heymans, Davy Van Nieuwenborgh and Dirk Vermeir
Open Answer Set Programming with Guarded Programs
51 pages, 1 figure, accepted for publication in ACM's TOCL
null
null
null
cs.AI
null
Open answer set programming (OASP) is an extension of answer set programming where one may ground a program with an arbitrary superset of the program's constants. We define a fixed point logic (FPL) extension of Clark's completion such that open answer sets correspond to models of FPL formulas and identify a syntactic subclass of programs, called (loosely) guarded programs. Whereas reasoning with general programs in OASP is undecidable, the FPL translation of (loosely) guarded programs falls in the decidable (loosely) guarded fixed point logic (mu(L)GF). Moreover, we reduce normal closed ASP to loosely guarded OASP, enabling for the first time, a characterization of an answer set semantics by muLGF formulas. We further extend the open answer set semantics for programs with generalized literals. Such generalized programs (gPs) have interesting properties, e.g., the ability to express infinity axioms. We restrict the syntax of gPs such that both rules and generalized literals are guarded. Via a translation to guarded fixed point logic, we deduce 2-exptime-completeness of satisfiability checking in such guarded gPs (GgPs). Bound GgPs are restricted GgPs with exptime-complete satisfiability checking, but still sufficiently expressive to optimally simulate computation tree logic (CTL). We translate Datalog lite programs to GgPs, establishing equivalence of GgPs under an open answer set semantics, alternation-free muGF, and Datalog lite.
[ { "version": "v1", "created": "Tue, 7 Mar 2006 17:54:59 GMT" }, { "version": "v2", "created": "Sun, 25 Feb 2007 12:32:24 GMT" } ]
1,179,878,400,000
[ [ "Heymans", "Stijn", "" ], [ "Van Nieuwenborgh", "Davy", "" ], [ "Vermeir", "Dirk", "" ] ]
cs/0603034
Ivan Jos\'e Varzinczak
Andreas Herzig and Ivan Varzinczak
Metatheory of actions: beyond consistency
null
null
null
null
cs.AI
null
Consistency check has been the only criterion for theory evaluation in logic-based approaches to reasoning about actions. This work goes beyond that and contributes to the metatheory of actions by investigating what other properties a good domain description in reasoning about actions should have. We state some metatheoretical postulates concerning this sore spot. When all postulates are satisfied together we have a modular action theory. Besides being easier to understand and more elaboration tolerant in McCarthy's sense, modular theories have interesting properties. We point out the problems that arise when the postulates about modularity are violated and propose algorithmic checks that can help the designer of an action theory to overcome them.
[ { "version": "v1", "created": "Thu, 9 Mar 2006 10:07:46 GMT" } ]
1,254,182,400,000
[ [ "Herzig", "Andreas", "" ], [ "Varzinczak", "Ivan", "" ] ]
cs/0603038
Patrik O. Hoyer
Patrik O. Hoyer, Shohei Shimizu, Antti J. Kerminen
Estimation of linear, non-gaussian causal models in the presence of confounding latent variables
8 pages, 4 figures, pdflatex
null
null
null
cs.AI
null
The estimation of linear causal models (also known as structural equation models) from data is a well-known problem which has received much attention in the past. Most previous work has, however, made an explicit or implicit assumption of gaussianity, limiting the identifiability of the models. We have recently shown (Shimizu et al, 2005; Hoyer et al, 2006) that for non-gaussian distributions the full causal model can be estimated in the no hidden variables case. In this contribution, we discuss the estimation of the model when confounding latent variables are present. Although in this case uniqueness is no longer guaranteed, there is at most a finite set of models which can fit the data. We develop an algorithm for estimating this set, and describe numerical simulations which confirm the theoretical arguments and demonstrate the practical viability of the approach. Full Matlab code is provided for all simulations.
[ { "version": "v1", "created": "Thu, 9 Mar 2006 14:46:18 GMT" }, { "version": "v2", "created": "Mon, 22 May 2006 17:02:14 GMT" } ]
1,179,878,400,000
[ [ "Hoyer", "Patrik O.", "" ], [ "Shimizu", "Shohei", "" ], [ "Kerminen", "Antti J.", "" ] ]
cs/0603081
Nikita Sakhanenko
Nikita A. Sakhanenko (1 and 2), George F. Luger (1), Hanna E. Makaruk (2), David B. Holtkamp (2) ((1) CS Dept. University of New Mexico, (2) Physics Div. Los Alamos National Laboratory)
Application of Support Vector Regression to Interpolation of Sparse Shock Physics Data Sets
13 pages, 7 figures
null
null
LA-UR-06-1739
cs.AI
null
Shock physics experiments are often complicated and expensive. As a result, researchers are unable to conduct as many experiments as they would like - leading to sparse data sets. In this paper, Support Vector Machines for regression are applied to velocimetry data sets for shock damaged and melted tin metal. Some success at interpolating between data sets is achieved. Implications for future work are discussed.
[ { "version": "v1", "created": "Mon, 20 Mar 2006 23:43:45 GMT" } ]
1,179,878,400,000
[ [ "Sakhanenko", "Nikita A.", "", "1 and 2" ], [ "Luger", "George F.", "" ], [ "Makaruk", "Hanna E.", "" ], [ "Holtkamp", "David B.", "" ] ]
cs/0603120
Zengyou He
Zengyou He
Approximation Algorithms for K-Modes Clustering
7 pages
null
null
Tr-06-0330
cs.AI
null
In this paper, we study clustering with respect to the k-modes objective function, a natural formulation of clustering for categorical data. One of the main contributions of this paper is to establish the connection between k-modes and k-median, i.e., the optimum of k-median is at most twice the optimum of k-modes for the same categorical data clustering problem. Based on this observation, we derive a deterministic algorithm that achieves an approximation factor of 2. Furthermore, we prove that the distance measure in k-modes defines a metric. Hence, we are able to extend existing approximation algorithms for metric k-median to k-modes. Empirical results verify the superiority of our method.
[ { "version": "v1", "created": "Thu, 30 Mar 2006 02:02:37 GMT" } ]
1,179,878,400,000
[ [ "He", "Zengyou", "" ] ]
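The two ingredients behind the k-modes objective discussed above — the simple-matching (Hamming) dissimilarity between categorical tuples, which the paper shows defines a metric, and the attribute-wise mode playing the role of a cluster centre — can be written down directly; the helper names below are illustrative, not the paper's code:

```python
from collections import Counter

def matching_dist(x, y):
    """Simple-matching dissimilarity: number of attributes that differ.
    This is the distance underlying the k-modes objective."""
    return sum(a != b for a, b in zip(x, y))

def mode_of(cluster):
    """Attribute-wise mode: the categorical analogue of a cluster mean."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))

def kmodes_cost(clusters):
    """k-modes objective: total distance from each point to its
    cluster's mode, summed over all clusters."""
    return sum(matching_dist(p, mode_of(c)) for c in clusters for p in c)
```

The paper's 2-approximation result relates the optimum of this objective to that of k-median over the same metric, so metric k-median algorithms carry over to k-modes.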
cs/0604009
Alexey Melkikh
Alexey V. Melkikh
Can an Organism Adapt Itself to Unforeseen Circumstances?
null
null
null
null
cs.AI
null
A model of an organism as an autonomous intelligent system has been proposed. This model was used to analyze the learning of an organism in various environmental conditions. Processes of learning were divided into two types: strong and weak processes, taking place in the absence and the presence of aprioristic information about an object, respectively. Weak learning is synonymous with adaptation, in which aprioristic programs already available in a system (an organism) are started. It was shown that strong learning is impossible for both an organism and any autonomous intelligent system. It was also shown that the knowledge base of an organism cannot be updated; therefore, all behavior programs of an organism are congenital. A model of a conditioned reflex as a series of consecutive measurements of environmental parameters has been advanced. Repeated measurements are necessary in this case to reduce the error during decision making.
[ { "version": "v1", "created": "Wed, 5 Apr 2006 10:29:28 GMT" } ]
1,179,878,400,000
[ [ "Melkikh", "Alexey V.", "" ] ]
cs/0604042
Florentin Smarandache
M. C. Florea, J. Dezert, P. Valin, F. Smarandache, Anne-Laure Jousselme
Adaptative combination rule and proportional conflict redistribution rule for information fusion
Presented at Cogis '06 Conference, Paris, March 2006
null
null
null
cs.AI
null
This paper presents two new promising rules of combination for the fusion of uncertain and potentially highly conflicting sources of evidence in the framework of the theory of belief functions, in order to palliate the well-known limitations of Dempster's rule and to work beyond the limits of applicability of the Dempster-Shafer theory. We present both a new class of adaptive combination rules (ACR) and a new efficient Proportional Conflict Redistribution (PCR) rule, allowing one to deal with highly conflicting sources for static and dynamic fusion applications.
[ { "version": "v1", "created": "Tue, 11 Apr 2006 14:35:15 GMT" } ]
1,179,878,400,000
[ [ "Florea", "M. C.", "" ], [ "Dezert", "J.", "" ], [ "Valin", "P.", "" ], [ "Smarandache", "F.", "" ], [ "Jousselme", "Anne-Laure", "" ] ]
cs/0604070
Yongzhi Cao
Yongzhi Cao, Mingsheng Ying, and Guoqing Chen
Retraction and Generalized Extension of Computing with Words
13 double column pages; 3 figures; to be published in the IEEE Transactions on Fuzzy Systems
IEEE Transactions on Fuzzy Systems, vol. 15(6): 1238-1250, Dec. 2007
10.1109/TED.2007.893191
null
cs.AI
null
Fuzzy automata, whose input alphabet is a set of numbers or symbols, are a formal model of computing with values. Motivated by Zadeh's paradigm of computing with words rather than numbers, Ying proposed a kind of fuzzy automata, whose input alphabet consists of all fuzzy subsets of a set of symbols, as a formal model of computing with all words. In this paper, we introduce a somewhat general formal model of computing with (some special) words. The new features of the model are that the input alphabet only comprises some (not necessarily all) fuzzy subsets of a set of symbols and the fuzzy transition function can be specified arbitrarily. By employing the methodology of fuzzy control, we establish a retraction principle from computing with words to computing with values for handling crisp inputs and a generalized extension principle from computing with words to computing with all words for handling fuzzy inputs. These principles show that computing with values and computing with all words can be respectively implemented by computing with words. Some algebraic properties of retractions and generalized extensions are addressed as well.
[ { "version": "v1", "created": "Wed, 19 Apr 2006 06:28:55 GMT" }, { "version": "v2", "created": "Tue, 28 Nov 2006 01:56:51 GMT" } ]
1,435,190,400,000
[ [ "Cao", "Yongzhi", "" ], [ "Ying", "Mingsheng", "" ], [ "Chen", "Guoqing", "" ] ]
cs/0604086
Michael Fink
Thomas Eiter, Michael Fink, and Hans Tompits
A Knowledge-Based Approach for Selecting Information Sources
53 pages, 2 Figures; to appear in Theory and Practice of Logic Programming (TPLP)
null
null
null
cs.AI
null
Through the Internet and the World-Wide Web, a vast number of information sources have become available, which offer information on various subjects by different providers, often in heterogeneous formats. This calls for tools and methods for building an advanced information-processing infrastructure. One issue in this area is the selection of suitable information sources in query answering. In this paper, we present a knowledge-based approach to this problem, in the setting where one among a set of information sources (prototypically, data repositories) should be selected for evaluating a user query. We use extended logic programs (ELPs) to represent rich descriptions of the information sources, an underlying domain theory, and user queries in a formal query language (here, XML-QL, but other languages can be handled as well). Moreover, we use ELPs for declarative query analysis and generation of a query description. Central to our approach are declarative source-selection programs, for which we define syntax and semantics. Due to the structured nature of the considered data items, the semantics of such programs must carefully respect implicit context information in source-selection rules, and furthermore combine it with possible user preferences. A prototype implementation of our approach has been realized exploiting the DLV KR system and its plp front-end for prioritized ELPs. We describe a representative example involving specific movie databases, and report on experimental results.
[ { "version": "v1", "created": "Fri, 21 Apr 2006 16:53:28 GMT" } ]
1,179,878,400,000
[ [ "Eiter", "Thomas", "" ], [ "Fink", "Michael", "" ], [ "Tompits", "Hans", "" ] ]
cs/0605012
Martin Loetzsch
L. Steels, M. Loetzsch
Perspective alignment in spatial language
to appear in: K. Coventry, J. Bateman, and T. Tenbrink (eds.) Spatial Language in Dialogue. Oxford University Press, 2008
null
null
null
cs.AI
null
It is well known that perspective alignment plays a major role in the planning and interpretation of spatial language. In order to understand the role of perspective alignment and the cognitive processes involved, we have built precise, complete cognitive models of situated embodied agents that self-organise a communication system for dialoguing about the position and movement of real-world objects in their immediate surroundings. We show in a series of robotic experiments which cognitive mechanisms are necessary and sufficient to achieve successful spatial language, and why and how perspective alignment can take place, either implicitly or based on explicit marking.
[ { "version": "v1", "created": "Thu, 4 May 2006 17:16:02 GMT" }, { "version": "v2", "created": "Wed, 13 Feb 2008 13:43:13 GMT" } ]
1,202,860,800,000
[ [ "Steels", "L.", "" ], [ "Loetzsch", "M.", "" ] ]
cs/0605017
Tran Cao Son
Phan Huy Tu, Tran Cao Son, and Chitta Baral
Reasoning and Planning with Sensing Actions, Incomplete Information, and Static Causal Laws using Answer Set Programming
72 pages, 3 figures, a preliminary version of this paper appeared in the proceedings of the 7th International Conference on Logic Programming and Non-Monotonic Reasoning, 2004. To appear in Theory and Practice of Logic Programming
null
null
null
cs.AI
null
We extend the 0-approximation of sensing actions and incomplete information in [Son and Baral 2000] to action theories with static causal laws and prove its soundness with respect to the possible world semantics. We also show that the conditional planning problem with respect to this approximation is NP-complete. We then present an answer set programming based conditional planner, called ASCP, that is capable of generating both conformant plans and conditional plans in the presence of sensing actions, incomplete information about the initial state, and static causal laws. We prove the correctness of our implementation and argue that our planner is sound and complete with respect to the proposed approximation. Finally, we present experimental results comparing ASCP to other planners.
[ { "version": "v1", "created": "Thu, 4 May 2006 22:35:12 GMT" } ]
1,179,878,400,000
[ [ "Tu", "Phan Huy", "" ], [ "Son", "Tran Cao", "" ], [ "Baral", "Chitta", "" ] ]