FileName | Abstract | Title |
---|---|---|
S1568494614005298 | This paper presents an innovative solution for modeling distributed adaptive systems in biomedical environments. We present an original TCBR-HMM (Text Case Based Reasoning-Hidden Markov Model) for biomedical text classification based on document content. The main goal is to propose a more effective classifier than current methods in this environment, where the model needs to be adapted to new documents in an iterative learning frame. To demonstrate its effectiveness, we include a set of experiments performed on the OHSUMED corpus. Our classifier is compared with Naive Bayes and SVM techniques, commonly used in text classification tasks. The results suggest that the TCBR-HMM model is indeed more suitable for document classification. The model is empirically and statistically comparable to the SVM classifier and outperforms it in terms of time efficiency. | TCBR-HMM: An HMM-based text classifier with a CBR system |
S1568494614005304 | The artificial bee colony (ABC) algorithm was introduced for solving numerical optimization problems, inspired by the collective behavior of honey bee colonies. The ABC algorithm has three phases, named the employed bee, onlooker bee and scout bee phases. In the ABC model, only one design parameter of the optimization problem is updated by the artificial bees in each phase, through interaction among the bees. This update scheme causes slow convergence to the global or a near-global optimum. To accelerate convergence, a control parameter (the modification rate, MR) has been proposed for ABC, but that approach is based on updating more than one design parameter. In this study, we instead add directional information to the ABC algorithm. The performance of the proposed approach was examined on nine well-known numerical benchmark functions, and the obtained results are compared with the basic ABC and ABC with MR. The experimental results show that the proposed approach is a very effective method for solving numerical benchmark functions and is successful in terms of solution quality, robustness and convergence to the global optimum. | A directed artificial bee colony algorithm |
S1568494614005316 | A number of metrics have been proposed in the literature to measure text re-use between pairs of sentences or short passages. These individual metrics fail to reliably detect paraphrasing or semantic equivalence between sentences, due to the subjectivity and complexity of the task, even for human beings. This paper analyzes a set of five simple but weak lexical metrics for measuring textual similarity and presents a novel paraphrase detector with improved accuracy based on abductive machine learning. The objective here is twofold. First, the performance of each individual metric is boosted through the abductive learning paradigm. Second, we investigate the use of decision-level and feature-level information fusion via abductive networks to obtain a more reliable composite metric for additional performance enhancement. Several experiments were conducted using two benchmark corpora, and the optimal abductive models were compared with other approaches. Results demonstrate that applying abductive learning significantly improved the results of individual metrics, and further improvement was achieved through fusion. Moreover, building simple models of polynomial functional elements that identify and integrate the smallest subset of relevant metrics yielded better results than those obtained from support vector machine classifiers utilizing the same datasets and metrics. The results were also comparable to the best result reported in the literature, even though that result used a larger number of more powerful features and/or more computationally intensive techniques. | Boosting paraphrase detection through textual similarity metrics with abductive networks |
S1568494614005328 | Accurate holiday daily tourist flow forecasting is always the most important issue in the tourism industry. However, holiday daily tourist flow demonstrates a complex nonlinear characteristic and an obvious seasonal tendency, arising from the different periods of holidays as well as the seasonal nature of climates. Support vector regression (SVR) has been widely applied to nonlinear time series forecasting problems, but it suffers from critical parameter selection and the influence of seasonal tendency. This article proposes an approach which hybridizes the SVR model with an adaptive genetic algorithm (AGA) and seasonal index adjustment, namely AGA-SSVR, to forecast holiday daily tourist flow. Holiday daily tourist flow data from 2008 to 2012 for Mountain Huangshan in China are employed as numerical examples to validate the performance of the proposed model. The experimental results indicate that the AGA-SSVR model is an effective approach with higher accuracy than the alternative models, including AGA-SVR and a back-propagation neural network (BPNN). | Forecasting holiday daily tourist flow based on seasonal support vector regression with adaptive genetic algorithm |
S156849461400533X | Recently, a novel probabilistic model-building evolutionary algorithm (a so-called estimation of distribution algorithm, or EDA), named probabilistic model building genetic network programming (PMBGNP), has been proposed. PMBGNP uses graph structures for its individual representation, which gives it higher expressive ability than classical EDAs and extends EDAs to a wider range of problems, such as data mining and agent control. This paper proposes a continuous version of PMBGNP for continuous optimization in agent control problems. Unlike other continuous EDAs, the proposed algorithm evolves the continuous variables by reinforcement learning (RL). We compare its performance with several state-of-the-art algorithms on a real mobile robot control problem. The results show that the proposed algorithm outperforms the others with statistically significant differences. | Continuous probabilistic model building genetic network programming using reinforcement learning |
S1568494614005341 | Energy has traditionally been a pressing issue for mankind; this has changed with the advent of clean technologies such as wind power. It is common knowledge that wind turbines need to be installed in an open, unobstructed area to obtain the maximal power output. This paper addresses the optimization of the layout of large wind farms using nature-inspired algorithms, with particular reference to the firefly algorithm. A comparison is made with past approaches based on spreadsheets and genetic algorithms (GAs) for optimization. | Wind turbine micrositing by using the firefly algorithm |
S1568494614005353 | The objective of this study is to develop a genetic programming (GP) based model to predict the constant amplitude fatigue crack propagation life of 2024 T3 aluminum alloys under load ratio effects, based on experimental data, and to compare the results with an earlier proposed ANN model. The results show that genetic programming can effectively interpret fatigue crack growth rate data and can model the fatigue life of the material system under investigation more efficiently than the ANN model. (A nomenclature list defines the crack lengths, specimen dimensions, crack growth rate, applied load, geometrical factor, stress intensity factors, load ratio, and the cycle counts and prediction ratios for the experimental, ANN and GP results.) | Prediction of constant amplitude fatigue crack growth life of 2024 T3 Al alloy with R-ratio effect by GP |
S1568494614005365 | In this paper, we present a new and effective edge detection scheme based on least squares support vector machine (LS-SVM) classification in a contourlet Hidden Markov Tree model (contourlet HMT). First, the input noisy image is decomposed into coarser and finer coefficients using a contourlet HMT transform to derive an efficient multiscale and multidirectional image representation. Second, the feature vector is constructed from spatial regularity in the contourlet HMT domain, and the coarser coefficients are classified by the LS-SVM classifier into two classes: noise coefficients and edge coefficients. Next, all noisy contourlet HMT coefficients are denoised by the BayesShrink method. Finally, the denoised coefficients and edge coefficients are fused using the weighted average rule, and the inverse contourlet HMT is applied to obtain the edge image. Experimental results demonstrate that our scheme attains improved performance over state-of-the-art edge detection approaches, both qualitatively and quantitatively. Tests were performed on several images from the Berkeley dataset corrupted with Gaussian noise and on other images such as the cameraman, pepper and medical images. The results illustrate that the performance of the proposed scheme is stable. | An edge detection scheme based on least squares support vector machine in a contourlet HMT domain |
S1568494614005377 | Particle swarm optimisation (PSO) is a well-established optimisation algorithm inspired by the flocking behaviour of birds. A major problem with PSO is premature convergence: in complex optimisation problems, it may easily become trapped in local optima. In this paper, a new PSO variant, named enhanced leader PSO (ELPSO), is proposed for mitigating the premature convergence problem. ELPSO is mainly based on a five-staged successive mutation strategy applied to the swarm leader at each iteration. The experimental results confirm that ELPSO performs well in terms of accuracy, scalability and convergence rate. | Enhanced leader PSO (ELPSO): A new PSO variant for solving global optimisation problems |
S1568494614005389 | Nowadays, software designers attempt to employ design patterns in the design phase, but these patterns may not be carried through to the implementation phase. One challenging issue is therefore conformance checking between source code and design, i.e., design patterns. In addition, after a system is developed, its documents are usually not maintained, so identifying design patterns in the source code can help recover the design of an existing system as a reverse engineering task. The variant implementations (i.e., different source codes) of a design pattern make it hard to detect design pattern instances in the source code. To address this issue, we propose a new method which maps the design pattern detection problem into a learning problem. The proposed design pattern detector is built by learning from information extracted from design pattern instances, which normally include variant implementations. To evaluate the proposed method, we applied it to open source code to detect six different design patterns. The experimental results show that the proposed method is promising and effective. | Source code and design conformance, design pattern detection from source code by classification approach |
S1568494614005407 | Recently, considerable attention has focused on compound sequence classification methods which integrate multiple data mining techniques. Among these methods, sequential pattern mining (SPM) based sequence classifiers are considered to be efficient for solving complex sequence classification problems. Although previous studies have demonstrated the strength of SPM-based sequence classification methods, the challenges of pattern redundancy, inappropriate sequence similarity measures, and hard-to-classify sequences remain unsolved. This paper proposes an efficient two-stage SPM-based sequence classification method to address these three problems. In the first stage, during the sequential pattern mining process, redundant sequential patterns are identified if the pattern is a sub-sequence of other sequential patterns. A list of compact sequential patterns is generated excluding redundant patterns and used as representative features for the second stage. In the second stage, a sequence similarity measurement is used to evaluate partial similarity between sequences and patterns. Finally, a particle swarm optimization-AdaBoost (PSO-AB) sequence classifier is developed to improve sequence classification accuracy. In the PSO-AB sequence classifier, the PSO algorithm is used to optimize the weights in the individual sequence classifier, while the AdaBoost strategy is used to adaptively change the distribution of patterns that are hard to classify. The experiments show that the proposed two-stage SPM-based sequence classification method is efficient and superior to other approaches. | A PSO-AB classifier for solving sequence classification problems |
S1568494614005419 | A class of compensative weighted averaging (CWA) aggregation operators, having a dedicated parameter to model compensation, is presented. The variants of CWA operator with ordered weighted averaging (OWA) operator are developed. The proposed operators are compared, both theoretically and empirically, with other operators in terms of their compensation abilities. Two approaches are proposed to learn the compensation parameter and the weight vector from the given data. The proposed learning approaches are applied in four case-studies, involving real experimental data. The usefulness of CWA operators in strategic multi-criteria decision making and supplier selection is also highlighted. | Compensative weighted averaging aggregation operators |
S1568494614005420 | The multidimensional knapsack problem (MKP) is a combinatorial optimization problem belonging to the class of NP-hard problems. This study proposes a novel self-adaptive check and repair operator (SACRO) combined with particle swarm optimization (PSO) to solve the MKP. The traditional check and repair operator (CRO) uses a single pseudo-utility ratio, whereas SACRO dynamically and automatically changes the alternative pseudo-utility ratio as the PSO algorithm runs. Two existing PSO algorithms are used as the foundation for the novel SACRO methods; the proposed SACRO-based algorithms were tested using 137 benchmark problems from the OR-Library to validate and demonstrate the efficiency of the SACRO idea. The results were compared with those of other population-based algorithms. Simulation and evaluation results show that SACRO is more competitive and robust than the traditional CRO, and that the proposed SACRO-based algorithms rival state-of-the-art PSO variants and other algorithms. Therefore, switching among different types of pseudo-utility ratios produces better solutions for the MKP. Moreover, SACRO can be combined with other population-based optimization algorithms to solve constrained optimization problems. | Self-adaptive check and repair operator-based particle swarm optimization for the multidimensional knapsack problem |
S1568494614005432 | Central force optimization (CFO) is an efficient and powerful population-based intelligence algorithm for optimization problems. CFO is deterministic in nature, unlike the most widely used metaheuristics. CFO, however, is not completely free from the problems of premature convergence. One way to overcome local optimality is to utilize the multi-start strategy. By combining the respective advantages of CFO and the multi-start strategy, a multi-start central force optimization (MCFO) algorithm is proposed in this paper. The performance of the MCFO approach is evaluated on a comprehensive set of benchmark functions. The experimental results demonstrate that MCFO not only saves the computational cost, but also performs better than some state-of-the-art CFO algorithms. MCFO is also compared with representative evolutionary algorithms. The results show that MCFO is highly competitive, achieving promising performance. | A multi-start central force optimization for global optimization |
S1568494614005456 | To better predict the costs, schedule, and risks of a software project, it is necessary to predict its development effort more accurately. Among the main prediction techniques are those based on mathematical models, such as statistical regression or machine learning (ML). ML models applied to development effort prediction have mainly suffered from the following weaknesses: (1) using an accuracy criterion which leads to asymmetry; (2) applying a validation method that causes conclusion instability by randomly selecting the samples for training and testing the models; (3) omitting the explanation of how the parameters for the neural networks were determined; (4) generating conclusions from models that were not trained and tested on mutually exclusive data sets; (5) omitting an analysis of the dependence, variance and normality of the data when selecting the suitable statistical test for comparing accuracies among models; and (6) reporting results without showing a statistically significant difference. In this study, these six issues are addressed when comparing the prediction accuracy of a radial basis function neural network (RBFNN) with that of a statistical regression model (the model most frequently compared with ML models), a feedforward multilayer perceptron (MLP, the model most commonly used in effort prediction of software projects), and a general regression neural network (GRNN, an RBFNN variant). The hypothesis tested is the following: the accuracy of effort prediction for RBFNN is statistically better than that obtained from a simple linear regression (SLR), MLP and GRNN when adjusted function points data, obtained from software projects, is used as the independent variable. Samples obtained from the International Software Benchmarking Standards Group (ISBSG) Release 11, related to new and enhanced projects, were used. The models were trained and tested using a leave-one-out cross-validation method. The criteria for evaluating the models were based on absolute residuals and a Friedman statistical test. The results showed a statistically significant difference in accuracy among the four models for new projects, but not for enhanced projects. Regarding new projects, the accuracy of RBFNN was better than that of SLR at the 99% confidence level, whereas MLP and GRNN were better than SLR at the 90% confidence level. | Predictive accuracy comparison between neural networks and statistical regression for development effort of software projects |
S1568494614005468 | In this paper, a software sensor based on a genetic algorithm-based neural fuzzy system (GA-NFS) is proposed for real-time estimation of nutrient concentrations in a biological wastewater treatment process. To improve the network performance, a self-adapting fuzzy c-means clustering algorithm and a genetic algorithm were employed to extract and optimize the structure of the network. The GA-NFS was applied to a biological wastewater treatment process for nutrient removal. The simulation results indicate that the model has good learning and generalization ability and also worked well for a normal batch (i.e., two data points). Real-time estimation of COD, NO₃⁻ and PO₄³⁻ concentrations based on GA-NFS functioned effectively using only simple on-line information from the anoxic/oxic system. | A sensor-software based on a genetic algorithm-based neural fuzzy system for modeling and simulating a wastewater treatment process |
S156849461400547X | Bibliometrics is a discipline that analyzes bibliographic material from a quantitative perspective. It is very useful for classifying information according to different variables, including journals, institutions and countries. This paper presents a general overview of research in the fuzzy sciences using bibliometric indicators. The main advantage is that these indicators provide a general picture, identifying some of the most influential research in this area. The analysis is divided into key sections focused on relevant journals, papers, authors, institutions and countries. Most of the results are in accordance with our common knowledge, although some unexpected results are also found. Note that the aim of this paper is to be informative, and these indicators identify most of the fundamental research in this field. However, some very influential issues may be omitted if they are not included in the Web of Science database, which is used for carrying out the bibliometric analysis. | An overview of fuzzy research with bibliometric indicators |
S1568494614005481 | Properly designing an artificial neural network is very important for achieving optimal performance. This study utilizes such a network architecture, together with Taylor polynomials, to obtain approximate solutions of systems of linear Volterra integral equations of the second kind. For this purpose, we first substitute the Nth truncation of the Taylor expansion for the unknown functions in the original system. We then apply the suggested neural network to adjust the numerical coefficients of the expansions in the resulting system. Consequently, the reported architecture, using a learning algorithm based on gradient descent, adjusts the coefficients of the given Taylor series. The proposed method is illustrated by several examples with computer simulations, and performance comparisons with other developed methods were made. The comparative experimental results show that this approach is more effective and robust. | Artificial neural networks based modeling for solving Volterra integral equations system |
S1568494614005493 | Classification is an important issue in data mining and knowledge discovery, and developing an effective, easy-to-use approach to rule extraction for classification is a significant problem. A new approach to rule extraction based on features of attributes is proposed in this article for word sense disambiguation (WSD). The English preposition "on" is taken as the target word, and a data set of 600 samples is randomly selected from a 350,000-word corpus. Semantic and syntactic features are extracted from the context, and the corresponding formal context is generated. The rules for WSD of the English preposition "on" are extracted based on theoretical descriptions and calculation of the simple class exclusive attributes and composite class exclusive attributes. The extracted rules are applied to WSD of "on", and the accuracy reaches 93.2%. The results of the comparative analysis show that the proposed feature-of-attributes approach is simpler, more effective and easier to use than the existing well-formed structural partial ordered attribute diagram approach. (A nomenclature list defines the senses of "on" (time, place, neither); the formal context with its objects, attributes and relations; concept extents and intents; decision and non-decision attribute sets; mutual information and co-occurrence probabilities; and the structural partial ordered attribute diagram.) | A new approach of rules extraction for word sense disambiguation by features of attributes |
S156849461400550X | Evolutionary algorithms start with an initial population vector, which is randomly generated when no preliminary knowledge about the solution is available. Recently, it has been claimed that in solving continuous domain optimization problems, the simultaneous consideration of randomness and opposition is more effective than pure randomness. In this paper it is mathematically proven that this scheme, called opposition-based learning, also does well in binary spaces. The proposed binary opposition-based scheme can be embedded inside many binary population-based algorithms. We applied it to accelerate the convergence rate of binary gravitational search algorithm (BGSA) as an application. The experimental results and mathematical proofs confirm each other. | Opposition versus randomness in binary spaces |
S1568494614005511 | In this paper, we construct the probability sum (PS) function and the proportional distribution rules of the membership and non-membership functions of intuitionistic fuzzy sets (IFSs), and give their corresponding geometric interpretations. Based on these, we present the neutrality operation and the scalar neutrality operation on intuitionistic fuzzy numbers (IFNs). We propose the intuitionistic fuzzy weighted neutral averaging (IFWNA) operator and the intuitionistic fuzzy ordered weighted neutral averaging (IFOWNA) operator, and investigate their properties. The principal advantage of the proposed operators is that both the attitude of the decision makers and the interactions between different IFNs are considered. Furthermore, approaches to multi-criteria decision making based on the proposed IFWNA and IFOWNA operators are given. Finally, an example illustrates the feasibility and validity of the new approaches in decision making applications. | Multi-attribute decision making based on neutral averaging operators for intuitionistic fuzzy information |
S1568494614005523 | This paper deals with the problem of channel equalization in digital communication systems. In the literature, artificial neural networks (ANNs) have been increasingly used for this problem; however, traditional ANN training methods fall short of the desired performance in equalization. In this paper, we apply a recently proposed training method, which uses directed search optimization (DSO) as a trainer for neural networks, to the problem of nonlinear channel equalization, thereby introducing a novel strategy for equalizing nonlinear channels. The proposed method of channel equalization performs better than contemporary equalization methods from the literature, as evident from the extensive simulation results presented in this paper. | A new training scheme for neural networks and application in non-linear channel equalization |
S1568494614005535 | Preventing GSM subscribers from moving to another operator is an important and crucial issue for GSM operators, and churn management is essential for distinguishing loyal subscribers from those likely to churn. The ability to keep one's current GSM number when changing operators also makes it easier for subscribers to switch. The Euclidean Indexing High Dimensional Model Representation (HDMR) method, a polynomial-based modeling method, is used to predict the churn behavior of GSM subscribers. An up-to-date data set consisting of demographic information and call detail records, together with the related churn behavior, is used to model the churner detection problem. The proposed method uses 640 randomly selected training nodes for the modeling process, while 316 nodes are used to examine the performance of the proposed method and to make comparisons with data mining techniques. | Detecting GSM churners by using Euclidean Indexing HDMR |
S1568494614005547 | Accelerated life testing (ALT) of a field programmable gate array (FPGA) requires it to be configured with a circuit that satisfies multiple criteria. Hand-crafting such a circuit is a herculean task, as many components of the criteria are orthogonal to each other, demanding a complex multivariate optimization. This paper presents an evolutionary algorithm aided by particle swarm optimization to generate synthetic benchmark circuits (SBCs) that can be used for ALT of FPGAs. The proposed algorithm was used to generate an SBC for ALT of a commercial FPGA. The generated SBC, when compared with a hand-crafted one, was demonstrated to be more suitable for ALT, measured in terms of meeting the multiple criteria. The SBC generated by the proposed technique utilizes 8.37% more resources, operates at a maximum frequency 40% higher, and has 7.75% higher switching activity than the hand-crafted one reported in the literature. The hand-crafted circuit is specific to a particular device of that family of FPGAs, whereas the proposed algorithm is device-independent. In addition, it took several man-months to hand-craft the SBC, whereas the proposed algorithm took less than half a day. | Generating synthetic benchmark circuits for accelerated life testing of field programmable gate arrays using genetic algorithm and particle swarm optimization |
S1568494614005559 | This paper presents a new variant of the Differential Evolution (DE) algorithm called Sinusoidal Differential Evolution (SinDE). The key idea of SinDE is the use of new sinusoidal formulas to automatically adjust the values of the main DE parameters: the scaling factor and the crossover rate. The objective of the proposed sinusoidal formulas is to seek a good balance between the exploration of unvisited regions of the search space and the exploitation of already found good solutions. On the recently proposed CEC-2013 set of benchmark functions, the proposed approach is statistically compared with classical DE, a linearly parameter-adjusting DE, and 10 other state-of-the-art metaheuristics. The obtained results demonstrate the superiority of the proposed SinDE; it outperformed the other approaches, especially on multimodal and composition functions. | A sinusoidal differential evolution algorithm for numerical optimisation |
S1568494614005560 | This paper discusses the Single Source Capacitated Facility Location Problem (SSCFLP) where the problem consists in determining a subset of capacitated facilities to be opened in order to satisfy the customers’ demands such that total costs are minimized. The paper presents an iterated tabu search heuristic to solve the SSCFLP. The iterated tabu search heuristic combines tabu search with perturbation operators to avoid getting stuck in local optima. Experimental results show that this simple heuristic is capable of generating high quality solutions using small computing times. Moreover, it is also competitive with other metaheuristic approaches for solving the SSCFLP. | An iterated tabu search heuristic for the Single Source Capacitated Facility Location Problem |
S1568494614005572 | This article presents a survey of genetic algorithms that are designed for solving the multi depot vehicle routing problem. In this context, most of the articles focus on the different genetic approaches, methods and operators commonly used in practical applications to solve this well-known and well-researched problem. Besides providing an up-to-date overview of the research in the field, the results of a thorough experiment are presented and discussed, which evaluated in detail the efficiency of different existing genetic methods on standard benchmark problems. In this manner, insights into the strengths and weaknesses of specific methods, operators and settings are presented, which should help researchers and practitioners optimize their solutions in further studies of similar problems. Finally, genetic algorithm based solutions are compared with other existing approaches, both exact and heuristic, for solving the same problem. | A survey of genetic algorithms for solving multi depot vehicle routing problem |
S1568494614005584 | The development of algorithms for tackling continuous optimization problems has been one of the most active research topics in soft computing in the last decades. It led to many high performing algorithms from areas such as evolutionary computation or swarm intelligence. These developments have been side-lined by an increasing effort of benchmarking algorithms using various benchmarking sets proposed by different researchers. In this article, we explore the interaction between benchmark sets, algorithm tuning, and algorithm performance. To do so, we compare the performance of seven proven high-performing continuous optimizers on two different benchmark sets: the functions of the special session on real-parameter optimization from the IEEE 2005 Congress on Evolutionary Computation and the functions used for a recent special issue of the Soft Computing journal on large-scale optimization. While one conclusion of our experiments is that automatic algorithm tuning improves the performance of the tested continuous optimizers, our main conclusion is that the choice of the benchmark set has a much larger impact on the ranking of the compared optimizers. This latter conclusion is true whether one uses default or tuned parameter settings. | Performance evaluation of automatically tuned continuous optimizers on different benchmark sets |
S1568494614005596 | Conventional controllers suffer from the uncertain parameters and non-linear characteristics of the Quasi-Z Source converter. Moreover, conventional tuning methods are computationally inefficient when extended to optimizing the fuzzy controller parameters, since they exhaustively search for the optimal values of the objective functions. To overcome this drawback, a PSO based fuzzy controller parameter optimization is presented in this paper. The PSO algorithm is used to find the optimal fuzzy parameters that minimize the objective functions. The feasibility of the proposed PSO technique has been simulated and tested. The results are benchmarked against a conventional fuzzy controller and a genetic algorithm for two types of DC/DC converters, namely the double-input Z-Source converter and the Quasi-Z Source converter. The results of both DC/DC converters for several existing methods illustrate the effectiveness and robustness of the proposed algorithm. | Optimal fuzzy controller parameters using PSO for speed control of Quasi-Z Source DC/DC converter fed drive
S1568494614005687 | In this paper, we explore the application of Opt-AiNet, an immune network approach for search and optimisation problems, to learning qualitative models in the form of qualitative differential equations. The Opt-AiNet algorithm is adapted to qualitative model learning problems, resulting in the proposed system QML-AiNet. The potential of QML-AiNet to address the scalability and multimodal search space issues of qualitative model learning has been investigated. Furthermore, to improve the efficiency of QML-AiNet, we also modify the mutation operator according to the features of the discrete qualitative model space. Experimental results show that the performance of QML-AiNet is comparable to QML-CLONALG, a QML system using the clonal selection algorithm (CLONALG). More importantly, QML-AiNet with the modified mutation operator can significantly improve the scalability of QML and is much more efficient than QML-CLONALG. | QML-AiNet: An immune network approach to learning qualitative differential equation models
S1568494614005699 | Cloud computing enables many applications of Web services and rekindles the interest in providing ERP services via the Internet. It has the potential to reshape the way IT services are consumed. Recent research indicates that ERP delivered through SaaS will outperform traditional IT offerings. However, distributing a service is more complicated than distributing a product because of the immateriality, the integration and the one-shot principle of services. This paper defines a CloudERP platform on which enterprise customers can select web services and customize a unique ERP system to meet their specific needs. CloudERP aims to provide enterprise users with the flexibility of renting an entire ERP service through multiple vendors. This paper also addresses the challenge of composing web services and proposes a web-based solution for automating the ERP service customization process. The proposed service composition method builds on the genetic algorithm concept and incorporates knowledge of web services extracted from the web service platform using rough set theory. A system prototype was built on the Google App Engine platform to verify the proposed composition process. Based on experimental results from running the prototype, the composition method works effectively and has great potential for supporting a fully functional CloudERP platform. | A cloud computing platform for ERP applications
S1568494614005705 | TrackGen is an online tool for the generation of tracks for two open-source 3D car racing games (TORCS and Speed Dreams). It integrates interactive evolution with procedural content generation and comprises two components: (i) a web frontend that maintains the database of all the evolved populations and manages the interaction with users (by collecting users' evaluations and providing access to the evolved tracks) and (ii) an evolutionary/content-generation backend that runs the evolutionary algorithm and generates the actual game content that is made available through the web frontend. The first prototype of the tool was presented in July 2011 but advertised only to researchers; the first official version, which generated tracks only for TORCS, was released to the game community in September 2011; due to the many requests, we released a new version soon afterwards, in January 2012, with support for Speed Dreams (the fork of TORCS focused on visual realism and graphic quality), which has been online since then. From January 2012 until July 2014, TrackGen had more than 7600 unique visitors who visited the website around 11,500 times and viewed 85,500 pages; it was employed to evolve more than 8853 tracks, and 12,218 tracks were downloaded. Some of the tracks evolved by our system have also been included in the TORCS distribution. | TrackGen: An interactive track generator for TORCS and Speed-Dreams
S1568494614005717 | The damage analysis of coastal structures is essential for better and safer design of these structures. In the past, several researchers have carried out physical model studies on non-reshaped berm breakwaters, but failed to give a simple mathematical model to predict the damage level of non-reshaped berm breakwaters considering all the boundary conditions. This is due to the complexity and non-linearity associated with the design parameters and damage level determination of non-reshaped berm breakwaters. Soft computing tools like Artificial Neural Networks, Fuzzy Logic, Support Vector Machines (SVM), etc., are successfully used to solve complex problems. In the present study, SVM and a hybrid of Particle Swarm Optimization (PSO) with SVM (PSO–SVM) are developed to predict the damage level of non-reshaped berm breakwaters. Optimal kernel parameters of PSO–SVM are determined by the PSO algorithm. Both models are trained on the data set obtained from experiments carried out in the Marine Structures Laboratory, Department of Applied Mechanics and Hydraulics, National Institute of Technology Karnataka, Surathkal, India. Results of both models are compared in terms of statistical measures, such as correlation coefficient, root mean square error and scatter index. The PSO–SVM model with polynomial kernel function outperformed the other SVM models. | Particle Swarm Optimization based support vector machine for damage level prediction of non-reshaped berm breakwater
S1568494614005729 | Soft computing tools aid knowledge mining in predicting and classifying the properties of various parameters while designing composite preforms in Powder Metallurgy (P/M) manufacturing. In this paper, an integrated PRNET (PCA-Radial basis functional neural NET) model is proposed in different versions to select the relevant parameters for preparing composite preforms and to predict the deformation and strain hardening properties of Al–Fe composites. The predictability of this model has increased by 67.89% relative to conventional models. A new PR-filter is proposed by slightly modifying the conventional filters of RBFNN, which improves the power of PRNET even when the raw data are highly non-linear, interrelated and noisy. Moreover, fixing the range of input parameters for classifying the properties of composite preforms can be automated with fuzzy logic. Such models avoid expensive experimentation and risky environments while preparing sintered composite preforms. Thus the manufacturing process of composites in the P/M lab is simplified, with minimum energy, by the support of these soft computing tools. | Simplifying the powder metallurgy manufacturing process using soft computing tools
S1568494614005730 | Thermal errors can have significant effects on CNC machine tool accuracy. The errors come from thermal deformations of the machine elements caused by heat sources within the machine structure or from ambient temperature change. The effect of temperature can be reduced by error avoidance or numerical compensation. The performance of a thermal error compensation system essentially depends upon the accuracy and robustness of the thermal error model and its input measurements. This paper first reviews different methods of designing thermal error models, before concentrating on employing an adaptive neuro fuzzy inference system (ANFIS) to design two thermal prediction models: ANFIS by dividing the data space into rectangular sub-spaces (ANFIS-Grid model) and ANFIS by using the fuzzy c-means clustering method (ANFIS-FCM model). Grey system theory is used to obtain the influence ranking of all possible temperature sensors on the thermal response of the machine structure. All the influence weightings of the thermal sensors are clustered into groups using the fuzzy c-means (FCM) clustering method, the groups then being further reduced by correlation analysis. A study of a small CNC milling machine is used to provide training data for the proposed models and then to provide independent testing data sets. The results of the study show that the ANFIS-FCM model is superior in terms of the accuracy of its predictive ability with the benefit of fewer rules. The residual value of the proposed model is smaller than ±4μm. This combined methodology can provide improved accuracy and robustness of a thermal error compensation system. | The application of ANFIS prediction models for thermal error compensation on CNC machine tools |
S1568494614005742 | New product development (NPD) is a term used to describe the complete process of bringing a new concept to a state of market readiness. Mechatronics based product requires a multidisciplinary approach for its modeling, design, development and implementation. An integrated and concurrent approach focusing on integrating the mechanical structure with basic three components namely sensors, controllers and actuators is required. This paper aims at developing a framework for a new Mechatronics product development. For conceptual design of Mechatronics system, various tools like Fuzzy Delphi Method (FDM), Fuzzy Interpretive Structural Modeling (FISM), Fuzzy Analytical Network Process (FANP) and Fuzzy Quality Function Deployment (FQFD) are used. Based on the prioritized design requirements, the functional specifications of the required components are developed. Then, Computer Aided Design and control system software are used to develop the detailed system design. Then, a prototype model is developed based on the integration of mechanical system with Sensor, Controller and Electrical units. Performance of the prototype model is monitored and Fuzzy failure mode and effect analysis (FMEA) is then used to rank the potential failures. Based on the results of fuzzy FMEA, the developed model is redesigned. The proposed framework is illustrated with a case study related to developing automatic power loom reed cleaning machine. | An integrated framework for mechatronics based product development in a fuzzy environment |
S1568494614005754 | A two-stage meta-heuristic optimization model is introduced to find the optimal shape of double-arch concrete dams. The main aim is to integrate the computational merits of continuous ant colony optimization (ACOR) and particle swarm optimization (PSO). The proposed method, called ACOR–PSO, can accelerate the exploitation capability and convergence of PSO. To achieve this, a preliminary optimization is accomplished using ACOR in the first stage of ACOR–PSO. In the second stage, PSO is utilized, using the optimal swarm from the first stage as its initial swarm. The weighted least squares support vector machine (WLS-SVM) is considered the most reliable method for predicting the dynamic responses of dams. To verify the robustness and efficiency of the proposed ACOR–PSO, first, well-known benchmark functions from the literature are optimized using ACOR–PSO, and comparisons with ACOR and PSO are provided. Then, a real-world double-arch dam is presented to demonstrate the effectiveness and practicality of ACOR–PSO. The numerical results reveal that ACOR–PSO not only converges to better solutions but also provides a faster convergence rate in comparison with ACOR and PSO. | Hybridizing two-stage meta-heuristic optimization model with weighted least squares support vector machine for optimal shape of double-arch dams
S1568494614005766 | As a data mining method, clustering, one of the most important tools in information retrieval, organizes data based on unsupervised learning, which means that it does not require any training data. However, some text clustering algorithms cannot update existing clusters incrementally and instead have to recompute a new clustering from scratch. In view of the above, this paper presents a novel bottom-up incremental conceptual hierarchical text clustering approach using a CFu-tree (ICHTC-CF) representation, which starts with each item as a separate cluster. Term-based feature extraction is used for summarizing a cluster in the process. The Comparison Variation measure criterion is also adopted for judging whether the closest pair of clusters can be merged or a previous cluster can be split. Moreover, our incremental clustering method is not sensitive to the input data order. Experimental results show that our method outperforms k-means, CLIQUE, single-linkage clustering and complete-linkage clustering, which indicates that the new technique is efficient and feasible. | A novel incremental conceptual hierarchical text clustering method using CFu-tree
S1568494614005778 | This paper proposes a new method for speaker feature extraction based on Formants, Wavelet Entropy and Neural Networks denoted as FWENN. In the first stage, five formants and seven Shannon entropy wavelet packet are extracted from the speakers’ signals as the speaker feature vector. In the second stage, these 12 feature extraction coefficients are used as inputs to feed-forward neural networks. Probabilistic neural network is also proposed for comparison. In contrast to conventional speaker recognition methods that extract features from sentences (or words), the proposed method extracts the features from vowels. Advantages of using vowels include the ability to recognize speakers when only partially-recorded words are available. This may be useful for deaf-mute persons or when the recordings are damaged. Experimental results show that the proposed method succeeds in the speaker verification and identification tasks with high classification rate. This is accomplished with minimum amount of information, using only 12 coefficient features (i.e. vector length) and only one vowel signal, which is the major contribution of this work. The results are further compared to well-known classical algorithms for speaker recognition and are found to be superior. | Speaker identification using vowels features through a combined method of formants, wavelets, and neural network classifiers |
S156849461400578X | In this study, two induced generalized hesitant fuzzy hybrid operators, called the induced generalized hesitant fuzzy Shapley hybrid weighted averaging (IG-HFSHWA) operator and the induced generalized hesitant fuzzy Shapley hybrid geometric mean (IG-HFSHGM) operator, are defined. The prominent characteristic of these two operators is that they not only globally consider the importance of elements and their ordered positions, but also overall reflect their correlations. Furthermore, when the weight information of the attributes and the ordered positions is only partly known, models for the optimal fuzzy measures on an attribute set and on an ordered set are respectively established using the grey relational analysis (GRA) method and the Shapley function. Finally, an approach to hesitant fuzzy multi-attribute decision making with incomplete weight information and interactive conditions is developed, and an illustrative example is provided to show its practicality and effectiveness. | Induced generalized hesitant fuzzy Shapley hybrid operators and their application in multi-attribute decision making
S1568494614005791 | In this paper, a metaheuristic optimizer, the multi-objective water cycle algorithm (MOWCA), is presented for solving constrained multi-objective problems. The MOWCA is based on emulation of the water cycle process in nature. In this study, a set of non-dominated solutions obtained by the proposed algorithm is kept in an archive to be used to display the exploratory capability of the MOWCA as compared to other efficient methods in the literature. Moreover, to make a comprehensive assessment about the robustness and efficiency of the proposed algorithm, the obtained optimization results are also compared with other widely used optimizers for constrained and engineering design problems. The comparisons are carried out using tabular, descriptive, and graphical presentations. | Water cycle algorithm for solving constrained multi-objective optimization problems |
S1568494614005808 | Particle swarm optimization is a stochastic population-based algorithm inspired by the social interaction of bird flocking or fish schooling. In this paper, a new adaptive inertia weight adjusting approach based on Bayesian techniques is proposed for PSO, which is used to set up a sound tradeoff between the exploration and exploitation characteristics. It applies Bayesian techniques to enhance PSO's searching ability in the exploitation of past particle positions and uses Cauchy mutation for exploring better solutions. A suite of benchmark functions is employed to test the performance of the proposed method. The results demonstrate that the new method exhibits higher accuracy and a faster convergence rate than other inertia weight adjusting methods on multimodal and unimodal functions. Furthermore, to show the generalization ability of the BPSO method, it is compared with other types of improved PSO algorithms, against which it also performs well. | A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques
S156849461400581X | Removal of miscible hazardous materials from aqueous solutions is an alarming problem for environmental scientists. Several linear and nonlinear regression models, such as the Langmuir, Freundlich, D–R and Tempkin isotherm models, are in vogue for determining the adsorbing capacity of standard adsorbents used for this purpose. In this article, we propose a novel quantum-inspired backpropagation multilayer perceptron (QBMLP) based on quantum gates (single-qubit rotation gates and two-qubit controlled-NOT gates) for predicting the adsorption behavior exhibited by calcareous soil, which is often used for adsorbing miscible iron from aqueous solutions. The backpropagation learning formulae for the proposed QBMLP architecture have also been generalized to multiple layers in both field-homogeneous and field-heterogeneous configurations characterized by three standard activations, viz., the sigmoid, tanh and tan1.5h functions. The efficiency of the proposed QBMLP over the regression models is demonstrated with regard to the prediction of the adsorption of iron by calcareous soil from an aqueous solution for various characteristic adsorbent parameters. The adsorption process is considered to be physical, since the activation energy (E A ) of ferrous ion adsorption is 9.469 kJ mol−1 according to the Arrhenius equation. Moreover, the thermodynamic parameters of Gibbs free energy (G 0), enthalpy (H 0) and entropy (S 0) indicate it to be spontaneous. Results indicate that QBMLP predicts the adsorption behavior of calcareous soil very closely, thereby obviating the need for further regression/experimental analysis. Comparison with the performance of a similar classical multilayer perceptron (MLP) architecture also reveals the prediction and time efficiency of the proposed QBMLP architecture. | A quantum backpropagation multilayer perceptron (QBMLP) for predicting iron adsorption capacity of calcareous soil from aqueous solution
S1568494614005821 | The physical properties of water cause light-induced degradation of underwater images. Light rapidly loses intensity as it travels in water, depending on the color spectrum wavelength. Visible light is absorbed at the longest wavelength first. Red and blue are the most and least absorbed, respectively. Underwater images with low contrast are captured due to the degradation effects of light spectrum. Therefore, the valuable information from these images cannot be fully extracted for further processing. The current study proposes a new method to improve the contrast and reduce the noise of underwater images. The proposed method integrates the modification of image histogram into two main color models, Red–Green–Blue (RGB) and Hue-Saturation-Value (HSV). In the RGB color model, the histogram of the dominant color channel (i.e., blue channel) is stretched toward the lower level, with a maximum limit of 95%, whereas the inferior color channel (i.e., red channel) is stretched toward the upper level, with a minimum limit of 5%. The color channel between the dominant and inferior color channels (i.e., green channel) is stretched to both directions within the whole dynamic range. All stretching processes in the RGB color model are shaped to follow the Rayleigh distribution. The image is converted into the HSV color model, wherein the S and V components are modified within the limit of 1% from the minimum and maximum values. Qualitative analysis reveals that the proposed method significantly enhances the image contrast, reduces the blue-green effect, and minimizes under- and over-enhanced areas in the output image. For quantitative analysis, the test with 300 underwater images shows that the proposed method produces average mean square error (MSE) and peak signal to noise ratio (PSNR) of 76.76 and 31.13, respectively, which outperform six state-of-the-art methods. | Underwater image quality enhancement through integrated color model with Rayleigh distribution
S1568494614005833 | Rough set theory is a useful mathematical tool for pattern classification to deal with vagueness in available information. The main disadvantage of rough set theory is that it cannot handle continuous attributes. Although various discretization methods have been proposed to deal with this problem, discretization can result in information loss. It has been found that tolerance rough sets with a tolerance relation can operate effectively on continuous attributes. A tolerance relation is related to a similarity measure which is commonly defined by a simple distance function to measure the proximity of any two patterns distributed in feature space. However, for a simple distance measure, it oversimplifies the criteria aggregation resulting from not considering attribute weights, and it is not a unique way of expressing the preference information on each attribute for any two patterns. This paper proposes a flow-based tolerance rough set using flow, which represents the intensity of preference for one pattern over another, to measure similarity between two patterns. To yield high classification performance, a genetic-algorithm-based learning algorithm has been designed to determine parameter specifications and generate the tolerance class of a pattern. The proposed method has been tested on several real-world data sets. Its classification performance is comparable to that of other rough-set-based methods. | Flow-based tolerance rough sets for pattern classification |
S1568494614005845 | In this study, three new meta-heuristic algorithms, an artificial immune system (AIS), an iterated greedy algorithm (IG) and a hybrid approach combining the two (AIS-IG), are proposed to minimize the maximum completion time (makespan) for the permutation flow shop scheduling problem with limited buffers between consecutive machines. As is known, this category of scheduling problem has wide application in manufacturing and has attracted much attention in academic fields. Different from basic artificial immune systems, the proposed AIS-IG algorithm is combined with the destruction and construction phases of the iterated greedy algorithm to improve the local search ability. The performances of these three approaches were evaluated on the Taillard, Carlier and Reeves benchmark problems. It is shown that the AIS-IG and AIS algorithms not only generate better solutions than the well-known metaheuristic approaches but can also maintain their quality for large-scale problems. | Minimizing makespan for flow shop scheduling problem with intermediate buffers by using hybrid approach of artificial immune system
S1568494614005857 | Background: Software fault prediction is the process of developing models that can be used by software practitioners in the early phases of the software development life cycle for detecting faulty constructs such as modules or classes. Various machine learning techniques have been used in the past for predicting faults. Method: In this study we perform a systematic review of studies from January 1991 to October 2013 in the literature that use machine learning techniques for software fault prediction. We assess the performance capability of the machine learning techniques in existing research for software fault prediction. We also compare the performance of the machine learning techniques with statistical techniques and other machine learning techniques. Further, the strengths and weaknesses of machine learning techniques are summarized. Results: In this paper we have identified 64 primary studies and seven categories of machine learning techniques. The results prove the prediction capability of the machine learning techniques for classifying a module/class as fault prone or not fault prone. The models using the machine learning techniques for estimating software fault proneness outperform traditional statistical models. Conclusion: Based on the results obtained from the systematic review, we conclude that machine learning techniques have the ability to predict software fault proneness and can be used by software practitioners and researchers. However, the application of machine learning techniques in software fault prediction is still limited and more studies should be carried out in order to obtain well-formed and generalizable results. We provide future guidelines to practitioners and researchers based on the results obtained in this work. | A systematic review of machine learning techniques for software fault prediction
S1568494614005869 | Soft computing methods, especially data mining, usually make it possible to describe large datasets in a human-consistent way with the use of generic and conceptually meaningful information entities such as information granules. However, such information granules may be applied not only for descriptive purposes, but also for prediction. We review the main developments and challenges in the application of soft computing methods to time series analysis and forecasting, and we provide a conceptual framework for Bayesian time series forecasting using the granular computing approach. Within the proposed approach, the information granules are successfully incorporated into the Bayesian posterior simulation process. The approach is evaluated with a set of experiments on artificial and benchmark real-life time series datasets. | Bayesian analysis of time series using granular computing approach
S1568494614005870 | The aim of this paper is to evaluate the risk level both for intra-organizational cultures and for different industries in implementing an enterprise resource planning (ERP) system. This study adopts the Fuzzy Analytic Network Process (FANP) method to assess ERP implementation risks, which were categorized into four dimensions: management and execution, software system, users, and technology planning. An empirical survey was conducted that utilized the collected survey data of 20 ERP experts in Taiwan to assess, rank, and improve the critical risks of ERP implementation via the FANP method. Based on the results of the FANP method, a follow-up survey of ERP end-users in different departments of three industries was conducted to assess how intra-organizational cultures and industry differences affect users’ perceived risks in a real-world scenario. Our research results demonstrated that “lack of management support and assistance” is a vital risk for a successful ERP implementation. Top management support and involvement are crucial and essential factors for the success of a firm's ERP implementation. “Ineffective communication with users” was found to be the second highest risk factor. The benefits of using the FANP method for evaluating the risk factors come from the clear priority weights between alternatives. Finally, this study provides suggestions to help enterprises decrease ERP risks and enhance the chances of successful ERP implementations across intra-organizational cultures and industries. | Using Fuzzy Analytic Network Process to assess the risks in enterprise resource planning system implementation
S1568494614005882 | Based on the clonal selection mechanism of the immune system, a dynamic local search based immune automatic clustering algorithm (DLSIAC) is proposed to automatically evolve the number of clusters as well as a proper partition of datasets. The real-valued antibody encoding consists of the activation thresholds and the clustering centers. Based on the special structure of the chromosomes, a particular dynamic local search scheme is proposed to exploit the neighborhood of each antibody as much as possible, so as to realize automatic variation of the antibody length during evolution. The dynamic local search scheme includes four basic operations, namely external cluster swapping, internal cluster swapping, cluster addition and cluster decrease. Moreover, a neighborhood structure based clonal mutation is adopted to further improve the performance of the algorithm. The proposed algorithm has been extensively compared with five state-of-the-art automatic clustering techniques over a suite of datasets. Experimental results indicate that DLSIAC is superior to the other five clustering algorithms in the optimum number of clusters found and the clustering accuracy. In addition, DLSIAC is applied to a real problem, namely image segmentation, with good performance. | Dynamic local search based immune automatic clustering algorithm and its applications
S1568494614005894 | This paper deals with the design of a novel fuzzy proportional–integral–derivative (PID) controller for automatic generation control (AGC) of a two unequal-area interconnected thermal system. For the first time, the teaching–learning based optimization (TLBO) algorithm is applied in this area to obtain the parameters of the proposed fuzzy-PID controller. The design problem is formulated as an optimization problem and TLBO is employed to optimize the parameters of the fuzzy-PID controller. The superiority of the proposed approach is demonstrated by comparing the results with some recently published approaches such as the Lozi map based chaotic optimization algorithm (LCOA), genetic algorithm (GA), pattern search (PS) and simulated annealing (SA) based PID controllers for the same system under study employing the same objective function. It is observed that the TLBO optimized fuzzy-PID controller gives better dynamic performance in terms of settling time, overshoot and undershoot in frequency and tie-line power deviation as compared to the LCOA, GA, PS and SA based PID controllers. Further, the robustness of the system is studied by varying all the system parameters from −50% to +50% in steps of 25%. The analysis also reveals that the TLBO optimized fuzzy-PID controller gains are quite robust and need not be reset for wide variations in system parameters. | Teaching–learning based optimization algorithm based fuzzy-PID controller for automatic generation control of multi-area power system
S1568494614005900 | Although investment projects supported by the state are extremely important in terms of national policy, the transfer of common public funds to such projects brings with it many problems. A highly transparent and comprehensive evaluation model is required to transfer public resources to the right investment projects. It is necessary to consider many criteria for the evaluation of an investment project. These criteria are generally subjective and extremely difficult to express in numbers. However, using fuzzy sets greatly assists decision makers in the project evaluation process with linguistic variables and measurement challenges. In this study, a new evaluation model for investment projects has been proposed for development agencies operating in Turkey. To address ambiguities and relativities in real-world scenarios more conveniently, type-2 fuzzy sets and crisp sets have been used simultaneously. The proposed model for the investment project evaluation problem is composed of type-2 fuzzy AHP and type-2 fuzzy TOPSIS methods. The proposed fuzzy MCDM method consists of three phases: (1) identifying the criteria to be used in the model, (2) type-2 fuzzy AHP computations, and (3) evaluation of investment projects with type-2 fuzzy TOPSIS and determination of the final rank. To illustrate the proposed model better, an application with real case data has been performed at the Middle Black Sea Development Agency in Turkey. As a consequence of this application, it has been observed that the proposed model is effective in evaluating alternatives in multi-criteria group decision making problems in a broader and more flexible fashion. | Investment project evaluation by a decision making methodology based on type-2 fuzzy sets
S1568494614005924 | This paper presents hybrid spiral-dynamic bacteria-chemotaxis algorithms for global optimisation and their application to control of a flexible manipulator system. The spiral dynamic algorithm (SDA) has a fast convergence speed and a good exploitation strategy. However, the constant radius and angular displacement in its spiral model make its exploration strategy less effective, resulting in low-accuracy solutions. Bacterial chemotaxis, on the other hand, is the most prominent strategy in the bacterial foraging algorithm. However, its constant step-size for bacterial movement affects performance: a large step-size yields faster convergence but low accuracy, while a small step-size gives high accuracy but slower convergence. The hybrid algorithms proposed in this paper synergise SDA and bacterial chemotaxis and thus introduce a more effective exploration strategy leading to higher accuracy, faster convergence speed and low computation time. The proposed algorithms are tested on several benchmark functions and statistically analysed via nonparametric Friedman and Wilcoxon signed rank tests as well as a parametric t-test in comparison to their predecessor algorithms. Moreover, they are used to optimise a hybrid proportional-derivative-like fuzzy-logic controller for position tracking of a flexible manipulator system. The results show that the proposed algorithms significantly improve both convergence speed and fitness accuracy and result in better system response in controlling the flexible manipulator. | Novel metaheuristic hybrid spiral-dynamic bacteria-chemotaxis algorithms for global optimisation
S1568494614005936 | Using suitable membership functions (MFs) has a substantial impact on increasing the accuracy and reducing the redundancy of a neuro-fuzzy modeling approach. In this paper, we suggest sigmoid-based MFs which can generate flexible convex hyper-polygon validity regions in Takagi–Sugeno (TS) fuzzy models. To this end, the sigmoid-based function is the product of several sigmoid functions whose arguments are hyperplane equations. It is discussed that such a function can represent a soft, convex, flat region with an arbitrary number of hyperplane borders (edges). Afterwards, we introduce first-order and high-order TS fuzzy models, where the suggested sigmoid-based functions are utilized in the premise parts of the fuzzy rules and linear models or quadratic functions are used as submodels in the consequent parts. It is shown that the utilized submodels can be optimized locally and globally. An incremental learning algorithm is then suggested to identify both first-order and high-order TS fuzzy models. The performance of the introduced TS fuzzy models is examined and compared with existing models in prediction of a chaotic time series as well as in function approximation of a sun sensor. The obtained results demonstrate the high accuracy and low redundancy of the suggested high-order TS fuzzy model. Finally, the learning performance of the two introduced first-order and high-order TS fuzzy models is compared in identification of a steam generator model. | Generating flexible convex hyper-polygon validity regions via sigmoid-based membership functions in TS modeling
S1568494614005948 | In this paper, Thyristor-Controlled Series-Compensated (TCSC) devices are located for congestion management in the power system by considering a non-smooth fuel cost function and the penalty cost of emission. For this purpose, the objective function of the proposed optimal power flow (OPF) problem is to minimize the fuel and emission penalty costs of generators. A hybrid method that combines the bacterial foraging (BF) algorithm with the Nelder–Mead (NM) method (BF-NM) is employed to solve the OPF problems. The optimal locations of the TCSC devices are then determined for congestion management. The size of the TCSC is obtained by using the BF-NM algorithm to minimize the cost of generation, cost of emission, and cost of the TCSC. The simulation results on the IEEE 30-bus, modified IEEE 30-bus and IEEE 118-bus test systems confirm the efficiency of the proposed method for finding the optimal location of the TCSC with a non-smooth non-convex cost function and emission for congestion management in the power system. In addition, the results clearly show that a better solution can be achieved by the proposed OPF formulation in comparison with other intelligent methods. | Congestion management by determining optimal location of series FACTS devices using hybrid bacterial foraging and Nelder–Mead algorithm
S156849461400595X | This paper presents a novel parameter automation strategy for the particle swarm optimization algorithm for solving non-convex emission constrained economic dispatch (NECED) problems. Many evolutionary techniques, such as particle swarm optimization and differential evolution, have been applied to these problems and found to perform better than conventional optimization methods. But often these methods converge to a sub-optimal solution prematurely. This paper presents an improved particle swarm optimization technique, called self-organizing hierarchical particle swarm optimization with time-varying acceleration coefficients (SOHPSO_TVAC), for NECED problems to avoid premature convergence. Generator ramp rate limits and prohibited operating zones are taken into account in the problem formulation. The NECED problem is formulated by considering both economy and emission objectives. The performance of the proposed method is demonstrated on two sample test systems and the results are compared with those of other methods. It is found that the results obtained by the proposed method are superior in terms of fuel cost, emission output and losses. | Non-convex emission constrained economic dispatch using a new self-adaptive particle swarm optimization technique
S1568494614005961 | This work introduces a fuzzy rule-based system operating as a selector of color constancy algorithms for the enhancement of dark images. In accordance with the actual content of an image, the system selects among three color constancy algorithms: the White-Patch, the Gray-World and the Gray-Edge. These algorithms have been considered because of their accurate removal of the illuminant, besides showing an outstanding color enhancement on images. The design of the rule-based system is not a trivial task because several features are involved in the selection. Our proposal consists in a fuzzy system, modeling the decision process through simple rules. This approach can handle large amounts of information and is tolerant to ambiguity, while addressing the problem of dark image enhancement. The methodology consists of two main stages. Firstly, a training protocol determines the fuzzy rules, according to features computed from a subset of training images taken from the SFU Laboratory dataset. We carefully chose twelve image features for the formulation of the rules: seven color features, three texture descriptors, and two lighting-content descriptors. In the rules, the fuzzy sets are modeled using Gaussian membership functions. Secondly, experiments are carried out using Mamdani and Larsen fuzzy inferences. For a test image, a color constancy algorithm is selected according to the inference process and the rules previously defined. The results show that our method attains a high rate of correct selection of the algorithm best suited to the particular scene. | Automatic selection of color constancy algorithms for dark image enhancement by fuzzy rule-based reasoning
S1568494614005985 | Failure mode and effects analysis (FMEA) is one of the most popular reliability analysis tools for identifying, assessing and eliminating potential failure modes in a wide range of industries. In general, failure modes in FMEA are evaluated and ranked through the risk priority number (RPN), which is obtained by the multiplication of crisp values of the risk factors, such as the occurrence (O), severity (S), and detection (D) of each failure mode. However, the conventional RPN method has been considerably criticized for various reasons. To deal with the uncertainty and vagueness from humans’ subjective perception and experience in risk evaluation process, this paper presents a novel approach for FMEA based on combination weighting and fuzzy VIKOR method. Integration of fuzzy analytic hierarchy process (AHP) and entropy method is applied for risk factor weighting in this proposed approach. The risk priorities of the identified failure modes are obtained through next steps based on fuzzy VIKOR method. To demonstrate its potential applications, the new fuzzy FMEA is used for analyzing the risk of general anesthesia process. Finally, a sensitivity analysis is carried out to verify the robustness of the risk ranking and a comparison analysis is conducted to show the advantages of the proposed FMEA approach. | A novel approach for failure mode and effects analysis using combination weighting and fuzzy VIKOR method |
S1568494614005997 | Modeling the glass-forming ability (GFA) of bulk metallic glasses (BMGs) has been one of the hot issues ever since BMGs were discovered. A precisely modeled GFA criterion is very useful for the development of new BMGs for various engineering applications. In this paper, we propose support vector regression (SVR), artificial neural network (ANN), general regression neural network (GRNN), and multiple linear regression (MLR) based computational intelligence (CI) techniques that model the maximum section thickness (D max) parameter for glass forming alloys. For this study, a reasonably large number of BMG alloys are collected from the current materials science literature. CI models are developed using three thermal characteristics of glass forming alloys, i.e., the glass transition temperature (T g), the onset crystallization temperature (T x), and the liquidus temperature (T l). The R 2-values of the GRNN, SVR, ANN, and MLR models are computed to be 0.5779, 0.5606, 0.4879, and 0.2611, respectively, for 349 BMG alloys. We find that the GRNN model performs better than the SVR, ANN, and MLR models. The performance of the proposed models is compared to existing physical modeling and statistical modeling based techniques. This study shows that the proposed CI approaches are more accurate in modeling the experimental D max than the conventional GFA criteria for BMG alloys. | Modeling glass-forming ability of bulk metallic glasses using computational intelligent techniques
S1568494614006000 | Text mining refers to the activity of identifying useful information from natural language text. This is one of the criteria practiced in automated text categorization. Machine learning (ML) based methods are the popular solution for this problem. However, the developed models typically provide low expressivity and lack a human-understandable representation. In spite of being highly efficient, ML based methods are established in a train–test setting, and when the existing model is found insufficient, the whole process needs to be redone, which implies train–test–retrain and is typically time consuming. Furthermore, retraining the model is not usually a practical and feasible option whenever there is continuous change. This paper introduces the evolving fuzzy grammar (EFG) method for crime text categorization. In this method, the learning model is built based on a set of selected text fragments which are then transformed into their underlying structure, called fuzzy grammars. The fuzzy notion is used because the grammar matching, parsing and derivation involve uncertainty. A fuzzy union operator is also used to combine and transform individual text fragment grammars into more general representations of the learned text fragments. The set of learned fuzzy grammars is influenced by the evolution in the seen pattern; the learned model is changed slightly (incrementally) as adaptation, which does not require conventional redevelopment. The performance of EFG in crime text categorization is evaluated against expert-tagged real incident summaries and compared against C4.5, support vector machines, naïve Bayes, boosting, and k-nearest neighbour methods. Results show that the EFG algorithm produces results that are close in performance to the other ML methods while being highly interpretable, easily integrated into a more comprehensive grammar system and having lower model retraining time. | Evolving fuzzy grammar for crime texts categorization
S1568494614006012 | Environmental economic dispatch of fixed-head hydrothermal power systems is viewed as a multiobjective optimization problem in this paper. The practical hydrothermal system possesses various constraints which make the problem of finding the global optimum difficult. This paper develops an improved multiobjective estimation of distribution algorithm for solving the above problem. A local learning operation is added to the original regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA) so as to improve the local search ability and enhance the convergence efficiency. Furthermore, a repair mechanism is employed to repair infeasible solutions so that the search stays within the feasible region. In the experiments, the results obtained by the proposed approach are compared with those from three other MOEAs: NSGA-II, NNIA, and RM-MEDA. Results from some previously reported methods have also been used for comparison. The results demonstrate the superiority of the proposed method as a promising MOEA for solving this power system multiobjective optimization problem. | An improved multiobjective estimation of distribution algorithm for environmental economic dispatch of hydrothermal power systems
S1568494614006024 | One of the most well-known binary (discrete) versions of the artificial bee colony algorithm is the similarity measure based discrete artificial bee colony, which was first proposed to deal with the uncapacitated facility location problem (UFLP). The discrete artificial bee colony simply depends on measuring the similarity between binary vectors through the Jaccard coefficient. Although it is accepted as one of the simple, novel and efficient binary variants of the artificial bee colony, its mechanism for generating new solutions from the similarity information between solutions only considers one similarity case, i.e., it does not handle all similarity cases. To cover this issue, the solution generation mechanism of the discrete artificial bee colony is enhanced to use all similarity cases through genetically inspired components. Furthermore, the superiority of the proposed algorithm is demonstrated by comparing it with the basic discrete artificial bee colony, binary particle swarm optimization and the genetic algorithm in dynamic (automatic) clustering, in which the number of clusters is determined automatically, i.e., it does not need to be specified as in classical techniques. Not only evolutionary computation based algorithms but also classical approaches such as fuzzy C-means and K-means are employed to demonstrate the effectiveness of the proposed approach in clustering. The obtained results indicate that the discrete artificial bee colony with the enhanced solution generator component is able to reach more valuable solutions than the other algorithms in dynamic clustering, which is widely accepted by researchers as a very difficult NP-hard problem. | Dynamic clustering with improved binary artificial bee colony algorithm
S1568494614006036 | Rough sets theory is widely used as a method for estimating and/or inducing the knowledge structure of if-then rules from various decision tables. This paper presents the results of a retest of rough set rule induction ability by the use of simulation data sets. The conventional method has two main problems: firstly the diversification of the estimated rules, and secondly the strong dependence of the estimated rules on the data set sampling from the population. We here propose a new rule induction method based on the view that the rules existing in their population cause partiality of the distribution of the decision attribute values. This partiality can be utilized to detect the rules by use of a statistical test. The proposed new method is applied to the simulation data sets. The results show the method is valid and has clear advantages, as it overcomes the above problems inherent in the conventional method. | Proposal of a statistical test rule induction method by use of the decision table |
S1568494614006048 | Stage shop problem is an extension of the mixed shop as well as job shop and open shop. The problem is also a special case of the general shop. In a stage shop, each job has a number of stages; each of which includes one or more operations. As a subset of operations of a job, the operations of a stage can be done without any precedence consideration of each other, whereas the stages themselves should be processed according to a preset sequence. Due to the NP-hardness of the problem, a modified artificial bee colony (ABC) algorithm is suggested. In order to improve the exploitation feature of ABC, an effective neighborhood of the stage shop problem and PSO are used in employed and onlooker bee phases, respectively. In addition, the idea of tabu search is substituted for the greedy selection property of the artificial bee colony algorithm. The proposed algorithm is compared with the traditional ABC and the state-of-the-art CMA-ES. The computational results show that the modified ABC outperforms CMA-ES and completely dominates the traditional ABC. In addition, the proposed algorithm found high quality solutions within short times. For instance, two new optimal solutions and many new upper bounds are discovered for the unsolved benchmarks. | A modified ABC algorithm for the stage shop scheduling problem |
S1568494614006061 | In this paper, we propose a new similarity measure method that combines the geometric distance, area and height of generalized trapezoidal fuzzy numbers. Some properties of the proposed similarity measure have been derived. To illustrate its effectiveness, the method is compared with existing techniques on thirty-two different sets of generalized trapezoidal fuzzy numbers. Moreover, the proposed method has been used for fuzzy risk analysis in a production system in which different parameters are represented by linguistic trapezoidal fuzzy numbers. | Fuzzy risk analysis using area and height based similarity measure on generalized trapezoidal fuzzy numbers and its application
S1568494614006085 | This paper deals with direction-of-arrival (DOA) estimation of minimum variance distortionless response (MVDR) approach based on Taylor series expansion (TSE) technique for space-time code-division multiple access (CDMA) systems. It has been shown that the TSE of the presumed steering vector is a simple approach without any need for direction search. Unfortunately, the Taylor approach is more likely to converge to a local maximum, causing errors in DOA estimation. In conjunction with a genetic algorithm for selecting initial search angle, an efficient approach is presented to achieve the advantages of TSE DOA estimation with fast convergence and less computational load over iterative searching MVDR estimator. Simulation results are provided for illustrating the effectiveness of the proposed approach. | Combining genetic algorithm and Taylor series expansion approach for DOA estimation in space-time CDMA systems |
S1568494614006103 | Butterflies are classified primarily according to their outer morphological characters. When classification by outer morphology is not possible, their genital characters must be analyzed. The genital characteristics of a butterfly can be determined using various chemical substances and methods. Currently, these processes are carried out manually by preparing genital slides of the collected butterflies through certain procedures. For some groups of butterflies, molecular techniques, which are expensive to use, must be applied for identification. In this study, a computer vision method is proposed for automatically identifying butterfly species as an alternative to conventional identification methods. The method is based on local binary patterns (LBP) and an artificial neural network (ANN). A total of 50 butterfly images of five species were used for evaluating the effectiveness of the proposed method. Experimental results demonstrated that the proposed method achieved good recognition accuracy rates for butterfly species identification. | Automatic identification of butterfly species based on local binary patterns and artificial neural network
S1568494614006115 | In this research, the bodywork vertical vibration acceleration of a quarter-car model is selected as the control objective, and a Fuzzy-PID control strategy based on an improved cultural algorithm is proposed for vibration reduction. A cultural algorithm and a niche algorithm are combined to design the Fuzzy-PID controller and optimize the control rules, which accelerates the optimization and provides good global optimization performance. The numerical results show that the active suspension, with the Fuzzy-PID control strategy in which the control rules are optimized by the improved cultural algorithm, can significantly suppress the bodywork vertical vibration acceleration, and ride comfort is improved. The simulation analyses support the optimal control scheme. | An optimal vibration control strategy for a vehicle's active suspension based on improved cultural algorithm
S1568494614006127 | This study develops a hybrid method to improve selection decision making in service innovation. Because criteria for customer perceptions tend to be vague and conflicting, the evaluation of perceptions (qualitative scale) and operational data (quantitative scale) should be combined. This study proposes the concomitant evaluation of qualitative and quantitative scales using a hybrid approach that combines fuzzy set theory, a discrete multi-criteria method based on prospect theory (known as TODIM in Portuguese) and the non-additive Choquet integral. The study assumes that the criteria possess interdependent relationships. The advantages of the proposed hybrid approach, which exhibits a hierarchical structure, are demonstrated in the hot spring hotel industry. The proposed method can be extremely useful for recommending operational alternatives because it clearly identifies the main criteria of the expressed alternatives. The results indicate that the approach easily and effectively accommodates criteria with gain and loss functions and can help practitioners improve their performance and reduce overall service innovation risks. | Using a hybrid method to evaluate service innovation in the hotel industry
S1568494614006139 | In the past, many algorithms were proposed to adopt fuzzy-set theory for discovering fuzzy association rules from quantitative databases. The fuzzy frequent pattern (FFP)-tree and the compressed fuzzy frequent pattern (CFFP)-tree algorithms were respectively proposed to mine incomplete fuzzy frequent itemsets from tree-based structures. The multiple fuzzy frequent pattern (MFFP)-tree algorithm was later proposed to keep more linguistic terms for mining fuzzy frequent itemsets. Since the MFFP-tree algorithm inherits the property of the FFP-tree algorithm, numerous tree nodes are required to build the MFFP-tree structure for mining the desired multiple fuzzy frequent itemsets. In this paper, the compressed multiple fuzzy frequent pattern (CMFFP)-tree algorithm is designed to keep not only the linguistic term with the maximum membership value but also the other frequent linguistic terms, for mining the complete fuzzy frequent itemsets. In the designed CMFFP-tree algorithm, the multiple frequent linguistic terms are sorted in descending order of their occurrence frequencies to build the CMFFP-tree structure. The construction process is the same as in the CFFP-tree algorithm except that more information is kept for the later mining process to discover the complete fuzzy frequent itemsets. Each node in the CMFFP-tree uses an additional array to keep the membership values of its prefix path by an intersection operation. A CMFFP-mine algorithm is also designed to efficiently mine the multiple fuzzy frequent itemsets from the developed CMFFP-tree structure. Experiments are conducted to show the performance of the proposed CMFFP-tree algorithm in terms of execution time and the number of tree nodes, compared to those of the MFFP-tree and CFFP-tree algorithms. | A CMFFP-tree algorithm to mine complete multiple fuzzy frequent itemsets
S1568494614006140 | Ant Colony Optimization is a population-based meta-heuristic that exploits a form of past performance memory inspired by the foraging behavior of real ants. The behavior of the Ant Colony Optimization algorithm is highly dependent on the values defined for its parameters. Adaptation and parameter control are recurring themes in the field of bio-inspired optimization algorithms. The present paper explores a new fuzzy approach for diversity control in Ant Colony Optimization. The main idea is to avoid or slow down full convergence through the dynamic variation of a particular parameter. The performance of different variants of the Ant Colony Optimization algorithm is analyzed to choose one as the basis of the proposed approach. A convergence fuzzy logic controller is created with the objective of maintaining diversity at some level to avoid premature convergence. Encouraging results of the proposed method are presented on several traveling salesman problem instances and on its application to the design of fuzzy controllers, in particular the optimization of membership functions for unicycle mobile robot trajectory control. | A new approach for dynamic fuzzy logic parameter tuning in Ant Colony Optimization and its application in fuzzy control of a mobile robot
S1568494614006152 | Evaluating teaching performance is a key means of improving teaching quality and can play an important role in strengthening the management of higher education institutions. In this paper, we present a novel framework for teaching performance evaluation based on the combination of fuzzy AHP and the fuzzy comprehensive evaluation method. Specifically, after determining the factors and sub-factors, a teaching performance index system was established. In the index system, the factor and sub-factor weights were estimated by the extent analysis fuzzy AHP method. Employing the fuzzy AHP method in group decision-making can facilitate a consensus among decision-makers and reduce uncertainty. On the basis of the system, the fuzzy comprehensive evaluation method was employed to evaluate teaching performance. A case application is also used to illustrate the proposed framework. The application of this framework can make the evaluation results more scientific, accurate, and objective. It is expected that this work may serve as an assistance tool for managers of higher education institutions in improving the level of educational quality. | Evaluating teaching performance based on fuzzy AHP and comprehensive evaluation approach
S1568494614006164 | A Wireless Sensor Network (WSN) usually consists of numerous wireless devices deployed in a region of interest, each of which is capable of collecting and processing environmental information and communicating with neighboring devices. The problem of sensor placement becomes non-trivial when we consider environmental factors such as terrain elevations. In this paper, we differentiate a stepwise optimization approach from a generic optimization approach, and show that the former is better suited for sensor placement optimization. Following a stepwise optimization approach, we propose a Crowd-Out Dominance Search (CODS), which makes use of terrain information and intersensor relationship information to facilitate the optimization. Finally, we investigate the effect of terrain irregularity on optimization algorithm performance, and show that the proposed method demonstrates better resistance to terrain complexity than other optimization methods. | A dominance-based stepwise approach for sensor placement optimization
S1568494614006176 | Active power filter (APF) performance is entirely dependent on the capacitor voltage, and DC voltage optimization is one of the key aspects of harmonics compensation. This paper presents an experimental comparative study of a new adaptive fuzzy controller (AFC) and a proportional integral (PI) regulator, applied to regulate the DC bus voltage of a three-phase shunt APF. The proposed AFC adapts the output gain of the controller to every system operating situation as a function of the voltage error and its variation. The algorithm used to identify the reference currents is based on the Self Tuning Filter (STF). The firing pulses of the insulated-gate bipolar transistor (IGBT) inverter are generated using a hysteresis current controller, which is implemented on an analog card. Finally, the above study, under steady-state and transient conditions, is illustrated with signal-flow graphs and the corresponding analysis. The study was verified by experimental tests on a hardware prototype based on dSPACE-1104. | Implementation of adaptive fuzzy logic and PI controllers to regulate the DC bus voltage of shunt active power filter
S1568494614006188 | This paper introduces a new class of neural networks in complex space called Complex-valued Radial Basis Function (CRBF) neural networks, along with an improved version called Improved Complex-valued Radial Basis Function (ICRBF) neural networks. They are used for multiple crack identification in a cantilever beam in the frequency domain. The novelty of the paper is that these complex-valued neural networks are first applied to inverse problems (damage identification), which come under the category of function approximation. The conventional CRBF network was used in the first stage of the ICRBF network, and in the second stage a reduced-search-space moving technique was employed for accurate crack identification. The effectiveness of the proposed ICRBF neural network was studied first on a single crack identification problem and then applied to the more challenging problem of multiple crack identification in a cantilever beam with zero-noise as well as 5% noise-polluted signals. The results proved that the proposed ICRBF and real-valued Improved RBF (IRBF) neural networks identified the single and multiple cracks with less than 1% absolute mean percentage error, as compared to conventional CRBF and RBF neural networks, mainly because of their second-stage reduced-search-space moving technique. The IRBF neural network appears to be a good compromise considering all factors, such as accuracy, simplicity and computational effort. | Improved Complex-valued Radial Basis Function (ICRBF) neural networks on multiple crack identification
S156849461400619X | Although greedy algorithms possess high efficiency, they often yield suboptimal solutions to the ensemble pruning problem, since their exploration areas are limited to a large extent. Another marked defect of almost all existing ensemble pruning algorithms, including greedy ones, is that they simply abandon all of the classifiers which fail in the competition of ensemble selection, causing a considerable waste of useful resources and information. Inspired by these observations, a greedy Reverse Reduce-Error (RRE) pruning algorithm incorporating the operation of subtraction is proposed in this work. The RRE algorithm makes the best of the defeated candidate networks: the Worst Single Model (WSM) is chosen, and its votes are subtracted from the votes made by the selected components within the pruned ensemble, the rationale being that in most cases the WSM is likely to make mistakes in its estimation for the test samples. Different from the classical RE, the near-optimal solution is produced based on the pruned error of all the available sequential subensembles. Besides, the backfitting step of the RE algorithm is replaced with the selection of a WSM in RRE. Moreover, the problem of ties can be solved more naturally with RRE. Finally, a soft voting approach is employed in testing the RRE algorithm. The performances of the RE and RRE algorithms, and of two baseline methods, i.e., the method which selects the Best Single Model (BSM) in the initial ensemble and the method which retains all member networks of the initial ensemble (ALL), are evaluated on seven benchmark classification tasks under different initial ensemble setups. The results of the empirical investigation show the superiority of RRE over the other three ensemble pruning algorithms. | A new reverse reduce-error ensemble pruning algorithm
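The vote-subtraction step at the heart of RRE can be illustrated with a small sketch. The function below is a hypothetical soft-voting illustration of the idea (subtract the WSM's per-class votes from the pruned ensemble's aggregated votes, then take the argmax), not the authors' implementation:

```python
def rre_adjusted_prediction(ensemble_votes, wsm_votes):
    """For each sample, subtract the Worst Single Model's per-class votes
    from the pruned ensemble's aggregated (soft) votes, then predict the
    class with the highest adjusted vote. Since the WSM is likely wrong,
    removing its contribution can sharpen the decision."""
    preds = []
    for ens, wsm in zip(ensemble_votes, wsm_votes):
        adjusted = [e - w for e, w in zip(ens, wsm)]
        preds.append(adjusted.index(max(adjusted)))
    return preds

# Toy example: 3 samples, 2 classes (illustrative numbers only)
ensemble = [[0.6, 0.4], [0.45, 0.55], [0.5, 0.5]]
wsm      = [[0.2, 0.1], [0.3, 0.1], [0.1, 0.3]]
preds = rre_adjusted_prediction(ensemble, wsm)
```

Note how the tied third sample (0.5 vs. 0.5) is resolved once the WSM's votes are removed, which is the sense in which RRE handles ties more naturally than RE.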
S1568494614006206 | We report a novel design method for determining the optimal proportional-integral-derivative (PID) controller parameters of an automatic voltage regulator (AVR) system, using a combination of a genetic algorithm (GA), a radial basis function neural network (RBF-NN) and Sugeno fuzzy logic (GNFPID). The problem of obtaining the optimal AVR and PID controller parameters is formulated as an optimization problem, and an RBF-NN tuned by the GA is applied to solve it. The optimal PID gains obtained by the GA-tuned RBF-NN for various operating conditions are then used to develop the rule base of the Sugeno fuzzy system and to design a fuzzy PID controller that improves the AVR system's response (∼0.005 s). The proposed approach has superior features, including easy implementation, a stable convergence characteristic and good computational efficiency; the algorithm effectively searches for a high-quality solution and improves the transient response of the AVR system (7E−06). Numerical simulation results demonstrate that it is faster and has much lower computational cost than the real-coded genetic algorithm (RGA) and Sugeno fuzzy logic. The proposed method is indeed more efficient and robust in improving the step response of an AVR system. | Sugeno fuzzy PID tuning, by genetic-neutral for AVR in electrical power generation
S1568494614006218 | Periodic patterns and cyclic patterns have been used to discover recurring patterns in sequence databases. Toroslu (2003) proposed cyclically repeated pattern (CRP) mining, in which a new parameter called repetition support is considered in the mining process. In a data sequence, the occurrence of a subsequence must satisfy a single user-specified minimum repetition support. However, in real-life applications, items may occur at various frequencies in a database, and the rare item problem may arise when all items are subject to a single minimum repetition support. To solve this problem, we incorporate the concept of multiple minimum supports, enabling users to specify multiple minimum item repetition supports (MIR) according to the natures of the items. In this paper, we first redefine CRPs based on the MIR and the original form of the sequence minimum support. A new algorithm, rep-PrefixSpan, is developed for discovering a complete set of CRPs in sequence databases. The experimental results indicate that the proposed approach exhibits performance superior to that of conventional CRP mining. The proposed method can be applied in many application domains, including customer purchase behavior, web logging, and stock analyses. | A novel approach for mining cyclically repeated patterns with multiple minimum supports
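A minimal sketch of the multiple-minimum-support idea: each item carries its own minimum item repetition support (MIR), and a pattern is checked against a threshold derived from its items' MIR values. The `min`-of-MIR convention and the non-overlapping occurrence count below are illustrative assumptions, not the paper's exact definitions:

```python
def pattern_min_rep_support(pattern, mir):
    """A common convention in multiple-minimum-support mining: a pattern
    must meet the lowest MIR among its items, so rare items are not
    drowned out by a single global threshold (hypothetical rule)."""
    return min(mir[item] for item in pattern)

def count_repetitions(sequence, pattern):
    """Count non-overlapping contiguous occurrences of `pattern` in
    `sequence` -- a simplification of CRP repetition counting."""
    count, i, n = 0, 0, len(pattern)
    while i + n <= len(sequence):
        if sequence[i:i + n] == pattern:
            count += 1
            i += n
        else:
            i += 1
    return count

seq = ['a', 'b', 'a', 'b', 'c', 'a', 'b']
mir = {'a': 2, 'b': 2, 'c': 1}       # per-item minimum repetition supports
pat = ['a', 'b']
reps = count_repetitions(seq, pat)
satisfied = reps >= pattern_min_rep_support(pat, mir)
```

Here the pattern `['a', 'b']` repeats three times and its threshold is the lower of its items' MIR values, so it survives; under a single high global threshold a pattern over rare items like `'c'` could never qualify.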
S156849461400622X | The proposed work involves the multiobjective-PSO-based adaptation of an optimal neural network topology for the classification of multispectral satellite images. It is a per-pixel supervised classification using spectral bands (the original feature space). This paper also presents a thorough experimental analysis to investigate the behavior of the neural network classifier for the given problem. Based on 1050 experiments, we conclude that the following two critical issues need to be addressed: (1) selection of the most discriminative spectral bands and (2) determination of the optimal number of nodes in the hidden layer. We propose a new methodology based on the multiobjective particle swarm optimization (MOPSO) technique to determine the discriminative spectral bands and the number of hidden-layer nodes simultaneously. The accuracy of the neural network structure thus obtained is compared with that of traditional classifiers such as the MLC and Euclidean classifiers. The performance of the proposed classifier is evaluated quantitatively using the Xie-Beni and β indexes. The results show the superiority of the proposed method over the conventional ones. | Multiobjective PSO based adaption of neural network topology for pixel classification in satellite imagery
S1568494614006231 | In this work, we first define intuitionistic fuzzy parameterized soft sets (intuitionistic FP-soft sets) and study some of their properties. We then introduce an adjustable approach to decision making based on intuitionistic FP-soft sets. Finally, we give a numerical example which shows that this method works successfully. | Intuitionistic fuzzy parameterized soft set theory and its decision making
S1568494614006243 | This paper proposes an alternative approach for determining the most energy efficient route towards a destination. An innovative mesoscopic vehicular consumption model that is based on machine learning functionality is introduced and its application in a case study involving Fully Electric Vehicles (FEVs) is examined. The integration of this model in a routing engine especially designed for FEVs is also analyzed and a software architecture for implementing the proposed routing methodology is defined. In order to verify the robustness and the energy efficiency of this methodology, a system prototype has been developed and a series of field tests have been performed. The results of these tests are reported and significant conclusions are derived regarding the generated energy efficient routes. | Energy-efficient routing based on vehicular consumption predictions of a mesoscopic learning model |
S1568494614006255 | This work illustrates the use of a multi-objective optimization approach to model and optimize the performance of a simple thermoacoustic engine. System parameters and constraints that capture the underlying thermoacoustic dynamics have been used to define the model. Work output, viscous loss, conductive heat loss, convective heat loss and radiative heat loss have been used to measure the performance of the engine. The optimization task is formulated as a five-criterion mixed-integer non-linear programming problem. Since we optimize multiple objectives simultaneously, each objective component has been given a weighting factor to provide appropriate user-defined emphasis. A practical example is given to illustrate the approach. We have determined a design statement of a stack describing how the design would change if emphasis is given to one objective in particular. We also considered the optimization of multiple objective components simultaneously and identified global optimal solutions describing the stack geometry using the augmented ɛ-constraint method. This approach has been implemented in GAMS (General Algebraic Modelling System). | Multi-objective optimization of the stack of a thermoacoustic engine using GAMS
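The weighted-sum scalarization described above can be sketched as follows; the objective values, signs, and weights are hypothetical placeholders, not the engine model's calibrated figures:

```python
def weighted_objective(objectives, weights):
    """Scalarize the five performance criteria into one weighted sum.
    Criteria to be maximized (work output) enter with a negative sign so
    that the whole expression is minimized; weights encode user-defined
    emphasis. All numbers here are illustrative only."""
    return sum(w * f for w, f in zip(weights, objectives))

# Hypothetical values: [-work_output, viscous, conductive, convective, radiative]
objectives = [-120.0, 5.0, 3.0, 2.0, 1.0]
weights = [0.4, 0.15, 0.15, 0.15, 0.15]
score = weighted_objective(objectives, weights)
```

Shifting weight toward one component (say, viscous loss) steers the optimizer toward designs that trade work output for lower loss, which is exactly the per-objective emphasis the abstract describes.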
S1568494614006267 | In this study, an improved magnetic charged system search (IMCSS) is presented for the optimization of truss structures. The algorithm is based on the magnetic charged system search (MCSS) and an improved scheme of the harmony search algorithm (IHS). In IMCSS, some of the parameters most influential on the convergence rate of the HS scheme have been improved to achieve better convergence, especially in the final iterations, and to explore better results than previous studies. The IMCSS algorithm is applied to optimal design problems with both continuous and discrete variables. In comparison with the results of previous studies, the efficiency and robustness of the proposed algorithm, in terms of fast convergence and achieving optimal values for the weight of structures, are demonstrated. | An improved magnetic charged system search for optimization of truss structures with continuous and discrete variables
S1568494614006279 | The presence of multiple markets creates profitable opportunities for the supply chain system. In this regard, this paper considers the joint relationship between a manufacturer and multiple markets, in which the manufacturer offers part-payment to the markets for their collection of finished products during the production run time. It is also assumed that the manufacturer is granted a credit period by the raw material supplier, where the credit period is modeled in an interactive fuzzy fashion. Two types of deterioration are assumed: one for finished products and the other for raw materials. A solution algorithm is presented to obtain the fuzzy optimal profit for the proposed integrated production inventory system by optimizing the production run time. A numerical example is used to illustrate the proposed model. Finally, a sensitivity analysis has been carried out with respect to the major parameters to demonstrate the feasibility of the proposed model. | An integrated production inventory model under interactive fuzzy credit period for deteriorating item with several markets
S1568494614006280 | Optimal performance of thin film devices, such as micro/nano-electromechanical sensors and actuators, is possible with accurate and reliable characterization techniques. Such techniques can be enhanced if predictive models are constructed and deployed for production and monitoring. This paper presents functional networks as a novel modeling approach for rapid characterization of thin film properties such as thickness, deposition rate, resistivity and uniformity, based on 8 deposition parameters. The functional network (FN) models were developed and tested using 154 experimental data sets obtained from ultrathin polycrystalline silicon germanium films deposited by an Applied Materials Centura low pressure chemical vapour deposition system. The results showed that the proposed FN models perform excellently for all the outputs, with minimum and maximum regression coefficients of 0.95 and 0.99, respectively. To further demonstrate the robustness of these models, several trend analyses were conducted. The performance statistics indicate that the mean percentage error for the model, based on the deposition rate, lies between 0.3% and 0.8% for silane, germane, diborane flow rates and pressure. For these deposition variables, the probability or p-value at a significance level of 0.01 implies that no significant difference exists between the means of the predicted and the measured values. The results are further discussed in light of the physics of the CVD process. | Functional networks models for rapid characterization of thin films: An application to ultrathin polycrystalline silicon germanium films
S1568494614006292 | In this paper, we propose to use an evolutionary methodology to determine the values of the parameters for implementing the MUlticriteria RAnking MEthod (MURAME). The proposed approach has been designed for a creditworthiness evaluation problem faced by an important north-eastern Italian bank needing to score and/or to rank firms (which act as alternatives) applying for a loan. The problem, known as preference disaggregation, consists in finding the MURAME parameters which minimize the inconsistency between the MURAME evaluations of given alternatives and those properly revealed by the decision maker (DM). To find a numerical solution of the involved mathematical programming problem, we adopt an evolutionary algorithm based on particle swarm optimization (PSO), an iterative metaheuristic grounded in swarm intelligence. The obtained results show a high consistency between the MURAME outputs produced by the PSO-based solution algorithm and the actual scoring/ranking of the applicants provided by the bank (which acts as the DM). | An evolutionary approach to preference disaggregation in a MURAME-based creditworthiness problem
S1568494614006309 | Dialkylimidazolium-based ionic liquids (ILs) are one of the most employed and accessible ILs. These novel chemicals possess unique physicochemical properties which, unfortunately, are greatly altered by impurities. A simple method to evaluate the purity level of ILs is proposed, as a direct relationship exists between refractive index (RI) and purity. Two multilayer perceptrons (MLPs) have been designed to estimate the RI values using the molecular weights (MWs) of the imidazolium-based ILs. The RI is defined as the single output of the created neural network models. These MLPs offered low verification prediction errors (less than 0.48% in both cases), thus leading to useful mathematical tools that are able to more than adequately estimate the RI of imidazolium-based ILs by solely relying on the MWs. Therefore, an extremely manageable mathematical tool that can accurately estimate the RIs of imidazolium-based ILs, and, in the end, their purity, has been created. Additional tests were developed with experimental data regarding two imidazolium-based ILs to evaluate the applicability of the networks, and the results were successful in terms of RI and purity estimation. | Inputting molecular weights into a multilayer perceptron to estimate refractive indices of dialkylimidazolium-based ionic liquids—A purity evaluation |
S1568494614006322 | System-level fault diagnosis (SLFD) deals with the detection of all faulty nodes in a set of networked units in a parallel or distributed system. By allowing nodes to test each other under well-defined conditions and carefully inspecting the collection of test outcomes, the true set of damaged nodes can be located. In this paper, we tackle the SLFD problem by using a recently proposed bio-inspired optimization method: the Cuckoo Search. Two formalizations of this scheme were independently put forth by Yang and Deb [16] and Rajabioun [17] for numerical optimization. In this work, we have adapted both methods to work in the combinatorial search space induced by SLFD and we have tested them against other nature-inspired metaheuristics in the presence of a class of distributed systems called t-diagnosable systems, since these are easy to generate and eliminate ambiguity in the optimization-driven fault diagnosis. Three well-known test models (i.e. PMC, symmetric and asymmetric comparison) that define how the system units test one another were employed in the simulations. The empirical results reveal that the two cuckoo-based approaches outperformed their competitors in terms of solution quality and spatio-temporal requirements, with Yang and Deb's version achieving notable improvements over Rajabioun's. | Efficient detection of faulty nodes with cuckoo search in t-diagnosable systems
S1568494614006334 | In this work, we consider the spatial clustering problem with no a priori information. The number of clusters is unknown, and clusters may have arbitrary shapes and density differences. The proposed clustering methodology addresses several challenges of the clustering problem, including solution evaluation, neighborhood construction, and data set reduction. In this context, we first introduce two objective functions, namely adjusted compactness and relative separation. Each objective function evaluates the clustering solution with respect to the local characteristics of the neighborhoods. This allows us to measure the quality of a wide range of clustering solutions without a priori information. Next, using the two objective functions we present a novel clustering methodology based on Ant Colony Optimization (ACO-C). ACO-C works in a multi-objective setting and yields a set of non-dominated solutions. ACO-C has two pre-processing steps: neighborhood construction and data set reduction. The former extracts the local characteristics of data points, whereas the latter is used for scalability. We compare the proposed methodology with other clustering approaches. The experimental results indicate that ACO-C outperforms the competing approaches. The multi-objective evaluation mechanism relative to the neighborhoods enhances the extraction of the arbitrary-shaped clusters having density variations. | Ant Colony Optimization based clustering methodology
S1568494614006346 | This paper proposes an integrated system for the segmentation and classification of four moving objects, including pedestrians, cars, motorcycles, and bicycles, from their side-views in a video sequence. Based on the use of an adaptive background in the red–green–blue (RGB) color model, each moving object is segmented with its minimum enclosing rectangle (MER) window by using a histogram-based projection approach or a tracking-based approach. Additionally, a shadow removal technique is applied to the segmented objects to improve the classification performance. For the MER windows with different sizes, a window scaling operation followed by an adaptive block-shifting operation is applied to obtain a fixed feature dimension. A weight mask, which is constructed according to the frequency of occurrence of an object in each position within a square window, is proposed to enhance the distinguishing pixels in the rescaled MER window. To extract classification features, a two-level Haar wavelet transform is applied to the rescaled MER window. The local shape features and the modified histogram of oriented gradients (HOG) are extracted from the level-two and level-one sub-bands, respectively, of the wavelet-transformed space. A hierarchical linear support vector machine classification configuration is proposed to classify the four classes of objects. Six video sequences are used to test the classification performance of the proposed method. The computer processing times of the object segmentation, object tracking, and feature extraction and classification approaches are 79 ms, 211 ms, and 0.01 ms, respectively. Comparisons with different well-known classification approaches verify the superiority of the proposed classification method. | Moving object classification using local shape and HOG features in wavelet-transformed space with hierarchical SVM classifiers
S1568494614006358 | This paper presents a new method for enhancing power system security, including a remedial action, using an artificial neural network (ANN) technique. The deregulation of electricity markets is still an essential requirement of modern power systems, which require the operation of an independent system driven by economic considerations. Power flow and contingency analyses usually take a few seconds to suggest a control action. Such delay could result in issues that affect system security. This study aims to find a significant control action that alleviates the bus voltage violation of a power system and to develop an automatic data knowledge generation method for the adaptive ANN. The developed method is proved to be a steady-state security assessment tool for supplying possible control actions to mitigate an insecure situation resulting from credible contingency. The proposed algorithm is successfully tested on the IEEE 9-bus and 39-bus test systems. A comparison of the results of the proposed algorithm with those of other conventional methods reveals that an ANN can accurately and instantaneously provide the required amounts of generation re-dispatch and load shedding in megawatts. | Simulation of an adaptive artificial neural network for power system security enhancement including control action
S1568494614006371 | The knowledge discovery process is supported by information gathered from collected data sets, which often contain errors in the form of missing values. Data imputation is the activity aimed at estimating values for missing data items. This study focuses on the development of automated data imputation models, based on artificial neural networks, for monotone patterns of missing values. The present work proposes a single imputation approach relying on a multilayer perceptron whose training is conducted with different learning rules, and a multiple imputation approach based on the combination of multilayer perceptron and k-nearest neighbours. Eighteen real and simulated databases were exposed to a perturbation experiment with random generation of a monotone missing data pattern. An empirical test was accomplished on these data sets, including both approaches (single and multiple imputations), and three classical single imputation procedures – mean/mode imputation, regression and hot-deck – were also considered. Therefore, the experiments involved five imputation methods. The results, considering different performance measures, demonstrated that, in comparison with traditional tools, both proposals improve the automation level and data quality, offering a satisfactory performance. | Single imputation with multilayer perceptron and multiple imputation combining multilayer perceptron and k-nearest neighbours for monotone patterns
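As a point of reference for the k-nearest-neighbours component, a plain kNN single-imputation step can be sketched as below. This is a generic illustration, not the paper's combined MLP+kNN multiple-imputation procedure:

```python
def knn_impute(rows, target_idx, k=3):
    """Fill a missing numeric field (None) with the mean of that field
    over the k nearest complete rows, using Euclidean distance on the
    observed fields. A plain single-imputation sketch."""
    complete = [r for r in rows if r[target_idx] is not None]
    for r in rows:
        if r[target_idx] is None:
            obs = [i for i in range(len(r)) if i != target_idx]
            neighbours = sorted(
                complete,
                key=lambda c: sum((r[i] - c[i]) ** 2 for i in obs))[:k]
            r[target_idx] = sum(c[target_idx] for c in neighbours) / len(neighbours)
    return rows

# Toy monotone pattern: the last row is missing its third field
data = [[1.0, 2.0, 10.0], [1.1, 2.1, 12.0], [5.0, 5.0, 50.0], [1.05, 2.05, None]]
filled = knn_impute(data, target_idx=2, k=2)
```

The incomplete row is closest to the first two rows, so its missing field becomes their mean; multiple imputation would instead draw several plausible values rather than a single point estimate.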
S1568494614006383 | In this study, a new meta-heuristic algorithm called teaching-learning-based optimization (TLBO) is used for the size and shape optimization of structures. The TLBO algorithm is based on the effect of the influence of a teacher on the output of learners in a class. The cross-sectional areas of the bar elements and the nodal coordinates of the structural system are the design variables for size and shape optimization, respectively. Displacement, allowable stress and the Euler buckling stress are taken as the constraints for the problems considered. Some truss structures are designed by using this new algorithm to show the efficiency of the TLBO algorithm. The results obtained from this study are compared with those reported in the literature. It is concluded that the TLBO algorithm presented in this study can be effectively used in combined size and shape optimization of structures. | Combined size and shape optimization of structures with a new meta-heuristic algorithm
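The teacher and learner phases of TLBO can be sketched for an unconstrained minimization problem; the sphere function and the parameter choices below are illustrative, not the constrained truss optimization setup of the paper:

```python
import random

def tlbo_step(population, fitness):
    """One TLBO iteration for minimization: the teacher phase pulls each
    learner toward the current best solution and away from the class mean;
    the learner phase lets random pairs of learners teach each other.
    Greedy acceptance keeps a move only if it improves fitness."""
    n, d = len(population), len(population[0])
    best = min(population, key=fitness)
    mean = [sum(x[j] for x in population) / n for j in range(d)]
    new_pop = []
    for x in population:                                   # teacher phase
        tf = random.choice([1, 2])                         # teaching factor
        cand = [x[j] + random.random() * (best[j] - tf * mean[j]) for j in range(d)]
        new_pop.append(cand if fitness(cand) < fitness(x) else x)
    for i in range(n):                                     # learner phase
        k = random.randrange(n)
        if k == i:
            continue
        xi, xk = new_pop[i], new_pop[k]
        sign = 1 if fitness(xi) < fitness(xk) else -1
        cand = [xi[j] + sign * random.random() * (xi[j] - xk[j]) for j in range(d)]
        if fitness(cand) < fitness(xi):
            new_pop[i] = cand
    return new_pop

random.seed(0)
sphere = lambda x: sum(v * v for v in x)   # stand-in objective, not a truss model
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
init_best = min(sphere(x) for x in pop)
for _ in range(50):
    pop = tlbo_step(pop, sphere)
best_val = min(sphere(x) for x in pop)
```

Note that TLBO has no algorithm-specific tuning parameters beyond population size and iteration count, which is one reason it is attractive for structural design problems.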
S1568494614006395 | The main objective of the molecular docking problem is to find a conformation between a small molecule (ligand) and a receptor molecule with minimum binding energy. The quality of the docking score depends on two factors: the scoring function and the search method used to find the lowest-binding-energy solution. In this context, AutoDock 4.2 is a popular C++ software package in the bioinformatics community providing both elements, including two genetic algorithms, one of them endowed with a local search strategy. This paper principally focuses on the search techniques for solving the docking problem. Using the AutoDock 4.2 scoring function, the approach in this study is twofold. On the one hand, four metaheuristic techniques are analyzed on an extensive set of docking problems, looking for the best technique according to the quality of the binding energy solutions. These techniques are thoroughly evaluated and also compared with popular well-known docking algorithms in AutoDock 4.2. The metaheuristics selected are: generational and steady-state Genetic Algorithms, Differential Evolution, and Particle Swarm Optimization. On the other hand, a C++ version of the jMetal optimization framework has been integrated into AutoDock 4.2, so that all the algorithms included in jMetal are readily available to solve docking problems. The experiments reveal that Differential Evolution obtains the best overall results, even outperforming other existing algorithms specifically designed for molecular docking. | Solving molecular flexible docking problems with metaheuristics: A comparative study
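Of the metaheuristics compared, Differential Evolution can be sketched in its standard DE/rand/1/bin form; the sphere test function stands in for the docking energy landscape and is not the AutoDock 4.2 scoring function:

```python
import random

def de_minimize(f, bounds, pop_size=20, gens=100, F=0.5, CR=0.9, seed=1):
    """Generic DE/rand/1/bin sketch: for each target vector, build a
    mutant from three distinct random vectors (a + F*(b - c)), binomially
    cross it with the target, and keep the trial if it scores no worse."""
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(d)          # guarantees >= 1 mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(d)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    return min(zip(fit, pop))

# Toy landscape standing in for a binding-energy surface
best_f, best_x = de_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 4)
```

In a docking setting the decision vector would encode the ligand's translation, orientation, and torsion angles, and `f` would be the (expensive) scoring function, which is why search efficiency matters so much.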
S1568494614006401 | Owing to the fluctuations of the financial market, the input data in the option pricing formula cannot be expected to be precise. This paper discusses the problem of pricing geometric Asian options under a fuzzy environment. We present the fuzzy price of the geometric Asian option under the assumption that the underlying stock price, the risk-free interest rate and the volatility are all fuzzy numbers. This assumption allows financial investors to pick any geometric Asian option price with an acceptable belief degree. In order to obtain the belief degree, an interpolation search algorithm is proposed. Some numerical examples are presented to illustrate the rationality and practicability of the model and the algorithm. Finally, an empirical study is performed based on real data. The results of the empirical study indicate that the proposed fuzzy pricing model of the geometric Asian option is a useful tool for modeling imprecise problems in the real world. | Fuzzy pricing of geometric Asian options and its algorithm
S1568494614006425 | This paper presents the optimization of a fuzzy edge detector based on the traditional Sobel technique combined with interval type-2 fuzzy logic. The goal of using interval type-2 fuzzy logic in edge detection methods is to provide them with the ability to handle uncertainty in processing real world images. However, the optimal design of fuzzy systems is a difficult task and for this reason the use of meta-heuristic optimization techniques is also considered in this paper. For the optimization of the fuzzy inference systems, the Cuckoo Search (CS) and Genetic Algorithms (GAs) are applied. Simulation results show that using an optimal interval type-2 fuzzy system in conjunction with the Sobel technique provides a powerful edge detection method that outperforms its type-1 counterparts and the pure original Sobel technique. | Optimization of interval type-2 fuzzy systems for image edge detection |
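The crisp Sobel stage that the optimized fuzzy systems build on can be sketched as follows (the interval type-2 fuzzy inference itself is omitted here; the step-edge image is a toy example):

```python
def sobel_gradients(img):
    """Classical Sobel gradient magnitudes: convolve with the horizontal
    and vertical 3x3 kernels and combine as sqrt(gx^2 + gy^2). This crisp
    edge map is the starting point that the fuzzy stage then reasons over."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(ky[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: interior pixels at the step get a strong response
img = [[0, 0, 255, 255] for _ in range(4)]
g = sobel_gradients(img)
```

An interval type-2 fuzzy system would then map these crisp magnitudes through membership functions with an uncertainty footprint, which is the part the paper tunes with CS and GAs.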
S1568494614006437 | This paper proposes a population-based heuristic based on the local best solution (HLBS) for the minimization of makespan in permutation flow shop scheduling problems. The proposed heuristic operates through three mechanisms: (i) it introduces a new method to produce a trace-model for guiding the search, (ii) it modifies a filter strategy to filter the solution regions that have been reviewed and guide the search to new solution regions, in order to keep the search from being trapped in local optima, and (iii) it initiates a new jump strategy to help the search escape if it is trapped at a local optimum. Computational experiments on the well-known Taillard's benchmark data sets demonstrate that the proposed algorithm generated high-quality solutions when compared to existing population-based search algorithms such as genetic algorithms, ant colony optimization, and particle swarm optimization. | A new heuristic based on local best solution for permutation flow shop scheduling
S1568494614006449 | A selection hyper-heuristic is a high level search methodology which operates over a fixed set of low level heuristics. During the iterative search process, a heuristic is selected and applied to a candidate solution in hand, producing a new solution which is then accepted or rejected at each step. Selection hyper-heuristics have been increasingly, and successfully, applied to single-objective optimization problems, while work on multi-objective selection hyper-heuristics is limited. This work presents one of the initial studies on selection hyper-heuristics combining a choice function heuristic selection methodology with great deluge and late acceptance as non-deterministic move acceptance methods for multi-objective optimization. A well-known hypervolume metric is integrated into the move acceptance methods to enable the approaches to deal with multi-objective problems. The performance of the proposed hyper-heuristics is investigated on the Walking Fish Group test suite which is a common benchmark for multi-objective optimization. Additionally, they are applied to the vehicle crashworthiness design problem as a real-world multi-objective problem. The experimental results demonstrate the effectiveness of the non-deterministic move acceptance, particularly great deluge when used as a component of a choice function based selection hyper-heuristic. | Choice function based hyper-heuristics for multi-objective optimization |
S1568494614006450 | This paper aims to propose a stable fuzzy wavelet neural-based adaptive power system stabilizer (SFWNAPSS) for stabilizing the inter-area oscillations in multi-machine power systems. In the proposed approach, a self-recurrent Wavelet Neural Network (SRWNN) is applied with the aim of constructing a self-recurrent consequent part for each fuzzy rule of a Takagi-Sugeno-Kang (TSK) fuzzy model. All parameters of the consequent parts are updated online based on Direct Adaptive Control Theory (DACT) and employing a back-propagation-based approach. The stabilizer initialization is performed using an approach based on genetic algorithm (GA). A Lyapunov-based adaptive learning rates (LALRs) algorithm is also proposed in order to speed up the stabilization rate, as well as to guarantee the convergence of the proposed stabilizer. Therefore, due to having a stable powerful adaptation law, there is no requirement to use any identification process. Kundur's four-machine two-area benchmark power system and a six-machine three-area power system are used with the aim of assessing the effectiveness of the proposed stabilizer. The results are promising and show that the inter-area oscillations are successfully damped by the SFWNAPSS. Furthermore, the superiority of the proposed stabilizer is demonstrated over the IEEE standard multi-band power system stabilizer (MB-PSS) and the conventional PSS. | Direct adaptive power system stabilizer design using fuzzy wavelet neural network with self-recurrent consequent part
S1568494614006462 | This paper presents an evolutionary hybrid algorithm of invasive weed optimization (IWO) merged with opposition-based learning to solve large scale economic load dispatch (ELD) problems. The oppositional invasive weed optimization (OIWO) is based on the colonizing behavior of weed plants and empowered by quasi-opposite numbers. The proposed OIWO methodology has been developed to minimize the total generation cost while satisfying several constraints such as generation limits, load demand, valve point loading effect, multi-fuel options and transmission losses. The proposed algorithm is tested and validated using five different test systems. The most important merits of the proposed methodology are its high accuracy, good convergence characteristics and robustness in solving ELD problems. The simulation results of the proposed OIWO algorithm show its applicability and superiority when compared with the results of other tested algorithms such as oppositional real coded chemical reaction, shuffled differential evolution, biogeography based optimization, improved coordinated aggregation based PSO, quantum-inspired particle swarm optimization, hybrid quantum mechanics inspired particle swarm optimization, modified shuffled frog leaping algorithm with genetic algorithm, simulated annealing based optimization and estimation of distribution and differential evolution algorithm. | Large scale economic dispatch of power systems using oppositional invasive weed optimization