S1568494615006353
In this paper, a soft computing method based on a recurrent self-organizing neural network (RSONN) is proposed for predicting the sludge volume index (SVI) in the wastewater treatment process (WWTP). A growing and pruning method is developed to tune the structure of the RSONN using the sensitivity analysis (SA) of hidden nodes: redundant hidden nodes are removed and new hidden nodes are inserted when the SA values of the hidden nodes meet the criteria. The structure of the RSONN can thus self-organize to maintain prediction accuracy. Moreover, the convergence of the RSONN is discussed both in the self-organizing phase and in the phase following modification of the structure. Finally, the proposed soft computing method is tested and compared with other algorithms on the problem of predicting SVI in the WWTP. Experimental results demonstrate its effectiveness, achieving considerably better prediction performance for SVI values.
A soft computing method to predict sludge volume index based on a recurrent self-organizing neural network
S1568494615006365
The accumulated experience of antenna designers shows that a conventional microstrip antenna exhibits low efficiency, and various techniques have been adopted to improve its performance characteristics. This paper deals with the optimization of a Sierpinski fractal antenna structure using particle swarm optimization (PSO) and a curve fitting method. The data required for optimization and curve fitting were obtained by varying different design parameters of the proposed antenna, with the electromagnetic solver Ansoft HFSS 13.0 used to generate the parametric data. The MATLAB curve fitting tool is used to develop the equations that express the relations between the parameters of the proposed antenna design. Particle swarm optimization is then applied to find the optimum values of the design parameters for bandwidth enhancement of the proposed antenna. The curve-fitting-based optimized design yields a remarkable improvement in the bandwidth of the conventional microstrip line fed Sierpinski fractal antenna for broadband applications.
Design and optimization of a modified Sierpinski fractal antenna for broadband applications
S1568494615006468
Distributed Evolutionary Algorithms are traditionally executed on homogeneous dedicated clusters, although most scientists have access mainly to networks of heterogeneous nodes (e.g., desktop PCs in a lab). Fitting this kind of algorithm to these environments, so that it can take advantage of their heterogeneity to save running time, is still an open problem. The different computational power of the nodes affects the performance of the algorithm, and tuning the algorithm to each node properly could reduce execution time. Since distributed Evolutionary Algorithms include a whole range of parameters that influence performance, this paper proposes a study of the population size. This parameter is one of the most important, since it has a direct relationship with the number of iterations needed to find the solution, as it affects the exploration factor of the algorithm. The aim of this paper is to validate the following hypothesis: fitting the sub-population size to the computational power of each heterogeneous cluster node can lead to an improvement in running time with respect to using the same population size in every node. Two parameter size schemes have been tested, an offline and an online parameter setting, and three problems with different characteristics and computational demands have been used. Results show that setting the population size according to the computational power of each node in the heterogeneous cluster improves the time required to obtain the optimal solution. Meanwhile, the same set of different size values could not improve the running time to reach the optimum in a homogeneous cluster with respect to the same size in all nodes, indicating that the improvement is due to the interaction of the different hardware resources with the algorithm. In addition, a study of the influence of the different population sizes on each stage of the algorithm is presented. This opens a new research line on the fitting (offline or online) of parameters of distributed Evolutionary Algorithms to the computational power of the devices.
Studying the effect of population size in distributed evolutionary algorithms on heterogeneous clusters
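The offline scheme above amounts to splitting the total population across nodes in proportion to their measured compute power. A minimal sketch, assuming each node reports a benchmark score; the scoring inputs, total population size, and rounding repair are illustrative, not taken from the paper:

```python
def subpopulation_sizes(node_scores, total_pop):
    """Split a total EA population across heterogeneous nodes
    proportionally to each node's measured compute score."""
    total_score = sum(node_scores)
    sizes = [max(1, round(total_pop * s / total_score)) for s in node_scores]
    # Fix rounding drift so the sizes sum exactly to total_pop.
    drift = total_pop - sum(sizes)
    sizes[sizes.index(max(sizes))] += drift
    return sizes

# Example: three lab PCs with relative benchmark scores 1.0, 2.5, 4.0.
print(subpopulation_sizes([1.0, 2.5, 4.0], 300))  # -> [40, 100, 160]
```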
S156849461500647X
In this paper, novel interval and general type-2 self-organizing fuzzy logic controllers (SOFLCs) are proposed for the automatic control of anesthesia during surgical procedures. The type-2 SOFLC is a hierarchical adaptive fuzzy controller able to generate and modify its rule-base in response to the controller's performance. It uses type-2 fuzzy sets derived from real surgical data capturing patient variability in monitored physiological parameters during anesthetic sedation, which are used to define the footprint of uncertainty (FOU) of the type-2 fuzzy sets. Experimental simulations were carried out to evaluate the ability of the type-2 SOFLCs to control anesthetic delivery rates, maintaining desired physiological set points for anesthesia (muscle relaxation and blood pressure) under signal and patient noise. Results show that the type-2 SOFLCs perform well, outperforming a previous type-1 SOFLC and comparative approaches for anesthesia control: they produce lower performance errors while using better-defined rules to regulate the anesthesia set points under control uncertainties. The results are further supported by statistical analysis, which also shows that zSlices-based general type-2 SOFLCs outperform the interval type-2 SOFLC in terms of steady-state performance.
Type-2 fuzzy sets applied to multivariable self-organizing fuzzy logic controllers for regulating anesthesia
S1568494615006481
In the present study, a new soft computing framework is developed for solving nanofluidic problems based on the fluid flow and heat transfer of multi-walled carbon nanotubes (MWCNTs) along a flat plate with a Navier slip boundary, with the help of artificial neural networks (ANNs), genetic algorithms (GAs), the interior-point algorithm (IPA), and the hybridized approach GA-IPA. The original PDEs associated with the problem are transformed into a system of nonlinear ODEs using a similarity transformation. A mathematical model of the transformed system is constructed by exploiting the universal function approximation ability of ANNs, and an unsupervised error function is formulated for the system in a least-mean-square sense. Learning of the design variables of the networks is carried out with GAs supported by IPA for rapid local convergence. The design scheme is applied to solve a number of variants taking water, engine oil, and kerosene oil as base fluids mixed with different concentrations of MWCNTs. The reliability and effectiveness of the design scheme are measured with the help of statistical analysis based on a sufficiently large number of independent runs of the algorithms rather than a single successful run. Comparative studies of the proposed solution against standard numerical results are made in order to establish the correctness of the given scheme.
Stochastic numerical solver for nanofluidic problems containing multi-walled carbon nanotubes
S156849461500650X
The present study applies a Hybrid method for the identification of unknown parameters in a semi-empirical tire model, the so-called Magic Formula. First, the Hybrid method uses a Genetic Algorithm (GA) as a global search methodology with high exploration power. Second, the results of the Genetic Algorithm are used as starting values for the Levenberg–Marquardt (LM) algorithm, a gradient-based method with high exploitation power. In this way the beneficial aspects of both methods are exploited simultaneously and their shortcomings are avoided. In order to establish the effectiveness of the proposed method, the performance of the Hybrid method has been compared with other methods available in the literature. In addition, the use of the GA as a heuristic method for tire parameter identification is discussed, and the extrapolation power of the Magic Formula identified with the Hybrid method is investigated. Finally, the performance of the Hybrid method has been examined through tire parameter identification with an a priori known model. The results indicate that the Hybrid method has outstanding benefits such as high convergence speed, high accuracy, and insensitivity to the starting values of the unknown parameters.
Identification of tire force characteristics using a Hybrid method
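The two-stage idea above (global search, then gradient-based refinement of the Magic Formula parameters) can be sketched as follows, with SciPy's differential evolution standing in for the GA global stage and least_squares(method='lm') as the Levenberg–Marquardt stage; the four-parameter Pacejka form and all data are illustrative:

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

def magic_formula(p, alpha):
    B, C, D, E = p  # stiffness, shape, peak, curvature factors
    return D * np.sin(C * np.arctan(B * alpha - E * (B * alpha - np.arctan(B * alpha))))

def residuals(p, alpha, force):
    return magic_formula(p, alpha) - force

# Synthetic "measured" data from known parameters, for illustration only.
true_p = np.array([10.0, 1.9, 1.0, 0.97])
alpha = np.linspace(-0.3, 0.3, 200)
force = magic_formula(true_p, alpha) + 0.01 * np.random.randn(alpha.size)

bounds = [(1, 30), (0.5, 3), (0.1, 2), (-2, 2)]
# Global stage (SciPy's differential evolution standing in for the GA).
coarse = differential_evolution(lambda p: np.sum(residuals(p, alpha, force) ** 2), bounds)
# Local stage: Levenberg-Marquardt refinement started from the global result.
fine = least_squares(residuals, coarse.x, args=(alpha, force), method='lm')
print(fine.x)
```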
S1568494615006511
In this paper, a novel intelligent computational approach is developed for finding the solution of a nonlinear singular system governed by the boundary value problems of Flierl–Petviashivili equations, using artificial neural networks optimized with genetic algorithms, the sequential quadratic programming technique, and their combination. The competency of artificial neural networks for universal function approximation is exploited in formulating a mathematical model of the equation based on an unsupervised error, with the specialty of satisfying the boundary condition at infinity. The training of the weights of the networks is carried out with memetic computing: a genetic algorithm is used as a reliable global search method, hybridized with the sequential quadratic programming technique for rapid local convergence. The proposed scheme is evaluated on three variants of the two boundary problems by taking different values of the nonlinearity operators and constant coefficients. The reliability and effectiveness of the design approaches are validated through statistical analyses based on a sufficiently large number of independent runs in terms of accuracy, convergence, and computational complexity. Comparative studies of the proposed results against state-of-the-art analytical solvers show good agreement in most cases and even better results in a few. The intrinsic worth of the schemes lies in their conceptual simplicity, ease of implementation, avoidance of the singularity at the origin, effective handling of strong nonlinearity, and ability to handle exactly the traditional initial conditions along with the boundary condition at infinity.
Reliable numerical treatment of nonlinear singular Flierl–Petviashivili equations for unbounded domain using ANN, GAs, and SQP
S1568494615006523
Complex product configuration design requires a rapid and accurate response to customers' demands, and the participation of customers in product design is a very effective way to achieve this. The traditional interactive genetic algorithm (IGA) can solve the above problem to some extent through a computer-aided user interface. However, it is difficult to express an individual's fitness with an exact number because the customers' cognition of the evolutionary population is uncertain, and the user fatigue problem in IGA also needs to be solved. Thus, an interactive genetic algorithm with interval individual fitness based on hesitancy (IGA-HIIF) is proposed in this paper. In IGA-HIIF, an interval number derived from the user's evaluation time is adopted to express an individual's fitness, and the evolutionary individuals are compared according to the interval probability dominance strategy proposed in this paper. Genetic operations are then applied to generate the offspring population, and the evolutionary process does not stop until it meets the termination conditions of the evolution or the user manually terminates it. IGA-HIIF is applied to the design system of a car console configuration and compared with two other kinds of IGA. Extensive experimental results demonstrate that our proposed algorithm is correct and efficient.
An interactive genetic algorithm with the interval arithmetic based on hesitation and its application to achieve customer collaborative product configuration design
S1568494615006535
To resolve the conflict between the desire for a good smoothing effect and the desire to give additional weight to recent changes, a grey accumulating generation operator, which can smooth the random interference in data, is introduced into the double exponential smoothing method. The results of practical numerical examples demonstrate that the proposed grey double exponential smoothing method outperforms the traditional double exponential smoothing method in forecasting problems.
Grey double exponential smoothing model and its application on pig price forecasting in China
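A minimal sketch of the combination the abstract describes: a first-order accumulating generation operator (1-AGO) smooths the raw series, Brown's double exponential smoothing forecasts the accumulated series, and a first difference (inverse AGO) maps the forecast back to the original scale. The smoothing constant and the sample series are assumed values, not the paper's:

```python
import numpy as np

def grey_double_exp_forecast(x0, alpha=0.3, horizon=1):
    """Forecast a series via 1-AGO + Brown's double exponential smoothing."""
    x1 = np.cumsum(x0)                       # accumulating generation operator
    s1 = s2 = x1[0]                          # single / double smoothed values
    for v in x1[1:]:
        s1 = alpha * v + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    a = 2 * s1 - s2                          # Brown's level estimate
    b = alpha / (1 - alpha) * (s1 - s2)      # Brown's trend estimate
    x1_hat = a + b * np.arange(1, horizon + 1)
    # Inverse AGO: difference against the last accumulated value.
    return np.diff(np.concatenate(([x1[-1]], x1_hat)))

prices = np.array([12.1, 12.4, 12.9, 13.6, 14.1, 14.8])
print(grey_double_exp_forecast(prices, horizon=2))
```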
S1568494615006547
A chaos-based image encryption and lossless compression algorithm using a hash table and the Chinese Remainder Theorem is proposed. Initially, the Henon map is used to generate the scrambled blocks of the input image. Each scrambled block then undergoes a fixed number of iterations of the Arnold cat map, the number being derived from the plain image. Since a hyper-chaotic system has more complex dynamical characteristics than an ordinary chaotic one, the confused image is further permuted using an index sequence generated by the hyper-chaotic system together with a hash table structure. The permuted image is divided into blocks, and diffusion is carried out either by using the Lorenz equations or by using another complex matrix generated appropriately from the plain image. Along with diffusion, compression is carried out for each block via the Chinese Remainder Theorem. The encryption algorithm has a large key space, good NPCR and UACI values, and very low correlation among adjacent pixels. Simulation results show the high effectiveness and security features of the proposed algorithm.
A chaos based image encryption and lossless compression algorithm using hash table and Chinese Remainder Theorem
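Only the first scrambling step lends itself to a compact sketch: ranking a Henon-map orbit with argsort yields an index permutation of the kind used for block scrambling. The map parameters are the classic Henon values; the Arnold cat map, hyper-chaos permutation, diffusion, and CRT compression stages are not reproduced here:

```python
import numpy as np

def henon_permutation(n, x0=0.1, y0=0.3, a=1.4, b=0.3, burn_in=1000):
    """Generate a pseudo-random permutation of range(n) by ranking
    a Henon-map orbit (index-sequence style scrambling)."""
    x, y = x0, y0
    seq = np.empty(n)
    for i in range(burn_in + n):          # discard transient iterations
        x, y = 1.0 - a * x * x + y, b * x
        if i >= burn_in:
            seq[i - burn_in] = x
    return np.argsort(seq)                # rank order -> permutation

perm = henon_permutation(16)
block = np.arange(16)
print(block[perm])                        # scrambled block indices
```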
S1568494615006559
We study the problem of scheduling a set of N jobs with non-identical job sizes from F different families on a set of M parallel batch machines; the objective is to minimize the makespan. The problem is known to be NP-hard. A meta-heuristic based on Max–Min Ant System (MMAS) is presented. The performance of the algorithm is compared with several previously studied algorithms by computational experiments. According to our results, the average distance between the solutions found by our proposed algorithm and the lower bounds is about 4% less than that of the best of all the compared algorithms, demonstrating that our algorithm outperforms the previously studied algorithms.
An ACO algorithm for makespan minimization in parallel batch machines with non-identical job sizes and incompatible job families
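The feature that distinguishes Max–Min Ant System from a plain ant system is its bounded pheromone update. A generic sketch of that rule follows; the tour/edge encoding and all constants are illustrative, and the paper's batching-specific construction is not reproduced:

```python
import numpy as np

def mmas_update(pheromone, best_tour, best_cost, rho=0.02,
                tau_min=0.01, tau_max=5.0):
    """Max-Min Ant System pheromone update: evaporate everywhere,
    deposit only on the best solution's edges, then clamp to
    [tau_min, tau_max] to avoid premature stagnation."""
    pheromone *= (1.0 - rho)                      # evaporation
    for i, j in zip(best_tour, best_tour[1:]):
        pheromone[i, j] += 1.0 / best_cost        # best-ant deposit
    np.clip(pheromone, tau_min, tau_max, out=pheromone)
    return pheromone
```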
S1568494615006560
Delay Tolerant Networks (DTNs) often suffer from intermittent disruption due to factors such as mobility and energy. Although many routing algorithms for DTNs have been proposed in the last few years, routing security problems have not attracted enough attention, and DTNs still face threats from different kinds of routing attacks. In this paper, a general-purpose defense mechanism against various routing attacks on DTNs is proposed. The defense mechanism is based on the routing path information acquired from the forwarded messages and the acknowledgments (ACKs), and it is suitable for different routing schemes. Evolutionary game theory is applied with the defense mechanism to analyze and facilitate the strategy changes of the nodes in the network. Simulation results show that the proposed evolutionary-game-based defense scheme can achieve a high average delivery ratio, low network overhead, and low average transmission delay in various routing attack scenarios. By introducing game theory, the network can avoid being attacked and provide normal transmission service, and it can reach an evolutionarily stable strategy (ESS) under special conditions after evolution. The initial parameters affect the convergence speed and the final ESS, whereas the initial ratio of nodes choosing different strategies affects only the game process.
A routing defense mechanism using evolutionary game theory for Delay Tolerant Networks
S1568494615006572
Excessive implant-bone relative micromotion is detrimental to both the primary and the long-term stability of a hip stem in cementless total hip arthroplasty (THA). The shape and geometry of the implant are known to influence the resulting post-operative micromotion. Finite element (FE)-based design evaluations are manually intensive and computationally expensive, especially when a large number of designs need to be evaluated for an optimum outcome. This study presents a predictive mathematical model based on a back-propagation neural network (BPNN) to relate femoral stem design parameters to post-operative implant-bone micromotion, with no recourse to tedious nonlinear FE analysis. The characterization of the design parameters was based on our earlier study on shape optimization of the femoral implant. The BPNN led to much faster prediction of implant-bone relative micromotion than the FE analysis. Using the BPNN-predicted output as the objective function, a genetic algorithm (GA) based search was performed in order to minimize post-operative micromotion under simulated physiological loading conditions. The micromotion predicted by the neural network was found to have a significant correlation with the FE-calculated results (correlation coefficient R² = 0.80 for training; R² = 0.82 for testing). The optimal stems, evolved from the GA search over 12,500 designs, were found to offer improved primary stability compared with the initial TriLock (DePuy) design. Our predicted results favour lateral-flared designs having rectangular proximal transverse sections with greater stem sizes.
A combined neural network and genetic algorithm based approach for optimally designed femoral implant having improved primary stability
S1568494615006596
Association mining is a well-explored topic applied to various fields. In this article, associations among genes are identified from microarray gene expression data. A methodology called Fuzzy Correlated Association Mining (FCAM) is developed for identifying associations among genes whose expression patterns have altered quite significantly from the normal state to the diseased state; this idea leads to predicting the disease-mediating genes along with their altered associations. The proposed methodology involves the generation of fuzzy gene sets, construction of fuzzy items, computation of fuzzy support for fuzzy items and the fuzzy correlation coefficient of a pair of fuzzy items, generation of associations, and identification of associations altered from the normal to the diseased state. The concepts of finding the fuzzy correlation between two groups of items, generating altered associations among items (or groups of items), and ranking these items according to their importance are the novel contributions of the present article. The effectiveness of the methodology has been demonstrated on five gene expression data sets dealing with human lung cancer, colon cancer, sarcoma, breast cancer, and leukemia. As a result, some genes, such as IGFBP3, ERBB2, TP53, HBB, KRAS, PTEN, CALCA, and CDKN2A, have been found to be important genes that may mediate the development of the cancers considered here. For comparison, 11 existing association rule mining algorithms have been considered. The results are validated in terms of gene–gene interactions, functional enrichment, biochemical pathways, and the NCBI database.
Fuzzy Correlated Association Mining: Selecting altered associations among the genes, and some possible marker genes mediating certain cancers
S1568494615006602
In optimization, the performance of differential evolution (DE) and of its hybrid versions existing in the literature is highly affected by the inappropriate choice of its operators, such as mutation and crossover. In general practice, DE does not employ any strategy for memorizing the best results obtained so far in the earlier generations. In this paper, a new memory-based DE (MBDE) is presented, in which two "swarm operators" are introduced. These operators are based on the pBEST and gBEST mechanisms of particle swarm optimization. The proposed MBDE is employed to solve 12 basic, 25 CEC 2005, and 30 CEC 2014 unconstrained benchmark functions. In order to further test its efficacy, five different test systems of the model order reduction (MOR) problem for single-input single-output systems are solved by MBDE. The results of MBDE are compared with state-of-the-art algorithms that have also solved those problems. Numerical, statistical, and graphical analyses reveal the competency of the proposed MBDE.
A memory based differential evolution algorithm for unconstrained optimization
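One plausible reading of the "swarm operators" above is a DE mutation biased toward each individual's personal best (pBEST) and the global best (gBEST); the exact formula below is an assumption for illustration, not the paper's:

```python
import numpy as np

def mbde_mutation(pop, pbest, gbest, i, F=0.5):
    """Memory-guided DE mutation: perturb individual i toward its
    personal best and the global best, PSO-style, plus a standard
    DE difference term (a plausible sketch; the paper's exact
    operators may differ)."""
    r1, r2 = np.random.choice(
        [k for k in range(len(pop)) if k != i], 2, replace=False)
    return (pop[i]
            + F * (pbest[i] - pop[i])      # pull toward personal best
            + F * (gbest - pop[i])         # pull toward global best
            + F * (pop[r1] - pop[r2]))     # classic DE difference vector
```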
S1568494615006614
Automatic network clustering is an important technique for mining the meaningful communities (or clusters) of a network. Communities in a network are clusters of nodes with high intra-cluster connection density and low inter-cluster connection density. The most popular scheme for automatic network clustering aims at maximizing a criterion function known as modularity when partitioning all the nodes into clusters. However, modularity is known to suffer from the resolution limit problem, which remains an open challenge. In this paper, automatic network clustering is formulated as a constrained optimization problem: maximizing a criterion function under a density constraint. With this scheme, the resulting algorithm is free from the resolution limit problem. Furthermore, it is found that the density constraint can improve the detection accuracy of modularity optimization. The efficiency of the proposed scheme is verified by comparative experiments on large-scale benchmark networks.
Automatic network clustering via density-constrained optimization with grouping operator
S1568494615006626
Parameter estimation is a cornerstone of many fundamental problems of statistical research and practice. In particular, the estimation of finite mixture models has long relied heavily on deterministic approaches such as expectation maximization (EM). Despite their successful utilization in a wide spectrum of areas, such approaches tend to converge to local solutions. An alternative is the adoption of Bayesian inference, which naturally addresses data uncertainty while ensuring good generalization. To this end, in this paper we propose a fully Bayesian approach for Langevin mixture model estimation and selection via an MCMC algorithm based on the Gibbs sampler, Metropolis–Hastings, and Bayes factors. We demonstrate the effectiveness and merits of the proposed learning framework on synthetic data and on challenging applications involving topic detection and tracking and image categorization.
A Bayesian analysis of spherical pattern based on finite Langevin mixture
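One generic building block of the MCMC scheme mentioned above is a random-walk Metropolis–Hastings step. A minimal sketch; the toy log-posterior is illustrative and unrelated to the Langevin mixture itself:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples=5000, step=0.5):
    """Random-walk Metropolis-Hastings over a scalar parameter:
    propose a Gaussian perturbation, accept with probability
    min(1, posterior ratio)."""
    samples, x = [], x0
    lp = log_post(x)
    for _ in range(n_samples):
        cand = x + step * np.random.randn()
        lp_cand = log_post(cand)
        if np.log(np.random.rand()) < lp_cand - lp:   # accept/reject
            x, lp = cand, lp_cand
        samples.append(x)
    return np.array(samples)

# Toy target: log-density proportional to a Gamma(3, 2) (illustrative).
draws = metropolis_hastings(
    lambda k: 2 * np.log(k) - 2 * k if k > 0 else -np.inf, x0=1.0)
print(draws.mean())  # roughly 1.5, the Gamma(3, 2) mean
```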
S156849461500664X
With the advent of restructuring and parallel operation in the power market, some routine rules and patterns of the traditional market must be handled differently from the past. To this end, the unit commitment (UC) scheduling that was once aimed at minimizing operating costs in an integrated power market is transformed into profit-based unit commitment (PBUC), a new scheme in which generation companies (GENCOs) seek to maximize their own profit. In this paper, a novel optimization technique called the imperialist competitive algorithm (ICA), as well as an improved version of this evolutionary algorithm, is employed for solving the PBUC problem. Moreover, the traditional binary coding of initial solutions is replaced with an improved integer-based coding method in order to reduce computational complexity and thereby improve the convergence of the proposed method. A sub-ICA algorithm is then proposed to obtain the optimal generation power of the thermal units. Simulation results validate the effectiveness and applicability of the proposed method in two scenarios: (a) a set of unimodal and multimodal standard benchmark functions, and (b) two GENCOs consisting of 10 and 100 generating units.
An ICA based approach for solving the profit based unit commitment problem
S1568494615006675
In this paper the optimization of type-2 fuzzy inference systems using genetic algorithms (GAs) and particle swarm optimization (PSO) is presented. The optimized type-2 fuzzy inference systems are used to estimate the type-2 fuzzy weights of backpropagation neural networks. Simulation results and a comparative study among neural networks with type-2 fuzzy weights without optimization of the type-2 fuzzy inference systems, neural networks with type-2 fuzzy weights optimized using genetic algorithms, and neural networks with type-2 fuzzy weights optimized using particle swarm optimization are presented to illustrate the advantages of the bio-inspired methods. The comparative study is based on a benchmark prediction case, the Mackey–Glass time series (for τ = 17).
Optimization of type-2 fuzzy weights in backpropagation learning for neural networks using GAs and PSO
S1568494615006687
This paper discusses the application of simulated annealing (SA) based meta-heuristics to self-organized orthogonal resource allocation problems in small cell networks (SCNs), for both static and dynamic topologies. We consider the graph coloring formulation of the orthogonal resource allocation problem, where a planar graph is used to model interference relations in an SCN comprising randomly deployed mutually interfering cells. The aim is to color the underlying conflict graph in a distributed way, for which different variants of SA, such as SA with a focusing heuristic (i.e., limiting the local moves to cells that are in conflict) and SA at fixed temperature, are investigated. For static topologies, distributed algorithms are used in which no dedicated message-passing is required between the cells, except for the symmetrization of the conflict graph. To enable distributed SA in dynamic topologies, a distributed temperature control protocol based on message-passing is considered. Different aspects relevant to self-organizing cellular networks are analyzed using simulations, including the number of cells with resource conflicts, the number of resource reconfigurations required to resolve the conflicts, the requirements on dedicated message-passing between cells, and the sensitivity to the temperature parameter that guides the stochastic search process. Simulation results indicate that the considered algorithms are inherently suitable for SCNs, enabling efficient resource allocation in a self-organized way. Furthermore, the underlying concepts and key conclusions are general and relevant to other problems that can be solved by distributed graph coloring.
Simulated annealing variants for self-organized resource allocation in small cell networks
S1568494615006699
Conventional machine learning methods such as neural networks (NNs) use empirical risk minimization (ERM), which is based on the assumption of infinite samples and is therefore disadvantageous for gait learning control based on small sample sizes for biped robots walking in unstructured, uncertain, and dynamic environments. Aiming at the stable walking control problem for biped robots in dynamic environments, this paper puts forward a gait control method based on support vector machines (SVMs), which provides a solution for the learning control issue with small sample sizes. The SVM is equipped with a mixed kernel function for gait learning. Using the ankle trajectory and hip trajectory as inputs and the corresponding trunk trajectory as output, the SVM is trained on small sample sizes to learn the dynamic kinematic relationships between the legs and the trunk of the biped robot. The robustness of the gait control is enhanced, which helps realize stable biped walking, and the proposed method shows superior performance compared with SVMs using radial basis function (RBF) kernels and polynomial kernels, respectively. Simulation results demonstrate the superiority of the proposed method.
A SVM controller for the stable walking of biped robots based on small sample sizes
S1568494615006705
The turning points prediction scheme for future time series analysis based on past and present information is widely employed in the field of financial applications. In this research, a novel approach to identify turning points of the trading signal using a fuzzy rule-based model is presented. The Takagi–Sugeno fuzzy rule-based model (the TS model) can accurately identify daily stock trading from sets of technical indicators according to the trading signals learned by a support vector regression (SVR) technique. In addition, when new trading points are created, the structure and parameters of the TS model are constantly inherited and updated. To verify the effectiveness of the proposed TS fuzzy rule-based modeling approach, we have acquired the stock trading data in the US stock market. The TS fuzzy approach with dynamic threshold control is compared with a conventional linear regression model and artificial neural networks. Our result indicates that the TS fuzzy model not only yields more profit than other approaches but also enables stable dynamic identification of the complexities of the stock forecasting system.
A Takagi–Sugeno fuzzy model combined with a support vector regression for stock trading forecasting
S1568494615006717
This article suggests soft computing methods to predict stable cutting depths in turning operations without chatter vibrations. Chatter vibrations cause poor surface finish, so preventing these vibrations is an important area of research, and predicting stable cutting depths is vital for determining the stable cutting region. In this study, a set of cutting experiments has been used and the stable cutting depths are predicted as a function of cutting, modal, and tool-working material parameters. Regression analyses, artificial neural networks (ANNs), decision trees, and heuristic optimization models are used to develop the generalization models. The purpose of the models is to estimate stable cutting depths with minimum error; the ANN produces better results than the other models. This study helps operators and engineers to perform turning operations in an appropriate cutting region without chatter vibrations, and it also helps in taking precautions against chatter.
Prediction of stable cutting depths in turning operation using soft computing methods
S1568494615006729
This paper proposes the modified fuzzy min–max neural network (MFMMN) classification model to perform supervised classification of data. The basic fuzzy min–max neural network (FMMN) can only be applied to continuous attribute values and cannot handle discrete values. In addition, justification of the classification results given by FMMN is required to make it more applicable to real-world applications. Both of these issues are addressed in the proposed MFMMN. In the MFMMN, each hyperbox has min–max values defined in terms of the continuous attributes and a set of binary strings defined for the discrete attributes. Bitwise "and" and "or" operators are used to update the discrete values associated with each hyperbox. The trained network is pruned to remove the less useful hyperboxes based on their confidence factors. The proposed model is applied to nine different datasets taken from the University of California, Irvine (UCI) machine learning repository. Finally, a case study on real-time weather data is evaluated using MFMMN. The experimental results show that the proposed model gives very good accuracy. In addition to the accuracy, the number of hyperboxes obtained after pruning is very small, which leads to a small number of concise rules and reduced computational complexity.
Extracting classification rules from modified fuzzy min–max neural network for data with mixed attributes
S1568494615006742
This paper proposes a new global optimization metaheuristic called Galactic Swarm Optimization (GSO) inspired by the motion of stars, galaxies and superclusters of galaxies under the influence of gravity. GSO employs multiple cycles of exploration and exploitation phases to strike an optimal trade-off between exploration of new solutions and exploitation of existing solutions. In the explorative phase different subpopulations independently explore the search space and in the exploitative phase the best solutions of different subpopulations are considered as a superswarm and moved towards the best solutions found by the superswarm. In this paper subpopulations as well as the superswarm are updated using the PSO algorithm. However, the GSO approach is quite general and any population based optimization algorithm can be used instead of the PSO algorithm. Statistical test results indicate that the GSO algorithm proposed in this paper significantly outperforms 4 state-of-the-art PSO algorithms and 4 multiswarm PSO algorithms on an overwhelming majority of 15 benchmark optimization problems over 50 independent trials and up to 50 dimensions. Extensive simulation results show that the GSO algorithm proposed in this paper converges faster to a significantly more accurate solution on a wide variety of high dimensional and multimodal benchmark optimization problems.
Galactic Swarm Optimization: A new global optimization metaheuristic inspired by galactic motion
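The two-level structure described above can be sketched with plain PSO at both levels: subpopulations explore independently, then their best solutions form a superswarm that is optimized again. All population sizes, cycle counts, bounds, and PSO coefficients below are illustrative, not the paper's settings:

```python
import numpy as np

def pso(swarm, f, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Plain PSO on a given swarm (updated in place); returns the best
    position found."""
    vel = np.zeros_like(swarm)
    pbest = swarm.copy()
    pcost = np.array([f(x) for x in swarm])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(*swarm.shape), np.random.rand(*swarm.shape)
        vel = w * vel + c1 * r1 * (pbest - swarm) + c2 * r2 * (g - swarm)
        swarm += vel
        cost = np.array([f(x) for x in swarm])
        improved = cost < pcost
        pbest[improved], pcost[improved] = swarm[improved], cost[improved]
        g = pbest[pcost.argmin()].copy()
    return g

def gso(f, dim=10, n_sub=5, sub_size=20, cycles=10):
    """GSO-style two-level search: explore within subpopulations, then
    treat the subpopulation bests as a superswarm and exploit again."""
    subs = [np.random.uniform(-5, 5, (sub_size, dim)) for _ in range(n_sub)]
    for _ in range(cycles):
        bests = np.array([pso(s, f) for s in subs])     # exploration level
        super_best = pso(bests.copy(), f)               # exploitation level
    return super_best

print(gso(lambda x: np.sum(x**2))[:3])   # sphere-function sanity check
```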
S1568494615006766
This paper is the first of two papers entitled "Weighted Superposition Attraction (WSA)", which is based on two basic mechanisms observable in many systems: "superposition" and the "attracted movement of agents". Dividing the work into two papers became necessary because of their individually comprehensive contents; written as a single paper, the material would have had to be far more compact and would not have been as effective. In many natural phenomena it is possible to compute the superposition, or weighted superposition, of active fields such as light sources, electric fields, sound sources, and heat sources; the same may also be possible for social systems. An agent (particle, human, electron, etc.) may be supposed to move towards the superposition if it is attractive to it, and as the system's status changes the superposition also changes and needs to be recomputed. This is the main idea behind the WSA algorithm, which attempts to realize this superposition principle, in combination with the attracted movement of agents, as a search procedure for solving optimization problems effectively. In this first part, the performance of the proposed WSA algorithm is tested on well-known unconstrained continuous optimization functions through a set of computational studies. The comparison with some other search algorithms is performed in terms of solution quality and computational time. The experimental results clearly indicate the effectiveness of the WSA algorithm.
Weighted Superposition Attraction (WSA): A swarm intelligence algorithm for optimization problems – Part 1: Unconstrained optimization
S1568494615006778
Microarray experiments generally deal with complex and high-dimensional samples, and in addition, the number of samples is much smaller than their dimensionality. Both issues can be alleviated by using a feature selection (FS) method. In this paper two new, simple, and efficient hybrid FS algorithms, called BDE-XRank and BDE-XRankf respectively, are presented. Both algorithms combine a wrapper FS method based on a Binary Differential Evolution (BDE) algorithm with a rank-based filter FS method. Besides, they generate the initial population with solutions involving only a small number of features: some initial solutions are built considering only the most relevant features according to the filter method, and the remaining ones include only random features (to promote diversity). In BDE-XRankf, a new fitness function, in which the score of a solution is influenced by the frequency of its features in the current population, is incorporated into the algorithm. The robustness of BDE-XRank and BDE-XRankf is shown using four Machine Learning (ML) algorithms (NB, SVM, C4.5, and kNN). Six well-known high-dimensional data sets from microarray experiments are used to carry out an extensive experimental study based on statistical tests. This experimental analysis shows the robustness as well as the ability of both proposals to obtain highly accurate solutions at the early stages of the BDE evolutionary process. Finally, BDE-XRank and BDE-XRankf are also compared against the results of nine state-of-the-art algorithms to highlight their competitiveness and ability to successfully reduce the original feature set size by more than 99%.
Two hybrid wrapper-filter feature selection algorithms applied to high-dimensional microarray experiments
S156849461500678X
Power quality (PQ) issues have become more important than before due to the increased use of sensitive electrical loads. In this paper, a new hybrid algorithm is presented for detecting PQ disturbances in electrical power systems. The proposed method is constructed in four main steps: simulation of PQ events, extraction of features, selection of dominant features, and classification of the selected features. Using two powerful signal processing tools, variational mode decomposition (VMD) and the S-transform (ST), potential features are extracted from different PQ events; VMD decomposes signals into different modes, and ST analyzes signals in both the time and frequency domains. In order to avoid a large feature vector and to obtain a detection scheme with an optimum structure, sequential forward selection (SFS) and sequential backward selection (SBS) as wrapper-based methods, and a Gram–Schmidt orthogonalization (GSO) based feature selection method as a filter-based method, are used for the elimination of redundant features. In the next step, PQ events are discriminated by support vector machines (SVMs) as the classifier core. The results of extensive tests prove the satisfactory performance of the proposed method in terms of speed and accuracy, even in noisy conditions. Moreover, the start and end points of PQ events can be detected with high precision.
Combined VMD-SVM based feature selection method for classification of power quality events
S1568494615006791
This paper presents a novel adaptive cuckoo search (ACS) algorithm for optimization. The step size is made adaptive using knowledge of the fitness function value and the current position in the search space. Another important feature of the ACS algorithm is its speed, which is higher than that of the CS algorithm. Here, an attempt is made to make the cuckoo search (CS) algorithm parameter-free, without a Lévy step. The proposed algorithm is validated using twenty-three standard benchmark test functions. The second part of the paper proposes an efficient face recognition algorithm using ACS, principal component analysis (PCA), and intrinsic discriminant analysis (IDA). The proposed algorithms are named PCA+IDA and ACS+IDA. Interestingly, PCA+IDA offers a perturbation-free algorithm for dimension reduction, while ACS+IDA is used to find the optimal feature vectors for classification of the face images based on IDA. For the performance analysis, three standard face databases are used: YALE, ORL, and FERET. A comparison of the proposed method with state-of-the-art methods reveals the effectiveness of our algorithm.
A novel adaptive cuckoo search algorithm for intrinsic discriminant analysis based face recognition
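The abstract states only that the step size depends on the fitness value and the current position and shrinks over iterations; the rule below is one plausible choice consistent with that description, not the paper's exact expression:

```python
import numpy as np

def adaptive_step(fit, best, worst, t):
    """One plausible adaptive step-size rule: near-best nests take
    small steps (fine exploitation), poor nests take large steps
    (exploration), and all steps shrink as iterations t grow."""
    rel = (fit - best) / (worst - best + 1e-12)   # 0 for best, 1 for worst
    return (1.0 / t) ** (1.0 - rel)               # good nests move less

def acs_move(nests, fits, t, lb=-5.0, ub=5.0):
    """Move every nest toward the global best with its adaptive step."""
    best, worst = fits.min(), fits.max()
    gbest = nests[fits.argmin()]
    new = np.empty_like(nests)
    for i, x in enumerate(nests):
        step = adaptive_step(fits[i], best, worst, t)
        new[i] = np.clip(x + step * np.random.randn(*x.shape) * (gbest - x),
                         lb, ub)
    return new
```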
S1568494615006808
The objective of this paper is twofold. First, a new generalized improved score function is presented in the interval-valued intuitionistic fuzzy set (IVIFS) environment by incorporating the idea of a weighted average of the degree of hesitation between the membership functions. Second, an IVIFS-based method for solving multi-criteria decision making (MCDM) problems with completely unknown attribute weights is presented. A ranking of the different attributes is based on the proposed generalized improved score function, and a sensitivity analysis on the ranking of the system is performed with respect to the decision-making parameters. Illustrative examples show that the proposed function is more reasonable in the decision-making process than other existing functions.
A new generalized improved score function of interval-valued intuitionistic fuzzy sets and applications in expert systems
S156849461500681X
This paper presents a new algorithm designed to find the optimal parameters of a PID controller. The proposed algorithm is based on hybridizing differential evolution (DE) and Particle Swarm Optimization with an aging leader and challengers (ALC-PSO). The proposed algorithm (ALC-PSODE) is tested on twelve benchmark functions to confirm its performance. It is found that it obtains better solution quality and a higher success rate in finding the solution, and avoids unstable convergence. ALC-PSODE is also used to tune a PID controller for a three-tank liquid level system, a typical nonlinear control system. Compared with different PSO variants, the genetic algorithm (GA), differential evolution (DE), and the Ziegler–Nichols method, the proposed algorithm achieves the best results with the least standard deviation for different swarm sizes. These results show that ALC-PSODE is more robust and efficient while keeping fast convergence.
Design of optimal PID controller using hybrid differential evolution and particle swarm optimization with an aging leader and challengers
S1568494615006821
The multi-objective energy consumption scheduling problem based on third-party management is an essential issue in the smart grid, with minimal energy cost and maximal utility as the two optimization objectives. One characteristic of this problem is that the difference in magnitude between the two objectives increases as the number of users increases. The difference affects the quality of the solution set, in the sense that it is harder to obtain uniformly distributed solutions with good convergence and diversity for problems with a larger difference. In this paper, we propose a decomposition algorithm based on a non-uniform weight vector. The weight vector is designed based on the relationship of the magnitude difference between the two optimization objectives and consists of two parts: a balance factor that maintains the diversity of the solutions, and a scale factor that further enhances the searching ability for a large number of users. With the non-uniform weight vector, our algorithm can effectively deal with the magnitude difference between the two optimization objectives. Simulations illustrate that the proposed algorithm is very useful for solving large-scale energy consumption scheduling problems. In addition, modified standard ZDT problems with apparent magnitude differences between the two objectives are used to illustrate the versatility and stability of the proposed algorithm.
Multi-objective energy consumption scheduling based on decomposition algorithm with the non-uniform weight vector
S1568494615006833
The 0-1 knapsack problem is a classic combinatorial optimization problem. However, many existing algorithms have low precision and easily fall into local optimal solutions when solving the 0-1 knapsack problem. In order to overcome these problems, this paper proposes a binary version of the monkey algorithm in which the greedy algorithm is used to strengthen the local search ability, the somersault process is modified to avoid falling into local optimal solutions, and a cooperation process is adopted to speed up the convergence rate of the algorithm. To validate the efficiency of the proposed algorithm, experiments are carried out on various data instances of 0-1 knapsack problems and the results are compared with those of five metaheuristic algorithms.
An improved monkey algorithm for a 0-1 knapsack problem
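The greedy strengthening of local search mentioned above is commonly realized as a repair operator on binary solutions: drop worst value/weight-ratio items until feasible, then add best-ratio items that still fit. A minimal sketch of that generic operator (the data are illustrative, and the paper's exact operator may differ):

```python
import numpy as np

def greedy_repair(bits, values, weights, capacity):
    """Repair/improve a binary knapsack solution: remove items with the
    worst value/weight ratio until feasible, then greedily add items
    with the best ratio that still fit."""
    bits = bits.copy()
    ratio_order = np.argsort(values / weights)        # worst ratio first
    for i in ratio_order:                             # restore feasibility
        if weights[bits == 1].sum() <= capacity:
            break
        bits[i] = 0
    for i in ratio_order[::-1]:                       # best ratio first
        if bits[i] == 0 and weights[bits == 1].sum() + weights[i] <= capacity:
            bits[i] = 1
    return bits

values = np.array([60, 100, 120])
weights = np.array([10, 20, 30])
print(greedy_repair(np.array([1, 1, 1]), values, weights, capacity=50))
```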
S1568494615006857
Three-dimensional integrated circuits (3D ICs) can alleviate the interconnection problem, a critical issue in the nanoscale era, and are also promising for heterogeneous integration. This paper proposes a two-phase method combining the ant system algorithm (AS) and simulated annealing (SA) to handle 3D IC floorplanning with fixed-outline constraints. In the first, AS phase, the floorplans are constructed by packing the blocks one by one, and the AS is used to explore an appropriate packing order and device layer assignment for the blocks. When packing a block, a proper position, including the coordinates and the appropriate layer in the partially constructed floorplan, is chosen from all possible positions. While packing the blocks, a probabilistic layer assignment strategy is proposed to determine the device layer assignment of the unpacked blocks. After the AS phase, the SA phase performs further optimization. The proposed method can also be easily applied to 2D floorplanning problems. Compared with state-of-the-art 3D/2D fixed-outline floorplanners, the experimental results demonstrate the effectiveness of the proposed method.
Combining the ant system algorithm and simulated annealing for 3D/2D fixed-outline floorplanning
S1568494615006882
Ant colony optimization (ACO) algorithms have been successfully applied in data classification, where they aim at discovering a list of classification rules. However, due to the essentially random search in ACO algorithms, the lists of classification rules constructed by ACO-based classification algorithms are not fixed and may be distinctly different even when the same training set is used. Those differences are generally ignored, and the beneficial information in them cannot be extracted, which may lower the predictive accuracy. To overcome this shortcoming, this paper proposes a novel classification rule discovery algorithm based on ACO, named AntMiner-mbc, in which a new model of multiple rule sets is presented to produce multiple lists of rules. Multiple base classifiers are built in AntMiner-mbc, and each base classifier is expected to remedy the weaknesses of the other base classifiers, which can improve the predictive accuracy by exploiting the useful information from the various base classifiers. A new heuristic function for ACO is also designed in our algorithm, which considers both correlation and coverage in order to avoid deceptively high accuracy. The performance of our algorithm is studied experimentally on 19 publicly available data sets and further compared with several state-of-the-art classification approaches. The experimental results show that the predictive accuracy obtained by our algorithm is statistically higher than that of the compared targets.
A novel multiple rule sets data classification algorithm based on ant colony algorithm
S1568494615006894
Large-scale global optimization (LSGO) is a very important but thorny task in the optimization domain, widely arising in management and engineering problems. In order to strengthen the effectiveness of meta-heuristic algorithms when handling LSGO problems, we propose a novel meta-heuristic algorithm inspired by the joint operations strategy of multiple military units, called the joint operations algorithm (JOA). The overall framework of the proposed algorithm involves three main operations: offensive, defensive, and regroup operations. In JOA, offensive and defensive operations are used to balance exploration ability and exploitation ability, and regroup operations are applied to alleviate the problem of premature convergence. To evaluate the performance of the proposed algorithm, we compare JOA with six excellent meta-heuristic algorithms on twenty LSGO benchmark functions from the IEEE CEC 2010 special session and four real-life problems. The experimental results show that JOA performs steadily and has the best overall performance among the seven compared algorithms.
Joint operations algorithm for large-scale global optimization
S1568494615006900
This paper introduces an improved accelerated particle swarm optimization algorithm (IAPSO) to solve constrained nonlinear optimization problems with various types of design variables. The main improvements over the original algorithm are the incorporation of the individual particles' memories, in order to increase swarm diversity, and the introduction of two selected functions to control the balance between exploration and exploitation during the search process. These modifications are used to update the particle positions of the swarm. The performance of the proposed algorithm is illustrated through six benchmark mechanical engineering design optimization problems. Comparison of the obtained computational results with those of several recent meta-heuristic algorithms shows the superiority of the IAPSO in terms of accuracy and convergence speed.
Improved accelerated PSO algorithm for mechanical engineering optimization problems
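A sketch of a position update with the two ingredients named above: individual memories plus two schedule functions that shift weight from exploration to exploitation over time. The particular schedules and coefficients are illustrative choices, not the paper's:

```python
import numpy as np

def iapso_step(x, pbest, gbest, t, t_max):
    """Accelerated-PSO-style position update with (i) individual memories
    (pbest) and (ii) two schedule functions trading exploration for
    exploitation as iterations t advance (a sketch under assumed
    schedules)."""
    c1 = 0.5 * (1 - t / t_max)                            # exploration, decays
    c2 = 0.5 + 0.5 * (t / t_max)                          # exploitation, grows
    noise = np.random.randn(*x.shape) * (1 - t / t_max)   # shrinking jitter
    return (x
            + c1 * np.random.rand() * (pbest - x)         # memory pull
            + c2 * np.random.rand() * (gbest - x)         # leader pull
            + 0.1 * noise)
```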
S1568494615006912
This study provides a timely and flexible location-aware service (LAS) to mobile users in a dynamic environment. Few previous studies have examined similar concepts. In the proposed methodology, the inaccuracy of user positioning and the uncertainty of manual service preparation are considered and modeled by using fuzzy numbers. These numbers are used as inputs to establish a fuzzy integer-nonlinear programming model applied in identifying the most suitable service location and path. A required service is prepared immediately before a user reaches the recommended service location. To manage simultaneous requests from multiple users, the concepts of fuzzy modeling, route planning, and parallel machine scheduling are combined. Thus, the proposed LAS can distribute multiple users among service locations, thereby enabling users to avoid unnecessary waiting, which is a major problem associated with existing LASs. To assess the effectiveness of the proposed methodology, two experiments were conducted in small areas in Taichung City and Taipei City, Taiwan. The experimental results revealed that the waiting times of users were substantially reduced, increasing the average satisfaction level. However, improving the accuracy of user positioning does not necessarily facilitate achieving a high average satisfaction level.
A fuzzy integer-nonlinear programming approach for creating a flexible just-in-time location-aware service in a mobile environment
S1568494615006948
In this paper, we present a clustering method called clustering by sorting influence power, which incorporates the concept of influence power as a measurement among points. In our method, clustering is performed in an efficient tree-growing fashion exploiting both the hypothetical influence powers of data points and the distances among them. Since influence powers among data points evolve over time, we adopt a PageRank-like algorithm to calculate them iteratively, avoiding the issue of improper initial exemplar preference. The experimental results show that our proposed method outperforms four well-known clustering methods across seven complex and non-isotropic datasets. Moreover, our simple clustering method can be easily applied to several practical clustering problems. We evaluate the effectiveness of our algorithm on two real-world datasets, i.e., an open dataset of the Alzheimer's disease protein–protein interaction network and a race walking recognition dataset that we collected, and we find that our method outperforms other methods reported in the literature.
An influence power-based clustering approach with PageRank-like model
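The PageRank-like iteration can be sketched directly: convert pairwise distances into column-stochastic closeness weights and iterate the damped update until the influence powers stabilize. The damping factor and the inverse-distance weighting are assumptions, not the paper's exact formulation:

```python
import numpy as np

def influence_powers(dist, d=0.85, n_iter=100):
    """Iteratively compute influence powers with a PageRank-like update:
    each point receives power from its neighbours, weighted by proximity."""
    n = len(dist)
    W = 1.0 / (dist + 1e-9)                   # closeness weights (avoid /0)
    np.fill_diagonal(W, 0.0)                  # no self-influence
    W = W / W.sum(axis=0, keepdims=True)      # column-stochastic
    p = np.full(n, 1.0 / n)                   # uniform initial powers
    for _ in range(n_iter):
        p = (1 - d) / n + d * W @ p           # damped PageRank-style step
    return p
```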
S156849461500695X
Minimum class variance support vector machine (MCVSVM) and the large margin linear projection (LMLP) classifier, in contrast with the traditional support vector machine (SVM), take the distribution information of the data into consideration and can obtain better performance. However, when the within-class scatter matrix is singular, both MCVSVM and LMLP exploit the discriminant information only in a single subspace of the within-class scatter matrix and discard the discriminant information in the other subspace. In this paper, a so-called twin-space support vector machine (TSSVM) algorithm is proposed to deal with high-dimensional data classification tasks where the within-class scatter matrix is singular. TSSVM is rooted in both the non-null space and the null space of the within-class scatter matrix and takes full advantage of the discriminant information in the two subspaces, and so can achieve better classification accuracy. In the paper, we first discuss the linear case of TSSVM and then develop the nonlinear TSSVM. Experimental results on real datasets validate the effectiveness of TSSVM and indicate its superior performance over MCVSVM and LMLP.
Enhanced algorithm for high-dimensional data classification
S1568494615006961
In this paper, we present a proposed control method that combines the outputs of several individual controllers to improve the global control of complex nonlinear plants. The proposed method consists of two levels: at the top level, a fuzzy system acts as a supervisory controller designed to adjust the behavior of the individual fuzzy controllers at the lower level. To test the approach, we consider the problem of flight control, because it requires several individual controllers. A comparison is also performed in which the hierarchical control strategy is compared with a simple control approach using Student's t-test, and we show that the proposed method outperforms the conventional fuzzy control approach. In the optimal design of the proposed control architecture, a genetic algorithm was also applied to tune the parameters of the fuzzy systems in an optimal fashion.
Hierarchical aggregation of multiple fuzzy controllers for global complex control problems
S1568494615006973
Nanoscale crossbar architectures have received steadily growing interest as a result of their great potential to be main building blocks in nanoelectronic circuits. However, due to the extremely small size of nanodevices and the bottom-up self-assembly nanofabrication process, considerable process variation is an inherent drawback of crossbar nanoarchitectures. In this paper, the variation-tolerant logical mapping problem is treated as a bilevel multiobjective optimization problem. Since variation mapping is NP-complete, a hybrid multiobjective evolutionary algorithm is designed to solve the problem within a bilevel optimization framework. The lower-level optimization problem, which is tackled most frequently, is modeled as the min–max-weight and min-weight-gap bipartite matching (MMBM) problem, and a Hungarian-based linear programming (HLP) method is proposed to solve MMBM in polynomial time. The upper-level optimization problem is solved by evolutionary multiobjective optimization algorithms, where a greedy reassignment local search operator, capable of exploiting domain knowledge and information from problem instances, is introduced to improve the efficiency of the algorithm. The numerical experiment results show the effectiveness and efficiency of the proposed techniques for the variation-tolerant logical mapping problem.
A hybrid evolutionary algorithm for multiobjective variation tolerant logic mapping on nanoscale crossbar architectures
S1568494615006985
Bio-inspired metaheuristic algorithms have been widely applied to estimating the extrinsic parameters of a photovoltaic (PV) model. These methods are capable of handling the nonlinearity of objective functions whose derivatives are often undefined. However, these algorithms normally employ multiple agents in the search process, so the solution process is extremely time-consuming: searching the possible solutions over the whole search domain takes a long time on sequential computing devices. To overcome this limitation, a parallel swarm algorithm (PSA) is proposed in this work with the aim of extracting and estimating the parameters of the PV cell model by utilizing the power of multicore central processing units (CPUs) and graphics processing units (GPUs). We implement this PSA on the OpenCL platform with execution on Nvidia multi-core GPUs. Simulation results demonstrate that the proposed method significantly increases the computational speed in comparison to the sequential algorithm, which means that, given a time requirement, the accuracy of a solution from the PSA can be improved over that of the sequential algorithm by using a larger swarm size. Nomenclature: photocurrent generated by the photovoltaic cell; saturation current of a diode; diode ideality constant; series resistance; parallel resistance.
Multicores and GPU utilization in parallel swarm algorithm for parameter estimation of photovoltaic cell model
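For context, the five nomenclature entries are exactly the parameters of the standard single-diode PV model, so a plausible fitness function that each particle would evaluate looks as follows. This is a sketch under that assumption; the paper's actual objective and OpenCL kernels are not reproduced here.

```python
import numpy as np

BOLTZ, Q = 1.380649e-23, 1.602176634e-19

def residuals(params, v_meas, i_meas, T=298.15):
    """Residual of the single-diode equation at measured (V, I) points.
    params = (i_ph, i_s, n, r_s, r_p): photocurrent, diode saturation
    current, diode ideality constant, series and parallel resistance."""
    i_ph, i_s, n, r_s, r_p = params
    vt = BOLTZ * T / Q                          # thermal voltage
    vd = v_meas + i_meas * r_s                  # diode junction voltage
    i_model = i_ph - i_s * (np.exp(vd / (n * vt)) - 1.0) - vd / r_p
    return i_model - i_meas

def fitness(params, v_meas, i_meas):
    """RMSE objective a (parallel) swarm would minimize, one call per particle."""
    return np.sqrt(np.mean(residuals(params, v_meas, i_meas) ** 2))
```

Since each particle's fitness is independent, the evaluation maps naturally onto one GPU work-item per particle, which is the parallelization the abstract describes.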
S1568494615006997
In this paper, a new approach called ‘instance variant nearest neighbor’ approximates the regression surface of a function using the concept of k nearest neighbors. Instead of a fixed k for the entire dataset, our assumption is that there is an optimal number of neighbors for each data instance that best approximates the original function by fitting the local regions. This approach can be beneficial for noisy datasets where local regions form data characteristics that differ from the major data clusters. We formulate the problem of finding such k neighbors for each data instance as a combinatorial optimization problem, which is solved by particle swarm optimization. The particle swarm optimization is extended with a rounding scheme that rounds continuous-valued candidate solutions up or down to integers, i.e., the number of k neighbors. We apply our new approach to five real-world regression datasets and compare its prediction performance with other function approximation algorithms, including the standard k nearest neighbor, the multi-layer perceptron, and support vector regression. We observed that the instance variant nearest neighbor outperforms these algorithms on several datasets. In addition, our new approach provides consistent outputs across the five datasets, whereas the other algorithms perform poorly on some of them.
Instance variant nearest neighbor using particle swarm optimization for function approximation
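A sketch of the two ingredients the abstract names: a per-instance-k prediction and a rounding scheme that maps continuous PSO positions to integer neighbor counts. The stochastic rounding rule and leave-one-out fitness shown here are plausible choices under the stated idea, not the paper's confirmed ones.

```python
import numpy as np

def predict_variable_k(X_train, y_train, x, k):
    """k-NN regression where k may differ per query instance."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

def loo_error(k_vec, X, y):
    """Leave-one-out error of one candidate solution: k_vec[i] is the
    number of neighbors assigned to training instance i."""
    err = 0.0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        err += (predict_variable_k(X[mask], y[mask], X[i], int(k_vec[i])) - y[i]) ** 2
    return err / len(X)

def round_position(pos, k_min=1, k_max=30, rng=np.random):
    """Stochastic rounding of a continuous PSO position to integer k values:
    round up with probability equal to the fractional part."""
    low = np.floor(pos)
    ks = low + (rng.random(pos.shape) < (pos - low))
    return np.clip(ks, k_min, k_max).astype(int)
```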
S1568494615007000
This paper presents a hybrid efficient genetic algorithm (EGA) for the stochastic competitive Hopfield (SCH) neural network, named SCH–EGA. This approach aims to tackle the frequency assignment problem (FAP). The objective of the FAP in satellite communication systems is to minimize the co-channel interference between satellite communication systems by rearranging the frequency assignment so that increasing demands can be accommodated. Our hybrid algorithm combines a stochastic competitive Hopfield neural network (SCHNN), which manages the problem constraints, with a genetic algorithm, which searches for high-quality solutions with the minimum possible cost. The hybrid algorithm, reflecting a particular style of algorithm hybridization, has good adaptability: it can deal not only with the FAP but also with other problems, including clustering, classification, and the maximum clique problem. In this paper, we first propose five strategies for building an efficient genetic algorithm. We then explore three hybridizations between SCHNN and EGA to discover the best hybrid algorithm. We believe that the comparison can also be helpful for hybridizations between neural networks and other evolutionary algorithms such as the particle swarm optimization algorithm and the artificial bee colony algorithm. In the experiments, our hybrid algorithm obtains performance better than or comparable to that of other algorithms on 5 benchmark problems and 12 randomly generated large problems. Finally, we show that our hybrid algorithm can obtain good results with a small population size.
A hybrid approach based on stochastic competitive Hopfield neural network and efficient genetic algorithm for frequency assignment problem
S1568494615007012
This paper presents an adaptive group search optimization (AGSO) algorithm for solving the optimal power flow (OPF) problem. In this study, different aspects of the OPF problem are considered to form an accurate multi-objective model. The total system operation cost, the total emission, and the N-1 security index are the first, second, and third objectives, respectively. Additionally, to model the problem accurately, transmission losses and various equality and inequality constraints, such as the feasible operating ranges of generators (FOR) and the power flow equations, are taken into account. Moreover, this study presents an adaptive form of the conventional GSO to improve its convergence characteristics. The effectiveness and accuracy of the proposed method for solving nonlinear and nonconvex problems are validated by carrying out comprehensive simulation studies on sample benchmark test cases and the 30-bus and 57-bus IEEE standard test systems.
Adaptive group search optimization algorithm for multi-objective optimal power flow problem
S1568494615007024
Generalized Nash equilibrium problems address extensions of the well-known standard Nash equilibrium concept, making it possible to model and study more general settings. The main difference lies in that they allow both the objective functions and the constraints of each player to depend on the strategies of the other players. The study of such problems has numerous applications in many fields, including engineering, economics, and management science. In this work we introduce a solution algorithm based on the Fuzzy Adaptive Simulated Annealing global optimization method (Fuzzy ASA, for short), demonstrating that it is possible to transform the original task into a constrained global optimization problem, which can then be solved, in principle, by any effective global optimization algorithm; in this paper our main tool is the cited paradigm (Fuzzy ASA). We believe that the main merit of the proposed approach is to offer a simpler alternative for solving this important class of problems, one that is less restrictive in the sense of not demanding very strong conditions on the defining functions. Several case studies are presented to exemplify the proposal's efficacy.
Solving generalized Nash equilibrium problems through stochastic global optimization
S1568494615007036
Rough set reduction has been used as an important preprocessing tool for pattern recognition, machine learning and data mining. As classical Pawlak rough sets can only be used to evaluate categorical features, a neighborhood rough set model is introduced to deal with numerical data sets. Three-way decision theory, proposed by Yao, derives from Pawlak rough sets and probabilistic rough sets and trades off different types of classification error in order to obtain a minimum-cost ternary classifier. In this paper, we discuss reduction questions based on three-way decisions and neighborhood rough sets. First, the three-way decision reducts of positive region preservation, boundary region preservation and negative region preservation are introduced into the neighborhood rough set model. Second, three condition entropy measures are constructed based on the three-way decision regions by considering variants of neighborhood classes. The monotonic principles of the entropy measures are proved, from which we obtain heuristic reduction algorithms for neighborhood systems. Finally, the experimental results show that the three-way decision reduction approaches are effective feature selection techniques for numerical data sets.
Three-way decision reduction in neighborhood systems
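The region definitions transfer directly to code. Below is a minimal numpy sketch of the three-way regions of one decision class in a neighborhood system, assuming Euclidean neighborhoods of radius delta; this is a simplification for illustration, not the paper's full construction.

```python
import numpy as np

def neighborhood(X, i, delta):
    """Indices of samples within distance delta of sample i."""
    return np.where(np.linalg.norm(X - X[i], axis=1) <= delta)[0]

def three_way_regions(X, y, c, delta):
    """POS/BND/NEG regions of decision class c in a neighborhood system:
    POS if the whole neighborhood lies in c, NEG if none of it does,
    BND otherwise."""
    pos, bnd, neg = [], [], []
    for i in range(len(X)):
        nb_labels = y[neighborhood(X, i, delta)]
        if np.all(nb_labels == c):
            pos.append(i)
        elif np.all(nb_labels != c):
            neg.append(i)
        else:
            bnd.append(i)
    return pos, bnd, neg
```

A region-preserving reduct then keeps only those features whose removal leaves the chosen region (or the corresponding condition entropy) unchanged.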
S1568494615007048
E-commerce customers demand quick and easy access to products in large search spaces according to their needs and preferences. To support and facilitate this process, recommender systems (RSs) based on user preferences have recently played a key role. However, the elicitation of customers' preferences is not always precise or correct, because of external factors such as human errors and the uncertainty and vagueness inherent to human beings. This problem in RSs is known as natural noise and can bias customers' recommendations. Although different proposals have been presented to deal with natural noise in RSs, none of them is able to manage properly the inherent uncertainty and vagueness of customers' preferences. Hence, this paper is devoted to a new fuzzy method for managing, in a flexible and adaptable way, the uncertainty of natural noise in order to improve recommendation accuracy. Finally, a case study is performed to show the improvements produced by this fuzzy method with respect to previous proposals.
A fuzzy model for managing natural noise in recommender systems
S156849461500705X
Detecting discontinuities in electrical signals from recorded oscillograms makes it possible to segment them. This is the first step in implementing automated methods to ensure that disturbances in electrical power systems are detected, classified and stored. In this context, this paper presents a way of determining an adaptive threshold based on the decomposition of electrical signals through the Discrete Wavelet Transform (DWT) using Daubechies-family filter banks, allowing for the segmentation of signals and, as a consequence, the analysis of disturbances related to Power Quality (PQ). The proposed approach was initially evaluated on signals originating from mathematical models representing short-term voltage fluctuations, transients (impulsive and oscillatory) and harmonic distortions. In the synthetic signal database, both single disturbances and combined occurrences of more than one disturbance were considered. By applying the DWT, the amount of energy and the entropy of energy were calculated for the leaves of the second level of decomposition. Based on these calculations, a unique adaptive threshold could be determined for each analyzed signal. Afterwards, the number of intersections between the threshold and the curve of details obtained at the second level of decomposition was determined; these intersections define the beginning and end of the segments. In order to validate the approach, the performance of the proposed methodology was analyzed on signals obtained from oscillograms provided by the IEEE 1159.3 Task Force, as well as real oscillograms obtained from a regional distribution utility. These analyses showed that the proposed approach is efficient and applicable to the automatic segmentation of events related to PQ.
Adaptive threshold based on wavelet transform applied to the segmentation of single and combined power quality disturbances
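An illustrative PyWavelets-based sketch of the pipeline: two-level Daubechies decomposition, energy and entropy of the second-level details, an adaptive threshold, and threshold crossings as segment boundaries. The specific threshold formula and the coefficient-to-sample index mapping are stand-ins, since the abstract does not give the exact rule.

```python
import numpy as np
import pywt

def segment_disturbance(signal, wavelet='db4'):
    """Segment a power signal by thresholding second-level DWT details."""
    cA2, cD2, cD1 = pywt.wavedec(signal, wavelet, level=2)
    d = np.abs(cD2)
    energy = d ** 2
    p = energy / energy.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))     # entropy of energy
    thr = d.mean() + (entropy / len(d)) * d.std() # illustrative adaptive rule
    above = d > thr
    crossings = np.where(np.diff(above.astype(int)) != 0)[0]
    # approximate mapping of level-2 coefficient indices to sample indices
    return crossings * 4
```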
S1568494615007061
There are a significant number of high fall-risk individuals who are susceptible to falling and sustaining severe injuries. An automatic fall detection and diagnostic system is critical for ensuring a quick response with effective medical aid, based on the relevant information provided by the fall detection system. This article presents and evaluates an accelerometer-based multiple-classifier fall detection and diagnostic system implemented on a single wearable Shimmer device for remote health monitoring. Various classifiers have been utilised within the literature, but there is very little current work on combining classifiers to improve fall detection and diagnostic performance within accelerometer-based devices. The presented fall detection system utilises multiple classifiers with differing properties to significantly improve fall detection and diagnostic performance over any single classifier or majority voting system. Additionally, the presented multiple-classifier system utilises comparator functions to ensure fall event consistency, with inconsistent events outsourced to a supervisor classification function; discrimination power is also considered, and events with high discrimination power are evaluated to further improve the system response. The system demonstrated significant performance advantages in comparison to other classification methods, obtaining over 99% for the fall detection recall, precision, accuracy and F-value responses.
Multiple comparator classifier framework for accelerometer-based fall detection and diagnostic
S1568494615007073
This paper presents a constructive solid geometry based representation scheme for structural topology optimization. The proposed scheme encodes the topology using the positions of a few joints and the widths of the segments connecting them. The union of overlapping rectangular primitives is calculated using the constructive solid geometry technique to obtain the topology. A valid topology in the design domain is ensured by representing the topology as a connected simple graph of nodes, and a graph repair operator is applied to ensure a physically meaningful connected structure. The algorithm is integrated with single and multi-objective genetic algorithms, and its performance is compared with those of other methods such as SIMP. The multi-objective analysis provides the trade-off front between compliance and material availability, unveiling common design principles among the optimized solutions. The proposed method is generic and can be easily extended to any two- or three-dimensional topology optimization problem by using different shape primitives.
Structural topology optimization using multi-objective genetic algorithm with constructive solid geometry representation
S1568494615007085
This paper develops a simulated annealing heuristic based exact solution approach to solve the green vehicle routing problem (G-VRP), which extends the classical vehicle routing problem by considering a limited driving range of vehicles in conjunction with limited refueling infrastructure. The problem particularly arises for companies and agencies that employ a fleet of alternative energy powered vehicles in transportation systems for urban areas or for goods distribution. The exact algorithm is based on the branch-and-cut algorithm, which combines several valid inequalities derived from the literature to improve lower bounds, and introduces a heuristic algorithm based on simulated annealing to obtain upper bounds. The solution approach is evaluated in terms of the number of test instances solved to optimality, bound quality, and computation time to reach the best solution of the various test problems. Computational results show that 22 of 40 instances with 20 customers can be solved optimally within reasonable computation time.
The green vehicle routing problem: A heuristic based exact solution approach
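A minimal skeleton of the simulated-annealing upper-bound heuristic, assuming a permutation encoding with 2-opt moves and a cost function that returns infinity for routes violating the driving-range or refueling constraints; the paper's actual neighborhood structure and cooling parameters are not given in the abstract.

```python
import math
import random

def simulated_annealing(route, cost, neighbor, t0=100.0, alpha=0.98, iters=5000):
    """Generic SA skeleton producing a feasible incumbent (upper bound)
    for the branch-and-cut; infeasible candidates get cost = +inf."""
    best = cur = route
    t = t0
    for _ in range(iters):
        cand = neighbor(cur)
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        t *= alpha                  # geometric cooling schedule
    return best

def two_opt_neighbor(route):
    """Random 2-opt move: reverse one segment of the customer sequence."""
    i, j = sorted(random.sample(range(len(route)), 2))
    return route[:i] + route[i:j + 1][::-1] + route[j + 1:]
```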
S1568494615007097
Water pollution by organic materials or metals is one of the problems that threaten humanity, both nowadays and over the next decades. Morphological changes in the liver and gills of Nile Tilapia (Oreochromis niloticus) can represent adaptation strategies to maintain some physiological functions, or can be used to assess acute and chronic exposure to chemicals found in water and sediments. This paper presents an automatic system for assessing water quality in Sharkia Governorate, Egypt, based on microscopic images of fish gills and liver. The proposed system uses fish gills and liver as a hybrid biomarker in order to detect water pollution. It utilizes case-based reasoning (CBR) to indicate the degree of water quality based on the different histopathological changes in microscopic images of fish gills and liver. Various performance evaluation metrics, namely retrieval accuracy, receiver operating characteristic (ROC) curves, F-measure, and G-mean, have been used in order to objectively indicate the true performance of the system given the unbalanced data. Experimental results showed that the proposed hybrid-biomarker CBR-based system achieved a water quality prediction accuracy of 97.9% using the cosine distance similarity measure. It also outperformed both SVM and LDA classifiers on the tested microscopic image dataset.
Hybrid-biomarker case-based reasoning system for water pollution assessment in Abou Hammad Sharkia, Egypt
S1568494615007103
Boolean functions represent an important primitive in the design of various cryptographic algorithms. There exist several well-known schemes where a Boolean function is used to add nonlinearity to the cipher. Thus, methods to generate Boolean functions that possess good cryptographic properties present an important research goal. Among other techniques, evolutionary computation has proved to be a well-suited approach to this problem. In this paper, we present three different objective functions, each of which inspects important cryptographic properties of Boolean functions, and examine four evolutionary algorithms. Our research confirms previous results, but also offers new insights into the effectiveness and comparison of different evolutionary algorithms for this problem.
Cryptographic Boolean functions: One output, many design criteria
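One of the properties such objective functions typically inspect is nonlinearity, which is computable exactly from the Walsh–Hadamard spectrum. The sketch below uses the standard textbook formulas; it illustrates the kind of fitness evaluation involved, not the paper's specific objectives.

```python
import numpy as np

def walsh_hadamard(tt):
    """In-place butterfly Walsh-Hadamard transform of a +/-1 truth table."""
    w = tt.astype(np.int64)
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            a, b = w[i:i + h].copy(), w[i + h:i + 2 * h].copy()
            w[i:i + h], w[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return w

def nonlinearity(truth_table):
    """NL(f) = 2^(n-1) - max|W_f|/2, a standard fitness for this search.
    `truth_table` is a 0/1 numpy array of length 2^n."""
    n = int(np.log2(len(truth_table)))
    signed = 1 - 2 * truth_table            # map 0/1 to +1/-1
    return 2 ** (n - 1) - np.max(np.abs(walsh_hadamard(signed))) // 2
```

For example, `nonlinearity(np.array([0, 1, 0, 1, 1, 0, 0, 1]))` scores a 3-variable function; an evolutionary algorithm would mutate truth tables and select for higher scores.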
S1568494615007115
This paper models a three-echelon supply chain distribution problem considering multiple time periods, multiple products, and uncertain demands. To take the problem closer to reality, we consider multiple truck types and focus on the truck selection and loading sub-problem. Truck selection is important because the quantity of goods to be transported varies regularly and because different trucks have different hiring costs, mileage, and speed. Truck loading is important when considering the optimal loading pattern of products of different shapes and sizes on trucks, which themselves have distinct loading capacities. The two objectives considered here are the cost and the responsiveness of the supply chain. The distribution problem is solved using the non-dominated sorting genetic algorithm (NSGA-II). However, genetic algorithms compromise the optimality of the sub-problems while optimizing the entire system, whereas the optimality of the truck selection and loading sub-problem is non-negotiable in nature. Hence, a heuristic algorithm is used innovatively along with NSGA-II to produce much better solutions. To make the model more realistic, the distribution chain is modeled as a push–pull based supply chain with multiple time periods and demand aggregation over time. Using a separate algorithm also gives the advantage of exploiting the difference in nature between the push and pull parts of the supply chain by giving every individual truck different objectives. Realistic data are generated, and the optimality gap between the heuristic and non-heuristic approaches is calculated. A clear improvement in the objectives can be seen when using the heuristic approach.
Bi-objective optimization of three echelon supply chain involving truck selection and loading using NSGA-II with heuristics algorithm
S1568494615007139
Manufacturing processes can be well characterized by both quantitative and qualitative measurements of their performance. In the case of conflicting performance measures, it is necessary to obtain the best possible values of all performances simultaneously, such as a higher material removal rate (MRR) with a lower average surface roughness (ASR) in the electric discharge machining (EDM) process. EDM itself is a stochastic process, and prediction of its responses – MRR and ASR – is still difficult. An advanced structural risk minimization based learning system, the support vector machine (SVM), is therefore applied to capture the random variations in the EDM responses in a robust way. The internal parameters of the SVM – C, ε and σ – are tuned by a modified teaching–learning-based optimization (TLBO) procedure. Subsequently, using the developed SVM model as a virtual data generator of the EDM process, responses are generated at different points in the experimental space and power law models are fitted to the estimated data. By varying the weight factors, different weighted combinations of the inverse of MRR and the ASR are minimized by the modified TLBO. The pseudo-Pareto front passing through the optimum results thus obtained provides a guideline for selecting the optimum achievable value of ASR for a specific demand on MRR. Further, an inverse solution procedure is elaborated to find the near-optimum setting of process parameters on the EDM machine that obtains a specific need-based MRR–ASR combination. Nomenclature: average surface roughness (μm); bias; regularization parameter; limits of constriction factor; limits of cognitive acceleration coefficient; current setting (A); training input space dimension; target function; component of best position of swarm along the dth dimension; maximum number of iterations; kernel function; mean absolute training error; material removal rate (mm³/min); number of learners in class, number of particles in swarm; number of training data; component of best position of the ith particle along the dth dimension; a pseudorandom number generated following the standard uniform distribution within range (0,1); random weight factors; limits of social acceleration coefficient; pulse off time (μs); pulse on time (μs); teaching factor; velocity component of the kth particle along the dth dimension in the iterth iteration; weight vector; training input vector; component of velocity-corrected position of the kth particle along the dth dimension in the iterth iteration; training output vector; mean of training output set; number of attributes; Lagrange multipliers; radius of the loss-insensitive hyper-tube; standard deviation of the radial basis function (kernel function); feature space; limits of inertia factor.
Application of teaching learning based optimization procedure for the development of SVM learned EDM process and its pseudo Pareto optimization
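A sketch of the canonical TLBO teacher phase that would tune the SVM hyperparameters (C, ε, σ) as a 3-dimensional learner vector. The paper uses a modified TLBO whose modifications are not detailed in the abstract, so treat this as the baseline form under that assumption.

```python
import numpy as np

def tlbo_teacher_phase(pop, fitness, bounds, rng=np.random):
    """One TLBO teacher phase over a class of learners (rows of `pop`).
    Each learner moves toward the teacher (current best) relative to the
    class mean; the move is kept only if it improves fitness."""
    scores = np.array([fitness(x) for x in pop])
    teacher = pop[np.argmin(scores)]
    mean = pop.mean(axis=0)
    tf = rng.randint(1, 3)                   # teaching factor in {1, 2}
    for i in range(len(pop)):
        step = rng.random(pop.shape[1]) * (teacher - tf * mean)
        trial = np.clip(pop[i] + step, bounds[0], bounds[1])
        if fitness(trial) < scores[i]:
            pop[i] = trial
    return pop
```

Here `fitness` would train an SVM with the candidate (C, ε, σ) and return a validation error such as the mean absolute training error named in the nomenclature.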
S1568494615007140
Application of the sustainability concept to environmental projects implies that at least three feature categories (i.e., economic, social, and environmental) must be taken into account by applying a participative multi-criterion analysis (MCA). However, MCA results depend crucially on the methodology applied to estimate the relative criterion weights. By using a logically consistent set of data and methods (i.e., linear regression [LR], factor analysis [FA], the revised Simos procedure [RSP], and the analytical hierarchy process [AHP]), the present study revealed that mistakes from using one weight-estimation method rather than an alternative are non-significant in terms of satisfaction of specified acceptable standards (i.e., a risk of up to 1% of erroneously rejecting an option), but significant for comparisons between options (i.e., a risk of up to 11% of choosing a worse option by rejecting a better option). In particular, the risks of these mistakes are larger if both differences in statistical or computational algorithms and in data sets are involved (e.g., LR vs. AHP). In addition, the present study revealed that the choice of weight-estimation methods should depend on the estimated and normalised score differences for the economic, social, and environmental features. However, on average, some pairs of weight-estimation methods are more similar (e.g., AHP vs. RSP and LR vs. AHP are the most and the least similar, respectively), and some single weight-estimation methods are more reliable (i.e., FA>RSP>AHP>LR).
Choosing among weight-estimation methods for multi-criterion analysis: A case study for the design of multi-purpose offshore platforms
S1568494615007152
This paper presents an evolutionary based method to obtain the unstressed lattice spacing, d0, required to calculate the residual stress profile across a weld of an age-hardenable aluminum alloy, AA2024. Due to the age-hardening nature of this alloy, the d0 value depends on the heat treatment. In the case of welds, the heat treatment imposed by the welding operation differs significantly depending on the distance to the center of the joint. This implies that a variation of d0 across the weld is expected, a circumstance that limits the ability of conventional analytical methods to determine the required d0 profile. The interest of the paper is therefore two-fold: first, to demonstrate that the application of an evolutionary algorithm solves a problem not addressed in the literature, namely the determination of the data required to calculate the residual stress state across a weld; second, to show the robustness of the approximation used, which allows solutions to be obtained for different constraints of the problem. Our results confirm the capacity of evolutionary computation to reach realistic solutions under three different scenarios for the initial conditions and the available experimental data.
Using evolutionary algorithms to determine the residual stress profile across welds of age-hardenable aluminum alloys
S1568494615007164
In this paper, we consider a recently proposed model for portfolio selection, called the Mean-Downside Risk-Skewness (MDRS) model. This modelling approach takes into account both the multidimensional nature of the portfolio selection problem and the requirements imposed by the investor. Concretely, it optimizes the expected return, the downside-risk and the skewness of a given portfolio, taking into account budget, bound and cardinality constraints. The quantification of the uncertain future return on a given portfolio is approximated by means of LR-fuzzy numbers, while the moments of its return are evaluated using possibility theory. The main purpose of this paper is to solve the MDRS portfolio selection model as a whole constrained three-objective optimization problem, which has not been done before, in order to analyse the efficient portfolios that optimize the three criteria simultaneously. To this aim, we propose new mutation, crossover and reparation operators for evolutionary multi-objective optimization, specially designed for generating feasible solutions of the cardinality-constrained MDRS problem. We incorporate the suggested operators into the evolutionary algorithms NSGA-II, MOEA/D and GWASF-GA and analyse their performances on a data set from the Spanish stock market. The potential of our operators is shown in comparison to other commonly used genetic operators, and some conclusions are highlighted from the analysis of the trade-offs among the three criteria.
Evolutionary multi-objective optimization algorithms for fuzzy portfolio selection
S156849461500719X
In this research, flexible flow shop scheduling with unrelated parallel machines at each stage is considered. The number of machines varies from stage to stage, and each machine can process only specific operations; in other words, machines have eligibility constraints and parts have different release times. In addition, a blocking restriction is considered in the problem. Parts must pass through each stage and be processed on only one machine at each stage. In the proposed problem, the transportation, loading and unloading of parts are done by robots, and the objective is to find an optimal sequence of part processing and robot movements that minimizes the makespan, together with a number of robots as close as possible to the optimum. The main contribution of this study is a mixed integer linear programming model for the problem which considers release times for parts in the scheduling area as well as loading and unloading times of parts transferred by robots. New methodologies are investigated for solving the proposed model: an Ant Colony Optimization (ACO) algorithm with double pheromone and a genetic algorithm (GA) are proposed. Finally, the two meta-heuristic algorithms are compared with each other; computational results show that the GA performs better than ACO, and near-optimal numbers of robots are determined.
Two meta-heuristic algorithms for flexible flow shop scheduling problem with robotic transportation and release time
S1568494615007206
To date, a look at the scientific literature on the construction and use of synchronous computer-mediated communication (CMC) support environments reveals that most researchers have focused either on exchanging information or on constructing and presenting posts. In this work, an intelligent collaborative synchronous CMC platform that detects whether learners address the expected discussion issues is proposed. The concept maps related to the learning topics are first outlined by the instructor. After each learner issues a post on the synchronous CMC platform, a feature selection approach is adopted to derive the input parameters of a one-class support vector machine (SVM) classifier. The classifier then determines whether the learners' posts are related to the concept maps previously outlined by the instructor. Meanwhile, learner peers from the same group are asked to provide comments on the synchronous CMCs, and a group grading module is established in this work to evaluate the quality of the synchronous CMCs. If the evaluation results from the classifier and the group grading module are inconsistent, the instructor or the teaching assistant is consulted to verify the evaluation results. Notably, a feedback rule construction mechanism is used to issue feedback messages to learners in cases where the synchronous CMC support system detects that the learners have strayed from the expected learning topics in their posts. The classification rate of the one-class SVM classifier can reach up to 97.06%, and the average pre-test and post-test grades were 51.94 and 66.77, respectively, which reveals that the junior high school students participating in synchronous CMC activities related to natural science benefited from the proposed intelligent synchronous CMC platform.
Supporting the development of synchronous text-based computer-mediated communication with an intelligent diagnosis tool
S1568494615007218
To cope with drastically increasing user demands on spectrum resources, spectrum sharing is a strategy that greatly alleviates scarcity. However, without a proper spectrum sharing scheme, the use of concurrently shared channels between primary (or licensed) users and secondary (or unlicensed) users within one cell can lead to harmful interference, lowering the spectral efficiency and throughput of the system. In this paper, we address the problem of spectrum assignment (SA) in an underlay spectrum sharing network. An SA algorithm is considered as the mechanism by which secondary users exploit primary channels while maintaining the interference at acceptable levels, ensuring that the primary system performance is not compromised. We model this scenario as a multi-objective problem (MOP) and propose the application of the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) to face the throughput and spectral efficiency tradeoff. The simulation results demonstrate, through the Pareto optimal set, that our approach maintains the quality of service (QoS) of the primary and secondary networks and maximizes the throughput of the system at the cost of spectral efficiency. The experiments and results are compared with the Weighted Sum Rate (WSR) method and the Parallel Cell Coordinate System Adaptive Multiobjective Particle Swarm Optimization (pccsAMOPSO) for different cases of the SA problem.
Application of NSGA-II algorithm to the spectrum assignment problem in spectrum sharing networks
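For intuition, here is NSGA-II's core ranking routine; for this problem the objective tuples would be the negated throughput and spectral efficiency (negated because the sort assumes minimization). This is the textbook fast non-dominated sort, not the paper's code.

```python
def fast_nondominated_sort(objs):
    """NSGA-II fast non-dominated sort; objs[i] is a tuple of objective
    values (all to be minimized) for candidate assignment i."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]
    dom_count = [0] * n
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if all(a <= b for a, b in zip(objs[p], objs[q])) and objs[p] != objs[q]:
                dominated_by[p].append(q)        # p dominates q
            elif all(a <= b for a, b in zip(objs[q], objs[p])) and objs[p] != objs[q]:
                dom_count[p] += 1                # q dominates p
        if dom_count[p] == 0:
            fronts[0].append(p)                  # p is on the Pareto front
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]
```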
S156849461500722X
Fabrication of three-dimensional structures has gained increasing importance in the bone tissue engineering (BTE) field. Mechanical properties and permeability are two important requirements for BTE scaffolds. The mechanical properties of the scaffolds are highly dependent on the processing parameters. Layer thickness, the delay time between spreading each powder layer, and printing orientation are the major factors that determine the porosity and compressive strength of a 3D-printed scaffold. In this study, an aggregated artificial neural network (AANN) was used to investigate the simultaneous effects of layer thickness, delay time between spreading each layer, and print orientation of porous structures on the compressive strength and porosity of scaffolds. Two optimization methods were applied to obtain the optimal 3D printing parameter settings for printing tiny porous structures as a real BTE problem. First, a particle swarm optimization algorithm was implemented to obtain the optimum topology of the AANN. Then, Pareto front optimization was used to determine the optimal setting parameters for the fabrication of scaffolds with the required compressive strength and porosity. The results indicate the acceptable potential of evolutionary strategies for controlling and optimizing the 3DP process as a complicated engineering problem.
Optimal design of a 3D-printed scaffold using intelligent evolutionary algorithms
S1568494615007231
Web service composition combines available services to provide new functionality. The various available services have different quality-of-service (QoS) attributes. Building a QoS-optimal web service composition is a multi-criteria NP-hard problem. Most of the existing approaches reduce this problem to a single-criterion problem by aggregating the different criteria into a unique global score (scalarization). However, scalarization has some significant drawbacks: the end user is supposed to have complete a priori knowledge of his or her preferences/constraints about the desired solutions, and there is no guarantee that the aggregated results match them. Moreover, non-convex parts of the Pareto set cannot be reached by optimizing a convex weighted sum. An alternative is to use Pareto-based approaches, which enable a more accurate selection of the end-user solution. However, so far only few solutions based on these approaches have been proposed, and no comparative study has been published to date. This motivated us to perform an analysis of several state-of-the-art multi-objective evolutionary algorithms. Multiple scenarios with different complexities are considered, and performance metrics are used to compare the evolutionary algorithms. Results indicate that the GDE3 algorithm yields the best performance on this problem, also with the lowest time complexity.
Comparative analysis of multi-objective evolutionary algorithms for QoS-aware web service composition
S1568494615007279
When the increasing energy needs of developing countries cannot be met by conventional energy sources, alternative energy sources are considered as substitutes. Nuclear energy, which can meet a large proportion of a country's energy demands, is one of the effective alternative energy types. In this context, after deciding on the use of nuclear energy, the selection of the most suitable location for the production of nuclear power is an important decision-making problem. In this paper, we develop a facility location selection model for Turkey for meeting its energy needs with the new and as yet unused source of nuclear energy. To this end, a combined fuzzy multi-criteria decision making (MCDM) methodology, consisting of the interval type-2 fuzzy analytic hierarchy process (AHP), applied to determine the weights of the criteria, and interval type-2 fuzzy TOPSIS, applied to rank the alternatives, is used to determine the best location alternative for the nuclear power plant. The obtained results are analyzed with respect to the criteria used in the evaluation process and are compared with the existing nuclear power plant location selection policy for Turkey, and some suggestions are made for plants whose locations have not yet been decided. Furthermore, a sensitivity analysis is conducted to analyze the effects of changes in the decision parameters.
A combined fuzzy approach to determine the best region for a nuclear power plant in Turkey
S1568494615007310
A novel global PID control scheme for nonlinear MIMO systems is proposed and implemented for a robot as a case study; this scheme is called AWFPID after its adaptive wavelet fuzzy PID control structure. Basically, it identifies the inverse error dynamics using a radial basis neural network with daughter RASP1 wavelet activation functions; its output is cascaded with an infinite impulse response (IIR) filter to prune irrelevant signals and nodes as well as to recover a canonical form. Then, online adaptive fuzzy tuning of a discrete PID regulator is proposed, whose closed loop guarantees global regulation for nonlinear dynamical plants. The wavelet network includes a fuzzy inference system for online tuning of the learning rates. A real-time experimental study on a three-degrees-of-freedom haptic interface, the PHANToM Premium 1.0A, highlights the regulation with smooth control effort, without using the mathematical model of the robot.
Wavenet fuzzy PID controller for nonlinear MIMO systems: Experimental validation on a high-end haptic robotic interface
S1568494615007322
In this paper, a fuzzy logic controller (FLC) based variable structure control (VSC) with guaranteed stability for multivariable systems is presented. It is aimed at obtaining improved performance for nonlinear multivariable systems. The main contribution of this work is, first, the development of a generic matrix formulation of the FLC-VSC algorithm for nonlinear multivariable systems, with special attention to non-zero final states, and second, ensuring the global stability of the controlled system. The multivariable nonlinear system is represented by a T-S fuzzy model. The identification of the T-S model parameters has been improved using the well-known weighting parameters approach to optimize the local and global approximation and modeling capability of the T-S fuzzy model. The main problem encountered is that the T-S identification method cannot be applied when the membership functions (MFs) are overlapped by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used in control applications. In order to overcome the chattering problem, a switching function is added as an additional fuzzy variable and is introduced in the premise part of the fuzzy rules together with the state variables. A two-link robot system and a mixing thermal system are chosen to evaluate the robustness, effectiveness, accuracy and remarkable performance of the proposed FLC-VSC method.
Chattering-free fuzzy variable structure control for multivariable nonlinear systems
S1568494615007334
Nature-based algorithms have become popular in the past fifteen years and have been widely applied in various fields of science and engineering, such as robot control, cluster analysis, controller design, dynamic optimization and image processing. In this paper, a new swarm intelligence algorithm named the cognitive behavior optimization algorithm (COA) is introduced for solving real-valued numerical optimization problems. COA has a detailed cognitive behavior model. In the model of COA, the common phenomenon of a population foraging for food sources is summarized as a process of exploration–communication–adjustment. Matching this process, three main behaviors and two groups are introduced in COA. First, the cognitive population uses Gaussian and Levy flight random walk methods to explore the search space in the rough search behavior. Second, improved crossover and mutation operators are used in the information exchange and sharing behavior between the two groups: the cognitive population and the memory population. Finally, the intelligent adjustment behavior is used to enhance the exploitation of the cognitive population. To verify the performance of our approach, both classic and modern complex benchmark functions are employed as unconstrained functions. Meanwhile, some well-known engineering design optimization problems from the literature are used as constrained functions. The experimental results, considering both convergence and accuracy simultaneously, demonstrate the effectiveness of COA for global numerical and engineering optimization problems.
Cognitive behavior optimization algorithm for solving optimization problems
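Of the three behaviors, the rough-search step is the easiest to make concrete: below is a standard Mantegna-style Levy-flight step generator (the Gaussian walk is simply a normal draw). The step-scaling rule in the closing comment is illustrative; the abstract does not give COA's exact update equation.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=np.random):
    """One Levy-flight step via Mantegna's algorithm: heavy-tailed moves
    that mix many small steps with occasional long jumps."""
    num = gamma(1 + beta) * sin(pi * beta / 2)
    den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# Illustrative rough-search move of one cognitive individual:
#   x_new = x + 0.01 * levy_step(len(x)) * (x - x_best)
```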
S1568494615007346
Supervised learning has attracted much attention in recent years. As a consequence, many state-of-the-art algorithms are domain dependent, as they require a labeled training corpus to learn the domain features. This requires the availability of labeled corpora, which is a cumbersome requirement in itself. For text sentiment detection, however, SentiWordNet (SWN) may be used. It is a vocabulary in which terms are arranged in synonym groups called synsets. This research makes use of SentiWordNet and treats it as the labeled corpus for training. A sentiment dictionary, SentiMI, is built upon the mutual information calculated from these terms. A complete framework is developed by performing feature selection and extracting the mutual information, from SentiMI, for the selected features. Training, testing and evaluation of the proposed framework are conducted on a large dataset of 50,000 movie reviews. A notable performance improvement of 7% in accuracy, 14% in specificity, and 8% in F-measure is achieved by the proposed framework as compared to the baseline SentiWordNet classifier. Comparison with state-of-the-art classifiers is also performed on the widely used Cornell Movie Review dataset, which further proves the effectiveness of the proposed approach.
SentiMI: Introducing point-wise mutual information with SentiWordNet to improve sentiment polarity detection
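The following sketch shows point-wise mutual information computed from a labeled corpus; it is illustrative only, since SentiMI derives the statistic from SentiWordNet synset terms rather than from labeled documents, and the exact counting scheme here is an assumption.

```python
import math
from collections import Counter

def pmi_table(docs, labels, min_count=5):
    """PMI of each term with the positive class:
    PMI(t, pos) = log2( P(t, pos) / (P(t) * P(pos)) ).
    `docs` are token lists, `labels` are 0/1 polarities."""
    n = len(docs)
    n_pos = sum(labels)
    term_all, term_pos = Counter(), Counter()
    for doc, y in zip(docs, labels):
        for t in set(doc):                 # document frequency counting
            term_all[t] += 1
            term_pos[t] += y
    pmi = {}
    for t, c in term_all.items():
        if c < min_count:
            continue                       # skip rare, unreliable terms
        p_joint = term_pos[t] / n
        p_t, p_pos = c / n, n_pos / n
        if p_joint > 0:
            pmi[t] = math.log2(p_joint / (p_t * p_pos))
    return pmi
```

A positive score marks terms that co-occur with positive polarity more than chance would predict; the dictionary then supplies features for the downstream classifier.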
S1568494615007358
Production lines designed in a serial manner have attracted the attention of researchers for many years. Line efficiency, throughput time and workload balancing are the main concerns regarding assembly lines, due to their high production volumes. Recently, specific assembly line configurations, such as two-sided and parallel lines, have been addressed by researchers at the operational levels of production. Parallel two-sided assembly lines, a rather new research area, allow more flexible worker allocation and reduce throughput time by incorporating the advantages of both two-sided and parallel assembly lines. Since parallel two-sided assembly lines are utilized in industry to produce large-scale products such as automobiles, trucks, and buses, they also involve significant worker movement between parallel lines because of the unavoidable walking distances between lines. In this respect, and in contrast to the existing literature, walking distances have been included in the parallel two-sided assembly line balancing problem. The main purpose of this paper is to introduce the parallel two-sided assembly line balancing problem with walking times and to propose the implementation of the Bees Algorithm and the Artificial Bee Colony algorithm, due to the NP-hardness of the problem. An extensive computational study is also carried out and comparative results are presented.
Bee algorithms for parallel two-sided assembly line balancing problem with walking times
S1568494615007383
This paper presents the application of the backtracking search algorithm (BSA) to solving the economic/emission dispatch (EED) problem as a multi-objective optimization problem. BSA is a newly developed evolutionary algorithm with one control parameter for solving numerical optimization problems. It utilizes crossover and mutation operators to advance the optimization toward the optimum. The multi-objective BSA developed and presented in this paper uses an elitist external archive to store the non-dominated solutions, known as the Pareto front. The EED problem is also solved by the weighted sum method, which combines both objectives of the problem into a single objective. Three test systems are used as case studies to verify the effectiveness of BSA. The results are compared with those of other methods in the literature and confirm the high performance of BSA. Nomenclature: number of generating units; cost function coefficients of generating unit i; emission function coefficients of generating unit i; minimum and maximum production limits of the ith generator; output power of the ith generator; power demand; transmission network loss; vector of power outputs of N generating units; loss coefficients; generation cost of generating unit i; total generation cost of N generating units; emission amount of generating unit i; total emission amount of N generating units; objective vector; combined objective of several objectives; weighting factor; price penalty factor; standard uniform distribution; standard normal distribution; amplitude control function of the search-direction matrix; feasible search space of the optimization problem; BSA's control parameter; population size; population matrix in iteration t; individual i of population X in iteration t; historical population matrix in iteration t; final and trial population matrices; binary matrix; number of objectives; number of non-dominated solutions; objective function j; maximum and minimum values of the jth objective function; solution number i; crowding distance of solution i; membership function of solution i for objective j; normalized membership function of solution i.
Multi-objective backtracking search algorithm for economic emission dispatch problem
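A single-objective sketch of the two BSA operators the abstract names: mutation from a permuted historical population and the binary-map crossover. The multi-objective version in the paper replaces the greedy selection with Pareto-dominance bookkeeping and an external archive; the parameter choices here (F = 3·N(0,1), mix-rate) follow commonly published BSA descriptions and are assumptions.

```python
import numpy as np

def bsa_generation(P, oldP, fitness, bounds, mixrate=1.0, rng=np.random):
    """One generation of backtracking search optimization."""
    n, d = P.shape
    if rng.random() < rng.random():          # occasionally refresh history
        oldP = P.copy()
    oldP = oldP[rng.permutation(n)]          # shuffle historical population
    F = 3.0 * rng.standard_normal()          # amplitude control factor
    mutant = P + F * (oldP - P)
    # binary map: each trial keeps a random subset of mutant dimensions
    mask = np.zeros((n, d), dtype=bool)
    for i in range(n):
        k = max(1, int(np.ceil(mixrate * rng.random() * d)))
        mask[i, rng.choice(d, k, replace=False)] = True
    T = np.clip(np.where(mask, mutant, P), bounds[0], bounds[1])
    improved = np.array([fitness(t) < fitness(p) for t, p in zip(T, P)])
    P[improved] = T[improved]                # greedy selection
    return P, oldP
```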
S1568494615007395
Control charts are the most popular process monitoring techniques, designed to determine whether a process is in a state of statistical control or not. When a process change occurs, the control chart exhibits an out-of-control signal. In most cases, the signal is followed by a substantial delay. To address this drawback, supplementary techniques have been employed along with control charts to identify the exact time of the process change. This paper presents a hybrid method for estimating the change point on the x̄ chart when neither the change type nor its magnitude is known. For this purpose, two sets of features with the most discriminatory power between x̄ chart patterns are selected. The feature vectors are then extracted from the control chart patterns (CCPs) and served as input to a classification scheme comprised of several support vector machine (SVM) classifiers. After parameter tuning, the classifiers are built and used for classifying the CCPs and identifying the change type. Once the change type is determined, the fuzzy statistical clustering (FSC) and maximum likelihood (ML) estimators are employed to identify the process change point. The performance of the selected features, the classification scheme, and the change point estimators is evaluated through several simulation studies. Empirical results show that, in identifying the change types, the proposed classification scheme is more accurate than two recent CCP classification methods. The results also confirm that the proposed hybrid method offers an accurate estimate of the process change point, compared to the most recent methods developed for change point estimation.
A hybrid method for estimating the process change point using support vector machine and fuzzy statistical clustering
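The abstract does not spell out the estimators, so as one plausible instance, here is the standard maximum likelihood change-point estimator for a step change in the process mean (the Samuel–Pignatiello form), which is the kind of ML estimator the hybrid would dispatch to once the classifier reports a sudden shift.

```python
import numpy as np

def ml_change_point(xbar, mu0):
    """MLE of a step-change point in the process mean, given the subgroup
    means `xbar` recorded up to the out-of-control signal and the
    in-control mean mu0: tau = argmax_t (T - t) * (mean(xbar[t:]) - mu0)^2.
    Returns tau such that xbar[:tau] is in control and the shift starts
    at xbar[tau]."""
    T = len(xbar)
    stats = [(T - t) * (xbar[t:].mean() - mu0) ** 2 for t in range(T - 1)]
    return int(np.argmax(stats))
```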
S1568494615007401
The ranking of intuitionistic fuzzy sets (IFSs) is very important for intuitionistic fuzzy decision making. The aim of this paper is to propose a new risk attitudinal ranking method for IFSs and apply it to multi-attribute decision making (MADM) with incomplete weight information. Motivated by the technique for order preference by similarity to ideal solution (TOPSIS), we utilize the closeness degree to characterize the amount of information according to the geometrical representation of an IFS. The area of a triangle is calculated to measure the reliability of the information. It is proved that the closeness degree and the triangle area together form an interval. Thereby, a new lexicographical method is proposed based on these intervals for ranking intuitionistic fuzzy values (IFVs). Furthermore, taking the risk attitude of the decision maker fully into account, a novel risk attitudinal ranking measure is developed to rank the IFVs on the basis of the continuous ordered weighted average (C-OWA) operator and this interval. By maximizing the closeness degrees of the alternatives, we construct a multi-objective fractional programming model which is transformed into a linear program. Thus, the attribute weights are derived objectively by solving this linear program. A new method is then put forward for MADM with IFVs and incomplete weight information. Finally, an example analysis of a teacher selection is given to verify the effectiveness and practicability of the proposed method.
A novel risk attitudinal ranking method for intuitionistic fuzzy values and application to MADM
S1568494615007413
The recently proposed Zhang dynamics (ZD) has been proven to solve the linear-equality-constrained time-varying quadratic program exactly as time goes to infinity. This convergence performance is a significant improvement over the gradient-based dynamics (GD), which cannot make the error converge to zero even after infinitely long time. However, the ZD model with the suggested activation functions cannot reach the theoretical time-varying solution in finite time, which may limit its applications in real-time computation. Therefore, a nonlinearly-activated neurodynamic model is proposed and studied in this paper for the real-time solution of equality-constrained quadratic optimization with nonstationary coefficients. Compared with existing neurodynamic models (specifically the GD model and the ZD model) for optimization, the proposed neurodynamic model possesses much superior convergence performance (i.e., finite-time convergence). Furthermore, the upper bound of the finite convergence time is derived analytically according to Lyapunov theory. Both theoretical and simulation results verify the efficacy and superiority of the nonlinearly-activated neurodynamic model compared to those of the GD and ZD models.
A nonlinearly-activated neurodynamic model and its finite-time solution to equality-constrained quadratic optimization with nonstationary coefficients
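For intuition only, the following Euler-integrated sketch shows a sign-power ("nonlinearly activated") dynamics driving the KKT residual of a static equality-constrained QP to zero in finite time. The paper's model additionally handles nonstationary (time-varying) coefficients and derives the convergence-time bound analytically, so the KKT formulation, activation exponent r, gain gamma, and step sizes here are all illustrative assumptions.

```python
import numpy as np

def finite_time_qp(Q, c, A, b, r=0.5, gamma=50.0, dt=1e-4, steps=20000):
    """Simulate y' = -gamma * W^{-1} * phi(W y - u) for the KKT system
    W y = u of min 0.5 x'Qx + c'x s.t. Ax = b, where
    phi(e) = |e|^r * sign(e) with 0 < r < 1 yields finite-time convergence."""
    m = len(b)
    W = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    u = np.concatenate([-c, b])
    y = np.zeros(len(u))                     # state = [x; lambda]
    Winv = np.linalg.inv(W)
    for _ in range(steps):
        e = W @ y - u                        # KKT residual
        phi = np.sign(e) * np.abs(e) ** r    # sign-power activation
        y = y - dt * gamma * (Winv @ phi)
    return y[:len(c)]                        # primal solution x

# e.g. min 0.5*(x1^2 + x2^2) s.t. x1 + x2 = 1 gives x = [0.5, 0.5]:
# finite_time_qp(np.eye(2), np.zeros(2), np.array([[1.0, 1.0]]), np.array([1.0]))
```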
S1568494615007425
In novel forms of the Social Internet of Things, any mobile user within communication range may help route messages for another user in the network. The resulting message delivery rate depends both on the users' mobility patterns and on the message load in the network. This new type of configuration, however, poses new challenges to security; among them, assessing the effect that a group of colluding malicious participants can have on the global message delivery rate in such a network is far from trivial. In this work, after modeling this question as an optimization problem, we are able to find quite interesting results by coupling a network simulator with an evolutionary algorithm. The chosen algorithm is specifically designed to solve problems whose solutions can be decomposed into parts sharing the same structure. We demonstrate the effectiveness of the proposed approach on two medium-sized Delay-Tolerant Networks, realistically simulated in the urban contexts of two cities with very different route topologies: Venice and San Francisco. In all experiments, our methodology produces attack patterns that lower network performance far more than in previous studies on the subject, as the evolutionary core is able to exploit the specific weaknesses of each target configuration.
Optimizing groups of colluding strong attackers in mobile urban communication networks with evolutionary algorithms
S1568494615007437
This paper addresses the straight and U-shaped assembly line balancing problem. Although many attempts in the literature have been made to develop deterministic versions of the assembly line model, considerably less attention has been given to models in uncertain environments. In this paper, a novel bi-objective fuzzy mixed-integer linear programming model (BOFMILP) is developed in which triangular fuzzy numbers (TFNs) are employed to represent the uncertainty and vagueness associated with task processing times in real production systems. In this proposed model, two conflicting objectives (minimizing the number of stations as well as the cycle time) are considered simultaneously with respect to a set of constraints. For this purpose, an appropriate strategy, a new two-phase interactive fuzzy programming approach, is proposed as a solution method to find an efficient compromise solution. Finally, the validity of the proposed model as well as its solution approach is evaluated through numerical examples. In addition, a comparison study is conducted over some test problems in order to assess the performance of the proposed solution approach. The results demonstrate that the proposed interactive fuzzy approach not only can be applied to ALBPs but is also capable of handling any practical MOLP model. Moreover, in light of these results, the proposed model may constitute a framework to assist the decision maker (DM) in dealing with uncertainty in assembly line problems.
An interactive fuzzy programming approach for bi-objective straight and U-shaped assembly line balancing problem
S1568494615007449
In this study, a new kind of fuzzy set in the field of fuzzy time series is introduced. It works as a trend estimator suited to fuzzy time series forecasting by capturing the trend of the data appropriately. First, the historical data are fuzzified into differential fuzzy sets, and then differential fuzzy relationships are calculated. Second, differential fuzzy logic groups are established by grouping the differential fuzzy relationships. Finally, in the defuzzification step, the forecasts are calculated. To increase the accuracy of the model, an evolutionary algorithm, namely the imperialist competitive algorithm, is employed to train the model. A massive set of stock data from four main stock databases has been selected for model validation. The final model outperformed its counterparts in terms of accuracy.
A hybrid model based on differential fuzzy logic relationships and imperialist competitive algorithm for stock market forecasting
S1568494615007450
Solution of the optimal power flow (OPF) problem aims to optimize a selected objective function, such as fuel cost, active power loss, or total voltage deviation (TVD), via optimal adjustment of the power system control variables, while at the same time satisfying various equality and inequality constraints. In the present work, particle swarm optimization with an aging leader and challengers (ALC-PSO) is applied to the solution of the OPF problem of power systems. The proposed approach is examined and tested on the modified IEEE 30-bus and IEEE 118-bus test power systems with different objectives that reflect the minimization of fuel cost, active power loss, or TVD. The simulation results demonstrate the effectiveness of the proposed approach compared with other evolutionary optimization techniques that have surfaced in the recent state-of-the-art literature. Statistical analysis, presented in this paper, indicates the robustness of the proposed ALC-PSO algorithm.
Particle swarm optimization with an aging leader and challengers algorithm for the solution of optimal power flow problem
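A minimal sketch of the classical global-best PSO update that sits at the core of ALC-PSO. The aging-leader and challenger mechanisms and the OPF constraint handling are omitted, and the sphere function stands in for a fuel-cost objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):  # stand-in for a fuel-cost objective
    return float(np.sum(x ** 2))

dim, n, iters, w, c1, c2 = 5, 20, 200, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.array([sphere(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Velocity update: inertia + cognitive pull + social pull toward the leader
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([sphere(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(sphere(gbest))
```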
S1568494615007462
Most mobile communication companies in China are troubled by the arrearage problem, which makes credit evaluation of mobile telephone customers increasingly important to the economic operation and information management of China's telecommunications services industry. This study presents a customer credit measure method based on customer attributes that employs the eigenface technique. Analysis of customer credit evaluation indicates the potential of the proposed eigenface-based method as a viable solution to the arrearage problem. By means of principal component analysis, customers represented by their attribute vectors in a training set are used to build the eigenspace. A subspace of the eigenspace for credit evaluation is generated by selecting the principal orthogonal eigenvectors of the covariance matrix of the training samples. Each of the principal eigenvectors, traditionally called "eigenfaces", is taken as one basis reference customer (component) for credit evaluation. Each customer, represented by a feature vector, is then projected onto the subspace and described by a linear combination of the basis reference customers represented by the selected principal eigenvectors. The weights of the linear combination form a weight vector, and the difference between weight vectors is used to evaluate customer credit. The proposed method yields satisfying results in its application to credit evaluation for 400,000 customers of a mobile communication services company in China. Result analysis indicates that further arrearage management based on credit evaluation is workable.
Credit evaluation using eigenface method for mobile telephone customers
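The eigenspace construction and projection step described above can be sketched as follows; the attribute dimensionality and the data are synthetic stand-ins.

```python
import numpy as np

def build_eigenspace(X, k):
    """Rows of X are customers' attribute vectors (the training set)."""
    mean = X.mean(axis=0)
    C = np.cov(X - mean, rowvar=False)            # attribute covariance matrix
    vals, vecs = np.linalg.eigh(C)                # eigenvalues in ascending order
    basis = vecs[:, np.argsort(vals)[::-1][:k]]   # k principal "eigenfaces"
    return mean, basis

def project(x, mean, basis):
    """Weight vector of a customer in the eigenspace."""
    return (x - mean) @ basis

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))                    # 200 customers, 12 attributes
mean, basis = build_eigenspace(X, k=5)
w_new, w_ref = project(X[0], mean, basis), project(X[1], mean, basis)
# A smaller distance between weight vectors suggests a more similar credit profile
print(np.linalg.norm(w_new - w_ref))
```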
S1568494615007486
This paper addresses a new method for the combination of supervised learning and reinforcement learning (RL). Applying supervised learning to robot navigation encounters serious challenges such as inconsistent and noisy data, difficulty in gathering training data, and high error rates in the training data. RL capabilities, such as training with only one evaluative scalar signal and a high degree of exploration, have encouraged researchers to use RL for the robot navigation problem. However, RL algorithms are time-consuming and suffer from a high failure rate in the training phase. Here, we propose Supervised Fuzzy Sarsa Learning (SFSL) as a novel idea for utilizing the advantages of both supervised and reinforcement learning algorithms. A zero-order Takagi–Sugeno fuzzy controller with several candidate actions for each rule is considered as the main module of the robot's controller. The aim of training is to find the best action for each fuzzy rule. In the first step, a human supervisor drives an E-puck robot within the environment and the training data are gathered. In the second step, as a hard tuning, the training data are used to initialize the value (worth) of each candidate action in the fuzzy rules. Afterwards, the fuzzy Sarsa learning module, as a critic-only fuzzy reinforcement learner, fine-tunes the parameters of the conclusion parts of the fuzzy controller online. The proposed algorithm is used for driving the E-puck robot in an environment with obstacles. The experimental results show that the proposed approach decreases the learning time and the number of failures; it also improves the quality of the robot's motion in the testing environments.
Supervised fuzzy reinforcement learning for robot navigation
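A minimal tabular Sarsa update, the on-policy rule underlying SFSL. The fuzzy rule structure, the supervised initialization from demonstration data, and the E-puck environment are omitted; the toy environment below is an assumption for illustration only.

```python
import numpy as np

# Q[s, a]: worth of action a in state s (in SFSL, "states" would be fuzzy
# rules and Q would first be initialized from the supervised training data).
n_states, n_actions, alpha, gamma, eps = 10, 4, 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(2)

def policy(s):
    if rng.random() < eps:                 # exploration
        return int(rng.integers(n_actions))
    return int(Q[s].argmax())              # exploitation

def step(s, a):                            # toy environment stand-in
    return (s + 1) % n_states, -1.0        # next state, reward

s, a = 0, policy(0)
for _ in range(1000):
    s2, r = step(s, a)
    a2 = policy(s2)                        # on-policy choice: this is Sarsa
    Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
    s, a = s2, a2
```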
S1568494615007498
In this paper, we present a new approach for reliable fall detection. The fuzzy system consists of two input Mamdani engines and a triggering alert Sugeno engine. The output of the first Mamdani engine is a fuzzy set that assigns grades of membership to the possible values of dynamic transitions, whereas the output of the second is another fuzzy set assigning membership grades to possible body poses. Since the Mamdani engines perform fuzzy reasoning on disjoint subsets of the linguistic variables, the total number of fuzzy rules needed for the input–output mapping is far smaller. The person's pose is determined on the basis of depth maps, whereas the pose transitions are inferred using both depth maps and the accelerations acquired by a body-worn inertial sensor. In the case of a potential fall, a threshold-based algorithm launches the fuzzy system to authenticate the fall event. Using the accelerometric data, we determine the moment of impact, which in turn helps us to calculate the pose transitions. To the best of our knowledge, this is a new application of fuzzy logic in a novel approach to modeling and reliable low-cost detection of falls.
Fuzzy inference-based fall detection using kinect and body-worn accelerometer
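A toy Mamdani-style inference in the spirit of the system above (min for AND, max for aggregation, centroid defuzzification). The membership functions, rule base, and variable names are invented for illustration and are not the paper's.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fall_alert(transition_speed, pose_lying):
    """Two illustrative rules, fired with min, aggregated with max,
    defuzzified by the centroid over a sampled output universe."""
    y = np.linspace(0.0, 1.0, 101)                 # alert-level universe
    fast = tri(transition_speed, 0.5, 1.0, 1.5)
    lying = tri(pose_lying, 0.5, 1.0, 1.5)
    # Rule 1: fast transition AND lying pose -> high alert
    r1 = np.minimum(min(fast, lying), tri(y, 0.5, 1.0, 1.5))
    # Rule 2: slow transition -> low alert
    r2 = np.minimum(1.0 - fast, tri(y, -0.5, 0.0, 0.5))
    agg = np.maximum(r1, r2)
    return float((y * agg).sum() / agg.sum())      # centroid defuzzification

print(fall_alert(transition_speed=0.9, pose_lying=0.8))
```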
S1568494615007504
Machine learning methods, including neural networks, are useful in eddy current nondestructive evaluation for automated sizing of defects in a component or structure. Sizing of subsurface defects in an electrically conducting material using the eddy current response is challenging, as the skin effect and the radial extent of the magnetic fields strongly influence the response. Moreover, the information about all defect characteristics, such as length, width, depth, and height, is available within an eddy current image. Inspired by the recent developments in machine learning for multidimensional classification and their promise, this paper proposes chain classification for the sizing of defects. Chain classification enables the incorporation of dependencies between the class variables, which can enhance the performance of machine learning algorithms. The best sequence among the class variables is optimized using a greedy breadth-first-search (GBFS) algorithm, and systematic studies have been carried out using the GBFS. Two well-established machine learning classification algorithms, namely the radial basis function neural network and the support vector machine, have been used in chain classification. Coupling chain classification with the GBFS, an approach for automated sizing of defects is proposed. From modeled as well as experimentally obtained eddy current images, it has been established that the proposed approach can successfully size subsurface as well as surface defects.
Greedy breadth-first-search based chain classification for nondestructive sizing of subsurface defects
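A minimal classifier chain in the spirit of the approach above: each link appends its prediction as a feature for the next link. The fixed ordering here stands in for the GBFS-optimized sequence, and the synthetic data stand in for eddy-current features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 8))                    # stand-in eddy-current features
# Four correlated class variables (stand-ins for length/width/depth/height)
Y = np.stack([(X[:, 0] > 0), (X[:, 0] + X[:, 1] > 0),
              (X[:, 2] > 0), (X[:, 2] + X[:, 3] > 0)], axis=1).astype(int)

order = [0, 1, 2, 3]    # the paper optimizes this ordering with GBFS
chain, Xa = [], X.copy()
for j in order:
    clf = SVC().fit(Xa, Y[:, j])
    chain.append(clf)
    # Append this label's prediction as an extra feature for the next link
    Xa = np.hstack([Xa, clf.predict(Xa).reshape(-1, 1)])

# Prediction for new samples follows the same chain
Xt = X[:5].copy()
for clf in chain:
    Xt = np.hstack([Xt, clf.predict(Xt).reshape(-1, 1)])
print(Xt[:, -4:])       # the four predicted class variables
```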
S1568494615007516
The differential evolution (DE) algorithm is a population-based algorithm designed for global optimization. This paper proposes a different DE algorithm based on the mathematical modeling of socio-political evolution, called Colonial Competitive Differential Evolution (CCDE). Two typical CCDE algorithms are benchmarked on three well-known test functions, and the results are verified by a comparative study with two original DE algorithms, DE/best/1 and DE/rand/2. The effectiveness of the CCDE algorithms is also tested on the Economic Load Dispatch (ELD) problem, including 10, 15, 40, and 140-unit test systems. In this study, constraints and operational limitations such as valve-point loading, transmission losses, ramp rate limits, and prohibited operating zones are considered. The comparative results show that the CCDE algorithms perform well and are reliable tools for solving the ELD problem.
Colonial competitive differential evolution: An experimental study for optimal economic load dispatch
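The two baseline DE mutation strategies compared above can be written compactly. This sketch shows only the mutation step, without crossover, selection, or the colonial-competitive extension.

```python
import numpy as np

rng = np.random.default_rng(4)

def de_best_1(pop, best, F=0.8):
    """DE/best/1 mutant: v = x_best + F * (x_r1 - x_r2)."""
    r1, r2 = rng.choice(len(pop), 2, replace=False)
    return best + F * (pop[r1] - pop[r2])

def de_rand_2(pop, F=0.8):
    """DE/rand/2 mutant: v = x_r1 + F*(x_r2 - x_r3) + F*(x_r4 - x_r5)."""
    r = rng.choice(len(pop), 5, replace=False)
    return pop[r[0]] + F * (pop[r[1]] - pop[r[2]]) + F * (pop[r[3]] - pop[r[4]])

pop = rng.uniform(-5, 5, (30, 10))
best = pop[np.argmin((pop ** 2).sum(axis=1))]   # best w.r.t. a toy objective
print(de_best_1(pop, best)[:3], de_rand_2(pop)[:3])
```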
S1568494615007528
Factory management plays an important role in improving productivity and quality of service in the production process. In particular, the distributed permutation flow shop scheduling problem with multiple factories is considered a priority in factory automation. This study proposes a novel model of distributed scheduling by adding the reentrant characteristic, yielding the distributed reentrant permutation flow shop (DRPFS) scheduling problem. In this problem, a given set of jobs, each with a number of reentrant layers, is processed in factories that comprise identical sets of machines. The aim of the study is to determine the number of factories to be used, the assignment of jobs to factories, and the sequence of jobs within each factory so as to simultaneously satisfy the three objectives of minimizing makespan, total cost, and average tardiness. To this end, a novel multi-objective adaptive large neighborhood search (MOALNS) algorithm is developed for finding near-optimal solutions based on the Pareto front. Various destroy and repair operators are presented to balance the intensification and diversification of the search process. Numerical computational experiments are carried out to validate the proposed model. The performance of the proposed algorithm is compared with existing methods to validate its effectiveness and robustness in handling the DRPFS problem.
Multi-objective adaptive large neighborhood search for distributed reentrant permutation flow shop scheduling
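A skeleton of the adaptive large neighborhood search loop described above, with roulette-wheel operator selection and adaptive weight updates. The destroy/repair operators and the toy single-objective instance are illustrative assumptions; the multi-objective Pareto bookkeeping is omitted.

```python
import random

def alns(initial, destroy_ops, repair_ops, cost, iters=1000, decay=0.8):
    """Skeleton ALNS: operators are picked by roulette wheel on adaptive
    weights; a weight grows when its operator produces an improvement."""
    best = cur = initial
    wd, wr = [1.0] * len(destroy_ops), [1.0] * len(repair_ops)
    for _ in range(iters):
        i = random.choices(range(len(destroy_ops)), weights=wd)[0]
        j = random.choices(range(len(repair_ops)), weights=wr)[0]
        cand = repair_ops[j](destroy_ops[i](cur))
        score = 0.0
        if cost(cand) < cost(best):
            best, cur, score = cand, cand, 3.0   # reward: new global best
        elif cost(cand) < cost(cur):
            cur, score = cand, 1.0               # reward: local improvement
        wd[i] = decay * wd[i] + (1 - decay) * score
        wr[j] = decay * wr[j] + (1 - decay) * score
    return best

# Toy usage: a job sequence; destroy removes a random job, repair reinserts
# all missing jobs at their cheapest positions.
jobs, proc = list(range(8)), [3, 7, 2, 5, 4, 6, 1, 8]
def cost(seq):
    t = done = 0
    for j in seq:
        t += proc[j]; done += t                  # total completion time
    return done
def destroy(seq):
    s = seq[:]; s.pop(random.randrange(len(s))); return s
def repair(seq):
    for j in [j for j in jobs if j not in seq]:
        seq = min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)), key=cost)
    return seq

print(cost(alns(jobs, [destroy], [repair], cost)))
```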
S156849461500753X
An accurate contour estimation plays a significant role in classification and estimation of the shape, size, and position of a thyroid nodule. This helps to reduce the number of false positives and improves the accurate detection and efficient diagnosis of thyroid nodules. This paper introduces an automated delineation method that integrates spatial information with neutrosophic clustering and level sets for accurate and effective segmentation of thyroid nodules in ultrasound images. The proposed delineation method, named Spatial Neutrosophic Distance Regularized Level Set (SNDRLS), is based on Neutrosophic L-Means (NLM) clustering, which incorporates spatial information for level set evolution. The SNDRLS takes a rough estimation of the region of interest (ROI) as input, provided by Spatial NLM (SNLM) clustering, for precise delineation of one or more nodules. The performance of the proposed method is compared with level set, NLM clustering, Active Contour Without Edges (ACWE), Fuzzy C-Means (FCM) clustering and Neutrosophic based Watershed segmentation methods using the same image dataset. To validate the SNDRLS method, the manual demarcations from three expert radiologists are employed as ground truth. The SNDRLS yields the closest boundaries to the ground truth compared to other methods, as revealed by six assessment measures (true positive rate is 95.45±3.5%, false positive rate is 7.32±5.3%, overlap is 93.15±5.2%, mean absolute distance is 1.8±1.4 pixels, Hausdorff distance is 0.7±0.4 pixels and Dice metric is 94.25±4.6%). The experimental results show that the SNDRLS is able to delineate multiple nodules in thyroid ultrasound images accurately and effectively. The proposed method achieves the automated nodule boundary even for low-contrast, blurred, and noisy thyroid ultrasound images without any human intervention. Additionally, the SNDRLS has the ability to determine the controlling parameters adaptively from SNLM clustering.
Automated delineation of thyroid nodules in ultrasound images using spatial neutrosophic clustering and level set
S1568494615007541
A distributed generator (DG) is recognized as a viable solution for controlling line losses, bus voltage, voltage stability, etc., and represents a new era for distribution systems. This paper focuses on developing an approach for the placement of DG in order to minimize the active power loss and energy loss of distribution lines while maintaining the bus voltage and voltage stability index within specified limits of a given power system. The optimization is carried out on the basis of the optimal location and optimal size of the DG. This paper develops a new and efficient krill herd algorithm (KHA) for solving the optimal DG allocation problem of distribution networks. To test its feasibility and effectiveness, the proposed KH algorithm is applied to standard 33-bus, 69-bus and 118-bus radial distribution networks. The simulation results indicate that installing DG at the optimal location can significantly reduce the power loss of the distribution system. Moreover, the numerical results, compared with other stochastic search algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), combined GA and PSO (GA/PSO) and loss sensitivity factor simulated annealing (LSFSA), show that KHA finds better quality solutions.
Krill herd algorithm for optimal location of distributed generator in radial distribution system
S1568494615007553
Credit classification is an important component of critical financial decision making tasks such as credit scoring and bankruptcy prediction. Credit classification methods are usually evaluated in terms of their accuracy, interpretability, and computational efficiency. In this paper, we propose an approach for the automatic design of fuzzy rule-based classifiers (FRBCs) from financial data using multi-objective evolutionary optimization algorithms (MOEOAs). Our method generates, in a single experiment, an optimized collection of solutions (financial FRBCs) characterized by various levels of accuracy-interpretability trade-off. In our approach, we address the complexity- and semantics-related interpretability issues, we introduce original genetic operators for the classifier's rule base processing, and we implement our ideas in the context of the Non-dominated Sorting Genetic Algorithm II (NSGA-II), i.e., one of the presently most advanced MOEOAs. A significant part of the paper is devoted to an extensive comparative analysis of our approach and 24 alternative methods applied to three standard financial benchmark data sets, i.e., Statlog (Australian Credit Approval), Statlog (German Credit Approval), and Credit Approval (also referred to as Japanese Credit) sets available from the UCI repository of machine learning databases (http://archive.ics.uci.edu/ml). Several performance measures including accuracy, sensitivity, specificity, and a number of interpretability measures are employed in order to evaluate the obtained systems. Our approach significantly outperforms the alternative methods in terms of the interpretability of the obtained financial data classifiers while remaining either competitive or superior in terms of their accuracy and the speed of decision making.
A multi-objective genetic optimization for fast, fuzzy rule-based credit classification with balanced accuracy and interpretability
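A minimal sketch of the Pareto-dominance test that underlies NSGA-II-style multi-objective selection, applied to a toy accuracy-versus-rule-count population; the numbers are invented for illustration.

```python
import numpy as np

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives are to be minimized)."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(F):
    """Indices of non-dominated points among the rows of objective matrix F."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]

# Toy population: (classification error, number of fuzzy rules)
F = np.array([[0.10, 25], [0.12, 12], [0.15, 6], [0.11, 30], [0.16, 6]])
print(pareto_front(F))   # -> the accuracy-interpretability trade-off set
```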
S1568494615007565
Global optimization for mining complexes aims to generate a production schedule for the various mines and processing streams that maximizes the economic value of the enterprise as a whole. Aside from the large scale of the optimization models, one of the major challenges associated with optimizing mining complexes is related to the blending and non-linear geo-metallurgical interactions in the processing streams as materials are transformed from bulk material to refined products. This work proposes a new two-stage stochastic global optimization model for the production scheduling of open pit mining complexes with uncertainty. Three combinations of metaheuristics, including simulated annealing, particle swarm optimization and differential evolution, are tested to assess the performance of the solver. Experimental results for a copper-gold mining complex demonstrate that the optimizer is capable of generating designs that reduce the risk of not meeting production targets, have 6.6% higher expected net present value than the deterministic-equivalent design and 22.6% higher net present value than an industry-standard deterministic mine planning software.
Global optimization of open pit mining complexes with uncertainty
S1568494615007577
Image-based handwritten signature verification is important in most of the financial transactions for which a hard copy of a signature is needed. Considering the lack of dynamic information in static signature images, we propose a working framework built on hybrid methods of the discrete Radon transform (DRT), principal component analysis (PCA) and a probabilistic neural network (PNN). The proposed framework aims to distinguish forgeries from genuine signatures at the image level. Extensive experiments are conducted on our own independent signature database and on a public signature database, MCYT. Equal error rates (EER) of 1.51%, 3.23% and 13.07% are reported, respectively, for random, casual and skilled forgeries in our own database. On the MCYT signature database, our proposed approach achieves an EER of 9.87% with 10 training samples.
Image-based handwritten signature verification using hybrid methods of discrete Radon transform, principal component analysis and probabilistic neural network
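A compact probabilistic neural network (a Parzen-kernel classifier) of the kind used as the final stage above. The feature vectors are synthetic stand-ins for DRT+PCA signature features, and sigma is an assumed smoothing parameter.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """PNN: one Gaussian Parzen kernel per training sample; the class with
    the largest average kernel response wins."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)
        scores[c] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)

rng = np.random.default_rng(5)
# Stand-ins for feature vectors of genuine (1) and forged (0) signatures
genuine = rng.normal(0.0, 1.0, (40, 10))
forged = rng.normal(1.5, 1.0, (40, 10))
X = np.vstack([genuine, forged])
y = np.array([1] * 40 + [0] * 40)
print(pnn_predict(X, y, rng.normal(0.0, 1.0, 10)))   # likely 1 (genuine)
```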
S1568494615007723
Software reliability growth models (SRGMs) with a testing-effort function (TEF) are very helpful for software developers and have been widely accepted and applied. However, each SRGM with TEF (SRGMTEF) contains some undetermined parameters, and optimization of these parameters is a necessary task. Generally, these parameters are estimated by least squares estimation (LSE) or maximum likelihood estimation (MLE). The MLE can be used only when the software failure data satisfy certain assumptions, for example that they follow a particular distribution; real software failure data may not follow such a distribution. In this paper, we investigate the improvement and application of a swarm intelligence optimization algorithm, namely the quantum particle swarm optimization (QPSO) algorithm, to optimize the parameters of SRGMTEF. The performance of the proposed SRGMTEF model with optimized parameters is also compared with other existing models. The experimental results show that the proposed parameter optimization approach using QPSO is effective and flexible, and better software reliability growth performance can be obtained with SRGMTEF on different software failure datasets.
Parameter optimization of software reliability growth model with S-shaped testing-effort function using improved swarm intelligent optimization
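For orientation, a sketch of one commonly cited QPSO position update (mean-best attractor, no velocity term). The abstract does not specify the paper's improved variant, so treat this as the textbook rule applied to stand-in parameter vectors.

```python
import numpy as np

rng = np.random.default_rng(6)

def qpso_step(x, pbest, gbest, beta=0.75):
    """One QPSO position update: each particle collapses around a local
    attractor p, with a spread set by the mean of the personal bests."""
    n, d = x.shape
    mbest = pbest.mean(axis=0)                   # mean-best position
    phi = rng.random((n, d))
    p = phi * pbest + (1 - phi) * gbest          # local attractor
    u = 1.0 - rng.random((n, d))                 # in (0, 1], keeps log finite
    sign = np.where(rng.random((n, d)) < 0.5, -1.0, 1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)

x = rng.uniform(-1, 1, (15, 4))    # e.g. candidate SRGMTEF parameter vectors
pbest, gbest = x.copy(), x[0].copy()
x = qpso_step(x, pbest, gbest)
print(x.shape)
```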
S1568494615007735
A hybrid scheme for the segmentation of high-resolution images is proposed in this study. Our methodology is based on combining supervised and unsupervised segmentation. The entire process is performed in the frequency domain, rather than the spatial domain, using the Shift Invariant Shearlet Transform (SIST). Initially, the input image is filtered using an anisotropic filter to enhance the texture features. Then, it is separated into low and high sub-band frequencies using SIST. Subsequently, we build a feature vector from the coarser coefficients, complemented with texture information extracted from the high-frequency coefficients of the input image. A SOM is used for the preliminary classification of the input image coefficients, and the network training process is performed using the previously built feature vector. Lastly, a modified PCNN is used to refine the SOM results and reduce the over-segmentation artefacts. We used the Berkeley Segmentation Database (BSR) and Quick-Bird satellite images to validate the results. It was found that the proposed scheme is superior to the Fuzzy-C-Means-based, SOM-based, and PCNN-based segmentation algorithms in terms of quantitative criteria and visual interpretation.
Image segmentation scheme based on SOM–PCNN in frequency domain
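A minimal self-organizing map training loop of the kind used for the preliminary classification above. The grid size, decay schedules, and the random features standing in for shearlet coefficients are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def train_som(X, rows=8, cols=8, iters=2000, lr0=0.5, sigma0=3.0):
    """Minimal SOM: move the best-matching unit and its grid neighbours
    toward each sample, shrinking the learning rate and neighbourhood."""
    W = rng.random((rows, cols, X.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        # Best-matching unit: codebook vector closest to the sample
        bmu = np.unravel_index(((W - x) ** 2).sum(-1).argmin(), (rows, cols))
        h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
        W += lr * h[..., None] * (x - W)
    return W

X = rng.random((500, 3))        # stand-in for shearlet coefficient features
print(train_som(X).shape)       # (8, 8, 3) codebook
```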
S1568494615007760
Hyper-heuristics are (meta-)heuristics that operate at a higher level to choose or generate a set of low-level (meta-)heuristics in an attempt to solve difficult optimization problems. Iterated local search (ILS) is a well-known approach for discrete optimization, combining perturbation and hill-climbing within an iterative framework. In this study, we introduce an ILS approach strengthened by a hyper-heuristic that generates heuristics based on a fixed number of add and delete operations. The performance of the proposed hyper-heuristic is tested across two different problem domains using real-world benchmark course timetabling instances from Tracks 2 and 3 of the Second International Timetabling Competition. The results show that mixing add and delete operations within an ILS framework yields an effective hyper-heuristic approach.
Iterated local search using an add and delete hyper-heuristic for university course timetabling
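An ILS skeleton matching the perturbation/hill-climbing framework described above. The swap-based perturbation and the toy objective are illustrative stand-ins for the paper's add/delete move heuristics on timetables.

```python
import random

def iterated_local_search(init, perturb, local_search, cost, iters=100):
    """ILS skeleton: perturb the incumbent, re-optimize locally, and keep
    the result only if it improves (a simple acceptance criterion)."""
    best = local_search(init)
    for _ in range(iters):
        cand = local_search(perturb(best))
        if cost(cand) < cost(best):
            best = cand
    return best

# Toy instance: order events to minimize displacement "clashes".
events, clash = 10, lambda s: sum(abs(s[i] - i) for i in range(len(s)))
def perturb(s):                        # stand-in for add/delete-style moves
    s = s[:]; i, j = random.sample(range(len(s)), 2); s[i], s[j] = s[j], s[i]
    return s
def local_search(s):                   # hill-climbing with adjacent swaps
    improved = True
    while improved:
        improved = False
        for i in range(len(s) - 1):
            t = s[:]; t[i], t[i + 1] = t[i + 1], t[i]
            if clash(t) < clash(s):
                s, improved = t, True
    return s

s0 = random.sample(range(events), events)
print(clash(iterated_local_search(s0, perturb, local_search, clash)))
```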
S1568494615007772
In cluster analysis, a fundamental problem is to determine the best estimate of the number of clusters; this is known as the automatic clustering problem. Because of lack of prior domain knowledge, it is difficult to choose an appropriate number of clusters, especially when the data have many dimensions, when clusters differ widely in shape, size, and density, and when overlapping exists among groups. In the late 1990s, the automatic clustering problem gave rise to a new era in cluster analysis with the application of nature-inspired metaheuristics. Since then, researchers have developed several new algorithms in this field. This paper presents an up-to-date review of all major nature-inspired metaheuristic algorithms used thus far for automatic clustering. Also, the main components involved during the formulation of metaheuristics for automatic clustering are presented, such as encoding schemes, validity indices, and proximity measures. A total of 65 automatic clustering approaches are reviewed, which are based on single-solution, single-objective, and multiobjective metaheuristics, whose usage percentages are 3%, 69%, and 28%, respectively. Single-objective clustering algorithms are adequate to efficiently group linearly separable clusters. However, a strong tendency in using multiobjective algorithms is found nowadays to address non-linearly separable problems. Finally, a discussion and some emerging research directions are presented.
Automatic clustering using nature-inspired metaheuristics: A survey
S1568494615007784
Data gathering in wireless sensor networks (WSNs) consumes considerable energy due to the large amount of data transmitted. In the direct transmission (DT) method, each node has to transmit its generated data to the base station (BS), which leads to higher energy consumption and affects the lifetime of the network. Clustering is one of the efficient ways of gathering data in a WSN, and various clustering techniques exist that reduce the overall energy consumption in sensor networks. The cluster head (CH) plays a vital role in data gathering in a clustered WSN. Energy consumption in a CH node is comparatively higher than in non-CH nodes because of its activities, such as data aggregation and transmission to the BS node. Present-day clustering algorithms in WSNs use a multi-hopping mechanism, which costs the CH nodes near the BS more energy, since they route the data from other CHs to the BS. Some CH nodes may therefore die earlier than their intended lifetime due to the overload, which affects the performance of the WSN. This paper contributes a new clustering algorithm, Distributed Unequal Clustering using Fuzzy logic (DUCF), which elects CHs using a fuzzy approach. DUCF forms unequal clusters to balance the energy consumption among the CHs. The fuzzy inference system (FIS) in DUCF uses the residual energy, node degree and distance to the BS as input variables for CH election; chance and size are the output fuzzy parameters. DUCF assigns the maximum limit (size) on the number of member nodes of a CH by considering its input fuzzy parameters. A smaller cluster size is assigned to CHs that are nearer to the BS, since they act as routers for other, more distant CHs. DUCF ensures load balancing among the clusters by varying the cluster size of its CH nodes. DUCF uses the Mamdani method for fuzzy inference and the centroid method for defuzzification. DUCF's performance was compared with well-known algorithms such as LEACH, CHEF and EAUCF in various network scenarios. The experimental results indicated that DUCF forms unequal clusters that ensure load balancing among clusters, which in turn improves the network lifetime compared with its counterparts.
DUCF: Distributed load balancing Unequal Clustering in wireless sensor networks using Fuzzy approach
S1568494615007796
In this paper, a new similarity measure and a weighted similarity measure on intuitionistic fuzzy soft sets (IFSSs) are proposed, and some of their basic properties are discussed. Using the proposed similarity measure, a relation (≈α) between two IFSSs is defined, and it is found that the defined relation is not an equivalence relation. Further, the effectiveness of the proposed similarity measure is demonstrated in a numerical example with the help of the measure of performance and the measure of error. Moreover, medical diagnosis problems are examined through a hypothetical case study using the proposed similarity measure. Finally, the proposed method is applied to 10 different medical data sets from the UCI Machine Learning Repository, and their similarity measures are calculated. The corresponding performance measures, such as accuracy, sensitivity, specificity, ROC curves, AUC values, and F-measures, are obtained and compared with those of existing methods. This shows that the proposed method exhibits higher accuracy, sensitivity, and F-measures than the existing methods.
A similarity measure of intuitionistic fuzzy soft sets and its application in medical diagnosis
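For orientation, a widely used Hamming-distance-based similarity between intuitionistic fuzzy sets is sketched below. The paper proposes its own (weighted) measure, which the abstract does not specify, so this is a generic illustration only.

```python
def ifs_similarity(A, B):
    """Hamming-style similarity between two intuitionistic fuzzy (soft) sets,
    given as lists of (membership, non-membership) pairs over the same
    universe: S = 1 - (1/2n) * sum(|mu_A - mu_B| + |nu_A - nu_B|)."""
    n = len(A)
    d = sum(abs(ma - mb) + abs(na - nb) for (ma, na), (mb, nb) in zip(A, B))
    return 1.0 - d / (2.0 * n)

# Symptom profile of a patient vs. a disease template
patient = [(0.8, 0.1), (0.6, 0.3), (0.2, 0.7)]
disease = [(0.7, 0.2), (0.5, 0.4), (0.3, 0.6)]
print(ifs_similarity(patient, disease))   # close to 1 -> likely match
```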
S1568494615007802
It is well known that the 0-1 knapsack problem (KP01) plays an important role in both computing theory and real-life applications. Due to its NP-hardness, a great deal of impressive research has been performed on many variants of the problem. Inspired by the region partition of items, an effective hybrid algorithm based on greedy degree and expectation efficiency (GDEE) is presented in this paper. In the proposed algorithm, an initially determinate items region, a candidate items region and an unknown items region are generated to direct the selection of items. A greedy degree model inspired by the greedy strategy is devised to select some items for the initially determinate region. A dynamic expectation efficiency strategy is designed and used to select some other items as the candidate region, and the remaining items are regarded as the unknown region. To obtain the final items to which the best profit corresponds, a static expectation efficiency strategy is proposed, and a parallel computing method is adopted to update the objective function value. Extensive numerical investigations based on a large number of instances are conducted. The proposed GDEE algorithm is evaluated against the chemical reaction optimization algorithm and the modified discrete shuffled frog leaping algorithm. The comparative results show that GDEE is much more effective in solving KP01 than the other algorithms and that it is a promising tool for solving combinatorial optimization problems such as resource allocation and production scheduling.
Solving 0-1 knapsack problem by greedy degree and expectation efficiency
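The profit-to-weight ratio cue behind greedy item selection can be sketched as a plain greedy baseline. GDEE's three-region partition and expectation-efficiency strategies build on this idea but are not reproduced here.

```python
def greedy_knapsack(profits, weights, capacity):
    """Greedy baseline for KP01: take items in decreasing profit/weight
    order while they still fit within the remaining capacity."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    chosen, total_w, total_p = [], 0, 0
    for i in order:
        if total_w + weights[i] <= capacity:
            chosen.append(i)
            total_w += weights[i]
            total_p += profits[i]
    return chosen, total_p

profits = [60, 100, 120, 40]
weights = [10, 20, 30, 15]
print(greedy_knapsack(profits, weights, capacity=50))  # ([0, 1, 3], 200)
```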
S1568494615007814
The brain magnetic resonance (MR) image has an embedded bias field, which needs to be corrected to obtain the actual MR image for classification. The bias field, being a slowly varying nonlinear field, needs to be estimated. In this paper, we propose three schemes, and in turn three algorithms, to segment a given MR image while estimating the bias field. The problem is compounded when the MR image is corrupted with noise in addition to the inherent bias field. The notions of possibilistic and fuzzy membership are combined to handle the modeling of the bias field and noise. The weighted typicality measure, together with the weighted fuzzy membership, is used to model the image. This results in the proposed Bias Corrected Possibilistic Fuzzy C-Means (BCPFCM) strategy and algorithm. Further reinforcing neighbourhood data in the modeling results in two other strategies, namely Bias Corrected Possibilistic Neighborhood Fuzzy C-Means (BCPNFCM) and Bias Corrected Separately weighted Possibilistic Neighborhood Fuzzy C-Means (BCSPNFCM). The proposed algorithms have been successfully tested on synthetic data with bias fields of low and high spatial frequency. Noisy brain MR images with Gaussian noise of varying strength have been considered from the BrainWeb database. The algorithms have also been tested on real brain MR data sets with axial and sagittal views, and it has been found that the proposed algorithms produce segmentation results with a lower percentage of misclassification errors than the Bias Corrected Fuzzy C-Means (BCFCM) algorithm proposed by Ahmed et al. [4]. The performance of the proposed algorithms has been compared with algorithms from other paradigms in the context of Tanimoto's index.
Modified possibilistic fuzzy C-means algorithms for segmentation of magnetic resonance image
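A standard fuzzy C-means loop, the starting point that the bias-corrected possibilistic variants above extend. The bias-field estimation, typicality weights, and neighbourhood terms are not included in this sketch.

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Standard FCM: alternate the centroid and membership updates.
    u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-p)
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Three synthetic 2-D clusters standing in for tissue intensity groups
X = np.vstack([np.random.default_rng(1).normal(mu, 0.3, (50, 2))
               for mu in (0.0, 2.0, 4.0)])
centers, U = fcm(X)
print(np.round(centers, 2))
```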
S1568494615007826
The PID controller structure is regarded as a standard in the control-engineering community and is supported by a vast range of automation hardware; PID controllers are therefore widely used in industrial practice. However, the problem of tuning the controller parameters has to be tackled by the control engineer, and it is often not dealt with in an optimal way, resulting in poor control performance and even compromised safety. This paper proposes a framework that uses an interval model to describe the uncertain or variable dynamics of the process. The framework employs a particle swarm optimization algorithm to obtain the best-performing PID controller with regard to several possible criteria, while taking into account the complementary sensitivity function constraints, which ensure robustness within the bounds of the uncertain parameters' intervals. Hence, the presented approach enables a simple, computationally tractable and efficient constrained-optimization solution for tuning the parameters of the controller, while considering gain, pole, zero and time-delay uncertainties defined using an interval model of the controlled process. The results provide good control performance while assuring stability within the prescribed uncertainty constraints. Furthermore, the controller performance is adequate only if the relative system perturbations are considered, as proposed in the paper. The proposed approach has been tested on various examples, and the results suggest that it is a useful framework for obtaining adequate controller parameters, which ensure robust stability and favorable control performance of the closed loop, even when considerable process uncertainties are expected.
Interval-model-based global optimization framework for robust stability and performance of PID controllers
S1568494615007838
The extension of traditional data mining methods to time series has been effectively applied to a wide range of domains such as finance, econometrics, biology, security, and medicine. Many existing mining methods deal with the task of change point detection, but very few provide a flexible approach. Querying specific change points with linguistic variables is particularly useful in crime analysis, where intuitive, understandable, and appropriate detection of changes can significantly improve the allocation of resources for timely and concise operations. In this paper, we propose an on-line method for detecting and querying change points in crime-related time series with the use of a meaningful representation and a fuzzy inference system. Change point detection is based on a shape space representation, and linguistic terms describing geometric properties of the change points are used to express queries, offering the advantage of intuitiveness and flexibility. An empirical evaluation is first conducted on a crime data set to confirm the validity of the proposed method, and then on a financial data set to test its general applicability. A comparison with a similar change-point detection algorithm and a sensitivity analysis are also conducted. Results show that the method is able to accurately detect change points at very low computational cost. More broadly, the detection of specific change points within time series of virtually any domain is made more intuitive and more understandable, even for experts not related to data mining.
Change points detection in crime-related time series: An on-line fuzzy approach based on a shape space representation