FileName | Abstract | Title
S1568494614002750
Most real-world problems naturally involve multiple conflicting objectives, such as attempting to maximize both the efficiency and the safety of a working environment. The aim of multi-objective optimization algorithms is to find those solutions that optimize several components of a vector of objective functions simultaneously. However, when objectives conflict with each other, the multi-objective problem does not have a single solution that is optimal for all objectives simultaneously. Instead, algorithms search for the set of efficient solutions, known as the global non-dominated set, which provides solutions that optimally trade off the objectives. The final solution to be adopted from this set depends on the preferences of the decision-makers involved in the process. Hence, a decision-maker is typically interested in knowing as many potential solutions as possible. In this paper, we propose an extension to a previous piece of work on multi-objective cooperative coevolutionary algorithms (MOCCA). The idea was motivated by a practical problem in air traffic management: the design of terminal airspaces. MOCCA, together with a further study on a distributed environment for MOCCA, was found to fit the needs of this application. We systematically questioned key components of the algorithm and investigated variations to identify a better design. This paper summarizes this systematic investigation and presents the resultant new algorithm: multi-objective co-operative co-evolutionary algorithm II (MOCCA-II).
MOCCA-II: A multi-objective co-operative co-evolutionary algorithm
S1568494614002762
To improve the performance of standard particle swarm optimization (PSO), which suffers from premature convergence and slow convergence speed, many PSO variants introduce numerous stochastic or aimless strategies to overcome the convergence problem. However, mutual learning between elite particles is omitted, although it might benefit the convergence speed and prevent premature convergence. In this paper, we introduce DSLPSO, which integrates three novel strategies, namely tabu detecting, shrinking and local learning, into PSO to overcome the aforementioned shortcomings. In DSLPSO, the search space of each dimension is divided into many equal subregions. The tabu detecting strategy, which has good ergodicity over the search space, helps the global historical best particle detect a more suitable subregion and thus jump out of a local optimum. The shrinking strategy enables DSLPSO to optimize in a smaller search space and obtain a higher convergence speed. In the local learning strategy, a differential between two elite particles is used to increase solution accuracy. The experimental results show that DSLPSO has superior performance in comparison with several other participant PSOs on most of the tested functions, offering faster convergence speed, higher solution accuracy and stronger reliability.
An improved particle swarm optimizer based on tabu detecting and local learning strategy in a shrunk search space
S1568494614002774
In this paper a new method for identifying canonical coherent scatterers in quad-polarimetric SAR data is presented. The proposed method is based on the analysis of polarimetric signatures. The observed signatures are compared with the polarimetric signatures of four canonical objects: trihedral, dihedral, and right and left helix, which represent the basic scattering mechanisms of single bounce, double bounce and helix scattering. The polarimetric matrices are treated as vectors in a unitary space with a scalar product that generates the norm. A recognized object is classified into one of the four coherent classes by a Kohonen network. The network is not trained in an iterative process; instead, its weights are adjusted according to the given patterns, and its classification is supported by rules. The obtained maps of pixels that represent canonical objects are compared with a map of coherent scatterers obtained using the polarimetric entropy approach. The developed method of identifying canonical coherent scatterers based on polarimetric signature analysis allows us not only to identify the canonical coherent scatterers precisely but also to determine the type of scattering mechanism characteristic of each of them. Since the proposed method works on single-look (non-averaged) SAR data, it does not cause any spatial or spectral loss of information, because no averaging is conducted. Moreover, the proposed method enables the identification of the type of scattering mechanism in the canonical coherent pixels, an improvement over existing methods. The obtained results should be more precise because the full polarimetric information about the scatterers is used in the identification procedure.
SAR images analysis based on polarimetric signatures
S1568494614002786
The integrated machine allocation and facility layout problem (IMALP) is a branch of the general facility layout problem in which, besides selecting machine locations, the processing route of each product is determined. Most research in this area supposes that the flow of material is certain and exact, which is an unrealistic assumption in today's dynamic and uncertain business environment. Therefore, in this paper the demand volume is modeled as fuzzy numbers with different membership functions. To solve this problem, the deterministic model is first integrated with a fuzzy implication via the expected value model, and thereafter an intelligent hybrid algorithm combining a genetic algorithm and a fuzzy simulation approach is applied. Finally, the efficiency of the proposed algorithm is evaluated on a set of numerical examples. The results show the effectiveness of the hybrid algorithm in finding IMALP solutions.
A hybrid fuzzy-GA algorithm for the integrated machine allocation problem with fuzzy demands
S1568494614002798
Evolutionary algorithms are one of the most common choices reported in the literature for the tuning of fuzzy logic controllers based on either type-1 or type-2 fuzzy systems. An alternative to evolutionary algorithms is the simple tuning algorithm (STA-FLC), a methodology designed to improve the response of type-1 fuzzy logic controllers in a practical, intuitive and simple way. This paper presents an extension of the simple tuning algorithm to fuzzy logic controllers based on the theory of type-2 fuzzy systems using a parallel model implementation; it also includes a mechanism to calculate the feedback gain, new integral criteria parameters, and the effect of AND/OR operator combinations on the fuzzy rules to improve the algorithm's applicability and performance. All these improvements are demonstrated with experiments on different types of plants.
Optimal design of interval type 2 fuzzy controllers based on a simple tuning algorithm
S1568494614002804
Most traditional histogram-based thresholding techniques are effective for bi-level thresholding and are unable to consider the spatial contextual information of the image when selecting an optimal threshold. In this article a novel thresholding technique is presented that proposes an energy function to generate the energy curve of an image, taking into account the spatial contextual information of the image. The behavior of this energy curve is very similar to the histogram of the image. To incorporate spatial contextual information into the threshold selection process, this energy curve is used as the input of our technique instead of the histogram. Moreover, to address the multilevel thresholding problem, the properties of the genetic algorithm are exploited. The proposed algorithm is evaluated on a number of different types of images using a validity measure. The results of the proposed technique are compared with those obtained by using the histogram of the image and also with an existing genetic-algorithm-based context sensitive technique. The comparisons confirmed the effectiveness of the proposed technique.
A novel context sensitive multilevel thresholding for image segmentation
S1568494614002816
This paper deals with the potential and limitations of using voice and speech processing to detect Obstructive Sleep Apnea (OSA). An extensive body of voice features has been extracted from patients who present various degrees of OSA as well as from healthy controls. We analyse the utility of a reduced set of features for detecting OSA. We apply various feature selection and reduction schemes (statistical ranking, Genetic Algorithms, PCA, LDA) and compare various classifiers (Bayesian classifiers, kNN, Support Vector Machines, neural networks, Adaboost). S-fold cross-validation performed on 248 subjects shows that in the extreme cases (that is, 127 controls and 121 patients with severe OSA) voice alone is able to discriminate quite well between the presence and absence of OSA. However, this is not the case with mild OSA and healthy snoring patients, where voice seems to play a secondary role. We found that the best classification schemes are achieved using a Genetic Algorithm for feature selection/reduction.
Detection of severe obstructive sleep apnea through voice analysis
S1568494614002828
In this paper, we present a low-complexity algorithm for real-time joint user scheduling and receive antenna selection (JUSRAS) in multiuser MIMO systems. The computational complexity of an exhaustive search for the JUSRAS problem grows exponentially with the number of users and receive antennas. We apply binary particle swarm optimization (BPSO) to the joint user scheduling and receive antenna selection problem. In addition to applying conventional BPSO to JUSRAS, we also present a specific improvement to this population-based heuristic: we feed it a cyclically shifted initial population, so that the number of iterations until reaching an acceptable solution is reduced. The proposed BPSO for the JUSRAS problem has a low computational complexity, and its effectiveness is verified through simulation results.
A joint antenna and user selection scheme for multiuser MIMO system
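For context, the conventional BPSO update that the JUSRAS scheme builds on (Kennedy and Eberhart's binary PSO) can be sketched as below; the function names and parameter values are illustrative, and the paper's JUSRAS fitness function is not shown.

    import numpy as np

    def bpso_step(X, V, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=4.0,
                  rng=np.random.default_rng()):
        """One conventional BPSO update: velocities evolve in R, and each
        bit of a position is resampled with probability sigmoid(velocity).
        Here a bit could mark a (user, receive-antenna) selection."""
        r1 = rng.random(X.shape)
        r2 = rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        V = np.clip(V, -vmax, vmax)          # keep the sigmoid off saturation
        prob = 1.0 / (1.0 + np.exp(-V))      # probability that a bit is 1
        X = (rng.random(X.shape) < prob).astype(int)
        return X, V

A cyclically shifted initial population of the kind the abstract proposes could then be seeded, for instance, by applying np.roll with different shifts to one base selection vector.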
S1568494614002944
Scheduling of a single machine in manufacturing systems is especially complex when order arrivals are dynamic. The complexity of the problem increases further when sequence-dependent setup times and machine maintenance in a dynamic manufacturing environment are considered. Computational experiments in the literature have shown that even the static single machine scheduling problem without regular maintenance activities is NP-hard. Multi-agent systems, a branch of artificial intelligence, provide a new alternative way of solving dynamic and complex problems. In this paper a collaborative multi-agent based optimization method is proposed for the single machine scheduling problem with sequence-dependent setup times and maintenance constraints. The problem is solved under both regular and irregular maintenance activities. The solutions of the multi-agent based approach are compared on static single machine scheduling problem sets available in the literature. The method is also tested in a real-time manufacturing environment where computational time plays a critical role in the decision making process.
Multi-agent based approach for single machine scheduling with sequence-dependent setup times and machine maintenance
S1568494614002968
The twin-screw configuration problem (TSCP) arises in the context of polymer processing, where twin-screw extruders are used to prepare polymer blends, compounds or composites. The goal of the TSCP is to define the configuration of a screw from a given set of screw elements. The TSCP can be seen as a sequencing problem as the order of the screw elements on the screw axis has to be defined. It is also inherently a multi-objective problem since processing has to optimize various conflicting parameters related to the degree of mixing, shear rate, or mechanical energy input among others. In this article, we develop hybrid algorithms to tackle the bi-objective TSCP. The hybrid algorithms combine different local search procedures, including Pareto local search and two phase local search algorithms, with two different population-based algorithms, namely a multi-objective evolutionary algorithm and a multi-objective ant colony optimization algorithm. The experimental evaluation of these approaches shows that the best hybrid designs, combining Pareto local search with a multi-objective ant colony optimization approach, outperform the best algorithms that have been previously proposed for the TSCP.
Hybrid algorithms for the twin-screw extrusion configuration problem
S156849461400297X
All dynamic crop models for growth and development have several parameters whose values are usually determined using measurements from the real system. The parameter estimation problem is posed as an optimization problem, and optimization algorithms are used to solve it. However, because the model is generally nonlinear, the optimization problem is likely multimodal; classical local search methods therefore fail to locate the global minimum, and as a consequence the model parameters could be inaccurately estimated. This paper presents a comparison of several evolutionary (EAs) and bio-inspired (BIAs) algorithms, considered as global optimization methods, namely Differential Evolution (DE), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC), on parameter estimation of the crop growth SUCROS (a Simple and Universal CROp Growth Simulator) model. Subsequently, the SUCROS model for potential growth was applied to a husk tomato crop (Physalis ixocarpa Brot. ex Horm.) using data from an experiment carried out in Chapingo, Mexico. The objective was to determine which algorithm generates parameter values that give the best prediction of the model. An analysis of variance (ANOVA) was carried out to statistically evaluate the efficiency and effectiveness of the studied algorithms. An algorithm's efficiency was evaluated by counting the number of objective function evaluations required to approximate an optimum, and its effectiveness by counting the number of times the algorithm converged to an optimum. Simulation results showed that standard DE/rand/1/bin obtained the best results.
Parameter estimation for crop growth model using evolutionary and bio-inspired algorithms
S1568494614002981
The location routing problem with simultaneous pickup and delivery (LRPSPD) is a new variant of the location routing problem (LRP). The objective of the LRPSPD is to minimize the total cost of a distribution system, including vehicle traveling cost, depot opening cost, and vehicle fixed cost, by locating the depots and determining the vehicle routes that simultaneously satisfy the pickup and delivery demands of each customer. The LRPSPD is NP-hard since its special case, the LRP, is NP-hard. Thus, this study proposes a multi-start simulated annealing (MSA) algorithm for solving the LRPSPD, which incorporates a multi-start hill climbing strategy into the simulated annealing framework. The MSA algorithm is tested on 360 benchmark instances to verify its performance. Results indicate that the multi-start strategy can significantly enhance the performance of the traditional single-start simulated annealing algorithm. Our MSA algorithm is very effective in solving the LRPSPD compared to existing solution approaches: it obtained 206 best solutions out of the 360 benchmark instances, including 126 new best solutions.
Multi-start simulated annealing heuristic for the location routing problem with simultaneous pickup and delivery
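As a rough illustration of the multi-start idea (the paper's neighborhood moves over depot openings and routes are problem-specific and omitted; all names below are hypothetical):

    import math
    import random

    def simulated_annealing(init, neighbor, cost, T0=1000.0, alpha=0.98,
                            iters=5000):
        """Plain SA: always accept improvements, accept worse candidates
        with probability exp(-delta / T), and cool T geometrically."""
        cur = init()
        cur_cost = cost(cur)
        best, best_cost = cur, cur_cost
        T = T0
        for _ in range(iters):
            cand = neighbor(cur)
            delta = cost(cand) - cur_cost
            if delta <= 0 or random.random() < math.exp(-delta / T):
                cur, cur_cost = cand, cur_cost + delta
                if cur_cost < best_cost:
                    best, best_cost = cur, cur_cost
            T = max(T * alpha, 1e-8)
        return best, best_cost

    def multi_start_sa(n_starts, init, neighbor, cost):
        """Restart SA from fresh initial solutions and keep the overall best."""
        runs = [simulated_annealing(init, neighbor, cost) for _ in range(n_starts)]
        return min(runs, key=lambda run: run[1])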
S1568494614002993
Scenario Planning helps explore what possible futures may look like and establish plans to deal with them, something essential for any company, institution or country that wants to be competitive in this globalized world. In this context, Cross Impact Analysis is one of the most used methods to study possible futures or scenarios by identifying the system's variables and the roles they play in it. In this paper, we focus on the method called MICMAC (Impact Matrix Cross-Reference Multiplication Applied to a Classification), for which we propose a new version based on Computing with Words techniques and fuzzy sets, namely Fuzzy Linguistic MICMAC (FLMICMAC). The new method allows linguistic assessment of the mutual influence between variables, captures and handles the vagueness of these assessments, expresses the results linguistically, provides information in absolute terms and incorporates two new ways to visualize the results. Our proposal has been applied to a real case study and the results have been compared with the original MICMAC, showing the superiority of FLMICMAC: it gives more robust, accurate, complete and easier-to-interpret information, which can be very useful for a better understanding of the system.
A new fuzzy linguistic approach to qualitative Cross Impact Analysis
S1568494614003007
In recent years, the use of magnetic field measurements has become relevant in several applications ranging from non-invasive structural fault detection to tracking of micro-capsules within living organisms. Magnetic measurements are, however, affected by high noise due to a number of causes, such as interference from external objects and the Earth's magnetic field. Furthermore, in many situations the magnetic fields under analysis are time-variant, for example because they are generated by moving objects, power lines, antennas, etc. For these reasons, a general approach for accurate real-time magnetic dipole detection is unfeasible, and specific techniques must be devised. In this paper we explore the possibility of using multiple 3-axis magnetic field sensors to estimate the position and orientation of a magnetic dipole moving within the detection area of the sensors. We propose a real-time Computational Intelligence approach, based on an innovative single particle optimization algorithm, for solving the inverse problem of magnetic dipole detection with an average period of 2.5 ms. Finally, we validate the proposed approach by means of an experimental setup consisting of 3 sensors and a custom graphical application showing in real time the estimated position and orientation of the magnetic dipole. Experimental results show that the proposed approach is superior, in terms of detection error and computational time, to several state-of-the-art real-parameter optimization algorithms.
Real-time magnetic dipole detection with single particle optimization
S1568494614003019
Because of the chaotic nature and intrinsic complexity of wind speed, it is difficult to describe the moving tendency of wind speed and accurately forecast it. In our study, a novel EMD–ENN approach, a hybrid of empirical mode decomposition (EMD) and Elman neural network (ENN), is proposed to forecast wind speed. First, the original wind speed datasets are decomposed into a collection of intrinsic mode functions (IMFs) and a residue by EMD, yielding relatively stationary sub-series that can be readily modeled by neural networks. Second, both IMF components and residue are applied to establish the corresponding ENN models. Then, each sub-series is predicted using the corresponding ENN. Finally, the prediction values of the original wind speed datasets are calculated by the sum of the forecasting values of every sub-series. Moreover, in the ENN modeling process, the neuron number of the input layer is determined by a partial autocorrelation function. Four prediction cases of wind speed are used to test the performance of the proposed hybrid approach. Compared with the persistent model, back-propagation neural network, and ENN, the simulation results show that the proposed EMD–ENN model consistently has the minimum statistical error of the mean absolute error, mean square error, and mean absolute percentage error. Thus, it is concluded that the proposed approach is suitable for wind speed prediction.
Forecasting wind speed using empirical mode decomposition and Elman neural network
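A minimal sketch of the decompose-model-sum pipeline described above, assuming the PyEMD package (pip package EMD-signal) for the decomposition; sklearn's MLPRegressor stands in for the Elman network (its recurrent context layer is omitted for brevity), and a fixed lag order p stands in for the PACF-based choice of input layer size:

    import numpy as np
    from PyEMD import EMD
    from sklearn.neural_network import MLPRegressor

    def lag_matrix(x, p):
        """Build (X, y) pairs in which X holds the p previous values of x."""
        X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
        return X, x[p:]

    def emd_nn_forecast(series, p=4):
        """Decompose, fit one model per sub-series, and sum the one-step
        forecasts, mirroring the hybrid scheme in the abstract."""
        imfs = EMD()(series)            # IMF components (residue appended last)
        preds = []
        for comp in imfs:
            X, y = lag_matrix(comp, p)
            model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000)
            model.fit(X, y)
            preds.append(model.predict(comp[-p:].reshape(1, -1))[0])
        return float(sum(preds))        # forecast = sum of sub-forecasts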
S1568494614003020
Rapid growth in world population and resource limitations necessitate the remanufacturing of products and their parts/modules. Managing these processes requires special activities such as inspection, disassembly, and sorting, known as treatment activities. This paper proposes a capacitated multi-echelon, multi-product reverse logistics network design with fuzzy returned products, in which the locations of both the treatment activities and the facilities are decision variables. As the resulting nonlinear mixed integer programming model is combinatorial, a memetic-based heuristic approach is presented to solve it. To validate the proposed memetic-based heuristic, the obtained results are compared with the results of a linear approximation of the model obtained by a commercial optimization package. Moreover, due to the inherent uncertainty in returned products, the demands of these products are considered uncertain parameters, and a fuzzy approach is employed to tackle this matter. To deal with the uncertainty, a stochastic simulation approach is employed to defuzzify the demands, where extra costs due to opening new centers or extra transportation may be imposed on the system. These costs are treated as penalties in the objective function. To minimize the resulting penalties over the simulation's iterations, the average of the penalties is added to the objective function of the deterministic model, which is considered the primary objective function, while the variance of the penalties is considered the secondary objective function so as to obtain a robust solution. The resulting bi-objective model is solved through the goal programming method to minimize both objectives simultaneously.
Location based treatment activities for end of life products network design under uncertainty by a robust multi-objective memetic-based heuristic approach
S1568494614003032
In this study, a new method is proposed for the exact analytical inverse mapping of Takagi–Sugeno fuzzy systems with singleton and linear consequents where the input variables are described by strong triangular partitions. These fuzzy systems can be decomposed into several fuzzy subsystems. The output of a fuzzy subsystem takes a multi-linear form in the singleton consequent case or a multi-variate second order polynomial form in the linear consequent case. Since there exist explicit analytical formulas for the solutions of first and second order equations, exact analytical inverse solutions can be obtained for decomposable Takagi–Sugeno fuzzy systems with singleton and linear consequents. In the proposed method, the output of the fuzzy subsystem is represented in matrix multiplication form. The parametric inverse definition of the fuzzy subsystem is obtained by appropriate matrix partitioning with respect to the inversion variable. The inverse mapping of each fuzzy subsystem can then easily be calculated by substituting the appropriate parameters of the fuzzy subsystem into this parametric inverse definition. Thus, it becomes easy to find the analytical inverse mapping of the overall Takagi–Sugeno fuzzy system by composing the inverse mappings of all fuzzy subsystems. The exactness and effectiveness of the proposed inversion method are demonstrated on trajectory tracking problems by simulations.
Exact analytical inverse mapping of decomposable TS fuzzy systems with singleton and linear consequents
S1568494614003044
This study provides a general introduction to the principles, algorithms and practice of Computational Intelligence (CI) and elaborates on their use in signal processing and time series analysis. In this setting, we discuss the main technologies of Computational Intelligence (namely, neural networks, fuzzy sets or Granular Computing, and evolutionary optimization), identify their focal points and stress their overall synergistic character, which ultimately gives rise to the highly synergistic CI environment. Furthermore, the main advantages and limitations of the CI technologies are discussed. In the sequel, we present CI-oriented constructs in signal modeling, classification, and interpretation.
Signal processing and time series description: A Perspective of Computational Intelligence and Granular Computing
S1568494614003056
In this paper the assessment of the wave energy potential in nearshore coastal areas is investigated by means of artificial neural networks (ANNs). The performance of the ANNs is compared with in situ measurements and spectral numerical modelling (the conventional tool for wave energy assessment). For this purpose, 13 years of hourly records from two buoys, one offshore and one inshore, are used to develop an ANN model for predicting the nearshore wave power. The best suited architecture was selected after assessing the performance of 480 ANN models involving twelve different architectures. The results predicted by the ANN model were compared with the measured data and with those obtained by means of the SWAN (Simulating Waves Nearshore) spectral model. The quality of the ANN model's predictions shows that this type of artificial intelligence model constitutes a powerful tool for forecasting the wave energy potential at a particular coastal site with great accuracy, and one that overcomes some of the disadvantages of the conventional tools for nearshore wave power prediction.
Performance of artificial neural networks in nearshore wave power prediction
S1568494614003068
In this study, an Adaptive Neuro-Fuzzy Inference System (ANFIS) is used for multi-criteria decision making in the supplier evaluation and selection problem. Contemporary supply-chain management looks for both quantitative and qualitative measures beyond simply obtaining the lowest price. After evaluating a number of distinct suppliers, determining the reliable ones with a better-approximating ANFIS model will support decision makers. To this end, ANFIS is evaluated on data sets with supplier attributes and scores gathered from a previous study conducted on the same problem, a neural network (NN) application for a fuzzy multi-criteria decision-making model. In the proposed ANFIS model built for determining supplier scores, the linear regression coefficient (R-value) and Mean Square Error (MSE) were 0.8467 and 0.0134, respectively, while they were 0.7733 and 0.0193 for the fuzzy NN model; thus, ANFIS gives better results in terms of MSE. Hence, it is concluded that the ANFIS algorithm can be used in multi-criteria decision making problems for supplier evaluation and selection, with more precise and reliable results.
Comparison of neural network application for fuzzy and ANFIS approaches for multi-criteria decision making problems
S156849461400307X
This study applies Multiple Criteria Decision Making (MCDM) to evaluate the service quality of some Turkish hospitals. In general, service quality has abstract properties, which means that previously known measurement approaches are insufficient; for this reason, fuzzy set theory is adopted as the research template. In Istanbul, Turkey, there are four B-class hospitals, classed as private hospitals, that are covered by the Social Security Institution (SSI), and for these we propose to represent the service performance measurements using triangular fuzzy numbers. In this study, the importance weights of the performance criteria are found with AHP. Then, the Multiple Criteria Decision Making methods TOPSIS and Yager's min-max approach are applied to find and rank the crisp performance values. In a second step, aggregation of the performance criteria with OWA and Compensatory AND operators is considered instead of the TOPSIS method and the min-max approach. Numerical applications of the four methods are thereby supplied and the obtained results are compared.
The evaluation of hospital service quality by fuzzy MCDM
S1568494614003081
Particle swarm optimization (PSO) is one of the well-known population-based techniques used in global optimization and many engineering problems. Despite its simplicity and efficiency, PSO suffers from being trapped in local minima due to premature convergence and from a weak global search capability. To overcome these disadvantages, PSO is combined with Levy flight in this study. Levy flight is a random walk whose step size is determined using the Levy distribution. With Levy flight, the particles make long jumps, allowing a more efficient search of the search space. In the proposed method, a limit value is defined for each particle, and if a particle cannot improve its own solution by the end of the current iteration, its failure counter is increased. If the limit value is exceeded by a particle, the particle is redistributed in the search space using the Levy flight method. This redistribution helps the basic PSO escape local minima and improves its global search capability. The performance and accuracy of the proposed method, called Levy flight particle swarm optimization (LFPSO), are examined on well-known unimodal and multimodal benchmark functions. Experimental results show that the LFPSO is clearly more successful than one of the state-of-the-art PSO variants (SPSO) and the other PSO variants in terms of solution quality and robustness. The results are also statistically compared, and a significant difference is observed between the SPSO and LFPSO methods. Furthermore, the results of the proposed method are compared with the results of well-known and recent population-based optimization methods.
A novel particle swarm optimization algorithm with Levy flight
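Levy flight steps are typically drawn with Mantegna's algorithm; below is a sketch of the step generator and of the limit-triggered redistribution described above (the 0.01 scale factor and other constants are assumptions, not the paper's settings):

    import numpy as np
    from math import gamma, pi, sin

    def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
        """Mantegna's algorithm: heavy-tailed steps s = u / |v|^(1/beta)."""
        sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
                   ) ** (1 / beta)
        u = rng.normal(0.0, sigma_u, dim)
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

    def maybe_redistribute(x, trial, limit, lower, upper):
        """Relocate a particle by a Levy jump once it has failed to improve
        its own solution for `limit` consecutive iterations."""
        if trial >= limit:
            x = x + 0.01 * (upper - lower) * levy_step(x.size)
            x = np.clip(x, lower, upper)
        return x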
S1568494614003093
The artificial bee colony (ABC) algorithm, inspired by the foraging behaviour of honey bees, is one of the most popular swarm intelligence based optimization techniques. Quick artificial bee colony (qABC) is a new version of the ABC algorithm which models the behaviour of onlooker bees more accurately and improves the performance of standard ABC in terms of local search ability. In this study, the qABC method is described and its performance is analysed as a function of the neighbourhood radius on a set of benchmark problems. Analyses of the effect of the parameter limit and of colony size on qABC optimization are also carried out. Moreover, the performance of qABC is compared with that of state-of-the-art algorithms.
A quick artificial bee colony (qABC) algorithm and its performance on optimization problems
S156849461400310X
This research addresses system reliability analysis using weakest t-norm based approximate intuitionistic fuzzy arithmetic operations, where the failure probabilities of all components are represented by different types of intuitionistic fuzzy numbers. Owing to incomplete, imprecise, vague and conflicting information about system components, the present study evaluates system reliability in terms of membership and non-membership functions by using weakest t-norm (Tw) based approximate intuitionistic fuzzy arithmetic operations on different types of intuitionistic fuzzy numbers. In general, interval arithmetic (α-cut arithmetic) operations have been used to analyze fuzzy system reliability; in complicated systems, however, interval arithmetic operations can cause fuzziness to accumulate. To overcome this accumulation, this research adopts approximate intuitionistic fuzzy arithmetic operations under the weakest t-norm (Tw) to analyze fuzzy system reliability. The approximate intuitionistic fuzzy arithmetic operations employ the principle of interval arithmetic under the weakest t-norm. The proposed fuzzy arithmetic operations yield fitter decision values, with less accumulated fuzziness, and successfully analyze system reliability; the weakest t-norm operations also provide more exact fuzzy results and effectively reduce fuzzy spreads (fuzzy intervals). Using the proposed approach, the fuzzy reliability of series and parallel systems is also constructed. For numerical verification of the proposed approach, a malfunction of a printed circuit board assembly (PCBA) is presented as a numerical example, and the result of the proposed method is compared with existing reliability analysis approaches.
Applying weakest t-norm based approximate intuitionistic fuzzy arithmetic operations on different types of intuitionistic fuzzy numbers to evaluate reliability of PCBA fault
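For reference, the weakest t-norm (drastic product) and the standard property the abstract exploits, namely that adding triangular fuzzy numbers under T_w takes the maximum of the spreads rather than their sum, so fuzziness does not pile up, can be written as follows (the paper's intuitionistic version extends this to non-membership functions):

    \[
    T_w(a,b) =
    \begin{cases}
      a, & \text{if } b = 1,\\
      b, & \text{if } a = 1,\\
      0, & \text{otherwise,}
    \end{cases}
    \qquad
    (m_1,\alpha_1,\beta_1) \oplus_{T_w} (m_2,\alpha_2,\beta_2)
      = \bigl(m_1 + m_2,\ \max(\alpha_1,\alpha_2),\ \max(\beta_1,\beta_2)\bigr).
    \]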
S1568494614003111
Most industrial processes exhibit inherent nonlinear characteristics. Hence, classical control strategies that use linearized models are not effective in achieving optimal control. In this paper an Artificial Neural Network (ANN) based reinforcement learning (RL) strategy is proposed for controlling a nonlinear interacting liquid level system. This ANN-RL control strategy takes advantage of the generalization, noise immunity and function approximation capabilities of the ANN and the optimal decision making capabilities of the RL approach. Two different ANN-RL approaches for solving a generic nonlinear control problem are proposed, and their performances are evaluated by applying them to two benchmark nonlinear liquid level control problems. The ANN-RL approach is also compared to a discretized state space based pure RL control strategy. Performance comparison on the benchmark problems indicates that the ANN-RL approach results in better control, as evidenced by fewer oscillations, better disturbance rejection and less overshoot. Nomenclature: s, system state vector; a, control action or input to the system; P, state transition probabilities; r(s, a), reward for taking action 'a' in state 's'; π(s), policy function, or action to be taken in state 's'; V^π(s), cumulative discounted reward for following policy π starting from state 's'; π*, optimal policy function; V*, optimal value function; [h1 h2 h3]T, state vector for the liquid level system; Q, inlet flow rate for the liquid level system; γ, discount factor to discount future rewards and favor immediate rewards; number of discretization levels used for level variable h_i; number of discretization levels used for inlet flow rate Q.
Control of a nonlinear liquid level system using a new artificial neural network based reinforcement learning approach
S1568494614003123
This paper presents an improved solution for optimal placement and sizing of active power conditioner (APC) to enhance power quality in distribution systems using the improved discrete firefly algorithm (IDFA). A multi-objective optimization problem is formulated to improve voltage profile, minimize voltage total harmonic distortion and minimize total investment cost. The performance of the proposed algorithm is validated on the IEEE 16- and 69-bus test systems using the Matlab software. The obtained results are compared with the conventional discrete firefly algorithm, genetic algorithm and discrete particle swarm optimization. The comparison of results showed that the proposed IDFA is the most effective method among others in determining optimum location and size of APC in distribution systems.
Optimum placement of active power conditioner in distribution systems using improved discrete firefly algorithm for power quality enhancement
S1568494614003135
This paper introduces a new non-parametric method for uncertainty quantification through construction of prediction intervals (PIs). The method takes the left and right end points of the type-reduced set of an interval type-2 fuzzy logic system (IT2FLS) model as the lower and upper bounds of a PI. No assumption is made in regard to the data distribution, behaviour, and patterns when developing intervals. A training method is proposed to link the confidence level (CL) concept of PIs to the intervals generated by IT2FLS models. The new PI-based training algorithm not only ensures that PIs constructed using IT2FLS models satisfy the CL requirements, but also reduces widths of PIs and generates practically informative PIs. Proper adjustment of parameters of IT2FLSs is performed through the minimization of a PI-based objective function. A metaheuristic method is applied for minimization of the non-linear non-differentiable cost function. Performance of the proposed method is examined for seven synthetic and real world benchmark case studies with homogenous and heterogeneous noise. The demonstrated results indicate that the proposed method is capable of generating high quality PIs. Comparative studies also show that the performance of the proposed method is equal to or better than traditional neural network-based methods for construction of PIs in more than 90% of cases. The superiority is more evident for the case of data with a heterogeneous noise.
An interval type-2 fuzzy logic system-based method for prediction interval construction
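Two standard quantities used when training models to emit prediction intervals are the coverage probability and the normalized width; the paper's exact PI-based cost function may differ, but it balances terms of this kind against the required confidence level:

    \[
    \mathrm{PICP} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{L_i \le y_i \le U_i\},
    \qquad
    \mathrm{PINAW} = \frac{1}{nR}\sum_{i=1}^{n} (U_i - L_i),
    \]

where [L_i, U_i] are the interval bounds taken from the type-reduced set, y_i are the targets, and R is the target range; training drives PICP to at least the prescribed CL while keeping PINAW small.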
S1568494614003147
The objective of a voice conversion system is to formulate a mapping function which can transform the source speaker's characteristics to those of the target speaker. In this paper, we propose a General Regression Neural Network (GRNN) based model for voice conversion. It is a single-pass learning network that makes the training procedure fast and comparatively less time consuming. The proposed system uses the shape of the vocal tract, the shape of the glottal pulse (excitation signal) and long term prosodic features to carry out the voice conversion task. In this paper, the shape of the vocal tract and the shape of the source excitation of a particular speaker are represented using Line Spectral Frequencies (LSFs) and the Linear Prediction (LP) residual, respectively. GRNN is used to obtain the mapping function between the source and target speakers. Direct transformation of the time domain residual using an Artificial Neural Network (ANN) causes phase changes and generates artifacts in consecutive frames. To alleviate this, wavelet packet decomposed coefficients are used to characterize the excitation of the speech signal. The long term prosodic parameters, namely the pitch contour (intonation) and the energy profile of the test signal, are also modified in relation to those of the target (desired) speaker using the baseline method. The relative performance of the proposed model is compared to voice conversion systems based on state-of-the-art RBF and GMM models using objective and subjective evaluation measures. The evaluation measures show that the proposed GRNN based voice conversion system performs slightly better than the state-of-the-art models.
Voice conversion using General Regression Neural Network
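The single-pass nature of the GRNN follows from its closed form: the prediction is a kernel-weighted average of stored training targets (Specht's formulation), so "training" is just memorizing samples. A minimal sketch, with the smoothing parameter sigma as the only free choice:

    import numpy as np

    def grnn_predict(X_train, Y_train, x, sigma=0.5):
        """GRNN output: Gaussian-kernel weighted average of training targets.
        For voice conversion, rows of X_train/Y_train could be source/target
        spectral feature frames (e.g., LSF vectors)."""
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to x
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # kernel weights
        return w @ Y_train / (w.sum() + 1e-12)    # works for vector targets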
S1568494614003159
Time series forecasting (TSF) is an important tool to support decision making (e.g., planning production resources). Artificial neural networks (ANNs) are innate candidates for TSF due to advantages such as nonlinear learning and noise tolerance. However, the search for the best model is a complex task that highly affects forecasting performance. In this work, we propose two novel evolutionary artificial neural network (EANN) approaches for TSF based on an estimation distribution algorithm (EDA) search engine. The first approach consists of a sparsely connected evolutionary ANN (SEANN), which evolves more flexible ANN structures to perform multi-step ahead forecasts. The second consists of an automatic time lag feature selection EANN (TEANN), which evolves not only ANN parameters (e.g., input and hidden nodes, training parameters) but also the set of time lags that are fed into the forecasting model. Several experiments were conducted using a set of six time series from different real-world domains, and two error metrics (mean squared error and symmetric mean absolute percentage error) were analyzed. The two EANN approaches were compared against a base EANN (with no ANN structure or time lag optimization) and four other methods (autoregressive integrated moving average, random forest, echo state network and support vector machine). Overall, the proposed SEANN and TEANN methods obtained the best forecasting results. Moreover, they favor simpler neural network models, thus requiring less computational effort than the base EANN.
Evolutionary optimization of sparsely connected and time-lagged neural networks for time series forecasting
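The time-lag selection in TEANN amounts to evolving a binary mask over candidate lags; decoding such a mask into a supervised data set might look like the following sketch (the EDA that evolves the mask and the ANN itself are omitted):

    import numpy as np

    def build_lagged_features(series, lag_mask):
        """Use only the lags switched on in lag_mask; e.g. mask (1, 0, 1)
        feeds lags t-1 and t-3 into the model that predicts the value at t."""
        lags = [i + 1 for i, on in enumerate(lag_mask) if on]
        p = max(lags)
        X = np.column_stack([series[p - l:len(series) - l] for l in lags])
        return X, series[p:]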
S1568494614003160
The number of online transactions is nowadays growing very large, and a big portion of these transactions involves credit cards. The growth of online fraud, on the other hand, is notable, generally a result of the ease of access to edge technology for everyone. Research has been done on many models and methods for credit card fraud prevention and detection, Artificial Immune Systems being one of them. However, organizations need accuracy along with speed in their fraud detection systems, which has not yet been completely achieved. In this paper we address credit card fraud detection using Artificial Immune Systems (AIS) and introduce a new model called the AIS-based Fraud Detection Model (AFDM). We use an immune-system-inspired algorithm (AIRS) and improve it for fraud detection. Compared to the base algorithm, we increase accuracy by up to 25%, reduce cost by up to 85%, and decrease system response time by up to 40%.
A novel model for credit card fraud detection using Artificial Immune Systems
S1568494614003172
This paper presents a solution to the multi-objective optimal power flow (MOOPF) problem using an adaptive clonal selection algorithm (ACSA) to minimise generation cost, transmission loss and the voltage stability index (L-index) in the presence of multi-type FACTS devices in power systems. The proposed approach utilizes the clonal selection principle and evolutionary concepts, performing cloning of antibodies followed by hyper-maturation. In this algorithm, non-dominated sorting and crowding distance are used to find and manage the Pareto optimal front. Various voltage source converter (VSC) based multi-type FACTS devices, such as the UPFC, IPFC and GUPFC, are considered and incorporated as power injection models in the multi-objective optimisation problem formulation. The proposed multi-objective adaptive clonal selection algorithm (MOACSA) has been tested on the standard IEEE 30-bus test system with FACTS devices. The results obtained with the proposed MOACSA approach are compared with implementations of the standard NSGA-II, MOPSO and MODE algorithms.
Multi-objective adaptive clonal selection algorithm for solving optimal power flow considering multi-type FACTS devices and load uncertainty
S1568494614003184
Given a linear program, a desired optimal objective value, and a set of feasible cost vectors, one needs to determine a cost vector of the linear program such that the corresponding optimal objective value is closest to the desired value. This problem is known as the standard inverse optimal value problem. When multiple criteria are adopted to determine cost vectors, a multi-criteria inverse optimal value problem arises, which is more general than the standard case. This paper focuses on algorithmic approaches to this class of problems and develops an evolutionary algorithm based on a dynamic weighted aggregation method. First, the original problem is converted into a bilevel program with multiple upper level objectives, in which the lower level problem is a linear program for each fixed cost vector. In addition, the potential bases of the lower level program are encoded as chromosomes, and the weighted sum of the upper level objectives is taken as a new optimization function, by which some potential nondominated solutions can be generated. The design of the evolutionary algorithm exploits specific characteristics of the problem, such as the optimality conditions. Some preliminary computational experiments are reported, which demonstrate that the proposed algorithm is efficient and robust.
An evolutionary algorithm for multi-criteria inverse optimal value problems using a bilevel optimization model
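With assumed notation (an LP with cost vector c ranging over a feasible set C, desired optimal value z*, and upper-level criteria f_k), the bilevel model sketched above can be written as:

    \[
    \min_{c \in C} \ \bigl(f_1(c), \dots, f_m(c)\bigr)
    \quad \text{s.t.} \quad
    z(c) = \min_{x \ge 0} \{\, c^{\top} x : A x = b \,\},
    \]

where each f_k penalizes the deviation of z(c) from the desired value z* under criterion k, and the dynamic weighted aggregation replaces the objective vector by \sum_k w_k(t) f_k(c) with weights that change during the run.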
S1568494614003196
This paper presents and investigates the application of a Zhang neural network (ZNN) activated by the Li function to the kinematic control of redundant robot manipulators via time-varying Jacobian matrix pseudoinversion. That is, by using the Li activation function and computing the time-varying pseudoinverse of the Jacobian matrix of the robot manipulator, the resultant ZNN model is applied to redundant-manipulator kinematic control. Note that the research methodology involves nine novelties and differences of ZNN relative to the conventional gradient neural network. More importantly, such a Li-function activated ZNN (LFAZNN) model has the property of finite-time convergence, showing its feasibility for redundant-manipulator kinematic control. Simulation results based on a four-link planar robot manipulator and a PA10 robot manipulator further demonstrate the effectiveness of the presented LFAZNN model, as well as its application prospects.
Li-function activated ZNN with finite-time convergence applied to redundant-manipulator kinematic control via time-varying Jacobian matrix pseudoinversion
S1568494614003202
This paper addresses the question of time-domain-constrained data clustering, a problem which deals with data labelled with the time at which they are obtained and imposes the condition that clusters must be contiguous in time (the time-domain constraint). The objective is to obtain a partitioning of a multivariate time series into internally homogeneous segments with respect to a statistical model given in each cluster. In this paper, time-domain-constrained data clustering is formulated as an unrestricted bi-level optimization problem: the clustering problem is stated at the upper level, and at the lower level the statistical models are adjusted to the set of clusters determined at the upper level. This formulation is sufficiently general to allow these statistical models to be used as black boxes. A hybrid technique based on combining a generic population-based optimization algorithm with Nelder–Mead simplex search is used to solve the bi-level model. The capability of the proposed approach is illustrated using simulations of synthetic signals and a novel application in survival analysis. This application shows that the proposed methodology is a useful tool to detect changes in the hidden structure of historical data. Finally, the performance of hybridizations of particle swarm optimization, genetic algorithms and simulated annealing with Nelder–Mead simplex search is tested on a pattern recognition problem of text identification.
Hybrid meta-heuristic optimization algorithms for time-domain-constrained data clustering
S1568494614003214
Modeling and forecasting seasonal and trend time series is an important research topic in many areas of industrial and economic activity. In this study, we forecast seasonal and trend time series using a quasi-linear autoregressive model. This model belongs to a class of varying coefficient models in which the autoregressive coefficients are constructed by radial basis function networks. A combined genetic and gradient-based optimization algorithm is applied for automatic selection of proper input variables and model-dependent variables, while optimizing the model parameters simultaneously. The model is tested on five monthly time series. We compare the results with those of various other methods, which shows the effectiveness of the proposed approach for seasonal time series.
Seasonal and trend time series forecasting based on a quasi-linear autoregressive model
S1568494614003226
This paper presents an approach in which differential evolution is applied to underwater glider path planning. The objective of a glider is to reach a target location and gather research data along its path by propelling itself underwater and returning periodically to the surface. The main hypothesis of this work is that gliders' operational capabilities will benefit from improved path planning, especially when dealing with opportunistic short-term missions focused on the sampling of dynamic structures. To model a glider trajectory, we evolve a global underwater glider path based on a local kinematic simulation of the glider, considering the daily and hourly sea current predictions. The global path is represented by control points where the glider is expected to resurface to communicate with a satellite and receive further navigation instructions. Several well-known differential evolution algorithm instances are then assessed and compared on 12 test scenarios using the proposed approach. Finally, a real glider mission was commanded using this approach.
Differential evolution and underwater glider path planning applied to the short-term opportunistic sampling of dynamic mesoscale ocean structures
S1568494614003238
A new glowworm swarm optimization (GSO) algorithm is proposed to find the optimal solution of the multiple objective environmental economic dispatch (MOEED) problem. In the proposed approach, the technique for order preference by similarity to an ideal solution (TOPSIS) is employed as an overall fitness ranking tool to evaluate the multiple objectives simultaneously. In addition, a time-varying step size is incorporated in the GSO algorithm for better performance. Finally, the feasibility and effectiveness of the proposed combination of the GSO algorithm with TOPSIS (GSO–T) is examined on four different test cases. Simulation results reveal the capability of the proposed GSO–T approach to find the optimal solution of the MOEED problem. Comparisons with an own-coded weighted-sum GSO (WGSO) and with other methods reported in the literature exhibit the superiority of the proposed GSO–T approach and confirm its potential to solve the MOEED problem.
Glowworm swarm optimization algorithm with TOPSIS for solving multiple objective environmental economic dispatch problem
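A generic TOPSIS ranking of the kind used here as the overall fitness measure can be sketched as follows (the GSO loop, the time-varying step size, and the MOEED cost/emission objectives are not shown; both MOEED objectives would be treated as non-benefit, i.e., minimized):

    import numpy as np

    def topsis_rank(F, weights, benefit):
        """Closeness of each candidate to the ideal solution. F is an
        (n_solutions, n_objectives) matrix; benefit[j] is True when
        objective j is to be maximized."""
        V = (F / np.linalg.norm(F, axis=0)) * weights   # normalize and weight
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        return d_neg / (d_pos + d_neg + 1e-12)          # higher is better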
S156849461400324X
In a medical system, there may be many critical diseases for which experts do not have sufficient knowledge. In such cases, experts may provide their opinion only about certain aspects of the disease and remain silent on unknown features. Given the need to prioritize different experts based on the information they provide, this article uses a novel concept for assigning confidence weights to experts based mainly on the information they supply. Experts provide their opinions about various symptoms using an intuitionistic fuzzy soft matrix (IFSM). In this article, we propose an algorithmic approach based on intuitionistic fuzzy soft sets (IFSS) which identifies a particular disease reflecting the agreement of all experts. This approach is guided by the group decision making (GDM) model and uses cardinals of the IFSS as a novel concept. We use the choice matrix (CM) as an important parameter, which is based on the choice parameters of the individual experts. The article also validates the proposed approach using distance measurements and the consensus of the majority of experts. The effectiveness of the proposed approach is demonstrated in a suitable case study.
Group decision making in medical system: An intuitionistic fuzzy soft set approach
S1568494614003251
Due to the inherent complexity of the dynamic facility layout problem, developing a solution algorithm for it has always been challenging. For more than a decade, researchers have proposed different algorithms for this problem. After reviewing the shortcomings of these algorithms, we realized that performance could be further improved by a more intelligent search. This paper develops an effective novel hybrid multi-population genetic algorithm. Using a proposed heuristic procedure, we separate the solution space into different parts, with each subpopulation representing a separate part; this assures the diversity of the algorithm. Moreover, to further intensify the search, a powerful local search mechanism based on simulated annealing is developed. Unlike the genetic operators previously proposed for this problem, we design the operators so as to search only the feasible space, saving computational time by avoiding the infeasible space. To evaluate the algorithm, we comprehensively discuss its parameter tuning using the Taguchi method. The tuned algorithm is then compared with 11 algorithms available in the literature on a well-known set of benchmark instances. Different analyses conducted on the results show that the proposed algorithm outperforms the other algorithms.
A hybrid multi-population genetic algorithm for the dynamic facility layout problem
S1568494614003263
The present paper proposes a new technique based on the double parametric form of fuzzy numbers to solve an uncertain beam equation, using the Adomian decomposition method, subject to unit step and impulse loads. Uncertainties appearing in the initial conditions are considered in terms of triangular convex normalized fuzzy sets. Using the single parametric form, viz. the α-cut form of fuzzy numbers, the fuzzy beam equation is first converted to an interval-based fuzzy differential equation. Next, this differential equation is transformed to crisp form by applying the double parametric form of fuzzy numbers. Finally, the resulting equation is solved symbolically by the Adomian decomposition method to obtain the uncertain bounds of the dynamic response. The obtained results are depicted in plots to show the efficiency and power of the present analysis.
Dynamic response of imprecisely defined beam subject to various loads using Adomian decomposition method
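The double parametric form that drives the conversion to a crisp problem is a standard construction: the α-cut interval is re-parameterized by a second parameter β, so the interval-valued fuzzy differential equation becomes a crisp one in (α, β):

    \[
    \tilde{x}(\alpha) = [\underline{x}(\alpha),\ \overline{x}(\alpha)]
    \;\;\longrightarrow\;\;
    x(\alpha, \beta) = \beta\,\bigl(\overline{x}(\alpha) - \underline{x}(\alpha)\bigr)
      + \underline{x}(\alpha),
    \qquad \alpha, \beta \in [0, 1],
    \]

with β = 0 and β = 1 recovering the lower and upper bounds of the uncertain response once the Adomian solution is obtained.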
S1568494614003275
Security administrators need to prioritize which features to focus on amidst the various possibilities and avenues of attack, especially via Web Services in e-commerce applications. This study addresses the feature selection problem by proposing a predictive fuzzy associative rule model (FARM). FARM validates inputs by segregating anomalies based on fuzzy associative patterns discovered from five attributes in the intrusion datasets. These associative patterns lead to the discovery of a set of 18 interesting rules at 99% confidence and, subsequently, categorisation into not only certainly-allow/deny but also probably-deny access decision classes. FARM's classification provides 99% classification accuracy and less than a 1% false alarm rate. Our findings indicate two benefits of using fuzzy datasets. First, fuzziness enables the discovery of fuzzy association patterns, fuzzy association rules and more sensitive classification. In addition, the root mean squared error (RMSE) and classification accuracy for fuzzy and crisp datasets do not differ much when using the Random Forest classifier; however, when other classifiers are used with an increasing number of instances, the fuzzy datasets perform much better. Future research will involve experimentation on bigger data sets of different data types.
Defending against XML-related attacks in e-commerce applications with predictive fuzzy associative rules
S1568494614003287
The purpose of the present study is to analyze the fuzzy reliability of a repairable industrial system utilizing historical vague, imprecise and uncertain data which reflect its components' failure and repair patterns. Two different soft-computing based hybridized techniques, named Genetic Algorithms Based Lambda–Tau (GABLT) and Neural Network and Genetic Algorithms Based Lambda–Tau (NGABLT), along with the traditional Fuzzy Lambda–Tau (FLT) technique, are used to evaluate some important reliability indices of the system in the form of fuzzy membership functions. As a case study, all three techniques are applied to analyse the fuzzy reliability of the washing system in a paper mill and the results are compared. Sensitivity analysis has also been performed to analyze the effect of variations in different reliability parameters on system performance. The analysis can help maintenance personnel understand and plan suitable maintenance strategies to improve the overall performance of the system. Based on the results, some important suggestions are given for future courses of action in maintenance planning.
Fuzzy reliability analysis of repairable industrial systems using soft-computing based hybridized techniques
S1568494614003299
Numerous manufacturing companies are taking advantage of material handling systems due to their flexibility, reliability, safety and contribution to increased productivity. However, several uncertain parameters, such as cost components and vehicle availability, greatly influence the performance of the material handling system. In recent years, robust optimization has proven to be an effective methodology for overcoming uncertainty in optimization models. Robust optimization models work well even when probabilistic knowledge of the phenomenon is incomplete. This paper thus proposes two new zero-one programming (ZOP) models for vehicle positioning in multi-cell automated manufacturing systems. Uncertain parameters in these models include cost parameters, travel times between each pair of cell centers and machine locations, the average time required for performing all transports from the machine locations, and the availability of the vehicle. Then, the robust counterpart of the proposed ZOP models is presented by using recent extensions in robust optimization theory. Eventually, to verify the robustness of the solutions obtained by the novel robust optimization model, they are compared to those generated by the deterministic ZOP model on different test problems.
Vehicle positioning in cell manufacturing systems via robust optimization
S1568494614003305
One of the techniques commonly used to verify software and hardware systems specified through graph transformation systems (GTS), especially safety-critical ones, is model checking. However, the model checking of large and complex systems suffers from the state space explosion problem. Since the genetic algorithm (GA) is a heuristic technique that can search the state space intelligently instead of using exhaustive methods, in this paper we propose a GA-based heuristic approach to find error states, such as deadlocks, in systems specified through GTS with extra-large state spaces. To do so, in each step of the space exploration our algorithm determines which state and path should be explored. The proposed approach is implemented in GROOVE, a tool for model checking graph transformation systems. The experimental results show that our approach significantly outperforms existing techniques in discovering error states of models with large state spaces.
A heuristic solution for model checking graph transformation systems
S1568494614003317
A common assumption in the classical permutation flowshop scheduling model is that each job is processed on each machine at most once. However, this assumption does not hold for a re-entrant flowshop, in which a job may be processed on one or more machines several times. Given that the re-entrant permutation flowshop scheduling problem to minimize the makespan is very complex, we adopt the CPLEX solver and develop a memetic algorithm (MA) to tackle the problem. We conduct computational experiments to test the effectiveness of the proposed algorithm and compare it with two existing heuristics. The results show that CPLEX can solve mid-size problem instances in a reasonable computing time, and the proposed MA is effective in treating the problem and outperforms the two existing heuristics.
A memetic algorithm for the re-entrant permutation flowshop scheduling problem to minimize the makespan
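As a concrete illustration of the decoding behind such schedules, the sketch below computes the makespan of a re-entrant permutation flowshop in which every job passes through all machines for a fixed number of re-entrant levels, in the same job order at every level. The data layout and the level-wise decoding rule are assumptions made for the example, not the paper's exact model.

```python
def makespan(perm, p, levels):
    """p[j][l][i]: processing time of job j at re-entrant level l on machine i."""
    m = len(p[perm[0]][0])
    machine_free = [0.0] * m                 # when each machine next becomes idle
    job_ready = {j: 0.0 for j in perm}       # when each job finished its previous pass
    for l in range(levels):
        for j in perm:                       # permutation order repeated per level
            t = job_ready[j]
            for i in range(m):
                t = max(machine_free[i], t) + p[j][l][i]
                machine_free[i] = t
            job_ready[j] = t
    return max(machine_free)

# two jobs, two machines, two re-entrant levels (toy data)
p = {0: [[3, 2], [1, 4]], 1: [[2, 2], [3, 1]]}
print(makespan([0, 1], p, levels=2))
```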
S1568494614003329
For most rock engineering and engineering geology projects, the strength and deformability parameters of intact rocks are of crucial importance. However, it is highly challenging to obtain these parameters from weak and very weak rocks due to their nature and testing requirements. For this reason, prediction models are commonly used to obtain the desired parameters indirectly. When developing a prediction model, data sets of sufficient size are required. If sufficient data are not available for a prediction model, the insufficient data problem arises. The main purpose of this study was to investigate the use of synthetic data in the indirect determination of rock strength by employing fuzzy C-means (FCM) and an adaptive neuro-fuzzy inference system (ANFIS). For this purpose, the experiments were carried out in two stages: (i) uniaxial compressive strength (UCS) prediction by using real data with ANFIS, and (ii) production of synthetic data sets of different sizes and evaluation of the synthetic data sets in modeling. According to the results obtained, FCM is a practical and suitable method for synthetic data production. The development of prediction models for rock strength by using synthetic data is found to be successful based on statistical performance indices. Additionally, the use of the proposed size for synthetic data reduces the modeling effort significantly because it eliminates the iterative approach in modeling; hence, the development of models for a limited number of data becomes more practical.
An assessment on producing synthetic samples by fuzzy C-means for limited number of data in prediction models
S1568494614003342
This paper suggests a synergy of fuzzy logic and nature-inspired optimization in terms of the nature-inspired optimal tuning of the input membership functions of a class of Takagi-Sugeno-Kang (TSK) fuzzy models dedicated to Anti-lock Braking Systems (ABSs). A set of TSK fuzzy models is proposed by a novel fuzzy modeling approach for ABSs. The fuzzy modeling approach starts with the derivation of a set of local state-space models of the nonlinear ABS process by linearization of the first-principle process model at ten operating points. The TSK fuzzy model structure and the initial TSK fuzzy models are obtained by the modal equivalence principle, placing the local state-space models in the rule consequents of the TSK fuzzy models. An operating point selection algorithm to guide modeling is proposed, formulated on the basis of ranking the operating points according to their importance factors, and inserted in the third step of the fuzzy modeling approach. The optimization problems are defined so as to minimize objective functions expressed as the average of squared modeling errors over the time horizon, and the variables of these functions are a part of the parameters of the input membership functions. Two representative nature-inspired algorithms, namely a Simulated Annealing (SA) algorithm and a Particle Swarm Optimization (PSO) algorithm, are implemented to solve the optimization problems and to obtain optimal TSK fuzzy models. The validation and the comparison of SA and PSO and of the new TSK fuzzy models are carried out on ABS laboratory equipment. The real-time experimental results highlight that the optimized TSK fuzzy models are simple and consistent with both training data and validation data, and that these models outperform the initial TSK fuzzy models.
Nature-inspired optimal tuning of input membership functions of Takagi-Sugeno-Kang fuzzy models for Anti-lock Braking Systems
S1568494614003354
This paper proposes a hybrid variable neighborhood search (HVNS) algorithm that combines chemical-reaction optimization (CRO) and the estimation of distribution algorithm (EDA) for solving hybrid flow shop (HFS) scheduling problems. The objective is to minimize the maximum completion time. In the proposed algorithm, a well-designed decoding mechanism is presented to schedule jobs with more flexibility. Meanwhile, considering the problem structure, eight neighborhood structures are developed. A kinetic-energy-sensitive neighborhood change approach is proposed to extract global information and avoid becoming stuck in local optima. In addition, contrary to the fixed neighborhood set in traditional VNS, a dynamic neighborhood set update mechanism is utilized to exploit the potential search space. Finally, for the population of locally optimal solutions, an effective EDA-based global search approach is investigated to direct the search process toward promising regions. The proposed algorithm is tested on sets of well-known benchmark instances. The analysis of the experimental results shows the high performance of the proposed HVNS algorithm in comparison with four efficient algorithms from the literature.
A hybrid variable neighborhood search for solving the hybrid flow shop scheduling problem
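For readers unfamiliar with the VNS template that HVNS builds on, here is a minimal generic skeleton: the search cycles through a list of neighborhood-move functions and restarts from the first neighborhood whenever an improvement is found. The paper's eight problem-specific neighborhoods, kinetic-energy criterion and EDA step are not reproduced; this is only the basic control loop under those simplifying assumptions.

```python
def vns(initial, neighborhoods, cost, max_iters=1000):
    """Basic variable neighborhood search skeleton.
    neighborhoods: list of functions, each mapping a solution to a random neighbor."""
    best, best_cost = initial, cost(initial)
    it = 0
    while it < max_iters:
        k = 0
        while k < len(neighborhoods) and it < max_iters:
            candidate = neighborhoods[k](best)      # shake in neighborhood k
            candidate_cost = cost(candidate)
            it += 1
            if candidate_cost < best_cost:          # improvement: restart at the first
                best, best_cost = candidate, candidate_cost
                k = 0
            else:                                   # otherwise move to a wider one
                k += 1
    return best, best_cost
```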
S1568494614003366
Given the fact that totally unknown information cannot be expressed by Zadeh's fuzzy sets but can well be represented by Atanassov's intuitionistic fuzzy sets (A-IFSs), we provide in this paper a new idea, from the perspective of the amount of knowledge: the entropy for such information should be unique, so that it may differ significantly from the entropy of a particular fuzzy element whose membership and non-membership values are both equal to 0.5. Motivated by this idea, we present a stricter axiomatic definition of entropy on A-IFSs and then develop a new entropy measure of A-IFSs that satisfies the refined axioms. The aim is to overcome some drawbacks and ambiguities of the existing entropies and to build a new entropy measurement model in the A-IFS context that shows some unique features of A-IFSs from the viewpoint of a specific purpose, notably decision making. This allows us to capture the intrinsic amount of knowledge and the additional features that may be relevant when making decisions, and ultimately may help us distinguish the particular cases in which the membership values are equal to the corresponding non-membership values in the A-IFS context. Finally, a real-life example is provided for an in-depth discussion on the application of the developed models in decision making under uncertainty.
On the entropy for Atanassov's intuitionistic fuzzy sets: An interpretation from the perspective of amount of knowledge
S1568494614003378
In this paper we apply a novel meta-heuristic approach, the Coral Reefs Optimization (CRO) algorithm, to solve a Mobile Network Deployment Problem (MNDP) in which the control of electromagnetic pollution plays an important role. CRO is a new bio-inspired meta-heuristic algorithm based on the growth and evolution of coral reefs. The aim of this paper is therefore twofold: first, to study the performance of the CRO approach on a hard real-world optimization problem, and second, to solve an important problem in the field of telecommunications that includes the minimization of electromagnetic pollution as a key concept. We show that CRO is able to obtain excellent solutions to the MNDP in a real instance in Alcalá de Henares (Madrid, Spain), improving on the results obtained by alternative algorithms such as Evolutionary, Particle Swarm Optimization and Harmony Search algorithms.
A Coral Reefs Optimization algorithm for optimal mobile network deployment with electromagnetic pollution control criterion
S156849461400338X
Multilayer perceptron (MLP) and support vector machine (SVM), two popular learning machines, are increasingly being used as alternatives to classical statistical models for ground-level ozone prediction. However, employing learning machines without sufficient awareness of their limitations can lead to unsatisfactory results in modeling the ozone-evolving mechanism, especially during ozone formation episodes. In the spirit of literature review and justification, this paper discusses, with respect to ozone prediction, recently developed algorithms/technologies for treating the most prominent performance-degrading limitations: MLP has the "black-box" property (it hardly provides a physical explanation for the trained model) together with overfitting and local minima problems, while SVM has parameter identification and class imbalance problems. This commentary article aims to stress that the underlying philosophy of using learning machines is by no means as trivial as simply fitting models to the data, because doing so causes difficulties, controversies or unresolved problems. The article also aims to serve as a reference point for further technical reading for experts in relevant fields.
Learning machines: Rationale and application in ground-level ozone prediction
S1568494614003391
Preference articulation in multi-objective optimization can be used to improve the pertinency of solutions in an approximated Pareto front, that is, to compute the most interesting solutions from the designer's point of view in order to facilitate Pareto front analysis and the selection of a design alternative. This articulation can be achieved in an a priori, progressive, or a posteriori manner. Used within an a priori frame, it can focus the optimization process toward the most promising areas of the Pareto front, saving computational resources and assuring a useful Pareto front approximation for the designer. In this work, a physical programming approach embedded in evolutionary multi-objective optimization is presented as a tool for preference inclusion. The results presented and the algorithm developed validate the proposal as a potential tool for engineering design by means of evolutionary multi-objective optimization.
Physical programming for preference driven evolutionary multi-objective optimization
S1568494614003408
The ongoing increase of energy consumption by IT infrastructures forces data center managers to find innovative ways to improve energy efficiency. The latter is also a focal point for different branches of computer science due to its financial, ecological, political, and technical consequences. One of the answers is given by scheduling combined with the dynamic voltage scaling technique to optimize energy consumption. The reasoning is based on the link between current semiconductor technologies and the energy state management of processors, where sacrificing performance can save energy. This paper investigates and solves the multi-objective precedence-constrained application scheduling problem on a distributed computing system, with two main aims: the creation of general algorithms to solve the problem and the examination of the problem by means of a thorough analysis of the results returned by the algorithms. The first aim was achieved in two steps: adaptation of state-of-the-art multi-objective evolutionary algorithms by designing new operators, and their validation in terms of performance and energy. The second aim was accomplished by performing an extensive number of algorithm executions on a large and diverse benchmark and a further analysis of performance among the proposed algorithms. Finally, the study proves the validity of the proposed method and points out the best-performing multi-objective algorithm schema as well as the most important factors for the algorithms' performance.
Multi-objective evolutionary algorithms for energy-aware scheduling on distributed computing systems
S156849461400341X
Application of machine learning techniques to functional Magnetic Resonance Imaging (fMRI) data is currently an active field of research. There is, however, one area which does not receive due attention in the literature: preparation of the fMRI data for subsequent modelling. In this study we focus on the issue of synchronization of the stream of fMRI snapshots with the mental states of the subject, which is a form of smart filtering of the input data performed prior to building a predictive model. We demonstrate, investigate and thoroughly discuss the negative effects of a lack of alignment between the two streams and propose an original data-driven approach to efficiently address this problem. Our solution involves casting the issue as a constrained optimization problem in combination with an alternative classification accuracy assessment scheme, applicable to both batch and on-line scenarios and able to capture information distributed across a number of input samples, lifting the common simplifying i.i.d. assumption. The proposed method is tested using real fMRI data and experimentally compared to the state-of-the-art ensemble models reported in the literature, outperforming them by a wide margin.
Data stream synchronization for defining meaningful fMRI classification problems
S1568494614003421
This study proposes the multi-objective programming (MOP) method for solving network DEA (NDEA) models. In the proposed method, the divisional efficiencies (within an organization) and the overall efficiency of the organization are formulated as separate objective functions in the multi-objective programming model. In contrast to conventional DEA, where the intermediate processes and products are ignored, this work measures the organization's overall efficiency without neglecting the efficiencies of its subunits. Two case studies demonstrate the proposed NDEA–MOP's utility in measuring the efficiencies of an organization while accounting for interactive internal processes.
A multi-objective programming method for solving network DEA
S1568494614003445
In this paper, a new and efficient optimization technique based on the hybridization of chemical reaction optimization (CRO) with differential evolution (DE) is developed and demonstrated to solve the ELD problem with a thermal cost function having the valve-point loading effect, with and without multiple fuel options, and with and without considering prohibited operating zones and ramp rate constraints. When valve-point effects, multi-fuel operation and the constraints of prohibited operating zones and ramp rates are taken into account, the ELD problem becomes more complex than the conventional ELD problem. To show the superiority of the proposed algorithm, it is implemented on six different test systems for solving ELD problems. Comparative studies are carried out to examine the effectiveness of the proposed HCRO-DE approach against conventional DE, CRO and the other algorithms reported in the literature. The simulation results show that the proposed HCRO-DE method is capable of obtaining better-quality solutions than DE, CRO and other popular optimization techniques.
Solution of economic load dispatch using hybrid chemical reaction optimization approach
S1568494614003457
This article presents an innovative approach to solving one of the most relevant problems related to smart mobility: the reduction of vehicles' travel times. Our original approach, called Red Swarm, suggests a potentially customized route to each vehicle through several spots located at traffic lights, using V2I communications in order to avoid traffic jams. This is quite different from other existing proposals, as it deals with real maps and actual streets, as well as several road traffic distributions. We propose an evolutionary algorithm (later efficiently parallelized) to optimize our case studies, which have been imported from OpenStreetMap into SUMO as they belong to a real city. We have also developed a Rerouting Algorithm which accesses the configuration of the Red Swarm and communicates the chosen route to vehicles using the spots (via WiFi link). Moreover, we have developed three competing algorithms in order to compare their results with those of Red Swarm, and have observed that Red Swarm not only achieved the best results but also outperformed the experts' solutions in a total of 60 scenarios tested, with up to 19% shorter travel times.
Red Swarm: Reducing travel times in smart cities by using bio-inspired algorithms
S1568494614003469
Loadability limits are critical points of particular interest in voltage stability assessment, indicating how much a system can be stressed from a given state before reaching instability. Estimating the loadability margin of a power system is therefore essential in real-time voltage stability assessment. A new methodology is developed based on Support Vector Regression (SVR), the most common application form of Support Vector Machines (SVMs). The proposed SVR methodology can successfully estimate the loadability margin under normal operating conditions and different loading directions. SVR has the feature of minimizing the generalization error in achieving the generalized network, unlike other mapping methods. In this paper, the SVR input vector is in the form of real and reactive power loads, while the target is lambda (the loading margin). To reduce both the mean square error and the prediction time of SVR, the kernel type and the SVR parameters are determined by grid search based on 10-fold cross-validation for the best SVR network. The results of the SVRs (nu-SVR and epsilon-SVR) are compared with RBF neural networks and validated on the IEEE 30-bus system and the IEEE 118-bus system under different operating scenarios. The results demonstrate the effectiveness of the proposed method for on-line prediction of the loadability margins of a power system.
Support Vector Regression Model for the prediction of Loadability Margin of a Power System
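The model-selection step described above (kernel and parameters chosen by grid search with 10-fold cross-validation) can be sketched with scikit-learn as follows; the synthetic load/margin data and the parameter grid are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))       # stand-in for (P_load, Q_load) patterns
y = 1.5 - X[:, 0] - 0.5 * X[:, 1]    # stand-in for the loading margin lambda

param_grid = {
    "kernel": ["rbf", "poly"],
    "C": [1, 10, 100],
    "nu": [0.25, 0.5, 0.75],
    "gamma": ["scale", 0.1, 1.0],
}
# 10-fold cross-validated grid search over the nu-SVR hyperparameters
search = GridSearchCV(NuSVR(), param_grid, cv=10,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```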
S1568494614003470
Gravitational search algorithm (GSA) has been shown to yield good performance for solving various optimization problems. However, it tends to suffer from premature convergence and loses its exploration and exploitation abilities when solving complex problems. This paper presents an improved gravitational search algorithm (IGSA) that first employs a chaotic perturbation operator and then adopts a memory strategy to overcome the aforementioned problems. The chaotic operator enhances global convergence to escape from local optima, and the memory strategy provides faster convergence and shares each individual's best fitness history to improve the exploitation ability. After that, a convergence analysis of the proposed IGSA is presented based on discrete-time linear system theory; the results show that IGSA is not only guaranteed to converge under the stated conditions, but also converges to the global optimum with probability 1. Finally, the choice of reasonable parameters for IGSA is discussed on four typical benchmark test functions based on sensitivity analysis. Moreover, IGSA is tested against a suite of benchmark functions with excellent results and is compared to GA, PSO, HS, WDO, CFO, APO and other well-known GSA variants presented in the literature. The results obtained show that IGSA converges faster than GSA and the other heuristic algorithms investigated in this paper, with higher global optimization performance.
Convergence analysis and performance of an improved gravitational search algorithm
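A chaotic perturbation operator of the kind the abstract mentions is often realized with a logistic map; the sketch below is one such hedged realization under that assumption (the paper's exact operator may differ).

```python
import numpy as np

def chaotic_perturb(position, lower, upper, steps=5, mu=4.0):
    """Perturb a candidate solution with a logistic-map chaotic sequence."""
    x = np.asarray(position, dtype=float)
    z = (x - lower) / (upper - lower)        # map into (0, 1) to seed the map
    z = np.clip(z, 1e-6, 1 - 1e-6)
    for _ in range(steps):
        z = mu * z * (1.0 - z)               # logistic map; chaotic for mu = 4
    return lower + z * (upper - lower)

print(chaotic_perturb([2.0, -1.5, 0.3], lower=-5.0, upper=5.0))
```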
S1568494614003482
In most practical environments, scheduling is an ongoing reactive process where the presence of real-time information continually forces reconsideration and revision of pre-established schedules. The objectives of the research reported in this paper are to respond to changes in the problem, to solve the new problem faster, and to reuse parts of the previous solution in the next problem. In this paper, a dynamic algorithm based on the Network Simplex Algorithm, called the Dynamic Network Simplex Algorithm (DNSA), is presented. Although the traditional network simplex algorithm is, through specialization, at least one hundred times faster than the traditional simplex algorithm for linear programs, for dynamic scheduling of large-scale problems it still takes time to build a new graph model and solve it. The overall approach of DNSA is to update the graph model dynamically and repair its spanning tree with appropriate strategies when changes happen. To test the algorithm and its performance, it is applied to the dynamic scheduling of Automated Guided Vehicles in a container terminal. The dynamic problem arises when new jobs arrive, fulfilled jobs are removed, and links or junctions are blocked (which changes the distances between points). The results show considerable improvements, in terms of reducing the number of iterations and the CPU time, in solving randomly generated problems.
A dynamic version for the Network Simplex Algorithm
S1568494614003494
Difficult non-linear, mixed integer non-linear and general disjunctive programming (NLP/MINLP/GDP) problems are considered in the present study. We show, using random relaxation procedures, that sampling over a subset of the integer variables and parallel processing have the potential to simplify large-scale MINLP/GDP problems and hence to solve them faster and more reliably than conventional approaches on single processors. Gray coding can be utilized to assign problems to the local processors. Some efficient non-linear solvers have been connected to the Genetic Hybrid Algorithm (GHA) through a switchboard minlp_machine(), monitoring a vector of caller functions. The switchboard is tested on single-processor computers and massively parallel supercomputers on a sample of optimization problems. A new line search algorithm based on multi-sectioning, i.e. repeated bi-sectioning of parallel threads, is applied to a set of irregular NLP problems. Simulations indicate that step search based on multi-sectioning and re-centering is robust. Time recording indicates that the step search procedure is fast also in large optimization problems. Parallel processing with Gray coding and random sampling over a subset of the integer variables improves the performance of the sequential quadratic programming algorithm with multi-sectioning in large-scale problems. An early mesh interrupt guarantees load balancing in regular problems. General disjunctive programming (GDP) problems can be simplified through parallel processing. Certain problems are solved at a larger scale than reported previously. Function pointer logic and intelligent switching procedures provide a fruitful basis for solving high-order GDP problems, where the computational challenge of direct MINLP formulations would be insurmountable.
Solving difficult mixed integer and disjunctive non-linear problems on single and parallel processors
S1568494614003603
Multi-objective evolutionary algorithms represent an effective tool to improve the accuracy-interpretability trade-off of fuzzy rule-based classification systems. To this aim, a tuning process and a rule selection process can be combined to obtain a set of solutions with different trade-offs between the accuracy and the compactness of models. Nevertheless, an initial model needs to be defined; in particular, the parameters that describe the partitions and the number of fuzzy sets of each variable (i.e. the granularities) must be determined. The simplest approach is to use a previously established single granularity and a uniform fuzzy partition for each variable. A better approach consists in automatically identifying the appropriate granularities and fuzzy partitions from data, since this usually leads to more accurate models. This contribution presents a fuzzy discretization approach, which is used to automatically generate promising granularities and their associated fuzzy partitions. This mechanism is integrated within a Multi-Objective Fuzzy Association Rule-Based Classification method, namely D-MOFARC, which concurrently performs a tuning and a rule selection process on an initial knowledge base. The aim is to obtain fuzzy rule-based classification systems with high classification performance while keeping their complexity low.
A multi-objective evolutionary method for learning granularities based on fuzzy discretization to improve the accuracy-complexity trade-off of fuzzy rule-based classification systems: D-MOFARC algorithm
S1568494614003615
The goal of this paper is to achieve optimal performance for the synchronization of bilateral teleoperation systems against time delay and modeling uncertainties, in both free and contact motion. Time delay in bilateral teleoperation systems imposes a delicate tradeoff between the conflicting requirements of stability and transparency. For this reason, population-based optimization algorithms are employed in this paper to tune the proposed controller parameters. The performances of the controllers tuned with the gains obtained by the Cuckoo Optimization Algorithm (COA), Biogeography-Based Optimization (BBO), Imperialist Competitive Algorithm (ICA), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization with continuous domain (ACOR), Self-adaptive Differential Evolution with Neighborhood Search (SaNSDE), Adaptive Differential Evolution with Optional External Archive (JADE), Differential Evolution with Ensemble of Parameters and mutation strategies (EPSDE) and Cuckoo Search (CS) are compared. Through numerical simulations, the validity of the proposed method is illustrated. It is also shown that the COA algorithm is able to solve the synchronization problem with high performance in stable, transparent bilateral teleoperation systems.
A comparison between optimization algorithms applied to synchronization of bilateral teleoperation systems against time delay and modeling uncertainties
S1568494614003627
Data partitioning and scheduling is one of the important issues in minimizing the processing time of parallel and distributed computing systems. We consider a single-level tree architecture of the system and the case of an affine communication model, for a general m-processor system with n rounds of load distribution. For this case, there exist an optimal activation order, an optimal number of processors m* (m* ≤ m), and an optimal number of rounds of load distribution n* (n* ≤ n), such that the processing time of the entire processing load is a minimum. This is a difficult optimization problem because, for a given activation order, we have to first identify the processors that participate (in the computation process) in every round of load distribution and then obtain the load fractions assigned to them, and hence the processing time. In this paper, we therefore propose a real-coded genetic algorithm (RCGA) to find the optimal activation order, the optimal number of processors m* (m* ≤ m), and the optimal number of rounds of load distribution n* (n* ≤ n), such that the processing time of the entire processing load is a minimum. RCGA employs modified crossover and mutation operators such that the operators always produce a valid solution. We also propose different population initialization schemes to improve convergence. Finally, we present a comparative study with a simple real-coded genetic algorithm and particle swarm optimization to highlight the advantage of the proposed algorithm. The results clearly indicate the effectiveness of the proposed real-coded genetic algorithm.
Hybrid real-coded genetic algorithm for data partitioning in multi-round load distribution and scheduling in heterogeneous systems
S1568494614003639
Surrogate-assisted evolutionary optimization has proved effective in reducing optimization time, as surrogates, or meta-models, can approximate expensive fitness functions in the optimization run. While this is a successful strategy to improve optimization efficiency, challenges arise when constructing surrogate models in higher-dimensional function spaces, where the trade space between multiple conflicting objectives is increasingly complex. This complexity makes it difficult to ensure the accuracy of the surrogates. In this article, a new surrogate management strategy is presented to address this problem. A k-means clustering algorithm is employed to partition model data into local surrogate models. The variable fidelity optimization scheme proposed in the authors' previous work is revised to incorporate this clustering algorithm for surrogate model construction. The applicability of the proposed algorithm is illustrated on six standard test problems. The presented algorithm is also examined in a three-objective stiffened panel optimization design problem to show its superiority in surrogate-assisted multi-objective optimization in higher-dimensional objective function spaces. Performance metrics show that the proposed surrogate handling strategy clearly outperforms the single-surrogate strategy as the surrogate size increases.
Improving surrogate-assisted variable fidelity multi-objective optimization using a clustering algorithm
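The partitioning idea described above can be sketched as follows: cluster the archive of evaluated designs with k-means and fit one local surrogate per cluster, routing each prediction query to the model of its nearest cluster. The choice of scikit-learn and of a Gaussian-process surrogate here is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_local_surrogates(X, y, n_clusters=4, seed=0):
    """Partition evaluated designs with k-means; fit one local surrogate per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    models = [GaussianProcessRegressor().fit(X[km.labels_ == c], y[km.labels_ == c])
              for c in range(n_clusters)]
    return km, models

def predict(km, models, x):
    c = km.predict(np.atleast_2d(x))[0]        # route the query to its local model
    return models[c].predict(np.atleast_2d(x))[0]

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(120, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2             # stand-in for an expensive fitness
km, models = fit_local_surrogates(X, y)
print(predict(km, models, [0.5, -1.0]))
```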
S1568494614003640
In this paper, a hybrid method is proposed to control a nonlinear dynamic system using a feedforward neural network. The learning procedure uses different learning algorithms for different layers: the weights connecting the input and hidden layers are first adjusted by a self-organized learning procedure, whereas the weights between the hidden and output layers are trained by a supervised learning algorithm, such as a gradient descent method. A comparison with backpropagation (BP) shows that the new algorithm can considerably reduce network training time.
Neural network control of nonlinear dynamic systems using hybrid algorithm
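Under a common interpretation of such two-stage hybrids (an RBF-style network whose hidden units are placed by unsupervised competitive learning and whose output weights are then fitted by gradient descent), a minimal sketch could look like this; all structural choices below are assumptions, since the abstract does not fix them.

```python
import numpy as np

def train_hybrid(X, y, n_hidden=10, epochs=200, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # stage 1: self-organized (competitive) placement of hidden-unit centers
    centers = X[rng.choice(len(X), n_hidden, replace=False)].copy()
    for _ in range(50):
        for x in X:
            w = np.argmin(np.linalg.norm(centers - x, axis=1))
            centers[w] += 0.1 * (x - centers[w])   # move the winner toward the sample
    sigma = np.mean(np.linalg.norm(centers[:, None] - centers[None], axis=2)) + 1e-9
    H = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=2) ** 2 / sigma ** 2)
    # stage 2: supervised gradient descent on the hidden-to-output weights only
    w_out = np.zeros(n_hidden)
    for _ in range(epochs):
        w_out -= lr * H.T @ (H @ w_out - y) / len(X)
    return centers, sigma, w_out

X = np.random.default_rng(1).uniform(-1, 1, size=(100, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]               # toy nonlinear target
centers, sigma, w_out = train_hybrid(X, y)
```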
S1568494614003652
Feature selection plays an important role in the machine-vision-based online detection of foreign fibers in cotton because it improves detection accuracy and speed. Feature sets of foreign fibers in cotton are multi-character feature sets, meaning that high-quality feature sets of foreign fibers in cotton consist of three classes of features: color, texture and shape features. Multi-character feature sets naturally contain a space constraint, which leads to a smaller feature space than a general feature set with the same number of features; however, existing algorithms do not consider this space characteristic and treat multi-character feature sets as general feature sets. This paper proposes an improved ant colony optimization for feature selection, whose objective is to find the (near-)optimal subsets in multi-character feature sets. In the proposed algorithm, a group constraint is adopted to limit the subset-construction process and the probability transition, reducing the effect of invalid subsets and improving convergence efficiency. As a result, the algorithm can effectively find high-quality subsets in the feature space of multi-character feature sets. The proposed algorithm is tested on datasets of foreign fibers in cotton, and comparisons with other methods are also made. The experimental results show that the proposed algorithm can find high-quality subsets of smaller size and high classification accuracy. This is very important for improving the performance of online detection systems for foreign fibers in cotton.
Feature selection based on improved ant colony optimization for online detection of foreign fiber in cotton
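The group constraint on subset construction can be illustrated with a small sketch: each ant picks features with pheromone-proportional (roulette-wheel) transitions, but a per-group quota keeps the subset within the color/texture/shape structure. The quotas, subset size, feature names and the absence of a heuristic visibility term are simplifying assumptions for the example.

```python
import random

def construct_subset(pheromone, groups, quota, subset_size):
    """Build one ant's feature subset. Picks are pheromone-proportional
    (roulette wheel) but never exceed the per-group quota (group constraint)."""
    chosen, used = [], {g: 0 for g in quota}
    while len(chosen) < subset_size:
        valid = [f for f in pheromone
                 if f not in chosen and used[groups[f]] < quota[groups[f]]]
        if not valid:                       # the constraint leaves nothing to pick
            break
        total = sum(pheromone[f] for f in valid)
        r, acc = random.uniform(0, total), 0.0
        for f in valid:
            acc += pheromone[f]
            if acc >= r:
                chosen.append(f)
                used[groups[f]] += 1
                break
    return chosen

pheromone = {"hue": 1.2, "saturation": 0.8, "contrast": 1.0,
             "entropy": 0.9, "area": 1.1, "eccentricity": 0.7}
groups = {"hue": "color", "saturation": "color", "contrast": "texture",
          "entropy": "texture", "area": "shape", "eccentricity": "shape"}
quota = {"color": 1, "texture": 1, "shape": 1}
print(construct_subset(pheromone, groups, quota, subset_size=3))
```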
S1568494614003664
A hybrid model is designed by combining the genetic algorithm (GA), a radial basis function neural network (RBF-NN) and Sugeno fuzzy logic to determine the optimal parameters of a proportional-integral-derivative (PID) controller. Our approach uses the rule base of the Sugeno fuzzy system and a fuzzy PID controller for the automatic voltage regulator (AVR) to improve the system's sensitive response. The rule base is developed by proposing feature extraction for the genetic neural fuzzy PID (GNFPID) controller through integrating the GA with the radial basis function neural network. The GNFPID controller is found to possess the excellent features of easy implementation, stable convergence characteristics, good computational efficiency and high-quality solutions. Our simulation provides a highly sensitive response (∼0.005 s) of an AVR system compared to the real-coded genetic algorithm (RGA), a linear-quadratic regulator (LQR) method and GA. We assert that GNFPID is highly efficient and robust in improving the sensitive response of an AVR system.
A novel design of high-sensitive fuzzy PID controller
S1568494614003688
Since the Internet bubble, firms that focus on virtual enterprises have sought to enhance productivity. To achieve this goal, a firm must evaluate its present productivity and estimate its future productivity. To overcome the considerable uncertainty in estimates of productivity, we propose an agent-based fuzzy collaborative intelligence approach that predicts productivity. First, a fuzzy learning model is built and used to estimate future productivity. Subsequently, the fuzzy learning model is fitted by several agents with diverse settings; those agents produce different productivity forecasts. Fuzzy intersection is then applied to determine the narrowest range that contains the actual value from the fuzzy forecasts. Finally, a back-propagation network derives a representative value from the fuzzy forecasts. The real-world case of Facebook is used to demonstrate the applicability of the proposed methodology.
Forecasting the productivity of a virtual enterprise by agent-based fuzzy collaborative intelligence—With Facebook as an example
S156849461400369X
Traditionally, clustering is the task of dividing samples into homogeneous clusters based on their degrees of similarity. As samples are assigned to clusters, users need to manually give descriptions for all clusters. In this paper, a rapid fuzzy rule clustering method based on granular computing is proposed to give descriptions for all clusters. A new and simple unsupervised feature selection method is employed to endow every sample with a suitable description. Exemplar descriptions are selected from the samples' descriptions by relative frequency, and data granulation is guided by the selected exemplar fuzzy descriptions. Every cluster is depicted by a single fuzzy rule, which makes the clusters understandable for humans. The experimental results show that our proposed model is able to discover fuzzy IF–THEN rules to obtain the potential clusters.
A rapid fuzzy rule clustering method based on granular computing
S1568494614003706
This paper focuses on generating optimal solutions to the solid transportation problem under a fuzzy environment, in which the supply capacities, demands and transportation capacities are supposed to be type-2 fuzzy variables due to their intrinsic imprecision. In order to model the problem within the framework of credibility optimization, three types of new defuzzification criteria, i.e., the optimistic value criterion, the pessimistic value criterion and the expected value criterion, are proposed for type-2 fuzzy variables. Then, the multi-fold fuzzy solid transportation problem is reformulated as a chance-constrained programming model with the least expected transportation cost. To solve the model, a fuzzy-simulation-based tabu search algorithm is designed to seek approximately optimal solutions. Numerical experiments are implemented to illustrate the application and effectiveness of the proposed approaches.
A solid transportation problem with type-2 fuzzy variables
S1568494614003718
In this paper, the optimal control for a stochastic linear quadratic singular neuro Takagi–Sugeno (T-S) fuzzy system with singular cost is obtained using genetic programming (GP). To obtain the optimal control, the solution of the matrix Riccati differential equation (MRDE) is computed by solving the differential algebraic equation (DAE) using a novel and nontraditional GP approach. The solution obtained by this method is equivalent or very close to the exact solution of the problem, and the accuracy of the solution computed by the GP approach is qualitatively better. The solution of this novel method is compared with that of the traditional Runge–Kutta (RK) method. A numerical example is presented to illustrate the proposed method.
Optimal control for stochastic linear quadratic singular neuro Takagi–Sugeno fuzzy system with singular cost using genetic programming
S156849461400372X
The evaluation of bone development is a complex and time-consuming task for physicians, since it may cause intraobserver and interobserver differences. In this study, we present a new training algorithm for support vector machines in order to determine the bone age of young children, from newborn to 6 years old. With the new algorithm, we aim to assist radiologists by eliminating the disadvantages of the methods used in bone age determination. To achieve this purpose, a feature extraction procedure was first applied to left hand-wrist X-ray images using image processing techniques, and features related to the carpal bones and the distal epiphysis of the radius bone were obtained. These features were then used as the input arguments of the classifier. In the classification process, a new training algorithm for support vector machines was proposed using particle swarm optimization: when training the support vector machines, particle swarm optimization generates, from the training set, a new training instance that represents the whole training set of the related class. Finally, these new instances were used as the support vectors and the classification process was carried out using them. The performance of the proposed method was compared with the naive Bayes, k-nearest neighbor, support vector machine and C4.5 algorithms. As a result, the proposed method was found to be more successful than the other methods for bone age determination, with a classification performance of 74.87%.
Support vector machines classification based on particle swarm optimization for bone age determination
S1568494614003731
Zhu et al. (2012) proposed dual hesitant fuzzy sets as an extension of hesitant fuzzy sets, which encompass fuzzy sets, intuitionistic fuzzy sets, hesitant fuzzy sets, and fuzzy multisets as special cases. Dual hesitant fuzzy sets consist of two parts, the membership and nonmembership degrees, which are represented by two sets of possible values. Therefore, in accordance with practical demand, these sets are more flexible and provide much more information about the situation. In this paper, the axiomatic definition of a similarity measure between dual hesitant fuzzy sets is introduced. A new similarity measure considering the membership and nonmembership degrees of dual hesitant fuzzy sets is presented, and it is shown that the corresponding distance measures can be obtained from the proposed similarity measures. To check its effectiveness, the proposed similarity measure is applied in a bidirectional approximate reasoning system. A mathematical formulation of the dual hesitant fuzzy assignment problem with restrictions is presented. Two algorithms based on the proposed similarity measure are developed to find the optimal solution of the dual hesitant fuzzy assignment problem with restrictions. Finally, the proposed method is illustrated by numerical examples.
A new method for solving dual hesitant fuzzy assignment problems with restrictions based on similarity measure
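As a flavour of how such measures operate, the sketch below implements a generic distance-based similarity between dual hesitant fuzzy elements (a membership set h and a non-membership set g per element); the length-matching convention and the specific formula are common choices from the literature, not the paper's axiomatic measure.

```python
def dhf_similarity(h1, g1, h2, g2):
    """Similarity between two dual hesitant fuzzy elements (h1, g1) and (h2, g2)."""
    def mean_abs_diff(a, b):
        a, b = sorted(a), sorted(b)
        n = max(len(a), len(b))
        a += [a[-1]] * (n - len(a))   # extend the shorter set by repeating
        b += [b[-1]] * (n - len(b))   # its largest value (a common convention)
        return sum(abs(x - y) for x, y in zip(a, b)) / n
    d = 0.5 * (mean_abs_diff(h1, h2) + mean_abs_diff(g1, g2))
    return 1.0 - d                    # similarity as the dual of distance

print(dhf_similarity([0.6, 0.7], [0.2], [0.5, 0.8], [0.1, 0.2]))
```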
S1568494614003743
Many bankruptcy forecasting models have been studied. Most of them use corporate finance data and are intended for general companies. They may not be appropriate for forecasting the bankruptcy of construction companies, which have high liquidity and a different capital structure, so models that judge the financial risk of general companies can be difficult to apply to construction companies. Existing studies, such as the traditional Z-score and bankruptcy prediction using machine learning, focus on companies of non-specific industries, and the characteristics of the companies are not considered at all. In this paper, we show that AdaBoost (adaptive boosting) is an appropriate model to judge the financial risk of Korean construction companies. We classified construction companies into three groups, large, middle, and small, based on the capital of a company, and analyzed the predictive ability of AdaBoost and other algorithms for each group of companies. The experimental results showed that AdaBoost has more predictive power than the others, especially for the large group of companies with capital of more than 50 billion won.
AdaBoost based bankruptcy forecasting of Korean construction companies
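The classification setup is straightforward to reproduce in outline with scikit-learn's AdaBoostClassifier; the features and labels below are synthetic placeholders standing in for financial ratios and bankruptcy outcomes, not the paper's Korean construction-company data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))               # stand-in financial ratios
# synthetic "bankrupt / solvent" label driven by two of the ratios plus noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```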
S1568494614003755
This paper assesses the effectiveness of dynamic evolving neural-fuzzy inference system (DENFIS) models in predicting the compressive strength of dry-cast concretes, and compares their prediction performance with that of regression, neural network (NN) and ANFIS models. The results of this study emphasize the capabilities of online first-order and offline high-order Takagi–Sugeno (TSK) type DENFIS models for prediction purposes, whereas offline first-order TSK-type DENFIS models did not produce reliable results. A comparison of the results produced by an elite high-order DENFIS model with those predicted by the selected NN, regression and ANFIS models showed the effectiveness of the DENFIS model over the regression model, while its performance was similar to or slightly better than that of the other artificial prediction tools.
Numerical study on the feasibility of dynamic evolving neural-fuzzy inference system for approximation of compressive strength of dry-cast concrete
S1568494614003767
This paper presents a novel idea for the intracranial segmentation of magnetic resonance (MR) brain images using pixel intensity values by the optimum boundary point detection (OBPD) method. The newly proposed OBPD method consists of three steps. Firstly, the brain-only portion is extracted from the whole MR brain image. The brain-only portion mainly contains three regions: gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). We need two boundary points to divide the brain pixels into three regions on the basis of their intensity. Secondly, the optimum boundary points are obtained using the newly proposed hybrid GA–BFO algorithm to compute the final cluster centres of the FCM method. For comparison, the soft computing techniques GA, PSO and BFO are also used. Finally, the FCM algorithm is executed only once to obtain the membership matrix, and the brain image is then segmented using this final membership matrix. The key to our success is that the final cluster centres for FCM are obtained using the OBPD method. In addition, a reformulated objective function is used for optimization. Initial values of the boundary points are constrained to lie in a range determined from the brain dataset, and boundary points violating the imposed constraints are repaired. The method is validated using simulated T1-weighted MR brain images from the IBSR database with manual segmentation results. Further, we have used MR brain images from the Brainweb database with additional noise levels to validate the robustness of the proposed method. It is observed that our proposed method significantly improves segmentation results compared to other methods.
A study on fuzzy clustering for magnetic resonance brain image segmentation using soft computing approaches
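A minimal sketch of the partitioning-then-FCM idea follows: two intensity boundary points split the brain pixels into three groups whose means serve as the final FCM centres, and the FCM membership matrix is then computed once. The fixed boundary values stand in for the GA-BFO optimization, and the 1-D intensity model is a simplification assumed for the example.

```python
import numpy as np

def fcm_memberships(pixels, centres, m=2.0):
    """One-shot FCM membership computation for fixed centres (fuzzifier m)."""
    d = np.abs(pixels[:, None] - centres[None, :]) + 1e-9   # (N, 3) distances
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)             # rows sum to 1

pixels = np.random.default_rng(2).uniform(0, 255, 5000)     # stand-in intensities
t1, t2 = 85.0, 170.0                  # boundary points (assumed, not optimised here)
centres = np.array([pixels[pixels < t1].mean(),
                    pixels[(pixels >= t1) & (pixels < t2)].mean(),
                    pixels[pixels >= t2].mean()])
U = fcm_memberships(pixels, centres)
labels = U.argmax(axis=1)             # segmentation into the three tissue classes
```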
S1568494614003779
This paper presents a novel two-phased and Ensemble scheme integrated Backpropagation (TP-ES-BP) algorithm, which can greatly alleviate the local minima problem of the standard BP (SBP) algorithm and overcome the limitations of individual component BPs in classification performance through the integration of the Ensemble method. Empirical and t-test results of three groups of simulation experiments, including the face recognition task on the ORL face image database and four benchmark classification tasks on datasets drawn from the UCI repository of machine learning databases, show that the TP-ES-BP algorithm achieves significantly better recognition results and higher generalization performance than SBP and the state-of-the-art emotional backpropagation (EmBP) learning algorithm.
A two-phased and Ensemble scheme integrated Backpropagation algorithm
S1568494614003780
Particle swarm optimization (PSO) has shown competitive performance in solving benchmark and real-world optimization problems. Nevertheless, it requires better control of exploration/exploitation searches to prevent the premature convergence of swarms. Thus, this paper proposes a new PSO variant called PSO with adaptive time-varying topology connectivity (PSO-ATVTC), which employs an ATVTC module and a new learning framework. The proposed ATVTC module specifically aims to balance the algorithm's exploration/exploitation searches by varying each particle's topology connectivity with time according to its searching performance. The proposed learning framework consists of a new velocity update mechanism and a new neighborhood search operator to improve the algorithm's performance. A comprehensive study was conducted on 24 benchmark functions and one real-world problem. Compared with nine well-established PSO variants and six other cutting-edge metaheuristic search algorithms, the searching performance of PSO-ATVTC proved more prominent on the majority of the tested problems.
Particle swarm optimization with adaptive time-varying topology connectivity
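To make the topology-connectivity idea concrete, the sketch below performs one PSO step in which each particle learns from the best personal-best among the neighbours its current topology exposes; the adaptive connectivity schedule itself is abstracted into the supplied neighbour lists, and the coefficients are conventional defaults rather than the paper's settings.

```python
import numpy as np

def pso_step(pos, vel, pbest, pbest_cost, neighbors, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO update. pos, vel, pbest: (n, d) arrays; pbest_cost: (n,) array;
    neighbors[i]: index array of the particles that particle i can currently see."""
    rng = rng or np.random.default_rng()
    n, d = pos.shape
    for i in range(n):
        nb = neighbors[i]
        lbest = nb[np.argmin(pbest_cost[nb])]      # best informant in the neighbourhood
        r1, r2 = rng.random(d), rng.random(d)
        vel[i] = (w * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (pbest[lbest] - pos[i]))
        pos[i] += vel[i]
    return pos, vel
```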
S1568494614003792
In this study, we model the bandwidth provider selection and task allocation problem as an expected cost minimization problem with stochastic constraints. Two important quality of service (QoS) parameters, namely delay and jitter, are considered as random variables to capture the stochastic nature of the telecom network environment. As the solution methodology, the stochastic model is converted into its deterministic equivalent, and a novel heuristic algorithm is then proposed to solve the resulting non-linear mixed integer programming model. We analyze how different prices, quality levels and task distributions affect the optimal behavior of the firm. Finally, the performance of the solution procedure is tested on several randomly generated scenarios and against a relaxation of the problem that serves as a tight lower bound.
A bandwidth sourcing and task allocation model in telecommunications under stochastic QoS guarantees
S1568494614003809
There is a global challenge in the demand for and need of electricity. Whenever a serious fault occurs, it affects the productivity of the power plant. Many indicators have been identified for the real-time fault diagnosis of steam turbines, which is extremely important in a functioning power plant. The detection of faults and their early rectification require a real-time intelligent fault diagnostic system. This paper considers seven types of very slowly occurring and accumulating physical phenomena that ultimately lead to deterioration in turbine performance. Based on the acquired domain knowledge, an online intelligent diagnostic analyzer for turbine performance degradation is designed using a VLSI-based methodology written in Verilog code and simulated using a simulator (Modelsim Altera 6.4a). This system can easily be implemented on an FPGA, which enables the identification of the root causes of turbine performance degradation. The simulation results show that the developed real-time fault diagnostic system is accurate, less time consuming, cost effective, easy to apply and user friendly.
Implementation of VLSI model as a tool in diagnostics of slowly varying process parameters which affect the performance of steam turbine
S1568494614003810
Recently, much attention has been paid to the applications of lattice theory in different fields. Fuzzy lattice reasoning (FLR) was recently described as a lattice data domain extension of fuzzy-ARTMAP based on a lattice inclusion measure function. In this work, we develop a fuzzy lattice reasoning classifier using various distance metrics. As a consequence, the new algorithm, named FLRC-MD, shows better classification results and more generalization, and it generates fewer induced rules. To assess the effectiveness of the proposed model, twenty benchmark data sets are tested. The results compare favorably with those from a number of state-of-the-art machine learning techniques published in the literature, confirming the effectiveness of the proposed method.
Rule inducing by fuzzy lattice reasoning classifier based on metric distances (FLRC-MD)
S1568494614003822
This paper deals with the simultaneous application of a thyristor controlled series capacitor based damping controller and a power system stabilizer for the stability improvement of a dynamic power system. The adaptive neuro-fuzzy inference system and the Levenberg–Marquardt artificial neural network algorithm are used to develop the control strategy for the thyristor controlled series capacitor based damping controller and the power system stabilizer. The power system stabilizer generates an appropriate supplementary control signal to the excitation system of the synchronous generator to damp the frequency oscillations and improve the dynamic performance of the power system. The performance of the power system is affected by the system configuration and load variations. In order to achieve appreciable damping, the series capacitor is suggested in addition to the power system stabilizer. Nonlinear simulations of a single machine infinite bus system are carried out using the individual application of the power system stabilizer and the simultaneous application of the power system stabilizer and the thyristor controlled series capacitor. A comparison between conventional and smart control strategy based controllers is demonstrated. The single machine infinite bus system is tested under various operating conditions and disturbances to show the effectiveness of the proposed control schemes. Nomenclature: rotor angle (rad); rotor base speed (rad/s); generator slip (p.u.); initial operating slip (p.u.); inertia constant; damping coefficient; mechanical power input (p.u.); electrical power output (p.u.); open-circuit d-axis time constant (s); open-circuit q-axis time constant (s); q-axis transient voltage; d-axis transient voltage; d-axis synchronous reactance (p.u.); d-axis transient reactance (p.u.); q-axis synchronous reactance (p.u.); q-axis transient reactance (p.u.); d-axis current; q-axis current; generator terminal voltage; infinite bus voltage.
Smart control techniques for design of TCSC and PSS for stability enhancement of dynamical power system
S1568494614003846
Complex networks are widely used to describe the structure of many complex systems in nature and society. The box-covering algorithm is widely applied to calculate the fractal dimension, which plays an important role in complex networks. However, there are two open issues in existing box-covering algorithms. On the one hand, identifying the minimum number of boxes for any given size belongs to a family of NP-hard problems. On the other hand, the covering results exhibit randomness. In this paper, a fuzzy fractal dimension model of complex networks based on fuzzy sets is proposed. The results illustrate that the proposed model is efficient and less time consuming.
Fuzzy fractal dimension of complex networks
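For context, the classical (crisp) box-covering estimate that such fuzzy models generalize can be sketched as follows: greedily cover the network with boxes of diameter less than l_B for several box sizes and fit log N_B against log l_B. The greedy covering and the random node order (the source of the randomness the abstract mentions) are standard choices, not the paper's fuzzy formulation.

```python
import random
import networkx as nx
import numpy as np

def box_count(G, l_b, seed=0):
    """Greedy covering: number of boxes of diameter < l_b needed to cover G."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    rng.shuffle(nodes)                    # random order: source of the randomness
    dist = dict(nx.all_pairs_shortest_path_length(G))
    uncovered, boxes = set(nodes), 0
    while uncovered:
        centre = next(iter(uncovered))
        box = {v for v in uncovered if dist[centre].get(v, 10**9) < l_b}
        uncovered -= box
        boxes += 1
    return boxes

G = nx.erdos_renyi_graph(100, 0.05, seed=1)
sizes = [2, 3, 4, 5]
counts = [box_count(G, l) for l in sizes]
slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)   # N_B ~ l_B^(-d_B)
print("fractal dimension estimate:", -slope)
```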
S1568494614003858
SVM (support vector machine) techniques have recently arrived to complete the wide range of classification methods for complex systems. These classifiers offer performances similar to those of other classifiers (such as neural networks or classic statistical classifiers) and are becoming a valuable tool in industry for the resolution of real problems. One of the fundamental elements of this type of classifier is the metric used for determining the distance between samples of the population to be classified. Although the Euclidean distance is the most natural metric, it presents certain disadvantages when trying to develop classification systems that can adapt as the characteristics of the sample space change. Our study proposes a means of avoiding this problem using multivariate normalization of the inputs (during both the training and classification processes). Using experimental results produced from a significant number of populations, the study confirms the improvement achieved in the classification processes. Lastly, the study demonstrates that multivariate normalization applied to a real SVM is equivalent to using an SVM with the Mahalanobis distance measure on non-normalized data.
Training of support vector machine with the use of multivariate normalization
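The equivalence stated in the last sentence is easy to demonstrate in outline: whitening the inputs with the inverse square root of their covariance makes Euclidean distances on the transformed data equal Mahalanobis distances on the originals, so an SVM trained on whitened inputs behaves like a Mahalanobis-distance SVM. The synthetic data and library choices below are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.svm import SVC

def whiten(X, mean, cov):
    W = np.linalg.inv(sqrtm(cov)).real      # Sigma^(-1/2)
    return (X - mean) @ W.T                 # Euclidean here = Mahalanobis there

rng = np.random.default_rng(3)
X = rng.multivariate_normal([0, 0], [[4, 1.5], [1.5, 1]], size=400)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)
Xw = whiten(X, mean, cov)
clf = SVC(kernel="rbf").fit(Xw, y)          # train on whitened inputs
print(clf.score(whiten(X, mean, cov), y))   # apply the same transform at test time
```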
S156849461400386X
Large-scale software systems are in general difficult to manage and monitor. In many cases, these systems display unexpected behavior, especially after being updated or when changes occur in their environment (operating system upgrades or hardware migrations, to name a few). Therefore, to handle a changing environment, it is desirable to base fault detection and performance monitoring on self-adaptive techniques. Several studies have been carried out in the past which, inspired by the immune system, aim at solving complex technological problems. Among them, anomaly detection, pattern recognition, system security and data mining are problems that have been addressed in this framework. There are similarities between the software fault detection problem and the identification of pathogens in natural immune systems. Inspired by vaccination and by the negative and clonal selection observed in these systems, we developed an effective self-adaptive model that monitors software applications by analyzing the metrics of system resources.
Monitoring applications: An immune inspired algorithm for software-fault detection
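For illustration only: the paper's model combines vaccination with negative and clonal selection, while the sketch below shows just a generic negative-selection detector generator over (assumed) normalized resource metrics, not the authors' algorithm.

```python
# Generate random detectors, discard any that match "self" (normal) samples,
# and flag new samples matched by a surviving detector.
import numpy as np

rng = np.random.default_rng(0)
self_samples = rng.normal(0.5, 0.05, size=(200, 3))   # normal CPU/mem/IO metrics (scaled)
R = 0.15                                              # matching radius (assumed)

detectors = []
while len(detectors) < 50:
    d = rng.uniform(0, 1, size=3)
    if np.min(np.linalg.norm(self_samples - d, axis=1)) > R:  # does not match self
        detectors.append(d)
detectors = np.array(detectors)

def is_anomalous(x):
    """A sample is flagged if any detector matches it within radius R."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= R)

print(is_anomalous(np.array([0.50, 0.50, 0.50])))   # normal-looking -> usually False
print(is_anomalous(np.array([0.95, 0.90, 0.90])))   # far from self  -> usually True
```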
S1568494614003871
Being simple to use, the X-bar control chart has been the most widely used chart in industry for monitoring and controlling manufacturing processes. Measurements of a quality characteristic are taken in samples from the production process at regular intervals, and the sample means are plotted on this chart. Design of a control chart involves the selection of three parameters, namely the sample size (n), the sampling interval (h) and the width of the control limits (k). In an economic design, these three parameters are selected so that the total cost of controlling the process is minimized. The effectiveness of this design depends on the accuracy with which these three parameters are determined. In this paper, a new, efficient and effective optimization technique named teaching–learning based optimization (TLBO) is used for the global minimization of a loss cost function expressed in terms of the three variables n, h and k in an economic model of the X-bar chart based on the unified approach. In this work, the TLBO algorithm has been modified to simplify the tuning of the teaching factor. A MATLAB computer program has been developed for this purpose. A numerical example has been solved, and the results are found to be better than earlier published results. Further, a sensitivity analysis using fractional factorial design and analysis of variance has been carried out to identify the critical process and cost parameters affecting the economic design.
A teaching–learning based optimization approach for economic design of X-bar control chart
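For illustration only: a minimal TLBO sketch over the three design variables (n, h, k). The loss-cost function below is a stand-in surrogate, since the unified-approach cost model is not reproduced here; the bounds and population settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
lo, hi = np.array([2.0, 0.1, 1.0]), np.array([30.0, 8.0, 6.0])  # bounds on n, h, k (assumed)

def loss_cost(x):
    n, h, k = round(x[0]), x[1], x[2]                 # sample size must be an integer
    return (n - 5) ** 2 + (h - 1.0) ** 2 + (k - 3.0) ** 2  # placeholder surrogate cost

pop = rng.uniform(lo, hi, size=(20, 3))
for _ in range(100):
    cost = np.apply_along_axis(loss_cost, 1, pop)
    teacher = pop[np.argmin(cost)]
    TF = rng.integers(1, 3)                           # teaching factor in {1, 2}
    # Teacher phase: move learners toward the teacher, away from the class mean.
    new = np.clip(pop + rng.random((20, 3)) * (teacher - TF * pop.mean(axis=0)), lo, hi)
    improve = np.apply_along_axis(loss_cost, 1, new) < cost
    pop[improve] = new[improve]
    # Learner phase: each learner moves toward a better random peer (or away from a worse one).
    for i in range(20):
        j = rng.integers(20)
        if i == j:
            continue
        step = (pop[j] - pop[i]) if loss_cost(pop[j]) < loss_cost(pop[i]) else (pop[i] - pop[j])
        cand = np.clip(pop[i] + rng.random(3) * step, lo, hi)
        if loss_cost(cand) < loss_cost(pop[i]):
            pop[i] = cand

best = pop[np.argmin(np.apply_along_axis(loss_cost, 1, pop))]
print("n, h, k =", round(best[0]), best[1], best[2])
```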
S1568494614003883
To overcome the computational intractability and imprecise estimation of the standard cardinalized probability hypothesis density (CPHD) filter for multitarget tracking (MTT), an improved CPHD filtering algorithm is proposed in this paper. We apply the Sequential Monte Carlo (SMC) method to obtain a closed-form solution in the filtering process as well as to avoid missed detections. We then partition the particle set into surviving particles and newborn particles based on the particle labels. To eliminate over-estimation of the target number, the weights of the newborn particles are redistributed to the surviving particles on average. Simulations are presented to compare the performance of the proposed filtering algorithm with that of the standard one. The results show that the proposed algorithm can effectively achieve MTT with better performance.
Improved cardinalized probability hypothesis density filtering algorithm
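For illustration only, under an assumed reading of the redistribution step: the total weight of the label-identified newborn particles is spread evenly over the surviving particles to curb over-estimation of the target number. The array contents are invented and the sketch is not the paper's algorithm.

```python
import numpy as np

weights = np.array([0.20, 0.15, 0.25, 0.10, 0.30])
is_newborn = np.array([False, False, True, True, False])   # from particle labels

newborn_mass = weights[is_newborn].sum()
surviving = ~is_newborn
weights[surviving] += newborn_mass / surviving.sum()   # spread evenly over survivors
weights[is_newborn] = 0.0                              # newborn mass fully reassigned
print(weights, weights.sum())                          # total mass is preserved
```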
S1568494614003895
The Imperialist Competitive Algorithm (ICA), derived from the study of human socio-political evolution, is a component of swarm intelligence theory. It was first introduced in 2007 to deal with continuous optimization problems but has recently been applied extensively to discrete optimization problems. This paper reviews the underlying ideas from which ICA emerged and its application across the engineering disciplines, mainly industrial engineering. The present study is the first comprehensive review of ICA; it indicates a statistically significant increase in the amount of published research on this metaheuristic, especially research addressing discrete optimization problems. Future research directions and trends are also described.
A survey on the Imperialist Competitive Algorithm metaheuristic: Implementation in engineering domain and directions for future research
S1568494614003901
This paper surveys strategies applied to avoid premature convergence in Genetic Algorithms (GAs). The Genetic Algorithm belongs to the set of nature-inspired algorithms, and its applications cover wide domains such as optimization, pattern recognition, learning, scheduling, economics, bioinformatics, etc. The fitness function is the measure that guides a GA over a population whose individuals are initially distributed randomly. Typically, particular values for individual genes start to dominate as the search evolves; population diversity then decreases as the population converges, which leads to the problems of premature convergence and slow finishing. In this paper, a detailed and comprehensive survey of different approaches implemented to prevent premature convergence, with their strengths and weaknesses, is presented. This paper also discusses the details of GAs, the factors affecting performance during the search for global optima, and the theoretical framework of the Genetic Algorithm. The surveyed research is organized systematically, and a detailed summary and analysis of the reviewed literature are given for quick review. A comparison of the reviewed literature is made based on different parameters. The underlying motivation for this paper is to identify methods that allow the development of new strategies to prevent premature convergence and to support the effective utilization of genetic algorithms in different areas of research.
A comparative review of approaches to prevent premature convergence in GA
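For illustration only: one family of remedies covered by such surveys is the random-immigrants strategy, which reinjects diversity by replacing the worst individuals each generation with fresh random ones. The toy GA below (one-max objective, assumed parameters) is a sketch of that idea, not any specific surveyed method.

```python
import numpy as np

rng = np.random.default_rng(2)
POP, GENES, IMM = 30, 16, 3

def fitness(pop):                                   # toy objective: count of ones
    return pop.sum(axis=1)

pop = rng.integers(0, 2, size=(POP, GENES))
for gen in range(50):
    f = fitness(pop)
    a, b = rng.integers(0, POP, POP), rng.integers(0, POP, POP)
    parents = pop[np.where(f[a] >= f[b], a, b)]     # binary tournament selection
    cut = rng.integers(1, GENES, POP)
    mask = np.arange(GENES) < cut[:, None]          # one-point crossover masks
    children = np.where(mask, parents, parents[rng.permutation(POP)])
    children ^= (rng.random((POP, GENES)) < 0.01)   # bit-flip mutation
    # Random immigrants: overwrite the worst IMM individuals with new blood.
    worst = np.argsort(fitness(children))[:IMM]
    children[worst] = rng.integers(0, 2, size=(IMM, GENES))
    pop = children

print("best fitness:", fitness(pop).max(), "of", GENES)
```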
S1568494614003913
Portfolio optimization involves the optimal assignment of limited capital to different available financial assets to achieve a reasonable trade-off between profit and risk objectives. In this paper, we studied the extended Markowitz mean-variance portfolio optimization model, considering the cardinality, quantity, pre-assignment and round lot constraints. These four real-world constraints respectively limit the number of assets in a portfolio, restrict the minimum and maximum proportions of assets held in the portfolio, require some specific assets to be included in the portfolio, and require assets to be invested in units of a certain size. An efficient learning-guided hybrid multi-objective evolutionary algorithm is proposed to solve the constrained portfolio optimization problem in the extended mean-variance framework. A learning-guided solution generation strategy is incorporated into the multi-objective optimization process to promote efficient convergence by guiding the evolutionary search towards promising regions of the search space. The proposed algorithm is compared against four existing state-of-the-art multi-objective evolutionary algorithms, namely the Non-dominated Sorting Genetic Algorithm (NSGA-II), the Strength Pareto Evolutionary Algorithm (SPEA-2), the Pareto Envelope-based Selection Algorithm (PESA-II) and the Pareto Archived Evolution Strategy (PAES). Computational results are reported for publicly available OR-library datasets from seven market indices involving up to 1318 assets. The experimental results demonstrate that the proposed algorithm significantly outperforms the four well-known multi-objective evolutionary algorithms with respect to the quality of the obtained efficient frontier.
A learning-guided multi-objective evolutionary algorithm for constrained portfolio optimization
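For illustration only: a minimal sketch of how a candidate portfolio might be repaired to satisfy the four constraints before evaluating the mean-variance objectives. The repair heuristic, data and parameter values are assumptions, not the paper's learning-guided operator; a full repair would iterate until the bounds hold exactly after renormalization.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, EPS, DELTA, LOT = 10, 4, 0.05, 0.60, 0.01    # cardinality K, bounds, lot size
PRE = {0}                                          # pre-assignment: asset 0 must be held
mu = rng.normal(0.08, 0.03, N)                     # expected returns (toy data)
A = rng.normal(size=(N, N))
cov = A @ A.T / N                                  # a valid covariance matrix

def repair(w):
    # Cardinality + pre-assignment: keep the pre-assigned assets plus the
    # highest-weighted remaining ones, up to K assets in total.
    others = [i for i in np.argsort(-w) if i not in PRE]
    held = sorted(PRE | set(others[:K - len(PRE)]))
    x = np.zeros(N)
    x[held] = np.clip(w[held], EPS, DELTA)         # quantity bounds
    x = np.round(x / LOT) * LOT                    # round-lot units
    return x / x.sum()                             # renormalize to the full budget

w = repair(rng.random(N))
print("return:", w @ mu, "risk:", w @ cov @ w, "assets held:", int((w > 0).sum()))
```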
S1568494614003925
Several classical techniques have evolved over the years for denoising binary images. The main disadvantage of these classical techniques is that a priori information regarding the noise characteristics is required during the extraction process. Among the intelligent techniques in vogue, the multilayer self organizing neural network (MLSONN) architecture is suitable for binary image preprocessing tasks. In this article, we propose a quantum version of the MLSONN architecture. Similar to the MLSONN architecture, the proposed quantum multilayer self organizing neural network (QMLSONN) architecture comprises three processing layers, viz. input, hidden and output layers. The different layers contain qubit-based neurons, and single-qubit rotation gates are designated as the network layer interconnection weights. A quantum measurement at the output layer destroys the quantum states of the processed information, thereby inducing the incorporation of linear indices of fuzziness as the network system errors, which are used to adjust the network interconnection weights through a quantum backpropagation algorithm. Results of applying the proposed QMLSONN are demonstrated on a synthetic and a real-life binary image with varying degrees of Gaussian and uniform noise. A comparative study with the results obtained with the MLSONN architecture and the supervised Hopfield network reveals that the QMLSONN outperforms both in terms of computation time.
Binary image denoising using a quantum multilayer self organizing neural network
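For illustration only: a minimal numerical sketch of the qubit-neuron building block, where the state is |psi> = cos(t)|0> + sin(t)|1>, an interconnection weight is a single-qubit rotation gate, and measurement yields P(|1>). The angles are arbitrary; the network layers and the quantum backpropagation rule are not shown.

```python
import numpy as np

def rotation(theta):
    """Single-qubit rotation gate R(theta)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

state = np.array([np.cos(0.3), np.sin(0.3)])   # input qubit encoding a pixel value
state = rotation(0.7) @ state                  # apply the interconnection-weight rotation
p_one = state[1] ** 2                          # measurement: probability of observing |1>
print(f"P(|1>) = {p_one:.3f}")                 # equals sin^2(0.3 + 0.7) = sin^2(1.0)
```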
S1568494614003937
The blocking job shop (BJS) problem is an extension of the job shop problem in which there are no buffers: after a job is completed on its current machine, it remains on that machine until the next machine becomes available. This paper addresses an extension of the BJS problem that takes into account transferring jobs between machines using a limited number of automated guided vehicles (AGVs), called the BJS–AGV problem. Two integer non-linear programming (INLP) models are proposed, together with a two-stage heuristic algorithm that combines an improving timetabling method and a local search. A neighborhood structure for the local search is proposed based on a disjunctive graph model. According to the characteristics of the BJS–AGV problem, four principles are proposed to guarantee the feasibility of the search neighborhood. Computational results are presented for a set of benchmark tests, some of which are enlarged by transportation times between machines. The numerical results show the effectiveness of the proposed two-stage algorithm.
Scheduling of no buffer job shop cells with blocking constraints and automated guided vehicles
S1568494614003949
This study provides valuable support for successful decision making in network hierarchical structures. The balanced scorecard (BSC) is a measurement system that requires a balanced set of financial and non-financial measures. This study examined both qualitative and quantitative data for the proposed hybrid analytical method. Although numerous studies have been conducted in industry and academia on the development of BSC performance measures, few have explored a BSC hierarchical network structure and its interdependence relations under uncertainty. This study proposes a set of hybrid approaches for a real case, the performance evaluation of a leisure farm, and demonstrates a closed-loop analytic network process for the hierarchical, interdependent BSC to help decision-makers improve practical performance and enhance management effectiveness and efficiency with respect to qualitative and quantitative information. This study adopted the traditional BSC framework, which considers importance weights, performance weights and norm values. The results indicated that the financial aspect is highly weighted and exhibits superior performance; however, the customer aspect ranks higher when the importance weight and norm value are considered. Managerial implications and concluding remarks are also included in the study.
Balanced scorecard performance evaluation in a closed-loop hierarchical model under uncertainty
S1568494614003950
In outbreak detection, one of the key issues is the weakness of early outbreak signals, which leaves the detection model less robust when unseen outbreak patterns vary from those in the trained model. As a result, an imbalance between a high detection rate and a low false alarm rate occurs. To solve this problem, this study proposes a novel outbreak detection model based on danger theory, a bio-inspired method that replicates how the human body fights pathogens. We propose a signal formalization approach based on the cumulative sum and a cumulative mature antigen contact value to suit the outbreak characteristics and danger theory. Two outbreak diseases, dengue and SARS, are subjected to a danger theory algorithm, namely the dendritic cell algorithm. To evaluate the model, four measurement metrics are applied: detection rate, specificity, false alarm rate, and accuracy. In the experiments, the proposed model outperforms the other detection approaches and shows a significant improvement in outbreak detection for both diseases. The findings reveal that the robustness of the proposed immune model increases when dealing with inconsistent outbreak signals. The model is able to detect new, unknown outbreak patterns and can discriminate between outbreak and non-outbreak cases with a consistently high detection rate, high sensitivity, and a lower false alarm rate, even without a training phase.
Outbreak detection model based on danger theory
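For illustration only: a minimal sketch of a one-sided CUSUM signal formalization over weekly case counts, the kind of cumulative statistic such a model could feed to the dendritic cell algorithm. The baseline window, allowance and threshold are assumptions, not the paper's settings.

```python
import numpy as np

cases = np.array([10, 12, 9, 11, 10, 14, 19, 27, 35, 30, 22, 15])
mu, sigma = cases[:5].mean(), cases[:5].std()    # baseline from early weeks (assumed)
k = 0.5 * sigma                                  # allowance parameter (assumed)

s, cusum = 0.0, []
for c in cases:
    s = max(0.0, s + (c - mu - k))               # one-sided CUSUM recursion
    cusum.append(s)

threshold = 5 * sigma                            # alarm threshold (assumed)
print([round(v, 1) for v in cusum])
print("outbreak flagged at weeks:", [t for t, v in enumerate(cusum) if v > threshold])
```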
S1568494614003962
Different methods have been proposed in the framework of multi-attribute utility theory for multi-criteria decision making. Among them, the weighted sum and weighted product models (WSM and WPM) are well known and widely accepted. To improve their accuracy, the weighted aggregated sum product assessment (WASPAS) method was proposed, which applies an aggregation operator to WSM and WPM. In this paper, an extended version of the WASPAS method is proposed that can be applied in uncertain decision-making environments. In the proposed WASPAS-IVIF method, the uncertainty of decision makers in stating their judgments and evaluations regarding criteria importance and/or the performance of alternatives on criteria is expressed by interval-valued intuitionistic fuzzy numbers. Two numerical examples, ranking derelict-building redevelopment decisions and investment alternatives, are presented. The results are then compared with the rankings provided by other methods such as TOPSIS-IVIF, COPRAS-IVIF and IFOWA. Combining the strengths of IVIFS in handling uncertainty with the enhanced accuracy of WASPAS makes the proposed method desirable for multi-criteria decision making in real-world applications.
Extension of weighted aggregated sum product assessment with interval-valued intuitionistic fuzzy numbers (WASPAS-IVIF)
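For illustration only: a sketch of the crisp WASPAS aggregation that the paper extends, Q_i = lambda * WSM_i + (1 - lambda) * WPM_i over normalized ratings; the proposed method replaces the crisp ratings below with interval-valued intuitionistic fuzzy numbers. All numbers are invented.

```python
import numpy as np

x = np.array([[0.8, 0.6, 0.9],     # normalized ratings: alternatives x criteria
              [0.7, 0.9, 0.5],
              [0.9, 0.5, 0.7]])
w = np.array([0.5, 0.3, 0.2])      # criteria weights (sum to 1)
lam = 0.5                          # aggregation parameter lambda

wsm = x @ w                        # weighted sum model:     sum_j w_j * x_ij
wpm = np.prod(x ** w, axis=1)      # weighted product model: prod_j x_ij^(w_j)
Q = lam * wsm + (1 - lam) * wpm    # WASPAS joint criterion
print("scores:", Q.round(4), "ranking:", np.argsort(-Q) + 1)
```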
S1568494614003974
Background: The application of microarray data to cancer classification is important. Researchers have tried to analyze gene expression data using various computational intelligence methods. Purpose: We propose a novel method for gene selection utilizing particle swarm optimization combined with a decision tree as the classifier, selecting a small number of informative genes from the thousands in the data that can contribute to identifying cancers. Conclusion: Statistical analysis reveals that our proposed method outperforms other popular classifiers, i.e., support vector machine, self-organizing map, back-propagation neural network, and C4.5 decision tree, in experiments on 11 gene expression cancer datasets.
Applying particle swarm optimization-based decision tree classifier for cancer classification on gene expression data
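For illustration only: a minimal sketch of binary-PSO gene selection with a decision-tree fitness, in the spirit of the proposed method. The synthetic dataset, sigmoid position update, size penalty and all parameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=100, n_features=50, n_informative=5, random_state=0)
P, D = 10, X.shape[1]                             # particles, genes

def fitness(mask):
    """Cross-validated decision-tree accuracy on the selected genes,
    lightly penalized by subset size to favor small informative subsets."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.001 * mask.sum()

pos = rng.integers(0, 2, size=(P, D)).astype(float)   # binary gene masks
vel = np.zeros((P, D))
pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pfit)].copy()

for _ in range(20):
    vel = (0.7 * vel + 1.5 * rng.random((P, D)) * (pbest - pos)
                     + 1.5 * rng.random((P, D)) * (gbest - pos))
    pos = (rng.random((P, D)) < 1 / (1 + np.exp(-vel))).astype(float)  # sigmoid update
    fit = np.array([fitness(p) for p in pos])
    better = fit > pfit
    pbest[better], pfit[better] = pos[better], fit[better]
    gbest = pbest[np.argmax(pfit)].copy()

print("genes selected:", int(gbest.sum()), "fitness:", round(pfit.max(), 3))
```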
S1568494614003986
This paper deals with the generalized traveling salesman problem, in which all nodes are partitioned into clusters and each cluster must be visited exactly once in a tour. We present an effective metaheuristic hybridized with a local search procedure to solve this problem. The proposed algorithm is based on the imperialist competitive algorithm (ICA), a recent socio-politically motivated global search strategy, enhanced here by a novel encoding scheme, an assimilation policy procedure, a destruction/construction operator and imperialist development plans. The parameters of the algorithm are calibrated by means of the Taguchi method. For evaluation, the proposed algorithm is compared against two effective existing algorithms on a set of available instances. The results demonstrate the superiority of our algorithm in both solution quality and robustness.
A novel imperialist competitive algorithm for generalized traveling salesman problems
S1568494614003998
Using benchmark problems to demonstrate novel methods and compare them with the work of others could be more widely adopted by the Soft Computing community. This article contains a collection of several benchmark problems in nonlinear control and system identification, presented in a standardized format. Each problem is augmented by examples where it has been adopted for comparison. The selected examples range from component-level to plant-level problems and originate mainly from the areas of mechatronics/drives and process systems. The authors hope that this overview contributes to a better adoption of benchmarking in method development, testing and demonstration.
Benchmark problems for nonlinear system identification and control using Soft Computing methods: Need and overview
S1568494614004001
A new approach to linguistic group decision making using probabilistic information and induced aggregation operators is presented. It is based on the induced linguistic probabilistic ordered weighted average (ILPOWA), an aggregation operator that combines probabilities and OWA operators in the same formulation, taking into account the degree of importance of each concept. It uses complex attitudinal characters that can be assessed with order-inducing variables, and it deals with uncertain environments where the information cannot be studied on a numerical scale but can be represented by linguistic variables. Several extensions of this approach are presented using moving averages and Bonferroni means. Its applicability is studied with a focus on multi-criteria group decision making, using multi-person aggregation operators to incorporate the opinions of several experts in the analysis. An illustrative example regarding importation strategies in the administration of a country is developed.
Linguistic group decision making with induced aggregation operators and probabilistic information
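For illustration only: a minimal crisp sketch of the induced probabilistic OWA core, in which the arguments are reordered by order-inducing variables and each ordered argument receives a convex combination of an OWA weight and a probability. The linguistic representation layer is omitted, and all numbers (including beta) are invented.

```python
import numpy as np

a = np.array([60.0, 40.0, 80.0, 50.0])   # arguments (e.g., indices on a linguistic scale)
u = np.array([3.0, 1.0, 4.0, 2.0])       # order-inducing variables
p = np.array([0.3, 0.3, 0.2, 0.2])       # probabilities attached to the arguments
w = np.array([0.4, 0.3, 0.2, 0.1])       # OWA weights
beta = 0.6                               # relative importance of the OWA component

order = np.argsort(-u)                   # reorder arguments by the inducing variables
weights = beta * w + (1 - beta) * p[order]   # convex combination of OWA and probability
result = weights @ a[order]
print(round(result, 2))
```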