Columns: FileName, Abstract, Title
S156849461500784X
We consider the problem faced by a company that must outsource reverse logistics (RL) activities to third-party providers. Addressing RL outsourcing problems has become an increasingly relevant issue in the management science and decision-making literatures. The correct evaluation and ranking of the decision criteria/priorities determining the selection of the best third-party RL providers (3PRLPs) is essential for the competitive performance of the outsourcing company. The method proposed in this study identifies and classifies these decision criteria. First, the relevant criteria and sub-criteria are identified using a SWOT analysis. Then, Intuitionistic Fuzzy AHP is used to evaluate the relative importance weights among the criteria and the corresponding sub-criteria. These relative weights are fed into a novel extension of Mikhailov's fuzzy preference programming method to produce local weights for all criteria and sub-criteria. Finally, these local weights are used to assign a global weight to each sub-criterion and create a ranking. We discuss the results obtained by applying the proposed model to a case study of a real company. In particular, these results show that the most important priority for the company when delegating RL activities to 3PRLPs is to focus on the core business, while reducing costs constitutes one of its least important priorities.
An integrated intuitionistic fuzzy AHP and SWOT method for outsourcing reverse logistics
S1568494615007863
The huge demand for real-time services in wireless mesh networks (WMNs) creates many challenging issues for providing quality of service (QoS). Designing QoS routing protocols that optimize multiple objectives is computationally intractable. This paper proposes a new model for routing in WMNs using the Modified Non-dominated Sorting Genetic Algorithm-II (MNSGA-II). The objectives considered here are the minimization of the expected transmission count and the transmission delay. In order to retain diversity among the non-dominated solutions, a dynamic crowding distance (DCD) procedure is implemented in NSGA-II. The simulation is carried out in Network Simulator 2 (NS-2) and comparisons are made on the metrics expected transmission count and transmission delay, varying node mobility and increasing the number of nodes. It is observed that MNSGA-II improves the throughput and minimizes the transmission delay for varying numbers of nodes and higher-mobility scenarios. The simulation clearly shows that the MNSGA-II algorithm is well suited to solving the multiobjective routing problem. A decision-making procedure based on the analytic hierarchy process (AHP) has been adopted to find the best compromise solution from the set of Pareto solutions obtained through MNSGA-II. The performance of MNSGA-II is compared with reference-point-based NSGA-II (R-NSGA-II) in terms of spread.
A multi-objective evolutionary algorithm based QoS routing in wireless mesh networks
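Diversity preservation in (M)NSGA-II rests on the crowding distance. A minimal sketch of the standard computation, in plain Python (the paper's dynamic variant, DCD, recomputes these values after each removal from the front; that loop is not shown):

```python
def crowding_distance(front):
    """Standard NSGA-II crowding distance for a list of objective vectors.

    Boundary points of each objective get infinite distance; interior
    points accumulate normalized gaps between their neighbors."""
    n = len(front)
    if n <= 2:
        return [float("inf")] * n
    m = len(front[0])                      # number of objectives
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue                       # degenerate objective: skip
        for j in range(1, n - 1):
            gap = front[order[j + 1]][k] - front[order[j - 1]][k]
            dist[order[j]] += gap / (hi - lo)
    return dist
```

Selection then prefers solutions with larger crowding distance within the same non-domination rank.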
S1568494615007875
Data clustering is a technique for grouping similar data and separating dissimilar data. Many clustering algorithms fail when dealing with multi-dimensional data. This paper introduces efficient methods for data clustering based on the Cuckoo Optimization Algorithm, called COAC, and the Fuzzy Cuckoo Optimization Algorithm, called FCOAC. The COA, inspired by the natural life of the cuckoo bird, solves continuous problems. The algorithm clusters a large dataset into a predetermined number of clusters with this meta-heuristic and refines the results with fuzzy logic. First, the algorithm generates random solutions equal in number to the cuckoo population, each with a length equal to the number of dataset objects, and computes the cost of each solution with a cost function. Finally, fuzzy logic searches for the optimal solution. The performance of our algorithm is evaluated and compared with COAC, Black Hole, CS, K-means, PSO and GSA. The results show that our algorithm performs better in comparison with them.
Efficient protocol for data clustering by fuzzy Cuckoo Optimization Algorithm
S1568494615007887
In this paper, an intelligent adaptive tracking control system (IATCS) based on the mixed H 2 /H ∞ approach under uncertain plant parameters and external disturbances for achieving high precision performance of a two-axis motion control system is proposed. The two-axis motion control system is an X–Y table driven by two permanent-magnet linear synchronous motor (PMLSM) servo drives. The proposed control scheme incorporates a mixed H 2 /H ∞ controller, a self-organizing recurrent fuzzy-wavelet-neural-network controller (SORFWNNC) and a robust controller. The combination of these control methods ensures the stability, robustness, optimality and performance of the two-axis motion control system while overcoming its uncertainties. The SORFWNNC is used as the main tracking controller to adaptively estimate an unknown nonlinear dynamic function that includes the lumped parameter uncertainties, external disturbances, cross-coupled interference and frictional force. Moreover, the structure and the parameter learning phases of the SORFWNNC are performed concurrently and online. Furthermore, a robust controller is designed to deal with the uncertainties, including the approximation error, optimal parameter vectors and higher order terms in the Taylor series. Besides, the mixed H 2 /H ∞ controller is designed such that the quadratic cost function is minimized and the worst case effect of the unknown nonlinear dynamic function on the tracking error is attenuated below a desired attenuation level. The mixed H 2 /H ∞ control design has the advantage of both H 2 optimal control performance and H ∞ robust control performance. The sufficient conditions are developed for the adaptive mixed H 2 /H ∞ tracking problem in terms of a pair of coupled algebraic equations instead of coupled nonlinear differential equations. The coupled algebraic equations can be solved analytically.
The online adaptive control laws are derived based on the Lyapunov theorem and the mixed H 2 /H ∞ tracking performance so that the stability of the proposed IATCS can be guaranteed. Furthermore, the control algorithms are implemented in a DSP-based control computer. The experimental results show that the motions along the X-axis and Y-axis are controlled separately, and that the proposed IATCS achieves favorable tracking performance and is robust to parameter uncertainties.
Intelligent mixed H2/H∞ adaptive tracking control system design using self-organizing recurrent fuzzy-wavelet-neural-network for uncertain two-axis motion control system
S1568494615007899
A novel fuzzy evaluation framework is applied in this study to evaluate service quality in the public healthcare sector. In particular, the proposed framework is based on the ServQual disconfirmation paradigm and incorporates the Analytic Hierarchy Process (AHP) method to elicit reliable estimations of service quality expectations. Moreover, degrees of uncertainty, subjectivity and vagueness on the part of stakeholders are addressed via linguistic evaluation scales parameterized by triangular fuzzy numbers. With reference to nine relevant public hospitals in the Sicilian Region (Italy), a detailed case study evaluating four core service criteria and 15 fundamental service items is conducted so as to discern dissatisfying aspects regarding the public healthcare service in the Region. Dissatisfaction reasons with the provided service are identified in the analysis as well, further demonstrating the effectiveness of the proposed approach.
A fuzzy framework to evaluate service quality in the healthcare industry: An empirical case of public hospital service evaluation in Sicily
S1568494615007905
With the development of Web technologies and the increasing usage of the Internet, more and more Web Services (WS) are deployed over the Internet. Therefore, there will be a large number of candidate services for fulfilling a desired task. In the last decade, several WS selection approaches have been proposed to cope with this challenge. In sharp contrast to the existing WS selection approaches, which focus only on user-specified preferences, in this paper we propose a flexible and effective WS selection framework which gives users an adequate way to express their preferences using linguistic terms, and enhances WS selection by leveraging their contexts and profiles. The satisfaction of a candidate WS is expressed by an objective score that takes into consideration not only the user-specified preferences, but also additional preferences extracted from the user's context and profile using fuzzy inference rules, so as to improve the effectiveness of the selection. We then introduce an effective strategy for ranking candidate services that allows for priority between the two kinds of preferences. Experimental evaluation on a real case study demonstrates the effectiveness of our proposed strategy.
A fuzzy framework for efficient user-centric Web service selection
S1568494615007917
The selection of an appropriate and stable route that enables suitable load balancing of Internet gateways is an important issue in hybrid mobile ad hoc networks. The variables employed to perform routing must not degrade other network performance metrics such as delay and packet loss. Moreover, the cost of such routing must remain affordable, e.g., low losses and few extra signaling messages. This paper proposes a new method, Steady Load Balancing Gateway Election, based on a fuzzy logic system to achieve this objective. The fuzzy system infers a new routing metric, named cost, that considers several network performance variables to select the best gateway. To solve the problem of defining the fuzzy sets, they are optimized by a genetic algorithm whose fitness function also employs fuzzy logic and is designed around four network performance metrics. The promising results confirm that ad hoc networks are characterized by great uncertainty, so the use of Computational Intelligence methods such as fuzzy logic or genetic algorithms is highly recommended.
Improving hybrid ad hoc networks: The election of gateways
S1568494615007929
In this paper, a novel neuro-fuzzy learning machine called randomized adaptive neuro-fuzzy inference system (RANFIS) is proposed for predicting the parameters of ground motion associated with seismic signals. This advanced learning machine integrates the explicit knowledge of fuzzy systems with the learning capabilities of neural networks, as in the conventional adaptive neuro-fuzzy inference system (ANFIS). In RANFIS, to accelerate the learning speed without compromising the generalization capability, the fuzzy layer parameters are not tuned. The three time-domain ground motion parameters predicted by the model are peak ground acceleration (PGA), peak ground velocity (PGV) and peak ground displacement (PGD). The model is developed using the database released by PEER (Pacific Earthquake Engineering Research Center). Each ground motion parameter is related mainly to four seismic parameters, namely earthquake magnitude, faulting mechanism, source-to-site distance and average soil shear wave velocity. The experimental results validate the improved performance of the machine, with less computation time compared to prior studies.
Prediction of ground motion parameters using randomized ANFIS (RANFIS)
S1568494615007942
Prototype generation (PG) methods aim to find a subset of instances taken from a large training data set, in such a way that classification performance (commonly, using a 1NN classifier) when using prototypes is equal to or better than that obtained when using the original training set. Several PG methods have been proposed so far; most of them consider a small subset of training instances as initial prototypes and modify them trying to maximize the classification performance on the whole training set. Although some of these methods have obtained acceptable results, training instances may be under-exploited, because most of the time they are only used to guide the search process. This paper introduces a PG method based on genetic programming in which many training samples are combined through arithmetic operators to build highly effective prototypes. The genetic program aims to generate prototypes that maximize an estimate of the generalization performance of a 1NN classifier. Experimental results are reported on benchmark data used to assess PG methods. Several aspects of the genetic program are evaluated and compared to many alternative PG methods. The empirical assessment shows the effectiveness of the proposed approach, which outperforms most state-of-the-art PG techniques on both small and large data sets. Better results were obtained for data sets with numeric attributes only, although the performance of the proposed technique on mixed data was very competitive as well.
PGGP: Prototype Generation via Genetic Programming
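The fitness such a genetic program maximizes is, in essence, the accuracy of a 1NN classifier over the generated prototypes. A minimal sketch, assuming list-of-tuples data (the GP operators that combine training samples into prototypes are not shown):

```python
import math

def nn_accuracy(prototypes, proto_labels, data, data_labels):
    """1NN accuracy of `data` when classified against `prototypes` --
    the kind of objective a prototype-generation method optimizes."""
    correct = 0
    for x, y in zip(data, data_labels):
        # index of the nearest prototype under Euclidean distance
        nearest = min(range(len(prototypes)),
                      key=lambda i: math.dist(x, prototypes[i]))
        correct += proto_labels[nearest] == y
    return correct / len(data)
```

In a PG method this score would be estimated on held-out folds of the training set rather than the full training data, to approximate generalization performance.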
S1568494615007954
This paper discusses the issues related to the process of global decision-making on the basis of knowledge which is stored in several local knowledge bases. The approach considered in this paper is very general because we do not assume any additional conditions on the sets of objects or the sets of conditional attributes of local knowledge bases. The paper proposes a new approach to organizing the structure of a multi-agent decision-making system which operates on the basis of dispersed knowledge. In the presented system, the local knowledge bases are combined into groups in a dynamic way. We seek to designate groups of local bases that classify the test object to the decision classes in a similar manner. Then, a process of eliminating inconsistencies in the knowledge is carried out within the created groups. Global decisions are made using one of the methods for conflict analysis. The paper includes the definition of a multi-agent decision-making system with dynamically generated clusters and a description of the global decision-making process. In addition, the paper presents the results of experiments carried out on data from the UCI repository.
Global decision-making in multi-agent decision-making system with dynamically generated disjoint clusters
S1568494615007966
Estimating job cycle time is an important task for a semiconductor manufacturer as it helps to strengthen relationships with customers and is also conducive to the sustainable development of the manufacturer. The research trend in this field has moved toward the development of hybrid methods, especially those that are classification-based. Most existing methods use pre-classification; however, such methods have several drawbacks, such as incompatibility with the estimation method and unequal sizes of different job groups. In contrast, a post-classification approach has great potential, and therefore is used as a basis for the new approach in this study. In the proposed methodology, a systematic procedure is established to divide jobs into several groups according to their estimation errors. In this way, the classification and estimation stages can be combined seamlessly because they optimize the same objectives. A real case is used to evaluate the effectiveness of the proposed methodology and the experimental results support its superiority over several existing methods. The shortcomings of the existing methods based on pre-classification are also clearly illustrated.
Estimating job cycle time in a wafer fabrication factory: A novel and effective approach based on post-classification
S1568494615007978
When developing an information system we must, in some way, determine the development order of its subsystems. Currently, this problem is not formally solved. To rectify this, we propose a solution which takes the sum of the weights of feedback arcs as the criterion for determining the development order, rather than some other criterion that does not come directly from the information system description. To solve this problem we have developed, analyzed, and tested a Branch and Bound algorithm and a Monte-Carlo randomized algorithm which solves the Information System Subsystems Development Order problem in polynomial time with arbitrary probability. We have also determined an approximation error for the developed Monte-Carlo randomized algorithm. Lastly, we have proven that the Information System Subsystems Development Order problem is NP-hard, NP-complete, and APX-hard.
Monte-Carlo randomized algorithm for minimal feedback arc set problem
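The randomized idea can be illustrated with ordering-based sampling: for any linear order of the nodes, the backward arcs form a feedback arc set, so sampling random orders and keeping the lightest backward set gives a simple probabilistic heuristic. This is an illustrative sketch of that idea, not the paper's exact algorithm:

```python
import random

def mc_feedback_arc_weight(nodes, arcs, trials=500, seed=1):
    """Monte-Carlo heuristic for the minimum-weight feedback arc set.

    `arcs` is a list of (u, v, weight) triples. Each trial draws a random
    ordering of the nodes; the arcs pointing 'backward' in that ordering
    are a feasible feedback arc set, and we keep the lightest one seen."""
    rng = random.Random(seed)
    best = float("inf")
    order = list(nodes)
    for _ in range(trials):
        rng.shuffle(order)
        pos = {v: i for i, v in enumerate(order)}
        weight = sum(w for u, v, w in arcs if pos[u] > pos[v])
        best = min(best, weight)
    return best
```

With enough trials the estimate reaches the optimum with high probability, which is the "arbitrary probability" flavor of guarantee the abstract refers to.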
S156849461500798X
This paper proposes relaxed conditions for control synthesis of discrete-time Takagi–Sugeno fuzzy control systems under unreliable communication links. To widen the applicability of the fuzzy control approach under network environments, a novel fuzzy controller, which is homogenous polynomially parameter-dependent on both the current-time normalized fuzzy weighting functions and the multi-steps-past normalized fuzzy weighting functions, is provided to make much more use of the information of the underlying system. Moreover, a new kind of slack variable approach is also developed and thus the algebraic properties of these multi-instant normalized fuzzy weighting functions are collected into some augmented matrices. As a result, the conservatism of control synthesis of discrete-time Takagi–Sugeno fuzzy control systems under unreliable communication links can be significantly reduced. Two illustrative examples are presented to demonstrate the effectiveness of the theoretical development.
Relaxed fuzzy control synthesis of nonlinear networked systems under unreliable communication links
S1568494615007991
Recently, the TODIM (an acronym in Portuguese for Interactive Multi-criteria Decision Making) approach, which can characterize the decision makers’ psychological behaviours under risk, has been introduced to handle multi-criteria decision making (MCDM) problems. Moreover, the Pythagorean fuzzy set is an effective tool for depicting the uncertainty of MCDM problems. In this paper, based on prospect theory, we first extend the TODIM approach to solve MCDM problems with Pythagorean fuzzy information. Then, we conduct simulation tests to analyze how the risk attitudes of the decision makers influence the results of MCDM under uncertainty. Finally, a case study on selecting the governor of the Asian Infrastructure Investment Bank is presented to show the applicability of the proposed approach.
Pythagorean fuzzy TODIM approach to multi-criteria decision making
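As a concrete anchor for the Pythagorean fuzzy machinery: a Pythagorean fuzzy number is a membership/non-membership pair (mu, nu) with mu^2 + nu^2 <= 1, and comparisons in methods like TODIM rest on a score function over such pairs. The sketch below uses the widely used score s = mu^2 - nu^2, which may differ from the exact function adopted in the paper:

```python
def pfn_score(mu, nu):
    """Score of a Pythagorean fuzzy number (mu, nu).

    Validity requires mu^2 + nu^2 <= 1 (a weaker constraint than the
    intuitionistic mu + nu <= 1, which is what widens applicability).
    A common score function is s = mu^2 - nu^2, in [-1, 1]."""
    assert mu ** 2 + nu ** 2 <= 1 + 1e-12, "not a valid Pythagorean fuzzy number"
    return mu ** 2 - nu ** 2
```

Higher scores indicate stronger support; TODIM-style dominance comparisons between alternatives can then be driven by orderings of these scores.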
S1568494615008005
Many good evolutionary algorithms have been proposed in the past. However, given a problem, one is frequently at a loss as to which algorithm to choose. In this paper, we propose a novel algorithm portfolio approach to address this problem for single-objective optimization. A portfolio of evolutionary algorithms is first formed. Covariance Matrix Adaptation Evolution Strategy (CMA-ES), History driven Evolutionary Algorithm (HdEA), Particle Swarm Optimization (PSO2011) and Self-adaptive Differential Evolution (SaDE) are chosen as component algorithms. Each algorithm runs independently, with no information exchange. At any point in time, the algorithm with the best predicted performance is run for one generation, after which the performance is predicted again. The best algorithm runs for the next generation, and the process goes on. In this way, algorithms switch automatically as a function of the computational budget. This novel algorithm is named Multiple Evolutionary Algorithm (MultiEA). The predictor we introduce has the nice property of being parameter-less. The following contributions are made: (1) experimental results on 24 benchmark functions show that MultiEA outperforms (i) Multialgorithm Genetically Adaptive Method for Single Objective Optimization (AMALGAM-SO); (ii) Population-based Algorithm Portfolio (PAP); (iii) a multiple-algorithm approach which chooses an algorithm randomly (RandEA); and (iv) a multiple-algorithm approach which divides the computational budget evenly and executes all algorithms in parallel (ExhEA). This shows that it outperforms existing portfolio approaches and that the predictor is functioning well. (2) Moreover, a head-to-head comparison of MultiEA with CMA-ES, HdEA, PSO2011, and SaDE is also made. Experimental results show that the performance of MultiEA is very competitive.
In particular, MultiEA, being a portfolio algorithm, is sometimes even better than all its individual algorithms, and has more robust performance. (3) Furthermore, a positive synergic effect is discovered, namely, MultiEA can sometimes perform better than the sum of its individual EAs. This gives interesting insights into why an algorithm portfolio is a good approach. (4) It is found that MultiEA scales as well as the best algorithm in the portfolio. This suggests that MultiEA scales up nicely, which is a desirable algorithmic feature. (5) Finally, the performance of MultiEA is investigated on a real world problem. It is found that MultiEA can select the most suitable algorithm for the problem and is much better than choosing algorithms randomly.
Which algorithm should I choose: An evolutionary algorithm portfolio approach
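The switching mechanism can be caricatured in a few lines: the component optimizers keep separate states, and at each step the budget goes to the one that currently looks best. The sketch below uses the current best fitness as a crude stand-in for the paper's performance predictor, with two toy optimizers (random search and Gaussian local search) instead of the real EA portfolio:

```python
import random

def multi_ea_sketch(f, dim, budget, seed=0):
    """Toy portfolio in the spirit of MultiEA (minimization).

    Each 'algorithm' holds a (best_value, best_point) state; per step,
    the current leader receives the next generation of budget."""
    rng = random.Random(seed)

    def random_search_step(state):
        # propose a fresh uniform point in [-5, 5]^dim
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        return min(state, (f(x), x))

    def local_search_step(state):
        # perturb the incumbent with small Gaussian noise
        fx, x = state
        y = [xi + rng.gauss(0, 0.1) for xi in x]
        return min(state, (f(y), y))

    x0 = [rng.uniform(-5, 5) for _ in range(dim)]
    states = [(f(x0), list(x0)), (f(x0), list(x0))]
    steps = [random_search_step, local_search_step]
    for _ in range(budget):
        i = min((0, 1), key=lambda j: states[j][0])  # leader gets the budget
        states[i] = steps[i](states[i])
    return min(states)[0]
```

The real MultiEA predicts *future* performance of each component rather than reading off the current best, which is what lets a temporarily lagging algorithm win back the budget.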
S1568494615008029
This paper deals with the attitude tracking control problem for a 2 DoF laboratory helicopter using an optimal linear quadratic regulator (LQR). As the performance of the LQR controller greatly depends on the weighting matrices (Q and R), it is important to select them optimally. However, the weighting matrices are normally selected by a trial-and-error approach, which makes the controller design not only tedious but also time-consuming. Hence, to address the weighting matrices selection problem of LQR, in this paper we propose an adaptive particle swarm optimization (APSO) method to obtain the elements of the Q and R matrices. Moreover, to enhance the convergence speed and precision of conventional PSO, an adaptive inertia weight factor (AIWF) is introduced in the velocity update equation of PSO. One of the key features of the AIWF is that, unlike standard PSO in which the inertia weight is kept constant throughout the optimization process, the weights are varied adaptively according to the success rate of the particles towards the optimum value. The proposed APSO based LQR control strategy is applied to pitch and yaw axes control of a 2 Degrees of Freedom (DoF) laboratory helicopter workstation, which is a highly nonlinear and unstable system. Experimental results substantiate that the weights optimized using APSO, compared to PSO, result in not only reduced tracking error but also improved tracking response with reduced oscillations.
Adaptive PSO for optimal LQR tracking control of 2 DoF laboratory helicopter
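The AIWF idea, tying the inertia weight to the particles' recent success rate instead of keeping it constant, can be sketched with an illustrative linear map (an assumption of this sketch, not the paper's exact formula):

```python
def adaptive_inertia(success_rate, w_min=0.4, w_max=0.9):
    """Illustrative adaptive inertia weight: when many particles have
    recently improved (high success rate), keep inertia high to explore;
    when improvements stall, lower it to exploit.

    `success_rate` is the fraction of particles that improved their
    personal best in the last iteration, in [0, 1]."""
    return w_min + (w_max - w_min) * success_rate
```

The returned weight multiplies the velocity term in the standard PSO update, v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), so the balance between exploration and exploitation adapts over the run.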
S1568494615008030
A novel and generic multi-objective design paradigm is proposed which utilizes quantum-behaved PSO (QPSO) for deciding the optimal configuration of the LQR controller for a given problem considering a set of competing objectives. There are three main contributions introduced in this paper, as follows. (1) The standard QPSO algorithm is reinforced with an informed initialization scheme based on the simulated annealing algorithm and a Gaussian neighborhood selection mechanism. (2) It is also augmented with a local search strategy which integrates the advantages of memetic algorithms into conventional QPSO. (3) An aggregated dynamic weighting criterion is introduced that dynamically combines the soft and hard constraints with control objectives to provide the designer with a set of Pareto optimal solutions, and lets the designer decide on the target solution based on practical preferences. The proposed method is compared against a gradient-based method, seven meta-heuristics, and the trial-and-error method on two control benchmarks using sensitivity analysis and full factorial parameter selection, and the results are validated using a one-tailed t-test. The experimental results suggest that the proposed method outperforms the opponent methods in terms of controller effort, measures associated with the transient response and criteria related to the steady state.
Multi-objective design of state feedback controllers using reinforced quantum-behaved particle swarm optimization
S1568494615008066
Location selection is a multi-dimensional issue which requires consideration of quantitative and qualitative evaluation criteria. Some of these criteria may have imprecise and uncertain data, which makes the location selection decision hard to progress. Although many multi attribute decision making (MADM) techniques are utilized in the location decision field, there is a lack of studies which provide solutions considering a high number of supply chain uncertainties. In this study, to overcome the drawbacks of traditional MADM techniques, a novel MADM approach is applied to location decisions under high uncertainty for the first time. In the proposed model, a new notion named cloud based design optimization (CBDO) is utilized, because CBDO can take certain and uncertain factors into consideration simultaneously. Furthermore, it provides a robust worst-case solution compared with existing approaches by mediating between aspects of fuzzy set theory and probability distributions. Robustness enables decision makers to have managerial and operational foresight about possible unexpected situations and to take necessary actions against risk. An illustrative example of the warehouse location selection problem is conducted to indicate the performance of the proposed approach. The results reveal that the location decision is very sensitive to the consideration of uncertainty and that CBDO can be a helpful supportive tool for decision makers in providing solutions under high uncertainty.
A novel multi attribute decision making approach for location decision under high uncertainty
S1568494615008078
Wireless sensor networks (WSNs) are one of the most important technologies of this century. As sensor nodes have limited energy resources, designing energy-efficient routing algorithms for WSNs has become a research focus. Because WSN routing for maximizing the network lifetime is an NP-hard problem, many researchers try to optimize it with meta-heuristics. However, due to the uncertain number of variables and strong constraints of the WSN routing problem, most meta-heuristics are inappropriate for designing routing algorithms for WSNs. This paper proposes an Improved Harmony Search Based Energy Efficient Routing Algorithm (IHSBEER) for WSNs, which is based on the harmony search (HS) algorithm, a meta-heuristic. To address the WSN routing problem with the HS algorithm, several key improvements have been put forward. First, the encoding of the harmony memory has been improved based on the characteristics of routing in WSNs. Second, the improvisation of a new harmony has also been improved: we introduce dynamic adaptation of the parameter HMCR to avoid prematurity in early generations and strengthen local search ability in late generations, while the adjustment process of the HS algorithm has been discarded so that the proposed routing algorithm contains fewer parameters. Third, an effective local search strategy is proposed to enhance the local search ability, so as to improve the convergence speed and accuracy of the routing algorithm. In addition, an objective function model that considers both the energy consumption and the length of the path is developed. Detailed descriptions and performance test results of the proposed approach are included. The experimental results clearly show the advantages of the proposed routing algorithm for WSNs.
An improved harmony search based energy-efficient routing algorithm for wireless sensor networks
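The HMCR-controlled improvisation step at the heart of harmony search can be sketched as follows; IHSBEER additionally adapts HMCR over generations and discards the adjustment step, which this minimal fixed-HMCR version does not show:

```python
import random

def improvise(memory, hmcr=0.9, domain=(0.0, 1.0), rng=None):
    """One harmony-search improvisation step.

    `memory` is a list of stored harmonies (solution vectors). Each
    decision variable is taken from the corresponding column of harmony
    memory with probability HMCR, otherwise sampled fresh from `domain`."""
    rng = rng or random.Random(0)
    new = []
    for j in range(len(memory[0])):
        if rng.random() < hmcr:
            new.append(rng.choice(memory)[j])  # memory consideration
        else:
            new.append(rng.uniform(*domain))   # random re-initialization
    return new
```

In the full algorithm the improvised harmony replaces the worst member of memory when its objective value is better, and for routing the vector entries would encode next-hop choices rather than raw reals.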
S156849461500808X
Two soft-computing techniques are implemented to model and optimize the compressive strength of carbon/polymer composites. Artificial neural network is used to establish a relationship between the uniaxial compressive strength of fabricated materials and the most significant processing parameters. To put together a database, three different types of wood are carbonized at various heat treatment temperatures, in specific pyrolysis time periods. Compression tests are then conducted at room temperature on the composites, at a constant strain rate. The collected data of compressive strength and the related fabrication parameters are used as sets of data for training a neural network. A nested cross validation scheme is used to ensure the efficiency of the network. Results are indicative of a very good network, which generalizes very well. Next, an attempt is made to optimize the compressive behavior of the composites by controlling carbonization temperature, time and also starting material type with the aid of a genetic algorithm coupled with the trained network. The optimization system yields promising results, significantly enhancing the compressive strength. The validity of the optimal experiment, as proposed by the soft-computing system, is verified by subsequent laboratory testing.
Intelligent use of data to optimize compressive strength of cellulose-derived composites
S1568494615008091
In this paper, we investigate multiple attribute decision making (MADM) problems based on Frank triangular norms, in which the attribute values assume the form of hesitant fuzzy information. Firstly, some basic concepts of hesitant fuzzy set (HFS) and the Frank triangle norms are introduced. We develop some hesitant fuzzy aggregation operators based on Frank operations, such as hesitant fuzzy Frank weighted average (HFFWA) operator, hesitant fuzzy Frank ordered weighted averaging (HFFOWA) operator, hesitant fuzzy Frank hybrid averaging (HFFHA) operator, hesitant fuzzy Frank weighted geometric (HFFWG) operator, hesitant fuzzy Frank ordered weighted geometric (HFFOWG) operator, and hesitant fuzzy Frank hybrid geometric (HFFHG) operator. Some essential properties together with their special cases are discussed in detail. Next, a procedure of multiple attribute decision making based on the HFFHWA (or HFFHWG) operator is presented under hesitant fuzzy environment. Finally, a practical example that concerns the human resource selection is provided to illustrate the decision steps of the proposed method. The result demonstrates the practicality and effectiveness of the new method. A comparative analysis is also presented.
Frank aggregation operators and their application to hesitant fuzzy multiple attribute decision making
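The Frank operations underlying these aggregation operators are built on the Frank t-norm and its dual t-conorm; a small sketch with a fixed parameter lam (the hesitant fuzzy operators apply these element-wise over hesitant fuzzy elements, which is not shown):

```python
import math

def frank_tnorm(a, b, lam=2.0):
    """Frank t-norm T(a, b) = log_lam(1 + (lam^a - 1)(lam^b - 1)/(lam - 1)),
    defined for lam > 0, lam != 1; as lam -> 1 it approaches the product
    t-norm a*b."""
    return math.log(1 + (lam ** a - 1) * (lam ** b - 1) / (lam - 1), lam)

def frank_tconorm(a, b, lam=2.0):
    """Dual Frank t-conorm S(a, b) = 1 - T(1 - a, 1 - b)."""
    return 1 - frank_tnorm(1 - a, 1 - b, lam)
```

The t-conorm drives the "addition" in the weighted-average operators and the t-norm drives the "multiplication" in the geometric ones, with lam acting as a family parameter that interpolates between classical operation laws.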
S1568494615008121
Estimating the electrical and mechanical parameters involved in three-phase induction motors is frequently employed to avoid measuring every variable in the process. Among mechanical parameters, speed is an important variable: it is involved in control, diagnosis and condition monitoring, and can be measured or estimated by sensorless methods. These technologies offer advantages when compared with direct measurement, such as lower cost or more robust systems. This paper proposes the use of artificial neural networks to estimate rotor speed by using current sensors for balanced and unbalanced voltage sources with a wide mechanical load range in a line-connected induction motor. This paper also presents two case analyses: (i) a single current sensor; and (ii) multiple current sensors. Simulation and experimental results are presented to validate the proposed approach. A neural speed estimator embedded in a digital processor is also presented.
Neural speed estimator for line-connected induction motor embedded in a digital processor
S1568494615008133
In the bioinformatics community, it is important to find an accurate and simultaneous alignment among diverse biological sequences that are assumed to have an evolutionary relationship. From the alignment, sequence homology is inferred and the shared evolutionary origins of the sequences are extracted using phylogenetic analysis. This problem is known as the multiple sequence alignment (MSA) problem. In the literature, several approaches have been proposed to solve the MSA problem, such as progressive alignment methods, consistency-based algorithms, or genetic algorithms (GAs). In this work, we propose a hybrid multiobjective evolutionary algorithm based on the behaviour of honey bees for solving the MSA problem: the hybrid multiobjective artificial bee colony (HMOABC) algorithm. HMOABC considers two objective functions with the aim of preserving the quality and consistency of the alignment: the weighted sum-of-pairs function with affine gap penalties (WSP) and the number of totally conserved (TC) columns score. In order to assess the accuracy of HMOABC, we have used the BAliBASE benchmark (version 3.0), which according to its developers presents more challenging test cases representing the real problems encountered when aligning large sets of complex sequences. Our multiobjective approach has been compared with 13 well-known methods in the bioinformatics field and with 6 other evolutionary algorithms published in the literature.
Hybrid multiobjective artificial bee colony for multiple sequence alignment
S1568494615008145
This paper deals with an unrelated parallel machines scheduling problem with past-sequence-dependent setup times, release dates, deteriorating jobs and learning effects, in which the actual processing time of a job on each machine is given as a function of its starting time, release time and position on the corresponding machine. In addition, the setup time of a job on each machine is proportional to the actual processing times of the already processed jobs on the corresponding machine, i.e., the setup times are past-sequence-dependent (p-s-d). The objective is to determine jointly the jobs assigned to each machine and the order of jobs such that the total machine load is minimized. Since the problem is NP-hard, optimal solutions for instances of realistic size cannot be obtained within a reasonable amount of computational time using exact solution approaches. Hence, an efficient method based on hybrid particle swarm optimization (PSO) and genetic algorithm (GA), denoted HPSOGA, is proposed to solve the given problem. In view of the fact that the efficiency of meta-heuristic algorithms significantly depends on the appropriate design of parameters, the Taguchi method is employed to calibrate and select the optimal levels of parameters. The performance of the proposed method is appraised by comparing its results with GA and PSO with and without local search through computational experiments. The computational results for small-sized problems show that the mentioned algorithms are fully effective and viable for generating optimal/near-optimal solutions, but when the size of the problem is increased, HPSOGA obtains better results in comparison with the other algorithms.
A robust hybrid approach based on particle swarm optimization and genetic algorithm to minimize the total machine load on unrelated parallel machines
S1568494615008157
Programmed pulse width modulation is an optimized pulse width modulation which is particularly applicable for high-power applications where the power losses must be kept below firm limits. Based on offline estimation, it is capable of pre-programming the harmonic profile of the output waveform over a range of modulation indices by eliminating some lower-order harmonics. In this paper, an improved firefly algorithm (FA) is applied to determine the optimum switching angles for an 11-level cascaded H-bridge multilevel inverter (MLI) with adjustable DC sources in order to eliminate pre-specified lower-order harmonics and to achieve the desired fundamental voltage. Though a number of optimization algorithms are available for the estimation of switching angles, the firefly algorithm takes the least computation time and surpasses 11 other metaheuristic algorithms. The algorithm and the model are developed using MATLAB and the validity of the simulation is confirmed by an experimental setup using an FPGA Spartan 6A DSP. Results are compared with those obtained using particle swarm optimization (PSO) and the artificial bee colony algorithm (ABCA), and it is shown that the proposed method offers reduced total harmonic distortion (THD) with a shorter computation period.
Application of improved firefly algorithm for programmed PWM in multilevel inverter with adjustable DC sources
S1568494615008182
A comparative study of the impacts of various local search methodologies for the surrogate-assisted multi-objective memetic algorithm (MOMA) is presented in this paper. The base algorithm for the comparative study is the single surrogate-assisted MOMA (SS-MOMA) with the main aim being to solve expensive problems with a limited computational budget. In addition to the standard weighted sum (WS) method used in the original SS-MOMA, we studied the capabilities of other local search methods based on the achievement scalarizing function (ASF), Chebyshev function, and random mutation hill climber (RMHC) in various test problems. Several practical aspects, such as normalization and constraint handling, were also studied and implemented to deal with real-world problems. Results from the test problems showed that, in general, the SS-MOMA with ASF and Chebyshev functions was able to find higher-quality solutions that were more robust than those found with WS or RMHC; although on problems with more complicated Pareto sets SS-MOMA-WS appeared as the best. SS-MOMA-ASF in conjunction with the Chebyshev function was then tested on an airfoil-optimization problem and compared with SS-MOMA-WS and the non-dominated sorting based genetic algorithm-II (NSGA-II). The results from the airfoil problem clearly showed that SS-MOMA with an achievement-type function could find more diverse solutions than SS-MOMA-WS and NSGA-II. This suggested that for real-world applications, higher-quality solutions are more likely to be found when the surrogate-based memetic optimizer is equipped with ASF or a Chebyshev function than with other local search methods.
A comparative study of local search within a surrogate-assisted multi-objective memetic algorithm framework for expensive problems
S156849461600003X
The theoretical study of the differential evolution (DE) algorithm has gradually attracted the attention of more and more researchers. According to recent research, classical DE cannot guarantee global convergence in probability except for some special functions. From this perspective, a natural question is: on which functions can DE not guarantee global convergence? This paper first shows that DE variants have difficulty solving a class of multimodal functions (such as the Shifted Rotated Ackley's function) identified by two characteristics. One is that the global optimum of the function is near a boundary of the search space. The other is that the function has a large set of deceptive optima in the search space. By simplifying this class of multimodal functions, the paper then constructs a Linear Deceptive function. Finally, the paper develops a random drift model of the classical DE algorithm to prove that the algorithm cannot guarantee global convergence on the class of functions identified by the two above characteristics.
Not guaranteeing convergence of differential evolution on a class of multimodal functions
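For context, the classical DE scheme whose drift model the paper analyzes (DE/rand/1/bin) can be sketched as follows. This is a generic minimal implementation run on a unimodal sphere function, not the authors' model or a deceptive function; population size, F, CR and generation count are illustrative choices.

```python
import random

def de(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Classical DE/rand/1/bin minimizer (illustrative sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct individuals other than i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # at least one mutated coordinate
            trial = [
                pop[a][j] + F * (pop[b][j] - pop[c][j])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            # clamp the trial vector to the search space
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Sphere function: unimodal, so classical DE converges without difficulty --
# the paper's point is that this is not guaranteed on deceptive landscapes.
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = de(sphere, [(-5.0, 5.0)] * 3)
```

On the deceptive class the paper constructs, the same greedy selection is what allows the population to drift toward the deceptive optima set instead of the boundary optimum.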
S1568494616000053
This paper studies three techniques for outlier detection in the context of Wireless Sensor Networks: a machine learning technique, a Principal Component Analysis-based methodology and a univariate statistics-based approach. The first methodology is based on a Least Squares-Support Vector Machine technique, together with sliding-window learning. A modification to this approach is also considered in order to improve its performance on non-stationary time series. The second methodology relies on Principal Component Analysis, along with robust orthonormal projection approximation subspace tracking with rank-1 modification, while the last approach is based on univariate statistics within an oversampling mechanism. All methods are implemented under a hierarchical multi-agent framework and compared through experiments carried out on a test-bed.
Detection and accommodation of outliers in Wireless Sensor Networks within a multi-agent framework
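The univariate statistics-based family of detectors can be illustrated with a minimal sliding-window rule: flag a sample that deviates from the window mean by more than k standard deviations. This is a generic sketch, not the paper's oversampling mechanism; the window size, threshold and injected spike are assumptions.

```python
from collections import deque
from statistics import mean, stdev

def sliding_outliers(stream, window=20, k=3.0):
    """Flag points more than k standard deviations from the mean
    of the preceding window (a simple univariate rule)."""
    buf = deque(maxlen=window)
    flags = []
    for x in stream:
        if len(buf) == buf.maxlen:
            m, s = mean(buf), stdev(buf)
            flags.append(s > 0 and abs(x - m) > k * s)
        else:
            flags.append(False)  # not enough history yet
        buf.append(x)
    return flags

# A repeating sawtooth signal with one injected spike at index 60.
data = [i % 10 * 0.1 for i in range(100)]
data[60] = 25.0
flags = sliding_outliers(data)
```

A real sensor-node deployment would additionally have to decide how to accommodate the flagged value (e.g. replace it by the window mean) so it does not inflate the statistics for subsequent samples.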
S1568494616000065
Interval fuzzy preference relations, which can cope well with vagueness and uncertainty, are commonly used by decision makers. The most crucial issue is how to derive the interval priority vector from an interval fuzzy preference relation. This paper first analyzes the size of the interval priority weights. Then, two linear programming models are built, by which the interval priority weights are obtained. Considering the inconsistent case, two consistency-based linear programming models are built to derive additive consistent fuzzy preference relations. Different from current methods, the new models consider the consistency and the interval priority weights simultaneously. In some situations, the decision maker may only offer an incomplete interval fuzzy preference relation, namely, some judgments are missing. To cope with this situation, we first classify the missing intervals into three categories and then apply the associated linear equations to denote the missing values. After that, we construct two consistency-based linear programming models to determine the missing values in both the consistent and inconsistent cases. It is worth noting that the built models can cope with situations where ignorance objects exist. Meanwhile, the associated numerical examples are offered, and a comparative analysis is made.
Consistency-based linear programming models for generating the priority vector from interval fuzzy preference relations
S1568494616000077
In this paper, we improve D. Karaboga's Artificial Bee Colony (ABC) optimization algorithm by using the sensitivity analysis method described by Morris. Many improvements of the ABC algorithm have been made, with effective results. In this paper, we propose a new approach to random selection in neighborhood search. As the algorithm runs, we apply a sensitivity analysis method, Morris' OAT (One-At-a-Time) method, to orient the random selection of the dimension to shift. Morris' method detects which dimensions have a high influence on the objective function result and promotes the search along these dimensions. The result of this analysis drives the ABC algorithm towards significant dimensions of the search space to improve the discovery of the global optimum. We also demonstrate that this method is fruitful for more recent improvements of the ABC algorithm, such as GABC, MeABC and qABC.
A sensitivity analysis method for driving the Artificial Bee Colony algorithm's search process
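Morris' OAT method itself can be sketched independently of ABC: each trajectory perturbs one dimension at a time by a fixed step delta and records the resulting elementary effect. The version below (grid levels, trajectory count and the test function are illustrative assumptions) returns the mean absolute effect per dimension, which is the quantity that would bias the choice of which dimension to shift.

```python
import random

def morris_effects(f, dim, levels=4, trajectories=10, seed=0):
    """Estimate Morris elementary effects: mean |EE| per dimension."""
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))
    grid = [i / (levels - 1) for i in range(levels)]
    sums = [0.0] * dim
    for _ in range(trajectories):
        # start in the lower half of the grid so x + delta stays in [0, 1]
        x = [rng.choice(grid[: levels // 2]) for _ in range(dim)]
        fx = f(x)
        for j in rng.sample(range(dim), dim):  # one-at-a-time moves
            x2 = list(x)
            x2[j] = x[j] + delta
            f2 = f(x2)
            sums[j] += abs((f2 - fx) / delta)  # elementary effect
            x, fx = x2, f2
    return [s / trajectories for s in sums]

# f depends strongly on x0, weakly on x1, and not at all on x2.
f = lambda x: 10 * x[0] + 0.1 * x[1]
mu = morris_effects(f, 3)
```

For this linear test function the elementary effects are exact, so the ranking mu[0] > mu[1] > mu[2] directly recovers which dimensions deserve more search effort.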
S1568494616000089
The aim of this paper is to propose a new aggregation method to solve heterogeneous MAGDM problems involving real numbers, interval numbers, triangular fuzzy numbers (TFNs), trapezoidal fuzzy numbers (TrFNs), linguistic values and Atanassov's intuitionistic fuzzy numbers (AIFNs). Firstly, motivated by the relative closeness of the technique for order preference by similarity to ideal solution (TOPSIS), we propose a new general method for aggregating crisp values, TFNs, TrFNs and linguistic values into AIFNs. Thus all the group decision matrices for each alternative, which involve heterogeneous information, are transformed into an Atanassov's intuitionistic fuzzy decision matrix which only contains AIFNs. To determine the attribute weights, a multiple-objective Atanassov's intuitionistic fuzzy programming model is constructed and solved by converting it into a linear program. Subsequently, comparison analyses demonstrate that the proposed aggregation technique can overcome the drawbacks of existing methods. An example about cloud computing service evaluation is given to verify the practicality and effectiveness of the proposed method.
Aggregating decision information into Atanassov's intuitionistic fuzzy numbers for heterogeneous multi-attribute group decision making
S1568494616000090
The artificial bee colony algorithm (ABC) is a swarm intelligence method which imitates the foraging behavior of honeybees. Due to its simple implementation with a very small number of control parameters, many efforts have been made to explore ABC research in both algorithms and applications. In this paper, a new ABC variant named ABC with memory (ABCM) is described, which adds a memory mechanism to the artificial bees so that they memorize their previous successful foraging experiences. The memory mechanism is applied to guide the further foraging of the artificial bees. Essentially, ABCM is inspired by the biological study of natural honeybees, rather than integrating existing algorithms into the ABC framework as most other ABC variants do. The superiority of ABCM is analyzed on a set of benchmark problems in comparison with ABC, quick ABC and several state-of-the-art algorithms.
Artificial bee colony algorithm with memory
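As a baseline for what ABCM extends, the plain ABC loop (employed, onlooker and scout phases) can be sketched as follows. This is the standard algorithm without the memory mechanism; colony size, abandonment limit and cycle count are illustrative, and the onlooker weighting assumes a non-negative objective.

```python
import random

def abc_minimize(f, bounds, sn=10, limit=20, cycles=100, seed=3):
    """Basic artificial bee colony (plain ABC, not the ABCM memory variant)."""
    rng = random.Random(seed)
    dim = len(bounds)
    foods = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(sn)]
    fit = [f(x) for x in foods]
    trials = [0] * sn  # counts failed improvement attempts per source

    def try_improve(i):
        k = rng.choice([j for j in range(sn) if j != i])
        d = rng.randrange(dim)
        v = list(foods[i])
        v[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        lo, hi = bounds[d]
        v[d] = min(max(v[d], lo), hi)
        fv = f(v)
        if fv < fit[i]:
            foods[i], fit[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(sn):                      # employed bee phase
            try_improve(i)
        weights = [1.0 / (1.0 + fi) for fi in fit]   # onlooker phase
        for _ in range(sn):
            try_improve(rng.choices(range(sn), weights)[0])
        worst = max(range(sn), key=lambda i: trials[i])  # scout phase
        if trials[worst] > limit:
            foods[worst] = [rng.uniform(lo, hi) for lo, hi in bounds]
            fit[worst], trials[worst] = f(foods[worst]), 0
    b = min(range(sn), key=lambda i: fit[i])
    return foods[b], fit[b]

x_best, f_best = abc_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

ABCM's contribution sits inside `try_improve`: instead of always drawing the partner source and dimension at random, the bee would consult its memory of previously successful moves.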
S1568494616000119
Most of the previous work on identification involves systems described by ordinary differential equations (ODEs). Many industrial processes and physical phenomena, however, should be modeled using partial differential equations (PDEs), which offer both spatial and temporal distributions that are simply not available with ODE models. Systems described by a PDE belong to a class of systems called distributed parameter systems (DPSs). This article presents a method for solving the problem of identification of uncertain DPSs using a differential neural network (DNN). The DPS, assumed to be described by a PDE, is approximated using the finite element method (FEM). The FEM discretizes the domain into a set of distributed and connected nodes, thereby allowing a representation of the DPS in a finite number of ODEs. The proposed DNN follows the same interconnection structure as the FEM, thus allowing the DNN to identify the FEM approximation of the DPS in both 2D and 3D domains. Lyapunov's second method was used to derive adaptive learning laws for the proposed DNN structure. The identification algorithm, here developed in Nvidia's CUDA/C to reduce the execution time, runs mostly on the graphics processing unit (GPU). A physical experiment served to validate the 2D case. In the experiment, the DNN followed the trajectory of 57 markers that were placed on an undulating square piece of silk. The proposed DNN is compared against a method based on principal component analysis and an artificial neural network trained with group search optimization. In addition to the 2D case, a simulation validated the 3D case, where input data for the DNN were generated by solving a PDE with appropriate initial and boundary conditions over a unit domain. Results show that the proposed FEM-based DNN approximates the dynamic behavior of both a real 2D and a simulated 3D system.
Distributed parameter system identification using finite element differential neural networks
S1568494616000120
Mortality scores based on multiple regressions are common in critical care medicine for prognostic stratification of patients. However, to be used at the point of care, they need to be both accurate and easily interpretable. In this work, we propose the application of one existent type of rule base system using statistical information – probabilistic fuzzy systems (PFS) – to predict mortality of septic shock patients. To assess its accuracy and interpretability, these models are compared to methodologies previously proposed in this domain: Takagi-Sugeno fuzzy models and logistic regression models. The methods are tested using a retrospective cohort study including ICU patients with abdominal septic shock. Regarding accuracy, PFS models are comparable to fuzzy modeling and logistic regression. In terms of interpretability, results indicate that PFS models increase the transparency of the learned system (using fuzzy rules), but at the same time, provide additional means for validating the fuzzy classifier using expert knowledge (from physicians in this paper). By providing accurate and interpretable estimates for the mortality risk, results suggest the usefulness of PFS to develop scores for critical care medicine.
Mortality prediction of septic shock patients using probabilistic fuzzy systems
S1568494616000132
Developing an effective memetic algorithm that integrates the Particle Swarm Optimization (PSO) algorithm and a local search method is a difficult task. The challenging issues include when the local search method should be called, the frequency of calling the local search method, as well as which particle should undergo the local search operations. Motivated by this challenge, we introduce a new Reinforcement Learning-based Memetic Particle Swarm Optimization (RLMPSO) model. Each particle is subject to five operations under the control of the Reinforcement Learning (RL) algorithm, i.e. exploration, convergence, high-jump, low-jump, and fine-tuning. These operations are executed by the particle according to the action generated by the RL algorithm. The proposed RLMPSO model is evaluated using four uni-modal and multi-modal benchmark problems, six composite benchmark problems, five shifted and rotated benchmark problems, as well as two benchmark application problems. The experimental results show that RLMPSO is useful, and it outperforms a number of state-of-the-art PSO-based algorithms.
A new Reinforcement Learning-based Memetic Particle Swarm Optimizer
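The RL-driven choice among the five particle operations can be illustrated with a stateless (bandit-style) Q-learning sketch: the agent keeps one value estimate per operation, selects epsilon-greedily, and updates toward the observed reward. The reward model below is a toy assumption, not the paper's fitness-improvement signal.

```python
import random

OPS = ["exploration", "convergence", "high-jump", "low-jump", "fine-tuning"]

def select_op(q, eps, rng):
    """Epsilon-greedy choice among the five particle operations."""
    if rng.random() < eps:
        return rng.randrange(len(OPS))
    return max(range(len(OPS)), key=lambda a: q[a])

# Toy reward model (an assumption): in this fake landscape,
# "fine-tuning" yields the highest expected improvement.
def reward(a, rng):
    base = [0.1, 0.3, 0.05, 0.2, 0.8][a]
    return base + rng.gauss(0, 0.05)

rng = random.Random(0)
q = [0.0] * len(OPS)
alpha, eps = 0.1, 0.2
for _ in range(2000):
    a = select_op(q, eps, rng)
    q[a] += alpha * (reward(a, rng) - q[a])  # stateless Q update (bandit form)

best = OPS[max(range(len(OPS)), key=lambda a: q[a])]
```

In RLMPSO the reward would instead come from the improvement the chosen operation produces on the particle's fitness, so the preferred operation can change as the search progresses.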
S1568494616000144
Hospitals are one of the important service industries of health care for patients. The emergency department is the heart of every hospital, because errors or failures occurring in it significantly affect the safety of patients and the goodwill of the hospital. Therefore, emergency departments should be monitored carefully. This study proposes the application of fuzzy failure mode and effects analysis (FMEA) for prioritization and assessment of failures likely to occur in the working process of an emergency department. All individuals were assessed independently, without the interference of team members. In addition, this method can reduce the limitations of traditional FMEA. The prioritization of risks can also help the emergency department choose corrective actions wisely. In conclusion, the fuzzy FMEA method was found to be suitable for adoption in the emergency department. Finally, this method helps to increase the level of confidence in hospitals.
Fuzzy FMEA application to improve decision-making process in an emergency department
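A minimal fuzzy FMEA rating step might look as follows: severity, occurrence and detection scores on a 1-10 scale pass through triangular membership functions, each rule fires at the minimum membership of its inputs, and a firing-strength-weighted centroid gives the fuzzy risk priority. The term shapes, centroids and the two failure modes are illustrative assumptions, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms on a 1-10 scale (illustrative shapes, not the paper's).
TERMS = {"low": (0, 1, 5), "medium": (2, 5, 8), "high": (5, 10, 11)}
RISK_CENTROID = {"low": 2.0, "medium": 5.0, "high": 8.5}

def fuzzy_rpn(sev, occ, det):
    """Rate one failure mode: each rule fires at the min membership of
    its three inputs; output is the weighted centroid of the rule set."""
    num = den = 0.0
    for term, (a, b, c) in TERMS.items():
        w = min(tri(sev, a, b, c), tri(occ, a, b, c), tri(det, a, b, c))
        num += w * RISK_CENTROID[term]
        den += w
    return num / den if den else 0.0

# Rank two hypothetical emergency-department failure modes.
triage_delay = fuzzy_rpn(9, 8, 7)  # severe, frequent, hard to detect
label_error = fuzzy_rpn(4, 3, 2)   # milder on all three factors
```

Unlike the crisp RPN product S*O*D, the fuzzy score changes smoothly as ratings cross category boundaries, which is one of the limitations of traditional FMEA that the abstract alludes to.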
S1568494616000156
Data mining is the process of extracting desirable knowledge or interesting patterns from existing databases for specific purposes. In real-world applications, transactions may contain quantitative values and each item may have a lifespan from a temporal database. In this paper, we thus propose a data mining algorithm for deriving fuzzy temporal association rules. It first transforms each quantitative value into a fuzzy set using the given membership functions. Meanwhile, item lifespans are collected and recorded in a temporal information table through a transformation process. The algorithm then calculates the scalar cardinality of each linguistic term of each item. A mining process based on fuzzy counts and item lifespans is then performed to find fuzzy temporal association rules. Experiments are finally performed on two simulation datasets and the foodmart dataset to show the effectiveness and the efficiency of the proposed approach.
Mining fuzzy temporal association rules by item lifespans
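The transformation and counting steps can be sketched on a toy database: quantities become fuzzy memberships, the scalar cardinality of a linguistic term sums those memberships, and the support denominator is restricted to transactions within the item's lifespan. The membership shapes, transactions and lifespans below are assumptions for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Membership functions for purchase quantity (illustrative, not the paper's).
MF = {"few": (0, 1, 4), "some": (2, 5, 8), "many": (6, 10, 13)}

# (time, {item: quantity}) transactions; lifespans as (start, end) times.
transactions = [
    (1, {"milk": 6}), (2, {"milk": 5, "beer": 2}),
    (3, {"milk": 4}), (4, {"beer": 7}),
]
lifespan = {"milk": (1, 4), "beer": (2, 4)}

def fuzzy_support(item, term):
    """Scalar cardinality of a linguistic term, divided by the number
    of transactions inside the item's lifespan (temporal support)."""
    lo, hi = lifespan[item]
    card = sum(tri(q[item], *MF[term]) for _, q in transactions if item in q)
    n = sum(1 for t, _ in transactions if lo <= t <= hi)
    return card / n

milk_some = fuzzy_support("milk", "some")
beer_many = fuzzy_support("beer", "many")
```

Dividing by the lifespan-restricted count is what keeps a late-introduced item like "beer" from being unfairly penalized for transactions that occurred before it existed.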
S1568494616000168
A continuous sliding mode control with a moving sliding surface for nonlinear systems of arbitrary order is presented in this paper. The sliding surface is moved repetitively toward the target sliding surface in order to ensure that the system trajectory is close to the actual surface during the whole control process. The parameters of the sliding mode control are tuned by fuzzy logic. The proposed procedure reduces the time during which the system operates in the approaching phase, when control performance deteriorates because the system is more susceptible to external disturbances and model uncertainties. The effectiveness of the presented approach is demonstrated on the control of a flexible robot manipulator arm.
Adaptive sliding mode control with moving sliding surface
S156849461600017X
Mining frequent itemsets is an essential problem in data mining and plays an important role in many data mining applications. In recent years, some itemset representations based on node sets have been proposed, which have shown to be very efficient for mining frequent itemsets. In this paper, we propose DiffNodeset, a novel and more efficient itemset representation for mining frequent itemsets. Based on the DiffNodeset structure, we present an efficient algorithm, named dFIN, for mining frequent itemsets. To achieve high efficiency, dFIN finds frequent itemsets using a set-enumeration tree with a hybrid search strategy and, in some cases, directly enumerates frequent itemsets without candidate generation. For evaluating the performance of dFIN, we have conducted extensive experiments to compare it with existing leading algorithms on a variety of real and synthetic datasets. The experimental results show that dFIN is significantly faster than these leading algorithms.
DiffNodesets: An efficient structure for fast mining frequent itemsets
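For context on the task dFIN targets, a levelwise Apriori baseline is sketched below. This is deliberately the candidate-generation style of miner that node-set structures such as DiffNodeset are designed to outperform; the toy database and support threshold are illustrative.

```python
from itertools import combinations

def apriori(db, minsup):
    """Levelwise frequent-itemset mining (Apriori baseline, for context;
    dFIN's DiffNodeset structure avoids this candidate generation)."""
    db = [frozenset(t) for t in db]
    items = sorted({i for t in db for i in t})
    freq, level = {}, [frozenset([i]) for i in items]
    while level:
        counts = {c: sum(1 for t in db if c <= t) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= minsup}
        freq.update(survivors)
        # join step: keep candidates whose every k-subset is frequent
        keys = sorted(survivors, key=sorted)
        level = list({
            a | b for a, b in combinations(keys, 2)
            if len(a | b) == len(a) + 1
            and all(frozenset(s) in survivors
                    for s in combinations(a | b, len(a)))
        })
    return freq

db = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
freq = apriori(db, minsup=3)
```

Each level requires a full pass over the database to count candidates, which is exactly the cost that set-enumeration-tree methods with compact node-set representations avoid.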
S1568494616000181
In this paper, a method is proposed to overcome the saturation nonlinearity linked to the microphones and loudspeakers of an active noise control (ANC) system. The reference microphone gets saturated when the acoustic noise at the source increases beyond the dynamic limits of the microphone. When the controller tries to drive the loudspeaker system beyond its dynamic limits, saturation nonlinearity is also introduced into the system. The secondary path, which is generally estimated with a low-level auxiliary noise by a linear transfer function, does not model such saturation nonlinearity. Therefore, the filtered-x least mean square (FXLMS) algorithm fails to perform when the noise level is increased. For alleviating the saturation nonlinearity effect, a nonlinear functional expansion based ANC algorithm is proposed in which the particle swarm optimization (PSO) algorithm is suitably applied to tune the parameters of a filter bank based functional link artificial neural network (FLANN) structure, named the PSO based nonlinear structure (PSO-NLS) algorithm. The proposed algorithm does not require any filtering through the secondary path estimate, unlike other conventional gradient based algorithms, and hence has a computational advantage. Computer simulation experiments show its superior performance compared to the FXLMS, filtered-s LMS and genetic algorithms with saturation present at both the secondary and reference paths. The paper also includes a sensitivity analysis to study the effect of different parameters on ANC performance.
Particle swarm optimization based nonlinear active noise control under saturation nonlinearity
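The linear baseline that FXLMS generalizes is the plain LMS update. The sketch below performs LMS system identification of a short FIR system (FXLMS additionally filters the reference signal through the secondary-path estimate, which is exactly the linear model that saturation invalidates); the filter length, step size and true weights are illustrative.

```python
import random

def lms_identify(x, d, taps=4, mu=0.05):
    """Plain LMS: adapt weights w so that the filtered input tracks d."""
    w = [0.0] * taps
    buf = [0.0] * taps  # tapped delay line, newest sample first
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = dn - y                                   # instantaneous error
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]  # gradient step
    return w

rng = random.Random(7)
true_w = [0.8, -0.3, 0.1, 0.0]  # unknown FIR system to identify
x = [rng.uniform(-1, 1) for _ in range(5000)]
d = [sum(true_w[k] * (x[n - k] if n >= k else 0.0) for k in range(4))
     for n in range(len(x))]
w = lms_identify(x, d)
```

With a noise-free linear plant the weights converge essentially exactly; the abstract's point is that once the microphones or loudspeaker clip, no setting of these linear weights can cancel the distortion, motivating the nonlinear FLANN structure.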
S1568494616000260
The aim of this paper is to present an integrated mathematical model to solve the dynamic cell formation problem considering operator assignment and inter/intra cell layouts with machine duplication, simultaneously. The proposed model includes three objectives: the first seeks to minimize inter/intra cell part movements and machine relocation, the second minimizes machine and operator related costs, and the third maximizes the consecutive forward flows ratio. In order to validate the proposed model, a numerical example is presented and solved by the weighted sum method. Due to the NP-hardness of the model, two meta-heuristics, namely multi-objective simulated annealing (MOSA) and multi-objective vibration damping optimization (MOVDO), are presented to solve the proposed model. Finally, the two algorithms are compared using multi-objective criteria.
An integrated mathematical model for solving dynamic cell formation problem considering operator assignment and inter/intra cell layouts
S1568494616000272
In this paper we focus on finding high quality solutions for the problem of maximum partitioning of graphs with supply and demand (MPGSD). There is a growing interest for the MPGSD due to its close connection to problems appearing in the field of electrical distribution systems, especially for the optimization of self-adequacy of interconnected microgrids. We propose an ant colony optimization algorithm for the problem. With the goal of further improving the algorithm we combine it with a previously developed correction procedure. In our computational experiments we evaluate the performance of the proposed algorithm on trees, 3-connected graphs, series–parallel graphs and general graphs. The tests show that the method manages to find optimal solutions for more than 50% of the problem instances, and has an average relative error of less than 0.5% when compared to known optimal solutions.
An ant colony optimization algorithm for partitioning graphs with supply and demand
S1568494616300011
Gene association networks have become one of the most important approaches to modelling of biological processes by means of gene expression data. According to the literature, co-expression-based methods are the main approaches to identification of gene association networks because such methods can identify gene expression patterns in a dataset and can determine relations among genes. These methods usually have two fundamental drawbacks. Firstly, they depend on the quality of the input dataset for construction of reliable models because of their sensitivity to data noise. Secondly, these methods require that the user select a threshold to determine whether a relation is biologically relevant. Due to these shortcomings, such methods may ignore some relevant information. We present a novel fuzzy approach named FyNE (Fuzzy NEtworks) for modelling of gene association networks. FyNE has two fundamental features. Firstly, it can deal with data noise using a fuzzy-set-based protocol. Secondly, the proposed approach can incorporate prior biological knowledge into the modelling phase, through a fuzzy aggregation function. These features help to gain some insights into doubtful gene relations. The performance of FyNE was tested in four different experiments. Firstly, the improvement offered by FyNE over the results of a co-expression-based method in terms of identification of gene networks was demonstrated on different datasets from different organisms. Secondly, the results produced by FyNE showed its low sensitivity to data noise in a randomness experiment. Additionally, FyNE could infer gene networks with a biological structure in a topological analysis. Finally, the validity of our proposed method was confirmed by comparing its performance with that of some representative methods for identification of gene networks.
Incorporating biological knowledge for construction of fuzzy networks of gene associations
S1568494616300023
Power quality may degrade owing to increasing harmonics from nonlinear loads in a power system. Harmonics are time-varying and non-stationary power quality problems in the power system. This article addresses passive filter planning that considers multiple scenarios, including different load levels and harmonic currents, in a factory distribution system. Based on the above factors and the use of a Markov model, this work derives multiple scenarios, each with its own probability and duration. A novel method based on probabilistic Sugeno fuzzy reasoning is also developed using the individual optimal solutions of all scenarios. Moreover, the final optimal solution is obtained using the center-of-gravity approach. The proposed method is validated using simulation results of a 2-busbar factory distribution system and an 18-busbar factory distribution system.
Multi-scenario passive filter planning in factory distribution system by using Markov model and probabilistic Sugeno fuzzy reasoning
S1568494616300035
Due to the intricacy of machining processes and inconsistency in material properties, analytical models are often unable to describe the mechanics of machining of carbon fiber reinforced polymer (CFRP) composites. Recently, soft computing techniques have been used as alternate modeling and analysis methods, which are usually robust and capable of yielding comprehensive, precise, and reliable solutions. In this paper, drilling experiments as per the Taguchi L27 experimental layout are carried out on bi-directional carbon fiber reinforced polymer (BD CFRP) composite laminates using three types of drilling tools: high speed steel (HSS), uncoated solid carbide (USC) and titanium nitride coated SC (TiN-SC). The focus of this work is to determine the best drilling tool that produces good quality drilled holes in BD CFRP composite laminates. This paper proposes a novel prediction model, the genetic algorithm optimised multi-layer perceptron neural network (GA-MLPNN), in which a genetic algorithm (GA) is integrated with a multi-layer perceptron neural network. The performance capability of response surface methodology (RSM) and GA-MLPNN in prediction of thrust force is investigated. RSM is also used to evaluate the influence of process parameters (spindle speed, feed rate, point angle and drill diameter) on thrust force. GA is used to optimize the thrust force and its optimization performance is compared with that of RSM. It is observed that GA-MLPNN is a better prediction tool than the RSM model. The investigation in this paper demonstrates that TiN-SC is the best tool for drilling BD CFRP composite laminates as minimum thrust force is developed during its use.
Soft computing techniques during drilling of bi-directional carbon fiber reinforced composite
S1568494616300047
In the wake of the rapid emergence of e-commerce, its evaluation is of great theoretical and practical importance. Among the various types of e-commerce, business-to-consumer (B2C) e-commerce has become a key, and especially influential, retailing channel. Its ascendancy raises core issues with respect to how the customer is to be satisfied by, and therefore inclined to trust, e-commerce websites. B2C e-commerce website evaluation, therefore, is an important related issue. First, while a number of studies have examined B2C e-commerce website evaluation using various multiple-criteria decision-making (MCDM) methods, they have focused only on the perceived service quality of B2C e-commerce websites. In fact, it is generally recognized that service quality is determined by the difference between the expected service level (an expectation derived from information obtained before the service experience) and the actual, perceived service level; this concept, then, should also be afforded due consideration in B2C e-commerce website evaluation. Second, among the various MCDM approaches, TOPSIS, which involves the consideration of both the positive-ideal solution (PIS) and the negative-ideal solution (NIS), is especially pertinent to the complex decision-making entailed in the evaluation of B2C e-commerce websites. Third, the human element of subjectivity in the evaluation of B2C e-commerce websites needs to be considered in order to enable modeling of real-life website-evaluation situations. Finally, the hierarchical structure of the evaluation criteria between the main dimensions and their sub-criteria should be considered. To reflect these issues, in this paper we present a fuzzy hierarchical TOPSIS based on E-SERVQUAL (E-S-QUAL). This approach effectively addresses the raised issues and preserves the core concept of E-S-QUAL (the extended version of SERVQUAL) for measurement of electronic service quality in the e-commerce environment.
The empirical case study of B2C e-commerce helps researchers and practitioners better understand the evaluation process from a practical point of view. In addition, a core finding of this study is that the comparison results with other MCDM methods further verify the robustness of the proposed approach, implying that the method can potentially be applied to performance evaluation of similar service sectors.
Evaluation of e-commerce websites using fuzzy hierarchical TOPSIS based on E-S-QUAL
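The core TOPSIS computation (relative closeness to the positive- and negative-ideal solutions) can be sketched in crisp form; the paper wraps this core in fuzzy numbers and a hierarchical E-S-QUAL criteria structure. The ratings, weights and criteria below are hypothetical.

```python
def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: vector-normalize, weight, then score each
    alternative by its relative closeness to the ideal solution."""
    m, n = len(matrix), len(matrix[0])
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # PIS takes the best value per column, NIS the worst
    pis = [max(row[j] for row in v) if benefit[j] else min(row[j] for row in v)
           for j in range(n)]
    nis = [min(row[j] for row in v) if benefit[j] else max(row[j] for row in v)
           for j in range(n)]
    scores = []
    for row in v:
        dp = sum((row[j] - pis[j]) ** 2 for j in range(n)) ** 0.5
        dn = sum((row[j] - nis[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(dn / (dp + dn))
    return scores

# Three hypothetical websites rated on efficiency, privacy and response
# time; the last is a cost criterion (lower is better).
ratings = [[7, 9, 2], [9, 6, 4], [5, 8, 1]]
scores = topsis(ratings, [0.4, 0.4, 0.2], [True, True, False])
```

The fuzzy hierarchical version replaces the crisp ratings with fuzzy evaluations aggregated up the dimension/sub-criterion hierarchy, but the closeness coefficient that ranks the websites is computed the same way.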
S1568494616300059
Electric energy is the most popular form of energy because it can be transported easily at high efficiency and reasonable cost. Nowadays, real-world electric power systems are large-scale and highly complex interconnected transmission systems. The transmission expansion planning (TEP) problem is a large-scale, complicated and nonlinear optimization problem in which the number of candidate solutions increases exponentially with system size. Investment cost, reliability (both adequacy and security), and congestion cost are considered in this optimization. To overcome the difficulties arising from the non-convex and mixed-integer nature of this optimization problem, this paper offers a firefly algorithm (FA) to solve it. It is shown that the FA, like other heuristic optimization algorithms, can solve the problem better than other methods such as the genetic algorithm (GA), particle swarm optimization (PSO), simulated annealing (SA) and differential evolution (DE). To show the feasibility of the proposed method, the applied model has been considered on the IEEE 24-Bus, IEEE 118-Bus and Iran 400-kV transmission grid case studies for the TEP problem in both adequacy and security modes. The obtained results show the capability of the proposed method. A comprehensive comparison of GA, PSO, SA and DE with the proposed method is also presented.
Nomenclature: susceptance of the circuits in right-of-way i–j; active load at Bus i; initial active load at Bus i; initial maximum active power generation at Bus i; minimum active power generation at Bus i; coefficient of the bid function of Bus i; maximum capacity of branch (i–j); discount factor; a large penalty factor; node–branch incidence (s_ij = 1 if i is connected to j, else s_ij = 0); total planning horizon; investment cost to build a line in the right-of-way i–j ($/year); load factor; generation factor; active power flow in the right-of-way i–j; active power flow in the right-of-way i–j while a line in right-of-way m–n is out of service; active power generation at Bus i; voltage angle at Bus i; voltage angle at Bus i while a line in right-of-way m–n is out of service; curtailed load at bus k in normal operation; curtailed load at bus k while a line in right-of-way m–n is out of service; number of new circuits added to the right-of-way i–j (integer variable); vector of Lagrange multipliers for inequality and equality constraints; Bus index; set of load Buses; number of available branches in corridor ℓ; maximum number of newly added branches; number of generators; set of all new right-of-ways
Application of firefly algorithm for multi-stage transmission expansion planning with adequacy-security considerations in deregulated environments
S1568494616300060
Due to the ever-increasing number of documents in digital form, automated text clustering has become a promising method for text analysis in the last few decades. A major issue in text clustering is the high dimensionality of the feature space. Most of these features are irrelevant, redundant, or noisy, and mislead the underlying algorithm. Therefore, feature selection is an essential step in text clustering to reduce the dimensionality of the feature space and to improve the accuracy of the underlying clustering algorithm. In this paper, a hybrid intelligent algorithm, which combines binary particle swarm optimization (BPSO) with opposition-based learning, a chaotic map, fitness-based dynamic inertia weight, and mutation, is proposed to solve the feature selection problem in text clustering. Here, the fitness-based dynamic inertia weight is integrated with BPSO to control the movement of the particles based on their current status, and the mutation and chaotic strategies are applied to enhance the global search capability of the algorithm. Moreover, an opposition-based initialization is used to start with a set of promising and well-diversified solutions in order to achieve a better final solution. In addition, the opposition-based learning method is also used to generate the opposite position of the gbest particle to escape stagnation in the swarm. To prove the effectiveness of the proposed method, experimental analysis is conducted on three different benchmark text datasets: Reuters-21578, Classic4, and WebKB. The experimental results demonstrate that the proposed method selects a more informative feature set than the competitive methods, as it attains higher clustering accuracy. Moreover, it also improves the convergence speed of BPSO.
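Opposition-based initialization for a binary swarm is simple to sketch: each random particle is paired with its bitwise opposite, and the fitter half is kept. The fitness function below is a hypothetical stand-in for the clustering-quality objective used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(mask):
    # stand-in objective: reward "informative" features (here, the first 5)
    # with a small penalty on subset size; real use would score clustering quality
    return mask[:5].sum() - 0.1 * mask.sum()

def opposition_init(n_particles, n_features):
    swarm = rng.integers(0, 2, (n_particles, n_features))
    opposite = 1 - swarm                       # binary opposite position
    both = np.vstack([swarm, opposite])
    scores = np.array([fitness(p) for p in both])
    keep = np.argsort(scores)[-n_particles:]   # keep the fitter half
    return both[keep]

swarm = opposition_init(10, 20)
```

The same opposite-position trick can be applied to the gbest particle during the run, as the abstract describes, to escape stagnation.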
Opposition chaotic fitness mutation based adaptive inertia weight BPSO for feature selection in text clustering
S1568494616300072
A large number of patents are approved by patent officers each year, and current patent systems face the serious problem of evaluating the quality of these patents. Traditional researchers and analysts have focused only on developing various patent quality indicators, but these indicators have no further predictive power over new patent applications or publications. Therefore, data mining (DM) approaches are employed in this article to identify and classify a new patent's quality in time. An automatic patent quality analysis and classification system, namely SOM-KPCA-SVM, is developed using patent quality indicators and characteristics. First, the self-organizing map (SOM) approach is used to cluster previously published patents into different quality groups according to the patent quality indicators, defining each group's quality type instead of relying on experts. The kernel principal component analysis (KPCA) approach is then used to transform the nonlinear feature space in order to improve classification performance. Finally, the support vector machine (SVM) is used to build the patent quality classification model. The proposed SOM-KPCA-SVM is applied to classify patent quality automatically on patent data for thin-film solar cells. Experimental results show that the proposed system captures the analysis effectively compared with the traditional manpower approach.
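The KPCA-then-SVM stage of such a pipeline can be sketched with scikit-learn; here `make_moons` stands in for patent-indicator data, and the SOM-derived group labels are replaced by the synthetic labels, so this only illustrates the kernel-feature-transform-plus-classifier idea:

```python
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# nonlinearly separable stand-in for patent-indicator data;
# in the paper, labels would come from the SOM clustering step
X, y = make_moons(n_samples=300, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# KPCA maps the nonlinear feature space before the SVM classifies
model = make_pipeline(KernelPCA(n_components=10, kernel="rbf", gamma=2.0),
                      SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

The `n_components` and `gamma` values are illustrative; in practice they would be tuned by cross-validation.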
A patent quality analysis and classification system using self-organizing maps with support vector machine
S1568494616300084
Sugeno-type fuzzy models are used frequently in system modeling. The idea of information granulation inherently arises in the design process of a Sugeno-type fuzzy model, and information granulation is closely related to the information granules being developed. In this paper, a design method for a Sugeno-type granular model is proposed on the basis of an optimal allocation of information granularity. The overall design process starts with a well-established Sugeno-type numeric fuzzy model (the original Sugeno-type model). By soundly assigning information granularity to the related parameters of the antecedents and conclusions of the fuzzy rules of the original Sugeno-type model (i.e., granulating these parameters so that an optimal allocation of information granularity is realized), the original Sugeno-type model is extended to its granular counterpart (the granular model). Several protocols of optimal allocation of information granularity are also discussed. The obtained granular model is applied to forecast three real-world time series. The experimental results show that the proposed design method offers some advantages, yielding models of good prediction capabilities. Furthermore, they also show the merits of the Sugeno-type granular model: (1) the output of the model is an information granule (an interval granule) rather than a specific numeric entity, which facilitates further interpretation; (2) the model provides much more flexibility than the original Sugeno-type model; (3) the construction approach is of a general nature, as it could be applied to various fuzzy models and realized by invoking different formalisms of information granules.
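The basic granulation step — wrapping a numeric parameter into an interval whose width is governed by an allocated granularity share, then propagating the intervals through a linear rule consequent to obtain an interval output — can be sketched as below; the per-parameter allocation values are illustrative assumptions:

```python
def granulate(value, eps):
    """Wrap a numeric parameter into an interval granule of radius eps*|value|."""
    r = eps * abs(value)
    return (value - r, value + r)

def interval_linear(params, eps_alloc, x):
    """Interval output of a linear rule consequent y = a0 + a1*x under a given
    granularity allocation (one budget share per parameter)."""
    lo = hi = 0.0
    coeffs = [1.0, x]
    for p, eps, c in zip(params, eps_alloc, coeffs):
        a, b = granulate(p, eps)
        # interval multiplication by a scalar coefficient (sign-safe)
        lo += min(a * c, b * c)
        hi += max(a * c, b * c)
    return lo, hi

# numeric model y = 0.5 + 2.0*x gives y = 6.5 at x = 3; its granular
# counterpart returns an interval around that value
lo, hi = interval_linear(params=[0.5, 2.0], eps_alloc=[0.1, 0.05], x=3.0)
```

An optimal-allocation protocol would choose `eps_alloc` (subject to a total granularity budget) to maximize coverage of the observed data by the interval outputs.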
The granular extension of Sugeno-type fuzzy models based on optimal allocation of information granularity and its application to forecasting of time series
S1568494616300096
This paper presents a new approach to the delineation and characterization of four different types of brain tumors, viz. glioblastoma multiforme (GBM), metastasis (MET), meningioma (MG) and granuloma (GN), from magnetic resonance imaging (MRI) slices of the post-contrast T1-weighted (T1C) sequence, to improve computer-assisted diagnostic accuracy. An integrated framework of identification and extraction of the tumor region, quantification of histogram, shape and textural features, followed by pattern classification with a machine learning algorithm, has been proposed. Rough-entropy-based thresholding in the granular computing paradigm has been adopted for delineation of the tumor area. Quantitative validation and comparison with existing methods prove the efficiency and applicability of the proposed segmentation approach. In the next stage, the extracted lesions have been quantified with 86 features to develop the training dataset. Random forest (RF), an ensemble learning scheme, has been implemented, which learns the training data for accurate prediction of the class label of a given input. The performance of RF has been evaluated by statistical measures from 3-fold cross-validation and compared with five different classifiers. The same experiment has been repeated over the reduced set of features generated by a correlation-based feature selection strategy. Experimental results show the superiority of RF (sensitivity achieved in %: GBM 96.7, MET 96.2, MG 98.1 and GN 97.7) with the complete set of features. The comparison of the proposed methodology with existing works signifies its applicability and effectiveness. Additionally, a 10-fold cross-validation has been accomplished to justify the statistical significance of the classification accuracy achieved by the proposed methodology.
Delineation and diagnosis of brain tumors from post contrast T1-weighted MR images using rough granular computing and random forest
S1568494616300102
The aim of this work is to introduce a trust model, which is highly consistent with the social nature of trust in computational domains. To this end, we propose a hesitant fuzzy multi-criteria decision making based computational trust model capable of taking into account the fundamental building blocks corresponding to the concept of trust. The proposed model is capable of considering the contextuality property of trust and the subjective priorities of the trustor regarding the chosen goal. This is due to viewing trust not as a single label or an integrated concept, but as a collection of trustworthiness facets that may form the trust decision in various contexts and toward different goals. The main benefit of the proposed model is the consideration of the hesitancy of recommenders and the trustor in the process of trust decision making which can create a more flexible mapping between the social and computational requirements of trust. This type of formulation also allows for taking into account the vagueness of the provided opinions. In addition to the vagueness of the provided opinions, the model is capable of considering the certainty of recommendations and its effect on the aggregation process of gathered opinions. In the proposed model, the taste of the recommenders and the similarity of opinions are also considered. This will allow the model to assign more weight to recommendations that have a similar taste compared to the trustor. Finally, taking into consideration the attitudes of the trustors toward change of personality that may occur for various entities in the environment is another advantage of the proposed model. A step-by-step illustrative example and the results of several experimental evaluations, which demonstrate the benefits of the proposed model, are also presented in this paper.
A hesitant fuzzy model of computational trust considering hesitancy, vagueness and uncertainty
S1568494616300114
Support vector machines are widely used for prediction problems in the life sciences, having been shown to offer strong generalisation ability in input–output mapping. However, the performance of predictive models is often negatively influenced by the complex, high-dimensional, and non-linear nature of post-genome data. Soft computing methods can be used to model such non-linear systems. Fuzzy systems are one of the widely used soft computing methods for modelling uncertainty; they are formed of interpretable rules that help one gain insight into the applied model. This study is therefore concerned with providing a more interpretable and efficient biological model by developing a hybrid method that integrates a fuzzy system and support vector regression. In order to demonstrate the robustness of this new hybrid method, it is applied to the prediction of peptide binding affinity, one of the most challenging problems of the post-genomic era due to the diversity of peptide families and the complexity and high dimensionality of the characteristic features of peptides. Across four different case studies, the hybrid predictive model yielded the highest predictive power in all four cases and achieved an improvement of as much as 34% over the results presented in the literature. Availability: Matlab scripts are available at https://github.com/sekerbigdatalab/tsksvr.
Quantitative prediction of peptide binding affinity by using hybrid fuzzy support vector regression
S1568494616300138
Stereo vision systems are utilized because they provide contactless measurements of objects in 3-D (three dimensions). Orthognathic surgery is a very sensitive operation that requires very high measurement accuracy, and the reduction of measurement error is an essential problem in this surgery. Moreover, quality inspection during the course of the operation helps the surgeon avoid or minimize adverse circumstances. In this paper, artificial intelligence methods (a neural network and a neuro-fuzzy system) are used to increase the accuracy of the positioning of the jaws during real-time practice. The comparison of the artificial measurements with the real measurements shows that a statistically acceptable accuracy is achieved in the 3-D positioning of teeth.
Artificial 3-D contactless measurement in orthognathic surgery with binocular stereo vision
S156849461630014X
Time series forecasting is an important and widely popular topic in the research of system modeling, and stock index forecasting is an important issue in time series forecasting. Accurate stock price forecasting is a challenging task in predicting financial time series. Time series methods have been applied successfully to forecasting models in many domains, including the stock market. Unfortunately, there are three major drawbacks to using time series methods for the stock market: (1) some models cannot be applied to datasets that do not follow statistical assumptions; (2) most time series models that use stock data containing a significant amount of noise (caused by changes in market conditions and environments) have worse forecasting performance; and (3) the rules mined from artificial neural networks (ANNs) are not easily understandable. To address these problems and improve the forecasting performance of time series models, this paper proposes a hybrid time series adaptive network-based fuzzy inference system (ANFIS) model centered on empirical mode decomposition (EMD) to forecast stock prices in the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the Hang Seng Stock Index (HSI). To measure its forecasting performance, the proposed model is compared with Chen's model, Yu's model, the autoregressive (AR) model, the ANFIS model, and the support vector regression (SVR) model. The results show that our model is superior to the other models in terms of root mean squared error (RMSE).
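One sifting pass of EMD, the building block of the decomposition step, can be sketched with SciPy splines; boundary mirroring and the stopping criterion of a full EMD are omitted, and the two-tone signal below is purely illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(t, x):
    """One EMD sifting pass: subtract the mean of the upper/lower envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - (upper + lower) / 2.0

t = np.linspace(0, 1, 500)
# fast oscillation riding on a slow one, mimicking price noise over a trend
x = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
imf_candidate = sift_once(t, x)     # approximates the fast component
residue = x - imf_candidate         # would feed the next decomposition level
```

In the hybrid model, each extracted IMF (and the residue) would be forecast, e.g. by ANFIS, and the component forecasts recombined.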
A hybrid ANFIS model based on empirical mode decomposition for stock time series forecasting
S1568494616300151
In this paper, a self-organizing cascade neural network (SCNN) with random weights is proposed for nonlinear system modeling. The SCNN is constructed via simultaneous structure and parameter learning processes. In structure learning, the units that lead to the maximal error reduction of the network are selected from the candidates and added to the existing network one by one. A stopping criterion based on the training and validation errors is introduced to select the optimal network size for a given application. In parameter learning, the weights connected to the output units are incrementally updated without gradients or generalized inverses, while the other weights are randomly assigned and do not need to be tuned. The convergence of the SCNN is then analyzed. Finally, the proposed SCNN is tested on two benchmark nonlinear systems and an actual municipal sewage treatment system. The experimental results show that the proposed SCNN performs better on nonlinear system modeling than other similar methods.
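A constructive scheme in this spirit — random hidden weights, best-of-K candidate selection by error reduction, and a closed-form output weight per new unit (no gradients, no generalized inverse) — can be sketched as follows; the details differ from the paper's SCNN (e.g. no validation-based stopping here), so treat it as an illustration of the incremental idea only:

```python
import numpy as np

rng = np.random.default_rng(1)

def build_network(X, y, max_units=50, n_candidates=10, tol=1e-3):
    """Constructive network: random hidden weights, closed-form output weights."""
    n, d = X.shape
    e = y.copy()                           # current residual error
    units, betas = [], []
    for _ in range(max_units):
        best = None
        for _ in range(n_candidates):
            w, b = rng.normal(size=d), rng.normal()
            h = np.tanh(X @ w + b)
            beta = (e @ h) / (h @ h)       # closed-form output weight
            red = np.sum((e - beta * h) ** 2)
            if best is None or red < best[0]:
                best = (red, w, b, h, beta)
        red, w, b, h, beta = best
        units.append((w, b))
        betas.append(beta)
        e = e - beta * h                   # residual shrinks at every step
        if np.mean(e ** 2) < tol:
            break
    return units, np.array(betas), e

X = rng.uniform(-1, 1, (200, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]      # toy nonlinear target
units, betas, resid = build_network(X, y)
mse = float(np.mean(resid ** 2))
```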
A self-organizing cascade neural network with random weights for nonlinear system modeling
S1568494616300163
A new algorithm, dubbed memory-based adaptive partitioning (MAP) of the search space, intended to provide a better accuracy/speed ratio in the convergence of multi-objective evolutionary algorithms (MOEAs), is presented in this work. The algorithm performs an adaptive-probabilistic refinement of the search space, with no aggregation in objective space. This work investigated the integration of MAP within the state-of-the-art fast and elitist non-dominated sorting genetic algorithm (NSGA-II). Considerable improvements in convergence were achieved, in terms of both speed and accuracy. Results are provided for several commonly used constrained and unconstrained benchmark problems, and comparisons are made with standalone NSGA-II and a hybrid NSGA-II-efficient local search (eLS). Nomenclature: search-space partitioning multiplier; step size for a given search space i; Pareto front improvement within the last r generations; Euclidean distance; restricted search space i; hypervolume indicator; variable importance; inverted generational distance; hypervolume distance between the Pareto-optimal front and an approximated Pareto front; interval numbers relative to a same category of variables i; partitioning degree; number of objective functions; memory matrix; population size; number of variables; statistical p-value; population; Pareto front; optimal Pareto front; partitioning tendency; partitioning vector; solution belonging to a restricted search space D_i; vector of decision variables; vector of restricted decision variables; vector of congruent variables i; vector of restricted congruent variables i; maximum value in a set of restricted congruent variables x̃_i; minimum value in a set of restricted congruent variables x̃_i; matrix of restricted solution vectors; lower; upper
Memory-based adaptive partitioning (MAP) of search space for the enhancement of convergence in Pareto-based multi-objective evolutionary algorithms
S1568494616300175
This paper presents a comparative analysis of four nature-inspired algorithms to improve the training stage of a segmentation strategy based on Gaussian matched filters (GMF) for X-ray coronary angiograms. The statistical results reveal that the differential evolution (DE) method outperforms the considered algorithms in terms of convergence to the optimal solution. From the potential solutions acquired by DE, the area (A_z) under the receiver operating characteristic curve is used as the fitness function to establish the best GMF parameters. The GMF-DE method demonstrated high accuracy, with A_z = 0.9402 on a training set of 40 angiograms. Moreover, to evaluate the performance of the coronary artery segmentation method against the ground-truth vessels hand-labeled by a specialist, measures of sensitivity, specificity and accuracy have been adopted. According to the experimental results, GMF-DE has obtained a high coronary artery segmentation rate compared with six state-of-the-art methods, providing an average accuracy of 0.9134 on a test set of 40 angiograms. Additionally, the experimental results in terms of segmentation accuracy have also shown that GMF-DE can be highly suitable for clinical decision support in cardiology.
On the performance of nature inspired algorithms for the automatic segmentation of coronary arteries using Gaussian matched filters
S1568494616300187
In this paper, a novel fuzzy linear assignment method is developed for multi-attribute group decision making problems. Owing to the uncertain nature of many decision problems, the proposed method incorporates various concepts from fuzzy set theory, such as fuzzy arithmetic and aggregation, fuzzy ranking and fuzzy mathematical programming, into a fuzzy concordance-based group decision making process. Fuzziness in the group hierarchy and quantitative-type criteria are also taken into account. To demonstrate the validity and practicality of the proposed method, it is applied to a real-life multi-criteria spare parts inventory classification problem. The case study has demonstrated that the proposed method is easy to apply and able to provide effective spare parts inventory classes under uncertain environments. In addition to the practical verification by the company experts, the proposed method is also compared with some commonly used fuzzy multi-attribute decision making methods from the literature. According to the comparison of the results, there is an association between the classes of spare parts obtained by the proposed method and those of the benchmarked methods.
A new fuzzy linear assignment method for multi-attribute decision making with an application to spare parts inventory classification
S1568494616300199
Giri et al. [P.K. Giri, M.K. Maiti, M. Maiti, Fully fuzzy fixed charge multi-item solid transportation problem, Applied Soft Computing, 27 (2015) 77–91] proposed an approach for solving the mathematical programming of fully fuzzy fixed charge multi-item solid transportation problems (FFFCMISTP) and claimed that it is better to use their proposed approach than the existing method [A. Kumar, J. Kaur, P. Singh, A new method for solving fully fuzzy linear programming problems, Applied Mathematical Modelling, 35 (2011) 817–823]. The aim of this note is to point out that Giri et al. have used some mathematically incorrect assumptions in their proposed approach and hence their claim is not valid.
A note on “Fully fuzzy fixed charge multi-item solid transportation problem”
S1568494616300205
The blocking lot-streaming flow shop scheduling problem with interval processing times has a wide range of applications in various industrial systems but has not yet been well studied. In this paper, the problem is formulated as a multi-objective optimization problem, where each interval objective is converted into a real-valued one using a dynamically weighted sum of its midpoint and radius. A novel evolutionary multi-objective optimization algorithm is then proposed to solve the re-formulated problem, in which non-dominated solutions and differences among parents are taken advantage of when designing the crossover operator, and an ideal-point-assisted local search strategy for multi-objective optimization is employed to improve the exploitation capability of the algorithm. To empirically evaluate the performance of the proposed algorithm, a series of comparative experiments is conducted on 24 scheduling instances. The experimental results show that the proposed algorithm outperforms the compared algorithms in convergence, and is more capable of tackling uncertainties.
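The interval-to-scalar conversion described above can be sketched directly: the midpoint captures expected performance and the radius captures uncertainty, with a weight that may change dynamically during the run (the weighting schedule below is an assumption):

```python
def interval_to_scalar(lo, hi, w):
    """Dynamically weighted sum of an interval objective's midpoint and radius;
    w in [0, 1] trades expected value against robustness."""
    mid = (lo + hi) / 2.0
    rad = (hi - lo) / 2.0
    return w * mid + (1.0 - w) * rad

# makespan of a schedule known only as an interval [120, 160]
early = interval_to_scalar(120.0, 160.0, w=0.9)  # emphasize midpoint early on
late = interval_to_scalar(120.0, 160.0, w=0.5)   # weight uncertainty more later
```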
Evolutionary multi-objective blocking lot-streaming flow shop scheduling with interval processing time
S1568494616300217
In this paper, we propose a graph-based clustering algorithm called "probability propagation," which is able to identify clusters having spherical shapes as well as clusters having non-spherical shapes. Given a set of objects, the proposed algorithm uses local densities calculated from a kernel function and a bandwidth to initialize the probability of one object choosing another object as its attractor, and then propagates the probabilities until the set of attractors becomes stable. Experiments on both synthetic data and real data show that the proposed method performs very well.
Clustering by propagating probabilities between data points
S1568494616300229
Similarity analysis and preference information aggregation are two important issues for consensus building in group decision making with preference relations. Pairwise ratings in an interval reciprocal preference relation (IRPR) are usually regarded, from the multiplicative perspective, as interval-valued And-like representable cross ratios (interval-valued cross ratios for short). In this paper, a ratio-based formula is introduced to measure the similarity between a pair of interval-valued cross ratios, and its desirable properties are provided. We put forward ratio-based similarity measurements for IRPRs. An induced interval-valued cross ratio ordered weighted geometric (IIVCROWG) operator with interval additive reciprocity is developed to aggregate interval-valued cross ratio information, and some properties of the IIVCROWG operator are presented. The paper devises an importance-degree-induced IRPR ordered weighted geometric operator to fuse individual IRPRs into a group IRPR, and discusses the derivation of its associated weights. By employing the ratio-based similarity measurements and IIVCROWG-based aggregation operators, a soft consensus model including a generation mechanism for feedback recommendation rules is further proposed to solve group decision making problems with IRPRs. Three numerical examples are examined to illustrate the applicability and effectiveness of the developed models.
Ratio-based similarity analysis and consensus building for group decision making with interval reciprocal preference relations
S1568494616300230
Incomplete linguistic preference relations (InLPRs) are generally inevitable in group decision making problems for several reasons. Two vital issues with InLPRs are consistency and the estimation of missing entries. The initial InLPR may not be consistent, which means that some of its entries do not reflect the real opinions of the experts accurately. Thus, there are deviations between some of the initially provided values and the real opinions, and it is valuable to elicit the providers to recognize and repair these deviations. In this paper, we discuss the consistency and the completing algorithms of InLPRs by interacting with the experts. Serving as the minimum condition of consistency, the weak consistency of InLPRs is defined, and a weak-consistency-reaching algorithm is designed to guarantee the logical correctness of InLPRs. Then two distinct completing algorithms are presented to estimate the missing entries. The former not only estimates all possible linguistic terms, representing them by extended hesitant fuzzy linguistic term sets, but also maintains weak consistency during the computing procedures. The latter can automatically revise the existing entries using the new opinions supplemented by the experts during interactions. All the proposed algorithms interact with the experts to elicit and mine their actual opinions more accurately. A real case study is presented to clarify the advantages of our proposal. Moreover, these algorithms can serve as assistant tools for the experts to present their preferences.
Interactive algorithms for improving incomplete linguistic preference relations based on consistency measures
S1568494616300242
Many networks exhibit small-world properties. The structure of a small-world network is characterized by short average path lengths and high clustering coefficients. Few graph layout methods capture this structure well, which limits their effectiveness and the utility of the visualization itself. Here we present an extension to our novel graphTPP layout method for laying out small-world networks using only their topological properties rather than their node attributes. The Watts–Strogatz model is used to generate a variety of graphs with a small-world network structure, and community detection algorithms are used to generate six different clusterings of the data. These clusterings, the adjacency matrix and the edge list are loaded into graphTPP, and, through user interaction combined with linear projections of the adjacency matrix, graphTPP is able to produce a layout which visually separates these clusters. These layouts are compared to the layouts of two force-based techniques. graphTPP is able to clearly separate each of the communities into a spatially distinct area, and the edge relationships between the clusters show the strength of their relationship. As a secondary contribution, an edge-grouping algorithm for graphTPP is demonstrated as a means to reduce visual clutter in the layout and reinforce the display of the strength of the relationship between two communities.
Using adjacency matrices to lay out larger small-world networks
S1568494616300266
This paper presents an approach for tackling constrained underwater glider path planning (UGPP), where the feasible path area is defined as a corridor around the border of an ocean eddy. The objective of the glider here is to sample the oceanographic variables more efficiently while keeping a bounded trajectory. Therefore, we propose a solution based on differential evolution (DE) algorithm mechanisms, including in its configuration self-adaptation of control parameters, population size reduction, ϵ-constraint handling with adjustment, and mutation based on an elitist best vector. Different aspects of this DE configuration are studied for the constrained UGPP challenge on a prepared benchmark set comprising 28 different specialized scenarios. The DE configurations were tested on this benchmark set over 51 independent runs per configuration aspect. Comparisons and the suitability of combinations of these mechanisms are reported through per-scenario and aggregated statistical performance differences, covering different constraint handling definition strategies, different DE mutation strategy configurations, and population sizing parameterizations. Our proposed solution outperformed all other compared algorithms, keeping the trajectory within the limits with a 100% success rate in all physically feasible scenarios; on average, it improved the fitness of the randomly initialized trajectories by roughly 50%, even reaching perfect fitness (all-around, 360-degree eddy corridor sampling) in some scenarios.
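Self-adaptation of DE control parameters in the jDE style — each individual carries its own F and CR, which are occasionally resampled and survive only with successful trials — can be sketched as below; the ϵ-constraint handling, population size reduction, and elitist-best mutation of the full configuration are omitted, and the sphere function stands in for the trajectory-fitness objective:

```python
import numpy as np

rng = np.random.default_rng(3)

def jde(cost, bounds, n=20, iters=200, tau=0.1):
    """DE/rand/1/bin with jDE-style self-adaptation of F and CR."""
    d = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (n, d))
    F, CR = np.full(n, 0.5), np.full(n, 0.9)
    fit = np.array([cost(x) for x in pop])
    for _ in range(iters):
        for i in range(n):
            # each individual carries its own control parameters
            Fi = rng.uniform(0.1, 1.0) if rng.random() < tau else F[i]
            CRi = rng.random() if rng.random() < tau else CR[i]
            a, b, c = rng.choice([k for k in range(n) if k != i], 3, replace=False)
            mutant = np.clip(pop[a] + Fi * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(d) < CRi
            cross[rng.integers(d)] = True      # force at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial <= fit[i]:              # good parameters survive with the trial
                pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi
    best = int(np.argmin(fit))
    return pop[best], fit[best]

x, f = jde(lambda v: float(np.sum(v ** 2)), [(-5, 5)] * 4)
```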
Constrained differential evolution optimization for underwater glider path planning in sub-mesoscale eddy sampling
S1568494616300291
This paper proposes an Improved Colliding Bodies Optimization (ICBO) algorithm to solve the optimal power flow (OPF) problem efficiently. Several objectives, constraints and formulations at normal and preventive operating conditions are used to model the OPF problem. Applications are carried out on three IEEE standard test systems through 16 case studies to assess the efficiency and robustness of the developed ICBO algorithm. A performance evaluation procedure is proposed to measure the strength and robustness of ICBO against numerous optimization algorithms. Moreover, a new comparison approach is developed to compare ICBO with the standard CBO and other well-known algorithms. The obtained results demonstrate the potential of the developed algorithm to solve different OPF problems efficiently compared to the optimization algorithms reported in the literature.
Optimal power flow using an Improved Colliding Bodies Optimization algorithm
S1568494616300308
An improperly tuned wavelet neural network (WNN) has been shown to exhibit unsatisfactory generalization performance. In this study, the tuning is done by an improved fuzzy C-means algorithm that utilizes a novel similarity measure taking orientation as well as distance into account. The modified WNN was first applied to a benchmark problem, and performance assessments against other approaches were made subsequently. Next, the feasibility of the proposed WNN for forecasting the chaotic Mackey–Glass time series and for a real-world application, blood glucose level prediction, was studied. The assessment analysis demonstrated that the presented WNN was superior in terms of prediction accuracy.
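One plausible form of a similarity measure that accounts for both distance and orientation combines a Gaussian of the Euclidean distance with a rescaled cosine term; this is an illustrative guess at such a measure, not the paper's exact formula:

```python
import numpy as np

def dist_orient_similarity(x, v, sigma=1.0):
    """Similarity in [0, 1] combining Euclidean distance (Gaussian kernel)
    with orientation (rescaled cosine). Hypothetical form for illustration."""
    d = np.linalg.norm(x - v)
    denom = np.linalg.norm(x) * np.linalg.norm(v)
    cos = float(x @ v / denom) if denom > 0 else 0.0
    return np.exp(-d ** 2 / (2 * sigma ** 2)) * (1 + cos) / 2

a = np.array([1.0, 0.0])
s_same = dist_orient_similarity(a, np.array([1.0, 0.0]))   # identical vectors
s_flip = dist_orient_similarity(a, np.array([-1.0, 0.0]))  # opposite orientation
```

In a fuzzy C-means variant, such a similarity (or a dissimilarity derived from it) would replace the plain squared-distance term when computing memberships and cluster centers.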
Calibrating wavelet neural networks by distance orientation similarity fuzzy C-means for approximation problems
S156849461630031X
Cluster ensembles are a powerful method for improving both the robustness and the stability of unsupervised classification solutions. This paper introduces the group method of data handling (GMDH) to cluster ensembles and proposes a new cluster ensemble framework, named CE-GMDH (cluster ensemble framework based on the group method of data handling). CE-GMDH consists of three components: an initial solution, a transfer function and an external criterion. Several CE-GMDH models can be built according to different types of transfer functions and external criteria. In this study, three novel models were proposed based on different transfer functions: a least squares approach, the cluster-based similarity partitioning algorithm, and semidefinite programming. The performance of CE-GMDH was compared among the different transfer functions, and against some state-of-the-art cluster ensemble algorithms and cluster ensemble frameworks, on synthetic and real datasets. Experimental results demonstrate that CE-GMDH can improve the performance of the cluster ensemble algorithms used as transfer functions through its unique modelling process. They also indicate that CE-GMDH achieves better or comparable results relative to the other cluster ensemble algorithms and frameworks.
Cluster ensemble framework based on the group method of data handling
S1568494616300321
Feature selection has been widely used in data mining and machine learning tasks to build a model with a small number of features, which improves the classifier's accuracy. In this paper, a novel hybrid feature selection algorithm based on particle swarm optimization is proposed. The proposed method, called HPSO-LS, uses a local search strategy embedded in the particle swarm optimization to select a less correlated and salient feature subset. The goal of the local search technique is to guide the search process of the particle swarm optimization to select distinct features by considering their correlation information. Moreover, the proposed method utilizes a subset size determination scheme to select a feature subset of reduced size. The performance of the proposed method has been evaluated on 13 benchmark classification problems and compared with five state-of-the-art feature selection methods. Moreover, HPSO-LS has been compared with well-known filter-based methods including information gain, term variance, Fisher score and mRMR, and well-known wrapper-based methods including genetic algorithm, particle swarm optimization, simulated annealing and ant colony optimization. The results demonstrate that the proposed method improves classification accuracy compared with the filter-based and wrapper-based feature selection methods. Furthermore, several statistical tests show that the proposed method's superiority over the other methods is statistically significant.
A hybrid particle swarm optimization for feature subset selection by integrating a novel local search strategy
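The wrapper idea of PSO-driven feature selection can be sketched with a plain binary PSO and a toy surrogate fitness (mean correlation with the target, penalized by subset size). This is a generic baseline under stated assumptions, not HPSO-LS itself: the paper's local search, correlation-based guidance and subset-size scheme are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Toy stand-in for classifier accuracy: reward features correlated
    with the target, penalize subset size."""
    if mask.sum() == 0:
        return -np.inf
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return corr[mask.astype(bool)].mean() - 0.01 * mask.sum()

def binary_pso(X, y, n_particles=10, iters=30):
    d = X.shape[1]
    pos = rng.integers(0, 2, (n_particles, d)).astype(float)
    vel = rng.normal(0, 1, (n_particles, d))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # sigmoid transfer: each bit is set with probability sigmoid(velocity)
        pos = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(float)
        f = np.array([fitness(p, X, y) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest

X = rng.normal(size=(100, 6))
y = X[:, 0] + 0.1 * rng.normal(size=100)   # only feature 0 is informative
mask = binary_pso(X, y)
print(mask)
```

On this synthetic data the swarm should converge toward a mask that keeps the single informative feature.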
S1568494616300357
One of the most important techniques in human-robot communication is gesture recognition. If robots can read intentions from human gestures, the communication process will be smoother and more natural. Processing for gesture recognition typically consists of two parts: feature extraction and gesture classification. In most works, these are designed independently and evaluated by their own criteria. This paper proposes a hybrid approach based on mutual adaptation for human gesture recognition. We use a neuro-fuzzy system (NFS) for the classification of human gestures and apply an evolution strategy for parameter tuning and pruning of membership functions. Experimental results indicate the effectiveness of mutual adaptation in terms of generalization.
Hybrid evolutionary neuro-fuzzy approach based on mutual adaptation for human gesture recognition
S1568494616300369
Technical indicators, the building blocks of many trading systems, are widely used in Forex and other financial markets. A trading system based on technical indicators or pattern-based approaches produces buy/sell signals to trade in the market. In this paper, a heuristic-based trading system for Forex data, developed using popular technical indicators, is presented. The system is based on selecting and combining indicator-based trading rules using heuristic methods. The selection of the trading rules is realized using a genetic algorithm and a greedy search heuristic. A weighted majority voting method is proposed to combine the technical-indicator-based trading rules into a single trading rule. The experiments are conducted on two major currency pairs in three different time frames, where promising results are achieved.
Heuristic based trading system on Forex data using technical indicator rules
S1568494616300370
This paper addresses the problem of speech enhancement and acoustic noise reduction by adaptive filtering algorithms. Recently, we proposed a new forward blind source separation algorithm that enhances very noisy speech signals with a subband approach. In this paper, we propose a new variable-subband-step-size algorithm that improves the behaviour of the previous algorithm when the number of subbands is high. The new algorithm is based on recursive formulas that compute the variable step-sizes of the cross-coupling filters using the decorrelation criterion between the estimated sub-signals at each subband output. It yields an important improvement in the steady-state and mean square error values. We present simulation results that confirm the superiority of the proposed algorithm over its original version, which employs fixed step-sizes for the cross-coupling adaptive filters, and over another fullband algorithm.
[Nomenclature: decimation and interpolation factors; filter lengths, delay and subband indices; abbreviations (FTF, FNTF, AEC, ANR, APA, BSS, FBSS, BBSS, LMS, NLMS, RLS, SAF, SAD, SNR, SegSNR, CD, MSE, VAD, VSS, SVD, DWT); and signal/parameter notation (analysis and synthesis filters, noisy and estimated sub-signals, cross-coupling impulse responses, step-size bounds, cross-correlation factors).]
Improved subband-forward algorithm for acoustic noise reduction and speech quality enhancement
S1568494616300382
Residual stresses are an integral part of the total stress acting on any component in service. It is important to determine and/or predict the magnitude, nature and direction of the residual stress to estimate the life of important engineering parts, particularly welded components. Researchers have developed many direct techniques for measuring welding residual stress, and intelligent techniques have been developed to predict residual stresses to meet the demands of advanced manufacturing planning. This paper explores the development of a finite element model and an evolutionary fuzzy support vector regression model for the prediction of residual stress in welding. The residual stress model is developed using finite element simulation. Results from the finite element method (FEM) model are used to train and test the fuzzy support vector regression model tuned with a genetic algorithm (FSVRGA), using K-fold cross-validation. The performance of the developed model is compared with a support vector regression model and a fuzzy support vector regression model, and it is superior in terms of computational speed and accuracy. The developed models are validated and reported, and the model can be used to set initial weld process parameters.
Evolutionary fuzzy SVR modeling of weld residual stress
S1568494616300394
A method relying on the convex combination of two normalized filtered-s least mean square algorithms (CNFSLMS) is presented in this paper for nonlinear active noise control (ANC) systems with a linear secondary path (LSP) and a nonlinear secondary path (NSP). The proposed CNFSLMS-based functional link artificial neural network (FLANN) filter, which aims to overcome the compromise between the convergence speed and the steady-state mean square error of the NFSLMS algorithm, offers both a fast convergence rate and a low steady-state error. Furthermore, by replacing the sigmoid function with a modified Versorial function, a modified CNFSLMS (MCNFSLMS) algorithm with low computational complexity is also presented. Experimental results illustrate that the combination scheme can behave as well as the better component, and sometimes even better. Moreover, the MCNFSLMS algorithm requires less computation than the CNFSLMS while keeping the same filtering performance.
Improved functional link artificial neural network via convex combination for nonlinear active noise control
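The convex-combination principle itself can be illustrated on plain linear LMS filters: a fast and a slow filter run in parallel, and a sigmoid-parameterized mixing coefficient is adapted so the combination tracks whichever component currently performs better. This is a generic linear sketch of the scheme, not the paper's FLANN-based CNFSLMS; the filter lengths, step sizes and system below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def convex_combo_lms(x, d, L=8, mu_fast=0.05, mu_slow=0.005, mu_a=10.0):
    """Convex combination of a fast and a slow LMS filter.
    lambda = sigmoid(a) mixes the two outputs; `a` is adapted by a
    gradient step on the combined error."""
    w1 = np.zeros(L); w2 = np.zeros(L); a = 0.0
    out = np.zeros(len(x))
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]          # [x[n], x[n-1], ..., x[n-L+1]]
        y1, y2 = w1 @ u, w2 @ u
        lam = 1 / (1 + np.exp(-a))
        y = lam * y1 + (1 - lam) * y2
        e, e1, e2 = d[n] - y, d[n] - y1, d[n] - y2
        w1 += mu_fast * e1 * u
        w2 += mu_slow * e2 * u
        a += mu_a * e * (y1 - y2) * lam * (1 - lam)  # mixer gradient step
        a = np.clip(a, -4, 4)                        # keep lambda away from 0/1
        out[n] = y
    return out

# identify a known FIR system from noisy observations
h = np.array([0.5, -0.3, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
x = rng.normal(size=4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
y = convex_combo_lms(x, d)
mse_tail = np.mean((d[-500:] - y[-500:]) ** 2)
print(mse_tail)
```

The combined filter inherits the fast filter's initial convergence and the slow filter's low steady-state error, which is the compromise the CNFSLMS scheme targets.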
S1568494616300400
This paper investigates an integrated production and transportation scheduling (IPTS) problem which is formulated as a bi-level mixed integer nonlinear program. This problem considers distinct realistic features widely existing in make-to-order supply chains, namely an unrelated parallel-machine production environment and product batch-based delivery. An evolution-strategy-based bi-level evolutionary optimization approach is developed to handle the IPTS problem by integrating a memetic algorithm and heuristic rules. The efficiency and effectiveness of the proposed approach are evaluated by numerical experiments based on industrial data and industrial-size problems. Experimental results demonstrate that the proposed approach can effectively solve the problem investigated.
A bi-level evolutionary optimization approach for integrated production and transportation scheduling
S1568494616300412
What are the most relevant factors considered by employees when searching for an employer? The answer to this question represents valuable knowledge from the business intelligence viewpoint, since it allows companies to retain personnel and attract competent employees, leading to increased sales of their products or services and keeping them competitive among similar companies in the market. In this paper we assess the attractiveness of companies in Belgium using a new two-stage methodology based on artificial intelligence techniques. The proposed method constructs high-quality prototypes from partial rankings indicating experts' preferences. More explicitly, in the first step we propose a fuzzy clustering algorithm for partial rankings called fuzzy c-aggregation. This algorithm is based on the well-known fuzzy c-means procedure and uses the Hausdorff distance as the dissimilarity functional and a counting strategy for updating the center of each cluster. However, we cannot ensure the optimality of such prototypes, so more accurate prototypes must be derived. The second step therefore focuses on solving the extended Kemeny ranking problem for each discovered cluster, taking into account the estimated membership matrix. To accomplish this, we adopt an optimization method based on swarm intelligence that exploits a colony of artificial ants. Several simulations show the effectiveness of the proposal on the real-world problem under investigation.
Prototypes construction from partial rankings to characterize the attractiveness of companies in Belgium
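The Kemeny ranking step can be illustrated with a brute-force baseline: the consensus ranking minimizes a weighted sum of Kendall tau distances to the input rankings, where the weights would come from the estimated membership matrix. This exhaustive sketch is exponential in the number of items (which is why the paper uses an ant-colony optimizer) and assumes complete rankings for simplicity.

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of discordant item pairs between two complete rankings
    (each ranking is a tuple of item ids in order of preference)."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(r1)
    d = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0:
                d += 1
    return d

def kemeny_consensus(rankings, weights=None):
    """Exhaustive Kemeny aggregation: the ranking minimizing the
    (weighted) sum of Kendall tau distances to all input rankings."""
    items = rankings[0]
    weights = weights or [1.0] * len(rankings)
    return min(permutations(items),
               key=lambda c: sum(w * kendall_tau(c, r)
                                 for w, r in zip(weights, rankings)))

votes = [("A", "B", "C"), ("A", "C", "B"), ("B", "A", "C")]
print(kemeny_consensus(votes))  # -> ('A', 'B', 'C')
```

Passing per-ranking membership degrees as `weights` gives the fuzzy-weighted variant suggested by the abstract.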
S1568494616300436
In this study, we introduce the concept of a granular input space in system modeling, in particular in fuzzy rule-based modeling. The underlying problem can be succinctly formulated as follows: given a numeric model, develop an efficient way of forming granular input variables so that the corresponding granular outputs of the model achieve the highest level of specificity. The rationale behind this formulation is offered along with several illustrative examples. In conjunction with the underlying idea, an algorithmic framework is developed that supports optimization of the specificity of the model exposed to granular inputs (data). It dwells upon one of the principles of granular computing, namely the optimal allocation of information granularity. For illustrative purposes, the study focuses on information granules formalized as intervals (however, the proposed approach is equally relevant for other formalisms of information granules). A comparative analysis with the existing idea of global sensitivity analysis is also carried out by contrasting the essential differences between the two approaches and analyzing the results of computational experiments.
Optimal allocation of information granularity in system modeling through the maximization of information specificity: A development of granular input space
S1568494616300448
This paper considers the Bus Terminal Location Problem (BTLP), which incorporates characteristics of both the p-median and maximal covering problems. We propose a parallel variable neighborhood search algorithm (PVNS) for solving the BTLP. An improved local search, based on efficient neighborhood interchange, is used for the p-median part of the problem and is combined with a reduced neighborhood size for the maximal covering part. The proposed parallel algorithm is compared with its non-parallel version; parallelization yielded significant time improvement as a function of the processor core count. Computational results show that PVNS improves all existing results from the literature while using significantly less time. New larger instances, based on rl instances from the TSP library, are introduced, and computational results for these new instances are reported.
Parallel VNS for Bus Terminal Location Problem
S156849461630045X
The minimum vertex cover problem is a classical combinatorial optimization problem. This paper studies the problem from the perspective of rough sets. We show that finding a minimal vertex cover of a graph can be translated into finding an attribute reduction of a decision information table; likewise, finding a minimum vertex cover of a graph is equivalent to finding an optimal reduct of a decision information table. As an application of the theoretical framework, a new algorithm for the minimum vertex cover problem based on rough sets is constructed. Experiments show that the proposed algorithm achieves better performance in terms of the ratio value when compared with several other algorithms.
A rough set method for the minimum vertex cover problem of graphs
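As a point of reference for the task the reduct-based algorithm addresses, here are two classic baselines: the greedy 2-approximation, and an exhaustive search for the exact minimum cover (which mirrors the "optimal reduct" notion, but only scales to tiny graphs). Neither is the paper's rough-set method.

```python
from itertools import combinations

def greedy_vertex_cover(edges):
    """Classic 2-approximation: take both endpoints of each uncovered edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

def minimum_vertex_cover(edges):
    """Exact minimum cover by exhaustive search over vertex subsets,
    smallest subsets first (exponential time; tiny graphs only)."""
    vertices = sorted({v for e in edges for v in e})
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return s
    return set(vertices)

edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
print(greedy_vertex_cover(edges))    # a valid cover, possibly larger than optimal
print(minimum_vertex_cover(edges))   # -> {1, 4}
```

The "ratio value" reported in the abstract compares the size of the cover an algorithm finds against such an optimum.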
S1568494616300461
We consider the problem of calibrating pricing models based on the binomial tree method to market data in a network of auctions where agents are supposed to maximize a given utility function. The calibration is carried out using the minimum entropy principle to find a probability distribution that minimizes a weighted misfit between predicted and observed data. Numerical results from calibrating the mid prices from the bid–ask pairs of the buyer and seller to Taobao data demonstrated the feasibility of this approach in the case of pricing goods in a sequential auction. Further numerical test cases have been presented and have shown promising results. This work can equip those engaged in electronic trading with computational tools to improve their decision-making process in an uncertain environment.
Calibration based on entropy minimization for a class of asset pricing models
S1568494616300473
This paper presents a novel two-stage image segmentation framework based on an artificial immune system (AIS) to partition synthetic aperture radar (SAR) images. Three challenging problems remain incompletely solved in the SAR image processing community: (1) automatically discovering the true number of categories across different types of land cover; (2) smoothing speckle noise in SAR imagery, which differs from classical Gaussian and salt-and-pepper noise; and (3) achieving good clustering performance when segmenting thousands of highly contaminated pixels in a SAR image. With these three problems as goals, an effective two-stage SAR image segmentation framework (TSIS) is presented. First, a union filter combining a maximum likelihood estimator and a partial nonlocal means filter is designed. Then, a searching algorithm with variable-length chromosomes is designed to automatically discover the number of clusters in SAR images. Finally, an efficient multi-objective clustering paradigm in AIS with kernel mapping is proposed to implement the final image partition. To test its performance, a systematic comparison of TSIS versus three well-known variations of fuzzy c-means (FCM) and two graph partitioning methods is given. Experimental results show that TSIS provides an effective option for segmenting SAR imagery.
Two-stage SAR image segmentation framework with an efficient union filter and multi-objective kernel clustering
S1568494616300485
Yttrium barium copper oxide (YBCO) is a high temperature superconductor with excellent potential for long distance power transmission applications as well as other applications involving generation of high magnetic fields, such as magnetic resonance imaging machines in hospitals. Among the unique properties of this material is its ability to carry current perpetually without loss of energy. Practical applications of the YBCO superconductor depend greatly on the superconducting transition temperature (T C) attained upon doping with other external materials. The number of holes (i.e. doping) present in an atom of copper in the CuO2 planes of the YBCO superconductor controls its T C. Movement of the apical oxygen along the CuO2 planes due to doping gives insight into determining the effect of doping on T C using a bond-related quantity (the lattice parameter) that is easily measurable with reasonably high precision. This work employs the excellent predictive and generalization ability of a computational intelligence technique, support vector regression (SVR), to develop a computational intelligence-based model (CIM) that estimates the T C of thirty-one different YBCO superconductors using lattice parameters as descriptors. The estimated superconducting transition temperatures agree with the experimental values to a high degree of accuracy. The developed CIM allows quick and accurate estimation of the T C of any fabricated YBCO superconductor without the need for sophisticated equipment.
Application of computational intelligence technique for estimating superconducting transition temperature of YBCO superconductors
S1568494616300497
This paper presents an interval-valued intuitionistic fuzzy permutation method with likelihood-based preference functions for managing multiple criteria decision analysis based on interval-valued intuitionistic fuzzy sets. First, certain likelihood-based preference functions are proposed using the likelihoods of interval-valued intuitionistic fuzzy preference relationships. Next, selected practical indices of concordance/discordance are established to evaluate all possible permutations of the alternatives. The optimal priority order of the alternatives is determined by comparing all comprehensive concordance/discordance values based on score functions. Furthermore, this paper considers various preference types and develops another interval-valued intuitionistic fuzzy permutation method using programming models to address multiple criteria decision-making problems with incomplete preference information. The feasibility and applicability of the proposed methods are illustrated in the problem of selecting a suitable bridge construction method. Moreover, certain comparative analyses are conducted to verify the advantages of the proposed methods compared with those of other decision-making methods. Finally, the practical effectiveness of the proposed methods is validated with a risk assessment problem in new product development.
An interval-valued intuitionistic fuzzy permutation method with likelihood-based preference functions and its application to multiple criteria decision analysis
S1568494616300503
Computer numerical control (CNC) machines are used for repetitive, difficult and unsafe manufacturing tasks that require a high degree of accuracy. However, when selecting an appropriate CNC machine, multiple criteria need to be considered by multiple decision makers. In this study, a multi-criteria group decision making (MCGDM) technique based on the fuzzy VIKOR method is developed to solve a CNC machine tool selection problem. Linguistic variables represented by triangular fuzzy numbers are used to reflect decision maker preferences for the criteria importance weights and the performance ratings. After the individual preferences are aggregated or after the separation values are computed, they are then defuzzified. In this paper, two algorithms based on a fuzzy linguistic approach are developed. Based on these two algorithms and the VIKOR method, a general MCGDM framework is proposed. A CNC machine tool selection example illustrates the application of the proposed approach. A comparative study of the two algorithms using the above case study information highlighted the need to combine the ranking results, as both algorithms have distinct characteristics.
A group decision making framework based on fuzzy VIKOR approach for machine tool selection with linguistic information
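The crisp core of the VIKOR ranking used above can be sketched directly: compute the group utility S, the individual regret R, and the compromise index Q for each alternative. This sketch omits the paper's triangular-fuzzy-number aggregation and defuzzification steps and assumes all criteria are benefit-type; the matrix and weights below are illustrative.

```python
import numpy as np

def vikor(F, weights, v=0.5):
    """Crisp VIKOR: S (group utility), R (individual regret), Q (compromise).
    F: alternatives x criteria matrix, higher = better for every criterion.
    v balances group utility against individual regret."""
    f_best = F.max(axis=0)
    f_worst = F.min(axis=0)
    norm = (f_best - F) / np.where(f_best == f_worst, 1, f_best - f_worst)
    S = (weights * norm).sum(axis=1)
    R = (weights * norm).max(axis=1)
    Q = v * (S - S.min()) / (S.max() - S.min() + 1e-12) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min() + 1e-12)
    return S, R, Q

F = np.array([[7.0, 8.0, 6.0],    # machine A
              [8.0, 6.0, 7.0],    # machine B
              [5.0, 5.0, 9.0]])   # machine C
w = np.array([0.5, 0.3, 0.2])
S, R, Q = vikor(F, w)
print(Q.argmin())   # index of the best-compromise machine -> 1 (machine B)
```

In the fuzzy group version, the entries of F and w would be triangular fuzzy numbers aggregated over decision makers before (or after) defuzzification.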
S1568494616300515
Unimodal biometrics have made it possible to build systems capable of identifying individuals and managing their flow according to the intrinsic characteristics available. However, a reliable recognition system requires multiple resources; this is the main objective of multimodal systems, which combine different resources. Although multimodality improves the accuracy of such systems, it occupies a large memory space and consumes more execution time, given the information collected from different resources. We therefore consider feature selection, i.e. the selection of the best attributes, as a solution that enhances accuracy and reduces memory space. As a result, acceptable recognition performance with less risk of forgery and theft can be guaranteed. In this paper we propose an identification system using multimodal fusion of the finger-knuckle-print, the fingerprint and the finger's venous network, adopting several techniques at different levels of multimodal fusion. Fusion of these three biological traits at the feature level and the decision level is proposed. An optimization method for this multimodal fusion system, which enhances the feature-level fusion, is introduced; the optimization consists of space reduction using different methods.
Multimodal fusion of the finger vein, fingerprint and the finger-knuckle-print using Kernel Fisher analysis
S1568494616300527
Permeable concrete (PC) has gained a wide range of applications as a result of its unique properties, which result in highly connected macro-porosity and large pore sizes. However, experimental determination of these properties is labour-intensive and time-consuming, which necessitates a modeling technique capable of estimating the properties of PC with a high degree of accuracy. The present work estimates the physical, mechanical and hydrological properties of PC using a computational intelligence technique based on support vector regression (SVR), owing to the excellent generalization and predictive ability of SVR in the presence of few descriptive features. Four different models were built using twenty-four data points characterized by four descriptive features. The estimated properties of PC agree well with experimental values. The excellent generalization and predictive ability of the developed models indicates their high potential for enhancing the performance of PC through quick and accurate estimation of properties that are experimentally demanding and time-consuming to obtain.
Estimation of physical, mechanical and hydrological properties of permeable concrete using computational intelligence approach
S1568494616300539
Hesitant multiplicative preference relations (HMPRs) contain much more comprehensive information than traditional multiplicative preference relations, and the HMPR is a useful tool to help decision makers express their preferences in group decision making under uncertainty. The key to group decision making with the HMPR is to derive the priority weights from the HMPR; thus, an efficient and practical priority method should be put forward to ensure the reasonability of the final decision result. To do so, in this paper we first introduce the expected value and the geometric average value of the hesitant multiplicative element (HME), the component of the HMPR. Then, from different perspectives, we use the error-analysis technique to put forward three novel methods for deriving the priorities of the HMPR: the expectation value method, the geometric average value method, and the multiplicative deviation method. We also investigate the relationships among these methods, and develop an approach to group decision making with the HMPR by using these methods and the possibility degree formula. Finally, by constructing an indicator system for credit risk evaluation of supply chain enterprises, we present a detailed case study concerning Lu-Zhou-Lao-Jiao (a well-known liquor enterprise in China) to demonstrate our approach.
An approach to group decision making with hesitant information and its application in credit risk evaluation of enterprises
S1568494616300540
Differential evolution (DE) is a simple yet efficient population-based global evolutionary algorithm; however, DE may suffer from stagnation. This study presents a DE framework with a guiding archive (GAR-DE) to help DE escape from stagnation. The proposed framework constructs a guiding archive and performs stagnation detection at each iteration. The guiding archive is composed of a certain number of relatively high-quality solutions, collected in terms of fitness as well as diversity. If a stagnated individual is detected, the framework selects a solution from the guiding archive to replace the base vector in the mutation operator. In this way, more promising solutions are provided to guide the evolution, effectively helping DE escape from stagnation. The proposed framework is applied to six original DE algorithms, as well as two advanced DE variants. Experimental results on 28 benchmark functions and 8 real-world application problems show that the proposed framework can enhance the performance of most DE algorithms studied.
Differential evolution with guiding archive for global numerical optimization
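The base-vector replacement idea can be sketched on top of plain DE/rand/1/bin: when an individual has not improved for a number of generations, its base vector is drawn from a small elite archive instead of the population. This is a simplified reading of GAR-DE under stated assumptions; the paper's archive also accounts for diversity and uses its own stagnation criterion.

```python
import numpy as np

rng = np.random.default_rng(2)

def de_with_archive(f, bounds, NP=20, F=0.5, CR=0.9, gens=200, stall=10):
    """DE/rand/1/bin with an elite guiding archive used as the base vector
    for individuals that have stagnated for `stall` generations."""
    d = len(bounds)
    lo, hi = np.array(bounds).T
    pop = lo + rng.random((NP, d)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    no_improve = np.zeros(NP, dtype=int)
    for _ in range(gens):
        archive = pop[np.argsort(fit)[:5]]          # current elite solutions
        for i in range(NP):
            r = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            # stagnated individual: base vector comes from the archive
            base = archive[rng.integers(5)] if no_improve[i] >= stall else pop[r[0]]
            mutant = np.clip(base + F * (pop[r[1]] - pop[r[2]]), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True           # at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i], no_improve[i] = trial, ft, 0
            else:
                no_improve[i] += 1
    return pop[fit.argmin()], fit.min()

sphere = lambda x: float((x ** 2).sum())
x_best, f_best = de_with_archive(sphere, [(-5, 5)] * 5)
print(f_best)
```

On the 5-dimensional sphere function this converges to a near-zero objective value within the given budget.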
S1568494616300552
In this paper, a combination of a type-2 fuzzy logic system (T2FLS) and a conventional feedback controller (CFC) has been designed for load frequency control (LFC) of a nonlinear time-delay power system. In this approach, the T2FLS controller, designed to overcome the uncertainties and nonlinearities of the controlled system, is in the feedforward path, and the CFC, which plays an important role in the transient state, is in the feedback path. A Lyapunov–Krasovskii functional is used to ensure the stability of the system, and the parameter adjustment laws for the T2FLS controller are derived using this functional. In this training method, the effect of the delay is considered in tuning the T2FLS controller parameters, thus improving the performance of the system. The T2FLS controller is used due to its ability to effectively model uncertainties that may exist in the rules and in the data measured by the sensors. To illustrate the effectiveness of the proposed method, a two-area nonlinear time-delay power system is used, and the method is compared with a controller that uses the gradient-descent (GD) algorithm to tune the T2FLS controller parameters.
Designing an adaptive type-2 fuzzy logic system load frequency control for a nonlinear time-delay power system
S1568494616300564
This paper presents a Hybrid Nelder–Mead – Fuzzy Adaptive Particle Swarm Optimization (HNM-FAPSO) algorithm for the Multi-Line Congestion Management (MLCM) problem, a nonlinear optimization problem in deregulated power systems. The generators which are sensitive to the congested lines are selected based on a novel Apparent Power Sensitivity Factor (APSF) and are rescheduled using the HNM-FAPSO algorithm. The objective of hybridizing the Nelder–Mead (NM) method and Fuzzy Adaptive Particle Swarm Optimization (FAPSO) is to blend their unique advantages and efficacy. The NM method is a very efficient local search procedure and is used to initialize the population for Particle Swarm Optimization (PSO), which searches for the global best value. However, PSO suffers from premature convergence due to the linear variation of its internal parameter, the inertia weight; to overcome this, a fuzzy inference system is used to dynamically update the inertia weight. The feasibility and performance of the HNM-FAPSO algorithm are demonstrated on the standard IEEE 30-bus and IEEE 118-bus systems. A comparison of simulation results and statistical analysis reveals the optimization efficacy of the HNM-FAPSO algorithm over other algorithms from the literature, such as FAPSO, NM, Simple Bacterial Foraging (SBF), adaptive bacterial foraging with Nelder–Mead (ABFNM), Genetic Algorithm (GA) and PSO. The main outcomes of the proposed algorithm in solving MLCM are reductions in congestion cost, rescheduled real power, power loss and computational time.
Managing multi-line power congestion by using Hybrid Nelder–Mead – Fuzzy Adaptive Particle Swarm Optimization (HNM-FAPSO)
S1568494616300576
This study presents a seasonal multi-product multi-period inventory control model with inventory costs obtained under inflation and an all-unit discount policy. The products are delivered in boxes of known quantities, and both backorder and lost-sale quantities are considered in case of shortage. The goal is to find a representative set of Pareto optimal solutions (including the ordering quantities) in different periods, minimizing both the total inventory cost (i.e. ordering, holding, shortage, and purchasing costs) and the total storage space simultaneously. Three multi-objective optimization algorithms, namely the non-dominated sorting genetic algorithm (NSGA-II), the non-dominated ranked genetic algorithm (NRGA), and multi-objective particle swarm optimization (MOPSO), are proposed to solve the problem. The Taguchi approach with a novel metric (based on the coefficient of variation) is utilized to model the response variable and compare the performances of the algorithms. Three numerical examples are used to demonstrate the applicability and exhibit the efficacy of the procedures and algorithms. The results of statistical analyses show significant differences in the performance metrics for all three algorithms and in all three numerical examples.
A bi-objective inventory optimization model under inflation and discount using tuned Pareto-based algorithms: NSGA-II, NRGA, and MOPSO
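A building block shared by the Pareto-based algorithms named above is the NSGA-II crowding distance, which measures how isolated a solution is on its front so that selection preserves spread. A minimal sketch for a minimization problem (standard textbook form, not the tuned variants used in the paper):

```python
import numpy as np

def crowding_distance(F):
    """NSGA-II crowding distance for objective vectors F (n x m),
    minimization assumed. Boundary points get infinite distance so the
    extremes of the front are always kept."""
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        dist[order[0]] = dist[order[-1]] = np.inf
        span = F[order[-1], j] - F[order[0], j]
        if span == 0:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (F[order[k + 1], j] - F[order[k - 1], j]) / span
    return dist

# three points on a front: two extremes and one interior point
F = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(crowding_distance(F))   # -> [inf, 2.0, inf]
```

In the inventory model the two objectives would be total inventory cost and total storage space; solutions with larger crowding distance are preferred when truncating a front.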
S1568494616300667
Dimensionality reduction is a useful technique for coping with the high dimensionality of real-world data. However, traditional methods were studied in the context of datasets with only numeric attributes. With the demand for analyzing datasets involving categorical attributes, an extension to the recent dimensionality-reduction technique t-SNE is proposed. The extension enables t-SNE to handle mixed-type datasets. Each attribute of the data is associated with a distance hierarchy, which allows the distance between numeric values and between categorical values to be measured in a unified manner. More importantly, domain knowledge regarding distance, reflecting the semantics embedded in categorical values, can be specified via the hierarchy. Consequently, the extended t-SNE can project high-dimensional mixed data to a low-dimensional space with a topological order that reflects the user's intuition.
Integrated dimensionality reduction technique for mixed-type data involving categorical values
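The distance-hierarchy idea can be sketched as follows: categorical distance is the path length to the lowest common ancestor in a concept hierarchy, and numeric distance is a range-normalized difference, so both attribute types contribute to one unified distance. This toy version uses unweighted links (the paper's hierarchies carry link weights) and all names below are illustrative.

```python
def hierarchy_distance(hierarchy, a, b):
    """Distance between two categorical values as path length to their
    lowest common ancestor; `hierarchy` maps each node to its parent."""
    def ancestors(x):
        path = [x]
        while x in hierarchy:
            x = hierarchy[x]
            path.append(x)
        return path
    pa, pb = ancestors(a), ancestors(b)
    common = next(x for x in pa if x in set(pb))
    return pa.index(common) + pb.index(common)

def mixed_distance(x, y, hierarchies, num_ranges):
    """Unified per-attribute distance: numeric attributes use a
    range-normalized absolute difference; categorical attributes use
    the hierarchy distance."""
    d = 0.0
    for j, (xj, yj) in enumerate(zip(x, y)):
        if j in hierarchies:
            d += hierarchy_distance(hierarchies[j], xj, yj)
        else:
            d += abs(xj - yj) / num_ranges[j]
    return d

# attribute 1 is categorical: drinks and food share only the root "any"
h = {"coffee": "drink", "tea": "drink", "bread": "food",
     "drink": "any", "food": "any"}
print(mixed_distance((1.0, "coffee"), (3.0, "tea"), {1: h}, {0: 10.0}))   # -> 2.2
print(mixed_distance((1.0, "coffee"), (1.0, "bread"), {1: h}, {0: 10.0}))  # -> 4.0
```

Note how the semantics encoded in the hierarchy make "coffee" closer to "tea" than to "bread", which is exactly the kind of domain knowledge the extended t-SNE can exploit when building its pairwise affinities.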
S1568494616300679
In multi-hop routing, cluster heads near the base station act as relays for distant cluster heads and thus deplete their energy very quickly, creating hot spots in the sensor field. This paper introduces a new clustering algorithm named the Unequal Multi-hop Balanced Immune Clustering protocol (UMBIC) to solve the hot spot problem and improve the lifetime of small- and large-scale, homogeneous and heterogeneous wireless sensor networks with different densities. The UMBIC protocol utilizes an Unequal Clustering Mechanism (UCM) and a Multi-Objective Immune Algorithm (MOIA) to adjust the intra-cluster and inter-cluster energy consumption. The UCM partitions the network into clusters of unequal size based on distance to the base station and residual energy, while the MOIA constructs optimal clusters and a routing tree among them that covers the entire sensor field, ensures connectivity among nodes, and minimizes the communication cost of all nodes. The UMBIC protocol rotates the role of cluster head among the nodes only if the residual energy of one of the current cluster heads falls below an energy threshold; as a result, computation time and overhead are saved. Simulation results show that, compared with other protocols, the UMBIC protocol can effectively improve the network lifetime, solve the hot spot problem, and balance the energy consumption among all nodes in the network. Moreover, it has lower overhead and computational complexity.
An Unequal Multi-hop Balanced Immune Clustering protocol for wireless sensor networks
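The abstract does not give UMBIC's exact sizing rule, but unequal clustering schemes of this kind typically shrink the competition radius of cluster heads near the base station so their clusters stay small and relaying capacity is preserved. A sketch of one widely used distance-based rule, offered purely as an assumption:

```python
def competition_radius(d_to_bs, d_min, d_max, r_max, c=0.5):
    """Illustrative unequal-clustering rule (an assumption, not UMBIC's
    published formula): the closer a cluster head is to the base station,
    the smaller its competition radius, hence the smaller its cluster."""
    return (1 - c * (d_max - d_to_bs) / (d_max - d_min)) * r_max

print(competition_radius(10, 10, 100, 50))   # nearest CH gets the smallest radius
print(competition_radius(100, 10, 100, 50))  # farthest CH keeps the full radius
```

A residual-energy term could be folded into the same expression; UMBIC additionally lets its immune algorithm optimize cluster membership and the inter-cluster routing tree on top of such radii.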
S1568494616300680
Identity verification based on authenticity assessment of a handwritten signature is an important issue in biometrics. There are many effective methods for signature verification that take into account the dynamics of the signing process, and methods based on partitioning occupy a very important place among them. In this paper we propose a new approach to signature partitioning. Its most important feature is the possibility of selecting and processing hybrid partitions in order to increase the precision of the test signature analysis. Partitions are formed by a combination of vertical and horizontal sections of the signature. Vertical sections correspond to the initial, middle, and final time moments of the signing process. In turn, horizontal sections correspond to the signature areas associated with high and low pen velocity and high and low pen pressure on the surface of a graphics tablet. Our previous research on vertical and horizontal sections of the dynamic signature (created independently) led us to develop the algorithm presented in this paper. Selection of sections, among other benefits, allows us to quantify the stability of the signing process within the partitions, promoting signature areas of greater stability (and vice versa). In tests of the proposed method two databases were used: the public MCYT-100 database and the commercial BioSecure database.
A new algorithm for identity verification based on the analysis of a handwritten dynamic signature
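The vertical/horizontal combination described above can be sketched concretely: cut the signal into three time sections, then split each by a velocity threshold. The section count, the median threshold, and the signal shape are illustrative choices, not the paper's exact parameters:

```python
import statistics

def hybrid_partitions(velocity):
    """Split a dynamic-signature velocity signal into three vertical time
    sections (initial, middle, final), then split each into high- and
    low-velocity horizontal parts relative to the section median."""
    n = len(velocity)
    sections = [velocity[:n // 3], velocity[n // 3:2 * n // 3], velocity[2 * n // 3:]]
    partitions = []
    for sec in sections:
        med = statistics.median(sec)
        partitions.append([v for v in sec if v >= med])  # high-velocity part
        partitions.append([v for v in sec if v < med])   # low-velocity part
    return partitions

parts = hybrid_partitions([0.1, 0.4, 0.9, 1.2, 1.0, 0.3, 0.2, 0.8, 0.7, 0.5, 0.6, 0.1])
print(len(parts))  # 3 vertical x 2 horizontal = 6 hybrid partitions
```

An analogous split on pen pressure would yield the pressure-based partitions; stability statistics computed per partition then drive the weighting described in the abstract.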
S1568494616300692
Hüseyin Haklı and Harun Uguz (2014) proposed a novel approach to global function optimization using particle swarm optimization with Lévy flight (LFPSO) [Hüseyin Haklı, Harun Uguz, A novel particle swarm optimization algorithm with levy flight. Appl. Soft Comput. 23, 333–345 (2014)]. In our study, we enhance the LFPSO algorithm so that the modified LFPSO algorithm (PSOLF) outperforms the LFPSO algorithm and other PSO variants. The enhancement introduces a Lévy flight method for updating the particle velocity; after this update, the particle velocity becomes the new position of the particle. The proposed method is examined on well-known benchmark functions, and the results show that PSOLF is better than the standard PSO (SPSO), LFPSO, and other PSO variants. The experimental results are also assessed using Wilcoxon's rank sum test for statistically significant differences between the methods, and the test confirms that the proposed PSOLF method is much better than SPSO and LFPSO. Combining Lévy flight with PSO yields strong global search capability and a high convergence rate.
An enhanced particle swarm optimization with levy flight for global optimization
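Lévy-flight steps in such PSO variants are commonly drawn with the Mantegna algorithm, which produces a heavy-tailed step distribution (mostly small moves, occasional long jumps). The step generator below is that standard construction; the `pso_lf_move` update that uses the velocity directly as the next position follows the abstract's description but its scaling and direction term are illustrative assumptions:

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-distributed step length via the Mantegna algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def pso_lf_move(position, gbest, scale=0.01):
    """Hypothetical PSOLF-style update: a Lévy step toward the global best is
    taken as the velocity, which then directly becomes the new position."""
    return [scale * levy_step() * (g - x) for x, g in zip(position, gbest)]
```

The heavy tail is what gives the method its global-search character: the swarm mostly refines locally but can occasionally escape a basin with one long jump.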
S1568494616300709
This paper presents FAMACROW, a Fuzzy and Ant Colony Optimization Based Combined MAC, Routing, and Unequal Clustering Cross-Layer Protocol for Wireless Sensor Networks consisting of several nodes that send sensed data to a Master Station (MS). FAMACROW incorporates cluster head selection, clustering, and inter-cluster routing protocols. For cluster head selection, FAMACROW uses fuzzy logic with residual energy, number of neighboring nodes, and quality of the communication link as input variables. To avoid the hot spots problem, FAMACROW uses an unequal clustering mechanism in which clusters closer to the MS are smaller than those far from it. For reliable and energy-efficient inter-cluster multi-hop routing from cluster heads to the MS, FAMACROW uses an Ant Colony Optimization based technique. The inter-cluster routing protocol selects a relay node considering: (i) its distance from the current cluster head and from the MS (for energy-efficient inter-cluster communication), (ii) its residual energy (for energy distribution across the network), (iii) its queue length (for congestion control), and (iv) its delivery likelihood (for reliable communication). A comparative analysis of FAMACROW with Unequal Cluster Based Routing [33], Unequal Layered Clustering Approach [43], Energy Aware Unequal Clustering using Fuzzy logic [37], and Improved Fuzzy Unequal Clustering [35] shows that FAMACROW is 41% more energy-efficient, has 75–88% longer network lifetime, and sends 82% more packets than the Improved Fuzzy Unequal Clustering protocol.
FAMACROW: Fuzzy and ant colony optimization based combined mac, routing, and unequal clustering cross-layer protocol for wireless sensor networks
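The ACO side of such a routing protocol usually reduces to the classic proportional selection rule: each candidate relay is weighted by pheromone raised to one exponent times a desirability heuristic raised to another. The rule below is standard ACO; how the desirability term would fold in distance, residual energy, queue length, and delivery likelihood for FAMACROW specifically is an assumption:

```python
import random

def choose_relay(pheromone, desirability, alpha=1.0, beta=2.0):
    """Standard ACO proportional selection: relay i is picked with probability
    proportional to pheromone[i]**alpha * desirability[i]**beta. In a
    FAMACROW-like setting, desirability could combine distance, residual
    energy, queue length, and delivery likelihood (illustrative)."""
    weights = [(t ** alpha) * (h ** beta) for t, h in zip(pheromone, desirability)]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1
```

Pheromone reinforcement on links used by successful deliveries then biases future ants toward reliable, energy-efficient paths to the MS.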
S1568494616300734
Credit scoring aims to assess the risk associated with lending to individual consumers. Recently, ensemble classification methodology has become popular in this field. However, most studies utilize random sampling to generate training subsets for constructing the base classifiers, so their diversity is not guaranteed, which may degrade overall classification performance. In this paper, we propose an ensemble classification approach based on supervised clustering for credit scoring. In the proposed approach, supervised clustering is employed to partition the data samples of each class into a number of clusters. Clusters from different classes are then pairwise combined to form a number of training subsets, and a specific base classifier is constructed on each. For a sample whose class label needs to be predicted, the outputs of these base classifiers are combined by weighted voting, where the weight associated with a base classifier is determined by its classification performance in the neighborhood of the sample. In the experimental study, two benchmark credit data sets are adopted for performance evaluation, and an industrial case study is conducted. The results show that, compared to other ensemble classification methods, the proposed approach generates base classifiers with higher diversity and local accuracy, and improves the accuracy of credit scoring.
Ensemble classification based on supervised clustering for credit scoring
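The combination step described above, weighting each base classifier by its accuracy near the query sample and then voting, can be sketched directly. The function and variable names are illustrative, not the paper's notation:

```python
def local_weight(base_preds, labels, neighbor_ids):
    """Weight of one base classifier: its accuracy on the labeled samples
    nearest to the query (the 'neighborhood' of the abstract)."""
    correct = sum(1 for i in neighbor_ids if base_preds[i] == labels[i])
    return correct / len(neighbor_ids)

def weighted_vote(outputs, weights):
    """Combine the base classifiers' predicted labels for one sample by
    accumulating each classifier's weight behind its vote."""
    scores = {}
    for label, w in zip(outputs, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# three base classifiers vote on one applicant; the locally most accurate wins
print(weighted_vote(["good", "bad", "good"], [0.4, 0.9, 0.3]))  # "bad"
```

Because the weights are recomputed per query, a classifier that is strong only in one region of the feature space still dominates the vote there without distorting predictions elsewhere.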
S1568494616300746
Incremental learning has been developed for supervised classification, where knowledge is accumulated incrementally and represented in the learning process. However, labeling sufficient samples in each data chunk is costly, and incremental technologies are seldom discussed in the semi-supervised paradigm. In this paper we propose an Incremental Semi-Supervised classification approach via Self-Representative Selection (IS3RS) for data stream classification that exploits both the labeled and unlabeled dynamic samples. An incremental self-representative data selection strategy is proposed to find the most representative exemplars in each sequential data chunk. These exemplars are incrementally labeled to expand the training set, accumulating knowledge over time to benefit future prediction. Extensive experimental evaluations on several benchmarks demonstrate the effectiveness of the proposed framework.
Incremental Semi-Supervised classification of data streams via self-representative selection
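The abstract does not spell out the self-representative optimization, but the idea of picking a few exemplars that best represent a chunk can be approximated greedily: keep choosing the sample with the largest total similarity to the samples not yet covered. This is a simplified stand-in, not the paper's method:

```python
def select_exemplars(chunk, k, sim):
    """Greedy stand-in for self-representative selection (an assumption; the
    paper's exact optimization is not given in the abstract): repeatedly pick
    the sample with the largest total similarity to the remaining samples."""
    remaining = list(range(len(chunk)))
    chosen = []
    for _ in range(k):
        best = max(remaining,
                   key=lambda i: sum(sim(chunk[i], chunk[j]) for j in remaining))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# negative absolute difference as similarity; sample 1 sits between 0 and 2
print(select_exemplars([0.0, 0.1, 5.0], 1, lambda a, b: -abs(a - b)))  # [1]
```

The chosen exemplars are the only samples that need labels, which is exactly where the labeling-cost saving of the semi-supervised stream setting comes from.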
S1568494616300758
Sentiment analysis, also called opinion mining, is currently one of the most studied research fields which aims to analyse people's opinions. E-commerce websites allow users to share opinions about a product/service by providing textual reviews along with numerical ratings. These opinions greatly influence future consumer purchasing decisions. This paper introduces an innovative computational intelligence framework for efficiently predicting customer review ratings. The framework has been designed to deal with the dimensionality and noise which are typically apparent in large datasets containing customer reviews. The proposed framework integrates the techniques of Singular Value Decomposition (SVD) for dimensionality reduction, Fuzzy C-Means (FCM), and the Adaptive Neuro-Fuzzy Inference System (ANFIS). The proposed approach achieved high accuracy, and the results revealed that when large datasets are concerned, only a fraction of the data is needed for creating a system to predict the review ratings of textual reviews. Results from the experiments suggest that the proposed approach yields better prediction performance than other state-of-the-art rating predictors which are based on the conventional Artificial Neural Network, Fuzzy C-Means, and Support Vector Machine approaches. In addition, the proposed framework can be utilised for other classification and prediction tasks, and its neuro-fuzzy predictor module can be replaced by other classifiers.
A computational intelligence approach to efficiently predicting review ratings in e-commerce
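Of the three techniques in the framework above, the FCM stage is the most self-contained to sketch: each (reduced) review vector gets a graded membership in every cluster rather than a hard assignment, and those memberships can feed a downstream fuzzy predictor. Below is the standard FCM membership formula for a one-dimensional toy sample; the inputs are illustrative, not the paper's data:

```python
def fcm_memberships(x, centers, m=2.0):
    """Standard Fuzzy C-Means membership of sample x in each cluster
    (fuzzifier m): u_j = 1 / sum_k (d_j / d_k)**(2 / (m - 1)),
    so the memberships always sum to 1."""
    d = [abs(x - c) for c in centers]
    if any(di == 0 for di in d):            # x coincides with a center
        return [1.0 if di == 0 else 0.0 for di in d]
    return [1.0 / sum((dj / dk) ** (2 / (m - 1)) for dk in d) for dj in d]

print(fcm_memberships(2.0, centers=[1.0, 4.0]))  # [0.8, 0.2]
```

In a pipeline like the paper's, SVD would first compress the review-term matrix, FCM would produce soft cluster memberships in that reduced space, and ANFIS (or any replacement classifier) would map them to a predicted rating.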