FileName | Abstract | Title |
---|---|---|
S1568494614006474 | In recent years, particle swarm optimization (PSO) has been extensively applied to various optimization problems because of its simple structure. However, PSO may become trapped in local optima or exhibit slow convergence when solving complex multimodal problems. The algorithm also requires setting several parameters, and tuning these parameters is challenging for some optimization problems. To address these issues, an improved PSO scheme is proposed in this study. The algorithm, called non-parametric particle swarm optimization (NP-PSO), enhances global exploration and local exploitation in PSO without tuning any algorithmic parameter. NP-PSO combines local and global topologies with two quadratic interpolation operations to increase the search ability. Nineteen unimodal and multimodal nonlinear benchmark functions are selected to compare the performance of NP-PSO with several well-known PSO algorithms. The experimental results show that the proposed method considerably enhances the efficiency of the PSO algorithm in terms of solution accuracy, convergence speed, global optimality, and algorithm reliability. | Non-parametric particle swarm optimization for global optimization |
S1568494614006620 | This article presents the hardware implementation of a floating-point processor (FPP) to develop a radial basis function (RBF) neural network for general-purpose pattern recognition and nonlinear control. The floating-point processor is designed on a field programmable gate array (FPGA) chip to execute the nonlinear functions required in the parallel calculation of the back-propagation algorithm. Internal weights of the RBF network are updated by the on-line learning back-propagation algorithm. The on-line learning process of the RBF chip is compared numerically with the results of the RBF neural network learning process implemented in MATLAB. The performance of the designed RBF neural chip is tested on the real-time pattern classification of the XOR logic. Performance is evaluated by comparing results with those from MATLAB through extensive experimental studies. | Implementation of the RBF neural chip with the back-propagation algorithm for on-line learning |
S1568494614006632 | Most of the routing algorithms devised for sensor networks consider either energy constraints or bandwidth constraints to maximize the network lifetime. In real scenarios, both energy and bandwidth are scarce resources in sensor networks. Energy constraints affect only sensor routing, whereas link bandwidth affects both the routing topology and the data rate on each link. Therefore, a heuristic technique that combines both energy and bandwidth constraints for better routing in wireless sensor networks is proposed. The link bandwidth is allocated based on the remaining energy, making the routing solution feasible under bandwidth constraints. This scheme uses an energy-efficient algorithm called the nearest neighbor tree (NNT) for routing. The data gathered from neighboring nodes are also aggregated based on an averaging technique in order to reduce the number of data transmissions. Experimental results show that this technique yields good solutions that increase the sensor network lifetime. The proposed work is also tested on a wildfire application. | Heuristic routing with bandwidth and energy constraints in sensor networks |
S1568494614006644 | Research on intelligent optimization is concerned with developing algorithms in which the optimization process is guided by an “intelligent agent”, whose role is to deal with algorithmic issues such as parameter tuning, adaptation, and combination of different existing optimization techniques, with the aim of improving the efficiency and robustness of the optimization process. This paper proposes an intelligent optimization approach to solve the minimum labelling spanning tree (MLST) problem. The MLST problem is a combinatorial optimization problem where, given a connected, undirected graph whose edges are labelled (or coloured), the aim is to find a spanning tree whose edges have the smallest number of distinct labels (or colours). In recent work, the MLST problem has been shown to be NP-hard and some effective metaheuristics have been proposed and analysed. The intelligent optimization algorithm proposed here integrates the basic variable neighbourhood search heuristic with other complementary approaches from machine learning, statistics and experimental soft computing, in order to produce high-quality performance and to completely automate the resulting optimization strategy. We present experimental results on randomly generated graphs with different statistical properties, and demonstrate the implementation, the robustness, and the empirical scalability of our intelligent local search. Our computational experiments show that the proposed strategy outperforms heuristics recommended in the literature and is able to obtain high quality solutions quickly. | Solving the minimum labelling spanning tree problem by intelligent optimization |
S1568494614006656 | This study focuses on the accurate tracking control and sensorless estimation of external force disturbances on robot manipulators. The proposed approach is based on an adaptive Wavelet Neural Network (WNN), named the Adaptive Force-Environment Estimator (WNN-AFEE). Unlike disturbance observers, WNN-AFEE does not require the inverse of the Jacobian transpose for computing the force and thus has no computational problems near singular points. In this scheme, the WNN estimates the external force disturbance to attenuate its effects on the control system performance by estimating the environment model. A Lyapunov-based design is presented to determine adaptive laws for tuning the WNN parameters. Another advantage of the proposed approach is that it can estimate the force even when there are parametric uncertainties in the robot model, because an additional adaptive law is designed to estimate the robot parameters. In a theorem, the stability of the closed-loop system is proved and a general condition is presented for identifying the force and robot parameters. Some suggestions are provided for improving the estimation and control performance. A WNN-AFEE is then designed for a planar manipulator as an example, and simulations are performed for different conditions. The WNN-AFEE results are compared carefully with the results of an adaptive force estimator and a disturbance estimator. These comparisons show the efficiency of the proposed controller in dealing with different conditions. | Adaptive force–environment estimator for manipulators based on adaptive wavelet neural network |
S1568494614006668 | An accurate estimation of aquifer hydraulic parameters is required for groundwater modeling and proper management of vital groundwater resources. In situ measurements of aquifer hydraulic parameters are expensive and difficult. Traditionally, these parameters have been estimated by graphical methods that are approximate and time-consuming. As a result, nonlinear programming (NLP) techniques have been used extensively to estimate them. Despite outperforming graphical methods, NLP approaches tend to converge to local minima and typically suffer from convergence problems. In this study, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) methods are used to identify the hydraulic parameters (i.e., storage coefficient, hydraulic conductivity, transmissivity, specific yield, and leakage factor) of three types of aquifers, namely confined, unconfined, and leaky, from real time–drawdown pumping test data. The performance of GA and ACO is also compared with that of graphical and NLP techniques. The results show that both GA and ACO are efficient, robust, and reliable for estimating various aquifer hydraulic parameters from time–drawdown data and perform better than the graphical and NLP techniques. The outcomes also indicate that the accuracy of GA and ACO is comparable. A comparison of the running times of the various methods shows that ACO converges to the optimal solution faster than the other techniques, while the graphical method has the highest running time. | Evaluation of methods for estimating aquifer hydraulic parameters |
S156849461400667X | The pyramidal dual-tree directional filter bank (PDTDFB) transform is a new image decomposition with many advantages, such as a multiscale and multidirectional transform, efficient implementation, high angular resolution, a low redundancy ratio, and shiftable subbands. In this paper, we present a new color image segmentation algorithm based on a PDTDFB-domain hidden Markov tree (HMT) model. First, the joint statistics and mutual information of the PDTDFB coefficients are studied. Then, the PDTDFB coefficients are modeled using an HMT model with Gaussian mixtures, which can effectively capture the intra-scale, inter-scale, and inter-direction dependencies. Finally, a color image segmentation method using the PDTDFB-domain HMT model is developed, in which expectation–maximization (EM) parameter estimation, Bayesian multiscale raw segmentation, context-based multiscale fusion, and majority-vote based color component fusion are used. Experimental evidence shows that the proposed color image segmentation algorithm produces very effective segmentation results in comparison with the state-of-the-art segmentation methods recently proposed in the literature. | Color image segmentation using PDTDFB domain hidden Markov tree model |
S1568494614006681 | A MEMS Inertial Navigation System (INS) combined with GPS is used to provide position and velocity data for land vehicles. Data fusion of INS and GPS measurements is commonly achieved through a conventional Extended Kalman Filter (EKF). Because the EKF requires an accurate system model together with perfect knowledge of predefined error models, its performance is degraded by the unmodeled nonlinearities and unknown bias uncertainties of MEMS inertial sensors. Universal knowledge-based approximators comprising neural networks and fuzzy logic methods are capable of approximating the nonlinearities and uncertainties of practical systems. First, in this paper, a new fuzzy neural network (FNN) function approximator is used to model unknown nonlinear systems. Second, the design and real-time implementation of an adaptive fuzzy neuro-observer (AFNO) in integrated low-cost INS/GPS positioning systems is proposed. To assess the long-term performance of the proposed AFNO method, wide-ranging tests of a real INS/GPS mounted in a car have been performed. The unbiased estimation results of the AFNO show the superiority of the proposed method compared with the classic EKF and with an adaptive neuro-observer (ANO) based on a pure artificial neural network (ANN) function approximator. | Adaptive fuzzy neuro-observer applied to low cost INS/GPS |
S1568494614006693 | Search-based software testing promises the ability to generate and evaluate large numbers of test cases at minimal cost. From an industrial perspective, this could enable an increase in product quality without a matching increase in the time and effort required to achieve it. Search-based software testing, however, is a set of quite complex techniques and approaches that do not immediately translate into a process for use by most companies. For example, even if engineers receive proper education and training in these new approaches, it can be hard to develop a general fitness function that covers all contingencies. Furthermore, in industrial practice, the knowledge and experience of domain specialists are often key to effective testing and thus to the overall quality of the final software system. But it is not clear how such domain expertise can be utilized in a search-based system. This paper presents an interactive search-based software testing (ISBST) system designed to operate in an industrial setting, with the explicit aim of requiring only limited expertise in software testing. It uses SBST to search for test cases for an industrial software module, while also allowing domain specialists to use their experience and intuition to interactively guide the search. In addition to presenting the system, this paper reports on an evaluation of the system in a company developing a framework for embedded software controllers. A sequence of workshops provided regular feedback and validation for the design and improvement of the ISBST system. Once developed, the ISBST system was evaluated by four electrical and system engineers from the company (the ‘domain specialists’ in this context), who used the system to develop test cases for a commonly used controller module. As well as evaluating the utility of the ISBST system, the study generated interaction data that were used in subsequent laboratory experimentation to validate the underlying search-based algorithm in the presence of realistic, but repeatable, interactions. The results validate the importance of enabling automated software testing tools in general, and search-based tools in particular, to leverage input from domain specialists while generating tests. Furthermore, the evaluation highlighted the benefits of using such an approach to explore areas that current testing practices do not cover or cover insufficiently. | An initial industrial evaluation of interactive search-based testing for embedded software |
S156849461400670X | In this article, the dynamic multi-swarm particle swarm optimizer (DMS-PSO) and a new cooperative learning strategy (CLS) are hybridized to obtain DMS-PSO-CLS. DMS-PSO is a recently developed multi-swarm optimization algorithm with strong exploration ability owing to its novel random regrouping schedule. However, the frequent regrouping operation of DMS-PSO weakens its exploitation ability. In order to achieve a good balance between exploration and exploitation, the cooperative learning strategy is hybridized with DMS-PSO, which allows information to be used more effectively to generate better-quality solutions. In the proposed strategy, for each sub-swarm, each dimension of the two worst particles learns from the better particle of two randomly selected sub-swarms using a tournament selection strategy, so that particles have more promising exemplars to learn from and can find the global optimum more easily. Experiments are conducted on some well-known benchmarks and the results show that DMS-PSO-CLS has a superior performance in comparison with DMS-PSO and several other popular PSO variants. | Dynamic multi-swarm particle swarm optimizer with cooperative learning strategy |
S1568494614006711 | In a deregulated multi-area electrical power system, the objective is to determine the most economical generation dispatch strategy that satisfies the area load demands, the tie-line limits, and other operating constraints. Usually, economic dispatch (ED) deals only with cost minimization, but minimization of emission content has become an equally important concern due to the mandatory requirement of pollution reduction for environmental protection. Environmental economic dispatch (EED) is a complex multi-objective optimization (MOO) problem with conflicting goals. Normally a fuzzy ranking is employed to rank the large number of Pareto solutions obtained after solving a MOO problem, but in this paper the preference of the decision maker (DM) is used to guide the search and to select the population for the next generation. An improved differential evolution (DE) method is proposed in which the selection operation is modified to reduce the complexity of multi-attribute decision making with the help of a fuzzy framework. Solutions are assigned a fuzzy rank on the basis of their level of satisfaction of the different objectives before population selection, and the fuzzy rank is then used to select and pass better solutions on to the next generation. A well-distributed Pareto front is obtained, which presents a large number of alternative trade-off solutions for the power system operator. A momentum operation is also included to prevent stagnation and to create Pareto diversity. Studies are carried out on three test cases, and the results obtained are found to be better than those reported in previous literature. | Environmental economic dispatch in multi-area power system employing improved differential evolution with fuzzy selection |
S1568494614006735 | Image coding using principal component analysis (PCA), a type of image compression technique, projects image blocks onto a subspace that preserves most of the original information. However, the blocks in an image exhibit various inhomogeneous properties, such as smooth regions, texture, and edges, which give rise to difficulties in PCA image coding. This paper proposes a repartition clustering method to partition the data into groups such that individuals of the same group are homogeneous while individuals of different groups are not. The PCA method is applied separately to each group. In the clustering method, the genetic algorithm acts as a framework consisting of three phases, including the proposed repartition clustering. Based on this mechanism, the proposed method can effectively increase image quality and provide an enhanced visual effect. | A repartition method improving visual quality for PCA image coding |
S1568494614006747 | In this paper, a scatter search algorithm with improved component modules is proposed to solve the single machine total weighted tardiness problem with sequence-dependent setup times. For the diversification generation module, both random-strategy-based heuristics and a construction heuristic are adopted to generate a diversified population. For the improvement module, variable neighborhood search based local searches are embedded in the algorithm to improve the trial solutions and the combined solutions. For the reference set update module, the number of edges by which two solutions differ is counted to measure the diversification value between them. We also propose a new strategy in which the length of the reference set is adjusted adaptively to balance computing time and solving ability. In addition, a discrete differential evolution operator is proposed which, together with another two operators, constitutes the combination module that generates new trial solutions from the solutions in the subsets. The proposed algorithm is tested on the 120 benchmark instances from the literature. Computational results indicate that the average relative percentage deviations of the improved algorithm from ACO_AP, DPSO, DDE and GVNS are −5.16%, −3.33%, −1.81% and −0.08%, respectively. Compared with the state-of-the-art and exact algorithms, the proposed algorithm obtains 78 optimal solutions out of 120 instances within a reasonable computational time. | An improved scatter search algorithm for the single machine total weighted tardiness scheduling problem with sequence-dependent setup times |
S1568494614006759 | As a special intuitionistic fuzzy set on the real number set, trapezoidal intuitionistic fuzzy numbers (TrIFNs) are better able to model ill-known quantities. The purpose of this paper is to develop some power geometric operators of TrIFNs and apply them to multi-attribute group decision making (MAGDM) with TrIFNs. First, the lower and upper weighted possibility means of TrIFNs are introduced, as well as weighted possibility means. On this basis, a new lexicographic method is developed to rank TrIFNs, and the Minkowski distance between TrIFNs is defined. Then, four kinds of power geometric operators of TrIFNs are investigated: the power geometric operator, the power weighted geometric operator, the power ordered weighted geometric operator, and the power hybrid geometric operator of TrIFNs. Their desirable properties are discussed. Four methods for MAGDM with TrIFNs are proposed for the four cases in which the weight vectors of attributes and decision makers (DMs) are known or unknown. In these methods, the individual overall attribute values of alternatives are generated by using the power geometric or power weighted geometric operator of TrIFNs. The collective overall attribute values of alternatives are determined by constructing a multi-objective optimization model, which is transformed into a goal programming model for solution. Thus, the ranking order of alternatives is obtained according to the collective overall attribute values of alternatives. Finally, a green supplier selection problem is used to demonstrate the application and validation of the proposed method. | Power geometric operators of trapezoidal intuitionistic fuzzy numbers and application to multi-attribute group decision making |
S1568494615000022 | Feature selection is a pre-processing step in data mining and machine learning, and is very important in analyzing high-dimensional data. Attribute clustering has been proposed for feature selection. If similar attributes can be clustered into groups, they can then be easily replaced by others in the same group when some attribute values are missing. Hong et al. proposed a genetic algorithm (GA) to find appropriate attribute clusters. However, in their approaches, multiple chromosomes represent the same attribute clustering result (feasible solution) due to the combinatorial property, and thus the search space is larger than necessary. This study improves the performance of the GA-based attribute clustering process based on the grouping genetic algorithm (GGA). In the proposed approach, the general GGA representation and operators are used to reduce redundancy in the chromosome representation for attribute clustering. Experiments are also conducted to compare the efficiency of the proposed approach with that of an existing approach. The results indicate that the proposed approach can derive attribute grouping results in an effective way. | Using group genetic algorithm to improve performance of attribute clustering |
S1568494615000034 | Personnel selection is a critical strategic problem in knowledge-intensive enterprises. Fuzzy numbers, which can be described as triangular or trapezoidal fuzzy numbers, are an adequate way to assess the evaluations and weights of alternatives. Accordingly, fuzzy TOPSIS, a classic fuzzy multiple criteria decision making (MCDM) method, has been applied to personnel selection problems. Currently, research on this topic either applies a crisp relative closeness, causing information loss, or employs a fuzzy relative closeness estimate, requiring complicated computation, to rank the alternatives. In this paper, based on the Karnik–Mendel (KM) algorithm, we propose an analytical solution to the fuzzy TOPSIS method. Some properties are discussed, and the computation procedure for the proposed analytical solution is given as well. Compared with existing TOPSIS methods for the personnel selection problem, it obtains the accurate fuzzy relative closeness instead of a crisp point or an approximate fuzzy relative closeness estimate. It can thus both avoid information loss and maintain computational efficiency to some extent. Moreover, the global picture of the fuzzy relative closeness provides a way to further investigate the inner properties of the fuzzy TOPSIS method. Detailed comparisons with the approximate fuzzy relative closeness method are provided in a personnel selection application. | An analytical solution to fuzzy TOPSIS and its application in personnel selection for knowledge-intensive enterprise |
S1568494615000046 | This paper introduces a Monte Carlo-based heuristic with seven local searches (LSs) carefully designed for distributed production network scheduling. A distributed production network consists of a number of individual factories that join together to form a network in which the factories can operate more economically than they would individually; however, each individual factory usually focuses on its own benefits and plans to optimize its own profit. Realistic features such as the heterogeneity of factories and the transportation among them are incorporated in our problem definition. In this problem, among the F existing factories in the network, F′ factories are interested in the total earliness cost and the remaining factories (F″ = F − F′) are interested in the total tardiness cost. Cost minimization is achieved through the minimization of earliness in the F′ factories, tardiness in the F″ factories, and the total cost of the operation time of all jobs. The algorithm is initialized with the best known non-cooperative local scheduling, and the LSs then search simultaneously through the same solution space, starting from the same current solution. Upon receiving the solutions from the LSs, a new solution is accepted based on the Monte Carlo acceptance criterion. This criterion always accepts an improved solution and, in order to escape local minima, accepts worse solutions with a certain probability, which decreases as the solutions deteriorate. After solving the mixed integer linear programming model with the CPLEX solver on the small-size instances, the results obtained by the heuristic are compared with two genetic algorithms on the large-size instances. The results of scheduling before cooperation in the production network are also considered in the experiments. | Minimizing cost-related objective in synchronous scheduling of parallel factories in the virtual production network |
S1568494615000058 | Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm motivated by the intelligent collective behavior of animals such as flocks of birds or schools of fish. The most important features of PSO are its easy implementation and few adjustable parameters. A novel PSO method called LHNPSO, with low-discrepancy-sequence-initialized particles, a high-order (1/π²) nonlinear time-varying inertia weight, and constant acceleration coefficients, is proposed in this paper. The initial population particles are generated using the Halton sequence to fill the search space efficiently. Nonlinear functions with orders varying over wide ranges are employed to adjust the inertia weight and the cognitive and social parameters. Based on a sensitivity analysis of PSO performance with respect to the orders of these nonlinear functions, a nonlinear function of order 1/π² is selected to adjust the time-varying inertia weight, and the two acceleration coefficients are set to constants. A set of well-known benchmark optimization problems is then used to investigate the performance of the proposed LHNPSO algorithm and facilitate comparison with three other types of PSO algorithms. The results show that the easily implemented LHNPSO converges faster and gives much more accurate final solutions for a variety of benchmark test functions. | Low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight |
S156849461500006X | Recent research has revealed that model-assisted parameter tuning can improve the quality of supervised machine learning (ML) models. Tuned models were especially found to generalize better and to be more robust compared to other optimization approaches. However, the advantages of tuning often come with high computation times, which are a real burden for employing tuning algorithms. While training with a reduced number of patterns can be a solution to this, it is often accompanied by decreased model accuracy and increased instability and noise. Hence, we propose a novel approach defined by a two-criteria optimization task, in which both the runtime and the quality of ML models are optimized. Because the budgets for this optimization task are usually very restricted in ML, the surrogate-assisted Efficient Global Optimization (EGO) algorithm is adapted. In order to cope with noisy experiments, we apply two hypervolume-indicator-based EGO algorithms with smoothing and re-interpolation of the surrogate models. These techniques do not need replicates. We find that these EGO techniques can outperform traditional approaches such as Latin hypercube sampling (LHS), as well as EGO variants with replicates. | Efficient multi-criteria optimization on noisy machine learning problems |
S1568494615000071 | This study presents a particle swarm optimization with an aging leader and challengers (ALC-PSO) for solving the optimal reactive power dispatch (ORPD) problem. The ORPD problem is formulated as a nonlinear constrained single-objective optimization problem in which the real power loss and the total voltage deviations are minimized separately. In order to evaluate the performance of the proposed algorithm, it has been implemented on the IEEE 30-, 57-, and 118-bus test power systems, and the optimal results obtained are compared with those of other evolutionary optimization techniques reported in the recent state-of-the-art literature. The results presented in this paper demonstrate the potential of the proposed approach and show its effectiveness and robustness for solving the ORPD problem of power systems. | Optimal reactive power dispatch by particle swarm optimization with an aging leader and challengers |
S1568494615000083 | Currently, emotion is considered a critical aspect of human behavior; it should therefore be embedded within the reasoning module of any intelligent system that aims to anticipate or respond to human reactions. Accordingly, current research in data mining shows an increasing interest in emotion assessment for improving human–machine interaction. By analyzing the electroencephalogram (EEG), which reflects autonomic nervous system responses, computers can assess user emotions and find correlations between significant EEG features extracted from the raw data and human emotional states. With the advent of modern signal processing techniques, the power of EEG-based emotion evaluation has increased dramatically due to the huge number of features that can be extracted from EEG signals. Although this expanded feature set could allow computers to evaluate emotions accurately, it is too complex a task to manage without structure; methods and approaches enabling both EEG information management and evaluation are therefore necessary to support emotion assessment. Starting from this consideration, this paper proposes an enhanced EEG-based emotion assessment system exploiting a collection of ontological models representing EEG feature sets and the arousal–valence space (a two-dimensional emotion scale), statistical tests capable of evaluating gender-specific correlations between EEG features and emotional states, and a classification methodology inferring arousal and valence levels. As shown in the experimental section, where the proposed approach has been tested on a public dataset, the results demonstrate that better performance in emotion assessment can be achieved using our framework compared with other studies using the same dataset and with three other classification techniques. | Electroencephalogram-based emotion assessment system using ontology and data mining techniques |
S1568494615000095 | Based on decision-theoretic rough set model of three-way decisions, we augment the existing model by introducing linguistic terms. Considering the two types of parameters being used in the three-way decisions with linguistic assessment, a certain type of novel three-way decisions based on the Bayesian decision procedure is constructed. In this way, three-way decisions with decision-theoretic rough sets are extended to the qualitative environment. With the aid of multi-attribute group decision making, the values of these parameters are determined. An adaptive algorithm supporting consistency improvement of multi-attribute group decision making is designed. Then, we optimize the scales of the linguistic terms with the use of particle swarm optimization. The values of these parameters of three-way decisions are aggregated when proceeding with group decision making. Finally, the proposed model of three-way decisions with linguistic assessment is applied to the selection process of new product ideas. | Three-way decisions based on decision-theoretic rough sets under linguistic assessment with the aid of group decision making |
S1568494615000101 | Dynamic lubrication analysis of a connecting rod is a very complex problem. Some factors have a great effect on lubrication, such as clearance, oil viscosity, oil supplying hole, bearing elastic modulus, surface roughness, oil supplying pressure, engine speed and bearing width. In this paper, ten indexes are used as input parameters, and the bearing performance is evaluated by minimum oil film thickness (MOFT), friction loss, maximum oil film pressure (MOFP) and average oil leakage (OLK). Two orthogonal experiments are combined to identify the factors dominating the bearing behavior. Stepwise regression is used to establish a regression model without insignificant variables, and the two most important variables are used as inputs to carry out response surface analysis for each model. Finally, a support vector machine (SVM) is used to identify asperity contact. Compared with the SVM model, particle swarm optimization-support vector machines (PSO–SVM) can predict asperity contact more precisely, especially for samples near the dividing line. In future work, more soft computing methods with statistical characteristics will be applied to the tribology analyses. Nomenclature: the velocity in the circumferential direction of the bearing (U1 = 0); the velocity in the circumferential direction of the crankshaft; the oil film thickness; the oil film pressure; the lubricant viscosity; the coordinate axis along the horizontal direction; the coordinate axis along the axial direction; the pressure flow factors; the pressure flow factors; the shear flow factor; the composite rms roughness; filling factor; the original radial clearance at the undeformed state; the crankshaft deformation in the x direction; the crankshaft deformation in the z direction; radial deformation of the bearing measured from the bearing crown; the composite elastic modulus, E′ = ((1 − ν1²)/E1 + (1 − ν2²)/E2)^(−1); the Poisson ratio and Young's modulus of the adjacent surfaces; the asperity density; the radius of curvature of convex peak; the dimensionless clearance parameter, H = h/σ; flow rate from the front-end plane of the bearing; flow rate from the rear end of the bearing; total end leakage flow rate of lubricant; asperity contact shear force; fluid shear force | Tribological performances of connecting rod and by using orthogonal experiment, regression method and response surface methodology |
S1568494615000113 | A novel adaptive fuzzy directional median filter is proposed in this paper, which considers the directional pixels (horizontal, vertical and two diagonal directions) to estimate an adaptive threshold and incorporates the remaining background pixels based on directional statistics for efficient noise detection. The proposed filter consists of two phases: an adaptive fuzzy noise detection phase followed by a fuzzy filtering phase. In the fuzzy noise detection phase, intensity differences from the central pixel in a 5×5 sliding window are calculated in four main directions, i.e., horizontal, vertical and the two diagonal directions. The average value and central pixel value of the 5×5 sliding window of the newly constructed intensity-difference image are exploited with a fuzzy membership function to adaptively estimate threshold parameters. These parameters are then merged with fuzzy rules to detect noise, especially in detailed regions of an image. In the filtering phase, a simple median filter and directional median filters are used selectively, based on edge and background information, to restore the pixels detected as noisy in the adaptive fuzzy noise detection phase. Experimental results based on well-known quantitative measures show the effectiveness of the proposed technique. Nomenclature: center weighted median filter; switching median filter; directional weighted median filter; adaptive median filter; multi-stage directional median filter; fuzzy reasoning based directional median filter design; grey level of noisy pixel; noise density; intensity differences; average directional differences; average directional deviation of δ_d; central pixel value of average directional deviation of δ_d; membership degree; membership degree of large; first threshold parameter; second threshold parameter; median; directional index; 5×5 sliding window | Adaptive threshold based fuzzy directional filter design using background information |
S1568494615000125 | In this article, we describe a novel polyphonic analysis that employs a hybrid of Tone-Model (TM) and Particle Swarm Optimization (PSO) techniques. This hybrid approach exploits the strengths of model-based and heuristic-search approaches. The correlations between each monophonic Tone-Model and the polyphonic input are used to predict relevant pitches such that the aggregations of the pitches’ Tone-Models are able to describe the harmonic contents of the polyphonic input. These aggregations are then refined using PSO. PSO heuristically searches for a local optimal aggregation in which some Tone-Models suggested earlier may be excluded from the final best aggregation. We present and discuss the design of our approach. The experimental results from the proposed hybrid approach are compared and contrasted with the non-negative matrix factorization (NMF) technique. A performance comparison between synthesized guitar sound and acoustic guitar sound is discussed. The experimental results confirm the potential of TM–PSO in polyphonic transcription task. | Investigating a hybrid of Tone-Model and Particle Swarm Optimization techniques in transcribing polyphonic guitar sound |
S1568494615000137 | Fractional-order proportional-integral-derivative (FOPID) controllers are designed for load-frequency control (LFC) of two interconnected power systems. Conflicting time-domain design objectives are considered in a multi-objective optimization (MOO)-based design framework to design the gains and the fractional differ-integral orders of the FOPID controllers in the two areas. Here, we explore the effect of augmenting two different chaotic maps along with the uniform random number generator (RNG) in the popular MOO algorithm, the Non-dominated Sorting Genetic Algorithm-II (NSGA-II). Different measures of quality for MOO, e.g. the hypervolume indicator, the moment of inertia-based diversity metric, total Pareto spread and the spacing metric, are adopted to select the best set of controller parameters from multiple runs of all the NSGA-II variants (i.e. nominal and chaotic versions). The chaotic versions of the NSGA-II algorithm are compared with the standard NSGA-II in terms of solution quality and computational time. In addition, the Pareto optimal fronts showing the trade-off between the two conflicting time-domain design objectives are compared to show the advantage of using the FOPID controller over the simple PID controller. The nature of fast/slow and high/low noise amplification effects of the FOPID structure, or the four-quadrant operation in the two interconnected areas of the power system, is also explored. A fuzzy logic-based method is then adopted to select the best compromise solution from the best Pareto fronts corresponding to each MOO comparison criterion. The time-domain system responses are shown for the fuzzy best compromise solutions under nominal operating conditions. A comparative analysis of the merits and demerits of each controller structure is then reported. A robustness analysis is also performed for the PID and FOPID controllers. | Fractional-order load-frequency control of interconnected power systems using chaotic multi-objective optimization |
S1568494615000149 | According to the regularity of continuous multi-objective optimization problems (MOPs), an immune multi-objective optimization algorithm with differential evolution inspired recombination (IMADE) is proposed in this paper. In the proposed IMADE, the novel recombination provides two types of candidate searching directions by taking three recombination parents distributed along the current Pareto set (PS) within a local area. One searching direction provides guidance for finding new points along the current PS, and the other redirects the search away from the current PS and moves towards the target PS. Whereas SBX (simulated binary crossover) recombination performs local search combined with random search near the recombination parents, the new recombination operator utilizes the regularity of continuous MOPs and the distribution of the current population, which helps IMADE maintain a more uniformly distributed PF and converge much faster. Experimental results have demonstrated that IMADE outperforms or performs similarly to NSGA-II, NNIA, PESA-II and OWMOSaDE in terms of solution quality on most of the ten test MOPs. IMADE converges faster than NSGA-II and OWMOSaDE. The efficiency of the proposed DE recombination and the contributions of DE and SBX recombination to IMADE have also been experimentally investigated in this work. | An immune multi-objective optimization algorithm with differential evolution inspired recombination |
S1568494615000150 | This paper proposes an artificial bee colony approach to minimize the makespan for a single batch-processing machine. The single batch-processing problem is characterized by discontinuity in the objective function and having integer variables. Since the problem under study is NP-hard, an artificial bee colony approach is proposed. The penalty function method is used to convert the constrained problem to unconstrained problem, which is then solved by the ABC algorithm. A procedure to generate initial solutions is presented, which is based on filling partially filled batches first. The analysis in the article shows that the colony size, the value of the penalty parameter, the penalty function iteration, the ABC iteration, and maximum trials per food source all have a significant effect on the performance of the ABC algorithm; however, no pattern can be established. | Constrained binary artificial bee colony to minimize the makespan for single machine batch processing with non-identical job sizes |
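The penalty-function conversion described in this abstract can be illustrated with a toy sketch. The batch encoding, variable names and linear penalty form below are our assumptions for illustration, not the paper's actual formulation: each batch is a list of (processing_time, size) jobs, a batch's processing time is the maximum job time, and capacity violations add a penalty term to the makespan:

```python
def penalized_makespan(batches, capacity, penalty):
    """Unconstrained objective for single-machine batch processing:
    makespan (sum of batch times, batches run sequentially) plus a
    linear penalty on any batch exceeding the machine capacity."""
    total = 0.0
    for batch in batches:
        total += max(t for t, _ in batch)          # batch processing time
        overload = sum(s for _, s in batch) - capacity
        if overload > 0:
            total += penalty * overload            # infeasibility penalty
    return total
```

With a sufficiently large penalty coefficient, minimizing this surrogate drives the search toward feasible batchings, which is the standard rationale for the penalty method the abstract applies.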
S1568494615000162 | Consumer-oriented companies are getting increasingly more sensitive about customers' perception of their products, not only to get feedback on their popularity, but also to improve quality and service through a better understanding of design issues for further development. However, a consumer's perception is often qualitative and is obtained through third-party surveys or the company's recording of after-sale feedback through explicit surveys or warranty-based commitments. In this paper, we consider an automobile company's warranty records for different vehicle models and suggest a data mining procedure to assign a customer satisfaction index (CSI) to each vehicle model based on the perceived level of satisfaction of customers. Based on the developed CSI function, customers are then divided into satisfied and dissatisfied customer groups. The warranty data are then clustered separately for each group and analyzed to find possible causes (field failures) and their relative effects on customers' satisfaction (or dissatisfaction) for a vehicle model. Finally, speculative introspection has been made to identify the amount of improvement in CSI that can be achieved by the reduction of some critical field failures through better design practices. Thus, this paper shows how warranty data from customers can be utilized to gain a better perception of the ranking of a product compared to its competitors in the market, and also to identify possible causes for making some customers dissatisfied and eventually to help percolate these issues to the design level. This closes the design cycle loop in which, after a design is converted into a product, its perceived level of satisfaction by customers can also provide valuable information to help make the design better in an iterative manner. The proposed methodology is generic and novel, and can be applied to other consumer products as well. | Development, analysis and applications of a quantitative methodology for assessing customer satisfaction using evolutionary optimization |
S1568494615000174 | In today's highly dynamic economy and society, performance assessment of decision making units (DMUs) is a problem of great significance. In order to make efficient decisions and beneficial improvements, decision makers and top management need to have a comprehensive view over capabilities and performance of DMUs which could be defined as organizational units. This study presents an efficient model for analyzing the outputs of performance measurement methodologies, by means of trust that provides explicit qualitative scales rather than pure numerical data. To the best of our knowledge, this is the first attempt for implementing the concept of trust in terms of performance measurement. In order to develop the structure of our proposed framework, fifteen scenarios are established based on number of DMUs and timeslots. These scenarios form the basis of the proposed structure. The efficiency rate of current, previous and upcoming years, as well as the average efficiency and standard deviation, are five inputs for this model. The approach incorporates time series forecasting to predict the future efficiency rate. Furthermore, auto correlation function (ACF) is used as the input selection in time series context. The model utilizes t-norms and t-conorms as the final modeling tools. To show the applicability and superiority of the proposed model, it is applied to a data set which is provided by running a simulation structured by a unique logic. The results provided by this model are more user-centric and have been represented in a potentially more transparent and intelligible way for decision makers. | A trust-based performance measurement modeling using t-norm and t-conorm operators |
S1568494615000186 | The grouping strategy exactly specifies the form of the covariance matrix; therefore it is essential. Most 2DPCA methods use the original 2D image matrices to form the covariance matrix, which actually means that the strategy is to group the random variables by row or column of the input image. Because of their grouping strategies these methods have two main drawbacks. Firstly, 2DPCA and some of its variants, such as A2DPCA, DiaPCA and MatPCA, preserve only the covariance information between the elements of these groups. This directly implies that 2DPCA and these variants eliminate some covariance information, while PCA preserves such information that can be useful for recognition. Secondly, all the existing methods suffer from relatively high intra-group correlation, since the random variables in a row, column or block are closely located and highly correlated. To overcome such drawbacks we propose a novel grouping strategy named the cross grouping strategy. The algorithm focuses on reducing the redundancy among the row and column vectors of the image matrix. While doing this, the algorithm completely preserves the covariance information of PCA between local geometric structures in the image matrix, which is only partially maintained in 2DPCA and its variants. Moreover, in the proposed study the intra-group correlation is weaker than in 2DPCA and its variants because the random variables spread over the whole face image. These properties make the proposed algorithm superior to 2DPCA and its variants. In order to achieve this, the image cross-covariance matrix is calculated from the summation of the outer products of the column and row vectors of all images. The singular value decomposition (SVD) is then applied to the image cross-covariance matrix. The right and left singular vectors of the SVD of the image cross-covariance matrix are used as the optimal projective vectors. Further, in order to reduce the dimension, LDA is applied on the feature space of the proposed method (i.e., proposed method+LDA). The exhaustive experimental results demonstrate that the proposed grouping strategy for 2DPCA is superior to 2DPCA, its specified variants and PCA, and that the proposed method outperforms bi-directional PCA+LDA. | Cross grouping strategy based 2DPCA method for face recognition |
S1568494615000198 | High-accuracy positioning is not only an essential issue for efficient running of high-speed train (HST), but also an important guarantee for the safe operation of high-speed train. Positioning error is zero when the train is passing through a balise. However, positioning error between adjacent balises is going up as the train is moving away from the previous balise. Although average speed method (ASM) is commonly used to compute the position of train in engineering, its positioning error is somewhat large by analyzing the field data. In this paper, we firstly establish a mathematical model for computing position of HST after analyzing wireless message from the train control system. Then, we propose three position computation models based on least square method (LSM), support vector machine (SVM) and least square support vector machine (LSSVM). Finally, the proposed models are trained and tested by the field data collected in Wuhan-Guangzhou high-speed railway. The results show that: (1) compared with ASM, the three models proposed are capable of reducing positioning error; (2) compared with ASM, the percentage error of LSM model is reduced by 50.2% in training and 53.9% in testing; (3) compared with LSM model, the percentage error of SVM model is further reduced by 38.8% in training and 14.3% in testing; (4) although LSSVM model performs almost the same with SVM model, LSSVM model has advantages over SVM model in terms of running time. We also put forward some online learning methods to update the parameters in the three models and better positioning accuracy is obtained. With the three position computation models we proposed, we can improve the positioning accuracy for HST and potentially reduce the number of balises to achieve the same positioning accuracy. | Position computation models for high-speed train based on support vector machine approach |
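The average speed method (ASM) mentioned in this abstract as the engineering baseline admits a one-line sketch (function and variable names are ours, not the paper's): the train's position between balises is extrapolated from the last balise using the mean of the speed recorded at that balise and the current speed:

```python
def asm_position(balise_pos, v_at_balise, v_now, dt):
    """Average speed method: estimated position = position of the last
    balise + mean speed since the balise * elapsed time dt.
    Positioning error is zero at each balise and grows in between,
    which is what the learned models (LSM/SVM/LSSVM) aim to reduce."""
    return balise_pos + 0.5 * (v_at_balise + v_now) * dt
```

For example, a train that passed a balise at the 1000 m mark at 80 m/s and now runs at 100 m/s after 10 s is placed at 1900 m.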
S1568494615000204 | Classification of Electroencephalogram (EEG) data for imagined motor movements has been a challenge in the design and development of Brain Computer Interfaces (BCIs). There are two principle challenges. The first is the variability in the recorded EEG data, which manifests across trials as well as across individuals. Consequently, features that are more discriminative need to be identified before any pattern recognition technique can be applied. The second challenge is in the pattern recognition domain. The number of data samples in a class of interest, e.g. a specific action, is a small fraction of the total data, which is composed of samples corresponding to all actions of all users. Building a robust classifier when learning from a highly unbalanced dataset is very difficult; minimizing the classification error typically causes the larger class to overwhelm the smaller one. We show that the combination of ‘classifiability’ for selecting the optimal frequency band and the use of the Twin Support Vector Machine (Twin SVM) for classification, yields significantly improved generalization. On benchmark BCI Competition datasets, the proposed approach often yields up to 20% improvement over the state-of-the-art. | High performance EEG signal classification using classifiability and the Twin SVM |
S1568494615000216 | This paper examines the problem of output feedback control of a Takagi–Sugeno (TS) fuzzy fishery system. The considered system is the continuous age-structured model of an exploited population that includes a nonlinear stock–recruitment relationship. The effort is used as the control term, the age classes as states and the quantity of captured fish per unit of effort as the measured output. In order to stabilize the stock states around the reference equilibrium, which biologically means the sustainability of the fish stock, an output feedback controller is adopted rather than a controller based on a state observer. An algorithm based on linear matrix inequalities is proposed to compute the static output feedback gain. Simulation results of the continuous fishery system confirm the effectiveness of the proposed design. | Static output-feedback controller design for a fish population system |
S1568494615000228 | In this paper, we propose a sensorless wind energy conversion system (WECS) maximum wind power point tracking using Takagi–Sugeno fuzzy cerebellar model articulation control (T-S CMAC). The main objective of the WECS is to achieve maximum power transfer under various wind speeds without actual measurement of the wind velocity. We first represent the WECS, which uses a permanent magnet synchronous generator (PMSG), as a nonlinear dynamical model. To carry out the T-S CMAC design, we rewrite the WECS model as a T-S fuzzy representation. The T-S CMAC design is inspired by the architectural similarity of the T-S fuzzy control and CMAC where accordingly the PDC design control gains and weighting parameter are augmented into a single vector. The advantages of this approach are 3-fold: (i) increases accuracy of CMAC initial weights – we assign the initial weights of CMAC using the control gains solved by the LMIs from the PDC design; (ii) introduces adaptive ability in LMI-based design – the CMAC design allows time-varying parameters in the system; and (iii) relaxes assumption on system uncertainty – we drop the assumption that a strict upper bound on system uncertainty is known. Numerical simulations under various wind speeds show exponential convergence results which further verify the theoretical derivations. | Sensorless wind energy conversion system maximum power point tracking using Takagi–Sugeno fuzzy cerebellar model articulation control |
S156849461500023X | In this paper, a hybrid gravitational search algorithm (GSA) and pattern search (PS) technique is proposed for load frequency control (LFC) of a multi-area power system. Initially, various conventional error criteria are considered; the PI controller parameters for a two-area power system are optimized employing GSA, and the effect of the objective function on system performance is analyzed. Then the GSA control parameters are tuned by carrying out multiple runs of the algorithm for each control parameter variation. After that, PS is employed to fine-tune the best solution provided by GSA. Further, modifications in the objective function and controller structure are introduced, and the controller parameters are optimized employing the proposed hybrid GSA and PS (hGSA-PS) approach. The superiority of the proposed approach is demonstrated by comparing the results with some recently published modern heuristic optimization techniques such as the firefly algorithm (FA), differential evolution (DE), bacteria foraging optimization algorithm (BFOA), particle swarm optimization (PSO), hybrid BFOA-PSO, NSGA-II and the genetic algorithm (GA) for the same interconnected power system. Additionally, sensitivity analysis is performed by varying the system parameters and operating load conditions from their nominal values. Also, the proposed approach is extended to a two-area reheat thermal power system by considering physical constraints such as the reheat turbine, generation rate constraint (GRC) and governor dead band (GDB) nonlinearity. Finally, to demonstrate the ability of the proposed algorithm to cope with nonlinear and unequal interconnected areas with different controller coefficients, the study is extended to a nonlinear three-unequal-area power system, and the controller parameters of each area are optimized using the proposed hGSA-PS technique. | A novel hybrid gravitational search and pattern search algorithm for load frequency control of nonlinear power system |
S1568494615000241 | This article presents an extended Parameterized Fuzzy Semi-supervised learning (PFSL) method, in which the key innovation is the capability of separating a sample set into two independent subsets: outlier sample subset and regular sample subset. In our proposed PFSL, we first develop an improved parameterized Fuzzy Linear Discriminant Analysis (F-LDA) algorithm to classify regular samples, in which the distribution information of each sample in terms of fuzzy membership degree is incorporated with the redefined within-class and between-class scatter matrices. To achieve good parameter estimation for this improved F-LDA, we advocate the use of Hopfield Neural Networks (HNN) due to its efficiency. Second, a new semi-supervised Fuzzy C-Means (S-FCM) algorithm is designed using pre-computed cluster number and cluster centers in the supervised pattern discovery stage. It is applied to classify the remaining outlier samples and generate the final classification result. Third, since Kernel Fisher Discriminant (KFD) is an efficient way to extract nonlinear discriminant features, a kernel version of the proposed PFSL (K-PFSL) is discussed. Extensive experiments on the ORL, NUST603, FERET and Yale face datasets show the effectiveness and the superiority of the proposed algorithm. | Extended semi-supervised fuzzy learning method for nonlinear outliers via pattern discovery |
S1568494615000253 | This paper presents the bacterial foraging optimization (BFO) algorithm and its adaptive version to optimize the planning of passive harmonic filters (PHFs). The key problem in using PHFs is determining their location, size and harmonic tuning orders so as to reach standard harmonic distortion levels at the minimum filter cost. In this study, to optimize the PHFs' locations, sizes and harmonic tuning orders in the distribution system, the considered objective function includes the reduction of power loss and the investment cost of the PHFs. At the same time, constraints include voltage limits, the number/size of installed PHFs, the candidate buses for PHF installation and the total harmonic distortion of voltage (THDv) at all buses. The harmonic levels of the system are obtained by the current injections method, and the load flow is solved by the iterative power summation method, which is suitable for the accuracy requirements of this type of study. It is shown that through economical placement and sizing of PHFs the total voltage harmonic distortion and active power loss can be minimized simultaneously. The considered objective function is highly non-convex and also has several constraints. Due to the significant reduction in computational time and the faster convergence of BFO in comparison with other intelligent optimization approaches such as the genetic algorithm (GA), particle swarm optimization (PSO) and artificial bee colony (ABC), the simple version of BFO has been implemented. Other versions of BFO, such as adaptive BFO and combinations of BFO with other methods, have not been considered in this research because of the complexity of the harmonic optimization problem. The simulation results for a small-scale test system with 10 buses showed the significant computational time reduction and faster convergence of BFO in comparison with GA, PSO and ABC. Therefore, in a large-scale radial system with 34 buses, the proposed method is solved using BFO. The simulation results for the 10-bus system as a small-scale case and the 34-bus radial system as a large-scale case show that the proposed method is efficient for solving the presented problem. | Bacterial foraging optimization and adaptive version for economically optimum sitting, sizing and harmonic tuning orders setting of LC harmonic passive power filters in radial distribution systems with linear and nonlinear loads |
S1568494615000265 | This study presents an effective hybrid algorithm based on harmony search (HHS) for solving multidimensional knapsack problems (MKPs). In the proposed HHS algorithm, a novel harmony improvisation mechanism is developed with the modified memory consideration rule and the global-best pitch adjustment scheme to enhance the global exploration. A parallel updating strategy is employed to enrich the harmony memory diversity. To well balance the exploration and the exploitation, the fruit fly optimization (FFO) scheme is integrated as a local search strategy. For solving MKPs, binary strings are used to represent solutions and two repair operators are applied to guarantee the feasibility of the solutions. The HHS is calibrated based on the Taguchi method of design-of-experiment. Extensive numerical investigations based on well-known benchmark instances are conducted. The comparative evaluations indicate the HHS is much more effective than the existing HS and FFO variants in solving MKPs. | An effective hybrid harmony search-based algorithm for solving multidimensional knapsack problems |
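Repair operators that guarantee feasibility of binary MKP solutions, as mentioned in this abstract, are commonly built as a DROP phase followed by an ADD phase ordered by a profit-to-weight pseudo-utility ratio. The sketch below follows that common pattern as an assumption; the paper's exact operators may differ:

```python
def repair(x, profits, weights, capacities):
    """DROP/ADD repair for the multidimensional knapsack problem.
    x: 0/1 selection list; weights[j][i]: weight of item i in constraint j."""
    n, m = len(x), len(capacities)

    def load(j):
        return sum(weights[j][i] * x[i] for i in range(n))

    def ratio(i):  # pseudo-utility: profit per unit of aggregate weight
        return profits[i] / sum(weights[j][i] for j in range(m))

    # DROP phase: remove worst-ratio items until every constraint holds
    for i in sorted(range(n), key=ratio):
        if all(load(j) <= capacities[j] for j in range(m)):
            break
        if x[i]:
            x[i] = 0
    # ADD phase: greedily re-insert best-ratio items that still fit
    for i in sorted(range(n), key=ratio, reverse=True):
        if not x[i] and all(load(j) + weights[j][i] <= capacities[j]
                            for j in range(m)):
            x[i] = 1
    return x
```

Applying repair to every improvised harmony keeps the whole search inside the feasible region, so the harmony memory never stores an infeasible string.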
S1568494615000277 | Elderly people who live alone are at great risk if a fall occurs. Thus, automatic fall detection systems are in demand. Some early automatic fall detection systems, such as wearable devices, have high cost and may cause inconvenience in the daily lives of elderly people. In this paper, an improved depth-based fall detection system is presented. Our approach uses shape-based fall characterization and a Support Vector Machine (SVM) classifier to distinguish falls from other daily actions. Shape-based fall characterization is carried out with Curvature Scale Space (CSS) features and Fisher Vector (FV) encoding. FV encoding is used because it has several advantages over the Bag-of-Words (BoW) model. The FV representation is robust and performs well even with simple linear classifiers. Extensive experiments on the SDUFall dataset, which contains five daily activities and intentional falls from 20 subjects, show that encoding CSS features with FV encoding and an SVM classifier can achieve up to 88.83% fall detection accuracy with a single depth camera. This classification rate is 2% more accurate than the compared approach. Moreover, an overall 64.67% accuracy is obtained for 6-class action recognition, which is about 10% more accurate than the compared approach. | Shape feature encoding via Fisher Vector for efficient fall detection in depth-videos |
S1568494615000289 | In wireless sensor networks a large amount of data is collected for each node. The challenge of transferring these data to a sink, because of energy constraints, requires suitable techniques such as data compression. Transform-based compression, e.g. Discrete Wavelet Transform (DWT), are very popular in this field. These methods behave well enough if there is a correlation in data. However, especially for environmental measurements, data may not be correlated. In this work, we propose two approaches based on F-transform, a recent fuzzy approximation technique. We evaluate our approaches with Discrete Wavelet Transform on publicly available real-world data sets. The comparative study shows the capabilities of our approaches, which allow a higher data compression rate with a lower distortion, even if data are not correlated. | Multisignal 1-D compression by F-transform for wireless sensor networks applications |
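The direct 1-D F-transform underlying this compression approach has a compact discrete form with a uniform triangular fuzzy partition (Perfilieva's construction). The sketch below illustrates the idea and is not the paper's exact variant; compression consists of transmitting only the n components instead of the full signal:

```python
def f_transform(data, n_components):
    """Direct 1-D F-transform with a uniform triangular partition.
    Each component is the weighted average of the signal under one
    triangular basis function centered at an equally spaced node.
    Requires n_components >= 2."""
    N = len(data)
    h = (N - 1) / (n_components - 1)              # node spacing
    nodes = [k * h for k in range(n_components)]
    comps = []
    for xk in nodes:
        num = den = 0.0
        for t in range(N):
            a = max(0.0, 1.0 - abs(t - xk) / h)   # triangular membership
            num += a * data[t]
            den += a
        comps.append(num / den)
    return comps
```

Because each component is a local weighted average, a constant signal is reproduced exactly, and smooth signals are approximated with distortion that shrinks as n_components grows.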
S1568494615000290 | The information extraction capability of two widely used signal processing tools, the Hilbert Transform (HT) and the Wavelet Transform (WT), is investigated to develop a multi-class fault diagnosis scheme for induction motors using radial vibration signals. The vibration signals are associated with unique predominant frequency components and instantaneous amplitudes depending on the motor condition. With a systematic and analytical approach, these fault frequencies can be identified. However, some faults, either electrical or mechanical in nature, are associated with the same or similar vibration frequencies, leading to erroneous conclusions. A Genetic Algorithm (GA) is proposed and used successfully to find the most relevant fault frequencies in the radial (vertical) frame vibration signal, which can be used to diagnose induction motor faults very effectively even in the presence of noise. The information obtained by the Continuous Wavelet Transform (CWT) was found to be highly redundant compared to the HT, and thus by selecting the most relevant features using the GA, the fault classification accuracy improved considerably, especially for the CWT. Almost identical fault frequencies were found using CWT+GA and HT+GA for the radial vibration signal. | Multi-class fault diagnosis of induction motor using Hilbert and Wavelet Transform
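GA-based selection of relevant frequency features, as in this record, can be sketched as a binary-chromosome genetic algorithm. The fitness function below is a hypothetical stand-in that rewards a known set of informative frequency bins and penalizes redundant ones; in the paper, the fitness would be driven by fault classification performance on vibration features.

```python
import random

# Hedged sketch of GA feature selection over frequency bins.
# INFORMATIVE and the fitness are illustrative stand-ins, not the paper's setup.

INFORMATIVE = {2, 7, 11, 19}   # hypothetical fault-related frequency bins
N_FEATURES = 24

def fitness(mask):
    hits = sum(1 for i in INFORMATIVE if mask[i])
    extras = sum(mask) - hits
    return hits - 0.1 * extras          # favour compact, relevant subsets

def select_features(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # single-bit mutation
                j = rng.randrange(N_FEATURES)
                child[j] = 1 - child[j]
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return {i for i, bit in enumerate(best) if bit}
```

With a real fitness, the subset returned by `select_features()` would be the frequency bins fed to the fault classifier.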
S1568494615000307 | This study introduces a new clustering approach which is not only energy-efficient but also distribution-independent for wireless sensor networks (WSNs). Clustering is used as a means of efficient data gathering in terms of energy consumption. In clustered networks, each node transmits acquired data to the cluster-head to which it belongs. After a cluster-head collects all the data from its member nodes, it transmits them to the base station (sink), either in a compressed or an uncompressed manner, via other cluster-heads in a multi-hop network environment. As a result, cluster-heads close to the sink tend to die earlier because of the heavy inter-cluster relay load; this is known as the hotspots problem. To solve this problem, some unequal clustering approaches have already been introduced in the literature. Unequal clustering techniques generate smaller clusters as they approach the sink in order to decrease the intra-cluster relay load. In addition to the hotspots problem, the energy hole problem may also occur because of changes in node deployment locations. Although a number of previous studies have focused on energy efficiency in clustering, to the best of our knowledge, none considers both problems in uniformly and non-uniformly distributed networks. Therefore, we propose a multi-objective solution for these problems: a multi-objective fuzzy clustering algorithm (MOFCA) that addresses both the hotspots and the energy hole problems in stationary and evolving networks. 
Performance analysis and evaluations are carried out against popular clustering algorithms, and the experimental results show that MOFCA outperforms the existing algorithms in the same setup in terms of the efficiency metrics First Node Dies (FND), Half of the Nodes Alive (HNA), and Total Remaining Energy (TRE), which are used for estimating the lifetime of WSNs and the efficiency of the protocols. | MOFCA: Multi-objective fuzzy clustering algorithm for wireless sensor networks
S1568494615000319 | Particle swarm optimization (PSO) is a bio-inspired optimization strategy founded on the movement of particles within swarms. PSO can be coded in a few lines in most programming languages, uses only elementary mathematical operations, and is inexpensive in terms of memory and running time. This paper discusses the application of PSO to rule discovery in fuzzy classifier systems (FCSs) instead of the classical genetic approach, and proposes a new strategy, Knowledge Acquisition with Rules as Particles (KARP). In the KARP approach, every rule is encoded as a particle that moves through the space in order to cooperate in obtaining high-quality rule bases, thereby improving the knowledge and performance of the FCS. The proposed swarm-based strategy is evaluated on a well-known problem of practical importance, where the integration of fuzzy systems is increasingly emerging due to the inherent uncertainty and dynamism of the environment: scheduling in grid distributed computational infrastructures. Simulation results are compared to those of classical genetic learning for fuzzy classifier systems, and the greater accuracy and convergence speed of classifier discovery systems using KARP are shown. | Rules discovery in fuzzy classifier systems with PSO for scheduling in grid computational infrastructures
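The basic PSO movement equations underlying approaches like KARP can be sketched in a few lines, matching the abstract's claim about PSO's simplicity. This is a generic PSO minimizing the sphere benchmark, not the rule-as-particle encoding itself, and all parameter values are common defaults rather than the paper's settings.

```python
import random

# Generic PSO sketch: inertia w, cognitive pull c1 toward the personal best,
# social pull c2 toward the global best, positions clamped to the bounds.

def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere, dim=5, bounds=(-5.0, 5.0))
```

In KARP, each particle would instead encode a fuzzy rule and `f` would score the rule's contribution to the rule base.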
S1568494615000423 | A new method for the evaluation of CT scan heart slices is proposed in this paper. CT images, acquired from different patients, are stored in a PACS database. In order to create a 3D/4D model of the heart, it is necessary to choose those CT images that have sufficient quality. The proposed method is based on the classification of the brightness histograms of CT scan heart slices. Some structural features of these histograms are correlated with image quality, which is evaluated in the context of creating an ultrasonography simulator on the basis of CT scan heart slices; these slices constitute computed tomography scan sets. The quality evaluation is based on fuzzy classification. A new methodology for constructing the membership functions from the structural features of the examined images is proposed, and an algorithmic approach to constructing the membership functions of classes given in advance is introduced. The experiments have shown that the proposed method is effective in selecting high-quality CT scan heart slices that can serve as the basis for the simulator construction, and that the proposed fuzzy selection is fully consistent with that done by an expert. | Automatized fuzzy evaluation of CT scan heart slices for creating 3D/4D heart model
S1568494615000435 | A conventional collaborative beamforming (CB) system suffers from high sidelobes due to the random positioning of the nodes. This paper introduces a hybrid metaheuristic optimization algorithm, the Particle Swarm Optimization and Gravitational Search Algorithm-Explore (PSOGSA-E), to suppress the peak sidelobe level (PSL) in CB by finding the best weight for each node. The proposed algorithm combines the local search ability of the gravitational search algorithm (GSA) with the social thinking skills of legacy particle swarm optimization (PSO), and allows exploration to avoid premature convergence. It also reduces the cost of parameter tuning compared to the legacy optimization algorithms. Simulations show that the proposed PSOGSA-E outperforms the conventional beamformer as well as the legacy PSO-, GSA- and PSOGSA-optimized collaborative beamformers, obtaining better results faster and producing up to 100% improvement in PSL reduction when the disk size is small. | PSOGSA-Explore: A new hybrid metaheuristic approach for beampattern optimization in collaborative beamforming
S1568494615000447 | Activity recognition aims to detect physical activities such as walking, sitting, and jogging performed by humans. With the widespread adoption and usage of mobile devices in daily life, several advanced applications of activity recognition have been implemented and distributed all over the world. In this study, we explored the power of the ensemble-of-classifiers approach for accelerometer-based activity recognition and built a novel activity prediction model based on machine learning classifiers. Our approach utilizes the J48 decision tree, Multi-Layer Perceptron (MLP) and Logistic Regression techniques and combines these classifiers with the average-of-probabilities combination rule. The publicly available activity recognition dataset known as WISDM (Wireless Sensor Data Mining), which includes information from thirty-six users, was used during the experiments. According to the experimental results, our model provides better performance than the MLP-based recognition approach suggested in a previous study. These results strongly suggest that researchers apply the ensemble-of-classifiers approach to the activity recognition problem. | On the use of ensemble of classifiers for accelerometer-based activity recognition
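The average-of-probabilities combination rule mentioned in this record is simple to sketch: each base classifier emits a class-probability vector, the vectors are averaged, and the argmax gives the ensemble decision. The three probability vectors below are made-up stand-ins for the J48, MLP and logistic regression outputs.

```python
# Sketch of the average-of-probabilities ensemble rule for activity labels.

ACTIVITIES = ["walking", "sitting", "jogging"]

def combine_average(prob_vectors):
    """Average per-class probabilities across classifiers, return (label, avg)."""
    n = len(prob_vectors)
    avg = [sum(p[i] for p in prob_vectors) / n for i in range(len(ACTIVITIES))]
    return ACTIVITIES[max(range(len(avg)), key=avg.__getitem__)], avg

preds = [
    [0.6, 0.3, 0.1],   # hypothetical J48 output for one window
    [0.5, 0.2, 0.3],   # hypothetical MLP output
    [0.4, 0.5, 0.1],   # hypothetical logistic regression output
]
label, avg = combine_average(preds)   # -> "walking"
```

Note that the second classifier's own argmax disagrees with the ensemble; averaging smooths over individual classifiers' errors, which is the motivation for the rule.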
S1568494615000459 | Cloud computing enables a conventional relational database system's hardware to be adjusted dynamically according to query workload, performance and deadline constraints. One can rent a large amount of resources for a short duration in order to run complex queries efficiently on large-scale data with virtual machine clusters. Complex queries usually contain common subexpressions, either in a single query or among multiple queries that are submitted as a batch. The common subexpressions scan the same relations, compute the same tasks (join, sort, etc.), and/or ship the same data among virtual computers. The total time spent on the queries can be reduced by executing these common tasks only once. In this study, we build and use efficient sets of query execution plans to reduce the total execution time. Since this is an NP-hard problem, a set of robust heuristic algorithms (Branch-and-Bound, Genetic, Hill Climbing, and hybrid Genetic-Hill Climbing) is proposed to find (near-)optimal query execution plans and maximize the benefits. The optimization time of each algorithm for identifying the query execution plans and the quality of these plans are analyzed through extensive experiments. | Robust heuristic algorithms for exploiting the common tasks of relational cloud database queries
S1568494615000460 | This paper proposes an efficient algorithm to extract the singular points which can be used to classify a given fingerprint. It makes use of a novel algorithm, a hybrid of orientation field, directional filtering and Poincare Index based algorithms, to detect singular points even when the fingerprint is of low quality or the singular point is occluded. Since the locations of detected singular points are not very accurate, they are further refined. Also, some delta points that lie near the border may be missed at detection time; efforts are made to retrieve these missed points. The proposed algorithm also determines the direction of a singular point along with its type (either core or delta). It uses the detected singular points to accurately classify arch, tented arch, left loop, right loop, double loop and whorl type fingerprint patterns, and can efficiently handle cases of missing delta points during fingerprint classification. The proposed algorithm has been tested on three publicly available databases, and the results reveal that it exhibits better singular point detection and fingerprint classification performance than other well-known algorithms. | A robust singular point detection algorithm
S1568494615000472 | In a manufacturing or service system, the actual processing time of a job can be controlled by the amount of an indivisible resource allocated, such as workers or auxiliary facilities. In this paper, we consider unrelated parallel-machine scheduling problems with discrete controllable processing times. The processing time of a job is discretely controllable by the allocation of indivisible resources. The planner must decide whether and how to allocate resources to jobs during the scheduling horizon to optimize the performance measures. The objective is to minimize the total cost, including the cost measured by a standard criterion and the total processing cost. We first consider three scheduling criteria: the total completion time, the total machine load, and the total earliness and tardiness penalties. If the number of machines and the number of possible processing times are fixed, we develop polynomial-time algorithms for the considered problems. We then consider the minimization of the makespan cost plus the total processing cost and present an integer programming method and a heuristic method to solve the studied problem. | Decision support for unrelated parallel machine scheduling with discrete controllable processing times
S1568494615000484 | Modern compilers present a great and ever-increasing number of options which can modify the features and behavior of a compiled program. Many of these options are often wasted because using them requires comprehensive knowledge of both the underlying architecture and the internal processes of the compiler. In this context, it is usual not to have a single design goal but a more complex set of objectives, and the dependencies between different goals are difficult to infer a priori. This paper proposes a strategy for tuning the compilation of any given application, accomplished by automatically varying the compilation options by means of multi-objective optimization and evolutionary computation commanded by the NSGA-II algorithm. This allows finding compilation options that simultaneously optimize different objectives. The advantages of our proposal are illustrated by means of a case study based on the well-known Apache web server. Our strategy has demonstrated an ability to find improvements of up to 7.5% in context switches and up to 27% in L2 cache misses, and also discovers the most important bottlenecks involved in the application performance. | Tuning compilations by multi-objective optimization: Application to Apache web server
S1568494615000502 | In this research, a data clustering algorithm named non-dominated sorting genetic algorithm-fuzzy membership chromosome (NSGA-FMC), based on the K-modes method and combining a fuzzy genetic algorithm with multi-objective optimization, was proposed to improve the clustering quality on categorical data. The proposed method uses fuzzy membership values as the chromosome. In addition, thanks to this innovative chromosome setting, a more efficient solution selection technique, which selects a solution from the non-dominated Pareto front based on the largest fuzzy membership, is integrated into the proposed algorithm. Multiple objective functions, the fuzzy compactness within a cluster (π) and the separation among clusters (sep), are used to optimize the clustering quality. A series of experiments using three UCI categorical datasets was conducted to compare the clustering results of the proposed NSGA-FMC with two existing methods: genetic algorithm fuzzy K-modes (GA-FKM) and multi-objective genetic algorithm-based fuzzy clustering of categorical attributes (MOGA (π, sep)). The adjusted Rand index (ARI), π, sep, and computation time were used as performance indexes for comparison. The experimental results showed that the proposed method can obtain better clustering quality in terms of ARI, π, and sep simultaneously, with shorter computation time. | Non-dominated sorting genetic algorithm using fuzzy membership chromosome for categorical data clustering
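The non-dominated (Pareto) front from which NSGA-style algorithms like the one above select a final solution can be extracted with a simple dominance check. The sketch below treats each candidate clustering as a pair of objective values (π, sep), both to be maximized; the numeric values are arbitrary examples, not results from the paper.

```python
# Sketch of Pareto-front extraction for two maximized objectives.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep every solution not dominated by any other."""
    front = []
    for i, s in enumerate(solutions):
        if not any(dominates(t, s) for j, t in enumerate(solutions) if j != i):
            front.append(s)
    return front

# hypothetical (pi, sep) scores of five candidate clusterings
candidates = [(0.9, 0.2), (0.7, 0.7), (0.4, 0.9), (0.6, 0.6), (0.3, 0.3)]
front = pareto_front(candidates)
```

Here (0.6, 0.6) and (0.3, 0.3) are dominated by (0.7, 0.7) and drop out; NSGA-FMC would then pick one front member using the largest fuzzy membership.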
S1568494615000514 | Steganography is the science of hiding a secret message in an appropriate digital medium in such a way that the existence of the embedded message is invisible to anyone apart from the sender or the intended recipient. This paper presents an irreversible scheme for hiding a secret image in a cover image that is able to improve both the visual quality and the security of the stego-image while still providing a large embedding capacity. This is achieved by a hybrid steganography scheme that incorporates a Noise Visibility Function (NVF) and an optimal chaotic based encryption scheme. In the embedding process, first, to reduce the image distortion and to increase the embedding capacity, the payload of each region of the cover image is determined dynamically according to the NVF. The NVF analyzes the local image properties to identify the complex areas where more secret bits should be embedded. This ensures a high visual quality of the stego-image as well as a large embedding capacity. Second, the security of the secret image is ensured by an optimal chaotic based encryption scheme that transforms the secret image into an encrypted image. Third, the optimal chaotic based encryption scheme is achieved by a hybrid optimization of Particle Swarm Optimization (PSO) and Genetic Algorithm (GA), which allows us to find an optimal secret key. The optimal secret key encrypts the secret image so that the rate of changes after the embedding process is decreased, which increases the quality of the stego-image. In the extracting process, the secret image can be extracted from the stego-image losslessly without referring to the original cover image. The experimental results confirm that the proposed scheme not only achieves a good trade-off between the payload and the stego-image quality, but can also resist statistical and image processing attacks. 
| An adaptive image steganographic scheme based on Noise Visibility Function and an optimal chaotic based encryption method |
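The chaotic encryption component of schemes like the one above is often built on a logistic map. The sketch below is a hedged illustration, not the paper's exact method: the secret key is the map's initial value x0 (the kind of quantity a PSO/GA hybrid could optimize), and each secret-image byte is XOR-ed with a chaotic keystream byte, so decryption is the same operation with the same key.

```python
# Sketch of logistic-map chaotic XOR encryption. The map parameter r = 3.99
# (deep in the chaotic regime) and the burn-in length are assumed values.

def logistic_keystream(x0, n, r=3.99, burn_in=100):
    """Generate n keystream bytes from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return stream

def chaotic_xor(data, x0):
    """Encrypt or decrypt: XOR each byte with the keystream (involution)."""
    ks = logistic_keystream(x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

secret = bytes(range(16))                 # stand-in for secret-image bytes
cipher = chaotic_xor(secret, x0=0.3141)
plain = chaotic_xor(cipher, x0=0.3141)    # same key recovers the plaintext
```

The logistic map's sensitivity to x0 means a slightly wrong key yields an unrelated keystream, which is what makes the key search space meaningful for the PSO/GA optimizer.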
S1568494615000526 | In order to enhance customer loyalty in a supply chain, many efforts can be made, such as upgrading the service facilities and decreasing delivery time. Although these efforts incur extra cost, they can reduce the shortage cost in the supply chain. In this paper we consider a five-tier supply chain and assume that the lead time of manufacturers and warehouses can be shortened at an extra crashing cost, which depends on the length of the lead time. We also consider different options with different prices for product transportation between facilities. We formulate a mixed integer non-linear model for a five-tier supply chain with controllable lead time and multiple transportation options, and develop a novel meta-heuristic method that combines Taguchi's method with an Artificial Immune System (AIS) to solve the proposed model. The performance of the proposed solution method has been examined on a set of numeric instances, and the obtained results are compared with those provided by AIS and a hybrid Taguchi-genetic algorithm (GATA). Results indicate that the proposed method provides better results than the previous solutions. | Flexible supply chain optimization with controllable lead time and shipping option
S1568494615000538 | Facial neuromuscular signals have recently drawn researchers' attention for their outstanding potential as an efficient medium for Muscle Computer Interface (MuCI) applications. The proper analysis of such electromyogram (EMG) signals is essential in designing the interfaces. In this article, a multiclass least-squares support vector machine (LS-SVM) is proposed for the classification of EMG signals of different facial gestures. EMG signals were captured through three bi-polar electrodes from ten participants while gesturing ten different facial states. The EMGs were filtered and segmented into non-overlapping windows, from which root mean square (RMS) features were extracted and then fed to the classifier. For classification, different LS-SVM models were constructed while tuning the kernel parameters automatically and manually. In the automatic mode, 48 models were formed while the parameters of linear and radial basis function (RBF) kernels were tuned using different optimization techniques, cost functions and encoding schemes. In the manual mode, 8 models were shaped by means of the considered kernel functions and encoding schemes. In order to find the best model with a reliable performance, the constructed models were evaluated and compared in terms of classification accuracy and computational cost. The results showed that the model with an RBF kernel, tuned manually and encoded by the one-versus-all scheme, provided the highest classification accuracy (93.10%) and consumed 0.98 s for training. The automatic models were outperformed, as they required too much time for tuning the parameters without any meaningful improvement in the final classification accuracy. The robustness of the selected LS-SVM model was evaluated through comparison with Support Vector Machine, fuzzy C-means and fuzzy Gath-Geva clustering techniques. | Facial neuromuscular signal classification by means of least square support vector machine for MuCI
S156849461500054X | Feature selection is often required as a preliminary step for many pattern recognition problems. However, most existing algorithms only work in a centralized fashion, i.e. using the whole dataset at once. In this research, a new method for distributing the feature selection process is proposed. It distributes the data by features, i.e. according to a vertical distribution, and then performs a merging procedure which updates the feature subset according to improvements in the classification accuracy. The effectiveness of our proposal is tested on microarray data, which poses a difficult challenge for researchers due to the high number of gene expressions and the small sample size. The results on eight microarray datasets show that the execution time is considerably shortened whereas the performance is maintained or even improved compared to the standard algorithms applied to the non-partitioned datasets. | Distributed feature selection: An application to microarray data classification
S1568494615000551 | A voice conversion (VC) approach, which morphs the voice of a source speaker so that it is perceived as spoken by a specified target speaker, can be intentionally used to deceive speaker identification (SID) and speaker verification (SV) systems that use speech biometrics. Voice conversion spoofing attacks that imitate a particular speaker pose a potential threat to these kinds of systems. In this paper, we first present an experimental study to evaluate the robustness of such systems against voice conversion disguise. We use Gaussian mixture model (GMM) based SID systems, GMM with universal background model (GMM-UBM) based SV systems, and GMM supervector with support vector machine (GMM-SVM) based SV systems for this. Voice conversion is conducted using three different techniques: the GMM based VC technique, the weighted frequency warping (WFW) based conversion method, and its variation in which energy correction is disabled (WFW−). Evaluation is done using intra-gender and cross-gender voice conversions between fifty male and fifty female speakers taken from the TIMIT database. The result is indicated by the degradation in the percentage of correct identification (POC) score for SID systems and the degradation in equal error rate (EER) for all SV systems. Experimental results show that the GMM-SVM SV systems are more resilient against voice conversion spoofing attacks than the GMM-UBM SV systems, and that all SID and SV systems are most vulnerable to GMM based conversion, compared with WFW and WFW− based conversion. The results also suggest that, in general terms, all SID and SV systems are slightly more robust to voices converted through cross-gender conversion than intra-gender conversion. This work extends the study to the relationship between the VC objective score and SV system performance on the CMU ARCTIC database, which is a parallel corpus. 
The results of this experiment suggest an approach to quantifying an objective score of voice conversion that can be related to the ability to spoof an SV system. | On robustness of speech based biometric systems against voice conversion attack
S1568494615000563 | The aim of this work is to develop an unsupervised approach based on a Probabilistic Neural Network (PNN) for land use classification. A time series of high-spatial-resolution LANDSAT and SPOT images has been used, first to generate profiles of the Normalized Difference Vegetation Index (NDVI), which are then used in the classification procedure. The proposed method implements a cluster validity technique in the PNN, using Ward's method to obtain the clusters. This procedure is completely automatic, with no parameter adjustment and instantaneous training, produces good cluster-number estimates, and provides a new point of view on using the PNN as an unsupervised classifier. The obtained results showed that this approach gives an accurate classification, with an error of about 3.44% in a comparison with the real land use, and performs better than the usual unsupervised classification methods (fuzzy c-means (FCM) and K-means). | Using an unsupervised approach of Probabilistic Neural Network (PNN) for land use classification from multitemporal satellite images
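A Probabilistic Neural Network in its classic Parzen-window form, as used in the record above, keeps one Gaussian kernel per training pattern and scores each class by the average kernel response, which is why training is essentially instantaneous. In the sketch below the tiny 2-D "NDVI-profile" points are illustrative only, and the smoothing parameter sigma is an assumed value.

```python
import math

# Minimal Parzen-window PNN: class score = mean Gaussian kernel response
# over that class's stored training patterns; predict the max-score class.

def pnn_classify(x, train, sigma=0.5):
    """train: {class_label: [pattern, ...]}; returns the best-scoring label."""
    best_label, best_score = None, -1.0
    for label, patterns in train.items():
        score = 0.0
        for p in patterns:
            d2 = sum((a - b) ** 2 for a, b in zip(x, p))
            score += math.exp(-d2 / (2.0 * sigma ** 2))
        score /= len(patterns)            # average response for this class
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# hypothetical 2-D NDVI-profile summaries for two land-use classes
train = {
    "crop":  [(0.8, 0.6), (0.7, 0.7), (0.9, 0.5)],
    "urban": [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25)],
}
```

The unsupervised variant in the paper would obtain the class groupings from clustering (Ward's method) instead of labels, then classify new profiles the same way.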
S1568494615000575 | Data mining and visualization techniques for high-dimensional data provide helpful information that substantially augments decision-making, and optimization techniques provide a way to efficiently search for good solutions. ACO applied to a data mining task, decision tree construction, is one of these methods and the focus of this paper. The Ant Colony Decision Tree (ACDT) approach generates solutions efficiently and effectively but scales poorly to large problems. This article merges the methods that have been developed for better construction of decision trees by ants. The ACDT approach is tested in the context of a bi-criteria evaluation function, focusing on two problems: the size of the decision trees and the classification accuracy obtained by ACDT. The approach is tested with a co-learning mechanism, meaning that agent-ants can interact during decision tree construction via pheromone values; this cooperation offers a chance of obtaining better results. The proposed methodology is evaluated on a number of well-known benchmark data sets from the UCI Machine Learning Repository. The empirical results clearly show that the ACDT algorithm creates good solutions located on the Pareto front. The software that implements the ACDT algorithm used to generate the results of this study can be downloaded freely from http://www.acdtalgorithm.com. | Enhancing the effectiveness of Ant Colony Decision Tree algorithms by co-learning
S1568494615000587 | This paper describes the determination of optimum parameter values for a Simplified Fuzzy ARTMAP neural network for monitoring dry-cured ham processing with different salt formulations, to be implemented in a microcontroller device. The employed network must fit within the limited microcontroller memory but, at the same time, should achieve optimal performance in classifying the samples obtained from this application. Hams salted with different salt formulations (100% NaCl; 50% NaCl+50% KCl; and 55% NaCl+25% KCl+15% CaCl2+5% MgCl2) were checked at four processing times, from post-salting to the end of their processing (2, 4, 8 and 12 months). Measurements were taken with a potentiometric electronic tongue system formed by metal electrodes of different materials that worked as nonspecific sensors. This study aimed to discriminate ham samples according to two parameters: processing time and salt formulation. The results were analyzed with an artificial neural network of the Simplified Fuzzy ARTMAP (SFAM) type. During the training and validation process of the neural network, optimum values of its control parameters were determined for easy implementation in a microcontroller while simultaneously achieving maximum sample discrimination. The test process was run on a PIC18F450 microcontroller, where the SFAM algorithm was implemented with the optimal parameters. A data analysis with the optimized neural network was achieved, and samples were perfectly discriminated according to processing time (100%). It is more difficult to discriminate all samples according to salt formulation type, but salt type discrimination is easily achieved within each processing time block. Thus, we conclude that the processing time effect dominates the salt formulation effects. | Artificial neural networks (Fuzzy ARTMAP) analysis of the data obtained with an electronic tongue applied to a ham-curing process with different salt formulations
S1568494615000599 | Various intrusion detection systems (IDSs) have been proposed in recent years to provide safe and reliable services in cloud computing. However, few of them have considered the existence of service attackers who can adapt their attacking strategies to the topology-varying environment and service providers’ strategies. In this paper, we investigate the security and dependability mechanism when service providers are facing service attacks of software and hardware, and propose a stochastic evolutionary coalition game (SECG) framework for secure and reliable defenses in virtual sensor services. At each stage of the game, service providers observe the resource availability, the quality of service (QoS), and the attackers’ strategies from cloud monitoring systems (CMSs) and IDSs. According to these observations, they will decide how evolutionary coalitions should be dynamically formed for reliable virtual-sensor-service composites to deliver data and how to adaptively defend in the face of uncertain attack strategies. Using the evolutionary coalition game, virtual-sensor-service nodes can form a reliable service composite by a reliability update function. With the Markov chain constructed, virtual-sensor-service nodes can gradually learn the optimal strategy and evolutionary coalition structure through the minimax-Q learning, which maximizes the expected sum of discounted payoffs defined as QoS for virtual-sensor-service composites. The proposed SECG strategy in the virtual-sensor-service attack-defense game is shown to achieve much better performance than strategies obtained from the evolutionary coalition game or stochastic game, which only maximizes each stage's payoff and optimizes a defense strategy of stochastic evolutionary, since it successfully accommodates the environment dynamics and the strategic behavior of the service attackers. | A stochastic evolutionary coalition game model of secure and dependable virtual service in Sensor-Cloud |
S1568494615000605 | This article describes a multiobjective spatial fuzzy clustering algorithm for image segmentation. To obtain satisfactory segmentation performance for noisy images, the proposed method introduces the non-local spatial information derived from the image into fitness functions which respectively consider the global fuzzy compactness and fuzzy separation among the clusters. After producing the set of non-dominated solutions, the final clustering solution is chosen by a cluster validity index utilizing the non-local spatial information. Moreover, to automatically evolve the number of clusters in the proposed method, a real-coded variable string length technique is used to encode the cluster centers in the chromosomes. The proposed method is applied to synthetic and real images contaminated by noise and compared with k-means, fuzzy c-means, two fuzzy c-means clustering algorithms with spatial information and a multiobjective variable string length genetic fuzzy clustering algorithm. The experimental results show that the proposed method behaves well in evolving the number of clusters and obtaining satisfactory performance on noisy image segmentation. | A multiobjective spatial fuzzy clustering algorithm for image segmentation |
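The fuzzy clustering iteration at the core of methods like the one above alternates a membership update and a centre update. The sketch below shows standard fuzzy c-means on 1-D data, without the paper's non-local spatial term or the multiobjective machinery; the fuzzifier m = 2 and the example data are common illustrative choices.

```python
# Sketch of standard fuzzy c-means (FCM) on 1-D data:
#   membership: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
#   centre:     c_k  = sum_i u_ik^m * x_i / sum_i u_ik^m

def fcm(data, n_clusters=2, m=2.0, iters=50):
    lo, hi = min(data), max(data)
    # spread the initial centres over the data range
    centres = [lo + (hi - lo) * (k + 1) / (n_clusters + 1)
               for k in range(n_clusters)]
    u = [[0.0] * n_clusters for _ in data]
    for _ in range(iters):
        for i, x in enumerate(data):
            dists = [abs(x - c) or 1e-12 for c in centres]   # avoid div by 0
            for k in range(n_clusters):
                u[i][k] = 1.0 / sum((dists[k] / d) ** (2.0 / (m - 1.0))
                                    for d in dists)
        for k in range(n_clusters):
            w = [u[i][k] ** m for i in range(len(data))]
            centres[k] = sum(wi * x for wi, x in zip(w, data)) / sum(w)
    return centres, u

data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
centres, u = fcm(data)
```

For image segmentation, each x would be a pixel intensity (plus, in the paper, a non-local spatial term added to the distance), and the membership matrix u gives the soft segmentation.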
S1568494615000617 | Unlike traditional Multiple Kernel Learning (MKL) with implicit kernels, Multiple Empirical Kernel Learning (MEKL) explicitly maps the original data space into multiple feature spaces via different empirical kernels. MEKL has been demonstrated to bring good classification performance and to make it much easier to process and analyze the adaptability of kernels for the input space. In this paper, we incorporate dynamic pairwise constraints into MEKL to propose a novel Multiple Empirical Kernel Learning with dynamic Pairwise Constraints method (MEKLPC). A pairwise constraint provides the relationship between two samples, i.e. whether these samples belong to the same class or not. In the present work, we extend the original pairwise constraints and design dynamic pairwise constraints which pay more attention to the boundary samples, and thus make the decision hyperplane more reasonable and accurate. The proposed MEKLPC therefore not only inherits the advantages of MEKL, but also incorporates several kinds of prior information. Firstly, MEKLPC exploits the side-information and boosts the classification performance significantly in each feature space; here, the side-information consists of the dynamic pairwise constraints constructed from the samples near the decision boundary, i.e. the boundary samples. Secondly, in each mapped feature space, MEKLPC still measures the empirical risk and the generalization risk. Lastly, the different feature spaces mapped by the multiple empirical kernels are encouraged to agree on their outputs for the same input sample as much as possible. To the best of our knowledge, this is the first time dynamic pairwise constraints have been introduced into the MEKL framework. The experiments on a number of real-world data sets demonstrate the feasibility and effectiveness of MEKLPC. | Multiple Empirical Kernel Learning with dynamic pairwise constraints
S1568494615000629 | This article is concerned with event-triggered fuzzy control design for a class of discrete-time nonlinear networked control systems (NCSs) with time-varying communication delays. Firstly, a more general mixed event-triggering scheme (ETS) is proposed. Secondly, considering the effects of the ETS and communication delays, and based on the T-S fuzzy model scheme and the time-delay system approach, the original nonlinear NCS is reformulated as a new event-triggered networked T-S fuzzy system with interval time-varying delays. Sufficient conditions for uniformly ultimately bounded (UUB) stability are established in terms of linear matrix inequalities (LMIs). In particular, the quantitative relation between the bound of the stability region and the triggering parameters is studied in detail. Thirdly, a relative ETS is also provided, which can be seen as a special case of the proposed mixed ETS. In contrast to the preceding results, sufficient conditions on the existence of the desired fuzzy controller are derived to ensure the asymptotic stability of the closed-loop system with reduced communication frequency between sensors and controllers. Moreover, a co-design algorithm for simultaneously determining the gain matrices of the fuzzy controller and the triggering parameters is developed. Finally, two illustrative examples are presented to demonstrate the advantage of the proposed ETS and the effectiveness of the controller design method. | Event-triggered controller design of nonlinear discrete-time networked control systems in T-S fuzzy model
S1568494615000630 | The simultaneous generation of steam and power, commonly referred to as cogeneration, has been adopted by many sugar mills in India to overcome the power shortage. It has become an increasingly important source of income for sugar factories. The problems faced by the sugar mill industry arise mainly from failures of either the complete system or specific components during the cogeneration process. This paper presents a failure analysis of the boiler during the cogeneration process and provides solutions to overcome these failures. The failures frequently occur in the screw conveyor and the drum feeder of the fuel feeding system and in the grate of the boiler. In this research work, the statistical tools Failure Mode and Effect Analysis (FMEA) and the Taguchi method have been applied to investigate and alleviate these failures. Since conventional FMEA has some limitations and the Taguchi method does not give a better solution, fuzzy FMEA has been employed to overcome these limitations, and a genetic algorithm technique has been applied to obtain a failure-free system during the cogeneration process. | Optimization of process parameters through fuzzy logic and genetic algorithm – A case study in a process industry
S1568494615000642 | Feature selection is an important pre-processing step for solving classification problems. This problem is often solved by applying evolutionary algorithms in order to reduce the number of feature dimensions involved. In this paper, we propose a novel hybrid system to improve classification accuracy with an appropriate feature subset in binary problems, based on an improved gravitational search algorithm. This algorithm exploits the ergodicity of the piecewise linear chaotic map for global exploration and utilizes sequential quadratic programming to accelerate the local search. We evaluate the proposed hybrid system on several UCI machine learning benchmark examples, comparing our approach with existing feature selection techniques, and obtain better predictions with consistently fewer relevant features. Furthermore, the improved gravitational search algorithm is tested on 23 nonlinear benchmark functions and compared with 5 other heuristic algorithms. The obtained results confirm the high performance of the improved gravitational search algorithm in solving function optimization problems. | A novel hybrid system for feature selection based on an improved gravitational search algorithm and k-NN method
S1568494615000666 | In this paper, artificial neural networks (ANNs), a genetic algorithm (GA), simulated annealing (SA) and Quasi-Newton line search techniques have been combined to develop three integrated soft-computing-based models, ANN–GA, ANN–SA and ANN–Quasi-Newton, for prediction modelling and optimisation of welding strength for hybrid CO2 laser–MIG welded joints of aluminium alloy. The experimental dataset employed for this purpose was generated through a full factorial experimental design. Laser power, welding speed and wire feed rate are considered as controllable input parameters. These soft computing models employ a trained ANN for calculation of the objective function value and thereby eliminate the need for a closed-form objective function. Among the 11 tested networks, the ANN with the best prediction performance produces a maximum percentage error of only 3.21%. During optimisation, ANN–GA shows the best performance, with an absolute percentage error of only 0.09% during experimental validation. The low percentage error indicates the efficacy of the models. Welding speed is found to be the most influential factor for welding strength. | Application of integrated soft computing techniques for optimisation of hybrid CO2 laser–MIG welding process
S1568494615000678 | Makespan-minimized multi-agent path planning (MAPP) requires minimizing the time taken by the slowest agent to reach its destination. The resulting minimax objective function is non-smooth, and the search for an optimal solution in MAPP can be intractable. In this work, a maximum entropy function is adopted to approximate the minimax objective function. An iterative algorithm named probabilistic iterative makespan minimization (PIMM) is then proposed to approximate a makespan-minimized MAPP solution by solving a sequence of computationally hard MAPP minimization problems with a linear objective function. At each iteration, a novel local search algorithm called probabilistic iterative path coordination (PIPC) is used to find a sufficiently good solution for each MAPP minimization problem. Experimental results from comparative studies with existing MAPP algorithms show that the proposed algorithm strikes a good tradeoff between the quality of the makespan-minimized solution and the computational cost incurred. | A stochastic algorithm for makespan minimized multi-agent path planning in discrete space
S156849461500068X | Hill-climbing constitutes one of the simplest ways to produce approximate solutions to a combinatorial optimization problem, and is a central component of most advanced metaheuristics. This paper focuses on evaluating climbing techniques in a context where deteriorating moves are not allowed, in order to isolate the intensification aspect of metaheuristics. We aim at providing guidelines for choosing the most adequate method for efficiently climbing fitness landscapes with respect to their size and some ruggedness and neutrality measures. To achieve this, we compare best- and first-improvement strategies, as well as different neutral move policies, on a large set of combinatorial fitness landscapes derived from academic optimization problems, including NK landscapes. The conclusions highlight that first-improvement is globally more efficient for exploring most landscapes, while best-improvement is superior only on smooth landscapes and on some particular structured landscapes. The empirical analysis of neutral move policies shows that a stochastic hill-climbing reaches better configurations on average and requires fewer evaluations than other climbing techniques. Results indicate that accepting neutral moves at each step of the search should be useful on all landscapes, especially those having a significant rate of neutrality. Lastly, we point out that adequately reducing the precision of a fitness function makes the climbing more efficient and helps to solve combinatorial optimization problems. | Climbing combinatorial fitness landscapes
S1568494615000708 | This paper presents a segmentation method, judiciously integrating the merits of rough-fuzzy computing and multiresolution image analysis, for documents having both text and graphics regions. It assumes that the text and non-text (graphics) regions of a given document are considered to have different textural properties. M-band wavelet packet analysis and rough-fuzzy-possibilistic c-means are used for the text-graphics segmentation problem. The M-band wavelet packet is used to extract the scale-space features; it offers a huge range of possible scale-space features for a document image and is able to zoom onto narrow-band high-frequency components. A scale-space feature vector is thus derived, taken at different scales for each pixel in an image. However, the decomposition scheme employing the M-band wavelet packet leads to a large number of redundant features. In this regard, an unsupervised feature selection method is introduced to select a set of relevant and non-redundant features for the text-graphics segmentation problem. Finally, the rough-fuzzy-possibilistic c-means algorithm is used to address the uncertainty problem of document segmentation. The whole approach is invariant under font size, line orientation, and script of the text. The performance of the proposed technique, along with a comparison with related approaches, is demonstrated on a set of real-life document images. | Rough-fuzzy clustering and multiresolution image analysis for text-graphics segmentation
S156849461500071X | This paper presents a modified version of the water cycle algorithm (WCA). The fundamental concepts and ideas underlying the WCA are inspired by the observation of the water cycle process and how rivers and streams flow to the sea. A new concept of an evaporation rate for different rivers and streams is defined, yielding the so-called evaporation-rate-based WCA (ER-WCA), which offers an improved search. Furthermore, based on the new approach, the evaporation condition is also applied to streams that flow directly to the sea. The ER-WCA shows a better balance between the exploration and exploitation phases compared to the standard WCA. It is shown that the ER-WCA offers high potential in finding all global optima of multimodal benchmark functions. The WCA and ER-WCA are tested using several multimodal benchmark functions, and the obtained optimization results show that in most cases the ER-WCA converges to the global solution faster and offers more accurate results than the WCA and the other optimizers considered. Based on the performance of the ER-WCA on a number of well-known benchmark functions, the efficiency of the proposed method with respect to the number of function evaluations (computational effort) and the accuracy of the function value is reported. | Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems
S1568494615000733 | In this paper, a new multi-objective approach for the routing problem in Wireless Multimedia Sensor Networks (WMSNs) is proposed. It takes into account Quality of Service (QoS) requirements such as delay and the Expected Transmission Count (ETX). Classical approximations optimize a single objective or QoS parameter, not taking into account the conflicting nature of these parameters which leads to sub-optimal solutions. The case studies applying the proposed approach show clear improvements on the QoS routing solutions. For example, in terms of delay, the approximate mean improvement ratios obtained for scenarios 1 and 2 were of 15 and 28 times, respectively. | A multi-objective routing algorithm for Wireless Multimedia Sensor Networks |
S1568494615000757 | With respect to multi-criteria group decision making (MCGDM) problems under trapezoidal intuitionistic fuzzy environment, a new MCGDM method is investigated. The proposed method can effectively avoid the failure caused by the use of inconsistent decision information and provides a decision-making idea for the case of “the truth be held in minority”. It consists of three interrelated modules: weight determining mechanism, group consistency analysis, and ranking and selection procedure. For the first module, distance measures, expected values and arithmetic averaging operator for trapezoidal intuitionistic fuzzy numbers are used to determine the weight values of criteria and decision makers. For the second module, a consistency analysis and correction procedure based on trapezoidal intuitionistic fuzzy weighted averaging operator and OWA operator is developed to reduce the influence of conflicting opinions prior to the ranking process. For the third module, a trapezoidal intuitionistic fuzzy TOPSIS is used for ranking and selection. Then a procedure for the proposed MCGDM method is developed. Finally, a numerical example further illustrates the practicality and efficiency of the proposed method. | Multi-criteria group decision making based on trapezoidal intuitionistic fuzzy information |
S1568494615000769 | Fuzzy utility mining has been an emerging research issue because of its simplicity and comprehensibility. Different from traditional fuzzy data mining, fuzzy utility mining considers not only quantities of items in transactions but also their profits for deriving high fuzzy utility itemsets. In this paper, we introduce a new fuzzy utility measure with the fuzzy minimum operator to evaluate the fuzzy utilities of itemsets. Besides, an effective fuzzy utility upper-bound model based on the proposed measure is designed to provide the downward-closure property in fuzzy sets, thus reducing the search space of finding high fuzzy utility itemsets. A two-phase fuzzy utility mining algorithm, named TPFU, is also proposed and described for solving the problem of fuzzy utility mining. At last, the experimental results on both synthetic and real datasets show that the proposed algorithm has good performance. | Fuzzy utility mining with upper-bound measure |
S1568494615000770 | Radio frequency identification (RFID) is an emerging non-contact technique where readers read data from or write data to tags by using radio frequency signals. When multiple readers transmit and/or receive signals simultaneously in a dense RFID system, some reader collision problems occur. Typically, in a modern warehouse management system, the warehouse space is partitioned into blocks for storing different goods items on which RFID tags are affixed. The goods items with the equal size are placed in the same block. Because the sizes of goods items are possibly different among blocks, the density values of tags that are affixed on the goods items are different from each other. In this case, tags in each block are distributed randomly and uniformly while tags in the whole warehouse space (i.e., all blocks are considered as a whole) follow a non-uniformly random distribution. For the sake of academic research, this situation is defined as a multiple-density tag distribution. From the viewpoint of resource scheduling, this article establishes an RFID reader-to-reader collision avoidance model with multiple-density tag distribution (R2RCAM-MTD), where the number of queryable tags is used as the evaluation index. Correspondingly, an improved artificial immune network (AINet-MTD) is used as an optimization method to solve R2RCAM-MTD. In the simulation experiments, four cases with different blocks in a warehouse management system are considered as testbeds to evaluate the effectiveness of R2RCAM-MTD and the computational accuracy of AINet-MTD. The effects of time slots and frequency channels are investigated, and some comparative results are obtained from the proposed AINet-MTD algorithm and the other existing algorithms. Further, the identified tags and the operating readers are graphically illustrated. The simulation results indicate that R2RCAM-MTD is effective for reader-to-reader collision problems, and the proposed AINet-MTD algorithm is more efficient in searching the global optimal solution of R2RCAM-MTD than the existing algorithms such as genetic algorithm (RA-GA), particle swarm optimization (PSO), artificial immune network for optimization (opt-aiNet) and artificial immune system for resource allocation (RA-AIS). | RFID reader-to-reader collision avoidance model with multiple-density tag distribution solved by artificial immune network optimization
S1568494615000782 | This paper presents an adaptive fuzzy control scheme for a class of uncertain multi-input multi-output (MIMO) nonlinear systems with the nonsymmetric control gain matrix and the unknown dead-zone inputs. In this scheme, fuzzy systems are used to approximate the unknown nonlinear functions and the estimated symmetric gain matrix is decomposed into a product of one diagonal matrix and two orthogonal matrices. Based on the decomposition results, a controller is developed, therefore, the possible controller singularity problem and the parameter initialization condition constraints problem are avoided. In addition, a dynamic robust controller is employed to compensate for the lumped errors. It is proved that all the signals in the proposed closed-loop system are bounded and that the tracking errors converge asymptotically to zero. A simulation example is used to demonstrate the effectiveness of the proposed scheme. | Adaptive fuzzy control for multi-input multi-output nonlinear systems with unknown dead-zone inputs |
S1568494615000794 | This paper presents the extraction of effective color and shape features for the analysis of dermatology images. We employ three phases of operation in order to perform efficient retrieval of images of skin lesions. Our proposed algorithm uses color and shape feature vectors, and the features are normalized using Min–Max normalization. A particle swarm optimization (PSO) technique for multi-class classification is used to make the search over the feature space converge more efficiently. The results using the receiver operating characteristic (ROC) curve show that the proposed architecture contributes strongly to computer-aided diagnosis of skin lesions. Experiments on a set of 1450 images yielded a specificity of 98.22% and a sensitivity of 94%. Our empirical evaluation shows superior retrieval and diagnosis performance when compared to other works. We present explicit combinations of feature vectors corresponding to healthy and lesion skin. | Content-based image retrieval techniques for the analysis of dermatological lesions using particle swarm optimization technique
S1568494615000800 | In this study, 39 sets of hard turning (HT) experimental trials were performed on a Mori-Seiki SL-25Y (4-axis) computer numerical controlled (CNC) lathe to study the effect of cutting parameters in influencing the machined surface roughness. In all the trials, AISI 4340 steel workpiece (hardened up to 69 HRC) was machined with a commercially available CBN insert (Warren Tooling Limited, UK) under dry conditions. The surface topography of the machined samples was examined by using a white light interferometer and a reconfirmation of measurement was done using a Form Talysurf. The machining outcome was used as an input to develop various regression models to predict the average machined surface roughness on this material. Three regression models – Multiple regression, Random forest, and Quantile regression were applied to the experimental outcomes. To the best of the authors’ knowledge, this paper is the first to apply random forest or quantile regression techniques to the machining domain. The performance of these models was compared to ascertain how feed, depth of cut, and spindle speed affect surface roughness and finally to obtain a mathematical equation correlating these variables. It was concluded that the random forest regression model is a superior choice over multiple regression models for prediction of surface roughness during machining of AISI 4340 steel (69 HRC). | Prediction of surface roughness during hard turning of AISI 4340 steel (69 HRC)
S1568494615000812 | The optimal layout or geometry of the production area (stope) in an underground mining operation maximises the undiscounted value subject to the inherent physical, geotechnical, and geological constraints. Numerous approaches to developing possible stope layouts have been introduced. However, owing to the size and complexity of the problem, these approaches do not guarantee an optimal solution in three-dimensional space. This article proposes a new heuristic algorithm that incorporates stope size variation for solving this complex and challenging optimisation problem. A case study demonstrates the implementation of the algorithm on an actual ore body model. In a validation study, the proposed algorithm generates a 10.7% more profitable solution than the commercially available Maximum Value Neighbourhood (MVN) algorithm. | A heuristic approach to optimal design of an underground mine stope layout
S1568494615000836 | For many-objective optimization problems, obtaining a set of solutions with good convergence and diversity is a difficult and challenging task. In this paper, a new decomposition-based evolutionary algorithm with uniform designs is proposed to achieve this goal. The proposed algorithm adopts the uniform design method to set the weight vectors, which are uniformly distributed over the design space, and the number of weight vectors neither increases nonlinearly with the number of objectives nor relies on a formulaic setting. A crossover operator based on the uniform design method is constructed to enhance the search capacity of the proposed algorithm. Moreover, in order to improve the convergence performance of the algorithm, a sub-population strategy is used to optimize each sub-problem. Compared with some efficient state-of-the-art algorithms, e.g., NSGAII-CE, MOEA/D and HypE, on six benchmark functions, the proposed algorithm is able to find a set of solutions with better diversity and convergence. | A new decomposition based evolutionary algorithm with uniform designs for many-objective optimization
S1568494615000848 | In this work, the main objective is to obtain enhanced performance of nonlinear multivariable systems. Several algorithms of a Fuzzy Logic Controller based Linear Quadratic Regulator (FLC-LQR) are presented. The multivariable nonlinear system is represented by a generalized Takagi–Sugeno (T–S) model developed by the authors in previous works. This model has been improved using the well-known weighting parameters approach to optimize local and global approximation. In comparison with existing works, the proposed controller is based on calculating the control action at each point of the state space according to the dynamic properties of the nonlinear system at that point. This control methodology offers a robust, well-damped dynamic response and zero steady-state error when the system is subjected to disturbances and modeling errors. A two-link robot system is chosen to evaluate the robustness of the proposed controller algorithms. | Fuzzy optimal control using generalized Takagi–Sugeno model for multivariable nonlinear systems
S156849461500085X | Image quality assessment of distorted or decompressed images without any reference to the original image is challenging from a computational point of view. The quality of an image is best judged by human observers without any reference image, and evaluated using subjective measures. This paper aims at designing a generic no-reference image quality assessment (NR-IQA) method by incorporating human visual perception in assigning quality class labels to images. Using a fuzzy logic approach, we consider information-theoretic entropies of visually salient regions of images as features and assess the quality of the images using linguistic values. The features are transformed into a fuzzy feature space by designing an algorithm based on interval type-2 (IT2) fuzzy sets. The algorithm measures the uncertainty present in the input–output feature space to predict image quality accurately, close to human observations. We have taken a set of training images belonging to five different pre-assigned quality class labels for calculating the footprint of uncertainty (FOU) corresponding to each class. To assess the quality class label of the test images, the maximum of a T-conorm applied on the lower and upper membership functions of the test images belonging to different classes is calculated. Our proposed image quality metric is compared with other no-reference quality metrics, demonstrating more accurate results that are consistent with the subjective mean opinion score metric. | No-reference image quality assessment using interval type 2 fuzzy sets
S1568494615000861 | Control charts are the most useful tools for monitoring production and service processes. Their design involves the selection of such design parameters as sample size, control limits, and sampling frequency. This paper considers the design of X̄ control charts as a multiple objective decision making problem (MODM) which is identified by three criteria: expected hourly cost, in-control average run length, and detection power of control chart. To solve the MODM problem, we propose a hybrid method based on an evolutionary algorithm. In this method, an epsilon constraint is integrated with PSO (particle swarm optimization) as a multi-objective framework. Also, we consider the magnitude of the process shift and the occurrence rate of an assignable cause as fuzzy numbers. Hence, the fuzziness is modeled using both minimax and maximin approaches. Generally, we present a control chart which is suitable for processes with fuzzy parameters and is effective for a range of values of process parameters. A numerical example from the literature is used to illustrate the procedure used for solving the proposed hybrid algorithm. | Multi-objective design of X̄ control charts with fuzzy process parameters using the hybrid epsilon constraint PSO
S1568494615000885 | In this paper, an evolutionary approach to solve the mobile robot path planning problem is proposed. The proposed approach combines the artificial bee colony algorithm as a local search procedure and the evolutionary programming algorithm to refine the feasible path found by a set of local procedures. The proposed method is compared to a classical probabilistic roadmap method (PRM) with respect to their planning performances on a set of benchmark problems and it exhibits a better performance. Criteria used to measure planning effectiveness include the path length, the smoothness of planned paths, the computation time and the success rate in planning. Experiments to demonstrate the statistical significance of the improvements achieved by the proposed method are also shown. | Mobile robot path planning using artificial bee colony and evolutionary programming |
S1568494615000897 | This paper presents a hybrid of adaptive fuzzy decision-level fusion and score-level fusion for finger-knuckle-print (FKP) based authentication, improving over the individual fusion methods. The scores obtained from the fusion of the left index (LI) and left middle (LM) FKPs and those obtained from the fusion of the right index (RI) and right middle (RM) FKPs are fused at the fuzzy decision level. The uncertainty in the local decisions made by the individual score-level fusion methods is addressed by treating the error rates as fuzzy sets. The operating points (thresholds) are adapted to accommodate the varying cost of the false acceptance rate using a hybrid PSO algorithm that ensures the desired level of security. The error rates associated with the operating points are converted into the fuzzy domain by triangular membership functions, and alpha-cuts are applied on the membership functions for a better representation of uncertainty. The global fuzzy error rates are defuzzified using the total distance criterion (TDC). The rigorous experimental results indicate that the hybrid fusion is superior to the component fusion methods (score-level and decision-level fusion). | Hybrid fusion of score level and adaptive fuzzy decision level fusions for the finger-knuckle-print based authentication
S1568494615000903 | In the past, approaches have often generalized classical multi-criteria decision-making (MCDM) methods to a fuzzy environment to solve fuzzy multi-criteria decision-making (FMCDM) problems and to handle the uncertainty and vagueness of decision-making messages. These MCDM methods include the analytic hierarchy process (AHP), the simple additive weighting method (SAW), the technique for order preference by similarity to ideal solution (TOPSIS), etc. Among the MCDM methods, SAW is a famous method applied in a fuzzy environment, but fuzzy multiplication is a drawback when generalizing SAW to a fuzzy environment. To resolve the multiplication drawback, we utilize a relative preference relation, derived from the fuzzy preference relation, in fuzzy generalized SAW. Generally, a fuzzy preference relation is an option that preserves a lot of information. However, pair-wise comparison for a fuzzy preference relation is operationally complex. Accordingly, the relative preference relation is developed from the fuzzy preference relation to avoid pair-wise comparison of fuzzy numbers while preserving fuzzy messages. Through the relative preference relation, we can generalize SAW to a fuzzy environment. That is, we propose an FMCDM model based on SAW and the relative preference relation to easily and quickly solve FMCDM problems. | A fuzzy multi-criteria decision-making model based on simple additive weighting method and relative preference relation
S1568494615000915 | The objective of this study is to design an efficient artificial neural network (ANN) architecture in order to predict the crack growth direction in multiple crack geometry. Nonlinear logistic (sigmoid and tangent hyperbolic) and linear activation functions have been used through the one- and two-hidden layer ANN. 85 tests were conducted on aluminium alloys under different crack positions, defined by crack tip distance, crack offset distance, crack size, and crack inclination with loading axis. The experimental data set as first degree or second degree were used to train 22 proposed ANN models to predict the output for new data sets (not included in the training sets). The model results were then compared with the experimental data. It was observed that ANN model with combinations of activation functions and two hidden layers predict the crack initiation direction with good accuracy when higher order input variables are presented to the network. | Application of artificial neural network for predicting crack growth direction in multiple cracks geometry
S1568494615000927 | The purpose of this paper is to develop a multi-item economic order quantity (EOQ) model with shortage for a single-buyer single-supplier supply chain under green vendor managed inventory (VMI) policy. This model explicitly includes the VMI contractual agreement between the vendor and the buyer such as warehouse capacity and delivery constraints, bounds for each order, and limits on the number of pallets. To create a kind of green supply chain, tax cost of green house gas (GHG) emissions and limitation on total emissions of all items are considered in the model. A hybrid genetic and imperialist competitive algorithm (HGA) is employed to find a near-optimum solution of a nonlinear integer-programming (NIP) with the objective of minimizing the total cost of the supply chain. Since no benchmark is available in the literature, a genetic algorithm (GA) is developed as well to validate the result obtained. For further validation, the outcomes are also compared to lower bounds that are found using a relaxed model in which all variables are treated continuous. At the end, numerical examples are presented to demonstrate the application of the proposed methodology. Our results proved that the proposed hybrid procedure was able to find better and nearer optimal solutions. | A hybrid genetic and imperialist competitive algorithm for green vendor managed inventory of multi-item multi-constraint EOQ model under shortage |
S1568494615000939 | Differential evolution (DE) has been widely studied over the past decade. In its mutation operator, random variations are derived from the difference of two randomly selected distinct individuals, so the difference vector plays an important role in evolution. It is observed that the best fitness found so far by DE is not improved in every generation. In this article, a directional mutation operator is proposed that attempts to recognize good variation directions and increase the number of generations with fitness improvement. The idea is to construct a pool of difference vectors collected whenever fitness improves in a generation; this pool then guides the mutation search in the next generation, and only that one. The directional mutation operator can be applied to any DE mutation strategy, with the aim of speeding up the convergence of DE and improving its performance. The proposed method is evaluated experimentally on the CEC 2005 test set with dimension 30 and on the CEC 2008 test set with dimensions 100 and 1000. It is demonstrated that the proposed method yields more generations with fitness improvement than classic DE. As examples of how to combine it with other algorithms, it is incorporated into eleven DE algorithms, and the performance of most of them is significantly improved. Moreover, simulation results show that the directional mutation operator helps balance the exploration and exploitation capacities of the tested DE algorithms, and the modified algorithms save computational time compared to the originals. The proposed approach is compared with the proximity-based mutation operator, as both are claimed to be applicable to any DE mutation strategy; the directional mutation operator is shown to be better on the five variants in the DE family. Finally, applications to two real-world engineering optimization problems verify the usefulness of the proposed method. | A directional mutation operator for differential evolution algorithms
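The pool-of-difference-vectors idea in the abstract above can be sketched in a few lines. This is an illustrative minimal DE/rand/1 variant, not the authors' exact operator: difference vectors that produced a fitness improvement are stored and reused in the next generation only (the 0.5 reuse probability is an assumption for illustration).

```python
import random

def de_directional(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Minimise f with DE/rand/1; difference vectors that improved fitness
    are kept in a pool and reused as mutation directions once, in the
    next generation only."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    pool = []                                  # directions from improving moves
    for _ in range(gens):
        next_pool = []
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            if pool and rng.random() < 0.5:    # reuse a stored good direction
                d = rng.choice(pool)
            else:                              # classic random difference vector
                d = [pop[b][k] - pop[c][k] for k in range(dim)]
            trial = [pop[a][k] + F * d[k] for k in range(dim)]
            jrand = rng.randrange(dim)         # binomial crossover
            trial = [trial[k] if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft < fit[i]:                    # greedy selection
                next_pool.append(d)            # remember the improving direction
                pop[i], fit[i] = trial, ft
        pool = next_pool                       # pool is used once, then rebuilt
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

On a 2-D sphere function this sketch converges to near zero within the default budget.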
S1568494615000952 | The order acceptance and scheduling (OAS) problem is important in make-to-order production systems in which production capacity is limited and order delivery requirements are applied. This study proposes a multi-initiator simulated annealing (MSA) algorithm to maximize the total net revenue for the permutation flowshop scheduling problem with order acceptance and weighted tardiness. To evaluate the performance of the proposed MSA algorithm, computational experiments are performed and compared for a benchmark problem set of test instances with up to 500 orders. Experimental results reveal that the proposed heuristic outperforms the state-of-the-art algorithm and obtains the best solutions in 140 out of 160 benchmark instances. | Order acceptance and scheduling to maximize total net revenue in permutation flowshops with weighted tardiness |
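A toy version of the order acceptance and scheduling trade-off can illustrate what such a simulated annealing procedure searches over. The sketch below is single-machine with simple drop/insert moves — assumptions for illustration only; the paper addresses permutation flowshops with a multi-initiator scheme.

```python
import math
import random

def sa_oas(orders, iters=3000, t0=5.0, cooling=0.998, seed=2):
    """orders: list of (revenue, proc_time, due_date, tardiness_weight).
    Maximise total revenue of accepted orders minus weighted tardiness
    on a single machine; the accepted list is also the processing sequence."""
    rng = random.Random(seed)

    def net_revenue(accepted):
        t, total = 0.0, 0.0
        for i in accepted:
            r, p, d, w = orders[i]
            t += p
            total += r - w * max(0.0, t - d)   # revenue minus weighted tardiness
        return total

    current, cur_val = [], 0.0
    best, best_val = [], 0.0
    temp = t0
    for _ in range(iters):
        cand = current[:]
        if cand and rng.random() < 0.5:        # drop a random accepted order
            cand.pop(rng.randrange(len(cand)))
        else:                                  # insert a random rejected order
            rejected = [i for i in range(len(orders)) if i not in cand]
            if rejected:
                cand.insert(rng.randrange(len(cand) + 1), rng.choice(rejected))
        val = net_revenue(cand)
        delta = val - cur_val
        if delta >= 0 or rng.random() < math.exp(delta / temp):   # Metropolis rule
            current, cur_val = cand, val
            if val > best_val:
                best, best_val = cand[:], val
        temp *= cooling
    return best, best_val
```

With three orders, accepting only profitable, on-time work beats accepting everything.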
S1568494615000964 | This paper proposes a Tabu-mechanism improved iterated greedy (TMIIG) algorithm to solve the no-wait flowshop scheduling problem with a makespan criterion. The idea of seeking further improvement in the iterated greedy (IG) algorithm framework is based on the observation that the construction phase of the original IG algorithm may not achieve good performance in escaping from local minima when incorporating the insertion neighborhood search. To overcome this limitation, we have modified the IG algorithm by utilizing a Tabu-based reconstruction strategy to enhance its exploration ability. A powerful neighborhood search method that involves insert, swap, and double-insert moves is then applied to obtain better solutions from the reconstructed solution in the previous step. Empirical results on several benchmark problem instances and those generated randomly confirm the advantages of utilizing the new reconstruction scheme. In addition, our results also show that the proposed TMIIG algorithm is relatively more effective in minimizing the makespan than other existing well-performing heuristic algorithms. | An improved iterated greedy algorithm with a Tabu-based reconstruction strategy for the no-wait flowshop scheduling problem |
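The iterated greedy framework the paper improves upon alternates destruction (remove a few jobs) and construction (greedy best-position reinsertion). A minimal sketch for the ordinary permutation-flowshop makespan follows; the Tabu-based reconstruction and the no-wait makespan computation of the paper are omitted, and acceptance here is simply "keep if better".

```python
import random

def makespan(seq, p):
    """Completion time of the last job in a permutation flowshop.
    p[j][m] is the processing time of job j on machine m."""
    machines = len(p[0])
    c = [0.0] * machines
    for j in seq:
        c[0] += p[j][0]
        for m in range(1, machines):
            c[m] = max(c[m], c[m - 1]) + p[j][m]
    return c[-1]

def iterated_greedy(p, d=2, iters=300, seed=0):
    """Destroy d random jobs, then reinsert each at its best position."""
    rng = random.Random(seed)
    seq = list(range(len(p)))
    rng.shuffle(seq)
    best, best_val = seq[:], makespan(seq, p)
    for _ in range(iters):
        partial = best[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for job in removed:                    # greedy best-position reinsertion
            pos = min(range(len(partial) + 1),
                      key=lambda k: makespan(partial[:k] + [job] + partial[k:], p))
            partial.insert(pos, job)
        val = makespan(partial, p)
        if val < best_val:
            best, best_val = partial, val
    return best, best_val
```

On a 4-job, 2-machine instance the optimum (11, by Johnson's rule) is found quickly.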
S1568494615000976 | In this paper, a multi-item solid transportation problem is formulated with parameters, e.g., transportation costs, supplies, and demands, as type-2 triangular fuzzy variables. In this problem there are restrictions on some items and conveyances, so that some specific items cannot be transported through some particular conveyances. With the critical value (CV)-based reductions of type-2 fuzzy variables, a chance-constrained programming model is formulated for the problem and then converted to an equivalent deterministic form. Here, we propose a method to find nearest interval approximations for continuous type-2 fuzzy variables. A deterministic form of the problem is also obtained by applying interval analysis using these interval approximations. The reduced deterministic problems are solved using a gradient-based optimization technique, the Generalized Reduced Gradient (GRG) method (via the LINGO solver), and a genetic algorithm. A numerical example is provided to illustrate the problem and the methods. | Multi-item solid transportation problem with type-2 fuzzy parameters
S156849461500099X | A new disturbance detection and classification technique based on a modified Adaline and an adaptive neuro-fuzzy information system (ANFIS) is proposed for a distributed generation system comprising a wind power generating system (DFIG) and a photovoltaic array. The proposed technique is based on a fast Gauss–Newton parameter updating rule rather than the conventional Widrow–Hoff delta rule for the Adaline network. The voltage and current signals near the target distributed generation (DG), particularly the DFIG, whose speed varies from the minimum to the maximum cut-off speed, are processed through the modified Adaline network to yield features such as the negative sequence power, harmonic amplification factor (HAF), and total harmonic distortion (THD). These features are then used as training sets for the ANFIS, which employs a gradient descent algorithm to update its parameters. The proposed technique distinguishes the islanding condition of the distributed generation system from other disturbances, such as switching faults, capacitor bank switching, voltage swell, voltage sag, distorted grid voltage, and unbalanced load switching, which are referred to as non-islanding cases in this paper. | Impact of wind farms on disturbance detection and classification in distributed generation using modified Adaline network and an adaptive neuro-fuzzy information system
S1568494615001003 | Genetic algorithms (GA) provide an efficient method for training filters to find proper weights using a fitness function in which the input signal is filtered and compared with the desired output. In image processing applications, the high computational cost of the repeatedly evaluated fitness function can make training time relatively long. In this study, a new algorithm, called sub-image blocks based on graphics processing units (GPU), is developed to accelerate the training of mask weights using GA. The method is developed by discussing alternative design considerations, including the direct method (DM), population-based method (PBM), block-based method (BBM), and sub-images-based method (SBM). A comparative performance evaluation of the introduced methods is presented using sequential and other GPUs. Among the discussed designs, SBM provides the best performance by taking advantage of the block shared and thread local memories in the GPU. According to execution duration and comparative acceleration graphs, SBM provides approximately 55–90 times more acceleration using a GeForce GTX 660 over a sequential implementation on a 3.5 GHz processor. | GPU accelerated training of image convolution filter weights using genetic algorithms
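The core idea above — evolving mask weights so the filtered input matches a desired output — can be shown without a GPU. The following CPU-only sketch evolves a 3×3 mask with a simple elitist GA; the population size, mutation scale, and crossover scheme are illustrative assumptions, not the paper's settings.

```python
import random

def convolve3(img, mask):
    """Valid 3x3 convolution of a 2-D list image with a flat 9-weight mask."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + i][x + j] * mask[i * 3 + j]
                 for i in range(3) for j in range(3))
             for x in range(w - 2)]
            for y in range(h - 2)]

def mse(a, b):
    n = len(a) * len(a[0])
    return sum((a[y][x] - b[y][x]) ** 2
               for y in range(len(a)) for x in range(len(a[0]))) / n

def evolve_mask(img, desired, pop_size=30, gens=60, seed=3):
    """Elitist GA: keep the best half, refill with crossover + mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]

    def fitness(m):
        return mse(convolve3(img, m), desired)   # filter, compare, score

    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, 9)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(9)] += rng.gauss(0, 0.1)   # mutate one gene
            children.append(child)
        pop = elite + children
    best = min(pop, key=fitness)
    return best, fitness(best)
```

Trained against a target produced by a known mask, the evolved mask beats the all-zero baseline comfortably.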
S1568494615001015 | In this study a new approach was proposed to determine the optimum parameters of a protective spur dike that mitigates the scour depth around existing main spur dikes. The studied parameters, explored to find their optimum values, were the angle of the protective spur dike relative to the flume wall, its length, its distance from the main spur dikes, the flow intensity, and the diameter of the sediment particles. In the prediction phase, a novel hybrid approach combining an adaptive-network-based fuzzy inference system with particle swarm optimization (ANFIS–PSO) was developed to predict the protective spur dike's parameters in order to control scouring around a series of spur dikes. The results indicated that the accuracy of the proposed method is increased significantly compared to other approaches. In addition, the effectiveness of the developed method was confirmed using the available data. | Hybrid ANFIS–PSO approach for predicting optimum parameters of a protective spur dike
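The PSO half of such a hybrid can be sketched generically. Below is a standard global-best PSO with inertia weight minimizing a stand-in objective; in the paper this role is played by the ANFIS prediction error, and the coefficients here (w, c1, c2) are common textbook defaults, not the authors' values.

```python
import random

def pso(f, bounds, swarm=25, iters=150, w=0.7, c1=1.5, c2=1.5, seed=7):
    """Global-best PSO with inertia weight; minimises f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pval = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]           # global best
    for _ in range(iters):
        for i in range(swarm):
            for k in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                pos[i][k] = min(max(pos[i][k] + vel[i][k],
                                    bounds[k][0]), bounds[k][1])
            v = f(pos[i])
            if v < pval[i]:                      # update personal best
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:                     # and global best
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

On a 2-D sphere function this converges to near zero within the default budget.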
S1568494615001027 | Image segmentation is a very significant process in image analysis. Much effort in this field has been based on thresholding, as it is simple and intuitive; commonly used thresholding approaches optimize a criterion, such as between-class variance or entropy, to seek appropriate threshold values. However, an exhaustive search for the optimal thresholds incurs a large computational cost and breaks down efficiency, which has motivated the application of evolutionary algorithms and swarm intelligence to obtain the optimal thresholds. This paper treats image thresholding as a constrained optimization problem, and optimal thresholds for 1-level or multi-level thresholding of an image are acquired by maximizing the fuzzy entropy via a newly proposed bat algorithm; the optimal thresholding is achieved through the convergence of the bat algorithm. The proposed method has been tested on natural and infrared images. The results are compared with fuzzy entropy based methods optimized by the artificial bee colony algorithm (ABC), genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO); moreover, they are also compared with thresholding methods based on the between-class variance and Kapur's entropy criteria optimized by the bat algorithm. It is demonstrated that the proposed method is robust, adaptive, encouraging in terms of CPU time, and exhibits better performance than the other methods involved in the paper in terms of objective function values. | Fuzzy entropy based optimal thresholding using bat algorithm
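As a concrete stand-in for the entropy criteria mentioned above, here is Kapur's entropy for a single threshold, evaluated exhaustively on a small histogram; in the paper a fuzzy entropy variant of such an objective is maximized by the bat algorithm instead of exhaustive search.

```python
import math

def kapur_entropy(hist, t):
    """Sum of the entropies of the two classes split at threshold t."""
    total = sum(hist)
    p = [h / total for h in hist]
    w0 = sum(p[:t]) or 1e-12                 # class probabilities (guard /0)
    w1 = sum(p[t:]) or 1e-12
    h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
    h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
    return h0 + h1

def best_threshold(hist):
    """Exhaustive search over thresholds; a metaheuristic replaces this
    loop when the grey-level range or number of thresholds grows."""
    return max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, t))
```

On a bimodal histogram the maximizing threshold falls in the valley between the two peaks.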
S1568494615001039 | We propose a framework for abstractive summarization of multi-documents, which aims to select contents of the summary not from the source document sentences but from the semantic representation of the source documents. In this framework, contents of the source documents are represented by predicate argument structures obtained through semantic role labeling. Content selection for the summary is made by ranking the predicate argument structures based on optimized features, and language generation is used to generate sentences from the predicate argument structures. Our proposed framework differs from other abstractive summarization approaches in a few aspects. First, it employs semantic role labeling for semantic representation of text. Second, it analyzes the source text semantically by utilizing a semantic similarity measure in order to cluster semantically similar predicate argument structures across the text; and finally, it ranks the predicate argument structures based on features weighted by a genetic algorithm (GA). The experiments in this study are carried out using DUC-2002, a standard corpus for text summarization. Results indicate that the proposed approach performs better than other summarization systems. | A framework for multi-document abstractive summarization based on semantic role labelling
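The feature-weighted ranking step can be sketched trivially: each predicate argument structure gets a score that is the weighted sum of its feature values, with the weights being what the GA optimizes. The feature values and weights below are made up for illustration.

```python
def rank_structures(features, weights):
    """features[i] is the feature vector of predicate argument structure i;
    weights are GA-tuned coefficients. Returns indices sorted best-first
    by weighted-sum score."""
    scores = [sum(w * x for w, x in zip(weights, f)) for f in features]
    return sorted(range(len(features)), key=lambda i: -scores[i])
```

With two features weighted equally, the structure with the highest total comes first.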
S1568494615001040 | This paper presents a bi-objective vendor managed inventory (BOVMI) model for a supply chain problem with a single vendor and multiple retailers, in which the demand is fuzzy and the vendor manages the retailers’ inventory in a central warehouse. The vendor confronts two constraints: number of orders and available budget. In this model, the fuzzy demand is formulated using trapezoidal fuzzy number (TrFN) where the centroid defuzzification method is employed to defuzzify fuzzy output functions. Minimizing both the total inventory cost and the warehouse space are the two objectives of the model. Since the proposed model is formulated into a bi-objective integer nonlinear programming (INLP) problem, the multi-objective evolutionary algorithm (MOEA) of non-dominated sorting genetic algorithm-II (NSGA-II) is developed to find Pareto front solutions. Besides, since there is no benchmark available in the literature to validate the solutions obtained, another MOEA, namely the non-dominated ranking genetic algorithms (NRGA), is developed to solve the problem as well. To improve the performances of both algorithms, their parameters are calibrated using the Taguchi method. Finally, conclusions are made and future research works are recommended. | Two parameter tuned multi-objective evolutionary algorithms for a bi-objective vendor managed inventory model with trapezoidal fuzzy demand |
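Both NSGA-II and NRGA mentioned above rest on non-dominated sorting: splitting solutions into successive Pareto fronts. A minimal sketch for minimization follows; crowding distance and the genetic operators are omitted.

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Split objective vectors into successive Pareto fronts (NSGA-II style).
    Front 0 holds the non-dominated points, front 1 those dominated only
    by front 0, and so on."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

For five bi-objective points the trade-off set (1,5), (2,2), (5,1) forms the first front.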
S1568494615001052 | The growing complexity of real-world problems has motivated computer scientists to search for efficient problem-solving methods. Metaheuristics based on evolutionary computation and swarm intelligence are outstanding examples of nature-inspired solution techniques. Inspired by the social spiders, we propose a novel social spider algorithm to solve global optimization problems. This algorithm is mainly based on the foraging strategy of social spiders, utilizing the vibrations on the spider web to determine the positions of preys. Different from the previously proposed swarm intelligence algorithms, we introduce a new social animal foraging strategy model to solve optimization problems. In addition, we perform preliminary parameter sensitivity analysis for our proposed algorithm, developing guidelines for choosing the parameter values. The social spider algorithm is evaluated by a series of widely used benchmark functions, and our proposed algorithm has superior performance compared with other state-of-the-art metaheuristics. | A social spider algorithm for global optimization |
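The foraging model described above can be caricatured as follows: each spider's vibration intensity reflects its fitness, intensity decays with distance across the web, and every spider moves toward the strongest vibration it perceives. The intensity formula, attenuation rate, and step rule below are illustrative guesses, not the authors' equations.

```python
import math
import random

def spider_search(f, bounds, n=20, iters=150, ra=3.0, seed=5):
    """Vibration-guided minimisation sketch: move each spider toward the
    position whose distance-attenuated vibration is strongest."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    best, best_val = None, float("inf")
    for _ in range(iters):
        fit = [f(p) for p in pos]
        for i in range(n):
            if fit[i] < best_val:
                best, best_val = pos[i][:], fit[i]
        lo_f = min(fit)
        # source intensity: larger for better (smaller) fitness values
        intensity = [math.log(1.0 / (fi - lo_f + 1e-9) + 1.0) for fi in fit]
        new_pos = []
        for i in range(n):
            others = [j for j in range(n) if j != i]
            target = max(others, key=lambda j: intensity[j] *
                         math.exp(-math.dist(pos[i], pos[j]) / ra))
            # random walk toward the strongest perceived vibration source
            step = [rng.random() * (pos[target][k] - pos[i][k]) +
                    rng.gauss(0, 0.02) for k in range(dim)]
            new_pos.append([min(max(pos[i][k] + step[k],
                                    bounds[k][0]), bounds[k][1])
                            for k in range(dim)])
        pos = new_pos
    return best, best_val
```

On a small 2-D sphere function the swarm clusters around the best vibration source.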
S1568494615001064 | One of the research problems investigated these days is early fault detection. To this end, advanced signal processing algorithms are employed. The present paper attempts early fault detection in a gearbox; to evaluate its technical condition, artificial neural networks were used. Early fault detection based on support vector machines is a relatively new and rarely employed method for evaluating the condition of machines, particularly gearboxes, and the available literature offers very promising results for this method. To compare the obtained results, a multilayer perceptron network was created; such a standard neural network ensures high effectiveness. The vibration signal obtained from a sensor is seldom suitable for direct analysis; it first needs to be processed to bring out the informative part of the signal. To this end, a wavelet transform was used. The presented results concern both a “raw” vibration signal and a processed one, investigated with the two neural networks. The wavelet transform proved to improve significantly the accuracy of condition evaluation, and the results obtained by the two networks are consistent with one another. | Early fault detection in gearboxes based on support vector machines and multilayer perceptron with a continuous wavelet transform
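As a minimal illustration of the wavelet preprocessing step, here is one level of the discrete Haar transform splitting a signal into approximation (low-pass) and detail (high-pass) coefficients; the paper itself uses a continuous wavelet transform.

```python
import math

def haar_level1(signal):
    """One level of the discrete Haar wavelet transform.
    Returns (approximation, detail) coefficient lists; each pairwise
    sum/difference is scaled by 1/sqrt(2) to preserve energy."""
    s = math.sqrt(2.0)
    approx = [(signal[2 * i] + signal[2 * i + 1]) / s
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / s
              for i in range(len(signal) // 2)]
    return approx, detail
```

For a locally constant signal the detail coefficients vanish, which is what makes sudden fault-induced transients stand out in the detail band.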