S1568494613002731
In this work we present the parallel implementation of a hybrid global optimization algorithm assembled specifically to tackle a class of time consuming interatomic potential fitting problems. The resulting objective function is characterized by large and varying execution times, discontinuity and lack of derivative information. The presented global optimization algorithm corresponds to an irregular, two-level execution task graph where tasks are spawned dynamically. We use the OpenMP tasking model to express the inherent parallelism of the algorithm on shared-memory systems and a runtime library which implements the execution environment for adaptive task-based parallelism on multicore clusters. We describe in detail the hybrid global optimization algorithm and various parallel implementation issues. The proposed methodology is then applied to a specific instance of the interatomic potential fitting problem for the metal titanium. Extensive numerical experiments indicate that the proposed algorithm achieves the best parallel performance. In addition, its serial implementation performs well and therefore can also be used as a general purpose optimization algorithm.
A parallel hybrid optimization algorithm for fitting interatomic potentials
S1568494613002743
In this paper, we address the minimum energy broadcast (MEB) problem in wireless ad-hoc networks (WANETs). Research on WANETs has attracted significant attention, and one of the most critical issues in such networks is the minimization of energy consumption. In WANETs, packets have to be transported from a given source node to all other nodes in the network, and the objective of the MEB problem is to minimize the total transmission power consumption. A hybrid algorithm based on particle swarm optimization (PSO) and local search is presented to solve the MEB problem. A power degree encoding is proposed to reflect the extent of the transmission power level and is used to define the particle position in PSO. We also analyze a well-known local search mechanism, r-shrink, and propose an improved version, the intensified r-shrink. To solve the dynamic MEB problem with node removal/insertion, this paper provides an effective and simple heuristic, Conditional Incremental Power (CIP), to reconstruct the broadcast network efficiently. The promising results indicate the potential of the proposed methods for practical use.
Static and dynamic minimum energy broadcast problem in wireless ad-hoc networks: A PSO-based approach and analysis
S1568494613002755
Over recent years, peer-to-peer (p2p) systems have become increasingly popular. Today, most Internet IP traffic is already transmitted in this form, and its volume is projected to double by 2014. Most p2p systems, however, are neither pure serverless solutions nor highly efficient at searching, which is usually achieved by simple flooding. To cope with the growing traffic, we must consider more elaborate search mechanisms and far less centralized environments. An effective proposal is to address the problem in the domain of ant colony optimization (ACO) metaheuristics. In this paper we present an overview of the ACO algorithms that offer the best potential in this field, under the strict requirements and limitations of a pure p2p network. We design several experiments to serve as an evaluation platform for these algorithms and to identify the features of a high-quality approach. Finally, we consider two hybrid extensions to the classical algorithms, in order to examine their contribution to overall quality and robustness.
On the performance of ACO-based methods in p2p resource discovery
S1568494613002767
Predicting the future behavior of complex systems is important, yet there are currently no effective methods for solving the time series forecasting problem using both quantitative and qualitative information. Therefore, based on the belief rule base (BRB), this paper focuses on developing a new model that can deal with this problem. Although it is difficult to obtain accurate and complete quantitative information, some qualitative information can be collected and represented by a BRB. As such, a new BRB based forecasting model is proposed for the case where quantitative and qualitative information exist simultaneously. The performance of the proposed model depends on both the structure and the belief degrees of the BRB, and the structure is determined by the delay step. In order to obtain an appropriate delay step using the available information, a model selection criterion is defined according to Akaike's information criterion (AIC). Based on the proposed model selection criterion and the optimal algorithm for training the belief degrees, an algorithm for constructing the BRB based forecasting model is developed. Experimental results show that the constructed BRB based forecasting model not only predicts the time series accurately but also has an appropriate structure.
Construction of a new BRB based model for time series forecasting
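The delay-step selection described above hinges on an AIC-style trade-off between goodness of fit and model size. As a hedged illustration (not the authors' BRB model), the sketch below scores a naive lag-d predictor with AIC = n·ln(RSS/n) + 2k and picks the delay that minimizes it; using k = d as the parameter count is an assumed stand-in for the BRB's structure-dependent complexity:

```python
import math

def aic_for_delay(series, d):
    """AIC score for a naive lag-d predictor y_t ~ y_{t-d}.
    This stands in for the BRB model; only the selection criterion is illustrated."""
    resid = [series[t] - series[t - d] for t in range(d, len(series))]
    n = len(resid)
    rss = sum(r * r for r in resid)
    # AIC = n * ln(RSS/n) + 2k, with k = d parameters (hypothetical choice);
    # the small epsilon guards against log(0) on a perfect fit.
    return n * math.log(rss / n + 1e-12) + 2 * d

def select_delay(series, max_delay):
    """Choose the delay step with the lowest AIC."""
    return min(range(1, max_delay + 1), key=lambda d: aic_for_delay(series, d))

# A period-3 series: a delay of 3 gives a perfect naive forecast
series = [1.0, 2.0, 3.0] * 20
print(select_delay(series, 5))  # → 3
```

The same pattern applies with any trainable forecaster: fit once per candidate delay, score with AIC, keep the minimizer.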
S1568494613002779
In this paper, the modelling problem of brain and eye signals is considered. To solve this problem, three important evolving and stable intelligent algorithms are applied: the sequential adaptive fuzzy inference system (SAFIS), uniform stable backpropagation algorithm (SBP), and online self-organizing fuzzy modified least-squares networks (SOFMLS). The effectiveness of the studied methods is verified by simulations.
Evolving intelligent algorithms for the modelling of brain and eye signals
S1568494613002792
This work investigates the effectiveness of using computer-based machine learning regression algorithms and meta-regression methods to predict performance data for Australian football players based on parameters collected during daily physiological tests. Three experiments are described. The first uses all available data with a variety of regression techniques. The second uses a subset of features selected from the available data using the Random Forest method. The third uses meta-regression with the selected feature subset. Our experiments demonstrate that feature selection and meta-regression methods improve the accuracy of predictions for the match performance of Australian football players based on daily medical test data, compared to regression methods alone. Meta-regression methods and feature selection were able to obtain performance prediction outcomes with significant correlation coefficients. The best results were obtained by additive regression based on isotonic regression for the set of most influential features selected by Random Forest. This model was able to predict athlete performance data with a correlation coefficient of 0.86 (p < 0.05).
Using meta-regression data mining to improve predictions of performance based on heart rate dynamics for Australian football
S1568494613002809
An adaptive seamless streaming dissemination system for vehicular networks is presented in this work. An adaptive streaming system is established at each local server to prefetch and buffer stream data. The adaptive streaming system computes the parts of the prefetched stream data for each user and stores them temporarily at the local server, based on the current situation of the users and the environments in which they are located. Users can thus download the prefetched stream data from the local servers instead of directly from the Internet, so that playback problems caused by network congestion can be avoided. Several techniques, such as stream data prefetching, stream data forwarding, and adaptive dynamic decoding, are utilized to enhance adaptability to different users and environments and to achieve the best transmission efficiency. Fuzzy logic inference systems are utilized to determine whether a roadside base station (BS) or a vehicle should be chosen to transfer stream data for users. Considering the uneven deployment of BSs and vehicles, a bandwidth reservation mechanism for premium users is proposed to ensure the QoS of the stream data received by premium users. A series of simulations was conducted, with the experimental results verifying the effectiveness and feasibility of the proposed work.
An adaptive multimedia streaming dissemination system for vehicular networks
S1568494613002810
In this paper, a fuzzy-based Variable Structure Control (VSC) with guaranteed stability is presented. The main objective is to obtain improved performance for highly non-linear unstable systems. The contribution of this work is twofold: firstly, new functions are proposed for chattering reduction, considered the main drawback of VSC, and for error convergence without sacrificing invariance properties; secondly, the global stability of the controlled system is guaranteed. The well-known weighting parameters approach is used in this paper to optimize the local and global approximation and modeling capability of the T-S fuzzy model. A one-link robot is chosen as a nonlinear unstable system to evaluate the robustness and effectiveness of the optimization approach and the high accuracy obtained in approximating nonlinear systems in comparison with the original T-S model. Simulation results indicate the potential and generality of the algorithm. The application of the proposed FLC-VSC shows that both chattering alleviation and robust performance are achieved, and the effectiveness of the proposed controller is demonstrated in the presence of disturbances and noise.
Variable Structure Control with chattering elimination and guaranteed stability for a generalized T-S model
S1568494613002822
This paper describes a nonparametric approach for analyzing gait and identifying bilateral heel-strike events in data from an inertial measurement unit worn on the waist. The approach automatically adapts to variations in the subjects' gait by including a classifier that continuously evolves as it “learns” aspects of each individual's gait profile. The novel data-driven approach is shown to be capable of adapting to different gait profiles without any need for supervision. The approach has several stages. First, cadence episodes are detected using a Hidden Markov Model. Second, discrete wavelet transforms are applied to extract peak features from accelerometers and gyroscopes. Third, the feature dimensionality is reduced using principal component analysis. Fourth, Rapid Centroid Estimation (RCE) is used to cluster the peaks into 3 classes: (a) left heel-strikes, (b) right heel-strikes, and (c) artifacts that belong to neither (a) nor (b). Finally, a Bayes filter is used, which takes into account prior detections, model predictions, and step timings at time segments of interest. Experimental results involving 15 participants suggest that the system is capable of detecting bilateral heel-strikes with greater than 97% accuracy.
Unsupervised nonparametric method for gait analysis using a waist-worn inertial sensor
S1568494613002834
We present a new classifier fusion method that combines soft-level classifiers and can be considered a generalized decision templates method. Previous combining methods based on decision templates employ a single prototype for each class, but this global point of view often fails to properly represent the decision space. This drawback severely degrades the classification rate in cases such as an insufficient number of training samples, island-shaped decision-space distributions, and classes with highly overlapping decision spaces. To better represent the decision space, we utilize a prototype selection method to obtain a set of local decision prototypes for each class. Afterward, to determine the class of a test pattern, its decision profile is computed and then compared to all decision prototypes. In other words, for each class, the larger the number of decision prototypes near the decision profile of a given pattern, the higher the chance for that class. The efficiency of the proposed method is evaluated on several well-known classification datasets, suggesting its superiority over other proposed techniques.
Combining classifiers using nearest decision prototypes
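The nearest-decision-prototypes rule summarized above — the more prototypes of a class lie near a test pattern's decision profile, the likelier that class — can be sketched as a k-nearest-prototype vote. This is a simplified illustration, not the authors' implementation; the Euclidean distance and the choice of k are assumptions:

```python
def euclid(a, b):
    """Euclidean distance between two decision profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(profile, prototypes, k=3):
    """prototypes: list of (class_label, decision_profile) local prototypes,
    several per class. Vote among the k prototypes nearest to the test
    pattern's decision profile."""
    ranked = sorted(prototypes, key=lambda cp: euclid(cp[1], profile))
    votes = {}
    for label, _ in ranked[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Two local prototypes per class over a 2-dimensional decision profile
protos = [("A", (0.9, 0.1)), ("A", (0.8, 0.2)),
          ("B", (0.1, 0.9)), ("B", (0.2, 0.8))]
print(classify((0.85, 0.15), protos))  # → A
```

With multiple local prototypes per class, island-shaped regions of the decision space get their own representatives, which a single global template cannot provide.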
S1568494613002846
This article introduces a hybrid approach that combines the advantages of fuzzy sets, ant-based clustering and a multilayer perceptron neural network (MLPNN) classifier, in conjunction with a statistical feature extraction technique. Breast cancer MRI imaging is chosen as the application, and the hybrid system is applied to assess its ability to accurately classify breast cancer images into two outcomes: benign or malignant. The introduced hybrid system starts with an algorithm based on type-II fuzzy sets to enhance the contrast of the input images. This is followed by an improved version of the classical ant-based clustering algorithm, called adaptive ant-based clustering, to identify target objects through an optimization methodology that maintains the optimum result during iterations. Then, more than twenty statistical features are extracted and normalized. Finally, an MLPNN classifier is employed to evaluate the ability of the lesion descriptors to discriminate between different regions of interest and determine whether the cancer is benign or malignant. To evaluate the performance of the presented approach, we present tests on different breast MRI images. The experimental results show that adaptive ant-based segmentation is superior to the classical ant-based clustering technique, and the overall accuracy of the employed hybrid technique confirms its effectiveness and high performance.
MRI breast cancer diagnosis hybrid approach using adaptive ant-based segmentation and multilayer perceptron neural networks classifier
S156849461300286X
This paper proposes a novel multi-objective model for an unrelated parallel machine scheduling problem considering inherent uncertainty in processing times and due dates. The problem is characterized by non-zero ready times, sequence- and machine-dependent setup times, and secondary resource constraints for jobs. Each job can be processed only if its required machine and secondary resource (if any) are available at the same time. Finding an optimal solution for this complex problem in a reasonable time using exact optimization tools is prohibitive. This paper presents an effective multi-objective particle swarm optimization (MOPSO) algorithm to find a good approximation of the Pareto frontier, where total weighted flow time, total weighted tardiness, and total machine load variation are to be minimized simultaneously. The proposed MOPSO exploits new selection regimes for preserving global as well as personal best solutions. Moreover, a generalized dominance concept in a fuzzy environment is employed to find the locally Pareto-optimal frontier. The performance of the proposed MOPSO is compared against a conventional multi-objective particle swarm optimization (CMOPSO) algorithm over a number of randomly generated test problems. Statistical analyses based on the effect of each algorithm on each objective space show that the proposed MOPSO outperforms the CMOPSO in terms of quality, diversity and spacing metrics.
A particle swarm optimization for a fuzzy multi-objective unrelated parallel machines scheduling problem
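Central to any MOPSO is the Pareto dominance test over objective vectors — here all three objectives (flow time, tardiness, load variation) are minimized. A minimal crisp sketch, independent of the paper's fuzzy generalization of dominance:

```python
def dominates(u, v):
    """u dominates v (all objectives minimized): u is no worse in every
    objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (flow time, tardiness, load variation) triples
pts = [(3, 5, 2), (2, 6, 2), (4, 4, 4), (3, 5, 3)]
print(pareto_front(pts))  # → [(3, 5, 2), (2, 6, 2), (4, 4, 4)]
```

In a swarm, this test drives both the external archive update and the choice of global/personal best guides; the paper's fuzzy dominance relaxes the crisp comparisons above.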
S1568494613002871
This paper proposes an artificial immune network with social learning (AINet-SL) for complex optimization problems. In AINet-SL, antibodies are divided into two swarms: an elitist swarm (ES), in which antibodies undergo self-learning, and a common swarm (CS), in which antibodies undergo social learning with different mechanisms, i.e., stochastic social learning (SSL) and heuristic social learning (HSL). The elitist antibody to be learned from is selected randomly in SSL, while it is determined by the affinity measure in HSL. In order to obtain more accurate solutions, a dynamic search step length updating strategy is proposed. A series of comparative numerical simulations is carried out among the proposed AINet-SL, Differential Evolution (DE), opt-aiNet, IA-AIS and AAIS-2S. Five benchmark functions and a practical application of finite impulse response (FIR) filter design are selected as testbeds. The simulation results indicate that the proposed AINet-SL is an efficient method and outperforms DE, opt-aiNet, IA-AIS and AAIS-2S in convergence speed and solution accuracy.
AINet-SL: Artificial immune network with social learning and its application in FIR filter designing
S1568494613002883
This paper presents optimal path planning for nonholonomic mobile robots maintaining a coherent formation in a leader–follower structure in the presence of obstacles, using Asexual Reproduction Optimization (ARO). Path planning for the robots is accomplished using the potential field method, and a novel potential-field-based formation controller for mobile robots is proposed. The efficiency of the proposed method is verified through simulation and experimental studies in which it controls the formation of four e-Puck robots (a low-cost mobile robot platform). The proposed method is also compared with Simulated Annealing, Improved Harmony Search and the Cuckoo Optimization Algorithm; the experimental results, together with ARO's higher performance and fast convergence to the best solution, demonstrate that this optimization method is suitable for real-time control applications.
Control of leader–follower formation and path planning of mobile robots using Asexual Reproduction Optimization (ARO)
S1568494613002895
Real-life datasets are often imbalanced, that is, there are significantly more training samples available for some classes than for others, and consequently the conventional aim of maximizing overall classification accuracy is not appropriate when dealing with such problems. Various approaches have been introduced in the literature to deal with imbalanced datasets, and are typically based on oversampling, undersampling or cost-sensitive classification. In this paper, we introduce an effective ensemble of cost-sensitive decision trees for imbalanced classification. Base classifiers are constructed according to a given cost matrix, but are trained on random feature subspaces to ensure sufficient diversity of the ensemble members. We employ an evolutionary algorithm for simultaneous classifier selection and assignment of committee member weights for the fusion process. Our proposed algorithm is evaluated on a variety of benchmark datasets, and is confirmed to lead to improved recognition of the minority class, to be capable of outperforming other state-of-the-art algorithms, and hence to represent a useful and effective approach for dealing with imbalanced datasets.
Cost-sensitive decision tree ensembles for effective imbalanced classification
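The core idea behind cost-sensitive classification is to replace the maximum-posterior decision rule with a minimum-expected-cost rule under a given cost matrix. A minimal sketch (the cost matrix and posterior values are illustrative, not from the paper):

```python
def min_expected_cost_class(posteriors, cost):
    """Pick the class minimizing expected misclassification cost.
    posteriors[j]: estimated P(class j | x);
    cost[i][j]: cost of predicting class i when the truth is class j."""
    n = len(posteriors)
    expected = [sum(cost[i][j] * posteriors[j] for j in range(n))
                for i in range(n)]
    return min(range(n), key=lambda i: expected[i])

# Hypothetical cost matrix: missing minority class 1 costs 10x a false alarm
cost = [[0, 10],
        [1, 0]]
# Even though the posterior favors the majority class 0, the rule picks 1
print(min_expected_cost_class([0.8, 0.2], cost))  # → 1
```

This shows why cost-sensitive base classifiers improve minority-class recognition: the asymmetric cost matrix shifts the decision boundary toward the rare class without resampling the data.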
S1568494613002901
Optimal multipath selection to maximize the number of received descriptions in multiple description coding (MDC) over a lossy network model is proposed. Multiple description scalar quantization (MDSQ) is applied to the wavelet coefficients of a color image to generate the descriptions, which combat transmission loss over lossy networks. In such networks, each received description raises the reconstruction quality of an MDC-coded signal (image, audio or video). To maximize the number of received descriptions, a greater number of optimal routings between source and destination must be obtained. The rainbow network flow (RNF) approach, combined with effective meta-heuristic algorithms, is well suited to this problem. Two meta-heuristic algorithms, the genetic algorithm (GA) and particle swarm optimization (PSO), are utilized to solve the multi-objective routing optimization problem, finding optimal routings each of which is assigned a distinct color by RNF so as to maximize the coded descriptions in a network model. By employing a local-search-based priority encoding method, each individual in GA and each particle in PSO is represented as a potential solution. The proposed algorithms are compared with the multipath Dijkstra algorithm (MDA) for both finding optimal paths and providing reliable multimedia communication. The simulations run over various random network topologies, and the results show that the PSO algorithm finds optimal routings effectively and maximizes the received descriptions with the assistance of RNF, reducing packet loss and increasing throughput.
Meta-heuristic algorithms for optimized network flow wavelet-based image coding
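The MDA baseline mentioned above searches for multiple paths between source and destination. A hedged sketch of one common variant — repeated Dijkstra runs with used edges removed, yielding edge-disjoint routes for successive descriptions — is shown below; the actual MDA and the RNF coloring scheme may differ:

```python
import heapq

def dijkstra(adj, src, dst):
    """Cheapest path by total cost; adj maps node -> {neighbor: cost}."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in adj.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None  # destination unreachable

def disjoint_paths(adj, src, dst, k):
    """Greedy multipath search: repeatedly take the shortest path and
    delete its edges, so later descriptions travel edge-disjoint routes.
    Note: mutates adj."""
    paths = []
    for _ in range(k):
        found = dijkstra(adj, src, dst)
        if found is None:
            break
        _, path = found
        paths.append(path)
        for a, b in zip(path, path[1:]):
            del adj[a][b]
    return paths

# Tiny hypothetical topology with two parallel routes s→t
g = {"s": {"a": 1, "b": 2}, "a": {"t": 1}, "b": {"t": 1}, "t": {}}
print(disjoint_paths(g, "s", "t", 2))  # → [['s', 'a', 't'], ['s', 'b', 't']]
```

Sending one description per disjoint route is what lets partial delivery still reconstruct a usable signal.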
S1568494613002913
Truss optimization constrained by vibration frequencies is a highly nonlinear and computationally expensive problem. To speed up convergence and obtain the global solution of this problem, a hybrid optimality criterion (OC) and genetic algorithm (GA) method for truss optimization is presented in this paper. Firstly, the OC method is developed for multiple frequency constraints. Then, the most efficient variables are identified by sensitivity analysis and modified in the iteration scheme. Finally, the OC method, serving as a local search operator, is integrated with the GA. The numerical results verify that the hybrid method is powerful in searching for better solutions while reducing computational effort.
A hybrid OC–GA approach for fast and global truss optimization with frequency constraints
S1568494613002925
In this paper, we investigate a component of the exam timetabling problem which consists of assigning a set of independent exams to a certain number of classrooms. The exam timetabling problem can be defined as the scheduling of exams to time slots in a first stage and, in a second stage, the assignment of the set of exams in each time slot to the available classrooms. Even though the formulation of this problem looks simple, as it contains only two sets of constraints involving only binary variables, we show that it belongs to the class of NP-hard problems by reduction from the Numerical Matching with Target Sums problem (NMTS). In order to reduce the size of this problem and make it efficiently solvable either by exact methods or heuristic approaches, a theorem is rigorously demonstrated and a reduction procedure inspired by the dominance criterion is developed. The two methods contribute to the search for a feasible solution by reducing the size of the original problem without affecting feasibility. Since the reduction procedures do not usually assign all exams to classrooms, we propose a Variable Neighbourhood Search (VNS) algorithm in order to obtain a good-quality complete solution. The objective of the VNS algorithm is to reduce the total classroom capacity assigned to exams. Numerical results for the main-session exams of the first semester of the 2009–2010 academic year at the Faculty of Economics and Management Sciences of Sfax show the good performance of our approach compared with a lower bound defined as the sum of the total capacity of all assigned classrooms and the total size of the remaining exams after reduction.
The classroom assignment problem: Complexity, size reduction and heuristics
S1568494613002937
This study proposes a novel collaborative filtering (CF) framework which integrates both subjective and objective information to generate recommendations for an active consumer. The proposed framework can alleviate the sparsity and cold-start problems which affect traditional CF algorithms. The fuzzy linguistic model, which is a more natural way for consumers to express their preferences, is adopted within the proposed framework. Based on these concepts, two algorithms, a simple aggregated (SA) algorithm and an aggregated subjective and objective users' viewpoint (ASOV) algorithm, are developed. A series of experiments is performed, the results of which indicate that the proposed methodologies produce high-quality recommendations. Finally, the results confirm that the proposed algorithms perform better than the traditional method.
A fuzzy recommender system based on the integration of subjective preferences and objective information
S1568494613002949
In this paper, a multi-objective dynamic vehicle routing problem with fuzzy time windows (DVRPFTW) is presented. In this problem, unlike most work where all the data are known in advance, a set of real-time requests arrives randomly over time, and the dispatcher has no deterministic or probabilistic information on their location and size until they arrive. Moreover, this model involves routing vehicles according to customer-specific time windows, which are highly relevant to the customers' satisfaction level. This customer preference information can be represented as a convex fuzzy number with respect to the satisfaction for a service time. This paper uses a direct interpretation of the DVRPFTW as a multi-objective problem where the total required fleet size, overall total traveling distance and waiting time imposed on vehicles are minimized and the overall customers' preference for service is maximized. A solving strategy based on the genetic algorithm (GA) and three basic modules is proposed, in which the state of the system, including information on vehicles and customers, is checked each time in a management module. The strategy module organizes the information reported by the management module and constructs an efficient structure for solving in the subsequent module. The performance of the proposed approach is evaluated in several steps on various test problems generalized from a set of static instances in the literature. In the first step, the performance of the proposed approach is checked under static conditions; the other assumptions and developments are then added gradually and the changes are examined. The computational experiments on these data sets illustrate the efficiency and effectiveness of the proposed approach.
A multi-objective dynamic vehicle routing problem with fuzzy time windows: Model, solution and application
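A customer's fuzzy time window is typically modeled as a convex fuzzy number whose membership value gives the satisfaction level for a given service time. A minimal sketch with an assumed trapezoidal shape and a single tolerance parameter (the paper's exact membership form may differ):

```python
def fuzzy_window_satisfaction(t, e, l, tol):
    """Trapezoidal membership for a fuzzy time window: full satisfaction
    inside the preferred interval [e, l], decaying linearly to zero within
    'tol' time units outside it (a convex fuzzy number)."""
    if e <= t <= l:
        return 1.0
    if t < e:  # early arrival
        return max(0.0, 1.0 - (e - t) / tol)
    return max(0.0, 1.0 - (t - l) / tol)  # late arrival

# Preferred window [9, 12], tolerance of 4 time units on either side
print(fuzzy_window_satisfaction(10, 9, 12, 4))  # → 1.0  (inside the window)
print(fuzzy_window_satisfaction(14, 9, 12, 4))  # → 0.5  (2 units late)
```

Summing (or averaging) these membership values over all served customers yields the satisfaction objective that the routing algorithm maximizes alongside the cost-based objectives.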
S1568494613002950
This paper presents a general methodology for online TS fuzzy modeling based on the extended Kalman filter. The model can be obtained recursively from input–output data alone. The methodology can work online with the system, even in the presence of noise; it is computationally very efficient and completely general, in the sense that theoretically there are no restrictions on the number of inputs or outputs, or on the type or distribution of the membership functions used (which can even be mixed in the antecedents of the rules). Some examples and comparisons with other online fuzzy identification models from signals are provided to illustrate the online identification capability of the proposed methodology.
A general methodology for online TS fuzzy modeling by the extended Kalman filter
S1568494613002962
In this paper, we investigate the use of the L1/2 regularization method for variable selection based on Cox's proportional hazards model. The L1/2 regularization can be taken as a representative of Lq (0 < q < 1) regularizations and has been shown to possess many attractive properties. To solve the L1/2 penalized Cox model, we propose a coordinate descent algorithm with a new univariate half thresholding operator which is applicable to high-dimensional biological data. Simulation results based on standard artificial data show that the L1/2 regularization method can be more accurate for variable selection than the Lasso and SCAD methods. The results from real DNA microarray datasets indicate that the L1/2 regularization method performs competitively.
The L1/2 regularization method for variable selection in the Cox model
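The univariate half thresholding operator has a known closed form from Xu et al.'s L1/2 thresholding theory; whether the paper uses exactly this variant (rather than the new operator it proposes) is an assumption, so treat the sketch below as a reference implementation of the classical form, applied as one coordinate-descent update:

```python
import math

def half_threshold(z, lam):
    """Classical univariate half-thresholding operator for the L1/2 penalty
    (closed form due to Xu et al.): coefficients below the threshold are
    zeroed; larger ones are shrunk via a trigonometric formula."""
    thresh = (54 ** (1 / 3) / 4) * lam ** (2 / 3)
    if abs(z) <= thresh:
        return 0.0
    phi = math.acos((lam / 8) * (abs(z) / 3) ** -1.5)
    return (2 / 3) * z * (1 + math.cos(2 * math.pi / 3 - 2 * phi / 3))

print(half_threshold(0.5, 1.0))  # → 0.0 (small coefficient is zeroed out)
print(round(half_threshold(2.0, 1.0), 3))
```

Like soft thresholding for the Lasso, this operator is applied to each coordinate in turn while the others are held fixed; the harder zeroing is what gives L1/2 its sparser solutions.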
S1568494613002974
Seed URL selection for a focused Web crawler aims to guide the crawler toward related and valuable information that meets a user's personal information requirements and to provide more effective information retrieval. In this paper, we propose a seed URL selection approach based on a user-interest ontology. In order to enrich semantic queries, we first apply Formal Concept Analysis to construct a user-interest concept lattice from the user's log profile. By merging concept lattices, we construct the user-interest ontology, which can more appropriately describe the implicit concepts and the relationships between them for semantic representation and query matching. On the other hand, we make full use of the user-interest ontology to extract the user's topic area of interest and expand user queries so as to retrieve the most related pages as seed URLs, which form the entry point of the focused crawler. In particular, we focus on how to refine the user topic area using a bipartite directed graph. The experiments show that the user-interest ontology can be built effectively by merging concept lattices and that our proposed approach selects a high-quality seed URL collection and improves the average precision of the focused Web crawler.
An approach for selecting seed URLs of focused crawler based on user-interest ontology
S1568494613002998
The assessment of fetal wellbeing depends heavily on variations in fetal heart rate (FHR) patterns. These variations are very complex in nature, and thus their reliable interpretation is difficult and often leads to erroneous diagnosis. We propose a new method for evaluating fetal health status based on interval type-2 fuzzy logic applied to fetal phonocardiography (fPCG). Type-2 fuzzy logic is a powerful tool for handling the uncertainties due to extraneous variations in FHR patterns through its increased fuzziness of relations. Four FHR parameters are extracted from each fPCG signal for diagnostic decision making. The membership functions of these four inputs and one output are chosen as ranges of values so as to represent the level of uncertainty. The fuzzy rules are constructed based on standard clinical guidelines on FHR parameters. Experimental clinical tests have shown very good performance of the developed system in comparison with the FHR trace simultaneously recorded by a standard fetal monitor. Statistical evaluation of the developed system shows 92% accuracy. With the proposed method, we hope that long-term and continuous antenatal care will become easy, cost-effective, reliable and efficient.
Interval type-2 fuzzy logic based antenatal care system using phonocardiography
S1568494613003001
This paper explores the possible intervention of computers in the generative (concept) stage of settlement planning. The objective was to capture the complexity and character of naturally grown fishing settlements through simple rules and to incorporate them in the process of design. A design tool was developed for this purpose, using a generative evolutionary design technique based on multidisciplinary methods. The facets of designing addressed in this research are: • allocation of each design element's space and geometry, • defining the rules, constraints and relationships governing the elements of design, • the purposeful search for better alternative solutions, • quantitative evaluation of the solutions based on spatial, comfort and complexity criteria to ensure the needed complexity and usability in the solutions. Generative design methods such as geometric optimization, shape grammars and genetic algorithms have been combined to achieve these purposes. The allocation of space is achieved by geometric optimization techniques, which allocate spaces by proliferation of a simple shape unit. This research analyses various naturally grown fishing settlements and identifies the features that are essential to recreate such an environment. Features such as the essential elements, their relationships, hierarchy, and order in the settlement pattern, which resulted from the occupational and cultural demands of the fisher folk, are analysed. The random but ordered growth of the settlement is captured as rules and relations. These rules propel and guide the whole process of design generation, and together with certain constraints and restrictions they control the random arrangement of the shape units. This research limits itself to an exhaustive search within the solution space defined a priori by the rules and relationships.
This search within a bounded space can be compared to the purposeful, constrained decision making process involved in designing. The tool uses the evolutionary concepts of genetic algorithms to derive solutions within the predefined design solution search space. Simple evolutionary operators such as reproduction, crossover and mutation aid this search process; they swap or interchange the genetic properties (the constituent data making up a solution) of two generated solutions to produce alternative solutions. Thus the genetic algorithm finds a series of new solutions. With such a tool in hand, various possible design solutions can be analysed and compared, and a thorough search of possible solutions ensures the deeper probe essential for a good design. The spatial and comfort qualities of the solutions are compared and graded (fitness value) against standard stipulations. These parameters consider the solution as a whole rather than as parts, and most of them can be improved only at the expense of another. The tool is able to produce multiple equally good solutions to the same problem, possibly with one candidate solution optimizing one parameter and another candidate optimizing a different one. The final choice of the suitable solution is made based on the user's preferences and objectives. The tool was tested on an existing fishing settlement to check its credibility and to see whether better alternatives evolved. The existing settlement was analysed using the evaluation parameters of the tool and compared with the generated solutions. The results show that simple rules, applied recursively within constraints, provide solutions that are unpredictable yet resonate with the qualities of the knowledge from which the rules were distilled.
The complex whole generated often exhibits emergent properties and thus opens up new avenues of thinking.
Generative methods and the design process: A design tool for conceptual settlement planning
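The reproduction, crossover and mutation operators named in the abstract above can be illustrated with a minimal genetic algorithm sketch. The binary encoding and the one-max toy fitness here are illustrative assumptions; the settlement-specific encoding and the spatial/comfort evaluation are not reproduced.

```python
import random

random.seed(0)

def evolve(fitness, length=12, pop_size=20, gens=40):
    # Random initial population of binary strings.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # reproduction (truncation selection)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(length)          # bit-flip mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children                  # elitist survival
    return max(pop, key=fitness)

# Toy fitness: count of ones (the real tool grades spatial/comfort quality).
best = evolve(sum)
```

Keeping the top half of the population unmutated makes the best fitness monotonically non-decreasing, mirroring the "equally good alternatives" behaviour described in the abstract.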
S1568494613003013
In this paper, a methodology for identifying the switching sequences and switching instants of switched linear systems (SLS) is derived. The identification of an SLS is a challenging and non-trivial problem; in fact, it involves interaction between binary, discrete and real-valued variables. An SLS switches many times over a finite time horizon, so estimating the sequence of activated modes and the switch locations is a crucial problem for both control and Fault Detection and Isolation (FDI). The proposed methodology is based on the Discrete Particle Swarm Optimization (DPSO) technique. The identification problem is formulated as an optimization problem involving noisy data (system inputs and outputs). Both the binary variables corresponding to the sub-models active before and after each switch and the corresponding switching instants are iteratively adjusted by the DPSO algorithm; thus, the DPSO algorithm has to classify which sub-system generated which data. The efficiency of the proposed approach is illustrated through a numerical example and a physical one: the numerical example is a Switched Auto-Regressive eXogenous (SARX) system and the physical one is a buck–boost DC/DC converter.
Active modes and switching instants identification for linear switched systems based on Discrete Particle Swarm Optimization
S1568494613003025
In this paper, a new and efficient model for variable representation, named F-coding, in optimal power dispatch problems for smart electrical distribution grids is proposed. In particular, an application devoted to the optimal energy dispatch of Distributed Energy Resources, including ideal storage devices, is considered here. Electrical energy storage systems, like any other component that must meet an integral capacity constraint in optimal dispatch problems, have to show the same energy level at the beginning and at the end of the considered operational timeframe. The use of zero-integral functions, such as sinusoidal functions, for the synthesis of the charge and discharge course of batteries thus follows naturally. The issue is common to many other engineering problems, such as any dispatch problem where resources must be allocated within a given amount over a considered timeframe. Many authors have proposed different methods to deal with such integral constraints in the literature on smart grid management, but none of them appears very efficient. The paper is organized as follows. First, the state of the art on the optimal management problem is outlined, with special attention to the treatment of integral constraints; then the proposed new model for variable representation is described. Finally, the multiobjective optimization method and its application to the optimal dispatch problem considering different variable representations are considered.
Modelling energy storage systems using Fourier analysis: An application for smart grids optimal management
S1568494613003037
In the present article, semi-supervised learning is integrated with an unsupervised context-sensitive change detection technique based on a modified self-organizing feature map (MSOFM) network. In the proposed methodology, training of the MSOFM network is initially performed using only a few labeled patterns. Thereafter, the membership values of each unlabeled pattern in both classes are determined using concepts from fuzzy set theory. The soft class label for each unlabeled pattern is then estimated using the membership values of its K nearest neighbors. Training of the network using the unlabeled patterns along with the few labeled patterns is carried out iteratively. A heuristic method is suggested to select some of the unlabeled patterns for training. To check the effectiveness of the proposed methodology, experiments are conducted on three multi-temporal and multi-spectral data sets. Performance of the proposed work is compared with that of two unsupervised techniques, a supervised technique and two semi-supervised techniques. Results are also statistically validated using the paired t-test. The proposed method produced promising results.
Semi-supervised change detection using modified self-organizing feature map neural network
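The soft-labelling step described in the abstract above — estimating a soft class label from the membership values of the K nearest neighbours — can be sketched as follows. The Euclidean metric and plain averaging are our assumptions; the MSOFM training itself is not shown.

```python
import numpy as np

def knn_soft_labels(X_lab, memberships, X_unlab, k=3):
    """Soft two-class label of each unlabeled pattern = mean membership
    vector of its k nearest labeled neighbours (illustrative sketch)."""
    soft = []
    for x in X_unlab:
        d = np.linalg.norm(X_lab - x, axis=1)   # Euclidean distances
        nn = np.argsort(d)[:k]                  # indices of k nearest
        soft.append(memberships[nn].mean(axis=0))
    return np.array(soft)

# Two tight labeled clusters; an unlabeled point near the first cluster
# inherits that cluster's membership vector.
X_lab = np.array([[0.0], [0.1], [5.0], [5.1]])
mem = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
soft = knn_soft_labels(X_lab, mem, np.array([[0.05]]), k=2)
```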
S1568494613003049
The aim of this paper is to combine several techniques into one systematic method for guiding investment in mutual funds. Many studies focus on the prediction of a single asset time series, or on portfolio management to diversify investment risk, but they do not generate explicit trading rules. Only a few studies combine these two concepts, but they adjust trading rules manually. Our method combines techniques for generating observable and profitable trading rules, managing the portfolio and allocating capital. First, the buying and selling timings are decided by trading rules generated by gene expression programming; the trading rules are suitable for the constantly changing market. Second, the funds with higher Sortino ratios are selected into the portfolio. Third, there are two models for capital allocation: one allocates the capital equally (EQ) and the other allocates the capital with the mean variance (MV) model. We also perform a superior predictive ability test to ensure that our method can earn positive returns without data snooping. To evaluate the return performance of our method, we simulate investment in mutual funds from January 1999 to September 2012. The training duration is from 1999/1/1 to 2003/12/31, while the testing duration is from 2004/1/1 to 2012/9/11. The best annualized returns of our method with the EQ and MV capital allocation models are 12.08% and 12.85%, respectively; the latter also lowers the investment risk. To compare with the method proposed by Tsai et al., we also perform testing from January 2004 to December 2008. The experimental results show that our method earns annualized returns of 9.07% and 11.27%, both better than the 6.89% annualized return of Tsai et al.
The trading on the mutual funds by gene expression programming with Sortino ratio
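The Sortino ratio used above for fund selection penalises only downside volatility, unlike the Sharpe ratio. A minimal sketch of the per-period version follows; the abstract does not specify the target return or annualisation, so a zero target is assumed.

```python
import math

def sortino_ratio(returns, target=0.0):
    """Mean excess return over the target, divided by downside deviation
    (RMS of shortfalls below the target only). Illustrative sketch."""
    mean_excess = sum(r - target for r in returns) / len(returns)
    downside = [min(0.0, r - target) ** 2 for r in returns]
    dd = math.sqrt(sum(downside) / len(returns))
    return mean_excess / dd if dd > 0 else float("inf")

# Same mean return, but the fund with smaller downside scores higher.
steady = [0.02, 0.01, -0.01, 0.02]
choppy = [0.06, -0.05, 0.05, -0.02]
```

Because upside swings do not inflate the denominator, ranking by Sortino ratio prefers funds whose volatility is concentrated on the gain side.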
S1568494613003050
Welding is an efficient, reliable metal joining process in which the coalescence of metals is achieved by fusion. Localized heating during welding, followed by rapid cooling, induces residual stresses in the weld and in the base metal. Determination of the magnitude and distribution of welding residual stresses is therefore essential. Data sets from a finite element method (FEM) model are used to train the developed neural network model trained with a genetic algorithm and particle swarm optimization (NN–GA–PSO model). The performance of the developed NN–GA–PSO model is compared with that of a neural network model trained with a genetic algorithm (NN–GA) and a neural network model trained with particle swarm optimization (NN–PSO). Among the developed models, the NN–GA–PSO model is superior in terms of computational speed and accuracy. Confirmatory experiments are performed using the X-ray diffraction method to confirm the accuracy of the developed models. These models can be used to set the initial weld process parameters in a shop floor welding environment.
Neuro evolutionary model for weld residual stress prediction
S1568494613003062
The endpoint parameters of molten steel, such as the steel temperature and the carbon content, directly affect the quality of the produced steel. Moreover, these endpoint parameters cannot be measured online and continuously. To solve the above-mentioned problems, an anti-jamming endpoint prediction model is proposed to predict the endpoint parameters of molten steel. More specifically, the model is built on an extreme learning machine (ELM) whose parameters are adaptively adjusted by an evolutionary membrane algorithm with global optimization ability. In other words, the evolutionary membrane algorithm finds suitable parameters for the ELM model, which reduces the incidence of overfitting of the ELM caused by noise in the actual data. Finally, the proposed model is applied to predict the endpoint parameters of molten steel in steel-making. In the simulation experiments, two test problems, the 'SinC' function with Gaussian noise and actual production data from basic oxygen furnace (BOF) steel-making, are employed to evaluate the performance of the proposed model. The results indicate that the proposed model has good prediction accuracy and robustness on noisy data. Therefore, the proposed model has good application prospects in the industrial field.
Endpoint prediction model for basic oxygen furnace steel-making based on membrane algorithm evolving extreme learning machine
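The base learner in the abstract above, the extreme learning machine, trains in one shot: the hidden layer is random and only the output weights are solved, in closed form. The sketch below shows plain ELM on the (noiseless) 'SinC' benchmark; the membrane-algorithm tuning of the ELM parameters described in the abstract is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=60):
    """Basic ELM: random untrained hidden layer, output weights by
    least squares (illustrative sketch)."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy run on the 'SinC' benchmark named in the abstract.
X = np.linspace(-10, 10, 400).reshape(-1, 1)
y = np.sinc(X[:, 0] / np.pi)   # numpy's sinc is sin(pi*x)/(pi*x), so this is sin(x)/x
pred = elm_predict(elm_fit(X, y), X)
```

Because only `beta` is fitted, the free choices left to an outer optimizer (as in the paper) are the random-layer distribution and the hidden-layer size.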
S1568494613003074
In this article, grey-based theory is used to grasp the ambiguity that exists in the utilized information and the fuzziness that appears in human judgments and preferences. Grey theory can produce satisfactory results and hence stimulates creativity and invention in developing new methods and alternative approaches. This article is a useful source of information on fuzzy grey decision making with more than one decision maker in a fuzzy environment. A case study on system selection comprising 12 attributes and 7 alternatives is constructed and solved by the proposed method, and the results are compared with those obtained from the QSPM, TOPSIS and SAW approaches for analysis purposes.
Strategic system selection with linguistic preferences and grey information using MCDM
S1568494613003086
In this study, we propose a set of new algorithms to enhance the effectiveness of classification for the 5-year survivability of breast cancer patients from a massive, imbalanced data set. The proposed classifier algorithms combine the synthetic minority oversampling technique (SMOTE) and particle swarm optimization (PSO) while integrating well known classifiers such as logistic regression, the C5 decision tree (C5) model and 1-nearest neighbor search. To justify the effectiveness of this new set of classifiers, the g-mean and accuracy indices are used as performance indexes; moreover, the proposed classifiers are compared with the previous literature. Experimental results show that the hybrid SMOTE+PSO+C5 algorithm is the best among all algorithm combinations for 5-year survivability of breast cancer patient classification. We conclude that implementing SMOTE with appropriate searching algorithms such as PSO and classifiers such as C5 can significantly improve the effectiveness of classification for massive imbalanced data sets.
A hybrid classifier combining SMOTE with PSO to estimate 5-year survivability of breast cancer patients
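The g-mean index used above to judge classifiers on imbalanced data is the geometric mean of sensitivity and specificity; it is only high when both classes are predicted well, which plain accuracy does not guarantee. A minimal sketch of the standard definition (not code from the paper):

```python
import math

def g_mean(tp, fn, tn, fp):
    """Geometric mean of sensitivity (minority recall) and specificity."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)

# A majority-biased classifier can post decent accuracy but a poor g-mean.
biased = g_mean(tp=5, fn=45, tn=95, fp=5)      # sens 0.10, spec 0.95
balanced = g_mean(tp=40, fn=10, tn=80, fp=20)  # sens 0.80, spec 0.80
```

Here the biased confusion matrix has higher raw accuracy on this 50/100 split than it deserves, while its g-mean collapses, which is exactly why the index suits imbalanced survivability data.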
S1568494613003098
In this paper, we solve the optimal power flow problem using the new hybrid fuzzy particle swarm optimisation and Nelder–Mead (NM) algorithm (HFPSO–NM). The goal of combining the NM simplex method and the particle swarm optimisation (PSO) method is to integrate their advantages and avoid their disadvantages. The NM simplex method is a very efficient local search procedure, but its convergence is extremely sensitive to the selected starting point, while PSO belongs to the class of global search procedures but requires significant computational effort. On the other hand, in the PSO algorithm two variables (Φ1, Φ2) are traditionally constant; given the importance of these two factors, we decided to treat them as fuzzy parameters. The proposed method is first examined on some benchmark mathematical functions. It is then tested on the IEEE 30-bus standard test system, considering different objective functions under normal and contingency conditions, to solve the optimal power flow. The simulation results indicate that the HFPSO–NM algorithm is effective in solving both the mathematical functions and the OPF problem.
Optimal power flow under both normal and contingent operation conditions using the hybrid fuzzy particle swarm optimisation and Nelder–Mead algorithm (HFPSO–NM)
S1568494613003104
This paper is part two of a two part series. The originality of part one was the proposal of a novel approach for wind turbine supervisory control and data acquisition (SCADA) data mining for condition monitoring purposes. The novelty concerned the usage of adaptive neuro-fuzzy inference system (ANFIS) models in this context and the application of the proposed procedure to a wide range of different SCADA signals. The applicability of the set up ANFIS models for anomaly detection was proven by the achieved performance of the models. In combination with the proposed fuzzy inference system (FIS), the prediction errors provide information about the condition of the monitored components. Part two presents application examples illustrating the efficiency of the proposed method. The work is based on continuously measured wind turbine SCADA data from 18 modern pitch-regulated wind turbines of the 2 MW class covering a period of 35 months. Several real life faults and issues in this data are analyzed and evaluated by the condition monitoring system (CMS) and the results presented. It is shown that SCADA data contain crucial information worth extracting for wind turbine operators. Using full signal reconstruction (FSRC) adaptive neuro-fuzzy inference system (ANFIS) normal behavior models (NBM) in combination with fuzzy logic (FL), a setup is developed for data mining of this information. A high degree of automation can be achieved. It is shown that FL rules established for a fault at one turbine can be applied to diagnose similar faults at other turbines automatically via the proposed CMS. A further focus in this paper lies in the process of rule optimization and adoption, allowing the expert to implement the gained knowledge in fault analysis. The fault types diagnosed here are: (1) a hydraulic oil leakage; (2) cooling system filter obstructions; (3) converter fan malfunctions; (4) anemometer offsets and (5) turbine controller malfunctions.
Moreover, the graphical user interface (GUI) developed to access, analyze and visualize the data and results is presented.
Wind turbine condition monitoring based on SCADA data using normal behavior models. Part 2: Application examples
S1568494613003116
This study examines the use of social network information for customer churn prediction. An alternative modeling approach using relational learning algorithms is developed to incorporate social network effects within a customer churn prediction setting, in order to handle large scale networks, a time dependent class label, and a skewed class distribution. An innovative approach to incorporate non-Markovian network effects within relational classifiers and a novel parallel modeling setup to combine a relational and non-relational classification model are introduced. The results of two real life case studies on large scale telco data sets are presented, containing both networked (call detail records) and non-networked (customer related) information about millions of subscribers. A significant impact of social network effects, including non-Markovian effects, on the performance of a customer churn prediction model is found, and the parallel model setup is shown to boost the profits generated by a retention campaign.
Social network analysis for customer churn prediction
S156849461300313X
Stress is a major health problem in our world today. For this reason, it is important to gain an objective understanding of how average individuals respond to real-life events they observe in environments they encounter. Our aim is to estimate an objective stress signal for an observer of a real-world environment stimulated by meditation. A computational stress signal predictor system, developed using a support vector machine, a genetic algorithm and an artificial neural network, is proposed to predict the stress signal from a real-world data set. The data set comprised physiological and physical sensor response signals for stress over the course of the meditation activity. A support vector machine based, individual-independent classification model was developed to determine the overall shape of the stress signal, and the results suggested that it matched the curves formed by a linear function, a symmetric saturating linear function and a hyperbolic tangent function. Using this information about the shape of the stress signal, an artificial neural network based stress signal predictor was developed. Compared with the curves formed by a linear function, a symmetric saturating linear function and a hyperbolic tangent function, the stress signal produced by the predictor for the observers was most similar to the curve formed by a hyperbolic tangent function (p < 0.01 according to statistical analysis). The research presented in this paper is a new dimension in stress research – it investigates developing an objective stress measure that is dependent on time.
Modeling a stress signal
S1568494613003141
Heart disease is the leading cause of death among both men and women in most countries in the world. Thus, people must be mindful of heart disease risk factors. Although genetics play a role, certain lifestyle factors are crucial contributors to heart disease. Traditional approaches use thirteen risk factors or explanatory variables to classify heart disease. Diverging from existing approaches, the present study proposes a new hybrid intelligent modeling scheme to obtain different sets of explanatory variables, and the proposed hybrid models effectively classify heart disease. The proposed hybrid models consist of logistic regression (LR), multivariate adaptive regression splines (MARS), artificial neural network (ANN), and rough set (RS) techniques. The initial stage of the proposed process includes the use of LR, MARS, and RS techniques to reduce the set of explanatory variables. The remaining variables are subsequently used as inputs for the ANN method employed in the second stage. A real heart disease data set was used to demonstrate the development of the proposed hybrid models. The modeling results revealed that the proposed hybrid schemes effectively classify heart disease and outperform the typical, single-stage ANN method.
Hybrid intelligent modeling schemes for heart disease classification
S1568494613003153
Due to the pressure from work load and daily life, the geriatric depression and arrhythmia population is increasing. However, some people may not notice, or may have no idea about, the symptoms of depression and arrhythmia. More research input is needed to diagnose the severity of depression and arrhythmia at an early stage. To help users examine their physical fitness and mental health condition before outpatient service, we apply a data mining strategy to discover association rules from responded questionnaires, including geriatric depression, BAI, ASRM, and PSQI. To obtain informative analytical results, multitudes of simulations are performed on 25,000 records stored in our database. We also propose an effective real-time ECG heartbeat monitoring and detection system for homecare service, which uses ECG sensors and wireless sensor network technology to detect the subject's heartbeats and their variations. In addition, the MIT-BIH database is used to analyze arrhythmia. A fuzzy model is proposed to discriminate between normal heartbeats and arrhythmia. Experimental results show that an average accuracy of 95.42% is achieved by the proposed system. This evidence verifies that the hybrid intelligent model is effective in medical related applications.
Hybrid intelligent methods for arrhythmia detection and geriatric depression diagnosis
S1568494613003165
Variability and unpredictability are typical characteristics of complex systems such as emergency department (ED) where the patient demand is high and patient conditions are diverse. To tackle the uncertain nature of ED and improve the resource management, it is beneficial to group patients with common features. This paper aims to use self-organizing map (SOM), k-means, and hierarchical methods to group patients based on their medical procedures and make comparisons among these methods. It can be reasonably assumed that the medical procedures received by the patients are directly associated with ED resource consumption. Different grouping techniques are compared using a validity index and the resulting groups are distinctive in the length of treatment (LOT) of patients and their presenting complaints. This paper also discusses how the resulting patient groups can be used to enhance the ED resource planning, as well as to redesign the ED charging policy.
A medical procedure-based patient grouping method for an emergency department
S1568494613003177
In this paper, two different hybrid intelligent systems are applied to develop practical soft identifiers for modeling the tool-tissue force as well as the resulting maximum local stress in laparoscopic surgery. To conduct the system identification process, a 2D model of an in vivo porcine liver was built for different probing tasks. Based on the simulation, three different geometric features, i.e. maximum deformation angle, maximum deformation depth and width of displacement constraint of the reconstructed shape of the deformed body, are extracted. Thereafter, two different fuzzy inference paradigms are proposed for the identification task. The first identifier is an adaptive co-evolutionary fuzzy inference system (ACFIS) which takes advantage of bio-inspired supervisors to be reconciled to the characteristics of the problem at hand. To train the fuzzy machine, the authors propose a co-evolutionary technique which uses a modified optimizer called scale factor local search differential evolution (SFLSDE) as the core metaheuristic. The concept of co-evolving is implemented through a consequential optimization procedure in which the degree of optimality of the ACFIS architecture is evaluated by sharing the characteristics of both antecedent and consequent parts between two different SFLSDEs. The second identifier is an adaptive neuro-fuzzy inference system (ANFIS) which is based on the use of some well-known neuro computing concepts, i.e. back-propagation learning and synaptic nodal computing, for tuning the construction of the fuzzy identifier. The two proposed techniques are used to identify the tool-tissue force and maximum local stress. Based on the experiments, the authors have observed that each identifier has its own advantages and disadvantages. However, both ACFIS and ANFIS succeed in identifying the model outputs precisely.
Moreover, to ascertain the veracity of the derived systems, the authors adopt a Pareto-based hyper-level heuristic approach called synchronous self-learning Pareto strategy (SSLPS). This technique provides the authors with good information regarding the optimum controlling parameters of both ACFIS and ANFIS identifiers.
Identifying the tool-tissue force in robotic laparoscopic surgery using neuro-evolutionary fuzzy systems and a synchronous self-learning hyper level supervisor
S1568494613003189
This paper proposes an effective fault detection and identification method for systems which operate in multiple processes. One such system investigated in this paper is the COSMED K4b2, a standard portable electrical device designed to test pulmonary function in various applications, such as athlete training, sports medicine and health monitoring. However, its sensor outputs and received data may be disturbed by electromagnetic interference (EMI), body artifacts and device malfunctions/faults, which might cause misinterpretations of the activities or statuses of the people being monitored. Although some research has been reported on detecting faults in specific steady states, conventional approaches may yield false alarms in multi-process applications. In this paper, a novel and comprehensive method, which merges statistical analysis with an intelligent computational model, is proposed to detect and identify faults of the K4b2 during exercise monitoring. First, principal component analysis (PCA) is utilized to acquire the main features of the measured data, and K-means clustering is then applied to cluster the various processes for abnormality detection. When faults are detected, a back propagation (BP) neural network is constructed to identify and isolate them. The effectiveness and feasibility of the proposed method are finally verified with experimental data.
Fault detection and identification spanning multiple processes by integrating PCA with neural network
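The PCA feature-extraction and K-means abnormality-screening stages described above can be sketched as follows. The cluster count, the deterministic farthest-point initialisation, and the "near-empty cluster marks abnormal samples" rule are illustrative assumptions; the BP identification network is not shown.

```python
import numpy as np

def pca_scores(X, n_comp=2):
    """Project the data onto its leading principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T

def kmeans(Z, k, iters=50):
    """Lloyd's k-means with deterministic farthest-point initialisation."""
    C = [Z[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(Z - c, axis=1) for c in C], axis=0)
        C.append(Z[int(np.argmax(d))])        # next centre: farthest point
    C = np.array(C)
    for _ in range(iters):
        labels = np.argmin(((Z[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = Z[labels == j].mean(axis=0)
    labels = np.argmin(((Z[:, None] - C[None]) ** 2).sum(-1), axis=1)
    return labels

# Toy run: two normal operating regimes plus one far-off faulty sample.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 4)),
               rng.normal(3, 0.1, (20, 4)),
               np.full((1, 4), 30.0)])
labels = kmeans(pca_scores(X), k=3)
sizes = np.bincount(labels, minlength=3)
# Samples landing in a near-empty cluster fit none of the normal regimes.
suspects = np.where(labels == np.argmin(sizes))[0]
```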
S1568494613003190
In the past, fuzzy multi-criteria decision-making (FMCDM) models sought to find an optimal alternative from numerous feasible alternatives under a fuzzy environment. However, research has seldom focused on the determination of criteria weights, although these are also an important component of FMCDM. In fact, criteria weights can be computed by extending quality function deployment (QFD) to the fuzzy environment, i.e. fuzzy quality function deployment (FQFD). Through FQFD, customer demanded qualities expressing the opinions of customers and service development capabilities presenting the opinions of experts can be integrated into criteria weights for FMCDM. However, deriving criteria weights in FQFD may be complex, and multiplying two fuzzy numbers is difficult in the real world. To resolve this difficulty, we combine FQFD with a relative preference relation for FMCDM problems. With the relative preference relation on fuzzy numbers, it is not necessary to multiply two fuzzy numbers to derive the criteria weights in FQFD. Instead, adjusted criteria weights obtained through the relative preference relation substitute for the original criteria weights. The adjusted criteria weights are thus clearly determined and then utilized in FMCDM models.
A criteria weighting approach by combining fuzzy quality function deployment with relative preference relation
S1568494613003360
Glaucoma is a major cause of blindness and is prevalent among Asian populations. Early detection is therefore of paramount importance, so that patients can receive early treatment. One prominent indicator of glaucomatous damage is the retinal nerve fiber layer (RNFL) profile. In this paper, the performance of artificial neural network models in identifying the RNFL profiles of glaucoma suspect and glaucoma subjects is studied. RNFL thickness was measured using optical coherence tomography (Stratus OCT). Inputs to the neural network consisted of regional RNFL thickness measurements over 12 clock hours. Sensitivity and specificity for glaucoma detection are compared using the area under the receiver operating characteristic curve (AROC). The results show that an artificial neural network coupled with OCT technology enhances the diagnostic accuracy of optical coherence tomography in differentiating glaucoma suspect and glaucoma subjects from normal individuals.
Neural Network Analysis for the detection of glaucomatous damage
S1568494613003372
Adopting an effective model to access desired images is essential nowadays, given the huge amount of digital images available. The present paper introduces an accurate and rapid model for the content based image retrieval process that depends on a new matching strategy. The proposed model is composed of four major phases, namely feature extraction, dimensionality reduction, an ANN classifier and the matching strategy. The feature extraction phase extracts color and texture features, respectively the color co-occurrence matrix (CCM) and the difference between pixels of scan pattern (DBPSP). Integrating multiple features can overcome the problems of a single feature, but the system then works slowly, mainly because of the high dimensionality of the feature space. Therefore, the dimensionality reduction technique selects the effective features that jointly have the largest dependency on the target class and minimal redundancy among themselves. Consequently, these features reduce the calculation work and the computation time in the retrieval process. The artificial neural network (ANN) in our proposed model serves as a classifier: the selected features of the query image are the input, and the output is the one of the multiple classes with the largest similarity to the query image. In addition, the proposed model presents an effective feature matching strategy that depends on the idea of the minimum area between two vectors to compute the similarity value between a query image and the images in the determined class. Finally, the results presented in this paper demonstrate that the proposed model provides accurate retrieval results and achieves improved performance with significantly less computation time compared with other models.
A new matching strategy for content based image retrieval system
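One natural reading of the "minimum area between two vectors" matching idea above is the area enclosed between the two feature vectors when each is viewed as a piecewise-linear curve over the feature indices. The sketch below implements that reading via the trapezoidal rule; it is our interpretation for illustration, not the authors' exact formula.

```python
import numpy as np

def area_similarity(a, b):
    """Similarity = 1 / (1 + area between the two feature curves).
    Identical vectors enclose zero area and score 1.0."""
    diff = np.abs(np.asarray(a, float) - np.asarray(b, float))
    # Trapezoidal rule with unit spacing between feature indices.
    area = 0.5 * (diff[:-1] + diff[1:]).sum()
    return 1.0 / (1.0 + area)
```

Ranking candidate images of the predicted class by this score returns the smallest-area (most similar) matches first.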
S1568494613003384
The predicted growth in air transportation and the ambitious goal of the European Commission to have on-time performance of flights within 1 min make efficient and predictable ground operations at airports indispensable. Accurately predicting the taxi times of arrivals and departures serves as a key task for runway sequencing, gate assignment and ground movement itself. This research tests different statistical regression approaches as well as various regression methods from the realm of soft computing to more accurately predict taxi times. Historic data from two major European airports are utilised for cross-validation. Detailed comparisons show that a TSK fuzzy rule-based system outperformed the other approaches in terms of prediction accuracy. Insights from this approach are then presented, focusing on the analysis of taxi-in times, which are rarely discussed in the literature. The aim of this research is to unleash the power of soft computing methods, in particular fuzzy rule-based systems, for taxi time prediction problems. Moreover, we aim to show that, although these methods have only recently been applied to airport problems, they hold promising potential for such problems.
Aircraft taxi time prediction: Comparisons and insights
S1568494613003396
Various methodologies of artificial intelligence have recently been used for estimating performance parameters of soil working machines and off-road vehicles. Due to the nonlinear and stochastic features of soil–wheel interactions, the application of a knowledge-based Mamdani max–min fuzzy expert system for the estimation of contact area and contact pressure is described in this paper. The fuzzy logic model was constructed using contact area and contact pressure data obtained from a series of experiments in a soil bin facility with a single-wheel tester. Two paramount tire parameters, wheel load and tire inflation pressure, are the input variables of the model; each has five membership functions. As a fundamental aspect of fuzzy logic based prediction systems, a set of fuzzy if-then rules was used in accordance with fuzzy logic principles. Twenty-five linguistic if-then rules were included to develop a highly intelligent predictive model based on the centroid method at the defuzzification stage. The model performance was assessed on the basis of several statistical quality criteria. A mean relative error lower than 10%, satisfactory scattering around the unity-slope line (T), and a high coefficient of determination, R 2, were obtained by the fuzzy logic model proposed in this study.
Fuzzy logic system based prediction effort: A case study on the effects of tire parameters on contact area and contact pressure
S1568494613003402
In this paper, a rotary tool with a rotary magnetic field has been used to better flush the debris from the machining zone in the electrical discharge machining (EDM) process. Two adaptive neuro-fuzzy inference system (ANFIS) models have been designed to correlate the EDM parameters to the material removal rate (MRR) and surface roughness (SR) using data generated from experimental observations. The continuous ant colony optimization (CACO) technique has then been used to select the best process parameters for maximum MRR and a specified SR. Here, the process parameters are magnetic field intensity, rotational speed and the product of current and pulse on-time. The ANFIS models of MRR and SR serve as the objective and constraint functions for CACO, respectively. Experimental trials were divided into three main regimes: low energy, middle energy and high energy. Results showed that the CACO technique, using the ANFIS models as objective and constraint functions, can successfully optimize the input conditions of the magnetic field assisted rotary EDM process.
Optimization of magnetic field assisted EDM using the continuous ACO algorithm
S1568494613003414
The purpose of this research work is to go beyond traditional classification systems, in which the set of recognizable categories is predefined at the conception phase and remains unchanged during operation. Motivated by the increasing need for flexible classifiers that can be continuously adapted to cope with dynamic environments, we propose a new evolving classification system and an incremental learning algorithm called ILClass. The classifier is learned in an incremental, lifelong manner and is able to learn new classes from few samples. Our approach is based on a first-order Takagi-Sugeno (TS) system. The main contribution of this paper consists in proposing a global incremental learning paradigm in which antecedents and consequents are learned in synergy, contrary to existing approaches where they are learned separately. Output feedback is used in a controlled manner to bias antecedent adaptation toward difficult data samples in order to improve system accuracy. Our system is evaluated using different well-known benchmarks, with a special focus on its capacity to learn new classes.
ILClass: Error-driven antecedent learning for evolving Takagi-Sugeno classification systems
S1568494613003426
For constrained multi-objective optimization problems (CMOPs), how to preserve infeasible individuals and make use of them is a problem to be solved. To this end, a modified objective function method with a feasible-guiding strategy, built on NSGA-II, is proposed in this paper to handle CMOPs. The main idea of the proposed algorithm is to modify the objective function values of an individual using its constraint violation values and true objective function values, with a feasibility ratio fed back from the current population used to keep the balance between them; the feasible-guiding strategy is then adopted to make use of the preserved infeasible individuals. In this way, the non-dominated solutions obtained by the proposed algorithm show superior convergence and diversity of distribution, as confirmed by comparative experiments with two other CMOEAs on commonly used constrained test problems.
A modified objective function method with feasible-guiding strategy to solve constrained multi-objective optimization problems
S1568494613003438
One approach to protein structure prediction is to first predict from sequence a thresholded, binary 2D representation of a protein's topology known as a contact map. The predicted contact map can be used as distance constraints to construct a 3D structure. We focus on the latter half of the process for helix pairs and present an approach that aims to obtain a set of non-binary distance constraints from contact maps. We extend the definition of “in contact” by incorporating fuzzy logic to construct fuzzy contact maps. Then, template-based retrieval and distance geometry bound smoothing are applied to obtain distance constraints in the form of a distance map. From the distance map, we can calculate the helix pair structure. Our experimental results indicate that distance constraints close to the true distance map could be predicted at various noise levels, and the resulting structure was highly correlated with the predicted distance map.
Predicting helix pair structure from fuzzy contact maps
S156849461300344X
Brushless DC (BLDC) machines are finding increasing use in applications that demand high and rugged performance. In some critical circumstances, such as aerospace, the motor must be highly reliable. In this context, a novel model-based fault diagnosis system is developed for a brushless DC motor speed control system. Considering the complexity of characterizing the dynamics of the BLDC motor control system with analytic expressions, an LRGF neural network (LRGFNN) with a pole assignment technique is employed to model the system. During the diagnosis process, the fault signal of the motor is isolated online with the LRGFNN. Meanwhile, an adaptive lifting scheme and an adaptive threshold method are presented for detecting faults from the isolated fault signal in the presence of mechanical and electrical errors. The effectiveness of the diagnosis system is demonstrated in simulations of electrical and mechanical faults in the motor. The detection of incipient faults is also demonstrated.
BLDC motor speed control system fault diagnosis based on LRGF neural network and adaptive lifting scheme
S1568494613003451
Computational optimization methods are most often used to find a single or multiple optimal or near-optimal solutions to the underlying optimization problem describing the problem at hand. In this paper, we elevate the use of optimization to a higher level by arriving at useful problem knowledge associated with the optimal or near-optimal solutions to a problem. In the proposed innovization process, a set of trade-off optimal or near-optimal solutions is first found using an evolutionary algorithm. Thereafter, the trade-off solutions are analyzed to automatically decipher useful relationships among problem entities, so as to provide a better understanding of the problem to a designer or a practitioner. We provide an integrated algorithm for the innovization process and demonstrate the usefulness of the procedure on three real-world engineering design problems. The new and innovative design principles obtained in each case should clearly motivate engineers and practitioners to apply the procedure to more complex problems and to develop it further as a more efficient data analysis procedure.
An integrated approach to automated innovization for discovering useful design principles: Case studies from engineering
S1568494613003463
Ranking fuzzy numbers is a very important decision-making procedure in decision analysis and applications. The last few decades have seen a large number of approaches investigated for ranking fuzzy numbers, yet some of these approaches are non-intuitive and inconsistent. In 1992, Liou and Wang proposed an approach to rank fuzzy numbers based on a convex combination of the right and left integral values through an index of optimism. Despite its merits, the shortcomings associated with Liou and Wang's approach include: (i) it cannot differentiate normal and non-normal fuzzy numbers; (ii) it cannot effectively rank fuzzy numbers that have a compensation of areas; (iii) when the left or right integral values of the fuzzy numbers are zero, the index of optimism has no effect on either the left integral value or the right integral value of the fuzzy number; and (iv) it cannot consistently rank fuzzy numbers and their images. This paper proposes a revised ranking approach to overcome these shortcomings. The proposed approach presents novel left, right, and total integral values of the fuzzy numbers. A median value ranking approach is further applied to differentiate fuzzy numbers that have a compensation of areas. Finally, several comparative examples and an application to market segment evaluation are given to demonstrate the usages and advantages of the proposed ranking method for fuzzy numbers.
An improved ranking method for fuzzy numbers with integral values
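For context, Liou and Wang's original convex-combination ranking, which the abstract above revises, can be sketched for triangular fuzzy numbers. This is a minimal illustration of the classic formula, not the paper's proposed method; the closed forms for the left and right integral values assume a triangular fuzzy number (a, b, c).

```python
def total_integral(a, b, c, alpha=0.5):
    """Liou-Wang total integral value of a triangular fuzzy number (a, b, c).

    For a triangular fuzzy number the left and right integral values have
    closed forms: I_L = (a + b) / 2 and I_R = (b + c) / 2.  The ranking
    index is the convex combination I_T = alpha * I_R + (1 - alpha) * I_L,
    where alpha in [0, 1] is the index of optimism.
    """
    left = (a + b) / 2.0
    right = (b + c) / 2.0
    return alpha * right + (1 - alpha) * left

# Two fuzzy numbers with a "compensation of areas": for alpha = 0.5 both
# score 2.0, so the original approach cannot separate them -- exactly the
# shortcoming (ii) that motivates the paper's median-value tie-breaker.
score_A = total_integral(1.0, 2.0, 3.0)   # 2.0
score_B = total_integral(1.5, 2.0, 2.5)   # 2.0
```

Varying alpha shifts the ranking between pessimistic (alpha = 0, left value only) and optimistic (alpha = 1, right value only) decision makers.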
S1568494613003475
In this paper, in order to optimize neural network architecture and generalization, after analyzing the causes of overfitting and poor generalization in neural networks, we present a class of constructive decay RBF neural networks to repair the singular values of a continuous function with a finite number of jump discontinuity points. We prove that a function with m jump discontinuity points can be approximated by a simplest neural network and a decay RBF neural network in L 2 ( ℝ ) within any error ɛ, and that a function with m jump discontinuity points y = f ( x ) , x ∈ E ⊂ ℝ d can be constructively approximated by a decay RBF neural network in L 2 ( ℝ d ) within any error ε > 0. The whole network will thus have fewer hidden neurons and good generalization. A real-world problem concerning stock closing prices with jump discontinuities is presented and verifies the correctness of the theory.
A new constructive neural network method for noise processing and its application on stock market prediction
S1568494613003487
This paper surveys neuro fuzzy system (NFS) development using classification and a literature review of articles from the last decade (2002–2012) to explore how various NFS methodologies have developed during this period. Based on selected journals of different NFS applications and different online databases of NFS, this article surveys and classifies NFS applications into ten categories: student modeling systems, medical systems, economic systems, electrical and electronic systems, traffic control, image processing and feature extraction, manufacturing and system modeling, forecasting and prediction, NFS enhancements, and social sciences. For each of these categories, this paper provides a brief future outline. This review indicates three main future development directions for NFS methodologies, domains and article types: (1) NFS methodologies are tending to be developed toward expertise orientation; (2) it is suggested that different social science methodologies could be implemented using NFS as another kind of expert methodology; and (3) the ability to continually change and learn is the driving power of NFS methodologies and will be the key for future intelligent applications.
Applications of neuro fuzzy systems: A brief review and future outline
S1568494613003499
Computer aided techniques for scheduling software projects are a crucial step in the software development process within the highly competitive software industry. The Software Project Scheduling (SPS) problem relates to the decision of who does what during a software project lifetime, thus mainly involving people-intensive activities and human resources. Two major conflicting goals arise when scheduling a software project: reducing both its cost and its duration. A multi-objective approach is therefore the natural way of facing the SPS problem. As companies get involved in larger and larger software projects, there is a real need for algorithms that are able to deal with the tremendous search spaces imposed. In this paper, we analyze the scalability of eight multi-objective algorithms when applied to the SPS problem using instances of increasing size. The algorithms are classical algorithms from the literature (NSGA-II, PAES, and SPEA2) and recent proposals (DEPT, MOCell, MOABC, MO-FA, and GDE3). From the experimentation conducted, the results suggest that PAES is the algorithm with the best scalability.
The software project scheduling problem: A scalability analysis of multi-objective metaheuristics
S1568494613003505
Software reliability prediction plays a very important role in the analysis of software quality and the balance of software cost. Data from the software lifecycle are used to analyze and predict software reliability. However, predicting the variability of software reliability over time is very difficult. Recently, support vector regression (SVR) has been widely applied to solve nonlinear prediction problems in many fields and has performed well in many situations; however, it is still difficult to optimize SVR's parameters. Previously, some optimization algorithms have been used to find better parameters for SVR, but these existing algorithms are usually not fully satisfactory. In this paper, we first improve estimation of distribution algorithms (EDA) in order to maintain the diversity of the population, and then propose a hybrid model combining improved estimation of distribution algorithms (IEDA) and SVR, called the IEDA-SVR model. IEDA is used to optimize the parameters of SVR, and the IEDA-SVR model is used to predict software reliability. We compare the IEDA-SVR model with other software reliability models using real software failure datasets. The experimental results show that the IEDA-SVR model has better prediction performance than the other models.
Software reliability prediction model based on support vector regression with improved estimation of distribution algorithms
S1568494613003517
We demonstrate the use of Ant Colony System (ACS) to solve the capacitated vehicle routing problem associated with collection of recycling waste from households, treated as nodes in a spatial network. For networks where the nodes are concentrated in separate clusters, the use of k-means clustering can greatly improve the efficiency of the solution. The ACS algorithm is extended to model the use of multi-compartment vehicles with kerbside sorting of waste into separate compartments for glass, paper, etc. The algorithm produces high-quality solutions for two-compartment test problems.
An ant colony algorithm for the multi-compartment vehicle routing problem
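The cluster-first, route-second idea described in the abstract above can be illustrated with a toy sketch: a plain Lloyd's k-means to split the collection nodes into clusters, and a greedy nearest-neighbour tour per cluster standing in for the ACS route construction. The paper's actual method uses Ant Colony System with multi-compartment capacity constraints, which are omitted here; this is only a minimal illustration of the clustering decomposition.

```python
import math

def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means over 2-D node coordinates, initialised with
    the first k points for determinism."""
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each node to its nearest centre (squared distance).
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2 +
                                  (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Move each centre to the mean of its cluster (keep it if empty).
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[i] for i, c in enumerate(clusters)]
    return clusters

def nn_route(depot, nodes):
    """Greedy nearest-neighbour tour from the depot through one cluster and
    back -- a crude stand-in for the ACS route construction."""
    route, rest, cur = [depot], list(nodes), depot
    while rest:
        nxt = min(rest, key=lambda p: math.dist(cur, p))
        route.append(nxt)
        rest.remove(nxt)
        cur = nxt
    route.append(depot)
    return route
```

For spatially clustered networks, routing each k-means cluster independently shrinks the per-route search space, which is the efficiency gain the abstract reports.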
S1568494613003529
Recommendation systems have been a topic of rigorous research owing to their application in various domains, from academia to industry through e-commerce. A recommendation system is useful in reducing information overload and improving decision making for customers in any arena. Recommending products to attract customers and meet their needs has become an important aspect of this competitive environment. Although there are many approaches to recommending items, collaborative filtering has emerged as an efficient mechanism to perform this task. In addition, many evolutionary methods could be incorporated to achieve better results in terms of prediction accuracy and handling of the sparsity and cold start problems. In this paper, we have used unsupervised learning to address the problem of scalability. The recommendation engine reduces calculation time by matching the interest profile of the user to its partitioned and even smaller training samples. Additionally, we have explored the aspect of finding global neighbours through transitive similarities and incorporating particle swarm optimization (PSO) to assign weights to various alpha estimates (including the proposed α 7) that alleviate the sparsity problem. Our experimental study reveals that the particle swarm optimized alpha estimate has significantly increased the accuracy of prediction over traditional collaborative filtering methods and the fixed alpha scheme.
Enhancing scalability and accuracy of recommendation systems using unsupervised learning and particle swarm optimization
S1568494613003530
This study proposes a cooperative evolutionary optimization method between a user and system (CEUS) for problems involving quantitative and qualitative optimization criteria. In a general interactive evolutionary computation (IEC) model, both the system and user have their own role in the evolution, such as individual reproduction or evaluation. In contrast, the proposed CEUS allows the user to dynamically change the allocation of search roles between the system and user, resulting in simultaneous optimization of qualitative and quantitative objective functions without increasing user fatigue. This is achieved by a combination of user evaluation prediction and the integration of interactive and non-interactive EC. For instance, the system performs a global search at the beginning, the user then intensifies the search area, and finally the system conducts a local search in the intensified search area. This study applies CEUS to an image processing filter design problem that involves both quantitative (filter output accuracy) and qualitative (filter behavior) criteria. Experiments have shown that the proposed CEUS can design image filters in accordance with user preferences, and CEUS interacting with a non-naive user enhanced the initial global search so that it converged and found a reasonable solution more than four times faster than a non-interactive search.
User-system cooperative evolutionary computation for both quantitative and qualitative objective optimization in image processing filter design
S1568494613003542
Reliability-based design optimization (RBDO) is a systematic and powerful approach for process design under uncertainties. The traditional double-loop methods for solving RBDO problems can be computationally inefficient because the inner reliability analysis loop has to be performed iteratively for each probabilistic constraint. To solve RBDOs in an alternative and more effective way, Deb et al. [1] recently proposed the use of evolutionary algorithms with an incorporated fastPMA. Since the embedded fastPMA needs gradient calculations and initial guesses of the most probable points (MPPs), their proposed algorithm encounters difficulties in dealing with non-differentiable constraints, and its effectiveness can be degraded significantly when the initial guesses are far from the true MPPs. In this paper, a novel population-based evolutionary algorithm, named the cell evolution method, is proposed to improve the computational efficiency and effectiveness of solving RBDO problems. Using the proposed cell evolution method, a family of test cells is generated based on the target reliability index, and with these reliability test cells the determination of the MPPs for the probabilistic constraints becomes a simple parallel calculation task, without the need for gradient calculations or any initial guesses. Having determined the MPPs, a modified real-coded genetic algorithm is applied to evolve these cells into a final one that satisfies all the constraints and has the best objective function value for the RBDO; in particular, the nucleus of the final cell contains the reliable solution to the RBDO problem. Illustrative examples are provided to demonstrate the effectiveness and applicability of the proposed cell evolution method in solving RBDOs. Simulation results reveal that the proposed cell evolution method outperforms comparative methods in both computational efficiency and solution accuracy, especially for multi-modal RBDO problems.
A cell evolution method for reliability-based design optimization
S1568494613003554
In this article, we present an algorithm for detecting moving objects in a given video sequence. Spatial and temporal segmentation are combined to detect moving objects. In spatial segmentation, a multi-layer compound Markov Random Field (MRF) is used, which models the spatial, temporal, and edge attributes of the image frames of a given video. Segmentation is viewed as a pixel labeling problem and is solved using the maximum a posteriori (MAP) probability estimation principle; i.e., segmentation is done by searching for a labeled configuration that maximizes this probability. We propose using a Differential Evolution (DE) algorithm with neighborhood-based mutation (termed the Distributed Differential Evolution (DDE) algorithm) for estimating the MAP of the MRF model. A window over the entire image lattice is considered for the mutation of each target vector of the DDE, thereby enhancing the speed of convergence. For temporal segmentation, the Change Detection Mask (CDM) is obtained by thresholding the absolute differences of two consecutive spatially segmented image frames. The intensity/color values of the original pixels of the current frame are superimposed in the changed regions of the modified CDM to extract the Video Object Planes (VOPs). To test the effectiveness of the proposed algorithm, five reference video sequences and one real-life video sequence are considered. The results of the proposed method are compared with those of four state-of-the-art techniques; the proposed method provides better spatial segmentation and better identification of the location of moving objects.
Moving object detection using Markov Random Field and Distributed Differential Evolution
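The temporal segmentation step described in the abstract above (thresholding absolute frame differences to obtain a Change Detection Mask, then superimposing original pixels in the changed regions) can be sketched directly. This is a minimal pure-Python illustration on 2-D lists of pixel values; the MRF/MAP spatial segmentation and the DDE optimiser are the paper's actual contributions and are not reproduced here.

```python
def change_detection_mask(prev_seg, curr_seg, threshold):
    """Binary CDM: 1 where the absolute difference between two consecutive
    (spatially segmented) frames exceeds the threshold, else 0."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_p, row_c)]
            for row_p, row_c in zip(prev_seg, curr_seg)]

def extract_vop(frame, mask):
    """Video Object Plane: keep original pixel values in changed regions,
    zero out the unchanged background."""
    return [[px if m else 0 for px, m in zip(row_f, row_m)]
            for row_f, row_m in zip(frame, mask)]

prev = [[10, 10], [10, 10]]
curr = [[10, 50], [10, 10]]
mask = change_detection_mask(prev, curr, 5)   # [[0, 1], [0, 0]]
vop = extract_vop(curr, mask)                 # [[0, 50], [0, 0]]
```

In the paper the differencing is applied to segmented label maps rather than raw intensities, which makes the mask far less sensitive to noise than simple frame differencing.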
S1568494613003566
In this paper we present a self-tuning two-degrees-of-freedom control algorithm designed for use on a non-linear single-input single-output system. The control algorithm is developed based on the Takagi-Sugeno fuzzy model and consists of two loops: a feedforward loop and a feedback loop. The feedforward part of the controller drives the system output to the vicinity of the reference signal; it is developed from the inversion of the T-S fuzzy model. To achieve accurate, error-free reference tracking, a feedback part of the controller is added. A time-varying error-model predictive controller is used in the feedback loop; the error model is obtained from the T-S fuzzy model. The T-S fuzzy model of the system, required by the controller, is obtained with evolving fuzzy modelling, which is based on a recursive Gustafson-Kessel clustering algorithm and recursive fuzzy least squares, and which employs evolving mechanisms for adding, removing, merging and splitting the clusters. The presented control approach was experimentally validated on a non-linear second-order SISO helio-crane system in simulated and real environments. Several criterion functions were defined to evaluate the reference-tracking and disturbance-rejection performance of the control algorithm, and the presented approach was compared to another fuzzy control algorithm. The experimental results confirm the applicability of the approach.
Self-tuning of 2 DOF control based on evolving fuzzy model
S1568494613003578
To provide speech prostheses for individuals with severe communication impairments, we investigated a classification method for brain computer interfaces (BCIs) using silent speech. Event-related potentials (ERPs) were recorded using scalp electrodes while five subjects imagined the vocalization of the Japanese vowels /a/, /i/, /u/, /e/, and /o/, in order and in random order, while remaining silent and immobilized. We applied a relevance vector machine (RVM) and an RVM with a Gaussian kernel (RVM-G) instead of a support vector machine with a Gaussian kernel (SVM-G) to reduce the calculation cost, using 19 channels, common spatial pattern (CSP) filtering, and adaptive collection (AC). Results show that using RVM-G instead of SVM-G reduced the ratio of the number of efficient vectors to the number of training data from 97% to 55%. The averaged classification accuracies (CAs) using SVM-G and RVM-G were 77% and 79%, respectively, showing no degradation. However, the overall calculation cost was higher than that using SVM-G because RVM-G requires costly optimization. Furthermore, the CAs using RVM-G were lower than those using SVM-G when the training data were few. Additionally, the results showed that nonlinear classification is necessary for silent speech classification. This paper serves as the beginning of a feasibility study for speech prostheses using an imagined voice. Although classification of silent speech presents great potential, many feasibility problems remain.
Classification of silent speech using support vector machine and relevance vector machine
S156849461300358X
Models based on data mining and machine learning techniques have been developed to detect disease early or to assist in clinical breast cancer diagnosis. Feature selection is commonly applied to improve the performance of such models. There are numerous studies on feature selection in the literature, and most focus on feature selection in supervised learning. When class labels are absent, feature selection methods for unsupervised learning are required; however, there are few studies on these methods in the literature. Our paper presents a hybrid intelligent model that uses cluster analysis techniques with feature selection for analyzing clinical breast cancer diagnoses. Our model provides the option of selecting a subset of salient features for performing clustering, in contrast to most existing models, which use all the features. In particular, we study the methods by selecting salient features to identify clusters, using a comparison of coincident quantitative measurements. When applied to benchmark breast cancer datasets, experimental results indicate that our method outperforms several benchmark filter- and wrapper-based methods in selecting features used to discover natural clusters, maximizing the between-cluster scatter and minimizing the within-cluster scatter toward a satisfactory clustering quality.
A hybrid intelligent model of analyzing clinical breast cancer data using clustering techniques with feature selection
S1568494613003591
Feedback active noise control (ANC) has been used for tonal noise only and is impractical for broadband noise. In this paper, it is proposed that the feedback ANC algorithm can be applied to broadband noise if the noise is chaotic in nature. Chaotic noise is neither tonal nor random; it is broadband and nonlinearly predictable. It is generated by dynamic sources such as fans, airfoils, etc. Therefore, a nonlinear controller using a functional link artificial neural network is proposed in a feedback configuration to control chaotic noise. A series of synthetic chaotic noise signals is generated for performance evaluation of the algorithm. It is shown that the proposed nonlinear controller is capable of controlling broadband chaotic noise using feedback ANC, which uses only one microphone, whereas the conventional filtered-X least mean square (FXLMS) algorithm is incapable of controlling this type of noise.
Nonlinear feedback active noise control for broadband chaotic noise
S1568494613003608
The current state of the art in the planning and coordination of autonomous vehicles is based upon the presence of speed lanes. In a traffic scenario with a large diversity between vehicles, the removal of speed lanes can generate a significantly higher traffic bandwidth. Vehicle navigation in such unorganized traffic is considered. An evolutionary trajectory planning technique has the advantage of making driving efficient and safe; however, it also has to surpass the hurdle of computational cost. In this paper, we propose a real time genetic algorithm with Bezier curves for trajectory planning. The main contribution is the integration of vehicle following and overtaking behaviour for general traffic as heuristics for the coordination between vehicles. The resultant coordination strategy is fast and near-optimal. As the vehicles move, uncertainties may arise which are constantly adapted to, and may even lead to the cancellation of an overtaking procedure or the initiation of one. Higher level planning is performed by Dijkstra's algorithm, which indicates the route to be followed by the vehicle in a road network. Re-planning is carried out when a road blockage or obstacle is detected. Experimental results confirm the success of the algorithm subject to optimal high- and low-level planning, re-planning and overtaking.
Heuristic based evolution for the coordination of autonomous vehicles in the absence of speed lanes
S1568494613003621
Cell counts and viral load serve as major clinical indicators for providing treatment in the course of a viral infection. Monitoring these markers in patients can be expensive, and some of the measurements are not feasible to perform. An alternative solution to this problem is observer-based estimation. Several observer schemes require prior knowledge of the model and its parameters, a condition that is not achievable in some applications. A linear output assumption is required in the majority of current works; nevertheless, the output of the system can be a nonlinear combination of the state variables. This paper presents a discrete-time neural observer for nonlinear systems with a nonlinear output, where the mathematical model is assumed to be unknown. The observer is trained online with an extended Kalman filter (EKF)-based algorithm, and the respective stability analysis, based on the Lyapunov approach, is addressed. We applied different observers to the estimation problem in HIV infection, that is, state estimation of the viral load and the numbers of infected and non-infected CD4+ T cells. Simulation results suggest a good performance of the proposed neural observer and its applicability to biological systems.
Observers for biological systems
S1568494613003633
Protein structure prediction (PSP) has a large potential for valuable biotechnological applications. However, the prediction itself encompasses a difficult optimization problem with thousands of degrees of freedom and is associated with extremely complex energy landscapes. In this work a simplified three-dimensional protein model (the hydrophobic-polar, HP, model in a cubic lattice) was used in order to allow for the fast development of a robust and efficient genetic algorithm based methodology. The new methodology employs a phenotype-based crowding mechanism for the maintenance of useful diversity within the populations, which resulted in increased performance and granted the algorithm multiple-solution capability. Tests against several benchmark HP sequences and comparative results showed that the proposed genetic algorithm is superior to other evolutionary algorithms. The proposed algorithm was then successfully adapted to an all-atom protein model and tested on poly-alanines. The native structure, an alpha helix, was found in all test cases as a local or global minimum, in addition to other conformations with similar energies. The results showed that optimization strategies with multiple-solution capability present two advantages for PSP applications: the first is a more efficient investigation of complex energy landscapes; the second is an increase in the probability of finding native structures, even when they are not at the global optimum.
A multiple minima genetic algorithm for protein structure prediction
S1568494613003645
Fuzzy cognitive mapping is commonly used as a participatory modelling technique whereby stakeholders create a semi-quantitative model of a system of interest. This model is often turned into an iterative map, which should (ideally) have a unique stable fixed point. Several methods of doing this have been used in the literature, but little attention has been paid to the differences in output that such different approaches produce, or to whether there is indeed a unique stable fixed point. In this paper, we seek to highlight and address some of these issues. In particular, we state conditions under which the ordering of the variables at the stable fixed points (iterated to) of the linear fuzzy cognitive map is unique. Also, we state a condition (and an explicit bound on a parameter) under which a sigmoidal fuzzy cognitive map is guaranteed to have a unique fixed point, which is stable. These generic results suggest ways to refine the methodology of fuzzy cognitive mapping. We highlight how they were used in an ongoing case study of the shift towards a bio-based economy in the Humber region of the UK.
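The iteration behind a sigmoidal fuzzy cognitive map, and the contraction argument that can guarantee a unique stable fixed point, can be sketched as follows. This is a minimal illustration with a made-up weight matrix, not the paper's model; the bound used here (λ times the maximum absolute row sum, divided by 4, below 1) is one standard sufficient condition, since the logistic derivative is at most λ/4 — the paper's own bound may differ.

```python
import math

def sigmoid(x, lam=1.0):
    """Logistic squashing function used in sigmoidal fuzzy cognitive maps."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def iterate_fcm(W, x0, lam=1.0, tol=1e-10, max_iter=10000):
    """Iterate x <- sigmoid(W x) until successive states differ by < tol."""
    x = list(x0)
    for _ in range(max_iter):
        nxt = [sigmoid(sum(w * xi for w, xi in zip(row, x)), lam) for row in W]
        if max(abs(a - b) for a, b in zip(nxt, x)) < tol:
            return nxt
        x = nxt
    return x

# With lam * (max absolute row sum of W) / 4 < 1 the map is a contraction,
# so every starting state converges to the same unique stable fixed point.
W = [[0.0, 0.5], [0.3, 0.0]]   # hypothetical weights; 1.0 * 0.5 / 4 < 1
fp_a = iterate_fcm(W, [0.0, 0.0])
fp_b = iterate_fcm(W, [1.0, 1.0])
```

Under this contraction condition the two trajectories above agree to high precision regardless of the starting state.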
Linear and sigmoidal fuzzy cognitive maps: An analysis of fixed points
S1568494613003657
This paper proposes an optimization-based design methodology for interval type-2 fuzzy PID (IT2FPID) controllers applied to the load frequency control (LFC) problem. Hitherto, numerous fuzzy logic control structures have been proposed as solutions to LFC. However, almost all of these solutions use type-1 fuzzy sets, which have a crisp grade of membership. Power systems are large-scale complex systems with many different uncertainties. In order to handle these uncertainties, in this study, type-2 fuzzy sets, whose grades of membership are themselves fuzzy, have been used. Interval type-2 fuzzy sets are used in the design of a load frequency controller for a four-area interconnected power system, which represents a large power system. The Big Bang–Big Crunch (BB–BC) algorithm is applied to tune the scaling factors and the footprint of uncertainty (FOU) of the membership functions of the IT2FPID controllers to minimize frequency deviations of the system against load disturbances. BB–BC is a global optimization algorithm with a low computational cost and a high convergence speed, and it is therefore very efficient when the number of optimization parameters is high, as in this study. In order to show the benefits of IT2FPID controllers, a comparison with conventional type-1 fuzzy PID (T1FPID) controllers and conventional PID controllers is given for the four-area interconnected power system. The gains of the conventional PID and T1FPID controllers are also optimized using the BB–BC algorithm. Simulation results explicitly show that the performance of the proposed optimum IT2FPID load frequency controller is superior to that of the conventional T1FPID and PID controllers in terms of overshoot, settling time and robustness against different load disturbances.
Interval type-2 fuzzy PID load frequency controller using Big Bang–Big Crunch optimization
S1568494613003669
Embedded systems have become integral parts of today's technology-based life, from various home appliances to satellites. Such a wide range of applications encourages their economical design using optimization-based tools. The JPEG encoder is an embedded system applied to obtain high quality output from continuous-tone images. In recent years, its design has emerged as a problem of optimally partitioning its various processes into hardware and software components. Recognizing the pairing and conflicting nature of its various cost terms, the JPEG encoder is here, for the first time, formulated and partitioned as a multi-objective optimization problem. A multi-objective binary-coded genetic algorithm is proposed for this purpose, whose effectiveness is demonstrated through application to a real case study and a number of large-size hypothetical instances.
Multi-objective hardware–software partitioning of embedded systems: A case study of JPEG encoder
S1568494613003955
A hybrid sliding level Taguchi-based particle swarm optimization (HSLTPSO) algorithm is proposed for solving multi-objective flowshop scheduling problems (FSPs). The proposed HSLTPSO integrates particle swarm optimization, sliding level Taguchi-based crossover, and an elitist preservation strategy. The novel contribution of the proposed HSLTPSO is the use of PSO to explore the optimal feasible region in the macro-space, the use of the systematic reasoning mechanism of the sliding level Taguchi-based crossover to exploit better solutions in the micro-space, and the use of the elitist preservation strategy to retain the best particles of the multi-objective population for the next iteration. The sliding level Taguchi-based crossover is embedded in the PSO to find the best solutions and consequently enhance the PSO. The systematic reasoning of the Taguchi-based crossover, which considers the influence of the tuning factors α, β and γ, is presented in this study to resolve the problem of infeasible solutions and to find better particles. As a result, it yields a significant improvement in the Pareto best solutions of the FSP. By combining the advantages of exploration and exploitation, in the computational experiments on the six test problems the HSLTPSO provides better results than the existing methods reported in the literature for multi-objective FSPs. Therefore, the HSLTPSO is an effective approach for solving multi-objective FSPs.
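The core objective that any flowshop scheduler evaluates, the makespan of a job permutation, follows the standard completion-time recursion C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m]. A minimal sketch with made-up processing times (illustrative only, not the paper's benchmark instances):

```python
def makespan(order, proc):
    """Makespan of a permutation flowshop schedule.

    proc[j][m] is the processing time of job j on machine m.
    """
    machines = len(proc[0])
    C = [0.0] * machines   # completion times of the previous job on each machine
    for job in order:
        prev = 0.0         # completion of the current job on the previous machine
        for m in range(machines):
            C[m] = max(C[m], prev) + proc[job][m]
            prev = C[m]
    return C[-1]

# Two jobs on two machines (hypothetical times): the permutation matters.
proc = [[2, 3], [1, 2]]
```

In a multi-objective FSP each particle encodes such a permutation, and objectives like this makespan are evaluated per particle before nondominated sorting.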
Hybrid sliding level Taguchi-based particle swarm optimization for flowshop scheduling problems
S1568494613003967
This paper proposes a self-splitting fuzzy classifier with support vector learning in expanded high-order consequent space (SFC-SVHC) for classification accuracy improvement. The SFC-SVHC expands the rule-mapped consequent space of a first-order Takagi-Sugeno (TS)-type fuzzy system by including high-order terms to enhance the rule discrimination capability. A novel structure and parameter learning approach is proposed to construct the SFC-SVHC. For structure learning, a variance-based self-splitting clustering (VSSC) algorithm is used to determine distributions of the fuzzy sets in the input space. There are no rules in the SFC-SVHC initially. The VSSC algorithm generates a new cluster by splitting an existing cluster into two according to a predefined cluster-variance criterion. The SFC-SVHC uses trigonometric functions to expand the rule-mapped first-order consequent space to a higher-dimensional space. For parameter optimization in the expanded rule-mapped consequent space, a support vector machine is employed to endow the SFC-SVHC with high generalization ability. Experimental results on several classification benchmark problems show that the SFC-SVHC achieves good classification results with a small number of rules. Comparisons with different classifiers demonstrate the superiority of the SFC-SVHC in classification accuracy.
An accuracy-oriented self-splitting fuzzy classifier with support vector learning in high-order expanded consequent space
S1568494613003979
Differential evolution (DE) is a simple yet powerful evolutionary algorithm (EA) for global numerical optimization. However, its performance is significantly influenced by its parameters. Parameter adaptation has been proven to be an efficient way to enhance the performance of the DE algorithm. Based on an analysis of the behavior of the crossover in DE, we find that the trial vector is directly related to its binary string, but not directly related to the crossover rate. Based on this observation, in this paper, we propose a crossover rate repair technique for adaptive DE algorithms that are based on successful parameters. The crossover rate in DE is repaired using its corresponding binary string, i.e. by using the average number of components taken from the mutant: the average value of the binary string replaces the original crossover rate. To verify the effectiveness of the proposed technique, it is combined with an adaptive DE variant, JADE, which is a highly competitive DE variant. Experiments have been conducted on the 25 functions of the CEC-2005 competition. The results indicate that our proposed crossover rate repair technique is able to enhance the performance of JADE. In addition, compared with other DE variants and state-of-the-art EAs, the improved JADE method obtains better, or at least comparable, results in terms of the quality of final solutions and the convergence rate.
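The repair step described above is simple to state: after binomial crossover, the crossover rate recorded for parameter adaptation is replaced by the mean of the binary string, i.e. the fraction of components actually taken from the mutant. A minimal sketch under the standard DE binomial crossover (illustrative, not the authors' implementation):

```python
import random

def binomial_crossover(target, mutant, cr, rng):
    """Standard DE binomial crossover; also returns the binary string (mask)."""
    n = len(target)
    jrand = rng.randrange(n)  # at least one component always comes from the mutant
    mask = [1 if (rng.random() < cr or j == jrand) else 0 for j in range(n)]
    trial = [m if b else t for t, m, b in zip(target, mutant, mask)]
    return trial, mask

def repaired_cr(mask):
    """Repaired crossover rate: the mean of the binary string, i.e. the
    fraction of components taken from the mutant."""
    return sum(mask) / len(mask)
```

In an adaptive scheme such as JADE, it is this repaired value, rather than the sampled CR, that would feed the success-based parameter update.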
Repairing the crossover rate in adaptive differential evolution
S156849461300402X
Diabetes mellitus is a disease that affects hundreds of millions of people worldwide. Maintaining good control of the disease is critical to avoid severe long-term complications. In recent years, several increasingly advanced artificial pancreas systems have been proposed and developed; however, much research remains to be done. One of the main problems that arises in the (semi-)automatic control of diabetes is obtaining a model that explains how glycemia (glucose level in blood) varies with insulin, food intake and other factors, while fitting the characteristics of each individual patient. This paper proposes the application of evolutionary computation techniques to obtain customized models of patients, unlike most previous approaches, which obtain averaged models. The proposal is based on a kind of grammar-based genetic programming known as Grammatical Evolution (GE). The proposal has been tested with in silico patient data and the results are clearly positive. We also present a study of four different grammars and five objective functions. In the test phase the models characterized the glucose with a mean percentage average error of 13.69%, modeling well both hyper- and hypoglycemic situations.
Modeling glycemia in humans by means of Grammatical Evolution
S1568494613004031
Virtual screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. Most VS methods assume a unique binding site for the target, but it has been demonstrated that diverse ligands interact with unrelated parts of the target, a relevant fact that many VS methods do not take into account. This problem is circumvented by a novel VS methodology named BINDSURF that scans the whole protein surface in order to find new hotspots where ligands might potentially interact, and which is implemented on latest-generation massively parallel GPU hardware, allowing fast processing of large ligand databases. BINDSURF can thus be used in drug discovery, drug design and drug repurposing, and therefore helps considerably in clinical research. However, the accuracy of most VS methods, and of BINDSURF in particular, is constrained by limitations in the scoring function that describes biomolecular interactions, and even nowadays these uncertainties are not completely understood. In order to improve the accuracy of the scoring functions used in BINDSURF, we propose a novel hybrid approach in which neural network (NNET) and support vector machine (SVM) methods are trained with databases of known active (drugs) and inactive compounds, and this information is afterwards exploited to improve BINDSURF VS predictions.
Improving drug discovery using hybrid softcomputing methods
S1568494613004043
Medical image fusion combines complementary images from different modalities for proper diagnosis and surgical planning. A new approach for medical image fusion based on a hybrid intelligence system is proposed. This paper integrates swarm intelligence and a neural network to achieve a better fused output. Edges are an important feature of an image; they are detected and optimized using ant colony optimization. The detected edges are enhanced and fed as input to a simplified pulse coupled neural network. The firing maps are generated and the maximum fusion rule is applied to obtain the fused image. The performance of the proposed method is compared, both subjectively and objectively, with the genetic algorithm method, the neuro-fuzzy method and the modified pulse coupled neural network. The results show that the proposed hybrid intelligent method performs better than the existing computational and hybrid intelligent methods.
Medical image fusion based on hybrid intelligence
S1568494613004067
This paper presents an application of Fuzzy Clustering of Large Applications based on Randomized Search (FCLARANS) for attribute clustering and dimensionality reduction in gene expression data. Domain knowledge based on gene ontology and differential gene expressions are employed in the process. The use of domain knowledge helps in the automated selection of biologically meaningful partitions. Gene ontology (GO) study helps in detecting biologically enriched and statistically significant clusters. Fold-change is measured to select the differentially expressed genes as the representatives of these clusters. Tools like Eisen plot and cluster profiles of these clusters help establish their coherence. Important representative features (or genes) are extracted from each enriched gene partition to form the reduced gene space. While the reduced gene set forms a biologically meaningful attribute space, it simultaneously leads to a decrease in computational burden. External validation of the reduced subspace, using various well-known classifiers, establishes the effectiveness of the proposed methodology on four sets of publicly available microarray gene expression data.
Fuzzy clustering with biological knowledge for gene selection
S1568494613004079
The conventional unconstrained binary quadratic programming (UBQP) problem is known to be a unified modeling and solution framework for many combinatorial optimization problems. This paper extends the single-objective UBQP to the multiobjective case (mUBQP) where multiple objectives are to be optimized simultaneously. We propose a hybrid metaheuristic which combines an elitist evolutionary multiobjective optimization algorithm and a state-of-the-art single-objective tabu search procedure by using an achievement scalarizing function. Finally, we define a formal model to generate mUBQP instances and validate the performance of the proposed approach in obtaining competitive results on large-size mUBQP instances with two and three objectives.
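The objects manipulated by such a hybrid metaheuristic are easy to state: each mUBQP objective is a quadratic form over a binary vector, and an achievement scalarizing function (here a weighted Chebyshev form, one common choice) turns the vector of objectives into the single value the tabu search procedure optimizes. A minimal sketch with toy matrices (illustrative, not the paper's instance generator):

```python
def ubqp_value(Q, x):
    """Single UBQP objective: f(x) = x' Q x over a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def mubqp_values(Qs, x):
    """mUBQP: one quadratic objective per matrix, evaluated simultaneously."""
    return [ubqp_value(Q, x) for Q in Qs]

def achievement_scalarizing(values, reference, weights):
    """Weighted Chebyshev achievement scalarizing function (minimization form)."""
    return max(w * (f - z) for f, z, w in zip(values, reference, weights))
```

Varying the reference point and weights steers the single-objective search toward different regions of the Pareto front.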
A hybrid metaheuristic for multiobjective unconstrained binary quadratic programming
S1568494613004080
Objective: To develop a classifier that tackles the problem of determining the risk of a patient of suffering from a cardiovascular disease within the next 10 years. The system has to provide both a diagnosis and an interpretable model explaining the decision. In this way, doctors are able to analyse the usefulness of the information given by the system. Methods: Linguistic fuzzy rule-based classification systems are used, since they provide a good classification rate and a highly interpretable model. More specifically, a new methodology to combine fuzzy rule-based classification systems with interval-valued fuzzy sets is proposed, which is composed of three steps: (1) the modelling of the linguistic labels of the classifier using interval-valued fuzzy sets; (2) the use of the Kα operator in the inference process; and (3) the application of a genetic tuning to find the best ignorance degree that each interval-valued fuzzy set represents, as well as the best value for the parameter α of the Kα operator in each rule. Results: The suitability of the new proposal for this medical diagnosis classification problem is shown by comparing its performance with that of two classical fuzzy classifiers and a previous interval-valued fuzzy rule-based classification system. The performance of the new method is statistically better than the ones obtained with the methods considered in the comparison. The new proposal enhances both the total number of correctly diagnosed patients, by around 3% with respect to the classical fuzzy classifiers and around 1% with respect to the previous interval-valued fuzzy classifier, and the classifier's ability to correctly differentiate patients of the different risk categories. Conclusion: The proposed methodology is a suitable tool for the medical diagnosis of cardiovascular diseases, since it obtains a good classification rate and also provides an interpretable model that can be easily understood by doctors.
Medical diagnosis of cardiovascular diseases using an interval-valued fuzzy rule-based classification system
S1568494613004109
Breast cancer is the most commonly occurring form of cancer in women. While mammography is the standard modality for diagnosis, thermal imaging provides an interesting alternative as it can identify tumors of smaller size and hence lead to earlier detection. In this paper, we present an approach to analysing breast thermograms based on image features and a hybrid multiple classifier system. The employed image features provide indications of asymmetry between left and right breast regions that are encountered when a tumor is locally recruiting blood vessels on one side, leading to a change in the captured temperature distribution. The presented multiple classifier system is based on a hybridisation of three computational intelligence techniques: neural networks or support vector machines as base classifiers, a neural fuser to combine the individual classifiers, and a fuzzy measure for assessing the diversity of the ensemble and removal of individual classifiers from the ensemble. In addition, we address the problem of class imbalance that often occurs in medical data analysis, by training base classifiers on balanced object subspaces. Our experimental evaluation, on a large dataset of about 150 breast thermograms, convincingly shows our approach not only to provide excellent classification accuracy and sensitivity but also to outperform both canonical classification approaches as well as other classifier ensembles designed for imbalanced datasets.
A hybrid classifier committee for analysing asymmetry features in breast thermograms
S1568494613004110
Numerous state-of-the-art classification algorithms exist that are designed to handle data with nominal or binary class labels. Unfortunately, less attention is given to the genre of classification problems where the classes are organized as a structured hierarchy, such as protein function prediction (the target area of this work), test scores, gene ontology, web page categorization and text categorization. The structured hierarchy is usually represented as a tree or a directed acyclic graph (DAG) in which IS-A relationships hold among the class labels. Class labels at the upper levels of the hierarchy are more abstract and easier to predict, whereas class labels at deeper levels are more specific and challenging to predict correctly. It is helpful to consider this class hierarchy when designing a hypothesis that can handle the tradeoff between prediction accuracy and prediction specificity. In this paper, a novel ant colony optimization (ACO) based single path hierarchical classification algorithm is proposed that incorporates the given class hierarchy during its learning phase. The algorithm produces an ordered list of IF–THEN rules and thus offers a comprehensible classification model. A detailed discussion of the architecture and design of the proposed technique is provided, followed by an empirical evaluation on six ion-channel data sets (related to protein function prediction) and two publicly available data sets. The performance of the algorithm is encouraging compared to existing methods, based on the statistically significant Student's t-test (considering prediction accuracy and specificity), and thus confirms the promise of the proposed technique for the hierarchical classification task.
A novel ant colony optimization based single path hierarchical classification algorithm for predicting gene ontology
S1568494613004122
After electricity energy market clearing, the network may be operated with a low transient stability margin because of hitting security limits or increasing the contribution of risky participants. Therefore, a new multi-objective model for electricity energy market clearing that considers the dynamic security of the power system is proposed. Indeed, in addition to the economic aspect of electricity markets, i.e. the offer cost of generating units, the linearized Corrected Transient Energy Margin (CTEM) of the system is also considered as a new objective function of the market clearing problem. A Multi-Objective Genetic Algorithm (MOGA) is utilized to solve the multi-objective market clearing problem. The New England test system is used to demonstrate the performance of the proposed method.
Dynamic security consideration in multiobjective electricity markets
S1568494613004134
When driving a vehicle along a given route, several objectives such as the traveling time and the fuel consumption have to be considered. This can be viewed as an optimization problem and solved with the appropriate optimization algorithms. The existing optimization algorithms mostly combine objectives into a weighted-sum cost function and solve the corresponding single-objective problem. Using a multiobjective approach should be, in principle, advantageous, since it enables better exploration of the multiobjective search space, however, no results about the optimization of driving with this approach have been reported yet. To test the multiobjective approach, we designed a two-level Multiobjective Optimization algorithm for discovering Driving Strategies (MODS). It finds a set of nondominated driving strategies with respect to two conflicting objectives: the traveling time and the fuel consumption. The lower-level algorithm is based on a deterministic breadth-first search and nondominated sorting, and searches for nondominated driving strategies. The upper-level algorithm is an evolutionary algorithm that optimizes the input parameters for the lower-level algorithm. The MODS algorithm was tested on data from real-world routes and compared with the existing single-objective algorithms for discovering driving strategies. The results show that the presented algorithm, on average, significantly outperforms the existing algorithms.
Discovering driving strategies with a multiobjective optimization algorithm
S1568494613004158
This paper presents a new combining approach for color constancy, the problem of finding the true color of objects independent of the light illuminating the scene. There are various combining methods in the literature, all of which use a weighting approach with either pre-determined static weights for all images or dynamically computed weights for each image. The problem with the weighting approach is that, due to the inherent characteristics of color constancy methods, finding suitable weights for combination is a difficult and error-prone task. In this paper, a new optimization-based combining method is proposed which does not need explicit weight assignment. The proposed method has two phases: first, the best group of color constancy algorithms for the given image is determined; then, some of the algorithms in this group are combined using multi-objective optimization methods. To the best of our knowledge, this is the first time that optimization methods have been used for the color constancy problem. The proposed method has been evaluated using two benchmark datasets and the experimental results were satisfactory in comparison with state-of-the-art algorithms.
Multi-objective optimization based color constancy
S156849461300416X
This paper presents a novel, soft computing based solution to a complex optimal control or dynamic optimization problem that requires the solution to be available in real-time. The complexities in this problem of optimal guidance of interceptors launched with high initial heading errors include the more involved physics of a three dimensional missile–target engagement, and those posed by the assumption of a realistic dynamic model such as time-varying missile speed, thrust, drag and mass, besides gravity, and upper bound on the lateral acceleration. The classic, pure proportional navigation law is augmented with a polynomial function of the heading error, and the values of the coefficients of the polynomial are determined using differential evolution (DE). The performance of the proposed DE enhanced guidance law is compared against the existing conventional laws in the literature, on the criteria of time and energy optimality, peak lateral acceleration demanded, terminal speed and robustness to unanticipated target maneuvers, to illustrate the superiority of the proposed law.
Differential evolution based 3-D guidance law for a realistic interceptor model
S1568494613004171
The multi-level image thresholding is often treated as a problem of optimization. Typically, finding the parameters of these problems leads to a nonlinear optimization problem, for which obtaining the solution is computationally expensive and time-consuming. In this paper a new multi-level image thresholding technique using synergetic differential evolution (SDE), an advanced version of differential evolution (DE), is proposed. SDE is a fusion of three algorithmic concepts proposed in modified versions of DE. It utilizes two criteria (1) entropy and (2) approximation of normalized histogram of an image by a mixture of Gaussian distribution to find the optimal thresholds. The experimental results show that SDE can make optimal thresholding applicable in case of multi-level thresholding and the performance is better than some other multi-level thresholding methods.
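The first of the two criteria mentioned, entropy, is commonly Kapur's criterion: maximize the sum of the within-class entropies of the normalized histogram over the threshold positions. A minimal sketch of such an objective, which a DE variant would maximize by searching over threshold vectors (illustrative, not the authors' implementation):

```python
import math

def kapur_entropy(hist, thresholds):
    """Kapur's criterion: sum of within-class entropies of a histogram,
    to be maximized over the threshold positions."""
    total = float(sum(hist))
    p = [h / total for h in hist]
    bounds = [0] + sorted(thresholds) + [len(p)]
    H = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(p[lo:hi])  # probability mass of this class
        if w > 0:
            H -= sum((pi / w) * math.log(pi / w) for pi in p[lo:hi] if pi > 0)
    return H
```

On a bimodal toy histogram, the threshold separating the two modes scores higher than an off-center one, which is exactly the signal the optimizer exploits.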
Multi-level image thresholding by synergetic differential evolution
S1568494613004183
This paper proposes a practical new hybrid model for short-term electrical load forecasting based on particle swarm optimization (PSO) and support vector machines (SVM). The proposed PSO–SVM model is targeted at forecasting load during periods with significant temperature variations. The proposed model detects periods when the temperature changes significantly, based on the weather (temperature) forecast, and decides whether the model can be trained on recent history alone (typically the past 4 weeks) or whether that history has to be augmented with data for similar days, taken from beyond the recent history, when such weather conditions were detected. The architecture of the solution consists of three modules: a preprocessing module, an SVM module and a PSO module. The algorithm has been tested at the city of Burbank utility, USA, and the obtained results show better accuracy compared to results generated with the classical methods of training on recent history only or on similar days only.
Hybrid PSO–SVM method for short-term load forecasting during periods with significant temperature variations in city of Burbank
S1568494613004195
Since assumptions of the model μ = a + bϕ(S) used in accelerated life testing analysis, such as the requirement that the function ϕ(·) be completely specified and that the relationship between μ and ϕ(S) be linear, generally do not hold, the estimation of the stress level contains uncertainty. In this paper, we propose to use a non-linear fuzzy regression model to perform the extrapolation process, adapting fuzzy probability theory to classical reliability so as to include uncertainty and process experience in obtaining the fuzzy reliability of a component. The results show that the proposed model is able to estimate reliability when the mentioned assumptions are violated and uncertainty is implicit, whereas the classical models are unreliable.
A non-linear fuzzy regression for estimating reliability in a degradation process
S1568494613004201
Fuzzy C-means (FCM) clustering has been widely used successfully in many real-world applications. However, the FCM algorithm is sensitive to the initial prototypes, and it cannot handle non-traditional curved clusters. In this paper, a multi-center fuzzy C-means algorithm based on transitive closure and spectral clustering (MFCM-TCSC) is provided. In this algorithm, the initial guesses of the locations of the cluster centers or the membership values are not necessary. Multi-centers are adopted to represent the non-spherical shape of clusters. Thus, the clustering algorithm with multi-center clusters can handle non-traditional curved clusters. The novel algorithm contains three phases. First, the dataset is partitioned into some subclusters by FCM algorithm with multi-centers. Then, the subclusters are merged by spectral clustering. Finally, based on these two clustering results, the final results are obtained. When merging subclusters, we adopt the lattice similarity method as the distance between two subclusters, which has explicit form when we use the fuzzy membership values of subclusters as the features. Experimental results on two artificial datasets, UCI dataset and real image segmentation show that the proposed method outperforms traditional FCM algorithm and spectral clustering obviously in efficiency and robustness.
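The classical FCM updates that the first phase relies on, before subclusters are merged spectrally, are the standard membership and fuzzifier-weighted center formulas. A one-dimensional sketch under the usual fuzzifier m (illustrative only, not the MFCM-TCSC code; the epsilon guard is an implementation convenience):

```python
def fcm_step(X, centers, m=2.0):
    """One classical FCM iteration on 1-D data: membership update,
    then fuzzifier-weighted center update."""
    eps = 1e-12  # guards against zero distance when a point sits on a center
    U = []
    for x in X:
        d = [abs(x - c) + eps for c in centers]
        U.append([1.0 / sum((d[i] / dj) ** (2.0 / (m - 1.0)) for dj in d)
                  for i in range(len(centers))])
    new_centers = []
    for i in range(len(centers)):
        num = sum((U[k][i] ** m) * X[k] for k in range(len(X)))
        den = sum(U[k][i] ** m for k in range(len(X)))
        new_centers.append(num / den)
    return U, new_centers
```

Each membership row sums to one, and repeating the step drifts the centers toward the data's two groups; the multi-center variant then treats such memberships as features when merging subclusters.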
Study on multi-center fuzzy C-means algorithm based on transitive closure and spectral clustering
S1568494613004213
The analysis of worst-case behavior in wireless sensor networks is an extremely difficult task, due to the complex interactions that characterize the dynamics of these systems. In this paper, we present a new methodology for analyzing the performance of routing protocols used in such networks. The approach exploits a stochastic optimization technique, specifically an evolutionary algorithm, to generate a large, yet tractable, set of critical network topologies; such topologies are then used to infer general considerations on the behaviors under analysis. As a case study, we focused on the energy consumption of two well-known ad hoc routing protocols for sensor networks: the multi-hop link quality indicator and the collection tree protocol. The evolutionary algorithm started from a set of randomly generated topologies and iteratively enhanced them, maximizing a measure of “how interesting” such topologies are with respect to the analysis. In the second step, starting from the gathered evidence, we were able to define concrete, protocol-independent topological metrics which correlate well with protocols’ poor performances. Finally, we discovered a causal relation between the presence of cycles in a disconnected network, and abnormal network traffic. Such creative processes were made possible by the availability of a set of meaningful topology examples. Both the proposed methodology and the specific results presented here – that is, the new topological metrics and the causal explanation – can be fruitfully reused in different contexts, even beyond wireless sensor networks.
The impact of topology on energy consumption for collection tree protocols: An experimental assessment through evolutionary computation
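The evolutionary search over network topologies can be pictured with a much smaller toy: a (1+1)-EA that flips edges in a graph and keeps mutants scoring at least as well on some "interestingness" measure. The diameter-based fitness below is a stand-in assumption, not the paper's energy-based score:

```python
import random
from collections import deque

def bfs_ecc(adj, s, n):
    """Eccentricity of s via BFS, or None if the graph is disconnected."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values()) if len(dist) == n else None

def fitness(edges, n):
    """Graph diameter (to maximize), or -1 for disconnected topologies."""
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    eccs = [bfs_ecc(adj, s, n) for s in range(n)]
    return -1 if None in eccs else max(eccs)

def evolve(n=8, iters=300, seed=1):
    random.seed(seed)
    all_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    best = set(all_pairs)                    # start from the complete graph
    best_fit = fitness(best, n)
    for _ in range(iters):
        child = set(best)
        child.symmetric_difference_update([random.choice(all_pairs)])  # flip one edge
        f = fitness(child, n)
        if f >= best_fit:                    # (1+1)-EA: keep if not worse
            best, best_fit = child, f
    return best, best_fit
```

The real methodology evolves a whole population of topologies and then inspects the survivors for shared structural traits, but the mutate-and-select loop is the same idea.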
S1568494613004225
This paper presents a novel rotation-invariant texture image retrieval method using particle swarm optimization (PSO) and support vector regression (SVR), called the RTIRPS method. It respectively employs log-polar mapping (LPM) combined with fast Fourier transformation (FFT), Gabor filters, and Zernike moments to extract three kinds of rotation-invariant features from gray-level images. Subsequently, the PSO algorithm is utilized to optimize the RTIRPS method. Experimental results demonstrate that the RTIRPS method achieves satisfying results and outperforms the existing well-known rotation-invariant image retrieval methods under consideration here. Also, in order to reduce the calculation complexity of image feature matching, the RTIRPS method employs SVR to construct an efficient scheme for image retrieval.
Rotation-invariant texture image retrieval using particle swarm optimization and support vector regression
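PSO itself is simple enough to sketch; in a retrieval setting the particle positions would encode feature weights, but the same global-best loop applies. A minimal version (an illustration, not the RTIRPS configuration):

```python
import random

def pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizing f over [lo, hi]^dim."""
    rnd = random.Random(seed)
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                         # personal bests
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rnd.random() * (P[i][d] - X[i][d])
                           + c2 * rnd.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

The inertia weight `w` and the two acceleration coefficients are the usual textbook defaults; tuning them per problem is exactly the kind of knob the paper optimizes.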
S1568494613004237
A well-designed graph tends to yield good performance in graph-based semi-supervised learning. Although most graph-based semi-supervised dimensionality reduction approaches perform very well on clean datasets, they usually fail to construct a faithful graph, which plays an important role in achieving good performance, when applied to high-dimensional, sparse, or noisy data. This generally leads to a dramatic performance degradation. To deal with these issues, this paper proposes a strategy called relative semi-supervised dimensionality reduction (RSSDR), which applies perceptual relativity to semi-supervised dimensionality reduction. In RSSDR, a relative transformation is first performed over the training samples to build the relative space; this transformation improves the distinguishing ability among data points and diminishes the impact of noise on semi-supervised dimensionality reduction. Second, the edge weights of the neighborhood graph are determined by minimizing the local reconstruction error in the relative space, so that both the global and local geometric structures of the data are preserved. Extensive experiments on face, UCI, gene expression, artificial, and noisy datasets validate the feasibility and effectiveness of the proposed algorithm, with promising results in both classification accuracy and robustness.
Perceptual relativity-based semi-supervised dimensionality reduction algorithm
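One plausible reading of "minimizing the local reconstruction error" is the LLE-style weight solve: each point is expressed as an affine combination of its nearest neighbors. A sketch under that assumption (the paper performs this in its relative space, not the raw input space):

```python
import numpy as np

def reconstruction_weights(X, k=3, reg=1e-3):
    """Per-point weights over the k nearest neighbours minimizing
    ||x_i - sum_j w_ij x_j||^2 subject to sum_j w_ij = 1 (LLE-style)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]            # skip the point itself
        Z = X[nbrs] - X[i]                       # centre neighbours on x_i
        C = Z @ Z.T                              # local Gram matrix
        C += reg * (np.trace(C) + 1e-12) * np.eye(k)   # regularize for stability
        w = np.linalg.solve(C, np.ones(k))       # closed-form constrained solve
        W[i, nbrs] = w / w.sum()                 # enforce sum-to-one
    return W
```

The resulting row-stochastic-like weights serve as the neighborhood-graph edge weights that the dimensionality reduction step then tries to preserve.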
S1568494613004249
Many geographical applications have to deal with spatial objects that reveal an intrinsically vague or fuzzy nature. A spatial object is fuzzy if locations exist that cannot be assigned completely to the object or to its complement. Spatial database systems and Geographical Information Systems (GIS) are currently unable to cope with this kind of data. Based on an available abstract data model of fuzzy spatial data types for fuzzy points, fuzzy lines, and fuzzy regions that leverages fuzzy set theory and fuzzy point set topology, this article proposes a Spatial Plateau Algebra that provides spatial plateau data types as an implementation of fuzzy spatial data types. Each spatial plateau object consists of a finite number of crisp counterparts that are all adjacent or disjoint to each other, are associated with different membership values, and hence form different plateaus. The formal framework and the implementation are based on well known, exact models and implementations of crisp spatial data types. Spatial plateau operations as geometric operations on spatial plateau objects are expressed as a combination of geometric operations on the underlying crisp spatial objects. This article offers a conceptually clean foundation for implementing a database extension for fuzzy spatial objects and their operations, and demonstrates the embedding of these new data types as attribute data types in a database schema as well as the incorporation of fuzzy spatial operations into a database query language.
Spatial Plateau Algebra for implementing fuzzy spatial objects in databases and GIS: Spatial plateau data types and operations
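A spatial plateau object can be prototyped as a list of (crisp part, membership) pairs with pairwise-disjoint crisp parts. Here the crisp parts are plain sets of grid cells rather than real geometries (an assumption for illustration); intersection takes the minimum membership on overlaps, mirroring fuzzy AND:

```python
def plateau_intersection(p, q):
    """Intersection of two plateau objects, each a list of
    (crisp_cell_set, membership) pairs with pairwise-disjoint crisp parts.
    Overlapping cells get membership min(mu_p, mu_q)."""
    out = {}
    for cells_p, mu_p in p:
        for cells_q, mu_q in q:
            common = cells_p & cells_q
            if common:
                mu = min(mu_p, mu_q)
                out[mu] = out.get(mu, frozenset()) | common
    # one plateau per distinct membership value, sorted by membership
    return sorted(((cells, mu) for mu, cells in out.items()), key=lambda t: t[1])
```

In the actual algebra the crisp parts are exact crisp regions and the cell-set intersection becomes a geometric intersection, but the bookkeeping over membership plateaus is the same.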
S1568494613004250
The League Championship Algorithm (LCA) is a recently proposed stochastic population-based algorithm for continuous global optimization that mimics a championship environment in which artificial teams play in an artificial league for several weeks (iterations). Given the league schedule, in each week a number of individuals, acting as sport teams, play in pairs, and the outcome of each game is determined as a win, loss, or tie based on the playing strength (fitness value) and the intended team formation/arrangement (solution) developed by each team. Modeling an artificial match analysis, each team devises the required changes in its formation (generates a new solution) for the next week's contest, and the championship continues for a number of seasons (stopping condition). An add-on module that models the end-of-season transfer of players is also developed to possibly speed up the global convergence of the algorithm. Extensive empirical analysis on a large number of benchmark functions verifies the rationale of the algorithm and the suitability of the updating equations, and investigates the effect of different settings of the control parameters. Results indicate that LCA exhibits promising performance, suggesting that its further development and practical application would be worth investigating in future studies.
League Championship Algorithm (LCA): An algorithm for global optimization inspired by sport championships
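The weekly loop can be caricatured in a few lines. Both the fitness-proportional win probability and the "loser moves toward winner" update below are simplified assumptions for illustration, not the paper's actual match-analysis equations:

```python
import random

def toy_league_step(teams, f, best_known, rnd):
    """One artificial week: pair the teams (even count assumed), decide each
    match with a fitness-proportional win probability, and nudge losers
    toward winners. A loose sketch of the LCA loop, not its exact rules."""
    order = list(range(len(teams)))
    rnd.shuffle(order)
    for a, b in zip(order[::2], order[1::2]):
        fa, fb = f(teams[a]), f(teams[b])
        # minimization: the lower-fitness team wins more often
        p_a_wins = (fb - best_known) / max(fa + fb - 2 * best_known, 1e-12)
        winner, loser = (a, b) if rnd.random() < p_a_wins else (b, a)
        teams[loser] = [x + rnd.uniform(0, 1) * (w - x)
                        for x, w in zip(teams[loser], teams[winner])]
    return teams
```

Iterating this step for a number of "weeks" contracts the population: on a convex objective, every losing team lands on a segment between two existing points, so the worst fitness in the league never increases.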
S1568494613004262
In this paper, a multi-objective 2-dimensional vector packing problem is presented. It consists of packing a set of items, each having two sizes in two independent dimensions (say, a weight and a length), into a finite number of bins while concurrently optimizing three cost functions. The first objective is the minimization of the number of used bins. The second is the minimization of the maximum length of a bin. The third is to balance the load over all the bins by minimizing the difference between the maximum and minimum bin lengths. Two population-based metaheuristics are applied to tackle this problem. These metaheuristics use different indirect encoding approaches to find good permutations of items, which are then packed by a separate decoder routine whose parameters are embedded in the solution encoding. This leads to a self-adaptive metaheuristic whose parameters are adjusted during the search process. The performance of these strategies is assessed and compared against benchmarks inspired by the literature.
Self-adaptive metaheuristics for solving a multi-objective 2-dimensional vector packing problem
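The permutation-plus-decoder idea can be illustrated with a first-fit decoder over the weight dimension. This is one plausible decoder for the problem as stated, not the paper's parameterized routine:

```python
def first_fit_decode(perm, items, cap):
    """Decode a permutation into bins: each item (weight, length) goes into
    the first open bin with enough remaining weight capacity; a bin's
    length is the sum of its items' lengths."""
    bins = []          # each bin: [remaining_weight, total_length, item_ids]
    for i in perm:
        w, l = items[i]
        for b in bins:
            if b[0] >= w:
                b[0] -= w
                b[1] += l
                b[2].append(i)
                break
        else:
            bins.append([cap - w, l, [i]])   # open a new bin
    return bins

def objectives(bins):
    """The three costs: bin count, max bin length, length imbalance."""
    lengths = [b[1] for b in bins]
    return len(bins), max(lengths), max(lengths) - min(lengths)
```

A metaheuristic would then search over `perm` (and, in the self-adaptive version, over the decoder's own parameters) to trade off the three objective values returned here.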
S1568494613004274
Type reduction (TR) is one of the key components of interval type-2 fuzzy logic systems (IT2FLSs). Minimizing the computational requirements has been one of the key design criteria for developing TR algorithms, and researchers often favor computationally less expensive ones. This paper evaluates and compares five frequently used TR algorithms based on their contribution to the forecasting performance of IT2FLS models. Algorithms are judged by the generalization power of the IT2FLS models developed with them. Synthetic and real-world case studies with different levels of uncertainty are considered to examine the effects of TR algorithms on forecast accuracy. According to the obtained results, the Coupland–John TR algorithm leads to models with a higher and more stable forecasting performance. However, there is no obvious and consistent relationship between the width of the type-reduced set and the TR algorithm.
Effects of type reduction algorithms on forecasting accuracy of IT2FLS models
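The classic TR procedure that faster alternatives approximate is the Karnik–Mendel (KM) iteration, which finds each endpoint of the type-reduced interval by locating a switch point. A minimal sketch for a discretized domain (x assumed sorted ascending):

```python
def km_endpoint(x, lo, hi, left=True):
    """Karnik-Mendel iterative procedure for one centroid endpoint of an
    interval type-2 set sampled at points x with lower (lo) and upper (hi)
    membership grades."""
    n = len(x)
    theta = [(a + b) / 2 for a, b in zip(lo, hi)]
    y = sum(xi * t for xi, t in zip(x, theta)) / sum(theta)
    while True:
        # switch point: last index with x[k] <= y
        k = max((i for i in range(n) if x[i] <= y), default=0)
        if left:   # left endpoint: upper grades below the switch, lower above
            theta = [hi[i] if i <= k else lo[i] for i in range(n)]
        else:      # right endpoint: the opposite assignment
            theta = [lo[i] if i <= k else hi[i] for i in range(n)]
        y_new = sum(xi * t for xi, t in zip(x, theta)) / sum(theta)
        if abs(y_new - y) < 1e-12:
            return y_new
        y = y_new
```

Running it with `left=True` and `left=False` yields the interval `[y_l, y_r]` whose width is the type-reduced set width discussed in the abstract.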
S1568494613004286
A hybrid approach based on an improved gravitational search algorithm (IGSA) and orthogonal crossover (OC), called IGSA-OC, is proposed to efficiently find the optimal shape of concrete gravity dams. Hybridizing IGSA with the OC operator improves the global exploration ability of the IGSA method and increases its convergence rate. To find the optimal shape of concrete gravity dams, the interaction effects of dam–water–foundation rock subjected to earthquake loading are considered in this study. Because the computational cost of optimizing the shape of concrete gravity dams subjected to earthquake loads is usually high, weighted least squares support vector machine (WLS-SVM) regression is utilized as an efficient metamodel to predict the dynamic responses of gravity dams at low computational cost. To testify to the robustness and efficiency of the proposed IGSA-OC, four well-known benchmark functions from the literature are first optimized using IGSA-OC, and comparisons are provided with the standard gravitational search algorithm (GSA) and other modified GSA methods. Then, the optimal shape of concrete gravity dams is found using IGSA-OC. The solutions obtained by IGSA-OC are compared with those of the standard GSA, IGSA, and particle swarm optimization (PSO). The numerical results demonstrate that the proposed IGSA-OC significantly outperforms the standard GSA, IGSA, and PSO.
A hybrid approach based on an improved gravitational search algorithm and orthogonal crossover for optimal shape design of concrete gravity dams
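For orientation, the baseline that IGSA-OC improves upon is the standard GSA loop: agents attract one another with forces scaled by fitness-derived masses, under a gravitational constant that decays over time. A minimal sketch of that baseline (not the paper's IGSA-OC):

```python
import math
import random

def gsa(f, dim=3, n=20, iters=200, lo=-5.0, hi=5.0, g0=100.0, seed=0):
    """Minimal standard GSA minimizing f over [lo, hi]^dim.
    All agents attract here; the original GSA shrinks this set over time."""
    rnd = random.Random(seed)
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(x) for x in X]
        b, w = min(fit), max(fit)
        if b < best_f:
            best_f, best_x = b, X[fit.index(b)][:]
        m = [(fi - w) / (b - w) if b != w else 1.0 for fi in fit]  # best->1, worst->0
        s = sum(m) or 1.0
        M = [mi / s for mi in m]                  # normalized masses
        G = g0 * math.exp(-20.0 * t / iters)      # decaying gravitational constant
        for i in range(n):
            acc = [0.0] * dim
            for j in range(n):
                if i == j:
                    continue
                R = math.dist(X[i], X[j]) + 1e-9
                for d in range(dim):
                    # a_i = sum_j rand * G * M_j * (x_j - x_i) / R  (M_i cancels)
                    acc[d] += rnd.random() * G * M[j] * (X[j][d] - X[i][d]) / R
            for d in range(dim):
                V[i][d] = rnd.random() * V[i][d] + acc[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return best_x, best_f
```

On the dam problem each agent position would encode the shape design variables, with the WLS-SVM metamodel standing in for `f` to avoid repeated dynamic analyses.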