FileName | Abstract | Title
---|---|---|
S1568494615001076 | This paper introduces an automated medical data classification method using wavelet transformation (WT) and an interval type-2 fuzzy logic system (IT2FLS). Wavelet coefficients, which serve as inputs to the IT2FLS, are a compact form of the original data but exhibit highly discriminative features. The integration of WT and IT2FLS aims to cope with both the challenge of high-dimensional data and uncertainty. IT2FLS utilizes a hybrid learning process comprising unsupervised structure learning by fuzzy c-means (FCM) clustering and supervised parameter tuning by a genetic algorithm. This learning process is computationally expensive, especially when employed with high-dimensional data. The application of WT therefore reduces the computational burden and enhances the performance of IT2FLS. Experiments are implemented with two frequently used medical datasets from the UCI Machine Learning Repository: Wisconsin breast cancer and Cleveland heart disease. A number of important metrics are computed to measure classification performance: accuracy, sensitivity, specificity and area under the receiver operating characteristic curve. Results demonstrate a significant dominance of the wavelet–IT2FLS approach over other machine learning methods, including probabilistic neural network, support vector machine, fuzzy ARTMAP, and adaptive neuro-fuzzy inference system. The proposed approach is thus useful as a decision support system for clinicians and practitioners in medical practice. | Medical data classification using interval type-2 fuzzy logic system and wavelets |
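The hybrid pipeline described above, wavelet features feeding unsupervised FCM structure learning, can be illustrated with a minimal sketch. The wavelet choice, decomposition level, cluster count and fuzzifier below are illustrative assumptions, not the paper's settings, and the GA tuning stage is omitted:

```python
import numpy as np
import pywt

def wavelet_features(signals, wavelet="db4", level=3):
    # Keep only the coarse approximation coefficients as a compact,
    # discriminative representation of each input record.
    return np.array([pywt.wavedec(s, wavelet, level=level)[0] for s in signals])

def fcm(X, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    # Plain fuzzy c-means: alternate membership and centroid updates.
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U_new = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

signals = np.random.randn(20, 64)          # 20 toy records, 64 samples each
centers, memberships = fcm(wavelet_features(signals))
```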
S1568494615001180 | In this paper, an efficient genetic algorithm (GA) to generate a simple and well-defined TSK model is proposed. The approach is derived from the use of the improved Strength Pareto Evolutionary Algorithm (SPEA-2), where the genes of the chromosome are arranged into control genes and parameter genes. These genes are in a hierarchical form so that the control genes can manipulate the parameter genes more effectively. In our approach, we first apply the back-propagation algorithm to optimize the parameters of the model (parameters of membership functions and fuzzy rules); then we apply SPEA-2 to optimize the number of fuzzy rules and the number of parameters, and to fine-tune these parameters. Two well-known dynamic benchmarks are used to evaluate the performance of the proposed algorithm. Simulation results show that our modeling approach outperforms some methods proposed in previous work. | TSK fuzzy model with minimal parameters |
S1568494615001192 | In this study, we consider the customer to be a company's crucial asset. For a fast, efficient decision-making process, it is vital that a customer relationship management (CRM) decision-maker condenses and abstracts the existing information. A questionnaire survey was conducted among respondents in order to obtain the required data. The questionnaire contains nine categories of satisfaction variables. To perform the analysis, we used principal component analysis (PCA) and data envelopment analysis (DEA); PDA is used as an abbreviation for the integration of these two methods. PCA was utilised to assign a number to each category of questions related to each satisfaction variable. To achieve optimal precision, DEA was applied to the three categories of customers (‘most important’, ‘important’ and ‘ordinary’ customers) in order to determine the strengths and weaknesses of customer services from these customers’ perspectives. Customers were clustered and then DEA was used to determine their viewpoints. Using DEA, we have optimised our recognition of customers’ complaints and then provided recommendations and remedial actions to resolve the current issues in the logistics and transport industry in general, and at Fremantle port in particular. Significance: the current study integrates soft computing and an optimisation technique in order to build a CRM recommender system, demonstrating the strength of hybrid soft computing as a relevant solution in the area of CRM. The significance of the proposed algorithm is threefold. First, it integrates soft computing and an optimisation technique to build the CRM recommender system. Second, it utilises the most standard CRM variables in its decision-making process. Third, it is an optimising algorithm because it integrates DEA with the PCA technique. | Intelligent customer complaint handling utilising principal component and data envelopment analysis (PDA) |
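The DEA scoring step named above can be sketched with a minimal input-oriented CCR model solved as a linear program. The customer-group input/output data below are toy values, an illustrative assumption rather than the survey data used in the paper:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form).
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Variables are [u, v]."""
    n, ni = X.shape
    no = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(ni)])          # maximize u'y_o
    A_eq = np.concatenate([np.zeros(no), X[o]])[None]  # v'x_o = 1
    A_ub = np.hstack([Y, -X])                          # u'y_j - v'x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

# Toy data: 4 customer groups, 2 inputs (cost, handling time), 1 output (satisfaction)
X = np.array([[3.0, 5.0], [2.0, 4.0], [4.0, 6.0], [3.5, 3.0]])
Y = np.array([[8.0], [7.0], [9.0], [6.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```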
S1568494615001209 | The GSD team-level service climate is one of the key determinants of the outcome of global software development (GSD) projects from the software service outsourcing perspective. The main aim of this study is to evaluate the relationship between GSD team-level service climate and GSD project outcome using an adaptive neuro-fuzzy inference system (ANFIS) with a genetic learning algorithm. For measuring the team-level service climate, the Hybrid Taguchi-Genetic Learning Algorithm (HTGLA) is adopted in the ANFIS, which is more appropriate for determining the optimal premise and consequent constructs by reducing the root-mean-square error (RMSE) of the service climate criteria. Synthesizing the literature and consistent with earlier studies, IT service climate is classified into three main criteria: managerial practices (delivering quality of service), global service climate (measuring overall perceptions), and service leadership (goal setting, work planning, and coordination), which together comprise 25 GSD team-level service climate attributes. The experimental results show that the optimal prediction error obtained by the HTGLA-based ANFIS approach is 3.26%, which outperforms the earlier optimal prediction errors of 4.41% and 5.75% determined by ANFIS and statistical methods, respectively. | An ANFIS approach for evaluation of team-level service climate in GSD projects using Taguchi-genetic learning algorithm |
S1568494615001222 | This paper presents a modified bacterial foraging optimization algorithm called the adaptive crossover bacterial foraging optimization algorithm (ACBFOA), which incorporates adaptive chemotaxis and also inherits the crossover mechanism of the genetic algorithm. The first part of the research work aims at improving the evaluation of optimal objective function values. The idea of using adaptive chemotaxis is to make the algorithm computationally efficient, while the crossover technique lets offspring bacteria search nearby locations. Four different benchmark functions are considered for performance evaluation. The purpose of this research work is also to investigate a face recognition algorithm with an improved recognition rate. In this connection, we propose a new algorithm called ACBFO-Fisher. The proposed ACBFOA is used for finding optimal principal components for dimension reduction in linear discriminant analysis (LDA) based face recognition. Three well-known face databases, FERET, YALE and UMIST, are considered for validation. A comparison with the results of earlier methods is presented to reveal the effectiveness of the proposed ACBFO-Fisher algorithm. | A novel adaptive crossover bacterial foraging optimization algorithm for linear discriminant analysis based face recognition |
S1568494615001234 | In this paper we introduce a novel online self-organized clustering method based on the ART-2A network for Takagi–Sugeno fuzzy models. To accomplish the self-organization, we introduce an automatic decision algorithm along with solutions for merging and splitting of rules, as well as the parameters they operate with, such as our novel incremental distance measurement and competitive recursive least squares. We emphasize the learning algorithm's impact on both initial and long-term learning capabilities. We also emphasize the challenge of online learning, where examples arrive in real time and are thus unknown before they can be learned. Therefore, we eliminate parameter fixing by introducing a parameter-free method. We show the performance of our method on various machine learning benchmarks as a highly accurate and low time-consuming method capable of adapting to different databases without the need to fix any of its parameters according to the database. | SO-ARTIST: Self-Organized ART-2A inspired clustering for online Takagi–Sugeno fuzzy models |
S1568494615001246 | A theoretical framework for consensus building within a networked social group is put forward. This article investigates trust-based estimation and aggregation methods as part of a visual consensus model for multiple criteria group decision making with incomplete linguistic information. A novel trust propagation method is proposed to derive trust relationships from an incomplete connected trust network, and the trust score induced ordered weighted averaging operator is presented to aggregate the orthopairs of trust/distrust values obtained from different trust paths. Then, the concept of relative trust score is defined, whose use is twofold: (1) to estimate the unknown preference values and (2) to serve as a reliable source for determining experts’ weights. A visual feedback process is developed to provide experts with graphical representations of their consensus status within the group, as well as to identify the alternatives and preference values that should be reconsidered for change in the subsequent consensus round. The feedback process also includes a recommendation mechanism to advise those experts identified as contributing less to consensus on how to change their identified preference values. It is proved that the implementation of the visual feedback mechanism guarantees the convergence of the consensus reaching process. | Trust based consensus model for social network in an incomplete linguistic information context |
S1568494615001258 | Recommender systems (RSs) exploit past behaviors and user similarities to provide personalized recommendations. There are some precedents of their usage in academic environments to assist users in finding relevant information, based on assumptions about the characteristics of the items and users. Even if quality has already been taken into account as a property of items in previous works, it has never been given a key role in the re-ranking process for both items and users. In this paper, we present REFORE, a quality-based fuzzy linguistic REcommender system FOr REsearchers. We propose the use of some bibliometric measures as the way to quantify the quality of both items and users without the intervention of experts, as well as the use of the 2-tuple linguistic approach to describe the linguistic information. The system takes the measured quality into account as the main factor for re-ranking the top-N recommendation list, in order to point researchers to the latest and best papers in their research fields. To prove the accuracy improvement, we conduct a study involving different recommendation approaches, aiming at measuring their performance gain. The results obtained proved satisfactory for the researchers from different departments who took part in the tests. | REFORE: A recommender system for researchers based on bibliometrics |
S156849461500126X | The classification of imbalanced data is a major challenge for machine learning. In this paper, we present a fuzzy total margin based support vector machine (FTM-SVM) to handle the class imbalance learning (CIL) problem in the presence of outliers and noise. The proposed method incorporates the total margin algorithm, different cost functions and a proper fuzzification of the penalty into FTM-SVM, and formulates them for the nonlinear case. We use a well-suited family of fuzzy membership functions to assign fuzzy membership values, obtaining six FTM-SVM settings. We evaluate the proposed FTM-SVM method on two artificial data sets and 16 real-world imbalanced data sets. Experimental results show that the proposed FTM-SVM method achieves higher G_Mean and F_Measure values than several existing CIL methods. Based on the overall results, we conclude that the proposed FTM-SVM method is effective for the CIL problem, especially in the presence of outliers and noise in the data sets. | Class imbalance learning via a fuzzy total margin based support vector machine |
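The core idea above, softening the misclassification penalty of noisy examples through fuzzy memberships, can be approximated in a few lines by passing per-sample weights to a standard SVM. The distance-to-centre membership function below is an illustrative assumption and not the paper's FTM-SVM formulation:

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y, eps=1e-9):
    # Simple distance-to-class-centre membership: points far from their
    # class centre (likely noise/outliers) get smaller penalty weights.
    m = np.empty(len(X))
    for cls in np.unique(y):
        idx = y == cls
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        m[idx] = 1.0 - d / (d.max() + eps)
    return m

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(2.5, 1, (20, 2))])
y = np.array([0] * 200 + [1] * 20)                 # imbalanced toy data
clf = SVC(kernel="rbf", class_weight="balanced")   # cost-sensitive base SVM
clf.fit(X, y, sample_weight=fuzzy_memberships(X, y))
```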
S1568494615001271 | We address the problem of system reliability prediction based on an available series of failure time data. We consider support vector regression (SVR) as the solution approach, for its known performance on time series forecasting. However, the selection of SVR parameters is critical for obtaining satisfactory forecasts. Currently, two different ways are followed to set the values of the SVR parameters. One way is to choose parameters based on prior knowledge or experts' experience of the problem at hand: this is a simple, quick and practical way, but often not optimal in complex situations and for non-expert users. The other way is to search for the parameter values via some intelligent method of optimization of the SVR regression performance: to do this efficiently, one must avoid problems like divergence, slow convergence, local optima, etc. In this paper, we propose the combination of an analytic selection (AS) method for prior selection followed by a genetic algorithm (GA) for intelligent optimization. The combination of these two methods allows the available prior knowledge from AS to guide the GA optimization process, so as to avoid divergence and local optima and accelerate convergence. To show the effectiveness of the method, simulation experiments are designed based on artificial and real reliability datasets. The results show the superiority of our proposed ASGA method over the traditional GA method in terms of prediction accuracy, convergence speed and robustness. | System reliability prediction by support vector regression with analytic selection and genetic algorithm parameters selection |
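A minimal sketch of the GA-over-SVR-parameters idea follows. The analytic-selection-style seeding point, population size and mutation scale are illustrative assumptions rather than the paper's ASGA settings:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
t = np.arange(60, dtype=float)[:, None]
failures = np.cumsum(rng.exponential(5.0, 60))      # toy failure-time series

def fitness(params):
    C, gamma, eps = np.exp(params)                  # search in log space
    model = SVR(C=C, gamma=gamma, epsilon=eps)
    return cross_val_score(model, t, failures, cv=3,
                           scoring="neg_mean_squared_error").mean()

# "Analytic selection" style seed: centre the population on a prior guess,
# then run a tiny (mu+lambda) GA with Gaussian mutation.
pop = np.log([10.0, 0.1, 0.1]) + rng.normal(0, 0.5, (20, 3))
for _ in range(30):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-5:]]          # keep 5 elites
    children = parents[rng.integers(0, 5, 15)] + rng.normal(0, 0.3, (15, 3))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("C, gamma, epsilon =", np.exp(best))
```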
S1568494615001283 | Soft set theory is a new mathematical tool to deal with uncertain information. This paper studies soft coverings and their parameter reductions. Firstly, we define a soft set on the power set of an initial universe and discuss its properties. Secondly, we introduce soft coverings and obtain the lattice structure of soft sets induced by them. Thirdly, we investigate parameter reductions of a soft covering by means of attribute reductions in a covering information system and present their algorithm. Finally, we give an application to show the usefulness of parameter reductions of a soft covering. | Soft coverings and their parameter reductions |
S1568494615001295 | In this article, a meta-heuristic technique based on a backtracking search algorithm (BSA) is employed to produce solutions for allocating distributed generators (DGs). The objective is to reduce power loss and improve the network voltage profile in radial distribution networks by determining optimal locations and sizes of the DGs. Power loss indices and bus voltages are used to explore the initial placement of DG installations. The study considers DG types that inject both active and reactive power. The proposed methodology takes four load models into consideration, and their impacts are addressed. The proposed BSA-based methodology is verified on two different test networks with different load models, and the simulation results are compared to those reported in the recent literature. The study finds that, among the various load models, the constant power load model is sufficient and viable for allocating DGs in network loss and voltage studies. The simulation results reveal the efficacy and robustness of the BSA in finding the optimal solution to the DG allocation problem. (Nomenclature list omitted: the symbol column was lost in extraction.) | Study impact of various load models on DG placement and sizing using backtracking search algorithm |
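The BSA search mechanism underlying the abstract above (a historical population, a scaled difference step, and a sparse crossover map) can be sketched on a generic objective; the population size, mix-rate and sphere test function are illustrative assumptions:

```python
import numpy as np

def bsa(obj, dim, n=30, iters=200, low=-5.0, high=5.0, mixrate=1.0, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(low, high, (n, dim))
    oldP = rng.uniform(low, high, (n, dim))
    fit = np.array([obj(x) for x in P])
    for _ in range(iters):
        if rng.random() < 0.5:                 # Selection-I: refresh memory
            oldP = P.copy()
        oldP = oldP[rng.permutation(n)]
        F = 3.0 * rng.standard_normal()        # scale factor
        mutant = P + F * (oldP - P)
        # Sparse crossover map: only some dimensions take mutant values.
        mask = rng.random((n, dim)) < mixrate * rng.random((n, 1))
        mask[np.arange(n), rng.integers(0, dim, n)] = True
        trial = np.clip(np.where(mask, mutant, P), low, high)
        tfit = np.array([obj(x) for x in trial])
        better = tfit < fit                    # Selection-II: greedy replace
        P[better], fit[better] = trial[better], tfit[better]
    return P[fit.argmin()], fit.min()

x, f = bsa(lambda x: np.sum(x ** 2), dim=5)    # sphere as a stand-in objective
print(x.round(3), f)
```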
S1568494615001301 | Websites offering daily deals have received widespread attention from end-users. The objective of such websites is to provide time-limited discounts on goods and services in the hope of enticing more customers to purchase them. The success of daily deal websites has given rise to meta-level daily deal aggregator services that collect daily deal information from across the Web. Due to some of the unique characteristics of daily deal websites, such as high update frequency, time sensitivity, and lack of coherent information representation, many deal aggregators rely on human intervention to identify and extract deal information. In this paper, we propose an approach where daily deal information is identified, classified and properly segmented and localized. Our approach is based on a semi-supervised method that uses sentence-level features of daily deal information on a given Web page. Our work offers (i) a set of computationally inexpensive discriminative features that are able to effectively distinguish Web pages that contain daily deal information; (ii) the construction and systematic evaluation of machine learning techniques based on these features to automatically classify daily deal Web pages; and (iii) the development of an accurate segmentation algorithm that is able to localize and extract individual deals from within a complex Web page. We have extensively evaluated our approach from different perspectives, the results of which show notable performance. | Automated classification and localization of daily deal content from the Web |
S1568494615001313 | Cross-docking is a material handling and distribution technique in which products are transferred directly from the receiving dock to the shipping dock, reducing the need for a warehouse or distribution center. This process minimizes the storage and order-picking functions in a warehouse. In this paper, we consider cross-docking in a supply chain and propose a multi-objective mathematical model for minimizing the make-span, transportation cost and the number of truck trips in the supply chain. The proposed model allows a truck to travel from a supplier to the cross-dock facility and from the supplier directly to the customers. We propose two meta-heuristic algorithms, the non-dominated sorting genetic algorithm (NSGA-II) and the multi-objective particle swarm optimization (MOPSO), to solve the multi-objective mathematical model. We demonstrate the applicability of the proposed method and exhibit the efficacy of the procedure with a numerical example. The numerical results show the relative superiority of the NSGA-II method over the MOPSO method. | A novel multi-objective meta-heuristic model for solving cross-docking scheduling problems |
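The ranking step at the heart of NSGA-II, used in the comparison above, is the fast non-dominated sort; a plain sketch follows, where the three toy objectives (makespan, transport cost, truck trips) are illustrative values:

```python
import numpy as np

def fast_nondominated_sort(F):
    """F: (n, m) objective matrix, all objectives minimized.
    Returns a list of fronts, each a list of solution indices."""
    n = len(F)
    S = [[] for _ in range(n)]          # who i dominates
    counts = np.zeros(n, dtype=int)     # how many dominate i
    for i in range(n):
        for j in range(n):
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                S[i].append(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                counts[i] += 1
    fronts = [[i for i in range(n) if counts[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Toy schedules scored on (makespan, transport cost, truck trips)
F = np.array([[10, 5, 3], [8, 7, 3], [10, 5, 2], [12, 4, 4], [9, 6, 3]])
print(fast_nondominated_sort(F))
```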
S1568494615001337 | The automated servicing of vehicles is becoming more and more a reality in today's world. While certain operations, such as car washing, require only a rough model of the surface of a vehicle, other operations, such as changing of a wheel or filling the gas tank, require a correct localization of the different parts of the vehicle on which operations are to be performed. The paper describes a two-step approach to localize vehicle parts over the surface of a vehicle in front, rear and lateral views capitalizing on a novel approach based on bio-inspired visual attention. First, bounding-boxes are determined based on a model of human visual attention to roughly locate parts of interest. Second, the bounding-boxes are further searched to finely tune and better capture the boundaries of each vehicle part by means of active contour models. The proposed method obtains average bounding-box localization rates over 99.8% for different vehicle parts on a dataset of 120 vehicles belonging to sedan, SUV and wagon categories. Moreover, it allows, with the addition of the active contour models, for a more complete and accurate description of vehicle parts contours than other state-of-the-art solutions. This research work is contributing to the implementation of an automated industrial system for vehicle inspection. | An application of a bio-inspired visual attention model for the localization of vehicle parts |
S1568494615001349 | In the design of predictive controllers (MPC), parameterisation of the degrees of freedom by Laguerre functions has been shown to improve the controller performance and feasible region. However, an open question remains: how should the optimal tuning parameters be selected? Optimality here depends on the size of the feasible region of the controller, the system's closed-loop performance and the online computational cost of the algorithm. This paper develops a method for the systematic selection of tuning parameters for a parameterised predictive control algorithm. To do this, a multiobjective problem is posed and then solved using a multiobjective evolutionary algorithm (MOEA), given that the objectives are in conflict. Numerical simulations show that the MOEA is a useful tool for obtaining a suitable balance between feasibility, performance and computational cost. | Systematic selection of tuning parameters for efficient predictive controllers using a multiobjective evolutionary algorithm |
S1568494615001350 | Computational methods are useful for medical diagnosis because they provide additional information that cannot be obtained by simple visual interpretation of clinical presentations and radiologic imaging. As a result, an enormous amount of research effort has been targeted at achieving automated medical image analysis. This work reports the texture analysis of Computed Tomography (CT) images and the development of Probabilistic Neural Network (PNN), Learning Vector Quantization (LVQ) Neural Network and Back Propagation Neural Network (BPN) classifiers for classifying fatty and cirrhotic livers from CT abdominal images. The neural networks are supported by more conventional image processing operations in order to achieve the objective set. To evaluate the classifiers, Receiver Operating Characteristic (ROC) analysis is performed and the results are also evaluated by radiologists. Experimental results show that PNN is a good classifier, giving an accuracy of 95% for classifying fatty and cirrhotic livers using wavelet-based statistical texture features. | Neural network based texture analysis of CT images for fatty and cirrhosis liver classification |
S1568494615001374 | Classification is a method of accurately predicting the target class for an unlabelled sample by learning from instances described by a set of attributes and a class label. Instance based classifiers are attractive due to their simplicity and performance. However, many of them are susceptible to noise, which makes them unsuitable for real world problems. This paper proposes a novel instance based classification algorithm called Pattern Matching based Classification (PMC). The underlying principle of PMC is that it classifies unlabelled samples by matching patterns in the training dataset. The advantage of PMC in comparison with other instance based methods is its simple classification procedure together with high performance. To improve the classification accuracy of PMC, an Ant Colony Optimization based feature selection algorithm built on the idea of PMC is also proposed. The classifier is evaluated on 35 datasets. Experimental results demonstrate that PMC is competitive with many instance based classifiers. The results are also validated using nonparametric statistical tests. Moreover, the evaluation time of PMC is lower than that of the gravitation based methods used for classification. | Pattern Matching based Classification using Ant Colony Optimization based Feature Selection |
S1568494615001386 | In the modern manufacturing industry, developing automated tool condition monitoring systems has become more and more important in order to transform manufacturing systems from manually operated production machines to highly automated machining centres. This paper presents a novel cutting tool wear assessment for high precision turning processes using type-2 fuzzy uncertainty estimation on acoustic emission. Without understanding the exact physics of the machining process, the type-2 fuzzy logic system identifies the acoustic emission signal during the process, and its interval output set assesses the uncertainty information in the signal. The experimental study shows that the development trend of uncertainty in the acoustic emission signal corresponds to that of cutting tool wear. The estimation of uncertainties can be used for proving conformance with product specifications or for auto-control of the machine system, which is of great significance for continuous improvement in product quality, reliability and manufacturing efficiency in the machining industry. | Tool wear assessment based on type-2 fuzzy uncertainty estimation on acoustic emission |
S1568494615001398 | Early diagnosis of psychiatric conditions can be enhanced by taking eye movement behavior into account. However, implementing prediction algorithms that can assist physicians in diagnosis is a difficult task. In this paper we propose, for the first time, an automatic approach for the classification of multiple psychiatric conditions based on saccades. In particular, the goal is to classify 6 medical conditions: alcoholism, Alzheimer's disease, opioid dependence (two groups of subjects with measurements taken prior to and after administering a synthetic opioid, respectively), Parkinson's disease, and schizophrenia. Our approach integrates different feature spaces corresponding to complementary characterizations of the saccadic behavior. We define a multi-view model of saccades in which the feature representations capture characteristic temporal and amplitude patterns of saccades. Four of the most advanced current classification methods are used to discriminate among the psychiatric conditions, and leave-one-out cross-validation is used to evaluate the classifiers. Classification accuracies well above chance levels are obtained for the different classification tasks investigated. The confusion matrices reveal that it is possible to separate conditions into different groups. We conclude that using relatively simple descriptors of the saccadic behavior it is possible to simultaneously classify among 6 different types of psychiatric conditions. Conceptually, our multi-view classification method surpasses approaches that focus on statistical differences in the saccadic behavior of cases and controls because it can be used to predict unseen cases. Classification integrating different characterizations of the saccades can actually help to predict the conditions of new patients, opening the possibility of integrating automatic analysis of saccades as a practical procedure for differential diagnosis in psychiatry. | Multi-view classification of psychiatric conditions based on saccades |
S1568494615001404 | The train formation plan (TFP) determines the routing and frequency of trains, and assigns the demands to trains. In this paper, in order to consider the real-life condition of railways, a mathematical model with fuzzy costs is proposed for train formation planning in Iranian railway. In this fuzzy model, the costs are considered in three scenarios, namely optimistic, normal and pessimistic. The model is formulated based on the fixed-charge capacitated multicommodity network design problem. Since the TFP problem is NP-hard, an efficient hybrid algorithm combining local branching and relaxation induced neighborhood search methods is presented. A three-step method is applied for parameter tuning using design of experiments approach. To evaluate the efficiency and effectiveness of the proposed algorithm, the results are compared with those of the state-of-the-art optimization software. | A hybrid solution method for fuzzy train formation planning |
S156849461500143X | This paper deals with Hamilton–Jacobi–Bellman (HJB) equation based stabilized optimal control of hybrid dynamical systems (HDS). It presents a fuzzy clustering based, event-wise multiple linearized modeling approach for HDS to describe the continuous dynamics in each event. In the present work a fuzzy clustering validation approach is presented for selecting the number of linearized models that span the entire HDS. The method also describes how to obtain event-wise operating points using fuzzy membership functions, which are used to build the event-wise model bank by linearizing the first-principles model. The event-wise linearized models are used for the formulation of the optimal control law. The HJB equation is formulated using a suitable quadratic term in the objective function. By use of the direct method of Lyapunov stability, the control law is shown to be optimal with respect to the objective functional and to stabilize the event-wise linearized models. A global Lyapunov function with discrete variables is proposed, which stabilizes the HDS. The proposed modeling and control algorithms have been applied to two HDSs. The necessary theoretical and simulation experiments are presented to demonstrate the performance and validity of the proposed algorithm. | Clustering based multiple model control of hybrid dynamical systems using HJB solution |
S1568494615001441 | In this paper, we propose a novel visual tracking algorithm using the collaboration of generative and discriminative trackers under the particle filter framework. Each particle denotes a single task, and we encode all the tasks simultaneously in a structured multi-task learning manner. Then, we implement generative and discriminative trackers, respectively. The discriminative tracker considers the overall information of object to represent the object appearance; while the generative tracker takes the local information of object into account for handling partial occlusions. Therefore, two models are complementary during the tracking. Furthermore, we design an effective dictionary updating mechanism. The dictionary is composed of fixed and variational parts. The variational parts are progressively updated using Metropolis–Hastings strategy. Experiments on different challenging video sequences demonstrate that the proposed tracker performs favorably against several state-of-the-art trackers. | Object tracking via collaborative multi-task learning and appearance model updating |
S1568494615001453 | This article explores the application of DNA computing in recyclable waste paper sorting. The primary challenge in paper recycling is to obtain raw materials with the highest purity. In recycling, waste papers are segregated according to their various grades, and these are subjected to different recycling processes. Highly sorted paper streams facilitate high quality end products, while saving on processing chemicals and energy. In the industry, different sensors are used in paper sorting systems, namely, ultrasonic, lignin, gloss, stiffness, infra-red, mid-infra red, and color sensors. Different mechanical and optical paper sorting systems have been developed based on the different sensors. However, due to inadequate throughput and some major drawbacks related to mechanical paper sorting systems, the popularity of optical paper sorting systems has increased. The automated paper sorting systems offer significant advantages over the manual systems in terms of human fatigue, throughput, speed, and accuracy. This research has two objectives: (1) to use a web camera as an image sensor for the vision system in lieu of different sensors; and (2) to develop a new DNA computing algorithm based on the theme of template matching techniques for segregating recyclable waste papers according to paper grades. Using the concepts of replication and massive parallelism operations, the DNA computing algorithm can efficiently reduce the computational time of the template matching method. This is the main strength of the DNA computing algorithm in actual inspections. The algorithm is implemented by using a silicon-based computer to verify the success rate in paper grade identification. | DNA computer based algorithm for recyclable waste paper segregation |
S1568494615001465 | In this study, a novel bio-inspired metaheuristic optimization algorithm called the artificial algae algorithm (AAA), inspired by the living behaviors of microalgae (photosynthetic species), is introduced. The algorithm is based on an evolutionary process, an adaptation process and the movement of microalgae. The performance of the algorithm has been verified on various benchmark functions and a real-world design optimization problem. The CEC’05 function set was employed as the benchmark functions, and the test results were compared with those of Artificial Bee Colony (ABC), Bee Algorithm (BA), Differential Evolution (DE), Ant Colony Optimization for continuous domain (ACOR) and Harmony Search (HSPOP). The pressure vessel design optimization problem, one of the widely used optimization problems, was used as a sample real-world design optimization problem to test the algorithm. For comparison on this problem, ABC and Standard PSO (SPSO2011) were used. Mean, best and standard deviation values and convergence curves were employed for the performance analyses. Furthermore, mean square error (MSE), root mean square error (RMSE) and mean absolute percentage error (MAPE), computed from the errors of the algorithms on the functions, were used for the overall performance comparison. AAA produced successful and balanced results over different dimensions of the benchmark functions. It is a consistent algorithm with balanced search qualifications, owing to the contribution of the adaptation and evolutionary processes, the semi-random selection employed when choosing the light source in order to avoid local minima, and the way the helical movement methods balance one another. Moreover, in the tested real-world application AAA produced consistent results, showing that it is a stable algorithm. | Artificial algae algorithm (AAA) for nonlinear global optimization |
S1568494615001490 | This research investigates the scheduling and sequencing of a two-stage assembly-type flexible flow shop with dedicated assembly lines, which produces different products according to the demand requested during the planning horizon, with the aim of minimizing the maximum completion time of products. The first stage consists of several parallel machines with different component-processing speeds in site I and one machine in site II, and the second stage consists of two dedicated assembly lines. Each product requires several kinds of components of different sizes. Each component has its own structure, which leads to different assembly processing times. Products composed of only single-process components are assigned to the first assembly line, and products containing at least one two-process component are assigned to the second assembly line. Components are placed on the related dedicated assembly line in the second phase after being completed on the assigned machines in the first phase, and final products are produced by assembling the components. The main contribution of our work is the development of a new mathematical model for the flexible flow shop scheduling problem and the presentation of a new methodology for solving the proposed model. Since flexible flow shop problems are NP-hard, we propose a hybrid meta-heuristic method combining simulated annealing (SA) and imperialist competitive algorithms (ICA). We compare the results of our algorithm with those obtained by the LINGO9 software package. The various parameters and operators of the proposed meta-heuristic algorithm are discussed and calibrated by means of the Taguchi statistical technique. | Scheduling of multi-component products in a two-stage flexible flow shop |
S1568494615001507 | We propose the use of Vapnik's vicinal risk minimization (VRM) for training decision trees to approximately maximize decision margins. We implement VRM by propagating uncertainties in the input attributes into the labeling decisions. In this way, we perform a global regularization over the decision tree structure. During a training phase, a decision tree is constructed to minimize the total probability of misclassifying the labeled training examples, a process which approximately maximizes the margins of the resulting classifier. We perform the necessary minimization using an appropriate meta-heuristic (genetic programming) and present results over a range of synthetic and benchmark real datasets. We demonstrate the statistical superiority of VRM training over conventional empirical risk minimization (ERM) and the well-known C4.5 algorithm, for a range of synthetic and real datasets. We also conclude that there is no statistical difference between trees trained by ERM and using C4.5. Training with VRM is shown to be more stable and repeatable than by ERM. | The use of vicinal-risk minimization for training decision trees |
S1568494615001519 | Over half of the world's population will live in urban areas in the next decade, which will impose significant pressure on water security. The advanced management of water resources and their consumption is pivotal to maintaining a sustainable water future. To contribute to this goal, the aim of this study was to develop an autonomous and intelligent system for residential water end-use classification that could interface with customers and water business managers via a user-friendly web-based application. Water flow data collected directly from smart water meters connected to dwellings include both single (e.g., a shower event occurring alone) and combined (i.e., an event that comprises several overlapping single events) water end-use events. The authors recently developed an intelligent application called Autoflow, which served as a prototype tool to solve the complex problem of autonomously categorising residential water consumption data into a registry of single and combined events. However, this first prototype achieved an overall recognition accuracy of 85%, which is not sufficient for a commercial application. To improve this accuracy level, a larger dataset consisting of over 82,000 events from over 500 homes in Melbourne and South-east Queensland, Australia, was employed to derive a new single event recognition method employing a hybrid combination of a Hidden Markov Model (HMM), Artificial Neural Networks (ANN) and the Dynamic Time Warping (DTW) algorithm. The classified single event registry was then used as the foundation of a sophisticated hybrid ANN–HMM combined event disaggregation module, which was able to strip apart concurrently occurring end-use events. The new hybrid model's recognition accuracy ranged from 85.9% to 96.1% for single events and from 81.8% to 91.5% for combined event disaggregation, improvements of 4.9% and 8.0%, respectively, over the first prototype model. The developed Autoflow tool has far-reaching implications for enhanced urban water demand planning and management, sustained customer behaviour change through more granular water conservation awareness, and better customer satisfaction with water utility providers. | Intelligent autonomous system for residential water end use classification: Autoflow |
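Of the three techniques hybridized above, DTW is the simplest to show in isolation. A minimal distance implementation follows; the two toy flow traces standing in for end-use event signatures are illustrative assumptions:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two toy flow traces (L/min): same shower-like shape, different durations.
event_a = np.array([0, 4, 8, 8, 8, 8, 4, 0], dtype=float)
event_b = np.array([0, 4, 8, 8, 4, 0], dtype=float)
print(dtw_distance(event_a, event_b))   # small despite unequal lengths
```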
S1568494615001520 | This study proposes an adaptive Takagi-Sugeno-Kang-fuzzy (TSK-fuzzy) speed controller (ATFSC) for use in direct torque control (DTC) induction motor (IM) drives to improve their dynamic responses. The proposed controller consists of the TSK-fuzzy controller, which is used to approximate an ideal control law, and a compensating controller, which is constructed to compensate for the difference between the TSK-fuzzy controller and the ideal control law. Parameter variations and external load disturbances were considered during the design phase to ensure the robustness of the proposed scheme. The parameters of the TSK-fuzzy controller were adjusted online based on adaptive rules derived from Lyapunov stability theory. The ATFSC, fuzzy control, and PI control schemes were experimentally investigated, using the root mean square error (RMSE) performance index to evaluate each scheme. The robustness of the proposed ATFSC was verified using simulations and experiments involving varying parameters and external load disturbances. The experimental results indicate that the ATFSC scheme outperformed the other control schemes. | Design of a novel adaptive TSK-fuzzy speed controller for use in direct torque control induction motor drives |
S1568494615001532 | Surface phenomena such as corrosion, crystal growth, catalysis, adsorption and oxidation cannot be adequately comprehended without full knowledge of the surface energy of the material concerned. Despite its significance, surface energy is difficult to obtain experimentally, and the few available values are subject to a certain degree of inaccuracy due to the extrapolation of surface tension to 0 K. To cater for these difficulties, we have developed a model using a computational intelligence technique on the platform of support vector regression (SVR) to establish a database of surface energies of hexagonal close packed (HCP) metals. The SVR-based model was developed by training and testing SVR on fourteen experimental data points for periodic metals. The developed model shows accuracies of 99.08% and 100% during the training and testing phases, respectively, using the test-set cross-validation technique. The developed model was further used to obtain the surface energies of HCP metals. The surface energies obtained from the SVR-based model are closer to the experimental values than the results of the well-known existing theoretical models. The outstanding performance of this model in estimating the surface energies of HCP metals with a high degree of accuracy, in the presence of few experimental data, is a great achievement in the field of surface science because of its potential to circumvent experimental difficulties in determining the surface energies of materials. | Estimation of surface energies of hexagonal close packed metals using computational intelligence technique |
S1568494615001544 | Casualties due to intrauterine hypoxia remain a major worry during the prenatal phase, despite advanced patient monitoring with the latest science and technology. Hence, the analysis of foetal electrocardiogram (fECG) signals is vital for evaluating foetal heart status and timely recognition of cardiac abnormalities. Regrettably, even the latest technology in the field of biomedical signal processing does not yield the quality of fECG signals required by physicians. The focus of this work is to extract a non-invasive fECG signal of the highest possible quality, with the motive of supporting physicians using the intrapartum monitoring technique called STAN (ST analysis) for forecasting intrapartum foetal hypoxia. The critical difficulty is that non-invasive fECG signals recorded from the maternal abdomen are affected by several interferences, such as power line interference, baseline drift, electrode motion, muscle movement and, dominantly, the maternal electrocardiogram (mECG). A novel hybrid methodology called BANFIS (Bayesian adaptive neuro-fuzzy inference system) is proposed. BANFIS includes a Bayesian filter and an adaptive neuro-fuzzy filter for mECG elimination and nonlinear artefact removal to yield a high quality fECG signal. A Kalman filtering framework is utilized to estimate the nonlinearly transformed mECG component in the abdominal electrocardiogram (aECG). The adaptive neuro-fuzzy filter is employed to identify the nonlinearity of the transformed mECG and to align the estimated mECG signal with the maternal component in the aECG signal for cancellation. The outcomes of the investigation show that the proposed BANFIS system is valuable for the STAN system in efficiently predicting foetal hypoxia. | Superior foetal electrocardiogram signal elicitation using a novel artificial intelligent Bayesian methodology |
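The Kalman estimation step described above, tracking a maternal component inside a noisy abdominal mixture, can be illustrated with a minimal scalar Kalman filter. The random-walk state model and noise variances below are illustrative assumptions, not the BANFIS design:

```python
import numpy as np

def kalman_track(z, q=1e-4, r=0.05):
    """Scalar Kalman filter with a random-walk state model:
    x_k = x_{k-1} + w_k,  z_k = x_k + v_k."""
    x, p = 0.0, 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                      # predict
        K = p / (p + r)             # Kalman gain
        x += K * (zk - x)           # update with innovation
        p *= (1 - K)
        out[k] = x
    return out

t = np.linspace(0, 2, 1000)
mecg_like = np.sin(2 * np.pi * 1.2 * t)                 # slow maternal-like wave
aecg = mecg_like + 0.3 * np.random.default_rng(0).standard_normal(t.size)
estimate = kalman_track(aecg)
residual = aecg - estimate          # crude stand-in for the foetal component
```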
S1568494615001635 | In this paper, a bi-objective multi-product (r,Q) inventory model in which the inventory level is reviewed continuously is proposed. The aim of this work is to find the optimal values of both the order quantity and the reorder point by simultaneously minimizing the total cost and maximizing the service level of the proposed model. It is assumed that shortages can occur and that unsatisfied demand can be backordered. There is a budget limitation and a storage space constraint in the model. Given the complexity of the proposed model, several Pareto-based meta-heuristic approaches, namely multi-objective vibration damping optimization (MOVDO), multi-objective imperialist competitive algorithm (MOICA), multi-objective particle swarm optimization (MOPSO), non-dominated ranked genetic algorithm (NRGA), and non-dominated sorting genetic algorithm (NSGA-II), are applied to solve the model. In order to compare the results, several numerical examples are generated and the algorithms are then analyzed statistically and graphically. | A bi-objective continuous review inventory control model: Pareto-based meta-heuristic algorithms |
S1568494615001647 | This paper presents a new algorithm capable of improving the accuracy of a laser pointer detector used within an interactive control device system. A genetic programming based approach has been employed to develop a focus-of-attention algorithm, which works cooperatively with a genetic fuzzy system. The idea is to improve the detection of laser spots depicted in images captured by video cameras operating in home environments. The new and more accurate detection system, in combination with an environment control system, allows correct orders to be sent to home devices. The algorithm is capable of eradicating false offs, thus preventing the system from autonomously activating/deactivating appliances when orders have not really been signalled by users. Moreover, by adding self-adjusting capabilities through a genetic fuzzy system, the computer vision algorithm focuses its attention on a narrower area of the image. Extensive experimental results show that the combination of the focus-of-attention technique with dynamic thresholding and genetic fuzzy systems significantly improves the accuracy of the laser-spot detection system while maintaining extremely low false off rates in comparison with previous approaches. | Self-adjusting focus of attention in combination with a genetic fuzzy system for improving a laser environment control device system |
S1568494615001672 | Due to the complexity and uncertainty of the objective world, as well as the limitations of human understanding, it is difficult to employ only a single type of uncertainty method to deal with real-life decision-making problems, especially problems involving conflicts. On the other hand, by incorporating the advantages of various theories of uncertainty, one can expect to develop a more powerful hybrid method for soft decision making and to solve such problems more effectively. In view of this, in this paper the ideas and methods of intuitionistic fuzzy sets and rough sets are used to construct a novel intuitionistic fuzzy rough set model. Corresponding to the fact that the decision-making information system of rough sets is an intuitionistic fuzzy information system, our method defines a conflict distance using the idea of measuring intuitionistic fuzzy similarity and introduces it into rough set models, leading to the development of our intuitionistic fuzzy rough set model. After that, we investigate the properties of the model, introduce a novel tool for conflict analysis based on our hybrid model, and employ this new tool to describe and resolve a real-life conflict problem. | Intuitionistic fuzzy rough set model based on conflict distance and applications |
S1568494615001684 | In this paper, a novel algorithm based on the bacterial colony chemotaxis (BCC) algorithm is developed to solve multi-objective optimization problems. The main objective of the paper is to improve the performance of BCC. Hence, the main work is to add three improvements, which are improved adaptive grid, oriented mutation based on grid and adaptive external archive, in order to improve the convergence performance on multi-objective optimization problems and the distribution of solutions. This paper also presents a first and simple convergence analysis of the general Pareto-based MOBCC. The proposed algorithm is validated using 12 benchmark problems and four performance measures are implemented to compare its performance with the MOBCC algorithm, the NSGA-II algorithm, and the MOEA/D algorithm. The simulation results confirmed the effectiveness of the algorithm. | An improved multi-objective bacteria colony chemotaxis algorithm and convergence analysis |
S1568494615001696 | Using different shapes of recognition regions in Artificial Immune Systems (AIS) is not a new issue. Ellipsoidal shapes in particular seem intriguing, as they have also been used very effectively in other shape space-based classification methods. Some studies in AIS have generated ellipsoidal detectors, but they are restricted in their detector generation scheme, namely Genetic Algorithms (GA). In this study, an AIS with ellipsoidal recognition regions was developed, drawing inspiration from the clonal selection principle, and an effective search procedure for ellipsoidal regions was applied. Performance evaluation tests were conducted, and application results on some real-world classification problems taken from the UCI machine learning repository were obtained. A comparison with GA was also made on some of these problems. Very effective and comparatively good classification ratios were recorded. | On the evolution of ellipsoidal recognition regions in Artificial Immune Systems |
S1568494615001702 | In this paper, ant colony optimization for continuous domains (ACOR) based integer programming is employed for size optimization in a hybrid photovoltaic (PV)–wind energy system. ACOR is a direct extension of ant colony optimization (ACO) and a significant ant-based algorithm for continuous optimization. In this setting, the variables are first treated as real and then rounded in each iteration. The numbers of solar panels, wind turbines and batteries are selected as the decision variables of the integer programming problem. The objective function of the PV–wind system design is the total design cost, the sum of the total capital cost and the total maintenance cost, which should be minimized. The optimization is performed separately for three renewable energy systems: the hybrid system, solar stand-alone and wind stand-alone. A complete data set, a regular optimization formulation and ACOR based integer programming are the main features of this paper. The optimization results show that this method gives the best results in just a few seconds. The results are also compared with those of other artificial intelligence (AI) approaches and a conventional optimization method; they are very promising and prove that the proposed approach outperforms the others in terms of reaching an optimal solution and speed. | Size optimization for hybrid photovoltaic–wind energy system using ant colony optimization for continuous domains based integer programming |
S1568494615001714 | This paper proposes a novel fuzzy soft set approach to decision making based on grey relational analysis and the Dempster–Shafer theory of evidence. First, the uncertain degrees of the various parameters are determined via grey relational analysis, which is applied to calculate the grey mean relational degree. Second, suitable mass functions of the different independent alternatives with different parameters are given according to the uncertain degree. Third, to aggregate the alternatives into a collective alternative, Dempster's rule of evidence combination is applied. Finally, the alternatives are ranked and the best alternatives are obtained. The effectiveness and feasibility of this approach are demonstrated by comparison with the mean potentiality approach: while the performance measure of this approach is the same as that of the mean potentiality approach, the belief measure of the whole uncertainty falls from 0.4723 to 0.0782 (resp. 0.3821 to 0.0069) in the example of Section 5 (resp. Section 6). | A novel fuzzy soft set approach in decision making based on grey relational analysis and Dempster–Shafer theory of evidence |
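Dempster's rule of combination, the aggregation step applied above, is easy to show concretely. The two mass functions over alternatives {a, b} below are illustrative toy values:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (A, w1), (B, w2) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    k = 1.0 - conflict                      # normalization constant
    return {A: w / k for A, w in combined.items()}

a, b, ab = frozenset("a"), frozenset("b"), frozenset("ab")
m1 = {a: 0.6, b: 0.1, ab: 0.3}             # evidence from parameter 1
m2 = {a: 0.5, b: 0.2, ab: 0.3}             # evidence from parameter 2
print(dempster_combine(m1, m2))            # {a}: ~0.759, {b}: ~0.133, {a,b}: ~0.108
```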
S1568494615001726 | Engineering problems presenting themselves in a multiobjective setting have become commonplace in most industries. In such situations the decision maker (DM) requires several solution options prior to selecting the best or most attractive solution with respect to the current industrial circumstances. The weighted sum scalarization approach was employed in this work in conjunction with three metaheuristic algorithms: particle swarm optimization (PSO), differential evolution (DE) and an improved DE algorithm (GTDE) enhanced using ideas from evolutionary game theory. These methods are used to generate the approximate Pareto frontier of the nano-CMOS voltage-controlled oscillator (VCO) design problem. Comparative studies were then carried out against the standard DE approach. The quality of the solutions across the Pareto frontiers obtained using these algorithms was examined using the hypervolume indicator (HVI). | Multiobjective design optimization of a nano-CMOS voltage-controlled oscillator using game theoretic-differential evolution |
S1568494615001738 | This paper presents the first heuristic method for solving the satisfiability problem in the logic with approximate conditional probabilities. This logic is very suitable for representing and reasoning with uncertain knowledge and for modeling default reasoning. The solution space consists of variables, which are arrays of 0s and 1s, and the associated probabilities. These probabilities belong to a recursive non-Archimedean Hardy field which contains all rational functions of a fixed positive infinitesimal. Our method is based on the bee colony optimization meta-heuristic. The proposed procedure chooses variables from the solution space and determines their probabilities by combining other fast heuristics for solving the resulting linear system of inequalities. Experimental evaluation shows a high percentage of success in proving the satisfiability of randomly generated formulas. We have also shown the great advantage of using a heuristic approach compared to a standard linear solver. | Bee colony optimization for the satisfiability problem in probabilistic logic |
S156849461500174X | Designing a scalable routing protocol with prolonged network lifetime for a wireless sensor network (WSN) is a challenging task. A WSN consists of a large number of power-, communication- and computation-constrained inexpensive nodes. It is difficult to replace or recharge the battery of a WSN node operated in a hostile environment. Cluster-based routing is one technique to provide prolonged network lifetime along with scalability. This paper proposes a technique for cluster formation derived from the grid-based method. We also propose a new decentralized cluster head (CH) election method based on Bollinger Bands. Bollinger Bands comprise an Upper Bollinger Band and a Lower Bollinger Band, both of which are extremely reactive to any change in their inputs; we use this property to elect the CH. Simulation results show significant improvement in network lifetime in comparison with other decentralized and ant-based algorithms. | A new Bollinger Band based energy efficient routing for clustered wireless sensor network
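Bollinger Bands themselves are just a mean plus or minus a multiple of the standard deviation. The sketch below computes the bands over node residual energies and uses them in a toy cluster-head election; the specific election rule (prefer a node above the upper band, fall back to the max-energy node) is an illustrative stand-in, since the abstract does not spell out the exact criterion.

```python
import statistics

def bollinger_bands(values, k=2.0):
    """Upper/lower Bollinger Bands over a set of observations
    (here: residual energies of nodes in one grid cell)."""
    mean = statistics.mean(values)
    std = statistics.pstdev(values)
    return mean + k * std, mean - k * std

def elect_cluster_head(nodes):
    """Toy decentralized CH election: prefer nodes whose residual
    energy exceeds the upper band; otherwise take the max-energy node.
    This rule is assumed for illustration, not taken from the paper."""
    energies = [e for _, e in nodes]
    upper, _ = bollinger_bands(energies)
    above = [(nid, e) for nid, e in nodes if e >= upper]
    pool = above if above else nodes
    return max(pool, key=lambda n: n[1])[0]

nodes = [("n1", 0.82), ("n2", 0.75), ("n3", 0.93), ("n4", 0.60)]
print(elect_cluster_head(nodes))
```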
S1568494615001775 | Urban water distribution systems hold a critical and strategic position in preserving public health and industrial growth. Despite the ubiquity of these urban systems, aging infrastructure, and increased risk of terrorism, decision support models for a timely and adaptive contamination emergency response still remain at an undeveloped stage. Emergency response is characterized as a progressive, interactive, and adaptive process that involves parallel activities of processing streaming information and executing response actions. This study develops a dynamic decision support model that adaptively simulates the time-varying emergency environment and tracks the changing best health protection response measures at every stage of an emergency in real time. Feedback mechanisms between the contaminated network, emergency managers, and consumers are incorporated in a dynamic simulation model to capture the time-varying characteristics of an emergency environment. An evolutionary-computation-based dynamic optimization model is developed to adaptively identify time-dependent optimal health protection measures during an emergency. This dynamic simulation–optimization model treats perceived contaminant source attributes as time-varying parameters to account for perceived contamination source updates as more data stream in over time. Performance of the developed dynamic decision support model is analyzed and demonstrated using a mid-size virtual city that resembles the dynamics and complexity of real-world urban systems. This adaptive emergency response optimization model is intended to be a major component of an all-inclusive cyberinfrastructure for efficient contamination threat management, which is currently under development. | A dynamic simulation–optimization model for adaptive management of urban water distribution system contamination threats
S1568494615001787 | Due to the complex structure of interval type-2 fuzzy logic controllers (IT2 FLCs), using them in real-time applications can be computationally expensive. To facilitate real-time implementation of these controllers, hardware with parallel processing abilities is recommended; field-programmable gate arrays (FPGAs) are one class of such hardware. In this paper, we design and implement on hardware four different inference mechanisms for IT2 FLCs that have recently been introduced in the literature: the Karnik–Mendel (KM) algorithms, the Wu–Mendel (WM) uncertainty bounds, the Nie–Tan (NT) method and the Biglarbegian–Melek–Mendel (BMM) method. We first demonstrate how the proposed structures of the IT2 FLCs can be implemented in software; next, we propose architectures for implementing these IT2 FLCs on hardware. We performed simulations to compare the performance of the IT2 FLCs. To assess the controllers' resource usage in real time we used four indicators: the number of DSP48A1s, MUXCYs, slice registers and slice LUTs. The NT and BMM controllers require significantly fewer resources than the other engines. While the controller that uses KM as its inference engine uses fewer resources in terms of DSP48A1s, it consumes a considerable amount of other resources compared to the WM controller. Finally, the transient responses of the controllers were compared in terms of rise time and settling time; the controllers with the NT and BMM inference engines have faster closed-loop responses than those using WM and KM. The results presented herein provide researchers and engineers with better insight into designing the most suitable IT2 FLCs, enabling their hardware implementation for plants requiring fast response and, ultimately, real-time operation. | Hardware implementation and performance comparison of interval type-2 fuzzy logic controllers for real-time applications
S1568494615001830 | Multi-channel communication in a wireless mesh network (WMN) equipped with multi-radio routers can significantly enhance the network capacity. Channel allocation, power control and routing are three main issues affecting the performance of multi-channel multi-radio WMNs. In this paper, the joint optimization of channel allocation, power control and routing under the signal-to-interference-and-noise ratio (SINR) model for multi-channel multi-radio WMNs is investigated and proven to be NP-hard. To the best of our knowledge, no optimal polynomial-time solutions have been proposed in the previous literature. To tackle this problem, we apply bio-inspired optimization techniques for channel allocation and power control, and use linear programming for routing optimization. To reflect the cross-layer interaction among these three issues, the routing optimization result is defined as the fitness value of a chromosome in the bio-inspired optimization. Further, we propose an effective joint optimization framework in which two representative bio-inspired optimization methods (the genetic algorithm and the particle swarm optimization algorithm) are hybridized to enhance the searching ability. The detailed evolution processes for both the genetic algorithm and the particle swarm optimization algorithm are demonstrated. Extensive simulation results show that the proposed algorithm converges fast and approaches the sub-optimal solution effectively. | Joint topology control and routing for multi-radio multi-channel WMNs under SINR model using bio-inspired techniques
S1568494615001842 | The close–open vehicle routing problem is a realistic variant of the "classical" vehicle routing problem in which routes can be open or closed, i.e. the vehicles are not all required to return to the depot after completing their service. This variant models a planning situation that is now standard practice in business: companies contract their deliveries to other companies that hire vehicles, and payment is made based on the distance covered by the vehicles. Information on parameters in real-world situations is also imprecise and must be included in the optimization model and method; since customer demands and travel times are imprecise, capacity and time window constraints are considered flexible and modelled as fuzzy constraints. The aims of this paper are to formulate a model of this novel variant with time windows and imprecise constraints, and to propose a fuzzy optimization approach and a hybrid metaheuristic for its solution. The full proposal is applied to a real route planning problem with outsourcing, obtaining promising practical results. | An ACO hybrid metaheuristic for close–open vehicle routing problems with time windows and fuzzy constraints
S1568494615001854 | Since Atanassov presented the intuitionistic fuzzy set (IFS) in 1983, a great amount of extended work has been produced by experts in the IFS field, and the more formal paper developed from the first proposal and presented in 1986 had been cited over 4000 times in Google Scholar by November 6, 2014. However, the development track of this discipline has not received extensive attention among its scholars. Therefore, in this paper, we determine the development track of this discipline based on statistics and network analysis methodologies, as a new attempt. Note that we only take citation information on IFSs up to November 6, 2014 into account, so all of the data in this paper are time-restricted. We derive a historical graph of the development course of IFS and use basic statistics to identify influential journals, authors, etc. We then analyze a small set of influential literature based on SNA (social network analysis) theory to figure out the position of several cardinal IFS papers. As the results suggest, Atanassov [1] is the most influential paper in the IFS field: most IFS literature cites this paper, and on its basis an increasing number of scholars combine other subjects with the IFS subject. | Researching the development of Atanassov intuitionistic fuzzy set: Using a citation network analysis
S1568494615001866 | The objective of the present work is to develop a method that is able to automatically determine mental states of vigilance, i.e., a person's state of alertness. Such a task is relevant to diverse domains where a person is expected or required to be in a particular state of mind; for instance, pilots and medical staff are expected to be in a highly alert state, and the proposed method could help detect possible deviations from this expected state. This work poses a binary classification problem where the goal is to distinguish between a "relaxed" state and a baseline state ("normal") from the study of electroencephalographic (EEG) signals collected with a small number of electrodes. The EEG of 58 subjects in the two alertness states (116 records) was collected via a cap with 58 electrodes. After a data validation step, 19 subjects were retained for further analysis. A genetic algorithm was used to select a subset of electrodes. Common spatial pattern (CSP) coupled with linear discriminant analysis (LDA) was used to build a decision rule and thus predict the alertness of the subjects. Different subset sizes were investigated, and the best compromise between the number of selected electrodes and the quality of the solution was obtained with 9 electrodes. Even though the present approach is costly in computation time (the GA search), it allows the construction of a decision rule that provides an accurate and fast prediction of the alertness state of an unseen individual. | EEG classification for the detection of mental states
S1568494615001878 | The grouping of pixels based on some similarity criterion is called image segmentation. In this paper the problem of color image segmentation is treated as a clustering problem, and a fixed-length genetic algorithm (GA) is used to handle it. The effectiveness of a GA depends on the objective function (fitness function) and the initialization of the population. A new objective function is proposed to evaluate the quality of the segmentation and the fitness of a chromosome. In a fixed-length genetic algorithm the chromosomes have the same length, which is normally set by the user. Here, a self-organizing map (SOM) is used to determine the number of segments in order to set the length of a chromosome automatically. An opposition-based strategy is adopted for the initialization of the population in order to diversify the search process. In some cases the proposed method makes small regions of an image separate segments, which leads to noisy segmentation; a simple ad hoc mechanism is devised to refine such noisy segmentations. The qualitative and quantitative results show that the proposed method performs better than the state-of-the-art methods. | Genetic algorithm and self organizing map based fuzzy hybrid intelligent method for color image segmentation
S156849461500188X | In this study, a dynamic screening strategy is proposed to discriminate subjects with autistic spectrum disorder (ASD) from healthy controls. ASD is a neurodevelopmental disorder that disrupts normal patterns of connectivity between brain regions, and the potential use of such abnormality for autism screening is investigated here. The connectivity patterns are estimated from electroencephalogram (EEG) data collected from 8 brain regions under various mental states. The EEG data of 12 healthy controls and 6 autistic children (age-matched, 7–10 years) were collected during eyes-open and eyes-closed resting states as well as when subjects were exposed to affective faces (happy, sad and calm). Subsequently, the subjects were classified as autistic or healthy based on their brain connectivity patterns using pattern recognition techniques. The performance of the proposed system is evaluated separately for each mental state. The results show higher recognition rates using functional connectivity features when compared against other existing feature extraction methods. | Dynamic screening of autistic children in various mental states using pattern of connectivity between brain regions
S1568494615001891 | The original negative selection algorithm (NSA) has the disadvantages that many "black holes" cannot be detected and excessive invalid detectors are generated. To overcome these defects, this paper improves the detection performance of NSA and presents a bidirectional inhibition optimization r-variable negative selection algorithm (BIORV-NSA). The proposed algorithm includes a self set edge inhibition strategy and a detector self-inhibition strategy. The self set edge inhibition strategy defines a generalized radius for the self individual area, making the self individual radius dynamically variable; to a certain extent, critical antigens close to the self individual area are recognized and more non-self space is covered. The detector self-inhibition strategy, aimed at mutual cross-coverage among mature detectors, eliminates detectors that are recognized by other mature detectors and avoids the production of excessive invalid detectors. Experiments on an artificially generated data set and two standard real-world data sets from UCI are conducted to verify the performance of BIORV-NSA against NSA and R-NSA. The experimental results demonstrate that the proposed BIORV-NSA algorithm can cover more non-self space, greatly improve detection rates and obtain better detection performance using fewer mature detectors. | BIORV-NSA: Bidirectional inhibition optimization r-variable negative selection algorithm and its application
S156849461500191X | The support vector machine (SVM) is sensitive to outliers, which reduces its generalization ability. This paper presents a novel support vector regression (SVR) that combines fuzzification theory, an inconsistency matrix and a neighbors match operator to address this critical issue. The fuzzification method is exploited to assign similarities on the input space and on the output response to each pair of training samples. The inconsistency matrix is used to calculate the weights of the input variables, followed by searching for outliers through a novel neighborhood matching algorithm and then eliminating them. Finally, the processed data are sent to the original SVR and the prediction results are acquired. A simulation example and three real-world applications demonstrate the proposed method on data sets with outliers. | A novel support vector regression for data set with outliers
S1568494615001921 | The node placement problem involves positioning and configuring infrastructure for wireless networks. Applied to next generation networks, it establishes a new wireless architecture able to integrate heterogeneous components that can collaborate and exchange data; the heterogeneity of wireless networks makes the problem even more intractable. This paper presents a novel multi-objective node placement problem that optimizes four objectives concurrently: maximizing communication coverage, minimizing the active structures' costs, maximizing the total capacity bandwidth and minimizing the noise level in the network. Known to be NP-hard, the problem can be approached by applying heuristics, mainly for large problem instances. As the number of nodes to place is not determined beforehand, we propose to apply a multi-objective variable-length genetic algorithm (VLGA) that simultaneously searches for the optimal number, positions and nature of heterogeneous nodes and communication devices. The performance of the VLGA is highlighted through the implementation of a decision support system (DSS) applied to the maritime surveillance problem using real data instances. We compare the proposed algorithm with an existing multi-objective model from the literature in order to validate its effectiveness in dealing with heterogeneous components. The results show that the proposed model fits the network architecture constraints well, with a better balance between the objectives applied to the surveillance problem. | A genetic algorithm based decision support system for the multi-objective node placement problem in next wireless generation network
S1568494615001933 | This paper proposes a novel optimization algorithm inspired by the motion of ions in nature. The proposed algorithm mimics the attraction and repulsion of anions and cations to perform optimization. It is designed to have as few tuning parameters as possible, low computational complexity, fast convergence, and high local optima avoidance. The performance of this algorithm is benchmarked on 10 standard test functions and compared to four well-known algorithms in the literature. The results demonstrate that the proposed algorithm delivers very competitive results and has merits in solving challenging optimization problems. | Ions motion algorithm for solving optimization problems
S1568494615001945 | This paper presents a performance enhancement scheme, based on particle swarm optimization (PSO), for the recently developed extreme learning machine (ELM) applied to classifying power system disturbances. Learning time is an important factor when designing any computationally intelligent algorithm for classification. The ELM is a single-hidden-layer neural network with good generalization capabilities and extremely fast learning capacity; its input weights are chosen randomly and its output weights are calculated analytically. However, the ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. One of the advantages of the ELM over other methods is that the only parameter the user must properly adjust is the number of hidden nodes, and optimal selection of this parameter can improve performance. In this paper, a hybrid optimization mechanism is proposed which combines discrete-valued PSO with continuous-valued PSO to optimize the input feature subset selection and the number of hidden nodes, enhancing the performance of the ELM. The experimental results showed that the proposed algorithm is faster and more accurate in discriminating power system disturbances. | An integrated PSO for parameter determination and feature selection of ELM and its application in classification of power system disturbances
S1568494615001957 | Intelligent technology is developing quickly to predict and respond to the actions of electric power users so as to maintain a reliable and secure electricity infrastructure. This paper proposes a new on-line training network called distributed hyper-spherical ARTMAP (dHS-ARTMAP) to forecast electricity load. The new model constructs a more compact network structure and largely alleviates the proliferation problem that Fuzzy ARTMAP models usually encounter. Experiments on short-term electricity load forecasting are conducted with data from Queensland, Australia, and the results are compared with other methods. The results prove the effectiveness of the dHS-ARTMAP network and show it to be a promising alternative for practical use. | Distributed HS-ARTMAP and its forecasting model for electricity load
S1568494615001969 | The genetic algorithm (GA) is a population-based meta-heuristic global optimization technique for dealing with complex problems with very large search spaces. Population initialization is a crucial task in GAs because it plays a vital role in the convergence speed, the exploration of the problem search space and the quality of the final optimal solution. Though the importance of problem-specific population initialization in GAs is widely recognized, it is hardly addressed in the literature. In this paper, different population seeding techniques for the permutation-coded genetic algorithm, such as random, nearest neighbor (NN), gene bank (GB), sorted population (SP) and selective initialization (SI), along with three newly proposed ordered-distance-vector-based initialization techniques, have been extensively studied. The ability of each population seeding technique has been examined in terms of a set of performance criteria, such as computation time, convergence rate, error rate, average convergence, convergence diversity, nearest-neighbor ratio, average distinct solutions and distribution of individuals. The traveling salesman problem (TSP), a well-known hard combinatorial problem, is chosen as the testbed, and the experiments are performed on large benchmark TSP instances obtained from the standard TSPLIB. The experimental analyses are carried out using statistical tools to characterize the unique performance of each population seeding technique, and the best-performing techniques are identified based on the defined assessment criteria and the nature of the application. | Performance analyses over population seeding techniques of the permutation-coded genetic algorithm: An empirical study based on traveling salesman problems
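Of the seeding techniques listed, nearest-neighbor seeding is the simplest to illustrate. Below is a small Python sketch that builds a few NN tours from random start cities and fills the rest of the population with random permutations; the 20% NN fraction is an assumed choice for the example, not a figure from the study.

```python
import math
import random

def nearest_neighbor_tour(cities, start=0):
    """Greedy nearest-neighbor tour used to seed one GA individual."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: math.dist(last, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def seed_population(cities, size, nn_fraction=0.2):
    """Mix of NN-seeded and random permutations, one common seeding recipe."""
    n_nn = int(size * nn_fraction)
    pop = [nearest_neighbor_tour(cities, start=random.randrange(len(cities)))
           for _ in range(n_nn)]
    pop += [random.sample(range(len(cities)), len(cities))
            for _ in range(size - n_nn)]
    return pop

cities = [(random.random(), random.random()) for _ in range(30)]
print(len(seed_population(cities, size=50)))
```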
S1568494615001970 | In this paper, multiresolution local binary pattern (MRLBP) variant based texture feature extraction techniques are proposed to categorize hardwood species into their various classes. Initially, the discrete wavelet transform (DWT) is used to decompose each image up to 7 levels using the Daubechies wavelet (db2) as the decomposition filter. Subsequently, six texture feature extraction techniques (the local binary pattern and its variants) are employed to obtain substantial features of these images at different levels. Three classifiers, namely linear discriminant analysis (LDA) and linear and radial basis function (RBF) kernel support vector machines (SVMs), have been used to classify the images of hardwood species. Thereafter, the classification results obtained from conventional and MRLBP variant based texture feature extraction techniques with different classifiers are compared. For the 10-fold cross-validation approach, texture features acquired using the discrete wavelet transform based uniform completed local binary pattern (DWTCLBPu2) feature extraction technique produced the best classification accuracy of 97.40±1.06% with the linear SVM classifier. This classification accuracy was achieved at the 3rd level of image decomposition using the full feature (1416) dataset. Further, the dimension of the texture features was reduced (to 325 features) by principal component analysis (PCA), and the best classification accuracy of 97.87±0.82% for DWTCLBPu2 at the 3rd level of image decomposition was obtained using the LDA classifier. The DWTCLBPu2 texture features also established superiority among the MRLBP techniques with reduced-dimension features for a database randomly divided into fixed training and testing ratios. | Multiresolution local binary pattern variants based texture feature extraction techniques for efficient classification of microscopic images of hardwood species
S1568494615001982 | Multi-objective optimization is a difficult problem and a research focus in the field of science and engineering. This paper presents a novel multi-objective optimization algorithm called the elite-guided multi-objective artificial bee colony (EMOABC) algorithm. In our proposal, fast non-dominated sorting and a population selection strategy are applied to measure the quality of solutions and select the better ones. An elite-guided solution generation strategy is designed to exploit the neighborhood of existing solutions under the guidance of the elite. Furthermore, a novel fitness calculation method is presented to compute the selection probability for onlookers. The proposed algorithm is validated on benchmark functions in terms of four indicators: GD, ER, SPR, and TI. The experimental results show that the proposed approach can find solutions with competitive convergence and diversity within a shorter period of time compared with traditional multi-objective algorithms. Consequently, it can be considered a viable alternative for solving multi-objective optimization problems. | Elite-guided multi-objective artificial bee colony algorithm
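Fast non-dominated sorting, the ranking step used by EMOABC, is a well-defined procedure. The following sketch implements it for a minimization problem; it shows only this one component, not the full EMOABC algorithm.

```python
def fast_non_dominated_sort(objs):
    """Return fronts of solution indices (NSGA-II style) for minimization.
    `objs` is a list of objective tuples, one per solution."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    dom_count = [0] * n                    # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if all(a <= b for a, b in zip(objs[i], objs[j])) and objs[i] != objs[j]:
                dominated_by[i].append(j)      # i dominates j
            elif all(b <= a for a, b in zip(objs[i], objs[j])) and objs[i] != objs[j]:
                dom_count[i] += 1              # j dominates i
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

print(fast_non_dominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]))  # [[0, 1, 2], [3]]
```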
S1568494615001994 | This paper presents the use of a new meta-heuristic technique, namely the gray wolf optimizer (GWO), which is inspired by the leadership and hunting behaviors of gray wolves, to solve the optimal reactive power dispatch (ORPD) problem. The ORPD problem is a well-known nonlinear optimization problem in power systems. GWO is utilized to find the best combination of control variables, such as generator voltages, tap-changing transformer ratios and the amount of reactive compensation devices, so as to minimize power loss and voltage deviation. In this paper, two case studies, the IEEE 30-bus system and the IEEE 118-bus system, are used to show the effectiveness of the GWO technique compared to other techniques available in the literature. The results of this research show that GWO is able to achieve lower power loss and voltage deviation than those determined by other techniques. | Using the gray wolf optimizer for solving optimal reactive power dispatch problem
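The canonical GWO position update (alpha, beta and delta wolves guiding the pack, with the coefficient a decaying from 2 to 0) is published and easy to sketch. The minimal Python version below minimizes a sphere function; the ORPD objective, constraints and control-variable encoding from the paper are not reproduced here.

```python
import random

def gwo(obj, dim, bounds, n_wolves=20, n_iter=200):
    """Bare-bones grey wolf optimizer for minimization."""
    lo, hi = bounds
    wolves = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(n_iter):
        wolves.sort(key=obj)
        # Copy the three leaders so in-place updates don't shift the guidance.
        alpha, beta, delta = (w[:] for w in wolves[:3])
        a = 2.0 * (1 - t / n_iter)  # decays linearly from 2 to 0
        for w in wolves:
            for d in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * random.random() - 1)
                    C = 2 * random.random()
                    pos += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, pos / 3.0))  # average of the three pulls
    wolves.sort(key=obj)
    return wolves[0], obj(wolves[0])

sphere = lambda x: sum(v * v for v in x)
print(gwo(sphere, dim=5, bounds=(-10, 10)))
```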
S1568494615002008 | In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal we tested our method with several models and performed a study of the neural network parameterization, computing the quality of representation and comparing the results with other neural methods such as the growing neural gas and Kohonen maps, as well as with classical methods like Voxel Grid. We also reconstructed models acquired by low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration: we have redesigned and implemented the NG learning algorithm to fit onto Graphics Processing Units using CUDA. A speed-up of 180× over the sequential CPU version is obtained. | 3D model reconstruction using neural gas accelerated on GPU
S156849461500201X | Unplanned dilution and ore-loss are the most critical challenges in underground stoping operations. These problems are a main cause of mine closure and directly influence the productivity of underground stope mining and the profitability of the entire operation. Despite awareness of the significance of unplanned dilution and ore-loss, their prediction remains unresolved, as they occur through complex mechanisms and causative factors. Current management practices rely primarily on reconciliation data from similar stopes and the intuition of expert mining engineers. In this study, an innovative unplanned dilution and ore-loss (uneven break: UB) management system is established using a neuro-fuzzy system. The aim of the proposed decision support system is to overcome the UB phenomenon in underground stope blasting by providing quantitative prediction of unplanned dilution and ore-loss together with practical recommendations. To this end, a UB prediction system was developed with an artificial neural network (ANN) using 1076 datasets covering 10 major UB causative factors collected from three underground stoping mines in Western Australia. Subsequently, a UB consultation system was established via a fuzzy expert system (FES) based on survey results from fifteen underground-mining experts. The UB prediction and consultation systems were combined into one concurrent neuro-fuzzy system named the 'uneven break optimiser'. Because the current UB prediction practice in the investigated mines was highly unsatisfactory, with a correlation coefficient (R) of 0.088, and was limited to unplanned dilution only, the performance of the proposed UB prediction system (R of 0.719) is a remarkable achievement. The uneven break optimiser can be directly employed to improve underground stoping production, and this tool will be beneficial not only for underground stope planning and design but also for production management. | Decision support system of unplanned dilution and ore-loss in underground stoping operations using a neuro-fuzzy system
S1568494615002021 | The connection frame is an essential component for implementing high-acceleration, ultra-precision positioning motion in a macro–micro motion platform. The performance of the positioning system is mainly affected by two sources: thermal–mechanical deformation and the natural frequency of the connection frame. In this paper, the multi-objective optimization and design of the connection frame is formulated and discussed comprehensively, considering the effects of thermal–mechanical deformation and the natural frequency of the system. The optimization objectives for the connection structure are minimizing the displacement when thermal–mechanical deformation occurs, maximizing the natural frequency to avoid system resonance, and minimizing the weight of the connection structure to enable high-acceleration motion. Using the response surface method (RSM) combined with the finite element method (FEM), the objective function is formulated as a prediction model. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is used to solve the optimization model and obtain matched parameters. A cantilever beam example is tested to examine the validity of the methodology, and the results from the prediction model agree well with those from the theoretical model. Through this methodology, a high-performance connection structure with optimal parameters is obtained, and its natural frequency and weight meet the design expectations. | Multi-objective optimization design of a connection frame in macro–micro motion platform
S1568494615002033 | Activity recognition in smart homes enables the remote monitoring of the elderly and patients. In healthcare systems, the reliability of a recognition model is of high importance. A limited amount of training data and an imbalanced number of activity instances result in over-fitting, making recognition models inconsistent. In this paper, we propose an activity recognition approach that integrates distance minimization (DM) and probability estimation (PE) to improve the reliability of recognition. DM uses the distances of instances from the mean representation of each activity class for label assignment. DM is useful in avoiding decision bias towards activity classes with a majority of instances; however, DM can result in over-fitting. PE, on the other hand, has good generalization ability: it measures the probability of correct assignments from the obtained distances, but requires a large amount of training data. We apply data oversampling to improve the representation of classes with fewer instances. A support vector machine (SVM) is applied to combine the outputs of both DM and PE, since the SVM performs better with imbalanced data and further improves the generalization ability of the approach. The proposed approach is evaluated using five publicly available smart home datasets. The results demonstrate better performance of the proposed approach compared to state-of-the-art activity recognition approaches. | Integration of discriminative and generative models for activity recognition in smart homes
S1568494615002045 | This study addresses a capacitated facility location and task allocation problem for a multi-echelon supply chain facing risky demands. Two- and three-echelon networks are considered to maximize profit. The study represents the problem by a bi-level stochastic programming model. The revised ant algorithm proposed in this study improves the existing ant algorithm with a new design of heuristic desirability and efficient greedy heuristics. A set of computational experiments is reported, both to fine-tune the parameters of the algorithm and to evaluate its performance on the proposed problem. The experiments reveal that the proposed algorithm can reach 95–99% of the optimal solution under risky demands while consuming, for large-sized problems, only one-thousandth of the computational time of an optimization-based tool. | A revised ant algorithm for solving location–allocation problem with risky demand in a multi-echelon supply chain network
S1568494615002057 | The artificial bee colony (ABC) algorithm has several characteristics that make it more attractive than other bio-inspired methods: it is simple, it uses few control parameters, and its convergence is independent of the initial conditions. In this paper, a novel artificial bee colony based maximum power point tracking (MPPT) algorithm is proposed. The developed algorithm not only overcomes the common drawbacks of conventional MPPT methods, but also yields a simple and robust MPPT scheme. A co-simulation methodology combining Matlab/Simulink™ and Cadence/Pspice™ is used to verify the effectiveness of the proposed method and compare its performance, under dynamic weather conditions, with that of the particle swarm optimization (PSO) based MPPT algorithm. Moreover, a laboratory setup has been realized and used to experimentally validate the proposed ABC-based MPPT algorithm. Simulation and experimental results show the satisfactory performance of the proposed approach. | Artificial bee colony based algorithm for maximum power point tracking (MPPT) for PV systems operating under partial shaded conditions
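The underlying ABC loop (employed, onlooker and scout phases) is standard. Below is a compact maximization sketch applied to a toy one-dimensional "P-V curve"; the curve, bounds and parameter values are invented stand-ins, not the paper's PV model or its MPPT controller.

```python
import random

def abc_optimize(obj, dim, bounds, n_food=10, limit=20, n_iter=100):
    """Compact artificial bee colony loop for maximization."""
    lo, hi = bounds
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food

    def neighbor(i):
        # Perturb one dimension toward/away from a random other food source.
        k = random.choice([j for j in range(n_food) if j != i])
        d = random.randrange(dim)
        cand = foods[i][:]
        cand[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        cand[d] = min(hi, max(lo, cand[d]))
        return cand

    def try_improve(i):
        cand = neighbor(i)
        if obj(cand) > obj(foods[i]):
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):                 # employed bees
            try_improve(i)
        fits = [obj(f) for f in foods]
        base = min(fits)
        weights = [f - base + 1e-9 for f in fits]
        for _ in range(n_food):                 # onlooker bees (fitness-proportional)
            i = random.choices(range(n_food), weights=weights)[0]
            try_improve(i)
        for i in range(n_food):                 # scout bees replace exhausted sources
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return max(foods, key=obj)

# Toy unimodal "power vs. duty cycle" curve peaking near 0.55 (illustrative only).
pv_power = lambda d: -((d[0] - 0.55) ** 2)
print(abc_optimize(pv_power, dim=1, bounds=(0.0, 1.0)))
```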
S1568494615002069 | TCP Vegas is a source algorithm that offers relatively good performance in Internet congestion control, but it has some problems that seriously impact that performance. Rerouting is one of them: when the route of a connection changes and the round trip time increases, Vegas misinterprets this as a result of network congestion and consequently decreases its sending rate. As another important problem, when a flow joins the network later than other flows and faces congested queues, it wrongly takes the measured round trip time as its initial BaseRTT; while the other flows decrease their sending rates due to the existing congestion, this flow does not sense the congestion and hence unfairly increases its sending rate. These problems mainly stem from Vegas's estimation procedure for the propagation delay, i.e. BaseRTT. In this paper we propose a novel algorithm, named Pegas, in which the particle swarm optimization technique is used for dynamic estimation of BaseRTT. Simulation results show that Pegas solves the rerouting and unfairness problems and remarkably enhances Vegas's performance in terms of dropped packets, bottleneck utilization, and fairness. | TCP Pegas: A PSO-based improvement over TCP Vegas
S1568494615002070 | Feature subset selection is a substantial problem in data classification tasks. Its purpose is to find an efficient subset of the original features that increases both the efficiency and the accuracy of classification while reducing its cost. High-dimensional datasets with a very large number of predictive attributes but few instances require dedicated techniques to select an optimal feature subset. In this paper, a hybrid method is proposed for efficient subset selection in high-dimensional datasets. The proposed algorithm runs filter and wrapper algorithms in two phases. The symmetrical uncertainty (SU) criterion is exploited to weight features in the filter phase for discriminating the classes. In the wrapper phase, both FICA (fuzzy imperialist competitive algorithm) and IWSSr (Incremental Wrapper Subset Selection with replacement) are executed in the weighted feature space to find relevant attributes. The new scheme is successfully applied to 10 standard high-dimensional datasets, especially from the fields of biosciences and medicine, where the number of features is large compared to the number of samples, inducing a severe curse-of-dimensionality problem. The comparison between the results of our method and other algorithms confirms that our method achieves the highest accuracy rate and is also able to find an efficient compact subset. | A hybrid algorithm for feature subset selection in high-dimensional datasets using FICA and IWSSr algorithm
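Symmetrical uncertainty, the filter-phase weight used above, is defined for discrete variables as SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)). A self-contained sketch follows; the feature and label vectors are toy data, not from the study.

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(x, y):
    """SU in [0, 1]: 1 means perfectly predictive, 0 means independent."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))   # joint entropy via paired symbols
    mutual_info = hx + hy - hxy
    denom = hx + hy
    return 2.0 * mutual_info / denom if denom else 0.0

# Toy discrete feature vs. class label.
feature = [0, 0, 1, 1, 0, 1, 1, 0]
label   = [0, 0, 1, 1, 0, 1, 0, 0]
print(round(symmetrical_uncertainty(feature, label), 3))
```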
S1568494615002082 | The decline of population diversity is often considered the primary reason for solutions falling into local optima in particle swarm optimization (PSO). Inspired by the phenomenon that parasitic behavior benefits the natural ecosystem by promoting its biodiversity, this paper presents a novel coevolutionary particle swarm optimizer with parasitic behavior (PSOPB). The population of PSOPB consists of two swarms: a host swarm and a parasite swarm. The characteristics of parasitic behavior are mimicked in three aspects: the parasites obtaining nourishment from the host, host immunity, and the evolution of the parasites. With a predefined probability, which reflects the facultative nature of the parasitic behavior, the two swarms exchange particles according to the particles' sorted fitness values in each swarm. Host immunity is mimicked in two ways: the number of exchanged particles is linearly decreased over the iterations, and particles in the host swarm can learn from the global best position of the parasite swarm. Two mutation operators are utilized to simulate two aspects of the evolution of the parasites in PSOPB. In order to embody the law of "survival of the fittest" in biological evolution, the particles with poor fitness in the host swarm are removed and replaced by the same number of randomly initialized particles. The proposed algorithm is experimentally validated on a set of 21 benchmark functions. The experimental results show that PSOPB performs better than eight other popular PSO variants in terms of solution accuracy and convergence speed. | Biomimicry of parasitic behavior in a coevolutionary particle swarm optimization algorithm for global optimization
S1568494615002094 | The artificial bee colony (ABC) algorithm is a swarm intelligence algorithm inspired by the intelligent foraging behavior of a honeybee swarm. In recent years, several ABC variants that modify some components of the original ABC algorithm have been proposed. Although there are some comparison studies in the literature, the individual contribution of each proposed modification is often unknown. In this paper, the proposed modifications are tested in a systematic experimental study that tries to identify their impact on algorithm performance through a component-wise analysis. This study is done on two benchmark sets in continuous optimization. In addition to this analysis, two new composite ABC algorithm variants, one for each of the two benchmark sets, are proposed by selecting the best components for each step of the algorithm. The performance of the proposed algorithms is compared against that of ten recent ABC algorithms, as well as against several recent state-of-the-art algorithms. The comparison results show that our proposed algorithms outperform the other ABC algorithms; moreover, the composite ABC algorithms are superior to several state-of-the-art algorithms proposed in the literature. | Composite artificial bee colony algorithms: From component-based analysis to high-performing algorithms
S1568494615002100 | The analysis and selection of Enterprise Architecture (EA) scenarios is a difficult and complex decision-making process that directly affects the realization of long-term business strategies. This complexity is associated with contradictory objectives and significant uncertainties involved in the analysis process. Although a large body of intuitive and analytical models for EA analysis has evolved over the last few years, none of them leads to an efficient and optimized ranking in fuzzy environments. Moreover, it is necessary to simultaneously employ complementary methods to reflect ambiguity and vagueness, the main sources of uncertainty. This paper incorporates the concept of the Data Envelopment Analysis (DEA) model into EA scenario analysis through a group analysis under uncertain conditions. To resolve the vagueness and ambiguity of the EA analysis, fuzzy credibility-constrained programming and the p-robustness technique are applied, respectively. Not only is the proposed DEA model linear, robust, and flexible in aggregating experts' opinions in a group decision-making process, but it also improves discrimination power, a major shortcoming of the classic DEA model. The proposed model provides useful solutions to support the decision-making process for large-scale Information Technology (IT) development planning. | A novel credibility-based group decision making method for Enterprise Architecture scenario analysis using Data Envelopment Analysis
S1568494615002112 | Three-phase induction motors are among the most important elements of electromechanical energy conversion in production processes; however, they are subject to inherent faults or failures under operating conditions. The purpose of this paper is to present a comparative study among intelligent tools to classify short-circuit faults in the stator windings of induction motors operating with three different models of frequency inverters. This is performed by analyzing the amplitude of the stator current signal in the time domain, using a dynamic acquisition rate set according to the machine supply frequency. To assess the classification accuracy across the various levels of fault severity, the performance of three different machine learning techniques was compared: (i) the fuzzy ARTMAP network; (ii) the multilayer perceptron network; and (iii) the support vector machine. Results obtained from 2268 experimental tests are presented to validate the study, which considered a wide range of operating frequencies and load conditions. | Evaluation of stator winding faults severity in inverter-fed induction motors
S1568494615002124 | Medical equipment such as infant incubators, infusion pumps and CT scanners should be maintained properly to meet adequate standards of reliability in healthcare services. This paper proposes a new comprehensive risk-based prioritization framework for selecting the best maintenance strategy. The framework encompasses three steps. In the first step, a fuzzy failure modes and effects analysis (FFMEA) method is applied, considering several risk assessment factors. In the second step, seven miscellaneous dimensions, such as use-related hazards, age, and utilization, are applied to cover all aspects of hazard and risk in the prioritization of medical devices. Finally, a simple method is introduced in the third step to find the most suitable maintenance strategy for each device according to the scores produced by the previous steps. A numerical example illustrates the proposed approach and shows that, through the method introduced in this paper, managers can easily classify medical devices for maintenance activities according to their criticality scores. Implementation of this framework could increase the availability of high-risk machines in healthcare industries. Moreover, this framework can be applied in other critical industries, such as aviation, by modifying some criteria and dimensions. | A comprehensive fuzzy risk-based maintenance framework for prioritization of medical devices
S1568494615002148 | This paper proposes a novel hybrid approach based on particle swarm optimization and local search, named PSOLS, for dynamic optimization problems. In the proposed approach, a swarm of particles with a fuzzy social-only model is frequently applied to estimate the location of the peaks in the problem landscape. Upon convergence of the swarm to previously undetected positions in the search space, a local search agent (LSA) is created to exploit the respective region. Moreover, a density control mechanism is introduced to prevent too many LSAs from crowding the search space. Three adaptations of the basic approach are then proposed to allocate the function evaluations mostly to the most promising areas of the search space. The first adapted algorithm, called HPSOLS, improves PSOLS by stopping the local search in LSAs that are not contributing much to the search process. The second adapted algorithm, called CPSOLS, is a competitive algorithm that allocates extra function evaluations to the best-performing LSA. The third adapted algorithm, called CHPSOLS, combines the fundamental ideas of HPSOLS and CPSOLS in a single algorithm. An extensive set of experiments is conducted on a variety of dynamic environments, generated by the moving peaks benchmark, to evaluate the performance of the proposed approach. Results are also compared with those of other state-of-the-art algorithms from the literature. The experimental results indicate the superiority of the proposed approach. | A novel hybrid adaptive collaborative approach based on particle swarm optimization and local search for dynamic optimization problems
S156849461500215X | In this paper, a new fuzzy peer assessment methodology that considers vagueness and imprecision of words used throughout the evaluation process in a cooperative learning environment is proposed. Instead of numerals, words are used in the evaluation process, in order to provide greater flexibility. The proposed methodology is a synthesis of perceptual computing (Per-C) and a fuzzy ranking algorithm. Per-C is adopted because it allows uncertainties of words to be considered in the evaluation process. Meanwhile, the fuzzy ranking algorithm is deployed to obtain appropriate performance indices that reflect a student's contribution in a group, and subsequently rank the student accordingly. A case study to demonstrate the effectiveness of the proposed methodology is described. Implications of the results are analyzed and discussed. The outcomes clearly demonstrate that the proposed fuzzy peer assessment methodology can be deployed as an effective evaluation tool for cooperative learning of students. | A new fuzzy peer assessment methodology for cooperative learning of students |
S1568494615002161 | In this paper, a modified particle swarm optimization (PSO) algorithm is developed for solving multimodal function optimization problems. The difference between the proposed method and general PSO is that the original single population is split into several subpopulations according to the order of the particles. The best particle within each subpopulation is recorded and then applied in the velocity updating formula to replace the original global best particle of the whole population; this modified velocity formula is used to update all particles in each subpopulation. Based on the idea of multiple subpopulations, several optima of a multimodal function, including global and local solutions, can be found separately by these best particles. To show the efficiency of the proposed method, two kinds of function optimization problems are provided: a single-modal function optimization and a complex multimodal function optimization. Simulation results demonstrate the convergence behavior of the particles over the iterations, and the global and local solutions found by the best particles of the subpopulations. | A modified particle swarm optimization with multiple subpopulations for multimodal function optimization problems
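The subpopulation idea is essentially a one-line change to the classic velocity update: each sub-swarm's best position replaces the single global best. The sketch below applies this to a function with two minima so that different sub-swarms can settle on different optima; the parameter values (w, c1, c2, swarm sizes) are conventional defaults, not necessarily those of the paper.

```python
import random

def subswarm_pso(obj, dim, bounds, n_sub=4, sub_size=10, n_iter=300,
                 w=0.7, c1=1.5, c2=1.5):
    """PSO where the velocity update uses each subpopulation's best
    position instead of one global best (minimization)."""
    lo, hi = bounds
    swarms = []
    for _ in range(n_sub):
        xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(sub_size)]
        vs = [[0.0] * dim for _ in range(sub_size)]
        swarms.append({"x": xs, "v": vs, "pbest": [x[:] for x in xs]})
    for _ in range(n_iter):
        for s in swarms:
            sbest = min(s["pbest"], key=obj)   # subpopulation best, not global best
            for i in range(sub_size):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    s["v"][i][d] = (w * s["v"][i][d]
                                    + c1 * r1 * (s["pbest"][i][d] - s["x"][i][d])
                                    + c2 * r2 * (sbest[d] - s["x"][i][d]))
                    s["x"][i][d] = min(hi, max(lo, s["x"][i][d] + s["v"][i][d]))
                if obj(s["x"][i]) < obj(s["pbest"][i]):
                    s["pbest"][i] = s["x"][i][:]
    return [min(s["pbest"], key=obj) for s in swarms]  # one candidate per sub-swarm

multimodal = lambda x: (x[0] ** 2 - 4) ** 2  # minima at x = -2 and x = +2
print(subswarm_pso(multimodal, dim=1, bounds=(-5, 5)))
```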
S1568494615002197 | A study of bilateral filters for denoising reveals that the more informative the filter is, the better the expected result; however, obtaining precise information from a noisy image is a difficult task. In the current work, a rough set theory (RST) based approach is used to derive a pixel-level edge map and class labels, which in turn are used to improve the performance of bilateral filters. RST handles the uncertainty present in the data even under noise. The basic structure of the existing bilateral filter is not changed much; it is, however, boosted by the prior information derived from the rough edge map and rough class labels. The filter is extensively applied to denoise brain MR images, and the results are compared with those of the state-of-the-art approaches. The experiments have been performed on two real (normal and pathologically disordered) human MR databases. The performance of the proposed filter is found to be better in terms of benchmark metrics. | Rough set based bilateral filter design for denoising brain MR images
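A plain bilateral filter weights each neighbor by a spatial Gaussian times a range (intensity-difference) Gaussian. The sketch below adds an optional per-pixel prior weight as a crude stand-in for the rough-set information described above; the exact way the paper injects the RST prior is not specified in the abstract, so this coupling is an assumption.

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=25.0, edge_map=None):
    """Bilateral filter for a 2D grayscale image (list of lists).
    `edge_map`, if given, must hold positive per-pixel weights; it is a
    hypothetical stand-in for the paper's rough-set prior."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        spatial = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        diff = img[ny][nx] - img[y][x]
                        rng = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        prior = edge_map[ny][nx] if edge_map else 1.0
                        wgt = spatial * rng * prior
                        acc += wgt * img[ny][nx]
                        norm += wgt
            out[y][x] = acc / norm
    return out

noisy = [[10, 12, 200], [11, 13, 205], [9, 14, 198]]
print(bilateral_filter(noisy))  # edge between the low and high intensity columns is preserved
```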
S1568494615002203 | Within a radio frequency identification (RFID) system, the reader-to-reader collision problem may occur when a group of readers operates simultaneously. The scheduling-based family, one branch of RFID reader collision avoidance methods, focuses on the allocation of time slots and frequency channels to RFID readers. Generally, the RFID reader-to-reader collision avoidance model can be translated into an optimization problem of communication resource allocation that maximizes the total effective interrogation area. Artificial immune networks are emerging heuristic evolutionary algorithms which have been broadly applied to scientific computing and engineering applications. Since the first version of artificial immune networks for optimization appeared, a series of revised or derived artificial immune networks has been developed, aiming to capture more accurate solutions at higher convergence speed. For the RFID reader-to-reader collision avoidance model, this paper investigates the performance of six artificial immune networks in allocating communication resources to multiple readers. Following the spirit of artificial immune networks, the corresponding major immune operators are redesigned to suit the practice of RFID systems. Taking into account the effects of time slots and frequency channels, respectively, two groups of simulation experiments are arranged to examine the effectiveness of the different artificial immune networks in optimizing the total effective interrogation area. Besides, a group of examinations is executed to investigate the performance of the six algorithms for different dimensionalities of the solution space in the reader collision avoidance model. Meanwhile, a further group of simulation experiments is arranged to examine the computational efficiency of the six artificial immune networks. The results demonstrate that the six artificial immune networks perform well in searching for the maximum total effective interrogation area and are suitable for solving the RFID reader-to-reader collision avoidance model. | Comparative performance analysis of various artificial immune networks applied to RFID reader-to-reader collision avoidance
S1568494615002215 | The artificial bee colony (ABC) algorithm, one of the swarm intelligence algorithms, was proposed for continuous optimization, inspired by the intelligent behaviors of real honey bee colonies. For optimization problems with binary-structured solution spaces, the basic ABC algorithm must be modified, because its basic version was proposed for solving continuous optimization problems. In this study, an adapted version of ABC, ABCbin for short, is proposed for binary optimization. In the proposed model, although the artificial agents in the algorithm work on the continuous solution space, the food source position obtained by the agents is converted to binary values before the problem-specific objective function is evaluated. The accuracy and performance of the proposed approach have been examined on 15 well-known benchmark instances of the uncapacitated facility location problem, and the results obtained by ABCbin are compared with the results of continuous particle swarm optimization (CPSO), binary particle swarm optimization (BPSO), improved binary particle swarm optimization (IBPSO), the binary artificial bee colony algorithm (binABC) and the discrete artificial bee colony algorithm (DisABC). The performance of ABCbin is also analyzed under changes of the control parameter values. The experimental results and comparisons show that the proposed ABCbin is a simple and effective alternative binary optimization tool in terms of solution quality and robustness. | The continuous artificial bee colony algorithm for binary optimization
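One common way to convert a real-valued food source to a binary solution is to pass each coordinate through a sigmoid and sample a bit; whether this matches ABCbin's exact conversion is not stated in the abstract, so treat the rule below as one plausible choice. The cost function is a small uncapacitated facility location instance with invented numbers.

```python
import math
import random

def to_binary(position):
    """Map a real-valued food source to a binary vector: the sigmoid of
    each coordinate is treated as the probability of opening a facility.
    This thresholding rule is assumed for illustration."""
    return [1 if random.random() < 1.0 / (1.0 + math.exp(-v)) else 0
            for v in position]

def ufl_cost(open_mask, fixed_costs, serve_costs):
    """Uncapacitated facility location: fixed cost of open facilities
    plus, for each customer, the cheapest open facility."""
    opened = [i for i, o in enumerate(open_mask) if o]
    if not opened:
        return float("inf")  # infeasible: no facility open
    total = sum(fixed_costs[i] for i in opened)
    for cust in serve_costs:  # one row of serving costs per customer
        total += min(cust[i] for i in opened)
    return total

fixed = [4.0, 3.0, 5.0]
serve = [[1.0, 2.0, 4.0], [3.0, 1.0, 2.0], [4.0, 3.0, 1.0]]
real_food = [random.gauss(0, 2) for _ in fixed]  # the agent lives in R^n
print(ufl_cost(to_binary(real_food), fixed, serve))
```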
S1568494615002227 | Hemoglobin is normally measured by analyzing a blood sample taken from the body; such a measurement is termed invasive. Hemoglobin must be measured continuously to monitor disease progression in people who undergo hemodialysis or have diseases such as oligocythemia and anemia, which subjects these patients to repeated pain. This paper proposes a non-invasive method for predicting hemoglobin using the characteristic features of PPG signals and different machine learning algorithms. In this work, PPG signals from 33 people were recorded in 10 periods and 40 characteristic features were extracted from them. In addition to these features, the gender (male or female), height (in cm), weight (in kg) and age of each subject were also considered as features. Blood count and hemoglobin level were measured simultaneously using the "Hemocue Hb-201TM" device. Several machine learning regression techniques were used: classification and regression trees (CART), least squares regression (LSR), generalized linear regression (GLR), multivariate linear regression (MVLR), partial least squares regression (PLSR), generalized regression neural network (GRNN), multilayer perceptron (MLP), and support vector regression (SVR). RELIEFF feature selection (RFS) and correlation-based feature selection (CFS) were used to select the best features. The original features and the features selected using RFS (10 features) and CFS (11 features) were used to predict the hemoglobin level with the different machine learning techniques. To evaluate the performance of the machine learning techniques, different performance measures were used: mean absolute error (MAE), mean square error (MSE), the coefficient of determination (R2), root mean square error (RMSE), mean absolute percentage error (MAPE) and index of agreement (IA). Promising results (MSE of 0.0027) were obtained using the features selected by RFS with SVR. Hence, the proposed method may be used clinically to predict the hemoglobin level of human beings without taking and analyzing blood samples. | Non-invasive prediction of hemoglobin level using machine learning techniques with the PPG signal's characteristics features
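The evaluation measures listed above are all simple closed-form expressions. The sketch below computes MAE, MSE, RMSE, R2, MAPE and Willmott's index of agreement (IA) from paired values; the hemoglobin numbers are illustrative, not data from the study.

```python
import math

def regression_metrics(y_true, y_pred):
    """The performance measures named in the abstract, computed from
    paired observations and predictions (y_true must be nonzero for MAPE)."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    err = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in err) / n
    mse = sum(e * e for e in err) / n
    ss_res = sum(e * e for e in err)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    mape = 100.0 / n * sum(abs(e / t) for e, t in zip(err, y_true))
    ia_den = sum((abs(p - mean_t) + abs(t - mean_t)) ** 2
                 for t, p in zip(y_true, y_pred))
    return {
        "MAE": mae, "MSE": mse, "RMSE": math.sqrt(mse),
        "R2": 1 - ss_res / ss_tot, "MAPE": mape,
        "IA": 1 - ss_res / ia_den,  # Willmott's index of agreement
    }

hb_true = [13.2, 14.1, 12.8, 15.0, 13.7]  # g/dL, illustrative values
hb_pred = [13.0, 14.3, 12.9, 14.6, 13.8]
print(regression_metrics(hb_true, hb_pred))
```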
S1568494615002239 | In this paper, the authors propose a novel intelligent framework to identify the exhaust gas temperature (Texh) and the engine-out hydrocarbon emission (HCraw) during the coldstart operation of an automotive engine. These are two key variables affecting the cumulative tailpipe emissions (HCcum) over the coldstart phase, which is the number one emission-related problem for today's spark-ignited (SI) engine vehicles. Coldstart operation is regarded as a highly nonlinear, transient and uncertain phenomenon. The proposed identifier integrates different soft computing strategies, i.e. neuro-fuzzy computing, a fuzzy controller, swarm intelligence and ensemble network design, which are beneficial for capturing both the uncertainty and the nonlinearity of the problem at hand. Furthermore, the concepts of negative correlation topology design and hierarchical pair competition based parallel training are drawn from the literature to form a diverse and robust ensemble identifier. Training of each neuro-fuzzy sub-component in the ensemble network is carried out using a hybrid learning scheme. One feature of the antecedent part of the neuro-fuzzy system, i.e. the number of linguistic terms for each variable, as well as the characteristics of the rules in the rule base, are adjusted using hierarchical fair competition based parallel adaptive particle swarm optimization (HFC-APSO), and the remaining features, i.e. the shape of the membership functions (MFs) and the consequent variables of each rule, are tuned using back-propagation (BP) and steepest descent techniques. As mentioned, the authors aim to design an ensemble identifier with an acceptable rate of generalization, robustness and accuracy; these features help to tame the uncertainties associated with Texh and HCraw emission over the coldstart period. To do so, the potential characteristics of the sub-components (the solution domain of the network design) are divided into a set of partitions, and HFC-APSO is utilized to explore/exploit each of those partitions. The exploration/exploitation rate of PSO (the core of HFC-APSO) is dynamically controlled by a fuzzy logic based controller. Hence, HFC-APSO is expected to yield a set of accurate sub-identifiers with different operating characteristics. To further foster the diversity of the ensemble, a negative correlation criterion is considered, which obstructs the integration of identical sub-identifiers. The identification results demonstrate that the method is highly capable of providing an accurate model for the estimation of Texh and HCraw emission during the coldstart period. | An ensemble neuro-fuzzy radial basis network with self-adaptive swarm based supervisor and negative correlation for modeling automotive engine coldstart hydrocarbon emissions: A soft solution to a crucial automotive problem
S1568494615002240 | Reducing building energy demand is a crucial part of the global response to climate change, and evolutionary algorithms (EAs) coupled to building performance simulation (BPS) are an increasingly popular tool for this task. Further uptake of EAs in this industry is hindered by BPS being computationally intensive: optimisation runs taking days or longer are impractical in a time-competitive environment. Surrogate fitness models are a possible solution to this problem, but few approaches have been demonstrated for multi-objective, constrained or discrete problems, typical of the optimisation problems in building design. This paper presents a modified version of a surrogate based on radial basis function networks, combined with a deterministic scheme to deal with approximation error in the constraints by allowing some infeasible solutions in the population. Different combinations of these are integrated with Non-Dominated Sorting Genetic Algorithm II (NSGA-II) and applied to three instances of a typical building optimisation problem. The comparisons show that the surrogate and constraint handling combined offer improved run-time and final solution quality. The paper concludes with detailed investigations of the constraint handling and fitness landscape to explain differences in performance. | Constrained, mixed-integer and multi-objective optimisation of building designs by NSGA-II with fitness approximation |
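As a hedged illustration of surrogate-assisted screening (not the paper's modified surrogate or its constraint handling), the sketch below fits an RBF model to designs already evaluated with the expensive simulator and uses it to pre-select candidates; a cheap quadratic stands in for the building performance simulation.

```python
# Sketch: RBF surrogate used to pre-screen candidates before expensive evaluation.
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):                            # stand-in for a BPS run
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(1)
archive_X = rng.random((60, 5))                         # designs already evaluated
archive_y = np.array([expensive_simulation(x) for x in archive_X])

surrogate = RBFInterpolator(archive_X, archive_y, kernel="thin_plate_spline")

candidates = rng.random((200, 5))
approx = surrogate(candidates)                          # cheap fitness predictions
best = candidates[np.argsort(approx)[:10]]              # only these go to the simulator
print("true fitness of screened designs:",
      [round(expensive_simulation(x), 3) for x in best])
```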
S1568494615002252 | Query-by-Humming involves retrieving music with a melody that matches the hummed query. An improved Query-by-Humming system for extracting pitch contour information based on a fuzzy inference model is introduced. In addition, an improved content-based music repeating pattern extraction model is introduced. Our bar-indexing method can extract the melody, identify repeating patterns and handle polyphonic MIDI files. To verify the effectiveness of the system, 15 volunteers recorded queries that were fed as input to the system and the longest common subsequence (LCS) was used to identify the most related top N matches. The system achieves 70% accuracy among the top 5 items retrieved. | A repeating pattern based Query-by-Humming fuzzy system for polyphonic melody retrieval |
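The LCS matching step can be made concrete with a short sketch: melodies are assumed here to be reduced to up/down/same contour strings (the paper's actual pitch-contour extraction is fuzzy-inference based), and candidates are ranked by LCS length with the query contour.

```python
# Sketch: rank candidate melodies by longest common subsequence with the query contour.
def lcs_length(a: str, b: str) -> int:
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[len(a)][len(b)]

query = "UUDSUD"                      # hummed-query contour (Up/Down/Same)
database = {"song_a": "UUDSUDDU", "song_b": "DDUUSS", "song_c": "UUDSU"}
ranked = sorted(database, key=lambda s: lcs_length(query, database[s]), reverse=True)
print(ranked)                         # top-N matches by LCS score
```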
S1568494615002264 | An information granule has to be translated into significant frameworks of granular computing to realize the interpretability–accuracy tradeoff. These two objectives are in conflict and constitute an open problem. A new operational framework to form the evolving information granule (EIG) is developed in this paper, which ensures a compromise between interpretability and reasonable accuracy. The EIG is initiated with a first information granule that translates the knowledge of the entire output domain; this initial granule is considered an underfitting state with a high approximation error. The EIG then evolves by partitioning the output domain, using a dynamic constraint to maintain semantic interpretability in the output contexts. The key criterion in the EIG is to determine the prominent distinctions (output contexts) in the output domain and realize the distinct information granules that convey the semantics at the fuzzy-partition level. The EIG tends to evolve toward the lower-error region and realizes an effective rule base while avoiding overfitting. Results on synthetic and real-world data show the effectiveness of the proposed system, which outperforms state-of-the-art methods. | Information granularity model for evolving context-based fuzzy system
S1568494615002276 | This paper presents an experimental study of a system that is an interesting crossover between a standard benchmark control problem and a smart material. The study examines the effect of stress, strain and temperature on the bandwidth of an antagonistic shape memory alloy (SMA) and its relative influence on the stability of the system. The experiment is implemented on an underactuated, open-loop unstable ball and beam system designed and developed to be driven by SMA. A proportional derivative controller cascaded with a sliding mode controller (SMC) based on a simplified fuzzy adaptive sliding surface is used to study the dynamics of the system. The designed simplified fuzzy-based sliding-surface controller is able to balance the ball and beam system around its equilibrium state, which, from a control perspective, shows that this controller performs better than the conventional SMC. Furthermore, from the smart-material perspective, decisive results are obtained on issues such as stability, speed of operation and performance of the antagonistic SMA. | Fuzzy based sliding surface for shape memory alloy wire actuated classical super-articulated control system
S1568494615002288 | This paper describes the application of a neural model in the speed control loop of an electrical drive with an elastic mechanical coupling. Such a mechanical construction makes precise speed control more difficult because of the oscillation tendency of the state variables caused by the long shaft. The goal of the presented application was the replacement of a classical speed controller by an on-line trained neurocontroller based on only one feedback signal, the easily measurable driving motor speed. The proposed controller is based on a feedforward neural network. The internal coefficients of the neural model – its weights – are adapted on-line according to the Levenberg–Marquardt algorithm. One of the problematic issues in such an implementation is the selection of the learning factor of the weight adaptation algorithm. In the proposed solution, a fuzzy model was implemented to calculate this learning coefficient. The proposed solution was compared to the classical one with a PI speed controller. The designed control structure was tested in simulations and verified in experiments using a dSPACE1103 card. | An on-line trained neural controller with a fuzzy learning rate of the Levenberg–Marquardt algorithm for speed control of an electrical drive with an elastic joint
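A minimal sketch of one Levenberg–Marquardt weight update for a toy one-neuron model, with the damping/learning factor set by a crude error-based rule standing in for the paper's fuzzy model; the plant dynamics and network size are simplified assumptions, not the drive structure from the paper.

```python
# Sketch: one LM update, w <- w + (J^T J + mu I)^(-1) J^T e, for y = tanh(w1*x + w0).
import numpy as np

def predict(w, x):
    return np.tanh(w[1] * x + w[0])

def lm_step(w, x, y, mu):
    e = y - predict(w, x)                       # residuals
    s = 1.0 - predict(w, x) ** 2                # derivative of tanh
    J = np.column_stack([s, s * x])             # d(pred)/d(w0), d(pred)/d(w1)
    H = J.T @ J + mu * np.eye(2)                # damped Gauss-Newton Hessian
    return w + np.linalg.solve(H, J.T @ e)

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 100)
y = np.tanh(1.5 * x - 0.4)                      # target model to recover
w = np.zeros(2)
for _ in range(20):
    err = np.mean((y - predict(w, x)) ** 2)
    mu = 0.1 if err > 1e-2 else 1.0             # crude stand-in for the fuzzy rule base
    w = lm_step(w, x, y, mu)
print("learned weights:", w)                    # approx [-0.4, 1.5]
```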
S156849461500229X | In this paper, speed control of a Brushless DC motor using a Bat-algorithm-optimized online Adaptive Neuro-Fuzzy Inference System (ANFIS) is presented. The learning parameters of the online ANFIS controller, i.e., the learning rate (η), forgetting factor (λ) and steepest descent momentum constant (α), are optimized for different operating conditions of the Brushless DC motor using the Genetic Algorithm, Particle Swarm Optimization, and the Bat algorithm. In addition, the gains of the Proportional Integral Derivative (PID), Fuzzy PID, and Adaptive Fuzzy Logic controllers are tuned using the same three algorithms. Time-domain specifications of the speed response, such as rise time, peak overshoot, undershoot, recovery time, settling time and steady-state error, are obtained and compared for the considered controllers. Also, performance indices such as Root Mean Squared Error, Integral of Absolute Error, Integral of Time-multiplied Absolute Error and Integral of Squared Error are evaluated and compared. In order to validate the effectiveness of the proposed controller, simulation is performed under constant load, varying load and varying set-speed conditions of the Brushless DC motor. The proposed controller is also verified experimentally in real time using an advanced DSP processor. The simulation and experimental results confirm that the Bat-algorithm-optimized online ANFIS controller outperforms the other controllers under all considered operating conditions. | Speed control of Brushless DC motor using bat algorithm optimized Adaptive Neuro-Fuzzy Inference System
S1568494615002380 | The Capacitated Vehicle Routing Problem (CVRP) is extended here to handle uncertain arc costs without resorting to probability distributions, giving the Robust VRP (RVRP). The unique set of arc costs in the CVRP is replaced by a set of discrete scenarios. A scenario is for instance the travel time observed on each arc at a given traffic hour. The goal is to build a set of routes using the lexicographic min–max criterion: the worst cost over all scenarios is minimized but ties are broken using the other scenarios, from the worst to the best. This version of robust CVRP has never been studied before. A Mixed Integer Linear Program (MILP), two greedy heuristics, a local search and four metaheuristics are proposed: a Greedy Randomized Adaptive Search Procedure, an Iterated Local Search (ILS), a Multi-Start ILS (MS-ILS), and an MS-ILS based on Giant Tours (MS-ILS-GT) converted into feasible routes via a lexicographic splitting procedure. The greedy heuristics provide the other algorithms with good initial solutions. Tests on small instances (10–20 customers, 2–3 vehicles, 10–30 scenarios) show that the four metaheuristics retrieve all optima found by the MILP. On larger cases with 50–100 customers, 5–20 vehicles and 10–20 scenarios, MS-ILS-GT dominates the other approaches. As our algorithms share the same components (initial heuristic, local search), the positive contribution of using the giant tour approach is confirmed on the RVRP. | Local search based metaheuristics for the robust vehicle routing problem with discrete scenarios |
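The lexicographic min–max criterion is easy to state in code: sort each plan's per-scenario costs worst-first and compare the resulting tuples lexicographically, so ties on the worst scenario are broken by the next-worst ones. A hedged toy example (not the paper's instances):

```python
# Sketch: lexicographic min-max comparison of route plans over discrete scenarios.
def leximax_key(scenario_costs):
    return tuple(sorted(scenario_costs, reverse=True))  # worst cost first

solutions = {
    "routes_A": [40, 55, 48],     # cost of the route plan under 3 scenarios
    "routes_B": [55, 41, 39],     # same worst cost as A, better second-worst
    "routes_C": [60, 30, 30],
}
best = min(solutions, key=lambda s: leximax_key(solutions[s]))
print(best, leximax_key(solutions[best]))   # routes_B: (55, 41, 39)
```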
S1568494615002409 | Time series forecasting concerns the prediction of future values based on the observations previously taken at equally spaced time points. Statistical methods have been extensively applied in the forecasting community for the past decades. Recently, machine learning techniques have drawn attention and useful forecasting systems based on these techniques have been developed. In this paper, we propose an approach based on neuro-fuzzy modeling for time series prediction. Given a predicting sequence, the local context of the sequence is located in the series of the observed data. Proper lags of relevant variables are selected and training patterns are extracted. Based on the extracted training patterns, a set of TSK fuzzy rules are constructed and the parameters involved in the rules are refined by a hybrid learning algorithm. The refined fuzzy rules are then used for prediction. Our approach has several advantages. It can produce adaptive forecasting models. It works for univariate and multivariate prediction. It also works for one-step as well as multi-step prediction. Several experiments are conducted to demonstrate the effectiveness of the proposed approach. | Time series forecasting with a neuro-fuzzy modeling scheme |
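A minimal sketch of the training-pattern extraction step for one-step prediction, pairing the values at a set of lags with the next observation; the lag set here is illustrative, not one selected by the paper's procedure.

```python
# Sketch: build (lagged inputs, next value) training patterns from a series.
import numpy as np

def make_patterns(series, lags=(1, 2, 4)):
    max_lag = max(lags)
    X = np.array([[series[t - l] for l in lags]
                  for t in range(max_lag, len(series))])
    y = np.array(series[max_lag:])
    return X, y

series = np.sin(0.3 * np.arange(200)) + 0.05 * np.random.default_rng(3).normal(size=200)
X, y = make_patterns(series)
print(X.shape, y.shape)   # patterns that would feed TSK rule construction
```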
S1568494615002410 | This paper describes a biased random-key genetic algorithm for a real-world wireless backhaul network design problem. This is a novel problem, closely related to variants of the Steiner tree problem and the facility location problem. Given a parameter h, we want to build a forest where each tree has at most h hops from the demand nodes, where traffic originates, to the root nodes where each tree is rooted. Candidate Steiner nodes do not have any demand but represent locations where we can install cellsites to cover the traffic and equipment to backhaul the traffic to the cellular core network. Each Steiner node can cover demand nodes within a given distance, subject to a capacity constraint. The aggregate set of constraints may make it impossible to cover or backhaul all demands. A revenue function computes the revenue associated with the total amount of traffic covered and backhauled to the root nodes. The objective of the problem is to build a forest that maximizes the difference between the total revenue and the cost associated with the installed equipment. Although we will have a forest when we consider only the backhaul links and root nodes, the addition of demand vertices can induce undirected cycles, resulting in a directed acyclic graph. We consider instances of this problem with several additional constraints that are motivated by the requirements of real-world telecommunication networks. | A biased random-key genetic algorithm for wireless backhaul network design |
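The random-key mechanics can be sketched briefly: a chromosome is a vector of keys in [0, 1), a problem-specific decoder maps it to a solution (here, hypothetically, a priority order over candidate cellsite locations), and biased crossover copies each key from the elite parent with high probability. This is a generic BRKGA sketch, not the paper's decoder.

```python
# Sketch: biased random-key representation, decoder, and biased crossover.
import numpy as np

rng = np.random.default_rng(4)
n_sites = 8

def decode(keys):
    return np.argsort(keys)                      # consider sites in order of increasing key

def biased_crossover(elite, other, rho=0.7):
    mask = rng.random(len(elite)) < rho          # bias each gene toward the elite parent
    return np.where(mask, elite, other)

elite, other = rng.random(n_sites), rng.random(n_sites)
child = biased_crossover(elite, other)
print("child site-priority order:", decode(child))
```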
S1568494615002422 | One of the challenging problems in motion planning is finding a path for a robot that is efficient in several respects, such as length, clearance and smoothness. We formulate this problem as two multi-objective path planning models focused on the robot's energy consumption and the path's safety; the models address five- and three-objective optimization problems, respectively. We propose an evolutionary algorithm for solving the problems. For efficient searching and for reaching Pareto-optimal regions, in addition to the standard genetic operators, a family of path refiner operators is introduced. The new operators play a local search role and intensify the power of the algorithm in both explorative and exploitative terms. Finally, we verify the models and compare the efficiency of the algorithm and the refiner operators against other multi-objective algorithms, such as strength Pareto evolutionary algorithm 2 and multi-objective particle swarm optimization, on several complicated path planning test problems. | Clear and smooth path planning
S1568494615002434 | In this study, we propose a hybrid optimization method, consisting of an evolutionary algorithm (EA) and a branch-and-bound method (BnB), for solving the capacitated single allocation hub location problem (CSAHLP). The EA is designed to explore the solution space and to select promising configurations of hubs (the location part of the problem). Hub configurations produced by the EA are passed to the BnB search, which works with fixed hubs and allocates the non-hub nodes to the located hubs (the allocation part of the problem). The BnB method is implemented using parallelization techniques, which results in short running times. The proposed hybrid algorithm, named EA-BnB, has been tested on the standard Australia Post (AP) hub data sets with up to 300 nodes. The results demonstrate the superiority of our hybrid approach over heuristic approaches from the existing literature. The EA-BnB method has reached all known optimal solutions for the AP hub data set and found new, significantly better solutions on three AP instances with 100 and 200 nodes. Furthermore, the highly efficient implementation of this hybrid algorithm yields short running times even for the largest AP test instances. | A hybridization of an evolutionary algorithm and a parallel branch and bound for solving the capacitated single allocation hub location problem
S1568494615002446 | Microarray data analysis is a challenging problem in the data mining field. Microarray data represent the expression levels of thousands of genes under several conditions. The analysis of these data consists of discovering genes that share similar expression patterns across a subset of conditions. The extracted information consists of submatrices of the microarray data that satisfy a coherence constraint; these submatrices are called biclusters, and the process of extracting them is called biclustering. Since its first application to microarray analysis [1], many models and algorithms have been proposed for this problem. In this work, we propose a new multiobjective model and a new metaheuristic, HMOBI ibea, for the biclustering problem. Results of the proposed method are compared with those of other existing algorithms, and the biological relevance of the extracted information is validated. The experimental results show that our method extracts highly relevant biclusters that are larger than those obtained by existing methods. | Using multiobjective optimization for biclustering microarray data
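To make "coherence constraint" concrete, the sketch below computes the mean squared residue of Cheng and Church, a common bicluster-coherence measure; the paper's exact constraint and objectives may differ. An additive expression pattern scores 0, while noise scores high.

```python
# Sketch: mean squared residue (MSR) of a candidate bicluster (submatrix).
import numpy as np

def mean_squared_residue(B):
    row_mean = B.mean(axis=1, keepdims=True)
    col_mean = B.mean(axis=0, keepdims=True)
    residue = B - row_mean - col_mean + B.mean()
    return float((residue ** 2).mean())

coherent = np.array([[1., 2., 3.], [2., 3., 4.], [0., 1., 2.]])   # additive pattern
noisy = np.random.default_rng(5).normal(size=(3, 3))
print(mean_squared_residue(coherent), mean_squared_residue(noisy))  # ~0 vs. large
```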
S1568494615002471 | In recent years, various heuristic optimization methods have been developed. Many of these methods are inspired by swarm behaviors in nature, such as particle swarm optimization (PSO), the firefly algorithm (FA) and the cuckoo optimization algorithm (COA). The recently introduced COA has proven its excellent capabilities, such as faster convergence and better global minimum achievement. In this paper, a new approach for solving the graph coloring problem based on COA is presented. Since COA was originally designed for continuous optimization problems, a discrete variant is needed here. Hence, to apply COA to a discrete search space, the standard arithmetic operators (addition, subtraction and multiplication) used in the COA migration operator, which is based on a distance measure, need to be redefined in the discrete space. By redefining the difference between two habitats as a list of differential movements, COA is equipped to handle discrete, non-permutation problems. A set of graph coloring benchmark problems is solved and the method's performance is compared with some well-known heuristic search methods. The obtained results confirm the high performance of the proposed method. | Modified cuckoo optimization algorithm (MCOA) to solve graph coloring problem
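The fitness driving such a discrete search is simple to state: a candidate (habitat) assigns one of k colors to each vertex and is scored by the number of conflicting edges, with 0 meaning a valid k-coloring. A minimal sketch of the evaluation only, not the paper's migration operator:

```python
# Sketch: conflict-count fitness for a candidate graph coloring.
import random

def conflicts(coloring, edges):
    return sum(coloring[u] == coloring[v] for u, v in edges)

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]       # a small test graph
k = 3
random.seed(6)
coloring = [random.randrange(k) for _ in range(4)]
print(coloring, "conflicts:", conflicts(coloring, edges))   # 0 means a valid k-coloring
```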
S1568494615002483 | In this study, we propose a probabilistic approach for designing nonlinear optimal robust tracking controllers for unmanned aerial vehicles. The controller design is formulated as a multi-objective optimization problem that is solved using a bio-inspired optimization algorithm, offering a high likelihood of finding an optimal or near-optimal global solution. Tuning the controller minimizes the differences between the system outputs and optimal specifications given in terms of rise time, overshoot and steady-state error, and the controller succeeds in meeting the performance requirements even under parametric uncertainties and the nonlinearities of the aircraft. The stability of the controller is proved for the nominal case and its robustness is carefully verified by means of Monte Carlo simulations. | A probabilistic approach for designing nonlinear optimal robust tracking controllers for unmanned aerial vehicles
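A hedged sketch of the Monte Carlo verification idea: simulate the closed-loop step response for many draws of an uncertain parameter and report the fraction meeting an overshoot specification. The second-order plant, the uncertainty range and the 5% spec are illustrative assumptions, not the paper's UAV model.

```python
# Sketch: Monte Carlo robustness check of step-response overshoot.
import numpy as np

def step_overshoot(zeta, wn=2.0, dt=0.005, T=6.0):
    # closed-loop model x'' + 2*zeta*wn*x' + wn^2*x = wn^2 * r, unit step r
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        v += dt * (wn**2 * (1.0 - x) - 2.0 * zeta * wn * v)
        x += dt * v
        peak = max(peak, x)
    return max(0.0, peak - 1.0)                 # steady-state output is 1.0

rng = np.random.default_rng(7)
zetas = rng.uniform(0.4, 0.9, 500)              # draws of the uncertain damping ratio
ok = np.mean([step_overshoot(z) <= 0.05 for z in zetas])
print(f"overshoot spec met in {ok:.1%} of Monte Carlo runs")
```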
S1568494615002501 | In this paper, a Proportional Derivative (PD)-type Multi Input Single Output (MISO) damping controller is designed for Static Synchronous Series Compensator (SSSC) controller. Both local and remote signals with associated time delays are chosen as the input signal to the proposed MISO controller. The design problem is formulated as an optimization problem and a hybrid Improved Differential Evolution and Pattern Search (hIDEPS) technique is employed to optimize the controller parameters. The improvement in Differential Evolution (DE) algorithm is introduced by changing two of its most important control parameters i.e. Scaling Factor F and Crossover Constant CR with an objective of achieving improved performance of the algorithm. The superiority of proposed Improved DE (IDE) over original DE and hIDEPS over IDE has also been demonstrated. To show the effectiveness and robustness of the proposed design approach, simulation results are presented and compared with DE and Particle Swarm Optimization (PSO) optimized Single Input Single Output (SISO) SSSC based damping controllers for both Single Machine Infinite Bus (SMIB) power system and multi-machine power system. It is noticed that the proposed approach provides superior damping performance compared to some approaches available in literature. | A PD-type Multi Input Single Output SSSC damping controller design employing hybrid improved differential evolution-pattern search approach |
S1568494615002513 | Due to the limited amount of stored battery energy it is necessary to optimally accelerate electric vehicles (EVs), especially in urban driving cycles. Moreover, a quick speed change is also important to minimize the trip time. Conversely, for comfortable driving, the jerk experienced during speed changing must be minimum. This study focuses on finding a comfortable driving strategy for EVs during speed changes by solving a multi-objective optimization problem (MOOP) with various conflicting objectives. Variants of two different competing evolutionary algorithms (EAs), NSGA-II (a non-dominated sorting multi-objective genetic algorithm) and SPEA 2 (strength Pareto evolutionary algorithm), are adopted to solve the problem. The design parameters include the acceleration value(s) with the associated duration(s) and the controller gains. The Pareto-optimal front is obtained by solving the corresponding MOOP. Suitable multi-criterion decision-making techniques are employed to select a preferred solution for practical implementation. After an extensive analysis of EA performance and keeping online implementation in mind, it was observed that NSGA-II with the crowding distance approach was the most suitable. A recently proposed innovization procedure was used to reveal salient properties associated with the obtained trade-off solutions. These solutions were analyzed to study the effectiveness of various parameters influencing comfortable driving. | Optimal driving during electric vehicle acceleration using evolutionary algorithms |
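The crowding distance the study found most suitable can be sketched directly: for each objective, the boundary solutions of a front get infinite distance and interior ones accumulate the normalized gap between their neighbours; larger distances mark less crowded, more diverse solutions. The front values below are illustrative (e.g. jerk vs. trip time), not the study's data.

```python
# Sketch: NSGA-II crowding distance for one nondominated front.
import numpy as np

def crowding_distance(F):
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j] or 1.0      # guard against zero span
        d[order[0]] = d[order[-1]] = np.inf                 # boundary points kept
        for k in range(1, n - 1):
            d[order[k]] += (F[order[k + 1], j] - F[order[k - 1], j]) / span
    return d

front = np.array([[1.0, 9.0], [2.0, 7.0], [4.0, 4.0], [8.0, 1.0]])
print(crowding_distance(front))
```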
S1568494615002525 | Many key problems in computational systems biology and bioinformatics can be formulated and solved using a global optimization framework. The complexity of the underlying mathematical models require the use of efficient solvers in order to obtain satisfactory results in reasonable computation times. Metaheuristics are gaining recognition in this context, with Differential Evolution (DE) as one of the most popular methods. However, for most realistic applications, like those considering parameter estimation in dynamic models, DE still requires excessive computation times. Here we consider this latter class of problems and present several enhancements to DE based on the introduction of additional algorithmic steps and the exploitation of parallelism. In particular, we propose an asynchronous parallel implementation of DE which has been extended with improved heuristics to exploit the specific structure of parameter estimation problems in computational systems biology. The proposed method is evaluated with different types of benchmarks problems: (i) black-box global optimization problems and (ii) calibration of non-linear dynamic models of biological systems, obtaining excellent results both in terms of quality of the solution and regarding speedup and scalability. | Enhanced parallel Differential Evolution algorithm for problems in computational systems biology |
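A minimal sketch of the asynchronous idea (not the authors' implementation or their problem-specific heuristics): trial vectors are evaluated in worker processes and each greedy replacement is applied as soon as its evaluation returns, so there is no generation barrier and fast evaluations never wait for slow ones. A sphere function stands in for an expensive model calibration.

```python
# Sketch: asynchronous steady-state DE (DE/rand/1/bin) with a process pool.
import numpy as np
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

def sphere(x):                                   # stand-in for an expensive objective
    return float(np.sum(x ** 2))

def trial(pop, i, rng, F=0.7, CR=0.9):
    idx = [j for j in range(len(pop)) if j != i]
    a, b, c = pop[rng.choice(idx, 3, replace=False)]
    mask = rng.random(pop.shape[1]) < CR
    mask[rng.integers(pop.shape[1])] = True      # ensure at least one mutant gene
    return np.where(mask, a + F * (b - c), pop[i])

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    pop = rng.uniform(-5, 5, (20, 10))
    fit = np.array([sphere(x) for x in pop])
    with ProcessPoolExecutor(max_workers=4) as ex:
        pending = {}
        for i in range(len(pop)):
            u = trial(pop, i, rng)
            pending[ex.submit(sphere, u)] = (i, u)
        for _ in range(400):                     # 400 asynchronous updates
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            fut = done.pop()
            i, u = pending.pop(fut)
            if fut.result() <= fit[i]:           # greedy selection, applied at once
                pop[i], fit[i] = u, fut.result()
            v = trial(pop, i, rng)               # immediately refill the pipeline
            pending[ex.submit(sphere, v)] = (i, v)
    print("best fitness:", fit.min())
```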
S1568494615002537 | This study proposes a differential-evolution-based symbiotic cultural algorithm (DESCA) for implementing neuro-fuzzy systems (NFS) to solve nonlinear control problems. DESCA adopts symbiotic evolution to decompose a fuzzy system into multiple fuzzy rules maintained as multiple subpopulations. DESCA then randomly selects fuzzy rules from different subpopulations and combines them into a complete solution whose performance is evaluated. Moreover, DESCA uses various mutation strategies of differential evolution as five knowledge sources in the belief space. These knowledge sources influence the population space of the cultural algorithm and can be used as models to guide the feasible search space. Finally, the proposed algorithm is applied to various simulations. The results demonstrate the effectiveness of this approach. | Efficient DE-based symbiotic cultural algorithm for neuro-fuzzy system design
S1568494615002549 | In this paper, the simultaneous order acceptance and scheduling problem is extended by considering the variety of customers' requests. To that end, two agents with different scheduling criteria are considered: the total weighted lateness for the first agent and the weighted number of tardy orders for the second. The objective is to maximize the sum of the total profit of the first agent's orders and the total revenue of the second agent's orders, while the weighted number of tardy orders of the second agent is bounded by a given upper bound. It is shown that this problem is NP-hard in the strong sense; to solve it optimally, an integer linear programming model is proposed based on properties of the optimal solution. This model is capable of solving problem instances with up to 60 orders. The LP-relaxation of this model was also used to develop a hybrid meta-heuristic that combines a genetic algorithm with linear programming. Computational results reveal that the proposed meta-heuristic achieves near-optimal solutions efficiently: for instances with up to 60 orders, the average deviation from the optimal solution is below 0.2%, and for instances with up to 150 orders, the average deviation from the problem's upper bound is below 1.5%. | A hybrid genetic and linear programming algorithm for two-agent order acceptance and scheduling problem
S1568494615002550 | Differential evolution (DE) is a simple and powerful evolutionary algorithm for global optimization. DE with constraint handling techniques, named constrained differential evolution (CDE), can be used to solve constrained optimization problems (COPs). In existing CDEs, the parents are randomly selected from the current population to produce trial vectors. However, individuals with better fitness and diversity should have greater chances of being selected. This study proposes a new CDE framework that uses a nondominated sorting mutation operator based on fitness and diversity information, named MS-CDE. In MS-CDE, firstly, the fitness of each individual in the population is calculated according to the current population situation. Secondly, individuals in the current population are ranked according to their fitness and diversity contribution. Lastly, parents for the mutation operators are selected in proportion to their rankings based on fitness and diversity. Thus, promising individuals with better fitness and diversity are more likely to be selected as parents. The MS-CDE framework can be applied to most CDE variants. In this study, the framework is applied to two popular representative CDE variants, (μ+λ)-CDE and ECHT-DE. Experimental results on 24 benchmark functions from CEC'2006 and 18 benchmark functions from CEC'2010 show that the proposed framework is an effective approach for enhancing the performance of CDE algorithms. | Constrained differential evolution with multiobjective sorting mutation operators for constrained optimization
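A minimal sketch of rank-proportional parent selection for DE mutation; the paper ranks by nondominated sorting on fitness and diversity, whereas this toy ranks by fitness alone, so it only illustrates the selection mechanism.

```python
# Sketch: DE/rand/1 mutation with parents drawn in proportion to rank.
import numpy as np

rng = np.random.default_rng(9)
pop = rng.uniform(-5, 5, (10, 4))
fit = (pop ** 2).sum(axis=1)                    # lower is better

ranks = np.empty(len(pop))
ranks[np.argsort(fit)] = np.arange(len(pop), 0, -1)   # best individual gets highest rank
prob = ranks / ranks.sum()

# in full DE the three parents must also differ from the target index
r1, r2, r3 = rng.choice(len(pop), 3, replace=False, p=prob)
mutant = pop[r1] + 0.5 * (pop[r2] - pop[r3])    # fitter parents are chosen more often
print("mutant vector:", mutant)
```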