Columns: FileName (string, length 17), Abstract (string, length 163 to 6.01k), Title (string, length 12 to 421)
S1568494615002562
New heuristic filters are proposed for state estimation of nonlinear dynamic systems based on particle swarm optimization (PSO) and differential evolution (DE). The methodology converts the state estimation problem into a dynamic optimization problem to find the best estimate recursively. In the proposed strategy, the particle number is adaptively set based on the weighted variance of the particles. To obtain a filter with minimal parameter settings, PSO with exponential distribution (PSO-E) is selected in conjunction with jDE to self-adapt the other control parameters. The performance of the proposed adaptive evolutionary algorithms, i.e., adaptive PSO-E, adaptive DE and adaptive jDE, is studied through a comparative study on a suite of well-known uni- and multi-modal benchmark functions. The results indicate improved performance of the adaptive algorithms relative to the original, simpler versions. Further, the performance of the proposed heuristic filters, generally called adaptive particle swarm filters (APSF) or adaptive differential evolution filters (ADEF), is evaluated using different linear (nonlinear)/Gaussian (non-Gaussian) test systems. Comparison of the results to those of the extended Kalman filter, unscented Kalman filter, and particle filter indicates that the adopted strategy fulfills the essential requirements of accuracy for nonlinear state estimation.
State estimation of nonlinear dynamic systems using weighted variance-based adaptive particle swarm optimization
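The central mechanism of the abstract above, setting the particle budget from the weighted variance of the particle set, can be illustrated with a minimal Python sketch. The function name, the reference variance var_ref, and the budget bounds are hypothetical tuning choices, not taken from the paper.

```python
import numpy as np

def adapt_particle_count(particles, weights, n_min=20, n_max=200, var_ref=1.0):
    """Scale the particle budget with the weighted variance of the swarm.

    particles: (n, d) array of states; weights: (n,) importance weights.
    n_min, n_max and var_ref are hypothetical tuning constants.
    """
    w = weights / weights.sum()
    mean = w @ particles                    # weighted mean state
    var = w @ ((particles - mean) ** 2)     # per-dimension weighted variance
    spread = float(var.mean())
    # Larger spread (more uncertainty) -> larger particle budget.
    frac = min(spread / var_ref, 1.0)
    return int(n_min + frac * (n_max - n_min))

rng = np.random.default_rng(0)
print(adapt_particle_count(rng.normal(size=(100, 3)), rng.random(100)))
```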
S1568494615002574
In this paper, we propose a new reversible image watermarking method based on interpolation-error expansion, called region-based interpolation error expansion (RBIEE). We improve Thodi's prediction error expansion (PEE) technique by using a novel interpolation algorithm that better exploits interpixel correlation. Furthermore, the interpolation error histogram is divided into two regions, and the parameters of each region are determined separately and iteratively to reach a given embedding capacity more precisely. Moreover, an adaptive embedding strategy is utilized to obtain better capacity-distortion performance. The advantage of the proposed method over other state-of-the-art methods in terms of capacity and visual quality is demonstrated experimentally.
Region based interpolation error expansion algorithm for reversible image watermarking
S1568494615002586
Disc abnormalities cause a great number of complaints, including lower back pain, one of the most common types of pain in the world. Computer-assisted detection of this ailment would be of great use to physicians and specialists. In this study, hybrid models were developed that include feature extraction, selection and classification stages for determining disc abnormalities in the lumbar region. To determine the abnormalities, T2-weighted sagittal and axial Magnetic Resonance Images (MRI) were taken from 55 people. In the feature extraction stage, 27 appearance and form features were acquired from both sagittal and transverse images. In the feature selection stage, the F-Score-Based Feature Selection (FSFS) and Correlation-Based Feature Selection (CBFS) methods were used to select the most discriminative features; the number of features was reduced from 27 to 5 by the FSFS and from 27 to 22 by the CBFS. In the last stage, five different classification algorithms, i.e., the Multi-Layer Perceptron, the Support Vector Machine, the Decision Tree, the Naïve Bayes, and the k-Nearest Neighbor algorithms, were applied. In addition, a combined classifier model (bagging with random forests) was used to improve the classification performance on the lumbar disc datasets. The results obtained suggest that the proposed hybrid models can be used reliably in detecting disc abnormalities.
Detection of abnormalities in lumbar discs from clinical lumbar MRI with hybrid models
S1568494615002598
One of the best approaches to verifying software systems (especially safety-critical systems) is model checking, in which all reachable states are generated from an initial state and searched for errors or desirable patterns. However, for many real, complex systems the drawback is state space explosion, in which model checking cannot generate all the possible states. In this situation, designers can use refutation, which attempts to refute a property rather than prove it. In refutation, it is essential to handle the state space efficiently when searching for errors. In this paper, we propose an efficient solution for implementing refutation in complex systems modeled by graph transformation. Since meta-heuristic algorithms are efficient at searching problems with very large state spaces, we use them to find errors (e.g., deadlocks) in systems that cannot be verified by existing model checking approaches due to state space explosion. To do so, we employ a Particle Swarm Optimization (PSO) algorithm that considers only a subset of states (called a population) in each step. To increase accuracy, we propose a hybrid algorithm combining PSO with the Gravitational Search Algorithm (GSA). The proposed approach is implemented in GROOVE, a toolset for designing and model checking graph transformation systems. The experiments show improved results in terms of accuracy, speed and memory usage in comparison with other existing approaches.
A meta-heuristic solution for automated refutation of complex software systems specified through graph transformations
S1568494615002604
This paper presents a tabu search based hybrid evolutionary algorithm (TSHEA) for solving the max-cut problem. The proposed algorithm integrates a distance-and-quality based solution combination operator and a tabu search procedure based on a neighborhood combining one-flip and constrained exchange moves. Comparisons with leading reference algorithms from the literature show that the proposed algorithm discovers new best solutions for 15 out of 91 instances, while matching the best known solutions on all but 4 instances. Analysis indicates that the neighborhood combination and the solution combination operator play key roles in the effectiveness of the proposed algorithm.
A tabu search based hybrid evolutionary algorithm for the max-cut problem
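As an illustration of the one-flip neighborhood the tabu search builds on, the sketch below computes the cut-weight gain of flipping each vertex to the other side of the partition. This is the standard move evaluation for max-cut, not the paper's full TSHEA.

```python
import numpy as np

def one_flip_gains(W, side):
    """Cut-weight gain from flipping each vertex to the other side.

    W: symmetric (n, n) weight matrix with zero diagonal.
    side: (n,) 0/1 partition labels.
    """
    same = (side[:, None] == side[None, :]).astype(float)
    cross = 1.0 - same
    np.fill_diagonal(same, 0.0)
    np.fill_diagonal(cross, 0.0)
    # Flipping v converts its same-side edges into cut edges and removes
    # its current cut edges, so the gain is w(same) - w(cross).
    return (W * same).sum(axis=1) - (W * cross).sum(axis=1)

W = np.array([[0, 1, 2], [1, 0, 3], [2, 3, 0]], float)
side = np.array([0, 1, 1])
print(one_flip_gains(W, side))   # positive entries are improving moves
```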
S1568494615002616
This paper deals with rehabilitation using an active orthosis driven by an adaptive neural controller based on a radial basis function neural network (RBFNN). Two essential conditions are required in our study: ensuring wearer safety and good trajectory tracking. For our experiments we consider the movements typically recommended by doctors during therapy sessions. In this context, it is possible to add some readily available prior knowledge, such as the dynamic model structure and all identified dynamic parts. The unknown or uncertain part of the inertia term of the knee-shank-orthosis system is identified online using an adaptive term. All other uncertainties or unknown dynamics are identified online by the RBFNN. The Lyapunov approach is used to derive adaptation laws for the neural parameters and the inertia term. These adaptation laws ensure the stability of the system composed of the exoskeleton and its wearer. The wearer can be completely passive or apply either a resistive or an assistive effort. Experiments were conducted on a real exoskeleton used for rehabilitation. Based on these results, we conclude that the proposed approach is effective.
A radial basis function neural network adaptive controller to drive a powered lower limb knee joint orthosis
S1568494615002628
Searching for homogeneous groups of individuals is an important step in the development of activity-based travel demand models. This study proposes an Integration of Classification tree And Sequence alignment method (ICAS) as a new classification method. Its main advantage is the ability to explore all sources of lifestyle variation across various data types, including sequential data, continuous variables, and discrete variables, for example activity sequential patterns, socio-economic characteristics, and socio-demographic characteristics. Results from ICAS can also be used as both an activity classifier and an activity generator in an activity-based travel demand modeling system. The proposed ICAS concept was evaluated with real-world data, using the 2004 Bangkok time use data from Thailand's National Statistical Office (NSO).
The integration of classification tree and Sequence Alignment Method for exploring groups of population based on daily time use data
S156849461500263X
Classifying walking patterns helps the diagnosis of health status, disease progression and the effect of interventions. In this paper, we build on previous research on human gait to extract a meaningful set of parameters that allow us to design a highly interpretable system capable of identifying different gait styles with linguistic fuzzy if-then rules. The model easily discriminates among five different walking patterns, namely: normal walk, on tiptoes, dragging left limb, dragging right limb, and dragging both limbs. We carried out a complete set of experiments to test how well the extracted parameters classify these five chosen gait styles.
Walking pattern classification using a granular linguistic analysis
S1568494615002653
The aim of this research is to evaluate the classification performance of eight different machine-learning methods on antepartum cardiotocography (CTG) data. The classification is necessary to predict newborn health, especially in critical cases. Cardiotocography assists obstetricians in obtaining detailed information during pregnancy as a technique for measuring fetal well-being, especially in pregnant women with potential complications. Obstetricians describe CTG, in short, as a continuous electronic record of the baby's heart rate taken from the mother's abdomen. The acquired information is necessary to detect fetal distress and gives an opportunity for early intervention before permanent impairment occurs. The machine learning methods use attributes obtained from the uterine contraction (UC) and fetal heart rate (FHR) signals to classify records as pathological or normal. The dataset, examined by applying the methods, contains 1831 instances with 21 attributes. The highest accuracy achieved was 99.2%.
Classification of the cardiotocogram data for anticipation of fetal risks using machine learning techniques
S1568494615002665
Due to cluster resource competition and task scheduling policy, some map tasks are assigned to nodes without input data, which causes significant data access delay. Data locality is thus becoming one of the most critical factors affecting the performance of MapReduce clusters. As machines in MapReduce clusters have large memory capacities, which are often underutilized, prefetching input data into memory is an effective way to improve data locality. However, deciding what and when to prefetch still poses serious challenges to cluster designers. To use prefetching effectively, we have built HPSO (High Performance Scheduling Optimizer), a prefetching-service-based task scheduler that improves data locality for MapReduce jobs. The basic idea is to predict the most appropriate nodes for future map tasks based on the current pending tasks and then preload the needed data into memory without delaying the launch of new tasks. To this end, we have implemented HPSO in Hadoop-1.1.2. The experimental results show that the method reduces the number of map tasks incurring remote data delay and improves the performance of Hadoop clusters.
Scheduling algorithm based on prefetching in MapReduce clusters
S1568494615002677
The three-dimensional structure of water flow at river confluences makes these zones of particular importance in the fields of river engineering, fluvial geomorphology, sedimentology and navigation. While previous research has concentrated on the effects of hydraulic and geometric parameters on scour patterns at river confluences, there remains a lack of expert systems designed to predict the maximum scour depth (d_sm). In the present study, several soft computing models, namely multi-layer perceptron (MLP), radial basis function (RBF) and M5P model tree, were used to predict d_sm at river confluences under live-bed conditions. Model performance, assessed through a number of statistical indices (RMSE, MAE, MARE and R^2), showed that while all three models could provide acceptable predictions of d_sm under live-bed conditions, the MLP model was the most accurate. By testing the models at three different ranges of scour depths, we determined that while the MLP model was the most accurate in the low scour depth range, the RBF model was more accurate in the higher range of scour depths.
Development of expert systems for the prediction of scour depth under live-bed conditions at river confluences: Application of different types of ANNs and the M5P model tree
S1568494615002689
Security threats against computer networks and the Internet have emerged as a major and increasing area of concern for end-users trying to protect their valuable information and resources from intrusive attacks. Due to the amount of data to be analysed and the similarities between attack and normal traffic patterns, intrusion detection is considered a complex real-world problem. In this paper, we propose a solution that uses a genetic algorithm to evolve a set of simple, interval-based rules based on statistical, continuous-valued input data. Several innovations in the genetic algorithm work to keep the ruleset small. We first tune the proposed system using synthetic data. We then evaluate our system against more complex synthetic data with characteristics associated with network intrusions, the NSL-KDD benchmark dataset, and another dataset constructed from MIT Lincoln Laboratory normal traffic and the low-rate DDoS attack scenario from CAIDA. This new approach provides a very compact set of simple, human-readable rules with strongly competitive detection performance in comparison to other machine learning techniques.
Evolving statistical rulesets for network intrusion detection
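To make the rule representation concrete, here is a minimal sketch of interval-based rules and their matching logic. The encoding and the example thresholds are hypothetical illustrations, not the rules evolved in the paper.

```python
import numpy as np

# A rule is a set of [lo, hi] intervals, one per feature; a connection
# record matches the rule when every feature falls inside its interval.
def matches(rule, x):
    lo, hi = rule[:, 0], rule[:, 1]
    return bool(np.all((x >= lo) & (x <= hi)))

def classify(ruleset, x):
    """Flag x as an intrusion if any evolved rule covers it."""
    return any(matches(r, x) for r in ruleset)

# Two hypothetical rules over 3 statistical traffic features.
ruleset = [np.array([[0.0, 0.2], [10, 50], [0.8, 1.0]]),
           np.array([[0.9, 1.0], [0, 5], [0.0, 0.1]])]
print(classify(ruleset, np.array([0.1, 30, 0.9])))   # True
```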
S1568494615002690
This paper is concerned with a method for breakout defect detection and evaluation in a continuous casting process. The method uses adaptive principal component analysis (APCA) as a predictor for the input-output model, defined by the mould bath level and casting speed. The main difficulties that cause breakout in continuous casting are generally phenomena related to the non-linear and unsteady state of the metal solidification process. PCA is a modelling method based on linear projection onto the principal components; the adaptive version developed in this work uses a sliding-window technique to estimate the model parameters. This recursive form updates the model parameters, giving reliable and accurate predictions. Simulation results comparing PCA, APCA, non-linear system identification using a neural network (NN) and support vector regression (SVR) show that APCA gives the best Mean Squared Error (MSE). Based on the MSE, the proposed approach is analyzed, tested and improved to give an accurate breakout detection and evaluation system.
Inferential sensor-based adaptive principal components analysis of mould bath level for breakout defect detection and evaluation in continuous casting
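A minimal sketch of the sliding-window idea behind APCA: refit PCA on the latest window of samples and score new samples by reconstruction residual, where a large residual can flag an abnormal (e.g., pre-breakout) state. The window length, component count and residual test below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def window_pca(X_window, k=2):
    """Fit a PCA model on the current sliding window of samples."""
    mean = X_window.mean(axis=0)
    Xc = X_window - mean
    # SVD of the centered window; rows of Vt are principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

def residual(x, mean, components):
    """Squared reconstruction error of one sample under the PCA model."""
    xc = x - mean
    proj = components.T @ (components @ xc)
    return float(((xc - proj) ** 2).sum())

rng = np.random.default_rng(1)
stream = rng.normal(size=(300, 4))          # stand-in for process signals
w = 100                                     # sliding-window length
mean, comps = window_pca(stream[:w], k=2)   # refit as the window slides
scores = [residual(x, mean, comps) for x in stream[w:w + 20]]
```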
S1568494615002707
Most experimental studies initialize the population of evolutionary algorithms with random genotypes. In practice, however, optimizers are typically seeded with good candidate solutions, either previously known or created according to some problem-specific method. This seeding has been studied extensively for single-objective problems. For multi-objective problems, however, very little literature is available on approaches to seeding and their individual benefits and disadvantages. In this article, we narrow this gap via a comprehensive computational study on common real-valued test functions. We investigate the effect of two seeding techniques for five algorithms on 48 optimization problems with 2, 3, 4, 6, and 8 objectives. We observe that some functions (e.g., DTLZ4 and the LZ family) benefit significantly from seeding, while others (e.g., WFG) profit less. The advantage of seeding also depends on the examined algorithm.
Seeding the initial population of multi-objective evolutionary algorithms: A computational study
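Seeding itself is mechanically simple; the sketch below shows one common variant, overwriting the first members of a random population with known solutions. The function and its arguments are hypothetical; the paper compares seeding techniques rather than prescribing this one.

```python
import numpy as np

def seeded_population(pop_size, dim, bounds, seeds):
    """Random initial population with known candidate solutions injected.

    seeds: list of decision vectors believed to be good starting points.
    """
    lo, hi = bounds
    pop = np.random.uniform(lo, hi, size=(pop_size, dim))
    for i, s in enumerate(seeds[:pop_size]):
        pop[i] = s                 # overwrite random members with seeds
    return pop

seeds = [np.full(10, 0.5), np.linspace(0, 1, 10)]
pop = seeded_population(100, 10, (0.0, 1.0), seeds)
```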
S1568494615002719
Hierarchical Dirichlet process (HDP) is an unsupervised method that has been widely used for topic extraction and document clustering problems. One advantage of HDP is its inherent mechanism for determining the total number of clusters/topics. However, HDP has three weaknesses: (1) there is no mechanism to use known labels or incorporate expert knowledge into the learning procedure, thus precluding users from directing the learning and making the final results incomprehensible; (2) it cannot detect the categories expected by applications without expert guidance; (3) it does not automatically adjust the model parameters and structure in a changing environment. To address these weaknesses, this paper proposes an incremental learning method with partial supervision for HDP, which enables the topic model (initially guided by partial knowledge) to incrementally adapt to the latest available information. An important contribution of this work is the application of granular computing to HDP for partial supervision and incremental learning, which results in a more controllable and interpretable model structure. These enhancements provide a more flexible approach with expert guidance for model learning and hence result in better prediction accuracy and interpretability.
Incremental learning with partial-supervision based on hierarchical Dirichlet process and the application for document classification
S1568494615002720
The software development life cycle generally includes analysis, design, implementation, test and release phases. The testing phase should be operated effectively in order to release bug-free software to end users. In the last two decades, academicians have taken an increasing interest in the software defect prediction problem, and several machine learning techniques have been applied for more robust prediction. A different classification approach to this problem is proposed in this paper: a combination of a traditional Artificial Neural Network (ANN) and the novel Artificial Bee Colony (ABC) algorithm. The neural network is trained by the ABC algorithm in order to find optimal weights, with the False Positive Rate (FPR) and False Negative Rate (FNR), multiplied by parametric cost coefficients, forming the optimization objective. Software defect data are naturally class-imbalanced because of the skewed distribution of defective and non-defective modules, so conventional error functions of the neural network produce unbalanced FPR and FNR results. The proposed approach was applied to five publicly available datasets from the NASA Metrics Data Program repository. Accuracy, probability of detection, probability of false alarm, balance, Area Under Curve (AUC), and Normalized Expected Cost of Misclassification (NECM) are the main performance indicators of our classification approach. To prevent random results, the dataset was shuffled and the algorithm was executed 10 times with n-fold cross-validation in each iteration. Our experimental results show that a cost-sensitive neural network can be created successfully by using the ABC optimization algorithm for the purpose of software defect prediction.
Software defect prediction using cost-sensitive neural network
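The optimization objective described above, FPR and FNR weighted by cost coefficients, can be written directly. The coefficient values below are hypothetical, and the paper's exact cost formulation may differ.

```python
import numpy as np

def cost_sensitive_fitness(y_true, y_pred, c_fp=1.0, c_fn=5.0):
    """Weighted sum of false-positive and false-negative rates.

    y_true, y_pred: 0/1 arrays (1 = defective module).
    c_fn is typically set higher than c_fp, since missing a defect
    usually costs more than a false alarm (values here are illustrative).
    """
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fpr = fp / max(fp + tn, 1)
    fnr = fn / max(fn + tp, 1)
    return c_fp * fpr + c_fn * fnr   # ABC searches weights minimizing this

y_true = np.array([0, 0, 1, 1, 1])
y_pred = np.array([0, 1, 1, 0, 1])
print(cost_sensitive_fitness(y_true, y_pred))
```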
S1568494615002732
Human skin detection is an essential step in most human detection applications, such as face detection. The performance of any skin detection system depends on two components: feature extraction and the detection method. Skin color is a robust cue for human skin detection. However, the performance of color-based detection methods is constrained by the overlapping color spaces of skin and non-skin pixels. To increase the accuracy of skin detection, texture features can be exploited as additional cues. In this paper, we propose a hybrid skin detection method based on the YIQ color space and the statistical features of skin. A Multilayer Perceptron artificial neural network, which is a universal classifier, is combined with the k-means clustering method to accurately detect skin. The experimental results show that the proposed method can achieve high accuracy, with an F1-measure of 87.82% on images from the ECU database.
Hybrid Human Skin Detection Using Neural Network and K-Means Clustering Technique
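The YIQ color space mentioned above is obtained from RGB with the standard NTSC transform, sketched below; only the transform matrix is standard, the example pixel is illustrative.

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform; Y is luminance, while I and Q carry
# the chrominance in which skin tones cluster tightly.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(image):
    """image: (..., 3) array of RGB values in [0, 1]."""
    return image @ RGB2YIQ.T

pixel = np.array([0.8, 0.5, 0.4])    # a typical skin-like RGB value
print(rgb_to_yiq(pixel))
```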
S1568494615002744
This paper presents a Quality of Experience (QoE) prediction model in a student-centered blended learning environment equipped with an appropriately technology-enriched classroom. The model uses the ANFIS technique to infer the QoE from the individual subjective factors and the objective technical factors that together influence the perceived QoE. We explored the influence on QoE of the subjective personality traits of extroversion and neuroticism, as well as learning style. The objective factors included in the model are the technically measurable parameters latency, jitter, packet loss and bandwidth, which affect the Quality of Service (QoS) of the underlying technology. The findings presented in this paper are obtained from a case study involving 8 teachers and 142 students from second and sixth grade in five primary schools in the Republic of Macedonia. The teachers involved in the project introduced game-based learning strategies in classes, including on-line videoconferences, streamed video content and classical face-to-face gaming. We constructed three ANFIS systems with seven and four input variables and compared their performance using the RMSE, MAPE and R^2 measures. The results showed that perceived QoE can be reliably predicted from the student's personality traits and learning style as subjective factors and network jitter as an objective factor.
An ANFIS model of quality of experience prediction in education
S1568494615002756
This paper proposes a new metaheuristic, the runner-root algorithm (RRA), inspired by the function of runners and roots of some plants in nature. Plants propagated through runners look for water and minerals by developing runners and roots (as well as root hairs): the former help the plant search its surroundings with large random steps, while the latter are suited to searching with small steps. Moreover, a plant that happens to be placed at a very good location spreads over a larger area through longer runners and roots. Similarly, the proposed algorithm is equipped with two exploration tools: random jumps with big steps, which model the function of runners in nature, and a re-initialization strategy in case of trapping in local optima, which randomly redistributes the computational agents in the problem domain and models the propagation of a plant over a larger area when it is well located. Exploitation in RRA is performed by the so-called roots and root hairs, which respectively apply large and small random changes to the variables of the best computational agent (in case of stagnation). The performance of the proposed algorithm is examined by applying it to the standard CEC'2005 benchmark problems and comparing the results with 9 state-of-the-art algorithms using nonparametric methods.
The runner-root algorithm: A metaheuristic for solving unimodal and multimodal optimization problems inspired by runners and roots of plants in nature
S1568494615002768
Differential equations play a noticeable role in engineering, physics, economics, and other disciplines. Approximate approaches are utilized when obtaining analytical (exact) solutions requires substantial computational effort and is often unattainable; hence the importance of approximation methods, particularly metaheuristic algorithms. In this paper, a novel approach is suggested for solving engineering ordinary differential equations (ODEs). With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic methods, an ODE can be represented as an optimization problem. The goal is to minimize the weighted residual function (error function) of the ODE, with the boundary and initial values treated as constraints of the optimization model. Generational distance and inverted generational distance metrics are used to evaluate the approximate solutions against the exact (numerical) solutions. Longitudinal fins having rectangular, trapezoidal, and concave parabolic profiles are considered as the studied ODEs. The optimization task is carried out using three different optimizers: the genetic algorithm, particle swarm optimization, and harmony search. The approximate solutions obtained are compared with the differential transformation method (DTM) and exact (numerical) solutions. The results show that the suggested approach can be successfully applied to the approximate solving of engineering ODEs; its acceptable accuracy is an important advantage over other approximate methods and makes it a possible alternative for approximately solving ODEs.
Metaheuristic algorithms for approximate solution to ordinary differential equations of longitudinal fins having various profiles
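A toy version of the recipe, minimizing the squared residual of a trial solution at collocation points with the initial value built in, is sketched below for y' = -y, y(0) = 1. The paper instead uses Fourier-series trial solutions, fin ODEs, and metaheuristic optimizers (GA, PSO, harmony search), so this is an assumption-laden illustration using a polynomial trial and a gradient-based optimizer.

```python
import numpy as np
from scipy.optimize import minimize

xs = np.linspace(0.0, 1.0, 25)     # collocation points on [0, 1]

def trial(c, x):
    # Trial solution with y(0) = 1 enforced by construction.
    return 1.0 + c[0] * x + c[1] * x**2 + c[2] * x**3

def dtrial(c, x):
    return c[0] + 2 * c[1] * x + 3 * c[2] * x**2

def residual(c):
    r = dtrial(c, xs) + trial(c, xs)   # y' + y should vanish everywhere
    return float(np.sum(r ** 2))

res = minimize(residual, x0=np.zeros(3))
print(trial(res.x, 1.0), np.exp(-1.0))  # approximate vs exact value at x=1
```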
S156849461500277X
We present an optimization-based unsupervised approach to automatic document summarization. In the proposed approach, text summarization is modeled as a Boolean programming problem that attempts to optimize three properties, namely: (1) relevance: the summary should contain informative textual units that are relevant to the user; (2) redundancy: the summary should not contain multiple textual units that convey the same information; and (3) length: the summary is bounded in length. The proposed approach is applicable to both single- and multi-document summarization. In both tasks, documents are split into sentences during preprocessing, salient sentences are selected from the document(s), and the summary is generated by threading all the selected sentences in the order in which they appear in the original document(s). We implemented our model on the multi-document summarization task. When comparing our method to several existing summarization methods on the open DUC2005 and DUC2007 data sets, we found that it improves the summarization results significantly. This is because, first, when extracting summary sentences, the method focuses not only on the relevance scores of sentences to the whole sentence collection but also on how well sentences represent the topics; second, when generating a summary, it also deals with the problem of repetition of information. The methods were evaluated using the ROUGE-1, ROUGE-2 and ROUGE-SU4 metrics. We also demonstrate that the summarization result depends on the similarity measure: the experiments showed that a combination of symmetric and asymmetric similarity measures yields better results than using either separately.
An unsupervised approach to generating generic summaries of documents
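The relevance/redundancy/length trade-off can be approximated greedily, as sketched below. The paper solves a Boolean programming model rather than this greedy heuristic, and the trade-off parameter lam is a hypothetical choice.

```python
import numpy as np

def greedy_summary(sim, relevance, max_len, lengths, lam=0.7):
    """Pick sentences balancing relevance against redundancy.

    sim: (n, n) sentence-similarity matrix; relevance: (n,) scores;
    lengths: (n,) sentence lengths; max_len: summary length budget.
    """
    chosen, total = [], 0
    candidates = set(range(len(relevance)))
    while candidates:
        def score(i):
            # Penalize similarity to anything already selected.
            red = max((sim[i, j] for j in chosen), default=0.0)
            return lam * relevance[i] - (1 - lam) * red
        best = max(candidates, key=score)
        candidates.remove(best)
        if total + lengths[best] <= max_len:
            chosen.append(best)
            total += lengths[best]
    return sorted(chosen)      # keep original document order

sim = np.eye(4)
print(greedy_summary(sim, np.array([0.9, 0.8, 0.7, 0.6]),
                     20, np.array([10, 12, 8, 9])))   # [0, 2]
```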
S1568494615002781
The aim of this paper is to develop a novel concept of uncertain linguistic fuzzy soft sets (ULFSSs), which applies the notion of uncertain fuzzy sets to soft set theory. The relationships between two ULFSSs, including the inclusion, equality and complement relations, are studied based on binary relations. We also introduce some basic set operations for ULFSSs, such as the 'AND' and 'OR' operations and the algebraic operations, and discuss their properties. As an application of this new fuzzy soft set, we propose a ULFSS-based group decision making model, in which the weights of decision makers are obtained from a non-linear optimization model according to the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method and maximum entropy theory. Finally, a sound-quality assessment problem is investigated to illustrate the feasibility and validity of the proposed approach.
Uncertain linguistic fuzzy soft sets and their applications in group decision making
S1568494615002793
3D stacked technology has emerged as an effective mechanism to overcome the physical limits and communication delays found in 2D integration. However, 3D technology also presents several drawbacks that prevent its smooth application, two of the major concerns being heat reduction and power density distribution. In our work, we propose a novel 3D thermal-aware floorplanner that includes: (1) an effective thermal-aware process with three different evolutionary algorithms that aim to solve the soft computing problem of optimizing the placement of functional units and through-silicon vias, as well as the smooth inclusion of active cooling systems and new design strategies; (2) an approximated thermal model inside the optimization loop; (3) an optimizer for active cooling (liquid channels); and (4) a novel technique based on air-channel placement designed to isolate thermal domains. The experimental work is conducted for a realistic many-core single-chip architecture based on the Niagara design. Results show promising improvements of the thermal and reliability metrics, as well as optimal scaling capabilities to target future-trend many-core systems.
Thermal-aware floorplanner for 3D IC, including TSVs, liquid microchannels and thermal domains optimization
S1568494615002811
We present a green vehicle routing and scheduling problem (GVRSP) considering general time-dependent traffic conditions with the primary objective of minimizing CO2 emissions and weighted tardiness. A new mathematical formulation is proposed to describe the GVRSP with hierarchical objectives and weighted tardiness. The proposed formulation is an alternative formulation of the GVRSP in that a vehicle is allowed to travel an arc in multiple time periods: the schedule of a vehicle is determined by the actual distance it travels on each arc in each time period instead of the time point at which it departs from each node, so more general time-dependent traffic patterns can be considered in the model. The formulation is studied under various objective functions, such as minimizing the total CO2 emissions, the total travel distance, and the total travel time. Computational results show that up to a 50% reduction in CO2 emissions can be achieved, with average reductions of 12% and 28% compared to distance-oriented and travel-time-oriented solutions, respectively. In addition, a simulated annealing (SA) algorithm is introduced to solve large-sized problem instances. To reduce the search space, the SA algorithm searches only for vehicle routes and rough schedules, and a straightforward heuristic procedure is used to determine near-optimal detailed schedules for a given set of routes. The performance of the SA algorithm is tested on large-sized problems with up to 100 nodes and 10 time periods.
A simulated annealing algorithm to solve the green vehicle routing & scheduling problem with hierarchical objectives and weighted tardiness
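For readers unfamiliar with SA, the skeleton below anneals a single route with random swap moves under geometric cooling. The paper's SA additionally searches rough schedules and derives detailed schedules heuristically, so this is only a generic sketch with assumed parameters.

```python
import math, random

def simulated_annealing(route, cost, t0=100.0, alpha=0.995, steps=5000):
    """Anneal a single route by random 2-swap moves.

    cost: function mapping a route (list of node ids) to a scalar
    (e.g., estimated CO2 emissions plus weighted tardiness).
    """
    best = cur = route[:]
    t = t0
    for _ in range(steps):
        cand = cur[:]
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]          # swap two visits
        d = cost(cand) - cost(cur)
        if d < 0 or random.random() < math.exp(-d / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur[:]
        t *= alpha                                   # geometric cooling
    return best

random.seed(0)
route = list(range(8))
cost = lambda r: sum(abs(a - b) for a, b in zip(r, r[1:]))  # toy cost
print(cost(simulated_annealing(route, cost)))
```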
S1568494615002823
In this paper, an application of the adaptive neuro-fuzzy inference system (ANFIS) for the prediction of the temperature distribution in a human brain equivalent liquid (BEL) resulting from exposure to electromagnetic radiation is presented. In the first phase of the study, temperature distributions resulting from different exposure conditions of electromagnetic (EM) fields are experimentally determined. Since experimental determination of the temperature at each point of the BEL is time-consuming and hard to implement, ANFIS is employed to determine the distribution without new measurements. ANFIS proves very useful for predicting the temperature distribution within the BEL after exposure to radiation: numerical results show that the proposed method can predict the temperature distribution in a human BEL with high accuracy without requiring any experimental measurements.
Prediction of temperature distribution in human BEL exposed to 900MHz mobile phone radiation using ANFIS
S1568494615002938
Orthogonal frequency division multiple access (OFDMA) is a promising technique that can provide high downlink capacity for future wireless systems. The total capacity of OFDMA systems can be maximized by adaptively assigning each subcarrier to the user with the best gain for that subcarrier, with power subsequently distributed by water-filling. In this paper, we propose the use of a differential evolution combined with multi-objective optimization (CMODE) algorithm to allocate resources to the users in a downlink OFDMA system. Specifically, we propose two approaches for resource allocation in downlink OFDMA systems using the CMODE algorithm: in the first, CMODE is used only for subcarrier allocation (OSA), while in the second, it is used for joint subcarrier and power allocation (JSPA). The CMODE algorithm is population-based: a set of potential solutions evolves to arrive at a near-optimal solution for the problem under study. During the past decade, solving constrained optimization problems with evolutionary algorithms has received considerable attention among researchers and practitioners. CMODE combines multi-objective optimization with differential evolution (DE) to deal with constrained optimization problems; the comparison of individuals in CMODE is based on multi-objective optimization, while DE serves as the search engine. In addition, an infeasible-solution replacement mechanism based on multi-objective optimization is used in CMODE, with the purpose of guiding the population towards promising solutions and the feasible region simultaneously. It is shown that both proposed approaches obtain higher sum capacities than those obtained in previous works, with comparable computational complexity. It is also shown that the JSPA approach provides near-optimal results at a slightly higher computational cost.
Differential evolution aided adaptive resource allocation in OFDMA systems with proportional rate constraints
S156849461500294X
In this paper, we propose a switching adaptive control scheme using a Hopfield-based dynamic neural network (SACHNN) for nonlinear systems with external disturbances. In our proposed scheme, an auxiliary direct adaptive controller (DAC) ensures system stability when the indirect adaptive controller (IAC) fails, that is, when ĝ(x) approaches zero, where ĝ(x) is the denominator of the indirect adaptive control law. The IAC's limitation of ĝ(x) > ε can then be handled by simply switching from the IAC to the DAC, where ε is a positive desired value. The Hopfield dynamic neural network (HDNN) is used not only to design the DAC but also to approximate the unknown plant nonlinearities in the IAC design. The simple structure of the designed HDNN maintains good tracking performance and makes practical implementation much easier thanks to the use of a small, fixed number of neurons.
Intelligent switching adaptive control for uncertain nonlinear dynamical systems
S1568494615002951
In this paper, a novel region-based fuzzy active contour model with a kernel metric is proposed for robust and stable image segmentation. The model can detect boundaries precisely and works well on images with noise, outliers and low contrast. It segments an image into two regions, the object and the background, by minimizing a predefined energy function. Due to the kernel metric incorporated in the energy and the fuzziness of the energy, the active contour evolves very stably without reinitialization of the level set function during the evolution. Here the fuzziness provides the model with a strong ability to reject local minima, and the kernel metric is employed to construct a nonlinear version of the energy function within a level set framework. This new fuzzy, nonlinear energy makes the updating of the region centers more robust against noise and outliers in an image. Theoretical analysis and experimental results show that the proposed model achieves a much better balance between accuracy and efficiency compared with other active contour models.
Novel fuzzy active contour model with kernel metric for image segmentation
S1568494615002963
In this study, the combination of an artificial neural network (ANN) and the ant colony optimization (ACO) algorithm is utilized for modeling and reducing NOx and soot emissions from a direct injection diesel engine. A feed-forward multi-layer perceptron (MLP) network is used to represent the relationship between the input parameters (engine speed, intake air temperature, rate of fuel mass injected, and power) and the output parameters (NOx and soot emissions). The ACO algorithm is employed to find the optimum intake air temperatures and rates of fuel mass injected for different engine speeds and powers, with the purpose of simultaneously reducing NOx and soot. The obtained results reveal that the ANN can appropriately model the exhaust NOx and soot emissions, with correlation factors of 0.98 and 0.96, respectively. Further, the employed ACO algorithm yields 32% and 7% reductions in NOx and soot, respectively. The response time of the optimization process was less than 4 min on the particular PC system used in the present work. The high accuracy and speed of the model show its potential for application in intelligent control systems of diesel engines.
Prediction and reduction of diesel engine emissions using a combined ANN–ACO method
S1568494615002975
Automatic fall detection is a major issue in the health care of elderly people, where the ability to discriminate in real time between falls and normal daily activities is crucial. Several methods already exist to perform this task, but approaches able to provide explicit formalized knowledge together with high classification accuracy have not yet been developed, and would be highly desirable. To achieve this aim, this paper proposes an innovative and complete approach to fall detection based both on the automatic extraction of knowledge, expressed as a set of IF-THEN rules, from a database of fall recordings, and on its use in a mobile health monitoring system. Whenever a fall is detected by the latter, the system can take immediate actions, e.g., alerting medical personnel. Our method can easily overcome the limitations of other approaches to fall detection: thanks to the knowledge gathering, it overcomes both the difficulty a human being faces in dealing with many parameters and finding out which are the most suitable, and the need for a laborious trial-and-error procedure to find the values of the related thresholds. In addition, in our approach the extracted knowledge is processed in real time by a reasoner embedded in a mobile device, without any need for connection to a remote server. The proposed approach has been compared against four other classifiers on a database of falls simulated by volunteers, and its discrimination ability has been shown to be higher, with an average accuracy of 91.88%. We have also carried out a preliminary experimental phase on real-world falls: the best set of rules found using the previous database achieved satisfactory performance here as well, with accuracy, sensitivity, and specificity of about 92%, 86%, and 96%, respectively.
A supervised approach to automatically extract a set of rules to support fall detection in an mHealth system
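The extracted knowledge takes the form of IF-THEN rules over movement features; the sketch below shows the general shape such rules could take. The feature names and threshold values are purely hypothetical, not the rules learned in the paper.

```python
# Hypothetical shape of the extracted knowledge: IF-THEN rules over
# accelerometer-derived features (all thresholds here are illustrative).
def is_fall(peak_acc_g, post_impact_still_s, orientation_change_deg):
    # Rule 1: a hard impact followed by a period of stillness.
    if peak_acc_g > 3.0 and post_impact_still_s > 2.0:
        return True
    # Rule 2: a moderate impact with a large change in body orientation.
    if peak_acc_g > 2.5 and orientation_change_deg > 60.0:
        return True
    return False

print(is_fall(3.4, 2.5, 10.0))   # True: hard impact, then stillness
```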
S1568494615002987
The increasing complexity of real-world optimization problems raises new challenges to evolutionary computation. Responding to these challenges, distributed evolutionary computation has received considerable attention over the past decade. This article provides a comprehensive survey of the state-of-the-art distributed evolutionary algorithms and models, which have been classified into two groups according to their task division mechanism. Population-distributed models are presented with master-slave, island, cellular, hierarchical, and pool architectures, which parallelize an evolution task at population, individual, or operation levels. Dimension-distributed models include coevolution and multi-agent models, which focus on dimension reduction. Insights into the models, such as synchronization, homogeneity, communication, topology, speedup, advantages and disadvantages are also presented and discussed. The study of these models helps guide future development of different and/or improved algorithms. Also highlighted are recent hotspots in this area, including the cloud and MapReduce-based implementations, GPU and CUDA-based implementations, distributed evolutionary multiobjective optimization, and real-world applications. Further, a number of future research directions have been discussed, with a conclusion that the development of distributed evolutionary computation will continue to flourish.
Distributed evolutionary algorithms and their models: A survey of the state-of-the-art
S1568494615002999
One of the most important problems in supply chain management is the design of distribution systems that reduce transportation costs and meet customer demand in minimum time. In recent years, cross-docking (CD) centers have been considered as facilities that reduce transportation and inventory costs. However, neglecting the optimum location of the centers and the optimum routing and scheduling of the vehicles misleads the optimization process into local optima. Accordingly, in this research, the integrated vehicle routing and scheduling problem in cross-docking systems is modeled. The new model also includes direct shipment from manufacturers to customers, and vehicles are assigned to the cross-dock doors with lower cost. Next, to solve the model, a novel machine-learning-based heuristic method (MLBM) is developed, in which the customers, manufacturers and locations of the cross-docking centers are grouped through a bi-clustering approach. The MLBM is a filter-based learning method with three stages: customer clustering through a modified bi-clustering method, sub-problem modeling, and solving the whole model. In addition, for the scheduling problem of vehicles in the cross-docking system, this paper proposes both an exact solution and a genetic algorithm (GA), the latter adapted for large-scale problems for which exact methods are not efficient. The parameters of the proposed GA are tuned via the Taguchi method. Finally, to validate the proposed model, several benchmark problems from the literature are selected and modified according to the newly introduced assumptions, and different statistical analysis methods are implemented to assess the performance of the proposed algorithms.
A novel learning based approach for a new integrated location-routing and scheduling problem within cross-docking considering direct shipment
S1568494615003002
The main aim of this paper is to present a consistency model for interval multiplicative preference relations (IMPRs). To measure the consistency level of an IMPR, a referenced consistent IMPR of a given IMPR is defined, which has the minimum logarithmic distance from the given IMPR. Based on the referenced consistent IMPR, the consistency level of an IMPR can be measured, and an IMPR with unacceptable consistency can be adjusted by a proposed algorithm so that the revised IMPR is of acceptable consistency. A consistency model for group decision making (GDM) problems with IMPRs is proposed to obtain the collective IMPR with the highest consistency level. Numerical examples are provided to illustrate the validity of the proposed approaches in decision making.
A consistency model for group decision making problems with interval multiplicative preference relations
S1568494615003014
Monitoring human energy expenditure (EE) is important in many health and sports applications, since the energy expenditure directly reflects the intensity of physical activity. The actual energy expenditure is impractical to measure; therefore, it is often estimated from the physical activity measured with accelerometers and other sensors. Previous studies have demonstrated that using a person's activity as the context in which the EE is estimated, and using multiple sensors, improves the estimation. In this study, we go a step further by proposing a context-based reasoning method that uses multiple contexts provided by multiple sensors. The proposed Multiple Contexts Ensemble (MCE) approach first extracts multiple features from the sensor data. Each feature is used as a context for which multiple regression models are built using the remaining features as training data: for each value of the context feature, a regression model is trained on the subset of the dataset with that value. When evaluating a data sample, the models corresponding to the context (feature) values in the evaluated sample are assembled into an ensemble of regression models that estimates the EE of the user. Experiments showed that the MCE method outperforms (in terms of lower root mean squared error and lower mean absolute error): (i) five single-regression approaches (linear and non-linear); (ii) two ensemble approaches, Bagging and Random Subspace; (iii) an approach that uses artificial neural networks trained on accelerometer data only; and (iv) BodyMedia (a state-of-the-art commercial EE-estimation device).
Context-based ensemble method for human energy expenditure estimation
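A minimal sketch of the Multiple Contexts Ensemble idea: train one regressor per (context feature, value) pair on the matching data subset, then average the models whose context values match the evaluated sample. It assumes discretized context features and uses plain linear regression as a stand-in for the paper's regressors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def train_mce(X, y, context_cols):
    """Train one regressor per (context feature, context value) pair.

    Each model is fit on the subset of samples sharing that context
    value, using the remaining columns as inputs.
    """
    models = {}
    for c in context_cols:
        rest = [j for j in range(X.shape[1]) if j != c]
        for v in np.unique(X[:, c]):
            mask = X[:, c] == v
            m = LinearRegression().fit(X[mask][:, rest], y[mask])
            models[(c, v)] = (m, rest)
    return models

def predict_mce(models, x):
    """Average the models matching the sample's context values."""
    preds = [m.predict(x[rest].reshape(1, -1))[0]
             for (c, v), (m, rest) in models.items() if x[c] == v]
    return float(np.mean(preds))

X = np.array([[0, 1.0, 2.0], [0, 2.0, 3.0], [1, 1.5, 2.5], [1, 3.0, 4.0]])
y = np.array([1.0, 2.0, 1.5, 3.0])
models = train_mce(X, y, context_cols=[0])   # column 0 is the context
print(predict_mce(models, X[2]))
```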
S1568494615003026
This paper presents a novel motion planning algorithm for modular robots moving in environments with diverse terrain conditions. This requires the planner to generate a suitable control signal for all actuators, which can be computationally intensive. To decrease the complexity of the planning task, the concept of motion primitives is used: motion primitives generate simple motions like 'crawl-forward' or 'turn-left', and the motion planner constructs a plan using these primitives. To preserve the efficiency and robustness of the planner on varying terrains, a novel schema called RRT-AMP (Rapidly Exploring Random Trees with Adaptive Motion Primitives) for adapting the motion primitives is introduced. The adaptation procedure is integrated into the planning process, which allows the planner to simultaneously adapt the primitives and use them to obtain the final plan. Besides adaptation in changing environments, RRT-AMP can also adapt motion primitives if a module fails. The method is experimentally verified with robots of different morphologies to show its adaptation and planning abilities in complex environments.
Motion planning with adaptive motion primitives for modular robots
S1568494615003038
In this paper, we propose a novel change detection method for synthetic aperture radar images based on unsupervised artificial immune systems. After generating the difference image from the multitemporal images, we take each pixel as an antigen and build an immune model to deal with the antigens. By continuously stimulating the immune model, the antigens are classified into two groups, changed and unchanged. First, the proposed method incorporates local information in order to restrain the impact of speckle noise. Second, it simulates the immune response process in a fuzzy way to obtain an accurate result while retaining more image detail: we introduce a fuzzy membership for each antigen and then update the antibodies and memory cells according to this membership. Compared with the clustering algorithms proposed in our previous works, the new method inherits immunological properties from immune systems and is robust to speckle noise thanks to the use of local information as well as the fuzzy strategy. Experiments on real synthetic aperture radar images show that the proposed method performs well on several kinds of difference images and produces more robust results than the other compared methods.
Change detection in synthetic aperture radar images based on unsupervised artificial immune systems
S156849461500304X
The job-shop scheduling problem with operators (JSO) is an extension of the classic job-shop problem in which an operation must be assisted by one of a limited set of human operators, so it models many real-life situations. In this paper we tackle the JSO by means of memetic algorithms with the objective of minimizing the makespan. We define and analyze a neighborhood structure which is then exploited in local search and tabu search algorithms. These algorithms are combined with a conventional genetic algorithm to improve a fraction of the chromosomes in each generation. We also consider two different schedule builders for chromosome decoding. All these elements are combined to obtain memetic algorithms which are evaluated over an extensive set of instances. The results of the experimental study show that they reach high-quality solutions in very short time, comparing favorably with the state-of-the-art methods.
Memetic algorithms for the job shop scheduling problem with operators
S1568494615003051
Self-organizing maps (SOM) have been applied to numerous data clustering and visualization tasks and have received much attention for their success. One major shortcoming of the classical SOM learning algorithm is the need for a predefined map topology; furthermore, hierarchical relationships among data are difficult to find. Several approaches have been devised to overcome these deficiencies. In this work, we propose a novel SOM learning algorithm that incorporates several text mining techniques in expanding the map both laterally and hierarchically. On training a set of text documents, the proposed algorithm first clusters them using the classical SOM algorithm. We then identify the topics of each cluster, and these topics are used to evaluate the criteria for expanding the map. The major characteristic of the proposed approach is that it combines the learning process with the text mining process, making it suitable for the automatic organization of text documents. We applied the algorithm to the Reuters-21578 dataset in text clustering and categorization tasks. Our method outperforms two comparison models in hierarchy quality according to users' evaluation, and it also achieves better F1-scores than two other models in the text categorization task.
Incorporating self-organizing map with text mining techniques for text hierarchy generation
S1568494615003063
In this research, we propose a facial expression recognition system with a layered encoding cascade optimization model. Since generating an effective facial representation is a vital step for successful facial emotion recognition, a modified Local Gabor Binary Pattern operator is first employed to derive a refined initial face representation, and we then propose two evolutionary algorithms for feature optimization under the layered cascade model: (i) direct similarity and (ii) Pareto-based feature selection. The direct similarity feature selection considers characteristics within the same emotion category that give the minimum within-class variation, while the Pareto-based feature optimization focuses on features that best represent each expression category and, at the same time, provide the most distinction from other expressions. Both a neural network and an ensemble classifier with weighted majority vote are implemented for the recognition of seven expressions based on the selected optimized features; the ensemble model also automatically updates itself with the most recent concepts in the data. Evaluated on the Cohn-Kanade database, our system achieves its best accuracies when the ensemble classifier is applied and outperforms other research reported in the literature, with 96.8% for the direct similarity based optimization and 97.4% for the Pareto-based feature selection. Cross-database evaluation with frontal images from the MMI database has also been conducted to further prove system efficiency, achieving 97.5% for the Pareto-based approach and 90.7% for the direct similarity-based feature selection, again outperforming related research on MMI. When evaluated with 90° side-view images extracted from the videos of the MMI database, the system achieves superior performance, with >80% accuracy for both optimization algorithms. Experiments with other weighting and meta-learning combination methods for the construction of ensembles are also explored, with our proposed ensemble showing great adaptivity to new test data streams in the cross-database evaluation. In future work, we aim to incorporate other filtering techniques and evolutionary algorithms into the optimization models to further enhance the recognition performance.
Intelligent facial emotion recognition using a layered encoding cascade optimization model
S1568494615003075
Plane model extraction from three-dimensional point clouds is a necessary step in many different applications such as planar object reconstruction, indoor mapping and indoor localization. Different RANdom SAmple Consensus (RANSAC)-based methods have been proposed for this purpose in recent years. In this study, we propose a novel RANSAC-based method called Multiplane Model Estimation, which can estimate multiple plane models simultaneously from a noisy point cloud using the knowledge extracted from a scene (or an object) in order to reconstruct it accurately. This method comprises two steps: first, it clusters the data into planar faces that preserve some constraints defined by knowledge related to the object (e.g., the angles between faces); and second, the models of the planes are estimated based on these data using a novel multi-constraint RANSAC. We performed experiments in the clustering and RANSAC stages, which showed that the proposed method performed better than state-of-the-art methods.
Three-dimensional planar model estimation using multi-constraint knowledge based on k-means and RANSAC
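A minimal single-plane RANSAC sketch plus an example of the kind of inter-face angle constraint a multi-constraint variant could enforce. The distance tolerance and the orthogonality check below are illustrative assumptions, not the paper's actual rules.

```python
import numpy as np

def ransac_plane(points, iters=500, tol=0.01, rng=np.random.default_rng(0)):
    """Fit one plane (n, d) with n.p + d = 0 to a noisy (N, 3) point cloud."""
    best_inliers, best_model = None, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p1
        dist = np.abs(points @ n + d)     # point-to-plane distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

def angle_ok(n1, n2, target_deg=90.0, slack_deg=5.0):
    """Example multi-constraint check: two faces should be near-orthogonal."""
    ang = np.degrees(np.arccos(np.clip(abs(n1 @ n2), 0.0, 1.0)))
    return abs(ang - target_deg) <= slack_deg
```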
S1568494615003087
Improved authentication mechanisms are needed to cope with the increased data exposure we face nowadays. Keystroke dynamics is a cost-effective alternative, which usually only requires a standard keyboard to acquire authentication data. Here, we focus on recognizing users by keystroke dynamics using immune algorithms, considering a one-class classification approach. In such a scenario, only samples from the legitimate user are available to generate the model of the user. Throughout the paper, we emphasize the importance of proper data understanding and pre-processing. We show that keystroke samples from the same user present similarities in what we call typing signature. A proposal to take advantage of this finding is discussed: the use of rank transformation. This transformation improved performance of classification algorithms tested here and it was decisive for some immune algorithms studied in our setting.
Emphasizing typing signature in keystroke dynamics using immune algorithms
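A small sketch of the rank transformation discussed above, applied to hypothetical keystroke timing vectors; the feature values are invented for illustration. Replacing each timing by its within-sample rank keeps only the relative ordering, which is what makes the typing signature visible.

```python
import numpy as np
from scipy.stats import rankdata

def rank_transform(samples):
    """Replace each timing value with its rank within the sample."""
    return np.vstack([rankdata(row) for row in samples])

# e.g. hold/flight times (ms) for two samples of the same user:
X = np.array([[110.0, 95.0, 130.0, 80.0],
              [140.0, 120.0, 170.0, 100.0]])
print(rank_transform(X))   # both rows map to [3. 2. 4. 1.]
```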
S1568494615003099
Transportation of products from sources to destinations with minimal total cost plays an important role in logistics and supply chain management. All algorithms start with an initial feasible solution when seeking the minimal total cost solution to this problem. Generally, the better the initial feasible solution, the fewer the iterations needed to reach the minimal total cost solution. Here, we first demonstrate a deficiency of a recently developed method for obtaining the minimal total cost solution to this problem. Then we develop a better polynomial-time (O(N^3), where N is the larger of the numbers of source and destination nodes) heuristic solution technique to obtain a better initial feasible solution to the transportation problem. Because the enormous calculations required by this heuristic technique are intractable without a soft computing program, the technique is coded in the C++ programming language. Comparative studies of this heuristic against the best available ones in the literature on a set of numerical problems are carried out to show the better performance of the current one. Our heuristic is found to lead to the minimal total cost solution in most (88.89%) of the studied numerical problems.
An efficient heuristic to obtain a better initial feasible solution to the transportation problem
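For reference, a minimal least-cost starting-solution heuristic for the balanced transportation problem. This is a classical textbook method shown only to illustrate what an initial feasible solution looks like; it is not the paper's O(N^3) technique, whose rules the abstract does not spell out.

```python
import numpy as np

def least_cost_initial(cost, supply, demand):
    """Greedy least-cost starting allocation for a balanced problem."""
    cost = np.array(cost, dtype=float)
    supply = np.array(supply, dtype=float)
    demand = np.array(demand, dtype=float)
    alloc = np.zeros_like(cost)
    while supply.sum() > 1e-9 and demand.sum() > 1e-9:
        i, j = np.unravel_index(np.argmin(cost), cost.shape)  # cheapest cell
        q = min(supply[i], demand[j])
        alloc[i, j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] <= 1e-9:
            cost[i, :] = np.inf          # row exhausted
        if demand[j] <= 1e-9:
            cost[:, j] = np.inf          # column satisfied
    return alloc

print(least_cost_initial([[4, 6, 8], [5, 7, 6]], [40, 60], [30, 30, 40]))
```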
S1568494615003105
The vibration domain of structures can be reduced by imposing some constraints on their natural frequencies. For this purpose optimal design of structures under frequency constraints is required which involves highly non-linear and non-convex problems. In this paper an efficient hybrid algorithm is developed for solving such optimization problems. This algorithm utilizes the recently developed colliding bodies optimization (CBO) algorithm as the main engine and uses the positive properties of the particle swarm optimization (PSO) algorithm to increase the efficiency of the CBO. The distinct feature of the present hybrid algorithm is that it requires no parameter tuning. The CBO is known for being parameter independent, and avoiding the use of the traditional penalty method to handle the constraints upholds this property. Two mathematical constrained functions taken from the literature are studied to verify the performance of the algorithm. The algorithm is then applied to optimize truss structures with frequency limitations. The numerical results demonstrate the efficiency of the presented algorithm for this class of problems.
A hybrid CBO–PSO algorithm for optimal design of truss structures with dynamic constraints
S1568494615003117
This study evaluates an appropriate business model for e-book firms in Taiwan. We apply expert questionnaires to calculate the weights of five criteria, namely business strategy (BS), finance characteristics (FC), market characteristics (MC), quality measurements of product and service (QS) and implementation (IM) (together, BCCSM), for the capability of developing the firms’ business. Then, a fuzzy analytic hierarchy process (FAHP) is adopted to explore the weights of the indices, and the VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), grey relation analysis (GRA), and the technique for order preference by similarity to ideal solution (TOPSIS) are utilized to rank the three alternatives. The results show that the best business strategy for e-book firms is a single brand that simultaneously integrates content, platforms, and devices, while the top two weights among the evaluation criteria are business strategy and market characteristics, which enable firms to develop an appropriate e-book business model.
A hybrid fuzzy model for selecting and evaluating the e-book business model: A case study on Taiwan e-book firms
S1568494615003129
This paper proposes an improved multi-objective differential evolutionary algorithm named multi-objective hybrid differential evolution with simulated annealing technique (MOHDE-SAT) to solve the dynamic economic emission dispatch (DEED) problem. The proposed MOHDE-SAT integrates the orthogonal initialization method into the differential evolution, which enlarges the population diversity at the beginning of the evolution. In addition, a modified mutation operator and archive retention mechanisms are used to control the convergence rate, and a simulated annealing technique and an entropy diversity method are utilized to adaptively monitor the population diversity as the evolution proceeds, which properly avoids the premature convergence problem. Furthermore, MOHDE-SAT is applied to thermal systems with a heuristic constraint handling method, and obtains more desirable results in comparison with recently established alternatives. The obtained results also reveal that the proposed MOHDE-SAT provides a viable way of solving DEED problems.
Multi-elite guide hybrid differential evolution with simulated annealing technique for dynamic economic emission dispatch
S1568494615003130
This paper proposes the use of stigmergic cooperation between two swarms of Fuzzy Nanoparticles (FNPs) and Auxiliary Nanoparticles (ANPs) for intelligent control of Low-Density Lipoprotein (LDL) concentration in the arterial wall, as a novel non-invasive method for the prevention of atherosclerosis. Given any desired fuzzy controller, a swarm of FNPs in the aqueous environment of a living tissue can collectively realize an accurate approximation of this controller, which is called a swarm fuzzy controller. In this study, the task of the swarm fuzzy controller is to manipulate the pheromone level of the environment as output according to the sensed value of LDL concentration as input. The pheromone is a chemical substance used for stigmergic communication between the two swarms of FNPs and ANPs. An ANP consists of a drug reservoir connected to a nanoscale valve which is controllable by the pheromone concentration. The level of pheromone in the local environment of an ANP determines how much drug it should release. The hardware complexity of the proposed approach is lower than that of nanorobotics, which facilitates its manufacturing. Simulation results on a well-known mathematical model demonstrate that this method can successfully reduce the LDL level to a desired value in the arterial wall of a patient with a very high LDL level, while its performance is much better than that of the authors' previous work. Also, the mass of the released drug in a healthy wall is 16 times less than its corresponding value in an unhealthy wall.
Stigmergic cooperation of nanoparticles for swarm fuzzy control of low-density lipoprotein concentration in the arterial wall
S1568494615003142
Uneven energy consumption is an inherent problem in wireless sensor networks characterized by multi-hop routing and many-to-one traffic pattern. Such unbalanced energy dissipation can significantly reduce network lifetime. In this paper, we study the problem of prolonging network lifetime in large-scale wireless sensor networks where a mobile sink gathers data periodically along the predefined path and each sensor node uploads its data to the mobile sink over a multi-hop communication path. By using greedy policy and dynamic programming, we propose a heuristic topology control algorithm with time complexity O(n(m + n log n)), where n and m are the number of nodes and edges in the network, respectively, and further discuss how to refine our algorithm to satisfy practical requirements such as distributed computing and transmission timeliness. Theoretical analysis and experimental results show that our algorithm is superior to several earlier algorithms for extending network lifetime.
Energy-efficient topology control algorithm for maximizing network lifetime in wireless sensor networks with mobile sink
S1568494615003154
The flower pollination algorithm (FPA) is a recently proposed metaheuristic inspired by the natural phenomenon of flower pollination. Since its invention, this rising metaheuristic has attracted considerable interest in the metaheuristic optimisation community, and many works based on the FPA have already been proposed. However, these works have not provided any deep analysis of the performance of the basic algorithm, nor of the variants already proposed. This makes it difficult to decide on the applicability of this new metaheuristic to real-world applications. This paper qualitatively and quantitatively analyses this metaheuristic. The qualitative analysis studies the basic variant of the FPA and some extensions of it. For the quantitative analysis, the FPA is statistically examined by using it to solve the CEC 2013 benchmarks for real-parameter continuous optimisation, then by applying it to some of the CEC 2011 benchmarks for real-world optimisation problems. In addition, some extensions of the FPA, based on opposition-based learning and the modification of the movement equation in the global-pollination operator, are presented and also analysed in this paper. On the whole, the basic FPA has been found to offer below-average performance when compared to state-of-the-art algorithms, and the best of the proposed extensions reaches average results.
On the performances of the flower pollination algorithm – Qualitative and quantitative analyses
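A minimal sketch of the two FPA operators referred to above: global pollination moves a flower toward the current best solution with a Lévy-flight step (drawn here with Mantegna's algorithm), while local pollination mixes two random flowers. The switch probability p = 0.8 is a commonly used value, assumed here.

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, beta=1.5, rng=np.random.default_rng(0)):
    """Mantegna's algorithm for Levy-stable step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def fpa_step(x, best, flowers, p=0.8, rng=np.random.default_rng(0)):
    """One move of a flower in the basic FPA."""
    if rng.random() < p:
        return x + levy(len(x), rng=rng) * (best - x)            # global
    j, k = rng.choice(len(flowers), 2, replace=False)
    return x + rng.random() * (flowers[j] - flowers[k])          # local
```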
S1568494615003166
This paper presents a new color image segmentation method based on a multiobjective optimization algorithm, named improved bee colony algorithm for multi-objective optimization (IBMO). Segmentation is posed as a clustering problem through grouping image features in this approach, which combines IBMO with seeded region growing (SRG). Since feature extraction plays a crucial role in image segmentation, the presented method first focuses on this task. The main features of an image, namely color, texture and gradient magnitudes, are measured by using local homogeneity, Gabor filters and color spaces. Then SRG utilizes the extracted feature vector to classify the pixels spatially. It starts running from centroid points called seeds. IBMO determines the coordinates of the seed points and the similarity difference of each region by optimizing a set of cluster validity indices simultaneously in order to improve the quality of segmentation. Finally, segmentation is completed by merging small and similar regions. The proposed method was applied to several natural images obtained from the Berkeley segmentation database. The robustness of the proposed ideas was shown by comparing hand-labeled and experimentally obtained segmentation results. Moreover, the obtained segmentation results have better values than those obtained from fuzzy c-means, which is one of the most popular methods used in image segmentation, non-dominated sorting genetic algorithm II, which is a state-of-the-art algorithm, and non-dominated sorted PSO, which is an adaptation of PSO for multi-objective optimization.
Color image segmentation based on multiobjective artificial bee colony optimization
S1568494615003178
Paraphrase Extraction involves the discovery of equivalent text segments from large corpora and finds application in tasks such as multi-document summarization and document clustering. Semantic similarity identification is a challenging problem which is further compounded by the large size of the corpus. In this paper a two-stage approach which involves clustering followed by Paraphrase Recognition has been proposed for extraction of sentence-level paraphrases from text collections. In order to handle the ambiguity and inherent variability of natural language a fuzzy hierarchical clustering approach which combines agglomeration based on verbs and division on nouns has been used. Sentences within each resultant cluster are then processed by a machine-learning based Paraphrase Recognizer to discover the paraphrases. The two-stage approach has been applied on the Microsoft Research Paraphrase Corpus and a subset of the Microsoft Research Video Description Corpus. The performance has been evaluated against an existing k-means clustering approach as well as cosine-similarity technique and Fuzzy C-Means clustering and the two-stage system has consistently demonstrated better performance.
Paraphrase Extraction using fuzzy hierarchical clustering
S156849461500318X
In this paper we present a clustering framework for type-2 fuzzy clustering which covers all steps of the clustering process, including the clustering algorithm, parameter estimation, and validation and verification indices. The proposed clustering algorithm is developed based on a dual-centers type-2 fuzzy clustering model. In this model the centers of clusters are defined by a pair of objects rather than a single object. The membership values of the objects to the clusters are defined by type-2 fuzzy numbers, and there are no type reduction or defuzzification steps in the proposed clustering algorithm. In addition, the relation among the size of the cluster bandwidth, the distance between the dual centers and the fuzzifier parameter is indicated and analyzed to facilitate the parameter estimation step. To determine the optimum number of clusters, we develop a new validation index which is compatible with the proposed model structure. A new compatible verification index is also defined to compare the results of the proposed model with the existing type-1 fuzzy clustering model. Finally, the results of computational experiments are presented to show the efficiency of the proposed approach.
Dual-centers type-2 fuzzy clustering framework and its verification and validation indices
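For reference, the classical type-1 FCM updates against which the dual-centers type-2 model is compared. This is the standard textbook algorithm, not the paper's dual-centers formulation.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-12):
    """u[i, k] = 1 / sum_j (d_ik / d_jk)^(2/(m-1))  -- classical FCM."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def fcm_centers(X, U, m=2.0):
    """v_k = sum_i u_ik^m x_i / sum_i u_ik^m."""
    W = U ** m
    return (W.T @ X) / W.sum(axis=0)[:, None]
```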
S1568494615003191
Different techniques and methods have been widely used in the subject of automatic anomaly detection in computer networks. Attacks, problems and internal failures, when not detected early, may badly harm an entire network system. Thus, an autonomous anomaly detection system based on the statistical method principal component analysis (PCA) is proposed. This approach creates a network profile called Digital Signature of Network Segment using Flow Analysis (DSNSF) that denotes the predicted normal behavior of network traffic activity through historical data analysis. That digital signature is used as a threshold for volume anomaly detection to detect disparities in the normal traffic trend. The proposed system uses seven traffic flow attributes: bits, packets and number of flows to detect problems, and source and destination IP addresses and ports to provide the network administrator with the information necessary to solve them. Evaluations performed in this paper using real network traffic data show good traffic prediction by the DSNSF and encouraging false alarm generation and detection accuracy for the threshold-based detection scheme.
Autonomous profile-based anomaly detection system using principal component analysis and flow analysis
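A minimal sketch of PCA-based volume anomaly detection in the spirit of the DSNSF profile: learn the dominant subspace of normal traffic features and flag time bins with a large residual. The number of components and the squared-residual score are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def fit_profile(X_train, k=3):
    """Learn a 'normal behaviour' subspace from historical flow features
    (bits, packets, flows, ...), one row per time bin."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return mu, Vt[:k]                       # top-k principal directions

def anomaly_scores(X, mu, P):
    """Squared residual outside the principal subspace; bins whose score
    exceeds a chosen threshold are flagged as volume anomalies."""
    Z = X - mu
    resid = Z - (Z @ P.T) @ P               # component in residual space
    return np.sum(resid ** 2, axis=1)
```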
S156849461500321X
In this paper, a Multitree Genetic Programming-based method is developed to learn an INTerpretable and ACcurate Takagi-Sugeno-Kang (TSK) fuzzy rule based sYstem (MGP-INTACTSKY) for dynamic portfolio trading. The MGP-INTACTSKY utilizes a TSK model with a new structure to develop a more interpretable and accurate system for dynamic portfolio trading. In the new structure of TSK, disjunctive normal form rules with variable structured consequent parts are developed in which the absence of some input variables is allowed. Input variables are the most influential technical indices which are selected by stepwise regression analysis. The technical indices are computed using wavelet transformed stock price series to eliminate the noise. The proposed system directly induces the preferred portfolio weights from the stock's technical indices through time. Here, genetic programming with the multitree structure is applied to learn the TSK fuzzy rule bases with the Pittsburgh approach. With this approach, the correlation of different stocks is properly considered during the evolutionary process. To evaluate the performance of the MGP-INTACTSKY for portfolio trading, the proposed model is implemented on the Tehran Stock Exchange as an emerging market as well as Toronto and Frankfurt Stock Exchanges as two mature markets. The experimental results show that the proposed model outperforms other methods such as the momentum strategy, the multitree genetic programming-based crisp system, the genetic algorithm-based first order TSK system, the buy and hold approach and the market's main index in terms of accuracy and interpretability.
MGP-INTACTSKY: Multitree Genetic Programming-based learning of INTerpretable and ACcurate TSK sYstems for dynamic portfolio trading
S1568494615003221
This paper proposes the application of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to fixed structure H∞ loop shaping controller design. An Integral Time Absolute Error (ITAE) performance requirement is incorporated as a constraint, with the objective of maximizing the stability margin, in the fixed structure H∞ loop shaping controller design problem. A pneumatic servo system, a separating tower process and an F18 fighter aircraft system are considered as test systems. The CMA-ES designed fixed structure H∞ loop shaping controller is compared with the traditional H∞ loop shaping controller and with non-smooth optimization and Heuristic Kalman Algorithm (HKA) based fixed structure H∞ loop shaping controllers in terms of stability margin. A 20% perturbation of the nominal plant is used to validate the robustness of the CMA-ES designed H∞ loop shaping controller. The effect of Finite Word Length (FWL) is considered to show the implementation difficulties of the controller in digital processors. Simulation results demonstrate that the CMA-ES based fixed structure H∞ loop shaping controller is suitable for real-time implementation with good robust stability and performance.
Covariance matrix adaptation evolution strategy based design of fixed structure robust H ∞ loop shaping controller
S1568494615003233
Multi-instance multi-label (MIML) learning plays a pivotal role in Artificial Intelligence studies. MIML learning introduces a framework in which data is described by a bag of instances associated with a set of labels, and modeling the connection between instances and labels is the challenging problem in this framework. The RBF neural network can capture the complex relations between the instances and labels in MIMLRBF, but the parameter estimation of the RBF network is a difficult task. In this paper, the computational convergence and the modeling accuracy of the RBF network have been improved. The present study investigates the impact on multi-instance multi-label learning of a novel hybrid algorithm consisting of the Gases Brownian Motion optimization (GBMO) algorithm and a gradient-based, fast-converging parameter estimation method. In the current study, the hybrid algorithm was developed to estimate the RBF neural network parameters (the weights, widths and centers of the hidden units) simultaneously. The algorithm uses the robustness of the GBMO to search the parameter space and the efficiency of the gradient-based method. For this purpose, two real-world MIML tasks and a Corel dataset were utilized within a two-step experimental design. In the first step, the GBMO algorithm was used to determine the widths and centers of the network nodes. In the second step, for each molecule with fixed inputs and number of hidden nodes, the parameters were optimized by a structured nonlinear parameter optimization method (SNPOM). The findings demonstrate the superior performance of the hybrid algorithmic method. Additionally, the results for training and testing the dataset reveal that the hybrid method enhances RBF network learning more efficiently than other conventional RBF approaches and obtains better modeling accuracy than several other algorithms.
Efficacy of utilizing a hybrid algorithmic method in enhancing the functionality of multi-instance multi-label radial basis function neural networks
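The forward pass of the RBF network whose centers, widths and weights the hybrid GBMO/SNPOM procedure estimates. A Gaussian basis is assumed here, as is common for RBF networks; the paper's exact basis function is not stated in the abstract.

```python
import numpy as np

def rbf_forward(X, centers, widths, weights):
    """y = sum_j w_j * exp(-||x - c_j||^2 / (2 s_j^2)).
    X: (n, d) inputs; centers: (h, d); widths: (h,); weights: (h, outputs)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * widths ** 2))   # hidden activations (n x h)
    return H @ weights                      # network outputs
```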
S1568494615003245
This paper investigates a single-period production-inventory model in a linguistically ‘imprecise’ and statistically ‘uncertain’ environment, where fuzziness and randomness appear simultaneously. In order to capture the stochastic variability of demand uncertainty, we consider demand as a random variable. We also model the hazards of imperfect quality production due to the production process shifting from the “in-control” state to the “out-of-control” state. The process shifting time and the fraction of imperfect quality items are characterized as fuzzy random variables (FRVs). The model restricts the budget and allowable shortages. To mitigate the effect of uncertain demand, fluctuating cost parameters, etc., the imposed constraints are considered in the fuzzy sense. To model the above scenarios, we formulate a fuzzy mathematical programming problem. By employing the fuzzy expectation, signed distance, and possibility/necessity measures, the fuzzy model is transformed into an equivalent deterministic non-linear programming problem. A fuzzy simulation based particle swarm optimization (PSO) algorithm is also applied to the model. The effectiveness of the model and solution methodologies is demonstrated with the help of numerical illustrations.
A fuzzy random EPQ model for imperfect quality items with possibility and necessity constraints
S1568494615003257
In this paper, we propose two computationally efficient ‘range-free’ 3D node localization schemes using hybrid particle swarm optimization (HPSO) and biogeography based optimization (BBO). It is considered that nodes are deployed with constraints over three layer boundaries, in an anisotropic environment. The anchor nodes are randomly distributed over the top layer only, and target nodes are distributed over the middle and bottom layers. The radio irregularity factor, i.e., the anisotropic property of the propagation media, and the heterogeneous properties of the devices are considered. To overcome the non-linearity between received signal strength (RSS) and distance, edge weights between each target node and its neighboring anchor nodes are used to compute the location of the target node. These edge weights are modeled using a fuzzy logic system (FLS) to reduce the computational complexity. The edge weights are further optimized by HPSO and BBO separately to minimize the location error. Both proposed applications of the two algorithms are compared with the range-free algorithms proposed earlier in the literature, i.e., the simple centroid method and the weighted centroid method. The results of our proposed applications of the two algorithms are better than those of the centroid and weighted centroid methods in terms of error and scalability.
Range-free 3D node localization in anisotropic wireless sensor networks
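A minimal weighted-centroid sketch of the kind the paper compares against. The dBm-to-linear weighting and the anchor/RSS values are illustrative assumptions standing in for the fuzzy, HPSO/BBO-optimized edge weights.

```python
import numpy as np

def weighted_centroid(anchors, rss):
    """Estimate a target position as the RSS-weighted mean of its
    neighbouring anchors; stronger signal -> larger weight."""
    w = 10 ** (np.asarray(rss) / 10.0)      # dBm to linear power
    w = w / w.sum()
    return (w[:, None] * anchors).sum(axis=0)

anchors = np.array([[0, 0, 30], [10, 0, 30], [0, 10, 30], [10, 10, 30]], float)
rss = [-52.0, -60.0, -58.0, -70.0]          # received signal strengths (dBm)
print(weighted_centroid(anchors, rss))
```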
S1568494615003269
Spectral graph clustering has become very popular in recent years, due to the simplicity of its implementation as well as the performance of the method, in comparison with other popular ones. In this article, we propose a novel spectral graph clustering method that makes use of genetic algorithms, in order to optimise the structure of a graph and achieve better clustering results. We focus on evolving the constructed similarity graphs, by applying a fitness function (also called objective function), based on some of the most commonly used clustering criteria. The construction of the initial population is based on nearest neighbour graphs, some variants of them and some arbitrary ones, represented as matrices. Each one of these matrices is transformed properly in order to form a chromosome and be used in the evolutionary process. The algorithm's performance greatly depends on the way that the initial population is created, as suggested by the various techniques that have been examined for the purposes of this article. The most important advantage of the proposed method is its generic nature, as it can be applied to several problems, that can be modeled as graphs, including clustering, dimensionality reduction and classification problems. Experiments have been conducted on a traditional dances dataset and on other various multidimensional datasets, using evaluation methods based on both internal and external clustering criteria, in order to examine the performance of the proposed algorithm, providing promising results.
Spectral clustering and semi-supervised learning using evolving similarity graphs
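A compact sketch of the spectral clustering core that the genetic algorithm wraps: given a symmetric similarity matrix (here, conceptually one evolved chromosome), embed nodes with the smallest eigenvectors of the normalized Laplacian and cluster with k-means. The row normalization follows the common Ng-Jordan-Weiss style; the GA fitness machinery is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    """Cluster n nodes from an (n, n) symmetric similarity matrix W."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                                    # k smallest eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
```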
S1568494615003270
In this paper hybrid flow shop scheduling problem with two agents is studied and its feasibility model is considered. A two-phase neighborhood search (TNS) algorithm is proposed to minimize objectives of two agents simultaneously under the given upper bounds. TNS is constructed through the combination of multiple variable neighborhood mechanisms and a new perturbation strategy for new current solution. A new replacement principle is also applied to decide if the current solution can be updated. TNS is tested on a number of instances and compared with the existing methods. The computational results show the promising advantage of TNS on the considered problem.
Two-phase neighborhood search algorithm for two-agent hybrid flow shop scheduling problem
S1568494615003282
A physical habitat simulation is a useful tool for assessing the impact of river development or restoration on a river ecosystem. Conventional methods of physical habitat simulation use habitat suitability index models, and their success depends largely on how well the model reflects monitoring data. One of the preferred habitat suitability index models is the habitat suitability curve, which is normally constructed from monitoring data. However, these curves can easily be affected by the subjective opinion of the expert. This study introduces the ANFIS method for predicting the composite suitability index for use in physical habitat simulations. The ANFIS method is a hybrid artificial intelligence technique that combines artificial neural networks and fuzzy logic. The method is known to be a powerful approach, especially for developing nonlinear relationships between input and output datasets. In this study, the ANFIS method was used to predict the composite suitability index for the physical habitat simulation of a 2.5 km long reach of the Dal River in Korea. Zacco platypus was chosen as the target fish of the study area. A 2D hydraulic simulation was performed, and the hydraulic model was validated by comparing the measured and predicted water surface elevations. The distribution of the composite suitability index predicted by the ANFIS model was compared with that obtained using the habitat suitability curves. The comparisons reveal that the two distributions are similar for various flows. In addition, the distribution of the composite suitability index of the Dal River was computed by the ANFIS method using monitoring data for other watersheds, namely the Hongcheon River, the Geum River, and the Chogang Stream. The monitoring data for the Chogang Stream, whose correlation pattern was the most similar to that of the Dal River, yielded a distribution of the composite suitability index very close to that obtained using data for the Dal River. This is also supported by the mean absolute percentage error for the difference in the weighted usable areas.
Prediction of composite suitability index for physical habitat simulations using the ANFIS method
S1568494615003300
Various sensory and control signals in a Heating Ventilation and Air Conditioning (HVAC) system are closely interrelated, which gives rise to severe redundancies between the original signals. These redundancies may cripple the generalization capability of an automatic fault detection and diagnosis (AFDD) algorithm. This paper proposes an unsupervised feature selection approach and its application to AFDD in an HVAC system. Using Ensemble Rapid Centroid Estimation (ERCE), the important features are automatically selected from the original measurements based on the relative entropy between the low- and high-frequency features. The material used is the experimental HVAC fault data from the ASHRAE-1312-RP datasets, containing a total of 49 days of various fault types and severities. The features selected using ERCE (median normalized mutual information (NMI) = 0.019) achieved the least redundancy compared with those selected using manual selection (median NMI = 0.0199), Complete Linkage (median NMI = 0.1305), Evidence Accumulation K-means (median NMI = 0.04) and Weighted Evidence Accumulation K-means (median NMI = 0.048). The effectiveness of the feature selection method is further investigated using two well-established time-sequence classification algorithms: (a) the Nonlinear Auto-Regressive Neural Network with eXogenous inputs and distributed time delays (NARX-TDNN); and (b) Hidden Markov Models (HMM). Weighted average sensitivity and specificity of (a) higher than 99% and 96% for NARX-TDNN and (b) higher than 98% and 86% for HMM are observed. The proposed feature selection algorithm could potentially be applied to other model-based systems to improve fault detection performance.
Unsupervised feature selection using swarm intelligence and consensus clustering for automatic fault detection and diagnosis in Heating Ventilation and Air Conditioning systems
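A small sketch of how redundancy figures like the median NMI values above can be computed: take the median pairwise NMI over the selected feature columns after discretization. The binning scheme is an illustrative assumption, not necessarily the paper's.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import normalized_mutual_info_score

def median_pairwise_nmi(X, bins=10):
    """Median NMI over all pairs of (discretized) selected features;
    lower values indicate less redundancy among the features."""
    Xd = np.array([np.digitize(col, np.histogram_bin_edges(col, bins))
                   for col in X.T])
    scores = [normalized_mutual_info_score(Xd[i], Xd[j])
              for i, j in combinations(range(len(Xd)), 2)]
    return float(np.median(scores))
```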
S1568494615003312
Background: Detection and monitoring of respiratory related illness is an important aspect of pulmonary medicine. Acoustic signals extracted from the human body are considered for the accurate detection of respiratory pathology. Objectives: The aim of this study is to develop a prototype telemedicine tool to detect respiratory pathology using computerized respiratory sound analysis. Methods: 120 subjects (40 normal, 40 with continuous lung sounds (20 wheeze and 20 rhonchi), and 40 with discontinuous lung sounds (20 fine crackles and 20 coarse crackles)) were included in this study. The respiratory sounds were segmented into respiratory cycles using a fuzzy inference system, and the S-transform was then applied to these respiratory cycles. From the S-transform matrix, statistical features were extracted. The extracted features were statistically significant with p <0.05. To classify the respiratory pathology, KNN, SVM and ELM classifiers were implemented using the statistical features obtained from the data. Results: The validation showed that the classification rate during training for the ELM classifier with an RBF kernel was high compared with the SVM and KNN classifiers. The time taken for training was also less for ELM than for the SVM and KNN classifiers. The overall mean classification rate for the ELM classifier was 98.52%. Conclusion: The telemedicine software tool was developed using the ELM classifier. The telemedicine tool has performed extraordinarily well in detecting respiratory pathology and is well validated.
A telemedicine tool to detect pulmonary pathology using computerized pulmonary acoustic signal analysis
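A minimal extreme learning machine (ELM) sketch with an RBF hidden layer, the classifier family the study found fastest and most accurate: hidden parameters are set randomly and only the output weights are solved, in closed form. The kernel width and hidden size are illustrative assumptions.

```python
import numpy as np

def elm_train(X, Y, n_hidden=100, gamma=1.0, rng=np.random.default_rng(0)):
    """ELM training: random RBF centers taken from the data, output
    weights by least squares. Assumes n_hidden <= number of samples;
    Y is one-hot class labels."""
    centers = X[rng.choice(len(X), n_hidden, replace=False)]
    H = np.exp(-gamma * ((X[:, None] - centers[None]) ** 2).sum(axis=2))
    beta = np.linalg.pinv(H) @ Y            # Moore-Penrose solution
    return centers, beta

def elm_predict(X, centers, beta, gamma=1.0):
    H = np.exp(-gamma * ((X[:, None] - centers[None]) ** 2).sum(axis=2))
    return H @ beta                          # class scores (argmax for label)
```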
S1568494615003324
This paper deals with the development of effective techniques to automatically obtain the optimum management of petroleum fields aiming to increase the oil production during a given concession period of exploration. The optimization formulations of such a problem turn out to be highly multimodal, and may involve constraints. In this paper, we develop a robust particle swarm algorithm coupled with a novel adaptive constraint-handling technique to search for the global optimum of these formulations. However, this is a population-based method, which therefore requires a high number of evaluations of an objective function. Since the performance evaluation of a given management scheme requires a computationally expensive high-fidelity simulation, it is not practicable to use it directly to guide the search. In order to overcome this drawback, a Kriging surrogate model is used, which is trained offline via evaluations of a High-Fidelity simulator on a number of sample points. The optimizer then seeks the optimum of the surrogate model.
Particle swarm algorithm with adaptive constraint handling and integrated surrogate model for the management of petroleum fields
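A small sketch of the offline-trained surrogate idea: fit a Kriging (Gaussian process) model on a handful of expensive simulator runs and let the swarm query the cheap model instead. The toy objective stands in for the high-fidelity reservoir simulator, and the Matérn kernel is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_simulator(X):
    """Stand-in for the high-fidelity reservoir simulator."""
    return np.sin(X).sum(axis=1)

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, (30, 4))        # sampled management schemes
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, expensive_simulator(X_train))

# The swarm evaluates candidates on the surrogate; Kriging also returns
# an uncertainty estimate alongside the prediction.
mean, std = gp.predict(rng.uniform(0, 1, (5, 4)), return_std=True)
```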
S1568494615003336
In this paper, we apply genetic algorithms to the field of electoral studies. Forecasting election results is one of the most exciting and demanding tasks in the area of market research, especially due to the fact that decisions have to be made within seconds on live television. We show that the proposed method outperforms currently applied approaches and thereby provides an argument to tighten the intersection between computer science and social science, especially political science, further. We scrutinize the performance of our algorithm's runtime behavior to evaluate its applicability in the field. Numerical results with real data from a local election in the Austrian province of Styria from 2010 substantiate the applicability of the proposed approach.
Evolving accuracy: A genetic algorithm to improve election night forecasts
S1568494615003415
Community structure is one of the most important properties in complex networks, and the problem of community detection in networks has been investigated extensively in recent years. In this paper, a memetic algorithm (MA) based on a genetic algorithm with two different local search strategies is proposed to maximize the modularity density, and a more general version of the objective function is used with a tunable parameter λ which can resolve the resolution limit. One local search strategy is simulated annealing (SA), and the other is tightness greedy optimization (TGO). SA is employed to find individuals with higher modularity density, which helps to enhance the convergence speed of the MA and avoid being trapped in local optima. TGO adopts a local tightness function which makes full use of local structural information to generate neighbor partitions, which adds very little computational cost and benefits the diversity of the MA population. Experiments on computer-generated networks, LFR benchmark networks, and real-world networks show that, compared with several state-of-the-art methods, our algorithm (named MA-SAT) is very efficient and competitive.
Memetic algorithm with simulated annealing strategy and tightness greedy optimization for community detection in networks
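A sketch of the tunable modularity density objective as it is usually written (following Li et al.), where λ = 0.5 recovers the classical definition; the exact form used in the paper may differ in detail.

```python
import numpy as np

def modularity_density(A, communities, lam=0.5):
    """D_lambda = sum_i (2*lam*L_in(i) - 2*(1-lam)*L_out(i)) / |V_i|
    for an (n, n) adjacency matrix A and a list of node-index lists."""
    D = 0.0
    mask = np.zeros(len(A), dtype=bool)
    for nodes in communities:
        idx = np.asarray(nodes)
        mask[:] = False
        mask[idx] = True
        L_in = A[np.ix_(idx, idx)].sum()     # = 2 * number of internal edges
        L_out = A[idx][:, ~mask].sum()       # edges leaving the community
        D += (2 * lam * L_in - 2 * (1 - lam) * L_out) / len(idx)
    return D
```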
S1568494615003427
Owing to the uncertain values of objects in high-dimensional microarray gene expression cancer databases, finding the available subtypes of cancers is a challenging task. Researchers have devised mathematically assisted clustering techniques for clustering the gene expression profiles of cancer subtypes, but these techniques have failed to provide proper results with low error. Hence, it is essential to find efficient computational clustering techniques for clustering high-dimensional gene expression cancer databases for the accurate diagnosis of cancer subtypes. This paper presents robust clustering techniques to identify the similarity between the uncertain objects of a high-dimensional cancer database. To obtain robust clustering, this paper incorporates the membership functions of both fuzzy c-means and possibilistic c-means. In addition, this paper presents a prototype initialization algorithm to avoid random initialization of the initial prototypes. Benchmark datasets were used to show the effectiveness of the proposed methods. The proposed methods were successfully applied to microarray high-dimensional gene expression cancer databases to separate the available subtypes of cancer regions. The clustering accuracies of the proposed and existing clustering methods indicate that the proposed methods are superior to the existing methods.
Robust fuzzy clustering algorithms in analyzing high-dimensional cancer databases
S1568494615003440
Reservoir flood control operation (RFCO) is a complex multi-objective optimization problem (MOP) with interdependent decision variables. Traditionally, RFCO is modeled as a single-objective optimization problem by using a certain scalarization method. Little work has been done on solving multi-objective RFCO (MO-RFCO) problems. In this paper, a hybrid multi-objective optimization approach named MO-PSO–EDA, which combines the particle swarm optimization (PSO) algorithm and the estimation of distribution algorithm (EDA), is developed for solving the MO-RFCO problem. MO-PSO–EDA divides the particle population into several sub-populations and builds a probability model for each of them. Based on the probability model, each sub-population reproduces new offspring by using PSO-based and EDA-based methods. In the PSO-based method, a novel global best position selection method is designed. With the help of the EDA-based reproduction, the algorithm can learn linkages between decision variables and hence has a good capability of solving complex multi-objective optimization problems such as the MO-RFCO problem. Experimental studies on six benchmark problems and two typical multi-objective flood control operation problems of the Ankang reservoir indicate that the proposed MO-PSO–EDA performs as well as or better than the other three competitive multi-objective optimization algorithms. MO-PSO–EDA is suitable for solving MO-RFCO problems.
A hybrid multi-objective PSO–EDA algorithm for reservoir flood control operation
S1568494615003452
Despite the significant number of benchmark problems for evolutionary multi-objective optimisation algorithms, there are few in the field of robust multi-objective optimisation. This paper investigates the characteristics of the existing robust multi-objective test problems and identifies the current gaps in the literature. It is observed that the majority of the current test problems suffer from simplicity, so five hindrances are introduced to resolve this issue: bias towards non-robust regions, deceptive global non-robust fronts, multiple non-robust fronts (multi-modal search space), non-improving (flat) search spaces, and different shapes for both robust and non-robust Pareto optimal fronts. A set of 12 test functions are proposed by the combination of hindrances as challenging test beds for robust multi-objective algorithms. The paper also considers the comparison of five robust multi-objective algorithms on the proposed test problems. The results show that the proposed test functions are able to provide very challenging test beds for effectively comparing robust multi-objective optimisation algorithms. Note that the source codes of the proposed test functions are publicly available at www.alimirjalili.com/RO.html.
Hindrances for robust multi-objective test problems
S1568494615003476
Classification on medical data raises several problems such as class imbalance, the double meaning of missing data, volumetry, and the need for highly interpretable results. In this paper a new algorithm is proposed: MOCA-I (Multi-Objective Classification Algorithm for Imbalanced data), a multi-objective local search algorithm that is conceived to deal with all of these issues together. It is based on a new formulation as a Pittsburgh multi-objective partial classification rule mining problem, which is described in the first part of this paper. An existing dominance-based multi-objective local search (DMLS) is modified to deal with this formulation. After experimentally tuning the parameters of MOCA-I and determining which version of the DMLS algorithm is the most effective, the obtained MOCA-I version is compared to several state-of-the-art classification algorithms. This comparison is carried out on 10 small and medium-sized data sets from the literature and 2 real data sets; MOCA-I obtains the best results on the 10 data sets and is statistically better than the other approaches on the real data sets.
Conception of a dominance-based multi-objective local search in the context of classification rule mining in large and imbalanced data sets
S1568494615003488
The fuzzy C-means (FCM) algorithm has gained significant importance in medical image segmentation due to its unsupervised form of learning and its greater tolerance to variations and noise compared with other methods. In this paper, we propose a conditional spatial fuzzy C-means (csFCM) clustering algorithm to improve the robustness of the conventional FCM algorithm. This is achieved through the incorporation into the membership functions of conditioning effects imposed by an auxiliary (conditional) variable corresponding to each pixel, which describes the level of involvement of the pixel in the constructed clusters, together with spatial information. The problem of sensitivity to noise and intensity inhomogeneity in magnetic resonance imaging (MRI) data is effectively reduced by incorporating local and global spatial information into a weighted membership function. The experimental results on four volumes of simulated and one volume of real-patient MRI brain images, each consisting of 51 images, show that the csFCM algorithm has superior performance, in terms of qualitative and quantitative measures such as cluster validity functions, segmentation accuracy, tissue segmentation accuracy and the receiver operating characteristic (ROC) curve, compared with the k-means, FCM and some other recently proposed FCM-based algorithms.
Conditional spatial fuzzy C-means clustering algorithm for segmentation of MRI images
S156849461500349X
Bayesian neural networks are useful tools for estimating the functional structure of nonlinear systems. However, they suffer from several complicated problems, such as controlling the model complexity, the training time, efficient parameter estimation, random-walk behavior, and becoming stuck in local optima in high-dimensional parameter cases. In this paper, to alleviate these problems, a novel hybrid Bayesian learning procedure is proposed. This approach is based on full Bayesian learning, and integrates Markov chain Monte Carlo procedures with genetic algorithms and fuzzy membership functions. In the application sections, to examine the performance of the proposed approach, nonlinear time series and regression analyses are handled separately, and the approach is compared with traditional training techniques in terms of estimation and prediction ability.
A novel hybrid learning algorithm for full Bayesian approach of artificial neural networks
S1568494615003506
The objective of this paper is to analyze the performance of singular value decomposition, expectation maximization, and Elman neural networks in the optimization of code converter outputs for the classification of epilepsy risk levels from EEG (electroencephalogram) signals. Signal parameters such as the total number of positive and negative peaks, spikes and sharp waves, their duration, etc., were extracted using morphological operators and wavelet transforms. Code converters were considered as a level-one classifier. The code converters were found to have a performance index and quality value of 33.26 and 12.74, respectively, which is low. Consequently, for the EEG signals of 20 patients, the post-classifiers were applied across 3 epochs of 16 channels. After a comparative study of different architectures, SVD was found to be the best post-classifier, as it attained a performance index of 89.48 and a quality value of 20.62. Elman neural networks also exhibit better performance metrics than SVD in the morphological-operator-based feature extraction method.
A real time experimental setup for classification of epilepsy risk levels
S1568494615003518
Inspired by human learning mechanisms, a novel meta-heuristic algorithm named human learning optimization (HLO) is presented in this paper in which the individual learning operator, social learning operator, random exploration learning operator and re-learning operator are developed to generate new solutions and search for the optima by mimicking the human learning process. Then HLO is applied to solve the well-known 5.100 and 10.100 multi-dimensional knapsack problems from the OR-library and the performance of HLO is compared with that of other meta-heuristics collected from the recent literature. The experimental results show that the presented HLO achieves the best performance in comparison with other meta-heuristics, which demonstrates that HLO is a promising optimization tool.
A human learning optimization algorithm and its application to multi-dimensional knapsack problems
S156849461500352X
Time series forecasting has been widely used to determine future prices of stocks, and the analysis and modeling of finance time series is an important task for guiding investors’ decisions and trades. Nonetheless, the prediction of prices by means of a time series is not trivial and it requires a thorough analysis of indexes, variables and other data. In addition, in a dynamic environment such as the stock market, the non-linearity of the time series is a pronounced characteristic, and this immediately affects the efficacy of stock price forecasts. Thus, this paper aims at proposing a methodology that forecasts the maximum and minimum day stock prices of three Brazilian power distribution companies, which are traded in the São Paulo Stock Exchange BM&FBovespa. When compared to the other papers already published in the literature, one of the main contributions and novelty of this paper is the forecast of the range of closing prices of Brazilian power distribution companies’ stocks. As a result of its application, investors may be able to define threshold values for their stock trades. Moreover, such a methodology may be of great interest to home brokers who do not possess ample knowledge to invest in such companies. The proposed methodology is based on the calculation of distinct features to be analysed by means of attribute selection, defining the most relevant attributes to predict the maximum and minimum day stock prices of each company. Then, the actual prediction was carried out by Artificial Neural Networks (ANNs), which had their performances evaluated by means of Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) calculations. The proposed methodology for addressing the problem of prediction of maximum and minimum day stock prices for Brazilian distribution companies is effective. In addition, these results were only possible to be achieved due to the combined use of attribute selection by correlation analysis and ANNs.
Maximum and minimum stock price forecasting of Brazilian power distribution companies based on artificial neural networks
S1568494615003531
Academia and industry have entered the big data era in many computer software and embedded system related fields. The intelligent transportation system problem is one of the important areas in real big data application scenarios. However, managing traffic lights efficiently poses a significant challenge because of the scale of the accumulated dynamic car flow data. In this paper, we present NeverStop, which utilizes genetic algorithms and fuzzy control methods in big data intelligent transportation systems. NeverStop is constructed with sensors to control the traffic lights at an intersection automatically. It utilizes a fuzzy control method and a genetic algorithm to adjust the waiting time for the traffic lights; consequently, the average waiting time can be significantly reduced. A prototype system has been implemented on an EBox-II terminal device, running the fuzzy control and genetic algorithms. Experimental results on the prototype system demonstrate that NeverStop can efficiently reduce the average waiting time for vehicles.
Soft computing in big data intelligent transportation systems
S1568494615003543
This study investigates the coupling effects of objective-reduction and preference-ordering schemes on the search efficiency in the evolutionary process of multi-objective optimization. The difficulty in solving a many-objective problem increases with the number of conflicting objectives. Degenerated objective space can enhance the multi-directional search toward the multi-dimensional Pareto-optimal front by eliminating redundant objectives, but it is difficult to capture the true Pareto-relation among objectives in the non-optimal solution domain. Successive linear objective-reduction for the dimensionality-reduction and dynamic goal programming for preference-ordering are developed individually and combined with a multi-objective genetic algorithm in order to reflect the aspiration levels for the essential objectives adaptively during optimization. The performance of the proposed framework is demonstrated in redundant and non-redundant benchmark test problems. The preference-ordering approach induces the non-dominated solutions near the front despite enduring a small loss in diversity of the solutions. The induced solutions facilitate a degeneration of the Pareto-optimal front using successive linear objective-reduction, which updates the set of essential objectives by excluding non-conflicting objectives from the set of total objectives based on a principal component analysis. Salient issues related to real-world problems are discussed based on the results of an oil-field application.
Development of Pareto-based evolutionary model integrated with dynamic goal programming and successive linear objective reduction
S1568494615003555
Takagi–Sugeno–Kang (TSK) fuzzy systems have been widely applied for solving function approximation and regression-centric problems. Existing dynamic TSK models proposed in the literature can be broadly classified into two classes. Class I TSK models are essentially fuzzy systems that are limited to time-invariant environments. Class II TSK models are generally evolving systems that can learn in time-variant environments. This paper attempts to address the issues of achieving compact, up-to-date fuzzy rule bases and interpretable knowledge bases in TSK models. It proposes a novel rule pruning method which is simple, computationally efficient and biologically plausible. This rule pruning algorithm applies a gradual forgetting approach and adopts the Hebbian learning mechanism behind the long-term potentiation phenomenon in the brain. It also proposes a merging approach which is used to improve the interpretability of the knowledge bases. This approach can prevent derived fuzzy sets from expanding too many times, to protect their semantic meanings. These two approaches are incorporated into a generic self-evolving Takagi–Sugeno–Kang fuzzy framework (GSETSK) which adopts an online data-driven incremental-learning-based approach. Extensive experiments were conducted to evaluate the performance of the proposed GSETSK against other established evolving TSK systems. GSETSK has also been tested on real-world data using the highway traffic flow density and Dow Jones index time series. The results are encouraging. GSETSK demonstrates its fast learning ability in time-variant environments. In addition, GSETSK derives an up-to-date and better interpretable fuzzy rule base while maintaining a high level of modeling accuracy at the same time.
GSETSK: a generic self-evolving TSK fuzzy neural network with a novel Hebbian-based rule reduction approach
S1568494615003567
In the digital world, secure data communication has an important role in mass media and Internet technology. With the increase in modern malicious technologies, confidential data are exposed at a greater risk during data communication. For secured communication, recent technologies and the Internet have introduced steganography, a new way to hide data. Steganography is the growing practice of concealing data in multimedia files for secure data transfer. Nowadays, videos are more commonly chosen as cover media than other multimedia files because of the moving sequence of images and audio files. Despite its popularity, video steganography faces a significant challenge, which is a lack of a fast retrieval system of the hidden data. This study proposes a novel video steganography technique in which an enhanced hidden Markov model (EHMM) is employed to improve the speed of retrieving hidden data. EHMM mathematical formulations are used to enhance the speed of embedding and extracting secret data. The data embedding and retrieving operations were performed using the conditional states and the state transition dynamics between the video frames. The proposed EHMM is extensively evaluated using three benchmark functions, and experimental evaluations are conducted to test the speed of data retrieval using differently sized cover-videos. Results indicate that the proposed EHMM yields better results by reducing the data hiding time by 3–50%, improving the data retrieval rate by 22–77% with a minimum computational cost of 20–91%, and improving the security by 4–77% compared with state-of-the-art methods.
Fast retrieval of hidden data using enhanced hidden Markov model in video steganography
S1568494615003579
In recent decades, many nature-inspired optimization algorithms have been developed and presented in the literature for solving optimization problems. Generally, these optimization algorithms can be grouped into two categories: evolutionary algorithms and swarm intelligence methods. Evolutionary methods try to improve the candidate solutions (chromosomes) using evolutionary operators such as crossover and mutation. The methods in the swarm intelligence category use differential position update rules for obtaining new candidate solutions. The popularity of swarm intelligence methods has grown since the 1990s due to their simplicity, easy adaptation to the problem and effectiveness in solving nonlinear optimization problems. One of the popular members of the swarm intelligence family is the artificial bee colony (ABC) algorithm, which simulates the intelligent behaviors of real honey bees and uses a differential position update rule. When food sources, which represent possible solutions to the optimization problem, gather at similar points within the search space, the differential position update rule can cause stagnation in the algorithm during the search process. In this paper, a distribution-based solution update rule is proposed for the basic ABC algorithm, instead of the differential update rule, to overcome the stagnation behavior of the algorithm. The distribution-based update rule uses the mean and standard deviation of two selected food sources to obtain a new candidate solution without using any differential-based processes. This approach therefore prevents stagnation in the population. The proposed approach is tested on 18 benchmark functions with different characteristics and compared with the basic variants of the ABC algorithm and some nature-inspired methods. The experimental results show that the proposed approach produces acceptable and comparable solutions for the numeric problems.
Artificial bee colony algorithm with distribution-based update rule
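A minimal sketch of the distribution-based candidate generation described above, assuming a Gaussian sampling distribution; the abstract only states that the mean and standard deviation of two food sources are used.

```python
import numpy as np

def distribution_based_candidate(pop, i, rng):
    """Instead of the differential rule v = x_i + phi * (x_i - x_k), sample
    the new candidate from a normal distribution whose per-dimension mean
    and standard deviation come from two food sources. The Gaussian choice
    is our assumption for this sketch."""
    k = rng.choice([j for j in range(len(pop)) if j != i])  # random partner
    mean = (pop[i] + pop[k]) / 2.0                          # per-dimension mean
    std = np.abs(pop[i] - pop[k]) / 2.0                     # per-dimension std
    return rng.normal(mean, std)                            # no differential term

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 10))                     # 20 food sources, 10-D
print(distribution_based_candidate(pop, i=0, rng=rng))
```

Note that when the two sources coincide in some dimension, the standard deviation there is zero and the candidate inherits that coordinate exactly, which is consistent with the rule degenerating gracefully as the population converges.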
S1568494615003580
The recent developments in image quality, storage, and data transmission capabilities have increased the importance of texture analysis, which plays an important role in computer vision and image processing. Local binary pattern (LBP) is an effective statistical texture descriptor with successful applications in texture classification. In this paper, two novel descriptors built on LBP were proposed to search for different patterns in images. One of them, nLBP_d, is based on the relations between sequential neighbors at a specified distance, and the other, dLBP_α, is based on determining the neighbors in the same orientation through the central pixel (an illustrative sketch follows this entry). These descriptors were tested on the Brodatz-1, Brodatz-2, Butterfly and Kylberg datasets to show their applicability, and the proposed methods were also compared with classical LBP. The average accuracies obtained by an ANN with 10-fold cross validation, namely 99.26% (LBP^u2 and nLBP_d), 94.44% (dLBP_α), 95.71% (nLBP_d^u2) and 99.64% (nLBP_d) for the Brodatz-1, Brodatz-2, Butterfly and Kylberg datasets, respectively, show that the proposed methods achieve significantly higher accuracies.
Two novel local binary pattern descriptors for texture analysis
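For concreteness, here is classical LBP together with one plausible reading of nLBP_d, in which sequential neighbors at distance d along the ring are compared with each other rather than with the center. The exact definitions in the paper may differ from this sketch.

```python
import numpy as np

OFFS = [(-1,-1),(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1),(0,-1)]  # ring, clockwise

def lbp(img):
    """Classical 3x3 LBP: each of the 8 neighbors is thresholded against
    the central pixel; the comparison bits form a code in 0..255."""
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    code = np.zeros(center.shape, dtype=int)
    for bit, (dy, dx) in enumerate(OFFS):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        code += (neigh >= center).astype(int) << bit
    return code

def nlbp_d(img, d=1):
    """One plausible reading of nLBP_d: each neighbor is compared with the
    neighbor d steps further along the ring instead of with the center."""
    h, w = img.shape
    ring = [img[1+dy:h-1+dy, 1+dx:w-1+dx] for dy, dx in OFFS]
    code = np.zeros((h-2, w-2), dtype=int)
    for i in range(8):
        code += (ring[i] >= ring[(i+d) % 8]).astype(int) << i
    return code

img = np.random.default_rng(1).integers(0, 256, (64, 64))
hist = np.bincount(nlbp_d(img, d=2).ravel(), minlength=256)  # texture feature
print(hist[:8])
```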
S1568494615003592
Differential evolution (DE) is a simple and effective approach for solving numerical optimization problems. However, the performance of DE is sensitive to the choice of mutation and crossover strategies and their associated control parameters; therefore, to achieve optimal performance, a time-consuming parameter tuning process is required. In DE, the use of different mutation and crossover strategies with different parameter settings can be appropriate during different stages of the evolution, and various adaptation, self-adaptation, and ensemble techniques have accordingly been proposed. Recently, a classification-assisted DE algorithm was proposed to overcome trial-and-error parameter tuning and efficiently solve computationally expensive problems. In this paper, we present an evolving surrogate model-based differential evolution (ESMDE) method, wherein a surrogate model constructed from the population members of the current generation is used to assist the DE algorithm in generating competitive offspring using the appropriate parameter setting during different stages of the evolution (this pre-selection idea is sketched after this entry). As the population evolves over generations, the surrogate model also evolves over the iterations and better represents the basin searched by the DE algorithm. The proposed method employs a simple Kriging model to construct the surrogate. The performance of ESMDE is evaluated on a set of 17 bound-constrained problems and compared to state-of-the-art self-adaptive DE algorithms as well as to the classification-assisted, regression-assisted, and ranking-assisted DE algorithms.
An evolving surrogate model-based differential evolution algorithm
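A minimal sketch of the surrogate-assisted idea: fit a Kriging model (Gaussian process regression) to the current population and let it pre-select among trial vectors generated with different parameter settings, so the single true evaluation is spent on the most promising one. The mutation scheme, objective, and settings below are illustrative, not those of ESMDE.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def sphere(x):                       # stand-in for an expensive objective
    return float(np.sum(x**2))

rng = np.random.default_rng(3)
pop = rng.uniform(-5, 5, (30, 5))
fit = np.array([sphere(x) for x in pop])

surrogate = GaussianProcessRegressor().fit(pop, fit)  # Kriging ~ GP regression

trials = []
for F in (0.4, 0.6, 0.9):            # candidate scale factors
    a, b, c = pop[rng.choice(30, 3, replace=False)]
    trials.append(np.clip(a + F * (b - c), -5, 5))    # DE/rand/1 mutation

# Surrogate picks the trial to evaluate for real
best = min(trials, key=lambda t: surrogate.predict(t[None, :])[0])
print("true f of chosen trial:", sphere(best))
```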
S1568494615003609
We investigate a parallelized divide-and-conquer approach based on a self-organizing map (SOM) to solve the Euclidean traveling salesman problem (TSP). Our approach consists of dividing the cities into municipalities, evolving the most appropriate solution from each municipality so as to find the best overall solution, and finally joining neighboring municipalities with a blend operator to identify the final solution. We evaluate the performance of the parallelized approach on standard TSP test problems (TSPLIB) and show that it gives better answers, in terms of both quality and time, than the sequential evolutionary SOM (a minimal SOM-for-TSP sketch follows this entry).
Parallelized neural network system for solving Euclidean traveling salesman problem
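As a reference point, here is a compact sequential (single-municipality) elastic-ring SOM for the Euclidean TSP: a ring of neurons is pulled toward the cities, and reading the cities off along the ring yields a tour. All sizes and decay schedules are illustrative, not the paper's.

```python
import numpy as np

def som_tsp(cities, n_epochs=2000, seed=0):
    """Elastic-ring SOM heuristic for the TSP."""
    rng = np.random.default_rng(seed)
    n = len(cities) * 3                              # oversized neuron ring
    ring = rng.uniform(cities.min(), cities.max(), (n, 2))
    lr, radius = 0.8, n / 2
    for _ in range(n_epochs):
        c = cities[rng.integers(len(cities))]        # present a random city
        win = np.argmin(np.linalg.norm(ring - c, axis=1))
        gap = np.abs(np.arange(n) - win)
        dist = np.minimum(gap, n - gap)              # circular index distance
        h = np.exp(-(dist**2) / (2 * max(radius, 1e-9)**2))
        ring += lr * h[:, None] * (c - ring)         # pull winner + neighbors
        lr *= 0.999; radius *= 0.999                 # decay schedules
    winners = [np.argmin(np.linalg.norm(ring - c, axis=1)) for c in cities]
    return np.argsort(winners)                       # city visiting order

cities = np.random.default_rng(7).uniform(0, 100, (30, 2))
tour = som_tsp(cities)
length = np.sum(np.linalg.norm(cities[tour] - cities[np.roll(tour, -1)], axis=1))
print("tour length:", round(float(length), 1))
```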
S1568494615003610
This article proposes a new genetic algorithm (GA) methodology to obtain parsimonious support vector regression (SVR) models capable of predicting highly precise setpoints in a continuous annealing furnace (GA-PARSIMONY). The proposal combines feature selection, model tuning, and parsimonious model selection in order to achieve robust SVR models. To this end, a novel GA selection procedure is introduced based on separate cost and complexity evaluations: the best individuals are initially sorted by an error fitness function, and afterwards, models with similar costs are rearranged according to a model complexity measurement so as to foster models of lesser complexity (this double sort is sketched after this entry). The user-supplied penalty parameter, utilized to balance cost and complexity in other fitness functions, is thereby rendered unnecessary. GA-PARSIMONY performed similarly to classical GA on twenty benchmark datasets from public repositories, but used fewer features in a striking 65% of the models. Moreover, the proposal also proved useful in a real industrial process for predicting three temperature setpoints for a continuous annealing furnace. The results demonstrated that GA-PARSIMONY was able to generate more robust SVR models with fewer input features, as compared to classical GA.
GA-PARSIMONY: A GA-SVR approach with feature selection and parameter optimization to obtain parsimonious solutions for predicting temperature settings in a continuous annealing furnace
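The double sort described above can be sketched as follows; the tolerance used to decide that two costs are "similar" is an assumption, since the abstract does not quantify it.

```python
import numpy as np

def parsimony_rank(errors, complexities, tol=0.001):
    """Primary sort by error; then, within groups whose errors differ by
    less than `tol`, promote the simpler model."""
    ranked = list(np.argsort(errors))                 # primary: error
    i = 0
    while i < len(ranked) - 1:
        j = i
        while j + 1 < len(ranked) and errors[ranked[j+1]] - errors[ranked[i]] < tol:
            j += 1                                    # extend similar-cost group
        ranked[i:j+1] = sorted(ranked[i:j+1], key=lambda k: complexities[k])
        i = j + 1
    return ranked                                     # low error and simple first

errors = np.array([0.100, 0.1004, 0.25, 0.1007, 0.30])
complexities = np.array([40, 12, 5, 25, 3])           # e.g. number of features
print(parsimony_rank(errors, complexities))           # -> [1, 3, 0, 2, 4]
```

The effect is that near-ties on error are broken by parsimony, which is exactly what removes the need for a user-supplied penalty weight.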
S1568494615003622
A numerical method based on a stochastic global optimization paradigm is presented and applied to the calculation of Nash–Cournot equilibria in electricity markets. The proposed method formulates and solves GNEPs (generalized Nash equilibrium problems) by means of the Fuzzy Adaptive Simulated Annealing (Fuzzy ASA) algorithm. Concepts from cooperative game theory, such as the bilateral Shapley value, are used as well. This approach makes it possible to study the feasibility of coalition formation processes in several scenarios, and a case study based on the IEEE 30-bus system is used to present and discuss the proposal in detail. The main advantage of the new technique is that it simplifies and extends current methods, as explained throughout the text.
Coalition formation feasibility and Nash–Cournot equilibrium problems in electricity markets: A Fuzzy ASA approach
S1568494615003646
This research presents an innovative method for cancer identification and type classification using microarray data. The method is based on gene selection with shuffling in association with optimization-based unconventional data clustering. A new hybrid optimization algorithm, COA-GA, is developed by synergizing the recently invented Cuckoo Optimization Algorithm (COA) with a more traditional genetic algorithm (GA) for data clustering, in order to select the most dominant genes using shuffling. For gene classification, Support Vector Machine (SVM) and Multilayer Perceptron (MLP) artificial neural networks are used. The literature suggests that data clustering using traditional approaches such as K-means, C-means, and hierarchical clustering has no impact on classification accuracy, which is also confirmed in this investigation. However, results show that optimization-based clustering with shuffling increases classification accuracy significantly. The proposed algorithm (COA-GA) not only outperforms COA, GA, and Particle Swarm Optimization (PSO) in classification performance but also reaches a better global minimum within only a few iterations. Higher accuracy was achieved with the SVM classifier than with the MLP on all datasets used.
Cancer classification using a novel gene selection approach by means of shuffling based on data clustering with optimization
S1568494615003658
In this paper, we investigate how adaptive operator selection techniques can efficiently manage the balance between exploration and exploitation in an evolutionary algorithm when solving combinatorial optimization problems. We introduce new high-level reactive search strategies based on a generic algorithm controller that schedules the basic variation operators of the evolutionary algorithm according to the observed state of the search (a probability-matching sketch of this idea follows this entry). Our experiments on SAT instances show that reactive search strategies improve the performance of the solving algorithm.
An experimental study of adaptive control for evolutionary algorithms
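A standard adaptive operator selection scheme, probability matching, illustrates the kind of controller the abstract describes: operators that recently produced fitness improvements are applied more often. This is a generic textbook example, not the paper's specific strategy.

```python
import random

class ProbabilityMatching:
    """Selects among variation operators in proportion to their
    exponentially smoothed reward estimates, with a probability floor
    that keeps every operator occasionally explored."""
    def __init__(self, n_ops, p_min=0.05, decay=0.8):
        self.q = [1.0] * n_ops                  # running reward estimates
        self.p_min, self.decay, self.n = p_min, decay, n_ops

    def select(self):
        total = sum(self.q)
        probs = [self.p_min + (1 - self.n * self.p_min) * qi / total
                 for qi in self.q]              # floor keeps exploration alive
        return random.choices(range(self.n), weights=probs)[0]

    def update(self, op, reward):               # reward = observed fitness gain
        self.q[op] = self.decay * self.q[op] + (1 - self.decay) * reward

ctrl = ProbabilityMatching(n_ops=3)
for step in range(100):
    op = ctrl.select()
    reward = [0.1, 0.5, 0.2][op] * random.random()   # operator 1 is best here
    ctrl.update(op, reward)
print("learned values:", [round(v, 2) for v in ctrl.q])
```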
S156849461500366X
The genetic algorithm (GA), a branch of evolutionary algorithms, has proved its effectiveness in solving constraint-based complex real-world problems across a variety of domains. The individual phases of GA mimic basic biological processes, and hence the self-adaptability of GA varies in accordance with the adjustable natural processes. In some instances, self-adaptability in GA fails to identify adaptable genes to form a solution set after recombination, which leads to convergence toward infeasible solutions; sometimes such an infeasible solution cannot be converted into a feasible form by any repairing technique. In this perspective, the Gene Suppressor (GS), a bio-inspired process, is proposed as a new phase after recombination in the classical GA life cycle. This phase works on the new individuals generated after recombination to attain self-adaptability, adapting the best genes in the environment to regulate chromosome expression and achieve the desired phenotype expression. Repairing in this phase converts an infeasible solution into a feasible one by suppressing conflicting genes (a simple repair of this flavor is sketched after this entry), and the solution vector is further improved by inducing the best gene expressions in the environment within the set of intended constraints. Multiobjective Multiple Knapsack Problems (MMKP), one of the popular NP-hard combinatorial problems, are used as the test bed for proving the competence of the proposed new GA phase. The standard MMKP benchmark instances obtained from the OR-Library [22] are used for the experiments reported in this paper. The outcomes of the proposed method are compared with existing repairing techniques, and the analyses prove the proficiency of the proposed GS model in terms of better error and convergence rates for all instances.
Gene Suppressor: An added phase toward solving large scale optimization problems in genetic algorithm
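A toy repair operator in the spirit of gene suppression, shown for a knapsack problem with multiple capacity constraints: violated constraints are resolved by dropping the worst selected items, then the freed capacity is re-filled. The value-to-weight heuristic is our choice for this sketch, not the paper's rule.

```python
import numpy as np

def repair(x, values, weights, capacities):
    """Suppress conflicting genes until feasible, then re-induce good ones."""
    x = x.copy()
    ratio = values / weights.sum(axis=0)            # aggregate weight per item
    while np.any(weights @ x > capacities):         # suppression phase
        sel = np.where(x == 1)[0]
        x[sel[np.argmin(ratio[sel])]] = 0           # drop worst selected item
    for i in np.argsort(-ratio):                    # improvement phase
        if x[i] == 0 and np.all(weights @ x + weights[:, i] <= capacities):
            x[i] = 1                                # re-insert items that fit
    return x

rng = np.random.default_rng(5)
values = rng.integers(10, 100, 8).astype(float)
weights = rng.integers(1, 20, (2, 8)).astype(float)  # 2 constraints, 8 items
capacities = weights.sum(axis=1) * 0.5
x = rng.integers(0, 2, 8)                            # possibly infeasible child
x_rep = repair(x, values, weights, capacities)
print("feasible:", bool(np.all(weights @ x_rep <= capacities)))
```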
S1568494615003671
Classification is one of the important tasks in data mining. The probabilistic neural network (PNN) is a well-known and efficient approach for classification. The objective of the work presented in this paper is to build on this approach to develop an effective method for classification problems that can find high-quality solutions (with respect to classification accuracy) at a high convergence speed. To achieve this objective, we propose a method that hybridizes the firefly algorithm with simulated annealing (denoted as SFA), where simulated annealing is applied to control the randomness step inside the firefly algorithm while optimizing the weights of the standard PNN model. We also extend our work by investigating the effectiveness of using Lévy flight within the firefly algorithm (denoted as LFA) to better explore the search space and by integrating SFA with Lévy flight (denoted as LSFA) in order to improve the performance of the PNN. The algorithms were tested on 11 standard benchmark datasets. Experimental results indicate that the LSFA shows better performance than the SFA and LFA. Moreover, when compared with other algorithms in the literature, the LSFA is able to obtain better results in terms of classification accuracy.
Hybridizing firefly algorithms with a probabilistic neural network for solving classification problems
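Lévy flight steps, as used in the LFA/LSFA variants above, are commonly generated with Mantegna's algorithm, sketched below. Heavy-tailed steps give occasional long jumps that help escape local optima while most moves stay small; how the paper scales and applies the steps inside the firefly update is specific to LSFA.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1, rng=None):
    """Mantegna's algorithm for Levy-stable step lengths with index beta."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)                  # numerator sample
    v = rng.normal(0, 1, size)                      # denominator sample
    return u / np.abs(v) ** (1 / beta)              # heavy-tailed step

# e.g. perturbing a firefly position; the 0.01 step scale is illustrative
rng = np.random.default_rng(9)
pos = np.zeros(5)
pos += 0.01 * levy_step(size=5, rng=rng)
print(pos)
```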
S1568494615003683
This paper presents a scatter search approach based on linear correlations among genes to find biclusters, including both shifting and scaling patterns and negatively correlated patterns, contrary to most correlation-based algorithms published in the literature. The methodology established here for comparison is based on a priori biological information stored in the well-known repository Gene Ontology (GO). In particular, the three existing categories in GO, Biological Process, Cellular Component and Molecular Function, have been used. The performance of the proposed algorithm has been compared to other benchmark biclustering algorithms, specifically a group of classical biclustering algorithms and two algorithms that use correlation-based merit functions. The proposed algorithm outperforms the benchmark algorithms and finds patterns based on negative correlations. Although these patterns contain important relationships among genes, they are not found by most biclustering algorithms. The experimental study also shows the importance of a bicluster's size in addition to the value of its correlation. In particular, the size of a bicluster influences its enrichment in a GO term.
Scatter search-based identification of local patterns with positive and negative correlations in gene expression data
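One simple correlation-based merit in the spirit of the abstract: the average absolute Pearson correlation over all gene pairs, which rewards negative correlations as much as positive ones. The paper's exact merit function may differ from this sketch.

```python
import numpy as np

def correlation_merit(bicluster):
    """Mean |Pearson correlation| over all gene pairs of a bicluster
    (rows = genes, columns = conditions). Absolute values let negatively
    correlated patterns score as highly as positive ones."""
    corr = np.corrcoef(bicluster)                # gene-by-gene correlations
    iu = np.triu_indices(len(corr), k=1)         # count each pair once
    return np.abs(corr[iu]).mean()

base = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
bic = np.vstack([2 * base + 1,                   # shifting + scaling pattern
                 0.5 * base - 3,
                 -base + 10])                    # negatively correlated gene
print(round(correlation_merit(bic), 3))          # -> 1.0, perfectly correlated
```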
S1568494615003695
In evolutionary multi-objective optimization, balancing convergence and diversity remains a challenge, especially for many-objective (three or more objectives) optimization problems (MaOPs). To improve convergence and diversity for MaOPs, we propose a new approach: the clustering-ranking evolutionary algorithm (crEA), in which the two procedures (clustering and ranking) are implemented sequentially. Clustering incorporates the recently proposed non-dominated sorting genetic algorithm III (NSGA-III), using a series of reference lines as the cluster centroids (the association step is sketched after this entry). The solutions are ranked according to the fitness value, which is considered to be the degree of closeness to the true Pareto front. An environmental selection operation is performed on every cluster to promote both convergence and diversity. The proposed algorithm has been tested extensively on nine widely used benchmark problems from the walking fish group (WFG) as well as the combinatorial travelling salesman problem (TSP). An extensive comparison with six state-of-the-art algorithms indicates that the proposed crEA is capable of finding a solution set with better approximation and distribution.
A clustering-ranking method for many-objective optimization
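The clustering step borrowed from NSGA-III associates each (normalized) solution with its nearest reference line by perpendicular distance; a minimal sketch, assuming objectives are already normalized:

```python
import numpy as np

def associate(solutions, ref_dirs):
    """Assign each objective vector to the reference line (through the
    origin) with the smallest perpendicular distance."""
    w = ref_dirs / np.linalg.norm(ref_dirs, axis=1, keepdims=True)
    proj = solutions @ w.T                          # scalar projections on lines
    d2 = (solutions**2).sum(1, keepdims=True) - proj**2   # squared perp. dist.
    return np.argmin(d2, axis=1)                    # cluster index per solution

ref_dirs = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # 2-objective lines
sols = np.array([[0.9, 0.1], [0.4, 0.5], [0.1, 0.8]])
print(associate(sols, ref_dirs))                    # -> [0 1 2]
```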
S1568494615003701
Fuzzy C-means (FCM) partitions observations partially into several clusters based on the principles of fuzzy theory. However, minimizing the Euclidean distance in FCM tends to detect hyper-spherically shaped clusters, which is infeasible for real-world problems. In this paper, an effective FCM algorithm that adopts a symmetry similarity measure is proposed in order to search for the appropriate clusters, regardless of geometric structure and overlapping characteristics (the point-symmetry distance underlying such measures is sketched after this entry). Experimental results on several artificial and real-life datasets of different natures, together with performance assessments against other existing clustering algorithms, demonstrate its superiority.
An effective fuzzy C-means algorithm based on symmetry similarity approach
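Symmetry-based clustering measures typically build on a point-symmetry distance: the mirror of a point about the cluster center should lie close to some actual data point if the cluster is symmetric about that center. The standard formulation is sketched below; the paper's variant may differ.

```python
import numpy as np

def point_symmetry_distance(x, center, data):
    """Point-symmetry distance: how close the mirror 2c - x of x about
    center c is to an existing data point, weighted by the Euclidean
    distance of x to c."""
    mirror = 2 * center - x
    d_sym = np.min(np.linalg.norm(data - mirror, axis=1))  # nearest-point gap
    return d_sym * np.linalg.norm(x - center)

data = np.array([[0., 0.], [2., 2.], [1., 1.], [0., 2.], [2., 0.]])
c = np.array([1., 1.])
# [0,0] mirrors to [2,2], which exists in the data, so the distance is 0
print(round(point_symmetry_distance(np.array([0., 0.]), c, data), 3))
```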
S1568494615003713
Multi-objective optimization problems (MOPs) have become a research hotspot, as they are commonly encountered in scientific and engineering applications. When solving some complex MOPs, it is quite difficult to locate the entire Pareto-optimal front. To better address this problem, a novel double-module immune algorithm named DMMO is presented, in which two evolutionary modules are embedded to simultaneously improve convergence speed and population diversity. The first module optimizes each objective independently using a sub-population composed of the individuals that are competitive in that objective, with differential evolution crossover performed to enhance the corresponding objective. The second module follows the traditional procedures of an immune algorithm, where proportional cloning (sketched after this entry), recombination and hyper-mutation operators are applied to concurrently strengthen the multiple objectives. The performance of DMMO is validated on 16 benchmark problems and further compared with several multi-objective algorithms, such as NSGA-II, SPEA2, SMS-EMOA, MOEA/D, SMPSO, NNIA and MIMO. Experimental studies indicate that DMMO performs better than the compared targets on most test problems, and the advantages of the double modules in DMMO are also analyzed.
A double-module immune algorithm for multi-objective optimization problems
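Proportional cloning, as commonly implemented in immune algorithms, can be sketched as follows; DMMO's exact proportionality constants are not given in the abstract, so the normalization here is an assumption.

```python
import numpy as np

def proportional_clone(pop, fitness, n_clones_total=20):
    """Each antibody receives a number of clones proportional to its
    normalized fitness, so better individuals are explored more
    intensively; every antibody keeps at least one clone."""
    f = fitness - fitness.min() + 1e-12              # shift to positive scale
    n = np.floor(n_clones_total * f / f.sum()).astype(int)
    return np.repeat(pop, np.maximum(n, 1), axis=0)  # at least one clone each

pop = np.array([[0.1, 0.2], [0.4, 0.9], [0.7, 0.3]])
fitness = np.array([1.0, 3.0, 2.0])                  # larger = better here
print(proportional_clone(pop, fitness, 12))
```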
S1568494615003725
In this paper, an Ellipsoid ARTMAP (EAM) network model based on an incremental learning algorithm is proposed to realize online learning and tool condition monitoring. The main characteristic of the EAM model is that hyper-ellipsoids are used for the geometric representation of categories, which can depict the sample distribution robustly and accurately. Meanwhile, the adaptive-resonance-based strategy updates the hyper-ellipsoid nodes locally and monotonically. Therefore, the model has strong incremental learning ability, which guarantees that the constructed classifier can learn new knowledge without forgetting the original information. Based on the incremental EAM model, a tool condition monitoring system is realized. In this system, features are first extracted from the force and vibration signals to depict the dynamic characteristics of the tool wear process. Then, the fast correlation-based filter (FCBF) method is introduced to select minimally redundant features adaptively so as to decrease feature redundancy and improve classifier robustness. Based on the selected features, an EAM-based incremental classifier is constructed to recognize the tool wear states. To show the effectiveness of the proposed method, multi-teeth milling experiments on Ti-6Al-4V alloy were carried out. Moreover, to estimate the generalization error of the classifiers accurately, a five-fold cross validation method is utilized. By comparison with the commonly used Fuzzy ARTMAP (FAM) classifier, the average recognition rate of the initial EAM classifier reaches 98.67%, which is higher than that of FAM. Moreover, the incremental learning ability of EAM is also analyzed and compared with FAM using new data coming from different cutting passes and tool wear categories. The results show that the updated EAM classifier achieves higher classification accuracy on the original knowledge while realizing effective online learning of the new knowledge.
Incremental learning for online tool condition monitoring using Ellipsoid ARTMAP network model
S1568494615003737
The problem of protein structure prediction (PSP) represents one of the most important challenges in computational biology. Determining the three-dimensional structure of proteins is necessary to understand their functions at the molecular level. The most representative soft computing approaches for solving the protein tertiary structure prediction problem are summarized in this paper and categorized by type of methodology. A total of 90 relevant works published in the last 15 years in the field of protein structure prediction are reviewed, including the best competitors in recent CASP editions. However, despite the large research effort of recent decades, considerable scope for further improvement remains in this area.
Soft computing methods for the prediction of protein tertiary structures: A survey
S1568494615003749
Information on fine-scale urban land cover is important for a number of urban planning practices, including tree shade mapping, green space analysis, urban hydrologic modeling and urban land use mapping. In this study, an urban land cover dataset obtained from the UCI (University of California at Irvine) machine learning repository was used. The dataset covers an urban area located in Deerfield Beach, FL, USA, and is derived from a high-resolution aerial image containing nine different urban land cover classes. Multi-scale spectral, size and shape characteristics were used to distinguish and classify these classes. The dataset comprises a total of 147 features and nine land cover classes: trees, grass, soil, concrete, asphalt, buildings, cars, pools and shadows. A new data weighting method is recommended to classify these nine different patterns automatically. The recommended method combines measures of central tendency (mean, harmonic mean, mode and median) with the k-means clustering method: the cluster center of each class is first found with k-means, the chosen measure of central tendency of that class's samples is then computed, and the ratio of the two yields the data weight coefficient of that class. This process is performed separately for the nine land cover classes, after which the coefficients are multiplied by the dataset to weight it (a minimal sketch of this weighting follows this entry). In the second stage, three classification algorithms, k-NN (k-nearest neighbor), extreme learning machine (ELM) and support vector machine (SVM), were used to classify the nine urban land cover classes after data weighting. Training and test sets were determined with 10-fold cross validation. With raw data, the overall classification accuracies of k-NN (for k=1), ELM and SVM were 77.15%, 84.70% and 84.79%, respectively. With the data weighting method (the combination of k-means clustering and the mode measure), the corresponding accuracies were 98.58%, 98.62% and 98.77%. These results suggest that the urban land cover in an aerial image can be classified into nine classes with a high success rate via the recommended data weighting method.
Automatic classification of high resolution land cover using a new data weighting procedure: The combination of k-means clustering algorithm and central tendency measures (KMC–CTM)
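A minimal sketch of the described weighting pipeline, assuming per-feature weights and one k-means center per class (the abstract leaves both choices open); the mean and mode branches correspond to two of the four central tendency measures mentioned.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy import stats

def kmc_ctm_weights(X, y, measure="mean"):
    """Per class: k-means centroid, central tendency of the class samples,
    and their ratio as the class weight vector."""
    weights = {}
    for c in np.unique(y):
        Xc = X[y == c]
        center = KMeans(n_clusters=1, n_init=10).fit(Xc).cluster_centers_[0]
        if measure == "mean":
            ct = Xc.mean(axis=0)
        else:                                        # mode, the best variant
            ct = stats.mode(np.round(Xc, 1), axis=0).mode.squeeze()
        weights[c] = ct / np.where(center == 0, 1e-12, center)
    return weights

def apply_weights(X, y, weights):                    # weight samples class-wise
    Xw = X.astype(float).copy()
    for c, w in weights.items():
        Xw[y == c] *= w
    return Xw

rng = np.random.default_rng(2)
X = rng.normal(5, 1, (60, 4)); y = rng.integers(0, 3, 60)
Xw = apply_weights(X, y, kmc_ctm_weights(X, y))
print(Xw.shape)
```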
S1568494615003750
In this work, a novel surrogate-assisted memetic algorithm is proposed, based on the preservation of genetic diversity within the population. The aim of the algorithm is to solve multi-objective optimization problems featuring computationally expensive fitness functions in an efficient manner. The main novelty is the use of an evolutionary algorithm as global searcher that treats genetic diversity as an objective during the evolution and uses it, together with a non-dominated sorting approach, to assign the ranks (a sketch of this diversity objective follows this entry). This algorithm, coupled with a gradient-based algorithm as local searcher and a back-propagation neural network as global surrogate model, is shown to provide a reliable and effective balance between exploration and exploitation. A detailed performance analysis has been conducted on five commonly used multi-objective problems, each involving distinct features that can make convergence toward the Pareto-optimal front difficult. In most cases, the proposed algorithm outperformed the other state-of-the-art evolutionary algorithms considered in the comparison, assuring higher repeatability of the final non-dominated set, a deeper convergence level and a higher convergence rate. It also demonstrates a clear ability to cover the Pareto-optimal front widely, with a larger percentage of non-dominated solutions relative to the total number of function evaluations.
A surrogate-assisted evolutionary algorithm based on the genetic diversity objective
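The diversity-as-objective idea can be sketched by appending a population-diversity term to each objective vector before non-dominated sorting; the mean-distance measure below is an assumption, since the abstract does not specify how genetic diversity is quantified.

```python
import numpy as np

def with_diversity_objective(pop, objs):
    """Append a diversity objective (to be maximized, hence negated for a
    minimization setting): here, each individual's mean distance to the
    rest of the population in decision space."""
    d = np.linalg.norm(pop[:, None] - pop[None], axis=2)   # pairwise distances
    diversity = d.sum(1) / (len(pop) - 1)                  # mean to the others
    return np.hstack([objs, -diversity[:, None]])          # extra column

rng = np.random.default_rng(8)
pop = rng.uniform(0, 1, (5, 3))      # decision vectors
objs = rng.uniform(0, 1, (5, 2))     # two true objectives
print(with_diversity_objective(pop, objs).shape)           # -> (5, 3)
```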
S1568494615003762
Although the harmony search (HS) algorithm has shown many advantages in solving global optimization problems, its parameters need to be set by users according to experience and problem characteristics, which causes great difficulties for novice users. To overcome this difficulty, a self-adaptive multi-objective harmony search (SAMOHS) algorithm based on harmony memory variance is proposed in this paper. In the SAMOHS algorithm, a modified self-adaptive bandwidth is employed; moreover, a self-adaptive parameter setting based on the variation of the harmony memory variance is proposed for the harmony memory considering rate (HMCR) and pitch adjusting rate (PAR) (an illustrative improvisation step follows this entry). To solve multi-objective optimization problems (MOPs), the proposed SAMOHS uses a non-dominated sorting and truncating procedure to update the harmony memory (HM). To demonstrate its effectiveness, SAMOHS is tested on many benchmark problems and applied to a practical engineering optimization problem. The experimental results show that SAMOHS is competitive in convergence and diversity performance compared with other multi-objective evolutionary algorithms (MOEAs). The impact of the harmony memory size (HMS) on the performance of SAMOHS is also analyzed.
A self-adaptive multi-objective harmony search algorithm based on harmony memory variance
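A single harmony-search improvisation step with variance-driven parameters, shown on a single-objective toy problem for brevity. The particular update laws for HMCR, PAR and bandwidth in the loop are illustrative, not those of SAMOHS.

```python
import numpy as np

def improvise(hm, hmcr, par, bw, bounds, rng):
    """One HS improvisation: memory consideration, pitch adjustment, or
    random re-initialization, per decision variable."""
    new = np.empty(hm.shape[1])
    for j in range(hm.shape[1]):
        if rng.random() < hmcr:                      # memory consideration
            new[j] = hm[rng.integers(len(hm)), j]
            if rng.random() < par:                   # pitch adjustment
                new[j] += bw * rng.uniform(-1, 1)
        else:                                        # random re-initialization
            new[j] = rng.uniform(*bounds)
        new[j] = np.clip(new[j], *bounds)
    return new

rng = np.random.default_rng(4)
hm = rng.uniform(-5, 5, (10, 3))                     # harmony memory
for t in range(200):
    var = hm.var(axis=0).mean()                      # HM variance drives params
    hmcr, par, bw = 0.99 - 0.2 * np.exp(-var), 0.3, 0.01 + 0.1 * var
    x = improvise(hm, hmcr, par, bw, (-5, 5), rng)
    worst = np.argmax((hm**2).sum(axis=1))           # sphere objective
    if (x**2).sum() < (hm[worst]**2).sum():
        hm[worst] = x                                # replace worst harmony
print("best:", (hm**2).sum(axis=1).min())
```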
S1568494615003774
There are two popular types of forecasting algorithms for fuzzy time series (FTS): one based on intervals of the universal sets of the independent variables, and the other based on fuzzy clustering algorithms. Clustering-based FTS algorithms are preferred, since the role and optimal length of intervals are not clearly understood; however, the data of each variable are individually clustered, which requires higher computational time. Fuzzy logical relationships (FLRs) are used in existing FTS algorithms to relate input and output data, and a high number of clusters and FLRs are required to establish precise input/output relations, which incurs high computational time. This article presents a forecasting algorithm based on fuzzy clustering (CFTS) which clusters vectors of input data instead of clustering the data of each variable separately, and uses linear combinations of the input variables instead of the FLRs (a sketch of this scheme follows this entry). The cluster centers handle the fuzziness and ambiguity of the data, while the linear parts allow the algorithm to learn more from the available information. It is shown that CFTS outperforms existing FTS algorithms with considerably lower testing error and running time.
A clustering based forecasting algorithm for multivariable fuzzy time series using linear combinations of independent variables
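One Takagi-Sugeno-style reading of the abstract: fuzzy C-means on the input vectors, one linear model per cluster, with forecasts mixed by membership. This is our interpretation of "linear combinations of the input variables", not necessarily the paper's exact scheme.

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means returning centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # rows sum to 1
    for _ in range(iters):
        um = U ** m
        centers = (um.T @ X) / um.sum(0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        p = 2 / (m - 1)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(1, keepdims=True))
    return centers, U

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 2))                         # lagged input vectors
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(0, .05, 200)
centers, U = fcm(X)
Xb = np.hstack([X, np.ones((200, 1))])                # intercept column
models = [np.linalg.lstsq(Xb * np.sqrt(U[:, [k]]),    # membership-weighted
                          y * np.sqrt(U[:, k]), rcond=None)[0]
          for k in range(len(centers))]               # one linear part each
pred = sum(U[:, k] * (Xb @ models[k]) for k in range(len(centers)))
print("RMSE:", round(float(np.sqrt(((pred - y) ** 2).mean())), 4))
```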
S1568494615003786
Mammography is the most effective technique for breast cancer screening and detection of abnormalities. However, early detection of breast cancer is dependent on both the radiologist's ability to read mammograms and the quality of mammogram images. In this paper, the researchers have investigated combining several image enhancement algorithms to enhance the performance of breast-region segmentation. The masses that appear in mammogram images are further analyzed and classified into four categories: benign, probable benign and possible malignant, probable malignant and possible benign, and malignant. The main contribution of this work is to reveal the optimal combination of various enhancement methods and to segment the breast region in order to obtain better visual interpretation, analysis, and classification of mammogram masses, assisting radiologists in making more accurate decisions. The experimental dataset consists of more than 1300 mammogram images from the King Hussein Cancer Center and Jordan Hospital. The results achieved a tumor classification accuracy of 90.7%, with a sensitivity of 96.2% and a specificity of 94.4% for the mass classifying algorithm. Radiologists from both institutes have acknowledged the results and confirmed that this work has led to images of better visual quality and that the segmentation and classification of tumors has aided them in making their diagnoses.
Mammogram image visual enhancement, mass segmentation and classification
S1568494615003798
The expansion of roads and the development of new road infrastructures have increased in recent years, linked to the population growing in large cities. In the last two decades, roundabouts have largely replaced traditional intersections in many countries. They have the advantage of allowing drivers continuous flow when traffic is clear, without the usual delay caused by traffic lights. Although roundabouts with and without traffic-signal control have been widely used and considered in the literature, driverless control on roundabouts has not been studied in depth yet. The behavior of autonomous vehicles in roundabouts can be divided into three stages: entrance, inside, and exit. The first and last may be handled as an extension of intersections. However, autonomous driving on the roundabout requires special attention. In this paper, the design and implementation of a fuzzy logic system for the steering control of autonomous vehicles inside the roundabout is proposed. Cascade architecture for lateral control and parametric trajectory generation are used. Fuzzy control has proved to be easy to define using expert knowledge. Experiments with a real prototype have been carried out, taking into account different speed profiles and lane change maneuvers inside the roundabout, with very satisfactory results.
Fuzzy logic steering control of autonomous vehicles inside roundabouts
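A toy Mamdani-style rule base for lateral control gives the flavor of such a controller; the real system described above uses a cascade architecture, parametric trajectories and richer inputs, so everything below (universes, rules, sign conventions) is illustrative.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_steer(lateral_error):
    """Toy Mamdani controller mapping lateral error (m, + = right of path)
    to a steering command (rad, + = steer left), three rules only."""
    universe = np.linspace(-0.5, 0.5, 501)           # candidate steer angles
    rules = [                                        # (error set, steer set)
        (tri(lateral_error, -2.0, -1.0, 0.0), tri(universe, 0.0, 0.25, 0.5)),
        (tri(lateral_error, -0.5, 0.0, 0.5), tri(universe, -0.1, 0.0, 0.1)),
        (tri(lateral_error, 0.0, 1.0, 2.0), tri(universe, -0.5, -0.25, 0.0)),
    ]
    agg = np.max([np.minimum(w, out) for w, out in rules], axis=0)  # clip+merge
    return float((agg * universe).sum() / (agg.sum() + 1e-12))      # centroid

for e in (-1.0, 0.0, 1.0):
    print(f"error {e:+.1f} m -> steer {fuzzy_steer(e):+.3f} rad")
```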