FileName
stringlengths 17
17
| Abstract
stringlengths 163
6.01k
| Title
stringlengths 12
421
|
---|---|---|
S1568494615005116 | In this study, an exact analytical inversion procedure is presented for interval type-2 Takagi–Sugeno–Kang fuzzy logic systems (IT2 TSK FLSs) with closed form inference methods. In this inversion procedure, the inversion variable is needed to be defined by piecewise linear interval type-2 fuzzy sets (PWL IT2 FSs). This provides a way to define an IT2 TSK FLS in a form that is composed of several fuzzy subsystems with respect to the inversion variable. In the proposed procedure, the output definitions of fuzzy subsystems are derived in a unique parametric form in terms of the inversion variable by using parameters of linear definition of membership functions. By rearranging this definition with respect to the inversion variable; either a linear, quadratic or cubic equation is obtained depending on the types of the inference method and rule consequents of the IT2 TSK FLS. Since explicit analytical solutions are available for these types of equations, the exact inverse solution(s) of each fuzzy subsystem can easily be calculated. The inverse solution set of overall IT2 TSK FLS is obtained by composing the inverse solutions of all fuzzy subsystems. The exactness and feasibility of the proposed inversion procedure has been illustrated through simulation examples. | Exact analytical inversion of interval type-2 TSK fuzzy logic systems with closed form inference methods |
S1568494615005128 | In this contribution, a novel bionic algorithm inspired by plant root foraging behaviors, namely artificial root foraging optimization (ARFO) algorithm, is designed and developed. The incentive mechanism of ARFO is to mimic the adaptation and randomness of plant root foraging behaviors, e.g., branching, regrowing and tropisms. A mathematical architecture is firstly designed to model the plant root foraging pattern. Under this architecture, the effects of the tropism and the self-adaptive growth behaviors are investigated. Afterward, the arithmetic realization of ARFO derived from this framework is presented in detail. In order to demonstrate the optimization performance, the proposed ARFO is benchmarked against several state-of-the-art reference algorithms on a suit of CEC 2013 and CEC 2014 functions. Computational results show a high performance of the proposed ARFO for searching a global optimum on several benchmarks, which indicates that ARFO has potential to deal with complex optimization problems. | A novel bionic algorithm inspired by plant root foraging behaviors |
S156849461500513X | The traditional visual and acoustic embolic signal detection methods based on the expert analysis of individual spectral recordings and Doppler shift sounds are the gold standards. However, these types of detection methods are high-cost, subjective, and can only be applied by experts. In order to overcome these drawbacks, computer based automated embolic detection systems which employ spectral properties of emboli, speckle, and artifact using Fourier and Wavelet Transforms have been proposed. In this study, we propose a fast, accurate, and robust automated emboli detection system based on the Dual Tree Complex Wavelet Transform (DTCWT). Employing the DTCWT, which does not suffer from the lack of shift invariance property of ordinary Discrete Wavelet Transform (DWT), increases the robustness of the coefficients extracted from the Doppler ultrasound signals. In this study, a Doppler ultrasound dataset including 100 samples from each embolic, Doppler speckle, and artifact signal is used. Each sample obtained from forward and reverse blood flow directions is represented by 1024 points. In our method, we first extract the forward and reverse blood flow coefficients separately using DTCWT from the samples. Then dimensionality reduction is applied to each set of coefficients and both of the reduced set of coefficients are fed to classifiers individually. Subsequently, in the view that the forward and reverse blood flow coefficients carry different characteristics, the individual predictors of these classifiers are combined using ensemble stacking method. We compare the obtained results with Fast Fourier Transform and DWT based emboli detection systems, and show that the features extracted using DTCWT give the highest accuracy and emboli detection rate. It is also observed that combining forward and reverse coefficients using stacking ensemble method improves the emboli and artifact detection rates, and overall accuracy. | An emboli detection system based on Dual Tree Complex Wavelet Transform and ensemble learning |
S1568494615005141 | Rough set theory (RST) has been the subject of much study and numerous applications in many areas. However, most previous studies on rough sets have focused on finding rules where the decision attribute has a flat, rather than hierarchical structure. In practical applications, attributes are often organized hierarchically to represent general/specific meanings. This paper (1) determines the optimal decision attribute in a hierarchical level-search procedure, level by level, (2) merges the two stages, generating reducts and inducting decision rules, into a one-shot solution that reduces the need for memory space and the computational complexity and (3) uses a revised strength index to identify meaningful reducts and to improve their accuracy. The selection of a green fleet is used to validate the superiority of the proposed approach and its potential benefits to a decision-making process for transportation industry. | Rule induction for hierarchical attributes using a rough set for the selection of a green fleet |
S1568494615005153 | In this study, a multi-attribute group decision making (MAGDM) problem is investigated, in which decision makers provide their preferences over alternatives by using linguistic 2-tuple. In the process of decision making, we introduce the idea of a specific structure in the attribute set. We assume that attributes are partitioned into several classes and members of intra-partition are interrelated while no interrelationship exists among inter partition. We emphasize the importance of having an aggregation operator, to capture the expressed inter-relationship structure among the attributes, which we will refer to as partition Bonferroni mean (PBM). We also investigate the behavior of the proposed PBM operator. Further to aggregate the given linguistic information to get overall performance value of each alternative in MAGDM, we analyze PBM operator in linguistic 2-tuple environment and develop three new linguistic aggregation operators: 2-tuple linguistic PBM (2TLPBM), weighted 2-tuple linguistic PBM (W2TLPBM) and linguistic weighted 2-tuple linguistic PBM (LW-2TLPBM). Based on the idea that total linguistic deviation between individual decision maker's opinions and group opinion should be minimized, we develop an approach to determine weight of the decision makers. Finally, a practical example is presented to illustrate the proposed method and comparison analysis demonstrates applicability of the proposed method. | Partitioned Bonferroni mean based on linguistic 2-tuple for dealing with multi-attribute group decision making |
S1568494615005165 | In this paper, the optimum cutting conditions without chatter vibrations have been determined during turning operations. Chatter vibrations are detrimental and cause poor surface properties. In this study, chatter vibration prevention has been discussed in a different way using a multi-criteria decision making approach. Regression-multi-criteria decision making hybrid models have been developed and applied to the problem of chatter vibrations. First, regression models have been used to determine the criteria weights for TOPSIS (technique for order preference by similarity to ideal solution) model. Then, TOPSIS models have been developed. Three different hybrid models have been studied. The results of these three models are the same. It has been seen from the results that the number of revolutions and the workpiece hardness are the most effective parameters. The models are developed to help operators in different manufacturing environments. | A hybrid decision making approach to prevent chatter vibrations |
S1568494615005177 | With the advance of technology over the years, computer numerical control (CNC) has been utilized in end milling operations in many industries such as the automotive and aerospace industry. As a result, the need for end milling operations has increased, and the enhancement of CNC end milling technology has also become an issue for automation industry. There have been a considerable number of researches on the capability of CNC machines to detect the tool condition. A traditional tool detection system lacks the ability of self-learning. Once the decision-making system has been built, it cannot be modified. If error detection occurs during the detection process, the system cannot be adjusted. To overcome these shortcomings, a probabilistic neural network (PNN) approach for decision-making analysis of a tool breakage detection system is proposed in this study. The fast learning characteristic of a PNN is utilized to develop a real-time high accurate self-learning tool breakage detection system. Once an error occurs during the machining process, the new error data set is sent back to the PNN decision-making model to re-train the network structure, and a new self-learning tool breakage detection system is reconstructed. Through a self-learning process, the result shows the system can 100% monitor the tool condition. The detection capability of this adjustable tool detection system is enhanced as sampling data increases and eventually the goal of a smart CNC machine is achieved. | A PNN self-learning tool breakage detection system in end milling operations |
S1568494615005219 | Multi Compartment Vehicle Routing Problem is an extension of the classical Capacitated Vehicle Routing Problem where different products are transported together in one vehicle with multiple compartments. Products are stored in different compartments because they cannot be mixed together due to differences in their individual characteristics. The problem is encountered in many industries such as delivery of food and grocery, garbage collection, marine vessels, etc. We propose a hybridized algorithm which combines local search with an existent ant colony algorithm to solve the problem. Computational experiments are performed on new generated benchmark problem instances. An existing ant colony algorithm and the proposed hybridized ant colony algorithm are compared. It was found that the proposed ant colony algorithm gives better results as compared to the existing ant colony algorithm. | Hybridized ant colony algorithm for the Multi Compartment Vehicle Routing Problem |
S1568494615005220 | Artificial Bee Colony (ABC) algorithm is a wildly used optimization algorithm. However, ABC is excellent in exploration but poor in exploitation. To improve the convergence performance of ABC and establish a better searching mechanism for the global optimum, an improved ABC algorithm is proposed in this paper. Firstly, the proposed algorithm integrates the information of previous best solution into the search equation for employed bees and global best solution into the update equation for onlooker bees to improve the exploitation. Secondly, for a better balance between the exploration and exploitation of search, an S-type adaptive scaling factors are introduced in employed bees’ search equation. Furthermore, the searching policy of scout bees is modified. The scout bees need update food source in each cycle in order to increase diversity and stochasticity of the bees and mitigate stagnation problem. Finally, the improved algorithms is compared with other two improved ABCs and three recent algorithms on a set of classical benchmark functions. The experimental results show that the our proposed algorithm is effective and robust and outperform than other algorithms. | An Artificial Bee Colony algorithm with guide of global & local optima and asynchronous scaling factors for numerical optimization |
S1568494615005244 | Double row layout problem (DRLP) is to allocate facilities on two rows separated by a straight aisle. Aiming at the dynamic environment of product processing in practice, we propose a dynamic double-row layout problem (DDRLP) where material flows change over time in different processing periods. A mixed-integer programming model is established for this problem. A methodology combining an improved simulated annealing (ISA) with mathematical programming (MP) is proposed to resolve it. Firstly, a mixed coding scheme is designed to represent both of sequence of facilities and their exact locations. Secondly, an improved simulated annealing algorithm is suggested to produce a solution to DDRLP. Finally, MP is used to improve this solution by determining the optimal exact location for each facility. Experiments show that this methodology is able to obtain the optimal solutions for small size problems and outperforms an exact approach (CPLEX) for problems of realistic size. | Solving dynamic double row layout problem via combining simulated annealing and mathematical programming |
S1568494615005256 | Security constrained optimal power flow (SCOPF) is an important operation function for dispatching centers of current power systems. It optimizes operating conditions of the system, while saves its security. However, SCOPF in its present form focuses on a single time interval, which is known as static SCOPF. A multi-period SCOPF model, referred to as dynamic SCOPF (DSCOPF) in this paper, is presented. It extends static SCOPF to multi-period frameworks considering the inter-temporal constraints. The proposed DSCOPF is a more practical operation function and can optimize operating conditions of the system more realistically compared to traditional SCOPF. Moreover, to solve this DSCOPF model, considering its nonlinear and non-convex behavior, a new stochastic search method is presented, which is an enhanced version of artificial bee colony (ABC) algorithm. The proposed enhanced ABC (EABC) has high exploration capability and can discover different areas of the solution space. Also, it is a robust algorithm and has low sensitivity with respect to the initial population. Effectiveness of the proposed EABC is extensively illustrated on various test cases. generation cost coefficients of ith unit with valve loading effects down ramp rate limit of ith unit generation cost function of ith unit conductance and susceptance of the branch connected between ith and jth buses, respectively number of branches number of buses, number of buses excluding the slack buses and number of PQ buses, respectively number of shunt compensators number of fuel options of ith unit number of units of the power system number of POZs for ith unit number of phase shifters number of prohibited operating zones of ith unit number of tap-changing transformers lower and upper bound for kth prohibited operating zone (POZ) of ith unit minimum and maximum active power generation limits for ith unit active power generation and demand of ith bus in hour t, respectively a vector including all P G,i,t variables phase shifter setting for ith phase shifter in hour t minimum and maximum limits for ith phase shifter setting reactive power injection of ith shunt compensator in hour t minimum and maximum limits for reactive power injection of ith shunt compensator minimum and maximum reactive power generation limits for ith unit reactive power generation and demand of ith bus in hour t, respectively number of hours of the operation horizon tap-changer setting for ith transformer in hour t minimum and maximum limits for ith tap-changer setting up ramp rate limit of ith unit voltage magnitude of ith bus in hour t minimum and maximum limits for voltage magnitude of ith bus apparent power flow of kth branch in hour t apparent power flow limit for kth branch system reserve required in time interval t phase angle difference between buses i and j in hour t | Security constrained multi-period optimal power flow by a new enhanced artificial bee colony |
S1568494615005268 | In this paper, we propose a hybrid deep neural network model for recognizing human actions in videos. A hybrid deep neural network model is designed by the fusion of homogeneous convolutional neural network (CNN) classifiers. The ensemble of classifiers is built by diversifying the input features and varying the initialization of the weights of the neural network. The convolutional neural network classifiers are trained to output a value of one, for the predicted class and a zero, for all the other classes. The outputs of the trained classifiers are considered as confidence value for prediction so that the predicted class will have a confidence value of approximately 1 and the rest of the classes will have a confidence value of approximately 0. The fusion function is computed as the maximum value of the outputs across all classifiers, to pick the correct class label during fusion. The effectiveness of the proposed approach is demonstrated on UCF50 dataset resulting in a high recognition accuracy of 99.68%. | Hybrid deep neural network model for human action recognition |
S156849461500527X | The analysis of internal connective operators of fuzzy reasoning is very significant and the robustness of fuzzy reasoning has been calling for study. An interesting and important question is that, how to choose suitable internal connective operators to guarantee good robustness of rule-based fuzzy reasoning? This paper is intended to answer it. In this paper, Lipschitz aggregation property and copula characteristic of t-norms and implications are discussed. The robustness of rule-based fuzzy reasoning is investigated and the relationships among input perturbation, rule perturbation and output perturbation are presented. The suitable t-norm and implication can be chosen to satisfy the need of robustness of fuzzy reasoning. In 1-Lipschitz operators, if both t-norm and implication are copulas, the rule-based fuzzy reasoning is much more stable and more reliable. In copulas, if both t-norm and implication are 1-l ∞-Lipschitz, they can guarantee good robustness of fuzzy reasoning. The experiments not only illustrate the ideas proposed in the paper but also can be regarded as applications of soft computing. The approach in the paper also provides guidance for choosing suitable fuzzy connective operators and decision making application in rule-based fuzzy reasoning. | A novel approach to guarantee good robustness of fuzzy reasoning |
S1568494615005281 | This paper presents a system for weed mapping, using imagery provided by unmanned aerial vehicles (UAVs). Weed control in precision agriculture is based on the design of site-specific control treatments according to weed coverage. A key component is precise and timely weed maps, and one of the crucial steps is weed monitoring, by ground sampling or remote detection. Traditional remote platforms, such as piloted planes and satellites, are not suitable for early weed mapping, given their low spatial and temporal resolutions. Nonetheless, the ultra-high spatial resolution provided by UAVs can be an efficient alternative. The proposed method for weed mapping partitions the image and complements the spectral information with other sources of information. Apart from the well-known vegetation indexes, which are commonly used in precision agriculture, a method for crop row detection is proposed. Given that crops are always organised in rows, this kind of information simplifies the separation between weeds and crops. Finally, the system incorporates classification techniques for the characterisation of pixels as crop, soil and weed. Different machine learning paradigms are compared to identify the best performing strategies, including unsupervised, semi-supervised and supervised techniques. The experiments study the effect of the flight altitude and the sensor used. Our results show that an excellent performance is obtained using very few labelled data complemented with unlabelled data (semi-supervised approach), which motivates the use of weed maps to design site-specific weed control strategies just when farmers implement the early post-emergence weed control. | A semi-supervised system for weed mapping in sunflower crops using unmanned aerial vehicles and a crop row detection method |
S1568494615005293 | Due to environmental benefits, methyl esters biodiesel got a considerable attention as a viable substitute to petroleum-based diesel. Surface tension plays significant role in atomization of this biodiesel since it controls the combustion process inside the engine through fuel–air mixing. Experimental determination of the surface tension of biodiesel is expensive and time consuming which limits its application as substitute for petroleum-based diesel. This is because proper choice of any methyl esters for diesel engine applications depend on the value of surface tension as high value of surface tension brings about difficulty in droplet formation. This work employs computational intelligence technique on the platform of sensitivity based linear learning method (SBLLM) to develop methyl esters surface tension estimator (MESTE) which estimates surface tension of methyl esters biodiesel with high degree of accuracy. Surface tensions of eight different classes of methyl esters were estimated at different temperatures by training and testing of neural network using SBLLM. The estimated surface tensions were compared with experimental results as well as surface tension obtained from Parachor model and Goldhammer model. The outstanding performance of the developed MESTE suggests its potential in estimating surface tension of methyl esters biodiesel for enhancing the atomization in biodiesels engine applications. | Estimation of surface tension of methyl esters biodiesels using computational intelligence technique |
S156849461500530X | In this paper, using the Dempster–Shafer theory (DST) of evidence, a new decision criterion is proposed which can quickly classify airborne objects without any a priori knowledge, whose data are laced with environmental noise characteristics, within 10seconds (10s) from the time it is detected. Kinematic parameters of an airborne object received from radars are used to classify it into one of the six classes, which include three levels of ballistic target discrimination, aerodynamic, satellite and unknown. The DST is chosen as it can suitably handle the element of uncertainty, limited a priori data and short observation times that exist with the data acquired for the purpose of classification. The focus of the work is on ballistic targets in a theater of war. The approach is compared with the popularly known k-NN and decision tree techniques and is found to perform better with the chosen data sets. This approach is tested using both real flight test data and simulated data. | Evidence theoretic classification of ballistic missiles |
S1568494615005311 | This paper presents a novel method for intensity normalization of DaTSCAN SPECT brain images. The proposed methodology is based on Gaussian mixture models (GMMs) and considers not only the intensity levels, but also the coordinates of voxels inside the so-defined spatial Gaussian functions. The model parameters are obtained according to a maximum likelihood criterion employing the expectation maximization (EM) algorithm. First, an averaged control subject image is computed to obtain a threshold-based mask that selects only the voxels inside the skull. Then, the GMM is obtained for the DaTSCAN-SPECT database, performing space quantization by populating it with Gaussian kernels whose linear combination approximates the image intensity. According to a probability threshold that measures the weight of each kernel or “cluster” in the striatum area, the voxels in the non-specific region are intensity-normalized by removing clusters whose likelihood is negligible. | Intensity normalization of DaTSCAN SPECT imaging using a model-based clustering approach |
S1568494615005347 | The attenuation of the light that travels through a water medium subjects underwater images to several problems. As a result of low contrast and color performance, images are unclear and lose important information. Therefore, the objects in these images can hardly be differentiated from the background. This study proposes a new method called dual-image Rayleigh-stretched contrast-limited adaptive histogram specification, which integrates global and local contrast correction. The aims of the proposed method are to increase image details and to improve the visibility of underwater images while enhancing image contrasts. The two main steps of the proposed method are contrast and color corrections; an underwater image undergoes the former before the latter. Global contrast correction generates dual-intensity images, which are then integrated to produce contrast-enhanced resultant images. Subsequently, such images are processed locally to enhance details. The color of the images is also corrected to improve saturation and brightness. Qualitative and quantitative results show that the contrast of the resultant image improves significantly. Moreover, image detail and color are adequately enhanced; thus, the proposed approach outperforms current state-of-the-art methods. | Enhancement of low quality underwater image through integrated global and local contrast correction |
S1568494615005359 | In this paper the main goal is to find the optimal architecture of modular neural networks, which means finding out the optimal number of modules, layers and nodes of the neural network. The fuzzy gravitational search algorithm with dynamic parameter adaptation is used for optimizing the modular neural network in a particular pattern recognition application. The proposed method is applied to medical images in echocardiogram recognition. One of the most common methods for detection and analysis of diseases in the human body, by physicians and specialists, is the use of medical images. Simulation results of the proposed approach in echocardiogram recognition show the advantages of using the fuzzy gravitational search in the optimization of modular neural networks. In this case the proposed approach provides a very good 99.49% echocardiogram recognition rate. | Fuzzy logic in the gravitational search algorithm enhanced using fuzzy logic with dynamic alpha parameter value adaptation for the optimization of modular neural networks in echocardiogram recognition |
S1568494615005360 | The capture of an eye image with the occlusion of spectacles in a non-cooperative environment compromises the accuracy in identifying a person in an iris recognition system. This is due to the obstruction of the iris by the frame which tends to produce an incorrect estimation of the initial center of the iris and the pupil during the iris segmentation process. In addition, it also causes incorrect localization of the upper eyelid during the process of iris segmentation and sometimes, the edges of the frame are wrongly identified as the edges of the upper eyelid. A frame detection method which involves the combination of two gradients, namely the Sobel operator and high pass filter, followed by fuzzy logic and the dilation operation of morphological processing is proposed to identify the frame on the basis of different frame factors in the capture of a distant eye image. In addition, a different color space is applied and only a single channel is used for the process of frame detection. The proposed frame detection method provides the highest frame detection rate compared to the other methods, with a detection rate of more than 80.0%. For the accuracy of the iris localization, upper eyelid localization and iris recognition system, the proposed method gives more than 96.5% accuracy compared to the other methods. The index of decidability showed that the proposed method gives more than 2.35 index compared to the existing methods. | Frame detection using gradients fuzzy logic and morphological processing for distant color eye images in an intelligent iris recognition system |
S1568494615005372 | This work addresses the problem of detecting parametric faults in nonlinear dynamic systems by extending an eigenstructure based technique to a nonlinear context. Two local state-space models are updated online based on a recursive subspace system identification technique. One of the models relies on input–output real-time data collected from the plant, while the other is updated using data generated by a neural network predictor, describing the nonlinear plant behaviour in fault-free conditions. Parametric faults symptoms are generated based on eigenvalues residuals associated with two linear state-space model approximators. The feasibility and effectiveness of the proposed framework are demonstrated through two case studies. | Recursive subspace system identification for parametric fault detection in nonlinear systems |
S1568494615005384 | The recently developed flower pollination algorithm is used to minimize the weight of truss structures, including sizing design variables. The new algorithm can efficiently combine local and global searches, inspired by cross-pollination and self-pollination of flowering plants, respectively. Furthermore, it implements an iterative constraint handling strategy where trial designs are accepted or rejected based on the allowed amount of constraint violation that is progressively reduced as the search process approaches the optimum. This strategy aims to obtain always feasible optimized designs. The new algorithm is tested using three classical sizing optimization problems of 2D and 3D truss structures. Optimization results show that the proposed method is competitive with other state-of-the-art metaheuristic algorithms presented in the literature. | Sizing optimization of truss structures using flower pollination algorithm |
S1568494615005396 | Instance selection aims at filtering out noisy data (or outliers) from a given training set, which not only reduces the need for storage space, but can also ensure that the classifier trained by the reduced set provides similar or better performance than the baseline classifier trained by the original set. However, since there are numerous instance selection algorithms, there is no concrete winner that is the best for various problem domain datasets. In other words, the instance selection performance is algorithm and dataset dependent. One main reason for this is because it is very hard to define what the outliers are over different datasets. It should be noted that, using a specific instance selection algorithm, over-selection may occur by filtering out too many ‘good’ data samples, which leads to the classifier providing worse performance than the baseline. In this paper, we introduce a dual classification (DuC) approach, which aims to deal with the potential drawback of over-selection. Specifically, performing instance selection over a given training set, two classifiers are trained using both a ‘good’ and ‘noisy’ sets respectively identified by the instance selection algorithm. Then, a test sample is used to compare the similarities between the data in the good and noisy sets. This comparison guides the input of the test sample to one of the two classifiers. The experiments are conducted using 50 small scale and 4 large scale datasets and the results demonstrate the superior performance of the proposed DuC approach over the baseline instance selection approach. | On learning dual classifiers for better data classification |
S1568494615005402 | Fuzzy cognitive maps have been widely used as abstract models for complex networks. Traditional ways to construct fuzzy cognitive maps rely on domain knowledge. In this paper, we propose to use fuzzy cognitive map learning algorithms to discover domain knowledge in the form of causal networks from data. More specifically, we propose to infer gene regulatory networks from gene expression data. Furthermore, a new efficient fuzzy cognitive map learning algorithm based on a decomposed genetic algorithm is developed to learn large scale networks. In the proposed algorithm, the simulation error is used as the objective function, while the model error is expected to be minimized. Experiments are performed to explore the feasibility of this approach. The high accuracy of the generated models and the approximate correlation between simulation errors and model errors suggest that it is possible to discover causal networks using fuzzy cognitive map learning. We also compared the proposed algorithm with ant colony optimization, differential evolution, and particle swarm optimization in a decomposed framework. Comparison results reveal the advantage of the decomposed genetic algorithm on datasets with small data volumes, large network scales, or the presence of noise. | Inferring causal networks using fuzzy cognitive maps and evolutionary algorithms with application to gene regulatory network reconstruction |
S1568494615005414 | Load forecasting is an integral problem in the power system operation, planning and maintenance. The article presents the principles of the pattern similarity-based methods for short-term load forecasting. A common feature of these methods is learning from the data and using similarities between patterns of the seasonal cycles of the load time series. These series are non-stationary in mean and variance, contain long run trend, many cycles of seasonal fluctuations and random noise. The new approach based on the pattern similarity and local nonparametric regression simplifies the forecasting problem and enables us to develop effective forecasting models. Several functions mapping daily cycles of the load time series into input and output patterns are defined. The assumption underlying the pattern similarity-based methods of forecasting and the way of its verification are presented. Some indicators of the strength and stability of the relationship between patterns are described. In the experimental part of the work pattern definitions and the validity of the assumption were verified using Polish power system data. The data analysis was performed specific for load time series. The results show that pattern similarity-based methods can be very useful for forecasting time series with multiple seasonal cycles. | Pattern similarity-based methods for short-term load forecasting – Part 1: Principles |
S1568494615005499 | The main focus of this paper is to develop a model considering some significant aspects of real-world supply chain production planning approved by industries. To do so, we consider a supply chain (SC) model, which contains multiple suppliers, multiple manufactures and multiple customers. This model is formulated as a fuzzy multi objective mixed-integer nonlinear programming (FMOMINLP) to address a comprehensive multi-site, multi-period and multi-product aggregate production planning (APP) problem under uncertainty. Four conflicting objectives are considered in the presented model simultaneously, which are (i) to minimize the total cost of the SC (production costs, workforce wage, hiring/firing and training costs, transportation cost, inventory holding cost, raw material purchasing cost, and shortage cost), (ii) to improve customer satisfaction, (iii) to minimize the fluctuations in the rate of changes of workforce, and (iv) to maximize the total value of purchasing in order to consider the impact of qualitative performance criteria. This model is converted to multi-objective mixed-integer linear programming (MOMILP) through three steps of the developed method and then the MOMILP model is solved by two different methods. Additionally, comparison of these two methods is presented and the results are analyzed. Finally, the efficiency of the model is investigated by a real industry SC case study. number of time periods (t =1,…, T) number of products (i =1,…, I) number of production sites (j =1,…, J) number of suppliers (s=1,…, S) number of worker levels (k =1,…, K) number of consumers (c =1,…, C) number of raw materials (m =1,…, M) regular time production cost per unit at site j in period t overtime production cost per unit at site j in period t subcontracting production cost per unit at site j in period t cost of raw material m provided by supplier s in period t shipping costs from the supplier s to site j in time t average failure rate of raw material m supplied by supplier s to site j in period t acceptable failure rate of site j for incoming shipments of raw material m average service level of supplier s site j acceptable service level labor cost of the k-levels (k =1,…, K) worker at site j in period t firing cost of the k-levels worker at site j in period t hiring cost of the k-levels worker at site j in period t training cost for the k-levels worker trained to level k′at site j in period t inventory holding cost of product i at site j in period t transportation cost from site j to consumer c in period t inventory holding cost of raw material m at site j in period t cost of penalties associated with shortage of product i for consumer c in period t sale price of product i for consumer c in period t demand of production i from consumer c in period t maximum amount of subcontracting for product i at site j in period t machine hour used for produce product i at site j in period t maximum machine capacity available for product i at site j in period t warehouse space per unit for ith product at site j in period t warehouse space per unit for the mth raw material at site j in period t maximum warehouse space available at site j in period t number of units of raw material m required for each unit of product i lead time required for shipping raw material from supplier s to site j available regular time at site j in period t available over time at site j in period t available subcontracting time at site j in period t production time of product i at site j number of the k-level initial labor considered for primary planned period at site j fraction of the workforce variation allowed in period t lead time required for shipping raw material from site j to consumer c raw material storage capacity at site j final production storage capacity at site j productivity of k-level workers (0≤ ν k ≤1) maximum number of raw material m supplier s could provide in period t 1 if training from skill level k to skill level k′ is possible; 0, otherwise total score of supplier s considering qualitative performance factors escalating factor for the regular production cost escalating factor for the overtime production cost escalating factor for subcontracting production cost escalating factor for the raw material cost escalating factor for the labor salary cost escalating factor for the labor firing cost escalating factor for the labor hiring cost escalating factor for the labor training cost escalating factor for the raw material inventory cost escalating factor for the production inventory cost escalating factor for suppliers to the site shipping cost escalating factor for sites to the consumer transportation cost escalating factor for the product shortage cost quantity of production in regular time for product i produced at site j in period t quantity of production in overtime for product i produced at site j in period t subcontracting quantity for product i produced at site j g in period t quantity of units of raw material m shipped from supplier s to site j in period t quantity of k-level workers at site j in period t quantity of k-level workers at site j fired in period t quantity of k-level workers at site j hired in period t quantity of k-level workers at site j trained to level k′ in period t inventory level of final product i at site j in period t number of units of final product i provided by site j for consumer c in period t inventory level of raw material m at site j at the end of period t shortage of product i in demand point c in period t | Comprehensive fuzzy multi-objective multi-product multi-site aggregate production planning decisions in a supply chain under uncertainty |
S1568494615005505 | A new unsupervised feature selection algorithm, based on the concept of shared nearest neighbor distance between pattern pairs, is developed. A multi-objective framework is employed for the preservation of sample similarity, along with dimensionality reduction of the feature space. A reduced set of samples, chosen to preserve sample similarity, serves to reduce the effect of outliers on the feature selection procedure while also decreasing computational complexity. Experimental results on six sets of publicly available data demonstrate the effectiveness of this feature selection strategy. Comparative study with related methods based on different evaluation indices have demonstrated the superiority of the proposed algorithm. | Multi-objective optimization of shared nearest neighbor similarity for feature selection |
S1568494615005529 | Two important problems arise in WDM network planning: network design to minimize the operation cost and traffic grooming to maximize the usage of the high capacity channels. In practice, however, these two problems are usually simultaneously tackled, denoted as the network design problem with traffic grooming (NDG). In this paper, a mathematical formulation of the NDG problem is first presented. Then, this paper proposes a new metaheuristic algorithm based on two-level iterated local search (TL-ILS) to solve the NDG problem, where a novel tree search based neighborhood construction and a fast evaluation method are proposed, which not only enhance the algorithm's search efficiency but also provide a new perspective in designing neighborhoods for problems with graph structures. Our algorithm is tested on a set of benchmarks generated according to real application scenarios. We also propose a strengthening formulation of the original problem and a method to obtain the lower bound of the NDG problem. Computational results in comparison with the commercial software CPLEX and the lower bounds show the effectiveness of the proposed algorithm. | Two-level iterated local search for WDM network design problem with traffic grooming |
S1568494615005530 | The complex structure of relational data makes the process of knowledge discovery from data a more challenging task compared with the single table data structure. The usefulness of granular computing based approaches to mining data stored in a single table is a driving force for adapting this method to relational data. This paper proposes relation-based granules that are defined in a granular computing based approach to mining relational data. The relations are used to represent relational data and patterns to be discovered. Thanks to this representation, the generation of patterns can be speeded up. The representation also makes it possible to discover richer knowledge from relational data. | Relation-based granules to represent relational data and patterns |
S1568494615005566 | In this paper, a stochastic multiobjective framework is proposed for a day-ahead short-term Hydro Thermal Self-Scheduling (HTSS) problem for joint energy and reserve markets. An efficient linear formulations are introduced in this paper to deal with the nonlinearity of original problem due to the dynamic ramp rate limits, prohibited operating zones, operating services of thermal plants, multi-head power discharge characteristics of hydro generating units and spillage of reservoirs. Besides, system uncertainties including the generating units’ contingencies and price uncertainty are explicitly considered in the stochastic market clearing scheme. For the stochastic modeling of probable multiobjective optimization scenarios, a lattice Monte Carlo simulation has been adopted to have a better coverage of the system uncertainty spectrum. Consequently, the resulting multiobjective optimization scenarios should concurrently optimize competing objective functions including GENeration COmpany's (GENCO's) profit maximization and thermal units’ emission minimization. Accordingly, the ɛ-constraint method is used to solve the multiobjective optimization problem and generate the Pareto set. Then, a fuzzy satisfying method is employed to choose the most preferred solution among all Pareto optimal solutions. The performance of the presented method is verified in different case studies. The results obtained from ɛ-constraint method is compared with those reported by weighted sum method, evolutionary programming-based interactive Fuzzy satisfying method, differential evolution, quantum-behaved particle swarm optimization and hybrid multi-objective cultural algorithm, verifying the superiority of the proposed approach. thermal unit index hydro unit index time interval (hour) index. For instance, p(j,t,s) is the power output of hydro unit j at hour t in the sth scenario (MW) scenario index network area index probability of kth price level bilateral contract price ($/MWh) emission price ($/lbs) number of periods in the planning horizon shut-down cost of unit i ($) start-up cost of unit j ($) slope of block n of fuel cost curve of unit i ($/MWh) slope of the volume block n of the reservoir associated to unit j (m3/s/Hm3) slope of the block n of the performance curve k of unit j (MW/m3/s) slope of segment n in emission curve of unit i (lbs/MWh) coefficients of valve loading cost function generated emission by off-unit while providing non-spinning reserve (lbs) generated emission of n −1th upper limit in the emission curve of unit i (lbs) emission group (SO2orNO x ) emission quota (lbs) generation cost of n-1th upper limit in the fuel cost curve of unit i ($/h) forecasted natural water inflow of the reservoir associated to unit j (Hm3/h) number of performance curves number of prohibited operating zones number of blocks of the piecewise linearized start-up fuel function number of price levels number of scenario after scenario reduction number of areas in the network power capacity of bilateral contract (MW) probability of scenario s normalized probability of scenario s minimum and maximum power output of unit i (MW) minimum power output of unit j for performance curve n (MW) capacity of unit j (MW) lower limit of nth prohibited operating zone of unit i (MW) upper limit of n −1th prohibited operating zone of unit i (MW) minimum and maximum water discharge of unit j (m3/s) ramp down and ramp up limit for block n (MW) start-up and shut-down emission generated by unit i (lbs) start-up and shut-down ramp rate limit of unit i (MW/h) ramping down and ramping up limit of unit i (MW) minimum content of the reservoir associated to unit j (Hm3) maximum content of the reservoir j associated to nth performance curve (Hm3) generation of block n of fuel cost curve for unit i (MW) generation of block n of unit i for valve loading effect curve (MW) market price for energy, spinning and non-spinning reserve ($/MWh), respectively individual membership function (the degree of optimality) for the nth objective function in the rth Pareto optimal solution the weight factor of the nth objective function in the MMP problem total membership function of the rth Pareto optimal solution start-up cost of unit i ($) valve loading effect cost of unit i ($) fuel cost of unit i ($) main objective function (expected profit of GENCO) GENCO's total expected profit in dollars after arbitrage expected generated emission for each Pareto optimal solution (lbs) non-spinning reserve of thermal unit i in the spot market when unit is off and on, respectively (MW) non-spinning reserve of a hydro unit j in the spot market when unit is off and on, respectively (MW) power output of thermal unit i (MW) maximum power output of unit i (MW) power output of hydro unit j (MW) power for bid on the spot market (MW) profit of scenario s water discharge of hydro unit j and block n (m3/s) spinning reserve of a thermal unit i and hydro unit j in the spot market (MW), respectively water content of the reservoir associated with unit j (Hm3) 1 if thermal unit i is on 1 if hydro unit j is on 1 if unit i provide non-spinning reserve when unit is off. 1 if block n of fuel cost curve of unit i is selected 1 if volume of reservoir water is greater than v n (j) 1 if power output of unit i has exceeded block n of valve loading effect curve obtained from the roulette wheel mechanism in the scenario generation stage indicating whether kth price level in the sth scenario occurred ( w k , t , s P = 1 ) or not ( w k , t , s P = 0 ) status of the ith thermal and jth hydro unit obtained from LMCS in the scenario generation stage (forced outage state, i.e. W =0 or available, i.e. W =1). 1 if thermal unit i is started-up 1 if hydro unit j is started-up 1 if unit i is shut-down thermal units hydro units set of indices of blocks of piecewise linearized hydro unit performance curve the blocks of piecewise linearized thermal unit emission curve the periods of market time horizon T ={1, 2, …, NT} scenario | Uncertainty management in multiobjective hydro-thermal self-scheduling under emission considerations |
S1568494615005578 | To improve the global performance of the standard teaching–learning-based optimization (TLBO) algorithm, an improved TLBO algorithm (LETLBO) with learning experience of other learners is proposed in the paper. In LETLBO, two random possibilities are used to determine the learning methods of learners in different phases. In the Teacher Phase, the learners improve their grades by utilizing the mean information of the class and the learning experience of other learners according to a random probability. In Learner Phase, the learner learns knowledge from another learner which is randomly selected from the whole class or the mutual learning experience of two randomly selected learners according to a random probability. Moreover, area copying operator which is used in Producer–Scrounger model is used for parts of learners to increase its learning speed. The feasibility and effectiveness of the proposed algorithm are tested on 18 benchmark functions and two practical optimization problems. The merits of the improved method are compared with those of some other evolutionary algorithms (EAs), the results show that the proposed algorithm is an effective method for global optimization problems. | Teaching–learning-based optimization with learning experience of other learners and its application |
S156849461500558X | The single machine scheduling problem with sequence-dependent setup times with the objective of minimizing the total weighted tardiness is a challenging problem due to its complexity, and has a huge number of applications in real production environments. In this paper, we propose a memetic algorithm that combines and extends several ideas from the literature, including a crossover operator that respects both the absolute and relative position of the tasks, a replacement strategy that improves the diversity of the population, and an effective but computationally expensive neighborhood structure. We propose a new decomposition of this neighborhood that can be used by a variable neighborhood descent framework, and also some speed-up methods for evaluating the neighbors. In this way we can obtain competitive running times. We conduct an experimental study to analyze the proposed algorithm and prove that it is significantly better than the state-of-the-art in standard benchmarks. | An efficient memetic algorithm for total weighted tardiness minimization in a single machine with setups |
S1568494615005591 | Support Vector Machine (SVM) has important properties such as a strong mathematical background and a better generalization capability with respect to other classification methods. On the other hand, the major drawback of SVM occurs in its training phase, which is computationally expensive and highly dependent on the size of input data set. In this study, a new algorithm to speed up the training time of SVM is presented; this method selects a small and representative amount of data from data sets to improve training time of SVM. The novel method uses an induction tree to reduce the training data set for SVM, producing a very fast and high-accuracy algorithm. According to the results, the proposed algorithm produces results with similar accuracy and in a faster way than the current SVM implementations. | Data selection based on decision tree for SVM classification on large data sets |
S1568494615005608 | This paper proposes two strategies to design robust adaptive fault tolerant control (FTC) systems for a class of unknown n-order nonlinear systems in presence of actuator and sensor faults versus bounded unknown external disturbances. It is based on machine learning approaches which are continuous reinforcement learning (RL) and neural networks (NNs). In the first FTC strategy, an intelligent observer is designed for unknown nonlinear systems when faults occur or not. In the second strategy, a robust reinforcement learning FTC is proposed through combining reinforcement learning to treat the unknown nonlinear faulty system and nonlinear control theory to guarantee the stability and robustness of the system. Critic and actor of continuous RL are adopted based on the behavior of the defined Lyapunov function. In both strategies, to generate the residual a Gaussian radial basis function is used for an online estimation of the unknown dynamic function of the normal system. The adaptation law of the online estimator is derived in the sense of Lyapunov function which is defined based on adjustable parameters of the estimator and switching surfaces containing dynamic errors and residuals. Simulation results demonstrate the validity and feasibility of proposed FTC systems. | Continuous reinforcement learning to robust fault tolerant control for a class of unknown nonlinear systems |
S156849461500561X | The ability of artificial immune systems to adapt to varying pathogens makes such systems a suitable choice for various robotic applications. Generally, immunity-based robotic applications map local instantaneous sensory information into either an antigen or a co-stimulatory signal, according to the choice of representation schema. Algorithms then use relevant immune functions to output either evolved antibodies or maturity of dendritic cells, in terms of actuation signals. It is observed that researchers do not try to replicate the biological immunity but select necessary immune functions instead, resulting in an ad-hoc manner these applications are reported. On the other hand, the paradigm shift in robotics research from reactive to probabilistic approaches is also not being reflected in these applications. Authors, therefore, present a detailed review of immuno-inspired robotic applications in an attempt to identify the possible areas to explore. Moreover, the literature has been categorized according to the underlying immuno-definitions. Implementation details have been critically reviewed in terms of corresponding mathematical expressions and their representation schema that include binary, real or hybrid approaches. Limitations of reported applications have also been identified in light of modern immunological interpretations including the danger theory. As a result of this study, authors suggest a renewed focus on innate immunity, action contextualization prior to B/T cell invocation and behavior evolution instead of arbitration. In this context, a multi-tier immunological framework for robotics research, combining innate and adaptive components together is also suggested and skeletonized. | Immuno-inspired robotic applications: A review |
S1568494615005621 | Remanufacturing has attracted growing attention in recent years because of its energy-saving and emission-reduction potential. Process planning and scheduling play important roles in the organization of remanufacturing activities and directly affect the overall performance of a remanufacturing system. However, the existing research on remanufacturing process planning and scheduling is very limited due to the difficulty and complexity brought about by various uncertainties in remanufacturing processes. We address the problem by adopting a simulation-based optimization framework. In the proposed genetic algorithm, a solution represents the selected process routes for the jobs to be remanufactured, and the quality of a solution is evaluated through Monte Carlo simulation, in which a production schedule is generated following the specified process routes. The studied problem includes two objective functions to be optimized simultaneously (one concerned with process planning and the other concerned with scheduling), and therefore, Pareto-based optimization principles are applied. The proposed solution approach is comprehensively tested and is shown to outperform a standard multi-objective optimization algorithm. | A simulation-based genetic algorithm approach for remanufacturing process planning and scheduling |
S1568494615005633 | This paper is the second one of the two papers entitled “Weighted Superposition Attraction (WSA) Algorithm”, which is about the performance evaluation of the WSA algorithm in solving the constrained global optimization problems. For this purpose, the well-known mechanical design optimization problems, design of a tension/compression coil spring, design of a pressure vessel, design of a welded beam and design of a speed reducer, are selected as test problems. Since all these problems were formulated as constrained global optimization problems, WSA algorithm requires a constraint handling method for tackling them. For this purpose we have selected 6 formerly developed constraint handling methods for adapting into WSA algorithm and analyze the effect of the used constraint handling method on the performance of the WSA algorithm. In other words, we have the aim of producing concluding remarks over the performance and robustness of the WSA algorithm through a set of computational study in solving the constrained global optimization problems. Computational study indicates the robustness and the effectiveness of the WSA in terms of obtained results, reached level of convergence and the capability of coping with the problems of premature convergence, trapping in a local optima and stagnation. Iteration number (stopping condition) Current iteration number Number of artificial agents Number of dimensions of the problem User defined parameter User defined parameter User defined parameter Upper limit for the dimensions Lower limit for the dimensions Fitness of the current point of agent i Fitness of the target point Weight of the current point of an agent Current position vector of an agent Position vector of the target point Vector combines an agent to target point Move direction vector of an agent Signum function Step length | Weighted Superposition Attraction (WSA): A swarm intelligence algorithm for optimization problems – Part 2: Constrained optimization |
S1568494615005645 | In conventional single-input single-output (SISO) systems, the capacity is limited as base station can provide service to only one user at any instant. However, multiuser (MU) multiple-input multiple-output (MIMO) systems deliver optimum system capacity by providing service to multiple users (as many as transmit antennas) simultaneously according to dirty paper coding (DPC) scheme. However, DPC is an exhaustive search algorithm (ESA) where the user encoding sequence is important to transmit data to multiple users. Exhaustive search becomes imperative as the search space grows with number of users and number of transmit antennas in the MU MIMO system. This can be treated as an optimization problem of maximizing the achievable system sum-rate. In this paper, it has been demonstrated that combined user and antenna scheduling (CUAS) with binary genetic algorithm (BGA) adopting elitism and adaptive mutation (AM) achieves about 97–99% of system sum-rate obtained by ESA (DPC) with significantly reduced computational and time complexity. It has been shown that BGA is able to find the globally optimum solution for MU MIMO systems well within the time interval of modern wireless packet data communications. However, it is interesting to observe that BGA is able to find a solution to CUAS close to the optimum value quite rapidly. In this paper, it is also shown that BGA with elitism and AM achieves higher throughput than limited feedback scheduling schemes as well. | A computationally efficient genetic algorithm for MIMO broadcast scheduling |
S1568494615005657 | We show that the assertions (vii) and (viii) of Theorem 3.1 proposed by Agarwal et al. [Appl. Soft Comput. 13 (2013) 3552-3566] are incorrect by a counterexample. | Commentary on “Generalized intuitionistic fuzzy soft sets with applications in decision-making” |
S1568494615005669 | Accurate tax forecast is very important to carry on the macroscopic regulation efficiently under the market economy. However, experience shows that it is very difficult to improve the accuracy of forecasting by setting up a single forecasting model. This article describes the deficiencies of the present forecasting methods and puts forward a new approach to improve the accuracy of prediction by introducing error correction. First, this paper carries out initial prediction of tax revenue by using the LS-SVR model. Second, the accelerated translation transform and weighting mean value generating transform are introduced to process the error sequence. On the basis of the processed data, the error predictive method based on data transformational GM(1,1) is constructed and predicts the subsequent error. Third, the correction of preliminary prediction values is calibrated. The case study based on Chinese tax revenue during the last 30 years shows that the presented approach improves the accuracy of forecasting significantly compared with the prediction accuracy before correction, and then the validity of the model is verified. | Error correction method based on data transformational GM(1,1) and application on tax forecasting |
S1568494615005670 | Genetic Fuzzy Systems (GFSs) are models capable of integrating accuracy and high comprehensibility in their results. In the case of GFSs for classification, more emphasis has been given to improving the “Genetic” component instead of its “Fuzzy” counterpart. This paper focus on the Fuzzy Inference component to obtain a more accurate and interpretable system, presenting the so-called Genetic Programming Fuzzy Inference System for Classification (GPFIS-CLASS). This model is based on Multi-Gene Genetic Programming and aims to explore the elements of a Fuzzy Inference System. GPFIS-CLASS has the following features: (i) it builds fuzzy rules premises employing t-norm, t-conorm, negation and linguistic hedge operators; (ii) it associates to each rule premise a suitable consequent term; and (iii) it improves the aggregation process by using a weighted mean computed by restricted least squares. It has been evaluated in two sets of benchmarks, comprising a total of 45 datasets, and has been compared with eight different classifiers, six of them based on GFSs. The results obtained in both sets demonstrate that GPFIS-CLASS provides better results for most benchmark datasets. | GPFIS-CLASS: A Genetic Fuzzy System based on Genetic Programming for classification problems |
S1568494615005694 | The bodyguard allocation problem (BAP) is an optimization problem that illustrates the behavior of processes with contradictory individual goals in some distributed systems. The objective function of this problem is the maximization of a parameter called the social welfare. Although the main method proposed to solve this problem, known as CBAP, is simple and time efficient, it lacks the ability to generate a diverse set of solutions, which is one of the most important feature to improve the chances to reach the global optimum. To overcome this drawback, we address the BAP with an evolutionary algorithm, the EBAP. Later, we take advantage of the best properties of both algorithms, EBAP and CBAP, to generate a two-stage cascade evolutionary algorithm called FFC-BAP. Extensive experimental results show that the algorithm FFC-BAP outperforms both the EBAP and the CBAP, in terms of quality of solutions. | A cascade evolutionary algorithm for the bodyguard allocation problem |
S1568494615005700 | A three-phase intelligent technique has been constructed to improve the data-hiding algorithm in colour images with imperceptibility. The first phase of the learning system (LS) has been applied in advance, whereas the other phases have been applied after the hiding process. The first phase has been constructed to estimate the number of bits to be hidden at each pixel (NBH); this phase is based on adaptive neural networks with an adaptive genetic algorithm using upwind adaptive relaxation (LSANN_AGAUpAR1). The LS of the second phase (LSANN_AGAUpAR2) has been introduced as a detector to check the performance of the proposed steganographic algorithm by creating a rich images model from available cover and stego images. The LS of the last phase (LSCANN_AGAUpAR3) has been implemented through three steps, and it is based on a concurrent approach to improve the stego image and defend against attacks. The adaptive image filtering and adaptive image segmentation algorithms have been introduced to randomly hide a compressed and encrypted secret message into a cover image. The NBH for each pixel has been estimated cautiously using 32 principle situations (PS) with their 6 branch situations (BS). These situations have been worked through seven layers of security to augment protection from attacks. In this paper, hiding algorithms have been produced to fight three types of attacks: visual, structural, and statistical attacks. The simulation results have been discussed and compared with new literature using data hiding algorithms for colour images. The results of the proposed algorithm can efficiently embed a large quantity of data, up to 12bpp (bits per pixel), with better image quality. | A novel algorithm for colour image steganography using a new intelligent technique based on three phases |
S1568494615005712 | This study proposes a simulated annealing with restart strategy (SA_RS) heuristic for the multiconstraint team orienteering problem with multiple time windows (MC-TOP-MTW), an extension of the team orienteering problem with time windows (TOPTW). A set of vertices is given in the MC-TOP-MTW. Each vertex is associated with a score, a service time, one or more time windows, and additional knapsack constraints. The goal is to maximize the total collected score using a predetermined number of tours. We develop two versions of SA_RS. The first version, SA_RS BF , uses Boltzmann function to determine the acceptance probability of a worse solution while the second version, SA_RS CF , accepts a worse solution based on the acceptance probability determined by Cauchy function. Results of the computational study indicate that both SA_RS BF and SA_RS CF can effectively solve MC-TOP-MTW. Moreover, in several cases, they find new best solutions to benchmark instances. The results also show that SA with restart strategy performs better than that without restart strategy. | A simulated annealing heuristic for the multiconstraint team orienteering problem with multiple time windows |
S1568494615005724 | In the last two decades, multiobjective optimization has become main stream and various multiobjective evolutionary algorithms (MOEAs) have been suggested in the field of evolutionary computing (EC) for solving hard combinatorial and continuous multiobjective optimization problems. Most MOEAs employ single evolutionary operators such as crossover, mutation and selection for population evolution. In this paper, we suggest a multiobjective evolutionary algorithm based on multimethods (MMTD) with dynamic resource allocation for coping with continuous multi-objective optimization problems (MOPs). The suggested algorithm employs two well known population based stochastic algorithms namely MOEA/D and NSGA-II as constituent algorithms for population evolution with a dynamic resource allocation scheme. We have examined the performance of the proposed MMTD on two different MOPs test suites: the widely used ZDT problems and the recently formulated test instances for the special session on MOEAs competition of the 2009 IEEE congress on evolutionary computation (CEC’09). Experimental results obtained by the suggested MMTD are more promising than those of some state-of-the-art MOEAs in terms of the inverted generational distance (IGD)-metric on most test problems. | Multiobjective evolutionary algorithm based on multimethod with dynamic resources allocation |
S1568494615005736 | In this paper, a dynamic fuzzy energy state based AODV (DFES-AODV) routing protocol for Mobile Ad-hoc NETworks (MANETs) is presented. In DFES-AODV route discovery phase, each node uses a Mamdani fuzzy logic system (FLS) to decide its Route REQuests (RREQs) forwarding probability. The FLS inputs are residual battery level and energy drain rate of mobile node. Unlike previous related-works, membership function of residual energy input is made dynamic. Also, a zero-order Takagi Sugeno FLS with the same inputs is used as a means of generalization for state-space in SARSA-AODV a reinforcement learning based energy-aware routing protocol. The simulation study confirms that using a dynamic fuzzy system ensures more energy efficiency in comparison to its static counterpart. Moreover, DFES-AODV exhibits similar performance to SARSA-AODV and its fuzzy extension FSARSA-AODV. Therefore, the use of dynamic fuzzy logic for adaptive routing in MANETs is recommended. | Dynamic fuzzy logic and reinforcement learning for adaptive energy efficient routing in mobile ad-hoc networks |
S1568494615005748 | During the last decade, energy regulatory policies all over the globe have been influenced by the introduction of competition. In a multi-area deregulated power market, competitive bidding and allocation of energy and reserve is crucial for maintaining performance and reliability. The increased penetration of intermittent renewable generation requires for sufficient allocation of reserve services to maintain security and reliability. As a result the market operators and generating companies are opting for market models for joint energy and reserve dispatch with a cost minimization/profit maximization goal. The joint dispatch (JD) problem is more complex than the traditional economic dispatch (ED) due to the additional constraints like the reserve limits, transmission limits, area power balance, energy-reserve coupling constraints and separate sectional price offer curves for both, energy and reserve. The present work proposes a model for the joint static/dynamic dispatch of energy and reserve in deregulated market for multi-area operation using enhanced versions of particle swarm optimization (PSO) and differential evolution (DE). A parameter automation strategy is employed in the classical PSO and DE algorithms (i) to enhance their search capability; (ii) to avoid premature convergence; and (iii) to maintain a balance between global and local search. The performance of enhanced PSO and DE variants is compared for single/multi-area power systems for static/dynamic operation, taking both linear and non-smooth cost functions. The proposed approach is validated on two test systems for different demands, reserve requirements, tie-line capacities and generator outages. | Performance comparison of enhanced PSO and DE variants for dynamic energy/reserve scheduling in multi-zone electricity market |
S156849461500575X | In this paper, a new version of the particle swarm optimization (PSO) algorithm suitable for discrete optimization problems is presented and applied for the solution of the capacitated location routing problem and for the solution of a new formulation of the location routing problem with stochastic demands. The proposed algorithm combines three different topologies which are incorporated in a constriction particle swarm optimization algorithm and, thus, a very effective new algorithm, the global and local combinatorial expanding neighborhood topology particle swarm optimization, was developed. The algorithm was tested, initially, in the three classic sets of benchmark instances for the capacitated location routing problem with discrete demands and, then, as there are no benchmark instances for the location routing problem with stochastic demands, these instances were transformed appropriately in order to be suitable for the problem with stochastic demands. The algorithm was tested in the problem with the stochastic demands using these transformed sets of benchmark instances. The algorithm was compared with a number of different implementations of the PSO and with metaheuristic, evolutionary and nature inspired algorithms from the literature for the location routing problem with discrete and stochastic demands. | An improved particle swarm optimization algorithm for the capacitated location routing problem and for the location routing problem with stochastic demands |
S1568494615005761 | One of the most powerful, popular and accurate classification techniques is support vector machines (SVMs). In this work, we want to evaluate whether the accuracy of SVMs can be further improved using training set selection (TSS), where only a subset of training instances is used to build the SVM model. By contrast to existing approaches, we focus on wrapper TSS techniques, where candidate subsets of training instances are evaluated using the SVM training accuracy. We consider five wrapper TSS strategies and show that those based on evolutionary approaches can significantly improve the accuracy of SVMs. | Evolutionary wrapper approaches for training set selection as preprocessing mechanism for support vector machines: Experimental evaluation and support vector analysis |
S1568494615005773 | Efficient constraint handling techniques are of great significance when Evolutionary Algorithms (EAs) are applied to constrained optimization problems (COPs). Generally, when use EAs to deal with COPs, equality constraints are much harder to satisfy, compared with inequality constraints. In this study, we propose a strategy named equality constraint and variable reduction strategy (ECVRS) to reduce equality constraints as well as variables of COPs. Since equality constraints are always expressed by equations, ECVRS makes use of the variable relationships implied in such equality constraint equations. The essence of ECVRS is it makes some variables of a COP considered be represented and calculated by some other variables, thereby shrinking the search space and leading to efficiency improvement for EAs. Meanwhile, ECVRS eliminates the involved equality constraints that providing variable relationships, thus improves the feasibility of obtained solutions. ECVRS is tested on many benchmark problems. Computational results and comparative studies verify the effectiveness of the proposed ECVRS. | A variable reduction strategy for evolutionary algorithms handling equality constraints |
S1568494615005785 | Evolutionary algorithms and nature-inspired optimization algorithms are widely used in solving nonlinear optimization problems. Considering a D-dimensional space, this paper introduces a new random search algorithm called vector-based swarm optimization (VBSO). In this method, vectors with appropriate orientation gradually converge to a global optimum point. In the VBSO, random weighting coefficients are used with a predetermined strategy. Using multiplication of these coefficients by suitable vectors, the randomness property is provided. Taking same conditions into account, the proposed algorithm is compared with a number of well-known intuitive algorithms. Considering 29 unimodal and multimodal benchmark functions, simulation results confirm that the VBSO performs faster and more accurate than intuitive algorithms, such as CEP, FEP, GA, PSO, GSA, DE and ODE, in most cases. To evaluate the performance of the proposed algorithm in coping with difficult situations, some challenging cases are considered. They may have large number of variables, few number of populations, few number of iterations, or may be shifted functions with huge number of optimums. Overall, the simulation results reveal that the performance of the VBSO is satisfactory for such challenging cases. | Vector-based swarm optimization algorithm |
S1568494615005797 | Ensemble learning is a system that improves the performance and robustness of the classification problems. How to combine the outputs of base classifiers is one of the fundamental challenges in ensemble learning systems. In this paper, an optimized Static Ensemble Selection (SES) approach is first proposed on the basis of NSGA-II multi-objective genetic algorithm (called SES-NSGAII), which selects the best classifiers along with their combiner, by simultaneous optimization of error and diversity objectives. In the second phase, the Dynamic Ensemble Selection-Performance (DES-P) is improved by utilizing the first proposed method. The second proposed method is a hybrid methodology that exploits the abilities of both SES and DES approaches and is named Improved DES-P (IDES-P). Accordingly, combining static and dynamic ensemble strategies as well as utilizing NSGA-II are the main contributions of this research. Findings of the present study confirm that the proposed methods outperform the other ensemble approaches over 14 datasets in terms of classification accuracy. Furthermore, the experimental results are described from the view point of Pareto front with the aim of illustrating the relationship between diversity and the over-fitting problem. | A new ensemble learning methodology based on hybridization of classifier ensemble selection approaches |
S1568494615005803 | In this paper, a one rank cuckoo search algorithm (ORCSA) is proposed for solving economic load dispatch (ELD) problems. The main objective of the ELD problem is to minimize total cost of thermal generators while satisfying power balance constraint, prohibited operating zones, ramp rate constraints and operating limits of generators. Moreover, the generating units considered in this paper have different characteristics such as quadratic fuel cost function, nonconvex fuel cost function and multiple fuel options. The proposed ORCSA method has been developed by performing two modifications on the original cuckoo search algorithm (CSA) to improve optimal solution quality and computational time. The first modification is to merge new solution generated from both Lévy flights and replacement a fraction of egg together and to evaluate and rank the solutions at once only. A bound by best solution mechanism has been used in the second modification for properly handling the inequality constraints. The proposed ORCSA method has been tested on different systems with different characteristics of thermal units and constraints. The results obtained by ORCSA have been compared to those from other methods available in the literature and the result comparison has indicated that the ORCSA method can obtain better solution quality than many other methods. Therefore, the proposed ORCSA can be a very effective and efficient method for solving ELD problems. fuel cost coefficients of unit i fuel cost coefficients of unit i reflecting valve-point effects fuel cost coefficients for fuel type j of unit i fuel cost coefficients for fuel type j of unit i reflecting valve-point effects B-matrix coefficients for transmission power loss total number of generating units power output of unit i maximum power output of unit i minimum power output of unit i minimum power output for fuel j of unit i upper bound for prohibited zone k of unit i lower bound for prohibited zone k of generator i total system load demand total transmission loss spinning reserve of unit i maximum spinning reserve contribution of unit i total system spinning reserve requirement | The application of one rank cuckoo search algorithm for solving economic load dispatch problems |
S1568494615005815 | In this paper, we propose a complete, fully automatic and efficient clinical decision support system for breast cancer malignancy grading. The estimation of the level of a cancer malignancy is important to assess the degree of its progress and to elaborate a personalized therapy. Our system makes use of both Image Processing and Machine Learning techniques to perform the analysis of biopsy slides. Three different image segmentation methods (fuzzy c-means color segmentation, level set active contours technique and grey-level quantization method) are considered to extract the features used by the proposed classification system. In this classification problem, the highest malignancy grade is the most important to be detected early even though it occurs in the lowest number of cases, and hence the malignancy grading is an imbalanced classification problem. In order to overcome this difficulty, we propose the usage of an efficient ensemble classifier named EUSBoost, which combines a boosting scheme with evolutionary undersampling for producing balanced training sets for each one of the base classifiers in the final ensemble. The usage of the evolutionary approach allows us to select the most significant samples for the classifier learning step (in terms of accuracy and a new diversity term included in the fitness function), thus alleviating the problems produced by the imbalanced scenario in a guided and effective way. Experiments, carried on a large dataset collected by the authors, confirm the high efficiency of the proposed system, shows that level set active contours technique leads to an extraction of features with the highest discriminative power, and prove that EUSBoost is able to outperform state-of-the-art ensemble classifiers in a real-life imbalanced medical problem. | Evolutionary undersampling boosting for imbalanced classification of breast cancer malignancy |
S1568494615005827 | This paper is concerned with the problem of macroscopic road traffic flow model calibration and verification. Thoroughly validated models are necessary for both control system design and scenario evaluation purposes. Here, the second order traffic flow model METANET was calibrated and verified using real data. A powerful optimisation problem formulation is proposed for identifying a set of model parameters that makes the model fit to measurements. For the macroscopic traffic flow model validation problem, this set of parameters characterise the aggregate traffic flow features over a road network. In traffic engineering, one of the most important relationships whose parameters need to be determined is the fundamental diagram of traffic, which models the non-linear relationship between vehicular flow and density. Typically, a real network does not exhibit the same traffic flow aggregate behaviour everywhere and different fundamental diagrams are used for covering different network areas. As a result, one of the initial steps of the validation process rests on expert engineering opinion assigning the spatial extension of fundamental diagrams. The proposed optimisation problem formulation allows for automatically determining the number of different fundamental diagrams to be used and their corresponding spatial extension over the road network, simplifying this initial step. Although the optimisation problem suffers from local minima, good solutions which generalise well were obtained. The design of the system used is highly generic and allows for a number of evolutionary and swarm intelligence algorithms to be used. Two UK sites have been used for testing it. Calibration and verification results are discussed in detail. The resulting models are able to capture the dynamics of traffic flow and replicate shockwave propagation. A total of ten different algorithms were considered and compared with respect to their ability to converge to a solution, which remains valid for different sets of data. Particle swarm optimisation (PSO) algorithms have proven to be particularly effective and provide the best results both in terms of speed of convergence and solution generalisation. An interesting result reported is that more recently proposed PSO algorithms were outperformed by older variants, both in terms of speed of convergence and model error minimisation. | Swarm intelligence algorithms for macroscopic traffic flow model validation with automatic assignment of fundamental diagrams |
S1568494615005839 | Fuzzy statistics provides useful techniques for handling real situations which are affected by vagueness and imprecision. Several fuzzy statistical techniques (e.g., fuzzy regression, fuzzy principal component analysis, fuzzy clustering) have been developed over the years. Among these, fuzzy regression can be considered an important tool for modeling the relation between a dependent variable and a set of independent variables in order to evaluate how the independent variables explain the empirical data which are modeled through the regression system. In general, the standard fuzzy least squares method has been used in these situations. However, several applicative contexts, such as for example, analysis with small samples and short and fat matrices, violation of distributional assumptions, matrices affected by multicollinearity (ill-posed problems), may show more complex situations which cannot successfully be solved by the fuzzy least squares. In all these cases, different estimation methods should instead be preferred. In this paper we address the problem of estimating fuzzy regression models characterized by ill-posed features. We introduce a novel fuzzy regression framework based on the Generalized Maximum Entropy (GME) estimation method. Finally, in order to better highlight some characteristics of the proposed method, we perform two Monte Carlo experiments and we analyze a real case study. | A Generalized Maximum Entropy (GME) estimation approach to fuzzy regression model |
S1568494615005840 | Multi-criteria decision making (MCDM) has been a hot topic in decision making and systems engineering. The dual hesitant fuzzy sets (DHFSs) is a useful tool to deal with vagueness and ambiguity in the MADM problems. In this paper, we propose a wide range of dual hesitant fuzzy power aggregation operators based on Archimedean t-conorm and t-norm for dual hesitant fuzzy information. We first redefine some basic operations of dual hesitant fuzzy sets, which are consistent with those of dual hesitant fuzzy sets. We introduce three kinds of distance measures for dual hesitant fuzzy sets, which the corresponding support measures can be obtained. Then we propose several power aggregation operators on dual hesitant fuzzy sets, study their properties and give some specific dual hesitant fuzzy aggregation operators. In the end, we develop two approaches for multiple attribute group decision making with dual hesitant fuzzy information, and illustrate a real world example to show the behavior of the proposed operators. | Dual hesitant fuzzy power aggregation operators based on Archimedean t-conorm and t-norm and their application to multiple attribute group decision making |
S1568494615005864 | Five AI models are presented to model the dynamic nonlinear behavior of Buckling-Restrained Braces (BRBs). The AI techniques utilized in the models are: Time-Delayed Neural Networks (TDNN), Nonlinear Auto-Regressive eXogenous (NARX) neural networks, Gaussian-Mixture Models Regression (GMMR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS) and Polynomial Classifier Regression (PCR). The models are developed using time-delayed brace displacements inputs and brace force outputs to predict updated brace forces during load reversals. The training and testing of the AI models are performed using experimental data from BRB specimens tested at the Pacific Earthquake Engineering Research (PEER) Center. The training stage for every method makes use of the experimental data from one specimen. In order to assess the models’ learning and generalization capabilities, three sets of experimental data for different specimens are used. To arrive at an optimized architecture that best models the phenomenon, the model performance with different parameters is evaluated. The brace force predicted by the proposed model shows excellent resemblance to the experimental results for the training sample, for all techniques. The predicted behavior of the testing samples shows noticeable accuracy and further demonstrates the generalization and prediction capability of the proposed modeling techniques. The various techniques are compared on the basis of selected performance criteria. It is found that the performance of two AI techniques standout among the others: the NARX and the PCR. Although the NARX demonstrates a slight advantage in the prediction accuracy over the PCR, the latter is far more superior in terms of computational efficiency. Thus, the PCR would be recommended for scenarios where online training is needed. The BRB design and performance investigation processes can be facilitated by the developed modeling techniques thus minimizing the need for, and extent of, experimental testing. | Modeling nonlinear behavior of Buckling-Restrained Braces via different artificial intelligence methods |
S1568494615005876 | Many multiple-criteria decision-making (MCDM) methods have been proposed for decision-making environments. However, the performance of these methods is degraded by the uncertainty and inaccuracy which characterizes most practical decision-making environments as a result of the inherent prejudices and preferences of the decision-makers or experts and an insufficient volume of multiple inputs and outputs (MIO) information. Accordingly, the present study proposes an enhanced MIO classification method to address these limitations of existing MCDM methods. The proposed MIO classification method designated as the FVM-index method integrates fuzzy set theory (FST), variable precision rough set (VPRS) theory, and a modified cluster validity index (MCVI) function, and is designed specifically to filter out the uncertainty and inaccuracy inherent in the surveyed MIO real-valued dataset; thereby improving the classification performance. The effectiveness of the proposed approach is first demonstrated by comparing the MIO classification results obtained for three relating UCI datasets: (1) the original dataset; (2) a dataset with a large amount of inaccurate instances; and (3) an FVM-index filtered dataset extracted from the original dataset using a statistical approach. Then, the validity of the proposed approach is illustrated by using an Augmented Reality product design and a hospital related datasets. The results confirm that the proposed FVM-index method provides a good classification performance even in the presence of inaccuracy and uncertainty. As a result, it provides a robust approach for the extraction of reliable decision-making rules. | A multi-attribute decision-making model for the robust classification of multiple inputs and outputs datasets with uncertainty |
S1568494615005888 | Malignant and benign types of tumor infiltrated in human brain are diagnosed with the help of an MRI scanner. With the slice images obtained using an MRI scanner, certain image processing techniques are utilized to have a clear anatomy of brain tissues. One such image processing technique is hybrid self-organizing map (SOM) with fuzzy K means (FKM) algorithm, which offers successful identification of tumor and good segmentation of tissue regions present inside the tissues of brain. The proposed algorithm is efficient in terms of Jaccard Index, Dice Overlap Index (DOI), sensitivity, specificity, peak signal to noise ratio (PSNR), mean square error (MSE), computational time and memory requirement. The algorithm proposed through this paper has better data handling capacities and it also performs efficient processing upon the input magnetic resonance (MR) brain images. Automatic detection of tumor region in MR (magnetic resonance) brain images has a high impact in helping the radio surgeons assess the size of the tumor present inside the tissues of brain and it also supports in identifying the exact topographical location of tumor region. The proposed hybrid SOM-FKM algorithm assists the radio surgeon by providing an automated tissue segmentation and tumor identification, thus enhancing radio therapeutic procedures. The efficiency of the proposed technique is verified using the clinical images obtained from four patients, along with the images taken from Harvard Brain Repository. | An unsupervised learning method with a clustering approach for tumor identification and tissue segmentation in magnetic resonance brain images |
S156849461500589X | Cognitive radio network (CRN) enables unlicensed users (or secondary users, SUs) to sense for and opportunistically operate in underutilized licensed channels, which are owned by the licensed users (or primary users, PUs). Cognitive radio network (CRN) has been regarded as the next-generation wireless network centered on the application of artificial intelligence, which helps the SUs to learn about, as well as to adaptively and dynamically reconfigure its operating parameters, including the sensing and transmission channels, for network performance enhancement. This motivates the use of artificial intelligence to enhance security schemes for CRNs. Provisioning security in CRNs is challenging since existing techniques, such as entity authentication, are not feasible in the dynamic environment that CRN presents since they require pre-registration. In addition these techniques cannot prevent an authenticated node from acting maliciously. In this article, we advocate the use of reinforcement learning (RL) to achieve optimal or near-optimal solutions for security enhancement through the detection of various malicious nodes and their attacks in CRNs. RL, which is an artificial intelligence technique, has the ability to learn new attacks and to detect previously learned ones. RL has been perceived as a promising approach to enhance the overall security aspect of CRNs. RL, which has been applied to address the dynamic aspect of security schemes in other wireless networks, such as wireless sensor networks and wireless mesh networks can be leveraged to design security schemes in CRNs. We believe that these RL solutions will complement and enhance existing security solutions applied to CRN To the best of our knowledge, this is the first survey article that focuses on the use of RL-based techniques for security enhancement in CRNs. | Application of reinforcement learning for security enhancement in cognitive radio networks |
S1568494615005906 | In this paper, a novel multi objective model is proposed for portfolio selection. The proposed model incorporates the DEA cross-efficiency into Markowitz mean–variance model and considers return, risk and efficiency of the portfolio. Also, in order to take uncertainty in proposed model, the asset returns are considered as trapezoidal fuzzy numbers. Due to the computational complication of the proposed model, the second version of non-dominated sorting genetic algorithm (NSGA-II) is applied. To illustrate the performance of our model, the model is implemented for 52 firms listed in stock exchange market of Iran and the results are analyzed. The results show that the proposed model is suitable in compared with Markowitz and DEA models due to considering return, risk and efficiency, simultaneously. | An integrated multi-objective Markowitz–DEA cross-efficiency model with fuzzy returns for portfolio selection problem |
S1568494615005918 | The Information and Communication Technologies (ICTs) play an important role in the economic development, making it necessary to assess the quality of service perceived by consumers in this sector. The most effective quality assessment from the consumer perspective is still to be researched, yet the most common approach is oriented towards quantitative indicators. This study proposes to use a two-dimensional model that combines the widely accepted segmentation of ICTs with elements from the SERVQUAL quality model. This model, useful in multi-criteria decision-making situations, has been developed using the 2-tuple linguistic representation and fuzzy logic principles. This methodology prevents data loss during processing and provides relevant information through 16 indicators related to the quality of service. Besides, an expert-based mechanism is defined for the use of historical information extracted from completed surveys. As a practical case, this mechanism is applied to the historical information of a telecommunications company for assessing the quality of the service provided to its customers. | SICTQUAL: A fuzzy linguistic multi-criteria model to assess the quality of service in the ICT sector from the user perspective |
S156849461500592X | Advanced sensing technologies have produced a significant amount of discrete point data in the past decade. Measurement uncertainty frequently occurs at the geometric discontinuity of mechanical parts. In this paper, a genetic search algorithm is developed for optimally-constrained multiple-line fitting of discrete data points. It contains two important technical components: (a) constrained least-squares fitting of multiple lines, and (b) genetic search for optimal corner/edge points. The algorithm is designed for both two-dimensional and three-dimensional cases. Numerical experiments demonstrate the effectiveness of the proposed approach, compared to the conventional least-squares fitting method as well as exhaustive search method. A comparative study with a particle swarm method indicates that both the genetic search and particle swarm search produce similar results in terms of minimum fitting errors. It can be used for the effective determination of sharp edges or corners based on discrete data points measured for high-precision industrial inspection and manufacturing. direction numbers of a line in a three-dimensional space binary numbers for a chromosome cognitive and social parameters decimal value corresponding to binary bits the shortest distance of the ith data point and the fitting line fitness function in two-dimensional cases fitness function in three-dimensional cases the fitness value of the ith candidate corner, mean and maximum fitness the probability of crossover the probability of mutation the position of the globally-best individual the best previous position of the ith particle random numbers that are uniformly distributed within a range [0,0.2] the kth position of the ith particle in 2D The kth position of the ith particle in 3D the coordination of the ith data point in a three-dimension space The coordination of start point for genetic search orthogonal projection of data points to fitted lines the real increment in a given genetic search the coordination of candidate constrained corner the size of genetic search in three coordinate directions the coordinates of the point passed by a line the coordinates of any point in a three-dimension space the kth velocity of the ith particle inertial weight the coefficients of a line equation in two and three dimensions the sum of squared coordinate difference in x coordinate between data points and their corresponding points on the fitted line the sum of squared coordinate difference in y coordinate between data points and their corresponding points on the fitted line the gradient of E(A, B) the gradient of E(C, D) standard deviation of data points | Genetic search for optimally-constrained multiple-line fitting of discrete data points |
S1568494615005943 | In France, buildings account for a large part of the energy consumption and carbon emissions. Both are mainly due to heating, ventilation and air-conditioning (HVAC) systems. Because older, oversized or poorly maintained systems may be using more energy and costing more to operate than necessary, new management approaches are needed. In addition, energy efficiency can be improved in central heating and cooling systems by introducing zoned operation. So, the present work deals with the predictive control of multizone HVAC systems in non-residential buildings. First, a real non-residential building located in Perpignan (south of France) has been modelled using the EnergyPlus software. We used the predicted mean vote (PMV) index as a thermal comfort indicator and developed low-order ANN-based models to be used as controller's internal models. A genetic algorithm allowed the optimization problem to be solved. In order to appraise the proposed management strategy, it has been compared to basic scheduling techniques. Using the proposed strategy, the operation of all the HVAC subsystems is optimized by computing the right time to turn them on and off, in both heating and cooling modes. Energy consumption is minimized and thermal comfort requirements are met. So, the simulation results highlight the pertinence of a predicitive approach for multizone HVAC systems management. time index (–) predicted mean vote in the room j of the building (–) difference between the heat produced and the heat lost in the room j of the building (Wm−2) occupants’ metabolic activity in the room j of the building (Wm−2) external work in the room j of the building (Wm−2) heat loss by diffusion through the skin in the room j of the building (Wm−2) heat loss by sweating in the room j of the building (Wm−2) heat loss by latent respiration in the room j of the building (Wm−2) heat loss by dry respiration in the room j of the building (Wm−2) heat loss by radiance in the room j of the building (Wm−2) heat loss by convection in the room j of the building (Wm−2) air temperature in the room j of the building (°C) radiant temperature in the room j of the building (°C) relative humidity in the room j of the building (%) air speed in the room j of the building (ms−1) clothing thermal insulation in the room j of the building (clo) outdoor temperature at 6a.m. (°C) outdoor temperature (°C) consumption of electrical power in the room j of the building (kW) occupancy in the room j of the building (–) HVAC temperature set-point in the room j of the building (°C) HVAC temperature set-point in the room l of the building (°C) HVAC temperature set-point in the room m of the building (°C) value of T j sp allowing to obtain PMV j sp in the room j of the building (°C) PMV set-point in the room j of the building (in the present study, PMV j sp = 0 ) (–) right time to turn the HVAC susbsytem in the room j of the building on or off (–) vector bringing the optimal HVAC switching times together (–) forecast horizon (–) thermal comfort threshold (in the present study, PMV j min = − 0.5 ) (–) thermal comfort threshold (in the present study, PMV j min = + 0.5 ) (–) crossover fraction (genetic algorithm) (–) amount of mutation (genetic algorithm) (–) sum over all the output neurons of the magnitude of the correlation between V e and E e,o (–) partial derivative of C with respect to each of the incoming weigths of the candidate unit (–) value of the candidate unit for example e (–) averaged value of V over all the training examples (–) residual output error measured at neuron o (–) averaged value of E o over all the training examples (–) correlation between the value of the candidate unit and neuron o (–) derivative of the candidate's activation function with respect to the sum of its inputs (for example e) (–) input received by the candidate unit from unit i (for example e) (–) | Predictive control of multizone heating, ventilation and air-conditioning systems in non-residential buildings |
S1568494615005955 | Avoiding the possibility of bankruptcy during the investment horizon is very important to multi-period portfolio management. This paper considers a multi-period fuzzy portfolio selection problem with bankruptcy control. A multi-period portfolio optimization model imposed by a bankruptcy control constraint in fuzzy environment is proposed on the basis of credibility theory. In the proposed model, a linearly recourse policy is used to reflect the influence of historical predication basis on current portfolio decision. Three optimization objectives, viz., maximizing the terminal wealth and minimizing the cumulative risk and the cumulative uncertainty of the returns of portfolios over the whole investment horizon, are taken into consideration. For solving the proposed model, a fuzzy programming approach is applied to transform it into a single objective programming model. Then, a hybrid particle swarm optimization algorithm is designed for solution. Finally, an empirical example is presented to illustrate the application of the proposed model and solution comparisons are also given to demonstrate the effectiveness of the designed algorithm. | Credibilistic multi-period portfolio optimization model with bankruptcy control and affine recourse |
S1568494615005967 | In this paper, we propose new scenarios for simulating search operators whose behaviors often change continuously during the search. In these scenarios, the performance of such operators decreases while they are applied. This is motivated by the fact that operators for optimization problems are often roughly classified into exploitation and exploration operators. Our simulation model is used to compare the performances of operator selection policies and to identify their ability to handle specific non-stationary operators. An experimental study highlights respective behaviors of operator selection policies when faced to such non-stationary search scenarios. | Simulating non-stationary operators in search algorithms |
S1568494615005979 | Wireless sensor networks have become increasingly popular because of their ability to cater to multifaceted applications without much human intervention. However, because of their distributed deployment, these networks face certain challenges, namely, network coverage, continuous connectivity and bandwidth utilization. All of these correlated issues impact the network performance because they define the energy consumption model of the network and have therefore become a crucial subject of study. Well-managed energy usage of nodes can lead to an extended network lifetime. One way to achieve this is through clustering. Clustering of nodes minimizes the amount of data transmission, routing delay and redundant data in the network, thereby conserving network energy. In addition to these advantages, clustering also makes the network scalable for real world applications. However, clustering algorithms require careful planning and design so that balanced and uniformly distributed clusters are created in a way that the network lifetime is enhanced. In this work, we extend our previous algorithm, titled the zone-based energy efficient routing protocol for mobile sensor networks (ZEEP). The algorithm we propose optimizes the clustering and cluster head selection of ZEEP by using a genetic fuzzy system. The two-step clustering process of our algorithm uses a fuzzy inference system in the first step to select optimal nodes that can be a cluster head based on parameters such as energy, distance, density and mobility. In the second step, we use a genetic algorithm to make a final choice of cluster heads from the nominated candidates proposed by the fuzzy system so that the optimal solution generated is a uniformly distributed balanced set of clusters that aim at an enhanced network lifetime. We also study the impact and dominance of mobility with regard to the variables. However, before we arrived at a GFS-based solution, we also studied fuzzy-based clustering using different membership functions, and we present our understanding on the same. Simulations were carried out in MATLAB and ns2. The results obtained are compared with ZEEP. | A genetic fuzzy system based optimized zone based energy efficient routing protocol for mobile sensor networks (OZEEP) |
S1568494615005980 | The aim of this study is to define the risk factors that are effective in Breast Cancer (BC) occurrence, and to construct a supportive model that will promote the cause-and-effect relationships among the factors that are crucial to public health. In this study, we utilize Rule-Based Fuzzy Cognitive Map (RBFCM) approach that can successfully represent knowledge and human experience, introducing concepts to represent the essential elements and the cause-and-effect relationships among the concepts to model the behavior of any system. In this study, a decision-making system is constructed to evaluate risk factors of BC based on the information from oncologists. To construct causal relationship, the weight matrix of RBFCM is determined with the combination of the experts’ experience, expertise and views. The results of the proposed methodology will allow better understanding into several root causes, with the help of which, oncologists can improve their prevention and protection recommendation. The results showed that Social Class and Late Maternal Age can be seen as important modifiable factors; on the other hand, Benign Breast Disease, Family History and Breast Density can be considered as important factors as non-modifiable risk factors. This study is somehow weighing the interrelations of the BC risk factors and is enabling us to make a sensitivity analysis between the scenario studies and BC risk factors. A soft computing method is used to simulate the changes of a system over time and address “what if” questions to compare between different case studies. | A fuzzy information-based approach for breast cancer risk factors assessment |
S1568494615005992 | This paper proposes an online preference learning algorithm named OnPL that can dynamically adapt the policy for dispatching AGVs to changing situations in an automated container terminal. The policy is based on a pairwise preference function that can be repeatedly applied to multiple candidate jobs to sort out the best one. An adaptation of the policy is therefore made by updating this preference function. After every dispatching decision, each of all the candidate jobs considered for the decision is evaluated by running a simulation of a short look-ahead horizon. The best job is then paired with each of the remaining jobs to make training examples of positive preferences, and the inversions of these pairs are each used to generate examples of negative preferences. These new training examples, together with some additional recent examples in the reserve pool, are used to relearn the preference function implemented by an artificial neural network. The experimental results show that OnPL can relearn its policy in real time, and can thus adapt to changing situations seamlessly. In comparison to OnPL, other methods cannot adapt well enough or are not applicable in real time owing to the very long computation time required. | Online preference learning for adaptive dispatching of AGVs in an automated container terminal |
S1568494615006006 | A replicated multi-response experiment is a process that includes more than one responses with replications. One of the main objectives in these experiments is to estimate the unknown relationship between responses and input variables simultaneously. In general, classical regression analysis is used for modeling of the responses. However, in most practical problems, the assumptions for regression analysis cannot be satisfied. In this case, alternative modeling methods such as fuzzy logic based modeling approaches can be used. In this study, fuzzy least squares regression (FLSR) and fuzzy clustering based modeling methods, which are switching fuzzy C-regression (SFCR) and Takagi–Sugeno (TS) fuzzy model, are preferred. The novelty of the study is presenting the applicability of SFCR to the multi-response experiment data set with replicated response measures. Three real data set examples are given for application purposes. In order to compare the prediction performance of modeling approaches, root mean square error (RMSE) criteria is used. It is seen from the results that the SFCR gives the better prediction performance among the other fuzzy modeling approaches for the replicated multi-response experimental data sets. | Comparison of fuzzy logic based models for the multi-response surface problems with replicated response measures |
S1568494615006018 | Supplier selection is a decision-making process to identify and evaluate suppliers for making contracts. Here, we use interval type-2 fuzzy values to show the decision makers’ preferences and also introduce a new formula to compute the distance between two interval type-2 fuzzy sets. The performance of the proposed distance formula in comparison with the normalized Hamming, normalized Hamming based on the Hausdorff metric, normalized Euclidean and the signed distances is evaluated. The results show that the signed distance has the same trend as our method, but the other three methods are not appropriate for interval type-2 fuzzy sets. Using this approach, we propose a hierarchical clustering-based method to solve a supplier selection problem and find the proximity of the suppliers. To illustrate the applicability of the proposed method, first a case study of supplier selection problem with 8 criteria and 8 suppliers are illustrated and next, an example taken from the literature is worked through. Then, to test the hierarchical clustering-based method and compare with the obtained results by two other methods, a comparative study using experimental analysis is designed. The results show that while the proposed hierarchical clustering algorithm provides acceptable results, it is also conveniently appropriate for using interval type-2 fuzzy sets and obtaining proximity of suppliers. | Supplier selection using a clustering method based on a new distance for interval type-2 fuzzy sets: A case study |
S156849461500602X | Many variants of particle swarm optimization (PSO) both enhance the performance of the original method and greatly increase its complexity. Motivated by this fact, we investigate factors that influence the convergence speed and stability of basic PSO without increasing its complexity, from which we develop an evaluation index called “Control Strategy PSO” (CSPSO). The evaluation index is based on the oscillation properties of the transition process in a control system. It provides a method of selection parameters that promote system convergence to the optimal value and thus helps manage the optimization process. In addition, it can be applied to the characteristic analyses and parameter confirmation processes associated with other intelligent algorithms. We present a detailed theoretical and empirical analysis, in which we compare the performance of CSPSO with published results on a suite of well-known benchmark optimization functions including rotated and shifted functions. We used the convergence rates and iteration numbers as metrics to compare simulation data, and thereby demonstrate the effectiveness of our proposed evaluation index. We applied CSPSO to antenna array synthesis, and our experimental results show that it offers high performance in pattern synthesis. | Control strategy PSO |
S1568494615006031 | In this paper, a new network is proposed for automated recognition and classification of the environment information into regions, or nodes. Information is utilized in learning the topological map of an environment. The architecture is based upon a multi-channel Adaptive Resonance Associative Memory (ARAM) that comprises of two layers, input and memory. The input layer is formed using the Multiple Bayesian Adaptive Resonance Theory, which collects sensory data and incrementally clusters the obtained information into a set of nodes. In the memory layer, the clustered information is used as a topological map, where nodes are connected with edges. Nodes in the topological map represent regions of the environment and stores the robot location, while edges connect nodes and stores the robot orientation or direction. The proposed method, a Multi-channel Bayesian Adaptive Resonance Associative Memory (MBARAM) is validated using a number of benchmark datasets. Experimental results indicate that MBARAM is capable of generating topological map online and the map can be used for localization. | Multi-channel Bayesian Adaptive Resonance Associate Memory for on-line topological map building |
S1568494615006055 | Generally the complexity of the large scale optimization problem is considered to increase as the size or dimension of the problem increases and to solve these problems; more efficient and robust algorithms are needed. Several experiments have shown that an increment in dimensions of the problem not only requires an increment in population size but increases the computational cost also. In this paper a Self Organizing Migrating Algorithm with Quadratic Interpolation (SOMAQI) has been extended to solve large scale global optimization problems for dimensions ranging from 100 to 3000 with a constant population size of 10 only. It produces high quality optimal solution with very low computational cost and converges very fast to optimal solution. | Self organizing migrating algorithm with quadratic interpolation for solving large scale global optimization problems |
S1568494615006067 | In this study, an approach based on artificial neural network (ANN) was proposed to predict the experimental cutting temperatures generated in orthogonal turning of AISI 316L stainless steel. Experimental and numerical analyses of the cutting forces were carried out to numerically obtain the cutting temperature. For this purpose, cutting tests were conducted using coated (TiCN+Al2O3 +TiN and Al2O3) and uncoated cemented carbide inserts. The Deform-2D programme was used for numerical modelling and the Johnson–Cook (J–C) material model was used. The numerical cutting forces for the coated and uncoated tools were compared with the experimental results. On the other hand, the cutting temperature value for each cutting tool was numerically obtained. The artificial neural network model was used to predict numerical cutting temperatures by means of the numerical cutting forces. The best results in predicting the cutting temperature were obtained using the network architecture with a hidden layer which has seven neurons and LM learning algorithm. Finally, the experimental cutting temperatures were predicted by entering the experimental cutting forces into a formula obtained from the artificial neural networks. Statistical results (R 2, RMSE, MEP) were quite satisfactory. This demonstrates that the established ANN model is a powerful one for predicting the experimental cutting temperatures. yield stress (MPa) output value aluminum oxide number of pattern artificial neural network absolute fraction of variance hardening modulus (MPa) root mean square error back propagation scaled conjugate gradient learning algorithm strain rate sensitivity parameter temperature of work material (°C) cutting tools room temperature (°C) main cutting force (N) melting temperature of work material (°C) feed force (N) cutting temperature (°C) feed rate (mm/rev) titanium carbo-nitride transfer function titanium nitride heat transfer coefficient of work material (kW/m2 °C) target value processing elements cutting speed (m/min) shear flow stress of chip (MPa) the weights of the connections between ith and jth processing elements Levenberg–Marquardt learning algorithm the weights of the biases between layers shear friction factor the output of the jth processing element thermal softening coefficient yield stress (MPa) absolute mean error percentage (%) strain rate (sn−1) the weighted sum of the input to the ith processing element reference strain rate (sn−1) strain hardening index equivalent plastic strain rate (sn−1) number of processing elements in the previous layer shear stress at the tool–chip interface (MPa) | Prediction of cutting temperature in orthogonal machining of AISI 316L using artificial neural network |
S1568494615006079 | We introduce a novel methodology for ranking hesitant fuzzy sets. It builds on a recent, theoretically sound contribution in Social Choice. In order to justify the applicability of such analysis, we develop two real implementations: (i) new metarankings of world academic institutions that build on real data from three reputed agencies, and (ii) a new procedure for improving teaching performance assessments which we illustrate with real data collected by ourselves. | Hesitant Fuzzy Worth: An innovative ranking methodology for hesitant fuzzy subsets |
S1568494615006080 | This paper presents a simple and efficient real-coded genetic algorithm (RCGA) for constrained real-parameter optimization. Different from some conventional RCGAs that operate evolutionary operators in a series framework, the proposed RCGA implements three specially designed evolutionary operators, named the ranking selection (RS), direction-based crossover (DBX), and the dynamic random mutation (DRM), to mimic a specific evolutionary process that has a parallel-structured inner loop. A variety of benchmark constrained optimization problems (COPs) are used to evaluate the effectiveness and the applicability of the proposed RCGA. Besides, some existing state-of-the-art optimization algorithms in the same category of the proposed algorithm are considered and utilized as a rigorous base of performance evaluation. Extensive comparison results reveal that the proposed RCGA is superior to most of the comparison algorithms in providing a much faster convergence speed as well as a better solution accuracy, especially for problems subject to stringent equality constraints. Finally, as a specific application, the proposed RCGA is applied to optimize the GaAs film growth of a horizontal metal-organic chemical vapor deposition reactor. Simulation studies have confirmed the superior performance of the proposed RCGA in solving COPs. | A simple and efficient real-coded genetic algorithm for constrained optimization |
S1568494615006092 | This paper aims to ease group decision-making by using an integration of fuzzy AHP (analytic hierarchy process) and fuzzy TOPSIS (technique for order preference by similarity to ideal solution) and its application to software selection of an electronic firm. Firstly, priority values of criteria in software selection problem have been determined by using fuzzy extension of AHP method. Fuzzy extension of AHP is suggested in this paper because of little computation time and much simpler than other fuzzy AHP procedures. Then, the result of the fuzzy TOPSIS model can be employed to define the most appropriate alternative with regard to this firm's goals in uncertain environment. Fuzzy numbers are presented in all phases in order to overcome any vagueness in decision making process. The final decision depends on the degree of importance of each decision maker so that wrong degree of importance causes the mistaken result. The researchers generally determine the degrees of importance of each decision maker according to special characteristics of each decision maker as subjectivity. In order to overcome this subjectivity in this paper, the judgments of decision makers are degraded to unique decision by using an attribute based aggregation technique. There is no study about software selection using integrated fuzzy AHP-fuzzy TOPSIS approach with group decision-making based on an attribute based aggregation technique. The results of the proposed approach and the other approaches are compared. Results indicate that our methodology allows decreasing the uncertainty and the information loss in group decision making and thus, ensures a robust solution to the firm. | An integrated fuzzy multi criteria group decision making approach for ERP system selection |
S1568494615006109 | The paper presents a multi-objective genetic approach to design interpretability-oriented fuzzy rule-based classifiers from data. The proposed approach allows us to obtain systems with various levels of compromise between their accuracy and interpretability. During the learning process, parameters of the membership functions, as well as the structure of the classifier's fuzzy rule base (i.e., the number of rules, the number of rule antecedents, etc.) evolve simultaneously using a Pittsburgh-type genetic approach. Since there is no particular coding of fuzzy rule structures in a chromosome (it reduces computational complexity of the algorithm), original crossover and mutation operators, as well as chromosome-repairing technique to directly transform the rules are also proposed. To evaluate both the accuracy and interpretability of the system, two measures are used. The first one – an accuracy measure – is based on the root mean square error of the system's response. The second one – an interpretability measure – is based on the arithmetic mean of three components: (a) the average length of rules (the average number of antecedents used in the rules), (b) the number of active fuzzy sets and (c) the number of active inputs of the system (an active fuzzy set or input means a set or input used by at least one fuzzy rule). Both measures are used as objectives in multi-objective (2-objective in our case) genetic optimization approaches such as well-known SPEA2 and NSGA-II algorithms. Moreover, for the purpose of comparison with several alternative approaches, the experiments are carried out both considering the so-called strong fuzzy partitions (SFPs) of attribute domains and without them. SFPs provide more semantically meaningful solutions, usually at the expense of their accuracy. The operation of the proposed technique in various classification problems is tested with the use of 20 benchmark data sets and compared to 11 alternative classification techniques. The experiments show that the proposed approach generates classifiers of significantly improved interpretability, while still characterized by competitive accuracy. | A multi-objective genetic optimization of interpretability-oriented fuzzy rule-based classifiers |
S1568494615006110 | A more scientific decision making process for radio frequency identification (RFID) technology selection is important to increase success rate of RFID technology application. RFID technology selection can be formulated as a kind of group decision making (GDM) problem with intuitionistic fuzzy preference relations (IFPRs). This paper develops a novel method for solving such problems. First, A technique for order preference by similarity to ideal solution (TOPSIS) based method is presented to rank intuitionistic fuzzy values (IFVs). To achieve higher group consensus as well as possible, we construct an intuitionistic fuzzy linear programming model to derive experts’ weights. Depending on the construction of membership and non-membership functions, the constructed intuitionistic fuzzy linear programming model is solved by three kinds of approaches: optimistic approach, pessimistic approach and mixed approach. Then to derive the ranking order of alternatives from the collective IFPR, we extend quantifier guided non-dominance degree (QGNDD) and quantifier guided dominance degree (QGDD) to intuitionistic fuzzy environment. A new two-phase ranking approach is designed to generate the ordering of alternatives based on QGNDD and QGDD. Thereby, the corresponding method is proposed for the GDM problems with IFPRs. Some generalizations on the constructed intuitionistic fuzzy linear programming model are further discussed. At length, the validity of the proposed method is illustrated with a real-world RFID technology selection example. | A novel group decision making method with intuitionistic fuzzy preference relations for RFID technology selection |
S1568494615006122 | In recent two decades, artificial neural networks have been extensively used in many business applications. Despite the growing number of research papers, only few studies have been presented focusing on the overview of published findings in this important and popular area. Moreover, the majority of these reviews were introduced more than 15 years ago. The aim of this work is to expand the range of earlier surveys and provide a systematic overview of neural network applications in business between 1994 and 2015. We have covered a total of 412 articles and classified them according to the year of publication, application area, type of neural network, learning algorithm, benchmark method, citations and journal. Our investigation revealed that most of the research has aimed at financial distress and bankruptcy problems, stock price forecasting, and decision support, with special attention to classification tasks. Besides conventional multilayer feedforward network with gradient descent backpropagation, various hybrid networks have been developed in order to improve the performance of standard models. Even though neural networks have been established as well-known method in business, there is enormous space for additional research in order to improve their functioning and increase our understanding of this influential area. | Artificial neural networks in business: Two decades of research |
S1568494615006134 | The purpose of this paper is to propose a systematic method to solve knowledge management performance evaluation (KMPE) problems. This method includes an integrated evaluation process starting from the measurement to the output of KMPE and combines subjective and objective indicators together. Firstly, we established an index system, involving the process of knowledge management, the organizational knowledge structure, economic benefits and efficiency. And based on this index system, a synthetic evaluation method is presented, using triangular fuzzy number to measure indexes and facilitating the KMPE with a group support system (GSS). To know better of the proposed method, we have an example to illustrate. Finally, the empirical study conducted in this paper indicates that the evaluation method has strong practicability and operability. Besides, the evaluation is enabled by using a group support system: the more objective scoring can be achieved due to synchronic/asynchronous and anonymous participation; Decision-makers improve their efficiency by the clear demonstration analysis results. The systematic method of KMPE based on the index system is able to improve organizations’ efficiency in performance evaluation process. | A synthetic method for knowledge management performance evaluation based on triangular fuzzy number and group support systems |
S1568494615006158 | Meta-heuristic algorithms have been successfully applied to solve the redundancy allocation problem in recent years. Among these algorithms, the electromagnetism-like mechanism (EM) is a powerful population-based algorithm designed for continuous decision spaces. This paper presents an efficient memory-based electromagnetism-like mechanism called MBEM to solve the redundancy allocation problem. The proposed algorithm employs a memory matrix in local search to save the features of good solutions and feed it back to the algorithm. This would make the search process more efficient. To verify the good performance of MBEM, various test problems, especially the 33 well-known benchmark instances in the literature, are examined. The experimental results show that not only optimal solutions of all benchmark instances are obtained within a reasonable computer execution time, but also MBEM outperforms EM in terms of the quality of the solutions obtained, even for large-size problems. | An efficient memory-based electromagnetism-like mechanism for the redundancy allocation problem |
S156849461500616X | As social media and e-commerce on the Internet continue to grow, opinions have become one of the most important sources of information for users to base their future decisions on. Unfortunately, the large quantities of opinions make it difficult for an individual to comprehend and evaluate them all in a reasonable amount of time. The users have to read a large number of opinions of different entities before making any decision. Recently a new retrieval task in information retrieval known as Opinion-Based Entity Ranking (OpER) has emerged. OpER directly ranks relevant entities based on how well opinions on them are matched with a user's preferences that are given in the form of queries. With such a capability, users do not need to read a large number of opinions available for the entities. Previous research on OpER does not take into account the importance and subjectivity of query keywords in individual opinions of an entity. Entity relevance scores are computed primarily on the basis of occurrences of query keywords match, by assuming all opinions of an entity as a single field of text. Intuitively, entities that have positive judgments and strong relevance with query keywords should be ranked higher than those entities that have poor relevance and negative judgments. This paper outlines several ranking features and develops an intuitive framework for OpER in which entities are ranked according to how well individual opinions of entities are matched with the user's query keywords. As a useful ranking model may be constructed from many ranking features, we apply learning to rank approach based on genetic programming (GP) to combine features in order to develop an effective retrieval model for OpER task. The proposed approach is evaluated on two collections and is found to be significantly more effective than the standard OpER approach. | Opinion-Based Entity Ranking using learning to rank |
S1568494615006171 | DNA microarray is an efficient new technology that allows to analyze, at the same time, the expression level of millions of genes. The gene expression level indicates the synthesis of different messenger ribonucleic acid (mRNA) molecule in a cell. Using this gene expression level, it is possible to diagnose diseases, identify tumors, select the best treatment to resist illness, detect mutations among other processes. In order to achieve that purpose, several computational techniques such as pattern classification approaches can be applied. The classification problem consists in identifying different classes or groups associated with a particular disease (e.g., various types of cancer, in terms of the gene expression level). However, the enormous quantity of genes and the few samples available, make difficult the processes of learning and recognition of any classification technique. Artificial neural networks (ANN) are computational models in artificial intelligence used for classifying, predicting and approximating functions. Among the most popular ones, we could mention the multilayer perceptron (MLP), the radial basis function neural network (RBF) and support vector machine (SVM). The aim of this research is to propose a methodology for classifying DNA microarray. The proposed method performs a feature selection process based on a swarm intelligence algorithm to find a subset of genes that best describe a disease. After that, different ANN are trained using the subset of genes. Finally, four different datasets were used to validate the accuracy of the proposal and test the relevance of genes to correctly classify the samples of the disease. | Classification of DNA microarrays using artificial neural networks and ABC algorithm |
S1568494615006183 | In this paper, a novel solving method for speech signal chaotic time series prediction model was proposed. A phase space was reconstructed based on speech signal's chaotic characteristics and the genetic programming (GP) algorithm was introduced for solving the speech chaotic time series prediction models on the phase space with the embedding dimension m and time delay τ. And then, the speech signal's chaotic time series models were built. By standardized processing of these models and optimizing parameters, a speech signal's coding model of chaotic time series with certain generalization ability was obtained. At last, the experimental results showed that the proposed method can get the speech signal chaotic time series prediction models much more effectively, and had a better coding accuracy than linear predictive coding (LPC) algorithms and neural network model. | A chaotic time series prediction model for speech signal encoding based on genetic programming |
S1568494615006195 | Particle swarm optimization (PSO) is a stochastic population-based algorithm motivated by intelligent collective behavior of birds. The performance of the PSO algorithm highly depends on choosing appropriate parameters. Inertia weight is a parameter of this algorithm which was first proposed by Shi and Eberhart to bring about a balance between the exploration and exploitation characteristics of PSO. This paper presents an adaptive approach which determines the inertia weight in different dimensions for each particle, based on its performance and distance from its best position. Each particle will then have different roles in different dimensions of the search environment. By considering the stability condition and an adaptive inertia weight, the acceleration parameters of PSO are adaptively determined. The corresponding approach is called stability-based adaptive inertia weight (SAIW). The proposed method and some other models for adjusting the inertia weight are evaluated and compared. The efficiency of SAIW is validated on 22 static test problems, moving peaks benchmarks (MPB) and a real-world problem for a radar system design. Experimental results indicate that the proposed model greatly improves the PSO performance in terms of the solution quality as well as convergence speed in static and dynamic environments. | A novel stability-based adaptive inertia weight for particle swarm optimization |
S1568494615006201 | Machine learning techniques can be used in diagnosis of breast cancer to help pathologists and physicians for decision making process. Kernel density estimation is a popular non-parametric method which can be applied for the estimation of data in many diverse applications. Selection of bandwidth and feature subset in kernel density estimator significantly influences the classification performance. In this paper, a PSO-KDE model is proposed that hybridize the particle swarm optimization (PSO) and non-parametric kernel density estimation (KDE) based classifier to diagnosis of breast cancer. In the proposed model, particle swarm optimization is used to simultaneously determine the kernel bandwidth and select the feature subset in the kernel density estimation based classifier. Classification performance and the number of selected features are the criteria used to design the objective function of PSO-KDE. The performance of the PSO-KDE is examined on Wisconsin Breast Cancer Dataset (WBCD) and Wisconsin Diagnosis Breast Cancer Database (WDBC) using classification accuracy, sensitivity and specificity. Experimental results demonstrate that the proposed model has better average performance than GA-KDE model in diagnosis of breast cancer. | Particle swarm optimization for bandwidth determination and feature selection of kernel density estimation based classifiers in diagnosis of breast cancer |
S1568494615006237 | The fixed charge problem is a special type of nonlinear programming problem which forms the basis of many industry problems wherein a charge is associated with performing an activity. In real world situations, the information provided by the decision maker regarding the coefficients of the objective functions may not be of a precise nature. This paper aims to describe a solution algorithm for solving such a fixed charge problem having multiple fractional objective functions which are all of a fuzzy nature. The enumerative technique developed not only finds the set of efficient solutions but also a corresponding fuzzy solution, enabling the decision maker to operate in the range obtained. A real life numerical example in the context of the ship routing problem is presented to illustrate the proposed method. | On solving a multiobjective fixed charge problem with imprecise fractional objectives |
S1568494615006249 | This paper presents bit masking oriented genetic algorithm (BMOGA) for context free grammar induction. It takes the advantages of crossover and mutation mask-fill operators together with a Boolean based procedure in two phases to guide the search process from ith generation to (i +1)th generation. Crossover and mutation mask-fill operations are performed to generate the proportionate amount of population in each generation. A parser has been implemented checks the validity of the grammar rules based on the acceptance or rejection of training data on the positive and negative strings of the language. Experiments are conducted on collection of context free and regular languages. Minimum description length principle has been used to generate a corpus of positive and negative samples as appropriate for the experiment. It was observed that the BMOGA produces successive generations of individuals, computes their fitness at each step and chooses the best when reached to threshold (termination) condition. As presented approach was found effective in handling premature convergence therefore results are compared with the approaches used to alleviate premature convergence. The analysis showed that the BMOGA performs better as compared to other algorithms such as: random offspring generation approach, dynamic allocation of reproduction operators, elite mating pool approach and the simple genetic algorithm. The term success ratio is used as a quality measure and its value shows the effectiveness of the BMOGA. Statistical tests indicate superiority of the BMOGA over other existing approaches implemented. | Grammar induction using bit masking oriented genetic algorithm and comparative analysis |
S1568494615006250 | In this paper, we propose a new optimization-based framework to reduce the dimensionality of hyperspectral images. One of the most problems in hyperspectral image classification is the Hughes phenomenon caused by the irrelevant spectral bands and the high correlation between the adjacent bands. The problematic is how to find the relevant bands to classify the pixels of hyperspectral image without reducing the classification accuracy rate. We propose to reformulate the problem of band selection as a combinatorial problem by modeling an objective function based on class separability measures and the accuracy rate. We use the Gray Wolf Optimizer, which is a new meta-heuristic algorithm more efficient than Practical Swarm Optimization, Gravitational Search Algorithm, Differential Evolution, Evolutionary Programming and Evolution Strategy. The experimentations are performed on three widely used benchmark hyperspectral datasets. Comparisons with the state-of-the-art approaches are also conducted. The analysis of the results proves that the proposed approach can effectively investigate the spectral band selection problem and provides a high classification accuracy rate by using a few samples for training. | Gray Wolf Optimizer for hyperspectral band selection |
S1568494615006274 | Multi-label learning deals with data associated with a set of labels simultaneously. Like traditional single-label learning, the high-dimensionality of data is a stumbling block for multi-label learning. In this paper, we first introduce the margin of instance to granulate all instances under different labels, and three different concepts of neighborhood are defined based on different cognitive viewpoints. Based on this, we generalize neighborhood information entropy to fit multi-label learning and propose three new measures of neighborhood mutual information. It is shown that these new measures are a natural extension from single-label learning to multi-label learning. Then, we present an optimization objective function to evaluate the quality of the candidate features, which can be solved by approximating the multi-label neighborhood mutual information. Finally, extensive experiments conducted on publicly available data sets verify the effectiveness of the proposed algorithm by comparing it with state-of-the-art methods. | Multi-label feature selection based on neighborhood mutual information |
S1568494615006286 | Not all products are marketed at the same time. If item (x) is marketed much earlier than item (z) is, then item (x) is associated with higher support compared with itemset (xz). In this situation, itemset (xz) cannot satisfy the minimum support; the association rule, x ⇒ z, possesses low confidence. To create better marketing strategies, managers must understand the sale associations between (x) and (z) and use (x) to promote (z) to increase the sales of (z). However, using traditional approaches for identifying the sale associations between earlier-marketed items and later-marketed item is difficult. In this study, we propose a new algorithm for determining the association rules by precisely calculating the support values of association rules. The association rules, which consist of an atomic consequent and its antecedents, consider the first time the consequent and its antecedents occurring in transactions. Furthermore, a new measure, TransRate, was designed to prevent generating useless itemsets. Experimental results from survey data indicated that the proposed approach can facilitate identifying rules of interest and valuable associations among later-marketed products. | Identifying association rules of specific later-marketed products |
S1568494615006298 | The changing economic conditions have challenged many financial institutions to search for more efficient and effective ways to assess emerging markets. Data envelopment analysis (DEA) is a widely used mathematical programming technique that compares the inputs and outputs of a set of homogenous decision making units (DMUs) by evaluating their relative efficiency. In the conventional DEA model, all the data are known precisely or given as crisp values. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. In addition, performance measurement in the conventional DEA method is based on the assumption that inputs should be minimized and outputs should be maximized. However, there are circumstances in real-world problems where some input variables should be maximized and/or some output variables should be minimized. Moreover, real-world problems often involve high-dimensional data with missing values. In this paper we present a comprehensive fuzzy DEA framework for solving performance evaluation problems with coexisting desirable input and undesirable output data in the presence of simultaneous input–output projection. The proposed framework is designed to handle high-dimensional data and missing values. A dimension-reduction method is used to improve the discrimination power of the DEA model and a preference ratio (PR) method is used to rank the interval efficiency scores in the resulting fuzzy environment. A real-life pilot study is presented to demonstrate the applicability of the proposed model and exhibit the efficacy of the procedures and algorithms in assessing emerging markets for international banking. | A comprehensive fuzzy DEA model for emerging market assessment and selection decisions |
S1568494615006304 | In spite of the efficiency of the Artificial Neural Networks (ANNs) for modeling nonlinear and complicated rainfall-runoff (R-R) process, they suffer from some drawbacks. Support Vector Regression (SVR) model has appeared to be a powerful alternative to reduce some of these drawbacks while retaining many strengths of ANNs. In this paper, to form a new rainfall-runoff model called SVR-GANN, a SVR model is combined with a geomorphologic-based ANN model. The GANN is a three-layer perceptron model, in which the number of hidden neurons is equal to the number of possible flow paths within a watershed and the connection weights between hidden layer and output layer are specified by flow path probabilities which are not updated during the training process. The capabilities of the proposed SVR-GANN model in simulating the daily runoff is investigated in a case study of three sub-basins located in a semi-arid region in Iran. The results of the proposed model are compared with those of ANN-based back propagation algorithm (ANN-BP), traditional SVR, ANN-based genetic algorithm (ANN-GA), adaptive neuro-fuzzy inference system (ANFIS), and GANN from the standpoints of parsimony, equifinality, robustness, reliability, computational time, simulation of hydrograph ordinates (peak flow, time to peak, and runoff volume) and also saving the main statistics of the observed data. The results show that prediction accuracy of the SVR-GANN model is usually better than those of ANN-based models and the proposed model can be applied as a promising, reliable, and robust prediction tool for rainfall-runoff modeling. | Integrating Support Vector Regression and a geomorphologic Artificial Neural Network for daily rainfall-runoff modeling |
S1568494615006316 | In engineering design, selecting the most suitable material for a particular product is a typical multiple criteria decision making (MCDM) problem, which generally involves several feasible alternatives and conflicting criteria. In this paper, we aim to propose a novel approach based on interval-valued intuitionistic fuzzy sets (IVIFSs) and multi-attributive border approximation area comparison (MABAC) for handling material selection problems with incomplete weight information. First, individual evaluations of experts concerning each alternative are aggregated to construct the group interval-valued intuitionistic fuzzy (IVIF) decision matrix. Consider the situation where the criteria weight information is partially known, a linear programming model is established for determining the criteria weights. Then, an extended MABAC method within the IVIF environment is developed to rank and select the best material. Finally, two application examples are provided to demonstrate the applicability and effectiveness of the proposed IVIF-MABAC approach. The results suggest that for the automotive instrument panel, polypropylene is the best, for the hip prosthesis, Co–Cr alloys-wrought alloy is the optimal option. Finally, based on the results, comparisons between the IVIF-MABAC and other relevant representative methods are presented. It is observed that the obtained rankings of the alternative materials are good agreement with those derived by the past researchers. | An interval-valued intuitionistic fuzzy MABAC approach for material selection with incomplete weight information |
S1568494615006328 | In machine learning, a combination of classifiers, known as an ensemble classifier, often outperforms individual ones. While many ensemble approaches exist, it remains, however, a difficult task to find a suitable ensemble configuration for a particular dataset. This paper proposes a novel ensemble construction method that uses PSO generated weights to create ensemble of classifiers with better accuracy for intrusion detection. Local unimodal sampling (LUS) method is used as a meta-optimizer to find better behavioral parameters for PSO. For our empirical study, we took five random subsets from the well-known KDD99 dataset. Ensemble classifiers are created using the new approaches as well as the weighted majority algorithm (WMA) approach. Our experimental results suggest that the new approach can generate ensembles that outperform WMA in terms of classification accuracy. | A novel SVM-kNN-PSO ensemble method for intrusion detection system |
S156849461500633X | This paper deals with the problem of parameter estimation in the generalized Mallows model (GMM) by using both local and global search metaheuristic (MH) algorithms. The task we undertake is to learn parameters for defining the GMM from a dataset of complete rankings/permutations. Several approaches can be found in the literature, some of which are based on greedy search and branch and bound search. The greedy approach has the disadvantage of usually becoming trapped in local optima, while the branch and bound approach, basically A* search, usually comes down to approximate search because of memory requirements, losing in this way its guaranteed optimality. Here, we carry out a comparative study of several MH algorithms (iterated local search (ILS) methods, variable neighborhood search (VNS) methods, genetic algorithms (GAs) and estimation of distribution algorithms (EDAs)) and a tailored algorithm A* to address parameter estimation in GMMs. We use 22 real datasets of different complexity, all but one of which were created by the authors by preprocessing real raw data. We provide a complete analysis of the experiments in terms of accuracy, number of iterations and CPU time requirements. | Using metaheuristic algorithms for parameter estimation in generalized Mallows models |
S1568494615006341 | In financial time series pattern matching, segmentation is often performed as a pre-processing step to reduce the data points from the input sequence. The segmentation process extracts important data points and produces a time series with reduced data points. In this paper, we evaluate the effectiveness and accuracy of four approaches to financial time series pattern matching when used with four segmentation methods, the perceptually important points, piecewise aggregate approximation, piecewise linear approximation and turning points methods. The pattern matching approaches analysed in this paper include the template-based, rule-based, hybrid, decision tree, and Symbolic Aggregate approXimation (SAX) approaches. The analysis is performed twice, on a real data set (of Hang Seng Index prices from the Hong Kong stock market) and on a synthetic data set containing positive and negative cases of a technical pattern known as head-and-shoulders. | Effect of segmentation on financial time series pattern matching |
Subsets and Splits