FileName | Abstract | Title |
---|---|---|
S1568494613004298 | For data pre-processing of SVMs, many researchers have tried to identify the samples that will become support vectors. Support vectors generally lie in the overlap regions between different classes, but such overlap regions do not always exist. In this paper, a new method is proposed that finds the boundary region of each class instead of the overlap regions, so it can handle datasets without overlap regions. For each sample, the cosines of the sample–neighbor angles are summed; the sum ranges from 0 to k. When the sample lies in the boundary region of the data distribution, the sum is close to k; when it lies in the interior of the data distribution, the sum is close to 0. Using this cosine sum, the samples lying in the interior of each class can be discarded before SVM training (a minimal sketch of this criterion is given after the table). Experimental results show that the proposed method handles cases that methods based on finding overlap regions cannot. | Neighbors’ distribution property and sample reduction for support vector machines |
S1568494613004390 | With respect to multiple attribute group decision making (MAGDM) problems in which attribute values take the form of intuitionistic linguistic numbers, some new group decision making methods are developed. Firstly, some operational laws, the expected value, the score function and the accuracy function of intuitionistic linguistic numbers are introduced. Then, an intuitionistic linguistic power generalized weighted average (ILPGWA) operator and an intuitionistic linguistic power generalized ordered weighted average (ILPGOWA) operator are developed. Furthermore, some desirable properties of the ILPGWA and ILPGOWA operators, such as commutativity, idempotency and monotonicity, are studied. At the same time, some special cases of the generalized parameters in these operators are analyzed. Based on the ILPGWA and ILPGOWA operators, two approaches to multiple attribute group decision making with intuitionistic linguistic information are proposed. Finally, an illustrative example is given to verify the developed approaches and to demonstrate their practicality and effectiveness. | Multiple attribute group decision making methods based on intuitionistic linguistic power generalized aggregation operators |
S1568494613004407 | In this paper a class of hybrid-fuzzy models is presented, where binary membership functions are used to capture the hybrid behavior. We describe a hybrid-fuzzy identification methodology for non-linear hybrid systems with mixed continuous and discrete states that uses fuzzy clustering and principal component analysis. The method first determines the hybrid characteristic of the system inspired by an inverse form of the merge method for clusters, which makes it possible to identify the unknown switching points of a process based on just input–output (I/O) data. Next, using the detected switching points, a hard partition of the I/O space is obtained. Finally, TS fuzzy models are identified as submodels for each partition. Two illustrative examples, a hybrid-tank system and a traffic model for highways, are presented to show the benefits of the proposed approach. | Hybrid-fuzzy modeling and identification |
S1568494613004419 | This work is motivated by a real problem posed to the authors by a company in Tenerife, Spain. Given a fleet of vehicles, daily routes have to be designed in order to minimize the total traveled distance while balancing the workload of the drivers. This balance is defined in relation to the length of the routes, measured in terms of the required time. A bi-objective mixed-integer linear model for the problem is proposed and a solution approach, based on the scatter search metaheuristic, is developed. Extensive computational experiments are carried out, using benchmark instances with 25, 50 and 100 customers, to test several components of the proposed method. Comparisons with the exact Pareto fronts for instances with up to 25 customers show that the proposed methods obtain good approximations. For comparison purposes, an NSGA-II algorithm has also been implemented. Results obtained on a real case instance are also discussed. In this case, the solution provided by the method proposed in this paper improves on the solution implemented by the company. | A bi-objective vehicle routing problem with time windows: A real case in Tenerife |
S1568494613004420 | A new support vector machine (SVM), called GSVM, is introduced; it is specially designed for bi-classification problems where balanced accuracy between classes is the objective. Starting from a standard SVM, the GSVM is obtained by a low-cost post-processing strategy that modifies the initial bias. Thus, the bias for GSVM is calculated by moving the original bias of the SVM to improve the geometric mean of the true positive rate and the true negative rate. The proposed solution neither modifies the original optimization problem for SVM training, nor introduces new hyper-parameters. Experiments carried out on a large number of databases (23) show that GSVM obtains the desired balanced accuracy between classes. Furthermore, its performance improves on well-known cost-sensitive schemes for SVM, without adding complexity or computational cost. | GSVM: An SVM for handling imbalanced accuracy between classes in bi-classification problems |
S1568494613004432 | Evolvable hardware (EH) is a thriving area of research which uses the genetic algorithm (GA) to construct novel circuits without manual engineering. These algorithms have been widely implemented using software but have not gained an appreciable edge because of the huge computation time involved. This has been a major hindrance to real-time applications. A major speed-up could be achieved by shifting the implementation to hardware. Major issues to be addressed in hardware implementation are scalability, providing flexibility and reduced computational delays. Presented here is the first complete hardware evolution (CHE) based system on programmable chip (SoPC) for EH. The architecture includes the required memory and modules for performing all operations of the algorithm. It is completely built on the configurable logic blocks (CLB) of a single commercial off the shelf (COTS) field programmable gate array (FPGA). The coding is done using Verilog hardware description language (HDL). Xilinx ISE 9.1i has been used for synthesis and simulation. As a proof of concept, the architecture has been synthesized for evolving three combinational circuits. The results show that the architecture is able to cater to evolution with no limit on the number of generations, accompanied with no scaling in the resource utilization. The results present computational delays of the order of a few nanoseconds for this CHE based architecture. | Complete hardware evolution based SoPC for evolvable hardware |
S1568494613004444 | Hypoglycaemia is a medical term for a body state with a low level of blood glucose. It is a common and serious side effect of insulin therapy in patients with diabetes. In this paper, we propose a system model to measure physiological parameters continuously to provide hypoglycaemia detection for Type 1 diabetes mellitus (T1DM) patients. The resulting model is a fuzzy inference system (FIS). The heart rate (HR), corrected QT interval of the electrocardiogram (ECG) signal (QTc), change of HR, and change of QTc are used as the input of the FIS to detect the hypoglycaemic episodes. An intelligent optimiser is designed to optimise the FIS parameters that govern the membership functions and the fuzzy rules. The intelligent optimiser has an implementation framework that incorporates two wavelet mutated differential evolution optimisers to enhance the training performance. A multi-objective optimisation approach is used to perform the training of the FIS in order to meet the medical standards on sensitivity and specificity. Experiments with real data of 16 children (569 data points) with T1DM are studied in this paper. The data are randomly separated into a training set with 5 patients (199 data points), a validation set with 5 patients (177 data points) and a testing set with 5 patients (193 data points). Experimental results show that the proposed FIS tuned by the proposed intelligent optimiser can offer good classification performance. | Hypoglycaemia detection using fuzzy inference system with intelligent optimiser |
S1568494613004456 | The application of chaotic sequences can be an interesting alternative for providing search diversity in an optimization procedure, known as the chaos optimization algorithm (COA). Since chaotic motion is pseudo-random and chaotic sequences are sensitive to the initial conditions, the search ability of COA is usually affected by the starting values. Considering this weakness, the parallel chaos optimization algorithm (PCOA) is studied in this paper. To obtain the optimum solution accurately, the harmony search algorithm (HSA) is integrated with PCOA to form a novel hybrid algorithm. Different chaotic maps are compared and the impact of the parallel parameter on the hybrid algorithm is discussed. Several simulation results are used to show the effective performance of the proposed hybrid algorithm. | Hybrid parallel chaos optimization algorithm with harmony search algorithm |
S1568494613004468 | Metaheuristic methods have been demonstrated to be efficient tools for solving hard optimization problems. Most metaheuristics define a set of parameters that must be tuned. A good setting of those parameter values makes it possible to take full advantage of the metaheuristic's capabilities to solve the problem at hand. Tuning strategies are step-by-step methods based on multiple runs of the metaheuristic algorithm. In this study we compare four automated tuning methods: F-Race, Revac, ParamILS and SPO. We evaluate the performance of each method using a standard genetic algorithm for continuous function optimization. We discuss the requirements of each method, the resources used and the quality of the solutions found in different scenarios. Finally, we establish some guidelines that can help to choose the most appropriate tuning procedure. | A beginner's guide to tuning methods |
S1568494613004493 | Ovarian cancer is the ninth most common cancer among women and ranks fifth in cancer deaths. Statistics show that the five-year survival rate is greater than 75% if diagnosis occurs before the cancer cells have spread to other organs (stage I), but it drops to 20% when the cancer cells have spread to the upper abdomen (stage III). Therefore, it is crucial to detect ovarian cancer as early as possible and to correctly identify the stage of the cancer to prevent any further delay of appropriate treatments. In this paper, we propose a novel self-organizing neural fuzzy inference system that functions as a reliable decision support system for ovarian cancer diagnosis. The system requires only a limited number of control parameters and constraints to derive simple yet convincing inference rules without human intervention and expert guidance. Because feature selection and attribute reduction are performed during training, the inference rules possess a great level of interpretability. Experiments are conducted on both established medical data sets and real-world cases collected from a hospital. The experimental results of our proposed model in ovarian cancer diagnosis are encouraging because it achieves the highest number of correct diagnoses when benchmarked against other computational intelligence based models. More importantly, its automatically derived rules are consistent with expert knowledge. | Ovarian cancer diagnosis using a hybrid intelligent system with simple yet convincing rules |
S156849461300450X | Artificial chromosomes with genetic algorithm (ACGA) is one of the latest versions of the estimation of distribution algorithms (EDAs). This algorithm has already been applied successfully to solve different kinds of scheduling problems. However, due to the fact that its probabilistic model does not consider variable interactions, ACGA may not perform well in some scheduling problems, particularly if sequence-dependent setup times are considered. This is due to the fact that the previous job will influence the processing time of the next job. Simply capturing ordinal information from the parental distribution is not sufficient for a probabilistic model. As a result, this paper proposes a bi-variate probabilistic model to add into the ACGA. This new algorithm is called the ACGA2 and is used to solve single machine scheduling problems with sequence-dependent setup times in a common due-date environment. A theoretical analysis is given in this paper. Some heuristics and local search algorithm variable neighborhood search (VNS) are also employed in the ACGA2. The results indicate that the average error ratio of this ACGA2 is half the error ratio of the ACGA. In addition, when ACGA2 is applied in combination with other heuristic methods and VNS, the hybrid algorithm achieves optimal solution quality in comparison with other algorithms in the literature. Thus, the proposed algorithms are effective for solving the scheduling problems. | Artificial chromosomes with genetic algorithm 2 (ACGA2) for single machine scheduling problems with sequence-dependent setup times |
S1568494613004511 | In this paper, we deal with the problem of classification of interval type-2 fuzzy sets through evaluating their distinguishability. To this end, we exploit a general matching algorithm to compute their similarity measure. The algorithm is based on the aggregation of two core similarity measures applied independently on the upper and lower membership functions of the given pair of interval type-2 fuzzy sets that are to be compared. Based on the proposed matching procedure, we develop an experimental methodology for evaluating the distinguishability of collections of interval type-2 fuzzy sets. Experimental results on evaluating the proposed methodology are carried out in the context of classification by considering interval type-2 fuzzy sets as patterns of suitable classification problem instances. We show that considering only the upper and lower membership functions of interval type-2 fuzzy sets is sufficient to (i) accurately discriminate between them and (ii) judge and quantify their distinguishability. | Distinguishability of interval type-2 fuzzy sets data by analyzing upper and lower membership functions |
S1568494614000027 | Nowadays, many healthcare organizations generate and collect huge amounts of medical data. Due to the difficulty of analyzing this massive volume of data using traditional methods, medical data mining on Electronic Health Records (EHR) has become a major concern in medical research. Therefore, it is necessary to assess EHR architectures based on their capability to extract useful medical knowledge from huge EHR databases. In this paper, we develop a bi-level interactive decision support framework to identify data mining-oriented EHR architectures. The contribution of this bi-level framework is fourfold: (1) it extends the Interactive Simple Additive Weighting (ISAW) model from an individual, single-level environment to a group, bi-level environment; (2) it utilizes decision makers’ preferences gradually in the course of interactions to reach a consensus on a data mining-oriented EHR architecture; (3) it uses fuzzy logic and fuzzy sets to represent ambiguous, uncertain or imprecise information; and (4) it synthesizes a representative outcome based on qualitative and quantitative indicators in the EHR assessment process. A case study demonstrates the applicability of the proposed bi-level interactive framework for benchmarking a national data mining-oriented EHR. | A bi-level interactive decision support framework to identify data mining-oriented electronic health record architectures |
S1568494614000039 | Since the introduction of DNA microarray technology, there has been increasing interest in its clinical application for cancer diagnosis. However, in order to effectively translate the advances in the field of microarray-based classification into the clinical area, there are still some problems related to both the model performance and the biological interpretability of the results. In this paper, a novel ensemble model is proposed that is able to integrate prior knowledge in the form of gene sets into the whole microarray classification process. Each gene set is used as an informed feature selection subset to train several base classifiers in order to estimate their accuracy. This information is later used for selecting the classifiers comprising the final ensemble model. The internal architecture of the proposed ensemble allows the replacement of both the base classifiers and the heuristics employed to carry out classifier fusion, thereby achieving a high level of flexibility and making it possible to configure and adapt the model to different contexts. Experimental results using different datasets and several gene sets show that the proposal is able to outperform classical alternatives by using existing prior knowledge adapted from publicly available databases. | A novel ensemble of classifiers that use biological relevant gene sets for microarray classification |
S1568494614000040 | Construction projects are initiated in a dynamic environment, which results in circumstances of high uncertainty and risk due to the accumulation of many interrelated parameters. The purpose of this study is to use novel analytic tools to evaluate construction projects and their overall risks under incomplete and uncertain situations. It also aims to place each risk in a proper category and predict its level in advance so that strategies can be developed to counteract the high-risk factors. The study covers identifying the key risk criteria of construction projects at King Abdulaziz University (KAU) and assessing the criteria by the integrated hybrid methodologies. The proposed hybrid methodologies start with a survey for data collection. The relative importance index (RII) method was applied to prioritize the project risks based on the data obtained. The construction projects were then categorized by the fuzzy AHP and fuzzy TOPSIS methodologies. Fuzzy AHP (FAHP) was used to create favorable weights for the fuzzy linguistic variables of the construction projects' overall risk. The fuzzy TOPSIS method is very suitable for solving group decision making problems in a fuzzy environment. It incorporates vital qualitative attributes in the performance analysis of construction projects and transforms the qualitative data into equivalent quantitative measures. Thirty construction projects were studied with respect to five main criteria: time, cost, quality, safety and environmental sustainability. The results showed that these methodologies are able to assess the overall risks of construction projects and, with the contribution of the relative importance index, select the project with the lowest risk. This approach has potential for future applications. | Construction projects selection and risk assessment by fuzzy AHP and fuzzy TOPSIS methodologies |
S1568494614000052 | Natural resource allocation is a complex problem that entails difficulties related to the nature of real world problems and to the constraints related to the socio-economical aspects of the problem. In more detail, as the resource becomes scarce relations of trust or communication channels that may exist between the users of a resource become unreliable and should be ignored. In this sense, it is argued that in multi-agent natural resource allocation settings agents are not considered to observe or communicate with each other. The aim of this paper is to study multi-agent learning within this constrained framework. Two novel learning methods are introduced that operate in conjunction with any decentralized multi-agent learning algorithm to provide efficient resource allocations. The proposed methods were applied on a multi-agent simulation model that replicates a natural resource allocation procedure, and extensive experiments were conducted using popular decentralized multi-agent learning algorithms. Experimental results employed statistical figures of merit for assessing the performance of the algorithms with respect to the preservation of the resource and to the utilities of the users. It was revealed that the proposed learning methods improved the performance of all policies under study and provided allocation schemes that both preserved the resource and ensured the survival of the agents, simultaneously. It is thus demonstrated that the proposed learning methods are a substantial improvement, when compared to the direct application of typical learning algorithms to natural resource sharing, and are a viable means of achieving efficient resource allocations. | A robust approach for multi-agent natural resource allocation based on stochastic optimization algorithms |
S1568494614000064 | Multi-objectivization via Segmentation (MOS) has been shown to give improved results over other previous multi-objectivization approaches. This paper explores the mechanisms that make different segmentations in MOS successful in the context of the Traveling Salesman Problem (TSP). A variety of new segmentation methods are analyzed and theories regarding their performance are presented. Spatial segmentation methods are compared with other adaptive and static decomposition methods. Insight into why previous adaptive methods performed well is provided. New decomposition methods are proposed and several of these methods are shown to attain better performance than previously known methods of decomposition. The convergence of various degrees of multi-objectivization is examined leading to a new, more general segmentation algorithm, Multi-Objectivization via Progressive Segmentation (MOPS). MOPS combines the single-objective genetic algorithm with multi-objectivization in a general form. In a given run MOPS can progress from a more traditional single objective method to a strong multi-objectivization method. MOPS attempts to improve the ratio of fitness improvements to fitness decrements, signal-to-noise ratio (SNR), over the course of an evolutionary optimization based on the principle that often fitness improvements are generally easier to find early in the run rather than late in the run. It is shown that MOPS provides robust performance across a variety of problem instances and different computational budgets. | An analysis of decomposition approaches in multi-objectivization via segmentation |
S1568494614000155 | This paper first proposes a type-2 neural fuzzy system (NFS) learned through its type-1 counterpart (T2NFS-T1) and then implements the built IT2NFS-T1 in a field-programmable gate array (FPGA) chip. The antecedent part of each fuzzy rule in the T2NFS-T1 uses interval type-2 fuzzy sets, while the consequent part uses a Takagi-Sugeno-Kang (TSK) type with interval combination weights. The T2NFS-T1 uses a simplified type-reduction operation to reduce system training time and hardware implementation cost. Given a training data set, a TSK type-1 NFS is first learned through structure and parameter learning. The built type-1 fuzzy logic system (FLS) is then extended to a type-2 FLS, where highly overlapped type-1 fuzzy sets are merged into interval type-2 fuzzy sets to reduce the total number of fuzzy sets. Finally, the rule consequent and antecedent parameters in the T2NFS-T1 are tuned using a hybrid of the gradient descent and rule-ordered recursive least square (RLS) algorithms. Simulation results and comparisons with various type-1 and type-2 FLSs verify the effectiveness and efficiency of the T2NFS-T1 for system modeling and prediction problems. A new hardware circuit using both parallel-processing and pipeline techniques is proposed to implement the learned T2NFS-T1 in an FPGA chip. The T2NFS-T1 chip reduces the hardware implementation cost in comparison to other type-2 fuzzy chips. | A type-2 neural fuzzy system learned through type-1 fuzzy rules and its FPGA-based hardware implementation |
S1568494614000167 | Markerless Human Motion Capture is the problem of determining the joints’ angles of a three-dimensional articulated body model that best matches current and past observations acquired by video cameras. The problem of Markerless Human Motion Capture is high-dimensional and requires the use of models with a considerable number of degrees of freedom to appropriately adapt to the human anatomy. Particle filters have become the most popular approach for Markerless Human Motion Capture, despite their difficulty to cope with high-dimensional problems. Although several solutions have been proposed to improve their performance, they still suffer from the curse of dimensionality. As a consequence, it is normally required to impose mobility limitations in the body models employed, or to exploit the hierarchical nature of the human skeleton by partitioning the problem into smaller ones. Evolutionary algorithms, though, are powerful methods for solving continuous optimization problems, specially the high-dimensional ones. Yet, few works have tackled Markerless Human Motion Capture using them. This paper evaluates the performance of three of the most competitive algorithms in continuous optimization – Covariance Matrix Adaptation Evolutionary Strategy, Differential Evolution and Particle Swarm Optimization – with two of the most relevant particle filters proposed in the literature, namely the Annealed Particle Filter and the Partitioned Sampling Annealed Particle Filter. The algorithms have been experimentally compared in the public dataset HumanEva-I by employing two body models with different complexities. Our work also analyzes the performance of the algorithms in hierarchical and holistic approaches, i.e., with and without partitioning the search space. Non-parametric tests run on the results have shown that: (i) the evolutionary algorithms employed outperform their particle filter counterparts in all the cases tested; (ii) they can deal with high-dimensional models thus leading to better accuracy; and (iii) the hierarchical strategy surpasses the holistic one. | Comparing evolutionary algorithms and particle filters for Markerless Human Motion Capture |
S1568494614000179 | In this paper we research the location management, a vital task that controls the subscribers’ mobility in the mobile communication networks. This management task defines a multiobjective optimization problem with two objective functions: minimize the number of location updates and minimize the paging overhead. In this work, the first objective function is described following the Location Areas strategy (because it is widely used in current mobile networks) and the second one is expressed following different paging procedures. Furthermore, the different location management strategies studied in this work are optimized with our versions of two well-known multiobjective evolutionary algorithms (the Non-dominated Sorting Genetic Algorithm II and the Strength Pareto Evolutionary Algorithm 2). This study is a novel contribution of our research which will allow us to select the most suitable configuration of LA-PA (Location Areas with a specific paging procedure) depending on the real state of the signaling network. Experimental results also show that our proposals are very competitive because they outperform the results obtained by other optimization techniques proposed in the literature. | On the use of multiobjective optimization for solving the Location Areas strategy with different paging procedures in a realistic mobile network |
S1568494614000180 | Most of the recent proposed particle swarm optimization (PSO) algorithms do not offer the alternative learning strategies when the particles fail to improve their fitness during the searching process. Motivated by this fact, we improve the cutting edge teaching–learning-based optimization (TLBO) algorithm and adapt the enhanced framework into the PSO, thereby develop a teaching and peer-learning PSO (TPLPSO) algorithm. To be specific, the TPLPSO adopts two learning phases, namely the teaching and peer-learning phases. The particle firstly enters into the teaching phase and updates its velocity based on its historical best and the global best information. Particle that fails to improve its fitness in the teaching phase then enters into the peer-learning phase, where an exemplar is selected as the guidance particle. Additionally, a stagnation prevention strategy (SPS) is employed to alleviate the premature convergence issue. The proposed TPLPSO is extensively evaluated on 20 benchmark problems with different features, as well as one real-world problem. Experimental results reveal that the TPLPSO exhibits competitive performances when compared with ten other PSO variants and seven state-of-the-art metaheuristic search algorithms. | Teaching and peer-learning particle swarm optimization |
S1568494614000192 | In social choice voting, majorities based on difference of votes and their extension, majorities based on difference in support, implement the crisp preference values (votes) and the intensities of preference provided by voters when comparing pairs of alternatives, respectively. The aim of these rules is to declare which alternative is socially preferred; to that end, they require the winning alternative to reach a certain positive difference between its social valuation and that reached by the losing alternative. This paper introduces a new aggregation rule that extends majorities based on difference of votes from the context of crisp preferences to the framework of linguistic preferences. Under linguistic majorities with difference in support, voters express their intensities of preference between pairs of alternatives using linguistic labels, and an alternative defeats another one when a specific support, fixed before the election process, is reached. There exist two main representation methodologies for linguistic preferences: the cardinal one, based on the use of fuzzy sets, and the ordinal one, based on the use of 2-tuples. Linguistic majorities with difference in support are formalised in both representation settings, and conditions are given to guarantee that fuzzy linguistic majorities and 2-tuple linguistic majorities are mathematically isomorphic. Finally, linguistic majorities based on difference in support are proved to verify relevant normative properties: anonymity, neutrality, monotonicity, weak Pareto and cancellativeness. | Linguistic majorities with difference in support |
S1568494614000209 | In massively multiplayer online role-playing games (MMORPGs), each race holds some attributes and skills. Each skill contains several abilities such as physical damage and hit rate. All those attributes and abilities are functions of the character's level, which are called Ability-Increasing Functions (AIFs). A well-balanced MMORPG is characterized by having a set of well-balanced AIFs. In this paper, we propose a coevolutionary design method, integrating a modified probabilistic incremental program evolution (PIPE) and the cooperative coevolutionary algorithm (CCEA), to solve the balance problem of MMORPGs. Moreover, we construct a simple turn-based game model and perform a series of experiments on it. The results indicate that the proposed method is able to obtain a set of well-balanced AIFs more efficiently than the simple genetic algorithm (SGA), the simulated annealing algorithm (SAA) and the hybrid discrete particle swarm optimization (HDPSO) algorithm. The results also show that the performance of PIPE has been significantly improved through the proposed modifications. | Solving the balance problem of massively multiplayer online role-playing games using coevolutionary programming |
S1568494614000210 | Detection of mild laryngeal disorders using acoustic parameters of human voice is the main objective of this study. Observations of sustained phonation (audio recordings of vocalized /a/) are labeled by clinical diagnosis and rated by severity (from 0 to 3). The research is exclusively constrained to healthy (severity 0) and mildly pathological (severity 1) cases – the two most difficult classes to distinguish between. Comprehensive voice signal characterization and information fusion constitute the approach adopted here. Characterization is obtained through a diverse feature set, containing 26 feature subsets of varying size, extracted from the voice signal. The usefulness of feature-level and decision-level fusion is explored using support vector machine (SVM) and random forest (RF) as base classifiers. For both types of fusion we also investigate the influence of feature selection on model accuracy. To improve the decision-level fusion we introduce a simple unsupervised technique for ensemble design, based on partitioning the feature set by k-means clustering, where the parameter k controls the size and diversity of the prospective ensemble. All types of fusion resulted in an evident improvement over the best individual feature subset. However, none of the types, including fusion setups comprising feature selection, proved to be significantly superior to the rest. The proposed ensemble design by feature set decomposition discernibly enhanced decision-level fusion and significantly outperformed feature-level fusion. An ensemble of RF classifiers, induced from a cluster-based partitioning of the feature set, achieved an equal error rate of 13.1±1.8% in the detection of a mildly pathological larynx. This is a very encouraging result, considering that detection of a mild laryngeal disorder is a more challenging task than the common discrimination between healthy cases and a wide spectrum of pathological cases. | Fusion of voice signal information for detection of mild laryngeal pathology |
S1568494614000325 | This paper suggests new evolving Takagi–Sugeno–Kang (TSK) fuzzy models dedicated to crane systems. A set of evolving TSK fuzzy models with different numbers of inputs are derived by the novel relatively simple and transparent implementation of an online identification algorithm. An input selection algorithm to guide modeling is proposed on the basis of ranking the inputs according to their important factors after the first step of the online identification algorithm. The online identification algorithm offers rule bases and parameters which continuously evolve by adding new rules with more summarization power and by modifying existing rules and parameters. The potentials of new data points are used with this regard. The algorithm is applied in the framework of the pendulum–crane system laboratory equipment. The evolving TSK fuzzy models are tested against the experimental data and a comparison with other TSK fuzzy models and modeling approaches is carried out. The comparison points out that the proposed evolving TSK fuzzy models are simple and consistent with both training data and testing data and that these models outperform other TSK fuzzy models. | Online identification of evolving Takagi–Sugeno–Kang fuzzy models for crane systems |
S1568494614000337 | In order to understand the patterns of various biological processes and discover the principles of protein–protein interactions (PPI), it is important to develop effective methods for identifying and predicting PPI and their hot spots accurately. A multi-criteria optimization classifier (MCOC) can learn a decision function from different classes of training data and use it to predict the class labels of unknown samples. In many real-world applications, owing to noise, outliers, imbalanced class distributions, nonlinearly separable problems, and other uncertainties, the predictive performance of MCOC degenerates rapidly. In this paper, we introduce a fuzzy contribution for each instance of the training data, unequal penalty factors for the samples of imbalanced classes, and a kernel method for nonlinearly separable datasets; a novel multi-criteria optimization classifier with fuzzification, kernel and penalty factors (FKP-MCOC) is then constructed so as to reduce the effect of anomalies and to improve performance on imbalanced classes and nonlinear separability in classification. Experimental results on predicting active compounds and protein interaction hot spots, and comparisons with MCOC, support vector machines (SVM) and fuzzy SVM, show that FKP-MCOC significantly increases the efficiency of classification, the partition of active and inactive compounds in bioassays, the separation of hot spot residues from energetically unimportant residues in protein interactions, and the generalization in predicting active compounds and hot spot residues for new instances. | Multi-criteria optimization classifier using fuzzification, kernel and penalty factors for predicting protein interaction hot spots |
S1568494614000349 | Swarm intelligence, a nature-inspired computing paradigm, applies an algorithm situated within the context of agent-based models that mimics the behavior of ants to detect sinkhole attacks in wireless sensor networks. An Ant Colony Optimization Attack Detection (ACO-AD) algorithm is proposed to identify sinkhole attacks based on the node IDs defined in the rule set. The nodes generating an alert on identifying a sinkhole attack are grouped together. A voting method is proposed to identify the intruder. An Ant Colony Optimization Boolean Expression Evolver Sign Generation (ABXES) algorithm is proposed to distribute keys to the alerted nodes in the group for signing the suspect list in order to agree on the intruder. It is shown that the proposed method identifies the anomalous connections without generating false positives and minimizes the storage in the sensor nodes in comparison to the LIDeA architecture for sinkhole attack detection. Experimental results demonstrating the Ant Colony Optimization approach to detecting a sinkhole attack are presented. | Swarm intelligence based approach for sinkhole attack detection in wireless sensor networks |
S1568494614000398 | This paper presents a non-intrusive fatigue detection system based on video analysis of drivers. The system relies on multiple visual cues to characterize the level of alertness of the driver. The parameters used for detecting fatigue are: eye closure duration, measured through eye state information, and yawning, analyzed through mouth state information. Initially, the face is located using the Viola–Jones face detection method to ensure the presence of the driver in the video frame. Then, a mouth window is extracted from the face region, in which the lips are searched for through spatial fuzzy c-means (s-FCM) clustering. Simultaneously, the pupils are also detected in the upper part of the face window on the basis of radii, inter-pupil distance and angle. The monitored eye and mouth information is further passed to a Fuzzy Expert System (FES) that classifies the true state of the driver. The system has been tested using real data, with different sequences recorded in day and night driving conditions, and with users of different races and genders. The system yielded an average accuracy of 100% on all the videos on which it was tested. | Fully automated real time fatigue detection of drivers through Fuzzy Expert Systems |
S1568494614000404 | This paper concerns the influence of global warming on the health of the population. We consider an important parameter of global warming – the heat index – which characterizes human thermal comfort and represents a combination of air temperature and relative humidity. Based on heat indexes, we propose a new approach – fuzzy methods – to investigate heat waves, which, if defined properly, can be used to assess the potential impacts of climate change on human health, e.g., in heat-health warning systems. We find typical characteristics of heat indexes during different time periods, using the most typical fuzzy expected values, and use these typical characteristics to process heat waves. Our results are applied to data collected by the Ministry of Preservation of the Environment of Georgia during 1955–1970 and 1990–2007, as well as to freely accessible meteorological data on air temperature and relative humidity during August 2003 in Paris (France) and Tbilisi (Georgia). | Investigation of heat waves with fuzzy methods |
S1568494614000416 | Linear model is a general forecasting model and moving average technical index (MATI) is one of useful forecasting methods to predict the future stock prices in stock markets. Therefore, individual investors, stock fund managers, and financial analysts attempt to predict price fluctuation in stock markets by either linear model or MATI. From literatures, three major drawbacks are found in many existing forecasting models. First, forecasting rules mined from some AI algorithms, such as neural networks, could be very difficult to understand. Second, statistic assumptions about variables are required for time series to generate forecasting models, which are not easily understandable by stock investors. Third, stock market investors usually make short-term decisions based on recent price fluctuations, i.e., the last one or two periods, but most time series models use only the last period of stock price. In order to overcome these drawbacks, this study proposes a hybrid forecasting model using linear model and MATI to predict stock price trends with the following four steps: (1) test the lag period of Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and calculate the last n-period moving average; (2) use subtractive clustering to partition technical indicator values into linguistic values based on data discretization method objectively; (3) employ fuzzy inference system (FIS) to build linguistic rules from the linguistic technical indicator dataset, and optimize the FIS parameters by adaptive network; and (4) refine the proposed model by adaptive expectation models. The proposed model is then verified by root mean squared error (RMSE), and a ten-year period of TAIEX is selected as experiment datasets. The results show that the proposed model is superior to the other forecasting models, namely Chen's model and Yu's model in terms of RMSE. | A hybrid ANFIS based on n-period moving average model to forecast TAIEX stock |
S1568494614000453 | The design and implementation of efficient abstract data types are important issues for software developers. Selecting and creating the appropriate data structure for implementing an abstract data type is not a trivial problem for a software developer, as it is hard to anticipate all the usage scenarios of the deployed application. Moreover, it is not clear how to select a good implementation for an abstract data type when access patterns to it are highly variant, or even unpredictable. The problem of automatic data structure selection is a complex one because each particular data structure is usually more efficient for some operations and less efficient for others; this is why a static analysis for choosing the best representation can be inappropriate, as the performed operations cannot be statically predicted. Therefore, we propose a predictive model in which the software system learns to choose the appropriate data representation, at runtime, based on the effective data usage pattern. This paper describes a novel approach that uses a support vector machine model to dynamically select the most suitable representation for an aggregate according to the software system's execution context. Computational experiments confirm the good performance of the proposed model and indicate the potential of our proposal. The advantages of our approach in comparison with similar existing approaches are also emphasized. | A support vector machine model for intelligent selection of data representations |
S1568494614000465 | Knowledge management (KM) adoption in the supply chain (SC) requires high investment as well as a few changes in the culture of the entire SC. This study proposes a prediction framework based on the fuzzy decision-making trial and evaluation laboratory (DEMATEL) and fuzzy multi-criteria decision-making (FMCDM) methods for KM adoption in the SC. The study first identifies the evaluation criteria for KM adoption in the SC from a literature review and expert opinion. It then uses fuzzy DEMATEL to evaluate the weight of each evaluation criterion, after which the FMCDM method is used to obtain a possible rating of the success of KM adoption in the SC. The proposed approach is helpful for predicting the success of KM adoption in the SC without actually adopting KM in the SC. It also enables organizations to decide whether to initiate KM, restrain adoption or undertake remedial improvements to increase the possibility of successful KM adoption in the SC. This prominent advantage can be considered one of the contributions of this paper. The proposed approach is demonstrated with an empirical case of a hydraulic valve manufacturing organization in India. | A hybrid approach based on fuzzy DEMATEL and FMCDM to predict success of knowledge management adoption in supply chain |
S1568494614000477 | A novel support vector machine (SVM) model combining kernel principal component analysis (KPCA) with a genetic algorithm (GA) is proposed for intrusion detection. In the proposed model, a multi-layer SVM classifier is adopted to estimate whether an action is an attack, and KPCA is used as a preprocessor for the SVM to reduce the dimension of the feature vectors and shorten the training time. In order to reduce the noise caused by feature differences and improve the performance of the SVM, an improved kernel function (N-RBF) is proposed by embedding the mean value and the mean square difference values of feature attributes in the RBF kernel function. The GA is employed to optimize the penalty factor C, the kernel parameter σ and the tube size ɛ of the SVM. In comparison with other detection algorithms, the experimental results show that the proposed model achieves higher predictive accuracy, faster convergence speed and better generalization. | A novel hybrid KPCA and SVM with GA model for intrusion detection |
S1568494614000489 | The uncontrolled firing of neurons in the brain leads to epileptic seizures in patients. A novel scheme to detect epileptic seizures from background electroencephalogram (EEG) signals is proposed in this paper. The scheme is based on the discrete wavelet packet transform, with energy, entropy, kurtosis, skewness, mean, median and standard deviation as the properties used to create signal features for classification. Optimal features are selected using a genetic algorithm (GA), with a support vector machine (SVM) as the classifier providing the objective function values for the GA. Clinical EEG data from epileptic and normal subjects are used in the experiment. The knowledge of a neurologist (medical expert) is utilized to train the system. To evaluate the efficacy of the proposed scheme, a 10-fold cross-validation is implemented, and the detection rate is found to be 100%, with 100% sensitivity and specificity for the data under consideration. The proposed GA-SVM scheme is a novel technique using a hybrid approach with wavelet packet decomposition, a support vector machine and a GA. It is novel in terms of the selection of the feature subset, the use of the SVM classifier as the objective function for the GA, and the improved classification rate. The proposed model can be used in developing and third-world countries where medical facilities are in acute shortage and qualified neurologists are not available. The system can assist neurologists through automated observation and save valuable human expert time. | Genetic algorithms tuned expert model for detection of epileptic seizures from EEG signatures |
S1568494614000490 | An intelligent identification system for mixed anuran vocalizations is developed in this work so that the public can easily consult it online. The raw mixed anuran vocalization samples are first filtered by noise removal, high-frequency compensation, and discrete wavelet transform techniques, in that order. An adaptive end-point detection segmentation algorithm is proposed to effectively separate the individual syllables from the noise. Six features, including spectral centroid, signal bandwidth, spectral roll-off, threshold-crossing rate, spectral flatness, and average energy, are extracted and serve as the input parameters of the classifier. Meanwhile, a decision tree is constructed based on several parameters obtained during sample collection in order to narrow the scope of identification targets. Then, fast-learning neural networks are employed to classify the anuran species based on the feature set chosen by a wrapper feature selection method. A series of experiments was conducted to measure the performance of the proposed system. Experimental results show that the recognition rate of the proposed identification system reaches up to 93.4%. The effectiveness of the proposed identification system for anuran vocalizations is thus verified. | Intelligent feature extraction and classification of anuran vocalizations |
S1568494614000507 | Support vector machine (SVM) is currently state-of-the-art for classification tasks due to its ability to model nonlinearities. However, the main drawback of SVM is that it generates a “black box” model, i.e. it does not reveal the knowledge learnt during training in a human-comprehensible form. The process of converting such opaque models into transparent models is often regarded as rule extraction. In this paper we propose a hybrid approach for extracting rules from SVM for customer relationship management (CRM) purposes. The proposed hybrid approach consists of three phases. (i) In the first phase, SVM-RFE (SVM recursive feature elimination) is employed to reduce the feature set. (ii) The dataset with reduced features is then used in the second phase to obtain the SVM model, and the support vectors are extracted. (iii) Rules are then generated using the Naive Bayes Tree (NBTree) in the final phase. The dataset analyzed in this study concerns churn prediction for bank credit card customers (Business Intelligence Cup 2004) and is highly unbalanced, with 93.24% loyal and 6.76% churned customers. Further, we employed various standard balancing approaches to balance the data and then extracted rules. It is observed from the empirical results that the proposed hybrid outperformed all other techniques tested. As the reduced-feature dataset is used, it is also observed that the proposed approach extracts shorter rules, thereby improving the comprehensibility of the system. The generated rules act as an early warning expert system for the bank management. | Churn prediction using comprehensible support vector machine: An analytical CRM application |
S1568494614000519 | The widespread use and applicability of Evolutionary Algorithms is due in part to the ability to adapt them to a particular problem-solving context by tuning their parameters. This is one of the problems that a user faces when applying an Evolutionary Algorithm to solve a given problem. Before running the algorithm, the user typically has to specify values for a number of parameters, such as population size, selection rate, and probability operators. This paper empirically assesses the performance of an automatic parameter tuning system in order to avoid the problems of time requirements and the interaction of parameters. The system, based on Bayesian Networks and Case-Based Reasoning methodology, estimates the best parameter setting for maximizing the performance of Evolutionary Algorithms. The algorithms are applied to solve a basic problem in constraint-based, geometric parametric modeling, as an instance of general constraint-satisfaction problems. The experimental results demonstrate the validity of the proposed system and its potential effectiveness for configuring algorithms. | Automatic parameter tuning for Evolutionary Algorithms using a Bayesian Case-Based Reasoning system |
S1568494614000520 | Compaction of earth fill is a very important stage of construction projects. The degree of compaction is defined by the relative compaction. The relative compaction of a compacted earth fill is calculated by dividing the dry unit weight obtained from in situ tests by the maximum dry unit weight obtained from laboratory compaction tests. This ratio represents the compaction quality in the field. Numerous test methods, such as sand cone, rubber balloon, nuclear measurements, etc., are available to determine the dry unit weight of soils in the field. It is well known that these methods have disadvantages as well as advantages. This study focuses on the estimation of the dry unit weight of soils from the water contents and P-wave velocities of compacted soils. Multi-layer perceptron (MLP) neural networks and a general linear model (GLM) were used in this study to estimate the dry unit weight of different types of soils. The results of the MLP neural networks were compared with the GLM results. Based on the comparisons, it is found that the MLP generally gives better dry unit weight estimates than the GLM technique. The laboratory experiments and modeling studies showed that a new method for compaction control can be developed based on P-wave velocity to estimate the dry unit weight of compacted soils. | Estimating of the Dry Unit Weight of Compacted Soils Using General Linear Model and Multi-layer Perceptron Neural Networks |
S1568494614000532 | Reservoir operation optimization (ROO) is a complicated dynamically constrained nonlinear problem that is important in the context of reservoir system operation. In this study, improved adaptive particle swarm optimization (IAPSO) is proposed to solve the problem, which involves many conflicting objectives and constraints. The proposed algorithm takes particle swarm optimization (PSO) as the main evolution method. To overcome the premature convergence of PSO, adjusting dynamically the two sensitive parameters of PSO guides the evolution direction of each particle in the evolution process. In the IAPSO method, an adaptive dynamic parameter control mechanism is applied to determine parameter settings. Moreover, a new strategy is proposed to handle the reservoir output constraint of ROO problem. Finally, the feasibility and effectiveness of the proposed IAPSO algorithm are validated by the Three Gorges Project (TGP) with 42.23bkW power generation and XiLuoDo Project (XLDP) with 30.10bkW. Compared with other methods, the IAPSO provides a better operational result with greater effectiveness and robustness, and appears to be better in terms of power generation benefit and convergence performance. Meanwhile, the optimal results could meet output constraint at each interval. | An adaptive particle swarm optimization algorithm for reservoir operation optimization |
S1568494614000544 | This paper presents an efficient hybrid particle swarm optimization algorithm to solve dynamic economic dispatch problems with valve-point effects, by integrating an improved bare-bones particle swarm optimization (BBPSO) with a local searcher called directionally chaotic search (DCS). The improved BBPSO is designed as a basic-level search, which can give a good direction towards optimal regions, while the DCS is used as a fine-tuning operator to locate the optimal solution. An adaptive disturbance factor and a new genetic operator are also incorporated into the improved BBPSO to enhance its search capability. Moreover, a heuristic handling mechanism for constraints is introduced to modify infeasible particles. Finally, the proposed algorithm is applied to 5-, 10- and 30-unit test power systems and several numerical functions, and a comparative study is carried out with other existing methods. The results clarify the significance of the proposed algorithm and verify its performance. | Hybrid bare-bones PSO for dynamic economic dispatch with valve-point effects |
S156849461400057X | Most real-world problems have dynamic characteristics: one or more elements of the underlying model for a given problem, including the objective, the constraints or even environmental parameters, may change over time. Hyper-heuristics are problem-independent meta-heuristic techniques that automate the process of selecting and generating multiple low-level heuristics to solve static combinatorial optimization problems. In this paper, we present a novel hybrid strategy for applying hyper-heuristic techniques to dynamic environments by integrating them with the memory/search algorithm. The memory/search algorithm is an important evolutionary technique that has been applied to various dynamic optimization problems. We validate the performance of our method on both the dynamic generalized assignment problem and the moving peaks benchmark. The former problem is extended from the generalized assignment problem by changing resource consumptions, capacity constraints and costs of jobs over time, while the latter is a well-known synthetic problem that generates and updates a multidimensional landscape consisting of several peaks. Experimental evaluation performed on various instances of the two problems validates that our hyper-heuristic-integrated framework significantly outperforms the memory/search algorithm. | A hyper-heuristic based framework for dynamic optimization problems |
S1568494614000581 | Differential evolution (DE) is an efficient and robust evolutionary algorithm, which has been widely applied to solve global optimization problems. The crossover operator plays a very important role in the performance of DE. However, the commonly used crossover operators of DE depend mainly on the coordinate system and are not rotation-invariant processes. In this paper, covariance matrix learning is presented to establish an appropriate coordinate system for the crossover operator. By doing this, the dependence of DE on the coordinate system is relieved to a certain extent, and the capability of DE to solve problems with high variable correlation is enhanced. Moreover, a bimodal distribution parameter setting is proposed for the control parameters of the mutation and crossover operators, with the aim of balancing the exploration and exploitation abilities of DE. By incorporating the covariance matrix learning and the bimodal distribution parameter setting into DE, this paper presents a novel DE variant, called CoBiDE. CoBiDE has been tested on 25 benchmark test functions, as well as a variety of real-world optimization problems taken from diverse fields including radar systems, power systems, hydrothermal scheduling, spacecraft trajectory optimization, etc. The experimental results demonstrate the effectiveness of CoBiDE for global numerical and engineering optimization. Compared with other DE variants and other state-of-the-art evolutionary algorithms, CoBiDE shows overall better performance. | Differential evolution based on covariance matrix learning and bimodal distribution parameter setting
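A hedged sketch of the coordinate-system idea described above: crossover is carried out in the eigen coordinate system obtained from the population covariance matrix, and the trial vectors are rotated back. This illustrates the general technique only; CoBiDE's exact procedure (which individuals the covariance is learned from, and the bimodal parameter distributions) is not reproduced.

```python
import numpy as np

def crossover_in_eigen_space(target, donor, cr, rng):
    """Binomial crossover applied in an eigen coordinate system.

    target, donor : (n, dim) arrays of target and donor (mutant) vectors
    cr            : crossover rate in [0, 1]
    The eigenvectors of the population covariance matrix define a rotation;
    crossover is done on the rotated vectors and the result is rotated back,
    which removes the dependence on the original coordinate axes.
    """
    cov = np.cov(target, rowvar=False)          # learn covariance from the population
    _, eigvecs = np.linalg.eigh(cov)            # columns are eigenvectors
    t_rot = target @ eigvecs                    # rotate into eigen space
    d_rot = donor @ eigvecs
    n, dim = target.shape
    mask = rng.random((n, dim)) < cr
    jrand = rng.integers(dim, size=n)           # guarantee at least one donor gene
    mask[np.arange(n), jrand] = True
    trial_rot = np.where(mask, d_rot, t_rot)
    return trial_rot @ eigvecs.T                # rotate back

rng = np.random.default_rng(0)
pop = rng.normal(size=(20, 5))
donors = pop + 0.5 * (pop[rng.permutation(20)] - pop[rng.permutation(20)])
trials = crossover_in_eigen_space(pop, donors, cr=0.5, rng=rng)
```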
S1568494614000593 | Stock market prediction is of great interest to stock traders and investors due to the high profit potential in trading stocks. A successful stock purchase or sale generally occurs near a price trend turning point. Thus the prediction of stock market indices and their analysis are important to ascertain whether the next day's closing price will increase or decrease. This paper, therefore, presents a simple IIR filter based dynamic neural network (DNN) and an innovative optimized adaptive unscented Kalman filter for forecasting the price indices of four different stocks, namely the Bombay stock exchange (BSE), the IBM stock market, the RIL stock market, and the Oracle stock market. The weights of the dynamic neural information system are adjusted by four different learning strategies that include gradient calculation, the unscented Kalman filter (UKF), differential evolution (DE), and a hybrid technique (DEUKF) that alternately executes DE and UKF for a few generations. To improve the performance of both the UKF and DE algorithms, adaptation of certain parameters in both algorithms is presented in this paper. After predicting the stock price indices over a one-day to one-week time horizon, the stock market trend is analyzed using several important technical indicators such as the moving average (MA), the stochastic oscillator (%K and %D parameters), and the Williams %R indicator. Extensive computer simulations are carried out with the four learning strategies for prediction of the stock indices and the up or down trends of the indices. From the results it is observed that significant accuracy is achieved using the hybrid DEUKF algorithm in comparison to the others, which include only DE, UKF, and gradient descent, in that order. Comparisons with some of the widely used neural networks (NNs) are also presented in the paper. | A hybrid evolutionary dynamic neural network for stock market trend analysis and prediction using unscented Kalman filter
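For reference, the technical indicators mentioned have standard textbook definitions; the sketch below computes a simple moving average, the stochastic %K/%D oscillator and Williams %R on synthetic prices, independently of the paper's neural model.

```python
import numpy as np

def moving_average(close, window=10):
    """Simple moving average of the closing prices."""
    kernel = np.ones(window) / window
    return np.convolve(close, kernel, mode="valid")

def stochastic_k_d(close, high, low, window=14, d_window=3):
    """Stochastic oscillator: %K and its moving average %D (textbook form)."""
    k = []
    for t in range(window - 1, len(close)):
        hh = high[t - window + 1: t + 1].max()
        ll = low[t - window + 1: t + 1].min()
        k.append(100.0 * (close[t] - ll) / (hh - ll + 1e-12))
    k = np.array(k)
    return k, moving_average(k, d_window)

def williams_r(close, high, low, window=14):
    """Williams %R indicator, ranging from -100 to 0."""
    r = []
    for t in range(window - 1, len(close)):
        hh = high[t - window + 1: t + 1].max()
        ll = low[t - window + 1: t + 1].min()
        r.append(-100.0 * (hh - close[t]) / (hh - ll + 1e-12))
    return np.array(r)

# Toy example with synthetic prices.
rng = np.random.default_rng(2)
close = 100 + np.cumsum(rng.normal(0, 1, 100))
high, low = close + rng.random(100), close - rng.random(100)
k, d = stochastic_k_d(close, high, low)
wr = williams_r(close, high, low)
```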
S156849461400060X | In practice, for example in the review of National Science Foundation proposals and the blind peer review of doctoral dissertations in China, the evaluation experts are requested to provide two types of information: the performance of the evaluation objects and their familiarity with the evaluation areas (called confidence levels). However, existing information aggregation approaches cannot effectively fuse the two types of information described above. In this paper, we focus on the information aggregation issue in situations where there are confidence levels on the aggregated arguments under an intuitionistic fuzzy environment. Firstly, we develop some confidence intuitionistic fuzzy weighted aggregation operators, such as the confidence intuitionistic fuzzy weighted averaging (CIFWA) operator and the confidence intuitionistic fuzzy weighted geometric (CIFWG) operator. Then, based on the Einstein operations, we propose the confidence intuitionistic fuzzy Einstein weighted averaging (CIFEWA) operator and the confidence intuitionistic fuzzy Einstein weighted geometric (CIFEWG) operator. Finally, a practical example about the review of doctoral dissertations in Chinese universities is provided to illustrate the developed intuitionistic fuzzy information aggregation operators. | Intuitionistic fuzzy information aggregation under confidence levels
S1568494614000611 | In recent decades there have been very significant developments in the theoretical understanding of support vector machines (SVMs), in algorithmic strategies for implementing them, and in applications of the approach to practical problems. SVMs, introduced by Vapnik and others in the early 1990s, are machine learning systems that utilize a hypothesis space of linear functions in a high-dimensional feature space, trained with optimization algorithms that implement a learning bias derived from statistical learning theory. This paper reviews the state of the art and focuses on a wide range of applications of SVMs in the field of hydrology. To use SVM-aided hydrological models, whose use has expanded considerably in recent years, comprehensive knowledge about their theory and modelling approaches is necessary. Furthermore, this review provides a brief synopsis of the techniques of SVMs and other emerging ones (hybrid models), which have proven useful in the analysis of various hydrological parameters. Moreover, various examples of successful applications of SVMs for modelling different hydrological processes are also provided. | Support vector machine applications in the field of hydrology: A review
S1568494614000623 | Complex networks have become an important way to analyze the massive disordered information of complex systems, and their community structure property is indispensable for discovering the potential functionality of these systems. Research on uncovering the community structure of networks has attracted great attention from various fields in recent years. Many community detection approaches have been proposed based on modularity optimization. Among them, algorithms which optimize one initial solution to a better one are prone to getting trapped in local optima. Moreover, algorithms which are sensitive to the optimization order tend to obtain unstable solutions. In addition, algorithms which simultaneously optimize a population of solutions have high computational complexity, and thus they are difficult to apply to practical problems. To solve the above problems, in this study, we propose a fast memetic algorithm with multi-level learning strategies for community detection by optimizing modularity. The proposed algorithm adopts a genetic algorithm to optimize a population of solutions and uses the proposed multi-level learning strategies to accelerate the optimization process. The multi-level learning strategies are devised based on the potential knowledge of the node, community and partition structures of networks, and they work on the network at the node, community and network-partition levels, respectively. Extensive experiments on both benchmarks and real-world networks demonstrate that, compared with state-of-the-art community detection algorithms, the proposed algorithm is effective at discovering the community structure of networks. | Multi-level learning based memetic algorithm for community detection
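Since the algorithm optimizes modularity, a compact reference implementation of the standard modularity measure Q for a given partition may be helpful; this is the textbook definition, not the paper's multi-level learning strategies.

```python
import numpy as np

def modularity(adj, labels):
    """Newman's modularity Q of a partition of an undirected graph.

    adj    : symmetric (n, n) adjacency matrix (weights allowed)
    labels : length-n array assigning each node to a community
    Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m] * delta(c_i, c_j)
    """
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)                 # node degrees
    two_m = adj.sum()                   # equals 2m for an undirected graph
    same = np.equal.outer(labels, labels)
    return float(((adj - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two triangles joined by a single edge: the natural 2-community split.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))  # ~0.357
```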
S1568494614000714 | In real projects, the trade-off between project cost and project completion time, and the environmental uncertainty, are aspects of considerable importance for managers. For complex environments with more than one type of uncertainty, this paper presents three types of time–cost trade-off models, in which the project environment is described by introducing fuzzy random theory. The expected value and the chance measure of a fuzzy random variable are introduced for modeling the problem under different decision-making criteria. After that, this paper is devoted to designing a search method integrating fuzzy random simulation and a genetic algorithm for finding quasi-optimal schedules. Finally, some numerical examples are given to demonstrate the effectiveness of the designed method for solving the proposed models. | Modeling project time–cost trade-off in fuzzy random environment
S1568494614000738 | The multistage hybrid flow shop (HFS) scheduling problem is considered in this paper. Hybrid flow shop scheduling problems have been proved to be NP-hard. A recently developed cuckoo search (CS) metaheuristic algorithm is presented in this paper to minimize the makespan for HFS scheduling problems. In the improved cuckoo search (ICS) algorithm, the NEH constructive heuristic is incorporated to build the initial solutions, so that optimal or near-optimal solutions can be obtained rapidly. The proposed algorithm is validated with data from a leading furniture manufacturing company. Computational results show that the ICS algorithm outperforms many other metaheuristics. | Improved cuckoo search algorithm for hybrid flow shop scheduling problems to minimize makespan
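NEH is a well-known constructive heuristic for makespan minimization; the sketch below gives its classic permutation flow shop form (the paper adapts it to the hybrid flow shop setting, which is not reproduced here).

```python
import numpy as np

def makespan(p, order):
    """Completion time of the last job on the last machine for a permutation
    flow shop with processing-time matrix p of shape (n_jobs, n_machines)."""
    n_machines = p.shape[1]
    c = np.zeros(n_machines)
    for j in order:
        c[0] += p[j, 0]
        for m in range(1, n_machines):
            c[m] = max(c[m], c[m - 1]) + p[j, m]
    return c[-1]

def neh(p):
    """Classic NEH: sort jobs by decreasing total processing time, then insert
    each job at the position that currently minimizes the makespan."""
    jobs = list(np.argsort(-p.sum(axis=1)))
    seq = [jobs[0]]
    for j in jobs[1:]:
        candidates = [seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(p, s))
    return seq, makespan(p, seq)

# Toy instance: 4 jobs, 3 machines.
p = np.array([[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]])
print(neh(p))
```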
S156849461400074X | This paper proposes a new evolutionary algorithm for continuous non-linear optimization problems. The optimization algorithm is inspired by the procedure of trading shares on the stock market and is called the exchange market algorithm (EMA). Observing how elite individuals trade shares on the stock market forms the basis of this evolutionary optimization algorithm. The EMA operates in two different modes: in the first mode there is no oscillation in the market, whereas in the second mode the market oscillates. At the end of each mode, the individuals' fitnesses are evaluated. In the first mode, the algorithm's task is to draw individuals toward successful ones, while in the second mode the algorithm searches for optimal points. In this algorithm, the generation and organization of random numbers are handled effectively thanks to two absorbent operators and two searching operators, giving the algorithm a high capability for locating the global optimum. To evaluate the performance of the proposed algorithm, it has been implemented on 12 different benchmark functions with 10, 20, 30 and 50 decision variables. The results obtained for 30 variables are compared with the results obtained by eight recent and efficient algorithms. The results indicate the ability of the proposed algorithm to find the global optimum of the functions in every run of the program. | Exchange market algorithm
S1568494614000751 | A new design equation is proposed for the prediction of the shear strength of reinforced concrete (RC) beams without stirrups using an innovative linear genetic programming methodology. The shear strength was formulated in terms of several effective parameters such as the shear span to depth ratio, concrete cylinder strength at the date of testing, amount of longitudinal reinforcement, lever arm, and maximum specified size of coarse aggregate. A comprehensive database containing 1938 experimental test results for RC beams was gathered from the literature to develop the model. The performance and validity of the model were further tested using several criteria. An efficient strategy was considered to guarantee the generalization of the proposed design equation. For further verification, sensitivity and parametric analyses were conducted. The results indicate that the derived model is an effective tool for the estimation of the shear capacity of members without stirrups (R = 0.921). The prediction performance of the proposed model was found to be better than that of several existing building codes. | Linear genetic programming for shear strength prediction of reinforced concrete beams without stirrups
S1568494614000775 | Detection and diagnosis of faults in the cement industry is of great practical significance and paramount importance for the safe operation of the plant. In this paper, the design and development of Adaptive Neuro-Fuzzy Inference System (ANFIS) based fault detection and diagnosis for the pneumatic valve used in the cooler water spray system of a cement plant are discussed. The ANFIS model is used to detect and diagnose the occurrence of various faults in the pneumatic valve used in the cooler water spray system. The training and testing data required for model development were generated under normal and faulty conditions of the pneumatic valve in a real-time laboratory experimental setup. The performance of the developed ANFIS model is compared with an MLFFNN (Multilayer Feed Forward Neural Network) trained by the back propagation algorithm. From the simulation results it is observed that ANFIS performed better than the ANN. | Fault detection and diagnosis of pneumatic valve using Adaptive Neuro-Fuzzy Inference System approach
S1568494614000787 | Replicating and comparing computational experiments in applied evolutionary computing may sound like a trivial task. Unfortunately, it is not so. Many papers do not document experimental settings in sufficient detail, and hence replication of experiments is almost impossible. Additionally, some work fails to satisfy the rules of thumb for experimentation that hold across all disciplines, such as that all experiments should be conducted and compared under the same or stricter conditions. Also, because of the stochastic properties inherent in evolutionary algorithms (EAs), experimental results should always be statistically rich enough. Moreover, the comparisons conducted should be based on suitable performance measures and show the statistical significance of one approach over others. Otherwise, the derived conclusions may fail to have scientific merit. The primary objective of this paper is to offer some preliminary guidelines and reminders to assist researchers in conducting replications and comparisons of computational experiments when solving practical problems with EAs in the future. The common pitfalls are explained using concrete examples found in papers that solve economic load dispatch problems with EAs. | Replication and comparison of computational experiments in applied evolutionary computing: Common pitfalls and guidelines to avoid them
S1568494614000799 | This paper proposes a modified discrete shuffled frog leaping algorithm (MDSFL) to solve 0/1 knapsack problems. The proposed algorithm includes two important operations: the local search of the ‘particle swarm optimization’ technique and the competitive mixing of information of the ‘shuffled complex evolution’ technique. Different types of knapsack problem instances are generated to test the convergence property of MDSFL, and the results show that it is very effective in solving small to medium sized knapsack problems. Further, computational experiments with a set of large-scale instances show that MDSFL can be an efficient alternative for solving tightly constrained 0/1 knapsack problems. | Shuffled frog leaping algorithm and its application to 0/1 knapsack problem
S1568494614000805 | Data mining is the process of discovering meaningful new correlations, patterns and trends by sifting through large amounts of data, using pattern recognition technologies as well as statistical and mathematical techniques. Cluster analysis is one of the major data analysis techniques widely applied in many practical applications in emerging areas of data mining. Two of the most widely used partition-based clustering algorithms, namely k-Means and Fuzzy C-Means, are analyzed in this research work. These algorithms are implemented in a practical setting to analyze their performance in terms of computational time. Telecommunication data is the source data for this analysis; connection-oriented broadband data is used to assess the performance of the chosen algorithms. The distances (Euclidean distances) between the server locations and their connections are rearranged after processing the data. The computational complexity (execution time) of each algorithm is analyzed and the results are compared with one another. The comparison in this practical setting shows that the results obtained are accurate and easy to understand and, above all, that the time taken to process the data is substantially higher for the Fuzzy C-Means algorithm than for k-Means. | Performance based analysis between k-Means and Fuzzy C-Means clustering algorithms for connection oriented telecommunication data
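A hedged illustration of the kind of runtime comparison reported: the snippet below times scikit-learn's k-Means against a plain NumPy Fuzzy C-Means on synthetic Gaussian blobs, since the telecommunication data set itself is not available here.

```python
import time
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain NumPy Fuzzy C-Means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)           # fuzzy memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

X, _ = make_blobs(n_samples=5000, centers=3, random_state=0)

t0 = time.perf_counter()
KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
t_kmeans = time.perf_counter() - t0

t0 = time.perf_counter()
fuzzy_c_means(X, c=3)
t_fcm = time.perf_counter() - t0

print(f"k-Means: {t_kmeans:.3f}s   Fuzzy C-Means: {t_fcm:.3f}s")
```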
S1568494614000817 | Rating scales are essential interfaces for many research areas such as decision making and recommendation. Some issues concerning their syntactic and semantic structures are still open to discussion. This research proposes a Compound Linguistic Scale (CLS), a two-dimensional rating scale, as a promising rating interface. The CLS comprises a Compound Linguistic Variable (CLV) and a Deductive Rating Strategy (DRS). The CLV can ideally produce 21 to 73 ((7±2)((7±2)−1)+1) ordinal-in-ordinal rating items, which extends the classic rating scales usually built on the 7±2 principle, to better reflect the raters' preferences, whilst DRS is a double-step rating approach in which a rater chooses a compound linguistic term among two-dimensional options on a dynamic rating interface. The numerical analyses show that the proposed CLS can resolve the rating dilemma for a single rater and more accurately reflect consistency among various raters. CLS can be applied to surveys, questionnaires, psychometrics, recommender systems and decision analysis in various application domains. | Compound Linguistic Scale
S1568494614000829 | In a recent paper, Kaur and Kumar (2012) proposed a new method based on a ranking function for solving the fuzzy transportation problem (FTP), assuming that the values of the transportation costs are represented by generalized trapezoidal fuzzy numbers. Here it is shown that once the ranking function is chosen, the FTP is converted into a crisp one, which is easily solved by the standard transportation algorithms. The main contribution here is the reduction of the computational complexity of the existing method. By solving two application examples, it is shown that it is possible to find the same optimal solution without solving any FTP. Since the proposed approach is based on the classical approach, it is very easy for decision makers to understand and to apply to real-life transportation problems. | A simplified new approach for solving fuzzy transportation problems with generalized trapezoidal fuzzy numbers
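To illustrate how a ranking function reduces the fuzzy problem to a crisp one, the sketch below uses one commonly cited ranking for generalized trapezoidal fuzzy numbers (the average of the four defining points scaled by the height) and then solves the resulting crisp transportation problem by linear programming. The specific ranking function used by Kaur and Kumar may differ, and the cost, supply and demand values are toy data.

```python
import numpy as np
from scipy.optimize import linprog

def rank_trapezoidal(a, b, c, d, w=1.0):
    """A simple centroid-style ranking of a generalized trapezoidal fuzzy
    number (a, b, c, d; w): the average of the four points scaled by w.
    This is one common choice; other ranking functions appear in the literature."""
    return w * (a + b + c + d) / 4.0

# Fuzzy unit transportation costs (toy data), reduced to crisp ranks.
fuzzy_costs = [[(1, 2, 3, 4, 1.0), (4, 5, 6, 7, 1.0)],
               [(2, 3, 4, 5, 1.0), (1, 1, 2, 2, 1.0)]]
cost = np.array([[rank_trapezoidal(*f) for f in row] for row in fuzzy_costs])

supply = np.array([30.0, 40.0])
demand = np.array([35.0, 35.0])

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                      # each supply row must be shipped out
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                      # each demand column must be met
    col = np.zeros(m * n); col[j::n] = 1
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (m * n), method="highs")
print(res.x.reshape(m, n), res.fun)
```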
S1568494614000830 | This paper's proposal is to show some significant results obtained by applying the optimization algorithm known as Fuzzy Adaptive Simulated Annealing (Fuzzy ASA) to the task of finding all Nash equilibria of normal form games. To that end, a special version of Fuzzy ASA, which utilizes space-filling curves to find good seeds, is applied to several well-known strategic games, showing its effectiveness in obtaining all Nash equilibria in all cases. The results are compared to previous work that also used computational intelligence techniques to solve the same problem but could not find all equilibria in all tests. Game theory is a very important subject, modeling interactions between generic agents, and the Nash equilibrium represents a powerful concept portraying situations in which joint strategies are optimal in the sense that no player can benefit from changing her/his strategy while the other players keep theirs unchanged. So, new techniques are always welcome, mainly those that can find the whole set of solutions for a given strategic game. | Establishing Nash equilibria of strategic games: a multistart Fuzzy Adaptive Simulated Annealing approach
S1568494614000842 | The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units (DMUs) with exact values of inputs and outputs. In real-world problems, however, inputs and outputs typically have some level of fuzziness. To analyze a DMU with fuzzy input/output data, previous studies provided the fuzzy DEA (FDEA) model and proposed an associated evaluation approach. Nonetheless, numerous deficiencies remain to be addressed, including the α-cut approaches, the types of fuzzy numbers, and the ranking techniques. Moreover, a fuzzy sample DMU (SDMU) still cannot be evaluated with the FDEA model. Therefore, the present paper proposes a generalized FDEA model which can evaluate an SDMU and also covers the traditional FDEA model. Five evaluation methods are provided; they not only improve the types of FDEA model, the types of fuzzy number, and the α-cut approach, but also introduce, for the first time, a new vector-based evaluation method. Finally, a related algorithm and ranking methods are provided to test the new methods. A numerical experiment is used to demonstrate and compare the results with those obtained using alternative approaches. | Generalized fuzzy data envelopment analysis methods
S1568494614000866 | This paper explores the significance of stereo-based stochastic feature compensation (SFC) methods for robust speaker verification (SV) in mismatched training and test environments. Gaussian Mixture Model (GMM)-based SFC methods developed in the past have been restricted solely to speech recognition tasks. Application of these algorithms in an SV framework for background noise compensation is proposed in this paper. A priori knowledge about the test environment and the availability of stereo training data are assumed. During the training phase, Mel frequency cepstral coefficient (MFCC) features extracted from a speaker's noisy and clean speech utterances (stereo data) are used to build front-end GMMs. During the evaluation phase, noisy test utterances are transformed on the basis of a minimum mean squared error (MMSE) or maximum likelihood (MLE) estimate, using the target speaker GMMs. Experiments conducted on the NIST-2003-SRE database with clean speech utterances artificially degraded with different types of additive noise reveal that the proposed SV systems strictly outperform baseline SV systems in mismatched conditions across all noisy background environments. | Stochastic feature compensation methods for speaker verification in noisy environments
S1568494614000878 | This paper presents a method for the optimal sizing of truss structures based on a refined self-adaptive step-size search (SASS) algorithm. An elitist self-adaptive step-size search (ESASS) algorithm is proposed wherein two approaches are considered for improving (i) convergence accuracy and (ii) computational efficiency. In the first approach, additional randomness is incorporated into the sampling step of the technique to preserve the exploration capability of the algorithm during the optimization. Furthermore, an adaptive sampling scheme is introduced to enhance the quality of the final solutions. In the second approach, the computational efficiency of the technique is improved by avoiding unnecessary analyses throughout the optimization process using the so-called upper bound strategy (UBS). The numerical results indicate the efficiency of the proposed ESASS algorithm. | An elitist self-adaptive step-size search for structural design optimization
S156849461400088X | It is undeniably crucial for a firm to be able to forecast the sales volume of new products. However, current economic environments invariably involve uncertain factors and rapid fluctuations, and decision makers must draw conclusions from minimal data. Previous studies combine scenario analysis and technology substitution models to forecast the market share of multigenerational technologies. However, a technology substitution model based on a logistic curve will not always fit the S curve well. Therefore, based on historical data and data forecast by both the Scenario and Delphi methods, a two-stage fuzzy piecewise logistic growth model with multiple objective programming is proposed herein. The piecewise concept is adopted to reflect the market impact of a new product, so that the effective length of the sales forecasting intervals can be determined even when handling data with large variation or of small size. To demonstrate the model's performance, two cases in the television and telecommunication industries are treated using the proposed method and either the technology substitution model or the Norton and Bass diffusion model. A comparison of the results shows that the proposed model outperforms the technology substitution model and the Norton and Bass diffusion model. | A two stage fuzzy piecewise logistic model for penetration forecasting
S1568494614000891 | A novel adaptive local search method is developed for hybrid evolutionary multiobjective algorithms (EMOA) to improve convergence to the Pareto front in multiobjective optimization. The concepts of local and global effectiveness of a local search operator are suggested for the dynamic adjustment of adaptation parameters. Local effectiveness is measured by quantitative comparison of the improvements in convergence made by the local and genetic operators based on a composite objective. Global effectiveness is determined by the ratio of the number of local search solutions to genetic search solutions in the nondominated solution set. To be consistent with the adaptation strategy, a new directional local search operator, eLS (efficient Local Search), minimizing the composite objective function is designed. The search direction is determined using a centroid solution of existing neighbor solutions without explicitly calculating gradient information. The search distance of eLS decreases adaptively as the optimization process converges. The performance of the hybrid method NSGA-II+eLS is compared with the baseline NSGA-II and NSGA-II+HCS1 on multiobjective test problems such as the ZDT and DTLZ functions. The neighborhood radius and local search probability are selected as adaptation parameters. Results show that the present adaptive local search strategy can provide significant convergence enhancement over the baseline EMOA by dynamically adjusting the adaptation parameters while monitoring the properties of the multiobjective problems on the fly. | Adaptive directional local search strategy for hybrid evolutionary multiobjective optimization
S1568494614000908 | To maintain the efficient and reliable operation of power systems, it is extremely important that transmission line faults be detected and located in a reliable and accurate manner. A number of mathematical and intelligent techniques are available in the literature for estimating the fault location. However, the results are not satisfactory due to the wide variation in operating conditions such as system loading level, fault inception instant, fault resistance, and the dc offset and harmonic content in the transient signal of the faulted transmission line. In view of the above, a new approach based on a generalized neural network (GNN) with the wavelet transform is presented for fault location estimation. The wavelet transform is used to extract features of the faulty current signals in terms of their standard deviation. The obtained features are used as input to the GNN model for estimating the location of the fault in a given transmission system. Results obtained from the GNN model are compared with ANN and well-established mathematical models and found to be more accurate. | Generalized neural network and wavelet transform based approach for fault location estimation of a transmission line
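The feature-extraction step (standard deviation of wavelet coefficients of the fault current) can be sketched with PyWavelets; the wavelet family, decomposition level and the synthetic fault signal below are illustrative choices, not those of the paper.

```python
import numpy as np
import pywt

def std_wavelet_features(signal, wavelet="db4", level=4):
    """Standard deviation of the approximation and detail coefficients of a
    multilevel discrete wavelet decomposition, used as a compact feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.std(c) for c in coeffs])

# Synthetic faulted-phase current: a 50 Hz tone plus a decaying dc offset and noise.
fs = 5000
t = np.arange(0, 0.2, 1 / fs)
current = (np.sin(2 * np.pi * 50 * t)
           + 0.8 * np.exp(-t / 0.05)
           + 0.05 * np.random.randn(len(t)))
features = std_wavelet_features(current)
print(features)   # one value per sub-band, fed to the location estimator
```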
S156849461400091X | In this paper, some multi-item inventory models for deteriorating items are developed over a random planning horizon under inflation and the time value of money, with space and budget constraints. The proposed models allow a stock-dependent consumption rate and partially backlogged shortages. Here the time horizon is a random variable with an exponential distribution. In one model the inventory parameters other than the planning horizon are deterministic; in the other, the deterioration rate and the net value of money are fuzzy, while the available budget and space are fuzzy and random fuzzy, respectively. Fuzzy and random fuzzy constraints have been defuzzified using possibility and possibility–probability chance constraint techniques. The fuzzy objective function has also been defuzzified using a possibility chance constraint against a goal. Both optimization problems are formulated for profit maximization and solved using a genetic algorithm (GA) and a fuzzy simulation based genetic algorithm (FAGA). The models are illustrated with some numerical data. Results for different achievement levels are obtained and a sensitivity analysis on the expected profit function is also presented. Scope and purpose: The traditional inventory model considers the ideal case in which depletion of inventory is caused by a constant demand rate. However, for more sales, inventory should be maintained at a higher level. Of course, this would result in higher holding or procurement costs. Also, in many real situations, during a shortage period, the longer the waiting time is, the smaller the backlogging rate would be. For instance, for fashionable commodities and high-tech products with short product life cycles, the willingness of a customer to wait for backlogging diminishes with the length of the waiting time. Most of the classical inventory models did not take into account the effects of inflation and the time value of money. At present, however, the economic situation of most countries has deteriorated considerably due to large-scale inflation and the consequent sharp decline in the purchasing power of money, so the effects of inflation and the time value of money can no longer be ignored. The purpose of this article is to maximize the expected profit of two inventory control systems over the random planning horizon. | Multi-item partial backlogging inventory models over random planning horizon in random fuzzy environment
S1568494614000921 | In this paper, a type-2 fuzzy logic system (T2FLS) controller using the feedback error learning (FEL) strategy is proposed for load frequency control (LFC) in the restructured power system. The original FEL strategy consists of an intelligent feedforward controller (INFC) (i.e., an artificial neural network (ANN)) and a conventional feedback controller (CFC). The CFC, acting as a general feedback controller to guarantee the stability of the system, plays a crucial role in the transient state. The INFC is adopted in the forward path to take over the control problem in the steady state. In this work, to improve the performance of the FEL strategy, the T2FLS is adopted instead of the ANN in the INFC part due to its ability to model more effectively the uncertainties that may exist in the rules and in the measured sensor data. The proposed FEL controller has been compared with a type-1 fuzzy logic system (T1FLS)-based FEL controller and a proportional-integral-derivative (PID) controller to highlight the effectiveness of the proposed method. | Application of type-2 fuzzy logic system for load frequency control using feedback error learning approaches
S1568494614000933 | Nowadays, most road navigation systems’ planning of optimal routes is conducted by the On Board Unit (OBU). If drivers want to obtain information about the real-time road conditions, a Traffic Message Channel (TMC) module is also needed. However, this module can only provide the current road conditions, as opposed to actually planning appropriate routes for users. In this work, the concept of cellular automata is used to collect real-time road conditions and derive the appropriate paths for users. Notably, type-2 fuzzy logic is adopted for path analysis for each cell established in the cellular automata algorithm. Besides establishing the optimal routes, our model is expected to be able to automatically meet the personal demands of all drivers, achieve load balancing between all road sections to avoid the problem of traffic jams, and allow drivers to enjoy better driving experiences. A series of simulations were conducted to compare the proposed approach with the well-known A* Search algorithm and the latest state-of-the-art path planning algorithm found in the literature. The experimental results demonstrate that the proposed approach is scalable in terms of the turnaround times for individual users. The practicality and feasibility of applying the proposed approach in the real-time environment is thus justified. | Application of cellular automata and type-2 fuzzy logic to dynamic vehicle path planning |
S1568494614000945 | This paper addresses the strategic level of hybrid Make-To-Stock (MTS)/Make-To-Order (MTO) production contexts using the Fuzzy Analytic Network Process (FANP). The decisions involved at this level are family formation and order partitioning. First, a family formation procedure is developed; then, a fuzzy analytic network process is proposed to tackle the partitioning decision. Since strategic decisions usually deal with the uncertainty and ambiguity of data as well as experts' and managers' linguistic opinions, the proposed model is equipped with fuzzy set theory. An important attribute of the model is its generality, due to the diverse decision factors which are both elicited from the literature and developed by the authors themselves. Finally, the model is validated by applying it to a real industrial case study. | Hybrid MTS/MTO order partitioning framework based upon fuzzy analytic network process
S1568494614000957 | Quantitative daily rainfall forecasting for weather-related natural disaster prevention, derived from the SACZ-ULCV weather pattern, is proposed in this paper by using intertwined statistical downscaling (SD) and soft computing (SC) approaches. Fuzzy statistical downscaling (FSD) is first introduced and then employed to deal with the SACZ-ULCV atmospheric circulation-type specific weather pattern in support of daily precipitation (rainfall) forecasting. This paper also addresses the performance comparison of the FSD and neural statistical downscaling (NSD) approaches when taking into account 12 major urban centers all over the state of São Paulo, Brazil, for the summer period. The SACZ-ULCV summer pattern is identified in meteorological satellite images when the cloudiness of the Brazilian Northeast upper level cyclonic vortices (ULCV) meets the South Atlantic convergence zone (SACZ). By increasing the convection and cloudiness over the Southeast region of Brazil, the SACZ-ULCV causes severe rainfall and thunderstorms that impact the population. Finding a way to anticipate these extreme rainfall events is of vital importance for minimizing or avoiding disasters, and saving lives. Daily rainfall forecasts had their performance improved by using either the proposed FSD or the NSD, in comparison with the multilinear regression ETA model. The results demonstrate that the FSD and the NSD are feasible alternatives for establishing a correspondence between meteorological and thermodynamic variables and the daily rainfall variable. | Neural network and fuzzy logic statistical downscaling of atmospheric circulation-type specific weather pattern for rainfall forecasting
S1568494614001069 | This paper describes how low-cost embedded controllers for robot navigation can be obtained by using a small number of if-then rules (exploiting the cascade connection of rule bases) that apply the Takagi–Sugeno fuzzy inference method and employ fuzzy sets represented by normalized triangular functions. The rules combine heuristic and fuzzy knowledge with numerical data obtained from a geometric analysis of the control problem that considers the kinematic and dynamic constraints of the robot. The numerical data allow tuning the fuzzy symbols used in the rules to optimize the controller performance. From the implementation point of view, very few computational and memory resources are required: standard logical, addition, and multiplication operations and a few data that can be represented by integer values. This is illustrated with the design of a controller for the safe navigation of an autonomous car-like robot among possible obstacles toward a goal configuration. Implementation results for an FPGA embedded system based on a general-purpose soft processor confirm that the percentage reduction in clock cycles is drastic thanks to the proposed neuro-fuzzy techniques. Simulation and experimental results obtained with the robot confirm the efficiency of the designed controller. The design methodology has been supported by the CAD tools of the Xfuzzy 3 environment and by the Embedded System Tools from Xilinx. | Neuro-fuzzy techniques to optimize an FPGA embedded controller for robot navigation
S1568494614001070 | The objective of this paper is to develop five hybrid metaheuristic algorithms, including three hybrid ant colony optimization (hACO) variants, and compare their performances in two related applications: unrelated parallel machine scheduling and inbound truck sequencing in a multi-door cross docking system in consideration of sequence dependent setup, and both zero and nonzero release time. The three hACO variants were modified and adapted from the existing literature and they differ mainly in how a solution is coded and decoded, how a pheromone matrix is represented, and the local search methods employed. The other two hybrids are newly constructed hybrid simulated annealing (hSA) algorithms, which are built based on the authors’ knowledge and experience. The evaluation criteria are computational time and the objective function value, i.e., makespan. Based on the results of computational experiments the simulated annealing-tabu search hybrid turns out to be the best if maximal CPU time is used as the stopping criterion and the 2-stage hACO variant is the best if maximal number of evaluations is the stopping criterion. The contributions of this paper are: (i) being the first to carry out a comparative study of hybrid metaheuristics for the two selected applications, (ii) being the first to consider nonzero truck arrival time in multi-door cross docking operations, (iii) identifying which hACO variant is the best among the three, and (iv) investigating the effect of release time on the makespan. | A comparison of five hybrid metaheuristic algorithms for unrelated parallel-machine scheduling and inbound trucks sequencing in multi-door cross docking systems |
S1568494614001082 | In this study we propose an improved learning algorithm based on the resource allocating network (RAN) for text categorization. RAN is a promising neural network with a single-hidden-layer structure based on radial basis functions. We first use a k-means clustering based method to determine the initial centers in the hidden layer. Such a method can effectively overcome the local-optimum limitation of clustering algorithms. Subsequently, in order to improve the novelty criteria of RAN, we propose a root mean square (RMS) sliding window method which can reduce the underlying influence of undesirable noise data. Through further research on the learning process of RAN, we divide it into a preliminary study phase and a subsequent study phase. The former phase initializes the preliminary structure of RAN and decreases the complexity of the network, while the latter phase refines its learning ability and improves the classification accuracy. Such a compact network structure decreases the computational complexity and maintains a higher convergence rate. Moreover, a latent semantic feature selection method is utilized to organize documents. This method reduces the input scale of the network and reveals the latent semantics between features. Extensive experiments are conducted on two benchmark datasets, and the results demonstrate the superiority of our algorithm in comparison with state-of-the-art text categorization algorithms. | Taking advantage of improved resource allocating network and latent semantic feature selection approach for automated text categorization
S1568494614001094 | Algorithms for deriving the rule base in Linguistic Fuzzy-Rule Based Systems usually proceed by selecting a set of candidate rules and, afterwards, finding both a subset of them and a combination of values for their consequents. Because of its cost, the latter process can be approached by using metaheuristic techniques such as genetic algorithms. However, existing works show that, if dealing with Mamdani rules – where the consequent is a linguistic label –, a basic local search clearly outperforms the genetic algorithm. In this work we aim to develop a local search algorithm to carry out the described process for the case of TSK-0 fuzzy rules, where the consequent is a real number. The experimental results show that some of the proposed algorithms clearly improve upon the state-of-the-art ones in terms of precision and number of rules, whereas learning times are fairly competitive and are orders of magnitude lower than those required by the genetic algorithm. | Learning TSK-0 linguistic fuzzy rules by means of local search algorithms |
S1568494614001100 | This study proposes a new approach based on a hybrid algorithm combining Improved Quantum-behaved Particle Swarm Optimization (IQPSO) and simplex algorithms. The Quantum-behaved Particle Swarm Optimization (QPSO) algorithm is the main optimizer, which can give a good direction toward the globally optimal region, while the Nelder–Mead simplex method (NM) is used as a local search to fine-tune the solution obtained by QPSO. The proposed improved hybrid QPSO algorithm is tested on several benchmark functions and performs better than particle swarm optimization (PSO), QPSO and weighted QPSO (WQPSO). To assess the effectiveness and feasibility of the proposed method on real problems, it is used for solving power system load flow problems, demonstrated on different standard and ill-conditioned test systems including the IEEE 14-, 30- and 57-bus test systems, and compared with the conventional Newton–Raphson (NR) method, PSO and several versions of the QPSO algorithm. Furthermore, the proposed hybrid algorithm is also applied to load flow problems that consider the reactive limits at generation buses. Simulation results prove the robustness and better convergence of IQPSOS under normal and critical conditions, when conventional load flow methods fail. | A hybrid Improved Quantum-behaved Particle Swarm Optimization–Simplex method (IQPSOS) to solve power system load flow problems
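A minimal sketch of the hybridization idea: a bare QPSO global search whose best solution is then refined with SciPy's Nelder–Mead routine. The contraction–expansion schedule and the simple mean-best attractor below are common textbook choices, not necessarily the authors' improved variant.

```python
import numpy as np
from scipy.optimize import minimize

def qpso_with_simplex(obj, bounds, n_particles=30, n_iter=150, seed=0):
    """Bare quantum-behaved PSO followed by Nelder-Mead refinement (sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for t in range(n_iter):
        beta = 1.0 - 0.5 * t / n_iter          # contraction-expansion coefficient
        mbest = pbest.mean(axis=0)              # mean of personal bests
        phi = rng.random((n_particles, dim))
        p = phi * pbest + (1 - phi) * gbest     # local attractors
        u = rng.random((n_particles, dim)) + 1e-12
        sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
        x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u), lo, hi)
        f = np.array([obj(q) for q in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()

    # Local fine-tuning with the Nelder-Mead simplex method.
    res = minimize(obj, gbest, method="Nelder-Mead")
    return res.x, res.fun

best_x, best_f = qpso_with_simplex(lambda z: float(np.sum((z - 1.0) ** 2)),
                                   np.array([[-10.0, 10.0]] * 4))
```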
S1568494614001112 | The Capacitated Arc Routing Problem (CARP) has attracted the attention of many researchers during the last few years, because it has wide applications in the real world. Recently, a Decomposition-Based Memetic Algorithm for Multi-Objective CARP (D-MAENS) has been demonstrated to be a competitive approach. However, the replacement mechanism and the assignment mechanism for the offspring in D-MAENS remain to be improved. First, performing replacement only after all the offspring are generated decreases the convergence speed of D-MAENS. Second, the representatives of the sub-problems are reassigned at each generation by considering only one objective function. In response to these issues, this paper presents an improved D-MAENS for Multi-Objective CARP (ID-MAENS). The two improvements of the proposed algorithm are as follows: (1) the replacement of solutions is done immediately once an offspring is generated, following the steady-state evolutionary algorithm scheme; the new offspring accelerate the convergence speed; (2) elitism is implemented by using an archive to maintain the current best solution in each decomposition direction during the search, and these elite solutions can provide helpful information for solving their neighboring sub-problems through cooperation. Experimental results on the large-scale egl benchmark instances show that the proposed algorithm performs significantly better than D-MAENS on 23 out of the 24 instances. Moreover, ID-MAENS finds all the best nondominated solutions on 13 egl instances. In the last section of this paper, ID-MAENS also proves to be competitive with some state-of-the-art single-objective CARP algorithms in terms of solution quality and computational efficiency. | An Improved Decomposition-Based Memetic Algorithm for Multi-Objective Capacitated Arc Routing Problem
S1568494614001124 | In this paper, a color difference based fuzzy filter is presented for fixed- and random-valued impulse noise. A two-stage noise detection scheme is applied to detect noise efficiently, whereas for noise removal an improved Histogram based Fuzzy Color (HFC) filter is presented. Pixels detected as noisy by the detection scheme are treated as candidates for noise removal. Candidate noisy pixels are then processed using the modified Histogram based Fuzzy Color Filter to estimate their non-noisy values. The idea of using multiple fuzzy membership functions is presented, so that the membership function best suited to the local image statistics can be used automatically. In the proposed technique we use three different types of fuzzy membership functions (bell-shaped, trapezoidal-shaped, and triangular-shaped), and their fuzzy number construction algorithms are proposed. Experimentation is also performed with three, five, and seven membership functions. The type and number of suitable fuzzy membership functions are then identified for noise removal. Comparison with existing filtering techniques is carried out on the basis of objective quantitative measures including the structural similarity index measure (SSIM) and the peak signal-to-noise ratio (PSNR). Simulations show that this filter is superior to the existing state-of-the-art filtering techniques in removing fixed- and random-valued impulse noise while retaining the details of the image content. | Color differences based fuzzy filter for extremely corrupted color images
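For reference, the three membership-function shapes mentioned (triangular, trapezoidal and generalized bell) have standard closed forms; the sketch below gives these textbook definitions only, not the paper's fuzzy-number construction algorithms.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership: 0 at a and c, 1 at the peak b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12),
                              (c - x) / (c - b + 1e-12)), 0.0, 1.0)

def trapezoidal(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    return np.clip(np.minimum.reduce([(x - a) / (b - a + 1e-12),
                                      np.ones_like(x, dtype=float),
                                      (d - x) / (d - c + 1e-12)]), 0.0, 1.0)

def bell(x, a, b, c):
    """Generalized bell membership 1 / (1 + |(x - c)/a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

x = np.linspace(0, 10, 11)
print(triangular(x, 2, 5, 8))
print(trapezoidal(x, 1, 3, 7, 9))
print(bell(x, 2, 2, 5))
```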
S1568494614001136 | In recent years, hybridization of multi-objective evolutionary algorithms (MOEAs) with traditional mathematical programming techniques has received significant attention in the field of evolutionary computing (EC). The use of multiple strategies with self-adaptation can further improve the algorithmic performance of decomposition-based evolutionary algorithms. In this paper, we propose a new multiobjective memetic algorithm based on the decomposition approach and the particle swarm optimization (PSO) algorithm. For brevity, we refer to our developed approach as MOEA/D-DE+PSO. In our proposed methodology, PSO acts as a local search engine and differential evolution works as the main search operator in the whole optimization process. PSO updates the position of each solution with the help of the best information of the solution itself and of its neighboring solutions. The experimental results produced by our memetic algorithm are more promising than those of the simple MOEA/D algorithm on most test problems. Results on the sensitivity of the suggested algorithm to key parameters such as population size, neighborhood size and the maximum number of solutions to be altered for a given subproblem in the decomposition process are also included. | Multiobjective memetic algorithm based on decomposition
S1568494614001148 | A knowledge management system (KMS) is crucial for organizational knowledge management. In order to help with the evaluation and selection of a KMS from the user's perspective, a new multiple criteria decision making (MCDM) method combining quality function deployment (QFD) with the technique for order preference by similarity to an ideal solution (TOPSIS) in an intuitionistic fuzzy environment is proposed. In the method, customer criteria and system criteria for KMS selection are required. These two kinds of criteria are established from the user's perspective and the designer's perspective, respectively. Customers give their linguistic opinions about the importance of the customer criteria and the rating of the alternatives with respect to the customer criteria. Analysts give their linguistic opinions about the relationship between the customer criteria and the system criteria, and the correlation among the system criteria. After the aggregation of the linguistic opinions in the intuitionistic fuzzy environment, the customers' opinions are transformed by QFD into ratings of the weights of the system criteria and ratings of the alternatives with respect to the system criteria. Afterwards, the alternatives are ranked according to the system criteria by the TOPSIS method in the intuitionistic fuzzy environment and the best alternative is determined. In the end, an example is provided to illustrate the applicability of the proposed method. | A new MCDM method combining QFD with TOPSIS for knowledge management system selection from the user's perspective in intuitionistic fuzzy environment
S156849461400115X | JCSE-SPIHT, an algorithm for joint compression and selective encryption based on set partitioning in hierarchical trees (SPIHT), is proposed to achieve image encryption and compression simultaneously. It can protect SPIHT-compressed images by fast scrambling only a tiny portion of crucial data during the coding process while keeping all the virtues of SPIHT intact. Intensive experiments are conducted to validate and evaluate the proposed algorithm; the results show that the efficiency and the compression performance of JCSE-SPIHT are very close to those of the original SPIHT. In the security analysis, JCSE-SPIHT is shown to be immune to various attacks, not only from traditional cryptanalysis but also from attacks utilizing sophisticated image processing techniques. | Joint SPIHT compression and selective encryption
S1568494614001173 | Hybridization plays a vital role in making optimization methods effective and efficient. Different optimization methods have different search tendencies, and it is always worthwhile to examine the effect of hybridizing the different search tendencies of optimization algorithms with one another. This paper presents the effect of hybridizing the Biogeography-Based Optimization (BBO) technique with the Artificial Immune Algorithm (AIA) and Ant Colony Optimization (ACO) in two different ways. Thus, four different variants of hybrid BBO, viz. two variants of hybrid BBO with AIA and two with ACO, are developed and tested in this paper. All the considered optimization techniques have quite different search tendencies. The proposed hybrid methods are tested on many benchmark problems and real-life problems. The Friedman test and the Holm–Sidak test are performed to establish the statistical validity of the results. Results show that the proposed hybridization of BBO with ACO and AIA is effective over a wide range of problems. Moreover, the proposed hybridization also outperforms other hybridizations of BBO and different variants of BBO available in the literature. | Effect of hybridizing Biogeography-Based Optimization (BBO) technique with Artificial Immune Algorithm (AIA) and Ant Colony Optimization (ACO)
S1568494614001185 | This paper discusses the vehicle routing problem with multiple driving ranges (VRPMDR), an extension of the classical routing problem where the total distance each vehicle can travel is limited and is not necessarily the same for all vehicles – a heterogeneous fleet with respect to maximum route lengths. The VRPMDR finds applications in routing electric and hybrid-electric vehicles, which can only cover limited distances depending on the running time of their batteries. Also, these vehicles require long charging times, which in practice makes it difficult to consider en-route recharging. The paper formally introduces the problem and describes an integer programming formulation and a multi-round heuristic algorithm that iteratively constructs a solution for the problem. Using a set of benchmarks adapted from the literature, the algorithm is then employed to analyze how distance-based costs increase when considering ‘greener’ fleet configurations – i.e., when using electric vehicles with different degrees of autonomy. | Routing fleets with multiple driving ranges: Is it possible to use greener fleet configurations?
S1568494614001197 | This paper proposes an efficient adaptive hierarchical artificial immune system (AHAIS) for complex global optimization problems. In the proposed AHAIS optimization, a hierarchy with top and bottom levels is used to construct the antibody population, where some antibodies with higher affinity become the top-level elitist antibodies and the other antibodies with lower affinity become the bottom-level common antibodies. The elitist antibodies undergo different evolutionary operators from the common antibodies, and a well-designed dynamic updating strategy is used to guide the evolution and retrogradation of antibodies between the two levels. In detail, the elitist antibodies focus on self-learning and local searching, while the common antibodies emphasize elitist-learning and global searching. In addition, an adaptive search step length adjustment mechanism is proposed to capture more accurate solutions. The suppression operator introduces an upper limit on the similarity-based threshold by considering the concentration of the candidate antibodies. To evaluate the effectiveness and efficiency of the algorithms, a series of comparative numerical simulations is arranged among the proposed AHAIS, DE, PSO, opt-aiNet and IA-AIS, where eight benchmark functions are selected as testbeds. The simulation results prove that the proposed AHAIS is an efficient method and outperforms DE, PSO, opt-aiNet and IA-AIS in convergence speed and solution accuracy. Moreover, an industrial application in RFID reader collision avoidance also demonstrates the search capability and practical value of the proposed AHAIS optimization. | Adaptive hierarchical artificial immune system and its application in RFID reader collision avoidance
S1568494614001203 | Supplier selection has become a very critical activity for the performance of organizations and supply chains. Studies presented in the literature propose the use of the Fuzzy TOPSIS (Fuzzy Technique for Order of Preference by Similarity to Ideal Solution) and Fuzzy AHP (Fuzzy Analytic Hierarchy Process) methods to aid the supplier selection decision process. However, there are no comparative studies of these two methods when applied to the problem of supplier selection. Thus, this paper presents a comparative analysis of these two methods in the context of supplier selection decision making. The comparison was made based on the following factors: adequacy to changes of alternatives or criteria; agility in the decision process; computational complexity; adequacy to support group decision making; the number of alternative suppliers and criteria; and modeling of uncertainty. As an illustrative example, both methods were applied to the selection of suppliers of a company in the automotive production chain. In addition, computational tests were performed considering several supplier selection scenarios. The results have shown that both methods are suitable for the problem of supplier selection, particularly for supporting group decision making and modeling uncertainty. However, the comparative analysis has shown that the Fuzzy TOPSIS method is better suited to the problem of supplier selection with regard to changes of alternatives and criteria, agility, and the number of criteria and alternative suppliers. Thus, this comparative study contributes to helping researchers and practitioners choose more effective approaches for supplier selection. Suggestions for further work are also proposed to make these methods more adequate for the problem of supplier selection. | A comparison between Fuzzy AHP and Fuzzy TOPSIS methods to supplier selection
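For orientation, the crisp TOPSIS backbone that both fuzzy variants extend is compact; the sketch below shows the standard steps (vector normalization, weighting, distances to the ideal and anti-ideal solutions, closeness coefficient) with toy supplier data, leaving the fuzzification details aside.

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Crisp TOPSIS ranking.

    decision : (n_alternatives, n_criteria) performance matrix
    weights  : criterion weights summing to 1
    benefit  : boolean array, True for benefit criteria, False for cost criteria
    Returns closeness coefficients (higher is better).
    """
    norm = decision / np.linalg.norm(decision, axis=0)   # vector normalization
    v = norm * weights                                   # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Three suppliers rated on cost (lower is better), quality and delivery (higher is better).
D = np.array([[250.0, 8.0, 7.0],
              [220.0, 7.0, 9.0],
              [300.0, 9.0, 8.0]])
cc = topsis(D, weights=np.array([0.4, 0.35, 0.25]),
            benefit=np.array([False, True, True]))
print(cc, "best supplier:", int(np.argmax(cc)))
```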
S1568494614001215 | Maintenance is essential to enhance the performance and lifetime of any equipment. Major power system components, including generators and transmission lines, require periodic maintenance; in this regard, the present work details Integrated Maintenance Scheduling (IMS) for secure operation. The IMS problem is formulated as a complex optimization problem that affects unit commitment and economic dispatch schedules. Most existing methodologies adopt decomposition approaches to solve the IMS problem. In this work, Teaching Learning Based Optimization (TLBO) is used as the primary optimization tool, since it has proved to be an effective algorithm for various practical optimization problems and its implementation is simple, requiring little computational effort. The methodology has been tested on standard test systems and performs well even when generator contingencies are included. A comparison of numerical results indicates that this method is a promising alternative for solving the IMS problem. | Source and transmission line maintenance outage scheduling in a power system using teaching learning based optimization algorithm
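As a rough illustration of the TLBO search referred to above, the Python sketch below applies the teacher and learner phases to a toy sphere function; the objective, bounds and parameter values are illustrative assumptions and do not reproduce the paper's IMS formulation or its constraints.

```python
import numpy as np

def tlbo(objective, bounds, pop_size=20, iters=100, seed=0):
    """Minimal Teaching-Learning-Based Optimization sketch (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(objective, 1, pop)

    for _ in range(iters):
        # Teacher phase: move every learner towards the current best solution.
        teacher = pop[fit.argmin()]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)                      # teaching factor, 1 or 2
        cand = np.clip(pop + rng.random((pop_size, dim)) * (teacher - tf * mean), lo, hi)
        cfit = np.apply_along_axis(objective, 1, cand)
        better = cfit < fit
        pop[better], fit[better] = cand[better], cfit[better]

        # Learner phase: each learner interacts with a random peer.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            direction = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            cand_i = np.clip(pop[i] + rng.random(dim) * direction, lo, hi)
            cf = objective(cand_i)
            if cf < fit[i]:
                pop[i], fit[i] = cand_i, cf

    best = fit.argmin()
    return pop[best], fit[best]

# Toy usage on a sphere function (a stand-in for an IMS cost model).
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = tlbo(sphere, (np.full(5, -10.0), np.full(5, 10.0)))
print(x_best, f_best)
```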
S1568494614001227 | Speech is the main medium for human communication and interaction. Apart from the traditional telephones, more and more applications come with speech interfaces, which use speech signal as an input for various purposes. However, many of these applications might fail to perform in noisy environments as the signal-to-noise ratio (SNR) degrades. Two important measures for any speech enhancement algorithm are noise suppression and speech distortion. Naturally, different speech enhancement algorithms will have different trade-offs. Moreover, depending on the environment, it is possible that one algorithm will outperform the others in some respects. This paper proposes a multi-filter system, which has the capability of continually adjusting the noise suppression level and the speech distortion level in a Pareto fashion. Moreover, we show that the system works under a variety of noisy environments and we obtain the efficient frontier of the combined filters for each background noise. Because the multi-filters are adapting in parallel, the final system can be implemented on FPGA efficiently. | FPGA multi-filter system for speech enhancement via multi-criteria optimization |
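The key idea above is a Pareto trade-off between noise suppression and speech distortion across a bank of filters. The minimal Python sketch below extracts the non-dominated filters from hypothetical (suppression, distortion) scores; the numbers and the dominance convention (higher suppression better, lower distortion better) are assumptions for illustration, not measurements from the paper.

```python
def pareto_front(points):
    """Return the non-dominated (suppression, distortion) pairs.

    A filter dominates another if it has >= noise suppression and
    <= speech distortion, with at least one strict inequality.
    """
    front = []
    for i, (s_i, d_i) in enumerate(points):
        dominated = any(
            (s_j >= s_i and d_j <= d_i) and (s_j > s_i or d_j < d_i)
            for j, (s_j, d_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append((s_i, d_i))
    return sorted(front)   # sorted along the trade-off curve

# Hypothetical scores for a bank of candidate filters:
# (noise suppression in dB, speech distortion score where lower is better).
filters = [(12.0, 0.30), (9.5, 0.18), (14.0, 0.45), (9.0, 0.40), (11.0, 0.22)]
print(pareto_front(filters))
```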
S1568494614001239 | In this paper, an adaptive hybrid control system (AHCS) based on computed torque control for a permanent-magnet synchronous motor (PMSM) servo drive is proposed. The proposed AHCS incorporates an auxiliary sliding-mode-based controller, a recurrent radial basis function network (RBFN)-based self-evolving fuzzy-neural-network (RRSEFNN) controller and a robust controller. The RRSEFNN combines the merits of the self-evolving fuzzy-neural-network, the recurrent neural network and the RBFN, and performs structure and parameter learning concurrently. Furthermore, to relax the requirement of knowing the lumped uncertainty, an adaptive RRSEFNN uncertainty estimator is used to estimate the non-linear uncertainties online, yielding a controller that tolerates a wider range of uncertainties. Additionally, a robust controller is proposed to confront the uncertainties, including the approximation error, the optimal parameter vector and the higher-order terms of the Taylor series. The online adaptive control laws are derived from a Lyapunov stability analysis, so that the stability of the AHCS is guaranteed. A computer simulation and an experimental system are developed to validate the effectiveness of the proposed AHCS; all control algorithms are implemented on a TMS320C31 DSP-based control computer. The simulation and experimental results confirm that the AHCS provides robust performance and a precise dynamic response regardless of load disturbances and PMSM uncertainties. | Adaptive hybrid control system using a recurrent RBFN-based self-evolving fuzzy-neural-network for PMSM servo drives
S1568494614001240 | To the best of our knowledge, there is no method in the literature for finding the fuzzy optimal solution of fully fuzzy critical path (FFCP) problems, i.e., critical path problems in which all the parameters are represented by LR flat fuzzy numbers. In this paper, a new method is proposed for this class of problems. It is also shown that, in the proposed method, it is better to use the JMD representation of LR flat fuzzy numbers than the other representations of LR flat fuzzy numbers. | Linear programming approach for solving fuzzy critical path problems with fuzzy parameters
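For illustration only, the sketch below adds trapezoidal fuzzy activity durations (a common special case of LR flat fuzzy numbers) along the paths of a tiny project network and ranks the fuzzy path lengths with a simple average-of-points score; the network, the durations and the ranking function are assumptions and do not represent the paper's linear-programming method or its JMD representation.

```python
def add_trap(a, b):
    """Add two trapezoidal fuzzy numbers (a1, a2, a3, a4) component-wise."""
    return tuple(x + y for x, y in zip(a, b))

def rank_value(t):
    """Simple ranking score: average of the four defining points."""
    return sum(t) / 4.0

# Hypothetical activity durations (in days) on the arcs of a tiny project
# network with two paths: 1 -> 2 -> 4 and 1 -> 3 -> 4.
durations = {
    (1, 2): (2, 3, 4, 5),
    (2, 4): (4, 5, 6, 8),
    (1, 3): (1, 2, 2, 3),
    (3, 4): (5, 6, 7, 9),
}
paths = [[(1, 2), (2, 4)], [(1, 3), (3, 4)]]

path_lengths = []
for path in paths:
    total = (0, 0, 0, 0)
    for arc in path:
        total = add_trap(total, durations[arc])
    path_lengths.append((path, total, rank_value(total)))

# The path with the largest ranking score is reported as critical.
critical = max(path_lengths, key=lambda item: item[2])
print(critical)
```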
S1568494614001252 | The present work investigates an appropriate way to solve the problem of optimizing fuel management in a VVER/1000 reactor. To automate this procedure, a computer program has been developed that suggests an optimal core configuration satisfying the established safety constraints. The suggested solution is based on coupling two programs: a nuclear code, used to build a database and model the core, and a Hopfield neural network. In addition, as a novel feature, axial variations of enrichment in the fuel rods are applied to flatten the core flux. The computational procedure consists of three main steps. The first step creates the cross-section database and calculates the neutronic parameters using the WIMSD4 and CITATION codes. The second step finds the best axial enrichment distributions to create fuel rod patterns using an artificial Hopfield neural network (HNNA) and the cross-section database. The third step loads the fuel rods according to the suggested patterns and finds the optimum core configuration with the HNNA, based on minimizing the power peaking factor (PPF, PPF=maximum power/average power) and maximizing the effective multiplication factor (k eff, the ratio of the number of neutrons in two successive fission generations). The procedure uses the optimized parameters to find configurations in which k eff is maximized, and a penalty function is applied to limit the local PPF in the neighbouring fuel assemblies. In summary, this paper proposes a new approach that uses a Hopfield neural network to guide the heuristic search and applies axial enrichment distributions, as a novel method, to flatten the neutron flux; the obtained results are evaluated for the first core. The results show that applying the HNNA leads to appropriate values of the PPF and k eff. Moreover, combining the HNNA with axial variation of enrichment is promising for flattening the neutron flux and guaranteeing safe conditions in the reactor core. Thus, the PPF and k eff are identified as two basic parameters that govern the satisfaction of the safety constraints of the VVER/1000 reactor core. | Using Hopfield neural network to optimize fuel rod loading patterns in VVER/1000 reactor by applying axial variation of enrichment distribution
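Since the abstract defines the power peaking factor as PPF = maximum power / average power, the short Python sketch below computes it for a hypothetical set of relative assembly powers and applies an illustrative penalty when a made-up PPF limit is exceeded; none of the numbers come from the paper.

```python
import numpy as np

# Hypothetical relative powers for a small set of fuel assemblies.
assembly_power = np.array([1.02, 0.95, 1.18, 0.88, 1.07, 0.90])

ppf = assembly_power.max() / assembly_power.mean()   # PPF = max power / average power
print(f"PPF = {ppf:.3f}")

# A loading pattern could be penalized when PPF exceeds a safety limit
# (the limit value below is purely illustrative, not taken from the paper).
PPF_LIMIT = 1.35
penalty = max(0.0, ppf - PPF_LIMIT) ** 2
print(f"penalty = {penalty:.4f}")
```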
S1568494614001264 | Brain tumor is one of the leading causes of death among the different types of cancer, and proper and timely diagnosis can help save a patient's life. We therefore propose a reliable automated system for brain tumor diagnosis. The proposed system is a multi-stage system for brain tumor diagnosis and tumor region extraction. First, noise removal is performed as a preprocessing step on the brain MR images, and texture features are extracted from these noise-free images. The next phase of the system is classification based on these extracted features, using ensemble-based SVM classification; more than 99% accuracy is achieved in the classification phase. After classification, the proposed system extracts the tumor region from tumorous images using multi-step segmentation: the first step is skull removal and brain region extraction, and the next step separates the tumor region from normal brain cells using FCM clustering. The results show that the tumor region is extracted quite accurately. The technique has been tested on datasets of different patients received from the Holy Family Hospital and the Abrar MRI & CT Scan Center, Rawalpindi. | Fuzzy anisotropic diffusion based segmentation and texture based ensemble classification of brain tumor
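To illustrate the FCM clustering step used to separate tumor pixels from normal tissue, here is a minimal fuzzy c-means sketch on one-dimensional intensity values; the intensities, the number of clusters and the fuzzifier m are assumptions, and this is not the paper's full segmentation pipeline (no skull stripping, no texture features).

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means on 1-D data (e.g. pixel intensities)."""
    rng = np.random.default_rng(seed)
    n = x.size
    u = rng.random((c, n))
    u /= u.sum(axis=0)                       # fuzzy memberships, columns sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # membership-weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Hypothetical intensities: darker normal tissue and brighter tumor-like pixels.
intensities = np.array([0.20, 0.22, 0.25, 0.23, 0.80, 0.85, 0.78, 0.82])
centers, u = fcm_1d(intensities)
labels = u.argmax(axis=0)                    # hard labels from fuzzy memberships
print(centers, labels)
```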
S1568494614001276 | The problem of finding the expected shortest path in stochastic networks, where the presence of each node is probabilistic and the arc lengths are random variables, has numerous applications, especially in communication networks. Since the problem is NP-hard, we use an ant colony system (ACS) to propose a metaheuristic algorithm for finding the expected shortest path. A new local heuristic is formulated for the proposed algorithm to account for the probabilistic nodes. The arc lengths are randomly generated from the arc length distribution functions. Examples are worked out to illustrate the applicability of the proposed approach. | A modified ant colony system for finding the expected shortest path in networks with variable arc lengths and probabilistic nodes
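As a sketch of how an ant colony system picks the next node, the snippet below implements the standard ACS pseudo-random proportional rule and folds a node presence probability into the heuristic term; the pheromone values, presence probabilities and expected arc lengths are hypothetical, and the paper's actual local heuristic is not reproduced here.

```python
import random

def acs_next_node(current, candidates, tau, eta, q0=0.9, alpha=1.0, beta=2.0):
    """ACS pseudo-random proportional rule for picking the next node.

    tau[(i, j)]: pheromone on arc (i, j); eta[(i, j)]: heuristic desirability,
    which here folds in the node presence probability, e.g.
    eta = p_presence[j] / expected_length[(i, j)].
    """
    scores = {j: (tau[(current, j)] ** alpha) * (eta[(current, j)] ** beta)
              for j in candidates}
    if random.random() < q0:                     # exploitation
        return max(scores, key=scores.get)
    total = sum(scores.values())                 # biased exploration (roulette wheel)
    r, acc = random.random() * total, 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j

# Hypothetical data for one decision step from node 0.
tau = {(0, 1): 0.5, (0, 2): 0.8, (0, 3): 0.3}
presence = {1: 0.9, 2: 0.6, 3: 0.95}
exp_len = {(0, 1): 4.0, (0, 2): 2.5, (0, 3): 6.0}
eta = {(0, j): presence[j] / exp_len[(0, j)] for j in (1, 2, 3)}
print(acs_next_node(0, [1, 2, 3], tau, eta))
```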
S1568494614001288 | In this paper, the financial performance of Taiwan container shipping companies is evaluated by fuzzy multi-criteria decision-making (FMCDM). We first apply grey relational analysis to partition financial ratios into several clusters and find representative indices from these clusters. The representative indices are then used as evaluation criteria for assessing the financial performance of Taiwan container shipping companies, and an FMCDM method, the fuzzy technique for order preference by similarity to ideal solution (fuzzy TOPSIS), is utilized to evaluate financial performance. With fuzzy TOPSIS, the financial performances of the container shipping companies are ranked, so that a container shipping company can recognize its competitive financial strengths and weaknesses relative to the other container shipping companies. | The evaluation of financial performance for Taiwan container shipping companies by fuzzy TOPSIS
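A minimal fuzzy TOPSIS sketch (Chen-style, benefit criteria only, triangular fuzzy numbers) is given below to show how the closeness coefficients used for ranking could be computed; the ratings, weights and two-criterion setup are assumptions, not the financial ratios selected by the grey relational analysis in the paper.

```python
import numpy as np

def tfn_mul(a, b):
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_dist(a, b):
    # Vertex distance: sqrt(mean of squared differences of the three points).
    return np.sqrt(np.mean((np.array(a) - np.array(b)) ** 2))

def fuzzy_topsis(ratings, weights):
    """Fuzzy TOPSIS for benefit criteria with triangular fuzzy numbers (l, m, u)."""
    n_alt, n_crit = len(ratings), len(weights)
    # Normalize each criterion by the largest upper bound in its column.
    u_star = [max(ratings[i][j][2] for i in range(n_alt)) for j in range(n_crit)]
    cc = []
    for i in range(n_alt):
        d_pos = d_neg = 0.0
        for j in range(n_crit):
            r = tuple(x / u_star[j] for x in ratings[i][j])
            v = tfn_mul(r, weights[j])
            d_pos += tfn_dist(v, (1.0, 1.0, 1.0))   # distance to fuzzy ideal
            d_neg += tfn_dist(v, (0.0, 0.0, 0.0))   # distance to fuzzy anti-ideal
        cc.append(d_neg / (d_pos + d_neg))          # closeness coefficient
    return cc

# Hypothetical ratings of three companies on two financial criteria,
# expressed as triangular fuzzy numbers, with fuzzy criterion weights.
ratings = [
    [(5, 7, 9), (3, 5, 7)],
    [(7, 9, 10), (5, 7, 9)],
    [(3, 5, 7), (7, 9, 10)],
]
weights = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)]
print(fuzzy_topsis(ratings, weights))   # larger closeness => better rank
```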
S156849461400129X | The present paper proposes a multidimensional coupled chaotic map as a pseudo-random number generator. Based on the introduced dynamical system, a watermarking scheme is presented. By modifying the original image and embedding the watermark in difference values within the original image, the proposed scheme overcomes the problems associated with embedding a watermark directly in the spatial domain. Since watermark extraction does not require the original image, the introduced model can be employed in practical applications. The algorithm aims to improve on existing schemes with respect to small key space, embedding speed and the level of security. | Design and implementation of coupled chaotic maps in watermarking
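As a stand-in for the multidimensional coupled chaotic map, the sketch below uses a single logistic map to generate a key-dependent pseudo-random byte stream of the kind a watermarking scheme might use to select embedding positions; the map, its parameter r and the key value are assumptions for illustration only.

```python
def logistic_keystream(x0, n, r=3.99, burn_in=100):
    """Generate n pseudo-random bytes from a logistic map x <- r*x*(1-x).

    x0 in (0, 1) acts as the secret key. This is a single-map stand-in for
    the multidimensional coupled map proposed in the paper.
    """
    x = x0
    for _ in range(burn_in):          # discard transient iterations
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

key = 0.3141592653589793
print(logistic_keystream(key, 8))

# A tiny change in the key should produce a very different stream
# (sensitivity to initial conditions), which is what enlarges the key space.
print(logistic_keystream(key + 1e-12, 8))
```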
S1568494614001306 | Personal authentication based on palmprint features is currently attracting extensive attention. In many palmprint-based recognition approaches, a major issue is the textural analysis of the palm. In this paper, we propose a novel and efficient texture-based approach to palmprint recognition based on the 2D discrete orthonormal S-Transform (2D-DOST). The 2D-DOST is a powerful tool for texture analysis introduced in recent years; it can efficiently characterize the frequency content of image texture in various bandwidths. In this work, the 2D-DOST is first applied to the palmprint to characterize the frequency content of the palmprint texture, and palmprint features are obtained by computing the local energy of the 2D-DOST magnitudes in different bandwidths. ICA is then applied to the extracted features to reduce the dimensionality and eliminate redundant features. The experimental results give EERs equal to 0.93%, 0.97% and 0.12% for the IITD, CASIA and PolyU databases, respectively, demonstrating the efficiency and validity of the proposed method. | Palmprint authentication based on discrete orthonormal S-Transform
S1568494614001318 | In this paper, a fuzzy controller is designed based on parallel distributed compensation (PDC) method and it is implemented in an experimental tank level control system. Firstly, a mathematical model of the system is obtained experimentally. An important feature of the plant is its nonlinearity. To control the level of water in the tank over the whole range, the nonlinear model of the system is linearized around three different operating points. Then, three PI controllers are designed for the operating points, using Skogestad's method. By using the PDC method, an overall fuzzy controller is designed by the fuzzy blending of the three PI-controllers. To evaluate the practical performance of the PDC-based fuzzy controller, the control system is implemented in the experimental system. The evaluation criteria considered are step response and disturbance rejection. The comparison results showed the superiority of the PDC-controller over the classical PI-controller. | Parallel distributed compensator design of tank level control based on fuzzy Takagi–Sugeno model |
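The PDC idea of fuzzily blending local controllers can be sketched as follows: three PI controllers, tuned around assumed operating points, are weighted by triangular membership functions of the measured level and their outputs are blended. The gains, membership ranges and sampling time below are hypothetical, not the values identified for the experimental tank.

```python
import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integ = kp, ki, dt, 0.0
    def step(self, err):
        self.integ += err * self.dt
        return self.kp * err + self.ki * self.integ

# Three local PI controllers tuned around low / mid / high level operating points
# (gains below are illustrative, not the ones identified in the paper).
dt = 0.1
controllers = [PI(2.0, 0.5, dt), PI(1.5, 0.3, dt), PI(1.0, 0.2, dt)]
mfs = [(0.0, 0.1, 0.3), (0.1, 0.3, 0.5), (0.3, 0.5, 0.7)]   # level ranges in metres

def pdc_control(level, setpoint):
    err = setpoint - level
    w = np.array([tri_mf(level, *p) for p in mfs])
    if w.sum() == 0.0:
        w = np.ones_like(w)
    w = w / w.sum()                                          # normalized firing strengths
    return sum(wi * c.step(err) for wi, c in zip(w, controllers))

print(pdc_control(level=0.25, setpoint=0.35))                # blended control signal
```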
S156849461400132X | Human performance evaluation is one of the most important processes to analyze for the continuity of an organization. Evaluation files filled in by managers often end up in dusty folders where no one looks at them; this decreases the credibility of the evaluators and of the process itself, while management feels that valuable time is being taken from people who could be doing more productive work instead of these evaluations. In this paper, we add an engineering point of view to this process by providing a hybrid multicriteria decision making (MCDM) approach to evaluate the performances of employees working on the same task, and we explain an efficient way of handling qualitative and quantitative data simultaneously. Real-life situations in which the performance criteria interact become solvable, and the different types of interaction are handled with the proposed hybrid method using the Analytic Network Process (ANP) and the Choquet Integral (CI) simultaneously. A numerical illustration is given at the end of the study, with concluding remarks on the advantages of the proposed method. | An engineering approach to human resources performance evaluation: Hybrid MCDM application with interactions
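The discrete Choquet integral that aggregates interacting criteria can be computed as in the following sketch; the criteria, scores and fuzzy-measure values (including the synergy between "technical" and "teamwork") are invented for illustration and are not taken from the paper's case study.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of criterion scores w.r.t. a fuzzy measure.

    scores: dict criterion -> score in [0, 1]
    mu: dict frozenset(criteria) -> measure value, with mu(empty) = 0 and
        mu(all criteria) = 1; interactions are encoded by non-additive values.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending scores
    total, prev = 0.0, 0.0
    for idx, (crit, val) in enumerate(items):
        coalition = frozenset(c for c, _ in items[idx:])   # criteria scoring >= val
        total += (val - prev) * mu[coalition]
        prev = val
    return total

# Hypothetical evaluation of one employee on three interacting criteria.
scores = {"technical": 0.8, "teamwork": 0.6, "punctuality": 0.9}
mu = {
    frozenset(): 0.0,
    frozenset({"technical"}): 0.45,
    frozenset({"teamwork"}): 0.30,
    frozenset({"punctuality"}): 0.25,
    frozenset({"technical", "teamwork"}): 0.85,        # positive synergy
    frozenset({"technical", "punctuality"}): 0.60,
    frozenset({"teamwork", "punctuality"}): 0.50,
    frozenset({"technical", "teamwork", "punctuality"}): 1.0,
}
print(choquet_integral(scores, mu))   # overall performance score
```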
S1568494614001331 | Simplified fuzzy ARTMAP (SFAM) is used in numerous classification problems due to its high discriminant power and low training time. However, the performance of SFAM is affected by the presentation order of the training patterns. The genetic algorithm (GA) can be considered as a solution to the problem because the selection of the training pattern order is a complicated combinatorial problem in a large search space. In this paper, a new genetic ordering method for SFAM is proposed to improve the performance of the algorithm. Special genetic operators are employed in the genetic evolution. Compared to the conventional methods, the proposed SFAM demonstrates better classification performance since it can efficiently deliver the desirable properties of parents to their offspring. To demonstrate the performance of the proposed method, we perform experiments on various databases from the UCI repository. | An efficient genetic selection of the presentation order in simplified fuzzy ARTMAP patterns |
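The abstract above does not detail the special genetic operators, so the sketch below shows a generic OX-style order crossover for permutation chromosomes that encode the presentation order of training patterns; in the full GA, a chromosome's fitness would be the accuracy of an SFAM trained with that order. All names and values are illustrative assumptions.

```python
import random

def order_crossover(p1, p2):
    """OX-style order crossover for permutation chromosomes.

    A chromosome is a presentation order of training-pattern indices.
    A slice of parent 1 is kept, and the remaining positions are filled with
    the missing indices in the order they appear in parent 2.
    """
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2 if g not in child[i:j + 1]]
    k = 0
    for pos in range(n):
        if child[pos] is None:
            child[pos] = fill[k]
            k += 1
    return child

# Two candidate presentation orders for 8 training patterns.
parent1 = [0, 1, 2, 3, 4, 5, 6, 7]
parent2 = [7, 5, 3, 1, 6, 4, 2, 0]
print(order_crossover(parent1, parent2))   # always a valid permutation
```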
S1568494614001343 | In an electricity market, generation companies need suitable bidding models to maximize their profits; each supplier therefore bids strategically, choosing its bidding coefficients to counter the competitors' bidding strategies. In this paper, the optimal bidding strategy problem is solved using a novel algorithm based on the Shuffled Frog Leaping Algorithm (SFLA). SFLA is a memetic meta-heuristic designed to seek a global optimal solution by performing a heuristic search; it combines the benefits of the genetic-based Memetic Algorithm (MA) and the social-behavior-based Particle Swarm Optimization (PSO). As a result, it performs a more precise search that avoids premature convergence and the need to select operators, thereby overcoming the shortcomings of operator selection and premature convergence of the Genetic Algorithm (GA) and the PSO method. An important merit of the proposed SFLA is its faster convergence. The proposed method is numerically verified through computer simulations on the IEEE 30-bus system consisting of 6 suppliers and on a practical 75-bus Indian system consisting of 15 suppliers. The results show that SFLA requires less computational time and produces higher profits compared with Fuzzy Adaptive PSO (FAPSO), PSO and GA. | Generation bidding strategy in a pool based electricity market using Shuffled Frog Leaping Algorithm
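A minimal sketch of the SFLA local search is given below: the worst frog of a memeplex is moved towards the memeplex best, then towards the global best, and is replaced by a random frog if neither move improves it. The quadratic cost standing in for a (negative) profit function and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def improve_worst_frog(memeplex, fitness, global_best, objective, bounds, d_max):
    """One SFLA local step: improve the worst frog of a memeplex (minimization)."""
    lo, hi = bounds
    order = np.argsort(fitness)                 # ascending cost
    xb, iw = memeplex[order[0]], order[-1]
    xw = memeplex[iw]

    for target in (xb, global_best):            # try memeplex best, then global best
        step = np.clip(rng.random(xw.size) * (target - xw), -d_max, d_max)
        cand = np.clip(xw + step, lo, hi)
        c_fit = objective(cand)
        if c_fit < fitness[iw]:
            memeplex[iw], fitness[iw] = cand, c_fit
            return
    # No improvement: replace the worst frog by a random one (censoring).
    memeplex[iw] = rng.uniform(lo, hi, size=xw.size)
    fitness[iw] = objective(memeplex[iw])

# Toy usage with a quadratic cost standing in for a negative-profit function.
cost = lambda x: float(np.sum((x - 3.0) ** 2))
lo, hi = np.full(2, 0.0), np.full(2, 10.0)
memeplex = rng.uniform(lo, hi, size=(5, 2))
fitness = np.array([cost(x) for x in memeplex])
improve_worst_frog(memeplex, fitness, memeplex[fitness.argmin()].copy(),
                   cost, (lo, hi), d_max=2.0)
print(memeplex, fitness)
```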
S1568494614001355 | Fuzzy time series forecasting models can be divided into two subclasses: first order and high order. In high order models, all lagged variables up to the model order appear in the model; thus, some lagged variables may be included even though they are not significant in explaining the fuzzy relationships. If such lagged variables can be removed from the model, the fuzzy relationships are defined more precisely, yielding more accurate forecasts. In this study, a new fuzzy time series forecasting model is proposed by defining a partial high order fuzzy time series forecasting model in which the fuzzy lagged variables are selected using genetic algorithms. The proposed method is applied to some real-life time series, and the results are compared with those obtained from other methods available in the literature. It is shown that the proposed method has high forecasting accuracy. | Fuzzy lagged variable selection in fuzzy time series with genetic algorithms
S1568494614001367 | This paper presents a predator–prey based optimization (PPO) technique to obtain the optimal generation schedule of a short-term hydrothermal system. PPO belongs to the swarm intelligence family and is capable of solving large-scale non-linear optimization problems. The PPO-based algorithm combines the particle swarm optimization concept with a predator effect that helps to maintain diversity in the swarm and prevent premature convergence to local sub-optima. In this paper, a feasible solution is first obtained through a random heuristic search, and then the thermal and hydro power generations are optimized for the hydrothermal scheduling problem using PPO. A variable elimination method handles the equality constraints by eliminating variables explicitly, while a penalty method keeps the slack units within their limits: a slack thermal generating unit in each sub-interval handles the power balance equality constraint, and the slack hydro units handle the water availability equality constraint. The performance of the proposed approach is illustrated on fixed-head and variable-head hydrothermal power systems, and the results obtained are compared with those of an existing technique. The numerical results show that the PPO-based approach is able to provide a better solution. | Scheduling short-term hydrothermal generation using predator prey optimization technique
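One common way to add a predator effect to PSO is an exponential repulsion term in the velocity update, as in the hedged sketch below; the coefficients, fear probability and toy particle values are assumptions and may differ from the PPO variant used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def prey_velocity(v, x, pbest, gbest, predator, w=0.7, c1=1.5, c2=1.5,
                  fear_prob=0.1, a=2.0, b=1.0):
    """PSO velocity update with an extra repulsion term from a predator particle.

    With probability fear_prob per dimension, the particle is pushed away from
    the predator; the push decays exponentially with distance (a * exp(-b * d)).
    """
    r1, r2 = rng.random(x.size), rng.random(x.size)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    d = x - predator
    scared = rng.random(x.size) < fear_prob
    v_new += scared * np.sign(d) * a * np.exp(-b * np.abs(d))
    return v_new

# Toy step for one 3-dimensional particle (values are illustrative only).
x = np.array([1.0, 2.0, 3.0])
v = np.zeros(3)
pbest, gbest = np.array([1.2, 1.8, 2.9]), np.array([0.9, 2.1, 3.2])
predator = np.array([1.1, 2.0, 2.8])
print(prey_velocity(v, x, pbest, gbest, predator))
```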
S1568494614001379 | In recent years some research work has been carried out on web software error analysis and reliability prediction. In all these works the web environment has been treated as crisp, which is not a very realistic assumption, and web error forecasting has long been neglected by researchers. Furthermore, among the various well-known forecasting techniques, fuzzy time series based methods are extensively used, although they suffer from some serious drawbacks, viz., fixed-size intervals, the use of a few fixed membership values (0, 0.5 and 1), and a defuzzification process that only deals with the factor to be predicted. Prompted by these facts, the present authors propose a novel multivariate fuzzy forecasting algorithm that removes all the aforementioned drawbacks and can predict the future occurrences of different web failures (treating the web environment as fuzzy) with better predictive accuracy. A complexity analysis of the proposed algorithm is carried out to show its better run-time complexity, and comparisons with other frequently used forecasting algorithms demonstrate its better efficiency and predictive accuracy. Finally, the developed algorithm is applied to real web failure data of http://www.ismdhanbad.ac.in/, the official website of ISM Dhanbad, India, collected from the corresponding HTTP log files. (The original abstract is followed by a flattened nomenclature list defining, among others, the main-factor clusters a_i and their linguistic variables, the secondary-factor clusters b_{j,i}, the cluster bounds max_clust[a_i], min_clust[a_i] and mid_clust[a_i], the distance between data points, the cluster elements e_main and e_sec and their memberships, and the predicted value of i.) | Web software fault prediction under fuzzy environment using MODULO-M multivariate overlapping fuzzy clustering algorithm and newly proposed revised prediction algorithm
S1568494614001380 | This paper considers a bi-objective hybrid flowshop scheduling problem with fuzzy task operation times, due dates and sequence-dependent setup times. To solve this problem, we propose a bi-level algorithm that minimizes two criteria simultaneously, namely the makespan and the sum of earliness and tardiness. In the first level, the population is decomposed into several sub-populations in parallel, and each sub-population is designed for a scalar bi-objective. In the second level, the non-dominated solutions obtained from the sub-population bi-objective random key genetic algorithm (SBG) in the first level are unified into one large population, and a particle swarm optimization (PSO) based on the concept of searching in the Pareto space is proposed to improve the Pareto front obtained by the SBG. A defuzzification function is used to rank the bell-shaped fuzzy numbers. The non-dominated sets obtained from each level and from an algorithm presented previously in the literature are compared. The computational results show that the PSO performs better than the others and obtains superior results. | Multi-objective fuzzy multiprocessor flowshop scheduling