FileName (string, 17 chars) · Abstract (string, 163–6.01k chars) · Title (string, 12–421 chars)
S1568494614004013
This article presents a multi-objective genetic algorithm for the problem of data clustering. A given dataset is automatically partitioned into a number of groups as appropriate fuzzy partitions through the fuzzy c-means method. This work exploits the advantage of fuzzy properties, which provide the capability to handle overlapping clusters. However, most fuzzy methods are based on compactness and/or separation measures that use only centroid information, and centroid information alone may not be sufficient to differentiate the geometric structures of clusters. The overlap-separation measure, which uses an aggregation operation of fuzzy membership degrees, is better equipped to handle this drawback. As another key consideration, a mechanism is needed to identify appropriate fuzzy clusters without prior knowledge of the number of clusters. Given this requirement, optimization with a single criterion may not be feasible for different cluster shapes, so a multi-objective genetic algorithm is appropriate for searching for fuzzy partitions in this situation. Apart from the overlap-separation measure, the well-known fuzzy Jm index is also optimized through genetic operations. The algorithm optimizes the two criteria simultaneously to search for optimal clustering solutions. A string of real-coded values is encoded to represent cluster centers; strings with different lengths varied over a range correspond to variable numbers of clusters. These real-coded values are optimized, and the Pareto solutions corresponding to a tradeoff between the two objectives are finally produced. As shown in the experiments, the approach provides promising solutions for well-separated, hyperspherical and overlapping clusters from synthetic and real-life data sets, as demonstrated by comparison with existing single-objective and multi-objective clustering techniques.
A multi-objective genetic algorithm with fuzzy c-means for automatic data clustering
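As background for the method above, a minimal sketch of the standard fuzzy c-means updates that the genetic algorithm builds on. The membership and centroid formulas are the classical ones; the function names and the deterministic initialization are illustrative choices, and this plain iterative FCM is not the paper's genetic search itself.

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fcm(points, c, m=2.0, iters=100):
    """Classical fuzzy c-means updates.

    Membership: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
    Centers:    v_i  = sum_k u_ik^m x_k / sum_k u_ik^m
    """
    n, dim = len(points), len(points[0])
    # deterministic spread initialization over the dataset (illustrative)
    centers = [list(points[i * n // c]) for i in range(c)]
    U = []
    for _ in range(iters):
        U = []
        for x in points:
            d = [max(_dist(x, v), 1e-12) for v in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        for i in range(c):
            w = [U[k][i] ** m for k in range(n)]
            tot = sum(w)
            centers[i] = [sum(w[k] * points[k][t] for k in range(n)) / tot
                          for t in range(dim)]
    return centers, U
```

The fuzzy Jm index mentioned in the abstract is exactly the weighted sum of squared distances that these updates minimize.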
S1568494614004025
The PID controller with constant feedback gains has endured as the preferred choice for the control of linear or linearized plants, and under certain conditions for nonlinear ones, the control of robotic arms being a prime example. In this paper a model-free self-tuning PID controller is proposed for tracking tasks. The key idea is to exploit the passivity-based formulation for robotic arms in order to shape the damping injection so as to enforce dissipativity and guarantee semiglobal exponential convergence in the sense of Lyapunov. It is shown that a neuro-fuzzy network can be used to tune the dissipation rate gain through a self-tuning policy of a single gain. Experimental studies are presented to confirm the viability of the proposed approach.
Neuro-fuzzy self-tuning of PID control for semiglobal exponential tracking of robot arms
S1568494614004037
The aim of this paper is to investigate the problem of finding the appropriate number of clusters in fuzzy time series. The clustering process has been discussed in the existing literature, and a number of methods have been suggested. These methods have several drawbacks, especially the lack of cluster shape and quantity optimization. There are two critical dimensions in fuzzy time series clustering: the selection of proper intervals for the fuzzy clusters and the optimization of the membership degrees among the fuzzy cluster set. The existing methods for interval selection assume that the data has a short-tailed distribution, and the cluster intervals are established with identical lengths (e.g. Song and Chissom, 1994; Chen, 1996; Yolcu et al., 2009). However, time series data (particularly in economic research) is rarely short-tailed and mostly converges to a long-tailed distribution because of boom-bust market behavior. This paper proposes a novel clustering method named histogram damping partition (HDP), which defines sub-clusters on standard deviation intervals and truncates the histogram of the data with a constraint based on the coefficient of variation. The HDP approach can be used at the clustering stage of many different kinds of fuzzy time series models.
A non-linear clustering method for fuzzy time series: Histogram damping partition under the optimized cluster paradox
S1568494614004049
This paper addresses the issue of understanding activity from video and describing it with rich semantics. A novel approach is presented in which activities are characterised and analysed at different resolutions, and semantic information is delivered according to the resolution at which the activity is observed. Furthermore, the multiresolution activity characterisation is exploited to detect abnormal activity. To achieve these capabilities, the focus is on context modelling: a soft computing-based algorithm automatically determines the main activity zones of the observed scene by taking as input the trajectories of detected mobile objects. These areas are learnt at different resolutions (or granularities). In a second stage, the learned zones are employed to extract people's activities by relating mobile trajectories to the zones; in this way, the activity of a person can be summarised as the series of zones that the person has visited. Exploiting the inherent soft relation properties, the reported activities can be labelled with meaningful semantics. Depending on the granularity at which activity zones and mobile trajectories are considered, the semantic meaning of the activity shifts from broad interpretation to detailed description. Activity information at different resolutions is also employed to perform abnormal activity detection.
Multiresolution semantic activity characterisation and abnormality discovery in videos
S1568494614004050
In this paper, we derive the analytical structure of the interval type-2 fuzzy proportional–integral–derivative (IT2F-PID) controller, which consists of a parallel combination of the IT2F-PD controller and the IT2F-PI controller. The IT2F-PID controller uses the following elements: two interval T2 triangular input fuzzy sets for each of the two input variables, three interval triangular output fuzzy sets, a Mamdani interval type-2 fuzzy rule base, the Zadeh AND T-norm, the Lukasiewicz OR T-conorm, and a new type-reduction method that we propose, called the simplified type-reduction method. This new method reduces the computational cost of output processing for the interval type-2 fuzzy logic controller (IT2-FLC). We relate the resulting structure to conventional PID control theory and prove that the proposed IT2F-PID controller is a nonlinear PID controller with variable gains that change as the values of the input variables vary. Moreover, sufficient conditions for the bounded-input bounded-output (BIBO) stability of the IT2F-PID control system have been established using the well-known small gain theorem. The simulation results show that the IT2F-PID controller based on the proposed type-reduction method improves system performance compared with other type-reduction methods.
Derivation and stability analysis of the analytical structures of the interval type-2 fuzzy PID controller
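The simplified type-reduction method the authors propose is not specified in the abstract. As a hedged illustration of how a closed-form type reduction avoids iterative procedures such as Karnik-Mendel, the sketch below implements the well-known Nie-Tan reduction, which averages the lower and upper firing strengths of each rule; it is a stand-in, not the authors' method.

```python
def nie_tan_defuzz(lower_firing, upper_firing, consequents):
    """Closed-form type reduction: average each rule's lower and upper
    firing strengths, then take the weighted mean of the rule consequents."""
    avg = [(l + u) / 2.0 for l, u in zip(lower_firing, upper_firing)]
    return sum(a * y for a, y in zip(avg, consequents)) / sum(avg)
```

Because the output is a single pass over the rules, the cost is linear in the number of rules, which is the kind of saving over iterative type reduction the abstract refers to.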
S1568494614004062
To survive in today's telecommunication business it is imperative to identify customers who are not reluctant to move to a competitor; customer churn prediction has therefore become an essential issue in the telecommunication business, and in such a competitive setting a reliable churn predictor is priceless. This paper employs data mining classification techniques, including Decision Tree, Artificial Neural Networks, K-Nearest Neighbors, and Support Vector Machine, and compares their performance. Using the data of an Iranian mobile company, not only were these techniques evaluated and compared to one another, but several prominent data mining software packages were also compared. After analyzing the techniques' behavior and their respective strengths, we propose a hybrid methodology that considerably improves the value of some of the evaluation metrics; the results show that values above 95% for Recall and Precision are easily achievable. Apart from that, a new methodology for extracting influential features from the dataset is introduced and evaluated. Nowadays business managers have come to appreciate the important role of churn prediction in their prosperity. The literature has repeatedly indicated that retaining existing customers is significantly more achievable and less expensive than acquiring new ones. In today's competitive business environment, losing a customer is a real setback that can be viewed from three aspects. First, existing customers are any company's most precious assets, so losing them is figuratively equivalent to having a critical machine break down irreparably. Furthermore, by the same analogy, losing a customer means passing our asset to a competitor. Finally, gaining a new customer is a laborious task.
To make matters worse, even if a new customer is acquired, they will not be as loyal as the old customers; it may take some time for just a proportion of them to become even slightly loyal. A prevention strategy is therefore absolutely worthwhile. Customer retention plays a major role in many enterprises, especially mature ones, including telecommunications and finance [1]. Achieving it requires churn prediction, another key term in customer retention, which can be explained as predicting customers' probable tendency to switch to a competitor. In today's telecommunication business environment, competition is tremendously fierce; services and customers' options have become more comparable and more competitive, which is why customer loyalty tends to erode. It costs customers figuratively nothing to switch from one service provider to another: they have the free will to switch to a better and probably less expensive service in a competitive market. Company managers ought to take every necessary step to prevent customers from leaving, and it is imperative to identify customers who are inclined to move to a competitor before they actually consider doing so. Therefore, dealing with the probability of customer churn has become an inevitable issue in the telecommunication industry [2]. Telecommunication service companies annually face the loss of valuable customers to competitors, and due to the changes and improvements in telecommunication service technologies in the last few years, customer churn has resulted in substantial losses and has proven itself a real issue [3].
Improved churn prediction in telecommunication industry using data mining techniques
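Of the four classifiers compared above, K-Nearest Neighbors is simple enough to sketch in a few lines. The features and data below are hypothetical stand-ins for the (undisclosed) attributes of the Iranian mobile dataset.

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (squared Euclidean distance); data and features are hypothetical."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]
```

In a churn setting, `train` would hold per-customer feature vectors (e.g. normalized usage statistics) and `labels` the churn/stay outcomes.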
S1568494614004074
In this paper, we aim to solve decision making problems by introducing the soft discernibility matrix in soft sets. Firstly, the notion of the soft discernibility matrix is introduced in soft sets, and some properties associated with it are investigated. Secondly, a novel algorithm based on the soft discernibility matrix is proposed to solve decision making problems. It can find not only the optimal choice object but also an order relation of all the objects, simply by scanning the soft discernibility matrix at most once, rather than computing choice values. Finally, the weighted soft discernibility matrix is introduced in soft sets and its application to decision making is also investigated.
Soft discernibility matrix and its applications in decision making
S1568494614004086
A new rapid structural damage detection method based on structural dynamic vibration data is proposed, which can be used for fast assessment of structural faults in short-term monitoring. In this paper, to identify structural damage rapidly after its occurrence, we employ an adaptive neuro-fuzzy inference system (ANFIS) for nonparametric system identification and response prediction, exploiting the best-approximation property of ANFIS. An interval modeling technique is then used to extract feature data by processing the ANFIS output. ANFIS is found to provide a high degree of accuracy in predicting the structural response, and the interval modeling technique effectively extracts damage characteristics. ANFIS and interval modeling, relatively new topics when applied to structures, can thus be integrated to facilitate rapid structural fault detection. The benchmark structure of the IASC-ASCE Task Group is modeled with the finite element method (FEM) and then used to demonstrate the proposed method, in which structural damage is modeled as element removal. The simulation results indicate that the integrated ANFIS and interval modeling method is effective for rapid damage detection in short-term monitoring.
A rapid structural damage detection method using integrated ANFIS and interval modeling technique
S1568494614004098
Concrete corrosion due to sulphuric acid attack is known to be one of the main contributory factors in the degradation of concrete sewer pipes. This article proposes the use of a novel data mining technique, evolutionary polynomial regression (EPR), to predict the degradation of concrete subject to sulphuric acid attack. A comprehensive dataset from the literature is collected to train and develop an EPR model for this purpose. The results show that the EPR model can successfully predict the mass loss of concrete specimens exposed to sulphuric acid. Parametric studies show that the proposed model is capable of representing the degree to which individual contributing parameters affect the degradation of concrete. The developed EPR model is compared with a model based on an artificial neural network (ANN), and the advantages of the EPR approach over the ANN are highlighted. In addition, based on the developed EPR model and using an optimisation technique, the optimum concrete mixture providing maximum resistance against sulphuric acid attack has been identified.
An evolutionary approach to modelling concrete degradation due to sulphuric acid attack
S1568494614004104
This study presents a fuzzy robust control design for nonlinear time-delay systems based on the fuzzy Lyapunov method, which is defined in terms of fuzzy blending quadratic Lyapunov functions. The basic idea of the proposed approach is to construct a fuzzy controller for nonlinear dynamic systems with disturbances, in which a delay-independent robust stability criterion is derived in terms of the fuzzy Lyapunov method. Based on the robustness design and the parallel distributed compensation (PDC) scheme, the problem of modeling errors between nonlinear dynamic systems and Takagi–Sugeno (T–S) fuzzy models is solved. Furthermore, the presented delay-independent condition is transformed into linear matrix inequalities (LMIs) so that the fuzzy state feedback gains and common solutions are numerically feasible with swarm intelligence algorithms. The proposed method is illustrated on a nonlinear inverted pendulum system, and the simulation results show that the robust controller not only stabilizes the nonlinear inverted pendulum system but also provides robustness against external disturbances.
A novel criterion for nonlinear time-delay systems using LMI fuzzy Lyapunov method
S1568494614004116
A systematic approach is presented to design a non-linear observer to estimate the states of a non-linear system. The neural network based state filtering algorithm proposed by A.G. Parlos et al. is used to estimate the state variables, concentration and temperature, in the Continuous Stirred Tank Reactor (CSTR) process. The CSTR is a typical chemical reactor system with complex nonlinear dynamic characteristics. The variables that characterize the quality of the final product in a CSTR are often difficult to measure in real time and cannot be measured directly in a feedback configuration. In this work, the performances of an extended Kalman filter (EKF), an unscented Kalman filter (UKF) and a neural network (NN) based state filter, which rely solely on estimating the concentration of the CSTR from the measured reactor temperature, are compared. The performances of these three filters are analyzed in simulation with a Gaussian noise source under various operating conditions and model uncertainties.
Critical evaluation of non-linear filter configurations for the state estimation of Continuous Stirred Tank Reactor
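The EKF and UKF handle the nonlinear CSTR dynamics; as a minimal illustration of the predict/update recursion they share, the sketch below implements a scalar linear Kalman filter with a random-walk state model. This is a simplification for exposition, not the CSTR configuration from the paper, and all parameter values are illustrative.

```python
def kalman_1d(zs, x0=0.0, p0=1.0, q=1e-3, r=0.1):
    """Scalar Kalman filter for a random-walk state: predict, then
    correct with each measurement z (a simplified stand-in for EKF/UKF)."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q                 # predict: variance grows by process noise q
        k = p / (p + r)           # Kalman gain given measurement noise r
        x = x + k * (z - x)       # update with the measurement residual
        p = (1.0 - k) * p         # posterior variance
        estimates.append(x)
    return estimates
```

The EKF replaces the constant state and measurement maps with local linearizations of the CSTR model, and the UKF propagates sigma points instead, but both keep this same two-step structure.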
S1568494614004128
The aim of bankruptcy prediction in data mining and machine learning is to develop an effective model that provides high prediction accuracy. In the prior literature, various classification techniques have been developed and studied, among which classifier ensembles, which combine multiple classifiers, have been shown to outperform many single classifiers. However, in constructing classifier ensembles there are three critical issues that can affect performance: the classification technique actually adopted, the combination method used to combine multiple classifiers, and the number of classifiers to be combined. Since there are few relevant studies examining these issues, this paper conducts a comprehensive comparison of classifier ensembles based on three widely used classification techniques, multilayer perceptron (MLP) neural networks, support vector machines (SVM) and decision trees (DT), two well-known combination methods, bagging and boosting, and different numbers of combined classifiers. Our experimental results on three public datasets show that DT ensembles composed of 80–100 classifiers using the boosting method perform best. The Wilcoxon signed rank test also demonstrates that DT ensembles with boosting perform significantly differently from the other classifier ensembles. Moreover, a further study on a real-world case with a Taiwan bankruptcy dataset was conducted, which also demonstrates the superiority of DT ensembles with boosting over the others.
A comparative study of classifier ensembles for bankruptcy prediction
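A minimal sketch of the bagging combination method discussed above, using decision stumps as stand-ins for the paper's decision trees; all names and data here are illustrative.

```python
import random

def train_stump(X, y):
    """Fit a one-feature threshold classifier (decision stump) by
    exhaustive search over features, thresholds and polarity."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for sign in (1, -1):
                pred = [sign if row[f] > t else -sign for row in X]
                err = sum(p != yi for p, yi in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, f, t, sign)
    _, f, t, sign = best
    return lambda row: sign if row[f] > t else -sign

def bagging(X, y, n_estimators=15, seed=0):
    """Train stumps on bootstrap resamples; predict by majority vote."""
    rnd = random.Random(seed)
    models = []
    for _ in range(n_estimators):
        idx = [rnd.randrange(len(X)) for _ in range(len(X))]
        models.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: 1 if sum(m(row) for m in models) > 0 else -1
```

Boosting differs in that each new classifier is trained on a reweighted sample that emphasizes the previous classifiers' mistakes, rather than on an independent bootstrap.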
S156849461400413X
This paper presents a real coded chemical reaction based optimization (RCCRO) algorithm to solve the short-term hydrothermal scheduling (STHS) problem. A hydrothermal system is highly complex, and its problem variables are related in a nonlinear way. The objective of hydrothermal scheduling is to determine the optimal hourly schedule of power generation for a given hydrothermal power system over certain intervals of time such that the cost of power generation is minimal. Chemical reaction optimization mimics the interactions of molecules in chemical reactions as they reach a low-energy stable state; a real-coded version, known as real-coded chemical reaction optimization (RCCRO), is considered here. To check the effectiveness of RCCRO, three different test systems are considered, and the algorithm is mathematically remodeled to make it suitable for solving the short-term hydrothermal scheduling problem. Simulation results confirm that the proposed approach outperforms several other existing optimization techniques in terms of the quality of the solutions obtained and computational efficiency. The results also establish the robustness of the proposed methodology for solving STHS problems.
Real coded chemical reaction based optimization for short-term hydrothermal scheduling
S1568494614004141
Rating scales (such as Likert scales, Guttman scales, feeling thermometers, etc.) are simple tools for measuring attitudes, judgements and subjective preferences in human rating contexts. Because rating scales have some useful properties (e.g., measurement uniformity, considerable flexibility, statistical appeal), they are popular and reliable instruments in the socio-behavioral sciences. However, standard rating scales also suffer from some relevant limitations. For example, they fail to measure vague and imprecise information and, above all, they are only able to capture the final outcome of the cognitive process of rating (i.e., the rater's response). To overcome these limitations, fuzzy versions of these scales (e.g., fuzzy conversion scales, fuzzy rating scales) have been proposed over the years. However, these more sophisticated scales also show some important shortcomings (e.g., difficulty in constructing fuzzy variables and a potential lack of ecological validity). In this paper, we propose a novel methodology (DYFRAT) for modeling human rating evaluations from a fuzzy-set perspective. In particular, DYFRAT captures the fuzziness of human ratings by modeling real-time biometric events that occur during the cognitive process of rating in an ecological measurement setting. Moreover, in order to show some important characteristics of the proposed methodology, we apply DYFRAT to empirical rating situations concerning decision making and risk assessment scenarios.
Dynamic Fuzzy Rating Tracker (DYFRAT): a novel methodology for modeling real-time dynamic cognitive processes in rating scales
S1568494614004153
This study considers the effect of dispositional optimism and pessimism in order to provide simple and useful decision models and methods for multiple criteria decision analysis within an interval-valued fuzzy environment. Uncertain and imprecise assessment information is present in many practical decision-making situations. Interval-valued fuzzy sets are useful for modeling imprecision and for conveniently quantifying the ambiguous nature of subjective judgments. Based on measurement tool estimations defined on interval-valued fuzzy sets, dual optimistic and pessimistic point operators are utilized in this study, and several important properties of optimistic/pessimistic averaging operations are discussed. Two algorithmic procedures were developed to address the effects of optimism and pessimism, involving changes in the overall judgments and in the separate evaluations of alternatives with respect to each criterion. Furthermore, this study explores a practical medical decision making problem to demonstrate the feasibility and applicability of the proposed method and to compare it with other existing methods. Finally, computational experiments were designed using large amounts of simulated data, and a comparative analysis of the rank orders yielded by the dual optimistic and pessimistic averaging operations was conducted.
Interval-valued fuzzy multiple criteria decision-making methods based on dual optimistic/pessimistic estimations in averaging operations
S1568494614004165
The DVR is one of the custom power devices for compensating power quality indices. A self-tuning controller with a bi-objective structure is presented for controlling the DVR compensator in order to improve the THD and voltage sag indices of a sensitive load in the network. In this paper, an emotional controller, based on the emotional learning of the human brain, is proposed for controlling the DVR compensator. This controller has a structure that makes it capable of considering a second objective in the control process of the system, a capability that has not been used in previous research. The results of the paper demonstrate that compensating and controlling the voltage THD signal in the control process further improves the voltage sag of the sensitive load. It has been reported that the performance of the emotional controller depends on the selection of its coefficient values; therefore, to further improve the proposed controller, these coefficients are tuned by the teaching–learning-based optimization (TLBO) algorithm. According to the simulation results, the proposed controller performs significantly better than a classic PI controller and several intelligent controllers introduced in earlier research. Nomenclature: model output; Amygdala unit output; orbitofrontal unit output; stimulant input; equivalent gain of the Amygdala unit; equivalent gain of the orbitofrontal unit; emotional signal; learning rate in the Amygdala unit; learning rate in the orbitofrontal unit; weight coefficients of the voltage sag, computational model and voltage THD error signals; dynamic voltage restorer; total harmonic distortion; superconducting magnetic energy storage; insulated gate bipolar transistor; genetic algorithm; chaotic accelerated particle swarm optimization.
A novel self-tuning control method based on regulated bi-objective emotional learning controller's structure with TLBO algorithm to control DVR compensator
S1568494614004177
The setup and control of the finishing mill roll gap positions required to achieve the desired strip head thickness, as measured by the finishing mill exit X-ray gauge sensor, is performed by an intelligent controller based on an interval type-2 fuzzy logic system. The controller calculates the finishing mill stand screw positions required to achieve the strip target thickness at the finishing mill exit. The interval type-2 fuzzy head gage controller uses as inputs the transfer bar thickness, width and temperature at finishing mill entry, the strip target thickness, width and temperature at finishing mill exit, the stand work roll diameter, the stand work roll speed, the stand entry thickness, the stand exit thickness, the stand rolling force, and the %C of the strip. Since the measurements and inputs to the proposed system are modeled as type-1 non-singleton fuzzy numbers, we present the so-called interval type-1 non-singleton type-2 fuzzy logic roll gap controller. As reported in the literature, interval type-2 fuzzy logic systems have greater non-linear approximation capacity than their type-1 counterparts and the advantage of yielding more robust and reliable solutions. The experiments for this application were carried out on three different types of coils from a real hot strip mill. The results proved the feasibility of the developed system for roll gap control. Comparison against the mathematically based model shows that the proposed interval type-2 fuzzy logic system equals its performance in finishing mill stand screw position setup and enhances the achieved strip thickness under the tested conditions, which are characterized by high uncertainty levels.
Finishing mill strip gage setup and control by interval type-1 non-singleton type-2 fuzzy logic systems
S1568494614004189
In this paper, the author proposes some new ideas for E-spread information systems for an epidemic E, and takes covering approximation spaces as mathematical models of E-spread information systems. Using characterizations of the connectivity of covering approximation spaces, the author answers the question: how can one know whether an epidemic E spreads easily in an E-spread information system? Furthermore, the author gives an example to demonstrate the usefulness of this result by logical and mathematical methods, which provides a further application of rough set theory in the medical sciences.
Connectivity of covering approximation spaces and its applications on epidemiological issue
S1568494614004190
This paper focuses on the development and validation of an optimal motion planning method for computer-assisted surgical training. The context of this work is the development of new-generation systems that combine artificial intelligence and computer vision techniques in order to adjust the learning process to the specific needs of a trainee, while preventing the trainee from memorizing particular task settings. The problem described in the paper is the generation of the shortest collision-free trajectories for laparoscopic instrument movements in the rigid block world used for hand–eye coordination tasks. Optimal trajectories are displayed on a monitor to provide continuous visual guidance for optimal navigation of the instruments. The key result of the work is a framework for the transition from surgical training systems in which users depend on predefined task settings and lack guidance for optimal navigation of laparoscopic instruments to so-called intelligent systems that can potentially deliver the utmost flexibility to the learning process. A preliminary empirical evaluation of the developed optimal motion planning method demonstrated an increase in total scores, measured by the total time taken to complete the task and the instrument movement economy ratio. Experimentation with different task settings and technical enhancement of the visual guidance are subjects of future research.
An optimal motion planning method for computer-assisted surgical training
S1568494614004207
In this study, stochastic computational techniques are developed for the solution of boundary value problems (BVPs) of the second order pantograph functional differential equation (PFDE) using artificial neural networks (ANNs), simulated annealing (SA), pattern search (PS), genetic algorithms (GAs), the active-set algorithm (ASA) and their hybrid combinations. The strength of ANNs is exploited to construct a model for the PFDE by defining an unsupervised error to approximate the solution. The accuracy of the model depends on finding appropriate design parameters of the networks. These optimal weights are trained using SA, PS and GAs as tools for viable global search, hybridized with ASA for rapid local convergence. The designed schemes are evaluated by solving a number of BVPs for the PFDE and comparing with standard results. The reliability and effectiveness of the proposed solvers are investigated through Monte Carlo simulations and their statistical analysis.
Numerical treatment for boundary value problems of Pantograph functional differential equation using computational intelligence algorithms
S1568494614004219
In this paper a new nature-inspired metaheuristic algorithm is proposed to solve the optimal power flow problem in a power system. This algorithm is inspired by the black hole phenomenon. A black hole is a region of space-time whose gravitational field is so strong that nothing which enters it, not even light, can escape. The developed approach is called black-hole-based optimization approach. In order to show the effectiveness of the proposed approach, it has been demonstrated on the standard IEEE 30-bus test system for different objectives. Furthermore, in order to demonstrate the scalability and suitability of the proposed approach for large-scale and real power systems, it has been tested on the real Algerian 59-bus power system network. The results obtained are compared with those of other methods reported in the literature. Considering the simplicity of the proposed approach and the quality of the obtained results, this approach seems to be a promising alternative for solving optimal power flow problems.
Optimal power flow using black-hole-based optimization approach
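A hedged sketch of the black hole heuristic as it is commonly described: candidate solutions (stars) drift toward the current best solution (the black hole), and any star that crosses the event horizon, whose radius is the black hole's fitness relative to the swarm's total fitness, is replaced by a fresh random star. The test function and parameter values below are illustrative, not the paper's IEEE 30-bus power flow formulation.

```python
import random

def _dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def black_hole_opt(f, dim, bounds, n_stars=20, iters=200, seed=0):
    """Black-hole-style minimization of f over [lo, hi]^dim (sketch)."""
    rnd = random.Random(seed)
    lo, hi = bounds
    stars = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_stars)]
    bh = min(stars, key=f)[:]                      # current black hole (best)
    for _ in range(iters):
        for s in stars:
            for d in range(dim):                   # drift toward the black hole
                s[d] += rnd.random() * (bh[d] - s[d])
            if f(s) < f(bh):
                bh = s[:]
        total = sum(f(s) for s in stars)
        radius = f(bh) / total if total else 0.0   # event horizon radius
        for i, s in enumerate(stars):
            if _dist(s, bh) < radius:              # swallowed: respawn randomly
                stars[i] = [rnd.uniform(lo, hi) for _ in range(dim)]
    return bh
```

For optimal power flow, `f` would be the generation cost evaluated through a load flow solution with constraint penalties, and each star a vector of control variables.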
S1568494614004220
In this paper we study a problem with potential applications in humanitarian relief transportation and telecommunication networks. Given a set of vertices comprising depot, facility and customer vertices, the goal is to construct a minimum-length cycle over a subset of the facilities while covering a given number of customers; a customer is covered when it is located within a pre-specified distance of a facility visited on the tour. We propose two mathematical models, node-based and flow-based, and two metaheuristic algorithms, a memetic algorithm and a variable neighborhood search, for the problem. Computational tests on a set of randomly generated instances and on a set of benchmark data indicate the effectiveness of the proposed algorithms.
The generalized covering traveling salesman problem
S1568494614004232
Personalisation in smart phones requires adaptability to dynamic context based on user mobility, application usage and sensor inputs. Current personalisation approaches, which rely on static logic that is developed a priori, do not provide sufficient adaptability to dynamic and unexpected context. This paper proposes genetic programming (GP), which can evolve program logic in realtime, as an online learning method to deal with the highly dynamic context in smart phone personalisation. We introduce the concept of collaborative smart phone personalisation through the GP Island Model, in order to exploit shared context among co-located phone users and reduce convergence time. We implement these concepts on real smartphones to demonstrate the capability of personalisation through GP and to explore the benefits of the Island Model. Our empirical evaluations on two example applications confirm that the Island Model can reduce convergence time by up to two-thirds over standalone GP personalisation.
Genetic programming for smart phone personalisation
S1568494614004244
The real-time constraint on codec execution and a high level of security protection are the two most important requirements of multimedia encryption. In this paper, a method is presented for the fast generation of large permutation and diffusion keys, based on sorting the solutions of a linear Diophantine equation (LDE). The coefficients of the LDE are integers that can be dynamically generated from any type of chaotic dynamical system, as the maximum precision of the computer can be used to convert floating-point chaotic values into integers. This technique yields a fast image block encryption algorithm in which the security level is considerably strengthened. Although the cipher follows the architecture in which permutation and diffusion are treated as two separate stages, the generation speed of the permutation and diffusion keys makes it possible to reduce the computational time otherwise required by duplicating the image-scanning stage in the permutation and diffusion operations.
Highly secured chaotic block cipher for fast image encryption
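The paper above derives keys by sorting solutions of a linear Diophantine equation driven by chaotic values; that exact construction is not reproduced here. The sketch below shows only the generic sorting-based idea it builds on: a chaotic sequence is generated, and sorting it yields a pseudo-random permutation key that scrambles an image block. The logistic map and all parameter values are illustrative assumptions.

```python
def logistic_sequence(x0, n, r=3.99):
    # Iterate the logistic map x <- r*x*(1-x); for r near 4 the orbit is chaotic.
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def permutation_key(seq):
    # Sorting the chaotic values yields a pseudo-random permutation of indices.
    return sorted(range(len(seq)), key=lambda i: seq[i])

def permute(data, key):
    return [data[k] for k in key]

def inverse_permute(data, key):
    out = [None] * len(data)
    for pos, k in enumerate(key):
        out[k] = data[pos]
    return out

seq = logistic_sequence(0.3141592, 8)
key = permutation_key(seq)
block = list(range(8))
scrambled = permute(block, key)
assert inverse_permute(scrambled, key) == block  # decryption inverts encryption
```

In the paper's setting, the chaotic values would additionally be converted to integer LDE coefficients, and a diffusion key would be generated alongside the permutation key.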
S1568494614004256
This research proposes a new model for constructing decision trees using interval-valued fuzzy membership values. Most existing fuzzy decision trees do not consider the uncertainty associated with their membership values; however, precise fuzzy membership values are not always available. In this paper, we represent fuzzy membership values as intervals to model uncertainty and employ the look-ahead based fuzzy decision tree induction method to construct decision trees. We also investigate the significance of different neighbourhood values and define a new parameter, insensitive to specific data sets, using fuzzy sets. Some examples are provided to demonstrate the effectiveness of the approach.
Interval-valued fuzzy decision trees with optimal neighbourhood perimeter
S1568494614004268
This study develops the hybrid models of dynamic multidimensional efficiency classification. By integrating data envelopment analysis (DEA), naïve Bayesian networks (NBN) and dynamic Bayesian networks (DBN), this work proposes a five-step design for efficiency classification: (1) performance evaluation with DEA model, (2) efficiency discretization, (3) intra-period classification by NBN, (4) inter-period classification by DBN, (5) testing and validation. Due to the Markovian property of the dynamic models, the inter-period dependency is assumed invariant over time. In data-driven parameter learning, the fuzzy parameters for incorporating the variation in dynamic dependencies are introduced. We conduct an empirical case study of higher education in Taiwan to demonstrate the usability of this design.
Efficiency classification by hybrid Bayesian networks—The dynamic multidimensional models
S156849461400427X
In this paper, we investigate the deviation of the priority weights from hesitant multiplicative preference relations (HMPRs) in group decision-making environments. As basic elements of HMPRs, hesitant multiplicative elements (HMEs) usually have different numbers of possible values. To correctly compute or compare HMEs, there are two principles to normalize them, i.e., the α-normalization and the β-normalization. Based on the α-normalization, we develop a new goal programming model to derive the priority weights from HMPRs in group decision-making environments. Based on the β-normalization, a consistent HMPR and an acceptably consistent HMPR are defined, and their desired properties are studied. A convex combination method is then developed to obtain interval weights from an acceptably consistent HMPR. This approach is further extended to group decision-making situations in which the experts evaluate their preferences as several HMPRs. Finally, some numerical examples are provided to illustrate the validity and applicability of the proposed models.
Deriving the priority weights from hesitant multiplicative preference relations in group decision making
S1568494614004281
This paper presents a new multiple colony bees algorithm (MCBA) for functional optimization. The MCBA simulates the behaviours of honey bees in their own hive and realizes a communication strategy between bees living in different hives. However, little is known about such a communication strategy between different hives of honey bees. Since information sharing is an essential issue from the optimization point of view, the communication strategy adopted here is based on the similarity between the waggle dance behaviour of real honey bees and the pheromone laying and following behaviour of ants. In this way, the MCBA uses a positive feedback mechanism, in contrast to the basic bees algorithm and other versions of bee swarm optimization algorithms. The performance of the proposed MCBA is tested on a set of well-known test functions through a computational study that includes comparisons with other standard meta-heuristics, cooperative approaches and ant-related approaches. The experimental results indicate the effectiveness of the proposed MCBA.
Multiple colony bees algorithm for continuous spaces
S1568494614004293
In this paper, we address the problem of localizing sensor nodes in a static network, given that the positions of a few of them (denoted as “beacons”) are a priori known. We refer to this problem as “auto-localization.” Three localization techniques are considered: the two-stage maximum-likelihood (TSML) method; the plane intersection (PI) method; and the particle swarm optimization (PSO) algorithm. While the first two techniques come from the communication-theoretic “world,” the last one comes from the soft computing “world.” The performance of the considered localization techniques is investigated, in a comparative way, taking into account (i) the number of beacons and (ii) the distances between beacons and nodes. Since our simulation results show that a PSO-based approach allows obtaining more accurate position estimates, in the second part of the paper we focus on this technique proposing a novel hybrid version of the PSO algorithm with improved performance. In particular, we investigate, for various population sizes, the number of iterations which are needed to achieve a given error tolerance. According to our simulation results, the hybrid PSO algorithm guarantees faster convergence at a reduced computational complexity, making it attractive for dynamic localization. In more general terms, our results show that the application of soft computing techniques to communication-theoretic problems leads to interesting research perspectives.
Swarm intelligent approaches to auto-localization of nodes in static UWB networks
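A standard way to cast localization as an optimization problem, which a PSO such as the one studied above can solve, is to minimize the sum of squared residuals between measured beacon-to-node ranges and the ranges implied by a candidate position. The sketch below is a plain (non-hybrid) PSO on this cost; the beacon layout, inertia and acceleration coefficients are illustrative assumptions, not the paper's settings.

```python
import math
import random

def pso_localize(beacons, ranges, iters=150, swarm=30, lo=0.0, hi=100.0):
    # Cost: squared mismatch between measured and candidate-position ranges.
    def cost(p):
        return sum((math.dist(p, b) - r) ** 2 for b, r in zip(beacons, ranges))

    pos = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(swarm)]
    vel = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_pos = (40.0, 60.0)
ranges = [math.dist(true_pos, b) for b in beacons]  # noiseless for illustration
est = pso_localize(beacons, ranges)
```

With noisy range measurements the same cost function applies; the estimate then minimizes the residuals in a least-squares sense.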
S156849461400430X
Clustering is an efficient topology control method that balances the traffic load of the sensor nodes and improves the overall scalability and lifetime of wireless sensor networks (WSNs). However, in a cluster-based WSN, the cluster heads (CHs) consume more energy due to the extra workload of receiving the sensed data, aggregating it, and transmitting the aggregated data to the base station. Moreover, improper formation of clusters can leave some CHs overloaded with a large number of sensor nodes. This overload may lead to the early death of those CHs, partitioning the network and thereby degrading the overall performance of the WSN. It is worth noting that the computational complexity of finding an optimal clustering for a large-scale WSN by brute force is very high. In this paper, we propose a novel differential evolution (DE) based clustering algorithm for WSNs that prolongs network lifetime by preventing the early death of highly loaded CHs. We incorporate a local improvement phase into the traditional DE for faster convergence and better performance of the proposed algorithm. Extensive simulations demonstrate the efficiency of the proposed algorithm.
A novel differential evolution based clustering algorithm for wireless sensor networks
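The clustering encoding, fitness function and local improvement phase above are specific to the paper, but they sit on top of the classic DE/rand/1/bin loop, which can be sketched as follows (shown on a toy continuous objective rather than the WSN clustering problem):

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=100):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            # rand/1 mutation: three distinct vectors other than the target.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
            # Binomial crossover with one guaranteed parameter from the mutant.
            jrand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

For the WSN problem, each vector would encode a candidate node-to-CH assignment and `f` would penalize overloaded cluster heads; the paper's local improvement phase would then refine each trial vector before selection.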
S1568494614004311
We present FI2DS, a file-system, host-based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated by the web server against a normal usage profile. Additionally, we propose a set of features extracted from file-system-specific BSM audit records, as well as an IDS that identifies attacks using a decision engine that employs one-class classification over a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs), and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising: FI2DS detection rates range between 91% and 95.9%, with corresponding false positive rates ranging between 8.1×10^−2% and 9.3×10^−4%. Comparison of FI2DS with another state-of-the-art filesystem-based IDS, FWRAP, indicates the higher effectiveness of the proposed IDS on all three datasets. Within the context of this paper FI2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon user, for both intrusion detection and postmortem analysis.
Of daemons and men: A file system approach towards intrusion detection
S1568494614004323
Liver biopsy is considered to be the gold standard for analyzing chronic hepatitis and fibrosis; however, it is an invasive and expensive approach, which is also difficult to standardize. Medical imaging techniques such as ultrasonography, computed tomography (CT), and magnetic resonance imaging are non-invasive and helpful methods for interpreting liver texture, and may be good alternatives to needle biopsy. Recently, instead of visual inspection of these images, computer-aided image analysis based approaches have become more popular. In this study, a non-invasive, low-cost and relatively accurate method was developed to determine the liver fibrosis stage by analyzing texture features of liver CT images. In this approach, suitable regions of interest were selected on the CT images and a comprehensive set of texture features was obtained from these regions using different methods, such as the Gray Level Co-occurrence Matrix (GLCM), Laws’ method, the Discrete Wavelet Transform (DWT), and Gabor filters. Afterwards, sequential floating forward selection and exhaustive search methods were used in various combinations to select the most discriminating features. Finally, the selected texture features were classified using two methods, namely Support Vector Machines (SVM) and k-nearest neighbors (k-NN). The mean classification accuracy in pairwise group comparisons was approximately 95% for both classification methods using only 5 features. The performance of our approach in classifying the liver fibrosis stage of subjects in the test set into 7 possible stages was also investigated; in this case, both the SVM and k-NN methods returned relatively low classification accuracies. Our pairwise group classification results showed that the DWT, Gabor, GLCM, and Laws’ texture features were more successful than the others, so features extracted by these methods were used in a feature fusion process. Fusing features from these better performing families further improved the classification performance. The results show that our approach can be used as a decision support system, especially in pairwise fibrosis stage comparisons.
Liver fibrosis staging using CT image texture analysis and soft computing
S1568494614004335
The complex behaviour of fine-grained materials in interaction with structural elements has received noticeable attention from geotechnical engineers and designers in recent decades. In this research work an evolutionary approach is presented to create a structured polynomial model for predicting the undrained lateral load bearing capacity of piles. The proposed evolutionary polynomial regression (EPR) technique is an evolutionary data mining methodology that generates a transparent and structured representation of the behaviour of a system directly from raw data. It can operate on large quantities of data in order to capture nonlinear and complex relationships between contributing variables. The developed model allows the user to gain a clear insight into the behaviour of the system. Field measurement data from the literature were used to develop the proposed EPR model. Comparison of the model predictions with the results of two empirical models currently implemented in design work, with a neural network-based model from the literature, and with the field data shows that the EPR model is capable of capturing, predicting and generalizing predictions of the lateral load bearing capacity of piles to unseen data cases with very high accuracy. A sensitivity analysis was conducted to evaluate the effect of the individual contributing parameters and their contribution to the predictions made by the proposed model. The merits and advantages of the proposed methodology are also discussed.
Lateral load bearing capacity modelling of piles in cohesive soils in undrained conditions: An intelligent evolutionary approach
S1568494614004347
Segmentation is an important research area in image processing and has been widely used to extract objects from images. A variety of algorithms have been proposed in this area. However, these methods perform well on noise-free images, while their results on noisy images are poor. Neutrosophic set (NS) theory is a general formal framework for studying the origin, nature, and scope of neutralities, with an inherent ability to handle indeterminate information. Noise is one kind of indeterminate information in images; therefore, NS has been successfully applied to image processing algorithms. This paper proposes a novel algorithm based on neutrosophic similarity clustering (NSC) to segment gray-level images. We employ the neutrosophic set in the image processing field and define a new similarity function for clustering. First, an image is represented in the neutrosophic set domain via three membership sets: T, I and F. Then, a neutrosophic similarity function (NSF) is defined and employed in the objective function of the clustering analysis. Finally, the newly defined clustering algorithm classifies the pixels of the image into different groups. Experiments have been conducted on a variety of artificial and real images, and several measurements are used to evaluate the proposed method's performance. The experimental results demonstrate that the NSC method segments images effectively and accurately, handling both noise-free images and noisy images with different levels of noise. It will be helpful to applications in image processing and computer vision.
A novel image segmentation algorithm based on neutrosophic similarity clustering
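The paper's exact definitions of the T, I and F maps are not given in the abstract; a construction commonly used in neutrosophic image processing derives T from a normalized local mean, I from the local inhomogeneity (deviation from the local mean), and F as the complement of T. The sketch below follows that assumed construction on a plain 2D list of gray values.

```python
def neutrosophic_maps(img, w=3):
    # img: 2D list of gray values in [0, 255]; w: local window size (assumed).
    h, wd = len(img), len(img[0])

    def local_mean(y, x):
        vals = [img[j][i]
                for j in range(max(0, y - w // 2), min(h, y + w // 2 + 1))
                for i in range(max(0, x - w // 2), min(wd, x + w // 2 + 1))]
        return sum(vals) / len(vals)

    g = [[local_mean(y, x) for x in range(wd)] for y in range(h)]
    gmin = min(min(r) for r in g)
    gmax = max(max(r) for r in g)
    d = [[abs(img[y][x] - g[y][x]) for x in range(wd)] for y in range(h)]
    dmax = max(max(r) for r in d) or 1.0
    # T: truth degree from the normalized local mean.
    T = [[(g[y][x] - gmin) / ((gmax - gmin) or 1.0) for x in range(wd)]
         for y in range(h)]
    # I: indeterminacy from local inhomogeneity (noise-sensitive term).
    I = [[d[y][x] / dmax for x in range(wd)] for y in range(h)]
    # F: falsity as the complement of T.
    F = [[1.0 - T[y][x] for x in range(wd)] for y in range(h)]
    return T, I, F
```

The clustering stage would then compute the paper's neutrosophic similarity function over these three maps instead of raw intensities.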
S1568494614004359
Inspired by the matrix-based methods used in feature extraction and selection, a matrix-pattern-oriented classification framework was designed in our previous work and demonstrated to exploit individual matrix patterns more effectively to improve classification performance in practice. However, this matrix-based framework neglects the prior structural information of the whole input space that is made up of all the matrix patterns. This paper aims to overcome this flaw by taking advantage of a structure learning method named Alternative Robust Local Embedding (ARLE). As a result, a new regularization term, R_gl, is designed that simultaneously represents the globality and the locality of the whole data domain, further boosting the existing matrix-based classification method. To our knowledge, this is the first attempt to introduce both the globality and the locality of the whole data space into matrixized classifier design. To validate the proposed approach, the designed R_gl is applied to the matrix-pattern-oriented Ho-Kashyap classifier (MatMHKS) from our previous work to construct a new globalized and localized MatMHKS named GLMatMHKS. The experimental results on a broad range of data validate that GLMatMHKS not only inherits the advantages of matrixized learning, but also uses the prior structural information more reasonably to guide the design of the classification machine.
Globalized and localized matrix-pattern-oriented classification machine
S1568494614004360
Although there are many advanced systems to assist vessels with passing through narrow waterways, it continues to be a serious problem because of disturbance factors and geographic structure. By taking Istanbul Strait as a model for this study, we aimed to develop a decision support system and/or a guidance method to assist reciprocally passing vessels. The main purpose is to develop an Artificial Neural Network (ANN) that uses the data of manually controlled vessels to generate predictions about the future locations of those vessels. If there is any possibility of collision, this system is aimed to warn the operators in the Vessel Traffic Services (VTS) centre and to guide the personnel of the vessels. In this study, manually controlled and reciprocally passing vessels’ data were used (including coordinates, speed, and environmental conditions), neural networks were trained, and predictions were made about the locations of vessels three minutes after the initial point of evaluation (this duration was determined by considering the conditions of Istanbul Strait). With this purpose, we used data gathered from vessels and proved the success of the system, especially concerning predictions made during turnings, by determining the possibility of collision between two vessels three minutes after the data was gathered.
Decision support system for collision avoidance of vessels
S1568494614004372
Network reconfiguration is the process of changing the topology of distribution systems by altering the open/closed status of switches. Because there are many candidate switching combinations in a distribution system, network reconfiguration is a complicated combinatorial, non-differentiable constrained optimization problem. In addition, the radiality constraint typically increases the intricacy of the problem. In this paper, to avoid creating infeasible configurations, a new codification is proposed. The proposed codification is computationally efficient and is guaranteed to generate only feasible radial topologies at all times. A modified heuristic approach for optimal reconfiguration in radial distribution systems is also presented, and a number of new formulas are introduced to quantify the economic benefit of voltage profile improvement. The effectiveness of the proposed method is demonstrated on balanced and unbalanced test distribution systems.
A novel codification and modified heuristic approaches for optimal reconfiguration of distribution networks considering losses cost and cost benefit from voltage profile improvement
S1568494614004384
It is envisioned that, beyond grid-to-building communication, smart buildings could treat connected neighborhood buildings as a local buffer, forming a local-area energy network through the smart grid. As the hardware technology is in place, what is needed is an intelligent algorithm that coordinates a cluster of buildings to obtain Pareto decisions for short-time-scale operations. Prior research proposed a memetic algorithm (MA) based framework for building cluster operation decisions and demonstrated that it can derive Pareto solutions over an 8-h operation horizon and reduce overall energy costs. While successful, the memetic algorithm is computationally expensive, which limits its application to building operation decisions on an hourly time scale. To address this challenge, we propose a particle swarm framework, termed augmented multi-objective particle swarm optimization (AMOPSO). The performance of the proposed AMOPSO in terms of solution quality and convergence speed is improved via the fusion of multiple search methods. Extensive experiments are conducted to compare the proposed AMOPSO with nine multi-objective PSO algorithms (MOPSOs) and multi-objective evolutionary algorithms (MOEAs) collected from the literature. Results demonstrate that AMOPSO outperforms these nine state-of-the-art MOPSOs and MOEAs in terms of the epsilon, spread, and hypervolume indicators. A building cluster case study then shows that the AMOPSO-based decision framework is able to make hourly operation decisions, significantly improving energy efficiency and achieving greater energy cost savings for smart buildings.
An augmented multi-objective particle swarm optimizer for building cluster operation decisions
S1568494614004396
In this paper, a new outranking approach for multi-criteria decision-making (MCDM) problems is developed in the context of a simplified neutrosophic environment, where the truth-membership degree, indeterminacy-membership degree and falsity-membership degree of each element are singleton subsets of [0,1]. Firstly, novel operations on simplified neutrosophic sets (SNSs) and their relational properties are developed. Then some outranking relations for simplified neutrosophic numbers (SNNs) are defined, based on ELECTRE, and the properties of these outranking relations are discussed in detail. Additionally, based on the outranking relations of SNNs, a ranking approach is developed in order to solve MCDM problems. Finally, two practical examples are provided to illustrate the practicality and effectiveness of the proposed approach, and a comparative analysis based on the same examples is also conducted.
An outranking approach for multi-criteria decision-making problems with simplified neutrosophic sets
S1568494614004402
In this paper, we introduce a novel iterative method for finding the fixed point of a nonlinear function. To this end, we combine ideas proposed in the Artificial Bee Colony algorithm (Karaboga and Basturk, 2007) and the Bisection method (Burden and Douglas, 1985). The resulting method is new and very efficient for solving nonlinear equations. We illustrate it on four benchmark functions and compare the results with those of other methods, such as the ABC, PSO, GA and Firefly algorithms.
The Bisection–Artificial Bee Colony algorithm to solve Fixed point problems
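The ABC half of the hybrid above is population-based and is not sketched here; the bisection half is deterministic and worth making concrete. A fixed point x* of g satisfies g(x*) = x*, i.e. it is a root of f(x) = g(x) − x, so standard bisection applies whenever f changes sign on the bracketing interval:

```python
import math

def bisection_fixed_point(g, a, b, tol=1e-10, max_iter=200):
    # A fixed point x* of g is a root of f(x) = g(x) - x; bisection finds it
    # when f changes sign on [a, b].
    f = lambda x: g(x) - x
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(x) = g(x) - x must change sign on [a, b]")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if abs(fm) < tol or (b - a) / 2.0 < tol:
            return m
        if fa * fm < 0:
            b = m          # root lies in the left half
        else:
            a, fa = m, fm  # root lies in the right half
    return (a + b) / 2.0

# Classic example: the fixed point of cos(x), near x = 0.739085.
x_star = bisection_fixed_point(math.cos, 0.0, 1.0)
```

In the hybrid of the paper, the bee colony's exploration would supply candidate intervals or starting regions, with bisection providing the fast, guaranteed refinement step.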
S1568494614004414
In this paper, we propose fuzzy logic-based cooperative reinforcement learning for sharing knowledge among autonomous robots. The ultimate goal of this work is to entice bio-insects towards desired goal areas using artificial robots without any human aid. To achieve this goal, we found an interaction mechanism using a specific odor source and performed simulations and experiments [1]. For efficient learning without human aid, we employ cooperative reinforcement learning in a multi-agent domain. Additionally, we design a fuzzy logic-based expertise measurement system to enhance the learning ability. This structure enables the artificial robots to share knowledge while evaluating and measuring the performance of each robot. Through numerous experiments, the performance of the proposed learning algorithms is evaluated.
Bio-insect and artificial robot interaction using cooperative reinforcement learning
S1568494614004426
Many industrial processes belong to nonlinear distributed parameter systems (DPS) with significant spatio-temporal dynamics. They often work at multiple operating points due to different production and working conditions. To obtain a global model, the direct modeling and experiments in a large operating range are often very difficult. Motivated by the multi-modeling, a fuzzy-based spatio-temporal multi-modeling approach is proposed for nonlinear DPS. To obtain a reasonable operating space division, a priori information and the fuzzy clustering are used to decompose the operating space from coarse scale to fine scale gradually. To reduce the dimension in the local spatio-temporal modeling, the Karhunen–Loève method is used for the space/time separation. Both multi-modeling and space/time separation can reduce the modeling complexity. Finally, to get a smooth global model, a three-domain (3D) fuzzy integration method is proposed. Using the proposed method, the model accuracy will be improved and the experiments become easier. The effectiveness is verified by simulations.
A fuzzy-based spatio-temporal multi-modeling for nonlinear distributed parameter processes
S1568494614004438
A contrast enhancement method for medical images using Type II fuzzy set theory is suggested. Fuzzy set theory handles uncertainty in the form of a membership function, but to capture the uncertainty of the membership function itself, a Type II fuzzy set is considered: it models fuzziness in the membership function. The Hamacher T-conorm is used as an aggregation operator to form a new membership function from the upper and lower membership functions of the Type II fuzzy set. The image with the new membership function is the enhanced image. As medical images contain many uncertainties, Type II fuzzy sets may be a good tool for medical image analysis. To show the effectiveness of the proposed method, the results are compared with fuzzy, intuitionistic fuzzy, and existing Type II fuzzy methods. Experiments on several images show that the proposed Type II fuzzy method performs better than the existing methods. To show the advantage of the proposed enhancement method, detection or extraction of abnormal lesions or blood vessels has been carried out on the images enhanced by all the methods; the segmentation results on the proposed enhanced images are observed to be better.
An improved medical image enhancement scheme using Type II fuzzy set
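The paper's exact membership definitions are not given in the abstract; a common way to build an interval Type II set is to raise the Type I membership to powers α and 1/α for the upper and lower bounds, then aggregate them. The sketch below follows that assumed construction with the Hamacher T-conorm family (the parameter values α and γ are illustrative, not the paper's):

```python
def hamacher_conorm(a, b, gamma=0.5):
    # Hamacher T-conorm family on [0, 1]; gamma = 1 reduces it to the
    # probabilistic sum a + b - a*b.  S(a, 0) = a and S(1, b) = 1 hold.
    return (a + b - a * b - (1 - gamma) * a * b) / (1 - (1 - gamma) * a * b)

def type2_enhance(mu, alpha=0.8):
    # Upper/lower memberships of an interval Type II set built from mu:
    # mu**alpha >= mu >= mu**(1/alpha) for mu, alpha in (0, 1].
    upper = mu ** alpha
    lower = mu ** (1.0 / alpha)
    return hamacher_conorm(upper, lower)

# Normalize gray levels to memberships, enhance, and map back to [0, 255].
pixels = [30, 90, 150, 210]
enhanced = [round(255 * type2_enhance(p / 255.0)) for p in pixels]
```

Because a T-conorm satisfies S(a, b) ≥ max(a, b), each enhanced membership is at least the original one, which is what raises the contrast of dim structures.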
S156849461400444X
PieceWise AutoRegressive eXogenous (PWARX) models represent one of the broad classes of hybrid dynamical systems (HDS). Among the many classes of HDS, the PWARX model is an attractive modeling structure due to its equivalence to other classes. This paper presents a novel fuzzy-distance-weight-matrix-based parameter identification method for PWARX models. In the first phase of the proposed method, the number of affine submodels present in the HDS is estimated using a fuzzy clustering validation based algorithm. For a given set of input–output data points generated by a predefined PWARX model, the fuzzy c-means (FCM) clustering procedure is used to classify the data set according to its affine submodels. A fuzzy distance weight matrix based weighted least squares (WLS) algorithm is proposed to identify the parameters of each PWARX submodel, minimizing the effect of noise and classification error. In the final phase, a fuzzy validity function based model selection method is applied to validate the identified PWARX model. The effectiveness of the proposed method is demonstrated on three benchmark examples, and simulation experiments validate the proposed method.
Parameter identification of PWARX models using fuzzy distance weighted least squares method
S1568494614004451
This paper considers the uniform machine scheduling problem with release dates, with the objective of minimizing the total completion time. The problem is known to be NP-hard in the strong sense, even when there is only a single machine. An intelligent scheduling algorithm, called ABISA, is proposed, in which agent technology is introduced to automate the manufacturing scheduling process by endowing the machines with intelligence. A semantic description of two kinds of agents is given, and a token-ring mechanism is presented for agent coordination. A lower bound is derived and used to evaluate the performance of the algorithms. On 1800 random problem instances, the algorithm shows excellent solution quality: the results obtained by ABISA are better than those of algorithms based on traditional heuristic rules, and are closer to the lower bound of the problem.
An agent-based intelligent algorithm for uniform machine scheduling to minimize total completion time
S1568494614004463
The Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) is a very efficient multiobjective evolutionary algorithm introduced in recent years. This algorithm works by decomposing a multiobjective optimization problem to many scalar optimization problems and by assigning each specimen in the population to a specific subproblem. The MOEA/D algorithm transfers information between specimens assigned to the subproblems using a neighborhood relation. In this paper it is shown that parameter settings commonly used in the literature cause an asymmetric neighbor assignment which in turn affects the selective pressure and consequently causes the population to converge asymmetrically. The paper contains theoretical explanation of how this bias is caused as well as an experimental verification. The described effect is undesirable, because a multiobjective optimizer should not introduce asymmetries not present in the optimization problem. The paper gives some guidelines on how to avoid such artificial asymmetries.
The effects of asymmetric neighborhood assignment in the MOEA/D algorithm
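The neighborhood assignment the paper analyzes is easy to reproduce: each subproblem's neighborhood consists of the T weight vectors closest to its own. The sketch below uses exact rational weights (so distance ties are exact) and Python's stable sort, which breaks ties toward the lower index, mimicking a deterministic tie-breaking rule; n = 10 and T = 4 are illustrative choices. Counting how often each subproblem appears in the neighborhoods of others exposes the uneven assignment at the two ends of the weight spectrum:

```python
from fractions import Fraction

def weight_vectors(n):
    # Evenly spaced bi-objective weights; Fractions keep tie distances exact.
    return [(Fraction(i, n - 1), 1 - Fraction(i, n - 1)) for i in range(n)]

def neighborhoods(weights, T):
    # For each subproblem, the T closest weight vectors (itself included).
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [sorted(range(len(weights)), key=lambda j: sqdist(w, weights[j]))[:T]
            for w in weights]

W = weight_vectors(10)
B = neighborhoods(W, T=4)
# How often each subproblem occurs across all neighborhoods: with stable
# lowest-index-first tie-breaking, the two boundary subproblems end up with
# different counts, i.e. they experience different selection pressure.
counts = [sum(1 for b in B if i in b) for i in range(len(W))]
```

Here `counts[0]` and `counts[-1]` differ even though the weight vectors themselves are placed symmetrically, which is the kind of artificial asymmetry the paper warns about.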
S1568494614004475
Reinforcement learning (RL) for large and complex problems faces the curse of dimensionality. To overcome this problem, frameworks based on temporal abstraction have been presented, each having its advantages and disadvantages. This paper proposes a new method, in the spirit of the strategies introduced in hierarchical abstract machines (HAMs), to create a high-level reinforcement learning controller layer that uses options. The proposed framework employs a non-deterministic automaton as a controller to make more effective use of temporally extended actions and state space clustering. The method can be viewed as a bridge between the option and HAM frameworks: it builds connecting structures between the two so as to reduce the disadvantages of both while retaining their advantages. Experimental results on different test environments show the significant efficiency of the proposed method.
Automatic abstraction controller in reinforcement learning agent via automata
S1568494614004487
Background Short-term load forecasting (STLF) is an important issue that has been widely explored and examined with respect to the operation of power systems and commercial transactions in electricity markets. Of the existing forecasting models, support vector regression (SVR) has attracted much attention. While model selection, including feature selection and parameter optimization, plays an important role in short-term load forecasting using SVR, most previous studies have treated feature selection and parameter optimization as two separate tasks, which is detrimental to prediction performance. Objective By evolving feature selection and parameter optimization simultaneously, the main aims of this study are to make practitioners aware of the benefits of applying unified model selection in STLF using SVR and to provide one solution for model selection in the framework of a memetic algorithm (MA). Methods This study proposes a comprehensive learning particle swarm optimization (CLPSO)-based memetic algorithm (CLPSO-MA) that evolves feature selection and parameter optimization simultaneously. In the proposed CLPSO-MA algorithm, CLPSO is applied to explore the solution space, while a problem-specific local search is proposed for conducting individual learning, thereby enhancing the exploitation of CLPSO. Results Compared with other well-established counterparts, the benefits of the proposed unified model selection problem and of CLPSO-MA for model selection are verified using two real-world electricity load datasets, which indicates that SVR equipped with CLPSO-MA can be a promising alternative for short-term load forecasting.
Comprehensive learning particle swarm optimization based memetic algorithm for model selection in short-term load forecasting using support vector regression
S1568494614004499
Probabilistic graphical models such as Bayesian Networks are one of the most powerful structures known to the Computer Science community for deriving probabilistic inferences. However, modern cognitive psychology has revealed that human decisions do not follow the rules of classical probability theory, because humans cannot process large amounts of data in order to make judgments. Consequently, the inferences performed are based on limited data coupled with several heuristics, leading to violations of the law of total probability. This means that probabilistic graphical models based on classical probability theory are too limited to fully simulate and explain various aspects of human decision making. Quantum probability theory was developed in order to accommodate the paradoxical findings that the classical theory could not explain. Recent findings in cognitive psychology revealed that quantum probability can fully describe human decisions in an elegant framework. These findings suggest that, before taking a decision, human thoughts behave as superposed waves that can interfere with each other, influencing the final decision. In this work, we propose a new Bayesian Network based on the psychological findings of cognitive scientists. To the best of our knowledge, no quantum-like probabilistic inference system has been proposed in Computer Science, despite its promising performance. We performed experiments with two very well known Bayesian Networks from the literature. The results obtained revealed that the quantum-like Bayesian Network can drastically affect the probabilistic inferences, especially when the levels of uncertainty of the network are very high (no pieces of evidence observed). When the levels of uncertainty are very low, the proposed quantum-like network collapses to its classical counterpart.
Interference effects in quantum belief networks
S1568494614004505
The present paper proposes systematic designs of stable adaptive fuzzy logic controllers (AFLCs), employing hybridizations of Lyapunov strategy based approach (LSBA) and a contemporary stochastic optimization technique, for controlling the temperature of a thermal process in an air-heater system with transport-delay and in the presence of disturbances. Harmony search (HS) algorithm has been considered here as a candidate stochastic optimization technique utilized in conjunction with the Lyapunov theory to develop the hybrid models in this work. The objective of this work is to design stable adaptive fuzzy controllers which can provide high degree of automation, guarantee asymptotic stability and also achieve satisfactory transient performance by simultaneous adaptations of both the fuzzy controller structure and its free parameters. The results obtained from real-life experiments aptly demonstrate the usefulness of the proposed approach.
Harmony search algorithm and Lyapunov theory based hybrid adaptive fuzzy controller for temperature control of air heater system with transport-delay
S1568494614004517
The goal of this paper is to handle large-variation issues in fuzzy data by constructing a variable-spread multivariate adaptive regression splines (MARS) fuzzy regression model with crisp parameter estimation and fuzzy error terms. It deals with imprecise measurement of the response variable and crisp measurement of the explanatory variables. The proposed method is a two-phase procedure that applies the MARS technique in phase one and an optimization problem in phase two to estimate the center and fuzziness of the response variable. The proposed method therefore handles two problems simultaneously: the problem of large variation and the problem of variable spreads in fuzzy observations. A realistic application of the proposed method is also presented, in which the suspended load is modeled using discharge in a hydrology engineering problem. Empirical results demonstrate that the proposed approach is more efficient and more realistic than some well-known least-squares fuzzy regression models.
A hybrid fuzzy regression model and its application in hydrology engineering
S1568494614004529
Nowadays, many real applications comprise data-sets where the distribution of the classes is significantly different. These data-sets are commonly known as imbalanced data-sets. Traditional classifiers are not able to deal with these kinds of data-sets because they tend to classify only majority classes, obtaining poor results for minority classes. The approaches that have been proposed to address this problem can be categorized into three types: resampling methods, algorithmic adaptations and cost sensitive techniques. Radial Basis Function Networks (RBFNs), artificial neural networks composed of local models or RBFs, have demonstrated their efficiency in different machine learning areas. Centers, widths and output weights for the RBFs must be determined when designing RBFNs. Taking into account the locally tuned response of RBFs, the objective of this paper is to study the influence of global and local paradigms on the weights training phase, within the RBFNs design methodology, for imbalanced data-sets. Least Mean Square and the Singular Value Decomposition have been chosen as representatives of local and global weights training paradigms respectively. These learning algorithms are inserted into classical RBFN design methods that are run on imbalanced data-sets and also on these data-sets preprocessed with re-balance techniques. After applying statistical tests to the results obtained, some guidelines about the RBFN design methodology for imbalanced data-sets are provided.
Training algorithms for Radial Basis Function Networks to tackle learning processes with imbalanced data-sets
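To make the local weight-training paradigm concrete, here is a minimal sketch using a toy setup of our own, not the paper's experimental code: with Gaussian centers and widths fixed beforehand, the output weights of an RBFN are trained sample-by-sample with Least Mean Square. A global alternative would solve the same linear problem in one shot with an SVD-based least-squares solver.

```python
# Toy RBFN output-weight training with LMS (illustrative sketch only).
import math

CENTERS, WIDTH = [-1.0, 0.0, 1.0], 0.5  # assumed fixed beforehand

def rbf_features(x):
    # Gaussian basis function activations for a 1-D input
    return [math.exp(-((x - c) ** 2) / (2 * WIDTH ** 2)) for c in CENTERS]

def predict(x, w):
    return sum(wi * pi for wi, pi in zip(w, rbf_features(x)))

def train_lms(xs, ys, eta=0.1, epochs=500):
    # LMS: for each sample, nudge the weights along the error gradient
    w = [0.0] * len(CENTERS)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = rbf_features(x)
            err = y - sum(wi * pi for wi, pi in zip(w, phi))
            w = [wi + eta * err * pi for wi, pi in zip(w, phi)]
    return w

# Toy 1-D regression problem: y = x^2 sampled at five points
xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
ys = [x * x for x in xs]
w = train_lms(xs, ys)
print([round(predict(x, w), 2) for x in xs])
```

The design choice the paper studies is exactly this split: LMS updates are local and order-dependent, while a global solver computes the unique least-squares optimum for the same fixed basis.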
S1568494614004530
Spatial architecture neural network (SANN), which is inspired by the connection patterns of excitatory pyramidal neurons and inhibitory interneurons in the neocortex, is a multilayer artificial neural network with good learning accuracy and generalization ability in real applications. However, the backpropagation-based learning algorithm (named BP-SANN) may suffer from high time consumption and slow convergence. In this paper, a new fast and accurate two-phase sequential learning scheme for SANN is introduced to guarantee the network performance. With this new learning approach (named SFSL-SANN), only the weights connecting to output neurons are trained during the learning process. In the first phase, a least-squares method is applied to estimate the span-output-weight on the basis of randomly generated, fixed initial weight values. An improved iterative learning algorithm is then used to learn the feedforward-output-weight in the second phase. A detailed effectiveness comparison of SFSL-SANN with BP-SANN and other popular neural network approaches is carried out on benchmark problems drawn from classification, regression and time-series prediction applications. The results demonstrate that SFSL-SANN converges faster and is more time-saving than BP-SANN, and produces better learning accuracy and generalization performance than the other approaches.
A fast and efficient two-phase sequential learning algorithm for spatial architecture neural network
S1568494614004542
Gene selection and sample classification based on gene expression data are important research trends in bioinformatics. It is very difficult to select significant genes closely related to classification because of the high dimension and small sample size of gene expression data. Rough set theory based on neighborhoods has been successfully applied to gene selection, as it selects attributes without redundancy and deals with numerical attributes directly. The construction of neighborhoods, approximation operators and an attribute reduction algorithm are the three key components of this gene selection approach. In this study, a novel neighborhood for numerical data, named the intersection neighborhood, was defined. The performances of two kinds of approximation operators were compared on gene expression data. A significant gene selection algorithm, which was applied to the analysis of plant stress response, was proposed by using the positive region and gene ranking, and this algorithm was then extended with threshold optimization for the intersection neighborhood. The performance of the proposed algorithm was analyzed, along with a comparison with other related methods, classical algorithms and rough set methods. The results of experiments on four data sets showed that the intersection neighborhood was more flexible in adapting to data with various structures, and that the approximation operator based on elementary sets was more suitable for this application than the one based on elements. In other words, the proposed algorithms were effective, as they could select significant gene subsets without redundancy and achieve high classification accuracy.
Gene selection using rough set based on neighborhood for the analysis of plant stress response
S1568494614004554
Vague sets were first proposed by Gau and Buehrer [11] as an extension of fuzzy sets, encompassing fuzzy sets and interval-valued fuzzy sets as special cases. Vague sets consist of two parts, namely the membership function and the nonmembership function. These sets are therefore more flexible than existing fuzzy sets in meeting practical demands and provide much more information about the situation. In this paper, a new approach for the ranking of trapezoidal vague sets is introduced. Shortcomings in some existing ranking approaches are pointed out. Validation of the proposed ranking method is established through the reasonable properties of fuzzy quantities. Further, the proposed ranking approach is applied to develop a new method for dealing with vague risk analysis problems, namely finding the probability of failure of each component of a compressor system, which could be used for managerial decision making and future system maintenance strategy. The proposed method thus provides a useful way of handling vague risk analysis problems.
A novel method for ranking of vague sets for handling the risk analysis of compressor system
S1568494614004566
The technique for order preference by similarity to ideal solution (TOPSIS) method is a well-known compromising method for multiple criteria decision analysis. This paper develops an extended TOPSIS method with an inclusion comparison approach for addressing multiple criteria group decision-making problems in the framework of interval-valued intuitionistic fuzzy sets. Considering the relative agreement degrees and the importance weights of multiple decision makers, this paper presents a modified hybrid averaging method with an inclusion-based ordered weighted averaging operation for forming a collective decision environment. Based on the main structure of the TOPSIS method, this paper utilizes the concept of inclusion comparison possibilities to propose a new index for an inclusion-based closeness coefficient for ranking the alternatives. Additionally, two optimization models are established to determine the criterion weights for addressing situations in which the preference information is completely unknown or incompletely known. Finally, the feasibility and effectiveness of the proposed methods are illustrated by a medical group decision-making problem.
The inclusion-based TOPSIS method with interval-valued intuitionistic fuzzy sets for multiple criteria group decision making
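For readers unfamiliar with the baseline, the classical crisp TOPSIS closeness coefficient that the inclusion-based variant generalizes can be sketched as follows. The ratings below are hypothetical, and the paper replaces these Euclidean distances with inclusion comparison possibilities over interval-valued intuitionistic fuzzy sets.

```python
# Classical (crisp) TOPSIS sketch -- the baseline the paper extends.
import math

def topsis(matrix, weights):
    # matrix[i][j]: benefit-type rating of alternative i on criterion j
    m, n = len(matrix), len(matrix[0])
    # vector-normalise each column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) for j in range(n)]  # positive ideal
    anti = [min(v[i][j] for i in range(m)) for j in range(n)]   # negative ideal
    cc = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        cc.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return cc

ratings = [[7, 9, 8], [8, 7, 7], [9, 6, 8]]  # hypothetical alternatives
cc = topsis(ratings, weights=[0.5, 0.3, 0.2])
print(max(range(len(cc)), key=cc.__getitem__))  # index of the best alternative
```

The paper's contribution sits on top of this skeleton: the collective decision matrix, the ideal solutions and the closeness index are all reinterpreted in the interval-valued intuitionistic fuzzy setting.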
S1568494614004578
The limited battery life of modern mobile devices is one of the key problems limiting their use. Even though offloading computation onto cloud computing platforms can considerably extend battery duration, it is hard to evaluate not only the cases in which offloading guarantees real advantages, given the requirements of the application in terms of data transfer, computing power needed, and so on, but also whether user requirements (i.e. the cost of using the cloud services, a required QoS level, etc.) are satisfied. To this aim, this paper presents a framework for generating models that make automatic decisions on the offloading of mobile applications using a genetic programming (GP) approach. The GP system is designed around a taxonomy of the properties relevant to the offloading process, concerning the user, the network, the data and the application. The adopted fitness function permits different weights to be given to the four categories considered during the process of building the model. Experimental results, obtained on datasets representing different categories of mobile applications, permit the analysis of the behavior of our algorithm in different applicative contexts. Finally, a comparison with state-of-the-art classification algorithms establishes the effectiveness of the approach in modeling the offloading process.
Automatic offloading of mobile applications into the cloud by means of genetic programming
S1568494614004682
Data analysis techniques have been traditionally conceived to cope with data described in terms of numeric vectors. The reason behind this fact is that numeric vectors have a well-defined and clear geometric interpretation, which facilitates the analysis from the mathematical viewpoint. However, the state-of-the-art research on current topics of fundamental importance, such as smart grids, networks of dynamical systems, biochemical and biophysical systems, intelligent trading systems, multimedia content-based retrieval systems, and social network analysis, deals with structured and non-conventional information characterizing the data, providing richer and hence more complex patterns to be analyzed. As a consequence, representing patterns by complex (relational) structures and defining suitable, usually non-metric, dissimilarity measures is becoming a consolidated practice in related fields. However, as the data sources become more complex, the capability of judging the data quality (or reliability), as well as the related interpretability, can be seriously compromised. For this purpose, automated methods able to synthesize relevant information, and at the same time rigorously describe the uncertainty in the available datasets, are very important: information granulation is the key aspect in the analysis of complex data. In this paper, we discuss our general viewpoint on the adoption of information granulation techniques in the general context of soft computing and pattern recognition, conceived as a fundamental approach towards the challenging problem of automatic modeling of complex systems. We focus on the specific setting of processing the so-called non-geometric data, which diverges significantly from what has been done so far in the related literature. We highlight the motivations and the founding concepts, and finally we provide a high-level conceptualization of the proposed data analysis framework.
Granular modeling and computing approaches for intelligent analysis of non-geometric data
S1568494614004694
Although neural networks and support vector machines (SVMs) are the traditional predictors for the classification of complex problems, these opaque paradigms cannot explain the logic behind the discrimination process. Therefore, within the quite unexplored area of evolutionary algorithms for opening the SVM decision black box, this paper employs a cooperative coevolutionary (CC) technique to extract discriminative and compact class prototypes following an SVM model. Various interactions between the SVM and the CC are considered, while many experiments test three decisive hypotheses: fidelity to the SVM prediction, superior accuracy to the CC classifier alone, and a compact and comprehensible resulting output, achieved through a class-oriented form of feature selection. The results support the hybridization by statistically and visually demonstrating its advantages.
Post-evolution of variable-length class prototypes to unlock decision making within support vector machines
S1568494614004700
Multi-attribute group decision making (MAGDM) is an important research topic in decision theory. In recent decades, many useful methods have been proposed to solve various MAGDM problems, but very few methods simultaneously account for both the ranking and the magnitude of the decision data, especially for interval-valued intuitionistic fuzzy decision data. The purpose of this paper is to develop a soft computing technique based on maximizing consensus and fuzzy TOPSIS in order to solve interval-valued intuitionistic fuzzy MAGDM problems from these two aspects of the decision data. To this end, we first define a consensus index, from the perspective of the ranking of the decision data, for measuring the degree of consensus between the individual and the group. Then, we establish an optimization model based on maximizing consensus to determine the weights of the experts. Following the idea of TOPSIS, we calculate the closeness indices of the alternatives from the perspective of the magnitude of the decision data. To identify the optimal alternatives and determine their optimum quantities, we further construct a multi-choice goal programming model based on the derived closeness indices. Finally, an example is given to verify the developed method and to make a comparative analysis.
Soft computing based on maximizing consensus and fuzzy TOPSIS approach to interval-valued intuitionistic fuzzy group decision making
S1568494614004712
Visual sensor networks (VSNs) consist of spatially distributed video cameras that are capable of compressing and transmitting the video sequences they acquire. We consider a direct-sequence code division multiple access (DS-CDMA) VSN, where each node has its individual requirements in compression bit rate and energy consumption, depending on the corresponding application and the characteristics of the monitored scene. We study two optimization criteria for the optimal allocation of the source and channel coding rates, which assume discrete values, as well as for the power levels of all nodes, which are continuous, under transmission bit rate constraints. The first criterion minimizes the average distortion of the video received by all nodes, while the second one minimizes the maximum video distortion among all nodes. The resulting mixed integer optimization problems are tackled with a modern optimization algorithm, namely particle swarm optimization (PSO), as well as a hybrid scheme that combines PSO with the deterministic Active-Set optimization method. Extensive experimentation on interference-limited as well as noisy environments offers significant intuition regarding the effectiveness of the considered optimization schemes, indicating the impact of the video sequence characteristics on the joint determination of the transmission parameters of the VSN.
A study on visual sensor network cross-layer resource allocation using quality-based criteria and metaheuristic optimization algorithms
S1568494614004724
Sea ports play a significant role in the development of a modern economy. The Baltic Sea is an arterial transport corridor between Eastern and Western Europe, and there is a need to develop a deep-water sea port in the Klaipeda region to satisfy economic needs. This problem involves a multitude of requirements and uncertain conditions that have to be taken into consideration simultaneously. Numerous studies have addressed similar problems by employing multi-criteria methods as an aid. This paper proposes an integrated multi-criteria decision-making model to solve the problem. The backbone of the proposed model is a combination of the Analytic Hierarchy Process (AHP) and fuzzy Additive Ratio Assessment (ARAS-F) methods. The model is presented as a form of decision aiding that could be implemented for any specific port or similar site-selection problem.
Multi-criteria selection of a deep-water port in the Eastern Baltic Sea
S1568494614004736
The development of mechanisms to ease human-machine interaction is an issue of increasing interest both in the software world in general and in database systems in particular. One way to tackle this problem is to approach the natural way in which users express themselves. The Fuzzy Sets Theory and its application to building Fuzzy Databases constitute a consolidated advance in the literature. Another way is to adapt the interaction of the system to the context where it is running. In this sense, this paper presents an approach to building a model of Fuzzy Databases that dynamically adapts to user context. To do so, we have studied the management of context in Fuzzy Database applications and we propose an architecture for the development of intelligent, flexible and customized context-aware database systems. We also present a proof-of-concept implementation to be used with the SQL:99 standard in the Oracle ORDBMS. Finally, through a real application in the medical area, we demonstrate the feasibility of the proposal.
Context-Aware Fuzzy Databases
S1568494614004748
The automatic design of controllers for mobile robots usually requires two stages. In the first stage, sensorial data are preprocessed or transformed into high level and meaningful values of variables which are usually defined from expert knowledge. In the second stage, a machine learning technique is applied to obtain a controller that maps these high level variables to the control commands that are actually sent to the robot. This paper describes an algorithm that is able to embed the preprocessing stage into the learning stage in order to get controllers directly starting from sensorial raw data with no expert knowledge involved. Due to the high dimensionality of the sensorial data, this approach uses Quantified Fuzzy Rules (QFRs), that are able to transform low-level input variables into high-level input variables, reducing the dimensionality through summarization. The proposed learning algorithm, called Iterative Quantified Fuzzy Rule Learning (IQFRL), is based on genetic programming. IQFRL is able to learn rules with different structures, and can manage linguistic variables with multiple granularities. The algorithm has been tested with the implementation of the wall-following behavior both in several realistic simulated environments with different complexity and on a Pioneer 3-AT robot in two real environments. Results have been compared with several well-known learning algorithms combined with different data preprocessing techniques, showing that IQFRL exhibits a better and statistically significant performance. Moreover, three real world applications for which IQFRL plays a central role are also presented: path and object tracking with static and moving obstacles avoidance.
Learning fuzzy controllers in mobile robotics with embedded preprocessing
S156849461400475X
The global positioning system (GPS) is the most widely used military and commercial positioning tool for real-time navigation and location. Geometric dilution of precision (GDOP) is a relevant measure of positioning accuracy and, consequently, of the performance quality of the GPS positioning algorithm. Since the calculation of GPS GDOP carries a time and power burden, involving complicated transformation and inversion of measurement matrices, in this paper we propose hybrid intelligent methods, namely the adaptive neuro-fuzzy inference system (ANFIS), an improved ANFIS, and radial basis function (RBF) networks, for GPS GDOP classification. Our investigation verifies that ANFIS is a high-performance and valuable classifier. In ANFIS training, the radius vector plays a very important role in recognition accuracy. Therefore, in the optimization module, the bee algorithm (BA) is proposed for finding the optimal radius vector. In order to improve the performance of the proposed method, a new improvement of the BA is used. In addition, to enhance the accuracy of the method, principal component analysis (PCA) is utilized as a pre-processing step. Experimental results clearly indicate that the proposed intelligent methods have high classification accuracy rates compared with conventional ones.
New Neural Network-based Approaches for GPS GDOP Classification based on Neuro-Fuzzy Inference System, Radial Basis Function, and Improved Bee Algorithm
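The quantity being classified can be computed directly from satellite geometry with the standard textbook formula GDOP = sqrt(trace((H^T H)^-1)), where each row of H is a unit line-of-sight vector augmented with 1. The sketch below, using a hypothetical geometry rather than the paper's data, shows the matrix inversion burden that the classifiers avoid.

```python
# Standard GDOP computation from satellite geometry (illustrative sketch).
import math

def mat_inv(a):
    # Gauss-Jordan inversion of a small square matrix with partial pivoting
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def gdop(unit_vectors):
    # H has one row per satellite: [ux, uy, uz, 1]
    h = [list(u) + [1.0] for u in unit_vectors]
    hth = [[sum(h[k][i] * h[k][j] for k in range(len(h))) for j in range(4)]
           for i in range(4)]
    q = mat_inv(hth)
    return math.sqrt(sum(q[i][i] for i in range(4)))

# Hypothetical geometry: four satellites, approximate unit line-of-sight vectors
sats = [(0.0, 0.0, 1.0), (0.94, 0.0, 0.34),
        (-0.47, 0.81, 0.34), (-0.47, -0.81, 0.34)]
g = gdop(sats)
print(round(g, 2))
```

A classifier replaces this inversion with a learned mapping from a few geometry features to a GDOP class, which is the motivation the abstract gives.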
S1568494614004773
In this paper we present a novel model originating from Takagi-Sugeno (TS) fuzzy models. It is based on a concept of transductive similarity which, unlike simple inductive similarity, also considers the local neighborhood of a given element. The transductive property of a local space is used in the inference process, which allows the technique to be used in incremental settings as well. Since incremental model construction brings new challenges, we are unable to use the offline transductive approach as some of the previous works did. The key idea of our model is to adjust the activation properties of each rule based on cross-rule similarities. Our method is capable of using the transductive property with any metric. Besides the final model, we also present several improvements to the transductive similarity technique itself, where we alter the similarity metric in several ways to better exploit the influence of the local neighborhood in the final metric. Finally, we demonstrate the superior performance of our technique over state-of-the-art techniques built on TS fuzzy models on several machine learning datasets.
TITS-FM: Transductive incremental Takagi-Sugeno fuzzy models
S1568494614004785
Robots with vastly different capabilities and specifications are available for a wide range of applications. The selection of a robot for a specific application has become more complicated due to the increasing complexity and the advanced features and facilities that are continuously being incorporated into robots by different manufacturers. The aim of this paper is to present an integrated approach for the optimal selection of robots that considers both objective and subjective criteria. The approach utilizes the Fuzzy Delphi Method (FDM), the Fuzzy Analytic Hierarchy Process (FAHP), fuzzy modified TOPSIS or fuzzy VIKOR, and the Brown–Gibson model for robot selection. FDM is used to select the list of important objective and subjective criteria based on the decision makers' opinions. The fuzzy AHP method is then used to find the weight of each criterion (both objective and subjective). Fuzzy modified TOPSIS or fuzzy VIKOR is then used to rank the alternatives based on objective and subjective factors. The rankings obtained are used to calculate a robot selection index based on the Brown–Gibson model. The proposed methodology is illustrated with a case study related to the selection of a robot for teaching purposes. It is found that the highest-ranked alternative based on fuzzy VIKOR is closest to the ideal solution.
An integrated fuzzy MCDM based approach for robot selection considering objective and subjective criteria
S1568494614004797
Visualization methods can significantly improve the outcome of automated knowledge discovery systems by involving human judgment. Star coordinates is a visualization technique that maps k-dimensional data onto a circle using a set of axes sharing the same origin at the center of the circle. It provides the users with the ability to adjust this mapping, through scaling and rotating of the axes, until no mapped point-clouds (clusters) overlap one another. In this state, similar groups of data are easily detectable. However, an effective adjustment can be a difficult or even impossible task for the user in high dimensions. This is especially the case when the input space dimension is about 50 or more. In this paper, we propose a novel method for automatic axes adjustment for high-dimensional data in the Star Coordinate visualization method. This method finds the best two-dimensional view point, one that minimizes intra-cluster distances while keeping the inter-cluster distances as large as possible, by using label information. We call this view point a discernible visualization, in which clusters are easily detectable by the human eye. The label information can be provided by the user or can be the result of performing a conventional clustering method on the input data. The proposed approach optimizes the Star Coordinate representation by formulating the problem as the maximization of a Fisher discriminant; therefore, the problem has a unique global solution and polynomial time complexity. We also prove that manipulating the scaling factor alone is sufficient for creating any given visualization mapping. Moreover, it is shown that k-dimensional data visualization can be modeled as an eigenvalue problem. Using this approach, an optimal axes adjustment in the Star Coordinate method for high-dimensional data can be achieved without any user intervention. The experimental results demonstrate the effectiveness of the proposed approach in terms of accuracy and performance.
Discernible visualization of high dimensional data using label information
S1568494614004803
This paper develops a new method for group decision making and introduces the linguistic continuous ordered weighted distance (LCOWD) measure. It is a new distance measure that combines the linguistic continuous ordered weighted averaging (LCOWA) operator with the ordered weighted distance (OWD) measure while considering the risk attitude of the decision maker. Moreover, it can relieve the influence of extremely large or extremely small deviations on the aggregation results by assigning them smaller weights. These advantages make it suitable for dealing with situations where the input arguments are represented by uncertain linguistic information. Some of the main properties of the LCOWD measure and different particular cases are studied. The applicability of the new approach is also analyzed, focusing on a group decision-making problem.
Linguistic continuous ordered weighted distance measure and its application to multiple attributes group decision making
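As a crisp baseline, the ordered weighted distance (OWD) on which the LCOWD builds sorts the individual deviations in descending order before weighting them, so the weight vector controls how much extreme deviations count. The linguistic and continuous parts of the LCOWD are omitted in this sketch, and the data are hypothetical.

```python
# Crisp OWD sketch: weights act on the ordered deviations, not on positions.
def owd(x, y, weights, lam=1.0):
    # sort the individual deviations |x_j - y_j| in descending order,
    # then aggregate with the OWA-style weights and parameter lambda
    devs = sorted((abs(a - b) for a, b in zip(x, y)), reverse=True)
    return sum(w * d ** lam for w, d in zip(weights, devs)) ** (1.0 / lam)

x, y = [3.0, 7.0, 5.0], [4.0, 2.0, 5.0]
# deviations sorted descending: [5, 1, 0]
print(round(owd(x, y, [1/3, 1/3, 1/3]), 2))  # → 2.0, plain average
print(round(owd(x, y, [0.1, 0.3, 0.6]), 2))  # → 0.8, large deviations damped
```

Assigning small weights to the first (largest) positions is exactly the mechanism the abstract describes for relieving the influence of extreme deviations.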
S1568494614004815
The purpose of this paper is to investigate the relationship between adverse events and infrastructure development investments in an active war theater by using soft computing techniques including fuzzy inference systems (FIS), artificial neural networks (ANNs), and adaptive neuro-fuzzy inference systems (ANFIS) where the accuracy of the predictions is directly beneficial from an economic and humanistic point of view. Fourteen developmental and economic improvement projects were selected as independent variables. A total of four outputs reflecting the adverse events in terms of the number of people killed, wounded or hijacked, and the total number of adverse events has been estimated. The results obtained from analysis and testing demonstrate that ANN, FIS, and ANFIS are useful modeling techniques for predicting the number of adverse events based on historical development or economic project data. When the model accuracy was calculated based on the mean absolute percentage error (MAPE) for each of the models, ANN had better predictive accuracy than FIS and ANFIS models, as demonstrated by experimental results. For the purpose of allocating resources and developing regions, the results can be summarized by examining the relationship between adverse events and infrastructure development in an active war theater, with emphasis on predicting the occurrence of events. We conclude that the importance of infrastructure development projects varied based on the specific regions and time period.
Investigating the relationship between adverse events and infrastructure development in an active war theater using soft computing techniques
S1568494614004827
The multiple traveling salesperson problem (MTSP) is similar to the famous traveling salesperson problem (TSP), except that there is more than one salesperson to visit the cities, though each city must be visited exactly once by exactly one salesperson. For this problem, we consider two different objectives. The first is to minimize the total distance traveled by all the salespersons, whereas the second is to minimize the maximum distance traveled by any one salesperson. The latter objective is about fairness, as it tries to balance the workload among the salespersons. The MTSP, being a generalization of the TSP under both objectives, is also NP-hard. In this paper, we propose two metaheuristic approaches for the MTSP. The first is based on the artificial bee colony algorithm, whereas the second is based on the invasive weed optimization algorithm. We also apply a local search to further improve the solutions obtained by our approaches. Computational results on a wide range of benchmark instances show the superiority of our proposed approaches over all other state-of-the-art approaches for this problem on both objectives.
Two metaheuristic approaches for the multiple traveling salesperson problem
S1568494614004839
Data fitting with B-splines is a challenging problem in reverse engineering for CAD/CAM, virtual reality, data visualization, and many other fields. It is well-known that the fitting improves greatly if knots are considered as free variables. This leads, however, to a very difficult multimodal and multivariate continuous nonlinear optimization problem, the so-called knot adjustment problem. In this context, the present paper introduces an adapted elitist clonal selection algorithm for automatic knot adjustment of B-spline curves. Given a set of noisy data points, our method determines the number and location of knots automatically in order to obtain an extremely accurate fitting of data. In addition, our method minimizes the number of parameters required for this task. Our approach performs very well and in a fully automatic way even for the cases of underlying functions requiring identical multiple knots, such as functions with discontinuities and cusps. To evaluate its performance, it has been applied to three challenging test functions, and results have been compared with those from other alternative methods based on AIS and genetic algorithms. Our experimental results show that our proposal outperforms previous approaches in terms of accuracy and flexibility. Some other issues such as the parameter tuning, the complexity of the algorithm, and the CPU runtime are also discussed.
Elitist clonal selection algorithm for optimal choice of free knots in B-spline data fitting
S1568494614004852
A Genetic Fuzzy System (GFS) is basically a fuzzy system augmented by a learning process based on a genetic algorithm (GA). Fuzzy systems have demonstrated their ability to solve different kinds of problems in various application domains, and there is currently increasing interest in augmenting them with learning and adaptation capabilities. Two of the most successful approaches to hybridizing fuzzy systems with learning and adaptation methods have been made in the realm of soft computing. A GA can be merged with a fuzzy system for different purposes, such as rule selection, membership function optimization, rule generation, and coefficient optimization for data classification. Here we propose an Adaptive Genetic Fuzzy System (AGFS) for optimizing rules and membership functions in medical data classification. The primary intentions of the research are: (1) to generate rules from data and adapt the genetic algorithm for the selection of optimized rules, introducing a new operator, called systematic addition, to address the exploration problem in the genetic algorithm; (2) to propose a simple technique for designing membership functions and performing discretization; and (3) to design a fitness function based on the frequency of occurrence of the rules in the training data. Finally, to establish the efficiency of the proposed classifier, the performance of the proposed genetic-fuzzy classifier is evaluated through quantitative, qualitative, and comparative analyses. The outcomes show that AGFS obtains better accuracy than existing systems.
AGFS: Adaptive Genetic Fuzzy System for medical data classification
S1568494614004864
In designing a supply chain (SC) system, problems arise when a company has an unsatisfactory inventory control policy and material routing between supplier, producer, and distributor in the SC under specified cost and demand. The integration of decisions of different functions into a single optimization model is the basis of this research. The aim of this paper is to study and compare existing models of supply, production, and distribution in the SC and to propose a model that integrates these criteria in supply chain management (SCM). Furthermore, it proposes a new method for calculating the fitness function in the genetic algorithm (GA) process. The successful design of this model has led us to explore the use of heuristic methods such as GA to quantify the flows of the SC, i.e., the information and material flows. First, the fuzzy analytic hierarchy process (FAHP) is adapted to evaluate the objective function weights in the SC. Then, the final weights of the objective function are determined by the technique for order of preference by similarity to ideal solution (TOPSIS). This research also simulates a real company's SC operations and determines the most effective strategic and operational policies for an effective SC system. The results obtained from the model show that it is robust. The model can also be applied to other industrial environments with slight modifications.
Hybrid GA for material routing optimization in supply chain
S1568494614004876
Nowadays, the scheduling of production cannot be done in isolation from the scheduling of transportation, since a coordinated solution to the integrated problem may improve the performance of the whole supply chain. In this paper, because of the widespread use of rail transportation in supply chains, we develop the integrated scheduling of production and rail transportation. The problem is to determine both the production schedule and the rail transportation allocation of orders so as to optimize customer service at minimum total cost. In addition, we utilize several procedures and heuristics to encode the model so that it can be addressed by two capable metaheuristics: the genetic algorithm (GA) and the recently developed Keshtel algorithm (KA). The latter is applied to a mathematical model in the supply chain literature for the first time. In addition, the Taguchi experimental design method is utilized to set and estimate proper values of the algorithms' parameters to improve their performance. For the performance evaluation of the proposed algorithms, various problem sizes are employed and the computational results of the algorithms are compared with each other. Finally, we investigate the impact of increasing problem size on the performance of our algorithms.
Solving the integrated scheduling of production and rail transportation problem by Keshtel algorithm
S156849461400489X
In this research, we propose a novel framework referred to as collective game behavior decomposition, where complex collective behavior is assumed to be generated by the aggregation of several groups of agents following different strategies, and complexity emerges from the collaboration and competition of individuals. The strategy of an agent is modeled by simple game theory models with limited information. Genetic algorithms are used to obtain the optimal collective behavior decomposition based on historical data. The trained model can then be used for collective behavior prediction. For modeling individual behavior, two simple games, the minority game and the mixed game, are investigated in experiments on real-world stock prices and foreign-exchange rates. Experimental results are presented to show the effectiveness of the newly proposed model.
Evolutionary collective behavior decomposition model for time series data mining
S1568494614004906
In the framework of Axiomatic Fuzzy Set (AFS) theory, we propose a new approach to data clustering. The objective of this clustering is to adhere to some of the principles of grouping exercised by humans when determining a structure in data. Compared with other clustering approaches, the proposed approach offers more detailed insight into the structure of the clusters and the underlying decision making process. This contributes to the enhanced interpretability of the results via the representation capabilities of AFS theory. The effectiveness of the proposed approach is demonstrated using real-world data, and the obtained results show that the performance of the clustering is comparable with other fuzzy rule-based clustering methods and with the benchmark clustering methods FCM and K-means. Experimental studies have shown that the proposed fuzzy clustering method can discover the clusters in the data and help specify them in terms of comprehensible fuzzy rules.
Fuzzy clustering with semantic interpretation
S1568494614004918
The present work addresses the problem of missing data in multidimensional time series such as those collected during operational transients in industrial plants. We propose a novel method for missing data reconstruction based on three main steps: (1) computing a fuzzy similarity measure between a segment of the time series containing the missing data and segments of reference time series; (2) assigning a weight to each reference segment; (3) reconstructing the missing values as a weighted average of the reference segments. The performance of the proposed method is compared with that of an Auto Associative Kernel Regression (AAKR) method on an artificial case study and a real industrial application regarding shut-down transients of a Nuclear Power Plant (NPP) turbine.
Reconstruction of missing data in multidimensional time series by fuzzy similarity
S156849461400492X
We propose a new image encryption algorithm based on spatiotemporal non-adjacent coupled map lattices. The system of non-adjacent coupled map lattices has more outstanding cryptographic features in its dynamics than the logistic map or coupled map lattices do. In the proposed image encryption, we employ a bit-level pixel permutation strategy that enables the bit planes of pixels to permute mutually without any extra storage space. Simulations have been carried out, and the results demonstrate the superior security and high efficiency of the proposed algorithm.
A new image encryption algorithm based on non-adjacent coupled map lattices
S1568494614004943
Effective planning and scheduling of relief operations play a key role in saving lives and reducing damage in disasters. These emergency operations involve a variety of challenging optimization problems, for which evolutionary computation methods are well suited. In this paper we survey the research advances in evolutionary algorithms (EAs) applied to disaster relief operations. The operational problems are classified into five typical categories, and representative works on EAs for solving these problems are summarized, in order to give readers a general overview of the state of the art and help them find suitable methods in practical applications. Several state-of-the-art methods are compared on a set of real-world emergency transportation problem instances, and some lessons are drawn from the experimental analysis. Finally, the strengths, limitations, and future directions in the area are discussed.
Evolutionary optimization for disaster relief operations: A survey
S1568494614004955
The process of drug design and discovery demands several man years and huge investment. Computer-aided drug design (CADD) technique is an aid to speed up the drug discovery process. De novo drug design, a CADD technique to identify drug-like novel chemical structures from a huge chemical search space, helps to find new drugs by the optimization of multiple pharmaceutically relevant parameters required for a successful drug. As the search space is very large in the case of de novo drug design, evolutionary algorithm (EA), a soft computing technique can be used to find an optimal solution, which in this case is a novel drug. In this paper, various EA techniques used in de novo drug design tools are surveyed and analyzed in detail, with particular emphasis on the computational aspects.
Evolutionary algorithms for de novo drug design – A survey
S1568494614004967
The design stage represents one of the most critical steps in product development. Here, a great number of considerations have to be borne in mind, e.g., technical, functional, aesthetic, or economic criteria. More recently, increasing concern for environmental aspects has added complexity to the process, in what is known as ecodesign. In this respect, a framework to integrate the criteria provided by quantitative environmental indicators has been proposed on the basis of the Fuzzy Preference Programming method and fuzzy logic reasoning. As a result, an integrated Ecodesign Index (EcoInd) is obtained. This idea enables decision making at the process and product level, taking into account different indicators at a time. The ecodesign of children's footwear was taken as a case study, and an ecodesign tool (decision support system) that included the estimation of environmental indicators and their integration was developed. Different models of shoes were analyzed to identify the most environmentally friendly design and to test the tool. In this case, the Ecological Footprint and two Environmental Risk Assessment indicators, namely the Hazard Quotient and Cancer Risk, were selected as relevant environmental indicators, and they were computed from data provided by a shoe manufacturer. These indicators were then integrated in the ecodesign tool, and the EcoInd values were appraised for the children's footwear models analyzed. According to these figures, the models were ranked, from best to worst, as Red Leather > White Leather > White Synthetic > Pink Synthetic.
A decision support system based on fuzzy reasoning and AHP–FPP for the ecodesign of products: Application to footwear as case study
S1568494614004979
In this paper, based on the logarithmic image processing model and the dyadic wavelet transform (DWT), we introduce a logarithmic DWT (LDWT), a mathematical transform that can be used in image edge detection and in signal and image reconstruction. A comparative study of the proposed LDWT-based method against the Canny and Sobel edge detection methods is carried out using Pratt's Figure of Merit, and the results show that the LDWT-based method is better and more robust at detecting low-contrast edges than the other two methods. The gradient maps of images are detected using the DWT- and LDWT-based methods, and the experimental results demonstrate that the gradient maps obtained by the LDWT-based method are more adequate and more precisely located. Finally, we use the DWT- and LDWT-based methods to reconstruct one-dimensional signals and two-dimensional images, and the reconstruction results show that the LDWT-based reconstruction method is more effective.
Logarithmic dyadic wavelet transform with its applications in edge detection and reconstruction
S1568494614004980
Clinical guidelines and protocols (CGPs) are standard documents with the aim of helping practitioners in their daily work. Their computerization has received much attention in recent years, but it still presents some problems, mainly due to the low sustainability and low adaptability to changes (both in knowledge and technology) of the computerized CGPs. This paper presents an approach to an easy and automatic creation of Fuzzy Inference Systems (FISs), which are suitable for the computerized interpretation of differential diagnoses. The proposed FIS development process is based on applying Model-Driven Software Engineering techniques: automatic generation of computer artefacts and separation of concerns. The process focuses on the separation of roles during the design stage: domain experts use a basic editor that allows them to define the categories and factors that will be involved in the FIS in natural language, while knowledge engineers at a later stage refine these elements using a more advanced editor. The whole system has been tested by automatically generating two FISs that have been included in a computerized CGP for the diagnosis of a rare disease called hyperammonemia. This CGP has been validated and it is currently in use.
Automatic construction of Fuzzy Inference Systems for computerized clinical guidelines and protocols
S1568494614004992
This paper formulates the global route planning problem for the unmanned aerial vehicles (UAVs) as a constrained optimization problem in the three-dimensional environment and proposes an improved constrained differential evolution (DE) algorithm to generate an optimal feasible route. The flight route is designed to have a short length and a low flight altitude. The multiple constraints based on the realistic scenarios are taken into account, including maximum turning angle, maximum climbing/gliding slope, terrain, forbidden flying areas, map and threat area constraints. The proposed DE-based route planning algorithm combines the standard DE with the level comparison method and an improved strategy is proposed to control the satisfactory level. To show the high performance of the proposed method, we compare the proposed algorithm with six existing constrained optimization algorithms and five penalty function based methods. Numerical experiments in two test cases are carried out. Our proposed algorithm demonstrates a good performance in terms of the solution quality, robustness, and the constraint-handling ability.
An improved constrained differential evolution algorithm for unmanned aerial vehicle global route planning
S1568494614005006
This paper studies decision rules for ambulance scheduling. The scheduling decision rules embedded in decision support systems for emergency ambulance scheduling consider criteria on the average response time and the percentage of ambulance requests that are responded to within 15 min, which are usually ignored in traditional scheduling policies. The challenge in designing the decision rules lies in the stochastic and dynamic nature of request arrivals and fulfillment processes; complex traffic conditions, as well as the time-dependent spatial patterns of some parameters, further complicate the decisions in the problem. To illustrate the proposed decision rules' usage in practice, a simulator is developed for performing numerical experiments that validate the effectiveness and efficiency of the proposed decision rules.
Decision rules for ambulance scheduling decision support systems
S156849461400502X
This paper presents fully fuzzy fixed charge multi-item solid transportation problems (FFFCMISTPs), in which direct costs, fixed charges, supplies, demands, conveyance capacities, and transported quantities (decision variables) are fuzzy in nature. The objective is to minimize the total fuzzy cost under fuzzy decision variables. In this paper, some approaches are proposed to find the fully fuzzy transported amounts for a fuzzy solid transportation problem (FSTP). The proposed approaches are applicable to both balanced and unbalanced FFFCMISTPs. A fuzzy fixed charge multi-item solid transportation problem (FFCMISTP), in which the transported amounts (decision variables) are not fuzzy, is also presented and solved by other techniques. The models are illustrated with numerical examples, and the nature of the solutions is discussed.
Fully fuzzy fixed charge multi-item solid transportation problem
S1568494614005031
Evolutionary techniques such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Cuckoo Search (CS) are promising nature-inspired meta-heuristic optimization algorithms. Cuckoo Search, combined with Lévy flight behavior and a Markov chain random walk, can find a globally optimal solution very quickly. The aim of this paper is to investigate the applicability of the Cuckoo Search algorithm to the cryptanalysis of the Vigenere cipher. It is shown that the optimal solutions obtained by CS are better than the best solutions obtained by GA or PSO for the analysis of the Vigenere cipher. The results show that a Cuckoo Search based attack is very effective on the Vigenere cryptosystem.
Cryptanalysis of Vigenere cipher using Cuckoo Search
S1568494614005043
Reinforcement learning (RL) is a powerful solution to adaptive control when no explicit model exists for the system being controlled. To handle uncertainty along with the lack of explicit model for the Cloud's resource management systems, this paper utilizes continuous RL in order to provide an intelligent control scheme for dynamic resource provisioning in the spot market of the Cloud's computational resources. On the other hand, the spot market of computational resources inside Cloud is a real-time environment in which, from the RL point of view, the control task of dynamic resource provisioning requires defining continuous domains for (state, action) pairs. Commonly, function approximation is used in RL controllers to overcome continuous requirements of (state, action) pair remembrance and to provide estimates for unseen statuses. However, due to the computational complexities of approximation techniques like neural networks, RL is almost impractical for real-time applications. Thus, in this paper, Ink Drop Spread (IDS) modeling method, which is a solution to system modeling without dealing with heavy computational complexities, is used as the basis to develop an adaptive controller for dynamic resource provisioning in Cloud's virtualized environment. The performance of the proposed control mechanism is evaluated through measurement of job rejection rate and capacity waste. The results show that at the end of the training episodes, in 90 days, the controller learns to reduce job rejection rate down to 0% while capacity waste is optimized down to 11.9%.
Using IDS fitted Q to develop a real-time adaptive controller for dynamic resource provisioning in Cloud's virtualized environment
S1568494614005055
In this study, a new state space representation of the protein folding problem for use with reinforcement learning methods is proposed. In existing studies, the way the state-action space is defined prevents the agent from learning the state space for an arbitrary amino-acid sequence; rather, the defined state-action space is valid only for a particular amino-acid sequence. Moreover, in the existing methods, the size of the state space depends strictly on the amino-acid sequence length. The newly proposed state-action space reduces this dependency and allows the agent to find the optimal fold of any sequence of a certain length. Additionally, by utilizing an ant-based reinforcement learning algorithm, the Ant-Q algorithm, the optimum fold of a protein is found rapidly compared to the standard Q-learning algorithm. Experiments showed that the new state-action space with the ant-based reinforcement learning method is much better suited to the protein folding problem in the two-dimensional lattice model.
A novel state space representation for the solution of 2D-HP protein folding problem using reinforcement learning methods
S1568494614005067
Attribute reduction is viewed as an important preprocessing step for pattern recognition and data mining. Most research has focused on attribute reduction using rough sets. Recently, Tsang et al. discussed attribute reduction with covering rough sets (Tsang et al., 2008), where an approach based on the discernibility matrix was presented to compute all attribute reducts. In this paper, we provide a new method for constructing a simpler discernibility matrix with covering based rough sets, and we improve some characterizations of attribute reduction provided by Tsang et al. It is proved that the improved discernibility matrix is equivalent to the old one, but the computational complexity of the discernibility matrix is relatively reduced. We then further study attribute reduction in decision tables based on a different strategy of identifying objects. Finally, the proposed reduction method is compared with some existing feature selection methods through numerical experiments, and the experimental results show that the proposed reduction method is efficient and effective.
An improved attribute reduction scheme with covering based rough sets
S1568494614005079
As a fuzzy set extension, the hesitant fuzzy set is effectively used to model situations where it is permissible to assign several possible membership degrees of an element to a set due to the ambiguity between different values. We first introduce some new operational rules for hesitant fuzzy sets based on the Hamacher t-norm and t-conorm, from which a family of hesitant fuzzy Hamacher operators is proposed for aggregating hesitant fuzzy information. Some basic properties of the proposed operators are given, and the relationships between them are shown in detail. We further discuss the interrelations between the proposed aggregation operators and the existing hesitant fuzzy aggregation operators. Applying the proposed hesitant fuzzy operators, we develop a new technique for hesitant fuzzy multicriteria decision making problems. Finally, the effectiveness of the proposed technique is illustrated by means of a practical example.
Hesitant fuzzy Hamacher aggregation operators for multicriteria decision making
S1568494614005080
Almost all the molecular docking models used by widespread docking software are approximate, and the approximation makes the scoring function inaccurate under some circumstances. This study proposes a new molecular docking scoring method: based on a force-field scoring function, it uses an information entropy genetic algorithm to solve the docking problem. Empirical-based and knowledge-based scoring functions are also considered in this method. Instead of a simple combination with fixed weights, the coefficients of each factor are adapted in the process of searching for the optimum solution. A genetic algorithm with multi-population evolution and an entropy-based search technique with a narrowing search space is used to solve the optimization model for the molecular docking problem. To evaluate this method, we carried out a numerical experiment with 134 protein–ligand complexes from the publicly available GOLD test set. The results show that the method greatly improves docking accuracy over individual force-field scoring. Compared with other popular docking software, it achieves the best average Root-Mean-Square Deviation (RMSD), and its average computing time is also competitive.
Adaptive molecular docking method based on information entropy genetic algorithm
S1568494614005092
In this paper, novel computing approaches using three different models of feed-forward artificial neural networks (ANNs) are presented for the solution of an initial value problem (IVP) based on the first Painlevé equation. These mathematical models of ANNs are developed in an unsupervised manner, with the capability to satisfy the initial conditions exactly, using log-sigmoid, radial basis, and tan-sigmoid transfer functions in the hidden layers to approximate the solution of the problem. The training of the design parameters in each model is performed with a sequential quadratic programming technique. The accuracy, convergence, and effectiveness of the proposed schemes are evaluated on the basis of statistical analyses over a sufficiently large number of independent runs, with different numbers of neurons in each model as well. Comparisons of the results of the proposed schemes with standard numerical and analytical solutions validate the correctness of the design models.
Exactly satisfying initial conditions neural network models for numerical treatment of first Painlevé equation
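The abstract does not give the exact trial form; in unsupervised ANN methods of this kind, a common construction embeds the initial conditions structurally, e.g. a trial solution u~(x) = u(0) + u'(0)·x + x²·N(x) for the first Painlevé equation u'' = 6u² + x, so that the conditions at x = 0 hold exactly regardless of the network weights. A minimal numpy sketch, with illustrative initial conditions and a finite-difference residual standing in for the paper's exact formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def trial_solution(x, alpha, w, b, y0=1.0, y1=0.0):
    """Trial solution u~(x) = y0 + y1*x + x^2 * N(x), which satisfies
    u~(0) = y0 and u~'(0) = y1 exactly; N is a log-sigmoid network."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    net = sigmoid(x[:, None] * w + b) @ alpha   # single hidden layer
    return y0 + y1 * x + x**2 * net

def residual(x, alpha, w, b, y0, y1, h=1e-4):
    """Residual of the first Painlevé equation u'' = 6*u^2 + x,
    with u'' approximated by central finite differences."""
    u = lambda t: trial_solution(t, alpha, w, b, y0, y1)
    u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
    return u_xx - 6.0 * u(x)**2 - x
```

Training in this style would minimize the mean squared residual over a set of collocation points with respect to (alpha, w, b), for instance with a sequential quadratic programming routine such as `scipy.optimize.minimize(..., method='SLSQP')`.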
S1568494614005201
Bat swarm optimisation (BSO) is a novel heuristic optimisation algorithm used for solving different global optimisation problems. The paramount problem in BSO is that it severely suffers from premature convergence; that is, BSO is easily trapped in local optima. In this paper, chaotic-based strategies are incorporated into BSO to mitigate this problem. The ergodicity and non-repetitive nature of chaotic functions can diversify the bats and mitigate premature convergence. Eleven different chaotic map functions, along with various chaotic BSO strategies, are investigated experimentally, and the best one is chosen as the suitable chaotic strategy for BSO. The results of applying the proposed chaotic BSO to different benchmark functions clearly show that the premature convergence problem is mitigated efficiently. Indeed, chaotic-based BSO significantly outperforms conventional BSO, cuckoo search optimisation (CSO), the big bang-big crunch algorithm (BBBC), the gravitational search algorithm (GSA) and the genetic algorithm (GA).
Chaotic bat swarm optimisation (CBSO)
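The abstract does not list the eleven maps; the logistic map at r = 4 is one typical choice for such chaotic strategies. A sketch of generating a chaotic sequence that could replace uniform random draws (e.g. for loudness or pulse rate) in a BSO update; the substitution point is an assumption, not taken from the paper:

```python
import numpy as np

def logistic_map(x0, n, r=4.0):
    """Generate a chaotic sequence in [0, 1] with the logistic map
    x_{k+1} = r * x_k * (1 - x_k); at r = 4 the map is fully chaotic."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq
```

In a chaotic BSO strategy, such a sequence is consumed in place of pseudo-random numbers so that successive draws are deterministic yet ergodic and non-repetitive.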
S1568494614005213
Over the last two decades, many different evolutionary algorithms (EAs) have been introduced for solving constrained optimization problems (COPs). Due to the variability of the characteristics of different COPs, no single algorithm performs consistently well over a range of practical problems. To design and refine an algorithm, numerous trial-and-error runs are often performed in order to choose suitable search operators and parameters; however, even by trial and error, one may not find an appropriate combination. In this paper, we apply the concept of training and testing with a self-adaptive multi-operator based evolutionary algorithm to find suitable parameters. The training and testing sets are chosen based on the mathematical properties of 60 problems from two well-known specialized benchmark test sets. The experimental results provide interesting insights and a new way of choosing parameters.
Training and testing a self-adaptive multi-operator evolutionary algorithm for constrained optimization
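One common realisation of a self-adaptive multi-operator EA, not necessarily the paper's, selects among variation operators with probabilities recomputed from their recent success rates, keeping a floor probability so no operator is discarded. A hypothetical sketch:

```python
import random

def select_operator(probs):
    """Roulette-wheel selection of a variation operator index."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def update_probs(success, trials, p_min=0.1):
    """Recompute operator probabilities from success/trial counts.
    A minimum probability p_min keeps every operator alive; the
    remaining mass is shared in proportion to success rate."""
    rates = [s / t if t > 0 else 0.0 for s, t in zip(success, trials)]
    total = sum(rates)
    k = len(rates)
    if total == 0.0:
        return [1.0 / k] * k
    return [p_min + (1.0 - k * p_min) * r / total for r in rates]
```

The probabilities always sum to one (k·p_min plus the proportionally shared remainder), provided k·p_min < 1.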
S1568494614005249
This paper applies fuzzy set theory, based on a modified SERVQUAL model, to analyse service quality in the certification and inspection industry in China. The study is based on 405 randomly selected participants who are customers of the China Certification & Inspection Company (CCIC). It comprises four parts: introduction, methodology, a case study of certification and inspection service quality, and conclusions. The study shows that, among the five SERVQUAL dimensions, tangibles have the biggest gap between service quality expectations and perceptions. The studied company (CCIC) therefore needs to increase investment in tangible aspects in order to improve its service quality.
Applying the fuzzy SERVQUAL method to measure the service quality in certification & inspection industry
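A fuzzy SERVQUAL gap score is typically computed by mapping Likert ratings to triangular fuzzy numbers, aggregating across respondents, defuzzifying, and subtracting expectation from perception. The sketch below follows that generic recipe; the scale values are illustrative, not the paper's:

```python
# Hypothetical 5-point linguistic scale mapped to triangular
# fuzzy numbers (l, m, u) on [0, 1]; the paper may use another scale.
SCALE = {1: (0.0, 0.0, 0.25), 2: (0.0, 0.25, 0.5), 3: (0.25, 0.5, 0.75),
         4: (0.5, 0.75, 1.0), 5: (0.75, 1.0, 1.0)}

def aggregate(ratings):
    """Average the respondents' triangular fuzzy numbers component-wise."""
    n = len(ratings)
    return tuple(sum(SCALE[r][i] for r in ratings) / n for i in range(3))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

def servqual_gap(perceptions, expectations):
    """Gap = defuzzified perception - defuzzified expectation;
    a negative gap means perceptions fall short of expectations."""
    return defuzzify(aggregate(perceptions)) - defuzzify(aggregate(expectations))
```

A dimension such as tangibles would score one gap per questionnaire item, averaged over its items; the most negative dimension flags where investment is needed.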
S1568494614005250
This paper presents a mathematical model of the joint lot sizing and scheduling problem in a job shop environment under a set of realistic working conditions. One main realistic assumption of the current study is dealing with flexible machines that are able to change their working speeds, known as process compressibility. The production schedules are subject to the limited available time in every planning period. The model also assumes that the periodical sequences must conform to a fixed global sequence. Another complicating aspect of the problem is the consideration of precedence relationships among the processes required by an item type on the corresponding machines. As the problem is proved to be strongly NP-hard, it is solved by a particle swarm optimization (PSO) algorithm. The proposed algorithm self-adapts its working parameters, its stopping criterion, the balance between search diversification and intensification, and, in general, its overall behavior. The performance of the algorithm is verified in terms of the optimality gap, using Lingo 11.0, on a set of randomly generated test data.
A self-adaptive PSO for joint lot sizing and job shop scheduling with compressible process times
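The paper's algorithm adapts its own parameters; the core PSO update it builds on can be sketched generically (the fixed inertia and acceleration coefficients here are for illustration only, not the paper's self-adapted values):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 lo=-5.0, hi=5.0, seed=0):
    """Plain global-best PSO on a box-constrained continuous problem."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For the scheduling problem itself, a real-valued particle would additionally be decoded into lot sizes and a feasible sequence, a mapping the abstract does not detail.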
S1568494614005274
The ensemble learning paradigm has proved relevant to solving some of the most challenging industrial problems. Although ensemble methods have been applied successfully, especially in bioinformatics, the petroleum industry has not benefited enough from the promise of this machine learning technology. The petroleum industry, with its persistent quest for high-performance predictive models, is in great need of this learning methodology: a marginal improvement in the prediction indices of petroleum reservoir properties could have a huge positive impact on the success of exploration, drilling and the overall reservoir management portfolio. The support vector machine (SVM) is one of the promising machine learning tools that have performed excellently in most prediction problems. However, its performance depends on the prudent choice of its tuning parameters, most especially the regularization parameter C. Reports have shown that this parameter has a significant impact on the performance of SVM, yet no specific value has been recommended for it. This paper proposes a stacked generalization ensemble model of SVMs that incorporates different expert opinions on the optimal value of this parameter in the prediction of porosity and permeability of petroleum reservoirs, using datasets from diverse geological formations. The performance of the proposed SVM ensemble was compared to that of the conventional SVM technique, another SVM implemented with the bagging method, and the Random Forest technique. The results showed that the proposed ensemble model, in most cases, outperformed the others with the highest correlation coefficient and the lowest mean and absolute errors, and confirmed that ensemble models perform better than the conventional SVM implementation. The study indicates a great potential for ensemble learning in petroleum reservoir characterization to improve the accuracy of reservoir property predictions for more successful exploration and increased production of petroleum resources.
Improving the prediction of petroleum reservoir characterization with a stacked generalization ensemble model of support vector machines
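Stacked generalization trains several base experts, then fits a meta-learner on their out-of-fold predictions rather than on the raw features. To keep the sketch self-contained, the numpy-only version below uses closed-form ridge regressors with different regularization strengths as stand-ins for the paper's SVMs with different C values; the structure of the stack is the same:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression; stands in for an SVR expert
    with a given regularization setting."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def stack_fit(X, y, lams, k=5, seed=0):
    """Level-0: k-fold out-of-fold predictions of each base model.
    Level-1: least-squares meta-learner on those predictions."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    Z = np.zeros((n, len(lams)))                 # meta-features
    for f in folds:
        train = np.setdiff1d(idx, f)
        for j, lam in enumerate(lams):
            w = ridge_fit(X[train], y[train], lam)
            Z[f, j] = X[f] @ w
    meta_w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    base_w = [ridge_fit(X, y, lam) for lam in lams]  # refit on all data
    return base_w, meta_w

def stack_predict(X, base_w, meta_w):
    Z = np.column_stack([X @ w for w in base_w])
    return Z @ meta_w
```

With a library such as scikit-learn available, the same design maps onto `StackingRegressor` over `SVR` base estimators configured with the different expert-recommended C values.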
S1568494614005286
Process and manufacturing industries today are under pressure to deliver high-quality outputs at the lowest cost. Industry therefore needs to implement cost-saving measures immediately in order to remain competitive, and organizations are making strenuous efforts to conserve energy and explore alternatives. This paper explores the development of an intelligent system to identify the degradation of a heat exchanger system and to improve its energy performance through an online monitoring system. Energy performance assessment is achieved in stages: experimentation, design of experiments and online monitoring. Experiments are conducted according to a full factorial design of experiments, and the results are used to develop artificial neural network models. The predictive models are used to predict the overall heat transfer coefficient of the clean/design heat exchanger, while the fouled/real system value is computed from online measured data; the two coefficients are then compared. It is found that the neural network model trained with the particle swarm optimization technique performs better than the other developed neural network models. The developed model is used to assess the performance of the real/fouled heat exchanger. The performance degradation is expressed using the fouling factor, which is derived from the overall heat transfer coefficients of the design system and the real system. This supports improving performance through asset utilization, energy efficiency and the reduction of production losses. The proposed online energy performance system has been implemented on the real system and its applicability validated.
Performance assessment of heat exchanger using intelligent decision making tools
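The fouling factor mentioned in the abstract follows directly from the two overall heat transfer coefficients via the standard relation R_f = 1/U_fouled - 1/U_clean; the numeric values in the usage example are illustrative only:

```python
def fouling_factor(u_clean, u_fouled):
    """Fouling resistance R_f = 1/U_fouled - 1/U_clean, in m^2*K/W.

    u_clean  : overall heat transfer coefficient of the clean/design
               exchanger (W/m^2*K), here from the ANN predictive model
    u_fouled : coefficient computed from online measurements of the
               real system (W/m^2*K)
    """
    if u_clean <= 0.0 or u_fouled <= 0.0:
        raise ValueError("heat transfer coefficients must be positive")
    return 1.0 / u_fouled - 1.0 / u_clean
```

For example, a design coefficient of 500 W/m^2*K degrading to a measured 400 W/m^2*K gives R_f = 1/400 - 1/500 = 0.0005 m^2*K/W; a rising R_f over time signals degradation and a cleaning need.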