FileName | Abstract | Title |
---|---|---|
S1568494615003804 | Quadrotor Unmanned Aerial Vehicles (UAVs) can perform numerous tasks without unnecessary risk to human life. Lately, to enhance UAV control performance, system identification and state estimation have been an active field of research. This work presents a simulation study that investigates the estimation of unknown dynamics model parameters of a Quadrotor UAV in the presence of noisy feedback signals. Such signals constitute a challenge for UAV control performance, especially in the presence of uncertainties; therefore, estimation techniques are usually used to reduce the effect of such uncertainties. In this paper, three estimation methods are presented to estimate unknown parameters of the “OS4” Quadrotor: the Iterative Bi-Section Shooting (IBSS) method, the Artificial Neural Network (ANN) method, and “Hybrid ANN_IBSS”, a novel method that integrates ANN with IBSS. The “Hybrid ANN_IBSS” is the main contribution of this work. The percentage error of the estimated parameters is used to evaluate the accuracy of the aforementioned methods. Results show that IBSS and ANN are capable of estimating most of the parameters even in the presence of noisy feedback signals; however, their accuracy suffers when estimating small-value parameters. On the other hand, Hybrid ANN_IBSS achieved higher estimation accuracy than the other two methods. Accurate parameter estimation is expected to enhance the reliability of the “OS4” dynamics model and hence improve control quality. (Nomenclature: roll, pitch and yaw angular rates in the earth frame; roll, pitch and yaw angles in the body frame; positions along the x, y and z axes; sin/cos of ϕ, θ and ψ; sample time; rotor speed; input torque; thrust force input.) | Unmanned Aerial Vehicles parameter estimation using Artificial Neural Networks and Iterative Bi-Section Shooting method |
S1568494615003816 | The problem of optimal non-hierarchical clustering is addressed. A new algorithm combining differential evolution and k-means is proposed and tested on eight well-known real-world data sets. Two criteria (clustering validity indexes), namely TRW and VCR, were used in the optimization of classification. The classification of objects to be optimized is encoded by the cluster centers in the differential evolution (DE) algorithm. This encoding induces the problem of rearranging centers in the population to ensure an efficient search via the application of evolutionary operators, and a new efficient heuristic for this rearrangement is also proposed. The plain DE variants with and without the rearrangement were compared with the corresponding hybrid k-means variants. The experimental results showed that the hybrid variants with the k-means algorithm are substantially more efficient than the non-hybrid ones. Compared to a standard k-means algorithm with restart, the new hybrid algorithm was found to be more reliable and more efficient, especially in difficult tasks. The results for the TRW and VCR criteria were compared; both provided the same optimal partitions, and no significant differences were found in the efficiency of the algorithms using these criteria. | Hybrid differential evolution algorithm for optimal clustering |
S1568494615003828 | This paper proposes a dynamic multi-colony multi-objective artificial bee colony algorithm (DMCMOABC) using the multi-deme model and a dynamic information exchange strategy. In the proposed algorithm, K colonies search independently most of the time and share information occasionally. In each colony there are S bees, comprising equal numbers of employed bees and onlooker bees. For each food source, the employed or onlooker bee explores a temporary position generated using neighboring information, and the better one, determined by a greedy selection strategy, is kept for the next iteration. An external archive is employed to store the non-dominated solutions found during the search process, and diversity over the archived individuals is maintained using a crowding-distance strategy. If a randomly generated number is smaller than the migration rate R, then an elite, defined as the intermediate individual with the maximum crowding-distance value, is identified and used to replace the worst food source in a randomly selected colony. The proposed DMCMOABC is evaluated on a set of unconstrained/constrained test functions taken from the CEC2009 special session and competition in terms of four commonly used metrics, EPSILON, HV, IGD and SPREAD, and it is compared with other state-of-the-art algorithms by applying the Friedman test to the mean of IGD. The test results show that DMCMOABC is significantly better than or at least comparable to its competitors on both unconstrained and constrained problems. | A dynamic multi-colony artificial bee colony algorithm for multi-objective optimization |
S156849461500383X | In this paper, we investigate the reduction in total transmission time and energy consumption of wireless sensor networks using multi-hop data aggregation by forming coordination in hierarchical clustering. The novel algorithm handles wireless sensor networks in numerous circumstances, such as large-extent and high-density deployments. One of the major purposes is to collect information from inaccessible areas by factorizing the area into subareas (clusters) and appointing a cluster head in each subarea. Coordination and cooperation among the local nodes via relay nodes in the local cluster (by forming sub-clusters) help to serve each and every node. Routing is based on a predefined path proposed by the new transmission algorithm. Transmission distance is minimized by using cluster coordinators for inter-cluster communication and relay nodes within the cluster. We show by extended simulations that the Chain Based Cluster Cooperative Protocol (CBCCP) performs very well in terms of energy and time. To prove this, we compared it with LEACH, SEP, genetic HCR and ERP, and found that the new protocol consumes six times less energy than LEACH, five times less than SEP, four times less than genetic HCR and three times less than ERP, which further validates our work. | Energy efficient chain based cooperative routing protocol for WSN |
S1568494615003841 | The dynamic vehicle routing and scheduling problem is a well-known complex combinatorial optimization problem that has drawn significant attention over the past few years. This paper presents a novel algorithm introducing a new strategy to integrate anticipated future visit requests during plan generation, aimed at explicitly improving customer satisfaction. The proposed strategy is evaluated using a hybrid genetic algorithm previously designed for the dynamic vehicle routing problem with time windows, which we modified to capture customer satisfaction over multiple visits. Simulations assess the value of the revisited algorithm exploiting the new strategy, clearly demonstrating its impact on the customer satisfaction level. | Customer satisfaction in dynamic vehicle routing problem with time windows |
S1568494615003853 | Over the past decade, particle swarm optimization (PSO) has been an effective algorithm for solving single- and multi-object optimization problems. Recently, the chemical reaction optimization (CRO) algorithm has emerged as a new algorithm for efficiently solving single-object optimization. In this paper, we present HP-CRO (hybrid of PSO and CRO), a new hybrid algorithm for multi-object optimization. This algorithm has features of both CRO and PSO: HP-CRO creates new molecules (particles) not only through CRO operations, as in the CRO algorithm, but also through PSO mechanisms. The balancing of CRO and PSO operators shows that the method can avoid premature convergence and explore more of the search space. This paper proposes a model with modified CRO operators that also saves new molecules into the external population to increase diversity. Experimental results comparing the HP-CRO algorithm with meta-heuristic algorithms such as FMOPSO, MOPSO, NSGAII and SPEA2 show the improved efficiency of the HP-CRO algorithm for solving multi-object optimization problems. | A hybrid algorithm based on particle swarm and chemical reaction optimization for multi-object problems |
S1568494615003865 | This paper presents a new power system planning strategy which combines the firefly algorithm (FFA) with the pattern search (PS) algorithm. The purpose is to minimize total fuel cost and total power loss and to reduce total voltage deviation, with the objective of enhancing the loading margin stability and consequently power system security. A new, simple interactive mechanism inspired by the brainstorming process is proposed that allows the FFA and PS algorithms to explore new regions of the search space. In this study the Static VAR Compensator (SVC) is modeled and integrated at an efficient location chosen according to the voltage stability index. The proposed algorithm is interactive and optimizes a set of control variables at the same time, namely active power generations, generator voltages, transformer taps, and the reactive power of shunt compensators, against three objective functions: fuel cost, total power loss and total voltage deviation. These variables are optimized using a flexible, interactive and competitive search mechanism. The proposed planning strategy has been examined and applied to two practical test systems, IEEE 14-Bus and IEEE 30-Bus. Simulation results confirm the effectiveness of this hybrid strategy for solving the security optimal power flow. | Security optimal power flow considering loading margin stability using hybrid FFA–PS assisted with brainstorming rules |
S1568494615003877 | This paper proposes an optimal design method for passive power filters (PPFs) to suppress critical harmonics and improve the power factor. The characteristics of common passive filters, such as single-tuned, second-order, third-order, and C-type damped filters, are introduced. In addition, several objective functions and constraints for PPF design problems are constructed. A new multi-objective optimization based on the modified bat algorithm and the Pareto front is developed for solving PPF design problems. A case study is also presented to demonstrate the efficiency and superiority of the proposed method. | Optimal design of passive power filters based on multi-objective bat algorithm and Pareto front |
S1568494615003889 | One-class learning algorithms are used in situations when training data are available only for one class, called the target class. Data for the other class(es), called outliers, are not available. One-class learning algorithms are used for detecting outliers, or novelty, in the data. The common approach in one-class learning is to use density estimation techniques or to adapt standard classification algorithms to define a decision boundary that encompasses only the target data. In this paper, we introduce the OneClass-DS learning algorithm, which combines rule-based classification with a greedy search algorithm based on the density of features. Its performance is tested on 25 data sets and compared with eight other one-class algorithms; the results show that it performs on par with those algorithms. | Rule-based OneClass-DS learning algorithm |
S1568494615003890 | In this paper, we address the problem of integrating new classes on the fly into on-line classification systems. The main focus is on visual inspection tasks, although the concepts proposed in this paper can easily be applied to any other on-line classification system. We use evolving fuzzy classifiers (EFCs), which can adapt their structure and update their parameters incrementally due to embedded on-line adaptable classifier learning engines. We consider two different model architectures: a classical single model and an all-pairs approach that uses class information to decompose the classification problem into several smaller sub-problems. The latter technique is essential for establishing new classes quickly and efficiently in the classifier, and for reducing class imbalance. The methodological novelties are (i) making appropriate structural changes in the EFC whenever a new class appears while operating in a single-pass incremental manner and (ii) estimating the expected change in classifier accuracy on the older classes. The estimation is based on an analysis of the impact of new classes on the established decision boundaries. This is important for operators, who are already familiar with an established classifier whose accuracy is known. The new concepts are evaluated in a real-world visual inspection scenario, where the main task is to classify event types which may occur dynamically on micro-fluidic chips and may reduce their quality. The results show stable performance of established classifiers and efficient (low number of samples requested) as well as fast integration (steeply rising accuracy curves) of new event types (classes). | Integrating new classes on the fly in evolving fuzzy classifier designs and their application in visual inspection |
S1568494615003907 | Estimates of suspended sediment concentrations and transport are an important part of any marine environment assessment study because these factors have a direct impact on the life cycle and survival of marine ecosystems. This paper proposes a combined methodology to tackle these estimates. The first component of the methodology comprised two numerical current and wave models, while the second component was based on artificial neural networks (ANNs) used to reproduce values of sediment concentrations observed at two sites. The ANNs were fed with modelled currents and waves and trained to produce area-specific concentration estimates. The trained ANNs were then applied to predict sediment concentrations over an independent period of observations. The use of a data set that merged observations from both of the mentioned sites provided the best ANN testing results in terms of both the normalised root mean square error (0.13) and the mean relative error (0.02). | Combining deterministic modelling with artificial neural networks for suspended sediment estimates |
S1568494615003919 | In the present study, a new intelligent hardware implementation was developed for chaotic systems by using a field programmable gate array (FPGA). The success and superior properties of this new intelligent hardware implementation were shown by applying it to the Modified Van der Pol–Duffing Oscillator Circuit (MVPDOC). The validity of the intelligent system model was tested in both software and hardware. For this purpose, the intelligent system model of MVPDOC was first obtained by using wavelet decompositions and an Artificial Neural Network (ANN). The intelligent system model obtained was then written in Very High Speed Integrated Circuit Hardware Description Language (VHDL). In the next step, these configurations were simulated and tested under ModelSim Xilinx software. Finally, the best configuration was implemented on the Xilinx Virtex-II Pro FPGA (XC2V1000). Furthermore, an HSPICE (Simulation Program with Integrated Circuit Emphasis) simulation of MVPDOC was carried out for comparison with the proposed intelligent system. The results obtained show that the proposed intelligent system simulation has much higher speed than the HSPICE simulation. | A new intelligent hardware implementation based on field programmable gate array for chaotic systems |
S1568494615003932 | Automated control of blood glucose (BG) concentration with a fully automated artificial pancreas will certainly improve the quality of life for insulin-dependent patients. Closed-loop insulin delivery is challenging due to inter- and intra-patient variability, errors in glucose sensors and delays in insulin absorption. Responding to the varying activity levels seen in outpatients, with unpredictable and unreported food intake, and providing the necessary personalized control for individuals is a challenging task for existing control algorithms. A novel approach for controlling glycemic variability using simulation-based learning is presented. A policy iteration algorithm that combines reinforcement learning with Gaussian process approximation is proposed. To account for multiple sources of uncertainty, a control policy is learned off-line using an Itô stochastic model of the glucose-insulin dynamics. For safety and performance, only relevant data are sampled through Bayesian active learning. The results obtained demonstrate that a generic policy is both safe and efficient for controlling subject-specific variability due to a patient's lifestyle and distinctive metabolic response. | Controlling blood glucose variability under uncertainty using reinforcement learning and Gaussian processes |
S1568494615003944 | Since esophageal cancer has no symptoms in the early stage, it is usually not detected until advanced stages in which treatment is challenging. Integrated treatment provided by a multidisciplinary team is crucial for maximizing the prognosis and survival of patients with esophageal cancer. Currently, clinicians must rely on the cancer staging system for diagnosis and treatment. An accurate and easily applied system for predicting the prognosis of esophageal cancer would be useful for comparing different treatment strategies and for calculating cancer survival probability. This study presents a hazard modeling and survival prediction system based on an adaptive neuro-fuzzy inference system (ANFIS) to assist clinicians in the prognostic assessment of patients with esophageal cancer and in predicting the survival of individual patients. Expert knowledge was used to construct the fuzzy rule-based prognosis inference system for esophageal cancer. Fuzzy logic was used to process the values of input variables rather than categorizing values as normal or abnormal based on cutoffs. After transformation and expansion, censored survival data could be used by the ANFIS for training to establish the risk model for accurately predicting individual survival for different time intervals or for different treatment modalities. Actual values for serum C-reactive protein, albumin, and time intervals were input into the model for use in predicting the survival of individual patients for different time intervals. The curves obtained by the ANFIS approach closely fitted those obtained using the actual values. The comparison results show that the ANFIS is a practical, effective, and accurate method of predicting the survival of esophageal cancer patients. | Predicting survival of individual patients with esophageal cancer by adaptive neuro-fuzzy inference system approach |
S1568494615003968 | Infectious diarrhea is an important public health problem around the world. Meteorological factors have been strongly linked to the incidence of infectious diarrhea. Therefore, accurately forecasting the number of infectious diarrhea cases under the effect of meteorological factors is critical to control efforts. In recent decades, the development of artificial neural network (ANN) models as predictors for infectious diseases has created a great change in infectious disease prediction. In this paper, a three-layered feed-forward back-propagation ANN (BPNN) model trained by the Levenberg–Marquardt algorithm was developed to predict the weekly number of infectious diarrhea cases using meteorological factors as input variables. The meteorological factors were chosen based on their strong correlation with infectious diarrhea. As a comparison study, support vector regression (SVR), random forests regression (RFR) and multivariate linear regression (MLR) were also applied as prediction models using the same dataset. The 5-fold cross validation technique was used to avoid overfitting during model training. Further, since one of the drawbacks of ANN models is the interpretation of the final model in terms of the relative importance of input variables, a sensitivity analysis was performed to determine the parametric influence on the model outputs. The simulation results obtained from the BPNN confirm the feasibility of this model in terms of applicability and show better agreement with the actual data than those from the SVR, RFR and MLR models. The BPNN model described in this paper is an efficient quantitative tool for evaluating and predicting infectious diarrhea using meteorological factors. | Artificial neural networks for infectious diarrhea prediction using meteorological factors in Shanghai (China) |
S156849461500397X | This paper introduces a novel approach to detecting and classifying power quality disturbances in the power system using a radial basis function neural network (RBFNN). The proposed method requires fewer features than conventional approaches for the identification. The features extracted through the wavelet transform are used to train the radial basis function neural network for the classification of events. After training the neural network, the weights obtained are used to classify the power quality (PQ) problems. For the classification, 20 types of disturbances are taken into account. The classification performance of the RBFNN is compared with the feed-forward multilayer network (FFML), learning vector quantization (LVQ), probabilistic neural network (PNN) and generalized regression neural network (GRNN). The classification accuracy of the RBFNN is improved by rewriting and updating the weights with the help of the cognitive and social behavior of particles along with the fitness value. The simulation results show significant improvement over existing methods in signal detection and classification. | Power quality disturbance detection and classification using wavelet and RBFNN |
S1568494615003981 | Clustering multi-dense, large-scale, high-dimensional numeric datasets is a challenging task due to the high time complexity of most clustering algorithms. Nowadays, data collection tools produce large amounts of data, so fast algorithms are a vital requirement for clustering such data. In this paper, a fast clustering algorithm, called Dimension-based Partitioning and Merging (DPM), is proposed. In DPM, data is first partitioned into small dense volumes during the successive processing of dataset dimensions. Then, noise is filtered out using the dimensional densities of the generated partitions. Finally, a merging process is invoked to construct clusters based on partition boundary data samples. The DPM algorithm automatically detects the number of data clusters based on three insensitive tuning parameters, which decreases the burden of its usage. Performance evaluation of the proposed algorithm on different datasets shows its speed and accuracy compared to other clustering competitors. | Fast Dimension-based Partitioning and Merging clustering algorithm |
S1568494615003993 | The chaos optimization algorithm (COA) utilizes chaotic maps to generate pseudo-random sequences mapped to the decision variables for global optimization applications. A parallel chaos optimization algorithm (PCOA) was proposed in our former studies to improve COA. The salient feature of PCOA lies in its pseudo-parallel mechanism. However, all individuals in PCOA search independently without utilizing the fitness and diversity information of the population. In view of this limitation of PCOA, a novel PCOA with migration and merging operations (denoted MMO-PCOA) is proposed in this paper. Specifically, parallel individuals are randomly selected to undergo migration and merging operations with the parallel solutions obtained so far. Both migration and merging operations exchange information within the population and produce new candidate individuals, which are different from those generated by stochastic chaotic sequences. Consequently, a good balance between exploration and exploitation can be achieved in the MMO-PCOA. The impacts of different one-dimensional maps and parallel numbers on the MMO-PCOA are also discussed. Benchmark functions and parameter identification problems are used to test the performance of the MMO-PCOA. Simulation results, compared with other optimization algorithms, show the superiority of the proposed MMO-PCOA algorithm. | Parallel chaos optimization algorithm with migration and merging operation |
S1568494615004019 | Certain types of linguistic terms, such as satisfactory, good, very good and excellent, have an order among them. In this paper we introduce a new concept of soft sets with an order among the parameters. Some properties of lattice ordered soft sets are given. Lattice ordered soft sets are very useful in particular types of decision making problems where some order exists among the elements of the parameter set. | On lattice ordered soft sets |
S1568494615004020 | Sensor node localization is considered one of the most significant issues in wireless sensor networks (WSNs) and is an unconstrained optimization problem that falls into the NP-hard class of problems. Localization is the determination of the physical coordinates of the sensor nodes that constitute a WSN. In applications of sensor networks such as routing and target tracking, the data gathered by sensor nodes become meaningless without localization information. This work aims at determining the location of the sensor nodes with high precision. Initially, the sensor nodes are localized using a range-free localization method, namely Mobile Anchor Positioning (MAP), which gives an approximate solution. To further minimize the location error, certain meta-heuristic approaches are applied to the result given by MAP. Accordingly, the Bat Optimization Algorithm with MAP (BOA-MAP), the Modified Cuckoo Search with MAP (MCS-MAP) algorithm and the Firefly Optimization Algorithm with MAP (FOA-MAP) are proposed. Root mean square error (RMSE) is used as the evaluation metric to compare the performance of the proposed approaches. The experimental results show that the proposed FOA-MAP approach minimizes the localization error and outperforms both the MCS-MAP and BOA-MAP approaches. | Meta-heuristic approaches for minimizing error in localization of wireless sensor networks |
S1568494615004032 | The TOPSIS method, commonly known as the technique for order preference by similarity to ideal solutions, is one of the most popular approaches used in multi-attribute decision making (MADM). The fundamental procedure of the traditional TOPSIS method is rather straightforward: the ranking position of an alternative depends on its relative closeness to the positive ideal solution and the negative ideal solution, respectively (a minimal sketch of this crisp baseline appears after this table). In order to encompass uncertain and ambiguous decision elements, an extension of the original TOPSIS method has been proposed. With the help of fuzzy-sets-based TOPSIS, an overwhelming trend of fuzzy decision making applications has been witnessed. In the present work, however, it is found that the extended fuzzy TOPSIS method is unable to distinguish all the different alternatives under a linguistic environment. Moreover, the indistinguishable alternatives are countless in quantity, and they form specific patterns with respect to the parameters of TOPSIS methods. To resolve this ranking ambiguity, we designed a set of supplemental methods to construct a revised TOPSIS approach with linguistic evaluations. Correspondingly, the sufficiency of the revised TOPSIS method to guarantee total orders has been proven. Furthermore, a numerical example concerning the production line improvement of a manufacturing company is presented to validate the feasibility and superiority of the proposed method. Finally, a series of further discussions is performed to shed some light on the impacts caused by changes in the alternative quantity, the attribute quantity, and the type of linguistic term. | A note on the TOPSIS method in MADM problems with linguistic evaluations |
S1568494615004044 | This paper presents the significance of a novel quasi-oppositional harmony search (QOHS) algorithm in the context of automatic generation control (AGC) of power systems. The proposed QOHS algorithm is framed by utilizing the quasi-oppositional concept in the basic harmony search (HS) algorithm. The proposed algorithm uses two guesses, an opposite point and its mirror (quasi-opposite) point, to converge rapidly toward the optimal solution(s). The proposed QOHS algorithm is applied individually to single-, three- and five-area interconnected test power systems (considering suitable cases) to establish its suitability in the AGC domain. The single- and three-area test systems are supplemented with a proportional-integral-derivative (PID) controller installed in each control area. In the second phase of the investigation, the proposed QOHS-based integral-double-derivative (IDD) controller is also examined in the AGC mechanism of the five-area test power system. Initially, an integral-of-square-error based objective function is minimized and, further, two performance indices (integral of time absolute error and integral of time square error) are also calculated to test the AGC performance offered by the proposed QOHS-based PID/IDD controller. To add some degree of non-linearity, an appropriate generation rate constraint (GRC) is also considered for both the three- and five-area test power systems. The simulated results obtained by the proposed QOHS algorithm are compared to those offered by other optimization algorithms reported in the recent state-of-the-art literature. The extensive results presented in this paper reveal that the proposed QOHS algorithm may be effectively employed to boost the AGC performance of power systems with various degrees of complexity (such as model uncertainties) and non-linearity (such as GRC). | A novel quasi-oppositional harmony search algorithm for automatic generation control of power system |
S1568494615004056 | Cancer staging has been regarded as a critical activity for cancer control. Cancer staging systems typically split tumors into five crisp categories. The classification of a tumor into one of the five stages significantly affects not only the treatment design and surgical decision for individuals but also cancer control for populations. Several cancer staging systems have been in use, of which the TNM is the most widely applied. The sharp distinction between the stages makes the staging unrealistic, since a drastic modification in treatment based on a change of stage may rest on a slight shift around the stage boundary. Tumor size is the major component of staging systems; the TNM is no exception, where the T represents the size, which is the dominant component of the staging system. In this paper we discuss the need for a fuzzy cancer staging system to capture this uncertainty and use it for more accurate treatment and medical decisions. The authors then focus on the size computation component of cancer staging, presenting a new approach based on fuzzy volume computation. In the process, the authors demonstrate how the fuzzy volume can affect the staging system and, consequently, medical treatment, decisions, and possibly drug design. | Tumor volume fuzzification for intelligent cancer staging |
S1568494615004068 | Functional annotation is the process that assigns a biological functionality to a deoxyribonucleic acid (DNA) sequence. It requires searching huge data sets for candidates and inferring the most appropriate features based on the information found and expert knowledge. When humans perform most of these tasks, the results are of high quality, but there is a bottleneck in processing; when experts are largely replaced by automated tools, annotation is faster but of poorer quality. Combining automatic annotation with expert systems (ESs) can enhance the quality of the annotation while effectively reducing experts’ workload. This paper presents INFAES, a rule-based ES developed to mimic human reasoning in the inference stage of functional annotation. It integrates knowledge of Biology and heuristics about the use of Bioinformatics tools. Its development adopts state-of-the-art methodologies to facilitate the acquisition and integration of new knowledge. INFAES showed high performance when compared to the systems developed for the first large-scale community-based critical assessment of protein function annotation (CAFA) [1]. | A rule-based expert system for inferring functional annotation |
S156849461500407X | This paper introduces a novel approach to an identity authentication system based on metacarpophalangeal joint patterns (MJPs). A discriminative common vector (DCV) based method is utilized for feature selection. In the literature, there is no study using the whole MJP for identity authentication; the only similar study is a work (Ferrer et al., 2005) using the hand knuckle pattern, which is a part of the MJP. The originality of this approach is that the whole MJP is used for the first time as a biometric identifier and the DCV method is applied for the first time to extract the feature set of the MJP. The developed system performs basic tasks such as image acquisition, image pre-processing, feature extraction, matching, and performance evaluation. The feasibility and effectiveness of this approach are rigorously evaluated using the k-fold cross validation technique on two different databases: a publicly available database and a specially established one. The experimental results indicate that MJPs are very distinctive biometric identifiers that can be securely used in biometric identification and verification systems, that the DCV method is successfully employed for obtaining the feature set of MJPs, and that the proposed MJP-based authentication approach performs very well relative to state-of-the-art techniques, with a recognition rate between 95.33% and 100.00%. | Metacarpophalangeal joint patterns based personal identification system |
S1568494615004081 | Real Time Location Systems (RTLS) have gained importance in the contemporary world since they allow real-time positioning of assets, people and workflows. They can be used in different sectors to increase work efficiency and quality in areas of application. The selection of the appropriate RTLS technology is a major decision problem since it has a multi-criteria structure which includes both qualitative and quantitative factors. In this study, a decision making model is developed for the selection of the appropriate RTLS technology for companies operating in different sectors. Three main criteria are determined from the existing literature and with the help of experts, namely economic, technical and implementation factors. The Fuzzy Analytic Hierarchy Process (FAHP) method is proposed to select the appropriate RTLS technology. A sensitivity analysis is also performed according to the incremental rates of the main criteria. The model is applied to a hospital in Turkey considering three types of RTLS systems: IR–RF hybrid, UHF RFID and Active RFID. Since the scores of the hybrid system for the economic, implementation and technical factors are higher than those of the other technologies, it is selected as the best alternative. | Fuzzy decision making model for selection of real time location systems |
S1568494615004093 | Churn prediction is an important application of classification models that identify the customers most likely to attrite based on their respective characteristics described by, e.g., socio-demographic and behavioral variables. Since nowadays more and more such features are captured and stored in the respective computational systems, an appropriate handling of the resulting information overload becomes a highly relevant issue when it comes to building customer retention systems based on churn prediction models. As a consequence, feature selection is an important step of the classifier construction process. Most feature selection techniques, however, are based on statistically inspired validation criteria, which do not necessarily lead to models that optimize the goals specified by the respective organization. In this paper we propose a profit-driven approach for classifier construction and simultaneous variable selection based on support vector machines. Experimental results show that our models outperform conventional techniques for feature selection, achieving superior performance with respect to business-related goals. | Profit-based feature selection using support vector machines – General framework and an application for customer retention |
S156849461500410X | In this paper a new approach called evolving principal component clustering is applied to a data stream. Regions of the data described by linear models are identified. The method recursively estimates the data variance and the linear model parameters for each cluster of data. It enables good performance, robust operation, low computational complexity and simple implementation on embedded computers. The proposed approach is demonstrated on real and simulated examples from laser-range-finder data measurements. The performance, complexity and robustness are validated through a comparison with the popular split-and-merge algorithm. | Evolving principal component clustering with a low run-time complexity for LRF data mapping |
S1568494615004111 | Inspired by ants' stochastic behavior in the search for multiple food sources, we propose a cooperating multi-task ant system for tracking multiple synthetic objects as well as multiple real cells in the bio-medical field. In our framework, each ant colony is assigned to fulfill a given task: estimating the state of an object. Furthermore, two ant levels are used, i.e., the ant individual level and the ant cooperation level. At the ant individual level, ants within one colony perform independently, and the motion of each individual is probabilistically determined by both its intended motion modes and the likelihood function score. At the ant cooperation level, each ant adjusts its individual state within its influence region according to heuristic information from all other ants within the same colony, while the global best template at the current iteration is found among all ant colonies and utilized to update the ant model probability, influence region, and probability of fulfilling the task. Our algorithm is validated by comparing it to state-of-the-art algorithms; specifically, improved tracking performance in terms of false negative rate (up to 10.0%) and false positive rate (up to 2.1%) is achieved on the three real cell image sequences studied. | Multi-task ant system for multi-object parameter estimation and its application in cell tracking |
S1568494615004123 | Employing an effective learning process is a critical topic in designing a fuzzy neural network, especially when expert knowledge is not available. This paper presents a genetic algorithm (GA) based learning approach for a specific type of fuzzy neural network. The proposed learning approach consists of three stages. In the first stage, the membership functions of both input and output variables are initialized by determining their centers and widths using a self-organizing algorithm. The second stage employs the proposed GA based learning algorithm to identify the fuzzy rules, while the final stage tunes the derived structure and parameters using a back-propagation learning algorithm. The capabilities of the proposed GA-based learning approach are evaluated using a well-examined benchmark example, and its effectiveness is analyzed by means of a comparative study with other approaches. The usefulness of the proposed GA-based learning approach is also illustrated in a practical case study where it is used to predict the performance of road traffic control actions. Results from the benchmarking exercise and the case study effectively demonstrate the ability of the proposed three-stage learning approach to identify relevant fuzzy rules from a training data set with higher prediction accuracy than alternative approaches. | GA-based learning for rule identification in fuzzy neural networks |
S1568494615004135 | Phishing is a method of stealing electronic identity in which social engineering and website forging methods are used to mislead users into revealing confidential information of economic value. By destroying trust between users in business networks, phishing has a negative effect on the budding area of e-commerce. Developing countries such as Iran have recently been facing Internet threats like phishing, whose methods, owing to social differences, may differ from those seen elsewhere. It is therefore necessary to design a suitable detection method for these deceits. The aim of the current paper is to provide a phishing detection system for use in the e-banking system in Iran. Identifying the outstanding features of phishing is one of the important prerequisites in designing an accurate system; therefore, as a first step, to identify the influential phishing features that best fit Iranian bank sites, a list of 28 phishing indicators was prepared. Using a feature selection algorithm based on rough set theory, six main indicators were identified as the most effective factors. The fuzzy expert system was then designed using these indicators. The results show that the proposed system is able to detect Iranian phishing sites with reasonable speed and precision, achieving an accuracy of 88%. | Detection of phishing attacks in Iranian e-banking using a fuzzy–rough hybrid system |
S1568494615004147 | In order to simulate the hesitancy and uncertainty associated with imprecision or vagueness, a decision maker may give her/his judgments by means of hesitant fuzzy preference relations in the process of decision making. The study of their consistency becomes a very important aspect in avoiding a misleading solution. This paper defines the concept of additive consistent hesitant fuzzy preference relations. The characterizations of additive consistent hesitant fuzzy preference relations are studied in detail. Owing to the limitations of the experts’ professional knowledge and experience, the preferences provided in a hesitant fuzzy preference relation are usually incomplete. Consequently, this paper introduces the concepts of the incomplete hesitant fuzzy preference relation, the acceptable incomplete hesitant fuzzy preference relation, and the additive consistent incomplete hesitant fuzzy preference relation. Then, two estimation procedures are developed to estimate the missing information in an expert's incomplete hesitant fuzzy preference relation. The first procedure is used to construct an additive consistent hesitant fuzzy preference relation from the lowest possible number, (n − 1), of pairwise comparisons. The second is designed for the estimation of missing elements of acceptable incomplete hesitant fuzzy preference relations with more known judgments. Moreover, an algorithm is given to solve the multi-criteria group decision making problem with incomplete hesitant fuzzy preference relations. Finally, a numerical example is provided to illustrate the solution processes of the developed algorithm and to verify its effectiveness and practicality. | Multi-criteria group decision making with incomplete hesitant fuzzy preference relations |
S1568494615004287 | In the current volatile and demanding business environment, managers are eager to demonstrate that their organizations are excellent, which can mainly be achieved through continuous performance improvement. The most applicable and suitable tool that, by assessing organizations, shows how successful they are on the organizational excellence path is the European Foundation for Quality Management (EFQM) Excellence Model. This study presents a new integrated approach based on the EFQM model using Fuzzy Logic, the Analytical Hierarchy Process (AHP) technique and an Operations Research (OR) model to improve organizations’ excellence level by increasing the quality of business performance evaluation and determining improvement projects with high priority. A case study in Yazd Regional Electricity Co. in Iran is presented to demonstrate the applicability of the proposed approach. First, performance assessment is carried out by both the crisp method and the proposed fuzzy method. Then, strength points and areas for improvement are identified by defining the scores for the sub-criteria. Next, the sub-criteria are prioritized to define the improvement projects by using the AHP technique and the Operations Research model. Finally, improvement projects with high priority are determined and action plans for them are defined. | Implementing Fuzzy Logic and AHP into the EFQM model for performance improvement: A case study |
S1568494615004299 | Swarm intelligence algorithms are meta-heuristics widely used nowadays for the efficient solution of optimization problems, and Particle Swarm Optimization (PSO) is one of the most popular types of swarm intelligence algorithm. This paper proposes a new Particle Swarm Optimization algorithm called Starling PSO, based on the collective response of starlings. Although PSO performs well on many problems, algorithms in this category lack mechanisms which add diversity to exploration in the search process. Our proposed algorithm introduces a new mechanism into PSO to add diversity, inspired by the collective response behavior of starlings. This mechanism consists of three major steps: initialization, which prepares alternative populations for the next steps; identification of the seven nearest neighbors; and orientation change, which adjusts the velocity and position of particles based on those neighbors and selects the best alternative. Because of this collective response mechanism, Starling PSO explores a wider area of the search space and thus avoids suboptimal solutions. We tested the algorithm with commonly used numerical benchmarking functions as well as by applying it to a real-world application involving data clustering. In these evaluations, we compared Starling PSO with a variety of state-of-the-art algorithms. The results show that Starling PSO improves the performance of the original PSO and yields the optimal solution in many numerical benchmarking experiments. It also gives the best results in almost all clustering experiments. | Particle Swarm Optimization inspired by starling flock behavior |
S1568494615004305 | A memetic approach that combines a genetic algorithm (GA) and quadratic programming is used to address the problem of optimal portfolio selection with cardinality constraints and piecewise linear transaction costs. The framework used is an extension of the standard Markowitz mean–variance model that incorporates realistic constraints, such as upper and lower bounds for investment in individual assets and/or groups of assets, and minimum trading restrictions. The inclusion of constraints that limit the number of assets in the final portfolio and piecewise linear transaction costs transforms the selection of optimal portfolios into a mixed-integer quadratic problem, which cannot be solved by standard optimization techniques. We propose to use a genetic algorithm in which the candidate portfolios are encoded using a set representation to handle the combinatorial aspect of the optimization problem. Besides specifying which assets are included in the portfolio, this representation includes attributes that encode the trading operation (sell/hold/buy) performed when the portfolio is rebalanced. The results of this hybrid method are benchmarked against a range of investment strategies (passive management, the equally weighted portfolio, the minimum variance portfolio, optimal portfolios without cardinality constraints, ignoring transaction costs or obtained with L1 regularization) using publicly available data. The transaction costs and the cardinality constraints provide regularization mechanisms that generally improve the out-of-sample performance of the selected portfolios. | A memetic algorithm for cardinality-constrained portfolio optimization with transaction costs |
S1568494615004317 | The bus vehicle scheduling problem addresses the task of assigning vehicles to cover the trips in a timetable. In this paper, a clonal selection algorithm based vehicle scheduling approach is proposed to quickly generate satisfactory solutions for large-scale bus scheduling problems. Firstly, a set of vehicle blocks (consecutive trips by one bus) is generated based on the maximal wait time between any two adjacent trips. Then a subset of blocks is constructed by the clonal selection algorithm to produce an initial vehicle scheduling solution. Finally, two heuristics adjust the departure times of vehicles to further improve the solution. The proposed approach is evaluated using a real-world vehicle scheduling problem from the bus company of Nanjing, China. Experimental results show that the proposed approach can generate satisfactory scheduling solutions within 1 min. | A clonal selection algorithm for urban bus vehicle scheduling |
S1568494615004329 | Progressively huge amounts of data tracking vessels during their voyages across the seas are becoming available, mostly due to the automatic identification system (AIS) that vessels of specific categories are required to carry. These datasets provide detailed insights into the patterns vessels follow while safely navigating across the globe under various conditions. In this paper, we develop an Artificial Neural Network (ANN) capable of predicting a vessel's future behaviour (position, speed and course) based on events that occur in a predictable pattern across large map areas. The main concept of this study is to determine whether an ANN is capable of inferring the unique behavioural patterns that each vessel follows and successively using this as a means of predicting multiple vessel behaviour at a future point in time. We design, train and implement a proof-of-concept ANN, as a cloud based web application, with the ability to overlay predicted short and long term vessel behaviour on an interactive map. Our proposed approach could potentially assist in busy port scheduling, vessel route planning, anomaly detection and increasing overall Maritime Domain Awareness. | A cloud based architecture capable of perceiving and predicting multiple vessel behaviour |
S1568494615004330 | In this study, gene expression programming (GEP) is employed as a new method for estimating the side weir discharge coefficient. The accuracy of existing equations in evaluating the side weir discharge coefficient is first examined. Afterward, taking into consideration the dimensionless parameters that affect the estimation of this coefficient, together with a sensitivity analysis, five different models are presented. The coefficient of determination (R2), root mean square error (RMSE), mean absolute relative error (MARE), scatter index (SI) and BIAS are used for measuring the models’ performance. Two sets of experimental data are applied to evaluate the models. The results obtained indicate that the model with the Froude number (F1), dimensionless weir length (b/B), ratio of weir length to depth of upstream flow (b/y1), and ratio of weir height to upstream flow depth (p/y1) as parameters, with R2 = 0.947, MARE = 0.05, RMSE = 0.037, BIAS = 0.01 and SI = 0.067, performed best. Accordingly, this new equation proposed through GEP can be utilized for estimating the discharge coefficient in rectangular sharp-crested side weirs. | Gene expression programming to predict the discharge coefficient in rectangular side weirs |
S1568494615004342 | The nurse rostering problem (NRP) is a combinatorial optimization problem tackled by assigning a set of shifts to a set of nurses, each of whom has specific skills and a work contract, over a predefined rostering period according to a set of constraints. Metaheuristics are the most successful methods for tackling this problem. This paper proposes a metaheuristic technique called a hybrid artificial bee colony (HABC) for the NRP. In HABC, the process of the employed bee operator is replaced with a hill climbing optimizer (HCO) to empower its exploitation capability, and the usage of the HCO is controlled by a hill climbing rate (HCR) parameter. The performance of the proposed HABC is evaluated using the standard dataset published in the first international nurse rostering competition 2010 (INRC2010). This dataset consists of 69 instances which reflect this problem in many real-world cases that vary in size and complexity. The experimental results of studying the effect of the HCO using different values of HCR show that the HCO has a great impact on the performance of HABC. In addition, a comparative evaluation of HABC is carried out against eleven other methods that worked on the INRC2010 dataset. The comparative results show that the proposed algorithm achieved two new best results for two problem instances, matched 35 best published results out of 69 instances achieved by other comparative methods, and obtained comparable results on the remaining instances of the INRC2010 dataset. | A hybrid artificial bee colony for a nurse rostering problem |
S1568494615004354 | Text feature selection is an important step in text classification and directly affects classification performance. Classic feature selection methods include document frequency (DF), information gain (IG), mutual information (MI), and the chi-square test (CHI). Theoretically, these methods are difficult to improve due to the deficiency of their mathematical models. To further improve the effect of feature selection, many studies try to add intelligent optimization algorithms to feature selection methods, such as improved ant colony algorithms and genetic algorithms. Compared to ant colony and genetic algorithms, the particle swarm optimization (PSO) algorithm is simpler to implement and can find the optimal point quickly. Thus, this paper attempts to improve the effect of text feature selection through PSO. By analyzing current achievements in improved PSO and the characteristics of classic feature selection methods, we carried out several explorations. First, we selected the common PSO model and two improved PSO models, based respectively on a functional inertia weight and a constant constriction factor, to optimize feature selection methods. Then, based on the constant constriction factor, we constructed a new functional constriction factor and added it to the traditional PSO model. Finally, we proposed two improved PSO models based on both the functional constriction factor and the functional inertia weight: the synchronously improved PSO model and the asynchronously improved PSO model. In our experiments, CHI was selected as the basic feature selection method, and we improved CHI using the six PSO models mentioned above. The experimental results and significance tests show that the asynchronously improved PSO model is the best among all models, both in the effect of text classification and in stability across different dimensions. | Improved particle swarm optimization algorithm and its application in text feature selection |
S1568494615004366 | The identification of a module's fault-proneness is very important for minimizing cost and improving the effectiveness of the software development process. How to obtain the correlation between software metrics and a module's fault-proneness has been the focus of much research. This paper presents the application of a hybrid artificial neural network (ANN) and Quantum Particle Swarm Optimization (QPSO) to software fault-proneness prediction. The ANN is used for classifying software modules into fault-prone or non-fault-prone categories, and QPSO is applied for reducing dimensionality. The experimental results show that the proposed prediction approach can establish the correlation between software metrics and modules’ fault-proneness, and is very simple because its implementation requires neither extra cost nor expert knowledge. The proposed prediction approach can point software developers to the potentially fault-prone software modules, so developers only need to focus on those modules, which may minimize the effort and cost of software maintenance. | Prediction approach of software fault-proneness based on hybrid artificial neural network and quantum particle swarm optimization |
S1568494615004378 | Differential evolution (DE) refers to a class of population-based stochastic optimization algorithms. In order to solve practical problems involving constraints, such algorithms are typically coupled with a constraint handling scheme. Such constraints often emerge from user requirements, physical laws, statutory requirements, resource limitations, etc., and are routinely evaluated using computationally expensive analyses, e.g., solvers relying on finite element methods, computational fluid dynamics, or computational electromagnetics. In this paper, we introduce a novel scheme of constraint handling, wherein every solution is assigned a random sequence of constraints and the evaluation process is aborted whenever a constraint is violated. The solutions are sorted based on two measures, i.e., the number of satisfied constraints and the violation measure. The number of satisfied constraints takes precedence over the amount of violation. We illustrate the performance of the proposed scheme and compare it with other state-of-the-art constraint handling methods using a common framework based on differential evolution. The results are compared using CEC-2006, CEC-2010 and CEC-2011 test functions with inequality constraints. The results clearly highlight two key aspects: (a) the ability to identify feasible solutions earlier than other constraint handling schemes, and (b) potential savings in computational cost offered by the proposed strategy. | A differential evolution algorithm with constraint sequencing: An efficient approach for problems with inequality constraints
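A minimal sketch of the constraint-sequencing evaluation described above: each solution gets a random constraint order, evaluation aborts at the first violation, and solutions sort lexicographically on (satisfied count, violation). The `g(x) <= 0` convention and the helper names are assumptions.

```python
import random

def evaluate_with_sequencing(x, constraints):
    """Evaluate constraints in a random order assigned to this solution and
    abort at the first violation; returns the number of satisfied
    constraints and the violation at the abort point."""
    order = random.sample(range(len(constraints)), len(constraints))
    satisfied, violation = 0, 0.0
    for idx in order:
        g = constraints[idx](x)        # convention: g(x) <= 0 means satisfied
        if g > 0:
            violation = g              # stop: remaining constraints unevaluated
            break
        satisfied += 1
    return satisfied, violation

def sort_key(entry):
    """More satisfied constraints first; ties broken by smaller violation."""
    satisfied, violation = entry
    return (-satisfied, violation)
```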
S156849461500438X | Despite the wide application of evolutionary computation (EC) techniques to rule discovery in stock algorithmic trading (AT), a comprehensive literature review on this topic is unavailable. Therefore, this paper aims to provide the first systematic literature review on the state-of-the-art application of EC techniques for rule discovery in stock AT. Out of 650 articles published before 2013 (inclusive), 51 relevant articles from 24 journals were confirmed. These papers were reviewed and grouped into three analytical method categories (fundamental analysis, technical analysis, and blending analysis) and three EC technique categories (evolutionary algorithms, swarm intelligence, and hybrid EC techniques). A significant bias toward the application of genetic algorithm-based (GA) and genetic programming-based (GP) techniques in technical trading rule discovery is observed. Other EC techniques and fundamental analysis lack sufficient study. Furthermore, we summarize the information on the evaluation schemes of the selected papers and particularly analyze the studies that compare their models with the buy-and-hold strategy (B&H). We observe an interesting phenomenon: most of the existing techniques perform effectively in downtrends and poorly in uptrends. Considering the distribution of research in the classification framework, we suggest that this phenomenon can be attributed to the inclination of factor selections and to problems in transaction cost selection. We also observe the significant influence of transaction cost changes on the margins of excess return. Other influencing factors are also presented in detail. The absence of methods for market trend prediction and the selection of transaction costs are two major limitations of the studies reviewed. In addition, the combination of trading rule discovery techniques and portfolio selection is a major research gap. Our review reveals the research focus and gaps in applying EC techniques for rule discovery in stock AT and suggests a roadmap for future research. | Application of evolutionary computation for rule discovery in stock algorithmic trading: A literature review
S1568494615004391 | Tomato (Solanum lycopersicum) ripeness estimation is an important process that affects its quality evaluation and marketing. However, the slow speed, subjectivity and time consumption associated with manual assessment have been forcing the agriculture industry to apply automation through robots. The vision system of a harvesting robot is responsible for two tasks: the recognition of objects (tomatoes) and the classification of the recognized objects. In this paper, a Fuzzy Rule-Based Classification approach (FRBCS) is proposed to estimate the ripeness of tomatoes based on color. Two color depictions, the red-green color difference and the red-green color ratio, are derived from the extracted RGB color information and compared as criteria for classification. Fuzzy partitioning of the feature space into linguistic variables is done by means of a learning algorithm. A rule set is automatically generated from the derived feature set using Decision Trees. A Mamdani fuzzy inference system is adopted for building the fuzzy rule based classification system that classifies the tomatoes into six maturity stages. The dataset used for the experiments was created from real images collected on a farm. 70% of the images were used for training and 30% for testing. The training dataset is divided into six classes representing the six stages of tomato ripeness. Experimental results showed the system achieved a ripeness classification accuracy of 94.29% using the proposed FRBCS. | Fuzzy classification of pre-harvest tomatoes for ripeness estimation – An approach based on automatic rule learning using decision tree
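The two color depictions can be computed directly from RGB pixels, as in this minimal sketch; the fuzzy partitioning and decision-tree rule learning described above would then operate on these two features. The function name and region handling are illustrative assumptions.

```python
def ripeness_features(rgb_pixels):
    """Mean red-green difference and red-green ratio over a tomato region;
    these are the two color depictions used as classification criteria."""
    n = len(rgb_pixels)
    mean_r = sum(p[0] for p in rgb_pixels) / n
    mean_g = sum(p[1] for p in rgb_pixels) / n
    diff = mean_r - mean_g                       # grows as the fruit reddens
    ratio = mean_r / mean_g if mean_g else float("inf")
    return diff, ratio

# e.g. a mostly-red region yields a large difference and a ratio well above 1
print(ripeness_features([(200, 60, 40), (190, 80, 50), (210, 70, 45)]))
```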
S1568494615004408 | The electrocardiogram is the most commonly used tool for the diagnosis of cardiologic diseases. In order to help cardiologists diagnose arrhythmias automatically, new methods for automated, computer aided ECG analysis are being developed. In this paper, a Modified Artificial Bee Colony (MABC) algorithm for ECG heart beat classification is introduced. It is applied to an ECG data set obtained from the MIT-BIH database, and the result of MABC is compared with the accuracy of seventeen other classifiers. In classification problems, some features have higher distinctiveness than others. In this study, in order to find more distinctive features, a detailed analysis has been performed on time-domain features. By using the right features in the MABC algorithm, a high classification success rate (99.30%) is obtained. Other methods generally have high classification accuracy on the examined data set, but they have relatively low or even poor sensitivities for some beat types. Different data sets and unbalanced sample numbers in different classes affect the classification result. When a balanced data set is used, MABC provides the best result, 97.96%, among all classifiers. Not only part of the records from the examined MIT-BIH database, but all data from the selected records are used, so that the developed algorithm can later be deployed on a real-time system by adding software modules and adapting it to specific hardware. | ECG heart beat classification method based on modified ABC algorithm
S156849461500441X | In this paper, we propose an H.264/AVC compressed domain human action recognition system with a projection based metacognitive learning classifier (PBL-McRBFN). The features are extracted from the quantization parameters and the motion vectors of the compressed video stream over a time window and used as input to the classifier. Since compressed domain analysis works with noisy, sparse compression parameters, it is a huge challenge to achieve performance comparable to pixel domain analysis. On the positive side, the compressed domain allows rapid analysis of videos compared to pixel level analysis. The classification results are analyzed for different values of the Group of Pictures (GOP) parameter and for different time windows, including full videos. The functional relationship between the features and action labels is established using PBL-McRBFN with a cognitive and a meta-cognitive component. The cognitive component is a radial basis function, while the meta-cognitive component employs self-regulation to achieve better performance in the subject independent action recognition task. The proposed approach is faster and shows comparable performance with respect to state-of-the-art pixel domain counterparts. It employs partial decoding, which rules out the complexity of full decoding, and minimizes computational load and memory usage. This results in reduced hardware utilization and increased classification speed. The results on two benchmark datasets show more than 90% accuracy using the PBL-McRBFN. The performance for various GOP parameters and groups of frames is obtained with twenty random trials and compared with other well-known classifiers in the machine learning literature. | Human action recognition in H.264/AVC compressed domain using meta-cognitive radial basis function network
S1568494615004421 | The aim of this paper is to develop a simulated annealing-based permutation method for multiple criteria decision analysis within the environment of interval type-2 fuzzy sets. The outranking methodology constitutes one of the most fruitful approaches in multiple criteria decision making and has been applied in numerous real-world problems. The permutation method is a classical outranking model, which generalizes Jacquet–Lagreze's permutation method and is based on a pairwise criterion comparison of the alternatives. Because modeling of the uncertainty in the decision-making process becomes increasingly important, an extension to the interval type-2 fuzzy environment is a useful generalization of the permutation method and is appropriate for handling uncertain and imprecise information in practical decision-making situations. This paper produces a signed-distance-based comparison among the comprehensive rankings of alternatives for concordance and discordance analyses. An integrated nonlinear programming model is constructed for estimation of the criterion weights and the optimal ranking order of the alternatives under incomplete preference information. To enhance the implementation efficiency, a simulated annealing-based permutation method and its meta-heuristic algorithm are developed to produce a polynomial time solution for the total completion time problem. Furthermore, computational experiments with notably large amounts of simulation data are conducted to test the solution approach and validate the correctness of the approximate solution compared with the optimal all-permutation-based result. | A simulated annealing-based permutation method and experimental analysis for multiple criteria decision analysis with interval type-2 fuzzy sets |
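For readers unfamiliar with the underlying metaheuristic, a generic simulated annealing skeleton over permutations (as used here to search ranking orders) looks like the following; the swap neighborhood and cooling values are illustrative, not the paper's settings.

```python
import math
import random

def simulated_annealing(perm, cost, t0=100.0, alpha=0.95, iters=10_000):
    """Generic SA over permutations with swap moves and geometric cooling."""
    cur, cur_cost, t = perm[:], cost(perm), t0
    best, best_cost = cur[:], cur_cost
    for _ in range(iters):
        i, j = random.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        cand_cost = cost(cand)
        delta = cand_cost - cur_cost
        # accept improvements always, worse moves with Boltzmann probability
        if delta <= 0 or random.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        t *= alpha                     # geometric cooling schedule
    return best, best_cost
```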
S1568494615004445 | Exploration is one of the most important functions for a mobile service robot, because a map is required to carry out various tasks. A suitable strategy is needed to efficiently explore an environment and to build an accurate map. This study proposes the use of several gains (information, driving, localization) that, if considered during exploration, can simultaneously improve the efficiency of the exploration process and the quality of the resulting map. Considering the information and driving gains reduces behavior that leads a robot to re-explore a previously visited place, and thus the exploration distance is reduced. In addition, the robot can select a favorable path for localization by considering the localization gain during exploration, and can estimate its pose more robustly than with methods that do not consider localizability. The proposed exploration method was verified by various experiments, which showed that a robot can build an accurate map fully autonomously and efficiently in various home environments using the proposed method. | Sensor fusion-based exploration in home environments using information, driving and localization gains
S1568494615004457 | In this paper, we propose the use of a hybrid algorithm for the inversion of 3D Alternating Current (AC) resistivity logging measurements. The forward problem is solved using a goal-oriented self-adaptive hp-Finite Element Method (hp-FEM) that provides exponential convergence of the numerical error with respect to the mesh size. The inverse problem is solved using a Hierarchical Genetic Search (HGS) coupled with a Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. Individuals in the genetic populations represent the resistivities of the formation layers. The fitness function is estimated based on hp-FEM results. The hybrid method controls the accuracy of evaluation of particular individuals, as well as the accuracy of the genetic coding. After finding the regions where the fitness function has small values, a local search by means of the BFGS algorithm is executed. The paper is concluded with numerical results for the hybrid algorithm. | A hybrid method for inversion of 3D AC resistivity logging measurements
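The HGS-BFGS coupling follows the common pattern of a coarse genetic search followed by gradient-based refinement, sketched below with SciPy. The placeholder misfit stands in for the hp-FEM forward solve, and the three-layer log-resistivity parameterization is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize

TARGET = np.log([1.0, 20.0, 5.0])   # hypothetical true layer resistivities (log scale)

def misfit(log_rho):
    """Placeholder fitness; in the paper this is the goal-oriented hp-FEM
    forward solution compared against the measured AC logging data."""
    return float(np.sum((log_rho - TARGET) ** 2))

def hybrid_search(pop_size=30, gens=50, lo=np.log(0.1), hi=np.log(1000.0)):
    rng = np.random.default_rng(0)
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(gens):                       # coarse genetic search
        fit = np.array([misfit(ind) for ind in pop])
        winners = [min(rng.integers(pop_size, size=2), key=lambda i: fit[i])
                   for _ in range(pop_size)]    # binary tournament selection
        pop = np.clip(pop[winners] + rng.normal(0, 0.2, pop.shape), lo, hi)
    best = min(pop, key=misfit)                 # BFGS refinement of the best basin
    res = minimize(misfit, best, method="BFGS")
    return np.exp(res.x), res.fun               # back to resistivity units

print(hybrid_search())
```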
S1568494615004469 | This paper presents an experimental Design for Manufacturing (DFM) methodology used to survey and analyze geometric deviations of CNC machine tools through their final product. These deviations generate direct costs that can be avoided through the use of Intelligent Manufacturing Systems (IMS), by applying Artificial Neural Networks (ANNs) to predict the fabrication parameters. After the experiments, it was possible to evaluate the experimental methodology, the equations and the data-adjustment variables, and thus validate the methodology as a DFM tool with a high potential return in product quality, development time and process reliability, with wide application to various CNC machines. | Correcting geometric deviations of CNC Machine-Tools: An approach with Artificial Neural Networks
S1568494615004470 | Parallel robots have complicated structures as well as complex dynamic and kinematic equations, rendering model-based control approaches ineffective due to their high computational cost and low accuracy. Here, we propose a model-free dynamic-growing control architecture for parallel robots that combines the merits of self-organizing systems with those of interval type-2 fuzzy neural systems. The proposed approach is then applied experimentally to position control of a 3-PSP (Prismatic–Spherical–Prismatic) parallel robot. The proposed rule-base construction differs from most conventional self-organizing approaches by omitting the node pruning process while adding nodes more conservatively. This helps preserve valuable historical rules for when they are needed. The use of an interval type-2 fuzzy logic structure also better enables coping with uncertainties in parameters, in the dynamics of the robot model and in the rule space. Finally, the adaptation structure allows learning and further adapts the rule base to a changing environment. Multiple simulation and experimental studies confirm that the proposed approach leads to fewer rules, lower computational cost and higher accuracy when compared with two competing type-1 and type-2 fuzzy neural controllers. | Position tracking of a 3-PSP parallel robot using dynamic growing interval type-2 fuzzy neural control
S1568494615004482 | The firefly algorithm (FA) is a recent member of the family of bio-inspired metaheuristics, originally proposed to find solutions to continuous optimization problems. The popularity of FA has increased recently due to its effectiveness in handling various optimization problems. To enhance its performance even further, an adaptive firefly algorithm (AFA) is proposed in this paper to solve mechanical design optimization problems, with the adaptivity focused on the search mechanism and adaptive parameter settings. Moreover, chaotic maps are embedded into the AFA for performance improvement. It is shown through experimental tests that some of the best known results are improved by the proposed algorithm. | Adaptive firefly algorithm with chaos for mechanical design optimization problems
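One common way to embed a chaotic map in FA is to drive the randomization weight with a logistic map, as in the sketch below; whether the AFA perturbs exactly this parameter is an assumption here.

```python
import math
import random

def logistic_map(z, mu=4.0):
    """Chaotic logistic map, a frequent choice for chaotic FA parameters."""
    return mu * z * (1.0 - z)

def firefly_move(xi, xj, z, beta0=1.0, gamma=1.0):
    """Move firefly i toward brighter firefly j; the randomization weight
    alpha is driven by a chaotic sequence instead of a fixed constant."""
    z = logistic_map(z)
    alpha = 0.5 * z                       # chaotic, self-adjusting step size
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
    new_x = [a + beta * (b - a) + alpha * (random.random() - 0.5)
             for a, b in zip(xi, xj)]
    return new_x, z                       # return z to continue the chaotic orbit
```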
S1568494615004494 | An intelligent detection method is proposed in this paper to enrich the study of applying machine learning and data mining techniques to building structural damage identification. The proposed method integrates multi-sensory data fusion and a classifier ensemble to detect the location and extent of the damage. First, wavelet packet analysis is used to transform the original vibration acceleration signal into energy features. Then posteriori probability support vector machines (PPSVM) and the Dempster–Shafer (DS) evidence theory are combined to identify the damage. An empirical study on a benchmark structure model shows that, compared with popular data mining approaches, the proposed method can provide more accurate and stable detection results. Furthermore, this paper compares the detection performance of information fusion at different levels. The experimental analysis demonstrates that the proposed method, with fusion at the decision level, makes good use of multi-sensory information and is more robust in practice. | Structural damage detection based on posteriori probability support vector machine and Dempster–Shafer evidence theory
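Dempster's rule of combination, the standard DS mechanism for fusing per-sensor beliefs at the decision level, can be sketched as follows; the two-hypothesis damage example is illustrative.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments
    given as dicts mapping frozenset hypotheses to masses."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to disjoint sets
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# e.g. fusing two sensors' beliefs over damaged {'D'} vs normal {'N'},
# with some mass left on the ignorance set {'D','N'}
m_s1 = {frozenset("D"): 0.7, frozenset("N"): 0.2, frozenset("DN"): 0.1}
m_s2 = {frozenset("D"): 0.6, frozenset("N"): 0.3, frozenset("DN"): 0.1}
print(dempster_combine(m_s1, m_s2))
```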
S1568494615004500 | Cooperative optimization algorithms have been applied with success to solve many optimization problems. However, many of them often lose their effectiveness and advantages when solving large-scale and complex problems, e.g., those with interacting variables. A key issue involved in cooperative optimization is the task of problem decomposition. In this paper, a fast search operator is proposed to capture the interdependencies among variables. Problem decomposition is performed based on the obtained interdependencies. Another key issue involved is the optimization of the subproblems. A cross-cluster mutation strategy is proposed to further enhance exploitation and exploration. More specifically, each operator is identified as exploitation-biased or exploration-biased. The population is divided into several clusters. For the individuals within each cluster, the exploitation-biased operators are applied. For the individuals among different clusters, the exploration-biased operators are applied. The proposed operators are incorporated into the original differential evolution algorithm. The experiments were carried out on the CEC2008, CEC2010, and CEC2013 benchmarks. For comparison, six algorithms that yield top-ranked results in the CEC competition are selected. The comparison results demonstrate that the proposed algorithm is robust and comprehensive for large-scale optimization problems. | Cooperative differential evolution with fast variable interdependence learning and cross-cluster mutation
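The interdependence idea can be illustrated with a finite-difference style test: two variables interact when the effect of perturbing one depends on the value of the other. This mirrors the spirit of the fast learning operator; the exact sampling scheme in the paper differs.

```python
def interacts(f, x, i, j, delta=1.0, eps=1e-6):
    """Return True if variables i and j of f appear interdependent at x."""
    base = list(x)
    xi = list(base); xi[i] += delta
    xj = list(base); xj[j] += delta
    xij = list(xi);  xij[j] += delta
    # effect of perturbing x[i], measured at two settings of x[j]
    d_at_base = f(xi) - f(base)
    d_at_j = f(xij) - f(xj)
    return abs(d_at_base - d_at_j) > eps   # differs => i and j interact

# separable in x0, but x1*x2 couples the last two variables
f = lambda v: v[0] ** 2 + v[1] * v[2]
print(interacts(f, [1.0, 1.0, 1.0], 0, 1))  # False: x0 is independent of x1
print(interacts(f, [1.0, 1.0, 1.0], 1, 2))  # True: x1 and x2 interact
```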
S1568494615004512 | Even though Self-Organizing Maps (SOMs) constitute a powerful and essential tool for pattern recognition and data mining, the common SOM algorithm is not apt for processing categorical data, which is present in many real datasets. It is for this reason that categorical values are commonly converted into a binary code, a solution that unfortunately distorts the network training and the subsequent analysis. The present work proposes a SOM architecture that directly processes categorical values, without the need for any previous transformation. This architecture is also capable of properly mixing numerical and categorical data, in such a manner that all the features adopt the same weight. The proposed implementation is scalable and the corresponding learning algorithm is described in detail. Finally, we demonstrate the effectiveness of the presented algorithm by applying it to several well-known datasets. | Mixing numerical and categorical data in a Self-Organizing Map by means of frequency neurons
S1568494615004524 | The paper focuses on the adaptive relational association rule mining problem. Relational association rules represent a particular type of association rules which describe frequent relations that occur between the features characterizing the instances within a data set. We aim at re-mining an object set, previously mined, when the feature set characterizing the objects increases. An adaptive relational association rule method, based on the discovery of interesting relational association rules, is proposed. This method, called ARARM (Adaptive Relational Association Rule Mining) adapts the set of rules that was established by mining the data before the feature set changed, preserving the completeness. We aim to reach the result more efficiently than running the mining algorithm again from scratch on the feature-extended object set. Experiments testing the method's performance on several case studies are also reported. The obtained results highlight the efficiency of the ARARM method and confirm the potential of our proposal. | A novel approach to adaptive relational association rule mining |
S1568494615004536 | This paper addresses the selection of sub-features from each feature using fuzzy methodologies, while maintaining privacy during the collection of data from participating parties in a distributed environment. Based on fuzzy random variables, a conditional expectation is used in which two fuzzy sets are generated using a Borel set, which helps to determine a sub-feature within a certain interval. The privacy and the selection of sub-features leading to a distinguished class are the main objectives of this research work. These two problems are directly related to the data mining problems of classification and feature characterization. In many cases traditional techniques are not suitable for complex databases. Our methodology, however, provides a better way to select sub-features under different situations. The proposed model and techniques are supported by both extensive theoretical analysis and experimental results. The experiments show the effectiveness and performance of the approach on real-world data sets. | Privacy preserving sub-feature selection in distributed data mining
S1568494615004548 | A two-stage memory architecture is maintained within the framework of the great deluge algorithm for the solution of the single-objective quadratic assignment problem. Search operators exploiting the accumulated experience in memory are also implemented to direct the search towards more promising regions of the solution space. The level-based acceptance criterion of the great deluge algorithm is applied to each best solution extracted in a particular iteration. The use of short- and long-term memory-based search supported by effective move operators results in a powerful combinatorial optimization algorithm. A successful variant of tabu search is employed as the local search method and is applied only over a few randomly selected memory elements when the second-stage memory is updated. The success of the presented approach is illustrated using sets of well-known benchmark problems and evaluated in comparison to well-known combinatorial optimization algorithms. Experimental evaluations clearly demonstrate that the presented approach is a competitive and powerful alternative for solving quadratic assignment problems. | A great deluge and tabu search hybrid with two-stage memory support for quadratic assignment problem
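The level-based acceptance criterion at the heart of the great deluge algorithm can be sketched as follows; the geometric level decay and swap neighborhood are illustrative choices, and the two-stage memory and tabu search components of the hybrid are omitted.

```python
import random

def great_deluge(init, cost, decay=0.999, iters=100_000):
    """Level-based acceptance: a candidate is accepted while its cost stays
    below the water level, which recedes geometrically each iteration."""
    def neighbor(s):                          # default move: swap two positions
        i, j = random.sample(range(len(s)), 2)
        t = s[:]
        t[i], t[j] = t[j], t[i]
        return t

    cur, cur_cost = init[:], cost(init)
    best, best_cost = cur[:], cur_cost
    level = cur_cost                          # start the level at the initial cost
    for _ in range(iters):
        cand = neighbor(cur)
        c = cost(cand)
        if c <= level or c <= cur_cost:       # accept under the water level
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        level *= decay                        # the deluge: level slowly drops
    return best, best_cost
```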
S156849461500455X | This paper presents a biologically inspired, sequential learning spiking neural classifier (SLSNC) for pattern classification problems. It consists of a two layered neural network and a separate decision block which estimates the predicted class label. Inspired by observations in the neuroscience literature, the input layer employs a new neuron model which converts real valued stimuli into spikes with varying amplitudes and firing times. The intermediate layer neurons are modeled as integrate-and-fire spiking neurons. The decision block identifies that intermediate neuron which fires first and returns the class label associated with that neuron as the predicted class label. The sequential learning algorithm for the spiking neural network automatically determines the network structure from the training samples and adapts its synaptic weights by long term potentiation and long term depression. Performance of SLSNC has been evaluated using a number of benchmark classification problems and the results have been compared with other well-known spiking neural network classifiers in the literature as well as with the standard support vector machine (SVM) with a Gaussian kernel and the fast learning Extreme Learning Machine (ELM) classifiers. The results clearly indicate that the described spiking neural network produces similar or better generalization performance with a smaller network. | A sequential learning algorithm for a spiking neural classifier |
S1568494615004561 | The offline 2D bin packing problem (2DBPP) is an NP-hard combinatorial optimization problem in which objects with various widths and lengths are packed into a minimized number of 2D bins. Various versions of this well-known industrial engineering problem are faced frequently. Several heuristics have been proposed for the solution of the 2DBPP, but it has not been possible to find exact solutions for large problem instances. Next fit, first fit, best fit, unified tabu search, and genetic and memetic algorithms are some of the state-of-the-art methods successfully applied to this important problem. In this study, we propose a set of novel hyper-heuristic algorithms that select/combine state-of-the-art heuristics and local search techniques for minimizing the number of 2D bins. The proposed algorithms introduce new crossover and mutation operators for the selection of the heuristics. Through the results of exhaustive experiments on a set of offline 2DBPP benchmark problem instances, we conclude that the proposed algorithms are robust, with the ability to obtain a high percentage of the optimal solutions. | Robust hyper-heuristic algorithms for the offline oriented/non-oriented 2D bin packing problems
S1568494615004585 | Detecting fraudulent and abusive cases in healthcare is one of the most challenging problems for data mining studies. However, most of the existing studies lack real data for analysis and focus on a very limited version of the problem by covering only a specific actor, healthcare service, or disease. The purpose of this study is to implement and evaluate a novel framework to detect fraudulent and abusive cases independently of the actors and commodities involved in the claims, with an extensible structure for introducing new fraud and abuse types. Interactive machine learning, which allows incorporating expert knowledge in an unsupervised setting, is utilized to detect fraud and abusive cases in healthcare. In order to increase the accuracy of the framework, several well-known methods are utilized, such as the pairwise comparison method of analytic hierarchical processing (AHP) for weighting the actors and attributes, expectation maximization (EM) for clustering similar actors, two-stage data warehousing for proactive risk calculations, visualization tools for effective analysis, and z-scores and standardization for calculating the risks. The experts are involved in all phases of the study and produce six different abnormal behavior types using storyboards. The proposed framework is evaluated with real-life data for six different abnormal behavior types for prescriptions, covering all relevant actors and commodities. The Area Under the Curve (AUC) values are presented for each experiment. Moreover, a cost-saving model is also presented. The developed framework, i.e., the eFAD suite, is actor- and commodity-independent, configurable (i.e., easily adaptable to the dynamic environment of fraud and abusive behaviors), and effectively handles the fragmented nature of abnormal behaviors. The proposed framework combines both proactive and retrospective analysis with an enhanced visualization tool that significantly reduces the time requirements of the fact-finding process after the eFAD detects risky claims. This system is utilized by a company to produce monthly reports that include abnormal behaviors to be evaluated by the insurance company. | An interactive machine-learning-based electronic fraud and abuse detection system in healthcare insurance
S1568494615004597 | Corneal images can be acquired using confocal microscopes, which provide detailed views of the different layers inside a human cornea. Some corneal problems and diseases can occur in one or more of the main corneal layers: the epithelium, stroma and endothelium. Consequently, for automatically extracting clinical information associated with corneal diseases, identifying abnormality or evaluating the normal cornea, it is important to be able to recognise these layers reliably and automatically. Artificial intelligence (AI) approaches can provide improved accuracy over conventional processing techniques and save considerable time compared with the manual analysis required of clinical experts. Artificial neural networks (ANNs), adaptive neuro-fuzzy inference systems (ANFIS) and a committee machine (CM) have been investigated and tested to improve the recognition accuracy of the main corneal layers and to identify abnormality in these layers. The performance of the CM, formed from an ANN and ANFIS, achieves an accuracy of 100% for some classes in the processed data sets. Three normal corneal data sets and seven abnormal corneal images associated with diseases in the main corneal layers have been investigated with the proposed system. Statistical analysis of these data sets is performed to track any change in the processed images. The system is able to pre-process (quality enhancement, noise removal) and classify corneal images, identify abnormalities in the analysed data sets, and visualise corneal stroma images as well as each individual keratocyte cell in a 3D volume for further clinical analysis. | Medical image classification based on artificial intelligence approaches: A practical study on normal and abnormal confocal corneal images
S1568494615004603 | Classification is a major research field in pattern recognition, and many methods have been proposed to enhance the generalization ability of classifiers. Ensemble learning is one such method; it enhances classification ability by creating several classifiers and making decisions by combining their classification results. On the other hand, in stock trading problems, market trends are very important for deciding when to buy and sell stocks. In this case, combinations of trading rules that can adapt to various kinds of trends are effective for judging good timings of buying and selling. Therefore, in this paper, to enhance the performance of a stock trading system, an ensemble learning mechanism for a rule-based evolutionary algorithm using a multi-layer perceptron (MLP) is proposed, where several rule pools for stock trading are created by the rule-based evolutionary algorithm, effective rule pools are adaptively selected by the MLP, and the selected rule pools cooperatively make stock trading decisions. The simulations clarify that the proposed method shows higher profits or lower losses than the method without ensemble learning and than buy & hold. | Ensemble learning of rule-based evolutionary algorithm using multi-layer perceptron for supporting decisions in stock trading problems
S1568494615004615 | Cross docking is a logistic concept in which product items are unloaded from inbound trucks into a warehouse and then are sorted based on customer demands and loaded into outbound trucks. For the dock holding pattern of outbound trucks, two possible scenarios can be defined. In the first scenario, whenever a truck goes into a shipping dock, it does not leave the dock until all needed product items are loaded into the outbound truck. In the second scenario, outbound trucks can enter and leave the dock repeatedly. Therefore, in the second scenario it is possible that an outbound truck loads some of its needed products from the shipping dock, leaves the dock for another outbound truck, waits, and goes into the shipping dock again to load all or part of its remaining product items. This paper proposes a genetic algorithm-based framework for scheduling inbound and outbound trucks in cross docking systems with temporary storage of product items at the shipping dock for the second defined scenario, such that the total operation time is minimized. In order to show the merit of the proposed method in providing a sequence that minimizes the total operation time, the operation time of the proposed method is compared with a well-known existing model on several numerical examples. The numerical results show the high performance of the proposed algorithm. | Scheduling trucks in cross docking systems with temporary storage and repetitive pattern for shipping trucks
S1568494615004627 | The management of hesitant fuzzy information is a topic of special interest in fuzzy decision making. In this paper, we focus on the use and properties of fuzzy linguistic modelling based on discrete fuzzy numbers to manage hesitant fuzzy linguistic information. Among these properties, we can highlight the existence of aggregation functions with no need for transformations, and the possibility of greater flexibility in the opinions of the experts, even when using different linguistic chains (multigranularity). Furthermore, based on these properties we perform a comparison between this model and the one based on hesitant fuzzy linguistic term sets, showing the advantages of the former with respect to the latter. Finally, a fuzzy decision making model based on discrete fuzzy numbers is proposed. | Some interesting properties of the fuzzy linguistic model based on discrete fuzzy numbers to manage hesitant fuzzy linguistic information
S1568494615004639 | Feature selection is the basic pre-processing task of eliminating irrelevant or redundant features by investigating complicated interactions among features in a feature set. Due to its critical role in classification performance and computational time, it has attracted researchers’ attention for the last five decades. However, it still remains a challenge. This paper proposes a binary artificial bee colony (ABC) algorithm for feature selection problems, developed by integrating evolutionary based similarity search mechanisms into an existing binary ABC variant. The performance analysis of the proposed algorithm is demonstrated by comparing it with some well-known variants of the particle swarm optimization (PSO) and ABC algorithms, including standard binary PSO, new velocity based binary PSO, quantum inspired binary PSO, discrete ABC, modification rate based ABC, angle modulated ABC, and genetic algorithms on 10 benchmark datasets. The results show that the proposed algorithm can obtain higher classification performance in both training and test sets, and can eliminate irrelevant and redundant features more effectively than the other approaches. Note that all the algorithms used in this paper except for standard binary PSO and GA are employed for the first time in feature selection. | A binary ABC algorithm based on advanced similarity scheme for feature selection
S1568494615004640 | To address the chaotic behavior of the PMSG under certain parameters, a new ADHDP method based on a Cloud RBF neural network is proposed to track the point of maximum wind power; it can drive the system out of chaos and track the maximum power point stably, and it effectively solves the optimal control problems of this complex nonlinear system. The method is realized by using the optimal power–speed curve and the vector control principle, and by adjusting the stator output voltage to control the electromagnetic torque, so that the rotor speed of the wind turbine can operate at the optimal speed corresponding to the maximum power point; meanwhile, measurement of the wind speed is also avoided. The simulation results show the effectiveness of the proposed method. | Maximum wind power tracking for PMSG chaos systems – ADHDP method
S1568494615004652 | A skeleton provides a synthetic and thin representation of three-dimensional objects and is useful for shape description and recognition. In this paper, a novel 3D skeleton algorithm is proposed based on a neutrosophic cost function. First, the distance transform is applied to a 3D volume, and the distance value is obtained for each voxel in the volume. The ridge points are identified based on their distance transform values and are used as candidates for the skeleton. Then, a novel cost function, namely the neutrosophic cost function (NCF), is proposed based on neutrosophic set theory and is utilized to define the cost between ridge points. Finally, a shortest path finding algorithm is used to identify the optimum path in the 3D volume with least cost, in which the costs of paths are calculated using the newly defined NCF. The optimum path is treated as the skeleton of the 3D volume. A variety of experiments have been conducted on different 3D volumes. The experimental results demonstrate the better performance of the proposed method: it can identify the skeleton of different volumes with high accuracy. In addition, the proposed method is robust to noise in the volume. These advantages make it suitable for wide application in skeleton detection in the real world. | A novel 3D skeleton algorithm based on neutrosophic cost function
S1568494615004664 | A focused crawler is topic-specific and aims selectively to collect web pages that are relevant to a given topic from the Internet. In many studies, the Vector Space Model (VSM) and Semantic Similarity Retrieval Model (SSRM) take advantage of cosine similarity and semantic similarity to compute similarities between web pages and the given topic. However, if there are no common terms between a web page and the given topic, the VSM will not obtain the proper topical similarity of the web page. In addition, if all of the terms between them are synonyms, then the SSRM will also not obtain the proper topical similarity. To address these problems, this paper proposes an improved retrieval model, the Semantic Similarity Vector Space Model (SSVSM), which integrates the TF*IDF values of the terms and the semantic similarities among the terms to construct topic and document semantic vectors that are mapped to the same double-term set, and computes the cosine similarities between these semantic vectors as topic-relevant similarities of documents, including the full texts and anchor texts of unvisited hyperlinks. Next, the proposed model predicts the priorities of the unvisited hyperlinks by integrating the full text and anchor text topic-relevant similarities. The experimental results demonstrate that this approach improves the performance of the focused crawlers and outperforms other focused crawlers based on Breadth-First, VSM and SSRM. In conclusion, this method is significant and effective for focused crawlers. | An improved focused crawler based on Semantic Similarity Vector Space Model |
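A minimal sketch of the SSVSM intuition: TF*IDF-weighted terms are projected onto a shared term set through a semantic similarity function, so a page and a topic with no literal term overlap can still score a positive cosine. The toy synonym table and helper names are assumptions.

```python
import math

def semantic_vector(weights, sem_sim, vocab):
    """Map a TF*IDF-weighted term dict onto a shared vocabulary, letting a
    term contribute to related terms through a semantic similarity function."""
    return [sum(w * sem_sim(t, v) for t, w in weights.items()) for v in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# toy similarity: 1 for identical terms, a fixed value for known synonyms
SYN = {("car", "automobile"), ("automobile", "car")}
sem_sim = lambda t, v: 1.0 if t == v else (0.8 if (t, v) in SYN else 0.0)

vocab = ["car", "automobile", "engine"]
topic = {"car": 0.9, "engine": 0.4}
page = {"automobile": 0.7}          # no literal term overlap with the topic
print(cosine(semantic_vector(topic, sem_sim, vocab),
             semantic_vector(page, sem_sim, vocab)))  # > 0, unlike plain VSM
```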
S1568494615004676 | A novel SVM-based classification method is proposed for binary classification tasks on homogeneous data in this paper. The proposed method predicts the binary labeling of a sequence of observation samples in the test set by the following procedure: we first make different assumptions about the class labeling of this sequence, then we utilize SVM to obtain a classification error for each assumption, and finally the binary labeling is determined by comparing the two resulting classification errors. The proposed method leverages the homogeneity within the same classes and exploits the difference between different classes, and hence can achieve effective classification for homogeneous data. Experimental results indicate the power of the proposed method. | A SVM based classification method for homogeneous data
S1568494615004688 | This paper introduces a novel metaheuristic optimization method called the lightning search algorithm (LSA) to solve constrained optimization problems. It is based on the natural phenomenon of lightning and the mechanism of step leader propagation, using the concept of fast particles known as projectiles. Three projectile types are developed to represent the transition projectiles that create the first step leader population, the space projectiles that attempt to become the leader, and the lead projectile fired from the best-positioned step leader. In contrast to its counterparts, the major exploration feature of the proposed algorithm is modeled using the exponential random behavior of the space projectile and the concurrent formation of two leader tips at fork points using opposition theory. To evaluate the reliability and efficiency of the proposed algorithm, the LSA is tested using a well-utilized set of 24 benchmark functions with the various characteristics necessary to evaluate a new algorithm. An extensive comparative study with four other well-known methods is conducted to validate and compare the performance of the LSA. The results demonstrate that the LSA generally provides better results than the other tested methods, with a high convergence rate. | Lightning search algorithm
S156849461500469X | The main aim of network anomaly detection is to effectively spot hostile events within the traffic patterns associated with network operations, distinguishing them from normal activities. This can be accomplished only by acquiring a-priori knowledge about any kind of hostile behavior that can potentially affect the network (which is practically impossible) or, more easily, by building a model that is general enough to describe the normal network behavior and detect violations from it. Earlier detection frameworks were only able to recognize already known phenomena within traffic data by using pre-trained models based on matching specific events to pre-classified chains of traffic patterns. Alternatively, more recent statistics-based approaches are able to detect outliers with respect to a statistical idealization of normal network behavior. Clearly, while the former approach cannot detect previously unknown phenomena (zero-day attacks), the latter has limited effectiveness, since it cannot be aware of anomalous behaviors that do not generate significant changes in traffic volumes. Machine learning allows the development of adaptive, non-parametric detection strategies that are based on “understanding” the network dynamics by acquiring, through a proper training phase, more precise knowledge about normal or anomalous phenomena, in order to classify and handle in a more effective way any kind of behavior that can be observed on the network. Accordingly, we present a new anomaly detection strategy based on supervised machine learning, and more precisely on a batch relevance-based fuzzified learning algorithm known as U-BRAIN, aiming at understanding through inductive inference the specific laws and rules governing normal or abnormal network traffic, in order to reliably model its operating dynamics. The inferred rules can be applied in real time to online network traffic. The proposal appears to be promising both in terms of identification accuracy and in robustness/flexibility when coping with uncertainty in the detection/classification process, as verified through extensive evaluation experiments. | An uncertainty-managing batch relevance-based approach to network anomaly detection
S1568494615004706 | The stock selection problem is one of the major issues in the investment industry, which is mainly solved by analyzing financial ratios. However, considering the complexity and imprecise patterns of the stock market, obvious and easy-to-understand investment rules, based on fundamental analysis, are difficult to obtain. Therefore, in this paper, we propose a combined soft computing model for tackling the value stock selection problem, which includes dominance-based rough set approach, formal concept analysis, and decision-making trial and evaluation laboratory technique. The objectives of the proposed approach are to (1) obtain easy-to-understand decision rules, (2) identify the core attributes that may distinguish value stocks, (3) explore the cause–effect relationships among the attributes or criteria in the strong decision rules to gain more insights. To examine and illustrate the proposed model, this study used a group of IT stocks in Taiwan as an empirical case. The findings contribute to the in-depth understanding of the value stock selection problem in practice. | Combined soft computing model for value stock selection based on fundamental analysis |
S1568494615004718 | The optimal selection of parameters is of great importance for the final product quality in modern industrial manufacturing processes. In order to achieve high product quality, an effective optimization technique is indispensable. In this paper, a new hybrid algorithm named teaching-learning-based cuckoo search (TLCS) is proposed for parameter optimization problems in structure designing as well as machining processes. The TLCS combines the Lévy flight with the teaching-learning process and evolves with a co-evolutionary mechanism: solutions to be abandoned in the cuckoo search perform Lévy flights to generate new solutions, while for the other, better solutions, the teaching-learning process is used to improve the local searching ability of the algorithm. The proposed TLCS method is then applied to several well-known engineering parameter optimization problems. Experimental results show that TLCS obtains some solutions better than those previously reported in the literature, which reveals that the proposed TLCS is a very effective and robust approach for parameter optimization problems. | An effective teaching-learning-based cuckoo search algorithm for parameter optimization problems in structure designing and machining processes
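The Lévy flight used by cuckoo search is typically drawn with Mantegna's algorithm, sketched below; the step scale and the best-biased move are the usual cuckoo search conventions, not necessarily the exact TLCS settings.

```python
import math
import random

def levy_step(beta=1.5):
    """One-dimensional Levy-distributed step via Mantegna's algorithm,
    the standard way cuckoo search draws its flight lengths."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_move(x, best, step_scale=0.01):
    """Generate a new solution around x, biased toward the current best."""
    return [xi + step_scale * levy_step() * (xi - bi)
            for xi, bi in zip(x, best)]
```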
S156849461500472X | Surface Electromyography (sEMG) is a non-invasive, easy-to-record signal of superficial muscles measured from the skin surface. sEMG is widely used in evaluating the functional status of the hand to assist in hand gesture recognition, prosthetics and rehabilitation applications. Considering the nonlinear and non-stationary characteristics of sEMG, hand gesture recognition using sEMG signals necessitates the use of the Maximal Lyapunov Exponent (MLE) or ensemble Empirical Mode Decomposition (EMD) based MLEs. In this research, we propose a hand gesture recognition method for sEMG based on a nonlinear multiscale MLE. The aim is to increase the classification accuracy of sEMG features while reducing the complexity of EMD. The nonlinear MLE features are classified using a Flexible Neural Tree (FNT), which can solve highly structured dependency problems of the Artificial Neural Network (ANN). The testing has been conducted using several experiments with five participants. The classification performance of the nonlinear multiscale MLE method is compared with the MLE and EMD-based MLE through simulations. Experimental results demonstrate that the former algorithm outperforms the two latter algorithms and can classify six different hand gestures with up to 97.6% accuracy. | Nonlinear multiscale Maximal Lyapunov Exponent for accurate myoelectric signal classification
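The multiscale part of such methods is commonly obtained by coarse-graining the signal with non-overlapping window averages and estimating the MLE on each scale, as in this minimal sketch (this standard construction is an assumption here, since the abstract does not spell out the exact multiscale procedure).

```python
def coarse_grain(signal, scale):
    """Non-overlapping window averages used to build the multiscale series;
    the MLE is then estimated on each coarse-grained version."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

# e.g. scales 1..5 of an sEMG channel give five series for MLE estimation:
# emg = [...]; multiscale = [coarse_grain(emg, s) for s in range(1, 6)]
print(coarse_grain([1, 2, 3, 4, 5, 6], 2))  # -> [1.5, 3.5, 5.5]
```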
S1568494615004731 | The computational representation and classification of behaviors is a task of growing interest in the field of Behavior Informatics, with data series being a common way of describing those behaviors. However, as these data are often imperfect, new representation models are required in order to effectively handle imperfection in this context. This work presents a new approach, Frequent Correlated Trends, for representing uncertain and imprecise multivariate data series. Such a model can be applied to any domain where behaviors recur in similar—but not identical—shape. In particular, we have already applied it to the task of identifying the performers of violin recordings, with good results. The present paper describes the abstract model representation and a general learning algorithm, and discusses several potential applications. | Representation model and learning algorithm for uncertain and imprecise multivariate behaviors, based on correlated trends
S1568494615004743 | Regression techniques, such as ridge regression (RR) and logistic regression (LR), have been widely used in supervised learning for pattern classification. However, these methods mainly exploit the class label information for linear mapping function learning. They become less effective when the number of training samples per class is small. In visual classification tasks such as face recognition, the appearance of the training sample images also conveys important discriminative information. This paper proposes a novel regression based classification model, namely Bayesian sample steered discriminative regression (BSDR), which simultaneously exploits the sample class label and the sample appearance for linear mapping function learning by virtue of the Bayesian formula. BSDR learns a linear mapping for each class to extract the image class label features, and classification can be simply done by a nearest neighbor classifier. The proposed BSDR method has advantages such as a small number of mappings, insensitivity to input feature dimensionality and robustness to small sample sizes. Extensive experiments on several biometric databases also demonstrate the promising classification performance of our method. | Bayesian sample steered discriminative regression for biometric image classification
S1568494615004755 | Models for short-term load forecasting based on the similarity of patterns of seasonal cycles are presented. They include: a kernel estimation-based model, nearest neighbor estimation-based models, and pattern clustering-based models using classical clustering methods and new artificial immune systems. The construction of the pattern similarity-based forecasting models and the elements and procedures of the model space are characterized. Details of model learning and optimization using deterministic and stochastic methods, such as evolutionary algorithms and tournament searching, are described. The sensitivity of the models to changes in parameter values and their robustness to noisy and missing data are examined. Comparative studies with other popular forecasting methods such as ARIMA, exponential smoothing and neural networks are performed. The advantages of the proposed models are their simplicity and the small number of parameters to be estimated, which implies simple optimization procedures. The models can successfully deal with missing data. An increased number of model outputs does not complicate their structure. The local nature of the models leads to their simplification and accuracy improvement. The proposed models are strong competitors for other popular univariate methods, which was confirmed in the simulation studies. | Pattern similarity-based methods for short-term load forecasting – Part 2: Models
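The nearest-neighbor estimation-based variant can be sketched as follows: the forecast for the next seasonal cycle is the average of the cycles that followed the k most similar historical patterns. The Euclidean distance and simple averaging are illustrative choices.

```python
import math

def forecast_next_cycle(history, query, k=3):
    """Nearest-neighbour pattern forecasting: find the k past daily cycles
    most similar to the query pattern and average their successor cycles.
    `history` is a list of consecutive daily load patterns (equal length)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # candidate = (distance to query, the cycle that followed it)
    candidates = [(dist(history[i], query), history[i + 1])
                  for i in range(len(history) - 1)]
    candidates.sort(key=lambda c: c[0])
    nearest = [cyc for _, cyc in candidates[:k]]
    return [sum(vals) / len(nearest) for vals in zip(*nearest)]
```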
S1568494615004767 | In this paper, a novel robust observer-based adaptive controller is presented using a proposed simplified type-2 fuzzy neural network (ST2FNN), and a new three-dimensional type-2 membership function is introduced. The proposed controller can be applied to the control of high-order nonlinear systems; adaptation of the consequent parameters and the stability analysis are carried out using the Lyapunov theorem. Moreover, a new adaptive compensator is presented to eliminate the effects of external disturbances, approximation errors of the unknown nonlinear functions, and state estimation errors. In the proposed scheme, using the Lyapunov and Barbalat theorems it is shown that the system is stable and the tracking error converges to zero asymptotically. The proposed method is simulated on a flexible joint robot, a two-link robot manipulator and an inverted double pendulum system. Simulation results confirm that, in contrast to other robust techniques, the proposed method is simple, gives better performance in the presence of noise, external disturbances and uncertainties, and has less computational cost. | A new robust observer-based adaptive type-2 fuzzy control for a class of nonlinear systems
S1568494615004779 | In the real world, a computer/communication system is usually modeled as a capacitated-flow network, since each transmission line (resp. facility), denoted by an edge (resp. node), has multiple capacities. System reliability is thus defined as the probability that d units of data are transmitted successfully from a source node to a sink node. From the perspective of quality management, system reliability is a critical performance indicator of the computer network. This paper focuses on maximizing system reliability for the computer network by finding the optimal two-class allocation subject to a budget, in which the two-class allocation assigns exactly one transmission line (resp. facility) to each edge (resp. node). In addition, allocating transmission lines and facilities to the computer network involves an allocation cost, where the cost of allocating a transmission line depends on its length. For solving the addressed problem, a genetic algorithm-based method is proposed, in which system reliability is evaluated in terms of minimal paths and state-space decomposition. Several experimental results demonstrate that the proposed algorithm can be executed in a reasonable time and has better computational efficiency than several popular soft computing algorithms. | System reliability maximization for a computer network by finding the optimal two-class allocation subject to budget
S1568494615004780 | In this paper, a novel multi-objective mathematical model is developed to solve a capacitated single-allocation hub location problem with a supply chain overview. Three mathematical models with various objective functions are developed. The objective functions are to minimize: (a) total transportation and installation costs, (b) a weighted sum of the service times in the hubs to produce and transfer commodities and the tardiness and earliness times of the flows, including raw materials and finished goods, and (c) the total greenhouse gas emitted by transportation modes and plants located in the hubs. To come closer to reality, some of the parameters of the proposed mathematical model are regarded as uncertain parameters, and a robust approach is used to solve the given problem. Furthermore, two methods, namely fuzzy multi-objective goal programming (FMOGP) and the Torabi and Hassini (TH) method, are used to solve the multi-objective mathematical model. Finally, the concluding part presents a comparison of the obtained results. | Robust and fuzzy goal programming optimization approaches for a novel multi-objective hub location-allocation problem: A supply chain overview
S1568494615004792 | In this study, two approaches are presented to detect short-circuit faults in power transmission lines. The two proposed methods are completely novel from both theoretical and technical aspects. The first approach is a soft computing method that uses the discrete wavelet transform with Daubechies mother wavelets db1, db2, db3, and db4. The second approach is a hardware-based method that utilizes a novel two-stage finite impulse response filter with a sampling frequency of 32 kHz and a very short processing time of about three sample periods. The two approaches are analyzed by presenting theoretical results. Simulation results obtained from a three-phase 230 kV, 50 Hz power transmission line are given that validate the theoretical results and explicitly verify that the filter-based approach has an accuracy of 100% in the presence of 10% disturbance, while the accuracy of the wavelet transform based approach is at most 97%, although it has lower complication and implementation cost. Another comparative study between this work and other works shows that the two proposed methods have higher accuracy and much shorter processing times compared to the other methods, especially in the presence of the 10% disturbances that actually occur in power transmission lines. | Two novel proposed discrete wavelet transform and filter based approaches for short-circuit faults detection in power transmission lines
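The wavelet branch of the first approach can be sketched with PyWavelets: a db mother wavelet decomposes the phase current, and a short-circuit shows up as a burst of energy in the detail coefficients. The synthetic signal and the energy-window logic are illustrative assumptions, not the paper's calibrated detector.

```python
import numpy as np
import pywt

fs = 32_000                                   # sampling frequency used in the paper
t = np.arange(0, 0.1, 1 / fs)
current = np.sin(2 * np.pi * 50 * t)          # healthy 50 Hz phase current
current[t > 0.05] *= 8                        # crude stand-in for a short-circuit

# multi-level DWT with a Daubechies mother wavelet (db1..db4 in the paper)
coeffs = pywt.wavedec(current, "db4", level=4)
d1 = coeffs[-1]                               # finest detail band

# a fault produces a burst of energy in the detail coefficients; the
# windowed-energy rule below is illustrative, not the paper's threshold
window = len(d1) // 10
energy = [float(np.sum(d1[i:i + window] ** 2))
          for i in range(0, len(d1), window)]
print("suspected fault window:", int(np.argmax(energy)))
```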
S1568494615004809 | In this paper, the architecture of feedforward kernel neural networks (FKNN) is proposed, which encompasses a considerably large family of existing feedforward neural networks and hence can meet most practical requirements. In contrast to the common understanding of learning, it is revealed that when the number of hidden nodes in every hidden layer and the type of kernel-based activation function are fixed in advance, a special kernel principal component analysis (KPCA) is always implicitly executed. Consequently, the hidden layers of such networks need not be tuned: their parameters can be assigned randomly and may even be independent of the training data. The least learning machine (LLM) is therefore extended into a generalized version that admits a much wider range of error functions than the mean squared error (MSE) alone. As an additional merit, it is shown that the rigorous Mercer kernel condition is not required in FKNN networks. When the proposed FKNN architecture is constructed layer by layer, i.e., the number of hidden nodes in every hidden layer is determined by the principal components extracted through an explicit KPCA, a deep FKNN architecture can be developed whose deep learning framework (DLF) has strong theoretical guarantees. Our experimental results on image classification show that the proposed deep FKNN architecture and its DLF-based learning indeed enhance classification performance. | Feedforward kernel neural networks, generalized least learning machine, and its deep learning with application to image classification
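
To make the "untuned hidden layer plus learned readout" idea concrete, here is a small sketch under invented data: a randomly assigned Gaussian-kernel hidden layer whose centers are data-independent, followed by a regularized least-squares readout (the MSE special case of the generalized LLM).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.hstack([np.zeros(50), np.ones(50)])

# Randomly assigned hidden layer with a kernel-style (Gaussian) activation,
# left untrained, mirroring the claim that FKNN hidden layers need no tuning.
n_hidden = 40
centers = rng.normal(size=(n_hidden, 2))     # random, data-independent
gamma = 1.0
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
H = np.exp(-gamma * d2)                      # hidden-layer output

# Only the output weights are learned, here by regularized least squares
# (MSE; the generalized LLM in the paper admits other error functions).
lam = 1e-3
W = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

pred = (H @ W > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```
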
S1568494615004937 | Due to technological improvements, the number and volume of datasets are increasing considerably, raising demands on memory and computational resources. To handle massive datasets efficiently, feature selection, data reduction, rule-based and exemplar-based methods have been introduced. This study presents a method, called joint generalized exemplar (JGE), for the classification of massive datasets. The method aims to improve the computational performance of NGE by counteracting the nesting and overlapping of hyper-rectangles: overlapping parts are repeatedly reassessed with the same procedure, and non-overlapping hyper-rectangle sections that fall within the same class are joined. This yields adaptive decision boundaries and permits batch searching of the data instead of incremental searching. Classification is then performed according to the distance between each query and the generalized exemplars. The accuracy and time requirements of JGE, NGE and other popular machine learning methods were compared on synthetic datasets and a benchmark dataset, and the results achieved by JGE were found acceptable. | A joint generalized exemplar method for classification of massive datasets
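
Below is a sketch of the exemplar-distance classification step, with two hand-made hyper-rectangles; the joining and overlap-reassessment machinery of JGE is not reproduced.

```python
import numpy as np

# A generalized exemplar as an axis-aligned hyper-rectangle with a class label.
# These rectangles are illustrative; JGE builds them by joining non-overlapping
# same-class regions, which is not shown here.
rectangles = [
    {"low": np.array([0.0, 0.0]), "high": np.array([2.0, 2.0]), "label": "A"},
    {"low": np.array([3.0, 1.0]), "high": np.array([5.0, 4.0]), "label": "B"},
]

def distance_to_rectangle(q, rect):
    # Per-dimension distance is zero inside the interval, otherwise the gap
    # to the nearest face, i.e. the usual NGE-style exemplar distance.
    gap = np.maximum(rect["low"] - q, 0) + np.maximum(q - rect["high"], 0)
    return np.linalg.norm(gap)

def classify(q):
    return min(rectangles, key=lambda r: distance_to_rectangle(q, r))["label"]

print(classify(np.array([1.0, 1.5])))   # inside the first rectangle -> 'A'
print(classify(np.array([4.5, 5.0])))   # nearest to the second -> 'B'
```
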
S1568494615004949 | Recently, people have been able to connect to different types of networks anytime, anywhere using advanced network technologies. In order to properly distribute wireless network resources among different clients, this work proposes a user mobility prediction algorithm, which takes into consideration the coverage of different kinds of base stations and the volatile mobility of pedestrians, vehicles, and mass transportation. In addition, a novel bandwidth utilization optimization technique is proposed within the algorithm to allocate bandwidth more efficiently. A Hybrid Genetic Algorithm, which combines the Genetic Algorithm with local searches to improve the frequency of finding a Pareto set, is used to solve the optimization problem. Compared with our previous work and four other methods from the literature, simulation results show that the proposed scheme achieves desirable performance in terms of network utilization, throughput, and QoS in heterogeneous wireless networks. | A self-adaptive joint bandwidth allocation scheme for heterogeneous wireless networks
S1568494615004950 | The success of an artificial neural network (ANN) strongly depends on the choice of the connection weights and the network structure. Among the many methods in the literature that select either the network weights or the structure in isolation, only a few researchers have attempted to select both automatically by using metaheuristic algorithms. This paper proposes a modified bat algorithm with a new solution representation for optimizing both the weights and the structure of ANNs. The algorithm, which is based on the echolocation behaviour of bats, combines the advantages of population-based and local search algorithms. In this work, the abilities of the basic bat algorithm and several modified versions are investigated; the modifications incorporate the personal best solution into the velocity adjustment, use the mean of the personal best and global best solutions in the velocity adjustment, and employ three chaotic maps. These modifications aim to improve the exploration and exploitation capability of the bat algorithm. The different versions of the proposed bat algorithm are incorporated to handle the selection of the structure as well as the weights and biases of the ANN during the training process. We then use the Taguchi method to tune the parameters of the version that demonstrates the best ability. Six classification and two time-series benchmark datasets are used to test the performance of the proposed approach in terms of classification and prediction accuracy. Statistical tests demonstrate that the proposed method generates some of the best results in comparison with the latest methods in the literature. Finally, our best method is applied to a real-world problem, namely predicting future values of rainfall data, and the results show that the method is satisfactory. | Optimization of neural network model using modified bat-inspired algorithm
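
A compact sketch of one of the studied velocity modifications (pulling toward the mean of personal and global bests) follows, applied to a stand-in sphere objective rather than ANN weight/structure training; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):               # stand-in objective; the paper minimizes ANN error
    return float(np.sum(x ** 2))

n, dim, iters = 20, 5, 200
fmin_, fmax_ = 0.0, 2.0
A, r = 0.9, 0.1                       # loudness and pulse emission rate
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()                      # personal bests (the studied modification)
pbest_f = np.array([sphere(xi) for xi in x])
g = pbest[np.argmin(pbest_f)].copy()  # global best

for _ in range(iters):
    for i in range(n):
        freq = fmin_ + (fmax_ - fmin_) * rng.random()
        # Modified velocity rule: pull toward the mean of personal and global
        # best rather than the global best alone (one studied variant).
        v[i] += (x[i] - 0.5 * (pbest[i] + g)) * freq
        cand = x[i] + v[i]
        if rng.random() > r:                      # local random walk around g
            cand = g + 0.01 * A * rng.standard_normal(dim)
        f = sphere(cand)
        if f < pbest_f[i] and rng.random() < A:   # accept with loudness prob.
            x[i], pbest_f[i], pbest[i] = cand, f, cand.copy()
            if f < sphere(g):
                g = cand.copy()
print("best value:", sphere(g))
```
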
S1568494615004962 | The capacitated arc routing problem (CARP) has garnered much attention recently, owing to its wide range of applications in the real world. This study describes an efficient memetic algorithm for solving CARP. First, the concepts of Rank Number (RankNo) and Rank Count (RankCount) are proposed to shed light on the edge selection rules. Then, the essential backbone of the algorithm, the Rank-based Neighborhood Search (RENS) operator, is introduced. Based on these concepts, methods for the selection and evaluation of edges are designed to make the local search more effective. Two rules, namely the mapping rule (MAR) and the move rule (MOR), are constructed to explain the working of the RENS operator. Finally, the algorithm is tested on seven well-known benchmark sets. The experimental results show that it outperforms two state-of-the-art algorithms. | Rank-based memetic algorithm for capacitated arc routing problems
S1568494615004986 | This paper proposes a new optimization algorithm named ITGO (Invasive Tumor Growth Optimization), based on the principle of invasive tumor growth. Studies of the tumor growth mechanism show that each tumor cell competes for nutrients in its microenvironment in order to grow and proliferate. In the ITGO algorithm, tumor cells are divided into three categories: proliferative cells, quiescent cells and dying cells. Cell movement relies on chemotaxis, random walks, and interactions with cells in other categories. The invasive behavior of proliferative and quiescent cells is simulated by Lévy flights, while dying cells are simulated through interactions with proliferative and quiescent cells. In order to test the effectiveness of the ITGO algorithm, 50 functions from CEC2005, CEC2008 and CEC2010 and a support vector machine (SVM) parameter optimization problem were used to compare ITGO with other well-known heuristic optimization methods. Statistical analysis using the Friedman test and the Wilcoxon signed-rank test with Bonferroni–Holm correction demonstrates that ITGO solves global optimization problems better than the other meta-heuristic algorithms. | ITGO: Invasive tumor growth optimization algorithm
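
The Lévy-flight move of proliferative cells can be sketched with Mantegna's algorithm, the standard way to draw Lévy-stable steps in metaheuristics; the landscape and step scale below are invented, and the quiescent/dying-cell dynamics are omitted.

```python
import numpy as np
from math import gamma as Gamma

rng = np.random.default_rng(2)

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for Levy-stable step lengths.
    sigma_u = (Gamma(1 + beta) * np.sin(np.pi * beta / 2) /
               (Gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def objective(x):                     # illustrative nutrient landscape
    return float(np.sum(x ** 2))

# Proliferative-cell move: Levy flight toward the best cell found so far.
cells = rng.uniform(-5, 5, (30, 2))
best = cells[np.argmin([objective(c) for c in cells])].copy()
for _ in range(100):
    for i in range(len(cells)):
        trial = cells[i] + 0.1 * levy_step(2) * (best - cells[i])
        if objective(trial) < objective(cells[i]):
            cells[i] = trial
            if objective(trial) < objective(best):
                best = trial.copy()
print("best:", best, objective(best))
```
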
S1568494615004998 | Healthcare is nowadays becoming more important for everybody. Healthcare institutions are paying more attention to patient safety by reducing the frequency of medical errors and trying to provide the best possible facilities to their patients. Clinical processes can be understood as a series of interactions between patients, providers, and technologies. Consequently, medical errors may occur due to the involvement of both human beings and machines. A number of tools exist to prospectively analyze processes in healthcare, which generally require precise numerical data. In general, available or extracted data are neither precise nor sufficient to assess clinical processes up to the desired degree of accuracy, for various practical and economic reasons. Collected data may therefore carry uncertainties, and these uncertainties should be quantified very carefully before further analysis. In this paper, a new fuzzy fault tree approach is presented for patient safety risk modeling in healthcare. This approach applies fault trees, trapezoidal fuzzy numbers, α-cut sets, and weakest-t-norm (Tω) based approximate arithmetic operations to obtain the fuzzy failure probability of the system. The effectiveness of the developed approach is illustrated with two different kinds of problems taken from the healthcare literature. Also, Tanaka et al.'s approach has been used to rank the critical basic events of the considered problems. Computed results have been compared with results obtained from other existing techniques. | Fuzzy fault tree analysis for patient safety risk modeling in healthcare under uncertainty
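
A toy illustration of α-cut arithmetic for fuzzy fault-tree gates on trapezoidal fuzzy probabilities follows; plain interval products stand in for the weakest-t-norm arithmetic (which yields tighter spreads), and the basic events are invented.

```python
import numpy as np

# A trapezoidal fuzzy probability (a, b, c, d); its alpha-cut is the interval
# [a + alpha*(b - a), d - alpha*(d - c)].
def alpha_cut(trap, alpha):
    a, b, c, d = trap
    return a + alpha * (b - a), d - alpha * (d - c)

def and_gate(cuts):
    # Fault-tree AND: product of basic-event probabilities, done here with
    # interval arithmetic on alpha-cuts.
    return (np.prod([c[0] for c in cuts]), np.prod([c[1] for c in cuts]))

def or_gate(cuts):
    # Fault-tree OR: 1 - prod(1 - p_i).
    lo = 1 - np.prod([1 - c[0] for c in cuts])
    hi = 1 - np.prod([1 - c[1] for c in cuts])
    return lo, hi

# Illustrative basic events, e.g. "wrong dose" and "wrong patient record".
e1 = (0.01, 0.02, 0.03, 0.04)
e2 = (0.02, 0.03, 0.04, 0.05)
for alpha in (0.0, 0.5, 1.0):
    cuts = [alpha_cut(e1, alpha), alpha_cut(e2, alpha)]
    print(f"alpha={alpha}: top event (OR) in {or_gate(cuts)}")
```
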
S1568494615005001 | This paper presents a residential hybrid thermal/electrical grid-connected home energy system (HES), including a fuel cell with combined heat and power and a battery-based energy storage system. The minimum operation cost of this integrated energy system is achieved by proper scheduling of the different energy resources, found by applying a new powerful optimization algorithm, the Hyper-Spherical Search (HSS) algorithm, to the system's scheduling problem. This is the first time that HSS has been applied to the energy resource dispatch problem; previously it had been tested only on mathematical benchmark problems. The optimization procedure generates an efficient look-up table in which the powers generated by the different energy resources are determined for all time intervals. The effect of different electricity tariffs for purchasing electricity from the main grid on the operation costs of the system is investigated. Moreover, a battery is properly dispatched in the energy system to decrease the operation costs. A real load demand is used in the simulation. The results of HSS are compared with those of the harmony search algorithm and, for the first time, the colonial competitive algorithm (CCA), demonstrating the power and effectiveness of HSS in finding the optimal dispatch strategy of energy resources. The results of this paper are expected to contribute to home energy systems and real projects. Nomenclature: fuel cell startup cost ($); fuel cell shutdown cost ($); maximum limit of fuel cell generated power (kW); minimum limit of fuel cell generated power (kW); upper limit of ramp rate of fuel cell (kW); lower limit of ramp rate of fuel cell (kW); length of time interval (h); initial available energy in battery (kWh); available energy in battery (kWh); charging efficiency of battery; discharging efficiency of battery; maximum level of stored energy in battery (kWh); minimum level of stored energy in battery (kWh); maximum electricity purchasing cost ($/kWh); cost of purchasing natural gas per kWh ($/kWh); operation costs of battery per kWh ($/kWh); cost of generated power by fuel cell ($/day); cost of electricity purchased from utility ($/day); cost of operation of battery ($/day); cost of purchasing gas ($/day); cost of electricity purchased from utility in the peak period ($/kWh); cost of electricity purchased from utility in the off-peak period ($/kWh); maximum discharging rate of the battery (kW); maximum charging rate of the battery (kW); heat power produced by fuel cell (kW); electrical power produced by fuel cell (kW); fuel cell efficiency; heat power directly produced from gas (kW); normalized price of electricity tariff; electricity power provided by utility (kW); generated/absorbed power by battery (kW); electrical demand (kW); thermal demand (kW); part load ratio; heat-to-electrical-power ratio of fuel cell; interval index i (appears as a variable subscript, indicating the value of that variable in the i-th interval). | Application of Hyper-Spherical Search algorithm for optimal energy resources dispatch in residential microgrids
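
As a sketch of the objective such a scheduler minimizes, the snippet below evaluates the daily operation cost of a candidate fuel-cell/battery schedule; the tariffs, demand profile, and prices are invented, and constraints such as ramp rates and storage limits are omitted.

```python
import numpy as np

# Toy one-day dispatch cost evaluation, i.e. the objective a scheduler such
# as HSS would minimize. All numbers are illustrative, not the paper's data.
T = 24
load = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, T))   # electrical demand (kW)
peak = (np.arange(T) >= 8) & (np.arange(T) < 20)
tariff = np.where(peak, 0.20, 0.08)                       # $/kWh purchase price
gas_price, battery_cost = 0.05, 0.01                      # $/kWh

def operation_cost(p_fc, p_batt):
    """Cost of a candidate schedule: p_fc = fuel-cell output,
    p_batt = battery discharge (+) / charge (-), both length-T arrays (kW)."""
    p_grid = np.maximum(load - p_fc - p_batt, 0)          # balance from utility
    fuel = gas_price * p_fc.sum()                         # gas for the fuel cell
    grid = (tariff * p_grid).sum()
    batt = battery_cost * np.abs(p_batt).sum()            # cycling wear cost
    return fuel + grid + batt

# Naive baseline: flat fuel-cell output, battery charging off-peak and
# discharging on-peak.
p_fc = np.full(T, 0.8)
p_batt = np.where(peak, 0.3, -0.3)
print(f"candidate schedule cost: ${operation_cost(p_fc, p_batt):.2f}/day")
```
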
S1568494615005013 | In this paper we propose a system that involves a background subtraction (BS) model implemented in a neural self-organizing map with a fuzzy automatic threshold update, which is robust to illumination changes and slight shadow problems. The system incorporates a scene analysis scheme to automatically update the learning-rate values of the BS model according to three possible scene situations. In order to improve the identification of dynamic objects, an optical flow algorithm analyzes the dynamic regions detected by the BS model, whose identification may be incomplete because of camouflage, and defines the complete object based on similar velocities and direction probabilities. These regions are then used as input to a Matte algorithm that improves the definition of the dynamic object by minimizing a cost function. The original contributions of this work include an adaptive fuzzy-neural segmentation model whose thresholds and learning rates are adjusted automatically according to changes in the video sequence, and the automatic improvement of segmentation results based on the Matte algorithm and optical flow analysis. Experiments on the BMC and Li databases demonstrate that the proposed system performs competitively with state-of-the-art models. | Fuzzy-neural self-adapting background modeling with automatic motion analysis for dynamic object detection
S1568494615005025 | This paper proposes a new encryption scheme for color images based on Deoxyribonucleic acid (DNA) sequence operations and multiple improved one-dimensional (1D) chaotic systems with excellent performance. Firstly, key streams are generated from three improved 1D chaotic systems using the secret keys and the plain-image, and the key streams and the plain-image are randomly transformed into DNA matrices by the DNA encoding rules. Secondly, DNA complementary and XOR operations are performed on the DNA matrices to obtain the scrambled DNA matrices. Thirdly, the scrambled DNA matrices are decomposed equally into blocks, and these blocks are shuffled randomly. Finally, DNA XOR and addition operations are applied to the DNA matrices obtained in the previous step together with the key streams, and the encrypted DNA matrices are converted into the cipher-image by the DNA decoding rules. Experimental results and security analysis show that the proposed encryption scheme has a good encryption effect and high security. Moreover, it is strongly robust against common image-processing operations and geometric attacks. | A new color image encryption scheme based on DNA sequences and multiple improved 1D chaotic maps
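
A toy sketch of two ingredients of such schemes follows: a keystream from the classic logistic map (standing in for the paper's improved 1D maps) and one fixed DNA encoding rule applied after a XOR diffusion step; the block decomposition, shuffling, and per-element rule selection are omitted.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Keystream from the logistic map x <- r*x*(1-x); a classic stand-in
    for the paper's improved 1D chaotic systems."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return (xs * 256).astype(np.uint8)

BASES = np.array(list("ACGT"))

def dna_encode(bytes_arr):
    # Split each byte into four 2-bit symbols and map to bases (one fixed
    # encoding rule; the scheme picks rules pseudo-randomly per element).
    bits = np.unpackbits(bytes_arr.reshape(-1, 1), axis=1).reshape(-1, 4, 2)
    symbols = bits[:, :, 0] * 2 + bits[:, :, 1]
    return BASES[symbols]

# Diffusion step: XOR pixel bytes with the chaotic keystream, then encode.
pixels = np.array([12, 200, 57, 3], dtype=np.uint8)      # toy "image"
key = logistic_keystream(x0=0.3456, r=3.99, n=pixels.size)
cipher = pixels ^ key
print(dna_encode(cipher))
```
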
S1568494615005037 | This paper introduces a novel memetic algorithm, the Fractional Particle Swarm Optimization-based Memetic Algorithm (FPSOMA), which solves optimization problems using the fractional calculus (FC) concept. FC offers a means of interpreting the progression of the algorithm by controlling its convergence. FPSOMA accomplishes global search over the whole search space through PSO, whereas local search is performed by a PSO with fractional-order velocity that alters the memory of the particles' best locations. To assess the performance of the proposed algorithm, an empirical comparison study is first presented on different test functions adopted from the literature. The comparisons demonstrate that FPSOMA outperforms other related algorithms. Subsequently, experiments are conducted to obtain the optimal gains of a Fractional Order Proportional-Integral-Derivative (FOPID) controller in a trajectory tracking problem. The results verify the efficiency of the proposed algorithm. | A memetic algorithm applied to trajectory control by tuning of Fractional Order Proportional-Integral-Derivative controllers
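
The fractional-order velocity can be sketched with a truncated Grünwald-Letnikov expansion, in which a weighted sum of past velocities replaces the inertia term of standard PSO; the objective, parameter values, and memory length below are illustrative, and the memetic global/local split is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def fractional_coeffs(alpha, k):
    """First k Grunwald-Letnikov coefficients, used to weight past
    velocities (the 'fractional memory')."""
    c = [alpha]
    for i in range(1, k):
        c.append(c[-1] * (i - alpha) / (i + 1))
    return np.array(c)

def objective(x):
    return float(np.sum(x ** 2))

n, dim, iters, memory = 20, 4, 150, 4
alpha, c1, c2 = 0.6, 1.5, 1.5             # alpha is the fractional order
coeff = fractional_coeffs(alpha, memory)
x = rng.uniform(-5, 5, (n, dim))
v_hist = [np.zeros((n, dim)) for _ in range(memory)]   # newest first
pbest, pbest_f = x.copy(), np.array([objective(xi) for xi in x])
g = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    # Fractional-order velocity: weighted sum over past velocities replaces
    # the single inertia term of standard PSO.
    v = sum(c * vh for c, vh in zip(coeff, v_hist))
    v += c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    v_hist = [v] + v_hist[:-1]
    f = np.array([objective(xi) for xi in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[np.argmin(pbest_f)].copy()
print("best value:", pbest_f.min())
```
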
S1568494615005049 | Complexity and complex systems are all around us: from molecular and cellular systems in biology up to economics and human societies. There is an urgent need for methods that can capture the multi-scale spatio-temporal characteristics of complex systems. Recent emphasis has centered on two methods in particular, those being complex networks and agent-based models. In this paper we look at the combination of these two methods and identify "Complex Agent Networks" as a new emerging computational paradigm for complex system modeling. We argue that complex agent networks are able to capture both individual-level dynamics as well as global-level properties of a complex system, and as such may help to obtain a better understanding of the fundamentals of such systems. | Complex agent networks: An emerging approach for modeling complex systems
S1568494615005050 | In this paper, two novel negative selection algorithms (NSAs) are proposed: FB-NSA and FFB-NSA. FB-NSA has two variants according to detector type: constant-sized detectors (CFB-NSA) and variable-sized detectors (VFB-NSA). The detectors of traditional NSAs are generated randomly: even for the same training samples, the position, size, and quantity of the detectors generated each time differ. To eliminate the effect of repeated training runs on the detectors, the proposed approaches generate detectors in a non-random way. To determine the performance of the approaches, experiments were performed on 2-dimensional synthetic datasets, the Iris dataset, and ball bearing fault data. Results show that FB-NSA and FFB-NSA outperform the other anomaly detection methods in most cases. Moreover, CFB-NSA can detect the degree of abnormality of mechanical equipment; to assess this capability, experiments on ball bearing fault data were performed. Results show that the abnormality degree computed by CFB-NSA can be used to distinguish different fault types of the same severity, and the same fault type at different severities. | Negative selection algorithm with constant detectors for anomaly detection
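
Here is a minimal sketch of a real-valued NSA with constant-sized detectors placed on a deterministic grid, one possible non-random generation scheme (the paper's actual construction may differ), on invented 2-D "self" data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Self samples: normal-condition feature vectors in [0, 1]^2
# (e.g. normalized bearing vibration features).
self_set = rng.uniform(0.35, 0.65, (100, 2))
SELF_R, DET_R = 0.05, 0.08         # self radius, constant detector radius

# Grid-based (non-random) detector placement, echoing the point that
# deterministic generation removes run-to-run variability.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                            np.linspace(0, 1, 20)), -1).reshape(-1, 2)

def covers_self(d):
    return (np.linalg.norm(self_set - d, axis=1) < SELF_R + DET_R).any()

detectors = np.array([d for d in grid if not covers_self(d)])

def is_anomalous(sample):
    return (np.linalg.norm(detectors - sample, axis=1) < DET_R).any()

print(is_anomalous(np.array([0.5, 0.5])))   # inside self region -> False
print(is_anomalous(np.array([0.9, 0.1])))   # far from self -> True
```
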
S1568494615005062 | Current research is constantly producing an enormous amount of information, which presents a challenge for data mining algorithms. Many of the problems in some of the most relevant research areas, such as bioinformatics, security and intrusion detection or text mining, involve large or huge datasets. Data mining algorithms are seriously challenged by these datasets. One of the most common methods to handle large datasets is data reduction. Among others, feature and instance selection are arguably the most commonly used methods for data reduction. Conversely, feature and instance weighting focus on improving the performance of the data mining task. Due to the different aims of these four methods, instance and feature selection and weighting, they can be combined to improve the performance of the data mining methods used. In this paper, a general framework for combining these four tasks is presented, and a comprehensive study of the usefulness of the 15 possible combinations is performed. Using a large set of 80 problems, a study of the behavior of all possible combinations in classification performance, data reduction and execution time is carried out. These factors are also studied using 60 class-imbalanced datasets. | Simultaneous instance and feature selection and weighting using evolutionary computation: Proposal and study |
S1568494615005074 | Processing lineages (also called provenances) over uncertain data consists in tracing the origin of uncertainty based on the process of data production and evolution. In this paper, we focus on the representation and processing of lineages over uncertain data, where we adopt Bayesian network (BN), one of the popular and important probabilistic graphical models (PGMs), as the framework of uncertainty representation and inferences. Starting from the lineage expressed as Boolean formulae for SPJ (Selection–Projection–Join) queries over uncertain data, we propose a method to transform the lineage expression into directed acyclic graphs (DAGs) equivalently. Specifically, we discuss the corresponding probabilistic semantics and properties to guarantee that the graphical model can support effective probabilistic inferences in lineage processing theoretically. Then, we propose the function-based method to compute the conditional probability table (CPT) for each node in the DAG. The BN for representing lineage expressions over uncertain data, called lineage BN and abbreviated as LBN, can be constructed while generally suitable for both safe and unsafe query plans. Therefore, we give the variable-elimination-based algorithm for LBN's exact inferences to obtain the probabilities of query results, called LBN-based query processing. Then, we focus on obtaining the probabilities of inputs or intermediate tuples conditioned on query results, called LBN-based inference query processing, and give the Gibbs-sampling-based algorithm for LBN's approximate inferences. Experimental results show the efficiency and effectiveness of our methods. | Representing and processing lineages over uncertain data based on the Bayesian network |
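
A small sketch of the function-based CPT idea for lineage nodes follows, with brute-force enumeration as a stand-in for variable elimination; the lineage t = (t1 AND t2) OR t3 and the tuple probabilities are invented.

```python
import itertools

def or_cpt(n_parents):
    """Deterministic CPT for a lineage node that is the OR of its parents
    (e.g. a projection result derivable from several input tuples).
    Returns {parent assignment: P(node = True)}."""
    return {bits: float(any(bits))
            for bits in itertools.product([False, True], repeat=n_parents)}

def and_cpt(n_parents):
    """CPT for an AND node (e.g. a join result needing both input tuples)."""
    return {bits: float(all(bits))
            for bits in itertools.product([False, True], repeat=n_parents)}

print(and_cpt(2))    # the function-based CPT of the join node

# Lineage t = (t1 AND t2) OR t3; root probabilities come from the
# uncertain input tuples.
p = {"t1": 0.9, "t2": 0.8, "t3": 0.3}

# Exact inference by enumeration, a tiny stand-in for variable elimination.
prob_true = 0.0
for a1, a2, a3 in itertools.product([False, True], repeat=3):
    weight = ((p["t1"] if a1 else 1 - p["t1"]) *
              (p["t2"] if a2 else 1 - p["t2"]) *
              (p["t3"] if a3 else 1 - p["t3"]))
    if (a1 and a2) or a3:
        prob_true += weight
print("P(result tuple exists) =", prob_true)   # 0.72 + 0.3 - 0.72*0.3
```
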
S1568494615005086 | Ant-tree is a clustering algorithm inspired by biological ants. This paper defines a variant of such algorithm to perform colour quantization. Some of the features of the basic Ant-tree have been adapted to obtain a quicker algorithm and to perform the main steps of colour quantization on a big input set. The centroid of every cluster defines a colour of the palette and once the complete colour palette is established, the algorithm represents each pixel of the original image by the colour associated to its cluster. Computational results show that the error obtained for the quantized images is smaller than the error generated by some other well-known quantization methods. | Colour quantization with Ant-tree |
S1568494615005098 | Deterministic approaches to simultaneously solving different interrelated optimisation problems lead to a general class of nonlinear complementarity problem (NCP). Due to the differentiability and convexity requirements of such problems, sophisticated algorithms have been introduced in the literature. This paper develops an evolutionary algorithm to solve NCPs. The proposed approach is a parallel search in which multiple populations representing different agents evolve simultaneously whilst in contact with each other. In this context, each agent autonomously solves its optimisation programme while sharing its decisions with the neighbouring agents and, hence, affecting their actions. The framework is applied to an environmental and an aerospace application, where the obtained results are compared with those found in the literature. The convergence and scalability of the approach are tested and the performance of its search algorithm is analysed. The results encourage the application of such evolutionary algorithms to complementarity problems, and future work should investigate their further development as well as performance improvements. | An evolutionary approach to solve a system of multiple interrelated agent problems
S1568494615005104 | When multiple algorithms are applied to multiple benchmarks, as is common in evolutionary computation, a typical issue arises: how can we rank the algorithms? It is common practice in evolutionary computation to execute the algorithms several times and then calculate the mean value and the standard deviation. To compare the algorithms' performance, statistical hypothesis tests are commonly used. In this paper, we propose a novel alternative method based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to support such performance comparisons. In this setting, the alternatives are the algorithms and the criteria are the benchmarks. Since standard TOPSIS cannot handle the stochastic nature of evolutionary algorithms, we apply Hellinger-TOPSIS, which uses the Hellinger distance, for algorithm comparisons. Case studies illustrate the method for evolutionary algorithms, but the approach is general. The simulation results show the feasibility of Hellinger-TOPSIS for ranking the algorithms under evaluation. | Ranking and comparing evolutionary algorithms with Hellinger-TOPSIS
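
A sketch of the core computation follows: the closed-form Hellinger distance between Gaussians summarizing repeated runs, plugged into the TOPSIS relative-closeness score; the result matrix is invented, and the weighting/normalization steps of full TOPSIS are omitted.

```python
import numpy as np

def hellinger_normal(m1, s1, m2, s2):
    """Closed-form Hellinger distance between two univariate Gaussians,
    letting (mean, std) of repeated runs represent an algorithm's result."""
    bc = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * \
         np.exp(-(m1 - m2)**2 / (4 * (s1**2 + s2**2)))
    return np.sqrt(max(1 - bc, 0.0))

# Rows: algorithms; columns: benchmarks; entries: (mean error, std) over runs.
results = {
    "alg_A": [(0.10, 0.02), (1.5, 0.3)],
    "alg_B": [(0.12, 0.01), (1.2, 0.2)],
    "alg_C": [(0.30, 0.05), (2.0, 0.4)],
}
names = list(results)
n_bench = len(next(iter(results.values())))

# Ideal / anti-ideal per benchmark: the best (lowest) and worst mean result.
ideal = [min(results[a][j] for a in names) for j in range(n_bench)]
worst = [max(results[a][j] for a in names) for j in range(n_bench)]

def closeness(alg):
    d_pos = sum(hellinger_normal(*results[alg][j], *ideal[j]) for j in range(n_bench))
    d_neg = sum(hellinger_normal(*results[alg][j], *worst[j]) for j in range(n_bench))
    return d_neg / (d_pos + d_neg)   # TOPSIS relative closeness

for a in sorted(names, key=closeness, reverse=True):
    print(a, round(closeness(a), 3))
```
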