Columns: FileName (string, lengths 17 to 17), Abstract (string, lengths 163 to 6.01k), Title (string, lengths 12 to 421)
S156849461630076X
Technology credit scoring models have been used to screen loan applicant firms based on their technology. Typically, a logistic regression model is employed to relate the probability of loan default of the firms to several evaluation attributes associated with technology. However, these attributes are evaluated in linguistic expressions represented by fuzzy numbers, and the possibility of loan default can be described in verbal terms as well. To handle these fuzzy input and output data, we propose a fuzzy credit scoring model that can be applied to predict the possibility of loan default for a firm approved on the basis of its technology. Fuzzy logistic regression is presented in this study as an appropriate prediction approach for credit scoring with fuzzy input and output. The performance of the model is improved compared to that of typical logistic regression. This study is expected to contribute to the practical utilization of technology credit scoring with linguistic evaluation attributes.
Technology credit scoring model with fuzzy logistic regression
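A minimal illustrative sketch of the kind of pipeline this abstract describes, not the authors' actual model: linguistic ratings are encoded as hypothetical triangular fuzzy numbers, defuzzified by their centroids, and fed to an ordinary logistic regression as a stand-in for the fuzzy logistic regression step (Python, assuming numpy and scikit-learn are available; all firm data below are made up).

# Sketch only: triangular fuzzy ratings (l, m, r) -> centroid defuzzification -> logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

def centroid(tri):
    l, m, r = tri
    return (l + m + r) / 3.0  # centroid of a triangular fuzzy number

# Hypothetical linguistic evaluations of two technology attributes per firm.
fuzzy_ratings = [
    [(0.6, 0.7, 0.8), (0.4, 0.5, 0.6)],   # firm 1
    [(0.1, 0.2, 0.3), (0.2, 0.3, 0.4)],   # firm 2
    [(0.7, 0.8, 0.9), (0.6, 0.7, 0.8)],   # firm 3
    [(0.2, 0.3, 0.4), (0.1, 0.2, 0.3)],   # firm 4
]
defaulted = np.array([0, 1, 0, 1])        # 1 = loan default

X = np.array([[centroid(a) for a in firm] for firm in fuzzy_ratings])
model = LogisticRegression().fit(X, defaulted)
print(model.predict_proba(X)[:, 1])       # crisp default possibilities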
S1568494616300771
This paper presents techniques for GPS-based autonomous navigation of heavy vehicles at high speed. The control system has two main functions: vehicle position estimation and generation of the steering commands for the vehicle to follow a given path autonomously. Position estimation is based on fusion of measurements from a carrier-phase differential GPS system and odometric sensors using fuzzy logic. A Takagi–Sugeno fuzzy controller is used for steering command generation, to cope with different road geometries and vehicle velocities. The presented system has been implemented in a 13-ton truck and fully tested in very demanding conditions, i.e., high velocity and large curvature variations on paved and unpaved roads.
High-speed autonomous navigation system for heavy vehicles
S1568494616300783
The multidimensional knapsack problem (MKP) is a well-known NP-hard optimization problem. Various meta-heuristic methods have been dedicated to solving this problem in the literature. Recently a new meta-heuristic algorithm, called the artificial algae algorithm (AAA), was presented, which has been successfully applied to various continuous optimization problems. However, due to its continuous nature, AAA cannot directly tackle a discrete problem such as the MKP. In view of this, this paper proposes a binary artificial algae algorithm (BAAA) to efficiently solve the MKP. The algorithm is composed of a discretization process, repair operators, and an elite local search. In the discretization process, two logistic functions with different curve coefficients are studied to achieve good discretization results. Repair operators are applied to make solutions feasible and increase efficiency. Finally, the elite local search is introduced to improve the quality of solutions. To demonstrate the efficiency of the proposed algorithm, simulations and evaluations are carried out on a total of 94 benchmark problems and compared with other bio-inspired state-of-the-art algorithms of recent years, including MBPSO, BPSOTVAC, CBPSOTVAC, GADS, bAFSA, and IbAFSA. The results show the superiority of BAAA over many of the compared algorithms.
Binary artificial algae algorithm for multidimensional knapsack problems
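The following toy sketch illustrates only two of the ingredients named in the abstract, a logistic (sigmoid) transfer function for discretization and a greedy repair operator, on a tiny made-up MKP instance; it is not the full BAAA, and the particular repair rule is an assumption.

# Illustrative sketch: logistic transfer binarization plus greedy repair for a small MKP.
import numpy as np

rng = np.random.default_rng(0)
profits  = np.array([10, 7, 12, 4, 9])
weights  = np.array([[3, 2, 4, 1, 3],      # one row per resource constraint
                     [2, 4, 1, 3, 2]])
capacity = np.array([7, 6])

def binarize(position, k=1.0):
    prob = 1.0 / (1.0 + np.exp(-k * position))   # logistic transfer with slope k
    return (rng.random(position.size) < prob).astype(int)

def repair(x):
    x = x.copy()
    while np.any(weights @ x > capacity):         # infeasible: drop the worst loaded item
        ratio = profits / weights.sum(axis=0)
        loaded = np.flatnonzero(x)
        x[loaded[np.argmin(ratio[loaded])]] = 0
    return x

position = rng.normal(size=profits.size)          # a continuous candidate position
solution = repair(binarize(position))
print(solution, profits @ solution)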
S1568494616300795
Nowadays, the development of classification algorithms makes it possible to improve Automatic Modulation Recognition (AMR) effectively. This paper presents a novel modulation recognition algorithm based on a clustering approach. In general, we aim to distinguish the multicarrier modulation OFDM from single-carrier modulations. In this regard, two statistics of the amplitude of the received signal, calculated at the output of a quadrature mixer, serve as key features. The extracted features of the training data points are submitted to the clustering algorithm, and centroids for single-carrier and multicarrier modulations are estimated. Afterwards, each point of the testing dataset is assigned to its nearest centroid based on Euclidean distance, and the recognition is accomplished. Simulation results demonstrate that the algorithm is effective over a wide range of SNRs, from low to high.
Adaptive modulation recognition based on the evolutionary algorithms
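A rough sketch of the recognition pipeline under stated assumptions: the paper's exact amplitude statistics are not reproduced here, so two plausible amplitude features are computed on synthetic OFDM and noisy QPSK signals, clustered with k-means, and test signals are assigned to the nearest centroid.

# Sketch: amplitude features -> k-means centroids -> nearest-centroid recognition.
import numpy as np
from scipy.stats import kurtosis
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

def features(signal):
    amp = np.abs(signal)
    return [np.var(amp) / np.mean(amp) ** 2, kurtosis(amp)]  # assumed key features

def qpsk_syms(shape):
    return (2 * rng.integers(0, 2, shape) - 1) + 1j * (2 * rng.integers(0, 2, shape) - 1)

def ofdm(n=1024, carriers=64):                    # multicarrier: IFFT of QPSK subcarriers
    return np.fft.ifft(qpsk_syms((n // carriers, carriers)), axis=1).ravel()

def qpsk(n=1024):                                 # single-carrier QPSK with light noise
    return qpsk_syms(n) + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

train = np.array([features(ofdm()) for _ in range(20)] +
                 [features(qpsk()) for _ in range(20)])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(train)
print(km.predict([features(ofdm()), features(qpsk())]))  # nearest-centroid decision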
S1568494616300801
The European Union has introduced the Emissions Trading System (ETS) as a tool for developing and implementing international treaties related to climate change and for identifying the most cost-effective methods for reducing greenhouse gas emissions, in particular carbon dioxide (CO2), which is the most substantial. Companies producing carbon emissions must effectively manage the associated costs by buying or selling carbon emission futures. Viewed from this perspective, this paper provides a model for managing that risk by buying and selling carbon emission futures using techniques that leverage computational intelligence. Three computational intelligence techniques are proposed to provide accurate and timely forecasts of changes in the price of carbon: a novel hybrid neuro-fuzzy controller that forms a closed-loop feedback mechanism, called PATSOS; an artificial neural network (ANN) based system; and an adaptive neuro-fuzzy inference system (ANFIS). Results are based on 1074 daily carbon price observations collected to form a useful time-series dataset for evaluating the proposed techniques. The out-of-sample performance of the proposed techniques is calculated, and the analysis results are compared with those produced by other models. The comparison studies reveal that PATSOS is the most accurate and promising methodology for predicting the price of carbon. This paper represents a first attempt to apply a hybrid neuro-fuzzy controller to forecasting carbon prices.
Using computational intelligence to forecast carbon prices
S1568494616300837
The present work focuses on the formulation of an optimization algorithm based on the Ant Colony Search (ACS) technique running on a Blackfin DSP microcontroller platform. This intelligent algorithm enables switching of appropriate Shunt Capacitor Banks (SCBs) in installations of bulk consumers of electricity to improve their power factor. The application mainly targets cold storages, rolling mills, traction substations, and steel plants in India, where SCBs have been installed over a long period in an unplanned manner. Unawareness of the impact of energy savings, coupled with the fiscal incapability of these users, makes them unable to install costly power electronic devices for p.f. control. The ACS-based embedded controller is sensitive to unpredictable load changes, and selective capacitor switching yields optimal VAR control with minimum stress to the system. The distributed computational model based on ACS is an ideal tool for this combinatorial optimization problem: selecting the best combination within a bank of varying sizes and varying status, which constitutes a dynamically changing search space. This low-cost device will reduce the penalty billing of consumers by improving the p.f. and the metered reading. Its use will lead to permeation of the technology to the strata that need it most, making energy management possible without any human intervention.
An ant colony system based control of shunt capacitor banks for bulk electricity consumers
S1568494616300849
In this paper we present a modification of a bio-inspired algorithm based on bee behavior (BCO, bee colony optimization) for optimizing fuzzy controllers. BCO is a metaheuristic technique inspired by the behavior of bees in nature, which can be used for solving optimization problems. First, the traditional BCO is tested on the optimization of fuzzy controllers. Second, a modification of the original method is presented that includes fuzzy logic to dynamically change the main parameter values of the algorithm during execution. Third, the proposed modification of the BCO algorithm with the fuzzy approach is used to optimize benchmark control problems. The comparison of results shows that the proposed fuzzy BCO method outperforms the traditional BCO in the optimal design of fuzzy controllers.
Optimization of fuzzy controller design using a new bee colony algorithm with fuzzy dynamic parameter adaptation
S1568494616300850
In previous works, it was verified that the discrete-time microstructure (DTMS) model, which is estimated on a training dataset of a financial time series, may be effectively applied to asset allocation control on the following test data. However, if the test dataset is too long, the prediction capability of the estimated DTMS model may gradually decline due to behavioral changes in the financial market, so that the asset allocation result may become worse on the latter part of the test data. To overcome this drawback, this paper presents a semi-on-line adaptive modeling and trading approach to financial time series based on the DTMS model and a receding-horizon optimization procedure. First, a long identification window is selected, and the dataset on the identification window is used to estimate a DTMS model, which is then used for asset allocation on the following short-term trading interval, referred to as the trading window. After asset allocation is completed on the trading window, the fixed-length identification window is moved to a new window that includes the previous trading window, and a new DTMS model is estimated using the dataset on the new identification window. Next, asset allocation continues on the next trading window, which follows the previous trading window, and the modeling and asset allocation process goes on according to the above steps. In order to enhance the flexibility and adaptability of the DTMS model, a comprehensive parameter optimization method is proposed, which incorporates particle swarm optimization (PSO) with a Kalman filter and the maximum likelihood method for estimating the states and parameters of the DTMS model. Based on the adaptive DTMS model estimated on each identification window, an adaptive asset allocation control strategy is designed to achieve optimal control of financial assets. The parameters of the asset allocation controller are optimized by the PSO algorithm on each identification window. Case studies on the Hang Seng Index (HSI) of the Hong Kong stock exchange and the S&P 500 index show that the proposed adaptive modeling and trading strategy obtains much better asset allocation control performance than the fixed-parameter DTMS model.
An adaptive modeling and asset allocation approach to financial markets based on discrete microstructure model
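A schematic of the receding-horizon loop described above, with a simple AR(1) model standing in for the DTMS model, a toy allocation rule, and a synthetic price series; the window lengths are arbitrary illustrative choices, not the paper's.

# Sketch of the semi-on-line scheme: slide a fixed-length identification window,
# re-estimate a model on it, then act on the following short trading window.
import numpy as np

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(0, 1, 600)) + 100    # synthetic price series

ident_len, trade_len = 250, 20                     # identification / trading windows
start = 0
while start + ident_len + trade_len <= len(prices):
    ident = prices[start:start + ident_len]
    # re-estimate the model on the identification window (AR(1) via least squares)
    x, y = ident[:-1], ident[1:]
    a = np.dot(x - x.mean(), y - y.mean()) / np.dot(x - x.mean(), x - x.mean())
    b = y.mean() - a * x.mean()
    # act on the following short trading window using one-step-ahead predictions
    trade = prices[start + ident_len:start + ident_len + trade_len]
    preds = a * trade[:-1] + b
    position = np.sign(preds - trade[:-1])          # toy allocation decision (illustrative only)
    start += trade_len                              # slide the fixed-length window forward
print("last AR(1) coefficients:", a, b)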
S1568494616300862
The parallelization of heuristic methods allows researchers both to explore the solution space more extensively and to accelerate the search process. Nowadays, there is increasing interest in developing parallel algorithms using standard software components that take advantage of modern microprocessors with several processing cores and local and shared cache memories. The aim of this paper is to show that it is possible to parallelize the algorithms included in computational software using standard software libraries on low-cost multi-core systems, instead of using expensive high-performance systems or supercomputers. In particular, the benefits provided by master-worker and island parallel models, implemented with the MPI and OpenMP software libraries, for parallelizing population-based meta-heuristics are analyzed. The capacitated vehicle routing problem with hard time windows (VRPTW) has been used to evaluate the performance of these parallel strategies. The empirical results for a set of Solomon's benchmarks show that the parallel approaches executed on a multi-core processor outperform the sequential algorithm with respect to both the quality of the solutions obtained and the runtime required to get them. Both the MPI and OpenMP parallel implementations are able to obtain better or at least equal solutions (in terms of distance traveled) than the best known ones for the considered benchmark instances.
Analysis of OpenMP and MPI implementations of meta-heuristics for vehicle routing problems
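A minimal master-worker sketch using Python's multiprocessing rather than the MPI/OpenMP implementations studied in the paper: the master holds a population of candidate routes and the workers evaluate route lengths in parallel; the routing instance is a made-up single-vehicle tour without time windows.

# Master-worker evaluation step for a population-based meta-heuristic (toy instance).
import numpy as np
from multiprocessing import Pool

CITIES = np.random.default_rng(3).random((30, 2))    # toy customer coordinates

def route_length(perm):                              # worker task: evaluate one candidate
    pts = CITIES[list(perm)]
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    population = [tuple(rng.permutation(len(CITIES))) for _ in range(64)]
    with Pool(processes=4) as pool:                  # master distributes, workers evaluate
        fitness = pool.map(route_length, population)
    best = population[int(np.argmin(fitness))]
    print("best length:", min(fitness))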
S1568494616300874
In engineering design, a set of potentially competitive designs is conceived in the early part of the design process. The purpose of this research is to support such a process by investigating an algorithm that enables approximate identification of a set of real-valued inputs that return desired responses from a function or a computer simulation. We explore sequential, or adaptive, sampling methods based on Self-Organizing Maps (SOM). The proposed method does not rely on parameterized distributions and can sample from multi-modal and non-convex distributions. Furthermore, the proposed merit function provides infill characteristics by favoring sampling points that lie farther from existing points. The results indicate that multiple feasible solutions can be efficiently obtained by the new adaptive sampling algorithm. The iterative use of the SOM in density learning to identify feasible or good designs is our new contribution, and it shows a very rapid increase in the ratio of feasible solutions to the total number of function evaluations. Application examples to planing hull designs (such as in powerboats and seaplanes) indicate the merits of the feasible-region approach for observing trends and design rules. Additionally, the well-distributed sampling points of the proposed method have a favorable effect on the prediction performance of a classification problem learned by a Support Vector Machine.
Design space exploration using Self-Organizing Map based adaptive sampling
S1568494616300886
In this paper, a new path planning algorithm for unstructured environments based on a Multiclass Support Vector Machine (MSVM) is presented. Our method uses as its input an aerial image or an unfiltered auto-generated map of the area in which the robot will be moving. Given this, the algorithm is able to generate a graph showing all of the safe paths that a robot can follow. To do so, our algorithm takes advantage of the training stage of a MSVM, making it possible to obtain the set of paths that maximize the distance to the obstacles while minimizing the effect of measurement errors, yielding paths even when the input data are not sufficiently clear. The method also ensures that it is able to find a path, if it exists, and it is fully adaptable to map changes over time. The functionality of these features was assessed using tests, divided into simulated results and real-world tests. For the latter, four different scenarios were evaluated involving 500 tests each. From these tests, we concluded that the method presented is able to perform the tasks for which it was designed.
Path planning using a Multiclass Support Vector Machine
S1568494616300898
In this paper, a new optimization algorithm to solve continuous and non-linear optimization problems is introduced. This algorithm is inspired by the optimal mechanism of viruses when infecting body cells. The special mechanisms and functions of viruses, which include the recognition of the fittest viruses to infect body cells, the reproduction (cloning) of these viruses to prompt the "invasion" of ready-to-infect regions, and then escaping from infected regions (to avoid the immune reaction), form the basis of this evolutionary optimization algorithm. Like many evolutionary algorithms, the Virulence Optimization Algorithm (VOA) starts the optimization process with an initial population consisting of viruses and host cells. The host cell population represents the resources available in the host environment, i.e., the region containing the global optimum solution. The virus population infiltrates the host environment and attempts to infect it. In the optimization procedure, the viruses first reside in regions or clusters of the environment, called virus groups, constituted via K-means clustering. Then they scatter in the host environment through mutation (drifting) and recombination (shifting) operators. Among the virus groups, the group with the highest mean fitness is chosen as the escape destination. Before the escape operation commences, the best viruses in each virus group are recognized and undergo a cloning operation to spread the virulence in the host environment. This procedure continues until the majority of the virus population is gathered in the region containing the maximum resources, i.e., the global optimum solution. The novelty of the proposed algorithm is achieved by simulating three important and major mechanisms of the virus life cycle, namely (1) the reproduction and mutation mechanism, (2) the cloning mechanism that generates the best viruses for rapid and extensive infection of the host environment, and (3) the mechanism of escaping from the infected region. Simulating the first mechanism enables the proposed algorithm to generate new and fitter virus varieties. The cloning mechanism facilitates the extensive spread of the fittest viruses in the host environment to infect it more quickly. Also, to avoid the immune response, the fittest viruses (with a great chance of survival) are duplicated through the cloning process and scattered according to the vicinity region radius of each region. Then, the fittest viruses escape the infected region to reside in a region that possesses the resources necessary to survive (the global optimum). The evaluation of this algorithm on 11 benchmark test functions has proven its capability to deal with complex and difficult optimization problems.
Virulence Optimization Algorithm
S1568494616300904
Many neural network methods such as ML-RBF and BP-MLL have been used for multi-label classification. Recently, the extreme learning machine (ELM) has been used as the basic element for handling multi-label classification because of its fast training time. The extreme learning machine based autoencoder (ELM-AE) is a novel neural network method that can reproduce the input signal like an autoencoder, but it cannot solve the over-fitting problem of neural networks elegantly. By introducing weight uncertainty into ELM-AE, we can treat the input weights as random variables following a Gaussian distribution and propose the weight-uncertainty ELM-AE (WuELM-AE). In this paper, a neural network named multi-layer ELM-RBF for multi-label learning (ML-ELM-RBF) is proposed. It is derived from the radial basis function for multi-label learning (ML-RBF) and WuELM-AE. ML-ELM-RBF first stacks WuELM-AE to create a deep network, and then it conducts clustering analysis on the sample features of each possible class to compose the last hidden layer. ML-ELM-RBF has achieved satisfactory results on single-label and multi-label data sets. Experimental results show that WuELM-AE and ML-ELM-RBF are effective learning algorithms.
Multi layer ELM-RBF for multi-label learning
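A bare-bones ELM autoencoder sketch, without the weight-uncertainty extension (in WuELM-AE the input weights would additionally be treated as Gaussian random variables): random hidden weights, a sigmoid hidden layer, and output weights solved in one step with the Moore-Penrose pseudo-inverse so the network reproduces its input. The data are synthetic.

# Minimal ELM-AE: random hidden layer, closed-form output weights via pseudo-inverse.
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((200, 10))                          # toy data: 200 samples, 10 features
n_hidden = 30

W = rng.normal(size=(X.shape[1], n_hidden))        # random input weights
b = rng.normal(size=n_hidden)                      # random biases
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))             # hidden layer outputs
beta = np.linalg.pinv(H) @ X                       # output weights, learned in a single step

reconstruction = H @ beta
print("reconstruction error:", np.mean((X - reconstruction) ** 2))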
S1568494616300916
This paper investigates the prize-collecting vehicle routing problem (PCVRP), which has a strong background in practical industries. In the PCVRP, the capacities of all available vehicles are not sufficient to satisfy the demands of all customers. Consequently, it is not compulsory that all customers be visited. However, a prize is collected once a customer is visited. In addition, it is required that the total demand of visited customers reach at least a pre-specified value. The objective is to establish a schedule of vehicle routes that minimizes the total transportation cost and at the same time maximizes the prize collected by all vehicles. The total transportation cost consists of the total distance of the vehicle routes and the number of vehicles used in the schedule. To solve the PCVRP, a two-level self-adaptive variable neighborhood search (TLSAVNS) algorithm is developed according to the two levels of decisions in the PCVRP, namely the selection of customers to visit and the visiting sequence of the selected customers in each vehicle route. The proposed TLSAVNS algorithm is self-adaptive because the neighborhoods and their search sequence are determined automatically by the algorithm itself based on an analysis of its search history. In addition, a graph extension method is adopted to obtain a lower bound for the PCVRP by transforming the proposed mixed integer programming model of the PCVRP into an equivalent traveling salesman problem (TSP) model, and the obtained lower bound is used to evaluate the proposed TLSAVNS algorithm. Computational results on benchmark problems show that the proposed TLSAVNS algorithm is efficient for the PCVRP.
A two-level self-adaptive variable neighborhood search algorithm for the prize-collecting vehicle routing problem
S1568494616300928
In this paper, a new method based on a single-layer Legendre Neural Network (LeNN) model is developed to solve initial and boundary value problems. In the proposed approach, a Legendre polynomial based Functional Link Artificial Neural Network (FLANN) is developed. A nonlinear singular initial value problem (IVP), a boundary value problem (BVP), and a system of coupled ordinary differential equations are solved by the proposed approach to show the reliability of the method. The hidden layer is eliminated by expanding the input pattern using Legendre polynomials. An error back-propagation algorithm is used for updating the network parameters (weights). The results obtained are compared with existing methods and are found to be in good agreement.
Application of Legendre Neural Network for solving ordinary differential equations
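A sketch of the Legendre functional-link idea on the simple IVP y' = -y, y(0) = 1: the input is expanded in Legendre polynomials so no hidden layer is needed, and a trial solution enforces the initial condition. For brevity the weights are obtained with a single least-squares solve of the residual equations rather than the paper's iterative back-propagation updates; the collocation grid and polynomial degree are arbitrary choices.

# Functional-link expansion in Legendre polynomials; trial solution y_t(x) = y0 + x*N(x).
import numpy as np
from numpy.polynomial import legendre as L

deg, y0 = 6, 1.0
x = np.linspace(0.0, 1.0, 40)                                   # collocation points
P  = L.legvander(x, deg)                                        # P_k(x) basis values
dP = np.column_stack([L.legval(x, L.legder(np.eye(deg + 1)[k]))
                      for k in range(deg + 1)])                 # P_k'(x) values

# Residual of y_t' + y_t = 0 is linear in the weights w, so it can be driven to zero directly.
A = P + x[:, None] * dP + x[:, None] * P
w, *_ = np.linalg.lstsq(A, -y0 * np.ones_like(x), rcond=None)
y_t = y0 + x * (P @ w)
print("max error vs exp(-x):", np.abs(y_t - np.exp(-x)).max())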
S156849461630093X
In this paper, an exchange market algorithm (EMA) approach is applied to solve the highly non-linear optimal reactive power dispatch (ORPD) problem of power systems. ORPD is one of the most vital optimization problems in power system studies and is usually formulated as an optimal power flow (OPF) problem. The problem is formulated as a nonlinear, non-convex constrained optimization problem with both continuous and discrete control variables. The EMA searches for the optimal solution via two main phases, namely the balanced market and the oscillation market. Each of the phases comprises both exploration and exploitation, which makes the algorithm unique. This uniqueness of EMA is exploited in this paper to address various vital objectives associated with ORPD problems. Programs are developed in MATLAB and tested on the standard IEEE 30-bus and IEEE 118-bus systems. The results obtained using EMA are compared with other contemporary methods in the literature. Simulation results demonstrate the superiority of EMA in terms of its computational efficiency and robustness. The number of function evaluations consumed for each case study is indicated in the convergence plot for clarity. A parametric study is also performed on the different case studies to obtain suitable values of the tunable parameters. Nomenclature: susceptances and conductances of lines between buses i and j; the objective function; load-bus stability indices; numbers of generator (PV) buses, load (PQ) buses, tap-changing transformers, transmission lines, and shunt capacitors; active power loss; active and reactive power outputs, limits, and demands of the slack generator, other generators, and load buses; reactive power outputs and limits of shunt capacitors; apparent power flows and flow limits of transmission lines; shareholder shares, their limits, and variations; tap-changing transformer positions and limits; generator and load-bus voltage magnitudes, limits, and reference values; voltage phase angle differences between buses i and j; and penalty factors associated with slack bus power, load voltages, generator reactive power, and line loadings.
Exchange market algorithm based optimum reactive power dispatch
S1568494616300941
This paper introduces a synergic predator-prey optimization (SPPO) algorithm to solve the economic load dispatch (ELD) problem for thermal units with practical aspects. The basic PPO model comprises prey and predators as essential components. SPPO uses collaborative decision-making for the movement and direction of the prey and maintains diversity in the swarm through a predator fear factor, which acts as the baffled state of the prey's mind. In SPPO, the decision-making of the prey is bifurcated into corroborative and impeded parts. It comprises four behaviors, namely inertial, cognitive, collective swarm intelligence, and the prey's individual and neighborhood concern about the predator. A prey particle memorizes its best and not-best positions as experiences. In this research work, opposition-based initialization is used to improve the quality of the prey swarm, which influences the convergence rate. To verify the robustness of the proposed algorithm, general benchmark problems and small, medium, and large power generation test systems are simulated. These test systems exhibit non-linear behavior due to multi-fuel options and practical constraints. The prohibited operating zone and ramp rate limit constraints of the power generators are handled using heuristics. The Newton–Raphson procedure is used to obtain the transmission losses via load flow analysis. The outcomes of SPPO are compared with the results reported in the literature and are found to be satisfactory.
Synergic predator-prey optimization for economic thermal power dispatch problem
S1568494616300953
Noise elimination is an important pre-processing step in magnetic resonance (MR) imaging for clinical purposes. In the present study, the bilateral filter (BF), an edge-preserving method, was used for Rician noise removal in MR images. The choice of BF parameters affects the denoising performance. Therefore, as a novel approach, the parameters of the BF were optimized using a genetic algorithm (GA). First, Rician noise with different variances (σ = 10, 20, 30) was added to simulated T1-weighted brain MR images. To find the optimum filter parameters, the GA was applied to the noisy images, searching over window sizes [3×3, 5×5, 7×7, 11×11, and 21×21], spatial sigma [0.1–10], and intensity sigma [1–60]. The peak signal-to-noise ratio (PSNR) was adopted as the fitness value for the optimization. After determination of the optimal parameters, we investigated the results of the proposed BF parameters on both simulated and clinical MR images. In order to understand the importance of parameter selection in the BF, we compared the results of denoising with the proposed parameters and with other previously used BFs, using quality metrics such as the mean squared error (MSE), PSNR, signal-to-noise ratio (SNR), and structural similarity index metric (SSIM). The quality of the denoised images with the proposed parameters was validated using both visual inspection and quantitative metrics. The experimental results showed that the BF with the parameters proposed by us performed better than BFs with other previously proposed parameters in both the preservation of edges and the removal of different levels of Rician noise from MR images. It can be concluded that the denoising performance of the BF is highly dependent on optimal parameter selection.
Determination of optimal parameters for bilateral filter in brain MR image denoising
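A toy illustration of the parameter-search idea, with a plain random search standing in for the genetic algorithm and Gaussian noise standing in for Rician noise: candidate (window size, spatial sigma, intensity sigma) triples are scored by PSNR against a synthetic clean image, and the best triple is kept. OpenCV's bilateralFilter is assumed to be available, and the intensity-sigma rescaling for [0, 1] float images is an assumption.

# PSNR-driven search over bilateral filter parameters (random search stands in for the GA).
import numpy as np
import cv2

rng = np.random.default_rng(6)
clean = np.tile(np.linspace(0, 1, 64, dtype=np.float32), (64, 1))   # synthetic smooth "image"
noisy = (clean + rng.normal(0, 0.1, clean.shape)).astype(np.float32)

def psnr(ref, img):
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(1.0 / mse)                       # images scaled to [0, 1]

best = None
for _ in range(30):                                       # a GA would evolve this population
    d = int(rng.choice([3, 5, 7, 11, 21]))                # window size
    sigma_space = rng.uniform(0.1, 10)                    # spatial sigma
    sigma_color = rng.uniform(1, 60) / 255.0              # intensity sigma (assumed float scaling)
    denoised = cv2.bilateralFilter(noisy, d, sigma_color, sigma_space)
    score = psnr(clean, denoised)
    if best is None or score > best[0]:
        best = (score, d, sigma_space, sigma_color)
print("best PSNR and parameters:", best)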
S1568494616300977
Polymer electrolyte fuel cells (PEFCs) have attracted considerable interest within the research community due to the increasing demand for renewable energy. Among the many components of a PEFC, the cathode electrode plays a primary role in the operation of the cell. Here, a computational-intelligence-aided design and engineering (CIAD/CIAE) framework with potential cross-disciplinary applications is proposed to minimize the over-potential difference η and improve the overall efficiency of PEFCs. A newly developed swarm dolphin algorithm is embedded in a computational-intelligence-integrated solver to optimize a triple-layer cathode electrode model. The simulation results demonstrate the potential application of the proposed CIAD/CIAE framework in the design automation and optimization of PEFCs.
Multi-objective optimization on multi-layer configuration of cathode electrode for polymer electrolyte fuel cells via computational-intelligence-aided design and engineering framework
S1568494616300989
This paper addresses the effect of incorporating wind power units into the classical Environment/Economic Dispatch (EED) model, hereafter called the Wind/Environment/Economic Dispatch (WEED) problem. The optimal dispatch between thermal and wind units that minimizes the total generating cost is considered as a multi-objective model. Wind energy, as a renewable energy source, naturally has uncertainty in generation. Therefore, this paper uses a practical model known as the 2m-point method to estimate the uncertainty of wind power. To solve the WEED problem, this paper proposes a new meta-heuristic optimization algorithm that uses an online learning mechanism. Honey Bee Mating Optimization (HBMO), a moderately new population-based intelligence algorithm, shows fine performance on optimization problems; unfortunately, it usually converges to local optima. Therefore, in the proposed Online Learning HBMO (OLHBMO), two neural networks are trained, once a predefined threshold is reached, on the current and previous positions of the solutions and their fitness values. Moreover, a Chaotic Local Search (CLS) operator is used to improve the local search ability, and a new data-sharing model determines the set of non-dominated solutions to be kept in the external memory. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is employed as a decision-making technique to find the best solution from the set of Pareto solutions. The proposed model has been individually examined and applied to the IEEE 30-bus 6-unit, the IEEE 118-bus 14-unit, and a 40-unit test system with valve-point effects. The robustness and effectiveness of the algorithm are shown on these test systems in comparison with other available algorithms.
Modeling of Wind/Environment/Economic Dispatch in power system and solving via an online learning meta-heuristic method
S1568494616300990
The minimum constraint removal (MCR) motion planning problem aims to remove the minimum set of geometric constraints necessary to obtain a free path that connects the starting point and the target point. In essence, discrete MCR problems are non-deterministic polynomial-time (NP)-hard problems, and a "combinatorial explosion" occurs when solving such problems on a large scale. Therefore, we search for highly efficient approximate solutions. In the present study, an ant colony algorithm was used to solve these problems. The ant colony algorithm was improved based on the social force model during the solving process, such that it no longer easily falls into local extrema and is therefore suitable for solving the MCR problem. The results of the simulation experiments demonstrate that the algorithm used in the present study is superior to the exact algorithm and the greedy algorithm in terms of solution quality and running time.
Solving minimum constraint removal (MCR) problem using a social-force-model-based ant colony algorithm
S1568494616301004
We introduce a novel approach to signal classification based on evolving temporal pattern detectors (TPDs) that can find the occurrences of embedded temporal structures in discrete time signals, and we illustrate its application to characterizing the alcoholic brain using visually evoked response potentials. In contrast to conventional techniques used for most signal classification tasks, this approach unifies the feature extraction and classification steps. It makes no prior assumptions regarding the spectral characteristics of the data; it merely assumes that some temporal patterns exist that distinguish two classes of signals, and it could therefore be applied to new signal classification tasks where a body of prior work identifying important features does not exist. Evolutionary computation (EC) discovers a classifier by simply learning from the time series samples. The alcoholic classification (AC) problem consists of two sub-tasks, one spatial and one temporal: choosing a subset of electroencephalogram leads used to create a composite signal (the spatial task), and detecting temporal patterns in this signal that are more prevalent in the alcoholics than in the controls (the temporal task). To accomplish this, a novel representation and crossover operator were devised that enable multiple feature subset tasks to be solved concurrently. Three TPD techniques are presented that differ in the mechanism by which partial credit is assigned to temporal patterns that deviate from the specified pattern. An EC approach is used for evolving a subset of sensors and the TPD specifications. We found evidence that partial credit does help evolutionary discovery. Regions on the skull of alcoholic subjects that produced abnormal electrical activity compared to the controls were located; these regions were consistent with prior findings in the literature. The classification accuracy was measured as the area under the receiver operating characteristic (ROC) curve; the ROC area varied from 90.32% to 98.83% for the training set and from 87.17% to 95.9% for the testing set.
A novel approach to signal classification with an application to identifying the alcoholic brain
S1568494616301016
In our previous work, we developed a sudden cardiac death index (SCDI) using electrocardiogram (ECG) signals that could effectively predict the occurrence of SCD four minutes before onset. The prediction of SCD before its onset using heart rate variability (HRV) signals is thus a worthwhile task for further investigation. Therefore, in this paper, a novel methodology to automatically classify the HRV signals of normal subjects and subjects at risk of SCD using nonlinear techniques is presented. In this study, we predict SCD by analyzing four minutes of HRV signals (separately for each one-minute interval) prior to SCD occurrence using nonlinear features such as Renyi entropy (REnt), fuzzy entropy (FE), Hjorth's parameters (activity, mobility, and complexity), Tsallis entropy (TEnt), and energy features of discrete wavelet transform (DWT) coefficients. All clinically significant features obtained are ranked using their t-values and fed to classifiers such as the K-nearest neighbor (KNN), decision tree (DT), and support vector machine (SVM). In this work, we achieved accuracies of 97.3%, 89.4%, 89.4%, and 94.7% for the prediction of SCD one, two, three, and four minutes prior to onset, respectively, using the SVM classifier. Furthermore, we also developed a novel SCD Index (SCDI) using nonlinear HRV signal features to classify normal and SCD-prone HRV signals. Our proposed technique is able to identify a person at risk of developing SCD four minutes earlier, thereby providing sufficient time for clinicians to respond with treatment in Intensive Care Units (ICU). This proposed technique can thus serve as a valuable tool for increasing the survival rate of many cardiac patients.
Sudden cardiac death (SCD) prediction based on nonlinear heart rate variability features and SCD index
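A hedged sketch of the feature-plus-classifier idea only: a single Renyi entropy feature is computed from histograms of synthetic RR-interval series and fed to an SVM; the clinical data, the remaining nonlinear features, and the t-value ranking used in the paper are not reproduced, and the bin edges and class statistics below are assumptions.

# One nonlinear HRV feature (Renyi entropy of RR intervals) feeding an SVM classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)

def renyi_entropy(x, alpha=2.0, n_bins=16):
    p, _ = np.histogram(x, bins=np.linspace(0.4, 1.2, n_bins + 1))  # fixed RR range (s)
    p = p[p > 0] / p.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

normal  = [rng.normal(0.8, 0.05, 300) for _ in range(20)]   # synthetic RR intervals (s)
at_risk = [rng.normal(0.8, 0.15, 300) for _ in range(20)]   # assumed higher variability

X = np.array([[renyi_entropy(r)] for r in normal + at_risk])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))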
S1568494616301028
Ant colony optimization (ACO) is a very suitable path search algorithm whose typical application is the traveling salesman problem. However, as a heuristic algorithm, it has shortcomings such as slow convergence speed and low search efficiency. To overcome these shortcomings, a premium-penalty strategy is introduced, and the pheromone difference between good paths and ordinary ones is increased to polarize the pheromone density of all paths. Thus, premium-penalty ant colony optimization (PPACO) is proposed. Its good performance is verified by application to some typical traveling salesman problems, and its two important parameters are discussed. Because locating the critical slip surface in slope stability analysis is a path search problem, it can be solved very suitably by ACO. Therefore, based on PPACO and a typical, mature limit equilibrium analysis (the Spencer method), a new method to analyze slope stability is proposed. Through two typical examples, one simple slope and one complicated slope, the efficiency and effectiveness of the new algorithm are verified. The results show that the new algorithm can always find a smaller safety factor and its critical slip surface in a shorter time than many previous algorithms, and that it can be used very well in real engineering.
Premium-penalty ant colony optimization and its application in slope stability analysis
S156849461630103X
The diaphragmatic electromyogram (EMGdi) signal plays an important role in the diagnosis and analysis of respiratory diseases. However, EMGdi recordings are often contaminated by electrocardiographic (ECG) interference, which poses a serious obstacle to traditional denoising approaches due to the overlapping spectra of these signals. In this paper, a novel method based on the wavelet transform and independent component analysis (ICA) is proposed to remove ECG interference from noisy EMGdi signals. With the proposed method, the independent components of the contaminated EMGdi signal are first obtained with ICA. The ECG components they contain are then removed by a specially designed wavelet-domain filter. After that, the purified independent components are reconstructed back to the original signal space by ICA to obtain clean EMGdi signals. Experimental results on practical clinical data show that the proposed approach outperforms several traditional methods, including the wavelet transform (WT), ICA, digital filtering, and adaptive filtering, in removing ECG interference.
EMGdi signal enhancement based on ICA decomposition and wavelet transform
S1568494616301041
This paper presents an approach for breast cancer diagnosis in digital mammograms using the wave atom transform. The wave atom transform is a recent member of the family of multi-resolution representation methods. First, the mammogram images are decomposed into wave atoms, and then a special set of the largest wave atom transform coefficients is used as a feature vector. Two different classifiers, a support vector machine and k-nearest neighbors, are employed to classify the mammograms. The method is tested using two different sets of images provided by the MIAS and DDSM databases.
Investigation of wave atom transform by using the classification of mammograms
S1568494616301053
Deformable models are segmentation techniques that adapt a curve with the goal of maximizing its overlap with the actual contour of an object of interest within an image. Such a process requires the definition of an optimization framework whose most critical issues include: choosing an optimization method which exhibits robustness with respect to noisy and highly-multimodal search spaces; selecting the optimization and segmentation algorithms’ parameters; choosing the representation for encoding prior knowledge on the image domain of interest; and initializing the curve in a location which favors its convergence onto the boundary of the object of interest. All these problems are extensively discussed within this manuscript, with reference to the family of global stochastic optimization techniques that are generally termed metaheuristics, and are designed to solve complex optimization and machine learning problems. In particular, we present a complete study on the application of metaheuristics to image segmentation based on deformable models. This survey studies, analyzes and contextualizes the most notable and recent works on this topic, proposing an original categorization for these hybrid approaches. It aims to serve as a reference work which proposes some guidelines for choosing and designing the most appropriate combination of deformable models and metaheuristics when facing a given segmentation problem. After recalling the principles underlying deformable models and metaheuristics, we broadly review the different hybrid approaches employed to solve image segmentation problems, and conclude with a general discussion about methodological and design issues as well as future research and application trends.
A survey on image segmentation using metaheuristic-based deformable models: state of the art and critical analysis
S1568494616301168
Networked control of a class of nonlinear systems is considered. For this purpose, the previously proposed variable selective control (VSC) methodology is extended to nonlinear systems. This extension is based upon the decomposition of the nonlinear system into a set of fuzzy-blended, locally linearized subsystems and the further application of the VSC methodology to each subsystem. Using the idea of the parallel distributed compensation (PDC) method, the closed-loop stability of the overall networked system is guaranteed using new linear matrix inequalities (LMIs). For real-time implementation, real-time control signals are constructed for every entry of a pre-specified vector of time delays, which is selected based on the presumed upper bound of the network time delay. Similar to the traditional packet-based control methodology, such control signals are then packed as a control-side packet and transmitted back to a time delay compensator (TDC) located on the plant side of the network. According to the most recent network time delay, the TDC selects just one entry of the control vector and applies it to the actuator through a zero-order hold element. A sufficient condition for closed-loop asymptotic stability is determined. Simulation studies on nonlinear benchmark problems demonstrate the effectiveness of the proposed method.
A new method for control of nonlinear networked systems
S156849461630117X
Conventionally, optimal reactive power dispatch (ORPD) is described as the minimization of active power transmission losses and/or total voltage deviation by controlling a number of control variables while satisfying certain equality and inequality constraints. This article presents a newly developed meta-heuristic approach, the chaotic krill herd algorithm (CKHA), for the solution of the ORPD problem of power systems incorporating flexible AC transmission systems (FACTS) devices. The proposed CKHA is implemented and its performance is tested, successfully, on the standard IEEE 30-bus test power system. The considered power system models are equipped with two types of FACTS controllers (namely, the thyristor controlled series capacitor and the thyristor controlled phase shifter). Simulation results indicate that the proposed approach yields superior solutions over other popular methods reported in the recent state-of-the-art literature, including a few newly developed chaos-embedded optimization techniques. The obtained results indicate its effectiveness for the solution of the ORPD problem of power systems considering FACTS devices. Finally, the simulation is extended to some large-scale power system models, the IEEE 57-bus and IEEE 118-bus test power systems, for the same objectives, to emphasize the scalability of the proposed CKHA technique. The scalability, robustness, and superiority of the proposed CKHA are established in this paper.
Chaotic krill herd algorithm for optimal reactive power dispatch considering FACTS devices
S1568494616301181
In the recent decade, owing to rich and dense water resources, it is vital to have accurate and reliable sediment prediction, as incorrect estimation of the sediment rate has a huge negative effect on the supply of drinking and agricultural water. For this reason, many studies have been conducted to improve the accuracy of prediction. In a wide range of these studies, various soft computing techniques have been used to predict sediment. It is expected that combining the predictions obtained by these soft computing techniques can improve the prediction accuracy. Stacking is a powerful machine learning technique that intelligently combines the prediction results of other methods through a meta-model based on cross-validation. However, to the best of our knowledge, the stacking method has not been used so far to predict sediment or other hydrological parameters. This study introduces the stacking method to predict suspended sediment. For this purpose, linear genetic programming and neuro-fuzzy methods are applied as two successful soft computing methods to predict the suspended sediment. Then, the accuracy of prediction is increased by combining their results with a neural network meta-model based on cross-validation. To evaluate the proposed method, two stations in the USA, Rio Valenciano and Quebrada Blanca, were selected as case studies, and streamflow and suspended sediment concentration were defined as inputs to predict the daily suspended sediment. The obtained results demonstrate that the stacking method greatly improves the RMSE and R2 statistics for both stations compared to using linear genetic programming or neuro-fuzzy methods alone.
Suspended sediment concentration estimation by stacking the genetic programming and neuro-fuzzy predictions
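A sketch of the stacking arrangement described above, with off-the-shelf regressors standing in for the linear genetic programming and neuro-fuzzy base learners (neither is available in scikit-learn) and a small neural network as the meta-model fitted on out-of-fold base predictions; the streamflow and sediment data are synthetic.

# Stacking: base learners combined by a cross-validated neural-network meta-model.
import numpy as np
from sklearn.ensemble import StackingRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
streamflow = rng.gamma(2.0, 50.0, 400)                        # synthetic daily streamflow
sediment = 0.05 * streamflow ** 1.3 + rng.normal(0, 5, 400)   # synthetic suspended sediment

X, y = streamflow.reshape(-1, 1), sediment
stack = StackingRegressor(
    estimators=[("gbr", GradientBoostingRegressor(random_state=0)),   # stand-in base learners
                ("knn", KNeighborsRegressor(n_neighbors=10))],
    final_estimator=MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    cv=5,                                                     # out-of-fold base predictions
)
stack.fit(X[:300], y[:300])
print("test R^2:", stack.score(X[300:], y[300:]))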
S156849461630120X
In this paper, we study fuzzy linear fractional differential equations (FLFDEs) under Riemann–Liouville H-differentiability, posed as fuzzy initial value problems of the form D^α_{0+} y(x) = λ ⊙ y(x) + f(x). Some of the previous results on the solutions of these equations are concreted. We obtain new solutions in detail by using the fractional hyperbolic functions and their properties. Finally, an application and two examples are given to illustrate our results.
Concreted solutions to fuzzy linear fractional differential equations
S1568494616301211
Warehousing management policy is a crucial issue in logistics management. It must be managed effectively and efficiently to reduce production cost and to maintain customer satisfaction. The synchronized zoning system is a warehousing management policy that aims to increase warehouse utilization and customer satisfaction by reducing customer waiting time. This policy divides a warehouse into several zones, where each zone has its own picker and the pickers can work simultaneously. Here, item assignment plays an important role since it influences the order processing performance. This study proposes an application of metaheuristic algorithms, namely particle swarm optimization (PSO) and the genetic algorithm (GA), for item assignment in a synchronized zoning system. The original PSO and GA algorithms are modified so that they are suitable for solving the item assignment problem. Datasets of different sizes are used for method validation. The results obtained by PSO and GA are then compared with the result of an existing algorithm. The experimental results show that PSO and GA perform better than the existing algorithm, and that PSO has better performance than GA, especially for bigger problems. The item assignment policies obtained by PSO and GA achieve higher utilization rates than the existing algorithm. Nomenclature: the indices of items, a = 1,2,…,q and b = 1,2,…,q with b > a; the number of zones; the number of orders; the similarity between two items a and b; and the number of items.
Application of metaheuristics-based clustering algorithm to item assignment in a synchronized zone order picking system
S1568494616301223
Remaining useful life prediction is one of the key requirements in prognostics and health management. While a system or component exhibits degradation during its life cycle, various methods can predict its future performance and assess the time frame until it no longer performs its desired functionality. The proposed data-driven and model-based hybrid/fusion prognostics framework interfaces a classical Bayesian model-based prognostics approach, namely the particle filter, with two data-driven methods for the purpose of improving the prediction accuracy. The first data-driven method establishes the measurement model (inferring the measurements from the internal system state) to account for situations where the internal system state is not accessible through direct measurements. The second data-driven method extrapolates the measurements beyond the range of actually available measurements and feeds them back to the model-based method, which further updates the particles and their weights during the long-term prediction phase. By leveraging the strengths of the data-driven and model-based methods, the proposed fusion prognostics framework can bridge the gap between data-driven prognostics and model-based prognostics when both abundant historical data and knowledge of the physical degradation process are available. The proposed framework was successfully applied to lithium-ion battery remaining useful life prediction and achieved significantly better accuracy than the classical particle filter approach.
A hybrid framework combining data-driven and model-based methods for system remaining useful life prediction
S1568494616301247
Evolutionary approaches are among the most accurate prototype selection algorithms, i.e., preprocessing techniques that select a subset of instances from the data before applying nearest neighbor classification to it. These algorithms achieve very high accuracy and reduction rates, but unfortunately come at a substantial computational cost. In this paper, we introduce a framework that makes it possible to efficiently use the intermediary results of the prototype selection algorithms to further increase their accuracy. Instead of only using the fittest prototype subset generated by the evolutionary algorithm, we use multiple prototype subsets in an ensemble setting. Secondly, in order to classify a test instance, we only use prototype subsets that accurately classify training instances in the neighborhood of that test instance. In an experimental evaluation, we apply our new framework to four state-of-the-art prototype selection algorithms and show that, by using our framework, more accurate results are obtained after fewer evaluations of the prototype selection method. We also present a case study with a prototype generation algorithm, showing that our framework is easily extended to other preprocessing paradigms as well.
Improving nearest neighbor classification using Ensembles of Evolutionary Generated Prototype Subsets
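A simplified sketch of the two ideas in the abstract: several prototype subsets (chosen at random here rather than by an evolutionary algorithm) vote on each test instance, and a subset votes only if it classifies that instance's nearest training neighbours accurately; the 0.8 accuracy threshold and the neighbourhood size are arbitrary choices.

# Locally-weighted ensemble of 1-NN classifiers built on prototype subsets.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)
X, y = load_iris(return_X_y=True)
idx = rng.permutation(len(X))
train, test = idx[:120], idx[120:]

subsets = [rng.choice(train, size=30, replace=False) for _ in range(7)]
models = [KNeighborsClassifier(n_neighbors=1).fit(X[s], y[s]) for s in subsets]
neigh = KNeighborsClassifier(n_neighbors=5).fit(X[train], y[train])

correct = 0
for t in test:
    local = train[neigh.kneighbors([X[t]], return_distance=False)[0]]   # nearby training points
    votes = [m.predict([X[t]])[0] for m in models
             if m.score(X[local], y[local]) >= 0.8]                     # locally accurate subsets only
    pred = np.bincount(votes).argmax() if votes else models[0].predict([X[t]])[0]
    correct += int(pred == y[t])
print("ensemble accuracy:", correct / len(test))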
S1568494616301259
Autonomous mobile vehicles are becoming more common in outdoor scenarios for agricultural applications. They must be equipped with a robot navigation system for sensing, mapping, localization, path planning, and obstacle avoidance. In autonomous vehicles, safety becomes a major challenge, and unexpected obstacles in the working area must be conveniently addressed. Of particular interest are people or animals crossing in front of the vehicle, or fixed or moving uncatalogued elements in specific positions. The detection of unexpected obstacles or elements in video sequences acquired with a machine vision system on board a tractor moving in cornfields is the main contribution of this research. We propose a new strategy for automatic video analysis to detect static and dynamic obstacles in agricultural environments via spatial-temporal analysis. In a first stage, obstacles are detected by using spatial information based on spectral colour analysis and texture data. In a second stage, temporal information is used to detect moving objects or obstacles in the scene, which is of particular interest for elements camouflaged within the environment. A main feature of our method is that it does not require any training process. Another feature of our approach is that the spatial analysis provides an initial segmentation of interesting objects, after which temporal information is used to discriminate between moving and static objects. To the best of our knowledge, classical approaches in the field of agricultural image analysis make use of either spatial or temporal information, but not both at the same time, which makes our combined use an important contribution. Our method shows favourable results when tested in different outdoor scenarios in agricultural environments, which are really complex, mainly due to the high variability in the illumination conditions, causing undesired effects such as shadows and alternating lit and dark areas. Dynamic backgrounds, camera vibrations, and static and dynamic objects are also factors complicating the situation. The results are comparable to those obtained with other state-of-the-art techniques reported in the literature.
Spatio-temporal analysis for obstacle detection in agricultural videos
S1568494616301260
Intravascular ultrasound-based tissue characterization of coronary plaque is important for the early diagnosis of acute coronary syndromes. Conventional tissue characterization techniques, however, cannot achieve sufficient identification accuracy for various tissue properties, because the features employed for characterization are static and lack dynamical information about the backscattered radio-frequency (RF) signals. In this work, we propose a new intravascular ultrasound-based tissue characterization method that uses a modular network self-organizing map (mnSOM) in which each module is composed of an autoregressive model representing the dynamics of the RF signals. The proposed method can create a map of various dynamical features from the RF signal, and this map enables generalized tissue characterization. The proposed method is verified by comparing its tissue characterization performance with that of a conventional method using real intravascular ultrasound signals.
Intravascular ultrasound-based tissue characterization using modular network self-organizing map
S1568494616301272
Cooperation between autonomous robot vehicles holds several promising advantages, such as robustness, adaptability, configurability, and scalability. Coordination between the different robots and their individual relative motions represent the main challenges, especially when dealing with formation control and maintenance. Cluster space control provides a simple concept for controlling multi-agent formations. In the classical approach, formation control is the only task for the multi-agent system. In this paper, the development and application of a novel Behavioral Adaptive Fuzzy-based Cluster Space Control (BAFC) to non-holonomic robots is presented. By applying a fuzzy priority control approach, BAFC deals with two conflicting tasks: formation maintenance and target following. Using priority rules, the fuzzy approach adapts the controller, and therefore the behavior of the system, taking into account the errors in the formation states and the target-following states. The control approach is easy to implement and has been implemented in this paper using the SIMULINK real-time platform. The communication between the different agents and the controller is established through a Wi-Fi link. Both simulation and experimental results demonstrate the behavioral response, where the robot performs the higher-priority tasks first. This new approach shows great performance with a lower control signal when benchmarked against previously known results in the literature.
A Behavioral Adaptive Fuzzy controller of multi robots in a cluster space
S1568494616301284
Extreme learning machine (ELM) is a recently proposed learning algorithm for single hidden layer feedforward neural networks (SLFN) that has achieved remarkable performance in various applications. In ELM, the hidden neurons are randomly assigned and the output layer weights are learned in a single step using the Moore-Penrose generalized inverse. This approach results in a fast learning neural network algorithm with a single hyperparameter (the number of hidden neurons). Despite the aforementioned advantages, using ELM can result in models with a large number of hidden neurons, and this can lead to poor generalization. To overcome this drawback, we propose a novel method to prune hidden layer neurons based on genetic algorithms (GA). The proposed approach, referred to as GAP-ELM, selects a subset of the hidden neurons to optimize a multiobjective fitness function that defines a compromise between accuracy and the number of pruned neurons. The performance of GAP-ELM is assessed on several real world datasets and compared to other SLFNs and a well known pruning method called Optimally Pruned ELM (OP-ELM). On the basis of our experiments, we can state that GAP-ELM is a valid alternative for classification tasks.
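As an illustration of the ELM training step described in the abstract above (random hidden layer plus a Moore-Penrose pseudoinverse solution for the output weights), here is a minimal NumPy sketch; the data, layer size and function names are placeholders, and the GA-based pruning of GAP-ELM itself is not reproduced.

```python
import numpy as np

def elm_train(X, y_onehot, n_hidden, rng=np.random.default_rng(0)):
    """Basic ELM: random hidden layer, output weights via Moore-Penrose pseudoinverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (not trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases (not trained)
    H = np.tanh(X @ W + b)                        # hidden layer activations
    beta = np.linalg.pinv(H) @ y_onehot           # single-step output weight solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Placeholder data; a GAP-ELM-style pruner would evaluate binary masks over the
# hidden-neuron columns of H, trading accuracy against the number of kept neurons.
X = np.random.rand(100, 8)
y = np.random.randint(0, 3, 100)
W, b, beta = elm_train(X, np.eye(3)[y], n_hidden=40)
print("training accuracy:", np.mean(elm_predict(X, W, b, beta) == y))
```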
A new pruning method for extreme learning machines via genetic algorithms
S1568494616301296
Sudoku of order n is a combinatorial puzzle played on a partially filled n² × n² grid consisting of sub-grids of dimension n × n. In this paper, a new membrane algorithm, namely MA_PSO_M, is presented. It uses the modified rules of Particle Swarm Optimization coupled with a carefully designed mutation operator within the framework of cell-like P-systems. Another significant contribution of this paper is the novel way in which the search space for solving the Sudoku problem is defined. Initially, the proposed algorithm is used to solve Sudoku puzzles of order 3 available in the literature. On the basis of experiments performed on sample Sudoku puzzles of ‘easy’ and ‘medium’ difficulty levels, it is concluded that the proposed membrane algorithm, MA_PSO_M, is very efficient and reliable. For the ‘hard’ and ‘evil’ difficulty levels, too, the algorithm performs very well after incorporating an additional deterministic phase. The performance of the algorithm is further enhanced with an increased population size in a very small computational time. To further demonstrate the efficiency of the algorithm, it is applied to Sudoku puzzles of order 4. The obtained results prove that the proposed membrane algorithm clearly dominates the PSO-based membrane algorithms existing in the literature.
A new membrane algorithm using the rules of Particle Swarm Optimization incorporated within the framework of cell-like P-systems to solve Sudoku
S1568494616301302
This paper proposes a novel hybrid t-way test generation strategy (where t indicates interaction strength), called High Level Hyper-Heuristic (HHH). HHH adopts Tabu Search as its high level meta-heuristic and leverages the strengths of four low level meta-heuristics, comprising Teaching Learning based Optimization, the Global Neighborhood Algorithm, Particle Swarm Optimization, and the Cuckoo Search Algorithm. HHH is able to capitalize on the strengths and limit the deficiencies of each individual algorithm in a collective and synergistic manner. Unlike existing hyper-heuristics, HHH relies on three defined operators, based on improvement, intensification and diversification, to adaptively select the most suitable meta-heuristic at any particular time. Our results are promising, as HHH manages to outperform existing t-way strategies on many of the benchmarks.
A Tabu Search hyper-heuristic strategy for t-way test suite generation
S1568494616301326
Type-2 fuzzy logic systems have been applied extensively to various engineering problems, e.g. identification, prediction, control, pattern recognition, etc., in the past two decades, and the results were promising, especially in the presence of significant uncertainties in the system. In the design of type-2 fuzzy logic systems, the early applications were realized in such a way that both the antecedent and consequent parameters were chosen by the designer, with perhaps some input from experts. Since the 2000s, a huge number of papers have been published that are based on the adaptation of the parameters of type-2 fuzzy logic systems using the training data, either online or offline. Consequently, the major challenge was to design these systems in an optimal way in terms of their optimal structure and their corresponding optimal parameter update rules. In this review, the state of the art of the three major classes of optimization methods is investigated: derivative-based (computational approaches), derivative-free (heuristic methods) and hybrid methods, which are the fusion of both the derivative-free and derivative-based methods.
Optimal design of adaptive type-2 neuro-fuzzy systems: A review
S1568494616301338
Reconstruction of cross-cut shredded text documents (RCCSTD) plays a crucial role in many fields such as forensics and archeology. To handle and reconstruct the shreds, in addition to some image processing procedures, a well-designed optimization algorithm is required. Existing works adopt general methods in these two aspects, which may not be very efficient since they ignore the specific structure and characteristics of RCCSTD. In this paper, we develop a splicing-driven memetic algorithm (SD-MA) specifically for tackling the problem. As the name indicates, the algorithm is designed from a splicing-centered perspective, in which the operators and fitness evaluation are developed for the purpose of splicing the shreds. We design novel crossover and mutation operators that utilize the adjacency information in the shreds to breed high-quality offspring. Then, a local search strategy based on shreds is performed, which further improves the evolution efficiency of the population in the complex search space. To extract valid information from shreds and improve the accuracy of splicing costs, we propose a comprehensive objective function that considers both edge-based and empty-row-based splicing errors. Experiments are carried out on 30 RCCSTD scenarios and comparisons are made against previous best-known algorithms. Experimental results show that the proposed SD-MA displays a significantly improved performance in terms of solution accuracy and convergence speed.
A splicing-driven memetic algorithm for reconstructing cross-cut shredded text documents
S1568494616301454
This paper presents a genetic algorithm applied to the protein structure prediction in a hydrophobic-polar model on a cubic lattice. The proposed genetic algorithm is extended with crowding, clustering, repair, local search and opposition-based mechanisms. The crowding is responsible for maintaining the good solutions to the end of the evolutionary process while the clustering is used to divide a whole population into a number of subpopulations that can locate different good solutions. The repair mechanism transforms infeasible solutions to feasible solutions that do not occupy the lattice point for more than one monomer. In order to improve convergence speed the algorithm uses local search. This mechanism improves the quality of conformations with the local movement of one or two consecutive monomers through the entire conformation. The opposition-based mechanism is introduced to transform conformations to the opposite direction. In this way the algorithm easily improves good solutions on both sides of the sequence. The proposed algorithm was tested on a number of well-known hydrophobic-polar sequences. The obtained results show that the mechanisms employed improve the algorithm's performance and that our algorithm is superior to other state-of-the-art evolutionary and swarm algorithms.
Genetic algorithm with advanced mechanisms applied to the protein structure prediction in a hydrophobic-polar model and cubic lattice
S1568494616301478
Attribute reduction with variable precision rough sets (VPRS) attempts to select the most information-rich attributes from a dataset by incorporating a controlled degree of misclassification into the approximations of rough sets. However, the existing attribute reduction algorithms with VPRS have no incremental mechanism for handling dynamic datasets with increasing samples, so they are computationally time-consuming for such datasets. Therefore, this paper presents an incremental algorithm for attribute reduction with VPRS, in order to address the time complexity of current algorithms. First, two Boolean row vectors are introduced to characterize the discernibility matrix and reduct in VPRS. Then, an incremental manner is employed to update the minimal elements in the discernibility matrix at the arrival of an incremental sample. Based on this, a deep insight into the attribute reduction process is gained to reveal which attributes should be added into and/or deleted from a current reduct, and our incremental algorithm is designed around this view of the attribute reduction process. Finally, experimental comparisons validate the effectiveness of our proposed incremental algorithm.
An incremental algorithm for attribute reduction with variable precision rough sets
S156849461630148X
In this paper, I introduce a new method for feature extraction to classify digital mammograms using the fast finite shearlet transform. Initially, the fast finite shearlet transform was performed over the mammogram images, and feature vectors were built from the coefficients of the transform. In subsequent calculations, features were ranked according to t-test statistics on their ability to distinguish between the different classes. To maximize the differences between class representatives, a thresholding process was implemented as a final stage of feature extraction, and classification was performed over the optimal feature set using 5-fold cross validation and a support vector machine (SVM) classifier. The present results show that the proposed method provides satisfactory classification accuracy.
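As a generic illustration of the ranking-and-classification pipeline described in the abstract above (t-test feature ranking followed by an SVM with 5-fold cross validation), here is a small SciPy/scikit-learn sketch; the feature matrix is a random placeholder standing in for precomputed shearlet coefficients, and the thresholding stage is not reproduced.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def rank_by_ttest(X, y, top_k=40):
    """Rank features by the absolute two-sample t statistic (class 0 vs class 1)."""
    t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
    return np.argsort(-np.abs(t))[:top_k]

# Placeholder data: X would hold shearlet-derived features, y the class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))
y = rng.integers(0, 2, size=120)

idx = rank_by_ttest(X, y, top_k=40)                       # keep the top-ranked features
scores = cross_val_score(SVC(kernel="rbf"), X[:, idx], y, cv=5)
print("5-fold accuracy:", scores.mean())
```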
A new feature extraction method based on multi-resolution representations of mammograms
S1568494616301491
This paper presents an alternative technique for financial distress prediction systems. The method is based on a type of neural network, which is called hybrid associative memory with translation. While many different neural network architectures have successfully been used to predict credit risk and corporate failure, the power of associative memories for financial decision-making has not been explored in any depth as yet. The performance of the hybrid associative memory with translation is compared to four traditional neural networks, a support vector machine and a logistic regression model in terms of their prediction capabilities. The experimental results over nine real-life data sets show that the associative memory here proposed constitutes an appropriate solution for bankruptcy and credit risk prediction, performing significantly better than the rest of models under class imbalance and data overlapping conditions in terms of the true positive rate and the geometric mean of true positive and true negative rates.
Financial distress prediction using the hybrid associative memory with translation
S1568494616301508
The payload packing problem of a satellite module (SM3P) is a complex engineering layout and combinatorial optimization problem. SM3P cannot be solved effectively by traditional exact methods. Evolutionary algorithms have shown some promise in tackling SM3P in previous work; however, the solution quality and computational efficiency are still challenges. Inspired by previous works (such as divide-and-conquer and the no free lunch theorem), this study designs a three-stage solution strategy in light of the characteristics of SM3P and proposes a hybrid multi-mechanism optimization approach (HMMOA) integrating knowledge heuristic rules with two evolutionary algorithms, ant colony optimization (ACO) and particle swarm optimization (PSO), in different stages. Firstly, the payloads to be placed are assigned to different bearing surfaces in the distribution stage. Then SM3P is decomposed into several subproblems solved by the heuristic ACO algorithm in the second stage, where a better feasible packing scheme obtained by the knowledge-based heuristic ACO is further improved by a heuristic adjustment strategy. At last, the solutions of the different subproblems are combined to form a whole solution that is optimized by PSO by means of rotation to minimize both the mass-center and inertia-angle errors while other design objectives remain unchanged. The experimental results illustrate the capability of the proposed HMMOA in tackling this complex problem with better solution quality and less computational effort.
A hybrid multi-mechanism optimization approach for the payload packing design of a satellite module
S1568494616301521
In recent years, the number of direct flights between Taiwan and mainland China has grown rapidly, as charter flights have been turned into regular flights. This important issue has prompted airport ground handling service (AGHS) companies in Taiwan to enhance convenient services for passengers and to invest in airport logistics center expansion plans (ALCEP) to broaden the AGHS market. Due to their budgetary restrictions, AGHS companies need to outsource many of their services to contractors to implement these plans. This study proposes an ALCEP solution procedure to guide AGHS companies in adjusting their priority goals and selecting the best contractor according to their needs. This proposed procedure successfully solves the ALCEP problem and facilitates the assignment of contractors by considering both qualitative and quantitative methods.
Using binary fuzzy goal programming and linear programming to resolve airport logistics center expansion plan problems
S1568494616301533
The Iterated Prisoner’s Dilemma (IPD) game has been commonly used to investigate the cooperation among competitors. However, most previous studies on the IPD focused solely on maximizing players’ average payoffs without considering their risk preferences. By introducing the concept of income stream risk into the IPD game, this paper presents a novel evolutionary IPD model with agents seeking to balance between average payoffs and risks with respect to their own risk attitudes. We build a new IPD model of multiple agents, in which agents interact with one another in the pair-wise IPD game while adapting their risk attitudes according to their received payoffs. Agents become more risk averse after their payoffs exceed their aspirations, or become more risk seeking after their payoffs fall short of their aspirations. The aspiration levels of agents are determined based on their historical self-payoff information or the payoff information of the agent population. Simulations are conducted to investigate the emergence of cooperation under these two comparison methods. Results indicate that agents can sustain a highly cooperative equilibrium when they consider only their own historical payoffs as aspirations (called historical comparison) in adjusting their risk attitudes. This holds true even for the IPD with a short game encounter, for which cooperation was previously demonstrated difficult. However, when agents evaluate their payoffs in comparison with the population average payoff (called social comparison), those agents with payoffs below the population average tend to be dissatisfied with the game outcomes. This dissatisfaction will induce more risk-seeking behavior of agents in the IPD game, which will constitute a strong deterrent to the emergence of mutual cooperation in the population.
Cooperation in the evolutionary iterated prisoner’s dilemma game with risk attitude adaptation
S1568494616301545
Software defect prediction identifies fault-prone modules so that they can be tested thoroughly. Thereby, limited quality control resources can be allocated to them effectively. Without sufficient local data, defects can be predicted via cross-project defect prediction (CPDP), utilizing data from other projects to build a classifier. Software defect datasets have the class imbalance problem, meaning the defect class has far fewer instances than the non-defect class. Unless defect instances are predicted correctly, software quality could be degraded. In this context, a classifier is required to provide high accuracy on the defect class without severely worsening the accuracy on the non-defect class. This class imbalance principle seamlessly connects to the purpose of multi-objective (MO) optimization, in that MO predictive models aim at balancing many competing objectives. In this paper, we aim to identify effective multi-objective learning techniques under cross-project (CP) environments. Three objectives are devised considering the class imbalance context. The first objective is to maximize the probability of detection (PD). The second objective is to minimize the probability of false alarm (PF). The third objective is to maximize the overall performance (e.g., Balance). We propose novel MO naive Bayes learning techniques modeled by a Harmony Search meta-heuristic algorithm. Our approaches are compared with single-objective models, other existing MO models and within-project defect prediction models. The experimental results show that the proposed approaches are promising. As a result, they can be effectively applied to satisfy various prediction needs under CP settings.
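The three objectives listed in the abstract above have standard definitions in the defect-prediction literature; as a reference, a short NumPy sketch of how PD, PF and the Balance measure are usually computed from a confusion matrix follows (the toy labels are placeholders).

```python
import numpy as np

def pd_pf_balance(y_true, y_pred):
    """Probability of detection, probability of false alarm, and Balance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    pd = tp / (tp + fn)                       # recall on the defect class
    pf = fp / (fp + tn)                       # false alarms on the non-defect class
    # Balance: 1 minus the normalized distance to the ideal point (PF=0, PD=1)
    balance = 1 - np.sqrt((0 - pf) ** 2 + (1 - pd) ** 2) / np.sqrt(2)
    return pd, pf, balance

print(pd_pf_balance([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```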
Effective multi-objective naïve Bayes learning for cross-project defect prediction
S1568494616301557
This article proposes a new measure to compare soft computing methods for software estimation. This new measure is based on the concepts of Equivalence Hypothesis Testing (EHT). Using the ideas of EHT, a dimensionless measure is defined using the Minimum Interval of Equivalence and a random estimation. The dimensionless nature of the metric allows us to compare methods independently of the data samples used. The motivation of the current proposal comes from the biases that other criteria show when applied to the comparison of software estimation methods. In this work, the level of error for comparing the equivalence of methods is set using EHT. Several soft computing methods are compared, including genetic programming, neural networks, regression and model trees, linear regression (ordinary and least mean squares) and instance-based methods. The experimental work has been performed on several publicly available datasets. Given a dataset and an estimation method we compute the upper point of Minimum Interval of Equivalence, MIEu, on the confidence intervals of the errors. Afterwards, the new measure, MIEratio, is calculated as the relative distance of the MIEu to the random estimation. Finally, the data distributions of the MIEratios are analysed by means of probability intervals, showing the viability of this approach. In this experimental work, it can be observed that there is an advantage for the genetic programming and linear regression methods by comparing the values of the intervals.
Evaluation of estimation models using the Minimum Interval of Equivalence
S1568494616301569
This paper considers the order acceptance and scheduling problem in a single machine environment where each customer order is characterized by a known processing time, due date, revenue, and class setup time. The objective is to maximize the total revenue. Since the problem is computationally intractable, we first conduct a preliminary study of applying the basic artificial bee colony algorithm to address the problem under study. Specifically, we design appropriate neighborhood operators with respect to the problem. Based on the results of the preliminary study and the problem characteristics, an enhanced artificial bee colony algorithm is developed with a series of modifications. The extensive experimental results indicate that the enhanced artificial bee colony algorithm is both computationally efficient and effective for large sized problem instances.
An enhanced ABC algorithm for single machine order acceptance and scheduling with class setups
S1568494616301594
This paper proposes a hybrid particle swarm optimization algorithm in a rolling horizon framework to solve the aircraft landing problem (ALP). ALP is an important optimization problem in air traffic control and is well known to be NP-hard. The problem consists of allocating the arriving aircraft to runways at an airport and assigning a landing time to each aircraft. Each aircraft has an optimum target landing time determined based on its most fuel-efficient airspeed, and a deviation from it incurs a penalty which is proportional to the amount of deviation. The landing time of each aircraft is constrained within a specified time window and must satisfy minimum separation time requirements with its preceding aircraft. The objective is to minimize the total penalty cost due to deviation of the landing times of aircraft from their respective target landing times. The performance of the proposed algorithm is evaluated on a set of benchmark instances involving up to 500 aircraft and 5 runways. Computational results reveal that the proposed algorithm is effective in solving the problem in a short computational time.
An efficient hybrid particle swarm optimization algorithm in a rolling horizon framework for the aircraft landing problem
S1568494616301600
A Gabor filter bank has been successfully used for the false positive reduction problem and the discrimination of benign and malignant masses in breast cancer detection. However, a generic Gabor filter bank is not adapted to the multi-orientation and multi-scale texture micro-patterns present in the regions of interest (ROIs) of mammograms. There are two main optimization concerns: how many filters should be in a Gabor filter bank and what should their parameters be. Addressing these issues, this work focuses on optimizing Gabor filter banks based on an incremental clustering algorithm and Particle Swarm Optimization (PSO). We employ an SVM with Gaussian kernel as the fitness function for PSO. The effect of the optimized Gabor filter bank was evaluated on 1024 ROIs extracted from the Digital Database for Screening Mammography (DDSM) using four performance measures (i.e., accuracy, area under the ROC curve, sensitivity and specificity) for the above mentioned mass classification problems. The results show that the proposed method enhances the performance and reduces the computational cost. Moreover, the Wilcoxon signed rank test at the 0.05 significance level reveals that the performance difference between the optimized Gabor filter bank and the non-optimized Gabor filter bank is statistically significant.
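A minimal OpenCV sketch of building and applying a small Gabor filter bank to an ROI is given below; the bank parameters and the placeholder ROI are illustrative only, and the PSO/clustering optimization of those parameters described in the abstract is not reproduced.

```python
import numpy as np
import cv2

def gabor_bank_features(roi, wavelengths=(4.0, 8.0), orientations=4):
    """Filter an ROI with a small Gabor bank and return mean/std responses."""
    feats = []
    for lambd in wavelengths:                             # wavelengths (scales)
        for k in range(orientations):
            theta = k * np.pi / orientations              # orientation
            kern = cv2.getGaborKernel((21, 21), sigma=0.56 * lambd,
                                      theta=theta, lambd=lambd,
                                      gamma=0.5, psi=0)
            resp = cv2.filter2D(roi.astype(np.float32), cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])       # simple texture statistics
    return np.array(feats)

roi = np.random.rand(64, 64).astype(np.float32)           # placeholder mammogram ROI
print(gabor_bank_features(roi).shape)                      # 2 scales x 4 orientations x 2 stats
```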
Optimized Gabor features for mass classification in mammography
S1568494616301612
The focus of this study is the use of the Monte Carlo method in fuzzy linear regression. The purpose of the study is to determine appropriate error measures for the estimation of fuzzy linear regression model parameters with the Monte Carlo method, since model parameters are estimated without any mathematical programming or heavy fuzzy arithmetic operations in fuzzy linear regression with the Monte Carlo method. In the literature, only two error measures (E1 and E2) are available for the estimation of fuzzy linear regression model parameters. Additionally, the accuracy of the available error measures under the Monte Carlo procedure has not been evaluated. In this article, the mean square error, mean percentage error, mean absolute percentage error, and symmetric mean absolute percentage error are proposed for the estimation of fuzzy linear regression model parameters with the Monte Carlo method. Moreover, the estimation accuracies of the existing and proposed error measures are explored. The error measures are compared to each other in terms of estimation accuracy; hence, this study demonstrates that the best error measures for estimating fuzzy linear regression model parameters with the Monte Carlo method are E1, E2, and the mean square error. On the other hand, the worst one is the mean percentage error. These results would be useful to enrich the studies that have already focused on fuzzy linear regression models.
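For reference, the four proposed error measures have standard crisp-number definitions; a small NumPy sketch follows (in the paper these measures are applied within the fuzzy Monte Carlo estimation procedure, which is not reproduced here).

```python
import numpy as np

def error_measures(y_true, y_pred):
    """MSE, MPE, MAPE and SMAPE between observed and estimated (crisp) outputs."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    e = y_true - y_pred
    mse = np.mean(e ** 2)
    mpe = np.mean(e / y_true) * 100
    mape = np.mean(np.abs(e) / np.abs(y_true)) * 100
    smape = np.mean(2 * np.abs(e) / (np.abs(y_true) + np.abs(y_pred))) * 100
    return {"MSE": mse, "MPE": mpe, "MAPE": mape, "SMAPE": smape}

print(error_measures([10, 12, 15, 20], [11, 12.5, 14, 19]))
```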
Error measures for fuzzy linear regression: Monte Carlo simulation approach
S1568494616301624
Accurate modeling for forecasting stock market volatility is a widely interesting research area, both in academia and in financial markets. This paper proposes an innovative Fuzzy Computationally Efficient EGARCH model to forecast the volatility of three stock market indexes. The proposed model represents a joint estimation of the membership function parameters of a TSK-type fuzzy inference system along with the leverage effect and asymmetric shock terms of the EGARCH model, in order to forecast highly nonlinear and complicated financial time series more accurately. Further, unlike the conventional TSK type fuzzy neural network, the proposed model uses a functional link neural network (FLANN) in the consequent part of the fuzzy rules to provide an improved mapping. Moreover, a differential evolution (DE) algorithm is suggested to solve the parameter estimation problem of the Fuzzy Computationally Efficient EGARCH model. Being a parallel direct search algorithm, DE has the strength of finding global optimal solutions regardless of the initial values of its few control parameters. Furthermore, the DE based algorithm aims to achieve an optimal solution with a rapid convergence rate. The proposed model has been compared with some GARCH family models and hybrid fuzzy systems and GARCH models based on three performance metrics: MSFE, RMSFE, and MAFE. The results indicate that the proposed method offers significant improvements in volatility forecasting performance in comparison with all other specified models.
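As an illustration of how differential evolution can estimate volatility-model parameters by minimizing a forecast-error criterion, here is a SciPy sketch using a plain GARCH(1,1)-style recursion as a stand-in; the fuzzy EGARCH model itself, the return series and the parameter bounds are placeholders, not the authors' specification.

```python
import numpy as np
from scipy.optimize import differential_evolution

def msfe(params, returns):
    """Mean squared forecast error of a simple one-step-ahead volatility recursion."""
    w, a, b = params
    sigma2 = np.full_like(returns, returns.var())
    for t in range(1, len(returns)):
        sigma2[t] = w + a * returns[t - 1] ** 2 + b * sigma2[t - 1]
    return np.mean((returns ** 2 - sigma2) ** 2)

rng = np.random.default_rng(0)
returns = rng.normal(scale=0.01, size=500)            # placeholder return series
result = differential_evolution(msfe,
                                bounds=[(1e-6, 1e-3), (0.0, 1.0), (0.0, 1.0)],
                                args=(returns,), seed=0, maxiter=100)
print("estimated parameters:", result.x, "MSFE:", result.fun)
```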
An evolutionary hybrid Fuzzy Computationally Efficient EGARCH model for volatility prediction
S1568494616301648
The purpose of this paper is to develop a projection-based compromising method for addressing multiple criteria decision-making problems based on interval-valued intuitionistic fuzzy sets. The concept of projections considers not only the distance but also the included angle between evaluative ratings of alternative actions with respect to a criterion. In the interval-valued intuitionistic fuzzy context, this paper determines the respective projections of the evaluative ratings of each alternative on the positive-ideal and negative-ideal solutions and explores several essential properties. Next, this paper introduces the concepts of projection-based compromising indices and comprehensive compromising indices and further investigates relevant theorems for supporting the usefulness of these indices. Additionally, this paper proposes the projection-based comparative index and the comprehensive comparative index to serve as benchmark values for the comparison purpose. The improvement percentage of the comprehensive compromising value is acquired to determine the priority order of the alternatives, including the complete ranking order and the approval status for each alternative. The feasibility and the applicability of the proposed method are validated with an application problem of watershed site selection. Finally, several comparative analyses are conducted to verify the effectiveness and advantages of the proposed method over other relevant compromising decision-making methods.
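The projection concept used in the abstract above reduces, for crisp vectors, to the scalar projection of an alternative's rating vector onto the ideal solutions; a tiny NumPy illustration with placeholder ratings follows (the interval-valued intuitionistic fuzzy version defined in the paper is not reproduced).

```python
import numpy as np

def projection(a, b):
    """Scalar projection of vector a onto vector b: (a . b) / ||b||."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.dot(a, b) / np.linalg.norm(b)

# Illustrative crisp ratings of one alternative over three criteria,
# together with hypothetical positive-ideal and negative-ideal points.
alt = np.array([0.7, 0.6, 0.8])
pis = np.array([0.9, 0.9, 0.9])
nis = np.array([0.1, 0.1, 0.1])
print("projection on PIS:", projection(alt, pis))
print("projection on NIS:", projection(alt, nis))
```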
A projection-based compromising method for multiple criteria decision analysis with interval-valued intuitionistic fuzzy information
S156849461630165X
Materials informatics is a growing field in materials science. Materials scientists have begun to use soft computing techniques to discover novel materials. In order to apply these techniques, the descriptors (referred to as features in computer science) of a material must be selected, thereby deciding the resulting performance. As a way of describing a material, the properties of each element in the material are used directly as the features of the input variable. Depending on the number of elements in the material, the dimensionality of the input may differ. Hence, it is not possible to apply the same model to materials with different numbers of elements for tasks such as regression or discrimination. In the present paper, we present a novel method of uniforming the dimensionality of the input that allows regression or discriminative tasks to be performed using soft computing techniques. The main contribution of the proposed method is to provide a solution for uniforming the dimensionality among input vectors of different size. The proposed method is a variant of the denoising autoencoder Vincent et al. (2008) [1] using neural networks and gives a latent representation with uniformed dimensionality of the input. In the experiments of the present study, we consider compounds with ionic conductivity and hydrogen storage materials. The results of the experiments indicate that the regression tasks can be performed using the uniformed latent data learned by the proposed method. Moreover, in the clustering task using these latent data, we observed distance preservation in data space, which is also the case for the denoising autoencoder. This result may enable the proposed method to be used in a broad range of applications.
Uniforming the dimensionality of data with neural networks for materials informatics
S1568494616301661
Three-phase induction motors (TIMs) are the key elements of electromechanical energy conversion in a variety of productive sectors. Identifying a defect in a running motor, before a failure occurs, can provide greater security in the decision-making processes for machine maintenance, reduced costs and increased machine operation availability. This paper proposes a new approach for identifying faults and improving performance in three-phase induction motors by means of a multi-agent system (MAS) with distinct behavior classifiers. The faults observed are related to faulty bearings, breakages in squirrel-cage rotor bars, and short-circuits between the coils of the stator winding. By analyzing the amplitudes of the current signals in the time domain, experimental results are obtained through the different methods of pattern classification under various sinusoidal power and mechanical load conditions for TIMs. The use of an MAS to classify induction motor faults allows the agents to work in conjunction in order to perform a specific set of tasks and achieve the goals. This technique proved its effectiveness in the evaluated situations with 1 and 2hp motors, providing an alternative tool to traditional methods to identify bearing faults, broken rotor bars and stator short-circuit faults in TIMs.
A novel multi-agent approach to identify faults in line connected three-phase induction motors
S1568494616301685
Academic research on E-learning has increased extensively over the past few years. Although many multi-criteria decision making methods have been proposed to evaluate and examine the effectiveness of E-learning, there is a lack of studies concerning the systematic literature review and classification of research in this area. Regarding this, five major databases including ScienceDirect, Emerald, Taylor and Francis, IEEE, and Springer have been selected and a systematic methodology proposed. Consequently, a review of 42 published papers appearing in 33 academic journals and international conferences between 2001 and 2015 has been obtained to achieve a comprehensive review of MCDM applications in E-learning. Accordingly, the selected papers have been classified by the year of publication, MCDM techniques, and the journals and conferences in which they appeared. In addition, the significant criteria for evaluating E-learning were identified. This study supports researchers and practitioners in effectively adopting MCDM techniques for E-learning evaluation and provides an insight into its state-of-the-art.
Multi-criteria decision making approach in E-learning: A systematic review and classification
S1568494616301752
Evaluation of driving performance is of utmost importance in order to reduce the road accident rate. Since driving ability includes visual-spatial and operational attention, among others, head pose estimation of the driver is a crucial indicator of driving performance. This paper proposes a new automatic method for coarse and fine estimation of the driver's head yaw angle. We rely on a set of geometric features computed from just three representative facial keypoints, namely the centers of the eyes and the nose tip. With these geometric features, our method combines two manifold embedding methods and a linear regression one. In addition, the method has a confidence mechanism to decide whether the classification of a sample is reliable. The approach has been tested using the CMU-PIE dataset and our own driver dataset. Despite the very few facial keypoints required, the results are comparable to state-of-the-art techniques. The low computational cost of the method and its robustness make it feasible to integrate it into mass consumer devices as a real-time application.
A reduced feature set for driver head pose estimation
S1568494616301764
In this paper, the problem of relaxed observer design of discrete-time nonlinear systems is studied by developing a novel ranking-based switching mechanism. To do this, the useful ranking information of the normalized fuzzy weighting functions is utilized in order to give a denser subdivision of the normalized fuzzy weighting function space and therefore essentially yields the proposed ranking-based switching mechanism. Based on the obtained switching mechanism, a family of switching observers can be developed for the purpose of guaranteeing the estimation error system to be asymptotically stable with less conservatism than the existing results available in the references. Finally, two numerical examples are presented to illustrate the advantages of the proposed method.
Relaxed observer design of discrete-time nonlinear systems via a novel ranking-based switching mechanism
S1568494616301776
Teaching-Learning-Based-Optimization (TLBO) is a population-based Evolutionary Algorithm which uses an analogy of the influence of a teacher on the output of learners in a class. TLBO has been reported to obtain very good results for many constrained and unconstrained benchmark functions and engineering problems. The choice of TLBO by many researchers is partially based on the study of TLBO's performance on standard benchmark functions. In this paper, we explore the performance on several of these benchmark functions, which reveals an inherent origin bias within the Teacher Phase of TLBO. This previously unexplored origin bias allows the TLBO algorithm to more easily solve benchmark functions, with higher success rates, when the objective function has its optimal solution at the origin. The performance on such problems must be studied to understand the performance effects of the origin bias. A geometric interpretation is applied to the Teaching and Learning Phases of TLBO. From this interpretation, the spatial convergence of the population is described, where it is shown that the origin bias is directly tied to the spatial convergence of the population. The origin bias is then explored by examining the performance effect due to the origin location within the objective function and the rate of convergence. It is concluded that, although the algorithm is successful in many engineering problems, TLBO does indeed have an origin bias affecting the population convergence and the success rates on objective functions with origin solutions. This paper aims to inform researchers using TLBO of the performance effects of the origin bias and the importance of discussing its effects when evaluating TLBO.
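For readers unfamiliar with the Teacher Phase discussed above, a minimal NumPy sketch of the standard TLBO teacher update is given below (population, fitness function and bounds are placeholders); the (teacher − TF·mean) term is the part whose interaction with origin-centred optima is analysed in the paper.

```python
import numpy as np

def tlbo_teacher_phase(pop, fitness, rng=np.random.default_rng(0)):
    """One standard TLBO Teacher Phase step (minimization)."""
    teacher = pop[np.argmin(fitness)]          # best learner acts as the teacher
    mean = pop.mean(axis=0)                    # class (population) mean
    tf = rng.integers(1, 3)                    # teaching factor, randomly 1 or 2
    r = rng.random(pop.shape)
    return pop + r * (teacher - tf * mean)     # move learners toward teacher - TF*mean

# On a sphere-like function whose optimum sits at the origin, the (teacher - TF*mean)
# term repeatedly pulls the population mean toward zero, which is the bias studied above.
pop = np.random.uniform(-5, 5, size=(20, 2))
fit = np.sum(pop ** 2, axis=1)
print(tlbo_teacher_phase(pop, fit)[:3])
```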
On the convergence and origin bias of the Teaching-Learning-Based-Optimization algorithm
S1570870514000481
Understanding security failures of cryptographic protocols is the key to both patching existing protocols and designing future schemes. In this work, we investigate two recent proposals in the area of smart-card-based password authentication for security-critical real-time data access applications in hierarchical wireless sensor networks (HWSN). Firstly, we analyze an efficient and DoS-resistant user authentication scheme introduced by Fan et al. in 2011. This protocol is the first attempt to address the problems of user authentication in HWSN and only involves lightweight cryptographic primitives, such as one-way hash function and XOR operations, and thus it is claimed to be suitable for the resource-constrained HWSN environments. However, it actually has several security loopholes being overlooked, and we show it is vulnerable to user anonymity violation attack, smart card security breach attack, sensor node capture attack and privileged insider attack, as well as its other practical pitfalls. Then, A.K. Das et al.’s protocol is scrutinized, and we point out that it cannot achieve the claimed security goals: (1) It is prone to smart card security breach attack; (2) it fails to withstand privileged insider attack; and (3) it suffers from the defect of server master key disclosure. Our cryptanalysis results discourage any practical use of these two schemes and reveal some subtleties and challenges in designing this type of schemes. Furthermore, using the above two foremost schemes as case studies, we take a first step towards investigating the underlying rationale of the identified security failures, putting forward three basic principles which we believe will be valuable to protocol designers for advancing more robust two-factor authentication schemes for HWSN in the future.
Understanding security failures of two-factor authentication schemes for real-time applications in hierarchical wireless sensor networks
S1574013714000100
This paper presents the current state of the art on attack and defense modeling approaches that are based on directed acyclic graphs (DAGs). DAGs allow for a hierarchical decomposition of complex scenarios into simple, easily understandable and quantifiable actions. Methods based on threat trees and Bayesian networks are two well-known approaches to security modeling. However there exist more than 30 DAG-based methodologies, each having different features and goals. The objective of this survey is to summarize the existing methodologies, compare their features, and propose a taxonomy of the described formalisms. This article also supports the selection of an adequate modeling technique depending on user requirements.
DAG-based attack and defense modeling: Don’t miss the forest for the attack trees
S1574013715000192
A fundamental challenge at the intersection of Artificial Intelligence and Databases consists of developing methods to automatically manage Knowledge Bases which can serve as a knowledge source for computer systems trying to replicate the decision-making ability of human experts. Although most of the tasks involved in the building, exploitation and maintenance of KBs are far from trivial, significant progress has been made during the last years. However, there are still a number of challenges that remain open. In fact, there are some issues to be addressed in order to empirically prove that the technology behind systems of this kind is mature and reliable.
Automated knowledge base management: A survey
S1574013715300216
Location management is an important area of mobile computing. Location management in mobile network deals with location registration and tracking of mobile terminals. The location registration process is called location update and the searching process is called paging. Various types of location management methods exist such as mobility based location management, data replication based location management, signal attenuation based location tracking, time, zone and distance based location update etc. In this paper, existing location management schemes are discussed and compared with respect to their cost consumption in terms of bytes. Finally the key issues are addressed in the context of location management for future generation mobile network.
Location management in mobile network: A survey
S1742287614000024
The detection of stego images, used as carriers for secret messages in nefarious activities, forms the basis of Blind Image Steganalysis. The main issue in Blind Steganalysis is the non-availability of knowledge about the steganographic technique applied to the image. Feature extraction approaches best suited for Blind Steganalysis have either dealt with only a few features or with a single domain of the image. Moreover, these approaches lead to a low detection percentage. The main objective of this paper is to improve the detection percentage. In this paper, the focus is on Blind Steganalysis of JPEG images through the process of dilation, which includes splitting the given image into RGB components followed by transformation of each component into three domains, viz., frequency, spatial, and wavelet. Extracted features from each domain are given to a Support Vector Machine (SVM) classifier that classifies the image as steg or clean. The proposed process of dilation was tested by experiments with varying embedded text sizes and varying numbers of extracted features on the trained SVM classifier. The Overall Success Rate (OSR) was chosen as the performance metric of the proposed solution, and it is found to be effective, compared with existing solutions, in detecting a higher percentage of steg images.
Blind Image Steganalysis of JPEG images using feature extraction through the process of dilation
S1742287615000717
Mobile security threats have recently emerged because of the fast growth in mobile technologies and the essential role that mobile devices play in our daily lives. For that, and to particularly address threats associated with malware, various techniques are developed in the literature, including ones that utilize static, dynamic, on-device, off-device, and hybrid approaches for identifying, classifying, and defend against mobile threats. Those techniques fail at times, and succeed at other times, while creating a trade-off of performance and operation. In this paper, we contribute to the mobile security defense posture by introducing Andro-AutoPsy, an anti-malware system based on similarity matching of malware-centric and malware creator-centric information. Using Andro-AutoPsy, we detect and classify malware samples into similar subgroups by exploiting the profiles extracted from integrated footprints, which are implicitly equivalent to distinct characteristics. The experimental results demonstrate that Andro-AutoPsy is scalable, performs precisely in detecting and classifying malware with low false positives and false negatives, and is capable of identifying zero-day mobile malware.
Andro-AutoPsy: Anti-malware system based on similarity matching of malware and malware creator-centric information
S1746809413000360
An efficient and reliable software-based ECG data compression and transmission scheme is proposed here. The algorithm has been applied to ECG data from all 12 leads taken from the PTB diagnostic ECG database (PTB-DB). First of all, R-peaks are detected by a differentiation and squaring technique and the QRS regions are located. To achieve strictly lossless compression in the QRS regions and a tolerable lossy compression in the rest of the signal, two different compression algorithms have been used. The whole compression scheme is designed so that the compressed file contains only ASCII characters. These characters are transmitted using the internet-based Short Message Service (SMS), and at the receiving end the original ECG signal is recovered using just the reverse logic of the compression. It is observed that the proposed algorithm can reduce the file size significantly (compression ratio: 22.47) while preserving the ECG signal morphology.
ECG signal compression using ASCII character encoding and transmission via SMS
S1746809413000372
Magnetic resonance imaging (MRI) is a sensitive diagnostic method for improving the diagnostic capacity for hepatic cirrhosis and determining its accurate characterization. However, hepatic MRI has some shortcomings in detecting and classifying hepatic cirrhosis in clinical practice, especially when non-enhanced MRI is used for diagnosing early hepatic cirrhosis. A computer-aided diagnostic (CAD) system, including quantitative description of the lesion and automatic classification, can provide radiologists or physicians with an alternative second opinion to efficiently exploit the abundant information of hepatic MRI. However, such a system is expected to characterize the lesion comprehensively and guarantee a high classification rate. In this paper, a new CAD system for detecting and classifying hepatic cirrhosis against normal hepatic tissue in non-enhanced MRI is presented. Following a prior approach, six texture features with different properties based on the gray level difference method are extracted from regions of interest (ROIs). Then, a duplicative-feature support vector machine (DFSVM) is proposed for feature selection and classification. Firstly, the search process of DFSVM imitates the diagnosis made by doctors: a doctor will take one more feature into consideration until the final diagnosis, regardless of whether the feature has been used before, so our algorithm is consistent with the process of clinical diagnosis. Secondly, the impact of the most valuable features is strengthened, and high prediction performance can thus be obtained. Experimental results also illustrate a satisfactory classification rate. The performance of the extracted features and of normalization is studied, and the method is also compared with a typical ANN classifier.
Cirrhosis classification based on MRI with duplicative-feature support vector machine (DFSVM)
S1746809413000517
Contemporary methods of atrial flutter (AFL), atrial tachycardia (AT), and atrial fibrillation (AF) monitoring, although superior to the standard 12-lead ECG and symptom-based monitoring, are unable to accurately discriminate between AF, AFL and AT. Thus, there is a need to develop accurate, automated, and comprehensive atrial arrhythmia detection algorithms using standard ECG recorders. To this end, we have developed a sensitive and real-time realizable algorithm for accurate AFL and AT detection using any standard electrocardiographic recording. Our novel method for automatic detection of atrial flutter and atrial tachycardia uses a Bayesian approach followed by a high resolution time–frequency spectrum. We find the TQ interval of the electrocardiogram (ECG) corresponding to atrial activity by using a particle filter (PF), and analyze the atrial activity with a high resolution time–frequency spectral method: variable frequency complex demodulation (VFCDM). The rationale for using a high-resolution time–frequency algorithm is that our approach tracks the time-varying fundamental frequency of atrial activity, where AT is within 2.0–4.0Hz, AFL is within 4.0–5.3Hz and NSR is found at frequencies less than 2.0Hz. For classifications of AFL (n =22), AT (n =10) and normal sinus rhythms (NSR) (n =29), we found that our approach resulted in accuracies of 0.89, 0.87 and 0.91, respectively; the overall accuracy was 0.88.
Atrial flutter and atrial tachycardia detection using Bayesian approach with high resolution time–frequency spectrum from ECG recordings
S1746809413000529
Automatic image segmentation of immunohistologically stained breast tissue sections helps pathologists to discover the cancer disease earlier. The detection of the real number of cancer nuclei in the image is a very tedious and time consuming task. Segmentation of cancer nuclei, especially touching nuclei, presents many difficulties for separating them with traditional segmentation algorithms. This paper presents a new automatic scheme to perform both classification of breast stained nuclei and segmentation of touching nuclei in order to obtain the total number of cancer nuclei in each class. Firstly, a modified geometric active contour model is used for multiple contour detection of positive and negative nuclear staining in the microscopic image. Secondly, a touching-nuclei separation method based on the watershed algorithm and a concave vertex graph is proposed to perform accurate quantification of the different stains. Finally, benign nuclei are identified by their morphological features and removed automatically from the segmented image for positive cancer nuclei assessment. The proposed classification and segmentation schemes are tested on two datasets of breast cancer cell images containing different levels of malignancy. The experimental results show the superiority of the proposed methods when compared with other existing classification and segmentation methods. On the complete image database, the segmentation accuracy in terms of the number of cancer nuclei is over 97%, an improvement of 3–4% over earlier methods.
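The watershed step for separating touching objects is a standard building block; a small scikit-image/SciPy sketch using a distance-transform-seeded watershed on a toy mask of two overlapping discs follows. The concave-vertex-graph refinement and the active contour stage described in the abstract are not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def split_touching_nuclei(binary_mask):
    """Split touching nuclei with a distance-transform-seeded watershed."""
    distance = ndi.distance_transform_edt(binary_mask)
    # Seeds: connected components of the high-distance "cores" of each nucleus
    markers, _ = ndi.label(distance > 0.5 * distance.max())
    return watershed(-distance, markers, mask=binary_mask)

# Two overlapping discs as a toy example of touching nuclei
yy, xx = np.mgrid[0:100, 0:100]
mask = ((xx - 30) ** 2 + (yy - 50) ** 2 < 22 ** 2) | ((xx - 70) ** 2 + (yy - 50) ** 2 < 22 ** 2)
print(np.unique(split_touching_nuclei(mask)))   # background label 0 plus two nucleus labels
```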
Automatic image segmentation of nuclear stained breast tissue sections using color active contour model and an improved watershed method
S1746809413000530
Traditional finite element (FE) analysis is computationally demanding. The computational time becomes prohibitively long when multiple loading and boundary conditions need to be considered such as in musculoskeletal movement simulations involving multiple joints and muscles. Presented in this study is an innovative approach that takes advantage of the computational efficiency of both the dynamic multibody (MB) method and neural network (NN) analysis. A NN model that captures the behavior of musculoskeletal tissue subjected to known loading situations is built, trained, and validated based on both MB and FE simulation data. It is found that nonlinear, dynamic NNs yield better predictions over their linear, static counterparts. The developed NN model is then capable of predicting stress values at regions of interest within the musculoskeletal system in only a fraction of the time required by FE simulation.
Application of neural networks for the prediction of cartilage stress in a musculoskeletal system
S1746809413000542
This paper proposes an individualized approach to closed-loop control of depth of hypnosis during propofol anesthesia. The novelty of the paper lies in the individualization of the controller at the end of the induction phase of anesthesia, based on a patient model identified from the dose–response relationship during induction of anesthesia. The proposed approach is shown to be superior to administration of propofol based on population-based infusion schemes tailored to individual patients. This approach has the potential to outperform fully adaptive approaches in regards to controller robustness against measurement variability due to surgical stimulation. To streamline controller synthesis, two output filters were introduced (inverting the Hill dose–response model and the linear time-invariant sensor model), which yield a close-to-linear representation of the system dynamics when used with a compartmental patient model. These filters are especially useful during the induction phase of anesthesia in which a nonlinear dose–response relationship complicates the design of an appropriate controller. The proposed approach was evaluated in simulation on pharmacokinetic and pharmacodynamic models of 44 patients identified from real clinical data. A model of the NeuroSense, a hypnotic depth monitor based on wavelet analysis of EEG, was also included. This monitor is similar to the well-known BIS, but has linear time-invariant dynamics and does not introduce a delay. The proposed scheme was compared with a population-based controller, i.e. a controller only utilizing models based on demographic covariates for its tuning. On average, the proposed approach offered 25% improvement in disturbance attenuation, measured as the integrated absolute error following a step disturbance. The corresponding standard deviation from the reference was also decreased by 25%. Results are discussed and possible directions of future work are proposed.
Individualized closed-loop control of propofol anesthesia: A preliminary study
S1746809413000566
The heart rate variability (HRV) spectral parameters are classically used for studying the autonomic nervous system, as they allow the evaluation of the balance between the sympathetic and parasympathetic influences on heart rhythm. However, this evaluation is usually based on fixed frequency regions, which does not allow for possible variations, or on an adaptive individual time-dependent spectral boundaries (ITSB) method that is sensitive to noisy environments. In order to overcome these difficulties, we propose the constrained Gaussian modeling (CGM) method, which dynamically models the power spectrum as a mixture of two Gaussian shapes. It appeared that this procedure was able to accurately follow the exact parameters in the case of simulated data, in comparison with parameter estimation obtained with a rigid frequency-cutting approach or with the ITSB algorithm. Real data results obtained on a classical stand test and on the Fantasia database are also presented and discussed.
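A simple way to see the two-Gaussian spectral modeling idea is an unconstrained least-squares fit of two Gaussian shapes to a power spectrum; the SciPy sketch below uses a synthetic placeholder spectrum, and the constraints that define CGM are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(f, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian shapes, standing in for the LF and HF spectral components."""
    g = lambda a, m, s: a * np.exp(-0.5 * ((f - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

# Placeholder HRV spectrum with an LF peak near 0.1 Hz and an HF peak near 0.25 Hz
f = np.linspace(0.04, 0.5, 200)
psd = two_gaussians(f, 1.0, 0.10, 0.02, 0.6, 0.25, 0.04) + 0.02 * np.random.rand(f.size)

p0 = [1.0, 0.10, 0.02, 0.5, 0.25, 0.05]               # initial guesses for (LF, HF) shapes
params, _ = curve_fit(two_gaussians, f, psd, p0=p0)
print("fitted LF/HF centre frequencies:", params[1], params[4])
```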
HRV spectral estimation based on constrained Gaussian modeling in the nonstationary case
S1746809413000700
In this study, the correlations between blood lactate concentration (BLC), different vector electrocardiogram (VECG) parameters, ventilatory parameters and heart rate during exercise and recovery periods were investigated. The aim was to clarify the relationships between VECG parameters and different exercise intensity markers. Six (25–37 years old) non-athlete, healthy, male participants took part in the study. All participants performed two different bicycle ergospirometric protocols (P1 and P2) in order to attain different lactate levels with different heart rate profiles. A principal component regression (PCR) approach is introduced for preprocessing the VECG components. PCR was compared to Savitzky-Golay and wavelet filtering methods using simulated data. The performance of the PCR approach was clearly better in low signal-to-noise ratio (SNR) situations, and thus it enables reliable VECG estimates even during intensive exercise. As a result, strong positive mean individual correlations between BLC and T-wave kurtosis (P1: r =0.86 and P2: r =0.8, p <0.05 in 12/12 measurements) and negative correlations between BLC and cosRT (P1: r =−0.7, P2: r =−0.62, p <0.05 in 8/12 measurements) were observed. The results of this study indicate that VECG parameters (in addition to heart rate) can make a significant contribution to the monitoring of exercise intensity and recovery.
The correlation of vectorcardiographic changes to blood lactate concentration during an exercise test
S1746809413000712
Recently, there has been a growing interest in the sparse representation of signals over learned and overcomplete dictionaries. Instead of using fixed transforms such as the wavelets and its variants, an alternative way is to train a redundant dictionary from the image itself. This paper presents a novel de-speckling scheme for medical ultrasound and speckle corrupted photographic images using the sparse representations over a learned overcomplete dictionary. It is shown that the proposed algorithm can be used effectively for the removal of speckle by combining an existing pre-processing stage before an adaptive dictionary could be learned for sparse representation. Extensive simulations are carried out to show the effectiveness of the proposed filter for the removal of speckle noise both visually and quantitatively.
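A minimal scikit-learn sketch of the patch-based dictionary-learning-plus-sparse-coding step described above is given below; the image is a random placeholder, the parameters are illustrative, and the speckle-specific pre-processing stage mentioned in the abstract is omitted.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def despeckle_sketch(noisy, patch_size=(8, 8), n_atoms=128):
    """Learn an overcomplete patch dictionary from the image and reconstruct it sparsely."""
    patches = extract_patches_2d(noisy, patch_size)
    data = patches.reshape(patches.shape[0], -1)
    mean = data.mean(axis=1, keepdims=True)
    data = data - mean                                       # remove each patch's DC component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=4, random_state=0)
    code = dico.fit(data).transform(data)                    # sparse codes over learned atoms
    recon = (code @ dico.components_ + mean).reshape(patches.shape)
    return reconstruct_from_patches_2d(recon, noisy.shape)   # average overlapping patches

noisy = np.random.rand(64, 64)                               # placeholder speckled image
print(despeckle_sketch(noisy).shape)
```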
Removal of correlated speckle noise using sparse and overcomplete representations
S1746809413000797
Arteriosclerosis is considered to be a major cause of cardiovascular diseases, which account for approximately 30% of the causes of death in the world. We have recently demonstrated a strong correlation between arteriosclerosis (arterial elasticity) and two characteristics: the maximum systolic velocity (S1) and the systolic second peak velocity (S2) of the common carotid artery flow velocity waveform (CCFVW). The CCFVW can be measured using a small portable measuring device. However, there is currently no theoretical evidence supporting the causes of the relation between the CCFVW and arterial elasticity, or the origin of the CCFVW characteristics. In this study, the arterial blood flow was simulated using a one-dimensional systemic arterial segment model of the human artery in order to conduct a qualitative evaluation of the relationship between arterial elasticity and the characteristics of the CCFVW. The simulation was carried out based on discretized segments with the physical properties of a viscoelastic tube (the cross-sectional area at the proximal and terminal ends, the length, and the compliance per unit area of the tube, C_S). The findings obtained through this study revealed that the simulated CCFVW had shape characteristics similar to those of the measured CCFVW. Moreover, when the compliance C_S of the model was decreased, the first peak of the simulated CCFVW decreased and the second peak increased. Further, by separating the anterograde pulse wave and the reflected pulse wave, which form the CCFVW, we found that the decrease in the first peak of the simulated CCFVW was due to the arrival of a pulse wave reflected from the head beyond the common carotid artery close to the arrival of the anterograde pulse wave ejected directly from the heart, and that the increase in the second peak resulted from the arrival of the peak of the pulse wave reflected from the thoracic aorta. These results establish that the CCFVW characteristics contribute to the assessment of arterial elasticity.
Evaluation of blood flow velocity waveform in common carotid artery using multi-branched arterial segment model of human arteries
S1746809413000815
Sleep apnoea is a very common sleep disorder which can cause symptoms such as daytime sleepiness, irritability and poor concentration. This paper presents a combinational feature extraction approach based on nonlinear features extracted from the Electrocardiogram (ECG) Reconstructed Phase Space (RPS) and commonly used frequency domain features for the detection of sleep apnoea. Here, 6 nonlinear features extracted from the ECG RPS are combined with 3 frequency-based features to construct the final feature set. The nonlinear features consist of Detrended Fluctuation Analysis (DFA), Correlation Dimension (CD), the 3 Largest Lyapunov Exponents (LLEs) and Spectral Entropy (SE). The final proposed feature set shows about 94.8% accuracy over the Physionet sleep apnoea dataset using a kernel based SVM classifier. This research also shows that using nonlinear analysis to detect sleep apnoea can potentially improve the classification accuracy of apnoea detection systems.
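A compact sketch of the classification stage, assuming precomputed per-segment nonlinear RPS features and deriving three generic frequency-domain descriptors (spectral entropy, mean frequency, median frequency) with SciPy before feeding a kernel SVM; the arrays, sampling rate and choice of frequency features are placeholders rather than the paper's exact pipeline.

import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def freq_features(x, fs):
    f, pxx = welch(x, fs=fs)
    p = pxx / pxx.sum()
    spec_entropy = -np.sum(p * np.log2(p + 1e-12))
    mean_freq = np.sum(f * p)
    median_freq = f[np.searchsorted(np.cumsum(p), 0.5)]
    return [spec_entropy, mean_freq, median_freq]

rng = np.random.default_rng(1)
rps_feats = rng.random((100, 6))                 # DFA, CD, 3 LLEs, SE (placeholders)
segments = rng.standard_normal((100, 6000))      # 1-min ECG segments at 100 Hz (placeholders)
X = np.hstack([rps_feats, [freq_features(s, 100) for s in segments]])
y = rng.integers(0, 2, 100)                      # apnoea / normal labels (placeholders)
print(cross_val_score(SVC(kernel='rbf'), X, y, cv=5).mean())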
Sleep apnoea detection from ECG using features extracted from reconstructed phase space and frequency domain
S1746809413000827
This paper introduces a data-driven methodology for detecting therapeutically correct and incorrect measurements in continuous glucose monitoring systems (CGMSs) in an intensive care unit (ICU). The data collected from 22 patients in an ICU with insulin therapy were obtained following the protocol established in the ICU. Measurements were classified using principal component analysis (PCA) in combination with case-based reasoning (CBR), where a PCA model was built to extract features that were used as inputs of the CBR system. CBR was trained to recognize patterns and classify these data. Experimental results showed that this methodology is a potential tool to distinguish between therapeutically correct and incorrect measurements from a CGMS, using the information provided by the monitor itself, and incorporating variables about the patient's clinical condition.
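One plausible reading of the PCA-plus-CBR pipeline, sketched with scikit-learn: PCA extracts the features that populate the case base, and retrieval of the most similar past cases is approximated here by a k-nearest-neighbour vote. The feature matrix, labels and all hyperparameters are hypothetical.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.random((300, 12))                  # hypothetical per-window CGM/therapy features
y = rng.integers(0, 2, 300)                # 1 = therapeutically correct, 0 = incorrect

model = make_pipeline(PCA(n_components=3), KNeighborsClassifier(n_neighbors=5))
model.fit(X, y)
print(model.predict(rng.random((1, 12))))  # classify a new measurement window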
Principal component analysis in combination with case-based reasoning for detecting therapeutically correct and incorrect measurements in continuous glucose monitoring systems
S1746809413000839
Freehand three-dimensional ultrasound imaging is a highly attractive research area because it is capable of volumetric visualization and analysis of tissues and organs. The reconstruction algorithm plays a key role in the construction of three-dimensional ultrasound volume data with higher image quality and faster reconstruction speed. However, a systematic approach to this problem is still missing. A new fast marching method (FMM) for three-dimensional ultrasound volume reconstruction using a tracked, hand-held probe is proposed in this paper. Our reconstruction approach consists of two stages: a bin-filling stage and a hole-filling stage. In the bin-filling stage, each pixel in the B-scan images is traversed and its intensity value is assigned to its nearest voxel. For efficient and accurate reconstruction, we present a new hole-filling algorithm based on the fast marching method. Our algorithm advances the interpolation boundary along its normal direction and fills the areas closest to known voxel points first, which ensures that the structural details of the image are preserved. Experimental results on both an ultrasonic abdominal phantom and the in vivo urinary bladder of a human subject, together with comparisons with several popular algorithms, demonstrate its improvement in both reconstruction accuracy and efficiency.
An accurate and effective FMM-based approach for freehand 3D ultrasound reconstruction
S1746809413000840
Photoplethysmographic signals obtained from a webcam are analyzed through a continuous wavelet transform to assess the instantaneous heart rate. The measurements are performed on human faces. Robust image and signal processing are introduced to overcome drawbacks induced by light and motion artifacts. In addition, the respiration signal is recovered using the heart rate series by respiratory sinus arrhythmia, the natural variation in heart rate driven by the respiration. The presented algorithms are implemented on a mid-range computer and the overall method works in real-time. The performance of the proposed heart and breathing rates assessment method was evaluated using approved contact probes on a set of 12 healthy subjects. Results show high degrees of correlation between physiological measurements even in the presence of motion. This paper provides a motion-tolerant method that remotely measures the instantaneous heart and breathing rates. These parameters are particularly used in telemedicine and affective computing, where the heart rate variability analysis can provide an index of the autonomic nervous system.
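A minimal sketch of instantaneous heart-rate estimation by ridge-following on a continuous wavelet transform, using PyWavelets with a Morlet wavelet; the webcam face-ROI extraction and motion-artifact handling described above are not shown, and the PPG trace, frame rate and cardiac band limits are assumptions.

import numpy as np
import pywt

fs = 30.0                                      # assumed webcam frame rate
t = np.arange(0, 60, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)              # placeholder PPG-like trace (~72 bpm)

target_hz = np.linspace(0.7, 3.0, 80)          # cardiac band, 42-180 bpm (assumed)
scales = pywt.central_frequency('morl') * fs / target_hz
coefs, freqs = pywt.cwt(ppg, scales, 'morl', sampling_period=1 / fs)

ridge = np.abs(coefs).argmax(axis=0)           # scale of maximal energy at each instant
instant_hr_bpm = 60.0 * freqs[ridge]           # instantaneous heart rate over time

The breathing rate could then be recovered from the slow modulation of instant_hr_bpm (respiratory sinus arrhythmia), for instance by a second spectral analysis of the heart-rate series.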
Continuous wavelet filtering on webcam photoplethysmographic signals to remotely assess the instantaneous heart rate
S1746809413000864
An alternative technique for sleep stage classification based on heart rate variability (HRV) is presented in this paper. A simple subject-specific scheme and a more practical subject-independent scheme were designed to classify wake, rapid eye movement (REM) sleep and non-REM (NREM) sleep. Forty-one HRV features extracted from the RR sequences of 45 healthy subjects were trained and tested with the random forest (RF) method. Among the features, 25 were newly proposed or applied to sleep studies for the first time. For the subject-independent classifier, all features were normalized with our fractile-value-based method. In addition, the importance of each feature for sleep staging was also assessed by RF and the appropriate number of features was explored. For the subject-specific classifier, a mean accuracy of 88.67% with a Cohen's kappa statistic κ of 0.7393 was achieved, whereas the accuracy and κ dropped to 72.58% and 0.4627, respectively, for the subject-independent classifier. Some of the newly proposed HRV features performed even more effectively than the conventional ones. The proposed method could be used as an alternative or auxiliary technique for rough and convenient sleep stage classification.
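A sketch of the classification stage with scikit-learn: a quantile transform stands in for the fractile-based normalization described above, a random forest does the staging, and impurity-based importances rank the features. The feature matrix, labels and all settings are placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import QuantileTransformer

rng = np.random.default_rng(3)
X = rng.random((1000, 41))                  # 41 HRV features per epoch (placeholders)
y = rng.integers(0, 3, 1000)                # 0 = wake, 1 = REM, 2 = NREM (placeholders)

Xn = QuantileTransformer(n_quantiles=100).fit_transform(X)      # fractile-like normalization
rf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(rf, Xn, y, cv=5).mean())
ranking = np.argsort(rf.fit(Xn, y).feature_importances_)[::-1]  # most informative features first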
Sleep stages classification based on heart rate variability and random forest
S1746809413000888
We present a novel approach to treating lung sounds contaminated by heart sounds by means of quasi-periodic signal modeling. Heart sounds are described through the long- and short-term quasi-periodicity generated by cardiac cycle repetition and by oscillations caused by valve openings and closures, respectively. In terms of signals, these quasi-periodicities drive a time-variant heart sound envelope and phase, which are modeled by single and piecewise time polynomials, respectively. Single polynomials account for slow and continuous envelope time variations, while piecewise polynomials capture fast and abrupt phase changes in short time intervals. Such a compact signal description provides an efficient way to localize fundamental heart sound (FHS) components and subsequently remove them from lung sounds. The results show that the proposed method outperforms two reference methods for medium (15ml/s/kg) and high (22.5ml/s/kg) air flow rates.
Quasi-periodic modeling for heart sound localization and suppression in lung sounds
S174680941300089X
Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the biomedical engineering community due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation originally developed for text document analysis is extended to biomedical time series representation. In particular, similar to the bag-of-words model used in the text document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which is the number of occurrences of a codeword in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.
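A minimal sketch of the bag-of-words idea for time series: fixed-length overlapping segments are the "words", a k-means codebook defines the vocabulary, and each series becomes a normalized codeword histogram. Segment length, step, codebook size and the input series are illustrative assumptions, not the paper's settings.

import numpy as np
from sklearn.cluster import KMeans

def extract_segments(series, seg_len=32, step=8):
    return np.array([series[i:i + seg_len]
                     for i in range(0, len(series) - seg_len + 1, step)])

def bow_histogram(series, codebook, seg_len=32, step=8):
    words = codebook.predict(extract_segments(series, seg_len, step))  # assign codewords
    hist = np.bincount(words, minlength=codebook.n_clusters)
    return hist / hist.sum()                                           # normalized counts

rng = np.random.default_rng(4)
corpus = [rng.standard_normal(2000) for _ in range(50)]                # placeholder EEG/ECG epochs
codebook = KMeans(n_clusters=100, n_init=10, random_state=0).fit(
    np.vstack([extract_segments(s) for s in corpus]))
features = np.array([bow_histogram(s, codebook) for s in corpus])      # one histogram per series

The resulting histograms can then be fed to any standard classifier.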
Bag-of-words representation for biomedical time series classification
S1746809413000906
Electrocardiography (ECG) signals are often contaminated by various kinds of noise or artifacts, for example, morphological changes due to motion artifacts or non-stationary noise due to muscular contraction (EMG). Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized. In this paper, a novel ECG enhancement algorithm is proposed based on sparse derivatives. Artifacts are reduced by solving a convex ℓ1 optimization problem that models the clean ECG signal as the sum of two signals whose second- and third-order derivatives (differences), respectively, are sparse. The algorithm is applied to a QRS detection system and validated using the MIT-BIH Arrhythmia database (109,452 annotations), resulting in a sensitivity of Se = 99.87% and a positive predictivity of +P = 99.88%.
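A sketch of one way to pose such a sparse-derivative decomposition with CVXPY, using explicit second- and third-order difference matrices; the regularization weights, the placeholder signal and the default solver are illustrative, and this is not the paper's exact formulation or algorithm.

import numpy as np
import cvxpy as cp

def enhance_ecg(y, lam2=5.0, lam3=5.0):
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)      # second-order difference operator
    D3 = np.diff(np.eye(n), n=3, axis=0)      # third-order difference operator
    x1, x2 = cp.Variable(n), cp.Variable(n)
    # Clean ECG modeled as x1 + x2, each with a sparse higher-order derivative.
    cost = (cp.sum_squares(y - x1 - x2)
            + lam2 * cp.norm1(D2 @ x1)
            + lam3 * cp.norm1(D3 @ x2))
    cp.Problem(cp.Minimize(cost)).solve()
    return x1.value + x2.value

noisy = np.cumsum(np.random.randn(400)) * 0.1 + np.random.randn(400)  # placeholder noisy trace
clean = enhance_ecg(noisy)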
ECG enhancement and QRS detection based on sparse derivatives
S1746809413000918
Prehensile hand gestures play an important role in daily living for grasping or holding objects stably. In order to realize the accurate recognition of eight prehensile hand gestures with a minimal number of electrodes, an off-line myoelectric control system with only two electrodes is developed. We choose Mean Absolute Value, Variance, the fourth-order Autoregressive Coefficients, Zero Crossings, Mean Frequency and Middle Frequency as the original electromyography feature set and utilize linear discriminant analysis to reduce the dimensionality and perform classification. The extent of dimensionality reduction is investigated, and on this basis an average accuracy of 97.46% is achieved in the recognition of six hand gestures. The optimal feature set based on the original feature set is determined to be Mean Absolute Value, Variance, and the fourth-order Autoregressive Coefficients, which yields an average accuracy of 95.94% in the recognition of eight hand gestures. An averaging method is proposed to improve the accuracy further, raising the average accuracy for the eight gestures to 98.12%, with the best individual accuracy for some hand gestures reaching 100%.
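A sketch of the feature extraction and LDA stages, computing mean absolute value, variance and zero crossings with NumPy and fourth-order AR coefficients via the Yule-Walker routine in statsmodels; the two-channel EMG windows, gesture labels and window length are hypothetical, not the study's recordings.

import numpy as np
from statsmodels.regression.linear_model import yule_walker
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def emg_features(x):
    mav = np.mean(np.abs(x))
    var = np.var(x)
    zc = np.sum(np.diff(np.signbit(x).astype(int)) != 0)    # zero crossings
    ar, _ = yule_walker(x, order=4)                          # 4th-order AR coefficients
    return np.concatenate([[mav, var, zc], ar])

rng = np.random.default_rng(5)
windows = rng.standard_normal((400, 2, 256))                 # two-channel EMG windows (placeholders)
X = np.array([np.concatenate([emg_features(w[0]), emg_features(w[1])]) for w in windows])
y = rng.integers(0, 8, 400)                                  # eight prehensile gestures (placeholders)

lda = LinearDiscriminantAnalysis(n_components=7).fit(X, y)   # reduces dimension and classifies
print(lda.score(X, y))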
The recognition of multi-finger prehensile postures using LDA
S174680941300092X
A spectral angle based feature extraction method, Spectral Clustering Independent Component Analysis (SC-ICA), is proposed in this work to improve brain tissue classification from Magnetic Resonance Images (MRI). SC-ICA gives equal priority to global and local features, thereby addressing the inefficiency of conventional approaches in abnormal tissue extraction. First, the input multispectral MRI is divided into different clusters by spectral-distance-based clustering. Then, Independent Component Analysis (ICA) is applied to the clustered data, in conjunction with Support Vector Machines (SVM), for brain tissue analysis. Normal and abnormal datasets, consisting of real and synthetic T1-weighted, T2-weighted and proton density/fluid-attenuated inversion recovery images, were used to evaluate the performance of the new method. Comparative analysis with ICA-based SVM and other conventional classifiers established the stability and efficiency of SC-ICA based classification, especially in the reproduction of small abnormalities. Analysis of clinical abnormal cases demonstrated this through the highest Tanimoto index/accuracy values, 0.75/98.8%, compared with the ICA-based SVM results of 0.17/96.1% for reproduced lesions. Experimental results recommend the proposed method as a promising approach in clinical and pathological studies of brain diseases.
Spectral clustering independent component analysis for tissue classification from brain MRI
S1746809413000931
This paper addresses the design of blood glucose control during the postprandial period for Type 1 diabetes patients. An artificial pancreas for ambulatory purposes has to deal with the delays inherent to the subcutaneous route, the carbohydrate intakes, the metabolic changes, the glucose sensor errors and noise, and the insulin pump constraints. A time response typically obtained in closed-loop insulin delivery shows hyperglycemia in the early postprandial period caused by the lag in insulin absorption, followed by hypoglycemia caused by control over-reaction. A hybrid control system is proposed in this paper to overcome these problems. An insulin bolus is administered prior to meals, as in open-loop control, whereas a PD controller is used for robust glucose regulation. The controller gain is progressively increased after the bolus from zero up to its nominal value as a function of the insulin on board, so that the PD controller becomes fully operational just when the insulin on board falls below a prescribed value. An excessive accumulation of active insulin is avoided in this way, drastically reducing the risk of hypoglycemia. The controller gain is adapted by means of a variable structure algorithm, allowing a very simple software implementation. The robust performance of the control algorithm is intensively assessed in silico on a cohort of virtual patients under challenging realistic scenarios considering mixed meals, circadian variations, time-varying uncertainties, discrete measurement and actuation, sensor errors and other disturbances.
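A toy discrete-time sketch of the gain-scheduling idea: a PD law whose gain is held near zero right after the meal bolus (high insulin on board, IOB) and ramps to its nominal value as IOB decays. The target, gains, IOB model and CGM values are invented for illustration and are not the paper's tuned controller or patient model.

import numpy as np

def pd_insulin_rate(g, g_prev, iob, dt, target=110.0, kp=0.02, kd=0.5, iob_max=2.0):
    """Toy PD law (U/h) scaled by insulin-on-board."""
    scale = np.clip(1.0 - iob / iob_max, 0.0, 1.0)   # 0 right after the bolus, 1 once IOB is low
    error = g - target
    derivative = (g - g_prev) / dt
    return max(0.0, scale * (kp * error + kd * derivative))

dt = 5.0                                  # CGM sampling period in minutes (assumed)
iob, g_prev = 4.0, 180.0                  # bolus just delivered, elevated glucose (assumed)
for g in [185.0, 182.0, 176.0, 168.0]:    # placeholder CGM readings (mg/dl)
    u = pd_insulin_rate(g, g_prev, iob, dt)
    iob = iob * np.exp(-dt / 60.0) + u * dt / 60.0   # crude first-order IOB decay (assumed)
    g_prev = g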
Postprandial blood glucose control using a hybrid adaptive PD controller with insulin-on-board limitation
S1746809413000943
Osteoporosis is a disease in which low bone mass and microarchitectural deterioration of bone tissue lead to increased bone fragility and a consequent increase in fracture risk. The objective of this paper is to develop and validate a new method to assess bone microarchitecture on radiographs. Taking into account the piecewise fractal nature of bone radiograph images, an appropriate fractal model (piecewise fractional Brownian motion) is used to characterize the trabecular bone network. Based on the Whittle estimator, a new method for calculating the Hurst exponent H is developed to better consider the piecewise fractal nature of the data. Different estimators are used and compared to the proposed method to discriminate two populations composed of healthy controls and osteoporotic patients. Our findings demonstrate that the new estimator proposed here provides effective results in terms of discrimination of the subjects and is better adapted to bone radiograph image analysis.
Piecewise Whittle estimator for trabecular bone radiograph characterization
S1746809413000967
To augment the classification accuracy of ultrasound computer-aided diagnosis (CAD) for breast tumor detection based on texture features, we propose extracting texture feature descriptors with the shearlet transform. The shearlet transform provides a sparse representation of high dimensional data with especially superior directional sensitivity at various scales. Therefore, shearlet-based texture feature descriptors can characterize breast tumors well. In order to objectively evaluate the performance of shearlet-based features, curvelet, contourlet, wavelet and gray level co-occurrence matrix based texture feature descriptors are also extracted for comparison. All these features were then fed to two different classifiers, support vector machine (SVM) and AdaBoost, to evaluate the consistency. The experimental results of breast tumor classification showed that the classification accuracy, sensitivity, specificity, positive predictive value, negative predictive value and Matthews correlation coefficient of the shearlet-based method were 91.0±3.8%, 92.5±6.6%, 90.0±3.8%, 90.3±3.8%, 92.6±6.3%, 0.822±0.078 by SVM, and 90.0±2.8%, 90.0±4.0%, 90.0±2.3%, 89.9±2.4%, 90.1±3.6%, 0.803±0.056 by AdaBoost, respectively. Most of the shearlet-based results significantly outperformed those of the other methods under both classifiers. The results suggest that the proposed method can well characterize the properties of breast tumors in ultrasound images, and has the potential to be used for breast CAD in ultrasound imaging.
Shearlet-based texture feature extraction for classification of breast tumor in ultrasound image
S1746809413000979
The qualitative definition of repolarization alternans (RA) as an every-other-beat alternation of the repolarization amplitude allows several possible quantitative characterizations of RA. In the absence of a standardization, any correct comparison among quantitative outputs of different automatic methods requires knowledge of the differences in the RA parameterization at the basis of their algorithms. Thus, the aim of the present study was to investigate the kind of information provided by five methods, namely the fast Fourier spectral method (FFTSM), the complex demodulation method (CDM), the modified moving average method (MMAM), the Laplacian likelihood ratio method (LLRM) and the heart-rate adaptive match filter method (AMFM), when characterizing RA in terms of its amplitude and location. Eight synthetic ECG recordings affected by stationary RA with uniform and triangular profiles localized along the ST segment, over the T wave, at the end of the T wave and all along the JT segment, respectively, were considered. Results indicate that quantitative RA characterization is method dependent. More specifically, the FFTSM and the LLRM provide a measure that matches the root mean square of the RA profile over the JT segment. Instead, the CDM and the AMFM compute RA amplitude as the mean value of the RA profile over the JT segment. Finally, the MMAM provides the maximum amplitude difference between consecutive beats along repolarization. RA location is characterized homogeneously among methods, since they all provide the time instant at which the center of mass of the alternation occurs.
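For orientation, a sketch of the simplest of the five approaches: a spectral alternans estimate computed from a beat-by-beat repolarization amplitude series with NumPy. The noise band, the normalization and the synthetic ABAB series are illustrative assumptions, and the other four methods are not shown.

import numpy as np

def spectral_alternans(amplitudes):
    """FFT-based alternans estimate from a per-beat repolarization amplitude
    series; alternans appears at 0.5 cycles/beat."""
    x = amplitudes - np.mean(amplitudes)
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0)                 # cycles per beat
    alt_power = spec[np.argmin(np.abs(freqs - 0.5))]
    noise = spec[(freqs > 0.40) & (freqs < 0.46)].mean()   # reference noise band (assumed)
    return np.sqrt(max(alt_power - noise, 0.0))            # alternans voltage estimate

beats = 100 + 12.5 * (-1) ** np.arange(128) + np.random.randn(128)  # placeholder ABAB series
print(spectral_alternans(beats))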
Quantitative characterization of repolarization alternans in terms of amplitude and location: What information from different methods?
S1746809413000980
The spatiotemporal characteristics of cardiac fibrillation are often investigated by using indices extracted from the spectrum of cardiac signals. However different signal acquisition systems may produce signals of different spectra and affect the estimation of some spectral indices. In this study, we investigate the robustness of four spectral indices previously proposed for describing fibrillation, namely the dominant frequency (DF), the peak frequency (PF), the median frequency (MF) and the organization index (OI). The effects of different lead configurations on the values of the spectral indices are statistically quantified and further analyzed in a database consisting of unipolar and bipolar intracardiac electrograms (EGM), recorded by implantable cardioverter-defibrillators during ventricular fibrillation. Our analysis shows that the lead configuration significantly affects the PF, the MF and the OI, whereas the DF remains unaffected. We further explore the nature of cardiac spectrum and show that unipolar EGM concentrate power at lower frequencies than bipolar EGM. We conclude that indices that depend on the envelope of the spectrum of cardiac signals are in general sensitive to the lead configuration.
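A sketch of how a few such spectral indices can be computed from the Welch PSD of an electrogram with SciPy; the analysis band, the bandwidth used for the organization index and the synthetic signal are assumptions, and the exact definitions of these indices vary somewhat across studies.

import numpy as np
from scipy.signal import welch

def spectral_indices(egm, fs, band=(3.0, 15.0), oi_bw=0.5):
    f, pxx = welch(egm, fs=fs, nperseg=4 * fs)
    sel = (f >= band[0]) & (f <= band[1])
    f, pxx = f[sel], pxx[sel]
    df = f[np.argmax(pxx)]                                     # dominant frequency
    mf = f[np.searchsorted(np.cumsum(pxx) / pxx.sum(), 0.5)]   # median frequency
    near = (np.abs(f - df) <= oi_bw) | (np.abs(f - 2 * df) <= oi_bw)
    oi = pxx[near].sum() / pxx.sum()                           # organization index
    return df, mf, oi

fs = 1000
t = np.arange(0, 8, 1 / fs)
egm = np.sin(2 * np.pi * 6.0 * t) + 0.3 * np.random.randn(len(t))  # placeholder VF-like EGM
print(spectral_indices(egm, fs))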
Analysis of the robustness of spectral indices during ventricular fibrillation
S1746809413000992
Driving under unacceptable levels of accumulated stress deteriorates the driver's vehicle control and risk-assessment capabilities, often inviting road accidents. The design of a safety-critical wearable driver assist system for continuous stress level monitoring requires the development of an intelligent algorithm capable of recognizing the driver's affective state and cumulatively accounting for increasing stress levels. Task-induced modifications in the rhythms of physiological signals acquired during real driving are clinically proven hallmarks for the quantitative analysis of stress and mental fatigue. The present work proposes a neural network based solution for learning driving-induced stress patterns and correlating them with the statistical, structural and time-frequency changes observed in the recorded biosignals. Physiological signals such as Galvanic Skin Response (GSR) and Photoplethysmography (PPG) were selected for the present work. A comprehensive performance analysis of the selected neural network configurations (both feedforward and recurrent) concluded that layer recurrent neural networks are the most suitable for stress level detection. This evaluation achieved an average precision of 89.23%, sensitivity of 88.83% and specificity of 94.92% when tested over 19 automotive drivers. The biofeedback on the driver's ongoing physiological state inferred by this neural network based engine would provide crucial information for on-board embedded safety systems to act upon. It is envisaged that such a driver-centric safety system will help save precious lives by providing fast and credible real-time alerts to drivers and their connected vehicles.
A comparative evaluation of neural network classifiers for stress level analysis of automotive drivers using physiological signals
S1746809413001006
Fast axonal conduction of action potentials in mammals relies on myelin insulation. Demyelination can cause slowed, blocked, desynchronized, or paradoxically excessive spiking that underlies the symptoms observed in demyelinating diseases. Feedback control via functional electrical stimulation (FES) seems to be a promising treatment modality in such diseases. However, there are challenges to implementing such a method for neurons: high nonlinearity, biological tissue constraints and unobservable ion channel states. To address this problem, we propose an estimation and tracking control strategy based on the Kalman filter, in order to enhance the reliability of action potential propagation in demyelinated neurons via FES. An unscented Kalman filter (UKF) is employed to estimate the unobservable states and parameters of the demyelinated neuron model from the membrane potential dynamics. Our method could promote the design of new closed-loop electrical stimulation systems for patients suffering from different nervous system dysfunctions.
Observer-based tracking control of abnormal oscillations in demyelination symptom
S1746809413001018
We propose a method for feature extraction from clinical color images, with application to the classification of skin lesions. The proposed feature extraction method is based on a tensor decomposition of the clinical color image of the skin lesion. Since a color image is naturally represented as a three-way tensor, it is reasonable to use multi-way techniques to capture the underlying information contained in the image. The extracted features are elements of the core tensor in the corresponding multi-way decomposition, and represent the spatial-spectral profile of the lesion. In contrast to common methods that exploit only either the texture or the spectral diversity of the tumor, the proposed approach simultaneously captures spatial and spectral characteristics. The procedure is tested on the problem of noninvasive diagnosis of melanoma from clinical color images of skin lesions, with an overall sensitivity of 82.1% and specificity of 86.9%. Our method compares favorably with the state of the art results reported in the literature and provides an interesting alternative to existing approaches.
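One multi-way decomposition with a core tensor is the Tucker model, sketched here with TensorLy on an RGB lesion image; the multilinear ranks and the random image are placeholders, and the paper does not necessarily use this particular decomposition or library.

import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

image = np.random.rand(128, 128, 3)            # placeholder RGB lesion image (height x width x color)

# Tucker model: the core tensor couples the spatial and spectral factor
# matrices, so its entries summarize the lesion's spatial-spectral profile.
core, factors = tucker(tl.tensor(image), rank=[8, 8, 3])   # ranks chosen for illustration
features = tl.to_numpy(core).ravel()           # 8*8*3 = 192 features per lesion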
Noninvasive diagnosis of melanoma with tensor decomposition-based feature extraction from clinical color image
S174680941300102X
Due to noise, speckle and other artifacts, automatic prostate segmentation is rather challenging, and using only low-level information such as the intensity gradient is insufficient to tackle the problem. In this paper, we propose an automatic prostate segmentation method combining intrinsic properties of TRUS images with high-level shape prior information. First, intrinsic properties of TRUS images, such as the intensity transition near the prostate boundary and the speckle-induced texture features obtained by Gabor filter banks, are integrated to deform the model toward the target contour. These properties make our method insensitive to high gradient regions introduced by noise and speckle. Then, the preliminary segmentation is fine-tuned by the non-parametric shape prior, which is optimally distilled by non-parametric kernel density estimation, as it can approximate arbitrary distributions. The refinement proceeds along the direction of the mean shift vector and considerably strengthens the robustness of the method. The performance of our method is validated by experimental results. Compared with the state of the art, the accuracy and robustness of the method are quite promising, and the mean absolute distance is only 1.21±0.85mm.
TRUS image segmentation with non-parametric kernel density estimation shape prior
S1746809413001031
Background: Identification of individualized models for patients with type 1 diabetes is of vital importance for the development of a successful artificial pancreas and other model-based strategies of insulin treatment. However, the huge intra-patient glycemic variability frequently prevents the identification of reliable models, especially in the postprandial period. In this work, the identification of postprandial models characterizing intra-patient variability is addressed. Methods: Regarding the postprandial response, uncertainties due to physiological variability, input errors in the insulin infusion rate and in the meal content estimation are characterized by means of interval models, which predict a glucose envelope containing all possible patient responses according to the model. Multi-objective optimization is performed over a cohort of virtual patients, minimizing both the fitting error and the width of the output glucose envelope. A Pareto front is then built, ranging from classic identification representing average behaviors to interval identification guaranteeing full enclosure of the measurements. A method for selecting the best individual in the Pareto front for identification from home monitoring data with a continuous glucose monitor is presented, reducing the overestimation of the patient's variability due to monitor inaccuracies and noise. Results: Identification using glucose reference data provides model bands that accurately fit all data points in the virtual data set used. Identification from continuous glucose monitor data, using two different width estimation procedures, yields very similar prediction capabilities, with around 60% of the data points predicted and less than a 5% average error. Conclusions: In this work, a new approach to evaluating intra-patient variability in the identification of postprandial models is presented. The proposed method is feasible and shows good prediction capabilities over a 5-h time horizon as compared to reference measurements.
Identification of intra-patient variability in the postprandial response of patients with type 1 diabetes
S1746809413001043
Brain computer interfaces (BCI) provide a new approach to human computer communication, where control is realised by performing mental tasks such as motor imagery (MI). In this study, we investigate a novel method to automatically segment electroencephalographic (EEG) data within a trial and extract features accordingly in order to improve the performance of MI data classification techniques. A new local discriminant bases (LDB) algorithm using common spatial patterns (CSP) projection as the transform function is proposed for automatic trial segmentation. CSP is also used for feature extraction following trial segmentation. This new technique also makes it possible to obtain a more accurate picture of the most relevant temporal–spatial points in the EEG during the MI. The results are compared with other standard temporal segmentation techniques such as the sliding window and LDB based on the local cosine transform (LCT).
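A sketch of the CSP feature-extraction step used after trial segmentation: two-class spatial filters obtained from class-wise average covariance matrices via a generalized eigendecomposition, with log-variance features per trial. The EEG trial arrays, channel count and number of filter pairs are placeholders, and the LDB-based segmentation itself is not shown.

import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """trials_*: (n_trials, n_channels, n_samples) arrays for the two MI classes."""
    avg_cov = lambda trials: np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    vals, vecs = eigh(ca, ca + cb)                       # generalized eigenproblem
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]     # most discriminative filters from both ends
    return vecs[:, picks].T

def csp_features(trial, W):
    var = (W @ trial).var(axis=1)
    return np.log(var / var.sum())                       # log-variance CSP features

rng = np.random.default_rng(6)
trials_a = rng.standard_normal((30, 22, 500))            # placeholder MI class A trials
trials_b = rng.standard_normal((30, 22, 500))            # placeholder MI class B trials
W = csp_filters(trials_a, trials_b)
feat = csp_features(trials_a[0], W)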
Extracting optimal tempo-spatial features using local discriminant bases and common spatial patterns for brain computer interfacing