Fields: FileName, Abstract, Title
S0950705114004456
The class imbalance problem occurs when the number of training instances belonging to different classes differs markedly. In this scenario, many traditional classifiers fail to provide satisfactory classification performance, i.e., the accuracy of the majority class is usually much higher than that of the minority class. In this article, we deal with the class imbalance problem by using a support vector machine (SVM) classifier with an optimized decision threshold adjustment strategy (SVM-OTHR), which answers a puzzling question: how far should the classification hyperplane be moved towards the majority class? Specifically, the proposed strategy is self-adapting and can find the optimal moving distance of the classification hyperplane according to the real distributions of the training samples. Furthermore, we extend the strategy to develop an ensemble version (EnSVM-OTHR) that can further improve the classification performance. The two proposed algorithms are compared with many state-of-the-art classifiers on 30 skewed data sets acquired from the KEEL data set repository by using two popular class imbalance evaluation metrics: F-measure and G-mean. The statistical results of the experiments indicate their superiority.
Support vector machine-based optimized decision threshold adjustment strategy for classifying imbalanced data
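As a rough Python illustration of the threshold-shifting idea in the abstract above (not the authors' OTHR procedure, whose moving distance is learned from the training distribution), one can shift an SVM's decision threshold toward the majority class and score the result with F-measure and G-mean; scikit-learn and a synthetic imbalanced dataset are assumed.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import f1_score, recall_score

# Synthetic 9:1 imbalanced problem; class 1 is the minority.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
scores = svm.decision_function(X_te)

def g_mean(y_true, y_pred):
    sens = recall_score(y_true, y_pred, pos_label=1)   # minority recall
    spec = recall_score(y_true, y_pred, pos_label=0)   # majority recall
    return np.sqrt(sens * spec)

for shift in (0.0, -0.25, -0.5):   # negative thresholds favour the minority class
    y_pred = (scores > shift).astype(int)
    print(shift, f1_score(y_te, y_pred), g_mean(y_te, y_pred))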
S0950705114004547
This paper develops a two-phase fuzzy goal programming (FGP) approach for multi-level linear programming (MLLP) problems in an uncertain environment. A numerical MLLP model is established based on a confidence level. In the first phase, an FGP model is used to find a solution that reaches the overall satisfaction of all decision-makers (DMs), ensuring that the fuzzy goal of each higher-level DM is satisfied to a greater degree than those of the lower-level DMs. Since a higher-level DM has direct authority over the subordinate DMs, an adjustment scheme is provided in the second phase for each higher-level DM, who has the opportunity to increase or decrease the satisfaction degree of their fuzzy goal by changing the relative satisfaction of the lower-level DM compared to that of their higher-level DM. A fuzzy variable of relative satisfaction containing a set of linguistic terms is provided for these adjustments. The adjustment processes are carried out sequentially, from top to bottom in the hierarchical decision structure. The proposed FGP model in the second phase takes these sequential adjustments into account. A numerical example and comparisons with existing methods demonstrate the applicability and performance of the proposed approach.
A two-phase fuzzy approach for solving multi-level decision-making problems
S0950705114004559
Granulation extracts bundles of similar patterns by decomposing the universe. Hyperboxes are granular classifiers used to confront uncertainty in granular computing. This paper proposes a granular classifier that discovers hyperboxes in three phases. The first phase uses set calculus to build the hyperboxes, where the means obtained from the DBSCAN clustering algorithm construct the structure. The second phase develops the geometry of the hyperboxes to improve the classification rate; it uses the Particle Swarm Optimization (PSO) algorithm to optimize the seed points and expand the hyperboxes. Finally, the third phase identifies the noise points, i.e., patterns that did not belong to any hyperbox after the second phase. We use the membership function of a fuzzy set to improve the geometry of the classifier. The performance of the proposed model is evaluated in terms of coverage, misclassification error and accuracy. Experimental results reveal that the proposed model can adaptively choose an appropriate granularity.
The synergistic combination of particle swarm optimization and fuzzy sets to design granular classifier
S0950705114004602
Due to the disastrous consequences of slope failures, forecasting their occurrences is a practical need of government agencies to develop strategic disaster prevention programs. This research proposes a Swarm-Optimized Fuzzy Instance-based Learning (SOFIL) model for predicting slope collapses. The proposed model utilizes the Fuzzy k-Nearest Neighbor (FKNN) algorithm as an instance-based learning method to predict slope collapse events. Meanwhile, to determine the model’s hyper-parameters appropriately, the Firefly Algorithm (FA) is employed as an optimization technique. Experimental results have pointed out that the newly established SOFIL can outperform other benchmarking algorithms. Therefore, the proposed model is very promising to help decision-makers in coping with the slope collapse prediction problem.
A Swarm-Optimized Fuzzy Instance-based Learning approach for predicting slope collapses in mountain roads
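A minimal sketch of the fuzzy k-nearest-neighbor vote underlying such a model is given below, assuming NumPy arrays for the training data; the firefly-based tuning of the hyper-parameters k and m in SOFIL is not reproduced.

import numpy as np

def fknn_predict(X_train, y_train, x, k=5, m=2.0):
    # X_train: (n, d) array, y_train: (n,) integer labels, x: (d,) query point.
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + 1e-12)   # fuzzy inverse-distance weights
    classes = np.unique(y_train)
    u = np.array([w[y_train[nn] == c].sum() for c in classes])
    return classes[np.argmax(u / u.sum())]           # class with highest membership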
S0950705114004699
The class imbalance problem has recently attracted a lot of attention from the data mining community and has become a current trend in machine learning research. The Consolidated Tree Construction (CTC) algorithm was proposed to solve classification problems involving a high degree of class imbalance without losing explaining capacity, a desirable characteristic of single decision trees and rule sets. CTC works by resampling the training sample and building a tree from each subsample, in a similar manner to ensemble classifiers, but it applies the ensemble process during the tree construction phase, resulting in a single final tree. At the ECML/PKDD 2013 conference the term "Inner Ensembles" was coined to refer to such methodologies. In this paper we propose a resampling strategy for classification algorithms that use multiple subsamples. This strategy is based on the class distribution of the training sample and ensures a minimum representation of all classes when resampling. It has been applied to CTC in different classification contexts. A robust classification algorithm should not just rank in the top positions for certain classification problems but should excel when faced with a broad range of problems. In this paper we establish the robustness of the CTC algorithm against a wide set of classification algorithms with explaining capacity.
Coverage-based resampling: Building robust consolidated decision trees
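The sketch below illustrates the general idea of class-aware subsampling for an inner-ensemble tree builder, i.e., guaranteeing a minimum representation of every class in each subsample; it is a simplified stand-in, not the exact CTC coverage-based strategy.

import numpy as np

def class_aware_subsamples(y, n_subsamples=10, min_frac=0.5, seed=None):
    # y: (n,) array of class labels; returns index arrays, one per subsample.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = counts.min()
    subsamples = []
    for _ in range(n_subsamples):
        idx = []
        for c in classes:
            pool = np.flatnonzero(y == c)
            take = max(int(min_frac * minority), 1)   # guaranteed representation
            idx.append(rng.choice(pool, size=take, replace=False))
        subsamples.append(np.concatenate(idx))
    return subsamples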
S0950705114004729
The proliferation of smart phones has opened up new kinds of data for modeling human behavior and predicting future activity, but such prediction can be hampered by the relative sparsity of the data. In this paper, we integrate a time-dependent instance transfer mechanism, driven by a hybrid similarity measure, into learning and predicting human behavior. In particular, transfer component analysis (TCA) is utilized for domain adaptation from different data types to overcome data sparsity. The hybrid user similarity measure is developed from three characteristics: eigen-behavior, longest common behavior (LCB), and daily common behavior (DCB). Extensive comparisons are made against state-of-the-art time series prediction algorithms using the Nokia Mobile Data Challenge (MDC) dataset and the MIT Reality Mining dataset. We compare the prediction performance given (i) no additional data, (ii) only data from identical behavior from other users, and (iii) data from any type of behavior from other users. Experimental results show that our proposed algorithm significantly improves the performance of behavior prediction.
Sequential behavior prediction based on hybrid similarity and cross-user activity transfer
S095070511500009X
Accurate interval forecasting of agricultural commodity futures prices over future horizons, which provides a range of values rather than a point estimate, is challenging and of great interest to governments and investors. Following the well-established "linear and nonlinear" modeling framework, this study extends it to forecast interval-valued agricultural commodity futures prices with a vector error correction model (VECM) and multi-output support vector regression (MSVR) (abbreviated as VECM–MSVR), which is capable of capturing the linear and nonlinear patterns exhibited in agricultural commodity futures prices. Two agricultural commodity futures prices from the Chinese futures market are used to evaluate the performance of the proposed VECM–MSVR method against selected competitors. Quantitative and comprehensive assessments are performed, and the results indicate that the proposed VECM–MSVR method is a promising alternative for forecasting interval-valued agricultural commodity futures prices.
A combination method for interval forecasting of agricultural commodity futures prices
S0950705115000313
In this paper, we investigate the problem of classifying an image set of an object, and develop a novel image set representation and classification algorithm. We propose to represent an image set by a joint representation method using both an affine hull of its image samples and a combination of its reference images, and to classify it by a linear classification function applied to its representation. A unified objective function is formulated to learn both the representation and the classifier parameters. Similar to the support vector machine, the hinge losses and the squared ℓ2 norm of the image set classifier are minimized simultaneously in the objective. Moreover, the differences between the two representations are also minimized. The objective function is optimized with respect to the representation and classifier parameters alternately in an iterative algorithm. The proposed algorithm is named the support image set machine (SupISMac) because it takes advantage of the support vector machine formulation to learn an image set classifier. Experiments on two image set classification benchmark databases show that SupISMac not only outperforms state-of-the-art image set classification methods, but also significantly reduces the running time of the test procedure.
Support image set machine: Jointly learning representation and classifier for image set classification
S0950705115000374
Diabetes Mellitus (DM), a chronic lifelong condition, is characterized by increased blood sugar levels. As there is no cure for DM, the major focus lies on controlling the disease; therefore, DM diagnosis and treatment is of great importance. The most common complications of DM include retinopathy, neuropathy, nephropathy and cardiomyopathy. Diabetes causes cardiovascular autonomic neuropathy that affects Heart Rate Variability (HRV). Hence, in the absence of other causes, HRV analysis can be used to diagnose diabetes. The present work aims at developing an automated system for classifying normal and diabetic classes by using the heart rate (HR) information extracted from Electrocardiogram (ECG) signals. The spectral analysis of HRV recognizes patients with autonomic diabetic neuropathy and gives an earlier diagnosis of impairment of the Autonomic Nervous System (ANS). The HRV spectral indices obtained using the Discrete Wavelet Transform (DWT) method show significant correlations with the impaired ANS. In order to diagnose and detect DM automatically, we performed DWT decomposition up to 5 levels and extracted the energy, sample entropy, approximate entropy, kurtosis and skewness features at the various detail coefficient levels of the DWT. We also extracted relative wavelet energy and entropy features up to the 5th level of DWT coefficients of the HR signals. These features were ranked using various methods, namely the Bhattacharyya space algorithm, t-test, Wilcoxon test, Receiver Operating Characteristic (ROC) and entropy. The ranked features were then fed into different classifiers, including Decision Tree (DT), K-Nearest Neighbor (KNN), Naïve Bayes (NBC) and Support Vector Machine (SVM). Our results show maximum diagnostic differentiation performance using a minimum number of features: an average accuracy of 92.02%, sensitivity of 92.59% and specificity of 91.46%, obtained with the DT classifier and ten-fold cross validation.
Computer-aided diagnosis of diabetic subjects by heart rate variability signals using discrete wavelet transform method
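The feature-extraction step can be sketched as below: a 5-level DWT of a heart-rate series with simple statistics per detail level. The wavelet choice ('db4') and the entropy definition are assumptions, not necessarily the authors' exact settings; PyWavelets and SciPy are assumed.

import numpy as np
import pywt
from scipy.stats import kurtosis, skew

def dwt_features(hr_signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(hr_signal, wavelet, level=level)   # [cA5, cD5, ..., cD1]
    feats = []
    for c in coeffs[1:]:                                     # detail coefficients only
        energy = np.sum(c ** 2)
        p = c ** 2 / (energy + 1e-12)
        entropy = -np.sum(p * np.log2(p + 1e-12))            # wavelet (Shannon) entropy
        feats.extend([energy, entropy, kurtosis(c), skew(c)])
    return np.array(feats)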
S0950705115000507
The choice of the dictionary that provides the possible translations a system has to choose when performing Cross-Lingual Word Sense Disambiguation (CLWSD) is one of the most important steps in such a task. In this work, we present a comparison between different dictionaries, in two different frameworks. First of all, a technique for analysing the potential results of an ideal system using those dictionaries is developed. The second framework considers the particular unsupervised CLWSD system CO-Graph, and analyses the results obtained when using different bilingual dictionaries providing the potential translations. Two different CLWSD tasks from the 2010 and 2013 SemEval competitions are used for evaluation, and statistics from the words in the test datasets of those competitions are studied. The conclusions of the analysis of dictionaries on a particular system lead us to a proposal that substantially improves the results obtained in that framework. In this proposal a hybrid system is developed, by combining the results provided by a probabilistic dictionary, and those obtained with a Most Frequent Sense (MFS) approach. The hybrid approach also outperforms the results obtained by other unsupervised systems in the considered competitions.
Choosing the best dictionary for Cross-Lingual Word Sense Disambiguation
S0950705115000519
This paper reviews the fingerprint classification literature, looking at the problem from a double perspective. We first deal with feature extraction methods, including the different models considered for singular point detection and for orientation map extraction. Then, we focus on the different learning models considered to build the classifiers used to label new fingerprints. Taxonomies and classifications for the feature extraction, singular point detection, orientation extraction and learning methods are presented. A critical view of the existing literature has led us to a discussion of the existing methods and their drawbacks, such as the difficulty of reimplementation, the lack of details, and major differences in their evaluation procedures. On this account, an experimental analysis of the most relevant methods is carried out in the second part of this paper, and a new method based on their combination is presented.
A survey of fingerprint classification Part I: Taxonomies on feature extraction methods and learning models
S0950705115000581
In the first part of this paper we reviewed the fingerprint classification literature from two different perspectives: feature extraction and classifier learning. Aiming to answer the question of which of the reviewed methods would perform best in a real implementation, we ended with a discussion that showed the difficulty of answering this question. No previous comparison exists in the literature, and comparisons among papers are made with different experimental frameworks. Moreover, implementing published methods is difficult because of the lack of details in their descriptions and parameters, and because no source code is shared. For this reason, in this paper we carry out a deep experimental study following the proposed double perspective. To do so, we have carefully implemented some of the most relevant feature extraction methods according to the explanations found in the corresponding papers, and we have tested their performance with different classifiers, including the specific proposals made by the authors. Our aim is to develop an objective experimental study in a common framework, which has not been done before and which can serve as a baseline for future work on the topic. In this way, we test not only their quality but also their reusability by other researchers, and are able to indicate which proposals could be considered for future developments. Furthermore, we show that combining different feature extraction models in an ensemble can lead to superior performance, significantly improving the results obtained by individual models.
A survey of fingerprint classification Part II: Experimental analysis and ensemble proposal
S0950705115000726
Granular computing has attracted many researchers as a new and rapidly growing paradigm of information processing. In this paper, we apply a systematic mapping study to classify granular computing research and identify related lines of work in order to assess its research strength and quality. Our search scope is limited to Science Direct and IEEE Transactions papers published between January 2012 and August 2014. We defined four classification schemes to map the selected studies: focus area, contribution type, research type and framework. The results of mapping the selected studies show that almost half of the research belongs to the data analysis focus area. In addition, most of the selected papers propose solutions in the research type scheme. The distribution of papers among the tool, method and enhancement categories of contribution type is almost equal. Moreover, 39% of the relevant papers belong to the rough set framework. The results show that little attention has been paid to cluster analysis in existing frameworks for discovering granules for classification. We applied five clustering algorithms to three datasets from the UCI repository to compare the form of the information granules, and then classified the patterns and assigned them to a specific class based on their geometry and membership. The clustering algorithms are DBSCAN, c-means, k-means, GA k-means and Fuzzy-GrC, and the comparison of information granules is based on coverage, misclassification and accuracy. The experimental results mostly show that the Fuzzy-GrC and GA k-means algorithms are superior to the other clustering algorithms, whereas c-means is inferior to them.
Systematic mapping study on granular computing
S0950705115000763
This paper introduces an improved simplified swarm optimization (iSSO) obtained by undertaking a major revision of the update mechanism (UM) of traditional SSO. To test its performance, the proposed iSSO is compared with another recently introduced swarm-based algorithm, the Artificial Bee Colony (ABC) algorithm, on 50 widely used multivariable and multimodal numerical test functions. The numerical results show that the proposed iSSO outperforms ABC in both solution quality and efficiency. We also test the roles of the proposed UMs and the iterative local search. The proposed algorithm is thus useful to both practitioners and researchers.
An improved simplified swarm optimization
S0950705115000775
The transitivity property of trust enables the propagation of a trust value through a chain of trusting users in social networks and then provides an expected trust value for another user. Logically, a user in social networks can assess a large number of other users, even if two users have not been directly connected previously. However, a large percentage of trust propagation efforts fail to find reliable trust paths from a source user to a target user because the web of trust in real-world online social networks is too sparse. The success (both quality and quantity) of a trust propagation algorithm strongly relies on the density of a web of trust. The more trust paths that are able to reach the given target user, the more reliable will be the trust estimates based on the trust path with the highest strength. In this paper, we propose an enriched trust propagation approach by combining a homophily-based trust network with an expertise-based trust network, which enhances the density of the trust network. We then evaluate the prediction accuracy and coverage of trust propagation based on various aggregation methods and highlight the most promising method.
An enhanced trust propagation approach with expertise and homophily-based trust networks
S0950705115000854
Linguistic distribution assessments with exact symbolic proportions have been recently presented. Due to various subjective and objective conditions, it is often difficult for decision makers to provide exact symbolic proportions in linguistic distribution assessments. In some situations, decision makers will express their preferences in multi-granular unbalanced linguistic contexts. Therefore, in this study, we propose the concept of linguistic distribution assessments with interval symbolic proportions under multi-granular unbalanced linguistic contexts. First, the weighted averaging operator and the ordered weighted averaging operator for the linguistic distribution assessments with interval symbolic proportions are presented. Then, we develop the transformation functions among the multi-granular unbalanced linguistic distribution assessments with interval symbolic proportions. Finally, we present the application of the proposed linguistic distribution assessments in multiple attribute group decision making.
Multi-granular unbalanced linguistic distribution assessments with interval symbolic proportions
S0950705115000970
Making decisions by learning preferences requires considering semantic aspects dealing with the meaning and use of the preference concept. Examining recent developments on bipolarity, where concepts are measured/verified with respect to a pair of opposite poles, we focus on the dialectic process by which the meaning of concepts emerges. Our proposal is based on the neutrality between the opposite poles, such that a basic type of structure is used to characterize in logical terms the concepts and the knowledge that they generate. In this paper we model the meaning of concepts by paired structures, and apply these structures to learning and building the different meanings of preference for decision making.
Building the meaning of preference from logical paired structures
S0950705115001033
Wrapper-based feature subset selection (FSS) methods tend to obtain better classification accuracy than filter methods but are considerably more time-consuming, particularly for applications with thousands of features, such as microarray data analysis. Accelerating this process without degrading its high accuracy would be of great value for gene expression analysis. In this study, we explore how to reduce the time complexity of wrapper-based FSS with an embedded K-Nearest-Neighbor (KNN) classifier. Instead of considering KNN as a black box, we propose constructing a classifier distance matrix and incrementally updating it to accelerate the calculation of the relevance criteria used to evaluate the quality of candidate features. Extensive experiments on eight publicly available microarray datasets were first conducted to demonstrate the effectiveness of wrapper methods with KNN for selecting informative features. To demonstrate the performance gain in terms of time cost reduction, we then conducted experiments on the eight microarray datasets with the embedded KNN classifiers and analyzed the theoretical time/space complexity. Both the experimental results and the theoretical analysis demonstrate that the proposed approach markedly accelerates the wrapper-based feature selection process without degrading the high classification accuracy, and the space complexity analysis indicates that the additional space overhead is affordable in practice.
Accelerating wrapper-based feature selection with K-nearest-neighbor
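The incremental idea can be sketched as follows: when a candidate feature is added, only its contribution is added to the cached pairwise squared-distance matrix instead of recomputing all distances from scratch. This is a simplified illustration, not the authors' exact relevance criterion.

import numpy as np

def add_feature(dist_sq, X, f):
    # dist_sq: cached (n, n) squared distances over already-selected features
    # (start from np.zeros((n, n))); f: index of the candidate feature.
    diff = X[:, f][:, None] - X[:, f][None, :]
    return dist_sq + diff ** 2              # O(n^2) update instead of O(n^2 * d)

def loo_knn_accuracy(dist_sq, y, k=3):
    # Leave-one-out KNN vote on the cached distance matrix; y: nonnegative int labels.
    order = np.argsort(dist_sq + np.eye(len(y)) * 1e12, axis=1)[:, :k]
    pred = np.array([np.bincount(y[row]).argmax() for row in order])
    return (pred == y).mean()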
S0950705115001045
The determination of the key dimensions of gear blank preforms with complicated geometries is a highly nonlinear optimization task. To determine the critical design dimensions, we propose a novel and efficient dimensionality reduction (DR) model that adapts Gaussian process regression (GPR) to construct a topological constraint between the design latent variables (LVs) and the regression space. This procedure is termed the regression-constrained Gaussian process latent variable model (R-GPLVM), which overcomes the GPLVM's drawback of ignoring the regression constraints. To determine the appropriate sub-manifolds of the high-dimensional sample space, we combine the maximum a posteriori method with the scaled conjugate gradient (SCG) algorithm. This procedure can estimate the coordinates of preform samples in the space of LVs. Numerical experiments reveal that the R-GPLVM outperforms pure GPR in various dimensional spaces when proper hyper-parameters and kernel functions are chosen. An extreme learning machine (ELM) obtains better prediction precision than the back-propagation method (BP) when the dimensions are reduced to seven and a Gaussian kernel function is adopted. After the seven key variables are screened out, the ELM model is constructed with realistic inputs and obtains improved prediction accuracy. However, since the ELM has a problem with the validity of its predictions, a genetic algorithm (GA) is exploited to optimize the connection parameters between the network layers to improve reliability and generalization. In terms of prediction accuracy on testing datasets, the GA performs better than the differential evolution (DE) approach, which motivates the choice of the genetic algorithm-extreme learning machine (GA-ELM). Moreover, the GA-ELM is employed to assess the aforementioned DR using engineering criteria. Finally, to obtain the optimal geometry, a parallel selection method for multi-objective optimization is proposed to obtain the Pareto-optimal solution, while the maximum finisher forming force (MFFF) and the maximum finisher die stress (MFDS) are both minimized. A comparative analysis with other numerical models, including finite element model (FEM) simulation, is conducted using the GA-optimized preform. Results show that the values of MFFF and MFDS predicted by GA-ELM and R-GPLVM agree well with the experimental results, which validates the feasibility of the proposed methods.
Optimization of gear blank preforms based on a new R-GPLVM model utilizing GA-ELM
S0950705115001094
Early prediction of persons at risk of Sudden Cardiac Death (SCD), with or without the onset of Ventricular Tachycardia (VT) or Ventricular Fibrillation (VF), remains a continuing challenge for clinicians. In this work, we present a novel integrated index for predicting SCD with a high level of accuracy using electrocardiogram (ECG) signals. To achieve this, nonlinear features (Fractal Dimension (FD), Hurst's exponent (H), Detrended Fluctuation Analysis (DFA), Approximate Entropy (ApproxEnt), Sample Entropy (SampEnt), and Correlation Dimension (CD)) are first extracted from the second-level Discrete Wavelet Transform (DWT) decomposition of the ECG signal. The extracted nonlinear features are ranked using the t-value, and a combination of highly ranked features is used to formulate an integrated Sudden Cardiac Death Index (SCDI). This novel SCDI can accurately predict SCD using just one numerical value up to four minutes before the SCD episode. The nonlinear features are also fed to the following classifiers: Decision Tree (DT), k-Nearest Neighbour (KNN), and Support Vector Machine (SVM). The combination of DWT and nonlinear analysis of ECG signals predicts SCD with accuracies of 92.11% (KNN), 98.68% (SVM), 93.42% (KNN) and 92.11% (SVM) for the first, second, third and fourth minutes before the occurrence of SCD, respectively. The proposed SCDI will constitute a valuable tool for medical professionals in SCD prediction.
An integrated index for detection of Sudden Cardiac Death using Discrete Wavelet Transform and nonlinear features
S0950705115001112
Traditionally, pattern-based relation extraction methods are based on an iterative bootstrapping model, which generally suffers from semantic drift or low recall. In this paper, we present a novel semantic bootstrapping framework that uses the semantic information of patterns and a flexible matching method to address these problems. We introduce a formalization for this class of bootstrapping models that allows semantic constraints to guide the learning iterations, and we use a flexible bottom-up kernel to compare patterns. To gain insight into the reliability and applicability of our framework, we applied it to the English Slot Filling (ESF) task of Knowledge Base Population (KBP) at the Text Analysis Conference (TAC). Experimental results show that our framework obtains performance superior to the state of the art.
Construction of semantic bootstrapping models for relation extraction
S0950705115001197
This paper describes a novel approach to learning term-weighting schemes (TWSs) in the context of text classification. In text mining, a TWS determines the way in which documents are represented in a vector space model before a classifier is applied. Whereas acceptable performance has been obtained with standard TWSs (e.g., Boolean and term-frequency schemes), the definition of TWSs has traditionally been an art. Further, it is still difficult to determine the best TWS for a particular problem, and it is not yet clear whether better schemes than those currently available can be generated by combining known TWSs. In this article we propose a genetic program that aims at learning effective TWSs that can improve the performance of current schemes in text classification. The genetic program learns how to combine a set of basic units to give rise to discriminative TWSs. We report an extensive experimental study comprising data sets from thematic and non-thematic text classification as well as from image classification. Our study shows the validity of the proposed method; in fact, TWSs learned with the genetic program outperform traditional schemes and other TWSs proposed in recent works. Further, we show that TWSs learned from a specific domain can be effectively used for other tasks.
Term-weighting learning via genetic programming for text classification
S0950705115001306
This paper addresses the problem of evaluating node importance in actual complex networks. Firstly, the indicators used to evaluate node importance are defined based on complex network theory, and their characteristics are analyzed in detail. In addition, a new indicator based on K-shell decomposition, named improved K-shell, is put forward. Secondly, in order to evaluate node importance comprehensively, a multi-attribute ranking method is proposed based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Finally, our method is used to study two actual cases. The results show that our method outperforms other methods in distinguishing the node importance of actual complex networks and can provide scientific decision support for administrative departments.
The node importance in actual complex networks based on a multi-attribute ranking method
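A generic TOPSIS ranking over a node-by-indicator matrix can be sketched as follows; the improved K-shell indicator itself is not implemented and its values are assumed to be supplied as one column of the matrix, with all indicators treated as benefit-type.

import numpy as np

def topsis_rank(M, weights):
    # M: (n_nodes, n_indicators) matrix; weights: (n_indicators,) summing to 1.
    R = M / np.linalg.norm(M, axis=0)            # vector-normalize each indicator
    V = R * weights
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal solutions
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness)                # most important node first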
S095070511500132X
The integration of multiple features is important for action categorization and object recognition in videos, because a single-feature-based representation can hardly capture imaging variations and individual attributes. In this paper, a novel formulation named the Multivariate video Information Bottleneck (MvIB) is defined. It is an extension of the multivariate information bottleneck and can discover categories from a collection of unlabeled videos automatically. Differing from the original multivariate information bottleneck, the novel approach extracts video categories from multiple features simultaneously, such as local static and dynamic features; each type of feature is treated as a relevant variable. Specifically, by maximally preserving the relevant information with respect to these feature variables, the MvIB method is able to integrate various aspects of semantic information into the final video partitioning results, and thus captures the complementary information residing in the multiple feature variables. Extensive experimental results on five challenging video data sets show that the proposed approach can consistently and significantly outperform other state-of-the-art unsupervised learning methods.
Unsupervised video categorization based on multivariate information bottleneck method
S0950705115001392
In this paper, a multiproduct single-vendor, single-buyer supply chain problem is investigated based on an economic production quantity model developed for the buyer to minimize the inventory cost. To be more applicable to real-world supply chain problems, the model contains five stochastic constraints involving backordering cost, space, ordering, procurement, and available budget. The objective is to find the optimal order quantities of the products such that the total inventory cost is minimized while the constraints are satisfied. The recently developed sequential quadratic programming (SQP) method, one of the best optimization methods available in the literature, is used to solve the problem. Twenty numerical examples in three scales (small, medium, and large) are solved in order to demonstrate the applicability of the proposed methodology and to evaluate its performance. The results show that SQP performs satisfactorily in terms of the optimum solutions, the number of iterations needed to achieve the optimum solution, infeasibility, optimality error, and complementarity. In addition, the performance of the SQP method is compared with that of another exact method, the interior point method, using the above numerical examples under similar conditions. The comparison results are in favor of SQP. Finally, a sensitivity analysis is performed on the change rate of the objective function with respect to the change rate of the variance of the order quantity.
Optimization of a multiproduct economic production quantity problem with stochastic constraints using sequential quadratic programming
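As a toy illustration of attacking such a constrained order-quantity problem with an SQP-type solver, the sketch below runs SciPy's SLSQP on an invented three-product instance with a single space constraint; all coefficients are hypothetical and the stochastic constraints of the paper are not modeled.

import numpy as np
from scipy.optimize import minimize

holding = np.array([2.0, 1.5, 3.0])       # hypothetical per-unit holding costs
setup = np.array([50.0, 80.0, 60.0])      # hypothetical ordering costs
demand = np.array([400.0, 300.0, 500.0])  # hypothetical annual demands
space = np.array([1.0, 2.0, 1.5])         # space used per unit

def total_cost(q):                         # classic EOQ-style cost per product
    return np.sum(setup * demand / q + holding * q / 2.0)

cons = [{"type": "ineq", "fun": lambda q: 900.0 - space @ q}]   # space budget
res = minimize(total_cost, x0=np.full(3, 100.0), method="SLSQP",
               bounds=[(1.0, None)] * 3, constraints=cons)
print(res.x, res.fun)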
S0950705115001458
Feature selection is an important preprocessing step in machine learning and pattern recognition. The ultimate goal of feature selection is to select a feature subset from the original feature set that increases the performance of learning algorithms. In this paper, a novel feature selection method based on graph clustering and ant colony optimization is proposed for classification problems. The proposed method works in three steps. In the first step, the entire feature set is represented as a graph. In the second step, the features are divided into several clusters using a community detection algorithm, and finally, in the third step, a novel search strategy based on ant colony optimization is developed to select the final subset of features. Moreover, the subset selected by each ant is evaluated using a supervised filter-based measure called the novel separability index. Thus the proposed method does not need any learning model and can be classified as a filter-based feature selection method. The proposed method integrates the community detection algorithm with a modified ant colony-based search process for the feature selection problem. Furthermore, the size of the subset constructed by each ant and the size of the final feature subset are determined automatically. The performance of the proposed method has been compared with state-of-the-art filter and wrapper-based feature selection methods on ten benchmark classification problems. The results show that our method produces consistently better classification accuracies.
Integration of graph clustering with ant colony optimization for feature selection
S0950705115001483
Software requirements engineering is a critical discipline in the software development life cycle. A major problem in software development is the selection and prioritization of requirements in order to develop a high-quality system. This research analyzes the issues associated with existing software requirement prioritization techniques. One of the major issues is that existing techniques handle only toy projects or software projects with very few requirements; they are not suitable for prioritizing the large numbers of requirements in projects where requirements may grow to the hundreds or even thousands. This paper proposes an expert system, called the Priority Handler (PHandler), for requirement prioritization. PHandler is based on the value-based intelligent requirement prioritization technique, a neural network and the analytic hierarchy process, which together make the requirement prioritization process scalable. A back-propagation neural network is used to predict the value of a requirement in order to reduce expert bias and make PHandler efficient. Moreover, the analytic hierarchy process is applied to prioritized groups of requirements in order to enhance the scalability of the requirement prioritization process.
PHandler: An expert system for a scalable software requirements prioritization process
S0950705115001501
Business processes constitute an essential asset of organizations, and the related process models help to better comprehend a process and therefore enable effective process analysis or redesign. However, there are several working environments where flows are particularly flexible (e.g., healthcare, customer service) and process models are either very hard to create or fail to reflect reality. The aim of this paper is to support decision-making by providing comprehensible process models for such flexible environments. Following a process mining approach, we propose a methodology to cluster customers' flows and produce effective summarizations. We propose a novel method for creating a similarity metric that is effective in reducing the effect of noise and outliers. We use a spectral technique that emphasizes the robustness of the estimated groups, thereby providing process analysts with clearer process maps. The proposed method is applied to a real case of a healthcare institution, delivering valuable insights and showing compelling performance in terms of process model complexity and density.
Supporting healthcare management decisions via robust clustering of event logs
S0950705115001586
To tackle the problem of pornographic image recognition, a novel multi-instance learning (MIL) algorithm is proposed using an extreme learning machine (ELM) and a classifier ensemble. First, a spatial pyramid partition-based (SPP) multi-instance modeling technique is deployed to transform the pornographic image recognition problem into a typical MIL problem: each image corresponds to a bag, and each partitioned sub-block, described by low-level visual features (i.e., color, texture and shape), corresponds to an instance. Second, a collection of visual words (VWs) is generated using a hierarchical k-means clustering method; then, based on the fuzzy membership function between instances and VWs, a fuzzy histogram fusion-based metadata calculation method is proposed to convert each bag into a single sample, which allows the MIL problem to be solved directly by a standard single-instance learning (SIL) machine. Finally, using ELM, a group of base classifiers with different numbers of hidden nodes is constructed, and their weights are dynamically determined using a performance-weighting rule. This classifier-ensemble strategy improves the overall adaptability of the proposed ELMCE-MIL algorithm. Experimental results show that the method is robust and that its performance is superior to that of similar algorithms.
Pornographic images recognition based on spatial pyramid partition and multi-instance ensemble learning
S0950705115001628
In this paper, we address the task of automatically tracking a variable number of objects in the scene of a monocular, uncalibrated camera. We propose a global optimization method based on a network flow model for multiple object tracking. This approach extends recent work that formulates tracking-by-detection as a maximum a posteriori (MAP) data association problem. We redefine the observation likelihood and the affinity between observations to handle long-term occlusions. Moreover, an improved greedy algorithm is designed to solve the min-cost flow problem, noticeably reducing the number of ID switches. Furthermore, a linear hypothesis method is proposed to fill the gaps in the trajectories. The experimental results demonstrate that our method is effective and efficient, and outperforms state-of-the-art approaches on several benchmark datasets.
One global optimization method in network flow model for multiple object tracking
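A toy min-cost-flow data association over two frames with two detections each can be written with networkx as below; all costs are invented, and the paper's improved greedy solver and linear gap-filling are not reproduced.

import networkx as nx

G = nx.DiGraph()
G.add_node("S", demand=-2)     # ask for two trajectories
G.add_node("T", demand=2)
for d in ("a1", "a2", "b1", "b2"):
    G.add_edge(d + "_in", d + "_out", capacity=1, weight=-5)   # detection reward
    G.add_edge("S", d + "_in", capacity=1, weight=10)          # track birth cost
    G.add_edge(d + "_out", "T", capacity=1, weight=10)         # track termination cost
for (u, v), c in {("a1", "b1"): 1, ("a1", "b2"): 8,
                  ("a2", "b1"): 8, ("a2", "b2"): 1}.items():
    G.add_edge(u + "_out", v + "_in", capacity=1, weight=c)    # frame-to-frame link

flow = nx.min_cost_flow(G)
links = [(u, v) for u, nbrs in flow.items() for v, f in nbrs.items()
         if f > 0 and u.endswith("_out") and v.endswith("_in")]
print(links)   # expected: a1 linked to b1, a2 linked to b2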
S0950705115001641
Feature weighting has been an important topic in classification learning algorithms. In this paper, we propose a new paradigm for assigning weights in classification learning, called the value weighting method. While current weighting methods assign a weight to each feature, we assign a different weight to each value of a feature. The proposed method is implemented in the context of naive Bayesian learning, and the optimal weights of feature values are calculated using a gradient approach. The performance of naive Bayes learning with the value weighting method is compared with that of other state-of-the-art methods on a number of datasets. The experimental results show that the value weighting method can improve the performance of naive Bayes significantly.
A gradient approach for value weighted classification learning in naive Bayes
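The core idea can be sketched as a naive Bayes score in which each feature value carries its own weight as an exponent on its likelihood term; the gradient-based optimization of those weights is omitted, and the probability tables are assumed to be given as plain dictionaries.

import numpy as np

def predict(x, class_prior, cond_prob, w):
    # x: list of discrete feature values; class_prior: {class: P(c)};
    # cond_prob: {class: [ {value: P(x_f = value | c)} per feature f ]};
    # w: [ {value: weight} per feature f ]  (weight 1.0 recovers plain naive Bayes).
    best, best_score = None, -np.inf
    for c, prior in class_prior.items():
        score = np.log(prior)
        for f, v in enumerate(x):
            score += w[f].get(v, 1.0) * np.log(cond_prob[c][f].get(v, 1e-6))
        if score > best_score:
            best, best_score = c, score
    return best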
S0950705115001720
In Machine Learning, a data set is imbalanced when the class proportions are highly skewed. Imbalanced data sets arise routinely in many application domains and pose a challenge to traditional classifiers. We propose a new approach to building ensembles of classifiers for two-class imbalanced data sets, called Random Balance. Each member of the Random Balance ensemble is trained with data sampled from the training set and augmented by artificial instances obtained using SMOTE. The novelty in the approach is that the proportions of the classes for each ensemble member are chosen randomly. The intuition behind the method is that the proposed diversity heuristic will ensure that the ensemble contains classifiers that are specialized for different operating points on the ROC space, thereby leading to larger AUC compared to other ensembles of classifiers. Experiments have been carried out to test the Random Balance approach by itself, and also in combination with standard ensemble methods. As a result, we propose a new ensemble creation method called RB-Boost which combines Random Balance with AdaBoost.M2. This combination involves enforcing random class proportions in addition to instance re-weighting. Experiments with 86 imbalanced data sets from two well known repositories demonstrate the advantage of the Random Balance approach.
Random Balance: Ensembles of variable priors classifiers for imbalanced data
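One ensemble member of a Random-Balance-style scheme can be sketched with imbalanced-learn: draw a random class proportion, down-sample the class that is above its target and SMOTE the other up to its target so the total size is preserved. This assumes a binary problem in which each class keeps enough samples for SMOTE's default neighborhood.

import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

def random_balance(X, y, seed=None):
    rng = np.random.default_rng(seed)
    n = len(y)
    n_pos = int(rng.integers(int(0.1 * n), int(0.9 * n)))   # random share for class 1
    target = {1: n_pos, 0: n - n_pos}
    counts = {c: int((y == c).sum()) for c in (0, 1)}
    shrink = {c: t for c, t in target.items() if t < counts[c]}
    if shrink:
        X, y = RandomUnderSampler(sampling_strategy=shrink,
                                  random_state=0).fit_resample(X, y)
    grow = {c: t for c, t in target.items() if t > counts[c]}
    if grow:
        X, y = SMOTE(sampling_strategy=grow, random_state=0).fit_resample(X, y)
    return X, y   # train one base classifier on each such resample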
S0950705115001768
To consider interactive phenomena among experts (or attributes) in multiple attribute group decision making, a new interval linguistic aggregation operator, named the generalized interval 2-tuple linguistic Shapley chi-square averaging operator, is proposed based on the Shapley value and the 2-tuple linguistic representation. The λ-fuzzy measure is applied to simplify the fuzzy measure on the attribute set (or expert set), and several formulae and programming models are proposed to determine the λ-fuzzy measure. Some desirable properties of the aggregation operator are also studied. With respect to group decision making in an interval linguistic environment, the GOGDM algorithm is introduced in detail. Furthermore, a facility location selection problem is used to illustrate the practicality and validity of the GOGDM algorithm, and a comparative analysis shows the advantage and validity of the proposed approach.
An approach for facility location selection based on optimal aggregation operator
S0950705115001793
Knowledge engineering often involves using the opinions of experts, and very frequently of a group of experts. Experts often cooperate in creating a knowledge base that uses fuzzy inference rules. On the one hand, this may lead to generating a higher quality knowledge base. But on the other hand, it may result in irregularities, for example, if one of the experts dominates the others. This paper addresses a research problem related to creating a method for automatic verification of inference rules. It would allow one to detect inconsistencies between the rules that have been developed and the actual knowledge of the group of experts. A method of multi-criteria group evaluation of variants under uncertainty was used for this purpose. This method utilises experts’ opinions on the importance of the premises of inference rules. They are expressed in terms of multiple criteria in the form of both numerical and linguistic assessments. Experts define the conclusions of rules as so-called half-marks in order to increase the method’s flexibility. Automatic rules are generated in a similar way. Such an approach makes it possible to automatically determine the final conclusions of inference rules. They can be regarded as consistent both with the opinions of a group of experts and with automatically generated rules. This paper presents the use of the method for verifying the rules of an expert system that is aimed to evaluate the effectiveness of a passenger and baggage screening system at an airport. This method allows one to detect simple logical errors that are made when experts are establishing rules as well as inconsistencies between the rules that have been developed and the experts’ actual knowledge.
Automatic verification of a knowledge base by using a multi-criteria group evaluation with application to security screening at an airport
S0950705115001811
Structural balance enables a comprehensive understanding of the potential tensions and conflicts in signed networks, and its computation and transformation have attracted increasing attention in recent years. Balance computation aims at evaluating the distance from an unbalanced network to a balanced one, and balance transformation converts an unbalanced network into a balanced one. In this paper, we first model the balance computation of signed networks as the optimization of an energy function. Second, we model balance transformation as the optimization of a more general energy function that incorporates the transformation cost. Finally, a multilevel-learning-based memetic algorithm, which incorporates network-specific knowledge such as the neighborhoods of nodes, clusters and partitions, is proposed to solve the modeled optimization problems. Systematic experiments on real-world social networks demonstrate the superior performance of the proposed algorithm compared with state-of-the-art algorithms for the computation and transformation of structural balance. The results also show that our method can resolve the potential conflicts of signed networks with minimum cost.
A memetic algorithm for computing and transforming structural balance in signed networks
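A frustration-style energy of the kind such formulations minimize can be sketched directly: positive edges across clusters and negative edges inside a cluster count as violations. The memetic search and the cost-weighted transformation variant are not reproduced here.

def balance_energy(signed_edges, assignment):
    # signed_edges: iterable of (u, v, sign) with sign in {+1, -1};
    # assignment: dict mapping node -> cluster label.
    energy = 0
    for u, v, sign in signed_edges:
        same = assignment[u] == assignment[v]
        if (sign > 0 and not same) or (sign < 0 and same):
            energy += 1          # frustrated edge
    return energy

edges = [("a", "b", +1), ("b", "c", -1), ("a", "c", -1)]
print(balance_energy(edges, {"a": 0, "b": 0, "c": 1}))   # 0: this split is balanced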
S0950705115002002
In an era of growing data complexity and volume and the advent of big data, feature selection has a key role to play in helping reduce high-dimensionality in machine learning problems. We discuss the origins and importance of feature selection and outline recent contributions in a range of applications, from DNA microarray analysis to face recognition. Recent years have witnessed the creation of vast datasets and it seems clear that these will only continue to grow in size and number. This new big data scenario offers both opportunities and challenges to feature selection researchers, as there is a growing need for scalable yet efficient feature selection methods, given that existing methods are likely to prove inadequate.
Recent advances and emerging challenges of feature selection in the context of big data
S0950705115002026
Recommender systems, which help users find items of interest in the information era, have attracted increasing attention from both the scientific community and industry. One of the most widely applied recommendation methods is Matrix Factorization (MF). However, most MF-based approaches focus on the user-item rating matrix and ignore factors that may have a significant influence on users' preferences for items. In this paper, we propose a multi-linear interactive MF algorithm (MLIMF) to model the interactions between the users and each event associated with their final decisions. Our model considers not only the user-item rating information but also pairwise interactions based on empirically supported factors. In addition, we compare the proposed model with three typical methods: user-based collaborative filtering (UCF), item-based collaborative filtering (ICF) and regularized MF (RMF). Experimental results on two real-world datasets, MovieLens 1M and MovieLens 100k, show that our method performs much better than the other three methods in recommendation accuracy. This work may shed some light on the in-depth understanding of modeling user online behaviors and the consequent decisions.
Multi-linear interactive matrix factorization
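For reference, a plain regularized MF baseline trained with stochastic gradient descent looks like the sketch below; the multi-linear interaction terms that distinguish MLIMF are not implemented here.

import numpy as np

def train_mf(ratings, n_users, n_items, k=10, lr=0.01, reg=0.05, epochs=20):
    # ratings: list of (user, item, rating) triples.
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
    Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return P, Q

P, Q = train_mf([(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0)], n_users=2, n_items=2)
print(P @ Q.T)   # reconstructed rating matrix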
S095070511500204X
Covering rough sets are a generalization of Pawlak rough sets, in which the partition of the universal set induced by an equivalence relation is replaced by a covering. In this paper, covering rough sets are transformed into generalized rough sets induced by binary relations. The paper discusses three theoretical topics. First, we consider a special type of covering in which the neighborhoods form a reduction of the covering, and we obtain necessary and sufficient conditions under which the neighborhoods in a covering form a reduction of the covering. Second, we study another special type of covering and give conditions for the covering lower and upper approximations to be dual to each other. Finally, we give an axiomatic system that characterizes the lower and upper approximations of rough sets based on a partial order.
Special types of coverings and axiomatization of rough sets based on partial orders
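A small numerical illustration of neighborhood-based covering approximations: the neighborhood of x is taken as the intersection of all covering blocks containing x, and the lower/upper approximations are built from these neighborhoods. The universe, covering and target set below are invented for illustration.

U = {1, 2, 3, 4}
cover = [{1, 2}, {2, 3, 4}, {1, 3, 4}]
target = {1, 2, 3}

def neighborhood(x):
    nbh = set(U)
    for block in cover:
        if x in block:
            nbh &= block          # intersect all blocks containing x
    return nbh

lower = {x for x in U if neighborhood(x) <= target}   # neighborhood inside target
upper = {x for x in U if neighborhood(x) & target}    # neighborhood meets target
print(lower, upper)   # {1, 2} and {1, 2, 3, 4}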
S0950705115002051
Evaluating open-response assignments in Massive Open Online Courses is a difficult task because of the huge number of students involved. Peer grading is an effective method to address this problem. There are two basic approaches in the literature: cardinal and ordinal. The cardinal approach uses the grades assigned by student-graders to a set of assignments of other colleagues. In the ordinal approach, the raw material used by the grading system is the relative order that graders perceive in the assignments they evaluate. In this paper we present a factorization method that seeks a trade-off between the cardinal and ordinal approaches. The algorithm learns from preference judgments to avoid the subjectivity of numeric grades. In addition to the preferences expressed by student-graders, we include other preferences: those induced from assignments with significantly different average grades. The paper reports the results obtained using this approach on a real-world dataset collected at three universities in Spain: A Coruña, Pablo de Olavide at Sevilla, and Oviedo at Gijón. Additionally, we studied the sensitivity of the method with respect to the number of assignments graded by each student. Our method achieves scores similar to or better than those of staff instructors when we measure the discrepancies with other instructors' grades.
A factorization approach to evaluate open-response assignments in MOOCs using preference learning on peer assessments
S0950705115002063
Current approaches to single and cross-domain polarity classification usually use bag of words, n-grams or lexical resource-based classifiers. In this paper, we propose the use of meta-learning to combine and enrich those approaches by adding also other knowledge-based features. In addition to the aforementioned classical approaches, our system uses the BabelNet multilingual semantic network to generate features derived from word sense disambiguation and vocabulary expansion. Experimental results show state-of-the-art performance on single and cross-domain polarity classification. Contrary to other approaches, ours is generic. These results were obtained without any domain adaptation technique. Moreover, the use of meta-learning allows our approach to obtain the most stable results across domains. Finally, our empirical analysis provides interesting insights on the use of semantic network-based features.
Cross-domain polarity classification using a knowledge-enhanced meta-classifier
S0950705115002075
Neighborhood System-based (NS-based) rough set theory is an extension of classical rough set theory. This paper investigates the uncertainty measures of rough sets in a Neighborhood System space (NS-space). We first develop a rough membership function and a rough intuitionistic membership function based on binary relations. Then we propose a fuzzy entropy and an intuitionistic fuzzy entropy of rough sets, and further explore their corresponding properties. Examples show that the methods not only accord with people's intuitive understanding of rough sets, but can also be used to describe the case in which NS-based rough sets degenerate into crisp sets.
Uncertainty measures of Neighborhood System-based rough sets
S0950705115002105
The Q-matrix is the intermediary between attribute mastery patterns and responses in cognitive diagnostic assessment; therefore, the Q-matrix plays a very important role in the assessment. Currently, the main problem is the lack of a reliable method for inferring and validating the expert-specified Q-matrix. Based on the algorithm of Liu et al. (2012), three modified algorithms are proposed. There are two major differences between the algorithm of Liu et al. and the modified algorithms: one is that the item parameters are no longer fixed; the other is the use of an "incremental" Q-matrix estimation, in which some items, named "base items", have been correctly prespecified, while the others (new or raw items whose attributes have not been specified) still need to be specified. The modified algorithms "incrementally" add new items to the "base items" one by one and estimate the item parameters and the Q-matrix jointly, rather than estimating all of the items simultaneously, which would introduce more "noise" and affect the accuracy of the estimation. Simulation studies show that the modified algorithms obtain satisfactory results, and an empirical study shows that the proposed algorithms offer useful information about the Q-matrix specification.
Model identification and Q-matrix incremental inference in cognitive diagnosis
S095070511500218X
We mainly study the low-rank image recovery problem by proposing a bilinear low-rank coding framework called Tensor Low-Rank Representation. For enhanced low-rank recovery and error correction, our method constructs a low-rank tensor subspace to reconstruct given images along row and column directions simultaneously by computing two low-rank matrices alternately from a nuclear norm minimization problem, so both column and row information of data can be effectively preserved. Our bilinear approach seamlessly integrates the low-rank coding and dictionary learning into a unified framework. Thus, our formulation can be treated as enhanced Inductive Robust Principal Component Analysis with noise removed by low-rank representation, and can also be considered as the enhanced low-rank representation with a clean informative dictionary via low-rank embedding. To enable our method to include outside images, the out-of-sample extension is also presented by regularizing the model to correlate image features with the low-rank recovery of the images. Comparison with other criteria shows that our model exhibits stronger robustness and enhanced performance. We also use the outputted bilinear low-rank codes for feature learning. Two unsupervised local and global low-rank subspace learning methods are proposed for extracting image features for classification. Simulations verified the validity of our techniques for image recovery, representation and classification.
Bilinear low-rank coding framework and extension for robust image recovery and feature representation
S0950705115002191
This paper presents a novel approach that exploits semantic knowledge to enhance the object recognition capability of autonomous robots. Semantic knowledge is a rich source of information, naturally gathered from humans (elicitation), which can encode both objects' geometrical/appearance properties and contextual relations. This kind of information can be exploited in a variety of robotic skills, especially for robots performing in human environments. In this paper we propose the use of semantic knowledge to eliminate the need for collecting large datasets for the training stages required in typical recognition approaches. Concretely, semantic knowledge encoded in an ontology is used to synthetically and effortlessly generate an arbitrary number of training samples for tuning Probabilistic Graphical Models (PGMs). We then employ these PGMs to classify patches extracted from 3D point clouds gathered from office environments within the UMA-offices dataset, achieving a recognition success of ∼90%, and from office and home scenes within the NYU2 dataset, yielding successes of ∼81% and ∼69.5%, respectively. Additionally, a comparison with state-of-the-art recognition methods also based on graphical models has been carried out, revealing that our semantic-based training approach can compete with, and even outperform, those trained with a considerable number of real samples.
Exploiting semantic knowledge for robot object recognition
S0950705115002221
Multi criteria decision making (MCDM) is a discipline of operations research which has been widely studied by researchers and practitioners. It deals with evaluating and ranking alternatives from the best to the worst under conflicting criteria with respect to decision maker(s) preferences. Since many real-world systems include uncertainty and vagueness in their information, MCDM uses fuzzy sets. In recent years, as an extension of the traditional fuzzy set concept, type-2 fuzzy sets have been preferred for their capability of handling more uncertainty and hence producing more accurate and robust results, and MCDM approaches based on interval type-2 fuzzy sets (IT2FSs) have been published in various subjects. This paper reviews 82 different papers using various MCDM approaches based on IT2FSs, classified into 35 categories. All papers with respect to single and hybrid approaches are discussed, pointing out their real applications or empirical results and limitations. Furthermore, the papers are statistically analyzed to show new trends within the context of IT2FSs. This systematic and comprehensive review provides an insight for researchers on interval type-2 fuzzy MCDM by showing the current state and potential areas to be focused on in the future.
A comprehensive review of multi criteria decision making approaches based on interval type-2 fuzzy sets
S0950705115002233
When natural disasters happen, relief logistic centers (RLCs) and the quality of their services become absolutely important. In other words, choosing proper locations for RLCs has a direct impact on operating costs and the timeliness of responses to rising demands. This paper aims at proposing a decision support system for prioritizing RLC locations to facilitate providing emergency help when natural disasters occur. The present study focuses on considering availability, risk, technical issues, cost and coverage in locating RLCs. It is assumed that applying the analytic hierarchy process (AHP) can facilitate the problem of locating these centers. The most important step in this process is establishing pair-wise comparisons for the criteria and alternatives. As it is more logical to use interval comparisons instead of crisp ones in real-world problems, this paper uses two decision-making methods known as lexicographic goal programming (LGP) and two-step logarithmic goal programming (TLGP) to derive priorities from pair-wise matrices. To assess the proposed method, a case study of Tehran, the capital city of Iran, is also discussed.
A prioritization model for locating relief logistic centers using analytic hierarchy process with interval comparison matrix
S0950705115002300
Relation extraction is essential for most text mining tasks. Existing approaches to relation extraction are generally based on a bootstrapping methodology, which suffers from the semantic drift problem. This paper presents a new approach to learn semantic dependency patterns, which can significantly alleviate this problem. To this end, a unique representation of activation force defined dependency patterns is presented. Such a pattern is a trigger word mediated relation between an entity and its attribute value, and the trigger word is extracted by using the statistics of word activation forces between those words. The adaptability and the scalability of the framework are facilitated by the recursive and compositional bootstrap learning of patterns and seed pairs. To obtain insights into the reliability and applicability of the method, we applied it to the English Slot Filling task of the Knowledge Base Population track at the Text Analysis Conference 2013. Experimental results show that the proposed method performs well on English Slot Filling 2013, with an overall F1 value significantly higher than the best automatic result reported. The experimental results also demonstrate that the activation force based trigger word mining method plays an essential role in improving the performance.
Mining activation force defined dependency patterns for relation extraction
S0950705115002373
Recommender systems attempt to guide users in decisions related to choosing items based on inferences about their personal opinions. Most existing systems implicitly assume the underlying classification is binary, that is, a candidate item is either recommended or not. Here we propose an alternate framework that integrates three-way decision and random forests to build recommender systems. First, we consider both misclassification cost and teacher cost. The former is paid for wrong recommender behaviors, while the latter is paid to actively consult the user for his or her preferences. With these costs, a three-way decision model is built, and rational settings for positive and negative threshold values α* and β* are computed. We next construct a random forest to compute the probability P that a user will like an item. Finally, α*, β*, and P are used to determine the recommender's behavior. The performance of the recommender is evaluated on the basis of an average cost. Experimental results on the well-known MovieLens data set show that the (α*, β*)-pair determined by three-way decision is optimal not only on the training set, but also on the testing set.
Three-way recommender systems based on random forests
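The decision rule sketched in the abstract above (recommend when P ≥ α*, reject when P ≤ β*, consult the user otherwise) is simple enough to illustrate directly. A minimal sketch follows, assuming scikit-learn's RandomForestClassifier as a stand-in for the paper's forest and purely illustrative threshold and cost settings; the cost-derived computation of (α*, β*) itself is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def three_way_recommend(p_like, alpha, beta):
    """Return 'recommend', 'not-recommend', or 'consult' for a like-probability."""
    if p_like >= alpha:
        return "recommend"
    if p_like <= beta:
        return "not-recommend"
    return "consult"          # pay the teacher cost and ask the user directly

# Illustrative data: rows are (user, item) feature vectors, labels are like/dislike.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)
X_test = rng.normal(size=(5, 8))

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
p = forest.predict_proba(X_test)[:, 1]      # probability that the user likes the item

alpha_star, beta_star = 0.7, 0.3            # assumed threshold pair (derived from costs in the paper)
decisions = [three_way_recommend(pi, alpha_star, beta_star) for pi in p]
print(list(zip(np.round(p, 2), decisions)))
```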
S0950705115002397
Organizations all around the world need to manage huge amounts of data from heterogeneous sources every day in order to conduct decision making processes. This requires them to infer what the value of such data is for the business in question through data analysis, as well as to act promptly for critical or relevant situations. Complex Event Processing (CEP) is a technology that helps tackle this issue by detecting event patterns in real time. However, this technology forces domain experts to define the patterns indicating such situations and the appropriate actions to be executed in their information systems, generally based on Service-Oriented Architectures (SOAs). In particular, these users face the inconvenience of implementing these patterns manually or by using editors which are not user-friendly enough. To deal with this problem, a model-driven solution for real-time decision making in event-driven SOAs is proposed and developed in this paper. This approach allows the integration of CEP with this architecture type, as well as the definition of CEP domains and event patterns through a graphical and intuitive editor, which also permits automatic code generation. Moreover, the solution is evaluated and its benefits are discussed. As a result, we can assert that this is a novel solution for bringing CEP technology closer to any user, positively impacting business decision making processes.
MEdit4CEP: A model-driven solution for real-time decision making in SOA 2.0
S0950705115002567
Feature selection has attracted significant attention in data mining and machine learning in the past decades. Many existing feature selection methods eliminate redundancy by measuring the pairwise inter-correlation of features, whereas the complementariness of features and higher inter-correlation among more than two features are ignored. In this study, a modification item concerning feature complementariness is introduced into the evaluation criterion of features. Additionally, in order to identify the interference effect of already-selected False Positives (FPs), the redundancy-complementariness dispersion is also taken into account to adjust the measurement of pairwise inter-correlation of features. To illustrate the effectiveness of the proposed method, classification experiments are conducted with four frequently used classifiers on ten datasets. Classification results verify the superiority of the proposed method compared with seven representative feature selection methods.
Feature selection with redundancy-complementariness dispersion
S0950705115002579
The aim of the present work is to design a system for the automatic classification of personal video recordings based on simple audiovisual features that can be easily implemented in different devices. Specifically, the main objective is to classify personal video recordings frame by frame into 24 semantically meaningful categories. Such categories include information about the environment, like indoor or outdoor, and the presence or absence of people and their activity, ranging from sports to partying. In order to achieve a robust classification, features derived from both audio and image data are used and combined with state-of-the-art classifiers such as Gaussian Mixture Models or Support Vector Machines. In the process, several combination schemes of features and classifiers are defined and evaluated over a real data set of personal video recordings. The system learns which parameters and classifiers are most appropriate for this task. The experiments show that the approach using specific classifiers for audio features (Mel-Frequency Cepstral Coefficients (MFCCs)) and image features (color, edge histograms), combined through a meta-classification scheme, attains significant performance. The best performance obtained over the different approaches evaluated gave a promising F-measure larger than 57% on average for all the categories and larger than 73% over diverse categories.
Automatic classification of personal video recordings based on audiovisual features
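As one plausible reading of the meta-classification combination scheme mentioned above, the sketch below stacks modality-specific classifiers (audio and image) under a meta-classifier. The features, classifier choices (SVC, LogisticRegression) and data are synthetic stand-ins, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, n_audio, n_image, n_classes = 600, 13, 32, 4   # e.g. 13 MFCCs, 32-bin colour/edge histogram
X_audio = rng.normal(size=(n, n_audio))
X_image = rng.normal(size=(n, n_image))
y = rng.integers(0, n_classes, n)

# Hold out a split so the meta-classifier is trained on predictions it has not seen.
idx_base, idx_meta = train_test_split(np.arange(n), test_size=0.5, random_state=1)

audio_clf = SVC(probability=True).fit(X_audio[idx_base], y[idx_base])
image_clf = SVC(probability=True).fit(X_image[idx_base], y[idx_base])

# Meta-features: concatenated per-class posteriors from the two modality classifiers.
Z_meta = np.hstack([audio_clf.predict_proba(X_audio[idx_meta]),
                    image_clf.predict_proba(X_image[idx_meta])])
meta_clf = LogisticRegression(max_iter=1000).fit(Z_meta, y[idx_meta])

def predict(a_feat, i_feat):
    z = np.hstack([audio_clf.predict_proba(a_feat), image_clf.predict_proba(i_feat)])
    return meta_clf.predict(z)

print(predict(X_audio[:3], X_image[:3]))
```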
S0950705115002634
Nowadays it is important to develop effective computational methods for accurately identifying and predicting biological activity in the virtual screening of bioassay data so as to speed up the process of drug development. Among these methods, the multi-criteria optimization classifier (MCOC) is a classifier which can find a trade-off between the overlapping degree of different classes and the total distance from input points to the decision hyperplane. The former should be minimized while the latter should be maximized. A decision function is derived from training data and subsequently used to predict the class label of an unseen instance. However, due to outliers, anomalies, highly imbalanced classes, high dimension, nonlinear separability and other uncertainties in data, MCOC and other methods often give poor predictive performance. In this paper, we introduce a new fuzzy contribution for each input point based on the class median, define new row and column kernel functions as linear combinations of different feature kernels to replace the single kernel function in the kernel-induced feature space, and assign penalty factors to the imbalanced classes; thus a novel multi-kernel multi-criteria optimization classifier with fuzzification and penalty factors (MK–MCOC–FP) is proposed, and the effects of the aforementioned problems are significantly reduced. Experimental results on predicting active compounds in virtual screening, together with comparisons against linear and quadratic MCOCs, support vector machines (SVM), fuzzy SVM and neural networks, show that MK–MCOC–FP evidently increases the ability to resist noise interference, the predictive accuracy on highly class-imbalanced bioassay data, the separation of active and inactive compounds, the interpretability of the importance or contributions of different features to classification, the efficiency of classification with feature selection or dimensionality reduction for high-dimensional data, and the generalization when predicting the biological activity of new compounds.
Multi-kernel multi-criteria optimization classifier with fuzzification and penalty factors for predicting biological activity
S0950705115002671
Tensor analysis is a powerful tool for multiway problems in data mining, signal processing, pattern recognition and many other areas. Nowadays, the most important challenges in tensor analysis are efficiency and adaptability. Still, the majority of techniques are not scalable or not applicable in streaming settings. One of the promising frameworks that simultaneously addresses these two issues is Incremental Tensor Analysis (ITA) that includes three variants called Dynamic Tensor Analysis (DTA), Streaming Tensor Analysis (STA) and Window-based Tensor Analysis (WTA). However, ITA restricts the tensor’s growth only in time, which is a huge constraint in scalability and adaptability of other modes. We propose a new approach called multi-aspect-streaming tensor analysis (MASTA) that relaxes this constraint and allows the tensor to concurrently evolve through all modes. The new approach, which is developed for analysis-only purposes, instead of relying on expensive linear algebra techniques is founded on the histogram approximation concept. This consequently brought simplicity, adaptability, efficiency and flexibility to the tensor analysis task. The empirical evaluation on various data sets from several domains reveals that MASTA is a potential technique with a competitive value against ITA algorithms.
Multi-aspect-streaming tensor analysis
S0950705115002695
Parsing mistakes impose an upper bound on the performance of many information extraction systems. In particular, syntactic errors in detecting appositive structures limit a system's ability to capture class-instance relations automatically from texts. This article presents a method that considers semantic information to correct appositive structures given by a parser. First, we automatically build a background knowledge base from a reference collection, capturing evidence of semantic compatibility among classes and instances. Then, we evaluate three different probabilistic measures to identify the correct dependence in ambiguous appositive structures. Results reach 91.4% of correct appositions, which is a relative improvement of 12.9% with respect to the best baseline (80.9%) given by a state-of-the-art parser.
On improving parsing with automatically acquired semantic classes
S0950705115002750
This article introduces proximal three-way decision-making. This form of 3-way decision-making stems from the proximity structures that result from endowing each nonempty set of social network triangulation nodes with several proximity relations. A triangulation is obtained by connecting every pair of nearest neighbour nodes with a straight edge. The proposed approach to 3-way decision-making results from the analysis of Delaunay graphs (spatial) as well as friendship network (social) of location-based social network nodes. The knowledge gained from proximal three-way decision-making is in the form of information granules that are near sets of nodes representing interaction between either casual users or friends in a social network. A practical illustration of proximal three-way decision-making is given in terms of a public domain large-scale social network dataset.
Proximal three-way decisions: Theory and applications in social networks
S0950705115002774
This paper presents an extended model of biogeography based optimization (BBO), as opposed to classical BBO, wherein the HSI value of a habitat is not solely dependent upon the emigration and immigration rates of species but is a function of different combinations of SIVs depending upon the characteristics of the habitat under consideration. The extended model also introduces a new concept of the effort required to migrate from a low-HSI solution to a high-HSI solution for optimization in BBO. Hence, the proposed extended model advances the optimization technique originally proposed by Dan Simon as BBO in December 2008. Based on the concepts introduced in our extended model of BBO and its mathematics, we design a two-phase anticipatory system architecture for intelligent preparation of the battlefield, which is the targeted optimization problem in our case. The proposed anticipatory system serves a dual purpose by predicting the deployment strategies of enemy troops in the battlefield and also finding the shortest and best feasible path for an attack on the enemy base station. Hence, the proposed anticipatory system can be used to improve the traditional approaches, since they lack the ability to predict the destination and can only find a suitable path to a given destination, leading to coordination problems and target misidentification, which can cause severe casualties. The designed system can be of major use for commanders in the battlefield who have been using traditional decision making techniques of limited accuracy for predicting the destination. Using the above natural computation technique can help commanders with intelligent preparation of the battlefield by automating the process of assessing the likely base stations of the enemy and the ways in which these can be attacked, given the environment and the terrain considerations. Results on two natural terrain scenarios, the plain/desert region of Alwar and the hilly region of Mussourie, demonstrate the performance of the technique, where the proposed technique clearly outperforms the traditional methods and other EAs like ACO, PSO, SGA, SOFM, FI, GA, etc. that have been used to date for path planning applications on satellite images, with the smallest pixel counts of 351 and 310 respectively. For the location prediction application, the highest prediction efficiencies of the traditional method on Alwar and Mussourie were only 13% and 8% respectively, compared to the proposed method. Nomenclature: SIV (Suitability Index Variable); HSI (Habitat Suitability Index); PSO (Particle Swarm Optimization); ACO (Ant Colony Optimization); BBO (Biogeography Based Optimization); ES (Evolutionary Strategy); SOFM (Self Organizing Feature Maps); FI (Fuzzy Inference Mechanism); SGA (Stud Genetic Algorithm); GA (Genetic Algorithm).
Two-phase anticipatory system design based on extended species abundance model of biogeography for intelligent battlefield preparation
S0950705115002786
Recently, by combining rough set theory with granular computing, pessimistic and optimistic multigranulation rough sets have been proposed to derive “AND” and “OR” decision rules from decision systems. At the same time, by integrating granular computing and formal concept analysis, Wille’s concept lattice and object-oriented concept lattice were used to obtain granular rules and disjunctive rules from formal decision contexts. So, the problem of rule acquisition can bring rough set theory, granular computing and formal concept analysis together. In this study, to shed some light on the comparison and combination of rough set theory, granular computing and formal concept analysis, we investigate the relationship between multigranulation rough sets and concept lattices via rule acquisition. Some interesting results are obtained in this paper: (1) “AND” decision rules in pessimistic multigranulation rough sets are proved to be granular rules in concept lattices, but the inverse may not be true; (2) the combination of the truth parts of an “OR” decision rule in optimistic multigranulation rough sets is an item of the decomposition of a disjunctive rule in concept lattices; (3) a non-redundant disjunctive rule in concept lattices is shown to be the multi-combination of the truth parts of “OR” decision rules in optimistic multigranulation rough sets; and (4) the same rule is defined with a same certainty factor but a different support factor in multigranulation rough sets and concept lattices. Moreover, algorithm complexity analysis is made for the acquisition of “AND” decision rules, “OR” decision rules, granular rules and disjunctive rules.
A comparative study of multigranulation rough sets and concept lattices via rule acquisition
S0950705115002889
To avoid performance degradation and maintain the quality of results obtained by case-based reasoning (CBR) systems, maintenance becomes necessary, especially for those systems designed to operate over long periods and which must handle large numbers of cases. CBR systems cannot be preserved without scanning the case base; for this reason, the latter must undergo maintenance operations. The techniques for optimizing the case base's dimension are the analog of instance size reduction methodology in the machine learning community. This study links these techniques by presenting case-based maintenance (CBM) in the framework of instance-based reduction, and provides, first, an overview of CBM studies; second, a novel method for structuring and updating the case base; and finally, an industrial case application. The structuring combines a categorization algorithm with a competence measure CM based on competence and performance criteria. Since the case base must evolve over time through the addition of new cases, an auto-increment algorithm is installed in order to dynamically ensure the structuring and the quality of the case base. The proposed method was evaluated on a case base from an industrial plant. In addition, an experimental study of competence and performance was undertaken on reference benchmarks. This study showed that the proposed method gives better results than the best methods currently found in the literature.
Case-based maintenance: Structuring and incrementing the case base
S0950705115003007
As a natural extension of three-way decisions with incomplete information, this paper provides a novel three-way decision model based on an incomplete information system. First, we define a new relation to describe the similarity degree of incomplete information. Then, in view of the missing values present in an incomplete information system, we utilize interval numbers to acquire the loss function. A hybrid information table, which consists of both the incomplete information and the loss function, is used to build the new three-way decision model. The key steps and the algorithm for constructing the integrated three-way decision model are also carefully investigated. An empirical study of medical diagnosis validates the reasonability and effectiveness of our proposed model.
A novel three-way decision model based on incomplete information system
S0950705115003019
Ordinal regression (OR) is a learning paradigm lying between classification and regression, and has been attracting increasing attention in recent years due to its wide applications such as human age estimation. To date, a variety of methods have been proposed for OR, among which the threshold-based category has become one of the representatives with preferable performance. Typical threshold-based methods, such as discriminant learning for OR (i.e., KDLOR) and OR via manifold learning (i.e., MOR), usually seek an OR projection direction along which to maximally separate classes by a sequence of ordinal thresholds. Although they have yielded encouraging results, there is still room for improvement since (1) the thresholds involved are optimized independently of each other, and (2) the ordinal constraints only involve class means (or centroids), which generally under-represent the class distributions. Motivated by this analysis, in this work we propose to jointly learn the thresholds across samples and class centroids by seeking an optimal direction along which all the samples are distributed in as much order as possible and maximally cater for nearest-centroid distributions, which we call Ordinal Nearest-Centroid Projection (OrNCP) and formulate as a combinatorial optimization problem. For efficiency of optimization, we further relax the problem to a quadratic programming problem (QP-OrNCP) that in form covers KDLOR and MOR as its special cases. Finally, through extensive experiments on synthetic and real ordinal datasets, we demonstrate the superiority of the proposed method over state-of-the-art methods.
A novel ordinal learning strategy: Ordinal nearest-centroid projection
S095070511500307X
In multi-agent systems, stereotypical trust models are widely used to bootstrap a priori trust in case historical trust evidences are unavailable. These models can work well if and only if malicious agents share some common features (i.e., stereotypes) in their profiles and these features can be detected. However, this condition may not hold for all adversarial scenarios. Smart attackers can show different trustworthiness to different agents and services (i.e., launching context-correlated attacks). In this paper, we propose CAST, a novel Context-Aware Stereotypical Trust deep learning framework. CAST coins a comprehensive set of seven context-aware stereotypes, each of which can capture a unique type of context-correlated attack, as well as a deep learning architecture to keep the trust stereotyping robust (i.e., resistant to training errors). The basic idea is to construct a multi-layer perceptive structure to learn the latent correlations between context-aware stereotypes and trustworthiness, and thus to estimate the new trust by taking the context information into account. We have evaluated CAST using a rich set of experiments over a simulated multi-agent system. The experimental results have confirmed that our CAST achieves, on average, approximately tens of times higher trust inference accuracy than the competing algorithms in the presence of context-correlated attacks, and more importantly maintains much better trust inference robustness against stereotyping errors.
A priori trust inference with context-aware stereotypical deep learning
S0950705115003081
Epilepsy is a neurological disorder of the brain which is difficult to diagnose visually using Electroencephalogram (EEG) signals. Hence, an automated detection of epilepsy using EEG signals will be a useful tool in the medical field. The automation of epilepsy detection using signal processing techniques such as the wavelet transform and entropies may optimise the performance of the system. Many algorithms have been developed to diagnose the presence of seizures in EEG signals. Entropy is a nonlinear parameter that reflects the complexity of the EEG signal. Many entropies have been used to differentiate normal, interictal and ictal EEG signals. This paper discusses the various entropies used for the automated diagnosis of epilepsy using EEG signals. We present unique ranges for the various entropies used to differentiate normal, interictal, and ictal EEG signals and also rank them according to their ability to discriminate the three classes. These entropies can be used to classify the different stages of epilepsy and can also be used for other biomedical applications.
Application of entropies for automated diagnosis of epilepsy using EEG signals: A review
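Two of the entropies commonly applied to EEG segments, Shannon entropy of the amplitude histogram and normalised spectral entropy, are easy to compute. The sketch below is illustrative only, with synthetic segments standing in for real EEG, and does not reproduce the specific ranges or the full set of entropies covered by the review.

```python
import numpy as np

def shannon_entropy(signal, bins=64):
    """Shannon entropy of the amplitude histogram of a signal segment."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spectral_entropy(signal):
    """Shannon entropy of the normalised power spectrum, scaled to [0, 1]."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(len(p))

# Synthetic stand-ins: a noisy 'normal-like' segment and a rhythmic 'ictal-like' segment.
rng = np.random.default_rng(2)
t = np.arange(0, 4, 1 / 256)                       # assumed 256 Hz sampling, 4-second segment
normal = rng.normal(size=t.size)
ictal = 50 * np.sin(2 * np.pi * 3 * t) + rng.normal(size=t.size)

for name, seg in [("normal-like", normal), ("ictal-like", ictal)]:
    print(name, round(shannon_entropy(seg), 3), round(spectral_entropy(seg), 3))
```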
S0950705115003123
The paper describes a new approach of developing and maintaining state-of-the-art decision support systems. Such systems are able to capture the collaborative work on decision problems over time. Due to the complexity of large problem spaces a multi-modal knowledge representation is proposed. For the realization of a multi-modal knowledge base we integrate semantic technologies as a fundamental layer by combining the W3C ontologies PROV-O and SKOS. The approach is demonstrated by an implementation report of an industrially deployed decision support system.
Knowledge-driven systems for episodic decision support
S0950705115003147
The fruit fly optimization algorithm (FOA) is a recently presented metaheuristic technique that is inspired by the behavior of fruit flies. This paper improves the standard FOA by introducing a novel parameter integrated with chaos. The performance of the developed chaotic fruit fly algorithm (CFOA) is investigated in detail on ten well known benchmark problems using fourteen different chaotic maps. Moreover, we performed comparison studies with the basic FOA, FOA with Levy flight distribution, and other recently published chaotic algorithms. Statistical results on every optimization task indicate that the chaotic fruit fly algorithm (CFOA) has a very fast convergence rate. In addition, CFOA is compared with recently developed chaos enhanced algorithms such as the chaotic bat algorithm, chaotic accelerated particle swarm optimization, the chaotic firefly algorithm, the chaotic artificial bee colony algorithm, and chaotic cuckoo search. Overall, the research findings show that FOA with the Chebyshev map shows superiority in terms of reliability of global optimality and algorithm success rate.
Chaotic fruit fly optimization algorithm
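A hedged sketch of a fruit-fly-style search whose step size is driven by a Chebyshev chaotic map, which is one plausible way a chaotic map can replace a fixed random step; the update scheme, the map degree and all parameters are illustrative and not the authors' exact CFOA formulation.

```python
import numpy as np

def chebyshev_map(x, a=4):
    """Chebyshev chaotic map on [-1, 1]: x_{k+1} = cos(a * arccos(x_k))."""
    return np.cos(a * np.arccos(np.clip(x, -1.0, 1.0)))

def sphere(x):
    return float(np.sum(x ** 2))

def chaotic_foa(obj, dim=5, n_flies=30, iters=200, seed=3):
    """Simplified fruit-fly-style search with chaos-driven step sizes (illustrative only)."""
    rng = np.random.default_rng(seed)
    centre = rng.uniform(-5, 5, dim)           # swarm location: the 'smell' search starts here
    best_x, best_f = centre.copy(), obj(centre)
    chaos = rng.uniform(-1, 1)                 # chaotic state shared by the swarm
    for _ in range(iters):
        chaos = chebyshev_map(chaos)
        step = abs(chaos)                      # chaotic step-size parameter
        flies = centre + step * rng.uniform(-1, 1, size=(n_flies, dim))
        fitness = np.array([obj(f) for f in flies])
        i = int(np.argmin(fitness))
        if fitness[i] < best_f:                # vision phase: the swarm flies to the best smell
            best_f, best_x = float(fitness[i]), flies[i].copy()
            centre = best_x.copy()
    return best_x, best_f

x_best, f_best = chaotic_foa(sphere)
print(round(f_best, 6))
```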
S0950705115003160
Decision trees have been widely used in data mining and machine learning as a comprehensible knowledge representation. Minimal cost decision tree construction plays a crucial role in cost sensitive learning. Recently, many algorithms have been developed to tackle this problem. These algorithms choose an appropriate cut point of a numeric attribute by computing all possible cut points and assign a node by testing all attributes. Therefore, the efficiency of these algorithms on large data sets is often unsatisfactory. To solve this issue, in this paper we propose a cost sensitive decision tree algorithm with two adaptive mechanisms to learn cost sensitive decision trees from training data sets, based on the C4.5 algorithm. The two adaptive mechanisms play an important role in cost sensitive decision tree construction. The first mechanism, the adaptive selecting the cut point (ASCP) mechanism, selects the cut point adaptively to build a classifier rather than calculating each possible cut point of an attribute. It significantly improves the efficiency of evaluating numeric attributes for cut point selection. The second mechanism, the adaptive removing attribute (ARA) mechanism, removes some redundant attributes in the process of selecting a node. The effectiveness of the proposed algorithm is demonstrated on fourteen UCI data sets with representative test costs drawn from a Normal distribution. Compared with the CS-C4.5 algorithm, the proposed algorithm significantly increases efficiency.
A cost sensitive decision tree algorithm with two adaptive mechanisms
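To make the cost-sensitive splitting idea concrete, the sketch below chooses a cut point for one numeric attribute by minimising misclassification cost plus a test cost, and evaluates only every few candidate cut points as a crude stand-in for the ASCP mechanism; the cost values, the stride and the per-side prediction rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def split_cost(y_left, y_right, cost_fp=1.0, cost_fn=5.0):
    """Misclassification cost if each side predicts its cheaper class (binary labels 0/1)."""
    def side_cost(y):
        if y.size == 0:
            return 0.0
        cost_pred_pos = np.sum(y == 0) * cost_fp   # predicting positive: pay for the negatives
        cost_pred_neg = np.sum(y == 1) * cost_fn   # predicting negative: pay for the positives
        return min(cost_pred_pos, cost_pred_neg)
    return side_cost(y_left) + side_cost(y_right)

def choose_cut_point(x, y, test_cost=0.2, stride=5):
    """Evaluate only every `stride`-th candidate cut point (a crude stand-in for ASCP)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    candidates = np.unique(xs)[::stride]
    best_cut, best_total = None, np.inf
    for c in candidates:
        left, right = ys[xs <= c], ys[xs > c]
        total = split_cost(left, right) + test_cost
        if total < best_total:
            best_cut, best_total = float(c), float(total)
    return best_cut, best_total

rng = np.random.default_rng(4)
x = rng.normal(size=300)
y = (x + rng.normal(scale=0.5, size=300) > 0).astype(int)
print(choose_cut_point(x, y))
```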
S095070511500324X
Accurate and reliable prediction of diarrhoea outpatient visits is necessary for health authorities to ensure appropriate action for the control of an outbreak. In this study, a novel method based on time series decomposition and multi-local predictor fusion is proposed to predict diarrhoea outpatient visits. For time series decomposition, Ensemble Empirical Mode Decomposition with Adaptive Noise (EEMDAN) is used to decompose the diarrhoea outpatient visits time series into a finite set of Intrinsic Mode Function (IMF) components and a residue. The IMF components and residue are modeled and predicted separately by means of a Generalized Regression Neural Network (GRNN) as the local predictor. Then the prediction results of all components are fused using another independent GRNN as the fusion predictor to obtain the final prediction results. This is the first study to use EEMDAN and GRNN to construct a prediction model for diarrhoea outpatient visits prediction problems. Pre-processing and post-processing techniques are used to take into account the seasonal and trend effects in the datasets and to improve the prediction precision of the proposed model. The performance of the proposed EEMDAN–GRNN model has been compared with the Seasonal Auto-Regressive Moving Average (SARIMA), a single GRNN, Wavelet-GRNN and also with EEMD–GRNN by applying them to predict four real-world diarrhoea outpatient visit series. The results indicate that the proposed EEMDAN–GRNN model provides more accurate prediction results compared to the other traditional techniques. Thus EEMDAN–GRNN can be an alternative tool to facilitate the prediction of diarrhoea outpatient visits.
Diarrhoea outpatient visits prediction based on time series decomposition and multi-local predictor fusion
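Since a GRNN is essentially Nadaraya-Watson kernel regression, the decompose-predict-fuse flow can be sketched compactly. Below, a toy two-component decomposition stands in for the EEMDAN IMFs plus residue (the real decomposition would come from an EEMDAN implementation), and the same tiny GRNN serves as both local and fusion predictor; the lag length, smoothing window and bandwidths are all assumptions.

```python
import numpy as np

class GRNN:
    """Tiny generalised regression NN (Nadaraya-Watson kernel regression)."""
    def __init__(self, sigma=1.0):
        self.sigma = sigma
    def fit(self, X, y):
        self.X, self.y = np.atleast_2d(X), np.asarray(y, float)
        return self
    def predict(self, X):
        X = np.atleast_2d(X)
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        return (w @ self.y) / np.clip(w.sum(1), 1e-12, None)

def lagged(series, lags=6):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

# Toy 'decomposition': a smooth trend plus the residual, standing in for EEMDAN's IMFs + residue.
rng = np.random.default_rng(5)
visits = 100 + 10 * np.sin(np.arange(200) * 2 * np.pi / 52) + rng.normal(scale=3, size=200)
trend = np.convolve(visits, np.ones(13) / 13, mode="same")
components = [visits - trend, trend]

# Local predictors: one GRNN per component; their forecasts are then fused.
local_preds, local_fits = [], []
for comp in components:
    X, y = lagged(comp)
    model = GRNN().fit(X[:-1], y[:-1])
    local_fits.append(model.predict(X[:-1]))        # in-sample predictions used to train the fusion
    local_preds.append(model.predict(X[-1:])[0])    # one-step-ahead forecast

# Fusion predictor: another GRNN mapping component predictions to the observed series.
Xf, yf = np.column_stack(local_fits), lagged(visits)[1][:-1]
fusion = GRNN().fit(Xf, yf)
print(float(fusion.predict(np.array([local_preds]))[0]))
```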
S0950705115003251
Decision makers often face challenges during the adoption of technology. Indeed, technology adoption usually occurs sequentially, so observational learning can help the decision makers to reach reasonable decisions. In reality, the decision makers may prefer to adopt a waiting strategy when the situation is equivocal. In this study, we construct a technology adoption model called the G-WB model, where we consider a generalized waiting situation based on the Walden–Browne (WB) model. In our model, herd behavior is a difficult issue, but an information cascade occurs after herding appears. We explore the effect of different parameters on the convergence speed and extend our G-WB model. We also demonstrate that observational learning is a useful strategy during sequential decision making and our model is an optimal version of the WB model, which facilitates a better understanding of herding.
Construction of a technology adoption decision-making model and its extension to understanding herd behavior
S0950705115003317
It is necessary to provide a set of scientific foundations to support emergency response management. The emergency response process is described based on ideas from OODA (Observe, Orient, Decide, Act) loop theory. To address the difficulty of describing cooperation in the emergency response process, a coupled OODA framework is built to analyze the interaction between emergency response units. In order to demonstrate the emergency response mechanism theoretically, the simulation theory of DEVS (Discrete Event System Specification) is adopted to build the simulator model of the basic OODA process framework. The coupled DEVS model is then utilized to build the simulation coordinator of the coupled OODA process framework. An earthquake disaster response scenario in STAGE is built on the emergency response mechanism, the emergency collaborative rescue process based on coupled OODA frameworks is adopted to build the emergency response system, and the simulation results are reasonable.
Modeling and simulation method of the emergency response systems based on OODA
S0950705115003329
Uncertainty measure mining and applications are fundamental, and it is possible for double-quantitative fusion to acquire benign measures via heterogeneity and complementarity. This paper investigates the double-quantitative fusion of relative accuracy and absolute importance to provide systematic measure mining, benign integration construction, and hierarchical attribute reduction. (1) First, three-way probabilities and measures are analyzed. Thus, the accuracy and importance are systematically extracted, and both are further fused into importance-accuracy (IP-Accuracy), a synthetic causality measure. (2) By sum integration, IP-Accuracy gains a bottom-top granulation construction and granular hierarchical structure. IP-Accuracy holds benign granulation monotonicity at both the knowledge concept and classification levels. (3) IP-Accuracy attribute reduction is explored based on decision tables. A hierarchical reduct system is thereby established, including qualitative/quantitative reducts, tolerant/approximate reducts, reduct hierarchies, and heuristic algorithms. Herein, the innovative tolerant and approximate reducts quantitatively approach/expand/weaken the ideal qualitative reduct. (4) Finally, a decision table example is provided for illustration. This paper performs double-quantitative fusion of causality measures to systematically mine IP-Accuracy, and this measure benignly constructs a granular computing platform and hierarchical reduct system. By resorting to a monotonous uncertainty measure, this study provides an integration-evolution strategy of granular construction for attribute reduction.
Double-quantitative fusion of accuracy and importance: Systematic measure mining, benign integration construction, hierarchical attribute reduction
S0950705115003354
We introduce a new multiple criteria ranking/choice method that applies Dominance-based Rough Set Approach (DRSA) and represents the Decision Maker’s (DM’s) preferences with decision rules. The DM provides a set of pairwise comparisons indicating whether an outranking (weak preference) relation should hold for some pairs of reference alternatives. This preference information is structured using the lower and upper approximations of outranking (S) and non-outranking ( S c ) relations. Then, all minimal-cover (MC) sets of decision rules being compatible with this preference information are induced. Each of these sets is supported by some positive examples (pairs of reference alternatives from the lower approximation of a preference relation) and it does not cover any negative example (pair of alternatives from the upper approximation of an opposite preference relation). The recommendations obtained by all MC sets of rules are analyzed to describe pairwise outranking and non-outranking relations, using probabilistic indices (estimates of probabilities that one alternative outranks or does not outrank the other). Furthermore, given the preference relations obtained in result of application of each MC set of rules on a considered set of alternatives, we exploit them using some scoring procedures. From this, we derive the distribution of ranks attained by the alternatives. We also extend the basic approach in several ways. The practical usefulness of the method is demonstrated on a problem of ranking Polish cities according to their innovativeness.
Multiple criteria ranking and choice with all compatible minimal cover sets of decision rules
S0950705115003366
Microarray-based gene expression profiling has emerged as an efficient technique for the classification, diagnosis, prognosis, and treatment of cancer. The nature of this disease changes frequently, which generates a huge volume of data. The data retrieved from microarrays cover a variety of natures (veracity) and change over time (velocity). Therefore, the analysis of microarray datasets in a very short period is essential. The major drawback of microarray data is the 'curse of dimensionality', which hinders the useful information of the dataset and leads to computational instability. Therefore, selecting relevant genes is imperative in microarray data analysis. Most of the existing schemes employ a two-phase process: feature selection/extraction followed by classification. In this paper, various statistical methods (tests) based on MapReduce are proposed to select the relevant features. After feature selection, a MapReduce based proximal support vector machine (mrPSVM) classifier is also proposed to classify the microarray data. These algorithms are successfully implemented on the Hadoop framework. A comparative analysis of these feature selection methodologies is carried out on microarray datasets of various dimensions. Experimental results show that the ensemble of the mrPSVM classifier and various feature selection methods produces a better accuracy rate on the benchmark datasets.
Classification of microarray using MapReduce based proximal support vector machine classifier
S0950705115003500
By introducing the misclassification and delayed decision costs into the probabilistic approximations of the target, the decision-theoretic rough set model becomes sensitive to cost. However, the traditional decision-theoretic rough set is based on one and only one cost matrix; such a model does not take the multiplicity and variability of cost into consideration. To fill this gap, a multicost strategy is developed for the decision-theoretic rough set. Firstly, from the viewpoint of the voting fusion mechanism, a parameterized decision-theoretic rough set is proposed. Secondly, based on the new model, the smallest possible cost and the largest possible cost are calculated in decision systems. Finally, both the decision-monotonicity and cost criteria are introduced into the attribute reductions. A heuristic algorithm is used to compute the decision-monotonicity reduct, while a genetic algorithm is used to compute the smallest and largest possible cost reducts. Experimental results on eight UCI data sets tell us: 1. compared with the raw data, the decision-monotonicity reduct can generate greater lower approximations and more decision rules; 2. the smallest possible cost reduct is much better than the decision-monotonicity reduct for obtaining smaller costs and more decision rules. This study suggests new research trends concerning decision-theoretic rough set theory.
Decision-theoretic rough set: A multicost strategy
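The probabilistic thresholds of a decision-theoretic rough set follow from the six losses by the standard formulas α = (λPN − λBN)/((λPN − λBN) + (λBP − λPP)) and β = (λBN − λNN)/((λBN − λNN) + (λNP − λBP)). The sketch below applies them to several cost matrices and reports the resulting threshold ranges, which is in the spirit of, though not identical to, the paper's smallest/largest possible cost analysis; the cost values are made up.

```python
import numpy as np

def dtrs_thresholds(loss):
    """Standard DTRS thresholds from a loss matrix.

    Keys are 'PP','BP','NP','PN','BN','NN': the first letter is the action
    (Positive/Boundary/Negative), the second the true state (P: in X, N: not in X).
    """
    alpha = (loss['PN'] - loss['BN']) / ((loss['PN'] - loss['BN']) + (loss['BP'] - loss['PP']))
    beta = (loss['BN'] - loss['NN']) / ((loss['BN'] - loss['NN']) + (loss['NP'] - loss['BP']))
    return alpha, beta

# A multicost setting: several plausible cost matrices instead of one and only one.
cost_matrices = [
    {'PP': 0, 'BP': 2, 'NP': 6, 'PN': 8, 'BN': 3, 'NN': 0},
    {'PP': 0, 'BP': 1, 'NP': 5, 'PN': 10, 'BN': 4, 'NN': 0},
    {'PP': 0, 'BP': 3, 'NP': 7, 'PN': 6, 'BN': 2, 'NN': 0},
]
pairs = np.array([dtrs_thresholds(c) for c in cost_matrices])
print("alpha range:", pairs[:, 0].min().round(3), "-", pairs[:, 0].max().round(3))
print("beta  range:", pairs[:, 1].min().round(3), "-", pairs[:, 1].max().round(3))
```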
S0950705115003512
Age-related Macular Degeneration (AMD) is a posterior segment eye disease affecting elderly people that may lead to loss of vision. AMD is diagnosed using clinical features like drusen, Geographic Atrophy (GA) and Choroidal NeoVascularization (CNV) present in the fundus image. It is mainly classified into dry and wet types. Dry AMD is the most common among elderly people. At present there is no treatment available for dry AMD. Early diagnosis and treatment of the affected eye may reduce the progression of the disease. Manual screening of fundus images is time consuming and subjective. Hence in this study we propose an Empirical Mode Decomposition (EMD)-based nonlinear feature extraction to characterize and classify normal and AMD fundus images. EMD is performed on 1D Radon Transform (RT) projections to generate different Intrinsic Mode Functions (IMF). Various nonlinear features are extracted from the IMFs. The dimensionality of the extracted features is reduced using Locality Sensitive Discriminant Analysis (LSDA). Then the reduced LSDA features are ranked using minimum Redundancy Maximum Relevance (mRMR), Kullback–Leibler Divergence (KLD) and Chernoff Bound and Bhattacharyya Distance (CBBD) techniques. Ranked LSDA components are sequentially fed to a Support Vector Machine (SVM) classifier to discriminate normal and AMD classes. The performance of the current study is evaluated using a private dataset and two public datasets, namely Automated Retinal Image Analysis (ARIA) and STructured Analysis of the Retina (STARE). The 10-fold cross validation approach is used to evaluate the performance of the classifiers, and the highest average classification accuracy of 100%, sensitivity of 100% and specificity of 100% are obtained for the STARE dataset using only two ranked LSDA components. Our results reveal that the proposed system can be used as a decision support tool for clinicians for mass AMD screening.
Automated detection of age-related macular degeneration using empirical mode decomposition
S0950705115003536
Due to the potentially important information in real world networks, link prediction has become an interesting focus of different branches of science. Nevertheless, in the “big data” era, link prediction faces significant challenges, such as how to predict massive data efficiently and accurately. In this paper, we propose two novel node-coupling clustering approaches and their extensions for link prediction, which combine the coupling degrees of the common neighbor nodes of a predicted node-pair with the cluster geometries of nodes. We then present an experimental evaluation to compare the prediction accuracy and effectiveness of our approaches with the representative existing methods on two synthetic datasets and six real world datasets. The experimental results show that our approaches outperform the existing methods.
Node-coupling clustering approaches for link prediction
S0950705115003548
This paper proposes a novel learning model that introduces the calculation of the pairwise gravitation of the selected patterns into the classical fixed radius nearest neighbor method, in order to overcome the drawback of the original nearest neighbor rule when dealing with imbalanced data. The traditional k nearest neighbor rule is considered to lose power on imbalanced datasets because the final decision might be dominated by the patterns from negative classes in spite of the distance measurements. Unlike existing modified nearest neighbor learning models, the proposed method, named GFRNN, has a simple structure and is thus easy to apply. Moreover, no parameters of GFRNN need to be initialized or coordinated during the whole learning procedure. In practice, GFRNN first selects patterns as candidates from the training set under the fixed radius nearest neighbor rule, and then introduces a metric based on a modified law of gravitation from the physical world to measure the distance between the query pattern and each candidate. Finally, GFRNN makes the decision based on the sum of all the corresponding gravitational forces from the candidates on the query pattern. The experimental comparison validates both the effectiveness and the efficiency of GFRNN on forty imbalanced datasets, compared to nine typical methods. In conclusion, the contribution of this paper is the construction of a new, simple nearest neighbor architecture that deals with imbalanced classification effectively without any manual parameter coordination, further expanding the family of nearest neighbor based rules.
Gravitational fixed radius nearest neighbor for imbalanced problem
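A sketch of the decision rule described above: collect training points within a fixed radius of the query, accumulate a gravitation-like force per class, and predict the class with the largest total. The plain inverse-square law and the inverse-class-frequency mass used here are illustrative choices, not the paper's exact modified gravitation formula.

```python
import numpy as np
from collections import Counter

def gfrnn_predict(X_train, y_train, query, radius=1.0, eps=1e-9):
    """Fixed-radius candidate selection + summed inverse-square 'gravitation' per class.

    The mass term (inverse class frequency) and the 1/d^2 law are illustrative
    assumptions, not the paper's modified gravitation formula.
    """
    counts = Counter(y_train)
    mass = {c: 1.0 / n for c, n in counts.items()}         # lighter (minority) classes pull harder
    d = np.linalg.norm(X_train - query, axis=1)
    candidates = np.where(d <= radius)[0]
    if candidates.size == 0:                                # fall back to the single nearest point
        candidates = np.array([int(np.argmin(d))])
    force = {}
    for i in candidates:
        c = y_train[i]
        force[c] = force.get(c, 0.0) + mass[c] / (d[i] ** 2 + eps)
    return max(force, key=force.get)

# Imbalanced toy data: 200 negatives around the origin, 20 positives shifted right.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal([2.5, 0], 0.5, (20, 2))])
y = np.array([0] * 200 + [1] * 20)
print(gfrnn_predict(X, y, query=np.array([2.3, 0.1])))
```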
S095070511500355X
This paper is devoted to two issues involved in the one-class support vector machine (OCSVM), i.e., the optimization algorithm and the kernel parameter selection. For appropriate choices of parameters, the primal maximum margin problem of OCSVM is equivalent to a nearest point problem. A generalized Gilbert (GG) algorithm is proposed to solve the nearest point problem. Compared with the algebraic algorithms developed for OCSVM, such as the well-known sequential minimal optimization (SMO) algorithm, the GG algorithm is a novel geometric algorithm that has an intuitive and explicit optimization target at each iteration. Moreover, an improved MIES (IMIES) is developed for the Gaussian kernel parameter selection. IMIES is implemented by constraining the geometric locations of edge and interior sample mappings relative to OCSVM separating hyper-planes. The experimental results on 2-D artificial datasets and benchmark datasets show that IMIES is able to select suitable kernel parameters, and the GG algorithm is computationally more efficient while achieving comparable accuracies to the SMO algorithm.
A generalized Gilbert algorithm and an improved MIES for one-class support vector machine
S0950705115003585
In this paper, a novel joint replenishment and synthetical delivery (JRD) model is proposed to improve the coordination of replenishment and delivery processes. Traditionally, in a warehouse centralized supply chain, orders from online customers are delivered independently after multiple items have been jointly replenished. To decrease the outbound delivery cost, a new delivery strategy considering synthetically dispatched orders, order-customer matching, and customer visiting sequence is proposed. Three meta-heuristic algorithms, namely the quantum evolution algorithm (QEA), the differential evolution algorithm (DE) and the quantum differential evolution algorithm (QDE), are utilized to solve the proposed JRD. The most important parts of each algorithm (initialization, reproduction and mutation, and selection) are redesigned according to the structure of the decision variables. Numerical experiments are conducted to find the best parameter settings and the potential searching abilities of each algorithm. Finally, experimental results show the superiority of the proposed DE and QDE in terms of searching speed, accuracy, and robustness.
Intelligent algorithms for a new joint replenishment and synthetical delivery problem in a warehouse centralized supply chain
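The JRD model itself is domain-specific, but the DE component can be illustrated with a bare-bones DE/rand/1/bin loop on a generic objective; the population size, F, CR and the Rosenbrock stand-in objective are all assumptions rather than the paper's redesigned operators.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=30, F=0.6, CR=0.9, iters=300, seed=7):
    """Bare-bones DE/rand/1/bin; `bounds` is a sequence of (low, high) per dimension."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds, float).T
    dim = low.size
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fit = np.array([obj(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)       # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                    # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])            # binomial crossover
            f_trial = obj(trial)
            if f_trial <= fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example objective (a generic stand-in for the joint replenishment-and-delivery cost).
rosenbrock = lambda x: float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))
x_best, f_best = differential_evolution(rosenbrock, bounds=[(-2, 2)] * 4)
print(np.round(x_best, 3), round(f_best, 4))
```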
S0950705115003639
Human activity recognition can be exploited to benefit ubiquitous applications using sensors. Current research on sensor-based activity recognition is mainly using data-driven or knowledge-driven approaches. In terms of complex activity recognition, most data-driven approaches suffer from portability, extensibility and interpretability problems, whilst knowledge-driven approaches are often weak in handling intricate temporal data. To address these issues, we exploit time series shapelets for complex human activity recognition. In this paper, we first describe the association between activity and time series transformed from sensor data. Then, we present a recursively defined multilayered activity model to represent four types of activities and employ a shapelet-based framework to recognize various activities represented in the model. A prototype system was implemented to evaluate our approach on two public datasets. We also conducted two real-world case studies for system evaluation: daily living activity recognition and basketball play activity recognition. The experimental results show that our approach is capable of handling complex activity effectively. The results are interpretable and accurate, and our approach is fast and energy-efficient in real-time.
Sensor-based human activity recognition system with a multilayered model using time series shapelets
S0950705115003640
As an example of one-class classification methods, support vector data description (SVDD) offers an opportunity to improve the performance of outlier detection and reduce the loss caused by outlier occurrence in many real-world applications. However, due to the limited number of outliers, the SVDD model is built using only the normal data. In this situation, SVDD may easily lead to overfitting when the normal data contain noise or uncertainty. This paper presents two new types of SVDD methods, named R-SVDD and εNR-SVDD, which are constructed by introducing the cutoff distance-based local density of each data sample and the ε-insensitive loss function with negative samples. We demonstrate that the proposed methods can improve the robustness of SVDD for data with noise or uncertainty through extensive experiments on ten UCI datasets. The experimental results show that the proposed εNR-SVDD is superior to other existing outlier detection methods in terms of the detection rate and the false alarm rate. Meanwhile, the proposed R-SVDD can also achieve better outlier detection performance with only normal data. Finally, the proposed methods are successfully used to detect image-based conveyor belt faults.
Robust support vector data description for outlier detection with noise or uncertain data
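The cutoff distance-based local density mentioned above is the ρ familiar from density-peaks clustering: the number of samples closer than a cutoff distance d_c. A sketch follows; the percentile heuristic for d_c and the way the resulting weights would enter a robust SVDD objective are assumptions, as the paper's formulation is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def cutoff_local_density(X, percentile=2.0):
    """rho_i = number of samples closer than the cutoff distance d_c (density-peaks style)."""
    dists = pdist(X)
    D = squareform(dists)
    d_c = np.percentile(dists, percentile)   # common heuristic: a low percentile of all pairwise distances
    rho = (D < d_c).sum(axis=1) - 1          # exclude the point itself
    return rho, d_c

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0, 0.5, (100, 2)),   # dense cluster of normal samples
               rng.uniform(-4, 4, (5, 2))])    # sparse points acting like noise/outliers
rho, d_c = cutoff_local_density(X)
weights = rho / rho.max()                      # low weight => likely noise, to be damped in a robust SVDD
print("cutoff distance:", round(float(d_c), 3))
print("lowest-density points:", np.argsort(rho)[:5])
```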
S0950705115003676
k-nearest neighbors (kNN) classifiers are commonly used in various applications due to their relative simplicity and the fact that no training is necessary. However, the time complexity of the basic algorithm is quadratic, which makes it inappropriate for large scale datasets. At the same time, the performance of most improved algorithms based on tree structures decreases rapidly with increasing dataset dimensionality, and tree structures have different complexity on different datasets. In this paper, we introduce the concept of “location difference of multiple distances” and use it to measure the difference between different data points. On this basis, the location difference of multiple distances based nearest neighbors searching algorithm (LDMDBA) is proposed. LDMDBA has a time complexity of O(logdnlogn) and does not rely on a search tree. This makes LDMDBA the only kNN method that can be efficiently applied to high dimensional data and has very good stability on different datasets. In addition, most of the existing methods have a time complexity of O(n) to predict a data point outside the dataset. By contrast, LDMDBA has a time complexity of O(logdlogn) to predict a query point in datasets of different dimensions, and can therefore be applied in real systems and large scale databases. The effectiveness and efficiency of LDMDBA are demonstrated in experiments involving public and artificial datasets.
Location difference of multiple distances based k-nearest neighbors algorithm
S095070511500372X
Noise components are a major cause of poor performance in document analysis. To reduce undesired components, most recent research has applied image processing techniques. However, these techniques are effective only for Latin script documents, not for non-Latin script documents. The characteristics of non-Latin script documents, such as Thai, are considerably more complicated than those of Latin script documents and include many levels of character alignment, no word or sentence separator, and variability in character size. When applying an image processing technique to a Thai document, characters that are relatively close to noise are usually removed as well. Hence, in this paper, we propose a novel noise reduction method that applies a machine learning technique to classify and reduce noise in document images. The proposed method uses a semi-supervised cluster-and-label approach with an improved labeling method, namely, feature selected sub-cluster labeling. Feature selected sub-cluster labeling focuses on the clusters that are incorrectly labeled by conventional labeling methods. These clusters are re-clustered into small groups with a new feature set that is selected according to class labels. The experimental results show that this method can significantly improve the accuracy of labeling examples and the performance of classification. We compared the noise reduction and character preservation performance of the proposed method with two related noise reduction approaches, i.e., a two-phased stroke-like pattern noise (SPN) removal and a commercial noise reduction software package called ScanFix Xpress 6.0. The results show that the semi-supervised noise reduction is significantly better than the compared methods, with F-measures for characters and noise of 86.01 and 97.82, respectively.
Semi-supervised cluster-and-label with feature based re-clustering to reduce noise in Thai document images
S0950705115003779
E-learning and online education have made great strides in the recent past, moving from a knowledge transfer model to a highly intelligent, swift and interactive proposition capable of advanced decision-making. Two challenges have been observed during the exploration of recent developments in e-learning: first, to incorporate e-learning systems effectively in the evolving semantic web environment, and second, to realize adaptive personalization according to the learner's changing behavior. An ontology-driven system is proposed to implement the Felder-Silverman learning style model in addition to the learning contents, to validate its integration with the semantic web environment. Software agents are employed to monitor the learner's actual learning style and modify it accordingly. The learner's learning style and its modifications are maintained within the proposed e-learning system. Cloud storage is used as the primary back-end in order to maintain the ontology, databases and other required server resources. To verify the system, comparisons are made between the information presented and the adaptive learning styles of the learner, along with the actions of agents according to the learner's behavior. Finally, various conclusions are drawn by exploring the learner's behavior in an adaptive environment for the proposed e-learning system.
An ontology-based adaptive personalized e-learning system, assisted by software agents on cloud storage
S095070511500386X
Decision tree induction searches for relevant characteristics in the data that allow it to precisely model a certain concept, but it is also concerned with the comprehensibility of the generated model, helping human specialists to discover new knowledge, which is very important in the medical and biological areas. On the other hand, such inducers present some instability. The main problem handled here concerns the behavior of those inducers on high-dimensional data, more specifically gene expression data: irrelevant attributes may harm the learning process, and many models with similar performance may be generated. In order to address these problems, we have explored and revised windowing: pruning of the trees generated during intermediate steps of the algorithm; the use of the estimated error instead of the training error; the use of the error weighted according to the size of the current window; and the use of the classification confidence as the window update criterion. The results show that the proposed algorithm outperforms the classical one, especially in terms of measures of complexity and comprehensibility of the induced models.
Windowing improvements towards more comprehensible models
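A minimal sketch of classical windowing, the baseline that the abstract above revises, might look like the following; the proposed refinements (intermediate pruning, estimated/weighted error, confidence-based updates) are not shown, and the window size and round limit are assumptions.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def windowing(X, y, init_size=200, max_rounds=10, random_state=0):
    """Classical windowing: train on a small window, add the examples the
    current tree misclassifies, and retrain until none remain. X, y: numpy arrays."""
    rng = np.random.default_rng(random_state)
    idx = rng.choice(len(X), size=min(init_size, len(X)), replace=False)
    window = set(idx.tolist())
    tree = None
    for _ in range(max_rounds):
        w = sorted(window)
        tree = DecisionTreeClassifier(random_state=random_state).fit(X[w], y[w])
        wrong = np.where(tree.predict(X) != y)[0]
        new = [i for i in wrong if i not in window]
        if not new:
            break
        window.update(new)
    return tree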
S0950705115003998
Extreme learning machine (ELM) has been a widely used learning paradigm for training single hidden layer feedforward networks (SLFNs). However, like many other classification algorithms, ELM may learn undesirable class boundaries from data with unbalanced classes. This paper first analyzes why class imbalance damages ELM and then discusses the influence of several data distribution factors on this damage. Next, we present an optimal decision outputs compensation strategy to deal with the class imbalance problem in the context of ELM. Specifically, the outputs of the minority classes in ELM are properly compensated. For a binary-class problem, the compensation can be regarded as a single-variable optimization problem, so the golden section search algorithm is adopted to find the optimal compensation value. For a multi-class problem, the particle swarm optimization (PSO) algorithm is used to solve the multivariate optimization problem and to provide the optimal combination of compensations. Experimental results on a large number of imbalanced data sets demonstrate the superiority of the proposed algorithm. Statistical results indicate that the proposed approach not only outperforms the original ELM, but also yields better or at least competitive results compared with several widely used and state-of-the-art class imbalance learning methods.
ODOC-ELM: Optimal decision outputs compensation-based extreme learning machine for classifying imbalanced data
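For the binary case, the abstract above describes tuning a single compensation added to the minority-class output with golden section search. A hedged sketch under assumed ELM output scores (scores_min, scores_maj per validation sample) and G-mean as the validation objective could be:

import numpy as np

def g_mean(y_true, y_pred, minority=1):
    tp = np.sum((y_true == minority) & (y_pred == minority))
    tn = np.sum((y_true != minority) & (y_pred != minority))
    fn = np.sum((y_true == minority) & (y_pred != minority))
    fp = np.sum((y_true != minority) & (y_pred == minority))
    sens = tp / (tp + fn + 1e-12)
    spec = tn / (tn + fp + 1e-12)
    return np.sqrt(sens * spec)

def golden_section_compensation(scores_min, scores_maj, y_true, lo=0.0, hi=2.0, tol=1e-3):
    """Golden section search for the compensation added to the minority-class
    output so that G-mean on a validation set is maximized (bounds are assumptions)."""
    def f(c):
        pred = np.where(scores_min + c > scores_maj, 1, 0)
        return g_mean(y_true, pred)
    phi = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    x1, x2 = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(x1) >= f(x2):          # maximum lies in the left sub-interval
            b, x2 = x2, x1
            x1 = b - phi * (b - a)
        else:                       # maximum lies in the right sub-interval
            a, x1 = x1, x2
            x2 = a + phi * (b - a)
    return (a + b) / 2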
S0950705115004037
In subway systems, automatic train operation (ATO) is gradually replacing manual driving because of its high punctuality and parking accuracy. However, existing ATO systems have some drawbacks in riding comfort and energy consumption compared with manual driving by experienced drivers. To combine the advantages of ATO and manual driving, this paper proposes a Smart Train Operation (STO) approach based on the fusion of expert knowledge and data mining algorithms. First, we summarize domain expert knowledge rules to ensure safety and riding comfort. Then, we apply a regression algorithm named CART (Classification And Regression Tree) and ensemble learning methods (i.e., Bagging and LSBoost) to extract valuable information from historical driving data collected on the Beijing subway Yizhuang line. In addition, a heuristic train station parking algorithm (HSA) that uses the positioning data stored in balises is proposed to realize precise parking. By combining the expert knowledge, the data mining algorithms, and HSA, two comprehensive STO algorithms, STOB and STOL, are developed for subway train operations. The proposed STO algorithms are tested against both ATO and manual driving on a real-world case of the Beijing subway Yizhuang line. The results indicate that the developed STO approach is better than ATO in energy consumption and riding comfort, and it also outperforms manual driving in punctuality and parking accuracy. Finally, the flexibility of STOL and STOB is verified with extensive experiments considering different kinds of disturbances in real-world applications.
Smart train operation algorithms based on expert knowledge and ensemble CART for the electric locomotive
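A hedged sketch of the data mining stage described above, using scikit-learn's bagged regression trees and gradient boosting (with its default least-squares loss) as stand-ins for the Bagging and LSBoost learners; the feature and target descriptions are hypothetical, not taken from the paper:

from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

def fit_driving_models(X, y):
    """X: driving-state features (e.g., speed, gradient, remaining distance);
    y: the control command recorded from experienced drivers. Names hypothetical."""
    bagged_cart = BaggingRegressor(n_estimators=50, random_state=0)           # bagged regression trees
    ls_boost = GradientBoostingRegressor(n_estimators=200, random_state=0)    # least-squares boosting analogue
    for name, model in (("Bagging", bagged_cart), ("LSBoost-like", ls_boost)):
        mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
        print(name, round(mse, 4))
    return bagged_cart.fit(X, y), ls_boost.fit(X, y)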
S0950705115004086
Efficiently finding similar segments or motifs in time series data is a fundamental task that, due to the ubiquity of these data, is present in a wide range of domains and situations. Because of this, countless solutions have been devised but, to date, none of them seems to be fully satisfactory and flexible. In this article, we propose an innovative standpoint and present a solution coming from it: an anytime multimodal optimization algorithm for time series motif discovery based on particle swarms. By considering data from a variety of domains, we show that this solution is extremely competitive when compared to the state-of-the-art, obtaining comparable motifs in considerably less time using minimal memory. In addition, we show that it is robust to different implementation choices and see that it offers an unprecedented degree of flexibility with regard to the task. All these qualities make the presented solution stand out as one of the most prominent candidates for motif discovery in long time series streams. Besides, we believe the proposed standpoint can be exploited in further time series analysis and mining tasks, widening the scope of research and potentially yielding novel effective solutions.
Particle swarm optimization for time series motif discovery
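A toy, hedged sketch of PSO-based motif search (not the paper's anytime multimodal algorithm): a particle encodes a pair of subsequence start positions, and its fitness is the Euclidean distance between the two z-normalized windows; the inertia and acceleration coefficients are assumptions.

import numpy as np

def znorm(x):
    return (x - x.mean()) / (x.std() + 1e-12)

def pso_motif(series, m=64, n_particles=30, iters=200, seed=0):
    """Toy PSO motif discovery: minimize the distance between two windows of length m."""
    rng = np.random.default_rng(seed)
    n = len(series) - m
    def fitness(p):
        i, j = int(p[0]), int(p[1])
        if abs(i - j) < m:                       # reject trivial (overlapping) matches
            return np.inf
        return np.linalg.norm(znorm(series[i:i+m]) - znorm(series[j:j+m]))
    pos = rng.uniform(0, n, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0, n - 1)
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return tuple(int(v) for v in g), pbest_f.min()

Because the best pair found so far is always available, the loop can be stopped at any iteration, which is the anytime flavor the abstract emphasizes.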
S0950705115004104
Three primary problems are encountered in classic data envelopment analysis (DEA) that decrease the effectiveness and reliability of decision making based on the information obtained from classic DEA: (1) DEA efficiency scores overestimate efficiency and are biased; (2) in certain cases, the standard DEA models are not as useful as expected for discriminating among the decision making units (DMUs); (3) some DMUs are identified as efficient by DEA because of peculiarity rather than genuine superiority. Tackling these problems is the motivation for the current study. To overcome these three problems in DEA together and enhance the effectiveness and reliability of the decision-making process, this paper uses the evidential-reasoning (ER) approach to construct a performance indicator that combines the efficiency and anti-efficiency obtained by the DEA and inverted DEA models, which identify the efficient and anti-efficient frontiers, respectively. Numerical simulation tests indicate that the new performance indicator is more suitable for cases where there are relatively few DMUs to be evaluated with respect to the number of input and output indicators. Furthermore, empirical studies demonstrate that this indicator has considerably more discrimination power than the standard DEA models, and that it properly reduces overestimation and addresses peculiar DMUs.
Specification of a performance indicator using the evidential-reasoning approach
S0950705115004207
In community question answering (cQA), users pose queries (or questions) on portals like Yahoo! Answers, which can then be answered by other users who are often knowledgeable on the subject. cQA is increasingly popular on the Web due to its convenience and effectiveness in connecting users with queries to those with answers. In this article, we study the problem of finding previous queries (e.g., posed by other users) that may be similar to new queries, and adapting their answers as the answers to the new queries. A key challenge here is to bridge the lexical gap between new queries and old answers. For example, "company" in the queries may correspond to "firm" in the answers. To address this challenge, past research has proposed techniques similar to machine translation that "translate" old answers into ones using the words in the new queries. However, a key limitation of these works is that they assume queries and answers are parallel texts, which is hardly true in reality. As a result, the translated or rephrased answers may not read naturally. In this article, we propose a novel approach to learn semantic representations of queries and answers by using a neural network architecture. The learned semantic-level features are then incorporated into a learning-to-rank framework. We have evaluated our approach using a large-scale data set. Results show that the approach significantly outperforms existing approaches.
Learning semantic representation with neural networks for community question answering retrieval
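A simplified, hedged stand-in for the semantic matching described above: averaged word embeddings plus cosine similarity rather than the paper's learned neural architecture; the embedding lookup table (a token-to-vector dict) is assumed to be given.

import numpy as np

def sent_vec(tokens, emb, dim=300):
    """Average the word vectors found in emb (a token -> np.ndarray dict)."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def rank_answers(query_tokens, candidate_token_lists, emb):
    """Rank archived answers by cosine similarity to the new query in the shared
    embedding space, which can bridge lexical gaps such as company/firm."""
    q = sent_vec(query_tokens, emb)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = [cos(q, sent_vec(c, emb)) for c in candidate_token_lists]
    return np.argsort(scores)[::-1], scores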
S0950705115004323
In this article, we explore an event detection framework to improve multi-document summarization. Our approach is based on a two-stage single-document method that extracts a collection of key phrases, which are then used in a centrality-as-relevance passage retrieval model. We explore how to adapt this single-document method for multi-document summarization methods that are able to use event information. The event detection method is based on Fuzzy Fingerprint, which is a supervised method trained on documents with annotated event tags. To cope with the possible usage of different terms to describe the same event, we explore distributed representations of text in the form of word embeddings, which contributed to improve the summarization results. The proposed summarization methods are based on the hierarchical combination of single-document summaries. The automatic evaluation and human study performed show that these methods improve upon current state-of-the-art multi-document summarization systems on two mainstream evaluation datasets, DUC 2007 and TAC 2009. We show a relative improvement in ROUGE-1 scores of 16% for TAC 2009 and of 17% for DUC 2007.
Exploring events and distributed representations of text in multi-document summarization
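A hedged sketch of centroid-style passage ranking in the spirit of centrality-as-relevance retrieval: passages are scored by TF-IDF similarity to a centroid enriched with the extracted key phrases. This is a simplification under assumed inputs, not the authors' model.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_passages(passages, key_phrases, top_n=5):
    """Score passages against a centroid built from all passages plus the key phrases."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(passages + [" ".join(key_phrases)])
    centroid = np.asarray(X.mean(axis=0))            # dense 1 x n_features centroid
    sims = cosine_similarity(centroid, X[:len(passages)]).ravel()
    order = np.argsort(sims)[::-1][:top_n]
    return [passages[i] for i in order]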
S0950705115004335
Topic detection, as a tool to detect topics from online media, attracts much attention. Generally, a topic is characterized by a set of informative keywords/terms. Traditional approaches are usually based on various topic models, such as Latent Dirichlet Allocation (LDA). They cluster terms into a topic by mining semantic relations between terms. However, co-occurrence relations across documents are commonly neglected, which leads to the detection of incomplete information. Furthermore, the inability to discover latent co-occurrence relations via the context or other bridge terms prevents important but rare topics from being detected. To tackle this issue, we propose a hybrid relations analysis approach that integrates semantic relations and co-occurrence relations for topic detection. Specifically, the approach fuses multiple relations into a term graph and detects topics from the graph using a graph analytical method. It can not only detect topics more effectively by combining mutually complementary relations, but also mine important rare topics by leveraging latent co-occurrence relations. Extensive experiments demonstrate the advantage of our approach over several benchmarks.
A hybrid term–term relations analysis approach for topic detection
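A hedged sketch of fusing co-occurrence and semantic relations into one term graph and ranking terms on it; the fusion weight alpha, the sem_sim function, and the use of PageRank in place of the paper's graph analytical method are assumptions.

import itertools
import networkx as nx

def build_term_graph(docs, sem_sim, alpha=0.5):
    """docs: lists of (already filtered) terms; sem_sim(t1, t2) -> [0, 1] assumed given."""
    G = nx.Graph()
    for doc in docs:
        for t1, t2 in itertools.combinations(set(doc), 2):
            w_co = G[t1][t2]["co"] + 1 if G.has_edge(t1, t2) else 1
            G.add_edge(t1, t2, co=w_co)              # accumulate co-occurrence counts
    for t1, t2, d in G.edges(data=True):
        d["weight"] = alpha * d["co"] + (1 - alpha) * sem_sim(t1, t2)
    return G

def key_terms(G, top_n=10):
    pr = nx.pagerank(G, weight="weight")
    return sorted(pr, key=pr.get, reverse=True)[:top_n]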
S0950705115004384
In general, pattern classification and regression tasks do not take into consideration variation in the importance of the training samples. For twin support vector regression (TSVR), this implies that all training samples play the same role in determining the bound functions. However, the number of close neighbors of each training sample has an effect on the bound functions. In this paper, we formulate a regularized version of the KNN-based weighted twin support vector regression (KNNWTSVR), called RKNNWTSVR, which is both efficient and effective. By introducing a regularization term and using the 2-norm of the slack variables instead of the 1-norm, our RKNNWTSVR only needs to solve a simple system of linear equations with low computational cost and, at the same time, improves the generalization performance. In particular, we compare four implementations of RKNNWTSVR with existing approaches. Experimental results on several synthetic and benchmark datasets indicate that, compared with SVR, WSVR, TSVR, and KNNWTSVR, our proposed RKNNWTSVR has better generalization ability and requires less computational time.
An efficient regularized K-nearest neighbor based weighted twin support vector regression
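A hedged sketch of one plausible KNN-based weighting step (the exact weighting used in KNNWTSVR/RKNNWTSVR may differ): each training sample is weighted by how many of its k nearest neighbors fall within a common radius, so densely surrounded samples influence the bound functions more.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_weights(X, k=10):
    """Density-style sample weights in (0, 1]; radius choice is an assumption."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)              # column 0 is the point itself
    mean_kdist = dist[:, 1:].mean(axis=1)
    radius = np.median(mean_kdist)
    counts = np.array([np.sum(row[1:] <= radius) for row in dist])
    return counts / counts.max()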
S0950705115004578
The theory of belief functions is a very important and effective tool for uncertainty modeling and reasoning, in which measures of uncertainty are crucial for evaluating the degree of uncertainty in a body of evidence. Several uncertainty measures in the theory of belief functions have been proposed. However, existing measures are generalizations of measures in the probabilistic framework, and the inconsistency between the two frameworks imposes limitations on them. To avoid these limitations, this paper proposes a new total uncertainty measure directly in the framework of belief function theory, without changing the theoretical framework. The average distance between the belief interval of each singleton and the most uncertain case is used to represent the total uncertainty degree of the given body of evidence. Numerical examples, simulations, applications, and related analyses are provided to verify the rationality of the new measure.
A new distance-based total uncertainty measure in the theory of belief functions
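A minimal sketch of the idea stated above: compute each singleton's belief interval [Bel, Pl] and average its distance to the most uncertain interval [0, 1]. The normalized Euclidean interval distance used here is an assumption for illustration, not necessarily the paper's definition.

import numpy as np

def belief_interval(masses, theta):
    """Bel and Pl of singleton {theta} from a mass function given as {frozenset(focal): mass}."""
    bel = sum(m for A, m in masses.items() if A == frozenset([theta]))
    pl = sum(m for A, m in masses.items() if theta in A)
    return bel, pl

def avg_interval_distance(masses, frame):
    """Average (normalized) distance of each singleton's [Bel, Pl] to [0, 1];
    the larger this average, the less total uncertainty the evidence carries."""
    d = []
    for theta in frame:
        bel, pl = belief_interval(masses, theta)
        d.append(np.sqrt((bel - 0.0) ** 2 + (pl - 1.0) ** 2) / np.sqrt(2))
    return float(np.mean(d))

# Example: vacuous evidence on {a, b, c} gives every singleton the interval [0, 1],
# so the average distance is 0 (maximal uncertainty).
m_vac = {frozenset(["a", "b", "c"]): 1.0}
print(avg_interval_distance(m_vac, ["a", "b", "c"]))  # 0.0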
S0950705115004591
Multiple work shifts are commonly utilized in construction projects to meet project requirements. Nevertheless, evening and night shifts raise the risk of adverse events and thus must be used to the minimum extent feasible. Tradeoff optimization among project duration (time), project cost, and the utilization of evening and night work shifts, while complying with all job logic and resource availability constraints, is necessary to enhance overall construction project success. In this study, a novel approach called Multiple Objective Symbiotic Organisms Search (MOSOS) is introduced to solve the multiple work shifts problem. MOSOS is a new meta-heuristic multi-objective optimization technique inspired by the symbiotic interaction strategies that organisms use to survive in the ecosystem. A numerical case study of construction projects was conducted, and the performance of MOSOS was evaluated in comparison with other widely used algorithms, including the non-dominated sorting genetic algorithm II (NSGA-II), multiple objective particle swarm optimization (MOPSO), multiple objective differential evolution (MODE), and the multiple objective artificial bee colony (MOABC). The numerical results demonstrate that the MOSOS approach is a powerful search and optimization technique for optimizing work shift schedules and can assist project managers in selecting an appropriate plan for a project.
A novel Multiple Objective Symbiotic Organisms Search (MOSOS) for time–cost–labor utilization tradeoff problem
S0950705115004633
Stochastic processes are among the most important tools for handling the uncertainties of supply chain problems. Given the lack of studies on constrained reliable facility location problems (RFLPs) with multiple capacity levels, this paper develops a bi-objective RFLP with multiple capacity levels in a three-echelon supply chain with a constraint on the coverage levels. Moreover, there is provider-side uncertainty for distribution centers (DCs). The aim of this paper is to find a near-optimal solution, including suitable locations of DCs and plants, the fraction of satisfied customer demands, and the fraction of items sent to DCs, that minimizes the total cost and maximizes the fill rate simultaneously. As the proposed model belongs to the class of NP-hard problems, a meta-heuristic algorithm called multi-objective biogeography-based optimization (MOBBO) is employed to find a near-optimal Pareto solution. Since there is no benchmark in the literature against which to compare the provided solutions, a non-dominated ranking genetic algorithm (NRGA) and a multi-objective simulated annealing (MOSA) algorithm are used to verify the solutions obtained by MOBBO, while two-stage stochastic programming (2-SSP) is employed to capture the randomness of the DCs. This paper uses the adapted concepts of the expected value of perfect information (EVPI) and the value of the stochastic solution (VSS) to validate 2-SSP. Moreover, the parameters of the algorithms are tuned by the response surface methodology (RSM) in the design of experiments. Besides, an exact method, the branch-and-bound method via the GAMS optimization software, is used to compare the lower and upper bounds of the Pareto fronts by optimizing the two single-objective problems separately.
Optimizing a bi-objective reliable facility location problem with adapted stochastic measures using tuned-parameter multi-objective algorithms
S0950705115004645
A belief rule-based inference methodology using the evidential reasoning approach (RIMER) is employed in this study to construct a decision support tool that helps physicians predict in-hospital death and intensive care unit admission among trauma patients in emergency departments (EDs). This study contributes to the research community by developing and validating a RIMER-based decision tool for predicting trauma outcome. To compare the prediction performance of the RIMER model with those of models derived using commonly adopted methods, such as logistic regression analysis, support vector machine (SVM), and artificial neural network (ANN), several logistic regression models, SVM models, and ANN models are constructed using the same dataset. Five-fold cross-validation is employed to train and validate the prediction models constructed using four different methods. Results indicate that the RIMER model has the best prediction performance among the four models, and its performance can be improved after knowledge base training with historical data. The RIMER tool exhibits strong potential to help ED physicians to better triage trauma, optimally utilize hospital resources, and achieve better patient outcomes.
Belief rule-based inference for predicting trauma outcome
S0950705115004980
Functional data, an important data type, are widely prevalent in many fields such as economics, biology, finance, and meteorology. The underlying process is often seen as a continuous curve. Classifying functional data is a basic data mining task. The common method is a two-stage learning process: first, by means of basis functions, the functional data series is converted into multivariate data; second, a machine learning algorithm performs the classification task on the new representation. The problem is that the majority of learning algorithms are based on Euclidean distance, whereas the distance between functional samples is the L2 distance. In this context, there are three very interesting problems. (1) Is it feasible to treat a functional sample as a point in the corresponding Euclidean space? (2) How should an orthonormal basis be selected for a given functional data type? (3) Which is better, an orthogonal representation or a non-orthogonal representation, for the same number of basis functions? These issues are the main motivation of this study. For the first problem, theoretical studies show that treating a functional sample as a point in the corresponding Euclidean space is feasible under an orthonormal representation. For the second problem, experimental analysis shows that the Fourier basis is suitable for representing stable functions (especially periodic functions), the wavelet basis is good at differentiating functions with local differences, and the data-driven functional principal component basis could be the first preference, especially when one has no prior knowledge of the functional data type. For the third problem, experimental results show that the orthogonal representation is better than the non-orthogonal representation from the viewpoint of classification performance. These results have important implications for the study of functional data classification.
Comparison study of orthonormal representations of functional data in classification
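A hedged sketch of the two-stage process discussed above, using an orthonormal Fourier basis on a uniform grid and a kNN classifier on the resulting coefficients; the basis size and classifier are illustrative choices, not the paper's full protocol.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fourier_coefficients(curves, n_basis=9):
    """Project curves sampled on a uniform grid onto the first n_basis
    Fourier basis functions (orthonormal on [0, 1))."""
    n_points = curves.shape[1]
    t = np.linspace(0, 1, n_points, endpoint=False)
    basis = [np.ones_like(t)]
    k = 1
    while len(basis) < n_basis:
        basis.append(np.sqrt(2) * np.sin(2 * np.pi * k * t))
        basis.append(np.sqrt(2) * np.cos(2 * np.pi * k * t))
        k += 1
    B = np.array(basis[:n_basis])                 # (n_basis, n_points)
    return curves @ B.T / n_points                # Riemann-sum approximation of L2 inner products

def classify(curves, y):
    """curves: (n_samples, n_points) array; y: class labels.
    A kNN on the coefficients treats each functional sample as a Euclidean point."""
    coef = fourier_coefficients(curves)
    return cross_val_score(KNeighborsClassifier(5), coef, y, cv=5).mean()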
S095070511500502X
This paper proposes a new variant of particle swarm optimization (PSO), namely multiple learning PSO with space transformation perturbation (MLPSO-STP), to improve the performance of PSO. The proposed MLPSO-STP uses a novel learning strategy and STP. The novel learning strategy allows each particle to learn from the average information of the personal historical best positions (pbest) of all particles and from the information of multiple best positions randomly chosen from the top 100p% of pbest. This learning strategy preserves swarm diversity and thereby prevents premature convergence. Meanwhile, STP increases the chance of finding optimal solutions. The performance of MLPSO-STP is comprehensively evaluated on 21 unimodal and multimodal benchmark functions with or without rotation. Compared with eight popular PSO variants and seven state-of-the-art metaheuristic search algorithms, MLPSO-STP performs more competitively on the majority of the benchmark functions. Finally, MLPSO-STP shows satisfactory performance in optimizing the operating conditions of an ethylene cracking furnace to improve the yields of ethylene and propylene.
Multiple learning particle swarm optimization with space transformation perturbation and its application in ethylene cracking furnace optimization
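A hedged sketch of one velocity update under the learning strategy described above: each particle learns from the mean of all personal bests and from exemplars drawn from the top 100p% of pbest. The coefficients and the omission of the space transformation perturbation step are simplifications.

import numpy as np

def mlpso_velocity(vel, pos, pbest, pbest_fit, p=0.2, n_exemplars=3,
                   w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity update: attraction to the swarm-mean pbest and to a few
    exemplars sampled from the top 100p% of personal bests (minimization)."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(pbest)
    mean_pbest = pbest.mean(axis=0)
    top = np.argsort(pbest_fit)[:max(1, int(p * n))]   # lower fitness is better
    new_vel = np.empty_like(vel)
    for i in range(n):
        exemplars = pbest[rng.choice(top, size=n_exemplars, replace=True)].mean(axis=0)
        r1, r2 = rng.random(pos.shape[1]), rng.random(pos.shape[1])
        new_vel[i] = (w * vel[i]
                      + c1 * r1 * (mean_pbest - pos[i])
                      + c2 * r2 * (exemplars - pos[i]))
    return new_vel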
S0950705116000162
An improved fruit fly optimization algorithm (FOA) is proposed for optimizing continuous function problems and handling joint replenishment problems (JRPs). In the proposed FOA, a level probability policy and a new mutation parameter are developed to balance the population diversity and stability. Twenty-nine complex continuous benchmark functions are used to verify the performance of the FOA with level probability policy (LP–FOA). Numerical results show that the proposed LP–FOA outperforms two state-of-the-art variants of FOA, the differential evolution algorithm and particle swarm optimization algorithm, in terms of the median and standard deviations. The LP–FOA with a new and delicate coding style is also used to handle the classic JRP, which is a complex combinatorial optimization problem. Experimental results reveal that LP–FOA is better than the current best intelligent algorithm, particularly for large-scale JRPs. Thus, the proposed LP–FOA is a potential tool for various complex optimization problems.
An effective and efficient fruit fly optimization algorithm with level probability policy and its applications
S0950705116000204
Web service composition is a key technology for creating value-added services by integrating available services. With the rapid development of Service Computing, Cloud Computing, Big Data, and the Internet of Things, a large number of services with the same functionality but different quality of service (QoS) values are available on the Internet. Moreover, because of the uncertainties of a service's application environment, its QoS is highly dynamic; these two factors make reliable dynamic Web service composition a challenge. To address this issue, this paper proposes a two-stage approach for reliable dynamic Web service composition. In the first stage, the top K Web service composition schemes, based on each service's historical QoS values, are selected with the proposed Culture Genetic Algorithm (CGA). Then, the component services in the top K schemes are extracted and employed as candidate services for dynamic service composition. This operation greatly reduces the number of available services and retains better candidate services for dynamic service composition. Next, the global QoS constraints are decomposed into local QoS constraints with the CGA algorithm, and the global optimization problem of service composition is transformed into a local optimal service selection problem; this conversion increases the flexibility of dynamic service composition and provides an opportunity to predict the QoS values of services before service selection. In the second stage, before selecting the best service for each task during the execution of the service composition workflow, the QoS values of each candidate service are predicted based on improved case-based reasoning, and the best service is selected according to the predicted QoS values. Through QoS prediction, the reliability of the composite Web service can be greatly enhanced. Finally, experimental results show that the proposed method is feasible and effective.
Two-stage approach for reliable dynamic Web service composition
S0950705116000265
Transfer learning, which aims to exploit the knowledge in source domains to promote learning tasks in target domains, has attracted extensive research interest recently. The general idea of previous approaches is to model the shared structure in one latent space as the bridge across domains by reducing distribution divergences. However, there exist latent factors in other latent spaces that can also be utilized to draw the corresponding distributions closer and thereby establish additional bridges. In this paper, we propose a novel transfer learning method, referred to as Multi-Bridge Transfer Learning (MBTL), to learn the distributions in the different latent spaces together, so that more shared latent factors can be utilized to transfer knowledge. Additionally, an iterative algorithm with a convergence guarantee, based on non-negative matrix tri-factorization techniques, is proposed to solve the optimization problem. Comprehensive experiments demonstrate that MBTL can significantly outperform state-of-the-art learning methods on topic and sentiment classification tasks.
Multi-bridge transfer learning