Dataset columns: FileName (string, length 17), Abstract (string, length 163–6.01k), Title (string, length 12–421). Each record below lists the FileName, the Abstract, and the Title in that order.
S0933365715001414
Objective The Internet has become a platform for expressing the moods and feelings of daily life, where authors share their thoughts in web blogs, micro-blogs, forums, bulletin board systems or other media. In this work, we investigate text-mining technology to analyze and predict the depression tendency of web posts. Methods In this paper, we define depression factors, which include negative events, negative emotions, symptoms, and negative thoughts from web posts. We propose an enhanced event extraction (E3) method to automatically extract negative event terms. In addition, we also propose an event-driven depression tendency warning (EDDTW) model to predict the depression tendency of web bloggers or post authors by analyzing their posted articles. Results We compare the performance among the proposed EDDTW model, the negative emotion evaluation (NEE) model, and the Diagnostic and Statistical Manual of Mental Disorders (DSM)-based depression tendency evaluation method. The EDDTW model obtains the best recall rate and F-measure of 0.668 and 0.624, respectively, while the DSM-based method achieves the best precision rate of 0.666. The main reason is that our enhanced event extraction method can increase the recall rate by enlarging the negative event lexicon at the expense of precision. Our EDDTW model can also be used to track the change or trend of depression tendency for each post author. The depression tendency trend can help doctors to diagnose and even track the depression of web post authors more efficiently. Conclusions This paper presents an E3 method to automatically extract negative event terms in web posts. We also propose a new EDDTW model to predict the depression tendency of web posts and possibly help bloggers or post authors to detect major depressive disorder early.
Analyzing depression tendency of web posts using an event-driven depression tendency warning model
S0933365715001426
Objective Feature selection is a technique widely used in data mining. The aim is to select the best subset of features relevant to the problem being considered. In this paper, we consider feature selection for the classification of gene datasets. Gene data is usually composed of just a few dozen objects described by thousands of features. For this kind of data, it is easy to find a model that fits the learning data. However, it is not easy to find one that also evaluates new data as well as it does the learning data. This overfitting issue is well known in classification and regression, but it also applies to feature selection. Methods and materials We address this problem and investigate its importance in an empirical study of four feature selection methods applied to seven high-dimensional gene datasets. We chose datasets that are well studied in the literature—colon cancer, leukemia and breast cancer. All the datasets are characterized by a significant number of features and the presence of exactly two decision classes. The feature selection methods used are ReliefF, minimum redundancy maximum relevance, support vector machine-recursive feature elimination and relaxed linear separability. Results Our main result reveals the existence of positive feature selection bias in all 28 experiments (7 datasets and 4 feature selection methods). Bias was calculated as the difference between validation and test accuracies and ranges from 2.6% to as much as 41.67%. The validation accuracy (biased accuracy) was calculated on the same dataset on which the feature selection was performed. The test accuracy was calculated for data that was not used for feature selection (by so-called external cross-validation). Conclusions This work provides evidence that using the same dataset for feature selection and learning is not appropriate. We recommend using cross-validation for feature selection in order to reduce selection bias.
The feature selection bias problem in relation to high-dimensional gene data
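A minimal sketch of the bias measurement described in the abstract above, assuming synthetic data, univariate selection (SelectKBest) and a linear SVM as stand-ins for the paper's gene datasets and feature selection methods; all names and parameters are illustrative, not the authors' setup.

# Feature selection bias = (biased) validation accuracy minus the accuracy
# obtained with external cross-validation, where selection is redone per fold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=60, n_features=2000, n_informative=10,
                           random_state=0)
clf = LinearSVC(dual=False)

# Biased protocol: select features on the FULL dataset, then cross-validate.
X_sel = SelectKBest(f_classif, k=50).fit_transform(X, y)
biased_acc = cross_val_score(clf, X_sel, y, cv=5).mean()

# External cross-validation: selection sits inside the pipeline, so it is
# refitted on every training fold and never sees the corresponding test fold.
pipe = make_pipeline(SelectKBest(f_classif, k=50), LinearSVC(dual=False))
unbiased_acc = cross_val_score(pipe, X, y, cv=5).mean()

print(f"validation (biased) accuracy: {biased_acc:.3f}")
print(f"test (external CV) accuracy:  {unbiased_acc:.3f}")
print(f"selection bias:               {biased_acc - unbiased_acc:.3f}")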
S0933365715001542
Introduction The Allgemeines Krankenhaus Informations Management (AKIM) project was started at the Vienna General Hospital (VGH) several years ago. This led to the introduction of a new hospital information system (HIS), and the installation of the expert system platform (EXP) for the integration of Arden-Syntax-based clinical decision support systems (CDSSs). In this report we review the milestones achieved and the challenges faced in the creation and modification of CDSSs, and their integration into the HIS over the last three years. Materials and methods We introduce a three-stage development method, which is followed in nearly all CDSS projects at the Medical University of Vienna and the VGH. Stage one comprises requirements engineering and system conception. Stage two focuses on the implementation and testing of the system. Finally, stage three describes the deployment and integration of the system in the VGH HIS. The HIS provides a clinical work environment for healthcare specialists using customizable graphical interfaces known as parametric medical documents. Multiple Arden Syntax servers are employed to host and execute the CDSS knowledge bases: two embedded in the EXP for production and development, and a further three in clinical routine for production, development, and quality assurance. Results Three systems are discussed; the systems serve different purposes in different clinical areas, but are all implemented with Arden Syntax. MONI-ICU is an automated surveillance system for monitoring healthcare-associated infections in the intensive care setting. TSM-CDS is a CDSS used for risk prediction in the formation of cutaneous melanoma metastases. Finally, TacroDS is a CDSS for the manipulation of dosages for tacrolimus, an immunosuppressive agent used after kidney transplantation. Problems in development and integration were related to data quality or availability, although organizational difficulties also caused delays in development and integration. Discussion and conclusion Since the inception of the AKIM project at the VGH and its ability to support standards such as Arden Syntax and integrate CDSSs into clinical routine, the clinicians’ interest in, and demand for, decision support have increased substantially. The use of Arden Syntax as a standard for CDSSs played a substantial role in the ability to rapidly create high-quality CDSSs, whereas the ability to integrate these systems into the HIS made CDSSs more popular among physicians. Despite these successes, challenges such as lack of (consistent and high-quality) electronic data, social acceptance among healthcare personnel, and legislative issues remain. These have to be addressed effectively before CDSSs can be more widely accepted and adopted.
Clinical decision support systems at the Vienna General Hospital using Arden Syntax: Design, implementation, and integration
S0933365715001554
Background The Arden Syntax is a knowledge-encoding standard, started in 1989 and now in its 10th revision, maintained by the Health Level Seven (HL7) organization. It has constructs borrowed from several language concepts that were available at that time (mainly the HELP hospital information system and the Regenstrief medical record system (RMRS), but also the Pascal language, functional languages and the data structure of frames used in artificial intelligence). The syntax has a rationale for its constructs, and has restrictions that follow this rationale. The main goal of the standard is to promote knowledge sharing by avoiding the complexity of traditional programs, so that a medical logic module (MLM) written in the Arden Syntax can remain shareable and understandable across institutions. Objectives One of the restrictions of the syntax is that users cannot define their own functions and subroutines inside an MLM. An MLM can, however, call another MLM, which then serves as a function. This adds an additional dependency between MLMs, a known criticism of the Arden Syntax knowledge model. This article explains why we believe the Arden Syntax would benefit from a construct for user-defined functions, and discusses the need for, the benefits of, and the limitations of such a construct. Methods and materials We used the recent grammar of the Arden Syntax v.2.10, and both the Arden Syntax standard document and the Arden Syntax Rationale article as guidelines. We gradually introduced production rules to the grammar. We used the CUP parsing tool to verify that no ambiguities were detected. Results A new grammar that supports user-defined functions was produced; 22 production rules were added to the grammar. A parser was built using the CUP parsing tool. A few examples are given to illustrate the concepts. All examples were parsed correctly. Conclusions It is possible to add user-defined functions to the Arden Syntax in a way that remains coherent with the standard. We believe that this enhances the readability and the robustness of MLMs. A detailed proposal will be submitted by the end of the year to the HL7 workgroup on Arden Syntax.
User-defined functions in the Arden Syntax: An extension proposal
S0933365715300580
Objective Studies have revealed that non-adherence to prescribed medication can lead to hospital readmissions, clinical complications, and other negative patient outcomes. Though many techniques have been proposed to improve patient adherence rates, they suffer from low accuracy. Our objective is to develop and test a novel system for assessment of medication adherence. Methods Recently, several smart pill bottle technologies have been proposed, which can detect when the bottle has been opened, and even when a pill has been retrieved. However, very few systems can determine if the pill is subsequently ingested or discarded. We propose a system for detecting user adherence to medication using a smart necklace, capable of determining if the medication has been ingested based on the skin movement in the lower part of the neck during a swallow. This, coupled with existing medication adherence systems that detect when medicine is removed from the bottle, can detect a broader range of use-cases with respect to medication adherence. Results Using Bayesian networks, we were able to correctly distinguish among chewable vitamins, saliva swallows, medication capsules, speaking, and drinking water, with average precision and recall of 90.17% and 88.9%, respectively. A total of 135 instances from 20 subjects were classified. Conclusion Our experimental evaluations confirm the accuracy of the piezoelectric necklace for detecting medicine swallows and disambiguating them from related actions. Further studies in real-world conditions are necessary to evaluate the efficacy of the proposed scheme.
A wearable sensor system for medication adherence prediction
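A minimal sketch of the evaluation reported above: multi-class classification of the five swallow-related actions with macro-averaged precision and recall. A Gaussian naive Bayes classifier and random features stand in for the paper's Bayesian network and piezoelectric-necklace features; everything here is illustrative.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
classes = ["vitamin", "saliva", "capsule", "speaking", "water"]
y = rng.integers(0, len(classes), size=135)          # 135 labelled instances
X = rng.normal(size=(135, 12)) + y[:, None] * 0.8    # toy class-dependent features

y_pred = cross_val_predict(GaussianNB(), X, y, cv=5)
print("macro precision:", precision_score(y, y_pred, average="macro"))
print("macro recall:   ", recall_score(y, y_pred, average="macro"))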
S093336571530066X
Objective Machine learning techniques can be used to extract predictive models for diseases from electronic medical records (EMRs). However, the nature of EMRs makes it difficult to apply off-the-shelf machine learning techniques while still exploiting the rich content of the EMRs. In this paper, we explore the use of a range of natural language processing (NLP) techniques to extract valuable predictors from uncoded consultation notes and study whether they can help to improve predictive performance. Methods We study a number of existing techniques for the extraction of predictors from the consultation notes, namely a bag-of-words based approach and topic modeling. In addition, we develop a dedicated technique to match the uncoded consultation notes with a medical ontology. We apply these techniques as an extension to an existing pipeline to extract predictors from EMRs. We evaluate them in the context of predictive modeling for colorectal cancer (CRC), a disease known to be difficult to diagnose before performing an endoscopy. Results Our results show that we are able to extract useful information from the consultation notes. The predictive performance of the ontology-based extraction method moves significantly beyond the benchmark of age and gender alone (area under the receiver operating characteristic curve (AUC) of 0.870 versus 0.831). We also observe more accurate predictive models by adding features derived from processing the consultation notes compared to solely using coded data (AUC of 0.896 versus 0.882), although the difference is not significant. The extracted features from the notes are shown to be equally predictive (i.e. there is no significant difference in performance) compared to the coded data of the consultations. Conclusion It is possible to extract useful predictors from uncoded consultation notes that improve predictive performance. Techniques linking text to concepts in medical ontologies to derive these predictors are shown to perform best for predicting CRC in our EMR dataset.
Utilizing uncoded consultation notes from electronic medical records for predictive modeling of colorectal cancer
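A sketch of the bag-of-words route described above: derive text features from the notes and compare AUC against an age-and-gender baseline. The toy notes, labels and the TfidfVectorizer plus logistic regression pipeline are illustrative stand-ins, not the authors' method.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

notes = ["rectal bleeding and weight loss", "routine check, no complaints",
         "abdominal pain, change in bowel habit", "mild cough, no gi symptoms"] * 25
y = np.array([1, 0, 1, 0] * 25)                                      # 1 = later CRC diagnosis (toy)
age_gender = np.tile([[64, 1], [45, 0], [70, 1], [38, 0]], (25, 1))  # baseline predictors
X_text = TfidfVectorizer().fit_transform(notes)                      # bag-of-words predictors

clf = LogisticRegression(max_iter=1000)
auc_base = roc_auc_score(y, cross_val_predict(clf, age_gender, y, cv=5,
                                              method="predict_proba")[:, 1])
auc_text = roc_auc_score(y, cross_val_predict(clf, X_text, y, cv=5,
                                              method="predict_proba")[:, 1])
print(f"AUC, age and gender only: {auc_base:.3f}")
print(f"AUC, note-derived text:   {auc_text:.3f}")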
S0933365716000026
Objective In an ageing world population, more citizens are at risk of cognitive impairment, with negative consequences for their ability to live independently, their quality of life and the sustainability of healthcare systems. Cognitive neuroscience researchers have identified behavioral anomalies that are significant indicators of cognitive decline. A general goal is the design of innovative methods and tools for continuously monitoring the functional abilities of the seniors at risk and reporting the behavioral anomalies to the clinicians. SmartFABER is a pervasive system targeting this objective. Methods A non-intrusive sensor network continuously acquires data about the interaction of the senior with the home environment during daily activities. A novel hybrid statistical and knowledge-based technique is used to analyze these data and detect the behavioral anomalies, whose history is presented through a dashboard to the clinicians. Unlike related works, SmartFABER can detect abnormal behaviors at a fine-grained level. Results We have fully implemented the system and evaluated it using real datasets, partly generated by performing activities in a smart home laboratory, and partly acquired during several months of monitoring of the instrumented home of a senior diagnosed with mild cognitive impairment (MCI). Experimental results, including comparisons with other activity recognition techniques, show the effectiveness of SmartFABER in terms of recognition rates.
SmartFABER: Recognizing fine-grained abnormal behaviors for early detection of mild cognitive impairment
S0933365716000038
Background After several years of treatment, patients with Parkinson's disease (PD) tend to have, as a side effect of the medication, dyskinesias. Close monitoring may benefit patients by enabling doctors to tailor a personalised medication regimen. Moreover, dyskinesia monitoring can help neurologists make more informed decisions in patient care. Objective To design and validate an algorithm able to be embedded into a system that PD patients could wear during their activities of daily living with the purpose of registering the occurrence of dyskinesia in real conditions. Materials and methods Data from an accelerometer positioned on the waist are collected at the patient's home and are annotated by experienced clinicians. Data collection is divided into two parts: a main database gathered from 92 patients, used to partially train and to evaluate the algorithms based on a leave-one-out approach, and a second database from 10 patients, which was also used to train part of the detection algorithm. Results Results show that, depending on the severity and location of dyskinesia, specificities and sensitivities higher than 90% are achieved using a leave-one-out methodology. Although mild dyskinesias presented in the limbs are detected with 95% specificity and 39% sensitivity, the most important types of dyskinesia (any strong dyskinesia and trunk mild dyskinesia) are assessed with 95% specificity and 93% sensitivity. Conclusion The presented algorithmic method and wearable device have been successfully validated in monitoring the occurrence of strong dyskinesias and mild trunk dyskinesias during activities of daily living.
Dopaminergic-induced dyskinesia assessment based on a single belt-worn accelerometer
S093336571600004X
Objectives (1) To develop a rigorous and repeatable method for building effective Bayesian network (BN) models for medical decision support from complex, unstructured and incomplete patient questionnaires and interviews that inevitably contain examples of repetitive, redundant and contradictory responses; (2) To exploit expert knowledge in the BN development since further data acquisition is usually not possible; (3) To ensure the BN model can be used for interventional analysis; (4) To demonstrate why using data alone to learn the model structure and parameters is often unsatisfactory even when extensive data are available. Method The method is based on applying a range of recent BN developments targeted at helping experts build BNs given limited data. While most of the components of the method are based on established work, its novelty is that it provides a rigorous, consolidated and generalised framework that addresses the whole life-cycle of BN model development. The method is based on two original and recently validated BN models in forensic psychiatry, known as DSVM-MSS and DSVM-P. Results When employed with the same datasets, the DSVM-MSS demonstrated competitive to superior predictive performance (AUC scores 0.708 and 0.797) against the state-of-the-art (AUC scores ranging from 0.527 to 0.705), and the DSVM-P demonstrated superior predictive performance (cross-validated AUC score of 0.78) against the state-of-the-art (AUC scores ranging from 0.665 to 0.717). More importantly, the resulting models go beyond improving predictive accuracy and into usefulness for risk management purposes through intervention, and enhanced decision support in terms of answering complex clinical questions that are based on unobserved evidence. Conclusions This development process is applicable to any application domain that involves large-scale decision analysis based on such complex information rather than on hard data alone, in conjunction with the incorporation of expert knowledge for decision support via intervention. The novelty extends to challenging decision scientists to reason about building models based on what information is really required for inference, rather than on what data are available, and hence forces them to use available data in a much smarter way.
From complex questionnaire and interviewing data to intelligent Bayesian network models for medical decision support
S0933365716000051
Objective A major source of information available in electronic health record (EHR) systems is the clinical free text notes documenting patient care. Managing this information is time-consuming for clinicians. Automatic text summarisation could assist clinicians in obtaining an overview of the free text information in ongoing care episodes, as well as in writing final discharge summaries. We present a study of automated text summarisation of clinical notes. It seeks to identify which methods are best suited to this task and whether it is possible to automatically evaluate the quality differences of summaries produced by different methods in an efficient and reliable way. Methods and materials The study is based on material consisting of 66,884 care episodes from the EHRs of heart patients admitted to a university hospital in Finland between 2005 and 2009. We present novel extractive text summarisation methods for summarising the free text content of care episodes. Most of these methods rely on word space models constructed using distributional semantic modelling. The summarisation effectiveness is evaluated using an experimental automatic evaluation approach incorporating well-known ROUGE measures. We also developed a manual evaluation scheme to perform a meta-evaluation of the ROUGE measures to see if they reflect the opinions of health care professionals. Results The agreement between the human evaluators is good (ICC = 0.74, p < 0.001), demonstrating the stability of the proposed manual evaluation method. Furthermore, the correlation between the manual and automated evaluations is high (Spearman's rho > 0.90). Three of the presented summarisation methods (‘Composite’, ‘Case-Based’ and ‘Translate’) significantly outperform the other methods for all ROUGE measures (p < 0.05, Wilcoxon signed-rank test with Bonferroni correction). Conclusion The results indicate the feasibility of the automated summarisation of care episodes. Moreover, the high correlation between manual and automated evaluations suggests that the less labour-intensive automated evaluations can be used as a proxy for human evaluations when developing summarisation methods. This is of significant practical value for summarisation method development, because manual evaluation cannot be afforded for every variation of the summarisation methods. Instead, one can resort to automatic evaluation during the method development process.
Comparison of automatic summarisation methods for clinical free text notes
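A minimal sketch of the meta-evaluation step described above: correlating manual quality judgements with automatic (e.g. ROUGE) scores across summarisation methods using Spearman's rho. The first three method names echo the abstract, the last two are hypothetical, and all score values are made up for illustration.

from scipy.stats import spearmanr

methods       = ["Composite", "Case-Based", "Translate", "Random", "Lead"]  # last two hypothetical
manual_scores = [4.1, 3.9, 3.8, 1.7, 2.5]       # e.g. mean expert ratings (toy values)
rouge_scores  = [0.42, 0.40, 0.39, 0.15, 0.27]  # e.g. ROUGE-2 F-scores (toy values)

rho, p_value = spearmanr(manual_scores, rouge_scores)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")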
S0933365716000063
Objective We present the PaHaW Parkinson's disease handwriting database, consisting of handwriting samples from Parkinson's disease (PD) patients and healthy controls. Our goal is to show that kinematic features and pressure features in handwriting can be used for the differential diagnosis of PD. Methods and material The database contains records from 37 PD patients and 38 healthy controls performing eight different handwriting tasks. The tasks include drawing an Archimedean spiral, repetitively writing orthographically simple syllables and words, and writing a sentence. In addition to the conventional kinematic features related to the dynamics of handwriting, we investigated new pressure features based on the pressure exerted on the writing surface. To discriminate between PD patients and healthy subjects, three different classifiers were compared: K-nearest neighbors (K-NN), ensemble AdaBoost classifier, and support vector machines (SVM). Results For predicting PD based on kinematic and pressure features of handwriting, the best performing model was the SVM, with a classification accuracy of P_acc = 81.3% (sensitivity P_sen = 87.4% and specificity P_spe = 80.9%). When evaluated separately, pressure features proved to be relevant for PD diagnosis, yielding P_acc = 82.5% compared to P_acc = 75.4% using kinematic features. Conclusion Experimental results showed that an analysis of kinematic and pressure features during handwriting can help assess subtle characteristics of handwriting and discriminate between PD patients and healthy controls.
Evaluation of handwriting kinematics and pressure for differential diagnosis of Parkinson's disease
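A minimal sketch of the classification experiment summarised above: an SVM separating PD patients from controls, with accuracy, sensitivity and specificity computed from the cross-validated confusion matrix. Random features stand in for the PaHaW kinematic and pressure features; all parameters are illustrative.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(75, 20))          # 37 PD + 38 controls, 20 toy features
y = np.array([1] * 37 + [0] * 38)      # 1 = PD, 0 = healthy control
X[y == 1] += 0.6                       # inject a weak class signal

y_pred = cross_val_predict(SVC(kernel="rbf", C=1.0), X, y, cv=10)
tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"accuracy:    {accuracy_score(y, y_pred):.3f}")
print(f"sensitivity: {tp / (tp + fn):.3f}")
print(f"specificity: {tn / (tn + fp):.3f}")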
S0933365716300112
Objective We provide a survey of recent advances in biomedical image analysis and classification from emergent imaging modalities such as terahertz (THz) pulse imaging (TPI) and dynamic contrast-enhanced magnetic resonance images (DCE-MRIs), and the identification of their underlying commonalities. Methods Both time and frequency domain signal pre-processing techniques are considered: noise removal, spectral analysis, principal component analysis (PCA) and wavelet transforms. Feature extraction and classification methods based on feature vectors using the above processing techniques are reviewed. A tensorial signal processing de-noising framework suitable for spatiotemporal association between features in MRI is also discussed. Validation Examples where the proposed methodologies have been successful in classifying TPIs and DCE-MRIs are discussed. Results Identifying commonalities in the structure of such heterogeneous datasets potentially leads to a unified multi-channel signal processing framework for biomedical image analysis. Conclusion The proposed complex-valued classification methodology enables fusion of entire datasets from a sequence of spatial images taken at different time stamps; this is of interest from the viewpoint of inferring disease proliferation. The approach is also of interest for other emergent multi-channel biomedical imaging modalities and of relevance across the biomedical signal processing community.
Pattern identification of biomedical images with time series: Contrasting THz pulse imaging with DCE-MRIs
S093336571630015X
Objective Radiotherapy treatment planning aims at delivering a sufficient radiation dose to cancerous tumour cells while sparing healthy organs in the tumour-surrounding area. It is a time-consuming trial-and-error process that requires the expertise of a group of medical experts including oncologists and medical physicists, and can take from 2–3 hours to a few days. Our objective is to improve the performance of our previously built case-based reasoning (CBR) system for brain tumour radiotherapy treatment planning. In this system, a treatment plan for a new patient is retrieved from a case base containing patient cases treated in the past and their treatment plans. However, this system does not perform any adaptation, which is needed to account for any difference between the new and retrieved cases. Generally, the adaptation phase is considered to be intrinsically knowledge-intensive and domain-dependent. Therefore, an adaptation often requires a large amount of domain-specific knowledge, which can be difficult to acquire and often is not readily available. In this study, we investigate approaches to adaptation that do not require much domain knowledge, referred to as knowledge-light adaptation. Methodology We developed two adaptation approaches: adaptation based on machine-learning tools and adaptation-guided retrieval. They were used to adapt the beam number and beam angles suggested in the retrieved case. Two machine-learning tools, neural networks and a naive Bayes classifier, were used in the adaptation to learn how the difference in attribute values between the retrieved and new cases affects the output of these two cases. The adaptation-guided retrieval takes into consideration not only the similarity between the new and retrieved cases, but also how to adapt the retrieved case. Results The research was carried out in collaboration with medical physicists at the Nottingham University Hospitals NHS Trust, City Hospital Campus, UK. All experiments were performed using real-world brain cancer patient cases treated with three-dimensional (3D)-conformal radiotherapy. Neural network-based adaptation improved the success rate of the CBR system with no adaptation by 12%. However, the naive Bayes classifier did not improve the current retrieval results, as it does not consider the interplay among attributes. The adaptation-guided retrieval of the case for beam number improved the success rate of the CBR system by 29%. However, it did not demonstrate good performance for the beam angle adaptation. Its success rate was 29% versus 39% when no adaptation was performed. Conclusions The obtained empirical results demonstrate that the proposed adaptation methods improve the performance of the existing CBR system in recommending the number of beams to use. However, we also conclude that to be effective, the proposed adaptation of beam angles requires a large number of relevant cases in the case base.
Knowledge-light adaptation approaches in case-based reasoning for radiotherapy treatment planning
S0933365716300574
Objective Antimicrobial stewardship programs have been shown to limit the inappropriate use of antimicrobials. Hospitals are increasingly relying on clinical decision support systems to assist in the demanding prescription reviewing process. In previous work, we have reported on an emerging clinical decision support system for antimicrobial stewardship that can learn new rules supervised by user feedback. In this paper, we report on the evaluation of this system. Methods The evaluated system uses a knowledge base coupled with a supervised learning module that extracts classification rules for inappropriate antimicrobial prescriptions using past recommendations for dose and dosing frequency adjustments, discontinuation of therapy, early switch from intravenous to oral therapy, and redundant antimicrobial spectrum. Over five weeks, the learning module was deployed alongside the baseline system to prospectively evaluate its ability to discover rules that complement the existing knowledge base for identifying inappropriate prescriptions of piperacillin–tazobactam, a frequently used antimicrobial. Results The antimicrobial stewardship pharmacists reviewed 374 prescriptions, of which 209 (56% of 374) were identified as inappropriate, leading to 43 recommendations to optimize prescriptions. The baseline system combined with the learning module triggered alerts in 270 prescriptions, with a positive predictive value of 74% for identifying inappropriate prescriptions. Of these, 240 reviewed prescriptions were identified by the alerts of the baseline system with a positive predictive value of 82%, and 105 reviewed prescriptions were identified by the alerts of the learning module with a positive predictive value of 62%. The combined system triggered alerts for all 43 recommendations, resulting in a rate of actionable alerts of 16% (43 recommendations of 270 reviewed alerts); the baseline system triggered alerts for 38 interventions, resulting in a rate of actionable alerts of 16% (38 of 240 reviewed alerts); and the learning module triggered alerts for 17 interventions, resulting in a rate of actionable alerts of 16% (17 of 105 reviewed alerts). The learning module triggered alerts for every inappropriate prescription missed by the knowledge base of the baseline system (n = 5). Conclusions The learning module was able to extract clinically relevant rules for multiple types of antimicrobial alerts. The learned rules were shown to extend the knowledge base of the baseline system by identifying pharmacist interventions that were missed by the baseline system. The learned rules identified inappropriate prescribing practices that were not supported by local experts and were missing from the baseline system's knowledge base. However, combining the baseline system and the learning module increased the number of false positives.
Evaluation of a machine learning capability for a clinical decision support system to enhance antimicrobial stewardship programs
S0933365716300598
Objective In this paper we propose artificial intelligence methods to estimate cardiorespiratory fitness (CRF) in free-living using wearable sensor data. Methods Our methods rely on a computational framework able to contextualize heart rate (HR) in free-living, and use context-specific HR as a predictor of CRF without the need for laboratory tests. In particular, we propose three estimation steps. Initially, we recognize activity primitives using accelerometer and location data. Using topic models, we group activity primitives and derive activity composites. We subsequently rank activity composites, and analyze the relation between ranked activity composites and CRF across individuals. Finally, HR data in specific activity primitives and composites is used as a predictor in a hierarchical Bayesian regression model to estimate CRF level from the participant's habitual behavior in free-living. Results We show that by combining activity primitives and activity composites, the proposed framework can adapt to the user and context, and outperforms other CRF estimation models, reducing estimation error by between 10.3% and 22.6% on a study population of 46 participants. Conclusions Our investigation showed that HR can be contextualized in free-living using activity primitives and activity composites, and that robust CRF estimation in free-living is feasible.
Cardiorespiratory fitness estimation in free-living using wearable sensors
S0933365716300689
Objective Disease-specific vocabularies are fundamental to many knowledge-based intelligent systems and applications like text annotation, cohort selection, disease diagnostic modeling, and therapy recommendation. Reference standards are critical in the development and validation of automated methods for disease-specific vocabularies. The goal of the present study is to design and test a generalizable method for the development of vocabulary reference standards from expert-curated, disease-specific biomedical literature resources. Methods We formed disease-specific corpora from literature resources like textbooks, evidence-based synthesized online sources, clinical practice guidelines, and journal articles. Medical experts annotated and adjudicated disease-specific terms in four classes (i.e., causes or risk factors, signs or symptoms, diagnostic tests or results, and treatment). Annotations were mapped to UMLS concepts. We assessed source variation, the contribution of each source to building disease-specific vocabularies, the saturation of the vocabularies with respect to the number of sources used, and the generalizability of the method to different diseases. Results The study resulted in 2588 string-unique annotations for heart failure in four classes, and 193 and 425 respectively for pulmonary embolism and rheumatoid arthritis in the treatment class. Approximately 80% of the annotations were mapped to UMLS concepts. The agreement among heart failure sources ranged between 0.28 and 0.46. The contribution of these sources to the final vocabulary ranged between 18% and 49%. With the sources explored, the heart failure vocabulary reached near saturation in all four classes with the inclusion of a minimum of six sources (or between four and seven sources if only counting terms that occurred in two or more sources). It took fewer sources to reach near saturation for the other two diseases in terms of the treatment class. Conclusions We developed a method for the development of disease-specific reference vocabularies. Expert-curated biomedical literature resources are a substantial source of disease-specific medical knowledge. It is feasible to reach near saturation in a disease-specific vocabulary using a relatively small number of literature sources.
A method for the development of disease-specific reference standards vocabularies from textual biomedical literature resources
S0933365716300999
Purpose Explore how efficient intelligent decision support systems, both easily understandable and straightforwardly implemented, can help modern hospital managers to optimize both bed occupancy and utilization costs. Methods and materials This paper proposes a hybrid genetic algorithm-queuing multi-compartment model for the patient flow in hospitals. A finite capacity queuing model with phase-type service distribution is combined with a compartmental model, and an associated cost model is set up. An evolutionary-based approach is used for enhancing the ability to optimize both bed management and associated costs. In addition, a “What-if analysis” shows how changing the model parameters could improve performance while controlling costs. The study uses bed-occupancy data collected at the Department of Geriatric Medicine – St. George's Hospital, London, over the period 1969–1984 and in January 2000. Results The hybrid model revealed that a bed-occupancy exceeding 91%, implying a patient rejection rate of around 1.1%, can be achieved with 159 beds plus 8 unstaffed beds. The same holding and penalty costs, but significantly different bed allocations (156 vs. 184 staffed beds, and 8 vs. 9 unstaffed beds, respectively), will result in significantly different costs (£755 vs. £1172). Moreover, once the arrival rate exceeds 7 patients/day, the costs associated with the finite capacity system become significantly smaller than those associated with an Erlang B queuing model (£134 vs. £947). Conclusion Encoding all the information provided by both the queuing system and the cost model through chromosomes, the genetic algorithm represents an efficient tool for optimizing bed allocation and associated costs. The methodology can be extended to different medical departments with minor modifications in structure and parameterization.
A hybrid genetic algorithm-queuing multi-compartment model for optimizing inpatient bed occupancy and associated costs
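A small sketch of the Erlang B loss model used as a comparison point in the abstract above, computed with the standard recursion for an M/M/c/c system. The arrival rate echoes the 7 patients/day figure, but the mean length of stay is an assumed, illustrative value, not a parameter from the paper.

def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking (rejection) probability of an M/M/c/c loss system."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

arrival_rate = 7.0               # patients per day (from the abstract)
mean_stay = 20.0                 # days per patient (assumed for illustration)
beds = 159
load = arrival_rate * mean_stay  # offered load in Erlangs

print(f"Erlang B rejection probability with {beds} beds: {erlang_b(beds, load):.4f}")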
S0933365716301026
Background and objective A computer-aided system for colorectal endoscopy could provide endoscopists with helpful diagnostic support during examinations. A straightforward means of providing an objective diagnosis in real time might be to use classifiers to identify individual parts of every endoscopic video frame, but the results could be highly unstable due to out-of-focus frames. To address this problem, we propose a defocus-aware Dirichlet particle filter (D-DPF) that combines a particle filter with a Dirichlet distribution and defocus information. Methods We develop a particle filter with a Dirichlet distribution that represents the state transition and likelihood of each video frame. We also incorporate additional defocus information by using isolated pixel ratios to sample from a Rayleigh distribution. Results We tested the performance of the proposed method using synthetic and real endoscopic videos with a frame-wise classifier trained on 1671 images of colorectal endoscopy. Two synthetic videos comprising 600 frames were used for comparisons with a Kalman filter and D-DPF without defocus information, and D-DPF was shown to be more robust against the instability of frame-wise classification results. Computation time was approximately 88 ms/frame, which is sufficient for real-time applications. We applied our method to 33 endoscopic videos and showed that the proposed method can effectively smooth highly unstable probability curves under the actual defocus present in the endoscopic videos. Conclusion The proposed D-DPF is a useful tool for smoothing unstable results of frame-wise classification of endoscopic videos to support real-time diagnosis during endoscopic examinations.
Defocus-aware Dirichlet particle filter for stable endoscopic video frame recognition
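A rough, generic sketch of the smoothing idea above: a particle filter whose particles are class-probability vectors, perturbed with a Dirichlet transition and reweighted by agreement with the noisy frame-wise classifier output. This is not the authors' D-DPF (in particular, the defocus-aware Rayleigh term is omitted) and all parameters are made up.

import numpy as np

rng = np.random.default_rng(0)
n_classes, n_particles, kappa = 3, 200, 50.0

def smooth(frame_probs):
    """frame_probs: (T, n_classes) noisy per-frame class probabilities."""
    particles = rng.dirichlet(np.ones(n_classes), size=n_particles)
    smoothed = []
    for obs in frame_probs:
        # Transition: perturb each particle with a Dirichlet centred on it.
        particles = np.vstack([rng.dirichlet(kappa * p + 1e-3) for p in particles])
        # Likelihood: agreement between each particle and the frame-wise output.
        weights = np.exp(-np.sum((particles - obs) ** 2, axis=1) / 0.05)
        weights /= weights.sum()
        smoothed.append(weights @ particles)              # weighted-mean estimate
        # Resample particles in proportion to their weights.
        particles = particles[rng.choice(n_particles, size=n_particles, p=weights)]
    return np.array(smoothed)

noisy = np.abs(rng.normal(size=(50, n_classes))) + 1e-6   # toy classifier outputs
noisy /= noisy.sum(axis=1, keepdims=True)
print(smooth(noisy)[:3])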
S0950705113001135
Information fusion is a well-known technique for highlighting features and patterns and for supporting multiple-criteria decision making. However, the decomposed information behind the fusion has always been unknown, making its applications limited. This research proposes a fuzzy integral combined with a fitness fusion (named the fuzzy integral fusion, FIF) to induce features and consequently reveal the decomposed information, empirically illustrating the dominance benchmark and the fusion effect for approximations. For illustration, the proposed fuzzy integral fusion is applied to the World Competitiveness Yearbook 2010 to analyze the European crisis nations (Greece, Italy, Portugal, Spain) and the European welfare nations (Denmark, Finland, Norway, Sweden). The results showed that the European crisis nations should improve their institutional framework to effectively raise their business finance efficiency.
A fuzzy integral fusion approach in analyzing competitiveness patterns from WCY2010
S0950705113001718
This paper proposes a new probabilistic graphical model which contains an unobservable latent variable that affects all other observable variables, and the proposed model is applied to the ranking evaluation of institutions using a set of performance indicators. Linear Gaussian models are used to express the causal relationships among variables. The proposed iterative method combines score-based and constraint-based causal discovery to find the network structure, while Gibbs sampling and regression analysis are conducted to estimate the parameters. The latent variable representing the ranking scores of institutions is estimated, and the rankings are determined by comparing the estimated scores. The interval estimate of the ranking of an institution is finally obtained from a repetitive procedure. The proposed procedure was applied to a real data set as well as artificial data sets.
Ranking evaluation of institutions based on a Bayesian network having a latent variable
S0950705113001755
Data Envelopment Analysis (DEA) is a widely used mathematical programming technique for comparing the inputs and outputs of a set of homogenous Decision Making Units (DMUs) by evaluating their relative efficiency. The conventional DEA methods assume deterministic and precise values for the input and output observations. However, the observed values of the input and output data in real-world problems can potentially be both random and fuzzy in nature. We introduce Random Fuzzy (Ra-Fu) variables in DEA where randomness and vagueness coexist in the same problem. In this paper, we propose three DEA models for measuring the radial efficiency of DMUs when the input and output data are Ra-Fu variables with Poisson, uniform and normal distributions. We then extend the formulation of the possibility–probability and the necessity–probability DEA models with Ra-Fu parameters for a production possibility set where the Ra-Fu inputs and outputs have normal distributions with fuzzy means and variances. We finally propose the general possibility–probability and necessity–probability DEA models with fuzzy thresholds. A set of numerical examples and a case study are presented to demonstrate the efficacy of the procedures and algorithms.
Chance-constrained DEA models with random fuzzy inputs and outputs
S0950705113001810
Distributed environments, technological evolution, the outsourcing market and information technology (IT) are factors that considerably influence current and future industrial maintenance management. Repairing and maintaining plants and installations requires a better and more sophisticated skill set and continuously updated knowledge. Today, maintenance solutions involve the increasing collaboration of several experts to solve complex problems. These solutions imply changing the requirements and practices for maintenance; thus, conceptual models to support multidisciplinary expert collaboration in decision making are indispensable. The objectives of this work are as follows: (i) knowledge formalization of the domain vocabulary to improve communication and knowledge sharing among a number of experts and technical actors, using the Conceptual Graphs (CGs) formalism, (ii) multi-expert knowledge management with the Transferable Belief Model (TBM) to support collaborative decision making, and (iii) maintenance problem solving with a variant of the Case-Based Reasoning (CBR) mechanism, which solves new problems based on the solutions of similar past problems and integrates the experts’ beliefs. The proposed approach is applied to the maintenance management of an illustrative case study.
Knowledge reuse integrating the collaboration from experts in industrial maintenance management
S0950705113001822
Vendor-managed inventory (VMI) is a common policy in supply chain management (SCM) to reduce bullwhip effects. Although different applications of VMI have been proposed in the literature, the multi-vendor multi-retailer single-warehouse (MV-MR-SW) case has not been investigated yet. This paper develops a constrained MV-MR-SW supply chain, in which both the space and the annual number of orders of the central warehouse are limited. The goal is to find the order quantities along with the number of shipments received by retailers and vendors such that the total inventory cost of the chain is minimized. Since the problem is formulated as an integer nonlinear programming model, a particle swarm optimization (PSO) meta-heuristic is presented to find an approximate optimum solution of the problem. In the proposed PSO algorithm, a genetic algorithm (GA) with an improved operator, namely the boundary operator, is employed as a local searcher to turn it into a hybrid PSO. In addition, since no benchmark is available in the literature, the GA with the boundary operator is proposed as well to solve the problem and to verify the solution. After employing the Taguchi method to calibrate the parameters of both algorithms, their performances in solving some test problems are compared in terms of the solution quality.
Optimizing a multi-vendor multi-retailer vendor managed inventory problem: Two tuned meta-heuristic algorithms
S0950705113001846
The complexity of Web information environments and multiple-topic Web pages are negative factors significantly affecting the performance of focused crawling. In a Web page, anchors or some link-contexts may misguide focused crawling, and a highly relevant region may also be obscured owing to the low overall relevance of that page. Thus, partitioning Web pages into smaller blocks will significantly improve the performance. In view of the above, this paper presents a heuristic-based approach, CBP–SLC (Content Block Partition–Selective Link Context), which combines a Web page partition algorithm with the selective use of link-context according to the relevance of content blocks, to enhance focused Web crawling. To guide the crawler, we build a weighted voting classifier by iteratively applying the SVM algorithm based on a novel TFIDF-improved feature weighting approach. During classification, an improved 1-DNF algorithm, called 1-DNFC, is also proposed, aimed at identifying more reliable negative documents from the unlabeled examples set. Experimental results show that the classifier using TFIPNDF outperforms the one using TFIDF, and our crawler outperforms Breadth-First, Best-First, Anchor Text Only, Link-context, SLC and CBP in both harvest rate and target recall, which indicates that our new techniques are efficient and feasible.
Focused crawling enhanced by CBP–SLC
S095070511300186X
A significant number of recommender systems utilize the k-nearest neighbor (kNN) algorithm as the collaborative filtering core. This algorithm is simple; it uses up-to-date data and facilitates the explanation of recommendations. Its greatest drawbacks are the execution time required and its lack of scalability. The algorithm is based on the repetitive execution of the selected similarity metric. In this paper, an innovative similarity metric is presented: HwSimilarity. This metric attains high-quality recommendations that are similar to those provided by the best existing metrics and can be processed by employing low-cost hardware circuits. This paper examines the key design concepts and recommendation-quality results of the metric. The hardware design, cost of implementation, and improvements achieved during execution are also explored.
A similarity metric designed to speed up, using hardware, the recommender systems k-nearest neighbors algorithm
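A minimal sketch of the kNN collaborative-filtering core that the paper accelerates, here with a plain cosine similarity standing in for the proposed HwSimilarity metric (which is the paper's contribution). The ratings matrix and parameters are illustrative.

import numpy as np

def predict(ratings, user, item, k=2):
    """Predict ratings[user, item] from the k most similar users (0 = unrated)."""
    rated = ratings[:, item] > 0                       # users who rated the item
    candidates = np.flatnonzero(rated & (np.arange(len(ratings)) != user))
    u = ratings[user]
    sims = np.array([ratings[v] @ u / (np.linalg.norm(ratings[v]) * np.linalg.norm(u))
                     for v in candidates])
    order = np.argsort(sims)[-k:]                      # k nearest neighbours
    top, w = candidates[order], sims[order]
    return float(w @ ratings[top, item] / w.sum())     # similarity-weighted mean

ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 5, 4],
                    [0, 1, 5, 4]], dtype=float)
print(predict(ratings, user=1, item=1, k=2))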
S0950705113002104
Credit rating assessment is a complicated process in which many parameters describing a company are taken into consideration and a grade is assigned, which represents the reliability of a potential client. Such assessment is expensive, because domain experts have to be employed to perform the rating. One way of lowering the costs of performing the rating is to use an automated rating procedure. In this paper, we assess several automatic classification methods for credit rating assessment. The methods presented in this paper follow a well-known paradigm of supervised machine learning, where they are first trained on a dataset representing companies with a known credibility, and then applied to companies with unknown credibility. We employed a procedure of feature selection that improved the accuracy of the ratings obtained as a result of classification. In addition, feature selection reduced the number of parameters describing a company that have to be known before the automatic rating can be performed. Wrappers performed better than filters for both US and European datasets. However, better classification performance was achieved at the cost of additional computational time. Our results also suggest that the US rating methodology favors company size and market value ratios, whereas the European methodology relies more on profitability and leverage ratios.
Feature selection in corporate credit rating prediction
S0950705113002177
In this paper, a hybrid particle swarm optimization with a crossover operator (denoted C-PSO) is proposed, in which the crossover operator is adopted to enhance the information exchange between particles and prevent premature convergence of the swarm. In this study, the C-PSO algorithm is employed to solve the high-dimensional bilevel multiobjective programming problem (HDBLMPP); it performs better than the existing method with respect to the generational distance and has almost the same performance with respect to the spacing. Finally, we use four test problems and a practical application to measure and evaluate the proposed algorithm. Our results indicate that the proposed algorithm is highly competitive with the state-of-the-art algorithm for high-dimensional bilevel multiobjective optimization.
Solving high dimensional bilevel multiobjective programming problem using a hybrid particle swarm optimization algorithm with crossover operator
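A minimal sketch of a particle swarm with an added crossover step between particles and other particles' personal bests, in the spirit of the C-PSO idea described above. The sphere test function and all constants are illustrative; this is not the authors' algorithm or benchmark.

import numpy as np

rng = np.random.default_rng(0)
dim, n_particles, iters = 10, 30, 200
w, c1, c2, p_cross = 0.7, 1.5, 1.5, 0.3

f = lambda x: np.sum(x ** 2, axis=-1)                # toy objective (sphere)

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), f(x)

for _ in range(iters):
    gbest = pbest[np.argmin(pbest_f)]
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # PSO velocity update
    x = x + v
    # Crossover: swap some coordinates with another particle's personal best
    # to keep the swarm diverse and delay premature convergence.
    partners = rng.permutation(n_particles)
    mask = rng.random(x.shape) < p_cross
    x = np.where(mask, pbest[partners], x)
    fx = f(x)
    improved = fx < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], fx[improved]

print("best value found:", pbest_f.min())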
S0950705113002190
The literature survey is one of the most important steps in the process of academic research, allowing researchers to explore and understand topics. However, novice researchers without sufficient prior knowledge lack the skills to determine proper keywords for searching topics of choice. To tackle this problem, we propose an entropy-based query expansion with reweighting (E_QE) approach to revise queries during the iterative retrieval process. We designed a series of experiments that consider the researcher’s changing information needs during task execution. Three topic change situations are considered in this work: minor, moderate and dramatic topic changes. The simulation-based pseudo-relevance feedback technique is applied during the search process to evaluate the effectiveness of the proposed approach without human intervention. We measured the effectiveness of the TFIDF and E_QE approaches for different types of topic change situations. The results show that the proposed E_QE approach achieves better search results than the TFIDF approach, helping researchers to revise queries. The results also confirm that the E_QE approach is effective when considering the relevant and irrelevant pages during the relevance feedback process at different levels of topic change.
An entropy-based query expansion approach for learning researchers’ dynamic information needs
S0950705113002323
In this paper, we analyze the performance of several well-known pattern recognition and dimensionality reduction techniques when applied to mass-spectrometry data for odor biometric identification. Motivated by the successful results of previous works capturing the odor from other parts of the body, this work attempts to evaluate the feasibility of identifying people by the odor emanating from the hands. By formulating this task according to a machine learning scheme, the problem is framed as a small-sample-size supervised classification problem in which the input data consist of mass spectrograms from the hand odor of 13 subjects captured in different sessions. The high dimensionality of the data makes it necessary to apply feature selection and extraction techniques together with a simple classifier in order to improve the generalization capabilities of the model. Our experimental results achieve recognition rates of over 85%, which reveals that discriminatory information exists in hand odor and points to body odor as a promising biometric identifier.
Analysis of pattern recognition and dimensionality reduction techniques for odor biometrics
S0950705113002335
The community structure is one of the most important patterns in networks. Since finding the communities in a network can significantly improve our understanding of its complex relations, much work has been done in recent years. Yet an exact definition and practical algorithms for community detection are still lacking. This paper proposes a novel definition of community which overcomes the drawbacks of existing methods. With the new definition, efficient community detection algorithms are developed, which take advantage of additive topological and other constraints to discover communities of arbitrary shape based on feedback. The algorithm has a run time linear in the size of the graph. Experimental results demonstrate that the community definition in this paper is effective and the algorithm is scalable to large graphs.
Efficient community detection with additive constraints on large networks
S0950705113002359
A new evidential classifier (EC) based on belief functions is developed in this paper for the classification of imprecise data using K-nearest neighbors. EC performs credal classification, which allows objects to be classified either in specific classes, in meta-classes defined by the union of several specific classes, or in the ignorant class for outlier detection. The main idea of EC is not to classify an object in a particular class whenever the object is simultaneously close to several classes that turn out to be indistinguishable for it. In such cases, EC associates the object with a proper meta-class in order to reduce misclassification errors. The full ignorant class is interpreted as the class of outliers representing all the objects that are too far from the other data. The K basic belief assignments (bba’s) associated with the object are determined by the distances of the object to its K-nearest neighbors and some chosen imprecision thresholds. The classification of the object depends on the global combination results of these K bba’s. The interest and potential of this new evidential classifier with respect to other classical methods are illustrated through several examples based on artificial and real data sets.
Evidential classifier for imprecise data based on belief functions
S0950705113002384
This paper presents a fast algorithm called Column Generation Newton (CGN) for kernel 1-norm support vector machines (SVMs). CGN combines the Column Generation (CG) algorithm and the Newton Linear Programming SVM (NLPSVM) method. NLPSVM was proposed for solving 1-norm SVM, and CG is frequently used in large-scale integer and linear programming algorithms. In each iteration of the kernel 1-norm SVM, NLPSVM has a time complexity of O(ℓ^3), where ℓ is the sample number, and CG has a time complexity between O(ℓ^3) and O(n′^3), where n′ is the number of columns of the coefficient matrix in the subproblem. CGN uses CG to generate a sequence of subproblems containing only active constraints and then NLPSVM to solve each subproblem. Since the subproblem in each iteration only consists of n′ unbound constraints, CGN thus has a time complexity of O(n′^3), which is smaller than that of NLPSVM and CG. Also, CGN is faster than CG when the solution to 1-norm SVM is sparse. A theorem is given to show a finite step convergence of CGN. Experimental results on the Ringnorm and UCI data sets demonstrate the efficiency of CGN to solve the kernel 1-norm SVM.
A fast algorithm for kernel 1-norm support vector machines
S0950705113002402
Fuzzy regression models have been widely applied to explain the relationship between explanatory variables and responses in fuzzy environments. This paper proposes a simple two-stage approach for constructing a fuzzy regression model based on the distance concept. Crisp numbers representing the fuzzy observations are obtained using the defuzzification method, and then the crisp regression coefficients in the fuzzy regression model are determined using the conventional least-squares method. Along with the crisp regression coefficients, the proposed fuzzy regression model contains a fuzzy adjustment variable so that the model can deal with the fuzziness from fuzzy observations in order to reduce the fuzzy estimation error. A mathematical programming model is formulated to determine the fuzzy adjustment term in the proposed fuzzy regression model to minimize the total estimation error based on the distance concept. Unlike existing approaches that only focus on positive coefficients, the problem of negative coefficients in the fuzzy regression model is taken into account and resolved in the solution procedure. Comparisons with previous studies show that the proposed fuzzy regression model has the highest explanatory power based on the total estimation error using various criteria. A real-life dataset is adopted to demonstrate the applicability of the proposed two-stage approach in handling a problem with negative coefficients in the fuzzy regression model and a large number of fuzzy observations.
A two-stage approach for formulating fuzzy regression models
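A minimal sketch of the first stage described above, assuming triangular fuzzy observations defuzzified by their centroid and crisp coefficients fitted by ordinary least squares. The stage-two fuzzy adjustment term, which the paper obtains from a mathematical programming model, is not reproduced here.

```python
import numpy as np

def defuzzify_triangular(tfn):
    """Centroid of a triangular fuzzy number (l, m, r) -- one common defuzzification."""
    l, m, r = tfn
    return (l + m + r) / 3.0

def stage_one_regression(X_fuzzy, y_fuzzy):
    """Stage 1: defuzzify the fuzzy observations, then fit crisp coefficients by OLS.

    X_fuzzy: list of rows of (l, m, r) triples; y_fuzzy: list of (l, m, r) triples.
    """
    X = np.array([[defuzzify_triangular(t) for t in row] for row in X_fuzzy])
    y = np.array([defuzzify_triangular(t) for t in y_fuzzy])
    X1 = np.column_stack([np.ones(len(X)), X])        # add intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef                                       # [intercept, b1, ..., bp]

# Toy usage with symmetric triangular observations
Xf = [[(0.8, 1.0, 1.2)], [(1.8, 2.0, 2.2)], [(2.7, 3.0, 3.3)]]
yf = [(1.9, 2.1, 2.3), (3.8, 4.0, 4.2), (5.7, 6.1, 6.5)]
print(stage_one_regression(Xf, yf))
```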
S0950705113002414
With the rapid growth of user-generated content on the internet, automatic sentiment analysis of online customer reviews has become a hot research topic recently, but due to variety and wide range of products and services being reviewed on the internet, the supervised and domain-specific models are often not practical. As the number of reviews expands, it is essential to develop an efficient sentiment analysis model that is capable of extracting product aspects and determining the sentiments for these aspects. In this paper, we propose a novel unsupervised and domain-independent model for detecting explicit and implicit aspects in reviews for sentiment analysis. In the model, first a generalized method is proposed to learn multi-word aspects and then a set of heuristic rules is employed to take into account the influence of an opinion word on detecting the aspect. Second a new metric based on mutual information and aspect frequency is proposed to score aspects with a new bootstrapping iterative algorithm. The presented bootstrapping algorithm works with an unsupervised seed set. Third, two pruning methods based on the relations between aspects in reviews are presented to remove incorrect aspects. Finally the model employs an approach which uses explicit aspects and opinion words to identify implicit aspects. Utilizing extracted polarity lexicon, the approach maps each opinion word in the lexicon to the set of pre-extracted explicit aspects with a co-occurrence metric. The proposed model was evaluated on a collection of English product review datasets. The model does not require any labeled training data and it can be easily applied to other languages or other domains such as movie reviews. Experimental results show considerable improvements of our model over conventional techniques including unsupervised and supervised approaches.
Care more about customers: Unsupervised domain-independent aspect detection for sentiment analysis of customer reviews
S0950705113002591
We present two novel techniques for the imputation of both categorical and numerical missing values. The techniques use decision trees and forests to identify horizontal segments of a data set where the records belonging to a segment have higher similarity and attribute correlations. Using the similarity and correlations, missing values are then imputed. To achieve a higher quality of imputation some segments are merged together using a novel approach. We use nine publicly available data sets to experimentally compare our techniques with a few existing ones in terms of four commonly used evaluation criteria. The experimental results indicate a clear superiority of our techniques based on statistical analyses such as confidence interval.
Missing value imputation using decision trees and decision forests by splitting and merging records: Two novel techniques
S0950705113002621
In this paper, we propose a hybrid metaheuristic algorithm to solve the cyclic antibandwidth problem. This hard optimization problem consists of embedding an n-vertex graph into the cycle C n , such that the minimum distance (measured in the cycle) of adjacent vertices is maximized. It constitutes a natural extension of the well-known antibandwidth problem, and can be viewed as the dual problem of the cyclic bandwidth problem. Our method hybridizes the artificial bee colony methodology with tabu search to obtain high-quality solutions in short computational times. Artificial bee colony is a recent swarm intelligence technique based on the intelligent foraging behavior of honeybees. The performance of this algorithm is basically determined by two search strategies, an initialization scheme that is employed to construct initial solutions and a method for generating neighboring solutions. On the other hand, tabu search is an adaptive memory programming methodology introduced in the eighties to solve hard combinatorial optimization problems. Our hybrid approach adapts some elements of both methodologies, artificial bee colony and tabu search, to the cyclic antibandwidth problem. In addition, it incorporates a fast local search procedure to enhance the local intensification capability. Through the analysis of experimental results, the highly effective performance of the proposed algorithm is shown with respect to the current state-of-the-art algorithm for this problem.
A hybrid metaheuristic for the cyclic antibandwidth problem
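The objective being maximized above is compact enough to state in code: the minimum cyclic distance between the labels of adjacent vertices. The sketch below evaluates it and runs a plain swap-based local search as a baseline; it is not the artificial bee colony + tabu search hybrid of the paper.

```python
import random

def cyclic_antibandwidth(labels, edges, n):
    """Minimum cyclic distance between labels of adjacent vertices (to be maximized)."""
    def cyc(a, b):
        d = abs(a - b)
        return min(d, n - d)
    return min(cyc(labels[u], labels[v]) for u, v in edges)

def random_swap_search(edges, n, iters=5000, seed=0):
    """A plain swap-based local search baseline for the cyclic antibandwidth problem."""
    rng = random.Random(seed)
    labels = list(range(n))          # labels[v] = position of vertex v on the cycle C_n
    rng.shuffle(labels)
    best = cyclic_antibandwidth(labels, edges, n)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        labels[i], labels[j] = labels[j], labels[i]
        val = cyclic_antibandwidth(labels, edges, n)
        if val >= best:
            best = val               # keep non-worsening swaps
        else:
            labels[i], labels[j] = labels[j], labels[i]   # undo
    return best, labels

# Toy usage: a 6-cycle graph
edges = [(i, (i + 1) % 6) for i in range(6)]
print(random_swap_search(edges, 6))
```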
S0950705113002712
Lymph Node Metastasis (LNM) in gastric cancer is an important prognostic factor regarding long-term survival. As it is difficult for doctors to combine multiple factors for a comprehensive analysis, a Clinical Decision Support System (CDSS) is desired to help with the analysis. In this paper, a novel Bi-level Belief Rule Based (BBRB) prototype CDSS is proposed. The CDSS consists of a two-layer Belief Rule Base (BRB) system. It can be used to handle uncertainty in both clinical data and specific domain knowledge. Initial BRBs are constructed from domain-specific knowledge, which may not be accurate. Traditional methods for optimizing BRB are sensitive to initialization and are limited by their weak local searching abilities. In this paper, a new Clonal Selection Algorithm (CSA) is proposed to train a BRB system. Based on CSA, efficient global search can be achieved by reproducing individuals and selecting their improved maturated progenies after the affinity maturation process. The proposed prototype CDSS is validated using a set of real patient data and performs extremely well. In particular, BBRB is capable of providing more reliable and informative diagnosis than a single-layer BRB system in the case study. Compared with the conventional optimization method, the new CSA can further improve the diagnostic performance by avoiding premature convergence to local optima.
A bi-level belief rule based decision support system for diagnosis of lymph node metastasis in gastric cancer
S0950705113002852
In compressed sensing, sparse signal reconstruction is a required stage. To find sparse solutions of reconstruction problems, many methods have been proposed. It is time-consuming for some methods when the regularization parameter takes a small value. This paper proposes a decomposition algorithm for sparse signal reconstruction, which is almost insensitive to the regularization parameter. In each iteration, a subproblem or a small quadratic programming problem is solved in our decomposition algorithm. If the extended solution in the current iteration satisfies optimality conditions, an optimal solution to the reconstruction problem is found. On the contrary, a new working set must be selected for constructing the next subproblem. The convergence of the decomposition algorithm is also shown in this paper. Experimental results show that the decomposition method is able to achieve a fast convergence when the regularization parameter takes small values.
Sparse signal reconstruction using decomposition algorithm
S0950705113002918
In the real world, some heterogeneous items are prohibited from being transported together, or a penalty cost occurs when transporting them together. This paper firstly proposes the joint replenishment and delivery (JRD) model where a warehouse procures multiple heterogeneous items from suppliers and delivers them to retailers. The problem is to determine the grouping decision and when and how much to order and deliver to the warehouse and retailers such that the total costs are minimized. However, due to the JRD’s difficult mathematical properties, simple and effective solutions for this problem have eluded researchers. To find an optimal solution, an adaptive hybrid differential evolution (AHDE) algorithm is designed. Results of contrastive numerical examples show that AHDE outperforms a genetic algorithm. The effectiveness of AHDE is further verified by randomly generated problems. The findings show that AHDE is more stable and robust in handling this complex problem.
Modeling and optimization for the joint replenishment and delivery problem with heterogeneous items
S0950705113003043
A crucial step in group decision making (GDM) processes is the aggregation of individual opinions with the aim of achieving a “fair” representation of each individual within the group. In multi-granular linguistic contexts where linguistic term sets with common domain but different granularity and/or semantic are used, the methodology widely applied until now requires, prior to the aggregation step, the application of a unification process. The reason for this unification process is the lack of appropriate aggregation operators for directly aggregating uncertain information represented by means of fuzzy sets. With the recent development of the Type-1 Ordered Weighted Averaging (T1OWA) operator, which is able to aggregate fuzzy sets, alternative approaches to multi-granular linguistic GDM problems are possible. Unlike consensus models based on unification processes, this paper presents a new T1OWA based consensus methodology that can directly manage linguistic term sets with different cardinality and/or semantic without the need to perform any transformation to unify the information. Furthermore, the linguistic information could be assumed to be balanced or unbalanced in its mathematical representation, and therefore the new T1OWA approach to consensus is more general in its application than previous consensus reaching processes with multi-granular linguistic information. To test the goodness of the new consensus reaching approach, a comparative study between the T1OWA based consensus model and the unification based consensus model is carried out using six randomly generated GDM problems with balanced multi-granular information. When the distance between fuzzy sets used in the T1OWA based approach is defined as the corresponding distance between their centroids, a higher final level of consensus is achieved in four out of the six cases, although no significant differences were found between both consensus approaches.
Type-1 OWA methodology to consensus reaching processes in multi-granular linguistic contexts
S0950705113003067
Feature selection is a vital preprocessing step for the text classification task, used to address the curse of dimensionality problem. Most existing metrics (such as information gain) only evaluate features individually but completely ignore the redundancy between them. This can decrease the overall discriminative power because one feature’s predictive power is weakened by others. On the other hand, though all higher order algorithms (such as mRMR) take redundancy into account, the high computational complexity renders them impractical in the text domain. This paper proposes a novel metric called global information gain (GIG) which can avoid redundancy naturally. An efficient feature selection method called maximizing global information gain (MGIG) is also given. We compare MGIG with four other algorithms on six datasets; the experimental results show that MGIG gives better results than the other methods in most cases. Moreover, MGIG runs significantly faster than the traditional higher order algorithms, which makes it a proper choice for feature selection in the text domain.
Feature selection via maximizing global information gain for text classification
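For context, the classical per-feature score the abstract contrasts with can be stated in a few lines; the sketch below ranks binary term features by their individual information gain, which is exactly the redundancy-blind baseline that GIG/MGIG aim to improve on.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(x, y):
    """Information gain of one binary feature x for class labels y (ignores redundancy)."""
    _, counts = np.unique(y, return_counts=True)
    ig = entropy(counts / len(y))
    for v in (0, 1):
        mask = (x == v)
        if mask.any():
            _, c = np.unique(y[mask], return_counts=True)
            ig -= mask.mean() * entropy(c / mask.sum())
    return ig

def rank_features(X, y, top_k=10):
    """Rank binary features (e.g. word presence) by individual information gain."""
    scores = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    order = np.argsort(scores)[::-1][:top_k]
    return order, scores[order]

# Toy usage: 6 documents, 3 binary term features, 2 classes
X = np.array([[1, 0, 1], [1, 0, 0], [1, 1, 1], [0, 1, 0], [0, 1, 1], [0, 1, 0]])
y = np.array([0, 0, 0, 1, 1, 1])
print(rank_features(X, y, top_k=3))
```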
S0950705113003146
The aggregation of individuals’ preferences into a consensus ranking is a decision support problem which has been widely used in various applications, such as decision support systems, voting systems, and recommendation systems. Especially when applying recommendation systems in business, customers ask for more suggestions about purchasing products or services because the tremendous amount of information available can be overwhelming. Therefore, we have to gather more preferences from recommenders and aggregate them to gain consensuses. For an example of preference ranking, C>A⩾D⩾B indicates C is favorable to A, A is somewhat favorable but not fully favorable to D, and ultimately D is somewhat favorable but not fully favorable to B, where > and ⩾ are comparators, and A, B, C, and D are items. This shows the ranking relationship between items. However, no studies, to the best of our knowledge, have ever developed a recommendation system to suggest a temporal relationship between items. That is, “item A could occur during the duration of item B” or “item C could occur before item D”. This type of recommendation can be applied to the reading order of books, course plans in colleges, or the order of taking medicine for patients. This study proposes a novel recommendation model to discover closed consensus temporal patterns, where closed means the patterns are only the maximum consensus sequences. Experiments using synthetic and real datasets showed the model’s computational efficiency, scalability, and effectiveness.
Recommendations of closed consensus temporal patterns by group decision making
S0950705113003237
Highly accurate interval forecasting of a stock price index is fundamental to successfully making a profit when making investment decisions, by providing a range of values rather than a point estimate. In this study, we investigate the possibility of forecasting an interval-valued stock price index series over short and long horizons using multi-output support vector regression (MSVR). Furthermore, this study proposes a firefly algorithm (FA)-based approach, built on the established MSVR, for determining the parameters of MSVR (abbreviated as FA-MSVR). Three globally traded broad market indices are used to compare the performance of the proposed FA-MSVR method with selected counterparts. The quantitative and comprehensive assessments are performed on the basis of statistical criteria, economic criteria, and computational cost. In terms of statistical criteria, we compare the out-of-sample forecasting using goodness-of-forecast measures and testing approaches. In terms of economic criteria, we assess the relative forecast performance with a simple trading strategy. The results obtained in this study indicate that the proposed FA-MSVR method is a promising alternative for forecasting interval-valued financial time series.
Multiple-output support vector regression with a firefly algorithm for interval-valued stock price index forecasting
S0950705113003304
Emotional illiteracy exists in current e-learning environments; it weakens learning enthusiasm and productivity and has received more attention in recent research. Inspired by affective computing and the active listening strategy, in this paper, a research and application framework for recognizing emotion based on textual interaction is presented first. Second, an emotion category model for e-learners is defined. Third, many Chinese metaphors are abstracted from the corpus according to sentence semantics and syntax. Fourth, as part of the active listening strategy, topic detection is used to detect the first turn in dialogs and recognize the type of emotion in that turn, which is different from traditional emotion recognition approaches that try to classify every turn into an emotion category. Fifth, compared with Support Vector Machines (SVM), Naive Bayes, LogitBoost, Bagging, MultiClass Classifier, RBFnetwork, J48 algorithms and their corresponding cost-sensitive approaches, Random Forest and its corresponding cost-sensitive approaches achieve better results in our initial experiment of classifying the e-learners’ emotions. Finally, a case-based reasoning approach for emotion regulation instance recommendation is proposed to guide the listener in regulating the negative emotion of a speaker, in which a weighted sum method for Chinese sentence similarity computation is adopted. The experimental result shows that the ratio of effective cases is 68%.
Recognizing and regulating e-learners’ emotions based on interactive Chinese texts in e-learning systems
S0950705113003481
Attribute reduction and attribute generalization are two basic methods for simple representations of knowledge. Attribute reduction can only reduce the number of attributes and is thus unsuitable for attributes with hierarchical domains. Attribute generalization can transform raw attribute domains into a coarser granularity by exploiting attribute value taxonomies (AVTs). As the control of how high an attribute should be generalized is typically quite subjective, it can easily result in over-generalization or under-generalization. This paper investigates knowledge reduction for decision tables with AVTs, which can objectively control the generalization process and construct a reduced data set with fewer attributes and smaller attribute domains. Specifically, we make use of Shannon’s conditional entropy for measuring the classification capability of a generalization and propose a novel concept for knowledge reduction, designated the attribute-generalization reduct, which can objectively generalize attributes to maximally high levels while keeping the same classification capability as the raw data. We analyze major relationships between the attribute reduct and the attribute-generalization reduct, prove that finding a minimal attribute-generalization reduct is an NP-hard problem, and develop a heuristic algorithm for attribute-generalization reduction, namely AGR-SCE. Empirical studies demonstrate that our algorithm accomplishes better classification performance and assists in computing smaller rule sets with better generalized knowledge compared with the attribute reduction method.
Knowledge reduction for decision tables with attribute value taxonomies
S0950705113003511
Recently, collaborative tagging, also known as “folksonomy” in Web 2.0, has allowed users to collaboratively create and manage tags to classify and categorize dynamic content for searching and sharing. A user’s interest in social resources usually changes with time in such a dynamic and information rich environment. Additionally, a social network is one innovative characteristic in social resource sharing websites. The information from a social network provides an inference of a certain user’s interests based on the interests of this user’s network neighbors. To handle the problem of personalized interests changing gradually with time, and to utilize the benefit of the social network, this study models a personalized user interest, incorporating frequency, recency, and duration of tag-based information, and performs collaborative recommendations using the user’s social network in social resource sharing websites. The proposed method includes finding neighbors from the “social friends” network by using collaborative filtering and recommending similar resource items to the users by using content-based filtering. This study examines the proposed system’s performance using an experimental dataset collected from a social bookmarking website. The experimental results show that the hybridization of the user’s preferences with frequency, recency, and duration plays an important role, and provides better performance than traditional collaborative recommendation systems. The experimental results also reveal that the friend network information can be successfully incorporated, thus improving the collaborative recommendation process.
Utilizing user tag-based interests in recommender systems for social resource sharing websites
S0950705113003560
Collaborative filtering has become one of the most used approaches to provide personalized services for users. The key of this approach is to find similar users or items using the user-item rating matrix so that the system can show recommendations for users. However, most methods of this kind are based on similarity measures, such as cosine, the Pearson correlation coefficient, and mean squared difference. These measures are not very effective, especially under cold-user conditions. This paper presents a new user similarity model to improve the recommendation performance when only few ratings are available to calculate the similarities for each user. The model not only considers the local context information of user ratings, but also the global preference of user behavior. Experiments on three real data sets are implemented and compared with many state-of-the-art similarity measures. The results show the superiority of the new similarity model in recommendation performance.
A new user similarity model to improve the accuracy of collaborative filtering
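The classical baselines the abstract criticizes are easy to state concretely; the sketch below computes cosine and Pearson similarity between users over co-rated items of a rating matrix (0 meaning "not rated"). The paper's new similarity model, which also uses local context and global preference, is not reproduced.

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity over co-rated items only."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pearson_sim(u, v):
    """Pearson correlation over co-rated items only."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Toy usage: rows = users, columns = items, 0 = unrated
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5]])
print(cosine_sim(R[0], R[1]), pearson_sim(R[0], R[2]))
```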
S0950705113003705
One of the most challenging tasks in the development of recommender systems is the design of techniques that can infer the preferences of users through the observation of their actions. Those preferences are essential to obtain a satisfactory accuracy in the recommendations. Preference learning is especially difficult when attributes of different kinds (numeric or linguistic) intervene in the problem, and even more when they take multiple possible values. This paper presents an approach to learn user preferences over numeric and multi-valued linguistic attributes through the analysis of the user selections. The learning algorithm has been tested with real data on restaurants, showing a very good performance.
Automatic preference learning on numeric and multi-valued categorical attributes
S0950705113003857
Natural data sets often have missing values in them. An accurate missing value imputation is crucial to increase the usability of a data set for statistical analyses and data mining tasks. In this paper we present a novel missing value imputation technique using a data set’s existing patterns including co-appearances of attribute values, correlations among the attributes and similarity of values belonging to an attribute. Our technique can impute both numerical and categorical missing values. We carry out extensive experiments on nine natural data sets, and compare our technique with four high quality existing techniques. We simulate 32 types of missing patterns (combinations), and thereby generate 320 missing data sets for each of the nine natural data sets. Two well-known evaluation criteria, namely the index of agreement (d2) and root mean squared error, are used. Our experimental results, based on the statistical sign test, indicate that our technique achieves significantly better imputation accuracy than the existing techniques.
FIMUS: A framework for imputing missing values using co-appearance, correlation and similarity analysis
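The two evaluation criteria mentioned above can be computed as follows; the d2 formula used here is the squared form of Willmott's index of agreement, which is assumed to be the d2 the abstract refers to.

```python
import numpy as np

def rmse(pred, obs):
    """Root mean squared error between imputed and original values."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def index_of_agreement_d2(pred, obs):
    """Willmott's index of agreement (squared form), in [0, 1], 1 = perfect agreement.

    d2 = 1 - sum((P - O)^2) / sum((|P - mean(O)| + |O - mean(O)|)^2)
    """
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    o_bar = obs.mean()
    denom = np.sum((np.abs(pred - o_bar) + np.abs(obs - o_bar)) ** 2)
    return float(1.0 - np.sum((pred - obs) ** 2) / denom) if denom > 0 else 1.0

# Toy usage: original values vs. values filled in by some imputation method
original = [3.0, 5.0, 7.0, 9.0]
imputed = [3.2, 4.8, 7.5, 8.9]
print(rmse(imputed, original), index_of_agreement_d2(imputed, original))
```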
S0950705113003894
The 2-dimension uncertain linguistic variables add a subjective evaluation on the reliability of the evaluation results given by decision makers, so they can better express fuzzy information. At the same time, the power average (PA) operator has the characteristic of capturing the correlations of the aggregated arguments. In this paper, we propose some power aggregation operators, including 2-dimension uncertain linguistic power generalized aggregation operator (2DULPGA) and 2-dimension uncertain linguistic power generalized weighted aggregation operator (2DULPGWA), and discuss some properties and special cases of them. Finally, with respect to the multiple attribute group decision making problems in which the attribute values take the form of the 2-dimension uncertain linguistic information, the method based on some power generalized aggregation operators is proposed, and two examples are given to verify the developed approach and to demonstrate its effectiveness.
2-Dimension uncertain linguistic power generalized weighted aggregation operator and its application in multiple attribute group decision making
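The power average (PA) operator the abstract builds on can be illustrated on crisp numbers: each argument is weighted by the support it receives from the other arguments, so outlying values are down-weighted. The support function used below is one common (assumed) choice; the paper's 2-dimension uncertain linguistic operators apply the same idea to linguistic variables rather than crisp numbers.

```python
import numpy as np

def power_average(a, support=None):
    """Yager's power average: PA(a) = sum_i (1 + T(a_i)) a_i / sum_i (1 + T(a_i)),
    with T(a_i) = sum_{j != i} Sup(a_i, a_j)."""
    a = np.asarray(a, float)
    if support is None:
        rng = np.ptp(a) or 1.0
        support = lambda x, y: 1.0 - abs(x - y) / rng   # assumed support function
    n = len(a)
    T = np.array([sum(support(a[i], a[j]) for j in range(n) if j != i) for i in range(n)])
    w = (1.0 + T) / np.sum(1.0 + T)
    return float(w @ a)

# A value far from the rest (e.g. an outlier assessment) gets less weight than in a plain mean
print(power_average([0.7, 0.72, 0.68, 0.2]), np.mean([0.7, 0.72, 0.68, 0.2]))
```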
S0950705113004012
In most data envelopment analysis (DEA) models, the best performers have the full efficient status denoted by unity (or 100), and, from experience, we know that usually plural decision making units (DMUs) have this efficient status. Discriminating between these efficient DMUs is an interesting subject, and a large number of methods have been proposed for fully ranking both efficient and inefficient DMUs. This paper demonstrates that the rank reversal phenomenon may occur in most DEA ranking methods; it also identifies some ranking methods that are free of this flaw. Numerical examples are provided to clearly illustrate the above mentioned phenomenon in some DEA ranking methods. In fact, several DEA ranking methods are surveyed with a focus on the rank preservation and rank reversal phenomena.
Survey on rank preservation and rank reversal in data envelopment analysis
S0950705114000185
The discovery, extraction and analysis of knowledge from data rely generally upon the use of unsupervised learning methods, in particular clustering approaches. Much recent research in clustering and data engineering has focused on the consideration of finite mixture models, which allow reasoning in the face of uncertainty and learning by example. The adoption of these models becomes a challenging task in the presence of outliers and in the case of high-dimensional data, which necessitates the deployment of feature selection techniques. In this paper we tackle simultaneously the problems of cluster validation (i.e. model selection), feature selection and outlier rejection when clustering positive data. The proposed statistical framework is based on the generalized inverted Dirichlet distribution that offers a more practical and flexible alternative to the inverted Dirichlet which has a very restrictive covariance structure. The learning of the parameters of the resulting model is based on the minimization of a message length objective incorporating prior knowledge. We use synthetic data and real data generated from challenging applications, namely visual scenes and objects clustering, to demonstrate the feasibility and advantages of the proposed method.
Robust simultaneous positive data clustering and unsupervised feature selection using generalized inverted Dirichlet mixture models
S0950705114000197
Web index recommendation systems are designed to help internet users with suggestions for finding relevant information. One way to develop such systems is using the multi-instance learning (MIL) approach: a generalization of the traditional supervised learning where each example is a labeled bag that is composed of unlabeled instances, and the task is to predict the labels of unseen bags. This paper proposes a multi-instance learning wrapper method using the Rocchio classifier to recommend web index pages. The wrapper implements a new way to relate the instances with the class labels of the bags. The proposed method has low computational cost and the experimental study on benchmark data sets shows that it performs better than the state-of-the-art methods for this problem.
A multi-instance learning wrapper based on the Rocchio classifier for web index recommendation
S095070511400032X
This paper aims to develop and compare several elicitation criteria for decision making with incomplete soft sets that are generated by restricted intersection. A one-time elicitation process is divided into two steps. Using a greedy idea, four criteria for the elicitation of objects are built, based on maximax, maximin, minimax regret, and a combination of expected choice values and elicitation times. The initially unknown values that cause the incompleteness are then prioritized together with the known information. Fast methods for computing possibly and necessarily optimal solutions before or during the elicitation process are devised. For the sizes of soft sets used in the simulation experiments, it is found statistically that the criterion based on the combination of expected choice value and expected elicitation times should be chosen in the first step of a one-time elicitation. The developed methods can be used for decision making with incomplete 0–1 information systems, which are generated by the conjunction of two experts’ incomplete 0–1 evaluation results. Whenever the available information is not enough for choosing a necessarily optimal solution, the elicitation algorithms can help elicit as few unknown values as possible until an optimal result is found. An elicitation system is built to show that our elicitation methods can potentially be embedded in recommender or decision support systems. The elicitation problems are proposed for decision making with operation-generated soft sets by extracting them from some practical problems. The concept of expected elicitation times of objects is defined and used for developing one type of elicitation strategy.
Elicitation criterions for restricted intersection of two incomplete soft sets
S0950705114000409
Simultaneous reductions in inventory of raw materials, work-in-process, and finished items have recently become a major focus in supply chain management. Vendor-managed inventory is a well-known practice in supply chain collaborations, in which the manufacturer manages inventory at the retailer and decides on the timing and quantity of replenishment. In this paper, an integrated vendor-managed inventory model is presented for a two-level supply chain structured as a single capacitated manufacturer at the first level and multiple retailers at the second level. The manufacturer produces different products whose demands are assumed to be decreasing functions of the retail prices. In this chain, both the manufacturer and the retailers contribute to determining their own decision variables in order to maximize their benefits. While previous research on this topic mainly included a single objective optimization model where the objective was either to minimize total supply chain costs or to maximize total supply chain benefits, in this research a fair profit contract is designed for the manufacturer and the retailers. The problem is first formulated into a bi-objective non-linear mathematical model and then the lexicographic max–min approach is utilized to obtain a fair non-dominated solution. Finally, different test problems are investigated in order to demonstrate the applicability of the proposed methodology and to evaluate the solution obtained.
Lexicographic max–min approach for an integrated vendor-managed inventory problem
S0950705114000422
Knowledge is nowadays considered as a significant source of performance improvement, but may be difficult to identify, structure, analyse and reuse properly. A possible source of knowledge is in the data and information stored in various modules of industrial information systems, like CMMS (Computerized Maintenance Management Systems) for maintenance. In that context, the main objective of this paper is to propose a framework for managing and generating knowledge from information on past experiences, in order to improve the decisions related to the maintenance activity. For that purpose, we suggest an original Experience Feedback process dedicated to maintenance, which capitalizes on past activities by (i) formalizing the domain knowledge and experiences using a visual knowledge representation formalism with logical foundation (Conceptual Graphs); (ii) extracting new knowledge thanks to association rules mining algorithms, using an innovative interactive approach; and (iii) interpreting and evaluating this new knowledge thanks to the reasoning operations of Conceptual Graphs. The suggested method is illustrated on a case study based on real data dealing with the maintenance of overhead cranes.
Generating knowledge in maintenance from Experience Feedback
S095070511400046X
In multiple attribute decision making (MADM), different attribute weights may generate different solutions, which means that attribute weights significantly influence solutions. When there is a lack of sufficient data, knowledge, and experience for a decision maker to generate attribute weights, the decision maker may expect to find the most satisfactory solution based on unknown attribute weights called a robust solution in this study. To generate such a solution, this paper proposes a robust evidential reasoning (ER) approach to compare alternatives by measuring their robustness with respect to attribute weights in the ER context. Alternatives that can become the best with the support of one or more sets of attribute weights are firstly identified. The measurement of robustness of each identified alternative from two perspectives, i.e., the optimal situation of the alternative and the insensitivity of the alternative to a variation in attribute weights is then presented. The procedure of the proposed approach is described based on the combination of such identification of alternatives and the measurement of their robustness. A problem of car performance assessment is investigated to show that the proposed approach can effectively produce a robust solution to a MADM problem with unknown attribute weights.
Robust evidential reasoning approach with unknown attribute weights
S0950705114000471
Both support vector machine (SVM) and twin support vector machine (TWSVM) are powerful classification tools. However, in contrast to the many SVM-based feature selection methods, TWSVM has so far had no corresponding method, due to its different mechanism. In this paper, we propose a feature selection method based on TWSVM, called FTSVM. It is interesting because of the advantages of TWSVM in many cases. Our FTSVM is quite different from the SVM-based feature selection methods. In fact, linear SVM constructs a single separating hyperplane, which corresponds to a single weight for each feature, whereas linear TWSVM constructs two fitting hyperplanes, which correspond to two weights for each feature. In our linear FTSVM, in order to link these two fitting hyperplanes, a feature selection matrix is introduced. Thus, feature selection becomes the problem of finding an optimal matrix, which leads to solving a multi-objective mixed-integer programming problem by a greedy algorithm. In addition, the linear FTSVM has been extended to the nonlinear case. Furthermore, a feature ranking strategy based on FTSVM is also suggested. The experimental results on several publicly available benchmark datasets indicate that our FTSVM not only gives good feature selection in both linear and nonlinear cases but also improves the performance of TWSVM efficiently.
A novel feature selection method for twin support vector machine
S0950705114000549
With the rapid popularization of video recording devices, more multimedia content is available to the public. However, current video search engines rely on textual data such as video titles, annotations, and text around the video. Video recording devices such as cameras, smartphones and car blackboxes are nowadays equipped with GPS sensors and the ability to capture videos with spatiotemporal information such as time, location, and camera direction. We call such videos georeferenced videos. This paper proposes an efficient spatial indexing method, called GeoTree, which facilitates rapid searching of georeferenced videos. In particular, we propose a new data structure, called MBTR (Minimum Bounding Tilted Rectangle) to efficiently store the areas of moving scenes in the tree. We also propose algorithms for building MBTRs from georeferenced videos and algorithms for efficiently processing point and range queries on GeoTree. The results of experiments conducted on real georeferenced video data show that, compared to previous indexing methods for georeferenced video search, GeoTree substantially reduces index size and also improves search speed for georeferenced video data. An online demo of the system is available at “http://dm.postech.ac.kr/geosearch”.
GeoTree: Using spatial information for georeferenced video search
S0950705114000719
A new approach to rule-base evidential reasoning based on the synthesis of fuzzy logic, Atanassov's intuitionistic fuzzy sets theory and the Dempster-Shafer theory of evidence is proposed. It is shown that the direct use of intuitionistic fuzzy values and the classical operations on them may provide counter-intuitive results. Therefore, an interpretation of intuitionistic fuzzy values in the framework of Dempster-Shafer theory is proposed and used in the evidential reasoning. The merits of the proposed approach are illustrated with the use of developed expert systems for diagnostics of type 2 diabetes. Using real-world examples, it is shown that such an approach provides reasonable and intuitively obvious results when the classical method of rule-base evidential reasoning cannot produce any reasonable results.
A new approach to the rule-base evidential reasoning in the intuitionistic fuzzy setting
S0950705114000872
Low back pain affects a large proportion of the adult population at some point in their lives and has a major economic and social impact. To soften this impact, one possible solution is to make use of Information and Communication Technologies. Recommender systems, which exploit past behaviors and user similarities to predict possible user needs, have already been introduced in several health fields. In this paper, we present TPLUFIB-WEB, a fuzzy linguistic Web system that uses a recommender system to provide personalized exercises to patients with low back pain problems and to offer recommendations for their prevention. This system may be useful to reduce the economic impact of low back pain, help professionals to assist patients, and inform users on low back pain prevention measures. TPLUFIB-WEB satisfies the Web quality standards proposed by the Health On the Net Foundation (HON), Official College of Physicians of Barcelona, and Health Quality Agency of the Andalusian Regional Government, endorsing the health information provided and warranting the trust of users.
TPLUFIB-WEB: A fuzzy linguistic Web system to help in the treatment of low back pain problems
S0950705114000896
One of the key issues of knowledge discovery and data mining is knowledge reduction. Attribute reduction of formal contexts based on granules and the dominance relation is first reviewed in this paper. Relations between granular reducts and dominance reducts are investigated with the aim of establishing a bridge between the two reduction approaches. We obtain meaningful results showing that granule-based and dominance-relation-based attribute reducts and attribute characteristics are identical. Utilizing dominance reducts and attribute characteristics, we can obtain all granular reducts and attribute characteristics by the proposed approach. In addition, we establish relations between dominance classes and irreducible elements, and present some judgment theorems with respect to the irreducible elements.
Relations between granular reduct and dominance reduct in formal contexts
S0950705114001117
Feature selection has been attracting increasing attention in recent years for its advantages in improving the predictive efficiency and reducing the cost of feature acquisition. In this paper, we regard feature selection as an efficiency evaluation process with multiple evaluation indices, and propose a novel feature selection framework based on Data Envelopment Analysis (DEA). The most significant advantages of this framework are that it can make a trade-off among several feature properties or evaluation criteria and evaluate features from a perspective of “efficient frontier” without parameter setting. We then propose a simple feature selection method based on the framework to effectively search “efficient” features with high class-relevance and low conditional independence. Super-efficiency DEA is employed in our method to fully rank features according to their efficiency scores. Experimental results for twelve well-known datasets indicate that proposed method is effective and outperforms several representative feature selection methods in most cases. The results also show the feasibility of proposed DEA-based feature selection framework.
Feature selection using data envelopment analysis
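A minimal sketch of the DEA building block behind the approach above: the input-oriented CCR efficiency of one decision making unit, solved as a linear program. Treating each feature as a DMU with cost-like inputs and relevance-like outputs is the general idea the abstract describes; the super-efficiency variant used to fully rank features (which excludes the evaluated DMU from its own reference set) is not shown here, and the toy inputs/outputs below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X_in, Y_out, o):
    """Input-oriented CCR efficiency of DMU o (envelopment form).

    X_in: (n_dmus, n_inputs), Y_out: (n_dmus, n_outputs).
    Returns theta in (0, 1]; theta = 1 means DMU o lies on the efficient frontier.
    """
    n, m = X_in.shape
    s = Y_out.shape[1]
    c = np.concatenate([[1.0], np.zeros(n)])            # minimize theta; vars = [theta, lambdas]
    A_in = np.hstack([-X_in[o][:, None], X_in.T])       # sum_j lam_j x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y_out.T])     # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y_out[o]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return float(res.fun)

# Toy usage: each "DMU" could be a feature with a cost-like input and a relevance-like output
X_in = np.array([[2.0], [3.0], [4.0]])     # e.g. redundancy / acquisition cost
Y_out = np.array([[4.0], [5.0], [4.5]])    # e.g. class relevance
print([round(ccr_efficiency(X_in, Y_out, o), 3) for o in range(3)])
```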
S0950705114001130
Based on the concepts of conlitron and multiconlitron, we propose a growing construction technique for improving the performance of piecewise linear classifiers on two-class problems. This growing technique consists of two basic operations: SQUEEZE and INFLATE, which can produce relatively reliable linear boundaries. In the convexly separable case, the growing process, forming a growing support conlitron algorithm (GSCA), starts with an initial conlitron and uses SQUEEZE to train a new conlitron, moving its classification boundary closer to the interior convex region and fitting the data distribution better statistically. In the commonly separable case, the growing process, forming a growing support multiconlitron algorithm (GSMA), starts with an initial multiconlitron and uses INFLATE and SQUEEZE to train a new multiconlitron, making its classification boundary adjusted to improve the generalization ability. Experimental evaluation shows that the growing technique can simplify the structure of a conlitron/multiconlitron effectively by reducing the number of linear functions, largely keeping and even greatly improving the level of classification performances. Therefore, it would come to play an important role in the subsequent development of piecewise linear learning, with the main goal to improve piecewise linear classifiers in a general framework.
Growing construction of conlitron and multiconlitron
S095070511400118X
This study analyzed the “Cost efficiency” and “Revenue efficiency” of 207 certified public accountant firms in Taiwan by using the additive efficiency decomposition DEA approach. Furthermore, this study applied the Tobit regression to explore the relationship between CPA firms and intellectual capital (IC). The study found that the Big 4 and CPA firms that practiced auditing in China were relatively efficient in both cost and revenue. In addition, this research discovered that CPA firms relying mainly on auditing are more efficient in creating revenue and utilizing costs. Furthermore, the Tobit regression was employed to evaluate whether IC affected CPA firms’ cost efficiency and revenue efficiency. This study found that IC played an important role in performance representation both in cost efficiency and revenue efficiency. Therefore, this study suggests that CPA firms should manage IC efficiently to enhance CPA firms’ competitive abilities.
Does intellectual capital matter? Assessing the performance of CPA firms based on additive efficiency decomposition DEA
S0950705114001385
We propose a fuzzy ontology for human activity representation, which allows us to model and reason about vague, incomplete, and uncertain knowledge. Some relevant subdomains found to be missing in previously proposed ontologies for this domain were modelled as well. The resulting fuzzy OWL 2 ontology is able to model uncertain knowledge and represent temporal relationships between activities using an underlying fuzzy state machine representation. We provide a proof of concept of the approach in work scenarios such as the office domain, and also conduct experiments to emphasize the benefits of our approach with respect to crisp ontologies. As a result, we demonstrate that the inclusion of fuzzy concepts and relations in the ontology provides benefits during the recognition process with respect to crisp approaches.
A fuzzy ontology for semantic modelling and recognition of human behaviour
S095070511400149X
We study the problem of clustering uncertain objects whose locations are uncertain and described by probability density functions (pdf). We analyze existing pruning algorithms and experimentally show that there exists a new bottleneck in the performance due to the overhead of pruning candidate clusters for assignment of each uncertain object in each iteration. In this article, we will show that by considering squared Euclidean distance, UK-means (without pruning techniques) is reduced to K-means and performs much faster than pruning algorithms, however, with some discrepancies in the clustering results due to using different distance functions. Thus, we propose Approximate UK-means to heuristically identify objects of boundary cases and re-assign them to better clusters. Three models for the representation of cluster representative (certain model, uncertain model and heuristic model) are proposed to calculate expected squared Euclidean distance between objects and cluster representatives in this paper. Our experimental results show that on average the execution time of Approximate UK-means is only 25% more than K-means and our approach reduces the discrepancies of K-means’ clustering results by up to 70%.
A heuristic approach to effective and efficient clustering on uncertain objects
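The reduction mentioned above rests on a simple identity for the expected squared Euclidean distance: E||X − c||² = ||E[X] − c||² + trace(Cov(X)), where the trace term does not depend on the cluster representative c. The sketch below checks the identity on sampled data; it illustrates the observation rather than the Approximate UK-means algorithm itself.

```python
import numpy as np

def expected_sq_dist(samples, c):
    """Expected squared Euclidean distance between an uncertain object
    (represented by samples from its pdf) and a cluster representative c."""
    return float(np.mean(np.sum((samples - c) ** 2, axis=1)))

def expected_sq_dist_closed_form(samples, c):
    """Same quantity via E||X - c||^2 = ||E[X] - c||^2 + trace(Cov(X)).

    Since trace(Cov(X)) is constant per object, assigning by expected squared
    distance is equivalent to assigning the object's mean with ordinary K-means.
    """
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False, bias=True)
    return float(np.sum((mu - c) ** 2) + np.trace(np.atleast_2d(cov)))

rng = np.random.default_rng(1)
obj = rng.normal([1.0, 2.0], 0.3, size=(500, 2))   # samples of one uncertain object
c = np.array([0.0, 0.0])
print(expected_sq_dist(obj, c), expected_sq_dist_closed_form(obj, c))
```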
S0950705114001506
In this paper, we propose a novel supervised manifold learning approach, supervised locality discriminant manifold learning (SLDML), for head pose estimation. Traditional manifold learning methods focus on preserving only the intra-class geometric properties of the manifold embedded in the high-dimensional ambient space, so they cannot fully utilize the underlying discriminative knowledge of the data. The proposed SLDML aims to explore both geometric structure and discriminant information of the data, and yields a smooth and discriminative low-dimensional embedding by adding the local discriminant terms in the optimization objectives of manifold learning. Moreover, for efficiently handling out-of-sample extension and learning with the local consistency, we decompose the manifold learning as a two-step approach. We incorporate the manifold learning and the regression with a learned discriminant manifold-based projection function obtained by discriminatively Laplacian regularized least squares. The SLDML provides both the low-dimensional embedding and projection function with better intra-class compactness and inter-class separability, therefore preserves the local geometric structures more effectively. Meanwhile, the SLDML is supervised by both biased distance and continuous head pose angle information when constructing the graph, embedding the graph and learning the projection function. Our experiments demonstrate the superiority of the proposed SLDML over several current state-of-art approaches for head pose estimation on the publicly available FacePix dataset.
Supervised locality discriminant manifold learning for head pose estimation
S0950705114001749
Schema Matching, i.e. the process of discovering semantic correspondences between concepts adopted in different data source schemas, has been a key topic in Database and Artificial Intelligence research areas for many years. In the past, it was largely investigated especially for classical database models (e.g., E/R schemas, relational databases, etc.). However, in the latest years, the widespread adoption of XML in the most disparate application fields pushed a growing number of researchers to design XML-specific Schema Matching approaches, called XML Matchers, aiming at finding semantic matchings between concepts defined in DTDs and XSDs. XML Matchers do not just take well-known techniques originally designed for other data models and apply them on DTDs/XSDs, but they exploit specific XML features (e.g., the hierarchical structure of a DTD/XSD) to improve the performance of the Schema Matching process. The design of XML Matchers is currently a well-established research area. The main goal of this paper is to provide a detailed description and classification of XML Matchers. We first describe to what extent the specificities of DTDs/XSDs impact on the Schema Matching task. Then we introduce a template, called XML Matcher Template, that describes the main components of an XML Matcher, their role and behavior. We illustrate how each of these components has been implemented in some popular XML Matchers. We consider our XML Matcher Template as the baseline for objectively comparing approaches that, at first glance, might appear as unrelated. The introduction of this template can be useful in the design of future XML Matchers. Finally, we analyze commercial tools implementing XML Matchers and introduce two challenging issues strictly related to this topic, namely XML source clustering and uncertainty management in XML Matchers.
XML Matchers: Approaches and challenges
S0950705114001798
One of the most fundamental research questions in the field of human–machine interaction is how to enable dialogue systems to capture the meaning of spontaneously produced linguistic inputs without explicit syntactic expectations. This paper introduces a cognitively-inspired representational model intended to address this research question. To the extent that this model is cognitively-inspired, it integrates insights from behavioral and neuroimaging studies on working memory operations and language-impaired patients (i.e., Broca’s aphasics). The level of detail contained in the specification of the model is sufficient for a computational implementation, while the level of abstraction is sufficient to enable generalization of the model over different interaction domains. Finally, the paper reports on a domain-independent framework for end-user programming of adaptive dialogue management modules.
Cognitively-inspired representational approach to meaning in machine dialogue
S0950705114001804
This paper presents the development of a knowledge-based system (KBS) prototype able to design natural gas cogeneration plants, demonstrating new features for this field. The design of such power plants represents a synthesis problem, subject to thermodynamic constraints that include the location and sizing of components. The project was developed in partnership with the major Brazilian gas and oil company, and involved interaction with an external consultant as well as an interdisciplinary team. The paper focuses on validation and lessons learned, concentrating on important aspects such as the generation of alternative configuration schemes, breadth of each scheme description created by the system, and its module to support economic feasibility analysis.
Development of a knowledge-based system for cogeneration plant design: Verification, validation and lessons learned
S0950705114001816
Big Data analytics is considered an imperative aspect to be further improved in order to increase the operating margin of both public and private enterprises, and represents the next frontier for their innovation, competition, and productivity. Big Data are typically produced in different sectors of the above organizations, often geographically distributed throughout the world, and are characterized by a large size and variety. Therefore, there is a strong need for platforms handling larger and larger amounts of data in contexts characterized by complex event processing systems and multiple heterogeneous sources, dealing with the various issues related to efficiently disseminating, collecting and analyzing them in a fully distributed way. In such a scenario, this work proposes a way to overcome two fundamental issues: data heterogeneity and advanced processing capabilities. We present a knowledge-based solution for Big Data analytics, which consists in applying automatic schema mapping to cope with data heterogeneity, as well as ontology extraction and semantic inference to support innovative processing. Such a solution, based on the publish/subscribe paradigm, has been evaluated within the context of a simple experimental proof-of-concept in order to determine its performance and effectiveness.
A knowledge-based platform for Big Data analytics based on publish/subscribe services and stream processing
S0950705114001828
How to improve the global search ability without significantly impairing the convergence speed is still a big challenge for most meta-heuristic optimization algorithms. In this paper, a concept for the optimization of continuous nonlinear functions using the crisscross optimization algorithm is introduced. The crisscross optimization algorithm is a new search algorithm inspired by the Confucian doctrine of the golden mean and the crossover operation in genetic algorithms, which has distinct advantages in solution accuracy as well as convergence rate compared to other complex optimization algorithms. The procedures and related concepts of the proposed algorithm are presented. On this basis, we discuss the behavior of the main search operators, horizontal crossover and vertical crossover. It is precisely the combination of the two that markedly improves the convergence speed and solution accuracy when addressing complex optimization problems. Twelve benchmark functions, including unimodal, multimodal, shifted and rotated functions, are used to test the feasibility and efficiency of the proposed algorithm. The experimental results show that the crisscross optimization algorithm has excellent performance on most of the test functions compared to other heuristic algorithms. At the end, the crisscross optimization algorithm is successfully applied to the optimization of a large-scale economic dispatch problem in an electric power system. It is concluded that the crisscross optimization algorithm is not only robust in solving continuous nonlinear functions, but also suitable for addressing complex real-world engineering optimization problems.
Crisscross optimization algorithm and its application
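A rough sketch of the two search operators named above, written from commonly published descriptions of crisscross optimization; the exact update constants, pairing scheme and greedy retention rule should be treated as assumptions rather than the paper's definitive formulation.

```python
import numpy as np

def horizontal_crossover(Xa, Xb, rng):
    """Arithmetic crossover between two paired individuals, applied per dimension."""
    r1, r2 = rng.random(Xa.shape), rng.random(Xb.shape)
    c1, c2 = rng.uniform(-1, 1, Xa.shape), rng.uniform(-1, 1, Xb.shape)
    child_a = r1 * Xa + (1 - r1) * Xb + c1 * (Xa - Xb)
    child_b = r2 * Xb + (1 - r2) * Xa + c2 * (Xb - Xa)
    return child_a, child_b

def vertical_crossover(X, rng):
    """Arithmetic crossover between two randomly chosen dimensions of one individual,
    intended to help dimensions stuck in local optima escape."""
    d1, d2 = rng.choice(X.size, size=2, replace=False)
    child = X.copy()
    r = rng.random()
    child[d1] = r * X[d1] + (1 - r) * X[d2]
    return child

def greedy_keep(parent, child, f):
    """Keep an offspring only if it improves the objective (minimization)."""
    return child if f(child) < f(parent) else parent

# Toy usage on the sphere function
rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x ** 2))
a, b = rng.uniform(-5, 5, 4), rng.uniform(-5, 5, 4)
ca, cb = horizontal_crossover(a, b, rng)
a, b = greedy_keep(a, ca, f), greedy_keep(b, cb, f)
print(f(a), f(b), f(greedy_keep(a, vertical_crossover(a, rng), f)))
```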
S0950705114001993
This paper presents a novel method to analyse the tonal behaviour of a music piece. The method is based on the development of a novel probability model of the predominant key using the well-known Pitch Class Profile (PCP or chroma) descriptor. This feature represents the importance of each note of the chromatic scale within the spectral content of the audio signal. Making use of the PCP feature, a main novel contribution is the development of new profile models for the characterization of major and minor keys. Most of the key profile models found in the literature assign a single value to each pitch class for a particular key. This value represents the salience of that pitch class in a specific key. The new key profiles described in this paper are based on the identification of specific probability density functions (PDFs), defined after analysing the presence of the pitch classes of the chromatic scale in the different keys.
A probability model for key analysis in music
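For contrast with the PDF-based profiles proposed above, the single-value key profile approach from the literature can be sketched as follows: correlate a 12-bin PCP/chroma vector against all 24 rotations of a major and a minor template. The Krumhansl-Kessler profile values used here are as commonly reported and the chroma vector is an idealized toy input; this is the classical baseline, not the paper's probability model.

```python
import numpy as np

# Krumhansl-Kessler key profiles (values as commonly reported in the literature)
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(chroma):
    """Correlate a 12-bin PCP/chroma vector with all 24 rotated key profiles."""
    best = None
    for mode, profile in (("major", MAJOR), ("minor", MINOR)):
        for tonic in range(12):
            r = np.corrcoef(chroma, np.roll(profile, tonic))[0, 1]
            if best is None or r > best[0]:
                best = (r, NOTES[tonic], mode)
    return best

# Toy usage: an idealized C-major chroma (energy only on the C-major scale degrees)
chroma = np.array([1, 0, .6, 0, .8, .6, 0, 1, 0, .6, 0, .4], float)
print(estimate_key(chroma))
```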
S0950705114002263
Margin distribution is acknowledged as an important factor for improving the generalization performance of classifiers. In this paper, we propose a novel ensemble learning algorithm named Double Rotation Margin Forest (DRMF), that aims to improve the margin distribution of the combined system over the training set. We utilise random rotation to produce diverse base classifiers, and optimize the margin distribution to exploit the diversity for producing an optimal ensemble. We demonstrate that diverse base classifiers are beneficial in deriving large-margin ensembles, and that therefore our proposed technique will lead to good generalization performance. We examine our method on an extensive set of benchmark classification tasks. The experimental results confirm that DRMF outperforms other classical ensemble algorithms such as Bagging, AdaBoostM1 and Rotation Forest. The success of DRMF is explained from the viewpoints of margin distribution and diversity.
Exploiting diversity for optimizing margin distribution in ensemble learning
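The quantity being optimized above is the voting margin of each training example; the sketch below computes it for a label-voting ensemble so the margin distribution (mean margin, fraction of non-positive margins, and so on) can be inspected. This is the standard definition, not the DRMF optimization itself.

```python
import numpy as np

def voting_margins(votes, y):
    """Margin of each example under a voting ensemble.

    votes: (n_examples, n_classifiers) array of predicted labels.
    margin_i = (votes for the true class - most votes for any wrong class) / n_classifiers,
    so margins lie in [-1, 1]; larger values mean more confident correct decisions.
    """
    n, T = votes.shape
    margins = np.empty(n)
    for i in range(n):
        labels, counts = np.unique(votes[i], return_counts=True)
        tally = dict(zip(labels.tolist(), counts.tolist()))
        correct = tally.get(y[i], 0)
        wrong = max([c for l, c in tally.items() if l != y[i]], default=0)
        margins[i] = (correct - wrong) / T
    return margins

# Toy usage: 3 examples, 5 base classifiers
votes = np.array([[0, 0, 0, 1, 0],
                  [1, 0, 1, 1, 0],
                  [2, 2, 1, 1, 1]])
y = np.array([0, 1, 2])
m = voting_margins(votes, y)
print(m, m.mean(), (m <= 0).mean())   # a quick summary of the margin distribution
```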
S0950705114002299
The theory of belief functions can be extended to interval sets by defining interval-valued belief structures. The Dempster–Shafer (D–S) theory of evidence has been extended to combine interval-valued belief structures for decades. Although several combination approaches have already been proposed by previous researchers, this problem has not been fully resolved so far. A novel combination of interval-valued belief structures is developed after analyzing existing irrational or suboptimal approaches. The novel combination approach is modeled based on intuitionistic fuzzy sets, rather than nonlinear programming models, which are computationally complicated. Numerical examples are implemented to illustrate the performance of the proposed novel approach.
Combination of interval-valued belief structures based on intuitionistic fuzzy set
S0950705114002305
This article is devoted to the issue of forecasting exchange rates. The objective of the conducted research is to develop a predictive model with the use of an innovative methodology – fuzzy logic theory – and to evaluate its effectiveness in times of prosperity (years 2005–2007) and during the financial crisis (years 2009–2011). The model is based on sets of rules written by the author in the form of IF-THEN, where expert knowledge is stored. This model is the result of ten years of the author’s research on this issue. Empirically, this paper employs three currency pairs as experimental datasets: JPY/USD, GBP/USD and CHF/USD. From the model verification, it is demonstrated that refined processes are effective in improving the forecasting of exchange rate movements. The author’s created model is characterised by high efficiency. These studies are among the world’s first attempts to combine fundamental analysis with fuzzy logic to predict exchange rates.
A fuzzy logic model for forecasting exchange rates
S0950705114002330
This paper explores what kinds of management actions are needed by businesses to enhance their innovation capabilities. The first step is to clarify the differences between information and knowledge. To do this, the author introduces a model that can explain an individual’s mental processes in knowledge acquisition and creation. With this model, it can be explained in a comprehensive way how “explicit” knowledge received as information is turned into individual knowledge; how “tacit” knowledge can be successfully transferred between workers; and how new knowledge can be created by individuals. The model assumes that knowledge workers can be classified into two categories, i.e., Type-1 and Type-2. A Type-1 knowledge worker is one whose knowledge acquisition depends almost exclusively on learning. A Type-2 worker is one who has a substantial amount of self-created knowledge in addition to learned knowledge. It is quite common to find Type-1 workers, but there are not that many Type-2 workers. Successful business firms are usually led by Type-2 workers, who are more innovative. In order to enhance the innovation capabilities of business firms, rather than waiting for the fortuitous advent of Type-2 workers, management should make an effort to transform existing Type-1 workers into Type-2 workers. The author makes the assertion that such a transformation is possible by putting Type-1 knowledge workers into situations where their “insight for knowledge creation” is constantly stimulated. Constant stimulation is made possible by using an IT system based on the Timed-PDCA concept that was proposed by the author in his previous papers. When this system is deployed seriously by management, it becomes possible to facilitate workers’ breakthrough efforts and to promote close collaboration among workers through information sharing and visualization.
The creation and management of organizational knowledge
S0950705114002366
The Dempster–Shafer theory of evidence (D–S theory) has been widely used in many information fusion systems. However, the determination of basic probability assignment (BPA) remains an open problem that can considerably influence final results. In this paper, a new method to determine BPA using core samples is proposed. Unlike most existing methods, which determine BPA in a heuristic way, the proposed method is data-driven. It uses training data to generate core samples for each attribute model. Then, core samples that are helpful in generating BPAs are selected. Calculation of the relevance ratio based on convex hulls is integrated into the core sample selection as a new feature of the proposed method. BPAs are assigned based on the distance between the test data and the selected core samples. Finally, BPAs are combined to obtain a final BPA using Dempster’s combination rule. In this paper, compound hypotheses are taken into consideration. The BPA generated by the proposed method can be combined with other sources of information to reduce the uncertainty. Empirical trials on benchmark databases show the efficiency of the proposed method.
A new method to determine basic probability assignment using core samples
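The abstract above assigns BPAs from distances between a test sample and selected core samples. A hedged sketch of one plausible distance-to-mass conversion is shown below (normalized exponentials of negative distances over singleton hypotheses); it is not the authors' exact formula, the core samples are arbitrary toy points, and compound hypotheses are omitted.

```python
import numpy as np

def bpa_from_core_samples(x, core_samples):
    """Assign a simple BPA from distances between a test sample and per-class core samples.

    core_samples: dict mapping class label -> (n_core, n_features) array of core samples.
    Returns a dict mapping singleton hypotheses to masses (closer core samples get more mass).
    """
    scores = {}
    for label, cores in core_samples.items():
        d = np.linalg.norm(cores - x, axis=1).min()   # distance to the nearest core sample
        scores[label] = np.exp(-d)                    # turn the distance into a similarity score
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

cores = {"A": np.array([[0.0, 0.0], [0.2, 0.1]]),
         "B": np.array([[1.0, 1.0]])}
print(bpa_from_core_samples(np.array([0.1, 0.2]), cores))
```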
S0950705114002378
Market segmentation comprises a variety of measurement methodologies that are used to support management, marketing and promotional policies in tourism destinations. This study applies ClustOfVar, a relatively recent algorithm for cluster analysis of mixed variables. The technique finds groups of variables by using a homogeneity criterion based on the sum of correlation ratios for qualitative variables and squared correlations for quantitative variables. Principal components from each cluster of variables are then extracted in order to segment cruise passengers. CART analysis is finally used to identify the variables that drove the formation of the clusters. All the analysis is based on an official survey of tourists who disembarked in Uruguayan ports. The analysis identified five clusters, both for variables and for cruise passengers. Findings highlight the importance of enjoying contact with local people for the economic impact, as well as the important role of age- and gender-related variables. Managerial implications are also discussed.
ClustOfVar and the segmentation of cruise passengers from mixed data: Some managerial implications
S0950705114002792
Quantitative attribute reduction exhibits broader applicability but greater complexity when compared to qualitative reduction. According to the two-category decision-theoretic rough set model, this paper mainly investigates quantitative reducts and their hierarchies (with qualitative reducts) from a regional perspective. (1) An improved type of classification regions is proposed, and its preservation reduct (CRP-Reduct) is studied. (2) Reduction targets and preservation properties of set regions are analyzed, and the set-region preservation reduct (SRP-Reduct) is studied. (3) Separability of set regions and rule consistency is verified, and the quantitative and qualitative double-preservation reduct (DP-Reduct) is established. (4) Hierarchies of CRP-Reduct, SRP-Reduct, and DP-Reduct are explored with two qualitative reducts: the Pawlak-Reduct and the knowledge-preservation reduct (KP-Reduct). (5) Finally, verification experiments are provided. CRP-Reduct, SRP-Reduct, and DP-Reduct expand Pawlak-Reduct layer by layer and exhibit quantitative applicability, and the experimental results indicate their effectiveness and hierarchies with respect to Pawlak-Reduct and KP-Reduct.
Region-based quantitative and hierarchical attribute reduction in the two-category decision theoretic rough set model
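The regional view used in the abstract above rests on the three probabilistic regions of a two-category decision-theoretic rough set model. A minimal sketch of how those regions follow from conditional probabilities and the (alpha, beta) thresholds is given below; the probabilities, thresholds and class labels are illustrative, not values from the paper.

```python
def dtrs_regions(prob, alpha=0.7, beta=0.3):
    """Split equivalence classes into positive, boundary and negative regions.

    prob: dict mapping an equivalence-class id to Pr(X | [x]), the conditional
    probability of the target category given that class.
    """
    pos = {c for c, p in prob.items() if p >= alpha}
    neg = {c for c, p in prob.items() if p <= beta}
    bnd = set(prob) - pos - neg
    return pos, bnd, neg

# Illustrative conditional probabilities for five equivalence classes.
prob = {"E1": 0.92, "E2": 0.75, "E3": 0.55, "E4": 0.28, "E5": 0.10}
print(dtrs_regions(prob))
```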
S0950705114002834
Innovation is a key resource for the well-being of national economies and international competitive advantages. First, this study develops a network data envelopment analysis (DEA) production process to evaluate the R&D efficiency and economic efficiency of the national innovation system (NIS) in 30 countries. Our findings show that the R&D efficiencies of the NIS are better than the economic efficiencies. Second, this study examines the effect of intellectual capital (IC) on the NIS performance through truncated regression. Our findings indicate that IC does play an important role in affecting the NIS performance. Finally, this study presents a managerial decision-making matrix and makes suggestions through a performance improvement strategy map to help government and managers improve the NIS performance.
Intellectual capital and national innovation systems performance
S0950705114002846
Thesauri are used in many Information Retrieval (IR) applications such as data integration, data warehousing, semantic query processing and schema matching. Schema matching, or mapping, is one of the most important basic steps in data integration. It is the process of identifying the semantic correspondence or equivalence between two or more schemas. Given that many thesauri exist for the same knowledge domain, the quality of, and the change in, schema matching results when using different thesauri in a specific knowledge field are not predictable. In this research, we studied the effect of thesaurus size on schema matching quality by conducting many experiments using different thesauri. In addition, a new method for calculating the similarity between vectors extracted from a thesaurus database is proposed. The method is based on the ratio of individual shared elements to the elements in the compound set of the vectors. Moreover, we explain in detail the efficient algorithm used to search the thesaurus database. After describing the experiments, results that show enhancement in the average similarity are presented. The completeness, effectiveness, and their harmonic mean measures were calculated to quantify the quality of matching. Experiments on two different thesauri show positive results, with an average Precision of 35% and a lower average Recall. The effect of thesaurus size on the quality of matching was statistically insignificant; however, other factors affecting the output and the exact value of change are still the focus of our future study.
Effect of thesaurus size on schema matching quality
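The similarity measure described in the abstract above is the ratio of shared elements to the elements in the compound (union) set of the two vectors. A hedged reading of that as a Jaccard-style set ratio is sketched below; the term vectors are invented for illustration.

```python
def shared_element_similarity(vec_a, vec_b):
    """Ratio of shared elements to the compound (union) set of two term vectors."""
    a, b = set(vec_a), set(vec_b)
    if not a | b:
        return 0.0
    return len(a & b) / len(a | b)

# Illustrative term vectors extracted from a thesaurus for two schema elements.
print(shared_element_similarity(["invoice", "bill", "receipt"],
                                ["bill", "receipt", "statement"]))  # 0.5
```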
S0950705114002913
In the construction industry, evaluating the financial status of a contractor is a challenging task due to the myriad of input data as well as the complexity of the working environment. This article presents a novel hybrid intelligent approach named the Evolutionary Least Squares Support Vector Machine Inference Model for Predicting Contractor Default Status (ELSIM-PCDS). The proposed ELSIM-PCDS is established by hybridizing the Synthetic Minority Over-sampling Technique (SMOTE), Least Squares Support Vector Machine (LS-SVM), and Differential Evolution (DE) algorithms. In this new paradigm, SMOTE is specifically used to deal with the imbalanced classification problem. The LS-SVM acts as a supervised learning technique for learning the classification boundary that separates the default and non-default contractors. Additionally, the DE algorithm automatically searches for the optimal parameters of the classification model. Experimental results have demonstrated that the classification performance of the ELSIM-PCDS is better than that of other benchmark methods. Therefore, the proposed hybrid approach is a promising alternative for predicting contractor default status.
A novel hybrid intelligent approach for contractor default status prediction
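A minimal sketch of the imbalanced-classification part of the workflow described above: SMOTE oversampling of the training set followed by an RBF support vector classifier. This is not the ELSIM-PCDS model itself; scikit-learn's SVC stands in for the LS-SVM, the DE-based parameter search is omitted, and the data are synthetic.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic imbalanced data standing in for non-default / default contractors.
X, y = make_classification(n_samples=600, n_features=10, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training set, then fit the classifier.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))
```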
S0950705114003025
Currently, power distribution companies have several problems that are related to energy losses. For example, the energy used might not be billed due to illegal manipulation or a breakdown in the customer’s measurement equipment. These types of losses are called non-technical losses (NTLs), and these losses are usually greater than the losses that are due to the distribution infrastructure (technical losses). Traditionally, a large number of studies have used data mining to detect NTLs, but to the best of our knowledge, there are no studies that involve the use of a Knowledge-Based System (KBS) that is created based on the knowledge and expertise of the inspectors. In the present study, a KBS was built that is based on the knowledge and expertise of the inspectors and that uses text mining, neural networks, and statistical techniques for the detection of NTLs. Text mining, neural networks, and statistical techniques were used to extract information from samples, and this information was translated into rules, which were joined to the rules that were generated by the knowledge of the inspectors. This system was tested with real samples that were extracted from Endesa databases. Endesa is one of the most important distribution companies in Spain, and it plays an important role in international markets in both Europe and South America, having more than 73 million customers.
Improving Knowledge-Based Systems with statistical techniques, text mining, and neural networks for non-technical loss detection
S0950705114003062
The aim of the present work is to take a step towards the design of specific algorithms and methods for automatic music generation. A novel probabilistic model for the characterization of music learned from music samples is designed. This model makes use of automatically extracted music parameters, namely tempo, time signature, rhythmic patterns and pitch contours, to characterize music. Specifically, learned rhythmic patterns and pitch contours are employed to characterize music styles. Then, a novel autonomous music composer that generates new melodies using the developed model is presented. The methods proposed in this paper take into consideration different aspects of the traditional way in which music is composed by humans, such as harmony evolution and structure repetitions, and apply them together with the probabilistic reuse of rhythm patterns and pitch contours learned beforehand to compose music pieces.
Automatic melody composition based on a probabilistic model of music style and harmonic rules
S0950705114003232
Automatic image annotation has been an active research topic in recent years due to its potential impact on both image understanding and semantic based image retrieval. In this paper, we present a novel two-stage refining image annotation scheme based on the Gaussian mixture model (GMM) and the random walk method. To begin with, GMM is applied to estimate the posterior probabilities of each annotation keyword for the image, during which a semi-supervised learning method, i.e. the transductive support vector machine (TSVM), is employed to enhance the quality of the training data. Next, a label similarity graph is constructed by a weighted linear combination of label similarity and visual similarity of images associated with the corresponding labels. In this way, it can seamlessly integrate the information from image low-level visual features and high-level semantic concepts. A random walk process over the constructed label graph is then implemented to further mine the correlation of the candidate annotations so as to obtain the refined results, which plays a crucial role in semantic based image retrieval. Finally, extensive experiments carried out on two publicly available image datasets bear out that this approach achieves marked improvement in annotation performance over several state-of-the-art methods.
Semi-supervised learning for refining image annotation based on random walk model
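A minimal sketch of the random-walk refinement step described in the abstract above: candidate annotation scores are propagated over a label similarity graph by iterating a restart-based walk until convergence. The similarity matrix and initial scores below are toy values, and the GMM/TSVM stage is not included.

```python
import numpy as np

def random_walk_refine(W, r0, alpha=0.85, tol=1e-8, max_iter=1000):
    """Refine label scores by a random walk with restart over the label graph W."""
    P = W / W.sum(axis=1, keepdims=True)     # row-normalised transition matrix
    r = r0.copy()
    for _ in range(max_iter):
        r_new = alpha * P.T @ r + (1 - alpha) * r0
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Toy label similarity graph over four candidate keywords and initial GMM-style scores.
W = np.array([[1.0, 0.8, 0.1, 0.0],
              [0.8, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.7],
              [0.0, 0.1, 0.7, 1.0]])
r0 = np.array([0.5, 0.1, 0.3, 0.1])
print(random_walk_refine(W, r0))
```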
S0950705114003542
Extreme Learning Machine (ELM) has received increasing attention for its simple principle, low computational cost and excellent performance. However, a large number of labeled instances are often required, and the number of hidden nodes must be manually tuned, for better learning and generalization of ELM. In this paper, we propose a Sparse Semi-Supervised Extreme Learning Machine (S3ELM) via joint sparse regularization for classification, which can automatically prune the model structure via joint sparse regularization technology to achieve more accurate, efficient and robust classification when only a small number of labeled training samples are available. Unlike most greedy-algorithm-based model selection approaches, by using the ℓ2,1-norm, S3ELM casts a joint sparse constraint on the ELM training model and formulates a convex program. Moreover, with a Laplacian, S3ELM can make full use of the information from both the labeled and unlabeled samples. Experiments are conducted on several benchmark datasets, and the results show that S3ELM is computationally attractive and outperforms its counterparts.
Joint sparse regularization based Sparse Semi-Supervised Extreme Learning Machine (S3ELM) for classification
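A minimal sketch of a plain (fully supervised, non-sparse) ELM, the building block that S3ELM extends: random hidden weights, a sigmoid hidden layer, and a ridge-regularised least-squares output layer. The ℓ2,1 joint-sparse term and the Laplacian-based semi-supervised term from the abstract above are not reproduced, and the data are synthetic.

```python
import numpy as np

class BasicELM:
    """Plain Extreme Learning Machine with a ridge least-squares output layer."""

    def __init__(self, n_hidden=50, reg=1e-3, seed=0):
        self.n_hidden, self.reg, self.rng = n_hidden, reg, np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        Y = np.eye(y.max() + 1)[y]                                    # one-hot targets
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ Y)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

X = np.random.default_rng(1).normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print((BasicELM().fit(X, y).predict(X) == y).mean())                  # training accuracy
```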
S095070511400375X
Knowledge reduction is a basic issue in knowledge representation and data mining. Although various methods have been developed to reduce the size of classical formal contexts, the reduction of formal fuzzy contexts based on fuzzy lattices remains a difficult problem owing to its complicated derivation operators. To address this problem, we propose a general method of knowledge reduction by reducing attributes and objects in formal fuzzy contexts based on the variable threshold concept lattices. Employing the proposed approaches, we remove attributes and objects which are non-essential to the structure of a variable threshold concept lattice, i.e., with a given threshold level, the concept lattice constructed from a reduced formal context is made identical to that constructed from the original formal context. Discernibility matrices and Boolean functions are, respectively, employed to compute the attribute reducts and object reducts of the formal fuzzy contexts, by which all the attribute reducts and object reducts of the formal fuzzy contexts are determined without changing the structure of the lattice.
Knowledge reduction in formal fuzzy contexts
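The fuzzy, variable-threshold case in the abstract above is beyond a short snippet, but the discernibility-matrix idea the reduction relies on can be illustrated for a crisp decision table: each cell records the attributes that discern two objects with different decisions, and reducts are the minimal attribute sets hitting every non-empty cell. The table below is a toy example, not a formal fuzzy context from the paper.

```python
import numpy as np

def discernibility_matrix(cond, dec):
    """cond: (n_objects, n_attrs) crisp attribute values; dec: (n_objects,) decision labels."""
    n = len(dec)
    matrix = {}
    for i in range(n):
        for j in range(i + 1, n):
            if dec[i] != dec[j]:
                diff = frozenset(np.flatnonzero(cond[i] != cond[j]).tolist())
                matrix[(i, j)] = diff     # attribute indices that discern objects i and j
    return matrix

cond = np.array([[1, 0, 1],
                 [1, 1, 1],
                 [0, 0, 1],
                 [0, 1, 0]])
dec = np.array([0, 0, 1, 1])
print(discernibility_matrix(cond, dec))
```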
S0950705114003773
Financial distress prediction is always important for financial institutions in order for them to assess the financial health of enterprises and individuals. Bankruptcy prediction and credit scoring are two important issues in financial distress prediction, where various statistical and machine learning techniques have been employed to develop financial prediction models. Since there are no generally agreed upon financial ratios as input features for model development, many studies consider feature selection as a pre-processing step in data mining before constructing the models. However, most works only focused on applying specific feature selection methods to either the bankruptcy prediction or the credit scoring problem domain. In this work, a comprehensive study is conducted to examine the effect of performing filter and wrapper based feature selection methods on financial distress prediction. In addition, the effect of feature selection on the prediction models obtained using various classification techniques is also investigated. In the experiments, two bankruptcy and two credit datasets are used. In addition, three filter and two wrapper based feature selection methods combined with six different prediction models are studied. Our experimental results show that there is no single best combination of feature selection method and classification technique over the four datasets. Moreover, depending on the chosen techniques, performing feature selection does not always improve the prediction performance. However, on average, performing feature selection with the genetic algorithm and with logistic regression provides prediction improvements over the credit and bankruptcy datasets, respectively.
The effect of feature selection on financial distress prediction
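A minimal sketch of the filter-versus-wrapper comparison described in the abstract above: a mutual-information filter and an RFE wrapper around logistic regression, each kept inside a pipeline so selection happens within every cross-validation training fold. The dataset is synthetic, not one of the bankruptcy or credit sets, and the specific methods are stand-ins rather than the ones used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)

# Filter approach: rank features by mutual information, keep the top 10.
filter_pipe = Pipeline([("select", SelectKBest(mutual_info_classif, k=10)),
                        ("clf", LogisticRegression(max_iter=1000))])

# Wrapper approach: recursive feature elimination driven by the classifier itself.
wrapper_pipe = Pipeline([("select", RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)),
                         ("clf", LogisticRegression(max_iter=1000))])

for name, pipe in [("filter", filter_pipe), ("wrapper", wrapper_pipe)]:
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```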
S0950705114003797
This research work presents new hybrid machine learning ensembles for improving the performance of a computer-aided diagnosis system integrated with a multimethod assessment process and statistical process control, used for spine diagnosis based on noninvasive panoramic radiographs. Novel methods are proposed for enhanced classification accuracy. All computations are performed under a strict error tolerance at statistical significance levels of 5% and 1%, and the results are established with corrected t-tests. The kernel density estimator has been implemented to distinguish affected patients from healthy ones. A new ensemble, consisting of a Bayesian network optimized by a Tabu search algorithm as the classifier and Haar wavelets as the projection filter, is used for relevant feature selection and attribute ranking. The performance of each method, along with the major findings, is discussed using various evaluation metrics, and the analysis concludes with promising results. The results are compared to the existing SINPATCO platform, which uses MLP, GRNN, and SVM. The optimization of the machine learning algorithms is obtained using a Design of Experiments scheme to achieve superior prediction accuracy. The highest classification accuracy obtained is 96.55%, with sensitivity and specificity of 0.966 and 0.987, respectively. The objective is to enhance the software reliability and quality of spine disorder diagnosis using the medical diagnostic system and to reinforce the viability of precise treatment.
Developing new machine learning ensembles for quality spine diagnosis
S0950705114003906
Segmentation has several strategic and tactical implications in marketing products and services. Despite hard clustering methods having several weaknesses, they remain widely applied in marketing studies. Alternative segmentation methods such as fuzzy methods are rarely used to understand consumer behaviour. In this study, we propose a strategy of analysis that combines the Bagged Clustering (BC) method and the fuzzy C-means clustering method for fuzzy data (FCM-FD), i.e., the Bagged fuzzy C-means clustering method for fuzzy data (BFCM-FD). The method inherits the advantages of stability and reproducibility from BC and the flexibility from FCM-FD. The method is applied to a sample of 328 Chinese consumers, revealing the existence of four segments (Admirers, Enthusiasts, Moderates, and Apathetics) of the perceived images of Western Europe as a tourist destination. The results highlight the heterogeneity in Chinese consumers’ place preferences, and implications for place marketing are offered.
Bagged fuzzy clustering for fuzzy data: An application to a tourism market
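A minimal sketch of the standard (non-bagged, crisp-data) fuzzy c-means loop, the building block underlying the FCM-FD component described above: memberships and centroids are updated alternately until the chosen number of iterations is reached. The fuzzy-data distance and the bagging layer are not reproduced, and the data are synthetic.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                      # each row sums to one
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]     # fuzzily weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)                  # normalised memberships
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in (0, 3, 6)])
centers, U = fuzzy_c_means(X)
print(np.round(centers, 2))
```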
S0950705114003931
Eye images with low lighting or a low contrast ratio between the iris and pupil are one of the challenges for iris recognition in a non-cooperative environment under visible-wavelength illumination. Incorrect iris localization can affect the performance of the iris recognition system. Iso-contrast limited adaptive histogram equalization is proposed to overcome this challenge and increase the performance of iris localization. The eye image is partitioned into contextual sub-regions; then, the proposed method transforms the pixel intensities by referring to a local intensity histogram and a newly suggested cumulative distribution function. The approach was tested on 1000 eye images from the UBIRIS.v2 dataset. The results showed that the proposed method performed better than existing methods when dealing with low lighting or a low contrast ratio between the iris and pupil in the eye image.
A low lighting or contrast ratio visible iris recognition using iso-contrast limited adaptive histogram equalization
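The method described above works per sub-region with a newly suggested CDF; as a hedged illustration of the underlying clip-and-redistribute idea, the sketch below performs a global (single-region) contrast-limited histogram equalization on a random low-contrast uint8 image. It is not the iso-contrast limited adaptive method itself.

```python
import numpy as np

def clipped_hist_equalize(img, clip_limit=0.01):
    """Global contrast-limited histogram equalization for a uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    limit = clip_limit * img.size
    excess = np.clip(hist - limit, 0, None).sum()
    hist = np.minimum(hist, limit) + excess / 256.0        # clip bins and spread the excess
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())      # normalise the CDF to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

# Random low-contrast image standing in for a dark eye region.
img = np.random.default_rng(0).integers(40, 90, size=(64, 64), dtype=np.uint8)
out = clipped_hist_equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```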
S0950705114003943
Software testing is a crucial task during the software development process, with the potential to save time and budget by recognizing defects as early as possible and delivering a more defect-free product. To improve the testing process, fault prediction approaches identify parts of the system that are more defect prone. However, when the defect data or quality-based class labels are not identified, or the company does not have similar or earlier versions of the software project, researchers cannot use supervised classification methods for defect detection. In order to detect the defect proneness of modules in software projects with high accuracy and improve the detection model’s generalization ability, we propose an automated software fault detection model using a semi-supervised hybrid self-organizing map (HySOM). HySOM is a semi-supervised model based on the self-organizing map and an artificial neural network. The advantage of HySOM is the ability to predict the label of the modules in a semi-supervised manner using software measurement threshold values in the absence of quality data. In semi-supervised HySOM, the role of the expert in identifying fault-prone modules becomes less critical and more supportive. We have benchmarked the proposed model with eight industrial data sets from NASA and Turkish white-goods embedded controller software. The results show improvement in the false negative rate and the overall error rate in 80% and 60% of the cases, respectively, for the NASA data sets. Moreover, we compare the performance of the proposed model with other recently proposed methods. According to the results, our semi-supervised model can be used as an automated tool to guide testing effort by prioritizing module defects, improving the quality of software development and software testing with less time and budget.
An empirical study based on semi-supervised hybrid self-organizing map for software fault prediction
S0950705114004043
Medical diagnosis is considered one of the important processes in clinical medicine that determines acquired diseases from given symptoms. Enhancing the accuracy of diagnosis is a central focus of researchers using computerized techniques such as intuitionistic fuzzy sets (IFS) and recommender systems (RS). Based upon the observation that medical data are often imprecise, incomplete and vague, so that using standalone IFS and RS methods may not improve the accuracy of diagnosis, in this paper we consider the integration of IFS and RS into the proposed methodology and present a novel intuitionistic fuzzy recommender system (IFRS) including: (i) new definitions of single-criterion and multi-criteria IFRS; (ii) new definitions of the intuitionistic fuzzy matrix (IFM) and the intuitionistic fuzzy composition matrix (IFCM); (iii) the intuitionistic fuzzy similarity matrix (IFSM), the intuitionistic fuzzy similarity degree (IFSD) and formulas to predict values on the basis of IFSD; (iv) a novel intuitionistic fuzzy collaborative filtering method, called IFCF, to predict possible diseases. Experimental results reveal that IFCF obtains better accuracy than the standalone IFS methods, such as those of De et al., Szmidt and Kacprzyk, and Samuel and Balamurugan, and the RS methods of, e.g., Davis et al. and Hassan and Syed.
Intuitionistic fuzzy recommender systems: An effective tool for medical diagnosis
S095070511400416X
In this paper, an effective bi-population estimation of distribution algorithm (BEDA) is presented to solve the no-idle permutation flow-shop scheduling problem (NIPFSP) with the total tardiness criterion. To enhance the search efficiency and maintain the diversity of the whole population, two sub-populations are used in the BEDA. The two sub-populations are generated by sampling probability models that are updated differently for global exploration and local exploitation, respectively. Meanwhile, the two sub-populations collaborate with each other to share search information for adjusting the models. To adjust the models well for generating promising solutions, the global probability model is updated during the evolution with the superior population, and the local probability model is updated with the best solution explored so far. To further enhance exploitation in the promising region, the insertion operator is used iteratively as the local search procedure. To investigate the influence of parameter settings, a numerical study based on the Taguchi design-of-experiments method is carried out. The effectiveness of the bi-population strategy and the local search procedure is shown by numerical comparisons, and comparisons with recently published algorithms on benchmark instances also demonstrate the effectiveness of the proposed BEDA.
A bi-population EDA for solving the no-idle permutation flow-shop scheduling problem with the total tardiness criterion
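A minimal single-population sketch of an estimation-of-distribution algorithm over job permutations, the mechanism the BEDA above builds on: a job-by-position probability model is sampled, the best samples update the model, and a toy objective stands in for the real NIPFSP total-tardiness evaluation. The bi-population coupling and the insertion-based local search are omitted, and all parameter values are arbitrary.

```python
import numpy as np

def sample_permutation(P, rng):
    """Sample a permutation by picking, position by position, a job from those remaining."""
    n, remaining, perm = P.shape[0], list(range(P.shape[0])), []
    for pos in range(n):
        probs = P[pos, remaining]
        job = rng.choice(remaining, p=probs / probs.sum())
        perm.append(job)
        remaining.remove(job)
    return perm

def eda(objective, n_jobs=8, pop=60, elite=10, iters=50, lr=0.2, seed=0):
    rng = np.random.default_rng(seed)
    P = np.full((n_jobs, n_jobs), 1.0 / n_jobs)            # position-by-job probability model
    best, best_val = None, np.inf
    for _ in range(iters):
        population = [sample_permutation(P, rng) for _ in range(pop)]
        population.sort(key=objective)
        if objective(population[0]) < best_val:
            best, best_val = population[0], objective(population[0])
        freq = np.zeros_like(P)
        for perm in population[:elite]:                    # job frequencies per position
            for pos, job in enumerate(perm):
                freq[pos, job] += 1.0 / elite
        P = (1 - lr) * P + lr * freq                       # incremental model update
    return best, best_val

# Toy objective: prefer jobs ordered by index (a stand-in for total tardiness).
toy = lambda perm: sum(abs(job - pos) for pos, job in enumerate(perm))
print(eda(toy))
```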
S0950705114004183
Time, cost, and quality are three important but often conflicting factors that must be optimally balanced during the planning and management of construction projects. Tradeoff optimization among these three factors within the project scope is necessary to maximize overall project success. In this paper, the MOABCDE-TCQT, a new hybrid multiple objective evolutionary algorithm that is based on hybridization of artificial bee colony and differential evolution, is proposed to solve time–cost–quality tradeoff problems. The proposed algorithm integrates crossover operations from differential evolution (DE) with the original artificial bee colony (ABC) in order to balance the exploration and exploitation phases of the optimization process. A numerical construction project case study demonstrates the ability of MOABCDE-generated, non-dominated solutions to assist project managers to select an appropriate plan to optimize TCQT, which is an operation that is typically difficult and time-consuming. Comparisons between the MOABCDE and four currently used algorithms, including the non-dominated sorting genetic algorithm (NSGA-II), the multiple objective particle swarm optimization (MOPSO), the multiple objective differential evolution (MODE), and the multiple objective artificial bee colony (MOABC), verify the efficiency and effectiveness of the developed algorithm.
Hybrid multiple objective artificial bee colony with differential evolution for the time–cost–quality tradeoff problem
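A minimal sketch of the differential-evolution mutation and binomial crossover operators that the hybrid above borrows; the ABC scaffolding, the multi-objective non-dominated sorting, and the time–cost–quality encoding are not shown, and the population here is random real-coded toy data.

```python
import numpy as np

def de_trial_vector(pop, i, F=0.5, CR=0.9, rng=None):
    """Build a DE/rand/1/bin trial vector for individual i of a real-coded population."""
    rng = rng or np.random.default_rng()
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = pop[rng.choice(idx, size=3, replace=False)]
    mutant = r1 + F * (r2 - r3)                       # differential mutation
    cross = rng.random(pop.shape[1]) < CR
    cross[rng.integers(pop.shape[1])] = True          # guarantee at least one mutated gene
    return np.where(cross, mutant, pop[i])            # binomial crossover

pop = np.random.default_rng(0).random((10, 5))
print(de_trial_vector(pop, 0, rng=np.random.default_rng(1)))
```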
S0950705114004213
Fatty Liver Disease (FLD) is an increasingly prevalent disease that is present in about 15% of the world population. Normally benign and reversible if detected at an early stage, FLD, if left undetected and untreated, can progress to an irreversible advanced liver disease, such as fibrosis, cirrhosis, liver cancer and liver failure, which can cause death. Ultrasound (US) is the most widely used modality to detect FLD. However, the accuracy of US-based diagnosis depends on both the training and expertise of the radiologist. US-based Computer Aided Diagnosis (CAD) techniques for FLD detection can improve the accuracy, speed and objectiveness of the diagnosis, and thereby reduce operator dependence. In this paper, we first review the advantages and limitations of the different diagnostic methods currently available to detect FLD. We then review the state-of-the-art US-based CAD techniques that utilize a range of image texture based features like entropy, Local Binary Pattern (LBP), Haralick textures and the run length matrix in several automated decision making algorithms. These classification algorithms are trained using the features extracted from patient data in order for them to learn the relationship between the features and the end result (FLD present or absent). Subsequently, features from a new patient are input to these trained classifiers to determine if he/she has FLD. Due to the use of such automated systems, the inter-observer variability and the subjectivity associated with the reading of images by radiologists are eliminated, resulting in a more accurate and quicker diagnosis for the patient and in time and cost savings for both the patient and the hospital.
Ultrasound-based tissue characterization and classification of fatty liver disease: A screening and diagnostic paradigm
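A minimal sketch of the texture-feature step mentioned in the abstract above: uniform LBP codes computed with scikit-image and summarized as a normalized histogram that could feed a classifier. The image is a library sample rather than liver ultrasound, and the other texture features (entropy, Haralick, run length matrix) are not shown.

```python
import numpy as np
from skimage import data
from skimage.feature import local_binary_pattern

def lbp_histogram(image, P=8, R=1):
    """Uniform LBP histogram: a compact texture descriptor for one image (or ROI)."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2                                       # uniform patterns plus one 'other' bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

print(np.round(lbp_histogram(data.camera()), 3))         # stand-in grayscale image
```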
S0950705114004237
To facilitate the exchange of environmental observations, efforts have been made to develop standardised markup languages for describing and transmitting data from multiple sources. Along with this is often a need to translate data from different formats or vocabularies to these languages. In this paper, we focus on the problem of translating data encoded in spreadsheets to an XML-based standardised exchange language. We describe the issues with data that have to be resolved. We present a solution that relies on an ontology capturing semantic gaps between data and the target language. We show how to develop such an ontology and use it to mediate translation through a real scenario where water resources data have to be translated to a standard data transfer format. In particular, we provide declarative mapping formalisms for representing relationships between spreadsheets, ontologies, and XML schemas, and give algorithms for processing mappings. We have implemented our approach in AdHoc, an ontology-mediated spreadsheet-to-XML translation tool, and showed its effectiveness with real environmental observations data.
A semantic approach to data translation: A case study of environmental observations data
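A minimal sketch of mapping-driven spreadsheet-to-XML translation, loosely in the spirit of the approach described above: a small declarative column-to-element mapping is applied to tabular rows to emit XML. The mapping, element names, and data are invented for illustration, and no ontology mediation or schema validation is performed.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical declarative mapping: spreadsheet column -> XML element name.
MAPPING = {"site": "MonitoringPoint", "date": "Time", "flow_m3s": "Flow"}

CSV_DATA = """site,date,flow_m3s
Station-1,2014-05-01,12.4
Station-2,2014-05-01,7.9
"""

root = ET.Element("Observations")
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    obs = ET.SubElement(root, "Observation")
    for column, element_name in MAPPING.items():
        ET.SubElement(obs, element_name).text = row[column]

print(ET.tostring(root, encoding="unicode"))
```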
S0950705114004316
An innovative intelligent diagnostic system is proposed in this study, which is primarily reflected in the automatic extraction of the first heart sound (S1) and second heart sound (S2), the automatic extraction of a frequency feature matrix (FFM), the generation of diagnostic features y1 and y2 based on principal component analysis (PCA), and the definition of a diagnostic method based on classification boundary curves. The four stages of the diagnostic system implementation are summarized as follows. Stage 1 describes the extraction of an envelope E_T from heart sound signals. In stage 2, heart sound segmentation points and peaks are first automatically located based on a novel method, STMHT, and then S1 and S2 are automatically extracted according to the relationship between the systolic time interval and the diastolic time interval. In stage 3, in the frequency domain, a novel method is first proposed to generate the secondary envelopes SE_S1 and SE_S2 for S1 and S2, respectively, and then an STMHT-based FFM is automatically extracted from SE_S1 and SE_S2. Finally, the PCA-based diagnostic features y1 and y2 are generated from the FFM. In stage 4, support vector machine (SVM)-based classification curves for the dataset consisting of y1 and y2 are first generated, and then, based on the classification curves, the scatter diagram diagnostic result (SDDR) and the numerical diagnostic result (NDR) are defined for the diagnosis of heart diseases. The proposed intelligent diagnosis system is validated with sounds from online heart sound databases and with sounds from clinical heart diseases. As a result, the classification accuracies (CA) achieved are 91.7%, 98.8%, 98.4%, 99.8%, 98.7%, 97.8%, 98.1% and 96.5% for the detection of atrial fibrillation (AF), aortic regurgitation (AR), mitral regurgitation (MR), normal sound (NM), pulmonary stenosis (PS), small ventricular septal defect (SVSD), medium ventricular septal defect (MVSD) and large ventricular septal defect (LVSD), respectively.
An innovative intelligent system based on automatic diagnostic feature extraction for diagnosing heart diseases
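A minimal sketch of the final classification stage described in the abstract above: feature rows are reduced to two principal components (in the spirit of the y1/y2 features) and separated by an SVM, evaluated with cross-validation. Synthetic features stand in for the FFM-derived data, and the envelope extraction and segmentation stages are not shown.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic frequency-feature rows standing in for the FFM of segmented heart sounds.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

# Scale, project onto two principal components, then classify with an RBF SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel="rbf", gamma="scale"))
print(cross_val_score(model, X, y, cv=5).mean())
```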