Columns: FileName (string, length 17), Abstract (string, length 163–6.01k), Title (string, length 12–421)
S0920548916300058
Software Architecture (SA) design has been acknowledged as a key factor in high-quality software development. Different efforts in both industry and academia have produced multiple system development methodologies (SDMs) that include SA design activities. In addition, standardization bodies have defined different recommendations regarding Software Architecture design. However, Software Architecture best practices are currently poorly adopted in industry. This fact constrains the benefits that industry can potentially obtain from Software Architecture design in software development. In this paper, we analyze the degree to which the four main recognized SDMs — RUP (Rational Unified Process), MSF (Microsoft Solutions Framework), MBASE (Model-Based System Architecting and Software Engineering), and RUP-SOA (Rational Unified Process for Service-oriented Architecture) — adhere to the best practices of Software Architecture design. Our analysis points out some of the most important strengths and weaknesses regarding Software Architecture design and highlights some of the most relevant issues of Software Architecture design that need to be incorporated into such methodologies.
The strengths and weaknesses of software architecture design in the RUP, MSF, MBASE and RUP-SOA methodologies: A conceptual review
S092054891630006X
The Future Internet is expected to be composed of a mesh of interoperable web services accessed from all over the Web. This approach has been supported by many software providers who have provided a wide range of mashup tools for creating composite applications based on components prepared by the respective provider. These tools aim to achieve the end-user development (EUD) of rich internet applications (RIA); however, most, having failed to meet the needs of end users without programming knowledge, have been unsuccessful. Thus, many studies have investigated success factors in order to propose scales of success factor objectives and assess the adequacy of mashup tools for their purpose. After reviewing much of the available literature, this paper proposes a new success factor scale based on human factors, human-computer interaction (HCI) factors and the specialization-functionality relationship. It brings together all these factors, offering a general conception of EUD success factors. The proposed scale was applied in an empirical study on current EUD tools, which found that today's EUD tools have many shortcomings. In order to achieve an acceptable success rate among end users, we then designed a mashup tool architecture, called FAST-Wirecloud, which was built taking into account the proposed EUD success factor scale. The results of a new empirical study carried out using this tool have demonstrated that users are better able to successfully develop their composite applications and that FAST-Wirecloud has scored higher than all the other tools under study on all scales of measurement, and particularly on the scale proposed in this paper.
Implementation of end-user development success factors in mashup development environments
S0920548916300083
Due to the rapid growth of data created and analyzed by industry, and as business requirements become more complex, many companies have come to employ more than one key-value store to serve different tasks. Since key-value stores currently define their own interfaces, which have different attributes and semantics, interoperability among these key-value stores is weak. To achieve the best interoperability, we may choose the store whose interfaces are most similar to the others, or we may define an interface specification analogous to the SQL specification for relational databases. We propose an interface description model (IDM for short) to abstract the interfaces of different key-value stores, and an algorithm to quantify their differences, a measure we name the conversion cost. With the help of these algorithms, we can measure and compare the interoperability of any two given stores. After studying the interoperability of many stores, we propose an interface prototype, which has the minimum conversion cost to the interfaces of the other stores, as a reference for an interface specification for key-value stores. Experiments show the features of the interfaces and demonstrate that the proposed prototype has the best interoperability with other typical stores.
Conversion cost and specification on interfaces of key-value stores
S0920548916300162
Background To integrate electronic health records (EHRs) from diverse document sources across healthcare providers, facilities, or medical institutions, the IHE XDS.b profile can be considered as one of the solutions. In this research, we developed an EHR/OpenXDS system that adopted OpenXDS, an open-source software compliant with the IHE XDS.b profile, and achieved EHR interoperability. Objective We conducted performance testing to investigate the performance and limitations of this EHR/OpenXDS system. Methodology The performance testing covered three use cases, EHR submission, query, and retrieval, based on the IHE XDS.b profile for EHR sharing. In addition, we monitored the depletion of hardware resources (including CPU usage, memory usage, and network usage) during test-case execution to uncover further details of the EHR/OpenXDS system's limitations. Results In this EHR/OpenXDS system, the maximum affordable workload for EHR submissions was 400 submissions per hour; the DSA CPU usage was 20%, memory usage was 1380MB, and the network usage was 0.286KB input and 7.58KB output per minute; the DPA CPU usage was 1%, memory usage was 1770MB, and the network usage was 7.75KB input and 1.54KB output per minute; the DGA CPU usage was 24%, memory usage was 2130MB, and the network usage was 1.3KB input and 0.174KB output per minute. The maximum affordable workload for EHR queries was 600 queries per hour; the DCA CPU usage was 66%, memory usage was 1660MB, and the network usage was 0.230KB input and 0.251KB output per minute; the DGA CPU usage was 1%, memory usage was 1890MB, and the network usage was 0.273KB input and 0.22KB output per minute. The maximum affordable workload for EHR retrievals was 2000 EHR retrievals; the DCA CPU usage was 79%, memory usage was 1730MB, and the network usage was 19.55KB input and 1.12KB output per minute; the DPA CPU usage was 3.75%, memory usage was 2310MB, and the network usage was 0.956KB input and 19.57KB output per minute. Discussion and conclusion Based on these results, we suggest that future implementers deploying an EHR/OpenXDS system consider the following aspects. First, determine the service volume to be provided in the target environment and adjust the hardware resources accordingly. Second, the IHE XDS.b profile is currently implemented over SOAP (Simple Object Access Protocol) web services; it could move to RESTful (representational state transfer) web services, which are more efficient than SOAP. Third, concurrent processing capability should be added to the OpenXDS source code so that hardware is used more efficiently while processing the ITI-42, ITI-18, and ITI-43 transactions. Fourth, work should continue on tuning the memory usage of the OpenXDS modules so that memory is used more efficiently, e.g., the memory configuration of the JVM (Java Virtual Machine), Apache Tomcat, and Apache Axis2. Fifth, consider whether hardware monitoring is required in the deployment environment. These results provide reference test figures, together with tuning suggestions and future work to continue improving the performance of OpenXDS.
Performance assessment and tuning for exchange of clinical documents across healthcare enterprises
S0920548916300174
Analyzing conflicts in non-functional requirements is a major task in large software systems development projects. Many of the non-functional requirements that accumulate vary over time. Systems analysts often maintain non-functional requirements incrementally. Requirement information overload and employee turnover problems may complicate conflict detection in the non-functional requirement evolution process. This work proposes a conflict detector in non-functional requirement evolution (CDNFRE) system that uses ontologies as a theoretical foundation for automatically detecting conflicts. Requirement metadata and conflict detection rules and their associated requirement generation and conflict detection processes are proposed for the CDNFRE mechanism. A prototype is developed. A case study of electronic commerce in a television station company demonstrates the feasibility and effectiveness of the proposed CDNFRE system.
CDNFRE: Conflict detector in non-functional requirement evolution based on ontologies
S0920548916300216
Web accessibility guidelines help developers to create websites which can more easily be used by people with different limitations. The principles and techniques of accessibility focus on the suitable use of standard Web components, alternative methods to present information, and alternatives to facilitate user interaction. Currently, the biggest part in creating accessible websites is played by Web developers, because they manage the page code. Unfortunately, there are millions of websites which do not follow accessibility guidelines, as this usually requires great effort and knowledge of accessibility issues. This research aims to create a platform, based on a novel approach, which allows a set of accessibility problems to be solved without modifying the original page code. The proposed platform is able to analyse websites and detect many accessibility problems automatically; after this, a guided assistant is used to offer adequate solutions to each detected problem. The assistant tries to abstract references to Web implementation issues and to explain every accessibility problem in an understandable way for non-technical people. This new approach could be useful to improve the level of accessibility of many websites for people besides Web developers.
Social4all: Definition of specific adaptations in Web applications to improve accessibility
S0920548916300228
There has been a great deal of research on software quality, but few studies have stressed the factors beyond the scope of software products that can influence the final product's quality. These factors can also determine project success. Objective In this paper, a comparative study is conducted of the determinants of software quality, based on a prior study that only explored U.S. CIOs' (Chief Information Officers) perceptions of factors that could affect the final quality of software products. The aim of this study is to explore the perceptions of different users involved in the software development cycle and generate results that can be generalized and employed as an aid in the management of software project resources. Method The study was conducted through an online survey to various users involved in the software development cycle in Brazil. The respondents analyzed the same 24 items proposed in the previous study, categorized into individual, technological, and organizational factors. Based on the 175 responses obtained, a factor analysis technique was applied, considering the statistical model of the main components in order to identify the factors determining the quality of software products. Results After the factor analysis, it was identified that all 24 analyzed items displayed factor loadings greater than 0.5. Nine factors (9 eigenvalues greater than 1.0) were extracted from this analysis, with the value of the total variance equal to 72%. Conclusion Based upon the comparison between the studies, it was concluded that the most relevant factor identified in both surveys presented an individual character. This factor relates items such as competence, training, knowledge, and level of user involvement, as well as resistance to change. It was also identified through factor analysis that technological aspects had the highest ratings due to the strong relationship of the items comprising these factors compared to organizational aspects.
An analysis of the factors determining software product quality: A comparative study
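As a rough illustration of the extraction procedure described in the abstract above (principal-component extraction, the eigenvalue-greater-than-1 retention rule, and the 0.5 loading threshold), here is a minimal Python sketch; the 175x24 response matrix is a random stand-in, not the study's data.

```python
import numpy as np

# Hypothetical survey matrix: 175 respondents x 24 Likert-scale items (stand-in data).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(175, 24)).astype(float)

# Principal-component extraction: eigen-decomposition of the item correlation matrix.
corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Kaiser criterion: retain factors with eigenvalue > 1.0; loadings = eigvec * sqrt(eigval).
retained = eigvals > 1.0
loadings = eigvecs[:, retained] * np.sqrt(eigvals[retained])

print("factors retained:", int(retained.sum()))
print("explained variance ratio:", float(eigvals[retained].sum() / eigvals.sum()))
print("items with |loading| > 0.5 on factor 1:", int(np.sum(np.abs(loadings[:, 0]) > 0.5)))
```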
S092054891630023X
In this study, we propose a novel solution for collecting smart meter data by merging Vehicular Ad-Hoc Networks (VANET) and smart grid communication technologies. In our proposed mechanism, Vehicular Ad-Hoc Networks are utilized for collecting data from smart meters, eliminating the need for manpower. To the best of our knowledge, this is the first study proposing the utilization of public transportation vehicles for collecting data from smart meters. With this work, the IEEE 802.11p protocol is also proposed for the first time for use in smart grid applications. In our scheme, data flows first from smart meters to a bus through infrastructure-to-vehicle (I2V) communication and then from the bus to a bus stop through vehicle-to-infrastructure (V2I) communication. The performance of our proposed mechanism has been investigated in detail in terms of end-to-end delay and delivery ratio by using Network Simulator-2 and with different routing protocols.
A novel data collection mechanism for smart grids using public transportation buses
S0921889014000396
For humans to accurately understand the world around them, multimodal integration is essential because it enhances perceptual precision and reduces ambiguity. Computational models replicating such human ability may contribute to the practical use of robots in daily human living environments; however, primarily because of scalability problems that conventional machine learning algorithms suffer from, sensory-motor information processing in robotic applications has typically been achieved via modal-dependent processes. In this paper, we propose a novel computational framework enabling the integration of sensory-motor time-series data and the self-organization of multimodal fused representations based on a deep learning approach. To evaluate our proposed model, we conducted two behavior-learning experiments utilizing a humanoid robot; the experiments consisted of object manipulation and bell-ringing tasks. From our experimental results, we show that large amounts of sensory-motor information, including raw RGB images, sound spectrums, and joint angles, are directly fused to generate higher-level multimodal representations. Further, we demonstrated that our proposed framework realizes the following three functions: (1) cross-modal memory retrieval utilizing the information complementation capability of the deep autoencoder; (2) noise-robust behavior recognition utilizing the generalization capability of multimodal features; and (3) multimodal causality acquisition and sensory-motor prediction based on the acquired causality.
Multimodal integration learning of robot behavior using deep neural networks
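To make the fusion idea in the abstract above concrete, here is a small PyTorch sketch of an autoencoder that encodes several modalities, fuses them in a shared bottleneck, and reconstructs a missing modality from the rest. The modality sizes, layer widths, and training loop are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Hypothetical modality sizes: flattened image patch, sound-spectrum frame, joint angles.
DIMS = {"image": 256, "sound": 64, "joints": 10}
FUSED = 32  # size of the shared multimodal representation

class MultimodalAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {k: nn.Sequential(nn.Linear(d, 64), nn.ReLU()) for k, d in DIMS.items()})
        self.fuse = nn.Linear(64 * len(DIMS), FUSED)      # shared bottleneck
        self.defuse = nn.Linear(FUSED, 64 * len(DIMS))
        self.decoders = nn.ModuleDict({k: nn.Linear(64, d) for k, d in DIMS.items()})

    def forward(self, batch):
        codes = [self.encoders[k](batch[k]) for k in DIMS]          # per-modality encoding
        fused = torch.relu(self.fuse(torch.cat(codes, dim=1)))      # multimodal fusion
        parts = torch.chunk(self.defuse(fused), len(DIMS), dim=1)   # split back per modality
        return {k: self.decoders[k](p) for k, p in zip(DIMS, parts)}, fused

model = MultimodalAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = {k: torch.randn(8, d) for k, d in DIMS.items()}  # random stand-in data

for _ in range(5):  # tiny training loop on reconstruction loss
    recon, _ = model(batch)
    loss = sum(nn.functional.mse_loss(recon[k], batch[k]) for k in DIMS)
    opt.zero_grad(); loss.backward(); opt.step()

# Cross-modal retrieval idea: zero out a missing modality and reconstruct it from the rest.
batch_missing = dict(batch, sound=torch.zeros(8, DIMS["sound"]))
recon, fused = model(batch_missing)
print("reconstructed sound shape:", recon["sound"].shape)
```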
S092188901400178X
Due to advancements in robotic systems, robots can now be employed in more unstructured outdoor environments. In such environments the robot–terrain interaction becomes a highly non-linear function. Several methods have been proposed to estimate the robot–terrain interaction: machine learning methods, iterative geometric methods, and quasi-static and fully dynamic physics simulations. However, to the best of our knowledge there has been no systematic evaluation comparing those methods. In this paper, we present a newly developed iterative contact point estimation method for static stability estimation of actively reconfigurable robots. This new method is systematically compared to a physics simulation in a comprehensive evaluation. Both interaction models determine the contact points between robot and terrain and facilitate a subsequent static stability prediction. Hence, they can be used in our state space global planner for rough terrain to evaluate the robot’s pose and stability. The analysis also compares deterministic versions of both methods to stochastic versions which account for uncertainty in the robot configuration and the terrain model. The results of this analysis show that the new iterative method is a valid and fast approximate method. It is significantly faster than a physics simulation while providing good results in realistic robotic scenarios.
Design and comparative evaluation of an iterative contact point estimation method for static stability estimation of mobile actively reconfigurable robots
S0921889014001821
We introduce a novel, fabric-based, flexible, and stretchable tactile sensor, which is capable of seamlessly covering natural shapes. As humans and robots have curved body parts that move with respect to each other, the practical usage of traditional rigid tactile sensor arrays is limited. Rather, a flexible tactile skin is required. Our design allows for several tactile cells to be embedded in a single sensor patch. It can have an arbitrary perimeter and can cover free-form surfaces. In this article we discuss the construction of the sensor and evaluate its performance. Our flexible tactile sensor remains operational on top of soft padding such as a gel cushion, enabling the construction of a human-like soft tactile skin. The sensor allows pressure measurements to be read from subtle pressures of less than 1 kPa up to high pressures of more than 500 kPa, which easily covers the common range for everyday human manual interactions. Due to a layered construction, the sensor is very robust and can withstand normal forces multiple magnitudes higher than what could be achieved by a human without sustaining damage. As an exciting application for the sensor, we describe the construction of a wearable tactile dataglove with 54 tactile cells and embedded data acquisition electronics. We also discuss the necessary implementation details to maintain long-term sensor performance in the presence of moisture.
Flexible and stretchable fabric-based tactile sensor
S0921889014002164
In this paper, an overview of human–robot interactive communication is presented, covering verbal as well as non-verbal aspects. Following a historical introduction, and motivation towards fluid human–robot communication, ten desiderata are proposed, which provide an organizational axis both of recent as well as of future research on human–robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion, and a forward-looking conclusion.
A review of verbal and non-verbal human–robot interactive communication
S0921889014003054
Motion capture systems have been commonly used to enable humanoid robots or CG characters to perform human-like motions. However, prerecorded motion capture data cannot be reused efficiently because picking a specific motion from a large database and modifying the motion data to fit the desired motion patterns are difficult tasks. We have developed an imitative learning framework based on the symbolization of motion patterns using Hidden Markov Models (HMMs), where each HMM (hereafter referred to as “motion symbol”) abstracts the dynamics of a motion pattern and allows motion recognition and generation. This paper describes a symbolically structured motion database that consists of original motion data, motion symbols, and motion words. Each motion data is labeled with motion symbols and motion words. Moreover, a network is formed between two layers of motion symbols and motion words based on their probability association. This network makes it possible to associate motion symbols with motion words and to search for motion datasets using motion symbols. The motion symbols can also generate motion data. Therefore, the developed framework can provide the desired motion data when only the motion words are input into the database.
Symbolically structured database for human whole body motions based on association between motion symbols and motion words
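The abstract above relies on HMM "motion symbols" that support both recognition (scoring an observation) and generation (sampling a trajectory). The sketch below illustrates that idea with the hmmlearn library on synthetic joint-angle trajectories; the motion classes, feature dimensions, and HMM settings are arbitrary assumptions, not the paper's data or models.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency: hmmlearn

rng = np.random.default_rng(1)

def synthetic_motion(freq, n=100, dim=6):
    """Stand-in for captured joint-angle trajectories (n frames x dim joints)."""
    t = np.linspace(0, 2 * np.pi, n)[:, None]
    return np.sin(freq * t + np.arange(dim) * 0.3) + 0.05 * rng.standard_normal((n, dim))

# Train one HMM ("motion symbol") per labelled motion class, e.g. "walk" and "wave".
symbols = {}
for label, freq in [("walk", 1.0), ("wave", 3.0)]:
    demos = [synthetic_motion(freq) for _ in range(5)]
    X = np.vstack(demos)
    lengths = [len(d) for d in demos]
    symbols[label] = GaussianHMM(n_components=4, covariance_type="diag",
                                 n_iter=50).fit(X, lengths)

# Recognition: score a new observation under every motion symbol and pick the best.
query = synthetic_motion(3.0)
scores = {label: hmm.score(query) for label, hmm in symbols.items()}
print("recognized motion word:", max(scores, key=scores.get))

# Generation: a motion symbol can also sample a new trajectory of the same kind.
generated, _ = symbols["walk"].sample(100)
print("generated trajectory shape:", generated.shape)
```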
S0921889014003169
Motion capture technology has improved and is widely used for motion analysis and synthesis in various fields, such as robotics, animation, rehabilitation, and sports engineering. A massive amount of captured human data has already been collected. These prerecorded motion data should be reused in order to make motion analysis and synthesis more efficient. The retrieval of specified motion data is a fundamental technique for such reuse. Imitation learning frameworks have been developed in robotics, where motion primitive data are encoded into parameters of stochastic models or dynamical systems. We have also been conducting research on encoding motion primitive data into Hidden Markov Models, which we refer to as "motion symbols", aiming to integrate the motion symbols with language. The relations between motions and words in natural language are versatile and powerful, and provide a useful interface for reusing motion data. In this paper, we construct a space of motion symbols for human whole-body movements and a space of word labels assigned to those movements. Through canonical correlation analysis, these spaces are reconstructed such that a strong correlation is formed between movements and word labels. These spaces lead to a method for searching for movement data from a query of word labels. We tested our proposed approach on captured human whole-body motion data, and its validity was demonstrated. Our approach serves as a fundamental technique for extracting the necessary movements from a database and reusing them.
Correlated space formation for human whole-body motion primitives and descriptive word labels
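Since canonical correlation analysis is the core tool of the abstract above, here is a minimal sklearn CCA sketch that correlates a motion-feature space with a word-label space and retrieves a motion from a word query; the motion features and labels are random stand-ins and the retrieval rule is an illustrative choice, not the paper's method.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)

# Hypothetical feature spaces: 60 motions described by 20-D "motion symbol" features
# and 15-D binary word-label vectors (e.g. "walk", "fast", "arm", ...).
n = 60
motion_feats = rng.standard_normal((n, 20))
word_labels = ((motion_feats[:, :15] + 0.3 * rng.standard_normal((n, 15))) > 0).astype(float)

# Canonical correlation analysis builds a shared space where the two views correlate.
cca = CCA(n_components=5)
motion_c, word_c = cca.fit_transform(motion_feats, word_labels)

# Retrieval: project a word-label query into the shared space and return the
# motion whose projection is closest (cosine similarity).
query = word_labels[7:8]
_, query_c = cca.transform(np.zeros((1, 20)), query)  # only the word view matters here
sims = (motion_c @ query_c.T).ravel() / (
    np.linalg.norm(motion_c, axis=1) * np.linalg.norm(query_c) + 1e-12)
print("best matching motion index:", int(np.argmax(sims)), "(query came from motion 7)")
```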
S0921889015000834
The scope of this paper is to present a novel gait methodology in order to obtain an efficient walking capability for an original walking free-leg hexapod structure (WalkingHex) of tri-radial symmetry. Torque in the upper (actuated) spherical joints and stability margin analyses are obtained based on a constraint-driven gait generator. Therefore, the kinematic information of foot pose and angular orientation of the platform are considered as important variables along with the effect that they can produce in different gait cycles. The torque analysis is studied to determine the motor torque requirements for each step of the gait so that the robotic structure yields a stable and achievable pose. In this way, the analysis of torque permits the selection of an optimal gait based on stability margin criteria. Consequently, a gait generating algorithm is proposed for different types of terrain such as flat, ramp or stepped surfaces. Nomenclature: angle of slope of terrain (rad); foot spacing angle (rad); foot spacing radius (m); foot i; gravitational coefficient, 9.81 (m s−2); hexapod rotation (rad); optimal hexapod rotation (rad); magnitude of stability margin i (m); maximum torque in system (N m); optimal translation (m); overall system stability margin (m); platform pitch (rad); platform translation (m); prismatic joint i; set of integers; torque in upper spherical joint i (N m); upper spherical joint i.
Pre-gait analysis using optimal parameters for a walking machine tool based on a free-leg hexapod structure
S0921889015000846
The problem of finding stable grasps has been widely studied in robotics. However, in many applications the resulting grasps should not only be stable but also applicable for a particular task. Task-specific grasps are closely linked to object categories, so that objects in the same category can often be used to perform the same task. This paper presents a probabilistic approach for task-specific stable grasping of objects with shape variations inside the category. An optimal grasp is found as a grasp that is maximally likely to be task compatible and stable, taking into account shape uncertainty in a probabilistic context. The method requires only partial models of new objects for grasp generation, and only a few models and example grasps are used during the training stage. The experiments show that the approach can use multiple models to generalize to new objects, in that it outperforms grasping based on the closest model. The method is shown to generate stable grasps for new objects belonging to the same class as well as for objects of different categories that are similar in shape.
Category-based task specific grasping
S0921889015000901
Personal Mobility Robots, such as the Segway, may be a remedy for transportation-related problems in congested environments, especially the first- and last-mile problems of elderly people. However, the vehicle segmentation issues for mobility robots impede the use of these devices on shared paths. These mobility robots can only be used in designated areas and private facilities. The traffic regulatory institutions lack a robot–society interaction database. In this study, we propose methods and algorithms that can be deployed on a widespread computing device, such as an Android tablet, to gather travel information and rider behavior using the motion and position sensors of the tablet PC. The methods we developed first filter the noisy sensor readings using a complementary filter, then align the body coordinate system of the device to the Segway’s motion coordinate frame. A couple of state-of-the-art classification methods are integrated to classify the braking states of the Segway. The classification algorithms are not limited to the braking states; they can also be used for other motion-related maneuvers on road surfaces. The detected braking states and the other classified motion-related features are displayed on the screen of the Android tablet to inform the rider about the riding and motion conditions. The developed Android application also gathers this travel information to build a national database for further statistical analysis of robot–society interaction.
A signal pattern recognition approach for mobile devices and its application to braking state classification on robotic mobility devices
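A small sketch of the two preprocessing ideas mentioned above: a complementary filter fusing accelerometer and gyroscope readings into a tilt estimate, followed by a crude threshold-based braking flag. The sampling rate, filter gain, threshold, and the synthetic ride are assumptions for illustration only, not the paper's classifiers.

```python
import numpy as np

def complementary_filter(acc_angle, gyro_rate, dt, alpha=0.98):
    """Fuse an accelerometer tilt estimate [rad] with an integrated gyro rate [rad/s]."""
    angle = acc_angle[0]
    out = []
    for a, w in zip(acc_angle, gyro_rate):
        # Trust the gyro for fast changes, the accelerometer for the long-term mean.
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return np.array(out)

# Synthetic ride segment: true pitch ramps up during braking, then settles.
dt = 0.01
t = np.arange(0, 5, dt)
true_pitch = 0.1 * np.clip(t - 2.0, 0, 1)                 # braking tilts the platform
acc = true_pitch + 0.05 * np.random.default_rng(3).standard_normal(t.size)
gyro = np.gradient(true_pitch, dt) + 0.01 * np.random.default_rng(4).standard_normal(t.size)

est = complementary_filter(acc, gyro, dt)

# A crude "braking state" label: sustained pitch change beyond a threshold.
braking = np.abs(np.gradient(est, dt)) > 0.05
print("fraction of samples flagged as braking:", float(braking.mean()))
```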
S0921889015001542
Daily life assistance is one of the most important applications for service robots. For comfortable assistance, service robots must recognize the surrounding conditions correctly, including human motion, the position of objects, and obstacles. However, since the everyday environment is complex and unpredictable, it is almost impossible to sense all of the necessary information using only a robot and sensors attached to it. In order to realize a service robot for daily life assistance, we have been developing an informationally structured environment using distributed sensors embedded in the environment. The present paper introduces a service robot system with an informationally structured environment referred to as the ROS–TMS. This system enables the integration of various data from distributed sensors, as well as storage of these data in an on-line database and the planning of the service motion of a robot using real-time information about the surroundings. In addition, we discuss experiments such as detection and fetch-and-give tasks using the developed real environment and robot.
Service robot system with an informationally structured environment
S092188901500216X
An interactive loop between motion recognition and motion generation is a fundamental mechanism for humans and humanoid robots. We have been developing an intelligent framework for motion recognition and generation based on symbolizing motion primitives. The motion primitives are encoded into Hidden Markov Models (HMMs), which we call “motion symbols”. However, to determine the motion primitives to use as training data for the HMMs, this framework requires a manual segmentation of human motions. Essentially, a humanoid robot is expected to participate in daily life and must learn many motion symbols to adapt to various situations. For this use, manual segmentation is cumbersome and impractical for humanoid robots. In this study, we propose a novel approach to segmentation, the Real-time Unsupervised Segmentation (RUS) method, which comprises three phases. In the first phase, short human movements are encoded into feature HMMs. Seamless human motion can be converted to a sequence of these feature HMMs. In the second phase, the causality between the feature HMMs is extracted. The causality data make it possible to predict movement from observation. In the third phase, movements having a large prediction uncertainty are designated as the boundaries of motion primitives. In this way, human whole-body motion can be segmented into a sequence of motion primitives. This paper also describes an application of RUS to AUtonomous Symbolization of motion primitives (AUS). Each derived motion primitive is classified into an HMM for a motion symbol, and parameters of the HMMs are optimized by using the motion primitives as training data in competitive learning. The HMMs are gradually optimized in such a way that the HMMs can abstract similar motion primitives. We tested the RUS and AUS frameworks on captured human whole-body motions and demonstrated the validity of the proposed framework.
Real-time Unsupervised Segmentation of human whole-body motion and its application to humanoid robot acquisition of motion symbols
S0921889015003000
Autonomous systems such as unmanned vehicles are beginning to operate within society. All participants in society are required to follow specific regulations and laws. An autonomous system cannot be an exception. Inevitably an autonomous system will find itself in a situation in which it needs to not only choose to obey a rule or not, but also make a complex ethical decision. However, there exists no obvious way to implement the human understanding of ethical behaviour in computers. Even if we enable autonomous systems to distinguish between more and less ethical alternatives, how can we be sure that they would choose right? We consider autonomous systems with a hybrid architecture in which the highest level of reasoning is executed by a rational (BDI) agent. For such a system, formal verification has been used successfully to prove that specific rules of behaviour are observed when making decisions. We propose a theoretical framework for ethical plan selection that can be formally verified. We implement a rational agent that incorporates a given ethical policy in its plan selection and show that we can formally verify that the agent chooses to execute, to the best of its beliefs, the most ethical available plan.
Formal verification of ethical choices in autonomous systems
S0921889015300889
In this paper, we look into the problem of loop closure detection in topological mapping. The bag of words (BoW) is a popular approach which is fast and easy to implement, but suffers from perceptual aliasing, primarily due to vector quantization. We propose to overcome this limitation by incorporating the spatial co-occurrence information directly into the dictionary itself. This is done by creating an additional dictionary comprising word pairs, which are formed by using a spatial neighborhood defined based on the scale size of each point feature. Since the word pairs are defined relative to the spatial location of each point feature, they exhibit a directional attribute, which is a new finding made in this paper. The proposed approach, called bag of word pairs (BoWP), uses relative spatial co-occurrence of words to overcome the limitations of conventional BoW methods. Unlike previous methods that use spatial arrangement only as a verification step, the proposed method incorporates spatial information directly at the detection level and thus influences all stages of decision making. The proposed BoWP method is implemented in an on-line fashion by incorporating several popular components, such as a K-D tree for storing and searching features, a Bayesian probabilistic framework for making decisions on loop closures, incremental creation of the dictionary, and RANSAC for confirming loop closure for the top candidate. Unlike previous methods, an incremental K-D tree implementation is used, which prevents rebuilding the tree for every incoming image and thereby reduces the per-image computation time considerably. Through experiments on standard datasets it is shown that the proposed method provides better recall performance than most of the existing methods. This improvement is achieved without making use of any geometric information obtained from range sensors or robot odometry. The computational requirements of the algorithm are comparable to those of BoW methods and are shown to be lower than those of the latest state-of-the-art method in this category.
High performance loop closure detection using bag of word pairs
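To make the word-pair idea above concrete, here is a toy sketch of building a bag-of-word-pairs histogram from quantized features using a scale-dependent spatial neighbourhood. The descriptors, scales, and the 3x-scale neighbourhood radius are invented for illustration; the real pipeline (SIFT/SURF features, incremental K-D tree, Bayesian decision, RANSAC verification) is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

def bowp_histogram(keypoints_xy, scales, descriptors, kmeans, n_words):
    """Bag-of-word-pairs histogram: each visual word is paired with the words of
    keypoints falling inside a neighbourhood scaled by the feature's own scale."""
    words = kmeans.predict(descriptors)
    hist = np.zeros(n_words * n_words)
    for i, (p, s) in enumerate(zip(keypoints_xy, scales)):
        d = np.linalg.norm(keypoints_xy - p, axis=1)
        for j in np.where((d > 0) & (d < 3.0 * s))[0]:    # scale-dependent neighbourhood
            hist[words[i] * n_words + words[j]] += 1.0     # ordered (centre, neighbour) pair
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Toy data standing in for local features extracted from two images.
n_words = 8
train_desc = rng.standard_normal((500, 32))
kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(train_desc)

def fake_image():
    m = rng.integers(40, 60)
    return rng.uniform(0, 640, (m, 2)), rng.uniform(2, 6, m), rng.standard_normal((m, 32))

h1 = bowp_histogram(*fake_image(), kmeans, n_words)
h2 = bowp_histogram(*fake_image(), kmeans, n_words)
print("BoWP similarity between the two images:", float(h1 @ h2))
```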
S0923596513001033
This paper introduces the theoretical framework allowing for the binary quantization index modulation (QIM) embedding techniques to be extended towards multiple-symbol QIM (m-QIM, where m stands for the number of symbols on which the mark is encoded prior to its embedding). The underlying detection method is optimized with respect to the minimization of the average error probability, under the hypothesis of white, additive Gaussian behavior for the attacks. This way, for prescribed transparency and robustness constraints, the data payload is increased by a factor of log2 m. m-QIM is experimentally validated under the frameworks of the MEDIEVALS French national project and of the SPY ITEA2 European project, related to MPEG-4 AVC robust and semi-fragile watermarking applications, respectively. The experiments are threefold and consider the data payload–robustness–transparency tradeoff. In the robust watermarking case, the main benefit is the increase of data payload by a factor of log2 m while keeping fixed robustness (variations lower than 3% of the bit error rate after additive noise, transcoding and Stirmark random bending attacks) and transparency (set to average PSNR=45dB and 65dB for SD and HD encoded content, respectively). The experiments consider 1h of video content. In the semi-fragile watermarking case, the m-QIM main advantage is a relative gain factor of 0.11 of PSNR for fixed robustness (against transcoding), fragility (to content alteration) and the data payload. The experiments consider 1h 20min of video content.
Multi-symbol QIM video watermarking
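A compact numerical sketch of the multi-symbol QIM principle described above: each host sample is quantized onto one of m interleaved lattices (one per symbol) and decoded by minimum distance under additive Gaussian noise. The quantization step, noise level, and host signal are arbitrary assumptions, and no video or MPEG-4 AVC handling is included.

```python
import numpy as np

def mqim_embed(x, symbols, m, delta):
    """Embed base-m symbols into host samples x by quantising each sample onto
    one of m interleaved lattices (shift = symbol * delta / m)."""
    shift = symbols * delta / m
    return np.round((x - shift) / delta) * delta + shift

def mqim_detect(y, m, delta):
    """Minimum-distance detection: pick the lattice whose nearest point is closest."""
    cands = np.arange(m) * delta / m                       # candidate shifts
    d = np.abs(y[:, None] - (np.round((y[:, None] - cands) / delta) * delta + cands))
    return np.argmin(d, axis=1)

rng = np.random.default_rng(6)
m, delta = 4, 8.0                                          # log2(m) = 2 bits per sample
host = rng.normal(0, 50, 10_000)                           # stand-in for transform coefficients
msg = rng.integers(0, m, host.size)

marked = mqim_embed(host, msg, m, delta)
noisy = marked + rng.normal(0, 0.8, host.size)             # additive Gaussian "attack"

decoded = mqim_detect(noisy, m, delta)
print("symbol error rate:", float(np.mean(decoded != msg)))
print("embedding distortion (RMS):", float(np.sqrt(np.mean((marked - host) ** 2))))
```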
S0923596514001490
This paper describes a recently created image database, TID2013, intended for evaluation of full-reference visual quality assessment metrics. With respect to TID2008, the new database contains a larger number (3000) of test images obtained from 25 reference images, 24 types of distortions for each reference image, and 5 levels for each type of distortion. Motivations for introducing 7 new types of distortions and one additional level of distortions are given; examples of distorted images are presented. Mean opinion scores (MOS) for the new database have been collected by performing 985 subjective experiments with volunteers (observers) from five countries (Finland, France, Italy, Ukraine, and USA). The availability of MOS allows the use of the designed database as a fundamental tool for assessing the effectiveness of visual quality metrics. Furthermore, existing visual quality metrics have been tested with the proposed database and the collected results have been analyzed using rank order correlation coefficients between MOS and considered metrics. These correlation indices have been obtained both considering the full set of distorted images and specific image subsets, for highlighting advantages and drawbacks of existing, state of the art, quality metrics. Approaches to thorough performance analysis for a given metric are presented to detect practical situations or distortion types for which this metric is not adequate enough to human perception. The created image database and the collected MOS values are freely available for downloading and utilization for scientific purposes.
Image database TID2013: Peculiarities, results and perspectives
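The evaluation protocol described above boils down to rank-order correlation between MOS and a metric's scores, computed overall and per distortion subset. The short sketch below shows that computation with SciPy; the MOS and metric values are simulated stand-ins, not TID2013 data.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

rng = np.random.default_rng(7)

# Stand-ins for TID2013-style data: one MOS value per distorted image and the
# scores of a candidate full-reference metric on the same images.
mos = rng.uniform(1, 9, 3000)
metric = 0.8 * mos + rng.normal(0, 1.0, mos.size)   # a metric loosely tracking MOS

srocc, _ = spearmanr(mos, metric)
krocc, _ = kendalltau(mos, metric)
print(f"SROCC = {srocc:.3f}, KROCC = {krocc:.3f}")

# Subset analysis (as done per distortion type): correlation on one slice of images.
subset = slice(0, 125)                              # e.g. the images of one distortion type
srocc_sub, _ = spearmanr(mos[subset], metric[subset])
print(f"SROCC on the subset = {srocc_sub:.3f}")
```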
S0923596514001854
The amount of image data generated each day in health care is ever increasing, especially in combination with the improved scanning resolutions and the importance of volumetric image data sets. Handling these images raises the requirement for efficient compression, archival and transmission techniques. Currently, JPEG 2000's core coding system, defined in Part 1, is the default choice for medical images as it is the DICOM-supported compression technique offering the best available performance for this type of data. Yet, JPEG 2000 provides many options that allow for further improving compression performance for which DICOM offers no guidelines. Moreover, over the last years, various studies seem to indicate that performance improvements in wavelet-based image coding are possible when employing directional transforms. In this paper, we thoroughly investigate techniques allowing for improving the performance of JPEG 2000 for volumetric medical image compression. For this purpose, we make use of a newly developed generic codec framework that supports JPEG 2000 with its volumetric extension (JP3D), various directional wavelet transforms as well as a generic intra-band prediction mode. A thorough objective investigation of the performance-complexity trade-offs offered by these techniques on medical data is carried out. Moreover, we provide a comparison of the presented techniques to H.265/MPEG-H HEVC, which is currently the most state-of-the-art video codec available. Additionally, we present results of a first time study on the subjective visual performance when using the aforementioned techniques. This enables us to provide a set of guidelines and settings on how to optimally compress medical volumetric images at an acceptable complexity level.
Wavelet based volumetric medical image compression
S0923596516300583
Visual security metrics are deterministic measures with the (claimed) ability to assess whether an encryption method for visual data does achieve its defined goal. These metrics are usually developed together with a particular encryption method in order to provide an evaluation of said method based on its visual output. However, visual security metrics themselves are rarely evaluated and the claim to perform as a visual security metric is not tied to the specific encryption method for which they were developed. In this paper, we introduce a methodology for assessing the performance of security metrics based on common media encryption scenarios. We systematically evaluate visual security metrics proposed in the literature, along with conventional image metrics which are frequently used for the same task. We show that they are generally not suitable to perform their claimed task.
Identifying deficits of visual security metrics for images
S092427161400241X
In this study, we compare three commonly used methods for hyperspectral image classification, namely Support Vector Machines (SVMs), Gaussian Processes (GPs) and the Spectral Angle Mapper (SAM). We assess their performance in combination with different kernels (i.e. which use distance-based and angle-based metrics). The assessment is done in two experiments, under ideal conditions in the laboratory and, separately, in the field (an operational open pit mine) using natural light. For both experiments independent training and test sets are used. Results show that GPs generally outperform the SVMs, irrespective of the kernel used. Furthermore, angle-based methods, including the Spectral Angle Mapper, outperform GPs and SVMs when using distance-based (i.e. stationary) kernels in the field experiment. A new GP method using an angle-based (i.e. a non-stationary) kernel – the Observation Angle Dependent (OAD) covariance function – outperforms SAM and SVMs in both experiments using only a small number of training spectra. These findings show that distance-based kernels are more affected by changes in illumination between the training and test set than are angular-based methods/kernels. Taken together, this study shows that independent training data can be used for classification of hyperspectral data in the field such as in open pit mines, by using Bayesian machine-learning methods and non-stationary kernels such as GPs and the OAD kernel. This provides a necessary component for automated classifications, such as autonomous mining where many images have to be classified without user interaction.
Evaluating the performance of a new classifier – the GP-OAD: A comparison with existing methods for classifying rock type and mineralogy from hyperspectral imagery
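Because the angle-based versus distance-based distinction is central to the abstract above, here is a minimal Spectral Angle Mapper classifier: each pixel spectrum gets the label of the reference spectrum with the smallest spectral angle, which is invariant to pure brightness scaling. The reference spectra and pixels are synthetic, and the GP-OAD kernel itself is not implemented here.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle between two spectra (radians); a small angle means similar material."""
    cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixels, library):
    """Assign each pixel spectrum the label of the library spectrum with the
    smallest spectral angle (the classic Spectral Angle Mapper rule)."""
    labels = list(library)
    angles = np.array([[spectral_angle(p, library[k]) for k in labels] for p in pixels])
    return [labels[i] for i in np.argmin(angles, axis=1)]

rng = np.random.default_rng(8)
bands = 50
library = {"shale": rng.uniform(0.2, 0.4, bands),
           "iron_ore": rng.uniform(0.5, 0.9, bands)}      # hypothetical reference spectra

# Field spectra: scaled and noisy copies of the references; a pure brightness change
# leaves the spectral angle untouched, which is why angle-based methods tolerate
# illumination differences better than distance-based kernels.
pixels = np.vstack([3.0 * library["shale"] + 0.01 * rng.standard_normal(bands),
                    0.5 * library["iron_ore"] + 0.01 * rng.standard_normal(bands)])
print(sam_classify(pixels, library))
```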
S0925231213010916
This paper introduces an efficient training algorithm for a dendrite morphological neural network (DMNN). Given p classes of patterns, C_k, k = 1, 2, …, p, the algorithm selects the patterns of all the classes and opens a hyper-cube HC^n (with n dimensions) with a size such that all the class elements remain inside HC^n. The size of HC^n can be chosen such that the border elements lie on some of the faces of HC^n, or it can be chosen larger. The latter choice makes the trained DMNN a very efficient classification machine in the presence of noise at testing time, as we will see later. In a second step, the algorithm divides HC^n into 2^n smaller hyper-cubes and verifies whether each hyper-cube encloses patterns of only one class. If this is the case, the learning process stops and the DMNN is designed. If at least one hyper-cube encloses patterns of more than one class, that hyper-cube is divided into 2^n smaller hyper-cubes. The verification process is repeated iteratively on each smaller hyper-cube until the stopping criterion is satisfied, at which point the DMNN is designed. The algorithm was tested on benchmark problems and its performance was compared against several reported algorithms, showing its superiority.
Efficient training for dendrite morphological neural networks
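A sketch of the hyper-cube subdivision at the heart of the training procedure above: enclose all patterns in one hyper-cube and recursively split into 2^n sub-cubes until each occupied cube is single-class. Deriving the actual dendrite weights from the resulting boxes is omitted, and the synthetic data, margin, and depth limit are assumptions.

```python
import numpy as np

def train_dmnn_boxes(X, y, margin=0.1, max_depth=8):
    """Recursively split the enclosing hyper-cube into 2^n sub-cubes until every
    occupied cube holds patterns of a single class; return (lo, hi, label) boxes."""
    lo = X.min(axis=0) - margin
    side = (X.max(axis=0) + margin - lo).max()   # one edge length so the region is a hyper-cube
    hi = lo + side
    boxes = []

    def split(lo, hi, idx, depth):
        labels = np.unique(y[idx])
        if len(labels) == 1 or depth >= max_depth:
            boxes.append((lo, hi, labels[0] if len(labels) == 1 else None))
            return
        mid = (lo + hi) / 2.0
        n = len(lo)
        for corner in range(2 ** n):                       # the 2^n sub-cubes
            bits = np.array([(corner >> k) & 1 for k in range(n)], dtype=bool)
            sub_lo = np.where(bits, mid, lo)
            sub_hi = np.where(bits, hi, mid)
            inside = idx[np.all((X[idx] >= sub_lo) & (X[idx] <= sub_hi), axis=1)]
            if inside.size:
                split(sub_lo, sub_hi, inside, depth + 1)

    split(lo, hi, np.arange(len(X)), 0)
    return boxes

def classify(x, boxes):
    for lo, hi, label in boxes:
        if label is not None and np.all(x >= lo) and np.all(x <= hi):
            return label
    return None

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
boxes = train_dmnn_boxes(X, y)
print("hyper-boxes learned:", len(boxes), "| sample (3, 3) ->", classify(np.array([3.0, 3.0]), boxes))
```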
S0925231214000265
The tensor completion problem is to recover a low-n-rank tensor from a subset of its entries. The main solution strategy has been based on extensions of the trace norm for the minimization of tensor rank via convex optimization. This strategy bears the computational cost required by the singular value decomposition (SVD), which becomes increasingly expensive as the size of the underlying tensor increases. In order to reduce the computational cost, we propose a multi-linear low-n-rank factorization model and apply the nonlinear Gauss–Seidel method, which only requires solving a linear least squares problem per iteration, to solve this model. Numerical results show that the proposed algorithm can reliably solve a wide range of problems at least several times faster than the trace norm minimization algorithm.
Tensor completion via a multi-linear low-n-rank factorization model
S0925231216001569
The effect of voluntary and involuntary eyeblinks in independent components (ICs) contributing to electroencephalographic (EEG) signals was assessed to create templates for eyeblink artifact rejection from EEG signals recorded with a small number of electrodes. Fourteen EEG signals and one vertical electrooculographic signal were recorded from twenty subjects during experiments that prompted subjects to blink voluntarily and involuntarily. Wavelet-enhanced independent component analysis with two markers was employed as a feature extraction scheme to investigate the effects of eyeblinks in ICs of EEG signals. Extracted features were separated into epochs and analyzed. This paper presents the following characteristics: (i) voluntary and involuntary eyeblink features obtained from all channels present significant differences in the delta band; (ii) distorting effects persist for 3.0–4.0 s (2.0 s in the occipital region); and (iii) eyeblink effects cease after the signal crosses zero four times (two times in the occipital region), regardless of the blink type. Several characteristics differ between voluntary and involuntary eyeblinks in EEG signals. Therefore, any templates for eyeblink artifact rejection need both types of data if the EEG signals are obtained from a small number of electrodes.
Assessing the effects of voluntary and involuntary eyeblinks in independent components of electroencephalogram
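A simplified sketch of ICA-based eyeblink removal on synthetic multichannel data, using plain FastICA rather than the wavelet-enhanced variant with markers used in the study; the channel mixing, blink shape, and the component-selection rule are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(10)
fs, seconds = 250, 10
t = np.arange(fs * seconds) / fs

# Synthetic sources: ongoing EEG-like activity plus a slow, large eyeblink transient.
eeg1 = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)   # alpha-like
eeg2 = np.sin(2 * np.pi * 4 * t) + 0.3 * rng.standard_normal(t.size)    # theta-like
blink = np.zeros_like(t)
for onset in (2.0, 5.5, 8.0):                                            # three blinks
    idx = (t > onset) & (t < onset + 0.4)
    blink[idx] = np.hanning(idx.sum()) * 8.0

# Mix the sources into a few "electrodes" (frontal channel 0 sees the blink most).
A = np.array([[1.0, 0.4, 2.0],
              [0.6, 1.0, 1.5],
              [0.8, 0.9, 0.2]])
channels = np.column_stack([eeg1, eeg2, blink]) @ A.T

# ICA unmixing; pick the component most correlated with the frontal channel
# (in practice a vertical EOG channel or a blink template would be used instead).
ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(channels)
blink_ic = int(np.argmax([abs(np.corrcoef(s, channels[:, 0])[0, 1]) for s in sources.T]))

# Artifact rejection: zero the blink component and project back to channel space.
sources_clean = sources.copy()
sources_clean[:, blink_ic] = 0.0
cleaned = ica.inverse_transform(sources_clean)
print("blink IC index:", blink_ic, "| cleaned shape:", cleaned.shape)
```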
S0925772115000553
We show that the survivable bottleneck Steiner tree problem in normed planes can be solved in polynomial time when the number of Steiner points is constant. This is a fundamental problem in wireless ad-hoc network design where the objective is to design networks with efficient routing topologies. Our result holds for a general definition of survivability and for any norm whose ball is specified by a piecewise algebraic curve of bounded degree and with a bounded number of pieces. In particular, under the Euclidean and rectilinear norms our algorithm constructs an optimal solution in O(n^{2k+3} log n) steps, where n is the number of terminals and k is the number of Steiner points. Our algorithm is based on the construction of generalised Voronoi diagrams and relative neighbourhood graphs.
Survivable minimum bottleneck networks
S0925772116300104
In this paper, we address the problem of covering a given set of points on the plane with minimum and/or maximum area orthogonally convex polygons. It is known that the number of possible orthogonally convex polygon covers can be exponential in the number of input points. We propose, for the first time, an O(n^2) algorithm to construct either the maximum or the minimum area orthogonally convex polygon if it exists, or else report its non-existence in O(n log n) time.
Covering points with minimum/maximum area orthogonally convex polygons
S0933365714000669
Objective We address the task of extracting information from free-text pathology reports, focusing on staging information encoded by the TNM (tumour-node-metastases) and ACPS (Australian clinico-pathological stage) systems. Staging information is critical for diagnosing the extent of cancer in a patient and for planning individualised treatment. Extracting such information into more structured form saves time, improves reporting, and underpins the potential for automated decision support. Methods and material We investigate the portability of a text mining model constructed from records from one health centre, by applying it directly to the extraction task over a set of records from a different health centre, with different reporting narrative characteristics. Other than a simple normalisation step on features associated with target labels, we apply the models from one system directly to the other. Results The best F-scores for in-hospital experiments are 81%, 85%, and 94% (for staging T, N, and M respectively), while best cross-hospital F-scores reach 84%, 81%, and 91% for the same respective categories. Conclusions Our performance results compare favourably to the best levels reported in the literature, and—most relevant to our aim here—the cross-corpus results demonstrate the portability of the models we developed.
Cross-hospital portability of information extraction of cancer staging information
S0933365714000682
Objective This paper identifies and reviews ethical issues associated with artificial intelligent care providers (AICPs) in mental health care and other helping professions. Specific recommendations are made for the development of ethical codes, guidelines, and the design of AICPs. Methods Current developments in the application of AICPs and associated technologies are reviewed and a foundational overview of applicable ethical principles in mental health care is provided. Emerging ethical issues regarding the use of AICPs are then reviewed in detail. Recommendations for ethical codes and guidelines as well as for the development of semi-autonomous and autonomous AICP systems are described. The benefits of AICPs and implications for the helping professions are discussed in order to weigh the pros and cons of their use. Results Existing ethics codes and practice guidelines do not presently consider the current or the future use of interactive artificial intelligent agents to assist and to potentially replace mental health care professionals. AICPs present new ethical issues that will have significant ramifications for the mental health care and other helping professions. Primary issues involve the therapeutic relationship, competence, liability, trust, privacy, and patient safety. Many of the same ethical and philosophical considerations are applicable to use and design of AICPs in medicine, nursing, social work, education, and ministry. Conclusion The ethical and moral aspects regarding the use of AICP systems must be well thought-out today as this will help to guide the use and development of these systems in the future. Topics presented are relevant to end users, AI developers, and researchers, as well as policy makers and regulatory boards.
Recommendations for the ethical use and design of artificial intelligent care providers
S0933365714000815
Objectives Process model comparison and similar process retrieval is a key issue to be addressed in many real-world situations, and a particularly relevant one in medical applications, where similarity quantification can be exploited to accomplish goals such as conformance checking, local process adaptation analysis, and hospital ranking. In this paper, we present a framework that allows the user to: (i) mine the actual process model from a database of process execution traces available at a given hospital; and (ii) compare (mined) process models. The tool is currently being applied in stroke management. Methods Our framework relies on process mining to extract process-related information (i.e., process models) from data. As for process comparison, we have modified a state-of-the-art structural similarity metric by exploiting: (i) domain knowledge; (ii) process mining outputs and statistical temporal information. These changes were meant to make the metric more suited to the medical domain. Results Experimental results showed that our metric outperforms the original one, and generated output closer to that provided by a stroke management expert. In particular, our metric correctly rated 11 out of 15 mined hospital models with respect to a given query. On the other hand, the original metric correctly rated only 7 out of 15 models. The experiments also showed that the framework can support stroke management experts in answering key research questions: in particular, average patient improvement decreased as the distance (according to our metric) from the top level hospital process model increased. Conclusions The paper shows that process mining and process comparison, through a similarity metric tailored to medical applications, can be applied successfully to clinical data to gain a better understanding of different medical processes adopted by different hospitals, and of their impact on clinical outcomes. In the future, we plan to make our metric even more general and efficient, by explicitly considering various methodological and technological extensions. We will also test the framework in different domains.
Improving structural medical process comparison by exploiting domain knowledge and mined information
S0933365714000827
Objective Existing bioinformatics databases such as KEGG (Kyoto Encyclopedia of Genes and Genomes) provide a wealth of information. However, they generally lack a user-friendly and interactive interface. Methodology The study proposes a web service system for exploring the contents of the KEGG database in an intuitive and interactive manner. In the proposed system, the requested pathways are uploaded from the KEGG database and are converted from a static format into an interactive format such that their contents can be more readily explored. The system supports two basic functions, namely an exhaustive search for all possible reaction paths between two specified genes in a biological pathway, and the identification of similar reaction sequences in different biological pathways. Results The feasibility of the proposed system is evaluated by means of an initial pilot study involving 10 students with varying degrees of experience of the KEGG website and its operations. The results indicate that the system provides a useful learning tool for investigating biological pathways. Conclusion A system is proposed for converting the static pathway maps in KEGG into interactive maps such that they can be explored at will. The results of a preliminary trial confirm that the system is straightforward to use and provides a versatile and effective tool for examining and comparing biological pathways.
Interactive web service system for exploration of biological pathways
S0933365714000840
Objective Anemia is a frequent comorbidity in hemodialysis patients that can be successfully treated by administering erythropoiesis-stimulating agents (ESAs). ESAs dosing is currently based on clinical protocols that often do not account for the high inter- and intra-individual variability in the patient's response. As a result, the hemoglobin level of some patients oscillates around the target range, which is associated with multiple risks and side-effects. This work proposes a methodology based on reinforcement learning (RL) to optimize ESA therapy. Methods RL is a data-driven approach for solving sequential decision-making problems that are formulated as Markov decision processes (MDPs). Computing optimal drug administration strategies for chronic diseases is a sequential decision-making problem in which the goal is to find the best sequence of drug doses. MDPs are particularly suitable for modeling these problems due to their ability to capture the uncertainty associated with the outcome of the treatment and the stochastic nature of the underlying process. The RL algorithm employed in the proposed methodology is fitted Q iteration (FQI), which stands out for its ability to make an efficient use of data. Results The experiments reported here are based on a computational model that describes the effect of ESAs on the hemoglobin level. The performance of the proposed method is evaluated and compared with the well-known Q-learning algorithm and with a standard protocol. Simulation results show that the performance of Q-learning is substantially lower than that of FQI and the protocol. When comparing FQI and the protocol, FQI achieves an increment of 27.6% in the proportion of patients that are within the targeted range of hemoglobin during the period of treatment. In addition, the quantity of drug needed is reduced by 5.13%, which indicates a more efficient use of ESAs. Conclusion Although prospective validation is required, promising results demonstrate the potential of RL to become an alternative to current protocols.
Optimization of anemia treatment in hemodialysis patients via reinforcement learning
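Since fitted Q iteration is the core algorithm of the abstract above, here is a toy FQI loop: batch transitions are regressed onto bootstrapped Q-targets with an ensemble regressor, and a greedy dose is read out at the end. The one-dimensional hemoglobin response model, the dose set, and the reward are invented stand-ins, not the paper's patient model.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(11)

# Toy stand-in for the anemia problem: state = hemoglobin level (g/dL),
# actions = discrete ESA doses, reward = closeness to a target level.
ACTIONS = np.array([0.0, 0.5, 1.0, 1.5])          # hypothetical dose levels
TARGET = 11.5
GAMMA = 0.9

def simulate_step(hb, dose):
    """Very rough response model: a dose raises Hb, Hb drifts down otherwise."""
    next_hb = hb + 0.8 * dose - 0.4 + 0.2 * rng.standard_normal()
    reward = -abs(next_hb - TARGET)
    return next_hb, reward

# Batch of one-step transitions (s, a, r, s'), since FQI is a batch-mode algorithm.
S = rng.uniform(8, 15, 2000)
A = rng.choice(ACTIONS, 2000)
NS, R = zip(*(simulate_step(s, a) for s, a in zip(S, A)))
NS, R = np.array(NS), np.array(R)

# Fitted Q iteration: regress Q_k(s, a) on r + gamma * max_a' Q_{k-1}(s', a').
X = np.column_stack([S, A])
q = None
for _ in range(30):
    if q is None:
        target = R
    else:
        q_next = np.column_stack(
            [q.predict(np.column_stack([NS, np.full_like(NS, a)])) for a in ACTIONS])
        target = R + GAMMA * q_next.max(axis=1)
    q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, target)

# Greedy policy read-out: recommended dose for a given hemoglobin level.
hb = 9.5
values = [q.predict([[hb, a]])[0] for a in ACTIONS]
print("recommended dose at Hb = 9.5:", ACTIONS[int(np.argmax(values))])
```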
S0933365714000852
Objectives Extracting useful visual clues from medical images allowing accurate diagnoses requires physicians’ domain knowledge acquired through years of systematic study and clinical training. This is especially true in the dermatology domain, a medical specialty that requires physicians to have image inspection experience. Automating or at least aiding such efforts requires understanding physicians’ reasoning processes and their use of domain knowledge. Mining physicians’ references to medical concepts in narratives during image-based diagnosis of a disease is an interesting research topic that can help reveal experts’ reasoning processes. It can also be a useful resource to assist with design of information technologies for image use and for image case-based medical education systems. Methods and materials We collected data for analyzing physicians’ diagnostic reasoning processes by conducting an experiment that recorded their spoken descriptions during inspection of dermatology images. In this paper we focus on the benefit of physicians’ spoken descriptions and provide a general workflow for mining medical domain knowledge based on linguistic data from these narratives. The challenge of a medical image case can influence the accuracy of the diagnosis as well as how physicians pursue the diagnostic process. Accordingly, we define two lexical metrics for physicians’ narratives—lexical consensus score and top N relatedness score—and evaluate their usefulness by assessing the diagnostic challenge levels of corresponding medical images. We also report on clustering medical images based on anchor concepts obtained from physicians’ medical term usage. These analyses are based on physicians’ spoken narratives that have been preprocessed by incorporating the Unified Medical Language System for detecting medical concepts. Results The image rankings based on lexical consensus score and on top 1 relatedness score are well correlated with those based on challenge levels (Spearman correlation >0.5 and Kendall correlation >0.4). Clustering results are largely improved based on our anchor concept method (accuracy >70% and mutual information >80%). Conclusions Physicians’ spoken narratives are valuable for the purpose of mining the domain knowledge that physicians use in medical image inspections. We also show that the semantic metrics introduced in the paper can be successfully applied to medical image understanding and allow discussion of additional uses of these metrics.
From spoken narratives to domain knowledge: Mining linguistic data for medical image understanding
S0933365714000864
Objective Clinicians’ attention is a precious resource, which in the current healthcare practice is consumed by the cognitive demands arising from complex patient conditions, information overload, time pressure, and the need to aggregate and synthesize information from disparate sources. The ability to organize information in ways that facilitate the generation of effective diagnostic solutions is a distinguishing characteristic of expert physicians, suggesting that automated systems that organize clinical information in a similar manner may augment physicians’ decision-making capabilities. In this paper, we describe the design and evaluation of a theoretically driven cognitive support system (CSS) that assists psychiatrists in their interpretation of clinical cases. The system highlights, and provides the means to navigate to, text that is organized in accordance with a set of diagnostically and therapeutically meaningful higher-level concepts. Methods and materials To evaluate the interface, 16 psychiatry residents interpreted two clinical case scenarios, with and without the CSS. Think-aloud protocols captured during their interpretation of the cases were transcribed and analyzed qualitatively. In addition, the frequency and relative position of content related to key higher-level concepts in a verbal summary of the case were evaluated. In addition the transcripts from both groups were compared to an expert derived reference standard using latent semantic analysis (LSA). Results Qualitative analysis showed that users of the system better attended to specific clinically important aspects of both cases when these were highlighted by the system, and revealed ways in which the system mediates hypotheses generation and evaluation. Analysis of the summary data showed differences in emphasis with and without the system. The LSA analysis suggested users of the system were more “expert-like” in their emphasis, and that cognitive support was more effective in the more complex case. Conclusions Cognitive support impacts upon clinical comprehension. This appears to be largely helpful, but may also lead to neglect of information (such as the psychosocial history) that the system does not highlight. The results have implications for the design of CSSs for clinical narratives including the role of information organization and textual embellishments for more efficient clinical case presentation and comprehension.
Evaluating the effects of cognitive support on psychiatric clinical comprehension
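The latent semantic analysis (LSA) comparison mentioned above can be sketched as follows: transcripts and an expert-derived reference are embedded in a low-rank TF-IDF space and compared by cosine similarity. The corpus snippets and component count below are placeholders, not the study's data or settings.

# LSA-style similarity of participant transcripts to an expert reference summary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

transcripts = ["patient reports low mood and poor sleep over several weeks",
               "history of psychosis, currently on medication, family supportive"]
expert_reference = "expert summary emphasising risk, psychosocial history and medication"

docs = transcripts + [expert_reference]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
lsa = TruncatedSVD(n_components=2).fit_transform(tfidf)   # low-rank semantic space

ref = lsa[-1].reshape(1, -1)                              # expert reference vector
scores = cosine_similarity(lsa[:-1], ref).ravel()         # one score per transcript
print(scores)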
S0933365714000980
Background In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype–phenotype interrelation is possible. However, determining correct genotype–phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. Methods The proposed methodology, visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype–phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. Results The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each of comprised 5–9 cases, i.e., 92 cases in total. An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p <0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Conclusion Our results show that the accurate classification of syndromes is feasible using ML techniques. Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with similar diagnostic DSSs to that described in the present study, i.e., visual diagnostic DSS, thereby demonstrating the benefits of using hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes.
Biomedical visual data analysis to build an intelligent diagnostic decision support system in medical genetics
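A hedged sketch of the evaluation protocol described above: principal component features extracted from already vectorized face images, a simple classifier, and leave-one-out cross-validation. The classifier choice, component count and synthetic data are assumptions; the paper's own pipeline may differ.

# PCA features + nearest-neighbour classification under leave-one-out CV.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline

def loo_accuracy(X, y, n_components=10):
    model = make_pipeline(PCA(n_components=n_components),
                          KNeighborsClassifier(n_neighbors=1))
    hits = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        model.fit(X[train_idx], y[train_idx])
        hits += int(model.predict(X[test_idx])[0] == y[test_idx][0])
    return hits / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50))                 # placeholder flattened face images
y = rng.integers(0, 4, size=60)               # placeholder syndrome labels
print(loo_accuracy(X, y))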
S0933365714001018
This study explores whether agent-based modeling and simulation can help healthcare administrators discover interventions that increase population wellness and quality of care while, simultaneously, decreasing costs. Since important dynamics often lie in the social determinants outside the health facilities that provide services, the study models the problem at three levels (individuals, organizations, and society). It explores the utility of translating existing (prize-winning) software for modeling complex societal systems and agents' daily life activities (in the style of Sim City) into the desired decision support system. A case study tests whether the three-level system modeling approach is feasible, valid, and useful. The case study involves an urban population with serious mental health conditions, and Philadelphia's Medicaid population (n = 527,056) in particular. Section 3 explains the models using data from the case study and thereby establishes the feasibility of the approach for modeling a real system. The models were trained and tuned using national epidemiologic datasets and various domain expert inputs. To avoid co-mingling of training and testing data, the simulations were then run and compared (Section 4.1) to an analysis of 250,000 Philadelphia patient hospital admissions for the year 2010 in terms of re-hospitalization rate, number of doctor visits, and days in hospital. Based on Student's t-test, deviations between simulated and real-world outcomes are not statistically significant. Validity is thus established for the 2008–2010 timeframe. We computed models of various types of interventions that were ineffective, as well as 4 categories of interventions (e.g., reduced per-nurse caseload, increased check-ins and stays) that result in improvements in well-being and cost. The three-level approach thus appears useful for helping health administrators sort through system complexities to find effective interventions at lower cost.
A systems approach to healthcare: Agent-based modeling, community mental health, and population well-being
S0933365714001031
Introduction The counting and classification of blood cells allow for the evaluation and diagnosis of a vast number of diseases. The analysis of white blood cells (WBCs) allows for the detection of acute lymphoblastic leukaemia (ALL), a blood cancer that can be fatal if left untreated. Currently, the morphological analysis of blood cells is performed manually by skilled operators. However, this method has numerous drawbacks, such as slow analysis, non-standard accuracy, and dependence on the operator's skill. Few examples of automated systems that can analyse and classify blood cells have been reported in the literature, and most of these systems are only partially developed. This paper presents a complete and fully automated method for WBC identification and classification using microscopic images. Methods In contrast to other approaches that identify the nuclei first, which are more prominent than other components, the proposed approach isolates the whole leucocyte and then separates the nucleus and cytoplasm. This approach is necessary to analyse each cell component in detail. From each cell component, different features, such as shape, colour and texture, are extracted using a new approach for background pixel removal. This feature set was used to train different classification models in order to determine which one is most suitable for the detection of leukaemia. Results Using our method, 245 of 267 total leucocytes were properly identified (92% accuracy) from 33 images taken with the same camera and under the same lighting conditions. Performing this evaluation using different classification models allowed us to establish that the support vector machine with a Gaussian radial basis kernel is the most suitable model for the identification of ALL, with an accuracy of 93% and a sensitivity of 98%. Furthermore, we evaluated the goodness of our new feature set, which displayed better performance with each evaluated classification model. Conclusions The proposed method permits the analysis of blood cells automatically via image processing techniques, and it represents a medical tool to avoid the numerous drawbacks associated with manual observation. This process could also be used for counting, as it provides excellent performance and allows for early diagnostic suspicion, which can then be confirmed by a haematologist through specialised techniques.
Leucocyte classification for leukaemia detection using image processing techniques
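The classification stage reported as best above (an SVM with a Gaussian radial basis kernel) can be sketched as follows; feature extraction from the microscope images is not shown, and the synthetic feature matrix is a placeholder.

# RBF-kernel SVM over per-leucocyte shape/colour/texture feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                      # placeholder feature vectors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)       # placeholder labels: 1 = lymphoblast

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())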
S0933365714001043
Objective This paper describes NICeSim, an open-source simulator that uses machine learning (ML) techniques to aid health professionals in better understanding the treatment and prognosis of premature newborns. Methods The application was developed and tested using data collected in a Brazilian hospital. The available data were used to feed an ML pipeline that was designed to create a simulator capable of predicting the outcome (death probability) for newborns admitted to neonatal intensive care units. However, unlike previous scoring systems, our computational tool is not intended to be used at the patient's bedside, although this is possible. Our primary goal is to deliver a computational system to aid medical research in understanding the correlation of key variables with the studied outcome so that new standards can be established for future clinical decisions. In the implemented simulation environment, the values of key attributes can be changed using a user-friendly interface, where the impact of each change on the outcome is immediately reported, allowing a quantitative analysis, in addition to a qualitative investigation, and delivering a totally interactive computational tool that facilitates hypothesis construction and testing. Results Our statistical experiments showed that the resulting model for death prediction could achieve an accuracy of 86.7% and an area under the receiver operating characteristic curve of 0.84 for the positive class. Using this model, three physicians and a neonatal nutritionist performed simulations with key variables correlated with chance of death. The results indicated important tendencies for the effect of each variable and the combination of variables on prognosis. We could also observe values of gestational age and birth weight for which a low Apgar score and the occurrence of respiratory distress syndrome (RDS) could be less or more severe. For instance, we have noticed that for a newborn weighing 2000 g or more the occurrence of RDS is far less problematic than for neonates weighing less. Conclusions The significant accuracy demonstrated by our predictive model shows that NICeSim might be used for hypothesis testing to minimize in vivo experiments. We observed that the model delivers predictions that are in very good agreement with the literature, demonstrating that NICeSim might be an important tool for supporting decision making in medical practice. Other very important characteristics of NICeSim are its flexibility and dynamism. NICeSim is flexible because it allows the inclusion and deletion of variables according to the requirements of a particular study. It is also dynamic because it trains a just-in-time model. Therefore, the system is improved as data from new patients become available. Finally, NICeSim can be extended in a cooperative manner because it is an open-source system.
NICeSim: An open-source simulator based on machine learning techniques to support medical research on prenatal and perinatal care decision making
S0933365714001055
Objective Intelligent recognition of electroencephalogram (EEG) signals is an important means for epilepsy detection. Almost all conventional intelligent recognition methods assume that the training and testing data of EEG signals have identical distribution. However, this assumption may indeed be invalid for practical applications due to differences in distributions between the training and testing data, making the conventional epilepsy detection algorithms not feasible under such situations. In order to overcome this problem, we proposed a transfer-learning-based intelligent recognition method for epilepsy detection. Methods We used the large-margin-projected transductive support vector machine method (LMPROJ) to learn the useful knowledge between the training domain and testing domain by calculating the maximal mean discrepancy. The method can effectively learn a model for the testing data with training data of different distributions, thereby relaxing the constraint that the data distribution in the training and testing samples should be identical. Results The experimental validation is performed over six datasets of electroencephalogram signals with three feature extraction methods. The proposed LMPROJ-based transfer learning method was compared with five conventional classification methods. For the datasets with identical distribution, the performance of these six classification methods was comparable. They all could achieve an accuracy of 90%. However, the LMPROJ method obviously outperformed the five conventional methods for experimental datasets with different distribution between the training and test data. Regardless of the feature extraction method applied, the mean classification accuracy of the proposed method is above 93%, which is greater than that of the other five methods with statistical significance. Conclusion The proposed transfer-learning-based method has better classification accuracy and adaptability than the conventional methods in classifying EEG signals for epilepsy detection.
Transductive domain adaptive learning for epileptic electroencephalogram recognition
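The distribution gap that LMPROJ-style transfer learning addresses is quantified above by the maximal mean discrepancy (MMD). The sketch below computes an empirical RBF-kernel MMD between source and target EEG feature sets; the kernel width and synthetic data are illustrative, and the LMPROJ classifier itself is not reproduced.

# Empirical (biased) MMD^2 estimate between two feature sets with an RBF kernel.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(Xs, Xt, sigma=1.0):
    Kss = rbf_kernel(Xs, Xs, sigma)
    Ktt = rbf_kernel(Xt, Xt, sigma)
    Kst = rbf_kernel(Xs, Xt, sigma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 8))   # source-domain EEG features
Xt = rng.normal(0.5, 1.2, size=(100, 8))   # shifted target-domain features
print(mmd2(Xs, Xt))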
S0933365714001067
Objective Intra-axiom redundancies are elements of concept definitions that are redundant as they are entailed by other elements of the concept definition. While such redundancies are harmless from a logical point of view, they make concept definitions hard to maintain, and they might lead to content-related problems when concepts evolve. The objective of this study is to develop a fully automated method to detect intra-axiom redundancies in OWL 2 EL and apply it to SNOMED Clinical Terms (SNOMED CT). Materials and methods We developed a software program in which we implemented, adapted and extended existing rules for redundancy elimination. With this, we analysed the occurrence of redundancy in 11 releases of SNOMED CT (January 2009 to January 2014). We used the ELK reasoner to classify SNOMED CT, and Pellet for explanation of equivalence. We analysed the completeness and soundness of the results by an in-depth examination of the identified redundant elements in the July 2012 release of SNOMED CT. To determine whether concepts with redundant elements lead to maintenance issues, we analysed a small sample of solved redundancies. Results Analyses showed that the number of redundantly defined concepts in SNOMED CT is consistently around 35,000. In the July 2012 version of SNOMED CT, 35,010 (12%) of the 296,433 concepts contained redundant elements in their definitions. The results of applying our method are sound and complete with respect to our evaluation. Analysis of solved redundancies suggests that redundancies in concept definitions lead to inadequate maintenance of SNOMED CT. Conclusions Our analysis revealed that redundant elements are continuously introduced and removed, and that redundant elements may be overlooked when concept definitions are corrected. Applying our redundancy detection method to remove intra-axiom redundancies from the stated form of SNOMED CT and to point knowledge modellers to newly introduced redundancies can support creating and maintaining a redundancy-free version of SNOMED CT.
Intra-axiom redundancies in SNOMED CT
S0933365714001079
Objective Taking into account patients’ preferences has become an essential requirement in health decision-making. Even in evidence-based settings where directions are summarized into clinical practice guidelines, there might exist situations where it is important for the care provider to involve the patient in the decision. In this paper we propose a unified framework to promote the shift from a traditional, physician-centered, clinical decision process to a more personalized, patient-oriented shared decision-making (SDM) environment. Methods We present the theoretical, technological and architectural aspects of a framework that encapsulates decision models and instruments to elicit patients’ preferences into a single tool, thus enabling physicians to exploit evidence-based medicine and shared decision-making in the same encounter. Results We show the implementation of the framework in a specific case study related to the prevention and management of the risk of thromboembolism in atrial fibrillation. We describe the underlying decision model and how this can be personalized according to patients’ preferences. The application of the framework is tested through a pilot clinical evaluation study carried out on 20 patients at the Rehabilitation Cardiology Unit at the IRCCS Fondazione Salvatore Maugeri hospital (Pavia, Italy). The results point out the importance of running personalized decision models, which can substantially differ from models quantified with population coefficients. Conclusions This study shows that the tool is potentially able to overcome some of the main barriers perceived by physicians in the adoption of SDM. In parallel, the development of the framework increases the involvement of patients in the process of care focusing on the centrality of individual patients.
From decision to shared-decision: Introducing patients’ preferences into clinical decision analysis
S0933365714001092
Objective Errors in the delivery of medical care are the principal cause of inpatient mortality and morbidity, accounting for around 98,000 deaths in the United States of America (USA) annually. Ineffective team communication, especially in the operation room (OR), is a major root of these errors. This miscommunication can be reduced by analyzing and constructing a conceptual model of communication and miscommunication in the OR. We introduce the principles underlying Object-Process Methodology (OPM)-based modeling of the intricate interactions between the surgeon and the surgical technician while handling surgical instruments in the OR. This model is a software- and hardware-independent description of the agents engaged in communication events, their physical activities, and their interactions. The model enables assessing whether the task-related objectives of the surgical procedure were achieved and completed successfully and what errors can occur during the communication. Methods and material The facts used to construct the model were gathered from observations of various types of operations miscommunications in the operating room and its outcomes. The model takes advantage of the compact ontology of OPM, which is comprised of stateful objects – things that exist physically or informatically, and processes – things that transform objects by creating them, consuming them or changing their state. The modeled communication modalities are verbal and non-verbal, and errors are modeled as processes that deviate from the “sunny day” scenario. Using OPM refinement mechanism of in-zooming, key processes are drilled into and elaborated, along with the objects that are required as agents or instruments, or objects that these processes transform. The model was developed through an iterative process of observation, modeling, group discussions, and simplification. Results The model faithfully represents the processes related to tool handling that take place in an OR during an operation. The specification is at various levels of detail, each level is depicted in a separate diagram, and all the diagrams are “aware” of each other as part of the whole model. Providing ontology of verbal and non-verbal modalities of communication in the OR, the resulting conceptual model is a solid basis for analyzing and understanding the source of the large variety of errors occurring in the course of an operation, providing an opportunity to decrease the quantity and severity of mistakes related to the use and misuse of surgical instrumentations. Since the model is event driven, rather than person driven, the focus is on the factors causing the errors, rather than the specific person. This approach advocates searching for technological solutions to alleviate tool-related errors rather than finger-pointing. Concretely, the model was validated through a structured questionnaire and it was found that surgeons agreed that the conceptual model was flexible (3.8 of 5, std=0.69), accurate, and it generalizable (3.7 of 5, std=0.37 and 3.7 of 5, std=0.85, respectively). Conclusion The detailed conceptual model of the tools handling subsystem of the operation performed in an OR focuses on the details of the communication and the interactions taking place between the surgeon and the surgical technician during an operation, with the objective of pinpointing the exact circumstances in which errors can happen. 
Exact and concise specification of the communication events in general and the surgical instrument requests in particular is a prerequisite for a methodical analysis of the various modes of errors and the circumstances under which they occur. This has significant potential value in both reduction in tool-handling-related errors during an operation and providing a solid formal basis for designing a cybernetic agent which can replace a surgical technician in routine tool handling activities during an operation, freeing the technician to focus on quality assurance, monitoring and control of the cybernetic agent activities. This is a critical step in designing the next generation of cybernetic OR assistants.
Operation room tool handling and miscommunication scenarios: An object-process methodology conceptual model
S0933365714001225
Objective Surgery is one of the riskiest and most important medical acts that is performed today. Understanding the ways in which surgeries are similar or different from each other is of major interest. Desires to improve patient outcomes and surgeon training, and to reduce the costs of surgery, all motivate a better understanding of surgical practices. To facilitate this, surgeons have started recording the activities that are performed during surgery. New methods have to be developed to be able to make the most of this extremely rich and complex data. The objective of this work is to enable the simultaneous comparison of a set of surgeries, in order to be able to extract high-level information about surgical practices. Materials and method We introduce non-linear temporal scaling (NLTS): a method that finds a multiple alignment of a set of surgeries. Experiments are carried out on a set of lumbar disc neurosurgeries. We assess our method both on a highly standardised phase of the surgery (closure) and on the whole surgery. Results Experiments show that NLTS makes it possible to consistently derive standards of surgical practice and to understand differences between groups of surgeries. We take the training of surgeons as the common theme for the evaluation of the results and highlight, for example, the main differences between the practices of junior and senior surgeons in the removal of a lumbar disc herniation. Conclusions NLTS is an effective and efficient method to find a multiple alignment of a set of surgeries. NLTS realigns a set of sequences along their intrinsic timeline, which makes it possible to extract standards of surgical practices.
Non-linear temporal scaling of surgical processes
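NLTS itself is not specified in the abstract above; as a related building block, the sketch below shows a pairwise dynamic time warping (DTW) alignment of two activity sequences, the kind of non-linear temporal alignment that multiple-alignment schemes such as NLTS generalize. The activity names and the 0/1 substitution cost are illustrative assumptions.

# Pairwise DTW alignment cost between two surgical activity sequences.
import numpy as np

def dtw_align(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if seq_a[i - 1] == seq_b[j - 1] else 1.0
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]   # backtracking through D would recover the warping path

print(dtw_align(["incise", "retract", "suture"], ["incise", "suture", "close"]))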
S0933365714001237
Objective The main goal of this work is to measure how lexical regularities in biomedical ontology labels can be used for the automatic creation of formal relationships between classes, and to evaluate the results of applying our approach to the Gene Ontology (GO). Methods In recent years, we have developed a method for the lexical analysis of regularities in biomedical ontology labels, and we showed that the labels can present a high degree of regularity. In this work, we extend our method with a cross-products extension (CPE) metric, which estimates the potential interest of a specific regularity for axiomatic enrichment in the lexical analysis, using information on exact matches in external ontologies. The GO consortium recently enriched the GO by using so-called cross-product extensions. Cross-products are generated by establishing axioms that relate a given GO class with classes from the GO or other biomedical ontologies. We apply our method to the GO and study how its lexical analysis can identify and reconstruct the cross-products that are defined by the GO consortium. Results The labels of the GO classes are highly regular in lexical terms, and the exact matches with labels of external ontologies affect 80% of the GO classes. The CPE metric reveals that 31.48% of the classes that exhibit regularities have fragments that correspond to classes in the two external ontologies selected for our experiment, namely, the Cell Ontology and the Chemical Entities of Biological Interest ontology, and 18.90% of them are fully decomposable into smaller parts. Our results show that the CPE metric permits our method to detect GO cross-product extensions with a mean recall of 62% and a mean precision of 28%. The study is completed with an analysis of false positives to explain this precision value. Conclusions We think that our results support the claim that our lexical approach can contribute to the axiomatic enrichment of biomedical ontologies and that it can provide new insights into the engineering of biomedical ontologies.
Approaching the axiomatic enrichment of the Gene Ontology from a lexical perspective
S0933365714001365
Background Mappings established between life science ontologies require significant efforts to keep them up to date due to the size and frequent evolution of these ontologies. Consequently, automatic methods for applying modifications to mappings are in high demand. The accuracy of such methods relies on the available description of the evolution of ontologies, especially regarding concepts involved in mappings. However, from one ontology version to another, a further understanding of ontology changes relevant for supporting mapping adaptation is typically lacking. Methods This research work defines a set of change patterns at the level of concept attributes, and proposes original methods to automatically recognize instances of these patterns based on the similarity between attributes denoting the evolving concepts. This investigation evaluates the benefits of the proposed methods and the influence of the recognized change patterns on the selection of strategies for mapping adaptation. Results The summary of the findings is as follows: (1) the Precision (>60%) and Recall (>35%) achieved by comparing manually identified change patterns with the automatic ones; (2) a set of potential impacts of recognized change patterns on the way mappings are adapted. We found that the detected correlations cover ∼66% of the mapping adaptation actions with a positive impact; and (3) the influence of the similarity coefficient calculated between concept attributes on the performance of the recognition algorithms. Conclusions The experimental evaluations conducted with real life science ontologies showed the effectiveness of our approach to accurately characterize ontology evolution at the level of concept attributes. This investigation confirmed the relevance of the proposed change patterns to support decisions on mapping adaptation.
Recognizing lexical and semantic change patterns in evolving life science ontologies to inform mapping adaptation
S0933365714001377
Objective The effective and efficient assessment, management, and evolution of surgical processes are intrinsic to excellent patient care. Hence, in addition to economic interests, the quality of the outcome is of great importance. Process benchmarking examines the compliance of an intraoperative surgical process with another process that is considered as best practice. The objective of this work is to assess the relationship between the course and the outcome of the surgical processes in the study. Materials and methods By assessing 450 skill practices on rapid prototyping models in minimally invasive surgery training, we extracted descriptions of surgical processes and examined the hypothesis that a significant relationship exists between the course of a surgical process and the quality of its outcome. Results The results showed a significant correlation, with Pearson correlation coefficients >0.05, between the quality of process outcome and process compliance for simple and complex suturing tasks in the study. Conclusions We conclude that high process compliance supports good quality outcomes and, therefore, excellent patient care. We also showed that a deviation from best training processes led to decreased outcome quality. This is relevant for identifying requirements for surgical processes, for generating feedback for the surgeon with regard to human factors and for inducing changes in the workflow in order to improve the outcome quality.
Outcome quality assessment by surgical process compliance measures in laparoscopic surgery
S0933365714001389
Objectives Access to the world wide web and multimedia content is an important aspect of life. We present a web browser and a multimedia user interface adapted for control with a brain-computer interface (BCI) which can be used by severely motor impaired persons. Methods The web browser dynamically determines the most efficient P300 BCI matrix size to select the links on the current website. This enables control of the web browser with fewer commands and smaller matrices. The multimedia player was based on an existing software. Both applications were evaluated with a sample of ten healthy participants and three end-users. All participants used a visual P300 BCI with face-stimuli for control. Results The healthy participants completed the multimedia player task with 90% accuracy and the web browsing task with 85% accuracy. The end-users completed the tasks with 62% and 58% accuracy. All healthy participants and two out of three end-users reported that they felt to be in control of the system. Conclusions In this study we presented a multimedia application and an efficient web browser implemented for control with a BCI. Significance Both applications provide access to important areas of modern information retrieval and entertainment.
Brain-controlled applications using dynamic P300 speller matrices
S0933365714001390
Objectives A well-developed appointment system can help increase the utilization of medical facilities in an outpatient department. This paper outlines the development of an appointment system that can make an outpatient department work more efficiently and improve the patient satisfaction level. Methods A Markov decision process model is proposed to schedule sequential appointments with the consideration of patient preferences in order to maximize the patient satisfaction level. Adaptive dynamic programming algorithms are developed to avoid the curse of dimensionality. These algorithms can dynamically capture patient preferences, update the value of being in a state, and thus improve the appointment decisions. Results Experiments were conducted to investigate the performance of the algorithms. The convergence behaviors under different settings, including the number of iterations needed for convergence and the accuracy of results, were examined. Bias-adjusted Kalman filter step-sizes were found to lead to the best convergence behavior, which stabilized within 5000 iterations. As for the effects of exploration and exploitation, the best convergence behavior was obtained when the probability of taking the myopically optimal action equaled 0.9. The performance of the value function approximation algorithm was greatly affected by the combination of basis functions. Under different combinations, errors varied from 2.7% to 8.3%. More preferences resulted in faster convergence, but required longer computation time. Conclusions System parameters are adaptively updated as bookings are confirmed. The proposed appointment scheduling system could therefore contribute to a better patient satisfaction level during the booking periods.
Adaptive dynamic programming algorithms for sequential appointment scheduling with patient preferences
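A generic skeleton of the adaptive dynamic programming idea described above: value estimates are updated from sampled booking states, mixing exploitation of the myopically best slot (probability 0.9, as in the experiments) with exploration. The harmonic step size stands in for the paper's bias-adjusted Kalman filter step size, and all state, slot and reward functions are assumed inputs, not the paper's model.

# Tabular ADP sketch; states returned by sample_state/next_state must be hashable.
import random
from collections import defaultdict

def adp(sample_state, slots, reward, next_state, n_iter=5000, p_exploit=0.9):
    V = defaultdict(float)
    for n in range(1, n_iter + 1):
        state = sample_state()                       # current booking state
        alpha = 1.0 / n                              # harmonic step size (stand-in)
        if random.random() < p_exploit:              # exploit the myopically best slot
            slot = max(slots, key=lambda a: reward(state, a) + V[next_state(state, a)])
        else:                                        # explore
            slot = random.choice(slots)
        target = reward(state, slot) + V[next_state(state, slot)]
        V[state] += alpha * (target - V[state])      # smoothed value update
    return V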
S0933365714001407
Background The use of telehealth technologies to remotely monitor patients suffering chronic diseases may enable preemptive treatment of worsening health conditions before a significant deterioration in the subject's health status occurs, requiring hospital admission. Objective The objective of this study was to develop and validate a classification algorithm for the early identification of patients, with a background of chronic obstructive pulmonary disease (COPD), who appear to be at high risk of an imminent exacerbation event. The algorithm attempts to predict the patient's condition one day in advance, based on a comparison of their current physiological measurements against the distribution of their measurements over the previous month. Method The proposed algorithm, which uses a classification and regression tree (CART), has been validated using telehealth measurement data recorded from patients with moderate/severe COPD living at home. The data were collected from February 2007 to January 2008, using a telehealth home monitoring unit. Results The CART algorithm can classify home telehealth measurement data into either a ‘low risk’ or ‘high risk’ category with 71.8% accuracy, 80.4% specificity and 61.1% sensitivity. The algorithm was able to detect a ‘high risk’ condition one day prior to patients actually being observed as having a worsening in their COPD condition, as defined by symptom and medication records. Conclusion The CART analyses have shown that features extracted from three types of physiological measurements; forced expiratory volume in 1s (FEV1), arterial oxygen saturation (SPO2) and weight have the most predictive power in stratifying the patients condition. This CART algorithm for early detection could trigger the initiation of timely treatment, thereby potentially reducing exacerbation severity and recovery time and improving the patient's health. This study highlights the potential usefulness of automated analysis of home telehealth data in the early detection of exacerbation events among COPD patients.
Predicting the risk of exacerbation in patients with chronic obstructive pulmonary disease using home telehealth measurement data
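A minimal sketch of the CART classifier described above, trained on per-day telehealth features and reporting accuracy, sensitivity and specificity; the three synthetic features stand in for the FEV1, SpO2 and weight measures, and no real patient data are used.

# CART ('high risk' vs 'low risk') with accuracy, sensitivity and specificity.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # placeholder FEV1, SpO2, weight features
y = (X[:, 0] + 0.8 * X[:, 1] < -0.5).astype(int)   # placeholder: 1 = 'high risk' next day
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

cart = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, cart.predict(X_te)).ravel()
print("accuracy", (tp + tn) / len(y_te),
      "sensitivity", tp / (tp + fn),
      "specificity", tn / (tn + fp))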
S0933365714001419
Objective This work addresses the theoretical description and experimental evaluation of a new feature selection method (named uFilter). The uFilter improves the Mann–Whitney U-test for reducing dimensionality and ranking features in binary classification problems. It also presents a practical uFilter application to breast cancer computer-aided diagnosis (CADx). Materials and methods A total of 720 datasets (ranked subsets of features) were formed by the application of chi-square (CHI2) discretization, information-gain (IG), one-rule (1Rule), Relief, the uFilter and its theoretical basis method (named U-test). Each produced dataset was used for training feed-forward backpropagation neural network, support vector machine, linear discriminant analysis and naive Bayes machine learning algorithms to produce classification scores for further statistical comparisons. Results A head-to-head comparison based on the mean of area under receiver operating characteristics curve scores against the U-test method showed that the uFilter method significantly outperformed the U-test method for almost all classification schemes (p < 0.05); it was superior in 50%, tied in 37.5% and lost in 12.5% of the 24 comparative scenarios. Also, the performance of the uFilter method, when compared with the CHI2 discretization, IG, 1Rule and Relief methods, was superior or at least statistically similar on the explored datasets while requiring fewer features. Conclusions The experimental results indicated that the uFilter method statistically outperformed the U-test method and demonstrated similar, but not superior, performance to traditional feature selection methods (CHI2 discretization, IG, 1Rule and Relief). The uFilter method revealed competitive and appealing cost-effectiveness results in selecting relevant features, as a support tool for breast cancer CADx methods, especially in unbalanced dataset contexts. Finally, the redundancy analysis as a complementary step to the uFilter method provided us with an effective way of finding optimal subsets of features without decreasing the classification performance.
Improving the Mann–Whitney statistical test for feature selection: An approach in breast cancer diagnosis on mammography
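The plain Mann–Whitney U-test ranking that the uFilter method builds on can be sketched as below; the uFilter refinements and the redundancy analysis are not reproduced, and the mammography-style feature matrix is synthetic.

# Rank candidate CADx features by Mann–Whitney U-test p-value (lowest first).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                 # placeholder candidate features
y = rng.integers(0, 2, size=300)               # placeholder labels: 1 = malignant
X[y == 1, :5] += 0.8                           # make the first 5 features informative

pvals = [mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue for j in range(X.shape[1])]
ranking = np.argsort(pvals)                    # most discriminative features first
print(ranking[:5])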
S0933365714001420
Objectives Operating room (OR) surgery scheduling determines the individual surgery's operation start time and assigns the required resources to each surgery over a schedule period, considering several constraints related to a complete surgery flow and the multiple resources involved. This task plays a decisive role in providing timely treatments for the patients while balancing hospital resource utilization. The originality of the present study is to integrate the surgery scheduling problem with real-life nurse roster constraints such as the nurses' role, specialty, qualification and availability. This article proposes a mathematical model and an ant colony optimization (ACO) approach to efficiently solve such surgery scheduling problems. Method A modified ACO algorithm with a two-level ant graph model is developed to solve such combinatorial optimization problems because of their computational complexity. The outer ant graph represents surgeries, while the inner graph is a dynamic resource graph. Three types of pheromones, i.e. sequence-related, surgery-related, and resource-related pheromone, fitting the two-level model are defined. The iteration-best and feasible update strategy and local pheromone update rules are adopted to emphasize the information related to the good solution in makespan, as well as the balanced utilization of resources. The performance of the proposed ACO algorithm is then evaluated using test cases from (1) published literature data with complete nurse roster constraints, and (2) real data collected from a hospital in China. Results The scheduling results using the proposed ACO approach are compared with the test cases from both the literature and real-life hospital scheduling. Comparison with the literature shows that the proposed ACO approach achieves (1) a 1.5-h reduction in end time; (2) a reduction in the variation of resources' working time, i.e. 25% for ORs, 50% for nurses in shift 1 and 86% for nurses in shift 2; (3) a 0.25-h reduction in individual maximum overtime (OT); and (4) a 42% reduction in the total OT of nurses. Comparison with the real 10-workday hospital scheduling further shows the advantage of the ACO approach in several measurements. Instead of assigning all surgeries by a surgeon to only one OR and the same nurses, as in the traditional manual approach in the hospital, ACO realizes a more balanced surgery arrangement by assigning the surgeries to different ORs and nurses. It eventually leads to shortening the end time within the confidence interval of [7.4%, 24.6%] at the 95% confidence level. Conclusion The ACO approach proposed in this paper efficiently solves the surgery scheduling problem with a daily nurse roster while providing a shortened end time and relatively balanced resource allocations. It also supports the advantage of integrating surgery scheduling with nurse scheduling and the efficiency of systematic optimization considering a complete three-stage surgery flow and the resources involved.
A short-term operating room surgery scheduling problem integrating multiple nurses roster constraints
S0933365714001432
Objective This study aimed to find effective approaches to electroencephalographic (EEG) signal analysis and resolve problems of real and imaginary finger movement pattern recognition and categorization for one hand. Methods and materials Eight right-handed subjects (mean age 32.8 [SD=3.3] years) participated in the study, and activity from sensorimotor zones (central and contralateral to the movements/imagery) was recorded for EEG data analysis. In our study, we explored the decoding accuracy of EEG signals using real and imagined finger (thumb/index of one hand) movements using artificial neural network (ANN) and support vector machine (SVM) algorithms for future brain–computer interface (BCI) applications. Results The decoding accuracy of the SVM based on a Gaussian radial basis function linearly increased with each trial accumulation (mean: 45%, max: 62% with 20 trial summarizations), and the decoding accuracy of the ANN was higher when single-trial discrimination was applied (mean: 38%, max: 42%). The chosen approaches of EEG signal discrimination demonstrated differential sensitivity to data accumulation. Additionally, the time responses varied across subjects and inside sessions but did not influence the discrimination accuracy of the algorithms. Conclusion This work supports the feasibility of the approach, which is presumed suitable for one-hand finger movement (real and imaginary) decoding. These results could be applied in the elaboration of multiclass BCI systems.
Development of electroencephalographic pattern classifiers for real and imaginary thumb and index finger movements of one hand
S0933365714001456
Objective In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. Materials and methods The improved cosine similarity measures between SNSs were introduced based on the cosine function. Then, we compared the improved cosine similarity measures of SNSs with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality for overcoming some shortcomings of existing cosine similarity measures of SNSs in some cases. In the medical diagnosis method, we can find a proper diagnosis by the cosine similarity measures between the symptoms and the considered diseases, which are represented by SNSs. Then, the medical diagnosis method based on the improved cosine similarity measures was applied to two medical diagnosis problems to show the applications and effectiveness of the proposed method. Results Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the medical diagnoses using the various similarity measures of SNSs yielded identical diagnosis results and demonstrated the effectiveness and rationality of the diagnosis method proposed in this paper. Conclusions The improved cosine measures of SNSs based on the cosine function can overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, and the resulting diagnosis method is therefore well suited to handling medical diagnosis problems with simplified neutrosophic information, demonstrating the effectiveness and rationality of medical diagnoses.
Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses
S0933365714001468
Objective This study intends to develop a two-stage fuzzy neural network (FNN) for prognoses of prostate cancer. Methods Due to the difficulty of making prognoses of prostate cancer, this study proposes a two-stage FNN for prediction. The initial membership function parameters of FNN are determined by cluster analysis. Then, an integration of the optimization version of an artificial immune network (Opt-aiNET) and a particle swarm optimization (PSO) algorithm is developed to investigate the relationship between the inputs and outputs. Results The evaluation results for three benchmark functions show that the proposed two-stage FNN has better performance than the other algorithms. In addition, model evaluation results indicate that the proposed algorithm really can predict prognoses of prostate cancer more accurately. Conclusions The proposed two-stage FNN is able to learn the relationship between the clinical features and the prognosis of prostate cancer. Once the clinical data are known, the prognosis of prostate cancer patient can be predicted. Furthermore, unlike artificial neural networks, it is much easier to interpret the training results of the proposed network since they are in the form of fuzzy IF–THEN rules. These rules are very important for medical doctors. This can dramatically assist medical doctors to make decisions.
Application of a two-stage fuzzy neural network to a prostate cancer prognosis system
S093336571400147X
Introduction The length of stay of critically ill patients in the intensive care unit (ICU) is an indication of patient ICU resource usage and varies considerably. Planning of postoperative ICU admissions is important as ICUs often have no nonoccupied beds available. Problem statement Estimation of ICU bed availability for the coming days is entirely based on clinical judgement by intensivists and is therefore too inaccurate. For this reason, predictive models have much potential for improving planning for ICU patient admission. Objective Our goal is to develop and optimize models for patient survival and ICU length of stay (LOS) based on monitored ICU patient data. Furthermore, these models are compared on their use of sequential organ failure (SOFA) scores as well as underlying raw data as input features. Methodology Different machine learning techniques are trained, using a 14,480 patient dataset, both on SOFA scores as well as their underlying raw data values from the first five days after admission, in order to predict (i) the patient LOS, and (ii) the patient mortality. Furthermore, to help physicians in assessing the prediction credibility, a probabilistic model is tailored to the output of our best-performing model, assigning a belief to each patient status prediction. A two-by-two grid is built, using the classification outputs of the mortality and prolonged stay predictors to improve the patient LOS regression models. Results For predicting patient mortality and a prolonged stay, the best performing model is a support vector machine (SVM) with G_{A,D} = 65.9% (area under the curve (AUC) of 0.77) and G_{S,L} = 73.2% (AUC of 0.82). In terms of LOS regression, the best performing model is support vector regression, achieving a mean absolute error of 1.79 days and a median absolute error of 1.22 days for those patients surviving a nonprolonged stay. Conclusion Using a classification grid based on the predicted patient mortality and prolonged stay allows more accurate modeling of the patient LOS. The detailed models can thus support the decisions made by physicians in an ICU setting.
Predictive modelling of survival and length of stay in critically ill patients using sequential organ failure scores
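A sketch of the two-model setup described above: an SVM classifier for mortality (evaluated by AUC) and a support vector regressor for length of stay (evaluated by median absolute error). The features and labels below are synthetic stand-ins for SOFA scores or raw ICU variables, not the study's data.

# SVM mortality classifier plus SVR length-of-stay regressor on placeholder data.
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, median_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                                 # placeholder ICU features
mortality = (X[:, 0] > 1.0).astype(int)                         # placeholder outcome
los = np.clip(3 + X[:, 1] + rng.normal(0, 1, 1000), 1, None)    # placeholder LOS in days

X_tr, X_te, m_tr, m_te, l_tr, l_te = train_test_split(
    X, mortality, los, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_tr, m_tr)
reg = make_pipeline(StandardScaler(), SVR()).fit(X_tr, l_tr)

print("AUC:", roc_auc_score(m_te, clf.predict_proba(X_te)[:, 1]))
print("median abs. error (days):", median_absolute_error(l_te, reg.predict(X_te)))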
S0933365714001481
Objectives A recently introduced pragmatic scheme promises to be a useful catalog of interneuron names. We sought to automatically classify digitally reconstructed interneuronal morphologies according to this scheme. Simultaneously, we sought to discover possible subtypes of these types that might emerge during automatic classification (clustering). We also investigated which morphometric properties were most relevant for this classification. Materials and methods We used a set of 118 digitally reconstructed interneuronal morphologies classified into the common basket (CB), horse-tail (HT), large basket (LB), and Martinotti (MA) interneuron types by 42 of the world's leading neuroscientists, quantified by five simple morphometric properties of the axon and four of the dendrites. We labeled each neuron with the type most commonly assigned to it by the experts. We then removed this class information for each type separately, and applied semi-supervised clustering to those cells (keeping the others' cluster membership fixed), to assess separation from other types and look for the formation of new groups (subtypes). We performed this same experiment unlabeling the cells of two types at a time, and of half the cells of a single type at a time. The clustering model is a finite mixture of Gaussians which we adapted for the estimation of local (per-cluster) feature relevance. We performed the described experiments on three different subsets of the data, formed according to how many experts agreed on type membership: at least 18 experts (the full data set), at least 21 (73 neurons), and at least 26 (47 neurons). Results Interneurons with more reliable type labels were classified more accurately. We classified HT cells with 100% accuracy, MA cells with 73% accuracy, and CB and LB cells with 56% and 58% accuracy, respectively. We identified three subtypes of the MA type, one subtype of the CB and LB types each, and no subtypes of HT (it was a single, homogeneous type). We obtained maximum (adapted) Silhouette width and ARI values of 1, 0.83, 0.79, and 0.42 when unlabeling the HT, CB, LB, and MA types, respectively, confirming the quality of the formed cluster solutions. The subtypes identified when unlabeling a single type also emerged when unlabeling two types at a time, confirming their validity. Axonal morphometric properties were more relevant than dendritic ones, with the axonal polar histogram length in the [π, 2π) angle interval being particularly useful. Conclusions The applied semi-supervised clustering method can accurately discriminate among the CB, HT, LB, and MA interneuron types while discovering potential subtypes, and is therefore useful for neuronal classification. The discovery of potential subtypes suggests that some of these types are more heterogeneous than previously thought. Finally, axonal variables seem to be more relevant than dendritic ones for distinguishing among the CB, HT, LB, and MA interneuron types.
Classifying GABAergic interneurons with semi-supervised projected model-based clustering
S0933365714001493
Objective New technologies improve modern medicine, but may result in unwanted consequences. Some occur due to inadequate human–computer-interactions (HCI). To assess these consequences, an investigation model was developed to facilitate the planning, implementation and documentation of studies for HCI in surgery. Methods and material The investigation model was formalized in Unified Modeling Language and implemented as an ontology. Four different top-level ontologies were compared: Object-Centered High-level Reference, Basic Formal Ontology, General Formal Ontology (GFO) and Descriptive Ontology for Linguistic and Cognitive Engineering, according to the three major requirements of the investigation model: the domain-specific view, the experimental scenario and the representation of fundamental relations. Furthermore, this article emphasizes the distinction of “information model” and “model of meaning” and shows the advantages of implementing the model in an ontology rather than in a database. Results The results of the comparison show that GFO fits the defined requirements adequately: the domain-specific view and the fundamental relations can be implemented directly, only the representation of the experimental scenario requires minor extensions. The other candidates require wide-ranging extensions, concerning at least one of the major implementation requirements. Therefore, the GFO was selected to realize an appropriate implementation of the developed investigation model. The ensuing development considered the concrete implementation of further model aspects and entities: sub-domains, space and time, processes, properties, relations and functions. Conclusions The investigation model and its ontological implementation provide a modular guideline for study planning, implementation and documentation within the area of HCI research in surgery. This guideline helps to navigate through the whole study process in the form of a kind of standard or good clinical practice, based on the involved foundational frameworks. Furthermore, it allows to acquire the structured description of the applied assessment methods within a certain surgical domain and to consider this information for own study design or to perform a comparison of different studies. The investigation model and the corresponding ontology can be used further to create new knowledge bases of HCI assessment in surgery.
Ontology for assessment studies of human–computer-interaction in surgery
S093336571400150X
Objective Proteins are considered to be the most important individual components of biological systems and they combine to form physical protein complexes which are responsible for certain molecular functions. Despite the large availability of protein–protein interaction (PPI) information, not much information is available about protein complexes. Experimental methods are limited in terms of time, efficiency, cost and performance constraints. Existing computational methods have provided encouraging preliminary results, but they face certain disadvantages as they require parameter tuning, some of them cannot handle weighted PPI data and others do not allow a protein to participate in more than one protein complex. In the present paper, we propose a new fully unsupervised methodology for predicting protein complexes from weighted PPI graphs. Methods and materials The proposed methodology is called evolutionary enhanced Markov clustering (EE-MC) and it is a hybrid combination of an adaptive evolutionary algorithm and a state-of-the-art clustering algorithm named enhanced Markov clustering. EE-MC was compared with state-of-the-art methodologies when applied to datasets from the human and the yeast Saccharomyces cerevisiae organisms. Results Using publicly available datasets, EE-MC outperformed existing methodologies (in some datasets the separation metric was increased by 10–20%). Moreover, when applied to new human datasets its performance was encouraging in the prediction of protein complexes which consist of proteins with high functional similarity. Specifically, 5737 protein complexes were predicted and 72.58% of them are enriched for at least one gene ontology (GO) function term. Conclusions EE-MC is by design able to overcome intrinsic limitations of existing methodologies such as their inability to handle weighted PPI networks, their constraint of assigning every protein to exactly one cluster and the difficulties they face concerning parameter tuning. This fact was experimentally validated and, moreover, new potentially true human protein complexes were suggested as candidates for further validation using experimental techniques.
Predicting protein complexes from weighted protein–protein interaction graphs with a novel unsupervised methodology: Evolutionary enhanced Markov clustering
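The Markov clustering component that EE-MC builds on follows the classic MCL loop of expansion and inflation over a weighted PPI adjacency matrix, sketched below; the evolutionary tuning of the inflation parameter is not shown, and the inflation value and toy graph are assumptions.

# Core Markov clustering (MCL) loop on a weighted adjacency matrix.
import numpy as np

def markov_cluster(adj, inflation=2.0, n_iter=50):
    M = adj + np.eye(len(adj))                 # add self-loops
    M = M / M.sum(axis=0, keepdims=True)       # column-stochastic transition matrix
    for _ in range(n_iter):
        M = M @ M                              # expansion
        M = M ** inflation                     # inflation
        M = M / M.sum(axis=0, keepdims=True)   # renormalisation
    # proteins attracted to the same row end up in the same cluster
    clusters = {}
    for j in range(M.shape[1]):
        attractor = int(M[:, j].argmax())
        clusters.setdefault(attractor, []).append(j)
    return list(clusters.values())

toy_ppi = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 0.2],
                    [0, 0, 0.2, 0]], dtype=float)   # toy weighted PPI graph
print(markov_cluster(toy_ppi))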
S0933365715000032
Objective The objective of this paper is to highlight the state-of-the-art machine learning (ML) techniques in computational docking. The use of smart computational methods in the life cycle of drug design is a relatively recent development that has gained much popularity and interest over the last few years. Central to this methodology is the notion of computational docking, which is the process of predicting the best pose (orientation + conformation) of a small molecule (drug candidate) when bound to a target larger receptor molecule (protein) in order to form a stable complex molecule. In computational docking, a large number of binding poses are evaluated and ranked using a scoring function. The scoring function is a mathematical predictive model that produces a score representing the binding free energy, and hence the stability, of the resulting complex molecule. Generally, such a function should produce a set of plausible ligands ranked according to their binding stability along with their binding poses. In more practical terms, an effective scoring function should produce promising drug candidates which can then be synthesized and physically screened using a high throughput screening process. Therefore, the key to computer-aided drug design is the design of an efficient, highly accurate scoring function (using ML techniques). Methods The methods presented in this paper are specifically based on ML techniques. Although many traditional techniques have been proposed, their performance was generally poor. Only in the last few years has ML technology been applied to the design of scoring functions, and the results have been very promising. Material The ML-based techniques are based on various molecular features extracted from the abundance of protein–ligand information in the public molecular databases, e.g., protein data bank bind (PDBbind). Results In this paper, we present this paradigm shift, elaborating on the main constituent elements of the ML approach to molecular docking along with the state-of-the-art research in this area. For instance, the best random forest (RF)-based scoring function [35] on PDBbind v2007 achieves a Pearson correlation coefficient between the predicted and experimentally determined binding affinities of 0.803, while the best conventional scoring function achieves 0.644 [34]. The best RF-based ranking power [6] ranks the ligands correctly based on their experimentally determined binding affinities with an accuracy of 62.5% and identifies the top binding ligand with an accuracy of 78.1%. Conclusions We conclude with open questions and potential future research directions that can be pursued in smart computational docking: using molecular features of a different nature (geometrical, energy terms, pharmacophore), applying advanced ML techniques (e.g., deep learning), and combining more than one ML model.
Machine learning in computational docking
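As a minimal illustration of the ML-based scoring-function idea discussed above (not the specific RF model or the PDBbind features cited in the abstract), the sketch below fits a random forest regressor to synthetic protein–ligand descriptors and reports the Pearson correlation on a held-out split; every feature and value here is a placeholder.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# placeholder descriptors for protein-ligand complexes (e.g., contact counts,
# energy terms); real work derives these from experimental structures
X = rng.normal(size=(500, 36))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=500)  # synthetic "affinity"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("Pearson r on held-out complexes:", round(pearsonr(pred, y_te)[0], 3))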
S0933365715000159
Objective To detect negations of medical entities in free-text pathology reports with different approaches, and evaluate their performances. Methods and material Three different approaches were applied for negation detection: the lexicon-based approach was a rule-based method, relying on trigger terms and termination clues; the syntax-based approach was also a rule-based method, where the rules and negation patterns were designed using the dependency output from the Stanford parser; the machine-learning-based approach used a support vector machine as a classifier to build models with a number of features. A total of 284 English pathology reports of lymphoma were used for the study. Results The machine-learning-based approach had the best overall performance on the test set with a micro-averaged F-score of 82.56%, while the syntax-based approach performed worst with a 78.62% F-score. The lexicon-based approach attained an overall average precision of 89.74% and recall of 76.09%, which were significantly better than the results achieved by Negation Tagger with a similar approach. Discussion The lexicon-based approach benefitted from being customized to the corpus more than the other two methods. The errors of the syntax-based approach, which produced the poorest performance, were mainly due to poor parsing results, and the errors of the other methods were probably caused by abnormal grammatical structures. Conclusions A machine-learning-based approach has potential advantages for negation detection, and may be preferable for the task. To improve the overall performance, one possible solution is to apply different approaches to each section of the reports.
Automatic negation detection in narrative pathology reports
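A minimal sketch of the lexicon-based strategy described above: trigger terms open a negation scope that is closed by a termination clue or the sentence end. The trigger and termination lists are small illustrative stand-ins, not the lexicon used in the paper.

import re

NEG_TRIGGERS = ["no evidence of", "negative for", "no", "without", "absence of"]
TERMINATION = ["but", "however", "except"]

def negated_entities(sentence, entities):
    """Return the subset of entities that fall inside a negation scope.
    Scope = from a trigger term up to a termination clue or sentence end."""
    s = sentence.lower()
    flagged = set()
    for trig in NEG_TRIGGERS:
        for m in re.finditer(r"\b" + re.escape(trig) + r"\b", s):
            scope = s[m.end():]
            for term in TERMINATION:
                cut = scope.find(" " + term + " ")
                if cut != -1:
                    scope = scope[:cut]
            for ent in entities:
                if ent.lower() in scope:
                    flagged.add(ent)
    return flagged

print(negated_entities(
    "No evidence of lymphoma, but fibrosis is present.",
    ["lymphoma", "fibrosis"]))          # -> {'lymphoma'}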
S0933365715000251
Objectives Medical terminologies vary in the amount of concept information (the “density”) represented, even in the same sub-domains. This causes problems in terminology mapping, semantic harmonization and terminology integration. Moreover, complex clinical scenarios need to be encoded by a medical terminology with comprehensive content. SNOMED Clinical Terms (SNOMED CT), a leading clinical terminology, was reported to lack concepts and synonyms, problems that cannot be fully alleviated by using post-coordination. Therefore, a scalable solution is needed to enrich the conceptual content of SNOMED CT. We are developing a structure-based, algorithmic method to identify potential concepts for enriching the conceptual content of SNOMED CT and to support semantic harmonization of SNOMED CT with selected other Unified Medical Language System (UMLS) terminologies. Methods We first identified a subset of English terminologies in the UMLS that have ‘PAR’ relationship labeled with ‘IS_A’ and over 10% overlap with one or more of the 19 hierarchies of SNOMED CT. We call these “reference terminologies” and we note that our use of this name is different from the standard use. Next, we defined a set of topological patterns across pairs of terminologies, with SNOMED CT being one terminology in each pair and the other being one of the reference terminologies. We then explored how often these topological patterns appear between SNOMED CT and each reference terminology, and how to interpret them. Results Four viable reference terminologies were identified. Large density differences between terminologies were found. Expected interpretations of these differences were indeed observed, as follows. A random sample of 299 instances of special topological patterns (“2:3 and 3:2 trapezoids”) showed that 39.1% and 59.5% of analyzed concepts in SNOMED CT and in a reference terminology, respectively, were deemed to be alternative classifications of the same conceptual content. In 30.5% and 17.6% of the cases, it was found that intermediate concepts could be imported into SNOMED CT or into the reference terminology, respectively, to enhance their conceptual content, if approved by a human curator. Other cases included synonymy and errors in one of the terminologies. Conclusion These results show that structure-based algorithmic methods can be used to identify potential concepts to enrich SNOMED CT and the four reference terminologies. The comparative analysis has the future potential of supporting terminology authoring by suggesting new content to improve content coverage and semantic harmonization between terminologies.
A comparative analysis of the density of the SNOMED CT conceptual content for semantic harmonization
S0933365715000263
Objective This paper proposes a new, complex algorithm for the blind classification of the original electroencephalogram (EEG) tracing of each subject, without any preliminary pre-processing. The medical need in this field is to reach an early differential diagnosis between subjects affected by mild cognitive impairment (MCI), early Alzheimer's disease (AD) and the healthy elderly (CTR) using only the recording and the analysis of a few minutes of their EEG. Methods and material This study analyzed the EEGs of 272 subjects, recorded at Rome's Neurology Unit of the Policlinico Campus Bio-Medico. The EEG recordings were performed using 19 electrodes, in a 0.3–70 Hz bandpass, positioned according to the International 10–20 System. Many powerful learning machines and algorithms have been proposed during the last 20 years to effectively resolve this complex problem, resulting in different and interesting outcomes. Among these algorithms, a new artificial adaptive system, named implicit function as squashing time (I-FAST), is able to diagnose with high accuracy, from a few minutes of the subject's EEG track, whether it manifests an AD, MCI or CTR condition. An update of this system, obtained by adding a new algorithm, named multi scale ranked organizing maps (MS-ROM), to the I-FAST system, is presented in order to classify the unprocessed EEGs of AD, MCI and control subjects with greater accuracy. Results The proposed system has been measured on three independent pattern recognition tasks from unprocessed EEG tracks of a sample of AD subjects, MCI subjects and CTR: (a) AD compared with CTR; (b) AD compared with MCI; (c) CTR compared with MCI. While the accuracy of the previous system in distinguishing between AD and MCI was around 92%, the new proposed system reaches values between 94% and 98%. Similarly, the overall accuracy with the best artificial neural networks (ANNs) is 98.25% for distinguishing between CTR and MCI. Conclusions This new version of I-FAST makes several steps forward: (a) avoidance of the pre-processing phase and filtering procedure of EEG data, the algorithm being able to directly process an unprocessed EEG; (b) noise elimination, through the use of a training variant with input selection and a testing system based on a naïve Bayes classifier; (c) a more robust classification phase, showing the stability of results on nine well known learning machine algorithms; (d) extraction of spatial invariants of an EEG signal using, in addition to the unsupervised ANN, principal component analysis and multi scale entropy, together with the MS-ROM, yielding a more accurate performance in this specific task.
An improved I-FAST system for the diagnosis of Alzheimer's disease from unprocessed electroencephalograms by using robust invariant features
S0933365715000287
Objective Terminologies and terminological systems have assumed important roles in many medical information processing environments, giving rise to the “big knowledge” challenge when terminological content comprises tens of thousands to millions of concepts arranged in a tangled web of relationships. Use and maintenance of knowledge structures on that scale can be daunting. The notion of abstraction network is presented as a means of facilitating the usability, comprehensibility, visualization, and quality assurance of terminologies. Methods and materials An abstraction network overlays a terminology's underlying network structure at a higher level of abstraction. In particular, it provides a more compact view of the terminology's content, avoiding the display of minutiae. General abstraction network characteristics are discussed. Moreover, the notion of meta-abstraction network, existing at an even higher level of abstraction than a typical abstraction network, is described for cases where even the abstraction network itself represents a case of “big knowledge.” Various features in the design of abstraction networks are demonstrated in a methodological survey of some existing abstraction networks previously developed and deployed for a variety of terminologies. Results The applicability of the general abstraction-network framework is shown through use-cases of various terminologies, including the Systematized Nomenclature of Medicine – Clinical Terms (SNOMED CT), the Medical Entities Dictionary (MED), and the Unified Medical Language System (UMLS). Important characteristics of the surveyed abstraction networks are provided, e.g., the magnitude of the respective size reduction referred to as the abstraction ratio. Specific benefits of these alternative terminology-network views, particularly their use in terminology quality assurance, are discussed. Examples of meta-abstraction networks are presented. Conclusions The “big knowledge” challenge constitutes the use and maintenance of terminological structures that comprise tens of thousands to millions of concepts and their attendant complexity. The notion of abstraction network has been introduced as a tool in helping to overcome this challenge, thus enhancing the usefulness of terminologies. Abstraction networks have been shown to be applicable to a variety of existing biomedical terminologies, and these alternative structural views hold promise for future expanded use with additional terminologies.
Abstraction networks for terminologies: Supporting management of “big knowledge”
S0933365715000299
Objective Clinical documents reflect a patient's health status in terms of observations and contain objective information such as descriptions of examination results, diagnoses and interventions. To evaluate this information properly, assessing positive or negative clinical outcomes or judging the impact of a medical condition on a patient's well-being is essential. Although methods of sentiment analysis have been developed to address these tasks, they have not yet found broad application in the medical domain. Methods and material In this work, we characterize the facets of sentiment in the medical sphere and identify potential use cases. Through a literature review, we summarize the state of the art in healthcare settings. To determine the linguistic peculiarities of sentiment in medical texts and to collect open research questions of sentiment analysis in medicine, we perform a quantitative assessment with respect to word usage and sentiment distribution of a dataset of clinical narratives and medical social media derived from six different sources. Results Word usage in clinical narratives differs from that in medical social media: nouns predominate. Even though adjectives are also frequently used, they mainly describe body locations. When existing sentiment lexicons are applied, between 12% and 15% of terms in the medical social media datasets are identified as sentiment terms. In contrast, in clinical narratives only between 5% and 11% of terms were identified as opinionated. This confirms the less subjective use of language in clinical narratives, which requires adaptations to existing methods for sentiment analysis. Conclusions Medical sentiment concerns the patient's health status, medical conditions and treatment. Its analysis and extraction from texts has multiple applications, even for clinical narratives, which have so far remained unconsidered. Given the varying usage and meanings of terms, sentiment analysis of medical documents requires a domain-specific sentiment source and complementary context-dependent features to be able to correctly interpret the implicit sentiment.
Sentiment analysis in medical settings: New opportunities and challenges
S093336571500041X
Background Evidence-based medicine practice requires practitioners to obtain the best available medical evidence, and appraise the quality of the evidence when making clinical decisions. Primarily due to the plethora of electronically available data from the medical literature, the manual appraisal of the quality of evidence is a time-consuming process. We present a fully automatic approach for predicting the quality of medical evidence in order to aid practitioners at point-of-care. Methods Our approach extracts relevant information from medical article abstracts and utilises data from a specialised corpus to apply supervised machine learning for the prediction of the quality grades. Following an in-depth analysis of the usefulness of features (e.g., publication types of articles), they are extracted from the text via rule-based approaches and from the meta-data associated with the articles, and then applied in the supervised classification model. We propose the use of a highly scalable and portable approach using a sequence of high precision classifiers, and introduce a simple evaluation metric called average error distance (AED) that simplifies the comparison of systems. We also perform elaborate human evaluations to compare the performance of our system against human judgments. Results We test and evaluate our approaches on a publicly available, specialised, annotated corpus containing 1132 evidence-based recommendations. Our rule-based approach performs exceptionally well at the automatic extraction of publication types of articles, with F-scores of up to 0.99 for high-quality publication types. For evidence quality classification, our approach obtains an accuracy of 63.84% and an AED of 0.271. The human evaluations show that the performance of our system, in terms of AED and accuracy, is comparable to the performance of humans on the same data. Conclusions The experiments suggest that our structured text classification framework achieves evaluation results comparable to those of human performance. Our overall classification approach and evaluation technique are also highly portable and can be used for various evidence grading scales.
Automatic evidence quality prediction to support evidence-based decision making
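The average error distance (AED) metric is described above only as a simple measure for comparing systems; the sketch below assumes one natural reading, the mean absolute distance between predicted and true positions on an ordinal grade scale, and should be read as an illustration rather than the authors' exact definition.

def average_error_distance(true_grades, pred_grades, scale=("A", "B", "C", "D")):
    """Assumed reading of AED: mean absolute distance on an ordinal grade scale."""
    pos = {g: i for i, g in enumerate(scale)}
    return sum(abs(pos[t] - pos[p]) for t, p in zip(true_grades, pred_grades)) / len(true_grades)

print(average_error_distance(["A", "B", "C"], ["A", "C", "C"]))  # -> 0.333...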
S0933365715000421
Glaucoma is a chronic neurodegenerative disease characterized by loss of retinal ganglion cells, resulting in distinctive changes in the optic nerve head (ONH) and retinal nerve fiber layer. Important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, a crucial step in diagnosing and monitoring glaucoma. Three-dimensional (3D) spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, is now the standard of care for diagnosing and monitoring progression of numerous eye diseases. Method This paper aims to detect changes in multi-temporal 3D SD-OCT ONH images using a hierarchical fully Bayesian framework and then to differentiate changes reflecting random variation from true changes due to glaucoma progression. To this end, we propose the use of a kernel-based support vector data description (SVDD) classifier. SVDD is a well-known one-class classifier that allows us to map the data into a high-dimensional feature space where a hypersphere encloses most patterns belonging to the target class. Results The proposed glaucoma progression detection scheme using the whole 3D SD-OCT images detected glaucoma progression in a significant number of cases showing progression by conventional methods (78%), with high specificity in normal and non-progressing eyes (93% and 94%, respectively). Conclusion The use of the dependency measurement in the SVDD framework increased the robustness of the proposed change-detection scheme in comparison to the classical support vector machine and SVDD methods. The validation of the proposed approach using clinical data has shown that the use of only healthy and non-progressing eyes to train the algorithm led to a high diagnostic accuracy for detecting glaucoma progression compared to other methods.
Learning from healthy and stable eyes: A new approach for detection of glaucomatous progression
S0933365715000433
Objective Survey data sets are important sources of data, and their successful exploitation is of key importance for informed policy decision-making. We present how a survey analysis approach initially developed for customer satisfaction research in marketing can be adapted for the introduction of clinical pharmacy services into a hospital. Methods and material We use a data mining analytical approach to extract relevant managerial consequences. We evaluate the importance of competences for users of a clinical pharmacy with the OrdEval algorithm and determine their nature according to the users’ expectations. For this, we need substantially fewer questions than are required by the Kano approach. Results From 52 clinical pharmacy activities we were able to identify seven activities with a substantial negative impact (i.e., negative reinforcement) on the overall satisfaction with clinical pharmacy services, and two activities with a strong positive impact (upward reinforcement). Using analysis of individual feature values, we identified six performance, 10 excitement, and one basic clinical pharmacists’ activity. Conclusions We show how the OrdEval algorithm can exploit the information hidden in the ordering of class and attribute values, and their inherent correlation, using a small sample of highly relevant respondents. The visualization of the outputs turns out to be highly useful in our clinical pharmacy research case study.
Assessment of surveys for the management of hospital clinical pharmacy services
S0933365715000445
Objective The paper addresses the problem of automatic detection of basal cell carcinoma (BCC) in histopathology images. In particular, it proposes a framework to both learn the image representation in an unsupervised way and visualize discriminative features supported by the learned model. Materials and methods This paper presents an integrated unsupervised feature learning (UFL) framework for histopathology image analysis that comprises three main stages: (1) local (patch) representation learning using different strategies (sparse autoencoders, reconstruction independent component analysis and topographic independent component analysis (TICA)), (2) global (image) representation learning using a bag-of-features representation or a convolutional neural network, and (3) a visual interpretation layer to highlight the most discriminant regions detected by the model. The integrated unsupervised feature learning framework was exhaustively evaluated in a histopathology image dataset for BCC diagnosis. Results The experimental evaluation produced a classification performance of 98.1%, in terms of the area under the receiver-operating-characteristic curve, for the proposed framework, outperforming the state-of-the-art discrete cosine transform patch-based representation by 7%. Conclusions The proposed UFL-representation-based approach outperforms state-of-the-art methods for BCC detection. Thanks to its visual interpretation layer, the method is able to highlight discriminative tissue regions, providing better diagnosis support. Among the different UFL strategies tested, TICA-learned features exhibited the best performance thanks to their ability to capture low-level invariances, which are inherent to the nature of the problem.
An unsupervised feature learning framework for basal cell carcinoma image analysis
S0933365715000457
Objective The objective of this study is to develop a probabilistic modeling framework for segmenting structures of interest from a collection of atlases. We present a label fusion method that is based on minimizing an energy function using graph-cut techniques. Methods and materials We use a conditional random field (CRF) model that allows us to efficiently incorporate shape, appearance and context information. This model is characterized by a pseudo-Boolean function defined on unary, pairwise and higher-order potentials. Given a subset of registered atlases in the target image for a particular region of interest (ROI), we first derive an appearance-shape model from these registered atlases. The unary potentials combine an appearance model based on multiple features with a label prior using a weighted voting method. The pairwise terms are defined from a Finsler metric that minimizes the surface of separation between voxels whose labels are different. The higher-order potentials used in our framework are based on the robust Pⁿ model proposed by Kohli et al. The higher-order potentials enforce label consistency in cliques; hence, the proposed method can be viewed as an approach to integrate high-level information with images based on low-level features. To evaluate the performance and the robustness of the proposed label fusion method, we employ two available databases of T1-weighted (T1W) magnetic resonance (MR) images of human brains. We compare our approach with other label fusion methods in the automatic hippocampal segmentation from T1W-MR images. Results Our label fusion method yields mean Dice coefficients of 0.829 and 0.790 for the two databases used, with mean times of approximately 80 and 160 s, respectively. Conclusions We introduce a new label fusion method based on a CRF model and on ROIs. The CRF model is characterized by a pseudo-Boolean function defined on unary, pairwise and higher-order potentials. The proposed Boolean function is representable by graphs. A globally optimal binary labeling is found using an s–t mincut algorithm in each ROI. We show that the proposed approach is very competitive with respect to recently reported methods.
A label fusion method using conditional random fields with higher-order potentials: Application to hippocampal segmentation
S0933365715000469
Objective Manual contouring and registration for radiotherapy treatment planning and online adaptation for cervical cancer radiation therapy in computed tomography (CT) and magnetic resonance images (MRI) are often necessary. However, manual intervention is time-consuming and may suffer from inter- or intra-rater variability. In recent years a number of computer-guided automatic or semi-automatic segmentation and registration methods have been proposed. Segmentation and registration in CT and MRI for this purpose are challenging tasks due to soft tissue deformation, inter-patient shape and appearance variation and anatomical changes over the course of treatment. The objective of this work is to provide a state-of-the-art review of computer-aided methods developed for adaptive treatment planning and radiation therapy planning for cervical cancer radiation therapy. Methods Segmentation and registration methods published with the goal of cervical cancer treatment planning and adaptation have been identified from the literature (PubMed and Google Scholar). A comprehensive description of each method is provided. Similarities and differences of these methods are highlighted and the strengths and weaknesses of these methods are discussed. A discussion about the choice of an appropriate method for a given modality is provided. Results In the reviewed papers, a Dice similarity coefficient of around 0.85, along with a mean absolute surface distance of 2–4 mm for the clinically treated volume, was reported for the transfer of contours from planning day to treatment day. Conclusions Most segmentation and non-rigid registration methods have been primarily designed for adaptive re-planning for the transfer of contours from planning day to treatment day. The use of shape priors significantly improved segmentation and registration accuracy compared to other models.
A review of segmentation and deformable registration methods applied to adaptive cervical cancer radiation therapy treatment planning
S0933365715000470
Objective Nowadays, effective scheduling of patients in clinics, laboratories, and emergency rooms is becoming increasingly important. Hospitals are required to maximize the level of patient satisfaction, while they are faced with a lack of space and facilities. Effective scheduling of patients under existing conditions is vital for improving healthcare delivery. Shorter patient waiting times improve healthcare service quality and efficiency. Focusing on real settings, this paper addresses a semi-online patient scheduling problem in a pathology laboratory located in Tehran, Iran, as a case study. Methods and material Due to the partial precedence constraints of laboratory tests, the problem is formulated as a semi-online hybrid shop scheduling problem and a mixed integer linear programming model is proposed. A genetic algorithm (GA) is developed for solving the problem and response surface methodology is used for setting the GA parameters. A lower bound is also calculated for the problem, and several experiments are conducted to estimate the validity of the proposed algorithm. Results Based on the empirical data collected from the pathology laboratory, a comparison between the current condition of the laboratory and the results obtained by the proposed approach is performed through simulation experiments. The results indicate that the proposed approach can significantly reduce the waiting time of patients and improve operational efficiency. Conclusion The proposed approach has been successfully applied to scheduling patients in a pathology laboratory considering real-world settings, including precedence constraints of tests, a constraint on the number of sites or operators for taking tests (i.e. a multi-machine problem), and the semi-online nature of the problem.
Semi-online patient scheduling in pathology laboratories
S0933365715000482
Background Diagnosis codes are assigned to medical records in healthcare facilities by trained coders by reviewing all physician-authored documents associated with a patient's visit. This is a necessary and complex task involving coders adhering to coding guidelines and coding all assignable codes. With the popularity of electronic medical records (EMRs), computational approaches to code assignment have been proposed in recent years. However, most efforts have focused on single and often short clinical narratives, while realistic scenarios warrant full EMR-level analysis for code assignment. Objective We evaluate supervised learning approaches to automatically assign international classification of diseases (ninth revision) – clinical modification (ICD-9-CM) codes to EMRs by experimenting with a large realistic EMR dataset. The overall goal is to identify methods that offer superior performance in this task when considering such datasets. Methods We use a dataset of 71,463 EMRs corresponding to in-patient visits with discharge dates falling in a two-year period (2011–2012) from the University of Kentucky (UKY) Medical Center. We curate a smaller subset of this dataset and also use a third gold-standard dataset of radiology reports. We conduct experiments using different problem transformation approaches with feature and data selection components and employing suitable label calibration and ranking methods with novel features involving code co-occurrence frequencies and latent code associations. Results Over all codes with at least 50 training examples we obtain a micro F-score of 0.48. On the set of codes that occur in at least 1% of the two-year dataset, we achieve a micro F-score of 0.54. For the smaller radiology report dataset, the classifier chaining approach yields the best results. For the smaller subset of the UKY dataset, feature selection, data selection, and label calibration offer the best performance. Conclusions We show that datasets at different scales (size of the EMRs, number of distinct codes) and with different characteristics warrant different learning approaches. For shorter narratives pertaining to a particular medical subdomain (e.g., radiology, pathology), classifier chaining is ideal given that the codes are highly related to each other. For realistic in-patient full EMRs, feature and data selection methods offer high performance for smaller datasets. However, for large EMR datasets, we observe that the binary relevance approach with learning-to-rank based code reranking offers the best performance. Regardless of the training dataset size, for general EMRs, label calibration to select the optimal number of labels is an indispensable final step.
An empirical evaluation of supervised learning approaches in assigning diagnosis codes to electronic medical records
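A minimal sketch of the binary-relevance baseline mentioned above: one binary classifier per ICD-9-CM code over bag-of-words features of the narrative text. The toy notes and codes are invented placeholders, and the learning-to-rank re-ranking and label-calibration steps of the paper are omitted.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

notes = ["chest pain and elevated troponin",           # toy discharge summaries
         "type 2 diabetes with neuropathy",
         "chest pain ruled out, diabetes follow up"]
codes = [["410.9"], ["250.60"], ["410.9", "250.60"]]    # toy ICD-9-CM labels

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(codes)
X = TfidfVectorizer().fit_transform(notes)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
probs = clf.predict_proba(X)             # per-code scores; a re-ranker would go here
print(dict(zip(mlb.classes_, probs[2].round(2))))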
S0933365715000494
Objective The existing methods of fuzzy soft sets in decision making are mainly based on different kinds of level soft sets, and it is very difficult for decision makers to select a suitable level soft set in most instances. The goal of this paper is to present an approach to fuzzy soft sets in decision making to avoid selecting a suitable level soft set and to apply this approach to solve medical diagnosis problems. Methods This approach combines grey relational analysis with the Dempster–Shafer theory of evidence. It first utilizes grey relational analysis to calculate the grey mean relational degree, by which we calculate the uncertain degree of various parameters. Then, on the basis of the uncertain degree, the suitable basic probability assignment function of each independent alternative with each parameter can be obtained. Next, we apply Dempster–Shafer rule of evidence fusion to aggregate these alternatives into a collective alternative, by which these alternatives are ranked and the best alternative is obtained. Finally, we compare this approach with the mean potentiality approach. Results The results demonstrate the effectiveness and feasibility of this approach vis-a-vis the mean potentiality approach, Feng's method, Analytical Hierarchy Process and Naive Bayes’ classification method because the measure of performance of this approach is the same as that of the mean potentiality approach, and the belief measure of the whole uncertainty falls from the initial mean 0.3821 to 0.0069 in an application of medical diagnosis. Conclusion An approach to fuzzy soft sets in decision making by combining grey relational analysis with Dempster–Shafer theory of evidence is introduced. The advantages of this approach are discussed. A practical application to medical diagnosis problems is given.
An approach to fuzzy soft sets in decision making based on grey relational analysis and Dempster–Shafer theory of evidence: An application in medical diagnosis
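The evidence-fusion step named above is the standard Dempster–Shafer rule of combination; the sketch below shows that rule for two basic probability assignments over toy diagnosis hypotheses, while the grey-relational step that would derive the assignments from parameter data is not reproduced.

from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: fuse two basic probability assignments (BPAs).
    Focal elements are frozensets of hypotheses; masses must sum to 1."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                # mass assigned to conflicting evidence
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.4}
m2 = {frozenset({"flu"}): 0.5, frozenset({"cold"}): 0.3, frozenset({"flu", "cold"}): 0.2}
print(dempster_combine(m1, m2))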
S0933365715000500
Background Most existing systems that identify protein–protein interaction (PPI) in literature make decisions solely on evidence within a single sentence and ignore the rich context of PPI descriptions in large corpora. Moreover, they often suffer from the heavy burden of manual annotation. Methods To address these problems, a new relational-similarity (RS)-based approach exploiting context in large-scale text is proposed. A basic RS model is first established to make initial predictions. Then word similarity matrices that are sensitive to the PPI identification task are constructed using a corpus-based approach. Finally, a hybrid model is developed to integrate the word similarity model with the basic RS model. Results The experimental results show that the basic RS model achieves F-scores much higher than a baseline of random guessing on interactions (from 50.6% to 75.0%) and non-interactions (from 49.4% to 74.2%). The hybrid model further improves F-score by about 2% on interactions and 3% on non-interactions. Conclusion The experimental evaluations conducted with PPIs in well-known databases showed the effectiveness of our approach that explores context information in PPI identification. This investigation confirmed that within the framework of relational similarity, the word similarity model relieves the data sparseness problem in similarity calculation.
Protein–protein interaction identification using a hybrid model
S0933365715000780
Objective Drug named entity recognition (NER) is a critical step for complex biomedical NLP tasks such as the extraction of pharmacogenomic, pharmacodynamic and pharmacokinetic parameters. Large quantities of high quality training data are almost always a prerequisite for employing supervised machine-learning techniques to achieve high classification performance. However, the human labour needed to produce and maintain such resources is a significant limitation. In this study, we improve the performance of drug NER without relying exclusively on manual annotations. Methods We perform drug NER using either a small gold-standard corpus (120 abstracts) or no corpus at all. In our approach, we develop a voting system to combine a number of heterogeneous models, based on dictionary knowledge, gold-standard corpora and silver annotations, to enhance performance. To improve recall, we employed genetic programming to evolve 11 regular-expression patterns that capture common drug suffixes and used them as an extra means for recognition. Materials Our approach uses a dictionary of drug names, i.e. DrugBank, a small manually annotated corpus, i.e. the pharmacokinetic corpus, and a part of the UKPMC database, as raw biomedical text. Gold-standard and silver annotated data are used to train maximum entropy and multinomial logistic regression classifiers. Results Aggregating drug NER methods based on gold-standard annotations, dictionary knowledge and patterns improved the performance of models trained on gold-standard annotations only, achieving a maximum F-score of 95%. In addition, combining models trained on silver annotations, dictionary knowledge and patterns is shown to achieve performance comparable to that of models trained exclusively on gold-standard data. The main reason appears to be the morphological similarities shared among drug names. Conclusion We conclude that gold-standard data are not a hard requirement for drug NER. Combining heterogeneous models built on dictionary knowledge can achieve classification performance similar or comparable to that of the best performing model trained on gold-standard annotations.
Boosting drug named entity recognition using an aggregate classifier
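The abstract above mentions 11 evolved regular-expression patterns capturing common drug suffixes; the handful of suffix patterns below are illustrative stand-ins built from well-known INN-style stems, not the patterns evolved by genetic programming in the paper.

import re

# Illustrative suffix patterns (INN-style stems), not the 11 evolved in the paper
SUFFIX_PATTERNS = [r"\b\w+mab\b",     # monoclonal antibodies, e.g. rituximab
                   r"\b\w+pril\b",    # ACE inhibitors, e.g. lisinopril
                   r"\b\w+olol\b",    # beta blockers, e.g. atenolol
                   r"\b\w+statin\b",  # statins, e.g. simvastatin
                   r"\b\w+cillin\b"]  # penicillins, e.g. amoxicillin

def suffix_candidates(text):
    """Flag tokens whose suffix matches a drug-like pattern (boosts recall)."""
    hits = set()
    for pat in SUFFIX_PATTERNS:
        hits.update(m.group(0).lower() for m in re.finditer(pat, text, re.I))
    return hits

print(suffix_candidates("Patient switched from atenolol to lisinopril; rituximab held."))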
S0933365715000809
Introduction Patients surviving myocardial infarction (MI) can be divided into high and low arrhythmic risk groups. Distinguishing between these two groups is of crucial importance since the high-risk group has been shown to benefit from implantable cardioverter defibrillator insertion; a costly surgical procedure with potential complications and no proven advantages for the low-risk group. Currently, markers such as left ventricular ejection fraction and myocardial scar size are used to evaluate arrhythmic risk. Methods In this paper, we propose quantitative discriminative features extracted from late gadolinium enhanced cardiac magnetic resonance images of post-MI patients, to distinguish between 20 high-risk and 34 low-risk patients. These features include size, location, and textural information concerning the scarred myocardium. To evaluate the discriminative power of the proposed features, we used several built-in classification schemes from matrix laboratory (MATLAB) and Waikato environment for knowledge analysis (WEKA) software, including k-nearest neighbor (k-NN), support vector machine (SVM), decision tree, and random forest. Results In Experiment 1, the leave-one-out cross-validation scheme is implemented in MATLAB to classify high- and low-risk groups with a classification accuracy of 94.44%, and an AUC of 0.965 for a feature combination that captures size, location and heterogeneity of the scar. In Experiment 2 with the help of WEKA, nested cross-validation is performed with k-NN, SVM, adjusting decision tree and random forest classifiers to differentiate high-risk and low-risk patients. SVM classifier provided average accuracy of 92.6%, and AUC of 0.921 for a feature combination capturing location and heterogeneity of the scar. Experiment 1 and Experiment 2 show that textural features from the scar are important for classification and that localization features provide an additional benefit. Conclusion These promising results suggest that the discriminative features introduced in this paper can be used by medical professionals, or in automatic decision support systems, along with the recognized risk markers, to improve arrhythmic risk stratification in post-MI patients.
Cardiac magnetic resonance image-based classification of the risk of arrhythmias in post-myocardial infarction patients
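A minimal sketch of the leave-one-out evaluation protocol described for Experiment 1 above (an SVM classifying high- versus low-risk patients); the feature matrix is synthetic and stands in for the scar size, location and texture descriptors.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic stand-in for scar size/location/texture features of 54 patients
X = rng.normal(size=(54, 8))
y = np.array([1] * 20 + [0] * 34)          # 20 high-risk, 34 low-risk
X[y == 1] += 0.8                           # give the classes some separation

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print("LOOCV accuracy:", round(acc, 3))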
S0933365715000810
Objectives In this paper, an alignment-free method for DNA barcode classification that is based on both a spectral representation and a neural gas network for unsupervised clustering is proposed. Methods In the proposed methodology, distinctive words are identified from a spectral representation of DNA sequences. A taxonomic classification of the DNA sequence is then performed using the sequence signature, i.e., the smallest set of k-mers that can assign a DNA sequence to its proper taxonomic category. Experiments were then performed to compare our method with other supervised machine learning classification algorithms, such as support vector machine, random forest, ripper, naïve Bayes, ridor, and classification tree, which also consider short DNA sequence fragments of 200 and 300 base pairs (bp). The experimental tests were conducted over 10 real barcode datasets belonging to different animal species, which were provided by the on-line resource “Barcode of Life Database”. Results The experimental results showed that our k-mer-based approach is directly comparable, in terms of accuracy, recall and precision metrics, with the other classifiers when considering full-length sequences. In addition, we demonstrate the robustness of our method when a classification task is performed with a set of short DNA sequences that were randomly extracted from the original data. For example, the proposed method can reach an accuracy of 64.8% at the species level with 200-bp fragments. Under the same conditions, the best of the other classifiers (random forest) reaches an accuracy of 20.9%. Conclusions Our results indicate that we obtained a clear improvement over the other classifiers for the study of short DNA barcode sequence fragments.
A k-mer-based barcode DNA classification methodology based on spectral representation and a neural gas network
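A minimal sketch of the spectral (k-mer) representation underlying the method above: each barcode sequence becomes a normalized vector of k-mer frequencies. The neural gas clustering and signature-selection stages are not shown, and the toy fragment is a placeholder.

from collections import Counter
from itertools import product

def kmer_vector(seq, k=3):
    """Normalized k-mer frequency vector (the 'spectral' representation)."""
    seq = seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    return [counts.get(kmer, 0) / total for kmer in vocab]

barcode = "ACCTTGATGCTTAGCCGGAATTACCTGA"   # toy fragment of a DNA barcode
vec = kmer_vector(barcode, k=3)
print(len(vec), round(sum(vec), 2))         # 64 dimensions, frequencies sum to 1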
S0933365715000846
Objective This paper presents a multilingual news surveillance system applied to tele-epidemiology. It has been shown that multilingual approaches improve timeliness in the detection of epidemic events across the globe, eliminating the wait for local news to be translated into major languages. We present here a system to extract epidemic events in potentially any language, provided a Wikipedia seed for common disease names exists. Methods The Daniel system presented herein relies on properties that are common to news writing (the journalistic genre), the most useful being repetition and saliency. Wikipedia is used to screen common disease names to be matched with repeated character strings. Language variations, such as declensions, are handled by processing text at the character level, rather than at the word level. This additionally makes it possible to handle various writing systems in a similar fashion. Material As no multilingual ground truth existed to evaluate the Daniel system, we built a multilingual corpus from the Web, and collected annotations from native speakers of Chinese, English, Greek, Polish and Russian, with no connection or interest in the Daniel system. This data set is available online freely, and can be used for the evaluation of other event extraction systems. Results Experiments for 5 languages out of the 17 tested are detailed in this paper: Chinese, English, Greek, Polish and Russian. The Daniel system achieves an average F-measure of 82% in these 5 languages. It reaches 87% on BEcorpus, the state-of-the-art corpus in English, slightly below top-performing systems, which are tailored with numerous language-specific resources. The consistent performance of Daniel on multiple languages is an important contribution to the reactivity and the coverage of epidemiological event detection systems. Conclusions Most event extraction systems rely on extensive resources that are language-specific. While their sophistication induces excellent results (over 90% precision and recall), it restricts their coverage in terms of languages and geographic areas. In contrast, in order to detect epidemic events in any language, the Daniel system only requires a list of a few hundred disease names and locations, which can actually be acquired automatically. The system can perform consistently well on any language, with precision and recall around 82% on average, according to this paper's evaluation. Daniel's character-based approach is especially interesting for morphologically rich and low-resourced languages. The lack of resources to be exploited and the state-of-the-art string matching algorithms imply that Daniel can process thousands of documents per minute on a simple laptop. In the context of epidemic surveillance, reactivity and geographic coverage are of primary importance, since no one knows where the next event will strike, and therefore in what vernacular language it will first be reported. By being able to process any language, the Daniel system offers unique coverage for poorly endowed languages, and can complement state-of-the-art techniques for major languages.
Multilingual event extraction for epidemic detection
S093336571500086X
Objective Recurrence of cancer after treatment increases the risk of death. The ability to predict the treatment outcome can help to design the treatment planning and can thus be beneficial to the patient. We aim to select predictive features from clinical and PET (positron emission tomography) based features, in order to provide doctors with informative factors so as to anticipate the outcome of the patient treatment. Methods In order to overcome the small sample size problem of datasets usually met in the medical domain, we propose a novel wrapper feature selection algorithm, named HFS (hierarchical forward selection), which searches forward in a hierarchical feature subset space. Feature subsets are iteratively evaluated with the prediction performance using SVM (support vector machine). All feature subsets performing better than those at the preceding iteration are retained. Moreover, as SUV (standardized uptake value) based features have been recognized as significant predictive factors for a patient outcome, we propose to incorporate this prior knowledge into the selection procedure to improve its robustness and reduce its computational cost. Results Two real-world datasets from cancer patients are included in the evaluation. We extract dozens of clinical and PET-based features to characterize the patient's state, including SUV parameters and texture features. We use leave-one-out cross-validation to evaluate the prediction performance, in terms of prediction accuracy and robustness. Using SVM as the classifier, our HFS method produces accuracy values of 100% and 94% on the two datasets, respectively, and robustness values of 89% and 96%. Without accuracy loss, the prior-based version (pHFS) improves the robustness up to 100% and 98% on the two datasets, respectively. Conclusions Compared with other feature selection methods, the proposed HFS and pHFS provide the most promising results. For our HFS method, we have empirically shown that the addition of prior knowledge improves the robustness and accelerates the convergence.
Robust feature selection to predict tumor treatment outcome
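A minimal sketch of plain greedy forward wrapper selection scored by leave-one-out SVM accuracy, which is the general idea behind HFS above; the hierarchical organisation of the subset space and the SUV-based prior of pHFS are not reproduced, and the synthetic data are placeholders.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

def forward_selection(X, y, max_features=5):
    """Greedy forward wrapper selection scored by LOOCV accuracy of an SVM."""
    selected, best_score = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = {f: cross_val_score(SVC(), X[:, selected + [f]], y,
                                     cv=LeaveOneOut()).mean()
                  for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:
            break                              # no candidate improves the score
        selected.append(f_best)
        remaining.remove(f_best)
        best_score = scores[f_best]
    return selected, best_score

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 10))
y = (X[:, 2] + X[:, 7] > 0).astype(int)        # only features 2 and 7 matter
print(forward_selection(X, y))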
S0933365715000871
Background Over the past 30 years, the international conference on Artificial Intelligence in MEdicine (AIME) has been organized at different venues across Europe every 2 years, establishing a forum for scientific exchange and creating an active research community. The Artificial Intelligence in Medicine journal has published theme issues with extended versions of selected AIME papers since 1998. Objectives To review the history of AIME conferences, investigate its impact on the wider research field, and identify challenges for its future. Methods We analyzed a total of 122 session titles to create a taxonomy of research themes and topics. We classified all 734 AIME conference papers published between 1985 and 2013 with this taxonomy. We also analyzed the citations to these conference papers and to 55 special issue papers. Results We identified 30 research topics across 12 themes. AIME was dominated by knowledge engineering research in its first decade, while machine learning and data mining prevailed thereafter. Together these two themes have contributed about 51% of all papers. There have been eight AIME papers that were cited at least 10 times per year since their publication. Conclusions There has been a major shift from knowledge-based to data-driven methods while the interest for other research themes such as uncertainty management, image and signal processing, and natural language processing has been stable since the early 1990s. AIME papers relating to guidelines and protocols are among the most highly cited.
Thirty years of artificial intelligence in medicine (AIME) conferences: A review of research themes
S0933365715000895
Objectives Early recognition of breast cancer, the most commonly diagnosed form of cancer in women, is of crucial importance, given that it leads to significantly improved chances of survival. Medical thermography, which uses an infrared camera for thermal imaging, has been demonstrated to be a particularly useful technique for early diagnosis, because it detects smaller tumors than the standard modality of mammography. Methods and material In this paper, we analyse breast thermograms by extracting features describing bilateral symmetries between the two breast areas, and present a classification system for decision making. Clearly, the costs associated with missing a cancer case are much higher than those for mislabelling a benign case. At the same time, datasets contain significantly fewer malignant cases than benign ones. Standard classification approaches fail to consider either of these aspects. In this paper, we introduce a hybrid cost-sensitive classifier ensemble to address this challenging problem. Our approach entails a pool of cost-sensitive decision trees which assign a higher misclassification cost to the malignant class, thereby boosting its recognition rate. A genetic algorithm is employed for simultaneous feature selection and classifier fusion. As an optimisation criterion, we use a combination of misclassification cost and diversity to achieve both a high sensitivity and a heterogeneous ensemble. Furthermore, we prune our ensemble by discarding classifiers that contribute minimally to the decision making. Results For a challenging dataset of about 150 thermograms, our approach achieves an excellent sensitivity of 83.10%, while maintaining a high specificity of 89.44%. This not only signifies improved recognition of malignant cases, but also statistically outperforms other state-of-the-art algorithms designed for imbalanced classification, and hence provides an effective approach for analysing breast thermograms. Conclusions Our proposed hybrid cost-sensitive ensemble can facilitate highly accurate early diagnosis of breast cancer based on thermogram features. It overcomes the difficulties posed by the imbalanced distribution of patients in the two analysed groups.
A hybrid cost-sensitive ensemble for imbalanced breast thermogram classification
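A minimal sketch of the cost-sensitive building block described above — decision trees that penalise missing a malignant case more heavily than mislabelling a benign one — bagged into a simple ensemble; the class-weight values and synthetic thermogram features are illustrative, and the GA-driven feature selection, fusion and pruning stages are omitted.

import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
# synthetic imbalanced thermogram features: 120 benign (0) vs 30 malignant (1)
X = np.vstack([rng.normal(0, 1, (120, 6)), rng.normal(1.0, 1, (30, 6))])
y = np.array([0] * 120 + [1] * 30)

# illustrative misclassification costs: missing a malignant case is 5x worse
base = DecisionTreeClassifier(class_weight={0: 1, 1: 5}, max_depth=4, random_state=0)
ensemble = BaggingClassifier(base, n_estimators=25, random_state=0)

pred = cross_val_predict(ensemble, X, y, cv=5)
print("sensitivity:", round(recall_score(y, pred, pos_label=1), 2),
      "specificity:", round(recall_score(y, pred, pos_label=0), 2))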
S0933365715000901
Objective This paper presents benchmarking results of human epithelial type 2 (HEp-2) interphase cell image classification methods on a very large dataset. The indirect immunofluorescence method applied on HEp-2 cells has been the gold standard to identify connective tissue diseases such as systemic lupus erythematosus and Sjögren's syndrome. However, the method suffers from numerous issues such as being subjective, time consuming and labor intensive. This has been the main motivation for the development of various computer-aided diagnosis systems whose main task is to automatically classify a given cell image into one of the predefined classes. Methods and material The benchmarking was performed in the form of an international competition held in conjunction with the International Conference on Image Processing in 2013: fourteen teams, composed of practitioners and researchers in this area, took part in the initiative. The system developed by each team was trained and tested on a very large HEp-2 cell dataset comprising over 68,000 images of HEp-2 cells. The dataset contains cells with six different staining patterns and two levels of fluorescence intensity. For each method we provide a brief description highlighting the design choices and an in-depth analysis of the benchmarking results. Results The staining pattern recognition accuracy attained by the methods varies between 47.91% and slightly above 83.65%. However, the difference between the top performing method and the seventh ranked method is only 5%. In the paper, we also study the performance achieved by fusing the best methods, finding that a recognition rate of 85.60% is reached when the top seven methods are employed. Conclusions We found that the highest performance is obtained when using a strong classifier (typically a kernelised support vector machine) in conjunction with features extracted from local statistics. Furthermore, the misclassification profiles of the different methods highlight that some staining patterns are intrinsically more difficult to recognize. We also noted that performance is strongly affected by the fluorescence intensity level. Thus, low accuracy is to be expected when analyzing low-contrast images.
Benchmarking human epithelial type 2 interphase cells classification methods on a very large dataset
S0933365715000925
Objective Case-based reasoning (CBR) is a problem-solving paradigm that uses past knowledge to interpret or solve new problems. It is suitable for experience-based and theory-less problems. Building a semantically intelligent CBR that mimics expert thinking can solve many problems, especially medical ones. Methods Knowledge-intensive CBR using formal ontologies is an evolution of this paradigm. Ontologies can be used for case representation and storage, and as background knowledge. Using standard medical ontologies, such as SNOMED CT, enhances interoperability and integration with health care systems. Moreover, utilizing vague or imprecise knowledge further improves the CBR's semantic effectiveness. This paper proposes a fuzzy ontology-based CBR framework. It proposes a fuzzy case-base OWL2 ontology, and a fuzzy semantic retrieval algorithm that handles many feature types. Material This framework is implemented and tested on the diabetes diagnosis problem. The fuzzy ontology is populated with 60 real diabetic cases. The effectiveness of the proposed approach is illustrated with a set of experiments and case studies. Results The resulting system can answer complex medical queries related to semantic understanding of medical concepts and handling of vague terms. The resulting fuzzy case-base ontology has 63 concepts, 54 (fuzzy) object properties, 138 (fuzzy) datatype properties, 105 fuzzy datatypes, and 2640 instances. The system achieves an accuracy of 97.67%. We compare our framework with existing CBR systems and a set of five machine-learning classifiers; our system outperforms all of these systems. Conclusion Building an integrated CBR system can improve its performance. Representing CBR knowledge using the fuzzy ontology and building a case retrieval algorithm that treats different features differently improves the accuracy of the resulting systems.
A fuzzy-ontology-oriented case-based reasoning framework for semantic diabetes diagnosis
S0933365715000937
Background An antibiogram (ABG) gives the results of in vitro susceptibility tests performed on a pathogen isolated from a culture of a sample taken from blood or other tissues. The institutional cross-ABG consists of the conditional probability of susceptibility for pairs of antimicrobials. This paper explores how interpretative reading of the isolate ABG can be used to replace and improve the prior probabilities stored in the institutional ABG. Probabilities were calculated by both naïve and semi-naïve Bayesian approaches, each using the ABG for the given isolate together with the institutional ABGs and cross-ABGs. Methods and material We assessed an isolate database from an Israeli university hospital with ABGs from 3347 clinically significant blood isolates, where on average 19 antimicrobials were tested for susceptibility, out of 31 antimicrobials in regular use for patient treatment. For each of the 14 pathogens or groups of pathogens in the database the average (prior) probability of susceptibility (also called the institutional ABG) and the institutional cross-ABG were calculated. For each isolate, the normalized Brier distance was used as a measure of the distance between the susceptibility test results from the isolate ABG and, respectively, the prior probabilities and posterior probabilities of susceptibility. We used a 5-fold cross-validation to evaluate the performance of different approaches to predict posterior susceptibilities. Results The normalized Brier distance between the prior probabilities and the susceptibility test results for all isolates in the database was reduced from 37.7% to 28.2% by the naïve Bayes method. The smallest normalized Brier distance of 25.3% was obtained with the semi-naïve min2max2 method, which uses the two smallest significant odds ratios and the two largest significant odds ratios, expressing respectively cross-resistance and cross-susceptibility, calculated from the cross-ABG. Conclusion A practical method for predicting the probability of antimicrobial susceptibility could be developed based on a semi-naïve Bayesian approach using statistical data on cross-susceptibilities and cross-resistances. The reduction in Brier distance from 37.7% to 25.3% indicates a significant advantage of the proposed min2max2 method (p < 10⁻⁹⁹).
Interpretative reading of the antibiogram – a semi-naïve Bayesian approach
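As a rough illustration of the approach described above, the sketch below updates an institutional prior susceptibility on the odds scale with likelihood ratios (a generic naïve Bayes step, not the paper's exact min2max2 selection) and computes a simple Brier distance between predictions and observed test results. All probabilities and likelihood ratios are invented, not taken from the study.

```python
# Sketch: updating an institutional prior susceptibility with a naïve Bayes step,
# and scoring predictions with a Brier distance. Numbers are illustrative only.

def posterior_susceptibility(prior, likelihood_ratios):
    """Naïve Bayes update on the odds scale.

    prior             : institutional P(susceptible) for the target antimicrobial
    likelihood_ratios : LR contributed by each observed test result on the same
                        isolate (e.g. derived from a cross-ABG); LR > 1 pushes
                        towards susceptibility, LR < 1 towards resistance.
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

def brier_distance(predicted, observed):
    """Mean squared distance between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)

# Illustrative example: 60% prior susceptibility, two cross-susceptibility signals.
post = posterior_susceptibility(0.60, [2.5, 1.8])
print(round(post, 3))

priors     = [0.60, 0.60, 0.60]
posteriors = [post, post, post]
observed   = [1, 1, 0]
print(brier_distance(priors, observed), brier_distance(posteriors, observed))
```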
S0933365715000950
Objective This paper aims to develop an automated gastroscopic video summarization algorithm to assist clinicians in more effectively reviewing the abnormal contents of the video. Methods and materials To select the most representative frames from the original video sequence, we formulate gastroscopic video summarization as a dictionary selection problem. Unlike traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of the selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor-quality images and a segmentation process to reduce the computational complexity. Results For the experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with the state of the art using content consistency, index consistency, and content-index consistency with respect to the ground truth. Compared with all competitors, our method obtains the best results in 23 of the 30 videos evaluated on content consistency, 24 of 30 on index consistency, and all videos on content-index consistency. Conclusions For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance than other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be adapted automatically to various real applications, such as the training of young clinicians, computer-aided diagnosis, or medical report generation.
Scalable gastroscopic video summarization via similar-inhibition dictionary selection
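A minimal numpy sketch of dictionary-style key-frame selection follows: frames are added greedily when they improve coverage of the whole sequence, while a similar-inhibition penalty discourages frames that are close to those already chosen. The greedy scheme, frame descriptors, and penalty weight are simplifications for illustration and do not reproduce the paper's optimization model.

```python
import numpy as np

def select_key_frames(features, k, inhibition=0.5):
    """Greedy key-frame selection sketch.

    features   : (n_frames, dim) array of per-frame descriptors (e.g. colour/texture).
    k          : number of key frames to pick.
    inhibition : weight of the similar-inhibition penalty discouraging frames
                 that are close to already selected ones.
    """
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    n = feats.shape[0]
    selected = []
    for _ in range(k):
        best_idx, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # coverage: how well frame i represents the whole sequence on average
            coverage = float(np.mean(feats @ feats[i]))
            # similar-inhibition: penalise similarity to frames already chosen
            penalty = max((float(feats[j] @ feats[i]) for j in selected), default=0.0)
            score = coverage - inhibition * penalty
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return selected

rng = np.random.default_rng(0)
video = rng.normal(size=(200, 64))   # synthetic stand-in for frame descriptors
print(select_key_frames(video, k=5))
```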
S0933365715001086
Objective Anomaly detection, as an imperative task for clinical pathway (CP) analysis and improvement, can provide useful and actionable knowledge of interest to clinical experts. Existing studies have mainly focused on detecting globally anomalous inpatient traces of CPs using similarity measures in a structured manner; this brings order to the chaos of CPs but may reduce the accuracy of similarity measurement between inpatient traces and distort the efficiency of anomaly detection. In addition, local anomalies that exist in subsegments of events or behaviors in inpatient traces are easily overlooked by existing approaches, since these are designed to detect global or large anomalies. Method In this study, we employ a probabilistic topic model to discover underlying treatment patterns, and assume that any significant unexplainable deviations from the normal behaviors implied by the derived patterns are strongly correlated with anomalous behaviors. In this way, we can identify detailed local abnormal behaviors and the associations between these anomalies, so that diagnostic information on local anomalies can be provided. Results The proposed approach is evaluated on a clinical dataset, including 2954 unstable angina patient traces and 483,349 clinical events, extracted from a Chinese hospital. Using the proposed method, local anomalies are detected from the log. In addition, the associations between the detected local anomalies are derived from the log, drawing clinical attention to the reasons for these anomalies in CPs. The correctness of the proposed approach has been evaluated by three experienced cardiologists of the hospital. For four types of local anomalies (i.e., unexpected events, early events, delayed events, and absent events), the proposed approach achieves 94%, 71%, 77%, and 93.2% recall, respectively. This is quite remarkable, as we do not use prior knowledge. Conclusion Substantial experimental results show that the proposed approach can effectively detect local anomalies in CPs and also provide diagnostic information on the detected anomalies in an informative manner.
On local anomaly detection and analysis for clinical pathways
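The underlying idea — learn treatment patterns with a topic model, then flag observed events to which the derived patterns assign very low probability — can be sketched as follows. The event-count matrix, number of topics, and anomaly threshold are synthetic and arbitrary; the paper's actual model and hospital log are not reproduced.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Toy event-count matrix: one row per inpatient trace, one column per clinical
# event type (the counts below are synthetic, not from the study's log).
rng = np.random.default_rng(1)
traces = rng.poisson(lam=2.0, size=(300, 40))

lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topic = lda.fit_transform(traces)                 # per-trace topic mixture
topic_event = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# Expected probability of every event type within each trace, given its mixture.
expected = doc_topic @ topic_event                    # shape (n_traces, n_events)

# Flag candidate "local anomalies": events that occur although the derived
# treatment patterns give them very low probability in that trace.
threshold = 0.005                                     # arbitrary demo threshold
anomalies = [(t, e) for t in range(traces.shape[0])
             for e in range(traces.shape[1])
             if traces[t, e] > 0 and expected[t, e] < threshold]
print(len(anomalies), "suspicious (trace, event) pairs")
```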
S0933365715001098
Objectives Inspired by real-world examples from the forensic medical sciences domain, we seek to determine whether a decision about an interventional action could be subject to amendment on the basis of some incomplete information within the model, and whether it would be worthwhile for the decision maker to seek further information prior to suggesting a decision. Method The method is based on the underlying principle of Value of Information to enhance decision analysis in interventional and counterfactual Bayesian networks. Results The method is applied to two real-world Bayesian network models (previously developed for decision support in forensic medical sciences) to examine the average gain in terms of both Value of Information (average relative gain ranging from 11.45% to 59.91%) and decision making (potential amendments in decision making ranging from 0% to 86.8%). Conclusions We have shown how the method is useful for decision makers, not only when decision making is subject to amendment on the basis of some unknown risk factors, but also when it is not. Knowing that a decision outcome is independent of one or more unknown risk factors saves us from the trouble of seeking information about that particular set of risk factors. Further, we have also extended the assessment of this implication to the counterfactual case and demonstrated how answers about interventional actions are expected to change when some unknown factors become known, and how useful this becomes in forensic medical science.
Value of information analysis for interventional and counterfactual Bayesian networks in forensic medical sciences
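A toy example of the Value of Information principle mentioned above is given below: it computes the expected value of perfect information (EVPI) for a single binary unknown risk factor under a small utility table. The probabilities and utilities are invented and have no connection to the paper's forensic models.

```python
# Sketch: expected value of perfect information (EVPI) for one unknown risk factor.
# Utilities and probabilities are hypothetical, not taken from the forensic models.

P_RISK = 0.3                                  # P(risk factor present)
UTILITY = {                                   # UTILITY[action][risk_present]
    "intervene":  {True: 0.60, False: 0.80},
    "do_nothing": {True: 0.20, False: 0.95},
}

def expected_utility(action, p_risk=P_RISK):
    return p_risk * UTILITY[action][True] + (1 - p_risk) * UTILITY[action][False]

# Best action without further information.
best_now = max(UTILITY, key=expected_utility)
eu_now = expected_utility(best_now)

# Best achievable expected utility if the risk factor were observed before deciding.
eu_perfect = (P_RISK       * max(UTILITY[a][True]  for a in UTILITY) +
              (1 - P_RISK) * max(UTILITY[a][False] for a in UTILITY))

evpi = eu_perfect - eu_now
print(best_now, round(eu_now, 3), round(evpi, 3))
# A positive EVPI means the decision could be amended after observing the factor,
# so seeking that information is potentially worthwhile; EVPI of zero means the
# decision is independent of the unknown factor.
```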
S0933365715001207
Objective Arden Syntax is a standard for representing and sharing medical knowledge in the form of independent modules and looks back on a history of 25 years. Its traditional field of application is the monitoring of clinical events, such as generating an alert when a critical laboratory result occurs. Arden Syntax Medical Logic Modules must be able to retrieve patient data from the electronic medical record in order to enable automated decision making. For patient data with a simple structure, for instance a list of laboratory results or, more broadly, any patient data with a list or table structure, this mapping process is straightforward. However, if patient data have a complex nested structure, the mapping process may become tedious. Two clinical requirements – to process complex microbiology data and to decrease the time between a critical laboratory event and its alerting by monitoring Health Level 7 (HL7) communication – triggered the investigation of approaches for providing complex patient data from electronic medical records inside Arden Syntax Medical Logic Modules. Methods and materials The data mapping capabilities of current versions of the Arden Syntax standard, as well as the interfaces and data mapping capabilities of three different Arden Syntax environments, were analyzed. We found and implemented three different approaches to map a test sample of complex microbiology data for 22 patients and measured their execution times and memory usage. Based on one of these approaches, we mapped entire HL7 messages onto congruent Arden Syntax objects. Results While current versions of Arden Syntax support the mapping of list and table structures, complex data structures are so far unsupported. We identified three different approaches to map complex data from electronic patient records onto Arden Syntax variables; each of these approaches successfully mapped a test sample of complex microbiology data. The first approach was implemented in Arden Syntax itself, the second inside the interface component of one of the investigated Arden Syntax environments. The third was based on deserialization of Extensible Markup Language (XML) data. Mean execution times of the approaches to map the test sample were 497ms, 382ms, and 84ms. Peak memory usage amounted to 3MB, 3MB, and 6MB. Conclusion The most promising approach by far was to map arbitrary XML structures onto congruent complex data types of Arden Syntax through deserialization. This approach is generic insofar as a data mapper based on it can transform any patient data provided in an appropriate XML format. It could therefore help overcome a major obstacle to integrating clinical decision support functions into clinical information systems. Theoretically, the deserialization approach would even allow mapping entire patient records onto Arden Syntax objects in a single step. We recommend extending the Arden Syntax specification with an appropriate XML data format.
Accessing complex patient data from Arden Syntax Medical Logic Modules
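The deserialization idea — turning an arbitrary XML document into a congruent nested object — can be mimicked in Python with the standard library, as a rough stand-in for the Arden Syntax object mapping described above. The microbiology XML fragment is invented, and element attributes are ignored for brevity.

```python
import xml.etree.ElementTree as ET

def xml_to_object(element):
    """Recursively map an XML element onto nested dicts/lists (leaf text as strings).
    Attributes are ignored to keep the sketch short."""
    children = list(element)
    if not children:
        return (element.text or "").strip()
    obj = {}
    for child in children:
        value = xml_to_object(child)
        if child.tag in obj:                       # repeated tags become lists
            if not isinstance(obj[child.tag], list):
                obj[child.tag] = [obj[child.tag]]
            obj[child.tag].append(value)
        else:
            obj[child.tag] = value
    return obj

# Invented microbiology fragment, only to show that the nesting is preserved.
xml = """
<isolate>
  <organism>Escherichia coli</organism>
  <susceptibility><antibiotic>Ampicillin</antibiotic><result>R</result></susceptibility>
  <susceptibility><antibiotic>Gentamicin</antibiotic><result>S</result></susceptibility>
</isolate>
"""
record = xml_to_object(ET.fromstring(xml))
print(record["susceptibility"][1]["result"])   # -> "S"
```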
S0933365715001219
Purpose Higher order tensor (HOT) imaging approaches based on the spherical deconvolution framework have attracted much interest for their effectiveness in estimating the fiber orientation distribution (FOD). However, sparse regularization techniques are still needed to obtain a stable FOD when solving the deconvolution problem, particularly at very high orders. Our goal is to adequately characterize the actual sparsity of the FOD domain in order to develop an accurate fiber orientation estimation approach within the HOT framework. Materials and methods We propose a sparse HOT regularization model by enforcing the sparsity constraint directly on the representation of the FOD instead of imposing it on the coefficients of the basis function. We then incorporate both the stabilizing effect of the l2 penalty and the sparsity-encouraging effect of the l1 penalty in the sparse model to adequately characterize the actual sparsity of the FOD domain. Furthermore, a weighted regularization scheme is developed to iteratively solve the deconvolution problem. The deconvolution technique is compared against existing methods using an l2 or l1 regularizer and tested on synthetic data and real human brain data. Results Experiments were conducted on synthetic data and real human brain data. The synthetic results indicate that crossing fibers are more easily detected and that the angular resolution limit is improved by our method by approximately 20°–30° compared to the existing HOT method. The detection accuracy is considerably improved compared with that of spherical deconvolution approaches using the l2 regularizer and the reweighted l1 scheme. Conclusions The results demonstrate that the proposed deconvolution technique allows HOTs to obtain increasingly clean and sharp FODs, which in turn significantly increases the angular resolution of current HOT methods. With sparsity imposed on the FOD domain, the method efficiently improves the ability of HOTs to resolve crossing fibers.
Sparse deconvolution of higher order tensor for fiber orientation distribution estimation
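The combined l1/l2 (elastic-net-style) regularization mentioned above can be illustrated with a small proximal-gradient (ISTA) solver for a non-negative sparse deconvolution problem. The forward matrix here is random rather than an actual spherical-convolution operator, and the paper's iterative weighting scheme is not reproduced; this is only a generic sketch of the optimization.

```python
import numpy as np

def elastic_net_ista(A, y, lam1=0.05, lam2=0.01, n_iter=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2 with x >= 0,
    via proximal gradient descent (ISTA). A plays the role of the operator mapping
    an FOD onto measured signals."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam2)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam2 * x           # gradient of the smooth part
        x = x - step * grad
        x = np.maximum(x - step * lam1, 0.0)          # soft-threshold + non-negativity
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 120))                        # synthetic forward operator
x_true = np.zeros(120)
x_true[[10, 45]] = [1.0, 0.7]                         # two "fiber" peaks
y = A @ x_true + 0.01 * rng.normal(size=60)

x_hat = elastic_net_ista(A, y)
print(np.argsort(x_hat)[-2:])                         # indices of the largest recovered peaks
```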
S0933365715001220
Objective Bacterial infections frequently cause prolonged intensive care unit (ICU) stays. Repeated measurements of the procalcitonin (PCT) biomarker are typically used for early detection and follow-up of bacterial infections and sepsis, but these PCT measurements are costly. To avoid overutilization, we developed and evaluated a clinical decision support system (CDSS) in Arden Syntax that identifies necessary and preventable PCT orders. Methods The CDSS implements a rule set based on the latest PCT value, the time period since that measurement, and the PCT trend scenario. We assessed the effects of the CDSS on the daily rate of ordered PCT tests in a prospective study with two ON and two OFF phases in a surgical ICU. In addition, we interviewed the participating physicians to investigate their experience with the CDSS advice. Results Prior to the deployment of the CDSS, 22% of the performed PCT tests were potentially preventable according to the rule set. During the first ON phase, the daily rate of ordered PCT tests per patient decreased significantly from 0.807 to 0.662. In the subsequent OFF, ON and OFF phases, however, PCT utilization again reached daily rates of 0.733, 0.803, and 0.792, respectively. The interviews showed that the physicians were aware of the problem of PCT overutilization, which they primarily attributed to acute time constraints. The responders assumed that the majority of preventable measurements are indiscriminately ordered for patients during longer ICU stays. Conclusion We observed an 18% reduction of PCT tests within the first four weeks of CDSS support in the investigated ICU. This reduction may have been influenced by raised awareness of the overutilization problem; the extent of this influence cannot be determined with our study design. No reduction of PCT tests could be observed during the second ON phase. The physician interviews indicated that time-critical ICU situations can prevent extensive reflection about the necessity of individual tests. In order to achieve an enduring effect on PCT utilization, we will have to proceed to electronic order entry.
Using Arden Syntax Medical Logic Modules to reduce overutilization of laboratory tests for detection of bacterial infections—Success or failure?
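A rule set of the kind described above — driven by the latest PCT value, the elapsed time since that measurement, and the trend — can be sketched as a simple decision function. The thresholds, time window, and advice texts below are invented for illustration and do not reproduce the study's actual rule set.

```python
from datetime import datetime, timedelta

def pct_order_advice(latest_value, measured_at, previous_value=None,
                     now=None, repeat_after_hours=24, low_cutoff=0.5):
    """Illustrative rule: is a new procalcitonin (PCT) order necessary?

    The 0.5 ng/mL cut-off, 24 h window, and falling-trend rule are invented for
    this sketch and do not reproduce the study's rule set.
    """
    now = now or datetime.now()
    hours_since = (now - measured_at).total_seconds() / 3600.0

    if hours_since < repeat_after_hours:
        return "preventable: recent PCT result already available"
    if latest_value < low_cutoff:
        return "preventable: last PCT below cut-off, repeat only on clinical change"
    if previous_value is not None and latest_value < previous_value:
        return "preventable: PCT trend is falling"
    return "necessary: elevated or rising PCT and no recent measurement"

print(pct_order_advice(1.8, datetime.now() - timedelta(hours=30), previous_value=2.5))
```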
S0933365715001232
Background Pediatric guideline-based care is often overlooked because of the constraints of a typical office visit and the sheer number of guidelines that may apply to a patient's visit. In response to this problem, in 2004 we developed a pediatric computer-based clinical decision support system using Arden Syntax medical logic modules (MLMs). Methods The Child Health Improvement through Computer Automation system (CHICA) screens patient families in the waiting room and alerts the physician in the exam room. Here we describe the adaptation of Arden Syntax to support the production and consumption of patient-specific tailored documents for every clinical encounter in CHICA, and we describe the experiments that demonstrate the effectiveness of this system. Results As of this writing, CHICA has served over 44,000 patients at 7 pediatric clinics in our healthcare system over the last decade, and its MLMs have been fired 6,182,700 times in "produce" and 5,334,021 times in "consume" mode. It has run continuously for over 10 years and has been used by 755 physicians, residents, fellows, nurse practitioners, nurses, and clinical staff. There are 429 MLMs implemented in CHICA using the Arden Syntax standard. Studies of CHICA's effectiveness include several published randomized controlled trials. Conclusions Our results show that the Arden Syntax standard provided an effective way to represent pediatric guidelines for use in routine care. We required only minor modifications to the standard to support our clinical workflow. Additionally, the Arden Syntax implementation in CHICA facilitated the study of many pediatric guidelines in real clinical environments.
Pediatric decision support using adapted Arden Syntax
S0933365715001244
Objectives The radiology report is the most important source of clinical imaging information. It documents critical information about the patient's health and the radiologist's interpretation of medical findings. It also communicates information to the referring physicians and records that information for future clinical and research use. Although efforts to structure some radiology report information through predefined templates are beginning to bear fruit, a large portion of radiology report information is entered as free text. The free text format is a major obstacle to the rapid extraction and subsequent use of information by clinicians, researchers, and healthcare information systems. This difficulty is due to the ambiguity and subtlety of natural language, the complexity of the described images, and variations among different radiologists and healthcare organizations. As a result, radiology reports are used only once by the clinician who ordered the study and are rarely used again for research and data mining. In this work, machine learning techniques and a large multi-institutional radiology report repository are used to extract the semantics of the radiology report and overcome the barriers to the re-use of radiology report information in clinical research and other healthcare applications. Material and methods We describe a machine learning system to annotate radiology reports and extract report contents according to an information model. This information model covers the majority of clinically significant contents in radiology reports and is applicable to a wide variety of radiology study types. Our automated approach uses discriminative sequence classifiers for named-entity recognition to extract and organize clinically significant terms and phrases consistent with the information model. We evaluated our information extraction system on 150 radiology reports from three major healthcare organizations and compared its results to a commonly used non-machine-learning information extraction method. We also evaluated the generalizability of our approach across organizations by training and testing our system on data from different organizations. Results Our results show the efficacy of our machine learning approach in extracting the elements of the information model (10-fold cross-validation average performance: precision 87%, recall 84%, F1 score 85%) and its superiority and generalizability compared to the common non-machine-learning approach (p-value<0.05). Conclusions Our machine learning information extraction approach provides an effective automatic method to annotate and extract clinically significant information from a large collection of free text radiology reports. This information extraction system can help clinicians better understand radiology reports and prioritize their review process. In addition, the extracted information can be used by researchers to link radiology reports to information from other data sources, such as electronic health records and the patient's genome. The extracted information can also facilitate disease surveillance, real-time clinical decision support for the radiologist, and content-based image retrieval.
Information extraction from multi-institutional radiology reports
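A highly simplified token-classification stand-in for the discriminative sequence labelling described above is sketched below, using scikit-learn logistic regression over hand-crafted token features instead of a full sequence model such as a CRF. The two training sentences, labels, and feature set are invented; a real system would use a proper annotated corpus and the information model's label scheme.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens, i):
    """Simple per-token features; a real system would add richer context and lexicons."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Tiny invented training sample: label each token as part of a FINDING or OTHER (O).
sentences = [
    (["No", "acute", "intracranial", "hemorrhage", "."],
     ["O", "FINDING", "FINDING", "FINDING", "O"]),
    (["Small", "pleural", "effusion", "on", "the", "left", "."],
     ["FINDING", "FINDING", "FINDING", "O", "O", "O", "O"]),
]

X, y = [], []
for tokens, labels in sentences:
    for i, label in enumerate(labels):
        X.append(token_features(tokens, i))
        y.append(label)

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)

test = ["Acute", "hemorrhage", "is", "present", "."]
print(model.predict([token_features(test, i) for i in range(len(test))]))
```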
S0933365715001256
Objective The objective of this study is to help a team of physicians and knowledge engineers acquire clinical knowledge from existing practice datasets for the treatment of head and neck cancer, to validate the knowledge against published guidelines, to create refined rules, and to incorporate these rules into the clinical workflow for clinical decision support. Methods and materials A team of physicians (clinical domain experts) and knowledge engineers adapt an approach for modeling existing treatment practices into final executable clinical models. For the initial work, the oral cavity is selected as the candidate target area for the creation of rules covering a cancer treatment plan. The final executable model is presented in HL7 Arden Syntax, which allows the clinical knowledge to be shared among organizations. We use a data-driven knowledge acquisition approach based on the analysis of real patient datasets to generate a predictive model (PM). The PM is converted into a refined clinical knowledge model (R-CKM), which follows a rigorous validation process. The validation process uses a clinical knowledge model (CKM), which provides the basis for defining the underlying validation criteria. The R-CKM is converted into a set of medical logic modules (MLMs) and is evaluated using real patient data from a hospital information system. Results We selected the oral cavity as the intended site for the derivation of all related clinical rules for possible associated treatment plans. A team of physicians analyzed the National Comprehensive Cancer Network (NCCN) guidelines for the oral cavity and created a common CKM. Among the decision tree algorithms, chi-squared automatic interaction detection (CHAID) was applied to a refined dataset of 1229 patients to generate the PM. The PM was tested on a disjoint dataset of 739 patients, yielding 59.0% accuracy. Using a rigorous validation process, the R-CKM was created from the PM as the final model, after conforming to the CKM. The R-CKM was converted into four candidate MLMs and used to evaluate real data from the 739 patients, achieving 53.0% accuracy. Conclusion Data-driven knowledge acquisition and validation against published guidelines were used to help a team of physicians and knowledge engineers create executable clinical knowledge. The advantages of the R-CKM are twofold: it reflects real practices and conforms to standard guidelines, while providing accuracy comparable to that of the PM. The proposed approach yields better insight into the steps of knowledge acquisition and enhances the collaboration of the team of physicians and knowledge engineers.
Data-driven knowledge acquisition, validation, and transformation into HL7 Arden Syntax
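The data-driven step — learning a tree-structured predictive model from patient records and then inspecting its rules against a guideline model — can be illustrated with scikit-learn. Note that scikit-learn has no CHAID implementation, so a CART tree is used as a stand-in, and the dataset, features, and labels are entirely synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for the patient dataset (features and labels are invented).
rng = np.random.default_rng(0)
n = 1200
X = np.column_stack([
    rng.integers(1, 5, n),        # tumour stage
    rng.integers(0, 2, n),        # nodal involvement
    rng.integers(20, 90, n),      # age
])
y = ((X[:, 0] >= 3) | (X[:, 1] == 1)).astype(int)   # 1 = surgery + adjuvant therapy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# CART as a stand-in for CHAID (scikit-learn provides no CHAID implementation).
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(tree.score(X_te, y_te), 3))

# The learned rules are what a physician/knowledge-engineer team would review
# against the guideline model before translating them into MLMs.
print(export_text(tree, feature_names=["stage", "nodes", "age"]))
```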
S0933365715001268
Objective Most practically deployed Arden-Syntax-based clinical decision support (CDS) modules process data from individual patients. The Arden Syntax specification, however, would in principle also support multi-patient CDS. The patient data management system (PDMS) at our local intensive care units does not natively support patient overviews from customizable CDS routines, but local physicians indicated a demand for multi-patient tabular overviews of important clinical parameters such as key laboratory measurements. As our PDMS installation provides Arden Syntax support, we set out to explore the capability of Arden Syntax for multi-patient CDS by implementing a prototypical dashboard for visualizing laboratory findings from sets of patients. Methods and material Our implementation leveraged the object data type, supported by later versions of Arden Syntax, which turned out to be serviceable for representing complex input data from several patients. For our prototype, we designed a modularized architecture that separates the definition of technical operations, in particular the control of the patient context, from the actual clinical knowledge. Individual Medical Logic Modules (MLMs) for processing single patient attributes could then be developed according to well-tried Arden Syntax conventions. Results We successfully implemented a working dashboard prototype entirely in Arden Syntax. The architecture consists of a controller MLM that handles the patient context, a presenter MLM that generates the dashboard view, and a set of traditional MLMs containing the clinical decision logic. Our prototype could be integrated into the graphical user interface of the local PDMS. With realistic input data, we observed an average execution time of about 200ms for generating dashboard views, which we considered adequate performance. Conclusion Our study demonstrated the general feasibility of creating multi-patient CDS routines in Arden Syntax. We believe that our prototypical dashboard also suggests that such implementations can be relatively easy, and may simultaneously hold promise for sharing dashboards between institutions and reusing elementary components for additional dashboards.
Using Arden Syntax for the creation of a multi-patient surveillance dashboard
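The controller/presenter/decision-logic separation described above can be mimicked in plain Python as a structural sketch. The function names, parameters, and thresholds are invented; the actual system is implemented as Arden Syntax MLMs, not Python.

```python
# Sketch of the controller / presenter / decision-logic separation in plain Python
# (names and thresholds are invented; the real implementation is a set of Arden MLMs).

def lab_logic(patient):
    """Decision-logic layer: flag a single patient's laboratory values."""
    flags = {}
    if patient["creatinine"] > 1.2:
        flags["creatinine"] = "high"
    if patient["crp"] > 50:
        flags["crp"] = "high"
    return flags

def controller(patients):
    """Controller layer: iterate the patient context and call the logic per patient."""
    return {p["id"]: lab_logic(p) for p in patients}

def presenter(results):
    """Presenter layer: render a simple tabular multi-patient overview."""
    lines = ["patient  flags"]
    for pid, flags in results.items():
        rendered = ", ".join(f"{k}:{v}" for k, v in flags.items()) or "-"
        lines.append(f"{pid:<8} {rendered}")
    return "\n".join(lines)

ward = [
    {"id": "P001", "creatinine": 1.8, "crp": 12},
    {"id": "P002", "creatinine": 0.9, "crp": 140},
]
print(presenter(controller(ward)))
```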
S0933365715001396
Background Nutritional screening procedures followed by regular nutrition monitoring for oncological outpatients are not standard practice in many European hospital wards and outpatient settings. As a result, early signs of malnutrition are missed and nutritional treatment is initiated only when patients have already experienced severe weight loss. Objective We report on a novel clinical decision support system (CDSS) for the global assessment and nutritional triage of the nutritional condition of oncology outpatients. The system combines clinical and laboratory data collected in the clinical setting with patient-generated data from a smartphone application for monitoring the patients' nutritional status. Our objective is to assess the feasibility of a CDSS that combines the aforementioned data sources and to describe its integration into a hospital information system. Furthermore, we collected patients' opinions on the value of the system and whether they would regard it as a useful aid in coping with their condition. Materials and methods The system implements the Patient-Generated Subjective Global Assessment (PG-SGA) to monitor nutritional status in the outpatient setting. A smartphone application is used to collect patient-generated data through weekly mini-surveys concerning the patients' eating habits, weight, and overall well-being. Data are uploaded on completion of each mini-survey and stored on a secure server at the Medical University of Vienna (MUV). The data are then combined with relevant clinical information from the Vienna General Hospital (VGH) information system. The knowledge base for the CDSS is implemented in medical logic modules (MLMs) using Arden Syntax. A three-month pilot clinical trial was performed to test the feasibility of the system. Qualitative questionnaires were used to obtain the patients' opinions on the usability and personal value of the system during the four-week test period. Results We used the existing separation between the scientific and clinical data domains in the secured network environment (SNE) at the MUV and VGH to our advantage by importing, storing, and processing both patient-generated and routine data in the scientific data domain. To limit exposure to the SNE, patient-generated data stored outside the SNE were imported into the scientific domain once a day. The CDSS created for nutritional assessment and triage comprised ten MLMs, each including either a sub-assessment or the final results of the PG-SGA. Finally, an interface created for the hospital information system showed the results directly in clinical routine. In all, 22 patients completed the clinical study. The results of the questionnaires showed that 91% of the patients were generally happy with the usability of the system, 91% believed that the application was of additional value in detecting cancer-related malnutrition, and 82% found it helpful as a long-term monitoring tool. Discussion and conclusion Despite strict protection of the clinical data domain, a CDSS employing patient-generated data can be integrated into clinical routine. The CDSS discussed in this report combined the information entered into a smartphone application with clinical data in order to inform the physician of a patient's nutritional status and thus permit suitable and timely intervention. The initial results show that the smartphone application was well accepted by the patients, who considered it useful, but not many oncological outpatients were willing to participate in the clinical study because they did not possess an Android phone or lacked smartphone expertise. Furthermore, the results indicate that patient-generated data can be used to augment clinical data and calculate metrics such as the PG-SGA without excessive effort by using a secure intermediate location as the locus of data storage and processing.
Assessing the feasibility of a mobile health-supported clinical decision support system for nutritional triage in oncology outpatients using Arden Syntax
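The general idea of combining patient-generated survey answers with clinical data into a triage category can be sketched as a small scoring function. The point values, cut-offs, and feature names below are invented for illustration and do not reproduce the actual PG-SGA instrument or the study's MLMs.

```python
# Simplified illustration of combining patient-generated survey answers with
# clinical data into a nutritional triage category. Point values and cut-offs
# are invented and do NOT reproduce the actual PG-SGA scoring.

def nutrition_triage(weight_loss_pct_1m, eating_less, symptoms, activity_reduced,
                     bmi, c_reactive_protein):
    score = 0
    # Patient-generated entries from the weekly mini-survey.
    score += 3 if weight_loss_pct_1m >= 5 else (1 if weight_loss_pct_1m >= 2 else 0)
    score += 2 if eating_less else 0
    score += len(symptoms)                  # e.g. nausea, no appetite, taste changes
    score += 1 if activity_reduced else 0
    # Clinical data from the hospital information system.
    score += 2 if bmi < 18.5 else 0
    score += 1 if c_reactive_protein > 10 else 0

    if score >= 9:
        return score, "critical need for nutrition intervention"
    if score >= 4:
        return score, "dietitian referral recommended"
    return score, "no intervention, continue weekly monitoring"

print(nutrition_triage(weight_loss_pct_1m=6, eating_less=True,
                       symptoms=["nausea", "no appetite"],
                       activity_reduced=True, bmi=19.4, c_reactive_protein=22))
```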
S0933365715001402
Objective This work presents a computer-based approach to analyzing the two-dimensional vocal fold dynamics of endoscopic high-speed videos, and constitutes an extension and generalization of a previously proposed wavelet-based procedure. While most approaches aim at analyzing sustained phonation conditions, the proposed method allows a clinically adequate analysis of both dynamic and sustained phonation paradigms. Materials and methods The analysis procedure is based on a spatio-temporal visualization technique, the phonovibrogram, which facilitates the documentation of the visible laryngeal dynamics. From the phonovibrogram, a low-dimensional set of features is computed using a principal component analysis strategy that quantifies the type of vibration pattern, irregularity, lateral symmetry, and synchronicity as a function of time. Two different test bench data sets are used to validate the approach: (I) 150 healthy and pathologic subjects examined during sustained phonation; (II) 20 healthy and pathologic subjects who were examined twice, during sustained phonation and during a glissando from a low to a higher fundamental frequency. In order to assess the discriminative power of the extracted features, a support vector machine is trained to distinguish between physiologic and pathologic vibrations. The results for sustained phonation sequences are compared to the previous approach. Finally, the classification performance of the stationary analysis procedure is compared to the transient analysis of the glissando maneuver. Results For the first test bench, the proposed procedure outperformed the previous approach (proposed feature set: accuracy 91.3%, sensitivity 80%, specificity 97%; previous approach: accuracy 89.3%, sensitivity 76%, specificity 96%). Comparing the classification performance on the second test bench further corroborates that analyzing transient paradigms provides clear additional diagnostic value (glissando maneuver: accuracy 90%, sensitivity 100%, specificity 80%; sustained phonation: accuracy 75%, sensitivity 80%, specificity 70%). Conclusions The incorporation of parameters describing the temporal evolution of vocal fold vibration clearly improves the automatic identification of pathologic vibration patterns. Furthermore, incorporating a dynamic phonation paradigm provides additional valuable information about the underlying laryngeal dynamics that cannot be derived from sustained conditions. The proposed generalized approach provides better overall classification performance than the previous approach and hence constitutes a new, advantageous tool for improved clinical diagnosis of voice disorders.
A generalized procedure for analyzing sustained and dynamic vocal fold vibrations from laryngeal high-speed videos using phonovibrograms
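The feature-reduction and classification stage described above — principal component analysis on phonovibrogram-derived descriptors followed by a support vector machine — can be sketched with a scikit-learn pipeline. The data here are entirely synthetic stand-ins for the video-derived features, and the dimensionalities and kernel choice are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for vectorized phonovibrogram descriptors (one row per subject).
rng = np.random.default_rng(0)
n_subjects, n_features = 150, 400
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, n_subjects)          # 0 = physiologic, 1 = pathologic
X[y == 1, :20] += 1.5                       # inject a separable pattern for the demo

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```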