Dataset columns: FileName (string, length 17), Abstract (string, 163–6.01k characters), Title (string, 12–421 characters).
S0169260715002989
Objective The analysis of treatment effects in clinical trials usually focuses on efficacy and safety in separate descriptive statistical analyses. The Q-TWiST (Quality-adjusted Time Without Symptoms and Toxicity) method was proposed by Gelber in the 1990s to enable a statistical comparison between two groups, with a graphical representation, by incorporating benefit and risk into a single analysis. Although the method has been programmed in SAS, it is rarely used. The availability of the method in a free software environment such as R would greatly enhance its accessibility to researchers. The objective of this paper is to present a program for Q-TWiST analyses within the R software environment. Methods The qtwist function was developed in order to estimate and compare Q-TWiST for two groups. Two individual patient data files are required as input: one for visits and one for follow-up. Q-TWiST is obtained as a sum of the time spent in three health states: the period in toxicity (TOX), the period without relapse and toxicity (TWiST) and the period in relapse (REL), weighted by associated utility scores and restricted, for example, to median overall survival. The bootstrap method is used for testing statistical significance. Threshold analysis and gain functions allow a group comparison for different utility values. Results Input data are checked for consistency. Descriptive statistics and mean durations for each health state are provided, allowing statistical comparisons. Graphical results are presented in a PDF file. The use of the function is illustrated with data from a simulated data set and a randomized clinical trial. Conclusions qtwist is an easy-to-use R function, allowing a quality-adjusted survival analysis with the Q-TWiST method.
Analysis of survival adjusted for quality of life using the Q-TWiST function: Interface in R
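For readers who want to experiment with the idea outside of R, the following is a minimal Python sketch of the Q-TWiST computation and its bootstrap comparison. It is not the qtwist function itself; the utility weights, gamma-distributed state durations and group sizes are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def qtwist_score(tox, twist, rel, u_tox=0.5, u_rel=0.5):
    """Quality-adjusted survival: TWiST weighted 1.0, TOX and REL down-weighted."""
    return u_tox * tox + twist + u_rel * rel

def mean_qtwist(durations, u_tox, u_rel):
    # durations: array of shape (n_patients, 3) -> columns TOX, TWiST, REL,
    # each already restricted to the chosen follow-up horizon.
    return qtwist_score(durations[:, 0], durations[:, 1], durations[:, 2],
                        u_tox, u_rel).mean()

def bootstrap_diff(group_a, group_b, u_tox=0.5, u_rel=0.5, n_boot=2000):
    """Bootstrap the between-group difference in mean Q-TWiST."""
    obs = mean_qtwist(group_a, u_tox, u_rel) - mean_qtwist(group_b, u_tox, u_rel)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        ra = group_a[rng.integers(0, len(group_a), len(group_a))]
        rb = group_b[rng.integers(0, len(group_b), len(group_b))]
        diffs[b] = mean_qtwist(ra, u_tox, u_rel) - mean_qtwist(rb, u_tox, u_rel)
    ci = np.percentile(diffs, [2.5, 97.5])
    return obs, ci

# Toy data: columns are months spent in TOX, TWiST and REL per patient.
a = rng.gamma(shape=(2, 8, 2), scale=1.0, size=(60, 3))
b = rng.gamma(shape=(2, 6, 3), scale=1.0, size=(60, 3))
diff, ci = bootstrap_diff(a, b)
print(f"Q-TWiST difference: {diff:.2f} months, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

A threshold analysis, as described in the abstract, would simply repeat this comparison over a grid of utility values.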
S0169260715002990
Background and objective In clinical examinations and brain–computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. Methods We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. Results A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. Conclusions We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller.
Robust estimation of event-related potentials via particle filter
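The following is a minimal bootstrap particle filter in Python illustrating the general estimation idea on a synthetic P300-like waveform; the random-walk trend model, noise levels and 400-particle setting are illustrative stand-ins, not the authors' EEG model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "ERP-like" trend buried in noise (stand-in for single-trial EEG).
n = 300
t = np.arange(n)
true_erp = 5.0 * np.exp(-0.5 * ((t - 150) / 25.0) ** 2)   # P300-like bump
obs = true_erp + rng.normal(0, 2.0, n)                     # noisy observation

n_particles = 400
particles = rng.normal(0, 1.0, n_particles)   # state = current trend value
weights = np.full(n_particles, 1.0 / n_particles)
sigma_process, sigma_obs = 0.3, 2.0
estimate = np.empty(n)

for k in range(n):
    # Predict: first-order trend (random walk) model.
    particles += rng.normal(0, sigma_process, n_particles)
    # Update: weight particles by the likelihood of the observation.
    weights *= np.exp(-0.5 * ((obs[k] - particles) / sigma_obs) ** 2)
    weights += 1e-300
    weights /= weights.sum()
    estimate[k] = np.dot(weights, particles)
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

mse = np.mean((estimate - true_erp) ** 2)
print(f"MSE of particle-filter estimate: {mse:.3f}")
```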
S0169260715003211
Background Diabetes mellitus is associated with an increased risk of liver cancer, and these two diseases are among the most common and important causes of morbidity and mortality in Taiwan. Purpose To use data mining techniques to develop a model for predicting the development of liver cancer within 6 years of diagnosis with type II diabetes. Methods Data were obtained from the National Health Insurance Research Database (NHIRD) of Taiwan, which covers approximately 22 million people. In this study, we selected patients who were newly diagnosed with type II diabetes during the 2000–2003 period, with no prior cancer diagnosis. We then used encrypted personal IDs to perform data linkage with the cancer registry database to identify whether these patients were diagnosed with liver cancer. Finally, we identified 2060 cases and assigned them to a case group (patients diagnosed with liver cancer after diabetes) and a control group (patients with diabetes but no liver cancer). Risk factors were identified from a literature review and physicians’ suggestions; a chi-square test was then conducted on each independent variable (potential risk factor) to compare patients with liver cancer and those without, and the variables found to be significant were selected as predictors. We subsequently performed data training and testing to construct artificial neural network (ANN) and logistic regression (LR) prediction models. The dataset was randomly divided into 2 groups: a training group and a test group. The training group consisted of 1442 cases (70% of the entire dataset), and the prediction model was developed on the basis of the training group. The remaining 30% (618 cases) were assigned to the test group for model validation. Results The following 10 variables were used to develop the ANN and LR models: sex, age, alcoholic cirrhosis, nonalcoholic cirrhosis, alcoholic hepatitis, viral hepatitis, other types of chronic hepatitis, alcoholic fatty liver disease, other types of fatty liver disease, and hyperlipidemia. The performance of the ANN was superior to that of LR, according to the sensitivity (0.757), specificity (0.755), and the area under the receiver operating characteristic curve (0.873). After developing the optimal prediction model, we used it to construct a web-based application system for liver cancer prediction, which can provide support to physicians during consultations with diabetes patients. Conclusion In the original dataset (n =2060), 33% of diabetes patients were diagnosed with liver cancer (n =515). After using 70% of the original data to train the model and the other 30% for testing, the sensitivity and specificity of our model were 0.757 and 0.755, respectively; this means that 75.7% of diabetes patients who will receive a future liver cancer diagnosis can be predicted correctly, and 75.5% of those who will not be diagnosed with liver cancer can be predicted correctly. These results reveal that the model can serve as an effective predictor of liver cancer in diabetes patients. The consulted physicians also agreed that the model can assist them in advising patients at risk of liver cancer and may help decrease the future costs incurred for cancer treatment.
Development of a web-based liver cancer prediction model for type II diabetes patients by using an artificial neural network
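A hedged sketch of the modeling protocol using scikit-learn on synthetic data: a 70/30 split, a logistic regression baseline and a small neural network, reporting sensitivity, specificity and AUC. The ten synthetic predictors merely stand in for the risk factors listed in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic stand-in for the 10 risk factors described in the abstract.
X, y = make_classification(n_samples=2060, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    print(f"{name}: sensitivity={sens:.3f} specificity={spec:.3f} "
          f"AUC={roc_auc_score(y_te, prob):.3f}")
```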
S0169260715003223
Background and objective Cosmetic outcome of breast cancer conservative treatment (BCCT) remains without a standard evaluation method. Subjective methods, in spite of their low reproducibility, continue to be the most frequently used. Objective methods, although more reproducible, seem unable to translate all the subtleties involved in cosmetic outcome. The breast cancer conservative treatment cosmetic results (BCCT.core) software was developed in 2007 to try to overcome these pitfalls. The software is a semi-automatic objective tool that evaluates asymmetry, color differences and scar visibility using patients’ digital pictures. The purpose of this work is to review the use of the BCCT.core software since its availability in 2007 and to put forward future developments. Methods All the online requests for BCCT.core use were registered from June 2007 to December 2014. For each request the department, city and country as well as user intention (clinical use/research or both) were questioned. A literature search was performed in Medline, Google Scholar and ISI Web of Knowledge for all publications using and citing “BCCT.core”. Results During this period 102 centers have requested the software, essentially for clinical use. The BCCT.core software was used in 19 full published papers and in 29 conference abstracts. Conclusions The BCCT.core is a user-friendly semi-automatic method for the objective evaluation of BCCT. The number of online requests and publications has been steadily increasing, turning this computer program into the most frequently used tool for the objective cosmetic evaluation of BCCT.
The breast cancer conservative treatment. Cosmetic results – BCCT.core – Software for objective assessment of esthetic outcome in breast cancer conservative treatment: A narrative review
S0169260715003235
Brain ageing is accompanied by changes in white matter (WM) connectivity and in grey matter (GM) concentration. Neurodegenerative disease is associated with accelerated brain ageing, which in turn is associated with prospective cognitive decline and disease severity. Accurate detection of accelerated ageing based on brain network analysis has great potential for early interventions designed to hinder atypical brain changes. To capture brain ageing, we propose a novel computational approach for modeling the brain age of 112 normal older subjects (aged 50–79 years) by connectivity analyses of brain networks. Our proposed method applies principal component analysis (PCA) to reduce the redundancy in network topological parameters. A back-propagation artificial neural network (BPANN), improved by a hybrid genetic algorithm (GA) and the Levenberg–Marquardt (LM) algorithm, is established to model the relation between the principal components (PCs) and brain age. The predicted brain age is strongly correlated with chronological age (r =0.8). The model has a mean absolute error (MAE) of 4.29 years. Therefore, we believe the method can provide a possible way to quantitatively describe the typical and atypical network organization of the human brain and serve as a biomarker for presymptomatic detection of neurodegenerative diseases in the future.
Predicting healthy older adult's brain age based on structural connectivity networks using artificial neural networks
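A rough Python analogue of this pipeline, assuming synthetic network-topology features: PCA for redundancy reduction followed by a neural-network regressor. The paper's BPANN tuned by a hybrid GA/Levenberg–Marquardt scheme is replaced here by scikit-learn's MLPRegressor for illustration only, and the MAE and correlation are computed under cross-validation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic stand-ins: 112 subjects, 30 correlated network topology measures.
n_subjects, n_features = 112, 30
age = rng.uniform(50, 79, n_subjects)
latent = np.outer(age - age.mean(), rng.normal(0, 0.05, n_features))
X = latent + rng.normal(0, 1.0, (n_subjects, n_features))

# PCA removes redundancy among topological parameters; an ANN maps PCs to age.
model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
pred = cross_val_predict(model, X, age, cv=5)
mae = np.mean(np.abs(pred - age))
r = np.corrcoef(pred, age)[0, 1]
print(f"MAE = {mae:.2f} years, r = {r:.2f}")
```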
S0169260715003247
The assessment of microcirculation spatial heterogeneity on the hand skin is the main objective of this work. Near-infrared spectroscopy based 2D imaging is a non-invasive technique for the assessment of tissue oxygenation. The haemoglobin oxygen saturation images were acquired by a dedicated camera (Kent Imaging) during baseline, ischaemia (brachial artery cuff occlusion) and reperfusion. Acquired images underwent a preliminary restoration process aimed at removing degradations occurring during signal capturing. Then, wavelet transform based multiscale analysis was applied to identify edges by detecting local maxima and minima across successive scales. Segmentation of test areas during different conditions was obtained by thresholding-based region growing approach. The method identifies the differences in microcirculatory control of blood flow in different regions of the hand skin. The obtained results demonstrate the potential use of NIRS images for the clinical evaluation of skin disease and microcirculatory dysfunction.
Near infrared image processing to quantitate and visualize oxygen saturation during vascular occlusion
S0169260715003260
Background and objective A markerless low cost prototype has been developed for the determination of some spatio-temporal parameters of human gait: step length, step width and cadence have been considered. Only a smartphone and a high-definition webcam have been used. Methods The signals obtained by the accelerometer embedded in the smartphone are used to recognize the heel strike events, while the feet positions are calculated through image processing of the webcam stream. Step length and width are computed during gait trials on a treadmill at various speeds (3, 4 and 5km/h). Results Six subjects have been tested for a total of 504 steps. Results were compared with those obtained by a stereo-photogrammetric system (Elite, BTS Engineering). The maximum average errors were 3.7cm (5.36%) for the right step length and 1.63cm (15.16%) for the right step width at 5km/h. The maximum average error for step duration was 0.02s (1.69%) at 5km/h for the right steps. Conclusion The system is characterized by a very high level of automation that allows its use by non-expert users in non-structured environments. A low cost system able to automatically provide a reliable and repeatable evaluation of some gait events and parameters during treadmill walking is also relevant from a clinical point of view, because it allows the analysis of hundreds of steps and consequently an analysis of their variability.
A markerless system based on smartphones and webcam for the measure of step length, width and duration on treadmill
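One building block of such a system, heel-strike detection from the smartphone accelerometer, can be sketched as peak picking on the vertical acceleration trace. The sampling rate, synthetic signal and thresholds below are assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                      # assumed smartphone accelerometer rate (Hz)
t = np.arange(0, 20, 1 / fs)
step_freq = 1.8                 # roughly the cadence at 4 km/h (illustrative)
rng = np.random.default_rng(3)
# Synthetic vertical acceleration: sharp bursts at each heel strike plus noise.
acc = np.abs(np.sin(np.pi * step_freq * t)) ** 8 + rng.normal(0, 0.05, t.size)

# Heel strikes = prominent peaks separated by at least 0.4 s.
peaks, _ = find_peaks(acc, height=0.5, distance=int(0.4 * fs))
strike_times = t[peaks]
step_durations = np.diff(strike_times)
cadence = 60.0 / step_durations.mean()

print(f"{peaks.size} heel strikes detected, "
      f"mean step duration {step_durations.mean():.2f} s, "
      f"cadence {cadence:.1f} steps/min")
```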
S0169260715003272
Equivalence testing is recommended as a better alternative to the traditional difference-based methods for demonstrating the comparability of two or more treatment effects. Although equivalence tests for two groups are widely discussed, the natural extensions for assessing equivalence between several groups have not been well examined. This article provides a detailed and schematic comparison of the ANOVA F and the studentized range tests for evaluating the comparability of several standardized effects. Power and sample size appraisals of the two grossly distinct approaches are conducted in terms of a constraint on the range of the standardized means when the standard deviation of the standardized means is fixed. Although neither method is uniformly more powerful, the studentized range test has a clear advantage in the sample size required to achieve a given power when the underlying effect configurations are close to the a priori minimum difference for determining equivalence. For actual application of equivalence tests and advance planning of equivalence studies, both SAS and R computer codes are available as supplementary files to implement the calculations of critical values, p-values, power levels, and sample sizes.
A comparative appraisal of two equivalence tests for multiple standardized effects
S0169260715003284
The cardiovascular and respiratory autonomic nervous regulation has been studied mainly by hemodynamic responses during different physical stressors. In this study, the dynamics of the autonomic response to an orthostatic challenge was investigated by hemodynamic variables and by diverse linear and nonlinear indices calculated from time series of beat-to-beat intervals (BBI), respiratory cycle duration (RESP), systolic (SYS) and diastolic (DIA) blood pressure. This study included 16 young female patients (SYN) with vasovagal syncope and 12 age-matched female controls (CON). The subjects were enrolled in a head-up tilt (HUT) test, breathing normally, including 5min of baseline (BL, supine position) and 18min of 70° orthostatic phase (OP). To increase the time resolution of the analysis, the time series were segmented in five-minute overlapping windows with a shift of 1min. Hemodynamic parameters did not show any statistical differences between SYN and CON. Time domain linear analysis revealed increased respiratory frequency and increased blood pressure variability (BPV) in patients during OP, indicating increased sympathetic activity and vagal withdrawal. Frequency domain analysis confirmed a predominance of sympathetic tone by steadily increased values of low over high frequency power in BBI and of low frequency power in SYS and DIA in patients during OP. The nonlinear analysis by symbolic dynamics seemed to be highly suitable for differentiation of SYN and CON in the early beginning of OP, i.e., 5min after tilt-up. In particular, the index SYS_plvar3 showed fewer patterns of low variability in patients, reflecting a steady increase in both BPV and sympathetic activity. The proposed dynamical analysis could lead to a better understanding of the underlying temporal mechanisms in healthy subjects and patients under orthostatic stress.
Orthostatic stress causes immediately increased blood pressure variability in women with vasovagal syncope
S0169260715003296
A major difficulty with chest radiographic analysis is the invisibility of abnormalities caused by the superimposition of normal anatomical structures, such as ribs, over the main tissue to be examined. Suppressing the ribs with no loss of information about the original tissue would therefore be helpful during manual identification or computer-aided detection of nodules on a chest radiographic image. In this study, we introduce a two-step algorithm for eliminating rib shadows in chest radiographic images. The algorithm first delineates the ribs using a novel hybrid self-template approach and then suppresses these delineated ribs using an unsupervised regression model that takes into account the change in the proximal thickness (depth) of bone along the vertical axis. The performance of the system is evaluated using a benchmark set of real chest radiographic images. The experimental results show that the proposed method for rib delineation provides higher accuracy than existing methods. The knowledge of rib delineation can remarkably improve the nodule detection performance of a current computer-aided diagnosis (CAD) system. It is also shown that the rib suppression algorithm can increase nodule visibility by eliminating rib shadows while mostly preserving the nodule intensity.
Eliminating rib shadows in chest radiographic images providing diagnostic assistance
S0169260715003302
Background and objective Progress in biomedical engineering has improved the hardware available for diagnosis and treatment of cardiac arrhythmias. But although huge amounts of intracardiac electrograms (EGMs) can be acquired during electrophysiological examinations, there is still a lack of software aiding diagnosis. The development of novel algorithms for the automated analysis of EGMs has proven difficult, due to the highly interdisciplinary nature of this task and hampered data access in clinical systems. Thus we developed a software platform, which allows rapid implementation of new algorithms, verification of their functionality and suitable visualization for discussion in the clinical environment. Methods A software for visualization was developed in Qt5 and C++ utilizing the class library of VTK. The algorithms for signal analysis were implemented in MATLAB. Clinical data for analysis was exported from electroanatomical mapping systems. Results The visualization software KaPAVIE (Karlsruhe Platform for Analysis and Visualization of Intracardiac Electrograms) was implemented and tested on several clinical datasets. Both common and novel algorithms were implemented which address important clinical questions in diagnosis of different arrhythmias. It proved useful in discussions with clinicians due to its interactive and user-friendly design. Time after export from the clinical mapping system to visualization is below 5min. Conclusion KaPAVIE (see http://www.ibt.kit.edu/hardundsoftware.php) is a powerful platform for the development of novel algorithms in the clinical environment. Simultaneous and interactive visualization of measured EGM data and the results of analysis will aid diagnosis and help understanding the underlying mechanisms of complex arrhythmias like atrial fibrillation.
Analysis and visualization of intracardiac electrograms in diagnosis and research: Concept and application of KaPAVIE
S0169260715003314
An electrocardiogram (ECG) measures the electric activity of the heart and has been widely used for detecting heart diseases due to its simplicity and non-invasive nature. By analyzing the electrical signal of each heartbeat, i.e., the combination of action impulse waveforms produced by different specialized cardiac tissues found in the heart, it is possible to detect some of its abnormalities. In the last decades, several works were developed to produce automatic ECG-based heartbeat classification methods. In this work, we survey the current state-of-the-art methods of ECG-based automated abnormalities heartbeat classification by presenting the ECG signal preprocessing, the heartbeat segmentation techniques, the feature description methods and the learning algorithms used. In addition, we describe some of the databases used for evaluation of methods indicated by a well-known standard developed by the Association for the Advancement of Medical Instrumentation (AAMI) and described in ANSI/AAMI EC57:1998/(R)2008 (ANSI/AAMI, 2008). Finally, we discuss limitations and drawbacks of the methods in the literature presenting concluding remarks and future challenges, and also we propose an evaluation process workflow to guide authors in future works.
ECG-based heartbeat classification for arrhythmia detection: A survey
S0169260715003326
This article was motivated by doctors’ demand for technical support in research on pathologies of the gastrointestinal tract [10], based on machine vision tools. The proposed solution should be a less expensive alternative to already existing RF (radio frequency) methods. The objective of the whole experiment was to evaluate the amount of animal motion depending on the degree of pathology (gastric ulcer). In the theoretical part of the article, several methods of animal trajectory tracking are presented: two differential methods based on background subtraction, thresholding methods based on global and local thresholds, and, finally, color matching with a chosen template containing the searched spectrum of colors. The methods were tested offline on five video samples. Each sample contained a situation with a moving guinea pig locked in a cage under various lighting conditions.
Machine vision application in animal trajectory tracking
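A compact Python sketch of the background-subtraction idea on a synthetic frame sequence: estimate a static background as the per-pixel median, threshold the difference, and track the foreground centroid. The frame sizes, thresholds and simulated animal blob are illustrative, not the authors' video data.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_frame(pos, shape=(120, 160)):
    """Synthetic grayscale frame: dark cage background, bright 'animal' blob."""
    frame = rng.normal(40, 3, shape)
    y, x = np.ogrid[:shape[0], :shape[1]]
    frame[(y - pos[0]) ** 2 + (x - pos[1]) ** 2 < 10 ** 2] = 200
    return frame

# The animal drifts across the cage over 50 frames.
positions = [(60, 20 + 2 * k) for k in range(50)]
frames = np.stack([make_frame(p) for p in positions])

background = np.median(frames, axis=0)          # static-background estimate
trajectory = []
for frame in frames:
    mask = np.abs(frame - background) > 50      # background subtraction + threshold
    ys, xs = np.nonzero(mask)
    if ys.size:                                  # centroid of the foreground blob
        trajectory.append((ys.mean(), xs.mean()))

path = np.array(trajectory)
distance = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
print(f"Tracked {len(path)} frames, total path length of about {distance:.1f} px")
```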
S0169260715003338
Background and objective Hypokinetic dysarthria (HD) is a frequent speech disorder associated with idiopathic Parkinson's disease (PD). It affects all dimensions of speech production. One of the most common features of HD is dysprosody, which is characterized by alterations of rhythm and speech rate, flat speech melody, and impairment of speech intensity control. Dysprosody has a detrimental impact on speech naturalness and intelligibility. Methods This paper deals with quantitative prosodic analysis of neutral, stress-modified and rhymed speech in patients with PD. The analysis of prosody is based on quantification of monopitch, monoloudness, and speech rate abnormalities. The experimental dataset consists of 98 patients with PD and 51 healthy speakers. For the purpose of HD identification, a sequential floating feature selection algorithm and a random forest classifier are used. In this paper, we also introduce a concept of permutation test applied in the field of acoustic analysis of dysarthric speech. Results Prosodic features obtained from the stress-modified reading task provided higher classification accuracies compared to the ones extracted from the reading task with neutral emotion, demonstrating the importance of stress in speech prosody. Features calculated from the poem recitation task outperformed both reading tasks in the case of gender-undifferentiated analysis, showing that rhythmical demands can in general lead to more precise identification of HD. Additionally, some gender-related patterns of dysprosody have been observed. Conclusions This paper confirms reduced variation of fundamental frequency in PD patients with HD. Interestingly, increased variability of speech intensity compared to healthy speakers has been detected. Regarding speech rate disturbances, our results do not reveal any particular pattern. We conclude that further development of prosodic features quantifying the relationship between monopitch, monoloudness and speech rate disruptions in HD can have great potential in future PD analysis.
Prosodic analysis of neutral, stress-modified and rhymed speech in patients with Parkinson's disease
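The permutation-test concept mentioned in the Methods can be illustrated with scikit-learn's permutation_test_score, here applied to a random-forest classifier on synthetic "prosodic" features; the data, feature count and class balance are placeholders for the real dataset and feature selection pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, permutation_test_score

# Synthetic stand-in for prosodic features (monopitch, monoloudness, rate, ...),
# with a class imbalance similar to 98 PD patients vs. 51 healthy speakers.
X, y = make_classification(n_samples=149, n_features=20, n_informative=5,
                           weights=[0.66, 0.34], random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Permutation test: refit on label-shuffled data to estimate the chance level.
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, cv=cv, n_permutations=200, scoring="accuracy", random_state=0)

print(f"Accuracy = {score:.3f}, chance level = {perm_scores.mean():.3f}, "
      f"permutation p-value = {p_value:.3f}")
```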
S0169260715300055
Background and objective In oral and maxillofacial surgery, conventional radiographic cephalometry is one of the standard auxiliary tools for diagnosis and surgical planning. While contemporary computer-assisted cephalometric systems and methodologies support cephalometric analysis, they tend to be neither practical nor intuitive for practitioners. This is particularly the case for 3D methods, since the associated landmarking process is difficult and time consuming. In addition to this, there are no 3D cephalometry norms or standards defined; therefore new landmark selection methods are required which will help facilitate their establishment. This paper presents and evaluates a novel haptic-enabled landmarking approach to overcome some of the difficulties and disadvantages of the current landmarking processes used in 2D and 3D cephalometry. Method In order to evaluate this new system's feasibility and performance, 21 dental surgeons (comprising 7 Novices, 7 Semi-experts and 7 Experts) performed a range of case studies using haptic-enabled 2D, 2½D and 3D digital cephalometric analyses. Results The results compared the 2D, 2½D and 3D cephalometric values, errors and standard deviations for each case study and associated group of participants, and revealed that 3D cephalometry significantly reduced landmarking errors and variability compared to 2D methods. Conclusions Through enhancing the process by providing a sense of touch, the haptic-enabled 3D digital cephalometric approach was found to be feasible and more intuitive than its counterparts, as well as effective at reducing errors, the variability of the measurements taken and the associated task completion times.
The evaluation of a novel haptic-enabled virtual reality approach for computer-aided cephalometry
S0169260715300067
To determine initial velocities of enzyme-catalyzed reactions without theoretical errors, it is necessary to consider the use of the integrated Michaelis–Menten equation. When the reaction product is an inhibitor, this approach is particularly important. Nevertheless, kinetic studies usually involve the evaluation of other inhibitors beyond the reaction product. The occurrence of these situations emphasizes the importance of extending the integrated Michaelis–Menten equation to the simultaneous presence of more than one inhibitor, because the reaction product is always present. This methodology is illustrated with the reaction catalyzed by alkaline phosphatase inhibited by phosphate (reaction product, inhibitor 1) and urea (inhibitor 2). The approach is explained in a step-by-step manner using an Excel spreadsheet (available as a template in the Appendix). Curve fitting by nonlinear regression was performed with the Solver add-in (Microsoft Office Excel). Discrimination of the kinetic models was carried out based on the Akaike information criterion. This work presents a methodology that can be used to develop an automated process to discriminate in real time the inhibition type and kinetic constants as data (product vs. time) are acquired by the spectrophotometer.
Enzyme inhibition studies by integrated Michaelis–Menten equation considering simultaneous presence of two inhibitors when one of them is a reaction product
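A minimal Python counterpart of the spreadsheet workflow: fit the closed-form (Lambert-W) solution of the integrated Michaelis–Menten equation without inhibition to a simulated progress curve, and compute an AIC value for model discrimination. Extending it to product plus a second inhibitor would modify the apparent Km, as in the paper; all parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import lambertw

def product_vs_time(t, vmax, km, s0):
    """Closed-form integrated Michaelis-Menten (no inhibition) via Lambert W:
    S(t) = Km * W((S0/Km) * exp((S0 - Vmax*t)/Km)),  P(t) = S0 - S(t)."""
    s = km * np.real(lambertw((s0 / km) * np.exp((s0 - vmax * t) / km)))
    return s0 - s

# Simulated progress curve (illustrative parameter values, small noise).
rng = np.random.default_rng(5)
t = np.linspace(0, 30, 40)
p_obs = product_vs_time(t, vmax=0.8, km=2.0, s0=10.0) + rng.normal(0, 0.1, t.size)

popt, _ = curve_fit(product_vs_time, t, p_obs, p0=[1.0, 1.0, 9.0],
                    bounds=([0.1, 0.5, 5.0], [10.0, 20.0, 20.0]))
resid = p_obs - product_vs_time(t, *popt)
n, k = t.size, len(popt)
aic = n * np.log(np.sum(resid ** 2) / n) + 2 * k   # AIC up to an additive constant

print(f"Vmax={popt[0]:.2f}, Km={popt[1]:.2f}, S0={popt[2]:.2f}, AIC={aic:.1f}")
```

Comparing AIC values across candidate inhibition models fitted this way mirrors the model-discrimination step described in the abstract.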
S0169260715300110
Background and objective The automatic classification of breast imaging lesions is currently an unsolved problem. This paper describes an innovative representation learning framework for breast cancer diagnosis in mammography that integrates deep learning techniques to automatically learn discriminative features, avoiding the design of specific hand-crafted image-based feature detectors. Methods A new biopsy-proven benchmarking dataset was built from 344 breast cancer patients’ cases containing a total of 736 film mammography (mediolateral oblique and craniocaudal) views, representative of manually segmented lesions associated with masses: 426 benign lesions and 310 malignant lesions. The developed method comprises two main stages: (i) preprocessing to enhance image details and (ii) supervised training for learning both the features and the breast imaging lesions classifier. In contrast to previous works, we adopt a hybrid approach where convolutional neural networks are used to learn the representation in a supervised way instead of designing particular descriptors to explain the content of mammography images. Results Experimental results using the developed benchmarking breast cancer dataset demonstrated that our method exhibits significantly improved performance when compared to state-of-the-art image descriptors, such as histogram of oriented gradients (HOG) and histogram of the gradient divergence (HGD), increasing the performance from 0.787 to 0.822 in terms of the area under the ROC curve (AUC). Interestingly, this model also outperforms a set of hand-crafted features that take advantage of additional information from segmentation by the radiologist. Finally, the combination of both representations, learned and hand-crafted, resulted in the best descriptor for mass lesion classification, obtaining 0.826 in the AUC score. Conclusions A novel deep learning based framework to automatically address classification of breast mass lesions in mammography was developed.
Representation learning for mammography mass lesion classification with convolutional neural networks
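A toy PyTorch sketch of the supervised representation-learning idea for two-class mass classification. The tiny architecture, the random tensors standing in for preprocessed mammography patches, and the training loop are illustrative assumptions, not the authors' network or data.

```python
import torch
import torch.nn as nn

class MassClassifier(nn.Module):
    """Tiny CNN: convolutional feature learning followed by a 2-class head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        return self.classifier(self.features(x))

# Random 64x64 patches stand in for preprocessed benign/malignant lesion crops.
x = torch.randn(8, 1, 64, 64)
y = torch.randint(0, 2, (8,))

model = MassClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(5):               # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final toy loss: {loss.item():.3f}")
```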
S0169260715300146
The development of adequate mathematical models for blood glucose dynamics may improve early diagnosis and control of diabetes mellitus (DM). We have developed a stochastic nonlinear second order differential equation to describe the response of blood glucose concentration to food intake using continuous glucose monitoring (CGM) data. A variational Bayesian learning scheme was applied to define the number and values of the system's parameters by iterative optimisation of free energy. The model has the minimal order and number of parameters to successfully describe blood glucose dynamics in people with and without DM. The model accounts for the nonlinearity and stochasticity of the underlying glucose–insulin dynamic process. Being data-driven, it takes full advantage of available CGM data and, at the same time, reflects the intrinsic characteristics of the glucose–insulin system without detailed knowledge of the physiological mechanisms. We have shown that the dynamics of some postprandial blood glucose excursions can be described by a reduced (linear) model, previously seen in the literature. A comprehensive analysis demonstrates that deterministic system parameters belong to different ranges for diabetes and controls. Implications for clinical practice are discussed. This is the first study introducing a continuous data-driven nonlinear stochastic model capable of describing both DM and non-DM profiles.
A data driven nonlinear stochastic model for blood glucose dynamics
S0169260715300158
This paper presents a new heuristic algorithm for reduct selection based on a credible index in rough set theory (RST) applications. The algorithm is efficient and effective in selecting decision rules, particularly when the problem to be solved is large in scale. It is capable of deriving rules with multiple outcomes and identifying the most significant features simultaneously, which is unique and useful in solving predictive medical problems. The end result of the proposed approach is a set of decision rules that illustrates the causes of solitary pulmonary nodule and the results of the long-term treatment.
Rough set based rule induction in decision making using credible classification and preference from medical application perspective
S0169260715300213
In this paper, the unsteady pulsatile magneto-hydrodynamic blood flows through porous arteries under the influence of an externally imposed periodic body acceleration and a periodic pressure gradient are numerically simulated. Blood is treated as a third-grade non-Newtonian fluid. Besides the numerical solution, for a small Womersley parameter (such as blood flow through arterioles and capillaries), the analytical perturbation method is used to solve the nonlinear governing equations. Consequently, analytical expressions for the velocity profile, wall shear stress, and blood flow rate are obtained. Excellent agreement between the analytical and numerical predictions is evident. Also, the effects of body acceleration, magnetic field, the third-grade non-Newtonian parameter, pressure gradient, and porosity on the flow behavior are examined. Some important conclusions are that, when the Womersley parameter is low, viscous forces tend to dominate the flow, velocity profiles are parabolic in shape, and the center-line velocity oscillates in phase with the driving pressure gradient. In addition, by increasing the pressure gradient, the mean value of the velocity profile increases and the amplitude of the velocity remains constant. Also, when the non-Newtonian effect increases, the amplitude of the velocity profile.
Pulsatile magneto-hydrodynamic blood flows through porous blood vessels using a third grade non-Newtonian fluids model
S0169260715300262
Background and objective Non-compartmental analysis (NCA) calculates pharmacokinetic (PK) metrics related to the systemic exposure to a drug following administration, e.g. area under the concentration–time curve and peak concentration. We developed a new package in R, called ncappc, to perform (i) NCA and (ii) simulation-based posterior predictive checks (ppc) for a population PK (PopPK) model using NCA metrics. Methods The nca feature of the ncappc package estimates the NCA metrics from the observed concentration–time data. The ppc feature of ncappc estimates the NCA metrics from multiple sets of simulated concentration–time data and compares them with those estimated from the observed data. The diagnostic analysis is performed at the population as well as the individual level. The distribution of the simulated population means of each NCA metric is compared with the corresponding observed population mean. The individual level comparison is performed based on the deviation of the mean of any NCA metric based on simulations for an individual from the corresponding NCA metric obtained from the observed data. The ncappc package also reports the normalized prediction distribution error (NPDE) of the simulated NCA metrics for each individual and their distribution within a population. Results The ncappc package produces two default outputs depending on the type of analysis performed, i.e., NCA and PopPK diagnosis. The PopPK diagnosis feature of ncappc produces 8 sets of graphical outputs to assess the ability of a population model to simulate the concentration–time profile of a drug and thereby evaluate model adequacy. In addition, tabular outputs are generated showing the values of the NCA metrics estimated from the observed and the simulated data, along with the deviation, NPDE, regression parameters used to estimate the elimination rate constant and the related population statistics. Conclusions The ncappc package is a versatile and flexible tool-set written in R that successfully estimates NCA metrics from concentration–time data and produces a comprehensive set of graphical and tabular output to summarize the diagnostic results including the model specific outliers. The output is easy to interpret and to use in evaluation of a population PK model. ncappc is freely available on CRAN (http://cran.r-project.org/web/packages/ncappc/index.html/) and GitHub (https://github.com/cacha0227/ncappc/).
A diagnostic tool for population models using non-compartmental analysis: The ncappc package for R
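Independently of the ncappc API, the core NCA metrics it reports can be sketched in a few lines of Python: Cmax, Tmax, AUC by the trapezoidal rule, a terminal elimination rate constant from a log-linear fit of the last points, and extrapolation to infinity. The concentration–time profile below is invented for illustration.

```python
import numpy as np

# Illustrative single-subject concentration-time profile after an oral dose.
time = np.array([0.0, 0.5, 1, 2, 4, 6, 8, 12, 24])               # hours
conc = np.array([0.0, 1.8, 3.2, 4.1, 3.0, 2.1, 1.5, 0.8, 0.2])   # mg/L

cmax = conc.max()
tmax = time[conc.argmax()]
auc_last = np.trapz(conc, time)                 # AUC(0-tlast), linear trapezoidal

# Terminal elimination rate constant: log-linear regression on the last points.
tail = slice(-4, None)
slope, intercept = np.polyfit(time[tail], np.log(conc[tail]), 1)
lambda_z = -slope
t_half = np.log(2) / lambda_z
auc_inf = auc_last + conc[-1] / lambda_z        # extrapolate AUC to infinity

print(f"Cmax={cmax:.2f} mg/L at Tmax={tmax:g} h, AUClast={auc_last:.1f}, "
      f"AUCinf={auc_inf:.1f} mg*h/L, t1/2={t_half:.1f} h")
```

Repeating such calculations over many simulated datasets and comparing the resulting distributions with the observed metrics is the essence of the posterior predictive check described in the abstract.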
S0169260715300298
This work presents a systematic review of techniques for the 3D automatic detection of pulmonary nodules in computerized-tomography (CT) images. Its main goals are to analyze the latest technology being used for the development of computational diagnostic tools to assist in the acquisition, storage and, mainly, processing and analysis of the biomedical data. Also, this work identifies the progress made, so far, evaluates the challenges to be overcome and provides an analysis of future prospects. As far as the authors know, this is the first time that a review is devoted exclusively to automated 3D techniques for the detection of pulmonary nodules from lung CT images, which makes this work of noteworthy value. The research covered the published works in the Web of Science, PubMed, Science Direct and IEEEXplore up to December 2014. Each work found that referred to automated 3D segmentation of the lungs was individually analyzed to identify its objective, methodology and results. Based on the analysis of the selected works, several studies were seen to be useful for the construction of medical diagnostic aid tools. However, there are certain aspects that still require attention such as increasing algorithm sensitivity, reducing the number of false positives, improving and optimizing the algorithm detection of different kinds of nodules with different sizes and shapes and, finally, the ability to integrate with the Electronic Medical Record Systems and Picture Archiving and Communication Systems. Based on this analysis, we can say that further research is needed to develop current techniques and that new algorithms are needed to overcome the identified drawbacks.
Automatic 3D pulmonary nodule detection in CT images: A survey
S0169260715300328
In crisis situations, a seamless ubiquitous communication is necessary to provide emergency medical service to save people's lives. An excellent prehospital emergency medicine provides immediate medical care to increase the survival rate of patients. On their way to the hospital, ambulance personnel must transmit real-time and uninterrupted patient information to the hospital to apprise the physician of the situation and provide options to the ambulance personnel. In emergency and crisis situations, many communication channels can be unserviceable because of damage to equipment or loss of power. Thus, data transmission over wireless communication to achieve uninterrupted network services is a major obstacle. This study proposes a mobile middleware for cognitive radio (CR) for improving the wireless communication link. CRs can sense their operating environment and optimize the spectrum usage so that the mobile middleware can integrate the existing wireless communication systems with a seamless communication service in heterogeneous network environments. Eventually, the proposed seamless mobile communication middleware was ported into an embedded system, which is compatible with the actual network environment without the need for changing the original system architecture.
A seamless ubiquitous emergency medical service for crisis situations
S0169260715300390
Objectives The present work has the goal of developing a secure medical imaging information system based on a combined steganography and cryptography technique. It attempts to securely embed a patient's confidential information into his/her medical images. Methods The proposed information security scheme conceals coded Electronic Patient Records (EPRs) into medical images in order to protect the EPRs’ confidentiality without affecting the image quality and particularly the Region of Interest (ROI), which is essential for diagnosis. The secret EPR data is converted into ciphertext using a private symmetric encryption method. Since the Human Visual System (HVS) is less sensitive to alterations in sharp regions compared to uniform regions, a simple edge detection method has been introduced to identify and embed in edge pixels, which leads to an improved stego image quality. In order to increase the embedding capacity, the algorithm embeds a variable number of bits (up to 3) in edge pixels based on the strength of edges. Moreover, to increase the efficiency, two message coding mechanisms have been utilized to enhance the ±1 steganography. The first one, which is based on Hamming code, is simple and fast, while the other, which is known as the Syndrome Trellis Code (STC), is more sophisticated as it attempts to find a stego image that is close to the cover image through minimizing the embedding impact. The proposed steganography algorithm embeds the secret data bits into the Region of Non Interest (RONI); due to its diagnostic importance, the ROI is preserved from modification. Results The experimental results demonstrate that the proposed method can embed a large amount of secret data without leaving a noticeable distortion in the output image. The effectiveness of the proposed algorithm is also proven using one of the efficient steganalysis techniques. Conclusion The proposed medical imaging information system proved to be capable of concealing EPR data and producing imperceptible stego images with minimal embedding distortions compared to other existing methods. In order to refrain from introducing any modifications to the ROI, the proposed system only utilizes the Region of Non Interest (RONI) for embedding the EPR data.
Quality optimized medical image information hiding algorithm that employs edge detection and data coding
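A much-simplified sketch of the edge-guided embedding idea: detect edges with OpenCV's Canny, then hide bits in the least significant bit of edge pixels lying outside a protected ROI. The ±1 embedding, variable bit depth and Hamming/STC coding of the actual method are omitted, and the cover image, ROI and thresholds are assumptions.

```python
import numpy as np
import cv2

rng = np.random.default_rng(6)
# Smoothed random texture stands in for a grayscale medical image.
cover = cv2.GaussianBlur(rng.integers(0, 256, (128, 128), dtype=np.uint8), (7, 7), 0)

# Protect a central ROI (diagnostically relevant region) from any modification.
roi_mask = np.zeros_like(cover, dtype=bool)
roi_mask[32:96, 32:96] = True

edges = cv2.Canny(cover, 100, 200) > 0           # embed only in sharp regions
candidates = np.flatnonzero(edges & ~roi_mask)   # edge pixels inside the RONI

secret_bits = rng.integers(0, 2, size=min(200, candidates.size)).astype(np.uint8)
stego = cover.copy().ravel()
idx = candidates[:secret_bits.size]
stego[idx] = (stego[idx] & 0xFE) | secret_bits   # overwrite the LSB
stego = stego.reshape(cover.shape)

# Extraction (reusing the same indices; a real scheme must recover them
# from the stego image itself, e.g. via edge detection on higher bit planes).
extracted = stego.ravel()[idx] & 1
assert np.array_equal(extracted, secret_bits)
mse = max(np.mean((cover.astype(float) - stego) ** 2), 1e-12)
print(f"embedded {secret_bits.size} bits, PSNR = {10 * np.log10(255 ** 2 / mse):.1f} dB")
```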
S0169260715300419
In this paper, a MATLAB-based graphical user interface (GUI) software tool for general biomedical signal processing and analysis of functional neuroimaging data is introduced. Specifically, electroencephalography (EEG) and electrocardiography (ECG) signals can be processed and analyzed by the developed tool, which incorporates commonly used temporal and frequency analysis methods. In addition to common methods, the tool also provides non-linear chaos analysis with Lyapunov exponents and entropies; multivariate analysis with principal and independent component analyses; and pattern classification with discriminant analysis. This tool can also be utilized for training in biomedical engineering education. This easy-to-use and easy-to-learn, intuitive tool is described in detail in this paper.
Design of a novel biomedical signal processing and analysis tool for functional neuroimaging
S0169260715300420
Nowadays, the diagnosis and treatment of pelvic sarcoma pose a major surgical challenge for reconstruction in orthopedics. With the development of manufacturing technology, metal 3D-printed customized implants have revolutionized limb-salvage resection and reconstruction surgery. However, tumor resection is not without risk, and precise implant placement is very difficult due to the anatomic intricacies of the pelvis. In this study, a surgical navigation system including an implant calibration algorithm has been developed, so that the surgical instruments and the 3D-printed customized implant can be tracked and rendered on the computer screen in real time, minimizing the risks and improving the precision of the surgery. Both the phantom experiment and the pilot clinical case study demonstrated the feasibility of our computer-aided surgical navigation system. According to the accuracy evaluation experiment, the precision of customized implant installation can be improved three to five times (TRE: 0.75±0.18mm) compared with non-navigated implant installation after the guided osteotomy (TRE: 3.13±1.28mm), which means it is sufficient to meet the clinical requirements of pelvic reconstruction. However, more clinical trials will be conducted in future work to validate the reliability and efficiency of our navigation system.
Image-guided installation of 3D-printed patient-specific implant and its application in pelvic tumor resection and reconstruction surgery
S0169260715300432
Interventional cardiologists have a deep interest in risk stratification prior to stenting and percutaneous coronary intervention (PCI) procedures. Intravascular ultrasound (IVUS) is most commonly adopted for screening, but current tools lack the ability for risk stratification based on grayscale plaque morphology. Our hypothesis, based on the genetic makeup of atherosclerotic disease, is that there is a link between coronary atherosclerotic disease and carotid plaque build-up. This novel idea is explored in this study for coronary risk assessment and the classification of patients into high risk and low risk. This paper presents a strategy for coronary risk assessment by combining IVUS grayscale plaque morphology and B-mode ultrasound carotid intima-media thickness (cIMT) – a marker of subclinical atherosclerosis. The support vector machine (SVM) learning paradigm is adopted for risk stratification, where both the learning and testing phases use tissue characteristics derived from six feature combinational spaces, which are then used by the SVM classifier with five different kernel sets. These six feature combinational spaces are designed using 56 novel feature sets. A K-fold cross-validation protocol with 10 trials per fold is used for optimization of the best SVM kernel and the best feature combination set. IRB-approved coronary IVUS and carotid B-mode ultrasound were jointly collected on 15 patients (2 days apart) via: (a) a 40MHz catheter utilizing iMap (Boston Scientific, Marlborough, MA, USA) with 2865 frames per patient (42,975 frames) and (b) a linear-probe B-mode carotid ultrasound (Toshiba scanner, Japan). Using the above protocol, the system shows a classification accuracy of 94.95% and an AUC of 0.95 using the optimized feature combination. This is the first system of its kind for risk stratification as a screening tool, intended to prevent excessive cost burden and to support better management of patients’ cardiovascular disease, while validating our two hypotheses.
A new method for IVUS-based coronary artery disease risk stratification: A link between coronary & carotid ultrasound plaque burdens
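A hedged sketch of the classification protocol: an SVM with several kernels evaluated by 10-fold cross-validation on synthetic feature vectors; the 56 synthetic features only stand in for the real IVUS/cIMT feature combinations and the trial-repetition scheme is omitted.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for joint IVUS plaque-morphology + cIMT feature vectors.
X, y = make_classification(n_samples=300, n_features=56, n_informative=10,
                           random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for kernel in ["linear", "rbf", "poly", "sigmoid"]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
    print(f"{kernel:8s} accuracy={acc:.3f} AUC={auc:.3f}")
```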
S0169260715300468
Background The simultaneous acquisition of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) provides both high temporal and spatial resolution when measuring brain activity. A real-time analysis during a simultaneous EEG–fMRI acquisition is essential when studying neurofeedback and conducting effective brain activity monitoring. However, the ballistocardiogram (BCG) artifacts which are induced by heartbeat-related electrode movements in an MRI scanner severely contaminate the EEG signals and hinder a reliable real-time analysis. New method The optimal basis sets (OBS) method is an effective candidate for removing BCG artifacts in a traditional offline EEG–fMRI analysis, but has yet to be applied to a real-time EEG–fMRI analysis. Here, a novel real-time technique based on OBS method (rtOBS) is proposed to remove BCG artifacts on a moment-to-moment basis. Real-time electrocardiogram R-peak detection procedure and sliding window OBS method were adopted. Results A series of simulated data was constructed to verify the feasibility of the rtOBS technique. Furthermore, this method was applied to real EEG–fMRI data to remove BCG artifacts. The results of both simulated data and real EEG–fMRI data from eight healthy human subjects demonstrate the effectiveness of rtOBS in both the time and frequency domains. Comparison with existing methods A comparison between rtOBS and real-time averaged artifact subtraction (rtAAS) was conducted. The results suggest the efficacy and advantage of rtOBS in the real-time removal of BCG artifacts. Conclusions In this study, a novel real-time OBS technique was proposed for the real-time removal of BCG artifacts. The proposed method was tested using simulated data and applied to real simultaneous EEG–fMRI data. The results suggest the effectiveness of this method.
A real-time method to reduce ballistocardiogram artifacts from EEG during fMRI based on optimal basis sets (OBS)
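A compact NumPy sketch of the OBS principle on synthetic data: epoch the contaminated signal around R peaks, form a basis from the mean artifact and the first principal components, and subtract each epoch's least-squares fit. The R-peak timing and artifact template are invented, and the sliding-window, real-time machinery of rtOBS is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, dur = 250, 60                         # Hz, seconds
n = fs * dur
eeg = rng.normal(0, 1.0, n)               # background EEG stand-in

# Synthetic BCG artifact locked to heartbeats (about 1 Hz) added to the EEG.
r_peaks = np.arange(fs // 2, n - fs, fs)  # assumed R-peak samples, 1 s apart
template = 8 * np.hanning(fs)             # artifact waveform, 1 s long
contaminated = eeg.copy()
for r in r_peaks:
    contaminated[r:r + fs] += template * (0.8 + 0.4 * rng.random())

# OBS: stack peri-R epochs; basis = mean artifact + first principal components.
epochs = np.stack([contaminated[r:r + fs] for r in r_peaks])
mean_artifact = epochs.mean(axis=0)
_, _, vt = np.linalg.svd(epochs - mean_artifact, full_matrices=False)
basis = np.vstack([mean_artifact, vt[:3]]).T         # shape (fs, 4)

cleaned = contaminated.copy()
for r, epoch in zip(r_peaks, epochs):
    coef, *_ = np.linalg.lstsq(basis, epoch, rcond=None)
    cleaned[r:r + fs] = epoch - basis @ coef          # subtract fitted artifact

residual = np.mean((cleaned - eeg) ** 2) / np.mean((contaminated - eeg) ** 2)
print(f"artifact power reduced to {100 * residual:.1f}% of its original level")
```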
S0169260715300523
Background and objectives The diagnosis of Developmental Dysplasia of the Hip (DDH) in infants is currently made primarily by ultrasound. However, two-dimensional ultrasound (2DUS) images capture only an incomplete portion of the acetabular shape, and the alpha and beta angles measured on 2DUS for the Graf classification technique show high inter-scan and inter-observer variability. This variability relates partly to the manual determination of the apex point separating the acetabular roof from the ilium during index measurement. This study proposes a new 2DUS image processing technique for semi-automated tracing of the bony surface followed by automatic calculation of two indices: a contour-based alpha angle (αA), and a new modality-independent quantitative rounding index (M). The new index M is independent of the apex point, and can be directly extended to 3D surface models. Methods We tested the proposed indices on a dataset of 114 2DUS scans of infant hips aged between 4 and 183 days, scanned using a 12MHz linear transducer. We calculated the manual alpha angle (αM), coverage, contour-based alpha angle and rounding index for each of the recordings and statistically evaluated these indices based on regression analysis, area under the receiver operating characteristic curve (AUC) and analysis of variance (ANOVA). Results Processing time for calculating αA and M was similar to manual alpha angle measurement, ∼30s per image. Reliability of the new indices was high, with inter-observer intraclass correlation coefficients (ICC) of 0.90 for αA and 0.89 for M. For a diagnostic test classifying hips as normal or dysplastic, AUC was 93.0% for αA vs. 92.7% for αM, 91.6% for M alone, and up to 95.7% for the combination of M with αM, αA or coverage. Conclusions The rounding index provides complementary information to conventional indices such as alpha angle and coverage. Calculation of the contour-based alpha angle and rounding index is rapid, shows potential to improve the reliability and accuracy of DDH diagnosis from 2DUS, and could be extended to 3D ultrasound in the future.
Toward automated classification of acetabular shape in ultrasound for diagnosis of DDH: Contour alpha angle and the rounding index
S0169260715300535
Recently, there has been a growing interest in the field of metabolomics, materialized by a remarkable growth in experimental techniques, available data and related biological applications. Indeed, techniques as nuclear magnetic resonance, gas or liquid chromatography, mass spectrometry, infrared and UV–visible spectroscopies have provided extensive datasets that can help in tasks as biological and biomedical discovery, biotechnology and drug development. However, as it happens with other omics data, the analysis of metabolomics datasets provides multiple challenges, both in terms of methodologies and in the development of appropriate computational tools. Indeed, from the available software tools, none addresses the multiplicity of existing techniques and data analysis tasks. In this work, we make available a novel R package, named specmine, which provides a set of methods for metabolomics data analysis, including data loading in different formats, pre-processing, metabolite identification, univariate and multivariate data analysis, machine learning, and feature selection. Importantly, the implemented methods provide adequate support for the analysis of data from diverse experimental techniques, integrating a large set of functions from several R packages in a powerful, yet simple to use environment. The package, already available in CRAN, is accompanied by a web site where users can deposit datasets, scripts and analysis reports to be shared with the community, promoting the efficient sharing of metabolomics data analysis pipelines.
An R package for the integrated analysis of metabolomics and spectral data
S0169260715300560
Background and objectives Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative to create personalized and enhanced search experiences. Therefore, this work introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to incorporate core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. Methods The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabulary. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems or a complete web interface personalization. Results The construction of the Smart Drug Search is described as a proof-of-concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and similar topics. The keyword-based queries of the users are transformed into concepts and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. The number of occurrences of the concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations meaningful to that particular scope of research. Conversely, indirect concept associations, i.e. concepts related by other intermediary concepts, can be useful to integrate information from different studies and look into non-trivial relations. Conclusions The BIOMedical Search Engine Framework supports the development of domain-specific search engines. The key strengths of the framework are modularity and extensibility in terms of software design, the use of open-source consolidated Web technologies, and the ability to integrate any number of biomedical text mining tools and information resources. Currently, the Smart Drug Search keeps over 1,186,000 documents, containing more than 11,854,000 annotations for 77,200 different concepts. The Smart Drug Search is publicly accessible at http://sing.ei.uvigo.es/sds/. The BIOMedical Search Engine Framework is freely available for non-commercial use at https://github.com/agjacome/biomsef.
BIOMedical Search Engine Framework: Lightweight and customized implementation of domain-specific biomedical search engines
S0169260715300614
Background and objective Classification of gene expression data is the common denominator of various biomedical recognition tasks. However, obtaining class labels for large training samples may be difficult or even impossible in many cases. Therefore, semi-supervised classification techniques are required, as semi-supervised classifiers take advantage of unlabeled data. Methods Gene expression data is high-dimensional, which gives rise to the phenomena known under the umbrella of the curse of dimensionality, one of its recently explored aspects being the presence of hubs, or hubness for short. Therefore, hubness-aware classifiers have been developed recently, such as the Naive Hubness-Bayesian k-Nearest Neighbor (NHBNN). In this paper, we propose a semi-supervised extension of NHBNN which follows the self-training schema. As one of the core components of self-training is the certainty score, we propose a new hubness-aware certainty score. Results We performed experiments on publicly available gene expression data. These experiments show that the proposed classifier outperforms its competitors. We investigated the impact of each of the components (classification algorithm, semi-supervised technique, hubness-aware certainty score) separately and showed that each of these components is relevant to the performance of the proposed approach. Conclusions Our results imply that our approach may increase classification accuracy and reduce computational costs (i.e., runtime). Based on the promising results presented in the paper, we envision that hubness-aware techniques will be used in various other biomedical machine learning tasks. In order to accelerate this process, we made an implementation of hubness-aware machine learning techniques publicly available in the PyHubs software package (http://www.biointelligence.hu/pyhubs), implemented in Python, one of the most popular programming languages of data science.
Classification of gene expression data: A hubness-aware semi-supervised approach
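For illustration, a minimal self-training loop can be sketched around an ordinary k-nearest-neighbor classifier, using the classifier's own predicted class probabilities as the certainty score; the paper's NHBNN classifier and its hubness-aware certainty score are not reproduced here, and the neighborhood size and threshold below are placeholder choices.

```python
# Minimal self-training sketch around a k-NN classifier (illustrative only;
# the paper's NHBNN classifier and hubness-aware certainty score are not shown).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def self_training_knn(X_lab, y_lab, X_unlab, k=5, threshold=0.9, max_iter=20):
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(max_iter):
        if len(X_unlab) == 0:
            break
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unlab)
        certainty = proba.max(axis=1)           # certainty score: max posterior
        confident = certainty >= threshold      # unlabeled instances to pseudo-label
        if not confident.any():
            break
        pseudo_labels = clf.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo_labels])
        X_unlab = X_unlab[~confident]
    return KNeighborsClassifier(n_neighbors=k).fit(X_lab, y_lab)
```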
S0169260715300663
Transfer function design is a key issue in direct volume rendering. Many sophisticated transfer functions have been proposed to visualize boundaries in volumetric data sets such as computed tomography and magnetic resonance imaging. However, reliably detecting boundaries with conventional approaches remains challenging, and the associated interactive strategies are complicated for new users and even for experts. In this paper, we first propose human-centric boundary extraction criteria and our boundary model. Based on the model, we present a boundary visualization method through a “what material you pick is what boundary you see” approach. Users can pick the material of interest to directly convey semantics. In addition, 3-D Canny edge detection is utilized to ensure good localization of boundaries. Furthermore, we establish a point-to-material distance measure to guarantee the accuracy and integrity of boundaries. The proposed boundary visualization is intuitive and flexible for the exploration of volumetric data.
Visualization of boundaries in volumetric data sets through a what material you pick is what boundary you see approach
S0169260715300699
Psoriasis is an autoimmune skin disease with red and scaly plaques on the skin, affecting about 125 million people worldwide. Currently, dermatologists use visual and haptic methods to diagnose disease severity, which does not help them in stratification and risk assessment of the lesion stage and grade. Further, current methods add complexity during the monitoring and follow-up phases. The current diagnostic tools lead to subjectivity in decision making and are unreliable and laborious. This paper presents a first comparative performance study of its kind using a principal component analysis (PCA) based CADx system for psoriasis risk stratification and image classification utilizing: (i) 11 higher order spectra (HOS) features, (ii) 60 texture features, and (iii) 86 color features, as well as their seven combinations. In aggregate, 540 image samples (270 healthy and 270 diseased) from 30 psoriasis patients of Indian ethnic origin are used in our database. Machine learning using PCA is used for dominant feature selection, and the selected features are then fed to a support vector machine (SVM) classifier to obtain optimized performance. Three different protocols are implemented using the three kinds of feature sets. A reliability index of the CADx system is computed. Among all feature combinations, the CADx system shows optimal performance of 100% accuracy, 100% sensitivity and specificity when all three feature sets are combined. Further, our experimental results with increasing data size show that all feature combinations yield a high reliability index throughout the PCA cutoffs, except for the color feature set and the combination of color and texture feature sets. HOS features are powerful in psoriasis disease classification and stratification. Although, independently, all three feature sets (HOS, texture, and color) perform competitively, the machine learning system performs best when they are combined. The system is fully automated, reliable and accurate.
Computer-aided diagnosis of psoriasis skin images with HOS, texture and color features: A first comparative study of its kind
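As a rough illustration of the described pipeline (PCA for dimensionality reduction feeding an SVM), a hedged scikit-learn sketch follows; the feature matrix, number of retained components, kernel choice and cross-validation protocol are assumptions, not the study's exact settings.

```python
# Illustrative PCA + SVM pipeline in the spirit of the described CADx system
# (component count, kernel and CV scheme are placeholders).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: (n_samples, n_features) matrix of HOS + texture + color features
# y: 0 = healthy, 1 = diseased
def evaluate_cadx(X, y, n_components=20):
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          SVC(kernel="rbf"))
    return cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
```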
S0169260715300742
Recently, various non-invasive tools such as magnetic resonance imaging (MRI), ultrasound imaging (USI), computed tomography (CT), and computational fluid dynamics (CFD) have been widely utilized to enhance our current understanding of the physiological parameters that affect the initiation and the progression of the cardiovascular diseases (CVDs) associated with heart failure (HF). In particular, the hemodynamics of the left ventricle (LV) has attracted the attention of researchers due to its significant role in heart functionality. In this study, CFD, owing to its capability of predicting detailed flow fields, was adopted to model the blood flow in an image-based patient-specific LV over the cardiac cycle. In most published studies, the blood is modeled as Newtonian, which is not entirely accurate because blood viscosity varies with shear rate in a non-linear manner. In this paper, we studied the effect of the Newtonian assumption on the accuracy of the computed intraventricular hemodynamics. In doing so, various non-Newtonian models and the Newtonian model are used in the analysis of the intraventricular flow and the viscosity of the blood. Initially, we used cardiac MRI images to reconstruct the time-resolved geometry of the patient-specific LV. After unstructured mesh generation, the simulations were conducted in the commercial CFD solver FLUENT to analyze the intraventricular hemodynamic parameters. The findings indicate that the Newtonian assumption cannot adequately capture the flow dynamics within the LV over the cardiac cycle, which can be attributed to the pulsatile and recirculating nature of the flow and the low blood shear rate.
The numerical analysis of non-Newtonian blood flow in human patient-specific left ventricle
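For context, a commonly used shear-thinning description of blood is the Carreau model; the abstract does not state which non-Newtonian models were implemented, so the model choice and the parameter values below (often quoted for blood) are assumptions for illustration only.

```python
# Carreau model for shear-dependent blood viscosity (one commonly used
# non-Newtonian model; the specific models and parameters in the paper are
# not given in the abstract, so these values are illustrative).
import numpy as np

def carreau_viscosity(shear_rate, eta0=0.056, eta_inf=0.00345, lam=3.313, n=0.3568):
    """Apparent viscosity in Pa*s as a function of shear rate in 1/s."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

shear_rates = np.logspace(-2, 3, 6)      # 0.01 to 1000 1/s
print(carreau_viscosity(shear_rates))    # viscosity falls toward eta_inf at high shear
```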
S0169260715300857
Background and objective In this paper, we have tested the suitability of using different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. Classification of surgical risk provides enormous benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair, and other elements that influence the final result. This preventive estimation may help to avoid future complications, or even death. Methods We have evaluated four machine learning algorithms to achieve our objective: multilayer perceptron, self-organizing map, radial basis function networks and decision trees. The implemented architectures aim to classify among three types of surgical risk: low complexity, medium complexity and high complexity. Results Accuracy outcomes achieved range between 80% and 99%, with the multilayer perceptron method offering the highest hit ratio. Conclusions According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk related to a congenital heart disease surgery.
Aid decision algorithms to estimate the risk in congenital heart surgery
S0169260715300894
One of the key elements of e-learning platforms is the content provided to the students. Content creation is a time-demanding task that requires teachers to prepare material taking into account that it will be accessed on-line. Moreover, the teacher is restricted by the functionalities provided by the e-learning platforms. In contexts such as radiology, where images have a key role, the required functionalities are even more specific and difficult for these platforms to provide. Our purpose is to create a framework that makes teachers’ tasks easier, especially when they have to deal with content in which images play a main role. In this paper, we present RadEd, a new web-based teaching framework that integrates a smart editor to create case-based exercises that support image interaction such as changing the window width and the grey scale used to render the image, taking measurements on the image, attaching labels to images and selecting parts of the images, amongst others. It also provides functionalities to prepare courses with different topics, exercises and theory material, as well as functionalities to supervise students’ work. Different experts have used RadEd and all of them have considered it a very useful and valuable tool to prepare courses where radiological images are the main component. RadEd provides teachers with functionalities to prepare more realistic cases and gives students the ability to make a more specific diagnosis.
A new e-learning platform for radiology education (RadEd)
S0169260715300961
Glomerulus diameter and Bowman's space width in renal microscopic images are indicators of various diseases. Therefore, the detection of the renal corpuscle and related objects is a key step in the histopathological evaluation of renal microscopic images. However, the task of automatic glomeruli detection is challenging due to their wide intensity variation, besides the inconsistency in terms of shape and size of the glomeruli in the renal corpuscle. Here, a novel solution is proposed which uses the Particle Analyzer technique, based on median filtering and morphological image processing, to detect the renal corpuscle objects. Afterwards, the glomerulus diameter and Bowman's space width are measured. The solution was tested with a dataset of 21 rats’ renal corpuscle images acquired using a light microscope. The experimental results proved that the proposed solution can detect the renal corpuscle and its objects efficiently. Moreover, the proposed solution has the ability to handle any input image, ensuring robustness to deformations of the glomeruli, even in cases of glomerular hypertrophy. The results also showed a significant difference between the control group and the affected group (which ingested an additional daily dose of 14.6 mg of fructose) in terms of glomerulus diameter (97.40±19.02 μm and 177.03±54.48 μm, respectively).
Measurement of glomerulus diameter and Bowman's space width of renal albino rats
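A hedged sketch of the general idea (median filtering, morphological clean-up, and region/particle analysis to obtain an equivalent diameter) is given below; the exact filter sizes, thresholding rule, and pixel-to-micron calibration used in the paper are not specified in the abstract, so all values here are placeholders.

```python
# Rough sketch: median filter + morphology + region analysis to estimate a
# glomerulus diameter from a grayscale renal micrograph (parameters are placeholders).
import numpy as np
from skimage import filters, morphology, measure

def estimate_glomerulus_diameter(gray_image, microns_per_pixel=1.0):
    smoothed = filters.median(gray_image, morphology.disk(3))   # suppress noise
    binary = smoothed > filters.threshold_otsu(smoothed)        # segment bright structures
    binary = morphology.remove_small_objects(binary, min_size=500)
    binary = morphology.binary_closing(binary, morphology.disk(5))
    labels = measure.label(binary)
    regions = measure.regionprops(labels)
    if not regions:
        return None
    glom = max(regions, key=lambda r: r.area)                   # assume largest blob is the glomerulus
    return glom.equivalent_diameter * microns_per_pixel         # diameter of equal-area circle
```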
S0169260715300973
Background and objective Carpal fusions are useful for treating specific carpal disorders, maximizing postoperative wrist motion and hand strength while reducing pain and joint instability. The surgeon selects the appropriate treatment by considering the degree of stability, the chronicity of the injury, the functional demands of the patient and previous patients' outcomes as well. However, there are few studies regarding the load distribution produced by each treatment. Therefore, the purpose of this study is to analyze the load distribution through the wrist joint after an arthrodesis treatment and compare the results with a normal wrist. Method To this end, the rigid body spring model (RBSM) method was used on a three-dimensional model of the wrist joint. The cartilage and ligaments were simulated as springs acting under compression and tension, respectively, while the bones were considered as rigid bodies. To simulate the arthrodesis, the fused bones were considered as a single rigid body. Results The changes in the load distribution for each arthrodesis agree with the treatment objective, reducing load transmission through a specific articular surface. For example, for SLAC/SNAC II most of the treatments reduced the load transmitted through the radioscaphoid fossae by almost 8%. However, the capitolunate (CL) arthrodesis was the treatment that kept the load transmitted through the radiolunate joint closest to normal conditions. Also, in treatments where the scaphoid was excised (3-corner, 4-corner and capitolunate arthrodesis), the lunate joint surface compensates by doubling the force transmitted to the radius. Conclusions The common arthrodeses for treating SLAC/SNAC II-III do, in fact, reduce the load on the radioscaphoid joint. Alternative treatments that reduce the load on the radiocarpal joint should be three-corner and capitolunate arthrodesis for treating SLAC/SNAC-II, and four-corner arthrodesis with scaphoid excision for SLAC/SNAC-III. For Kienbock's disease, scaphocapitate (SC) arthrodesis is more effective in reducing the load transmission through the radiolunate and ulnolunate joints. All arthrodesis treatments should consider changes in load transmission, as well as bone fusion rates and pain reduction in patients' outcomes.
Load distribution on the radio-carpal joint for carpal arthrodesis
S0169260715301012
Background and objectives Computer-aided analysis of mammograms has been employed by radiologists as a vital tool to increase the precision in the diagnosis of breast cancer. The efficiency of such an analysis depends on the employed mammogram enhancement approach, as its major role is to yield a visually improved image for radiologists. Methods The Non-linear Polynomial Filtering (NPF) framework has been explored previously as a robust approach for contrast improvement of mammographic images. This paper presents the extension of the NPF framework to sharpening and edge enhancement of mammogram lesions. The proposed NPF serves to enhance the edges and sharpness of the lesion region (region of interest) in mammograms in a manner that minimizes the dependence on pre-selected thresholds. In the present work, the Logarithmic Image Processing (LIP) model has been employed to improve the visualization of mammograms based on Human Visual System (HVS) characteristics. Results The proposed NPF filtering framework yields mammograms with significant improvement in contrast, edges and sharpness of the lesion region. The performance of the proposed approach has been validated using state-of-the-art objective evaluation measures of mammogram enhancement, such as the Contrast Improvement Index (CII), Peak Signal-to-Noise Ratio (PSNR), Average Signal-to-Noise Ratio (ASNR) and Combined Enhancement Measure (CEM), as well as subjective evaluation based on radiologists’ opinions. Conclusions The proposed NPF provides a robust solution for performing noise-controlled contrast and edge enhancement using a single filtering model. This leads to better visualization of the fine lesion details predictive of their severity. The applicability of a single filtering methodology for carrying out denoising, contrast and edge enhancement adds to the value of the overall framework.
Non-linear polynomial filters for edge enhancement of mammogram lesions
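As an aside, one widely used form of the Contrast Improvement Index (CII) named above can be computed as follows; whether the paper uses exactly this definition of local contrast is an assumption.

```python
# One common form of the Contrast Improvement Index (CII): the ratio of the
# local contrast of the lesion ROI in the enhanced image to that in the
# original image, with contrast C = (f - b) / (f + b), where f and b are the
# mean intensities of the ROI (foreground) and its surrounding background.
import numpy as np

def local_contrast(image, roi_mask, background_mask):
    f = image[roi_mask].mean()
    b = image[background_mask].mean()
    return (f - b) / (f + b)

def contrast_improvement_index(original, enhanced, roi_mask, background_mask):
    return (local_contrast(enhanced, roi_mask, background_mask) /
            local_contrast(original, roi_mask, background_mask))
```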
S0169260715301036
Background and objectives In computed tomography (CT), statistical iterative reconstruction (SIR) approaches can produce images of higher quality compared to conventional analytical methods such as the filtered backprojection (FBP) algorithm. Effective noise modeling and the possibility to incorporate priors into the image reconstruction problem are the main advantages that lead to the continuous development of SIR methods. Driven by low-dose CT requirements, several methods have recently been developed to obtain high-quality image reconstructions from down-sampled or noisy projection data. In this paper, new prior information obtained from a probabilistic atlas is proposed for low-dose CT image reconstruction. Methods The proposed approach consists of two main phases. In the learning phase, a dataset of images obtained from different patients is used to construct a 3D atlas with a Laplacian mixture model. The expectation maximization (EM) algorithm is used to estimate the mixture parameters. In the reconstruction phase, prior information obtained from the probabilistic atlas is used to construct the cost function for image reconstruction. Results We investigate low-dose imaging by considering the reduction of X-ray beam intensity and by acquiring the projection data through a small number of views or limited view angles. Experimental studies using simulated data and chest screening CT data demonstrate that the probabilistic atlas prior is a practically promising approach for low-dose CT imaging. Conclusions The prior information obtained from a probabilistic atlas constructed from earlier scans of different patients is useful in low-dose CT imaging.
Probabilistic atlas prior for CT image reconstruction
S0169260715301139
Background and objective Injury of knee joint cartilage may result in pathological vibrations between the articular surfaces during extension and flexion motions. The aim of this paper is to analyze and quantify vibroarthrographic (VAG) signal irregularity associated with articular cartilage degeneration and injury in the patellofemoral joint. Methods The symbolic entropy (SyEn), approximate entropy (ApEn), fuzzy entropy (FuzzyEn), and the mean, standard deviation, and root-mean-squared (RMS) values of the envelope amplitude were utilized to quantify the signal fluctuations associated with articular cartilage pathology of the patellofemoral joint. The quadratic discriminant analysis (QDA), generalized logistic regression analysis (GLRA), and support vector machine (SVM) methods were used to perform signal pattern classifications. Results The experimental results showed that the patients with cartilage pathology (CP) possess larger SyEn and ApEn, but smaller FuzzyEn, than the healthy subjects (HS), with the differences significant according to the Wilcoxon rank-sum test (p < 0.01). The mean, standard deviation, and RMS values computed from the amplitude difference between the upper and lower signal envelopes are also consistently and significantly larger (p < 0.01) for the group of CP patients than for the HS group. The SVM based on the entropy and envelope amplitude features can provide superior classification performance as compared with QDA and GLRA, with an overall accuracy of 0.8356, sensitivity of 0.9444, specificity of 0.8, Matthews correlation coefficient of 0.6599, and an area of 0.9212 under the receiver operating characteristic curve. Conclusions The SyEn, ApEn, and FuzzyEn features can provide useful information about pathological VAG signal irregularity based on different entropy metrics. The statistical parameters of signal envelope amplitude can be used to characterize the temporal fluctuations related to the cartilage pathology.
Quantification of knee vibroarthrographic signal irregularity associated with patellofemoral joint cartilage pathology based on entropy and envelope amplitude measures
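For reference, approximate entropy (ApEn), one of the irregularity measures used above, can be computed from a 1-D signal as in the sketch below; the embedding dimension and tolerance are typical defaults rather than the paper's settings.

```python
# Straightforward (O(N^2)) approximate entropy implementation following the
# standard Pincus definition; m and r are common defaults, not necessarily
# those used in the paper.
import numpy as np

def approximate_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def phi(mm):
        n = len(x) - mm + 1
        patterns = np.array([x[i:i + mm] for i in range(n)])
        # Chebyshev distance between all pairs of embedded vectors
        dist = np.max(np.abs(patterns[:, None, :] - patterns[None, :, :]), axis=2)
        c = np.mean(dist <= r, axis=1)          # self-matches included, so c > 0
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)
```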
S0169260715301140
An extensive, in-depth study of cardiovascular risk factors (CVRF) seems to be of crucial importance in cardiovascular disease (CVD) research in order to prevent (or reduce) the chance of developing or dying from CVD. The main focus of the data analysis is on the use of models able to discover and understand the relationships between different CVRF. This paper reports on the application of Bayesian network (BN) modeling to discover the relationships among thirteen relevant epidemiological features of the heart age domain in order to analyze cardiovascular lost years (CVLY), cardiovascular risk score (CVRS), and metabolic syndrome (MetS). Furthermore, the induced BN was used to make inference taking into account three reasoning patterns: causal reasoning, evidential reasoning, and intercausal reasoning. Application of BN tools has led to the discovery of several direct and indirect relationships between different CVRF. The BN analysis showed several interesting results, among them: CVLY was highly influenced by smoking, with men being the group at highest risk in CVLY; MetS was highly influenced by physical activity (PA), with men again being the group at highest risk in MetS, while smoking did not show any influence. BNs produce an intuitive, transparent, graphical representation of the relationships between different CVRF. The ability of BNs to predict new scenarios when hypothetical information is introduced makes BN modeling an Artificial Intelligence (AI) tool of special interest in epidemiological studies. As CVD is multifactorial, the use of BNs seems to be an adequate modeling approach.
Bayesian network modeling: A case study of an epidemiologic system analysis of cardiovascular risk
S0169260715301152
Background and objective Signal segmentation and spike detection are two important biomedical signal processing applications. Often, non-stationary signals must be segmented into piece-wise stationary epochs or spikes need to be found among a background of noise before being further analyzed. Permutation entropy (PE) has been proposed to evaluate the irregularity of a time series. PE is conceptually simple, structurally robust to artifacts, and computationally fast. It has been extensively used in many applications, but it has two key shortcomings. First, when a signal is symbolized using the Bandt–Pompe procedure, only the order of the amplitude values is considered and information regarding the amplitudes is discarded. Second, in the PE, the effect of equal amplitude values in each embedded vector is not addressed. To address these issues, we propose a new entropy measure based on PE: the amplitude-aware permutation entropy (AAPE). Methods AAPE is sensitive to the changes in the amplitude, in addition to the frequency, of the signals thanks to it being more flexible than the classical PE in the quantification of the signal motifs. To demonstrate how the AAPE method can enhance the quality of the signal segmentation and spike detection, a set of synthetic and realistic synthetic neuronal signals, electroencephalograms and neuronal data are processed. We compare the performance of AAPE in these problems against state-of-the-art approaches and evaluate the significance of the differences with a repeated ANOVA with post hoc Tukey's test. Results In signal segmentation, the accuracy of AAPE-based method is higher than conventional segmentation methods. AAPE also leads to more robust results in the presence of noise. The spike detection results show that AAPE can detect spikes well, even when presented with single-sample spikes, unlike PE. For multi-sample spikes, the changes in AAPE are larger than in PE. Conclusion We introduce a new entropy metric, AAPE, that enables us to consider amplitude information in the formulation of PE. The AAPE algorithm can be used in almost every irregularity-based application in various signal and image processing fields. We also made freely available the Matlab code of the AAPE.
Amplitude-aware permutation entropy: Illustration in spike detection and signal segmentation
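For comparison, the classical Bandt-Pompe permutation entropy that AAPE extends can be sketched as follows; the amplitude-aware weighting that defines AAPE itself is not reproduced here (the authors provide MATLAB code for that).

```python
# Classical (Bandt-Pompe) permutation entropy for reference; the proposed AAPE
# additionally weights each ordinal motif by amplitude information, which this
# sketch does not implement.
import math
import numpy as np

def permutation_entropy(x, m=3, tau=1, normalize=True):
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    # ordinal pattern (rank order) of each embedded vector
    patterns = np.array([np.argsort(x[i:i + (m - 1) * tau + 1:tau]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    h = -np.sum(p * np.log(p))
    return h / np.log(math.factorial(m)) if normalize else h
```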
S0169260715301188
Background and objective We live our lives by the calendar and the clock, but time is also an abstraction, even an illusion. The sense of time can be both domain-specific and complex, and is often left implicit, requiring significant domain knowledge to accurately recognize and harness. In the clinical domain, the momentum gained from recent advances in infrastructure and governance practices has enabled the collection of tremendous amounts of data at each moment in time. Electronic health records (EHRs) have paved the way to making these data available for practitioners and researchers. However, temporal data representation, normalization, extraction and reasoning are very important in order to mine such massive data and therefore for constructing the clinical timeline. The objective of this work is to provide an overview of the problem of constructing a timeline at the clinical point of care and to summarize the state-of-the-art in processing temporal information in clinical narratives. Methods This review surveys the methods used in three important areas: modeling and representation of time, medical NLP methods for extracting time, and methods for time reasoning and processing. The review emphasizes the current gap between existing methods and semantic web technologies and explores possible combinations of the two. Results The main findings of this review reveal the importance of time processing not only in constructing timelines and clinical decision support systems but also as a vital component of EHR data models and operations. Conclusions Extracting temporal information from clinical narratives is a challenging task. The inclusion of ontologies and the semantic web will lead to better assessment of the annotation task and, together with medical NLP techniques, will help resolve granularity and co-reference resolution problems.
Temporal data representation, normalization, extraction, and reasoning: A review from clinical domain
S0169260715301218
This work provides a performance comparison of four different machine learning classifiers: the multinomial logistic regression with ridge estimators (MLR) classifier, k-nearest neighbours (KNN), support vector machine (SVM) and naïve Bayes (NB), as applied to terahertz (THz) transient time domain sequences associated with pixelated images of different powder samples. Although the six substances considered have similar optical properties, their complex insertion losses in the THz part of the spectrum are significantly different because of differences in their frequency-dependent THz extinction coefficients as well as differences in their refractive indices and scattering properties. As scattering can be unquantifiable in many spectroscopic experiments, classification based solely on differences in complex insertion loss can be inconclusive. The problem is addressed using two-dimensional (2-D) cross-correlations between background and sample interferograms; these ensure good noise suppression of the datasets and provide a range of statistical features that are subsequently used as inputs to the above classifiers. A cross-validation procedure is adopted to assess the performance of the classifiers. First, the measurements related to samples with thicknesses of 2 mm were classified, then samples with thicknesses of 4 mm, and after that 3 mm, and the success rate and consistency of each classifier were recorded. In addition, mixtures having thicknesses of 2 and 4 mm as well as mixtures of 2, 3 and 4 mm were presented simultaneously to all classifiers. This approach provided further cross-validation of the classification consistency of each algorithm. The results confirm the superiority in classification accuracy and robustness of the MLR (lowest accuracy 88.24%) and KNN (lowest accuracy 90.19%) algorithms, which consistently outperformed the SVM (lowest accuracy 74.51%) and NB (lowest accuracy 56.86%) classifiers for the same number of feature vectors across all studies. The work establishes a general methodology for assessing the performance of other hyperspectral dataset classifiers on the basis of 2-D cross-correlations in far-infrared spectroscopy or other parts of the electromagnetic spectrum. It also advances the wider proliferation of automated THz imaging systems across new application areas, e.g., biomedical imaging, industrial processing and quality control, where interpretation of hyperspectral images is still under development.
Classification of THz pulse signals using two-dimensional cross-correlation feature extraction and non-linear classifiers
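A hedged sketch of the core feature-extraction step, computing the 2-D cross-correlation between a background and a sample measurement and summarizing it with a few statistics, is shown below; the specific statistical features used in the study are assumptions.

```python
# Sketch: statistical features from the 2-D cross-correlation of a background
# and a sample measurement matrix (the paper's exact feature set is not given
# in the abstract, so the summary statistics here are illustrative).
import numpy as np
from scipy.signal import correlate2d
from scipy.stats import skew, kurtosis

def xcorr_features(background, sample):
    xc = correlate2d(sample, background, mode="full")
    flat = xc.ravel()
    return np.array([flat.max(), flat.mean(), flat.std(),
                     skew(flat), kurtosis(flat)])
```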
S0169260715301231
Background and Objective Viruses are infectious agents that replicate inside organisms and reveal a plethora of distinct characteristics. Viral infections spread in many ways, but often have devastating consequences and represent a huge danger for public health. It is important to design statistical and computational techniques capable of handling the available data and highlighting the most important features. Methods This paper reviews the quantitative and qualitative behaviour of 22 infectious diseases caused by viruses. The information is compared and visualized by means of the multidimensional scaling technique. Results The results are robust to uncertainties in the data and revealed to be consistent with clinical practice. Conclusions The paper shows that the proposed methodology may represent a solid mathematical tool to tackle a larger number of viruses and additional information about these infectious agents.
Multidimensional scaling analysis of virus diseases
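The visualization step can be illustrated with a minimal multidimensional scaling example on a precomputed dissimilarity matrix; the dissimilarity measure actually used for the 22 diseases is not given in the abstract, so the data below are synthetic placeholders.

```python
# Minimal MDS example: embed items (here, 22 "diseases") in 2-D from a
# precomputed dissimilarity matrix, as in the paper's visualization.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
pts = rng.random((22, 5))                                   # placeholder descriptors
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)  # symmetric dissimilarities

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(D)   # (22, 2) coordinates ready for plotting
```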
S0169260715301280
Background and objectives The lack of benchmark data in computational ophthalmology contributes to the challenging task of applying disease assessment and evaluating the performance of machine learning based methods on retinal spectral domain optical coherence tomography (SD-OCT) scans. Presented here is a general framework for constructing a benchmark dataset for retinal image processing tasks such as cyst, vessel, and subretinal fluid segmentation; as a result, a benchmark dataset for cyst segmentation has been developed. Method First, a dataset captured by different SD-OCT vendors with different numbers of scans and pathology qualities is selected. Then a robust and intelligent method is used to evaluate the performance of readers, partitioning the dataset into subsets. Subsets are then assigned to complementary readers for annotation with respect to a novel confidence based annotation protocol. Finally, reader annotations are combined based on their performance to generate the final annotations. Result The generated benchmark dataset for cyst segmentation comprises 26 SD-OCT scans with differing cyst qualities, collected from 4 different SD-OCT vendors to cover a wide variety of data. The dataset is partitioned into three subsets which are annotated by complementary readers based on a confidence based annotation protocol. Experimental results show that the annotations of complementary readers are combined efficiently with respect to their performance, generating accurate annotations. Conclusion Our results facilitate the process of generating benchmark datasets. Moreover, the generated benchmark dataset for cyst segmentation can be used reliably to train and test machine learning based methods.
A novel benchmark model for intelligent annotation of spectral-domain optical coherence tomography scans using the example of cyst annotation
S0169260715301309
Background and objective The adoption of computerized physician order entry is an important cornerstone of using health information technology (HIT) in health care. The transition from paper to computer forms presents a change in physicians’ practices. The main objective of this study was to investigate the impact of implementing a computer-based order entry (CPOE) system without clinical decision support on the number of radiographs ordered for patients admitted in the emergency department. Methods This single-center pre-/post-intervention study was conducted in January, 2013 (before CPOE period) and January, 2014 (after CPOE period) at the emergency department at Nîmes University Hospital. All patients admitted in the emergency department who had undergone medical imaging were included in the study. Results Emergency department admissions have increased since the implementation of CPOE (5388 in the period before CPOE implementation vs. 5808 patients after CPOE implementation, p =.008). In the period before CPOE implementation, 2345 patients (44%) had undergone medical imaging; in the period after CPOE implementation, 2306 patients (40%) had undergone medical imaging (p =.008). In the period before CPOE, 2916 medical imaging procedures were ordered; in the period after CPOE, 2876 medical imaging procedures were ordered (p =.006). In the period before CPOE, 1885 radiographs were ordered; in the period after CPOE, 1776 radiographs were ordered (p <.001). The time between emergency department admission and medical imaging did not vary between the two periods. Conclusions Our results show a decrease in the number of radiograph requests after a CPOE system without clinical decision support was implemented in our emergency department.
Impact of a computerized provider radiography order entry system without clinical decision support on emergency department medical imaging requests
S0169260715301358
Background and objectives Gene splicing is a vital source of protein diversity. The precise eradication of introns and joining of exons is a prominent task in eukaryotic gene expression, as exons are usually interrupted by introns. Identification of splicing sites through experimental techniques is a complicated and time-consuming task. With the avalanche of genome sequences generated in the post-genomic age, it remains a complicated and challenging task to develop an automatic, robust and reliable computational method for fast and effective identification of splicing sites. Methods In this study, a hybrid model, “iSS-Hyb-mRMR”, is proposed for quick and accurate identification of splicing sites. Two sample representation methods, namely pseudo trinucleotide composition (PseTNC) and pseudo tetranucleotide composition (PseTetraNC), were used to extract numerical descriptors from DNA sequences. The hybrid model was developed by concatenating PseTNC and PseTetraNC. In order to select highly discriminative features, the minimum redundancy maximum relevance algorithm was applied to the hybrid feature space. The performance of these feature representation methods was tested using various classification algorithms including K-nearest neighbor, probabilistic neural network, general regression neural network, and fitting network. The jackknife test was used to evaluate performance on two benchmark datasets, S1 and S2, respectively. Results The predictor proposed in the current study achieved an accuracy of 93.26%, sensitivity of 88.77%, and specificity of 97.78% for S1, and an accuracy of 94.12%, sensitivity of 87.14%, and specificity of 98.64% for S2, respectively. Conclusion It is observed that the performance of the proposed model is higher than that of the existing methods in the literature so far, and it will be fruitful in studying the mechanism of RNA splicing and in other areas of research.
“iSS-Hyb-mRMR”: Identification of splicing sites using hybrid space of pseudo trinucleotide and pseudo tetranucleotide composition
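To make the feature-extraction idea concrete, the plain trinucleotide composition of a DNA sequence can be computed as below; the pseudo components (sequence-order correlation terms) that distinguish PseTNC and PseTetraNC are not reproduced here.

```python
# Plain trinucleotide composition (TNC): frequency of each of the 64 overlapping
# 3-mers in a DNA sequence. The paper's PseTNC adds sequence-order correlation
# terms on top of this, which this sketch omits.
from itertools import product

def trinucleotide_composition(seq):
    seq = seq.upper()
    kmers = ["".join(p) for p in product("ACGT", repeat=3)]
    counts = {k: 0 for k in kmers}
    total = 0
    for i in range(len(seq) - 2):
        k = seq[i:i + 3]
        if k in counts:
            counts[k] += 1
            total += 1
    return [counts[k] / total if total else 0.0 for k in kmers]

features = trinucleotide_composition("ACGTACGTGGCTAGCTA")   # 64-dimensional descriptor
```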
S0169260715301383
The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter coefficients is also proposed, in which we focused on the implementation and the enhancement of the filter parameters by taking the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature.
Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture
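For orientation, a single-threaded CPU reference usage of non-local means (scikit-image) is sketched below; it does not reflect the paper's hybrid parallel implementation or its improved coefficient computation, and the parameter values are illustrative.

```python
# CPU reference usage of non-local means denoising; the paper's contribution
# (hybrid parallel implementation, improved coefficients) is not reproduced.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def nlm_denoise(volume):
    sigma = float(np.mean(estimate_sigma(volume)))   # rough noise estimate
    return denoise_nl_means(volume, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)
```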
S0169260715301401
Background and objectives Angle closure disease in the eye can be detected using time-domain Anterior Segment Optical Coherence Tomography (AS-OCT). The Anterior Chamber (AC) characteristics can be quantified from the AS-OCT image, which is dependent on the image quality at the image acquisition stage. To date, to the best of our knowledge, there are no objective or automated subjective measurements to assess the quality of AS-OCT images. Methods To address the AS-OCT image quality assessment issue, we define a method for objective assessment of AS-OCT images using complex wavelet based local binary pattern features. These features are pooled using the Naïve Bayes classifier to obtain the final quality parameter. To evaluate the proposed method, a subjective assessment was performed by clinical AS-OCT experts, who graded the quality of AS-OCT images on a scale of good, fair, and poor. This was done based on the ability to identify the AC structures, including the position of the scleral spur. Results We compared the results of the proposed objective assessment with the subjective assessments. From this comparison, it is validated that the proposed objective assessment is able to differentiate the good and fair quality AS-OCT images, suitable for glaucoma diagnosis, from the poor quality AS-OCT images. Conclusions The proposed algorithm is an automated approach to evaluating AS-OCT images, with the intention of collecting high quality data for further medical diagnosis. Our proposed quality index provides an automatic, objective and quantitative assessment of AS-OCT image quality, and this quality index agrees with the assessment of glaucoma specialists.
Complex wavelet based quality assessment for AS-OCT images with application to Angle Closure Glaucoma diagnosis
S0169260715301425
Background and objective Heart failure due to iron-overload cardiomyopathy is one of the main causes of mortality. The cardiomyopathy is reversible if intensive iron chelation treatment is given in time, but the diagnosis is often delayed because the cardiac iron deposition is unpredictable and the symptoms are detected late. There are many ways to assess iron overload; however, the widely used and approved method is MRI, which is performed by calculating the T2* (T2-star) value. In order to compute the T2* value, the region of interest (ROI) is manually selected by an expert, which may require considerable time and skill. The aim of this work is hence to develop cardiac T2* measurement by using a region growing algorithm to automatically segment the ROI in cardiac MR images. Mathematical morphology operations are also used to reduce some errors. Methods Thirty MR images acquired with free-breathing and respiratory-trigger techniques were used in this work. The segmentation algorithm yields good results when compared with the manual segmentation performed by two experts. Results The averages of the positive predictive value, the sensitivity, the Hausdorff distance, and the Dice similarity coefficient are 0.76, 0.84, 7.78 pixels, and 0.80 when compared with the two experts’ opinions. The T2* values were computed based on the automatically segmented ROIs. The mean difference in T2* values between the proposed technique and the experts’ opinion is about 1.40 ms. Conclusions The results demonstrate the accuracy of the proposed method in T2* value estimation. Some previous methods were implemented for comparison. The results show that the proposed method yields better segmentation and T2* value estimation performance.
Automatic cardiac T2* relaxation time estimation from magnetic resonance images using region growing method with automatically initialized seed points
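Once the ROI is segmented, T2* is conventionally obtained by fitting a mono-exponential decay to the multi-echo signal; a hedged sketch with synthetic echo times follows (the exact fitting details used in the paper are not stated in the abstract).

```python
# Mono-exponential T2* estimation from multi-echo ROI signal intensities,
# S(TE) = S0 * exp(-TE / T2*); echo times and signals below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def fit_t2_star(echo_times_ms, signals):
    model = lambda te, s0, t2s: s0 * np.exp(-te / t2s)
    (s0, t2s), _ = curve_fit(model, echo_times_ms, signals,
                             p0=[signals[0], 20.0], maxfev=5000)
    return t2s   # in ms

te = np.array([2.0, 4.5, 7.0, 9.5, 12.0, 14.5])
sig = 1000 * np.exp(-te / 25.0)            # synthetic decay with T2* = 25 ms
print(fit_t2_star(te, sig))                # approximately 25
```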
S0169260715301589
Background and objective The transfer function (TF) is an important parameter for the analysis and understanding of hemodynamics when arterial stenosis exists in the human arterial tree. To validate the feasibility of using the TF to diagnose arterial stenosis, the forward problem and inverse problem were simulated and discussed. Methods A method for calculating the TF between the ascending aorta and any other artery was proposed based on a 55-segment transmission line model (TLM) of the human arterial tree. The effects of arterial stenosis on the TF were studied in two aspects: stenosis degree and position. The degree of arterial stenosis was specified to be 10–90% in three representative arteries: the carotid, aorta and iliac artery, respectively. In order to validate the feasibility of diagnosing arterial stenosis using the TF and a support vector machine (SVM), a database of TFs was established to simulate the real conditions of arterial stenosis based on the TLM model, and a diagnosis model of arterial stenosis was built using the SVM and the database. Results The simulation results showed that the modulus and phase of the TF decrease sharply from 2 to 10 Hz as the stenosis degree increases and display unique, nonlinear characteristics at frequencies higher than 10 Hz. The diagnosis results showed that the average accuracy was above 76% for stenosis degrees from 10% to 90%, and the diagnosis accuracies for moderate (50%) and serious (90%) stenosis were 87% and 99%, respectively. When the stenosis degree increased to 90%, the accuracy of stenosis localization reached up to 94% for most of the arteries. Conclusions The proposed method of combining the TF and SVM is a theoretically feasible method for the diagnosis of arterial stenosis.
A novel method of artery stenosis diagnosis using transfer function and support vector machine based on transmission line model: A numerical simulation and validation study
S0169260715301656
Background and objectives Mathematical models are suitable to simulate complex biological processes by a set of non-linear differential equations. These simulation models can be used as an e-learning tool in medical education. However, in many cases these mathematical systems have to be treated numerically which is computationally intensive. The aim of the study was to develop a system for numerical simulation to be used in an online e-learning environment. Methods In the software system the simulation is located on the server as a CGI application. The user (student) selects the boundary conditions for the simulation (e.g., properties of a simulated patient) on the browser. With these parameters the simulation on the server is started and the simulation result is re-transferred to the browser. Results With this system two examples of e-learning units were realized. The first one uses a multi-compartment model of the glucose-insulin control loop for the simulation of the plasma glucose level after a simulated meal or during diabetes (including treatment by subcutaneous insulin application). The second one simulates the ion transport leading to the resting and action potential in nerves. The student can vary parameters systematically to explore the biological behavior of the system. Conclusions The described system is able to simulate complex biological processes and offers the possibility to use these models in an online e-learning environment. As far as the underlying principles can be described mathematically, this type of system can be applied to a broad spectrum of biomedical or natural scientific topics.
Using numeric simulation in an online e-learning environment to teach functional physiological contexts
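As an illustration of the kind of server-side computation involved, a simple two-state glucose-insulin system can be integrated numerically as below; this is not the paper's multi-compartment model, and the equations, insulin profile and parameter values are placeholders.

```python
# Server-side numerical simulation sketch: a simple "minimal-model"-style
# glucose-insulin system integrated with SciPy (illustrative only; the paper
# uses a richer multi-compartment model).
import numpy as np
from scipy.integrate import solve_ivp

def simulate_glucose(Gb=90.0, Ib=10.0, p1=0.03, p2=0.02, p3=1e-5, t_end=300.0):
    def insulin(t):                        # plasma insulin input (placeholder profile)
        return Ib + 40.0 * np.exp(-t / 60.0)
    def rhs(t, y):
        G, X = y                           # glucose (mg/dl), remote insulin action (1/min)
        dG = -(p1 + X) * G + p1 * Gb
        dX = -p2 * X + p3 * (insulin(t) - Ib)
        return [dG, dX]
    sol = solve_ivp(rhs, (0.0, t_end), [180.0, 0.0], dense_output=True)
    return sol.t, sol.y[0]                 # time points and plasma glucose curve
```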
S0169260715301693
Background and Objectives Light sharing PET detector configuration coupled with thick light guide and Geiger-mode avalanche photodiode (GAPD) with large-area microcells was proposed to overcome the energy non-linearity problem and to obtain high light collection efficiency (LCE). Methods A Monte-Carlo simulation was conducted for the three types of LSO block, 4 × 4 array of 3 × 3 × 20 mm3 discrete crystals, 6 × 6 array of 2 × 2 × 20 mm3 discrete crystals, and 12 × 12 array of 1 × 1 × 20 mm3 discrete crystals, to investigate the scintillation light distribution after conversion of the γ-rays in LSO. The incident photons were read out by three types of 4 × 4 array photosensors, which were PSPMT of 25% quantum efficiency (QE), GAPD1 with 50 × 50 µm2 microcells of 30% photon detection efficiency (PDE) and GAPD2 with 100 × 100 µm2 of 45% PDE. The number of counted photons in each photosensor was analytically calculated. The LCE, linearity and flood histogram were examined for each PET detector module having 99 different configurations as a function of light guide thickness ranging from 0 to 10 mm. Results The performance of PET detector modules based on GAPDs was considerably improved by using the thick light guide. The LCE was increased from 24 to 30% and from 14 to 41%, and the linearity was also improved from 0.97 to 0.99 and from 0.75 to 0.99, for GAPD1 and GAPD2, respectively. As expected, the performance of PSPMT based detector did not change. The flood histogram of 12 × 12 array PET detector modules using 3 mm light guide coupled with GAPDs was obtained by simulation, and all crystals of 1 × 1 × 20 mm3 size were clearly identified. PET detector module coupled with thick light guide and GAPD array with large-area microcells was proposed to obtain high QE and high spatial resolution, and its feasibility was verified. Conclusions This study demonstrated that the overall PET performance of the proposed design was considerably improved, and this approach will provide opportunities to develop GAPD based PET detector with a high LCE.
Simulation study of PET detector configuration with thick light guide and GAPD array having large-area microcells for high effective quantum efficiency
S0169260715301735
An acetabular cup with a larger abduction angle can seriously affect the normal function of the cup and may cause early failure of the total hip replacement (THR). The complexity of the finite element (FE) simulation in the wear analysis of the THR is usually concerned with the contact status, the computational effort, and the possible divergence of results, all of which become more difficult to handle for THRs with larger cup abduction angles. In this study, we propose an FE approach with contact transformation that requires less computational effort. Related procedures, such as the Lagrangian multiplier, partitioned matrix inversion, detection of contact forces, continuity of the contact surface, nodal area estimation, etc., are explained in this report. Through the transformed methodology, the computer round-off error is tremendously reduced and the embedded repetitive procedure can be processed precisely and quickly. Here, the wear behaviors of THRs with various abduction angles are investigated. The most commonly used combination, i.e., metal-on-polyethylene, is adopted in the current study, where a cobalt-chromium femoral head is paired with an Ultra High Molecular Weight Polyethylene (UHMWPE) cup. In all illustrations, wear coefficients are estimated by a self-averaging strategy with available experimental data reported elsewhere. The results reveal that THRs with larger abduction angles may produce a greater depth of wear, but the volume of wear shows an opposite tendency; these results are comparable with clinical and experimental reports. The current approach can be readily applied to fields such as the study of wear behaviors under anteversion and impingement, and the time-dependent behaviors of prostheses.
The study of wear behaviors on abducted hip joint prostheses by an alternate finite element approach
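For orientation, FE wear simulations of UHMWPE cups commonly update nodal wear with an Archard-type linear law; whether the paper uses exactly this formulation is not stated in the abstract, so the sketch below, including the wear coefficient and units, is an assumption.

```python
# Archard-type linear wear update often used in FE wear simulations of UHMWPE
# cups (not necessarily the paper's exact formulation; k_w and the inputs are
# placeholders, and units depend on the chosen wear coefficient).
import numpy as np

def wear_depth_increment(contact_pressure, sliding_distance, k_w=1.0e-9):
    """Nodal linear wear increment h = k_w * p * s per load cycle block."""
    return k_w * np.asarray(contact_pressure) * np.asarray(sliding_distance)

p = np.array([2.0e6, 5.0e6, 8.0e6])     # contact pressure per node
s = np.array([20.0, 18.0, 15.0])        # sliding distance per node per cycle block
print(wear_depth_increment(p, s))       # per-node depth increments
```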
S0169260715301802
Background and objectives Mammography analysis is an effective technology for early detection of breast cancer. Micro-calcification clusters (MCs) are a vital indicator of breast cancer, so the detection of MCs plays an important role in computer aided detection (CAD) systems. This paper proposes a new hybrid method to improve the MC detection rate in mammograms. Methods The proposed method comprises three main steps: first, the label and pectoral muscle are removed using largest connected region marking and a region growing method, and MCs are enhanced using a combination of the double top-hat transform and a grayscale-adjustment function; second, noise and other interference information are removed, and the significant information is retained, by modifying the contourlet coefficients using a nonlinear function; third, the non-linking simplified pulse-coupled neural network is used to detect the MCs. Results In our work, we chose 118 mammograms, including 38 mammograms with micro-calcification clusters and 80 mammograms without micro-calcifications, to evaluate our algorithm separately on two open and commonly used databases, MIAS and JSMIT; we achieved a specificity of 94.7%, sensitivity of 96.3%, AUC of 97.0%, accuracy of 95.8%, MCC of 90.4%, MCC-PS of 61.3% and CEI of 53.5%. These promising results clearly demonstrate that the proposed approach outperforms the current state-of-the-art algorithms. In addition, the method was verified on 20 mammograms from the People's Hospital of Gansu Province; the detection results reveal that our method can accurately detect the calcifications in clinical applications. Conclusions The proposed method is simple and fast, and it achieves a high detection rate; it could be considered for use in CAD systems to assist physicians in breast cancer diagnosis in the future.
A new method of detecting micro-calcification clusters in mammograms using contourlet transform and non-linking simplified PCNN
S0169260715301814
Background and objective Methods used in image processing should reflect any multilevel structures inherent in the image dataset or they run the risk of functioning inadequately. We wish to test the feasibility of multilevel principal components analysis (PCA) to build active shape models (ASMs) for cases relevant to medical and dental imaging. Methods Multilevel PCA was used to carry out model fitting to sets of landmark points and it was compared to the results of “standard” (single-level) PCA. Proof of principle was tested by applying mPCA to model basic peri-oral expressions (happy, neutral, sad) approximated to the junction between the mouth/lips. Monte Carlo simulations were used to create this data which allowed exploration of practical implementation issues such as the number of landmark points, number of images, and number of groups (i.e., “expressions” for this example). To further test the robustness of the method, mPCA was subsequently applied to a dental imaging dataset utilising landmark points (placed by different clinicians) along the boundary of mandibular cortical bone in panoramic radiographs of the face. Results Changes of expression that varied between groups were modelled correctly at one level of the model and changes in lip width that varied within groups at another for the Monte Carlo dataset. Extreme cases in the test dataset were modelled adequately by mPCA but not by standard PCA. Similarly, variations in the shape of the cortical bone were modelled by one level of mPCA and variations between the experts at another for the panoramic radiographs dataset. Results for mPCA were found to be comparable to those of standard PCA for point-to-point errors via miss-one-out testing for this dataset. These errors reduce with increasing number of eigenvectors/values retained, as expected. Conclusions We have shown that mPCA can be used in shape models for dental and medical image processing. mPCA was found to provide more control and flexibility when compared to standard “single-level” PCA. Specifically, mPCA is preferable to “standard” PCA when multiple levels occur naturally in the dataset.
Multilevel principal component analysis (mPCA) in shape analysis: A feasibility study in medical and dental imaging
S0169260715301826
Background and objective Integrative approaches for the study of biological systems have gained popularity in the realm of statistical genomics. For example, The Cancer Genome Atlas (TCGA) has applied integrative clustering methodologies to various cancer types to determine molecular subtypes within a given cancer histology. In order to adequately compare integrative or “systems-biology”-type methods, realistic and related datasets are needed to assess the methods. This involves simulating multiple types of ‘omic data with realistic correlation between features of the same type (e.g., gene expression for genes in a pathway) and across data types (e.g., “gene silencing” involving DNA methylation and gene expression). Methods We present the software application tool InterSIM for simulating multiple interrelated data types with realistic intra- and inter-relationships based on the DNA methylation, mRNA gene expression, and protein expression from the TCGA ovarian cancer study. Results The resulting simulated datasets can be used to assess and compare the operating characteristics of newly developed integrative bioinformatics methods to existing methods. Application of InterSIM is presented with an example of heatmaps of the simulated datasets. Conclusions InterSIM allows researchers to evaluate and test new integrative methods with realistically simulated interrelated genomic datasets. The software tool InterSIM is implemented in R and is freely available from CRAN.
InterSIM: Simulation tool for multiple integrative ‘omic datasets’
S0169260715301905
To extend the use of wearable sensor networks for stroke patients' training and assessment in non-clinical settings, this paper proposes a novel remote quantitative Fugl-Meyer assessment (FMA) framework, in which two accelerometers and seven flex sensors were used to monitor the movement function of the upper limb, wrist and fingers. An extreme learning machine based ensemble regression model was established to map the sensor data to clinical FMA scores, while the RRelief algorithm was applied to find the optimal feature subset. Considering that the FMA scale is time-consuming and complicated, seven training exercises were designed to replace the 33 upper-limb-related items in the FMA scale. Twenty-four stroke inpatients participated in the experiments in clinical settings and 5 of them were involved in the experiments in home settings after they left the hospital. The experimental results in both clinical and home settings showed that the proposed quantitative FMA model can precisely predict the FMA scores based on wearable sensor data, with a coefficient of determination as high as 0.917. This indicates that the proposed framework can provide a potential approach to remote quantitative rehabilitation training and evaluation.
A remote quantitative Fugl-Meyer assessment framework for stroke patients based on wearable sensor networks
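A bare-bones extreme learning machine regressor, the base learner named above, can be sketched as follows; the ensemble construction, RRelief feature selection, and hyperparameters used in the study are not reproduced.

```python
# Minimal extreme learning machine (ELM) regressor: random hidden layer,
# output weights solved in closed form by least squares (single model only;
# the paper uses an ensemble of such regressors).
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y         # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```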
S0169260715301929
Background and objectives The automated analysis of indirect immunofluorescence (IIF) images for Anti-Nuclear Autoantibody (ANA) testing is a fairly recent field that is receiving ever-growing interest from the research community. ANA testing leverages the categorization of the intensity level and fluorescent pattern of IIF images of HEp-2 cells to perform a differential diagnosis of important autoimmune diseases. Nevertheless, it suffers from a tremendous lack of repeatability due to subjectivity in the visual interpretation of the images. Automation of the analysis is seen as the only valid solution to this problem. Several works in the literature address individual steps of the work-flow; nonetheless, integrating such steps and assessing their effectiveness as a whole is still an open challenge. Methods We present a modular tool, ANAlyte, able to characterize an IIF image in terms of fluorescent intensity level and fluorescent pattern without any user interaction. For this purpose, ANAlyte integrates the following: (i) an Intensity Classifier module, which categorizes the intensity level of the input slide based on multi-scale contrast assessment; (ii) a Cell Segmenter module, which splits the input slide into individual HEp-2 cells; (iii) a Pattern Classifier module, which determines the fluorescent pattern of the slide based on the patterns of the individual cells. Results To demonstrate the accuracy and robustness of our tool, we experimentally validated ANAlyte on two different public benchmarks of IIF HEp-2 images with a rigorous leave-one-out cross-validation strategy. We obtained overall accuracies for fluorescent intensity and pattern classification of around 85% and above 90%, respectively. We assessed all results by comparison with some of the most representative state-of-the-art works. Conclusions Unlike most of the other works in the recent literature, ANAlyte aims at the automation of all the major steps of ANA image analysis. Results on public benchmarks demonstrate that the tool can characterize HEp-2 slides in terms of intensity and fluorescent pattern with accuracy better than or comparable to state-of-the-art techniques, even when such techniques are run on manually segmented cells. Hence, ANAlyte can be proposed as a valid solution to the problem of ANA testing automation.
ANAlyte: A modular image analysis tool for ANA testing with indirect immunofluorescence
S0169260715302224
A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. This toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a, version 7.10 was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit has the capability of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculations of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g. coronal, sagittal and transverse). The different elements of this toolkit are presented and the important steps for the execution of its calculations are illustrated. The toolkit is applied on brain, head & neck and prostate cancer patients, who received primary and boost phases in order to demonstrate its capability in calculating BED distributions, as well as measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated.
A graphical user interface (GUI) toolkit for the calculation of three-dimensional (3D) multi-phase biological effective dose (BED) distributions including statistical analyses
S0169260715302236
Background and Objective HIV/AIDS has become a priority concern in the healthcare industry, and potential new therapies are increasingly highlighted to lessen the negative impact of highly active anti-retroviral therapy (HAART). Motivated by such medical applications, this study focuses on advanced feature selection techniques and classification approaches that reflect a new architecture, and attempts to build a hybrid model for interested parties. Methods This study first uses an integrated linear–nonlinear feature selection technique to identify the determinants influencing HAART medication, and then utilizes organizations of different condition-attributes to generate a hybrid model based on a rough set classifier for studying evolving HIV/AIDS research, in order to improve classification performance. Results The proposed model makes use of a real data set from Taiwan's specialist medical center. The experimental results show that the proposed model yields a satisfactory result that is superior to the listed methods, and the core condition-attributes PVL, CD4, Code, Age, Year, PLT, and Sex were identified in the HIV/AIDS data set. In addition, the decision rule set created can serve as a knowledge-based healthcare service system embodying best evidence-based practices in the workflow of current clinical diagnosis. Conclusions This study highlights the importance of these key factors and provides the rationale that the proposed model is an effective alternative for analyzing sustained HAART medication in follow-up studies of HIV/AIDS treatment in practice.
A comprehensive identification-evidence based alternative for HIV/AIDS treatment with HAART in the healthcare industries
S0169260715302339
In observational studies without random assignment of the treatment, the unadjusted comparison between treatment groups may be misleading due to confounding. One method to adjust for measured confounders is inverse probability of treatment weighting. This method can also be used in the analysis of time to event data with competing risks. Competing risks arise if for some individuals the event of interest is precluded by a different type of event occurring before, or if only the earliest of several times to event, corresponding to different event types, is observed or is of interest. In the presence of competing risks, time to event data are often characterized by cumulative incidence functions, one for each event type of interest. We describe the use of inverse probability of treatment weighting to create adjusted cumulative incidence functions. This method is equivalent to direct standardization when the weight model is saturated. No assumptions about the form of the cumulative incidence functions are required. The method allows studying associations between treatment and the different types of event under study, while focusing on the earliest event only. We present a SAS macro implementing this method and we provide a worked example.
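The abstract describes the estimator rather than code (the paper itself provides a SAS macro), so the following is only a minimal Python sketch of the general idea: logistic-regression propensity scores give inverse probability of treatment weights, which are then plugged into a weighted Aalen-Johansen-type cumulative incidence estimate. All function names and the simple weight model are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_weights(X, treatment):
    # Inverse probability of treatment weights from a simple logistic propensity model
    ps = LogisticRegression(max_iter=1000).fit(X, treatment).predict_proba(X)[:, 1]
    return np.where(treatment == 1, 1.0 / ps, 1.0 / (1.0 - ps))

def weighted_cif(time, event, weights, cause=1):
    # Weighted Aalen-Johansen-type cumulative incidence for one event type
    # (event: 0 = censored, 1, 2, ... = competing event types)
    order = np.argsort(time)
    time, event, weights = time[order], event[order], weights[order]
    times = np.unique(time[event > 0])
    surv, cif, curve = 1.0, 0.0, []
    for t in times:
        at_risk = weights[time >= t].sum()                  # weighted risk set at t
        d_all = weights[(time == t) & (event > 0)].sum()
        d_cause = weights[(time == t) & (event == cause)].sum()
        cif += surv * d_cause / at_risk                     # increment for the cause of interest
        surv *= 1.0 - d_all / at_risk                       # all-cause event-free "survival"
        curve.append(cif)
    return times, np.array(curve)

# Usage per treatment arm (weights estimated once on the full sample):
# w = iptw_weights(X, treat)
# t1, cif1 = weighted_cif(time[treat == 1], event[treat == 1], w[treat == 1], cause=1)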
Covariate adjustment of cumulative incidence functions for competing risks data using inverse probability of treatment weighting
S0169260715302583
Background and objective Retinal blood vessel segmentation is a prominent task for the diagnosis of various retinal pathologies such as hypertension, diabetes and glaucoma. In this paper, a novel matched filter approach with the Gumbel probability distribution function as its kernel is introduced to improve the performance of retinal blood vessel segmentation. Methods Before applying the proposed matched filter, the input retinal images are pre-processed. During the pre-processing stage, principal component analysis (PCA) based grayscale conversion followed by contrast limited adaptive histogram equalization (CLAHE) is applied for better enhancement of the retinal image. After that, exhaustive experiments were conducted to select appropriate parameter values for designing the new matched filter. The post-processing steps after applying the proposed matched filter include entropy-based optimal thresholding and length filtering to obtain the segmented image. Results For evaluating the performance of the proposed approach, the quantitative performance measures average accuracy, average true positive rate (ATPR) and average false positive rate (AFPR) are calculated. The respective values of these measures are 0.9522, 0.7594 and 0.0292 for the DRIVE data set, and 0.9270, 0.7939 and 0.0624 for the STARE data set. To justify the effectiveness of the proposed approach, the receiver operating characteristic (ROC) curve is plotted and the average area under the curve (AUC) is calculated. The average AUC for the DRIVE and STARE data sets is 0.9287 and 0.9140, respectively. Conclusions The obtained experimental results confirm that the proposed approach performs better than other prominent matched filter approaches based on the Gaussian distribution function and the Cauchy PDF.
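The abstract does not give the tuned filter parameters, so the sketch below only illustrates the general construction of a matched filter whose cross-sectional profile follows the Gumbel probability density function, applied over a bank of orientations. The kernel size, scale, number of angles and the variable name in the usage comment are placeholders, not the values selected in the paper.

import numpy as np
from scipy.ndimage import convolve, rotate

def gumbel_kernel(scale=1.5, length=9, half_width=7):
    # Zero-mean 2D kernel whose cross-section follows a Gumbel PDF (placeholder parameters)
    x = np.arange(-half_width, half_width + 1, dtype=float)
    z = x / scale
    profile = (1.0 / scale) * np.exp(-(z + np.exp(-z)))   # Gumbel probability density function
    profile -= profile.mean()                             # zero mean, as usual for matched filters
    return np.tile(profile, (length, 1))                  # extend along the assumed vessel direction

def matched_filter_response(image, n_angles=12, **kwargs):
    # Maximum response over a bank of rotated Gumbel kernels
    base = gumbel_kernel(**kwargs)
    responses = [convolve(image, rotate(base, angle, reshape=True, order=1))
                 for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False)]
    return np.max(responses, axis=0)

# e.g. response = matched_filter_response(preprocessed_image); threshold the response afterwards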
Retinal blood vessels segmentation by using Gumbel probability distribution function based matched filter
S0169260715302753
Background and objective Percutaneous coronary interventional procedures need advance planning prior to stenting or an endarterectomy. Cardiologists use intravascular ultrasound (IVUS) for screening, risk assessment and stratification of coronary artery disease (CAD). We hypothesize that plaque components are vulnerable to rupture due to plaque progression. Currently, there are no standard grayscale IVUS tools for risk assessment of plaque rupture. This paper presents a novel strategy for risk stratification based on plaque morphology embedded with principal component analysis (PCA) for plaque feature dimensionality reduction and dominant feature selection. The risk assessment utilizes 56 grayscale coronary features in a machine learning framework while linking information from carotid and coronary plaque burdens due to their common genetic makeup. Method The system consists of a machine learning paradigm which uses a support vector machine (SVM) combined with PCA for optimal and dominant coronary artery morphological feature extraction. The proven carotid intima-media thickness (cIMT) biomarker is adopted as the gold standard during the training phase of the machine learning system. For the performance evaluation, a K-fold cross-validation protocol is adopted with 20 trials per fold. For choosing the dominant features out of the 56 grayscale features, a PCA-based polling strategy is adopted in which the original values of the features are unaltered. Different protocols are designed to establish the stability and reliability criteria of the coronary risk assessment system (cRAS). Results Using the PCA-based machine learning paradigm and cross-validation protocol, a classification accuracy of 98.43% (AUC 0.98) with K = 10 folds using an SVM radial basis function (RBF) kernel was achieved. A reliability index of 97.32% and a machine learning stability criterion of 5% were met for the cRAS. Conclusions This is the first computer-aided diagnosis (CADx) system of its kind that is able to demonstrate the ability of coronary risk assessment and stratification while demonstrating a successful design of the machine learning system based on our assumptions.
PCA-based polling strategy in machine learning framework for coronary artery disease risk assessment in intravascular ultrasound: A link between carotid and coronary grayscale plaque morphology
S0169260715302819
Background In the last few years the use of social media in medicine has grown exponentially, providing a new area of research based on the analysis and use of Web 2.0 capabilities. In addition, the use of social media in medical education is a subject of particular interest which has been addressed in several studies. One example of this application is the medical quizzes of The New England Journal of Medicine (NEJM), which regularly publishes sets of questions on its Facebook timeline. Objective We present an approach for the automatic extraction of medical quizzes and their associated answers on a Facebook platform by means of a set of computer-based methods and algorithms. Methods We have developed a tool for the extraction and analysis of medical quizzes stored on the timeline of the NEJM Facebook page, based on a set of computer-based methods and algorithms implemented in Java. The system is divided into two main modules: Crawler and Data retrieval. Results The system was launched on December 31, 2014 and crawled through a total of 3004 valid posts and 200,081 valid comments. The first post was dated July 23, 2009 and the last one December 30, 2014. 285 quizzes were analyzed, with 32,780 different users providing answers to these quizzes. Of the 285 quizzes, patterns were found in 261 (91.58%). Among these 261 quizzes, users followed trends of incorrect answers in 13 quizzes and trends of correct answers in 248. Conclusions The tool is capable of automatically identifying the correct and wrong answers to quizzes posted on Facebook in text format, with a small rate of false negatives, and the approach could be applied to the extraction and analysis of other information sources on the Internet after some adaptation.
Automatic extraction and identification of users’ responses in Facebook medical quizzes
S0169260715302959
Background M2M (Machine-to-Machine) communications represent one of the main pillars of the new paradigm of the Internet of Things (IoT) and are opening up new opportunities for the eHealth business. Nevertheless, the large number of M2M protocols currently available hinders the selection of a suitable solution that satisfies the requirements that eHealth applications can demand. Objectives First, to develop a tool that provides a benchmarking analysis in order to objectively select among the most relevant M2M protocols for eHealth solutions; second, to validate the tool with a particular use case: respiratory rehabilitation. Methods A software tool, called the Distributed Computing Framework (DFC), has been designed and developed to execute the benchmarking tests and facilitate deployment in environments with a large number of machines, independently of the protocol and performance metrics selected. Results The DDS, MQTT, CoAP, JMS, AMQP and XMPP protocols were evaluated considering different specific performance metrics, including CPU usage, memory usage, bandwidth consumption, latency and jitter. The results obtained allowed us to validate a use case: respiratory rehabilitation of chronic obstructive pulmonary disease (COPD) patients in two scenarios with different types of requirements, home-based and ambulatory. Conclusions The results of the benchmark comparison can guide eHealth developers in the choice of M2M technologies. In this regard, the framework presented is a simple and powerful tool for the deployment of benchmark tests under specific environments and conditions.
A Machine-to-Machine protocol benchmark for eHealth applications – Use case: Respiratory rehabilitation
S0169260715302996
Background and Objective Iterative reconstruction from Compton scattered data is known to be computationally more challenging than that from conventional line-projection based emission data in that the gamma rays that undergo Compton scattering are modeled as conic projections rather than line projections. In conventional tomographic reconstruction, to parallelize the projection and backprojection operations using the graphics processing unit (GPU), approximated methods that use an unmatched pair of ray-tracing forward projector and voxel-driven backprojector have been widely used. In this work, we propose a new GPU-accelerated method for Compton camera reconstruction which is more accurate by using exactly matched pair of projector and backprojector. Methods To calculate conic forward projection, we first sample the cone surface into conic rays and accumulate the intersecting chord lengths of the conic rays passing through voxels using a fast ray-tracing method (RTM). For conic backprojection, to obtain the true adjoint of the conic forward projection, while retaining the computational efficiency of the GPU, we use a voxel-driven RTM which is essentially the same as the standard RTM used for the conic forward projector. Results Our simulation results show that, while the new method is about 3 times slower than the approximated method, it is still about 16 times faster than the CPU-based method without any loss of accuracy. Conclusions The net conclusion is that our proposed method is guaranteed to retain the reconstruction accuracy regardless of the number of iterations by providing a perfectly matched projector–backprojector pair, which makes iterative reconstruction methods for Compton imaging faster and more accurate.
GPU-accelerated iterative reconstruction from Compton scattered data using a matched pair of conic projector and backprojector
S0169260715303035
The peristaltic flow of a copper oxide–water fluid in a permeable tube is studied, including the effects of heat generation and a magnetic field. The mathematical formulation is presented and the resulting equations are solved exactly. The expressions obtained for the pressure gradient, pressure rise, temperature and velocity profile are illustrated through graphs for various pertinent parameters. It is found that the pressure gradient is reduced as the particle concentration is enhanced, while the velocity profile rises; it is also observed that the temperature increases with the volume fraction of copper oxide. Streamlines are drawn for selected physical quantities to discuss the trapping phenomenon.
Copper oxide nanoparticles analysis with water as base fluid for peristaltic flow in permeable tube with heat transfer
S0169260715303072
Background Structural changes of the brain's third ventricle have been acknowledged as an indicative measure of brain atrophy progression in neurodegenerative and endocrinal diseases. To investigate the ventricular enlargement in relation to the atrophy of the surrounding structures, shape analysis is a promising approach. However, there are hurdles in modeling the third ventricle shape. First, it has topological variations across individuals due to the inter-thalamic adhesion. In addition, as an interhemispheric structure, it needs to be aligned to the midsagittal plane to assess its asymmetric and regional deformation. Method To address these issues, we propose a model-based shape assessment. Our template model of the third ventricle consists of a midplane and a symmetric mesh of generic shape. By mapping the template's midplane to the individual brain's midsagittal plane, we align the symmetric mesh on the midline of the brain before quantifying the third ventricle shape. To build the vertex-wise correspondence between the individual third ventricle and the template mesh, we employ a minimal-distortion surface deformation framework. In addition, to account for topological variations, we implement geometric constraints guiding the template mesh to have zero width where the inter-thalamic adhesion passes through, preventing vertices from crossing between the left and right walls of the third ventricle. The individual shapes are compared using the vertex-wise deformity from the symmetric template. Results Experiments on imaging and demographic data from a study of aging showed that our model was sensitive in assessing morphological differences between individuals in relation to brain volume (a proxy for general brain atrophy), gender and fluid intelligence at age 72. They also revealed that the proposed method can detect regional and asymmetrical deformation, unlike the conventional measures of third-ventricle volume (median 1.95 ml, IQR 0.96 ml) and width. Similarity measures between binary masks and the shape model showed that the latter reconstructed shape details with high accuracy (Dice coefficient ≥ 0.9, mean distance 0.5 mm and Hausdorff distance 2.7 mm). Conclusions We have demonstrated that our approach is suitable for morphometrical analyses of the third ventricle, providing high accuracy and inter-subject consistency in the shape quantification. This shape modeling method with geometric constraints based on anatomical landmarks could be extended to other brain structures which require a consistent measurement basis in morphometry.
3D shape analysis of the brain's third ventricle using a midplane encoded symmetric template model
S0169260715303205
Background Dynamic measurements of human muscle fascicle length from sequences of B-mode ultrasound images have become increasingly prevalent in biomedical research. Manual digitisation of these images is time consuming and algorithms for automating the process have been developed. Here we present a freely available software implementation of a previously validated algorithm for semi-automated tracking of muscle fascicle length in dynamic ultrasound image recordings, “UltraTrack”. Methods UltraTrack implements an affine extension to an optic flow algorithm to track movement of the muscle fascicle end-points throughout dynamically recorded sequences of images. The underlying algorithm has been previously described and its reliability tested, but here we present the software implementation with features for: tracking multiple fascicles in multiple muscles simultaneously; correcting temporal drift in measurements; manually adjusting tracking results; saving and re-loading of tracking results and loading a range of file formats. Results Two example runs of the software are presented detailing the tracking of fascicles from several lower limb muscles during a squatting and walking activity. Conclusion We have presented a software implementation of a validated fascicle-tracking algorithm and made the source code and standalone versions freely available for download.
UltraTrack: Software for semi-automated tracking of muscle fascicles in sequences of B-mode ultrasound images
S0169260715303321
Background and objective Ankle motion and proprioception in multiple-axis movements are crucial for daily activities. However, few studies have developed and used a multiple-axis system for measuring ankle motion and proprioception. This study was designed to validate a novel ankle haptic interface system that measures ankle range of motion (ROM) and joint position sense in multiple-plane movements, and to investigate proprioception deficits during joint position sense tasks for patients with ankle instability. Methods Eleven healthy adults (mean ± standard deviation age, 24.7 ± 1.9 years) and thirteen patients with ankle instability were recruited in this study. All subjects performed tests to evaluate the validity of the ankle ROM measurements and underwent tests validating the joint position sense measurements conducted during multiple-axis movements of the ankle joint. Pearson correlation was used to validate the angular position measurements obtained with the developed system; the independent t test was used to investigate differences in joint position sense task performance between people with and without ankle instability. Results The ROM measurements of the device were linearly correlated with the criterion standards (r = 0.99). The ankle instability and healthy groups differed significantly in the direction, absolute, and variable errors of plantar flexion, dorsiflexion, inversion, and eversion (p < 0.05). Conclusions The results demonstrate that the novel ankle joint motion and position sense measurement system is valid, can be used for measuring ankle ROM and joint position sense in multiple planes, and can indicate proprioception deficits in people with ankle instability.
Validity of an ankle joint motion and position sense measurement system and its application in healthy subjects and patients with ankle sprain
S0169260715303369
Background and objectives Automatic electrocardiogram (ECG) heartbeat classification is essential for diagnosing heart failure. The aim of this paper is to evaluate the effect of machine learning methods in creating a model that classifies normal and congestive heart failure (CHF) cases from long-term ECG time series. Methods The study was performed in two phases: a feature extraction phase and a classification phase. In the feature extraction phase, the autoregressive (AR) Burg method is applied to extract features. In the classification phase, five different classifiers are examined, namely the C4.5 decision tree, k-nearest neighbor, support vector machine, artificial neural network and random forest classifiers. The ECG signals were acquired from the BIDMC Congestive Heart Failure and PTB Diagnostic ECG databases and classified by applying various experiments. Results The experimental results are evaluated with several statistical measures (sensitivity, specificity, accuracy, F-measure and ROC curve) and show that the random forest method gives 100% classification accuracy. Conclusions The impressive performance of the random forest method proves that it can play a significant role in detecting congestive heart failure (CHF) and can be valuable in extracting knowledge useful in medicine.
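As a hedged illustration of the two-phase pipeline (AR Burg features followed by classification), the Python sketch below uses statsmodels' Burg estimator and a scikit-learn random forest on placeholder signals; the AR order, segment length and labels are illustrative assumptions, not the study's settings.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from statsmodels.regression.linear_model import burg   # Burg AR coefficient estimator

def ar_burg_features(segments, order=4):
    # AR coefficients (Burg method) for each ECG segment; the order here is illustrative
    return np.array([burg(np.asarray(seg, dtype=float), order=order)[0] for seg in segments])

# Placeholder signals and labels standing in for the BIDMC-CHF / PTB recordings
rng = np.random.default_rng(0)
segments = [rng.standard_normal(500) for _ in range(40)]
labels = np.array([0] * 20 + [1] * 20)          # 0 = normal, 1 = CHF (placeholder)

X = ar_burg_features(segments)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5, scoring="accuracy").mean())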
Congestive heart failure detection using random forest classifier
S0169260715303473
Background and objective Cell migration, differentiation, proliferation and apoptosis are the main processes in tissue regeneration. Mesenchymal Stem Cells have the potential to differentiate into many cell phenotypes, such as tissue- or organ-specific cells, to perform special functions. Experimental observations illustrate that differentiation and proliferation of these cells can be regulated according to internal forces induced within their Extracellular Matrix. The process of how exactly they interpret and transduce these signals is not well understood. Methods A previously developed three-dimensional (3D) computational model is here extended and employed to study how force-free and force-induced substrates control cell differentiation and/or proliferation during the mechanosensing process. Consistent with experimental observations, it is assumed that cell internal deformation (a mechanical signal) in correlation with the cell maturation state directly triggers cell differentiation and/or proliferation. The Extracellular Matrix is modeled as a Neo-Hookean hyperelastic material, assuming that cells are cultured within 3D nonlinear hydrogels. Results In agreement with well-known experimental observations, the findings here indicate that within neurogenic (0.1–1 kPa), chondrogenic (20–25 kPa) and osteogenic (30–45 kPa) substrates, Mesenchymal Stem Cell differentiation and proliferation can be precipitated by applying an internal force to the substrate. Therefore, cells require a longer time to grow and maturate within force-free substrates than within force-induced substrates. In the instance of Mesenchymal Stem Cell differentiation into a compatible phenotype, the magnitude of the net traction force increases within chondrogenic and osteogenic substrates while it reduces within neurogenic substrates. This is consistent with experimental studies and numerical works recently published by the same authors. However, in all cases the magnitude of the net traction force considerably increases at the instant of cell proliferation because of cell–cell interaction. Conclusions The present model provides new perspectives to delineate the role of force-induced substrates in remotely controlling the cell fate during cell–matrix interaction, which opens the door to new tissue regeneration methodologies.
Numerical modeling of cell differentiation and proliferation in force-induced substrates via encapsulated magnetic nanoparticles
S0169260715303771
Objective Cancer is the primary disease responsible for death and disability worldwide. Currently, prevention and early detection represent the best hope for cure. Knowing in advance which diseases are expected to occur with a particular cancer could help physicians better tailor their cancer treatment. The aim of this study was to build an animated visualization tool, called Cancer Associations Map Animation (CAMA), to chart the association of cancers with other diseases over time. Methods The study population was collected from the Taiwan National Health Insurance Database during the period January 2000 to December 2002; 782 million outpatient visits were used to compute the associations of nine major cancers with other diseases. A motion chart was used to quantify and visualize the associations between diseases and cancers. Results The CAMA motion chart that was built successfully facilitated the observation of cancer-disease associations across ages and genders. The CAMA system can be accessed online at http://203.71.86.98/web/runq16.html. Conclusion The CAMA animation system is an animated medical data visualization tool which provides a dynamic, time-lapse, animated view of cancer-disease associations across different age groups and genders. Derived from a large, nationwide healthcare dataset, this exploratory data analysis tool can detect cancer comorbidities earlier than is possible by manual inspection. Taking into account the trajectory of cancer-specific comorbidity development may help clinicians and healthcare researchers more efficiently explore early-stage hypotheses, develop new cancer treatment approaches, and identify potential effect modifiers or new risk factors associated with specific cancers. The motion chart parameters map to the study variables as follows: Time: age of patients (i.e. 100 age groups); X-axis: scale of association strength (i.e. Q-values); Y-axis: scale of the count of related diseases; Size of circle: number of co-occurrences of diseases A and B; Color: category of disease (see Table S1 in Appendix).
Cancer-disease associations: A visualization and animation through medical big data
S0169260715304600
The mathematical modeling of physical and biological systems represents an interesting alternative for studying the behavior of these phenomena. In this context, the development of mathematical models to simulate the dynamic behavior of tumors has become an important research theme. Among the advantages resulting from using these models is their application to optimization and inverse problem approaches. Traditionally, the formulated Optimal Control Problem (OCP) has the objective of minimizing the size of the tumor cell population by the end of the treatment. In this case an important aspect is not considered, namely that the optimal drug concentrations may significantly affect the patient's health. In this sense, the present work has the objective of obtaining an optimal protocol for drug administration to patients with cancer through the minimization of both the cancerous cell concentration and the prescribed drug concentration. This multi-objective problem is solved with the Multi-objective Optimization Differential Evolution (MODE) algorithm. The Pareto curve obtained supplies a set of optimal protocols from which an optimal strategy for drug administration can be chosen according to a given criterion.
Determination of an optimal control strategy for drug administration in tumor treatment using multi-objective optimization differential evolution
S0169260716000031
Background and objective This article presents a multimodal analysis of startle type responses using a variety of physiological, facial, and speech features. These multimodal components of the startle type response reflect complex brain–body reactions to a sudden and intense stimulus. Additionally, the proposed multimodal evaluation of reflexive and emotional reactions associated with the startle eliciting stimuli and underlying neural networks and pathways could be applied in diagnostics of different psychiatric and neurological diseases. Different startle type stimuli can be compared in the strength of their elicitation of startle responses, i.e. their potential to activate stress-related neural pathways, underlying biomarkers and corresponding behavioral reactions. Methods An innovative method for measuring startle type responses using multimodal stimuli and multimodal feature analysis has been introduced. Individuals' multimodal reflexive and emotional expressions during startle type elicitation have been assessed by corresponding physiological, speech and facial features on ten female students of psychology. Different startle eliciting stimuli, like noise and airblast probes, as well as a variety of visual and auditory stimuli of different valence and arousal levels, based on International Affective Picture System (IAPS) images and/or sounds from the International Affective Digitized Sounds (IADS) database, have been designed and tested. Combined together into more complex startle type stimuli, such composite stimuli can potentiate the evoked response of underlying neural networks, and of corresponding neurotransmitters and neuromodulators as well; this is referred to as increased power of response elicitation. The intensity and magnitude of multimodal responses to selected startle type stimuli have been analyzed using effect sizes and medians of dominant multimodal features, i.e. skin conductance, eye blink, head movement, speech fundamental frequency and energy. The significance of the observed effects and comparisons between paradigms were evaluated using one-tailed t-tests and ANOVA methods, respectively. Skin conductance response habituation was analyzed using ANOVA and post hoc multiple comparison tests with the Dunn–Šidák correction. Results The results revealed specific physiological, facial and vocal reflexive and emotional responses to the five selected stimulus paradigms, which included: (1) acoustic startle probes, (2) airblasts, (3) IAPS images, (4) IADS sounds, and (5) image-sound-airblast composite stimuli. Overall, the composite and airblast paradigms resulted in the largest responses across all analyzed features, followed by the sound and acoustic startle paradigms, while the paradigm using images consistently elicited the smallest responses. In this context, the power of response elicitation of the selected stimulus paradigms can be described according to the aggregated magnitude of the participants' multimodal responses. We also observed a habituation effect only in the skin conductance response to the acoustic startle, airblast and sound paradigms. Conclusions This study developed a system for paradigm design and stimuli generation, as well as real-time multimodal signal processing and feature calculation. Experimental paradigms for monitoring individual responses to stressful startle type stimuli were designed in order to compare the response elicitation power across various stimuli. The developed system, applied paradigms and obtained results might be useful in further research for evaluating individuals' multimodal responses when they are faced with a variety of aversive emotional distractors and stressful situations.
Multimodal analysis of startle type responses
S0169260716000043
Motor unit action potential (MUAP), which consists of individual muscle fiber action potentials (MFAPs), represents the electrical activity of the motor unit. The values of MUAP features are changed by denervation and reinnervation in neurogenic involvement, as well as by muscle fiber loss with increased diameter variability in myopathic diseases. The present study is designed to investigate how increased muscle fiber diameter variability affects MUAP parameters in simulated motor units. In order to detect this variation, simulated MUAPs were calculated both at the innervation zone, where the MFAPs are more synchronized, and near the tendon, where they show increased temporal dispersion. Reinnervation in the neurogenic state increases MUAP amplitude for recordings both at the innervation zone and near the tendon. However, MUAP duration and the number of peaks significantly increased in the case of myopathy for recordings near the tendon. Furthermore, of the new features, “number of peaks × spike duration” was found to be the strongest indicator of MFAP dispersion in myopathy. MUAPs were also recorded from healthy participants in order to investigate the biological counterpart of the simulation data. MUAPs recorded near the tendon revealed significantly prolonged duration and decreased amplitude. Although the number of peaks increased when the needle was moved nearer to the tendon, this increase was not significant.
The effect of recording site on extracted features of motor unit action potential
S0169260716000055
Abnormal values of vital parameters such as hypotension or tachycardia may occur during anesthesia and may be detected by analyzing time-series data collected during the procedure by the Anesthesia Information Management System. When crossed with other data from the Hospital Information System, abnormal values of vital parameters have been linked with postoperative morbidity and mortality. However, methods for the automatic detection of these events are poorly documented in the literature and differ between studies, making it difficult to reproduce results. In this paper, we propose a methodology for the automatic detection of abnormal values of vital parameters. This methodology uses an algorithm allowing the configuration of threshold values for any vital parameters as well as the management of missing data. Four examples illustrate the application of the algorithm, after which it is applied to three vital signs (heart rate, SpO2, and mean arterial pressure) to all 2014 anesthetic records at our institution.
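The paper's algorithm is configurable by threshold and handles missing data, but its exact rules are not reproduced in the abstract; the following Python sketch therefore only illustrates one plausible formulation, in which an episode is flagged when a vital sign stays beyond a configurable threshold for a minimum duration and missing-data gaps longer than a tolerance break the episode. The threshold, duration and gap values, and the series name in the usage comment, are placeholders.

import pandas as pd

def detect_abnormal_episodes(series, threshold, below=True,
                             min_duration="60s", max_gap="30s"):
    # series: a pandas Series of one vital sign indexed by timestamp (NaNs / gaps allowed)
    # below : True flags values below the threshold (e.g. hypotension), False flags values above
    s = series.dropna()
    flags = s < threshold if below else s > threshold
    episodes, start, prev = [], None, None
    for t, flag in flags.items():
        gap_too_long = prev is not None and (t - prev) > pd.Timedelta(max_gap)
        if start is not None and (not flag or gap_too_long):
            if prev - start >= pd.Timedelta(min_duration):    # keep only long-enough episodes
                episodes.append((start, prev))
            start = None
        if flag and start is None:
            start = t                                         # open a new episode
        prev = t
    if start is not None and prev - start >= pd.Timedelta(min_duration):
        episodes.append((start, prev))
    return episodes

# e.g. detect_abnormal_episodes(mean_arterial_pressure, threshold=65)   # hypotension below 65 mmHg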
Methodology to automatically detect abnormal values of vital parameters in anesthesia time-series: Proposal for an adaptable algorithm
S0169260716000067
This paper proposes a novel active learning (AL) framework and combines it with semi-supervised learning (SSL) for segmenting Crohn's disease (CD) tissues from abdominal magnetic resonance (MR) images. Robust fully supervised learning (FSL) based classifiers require large amounts of labeled data covering different disease severities. Obtaining such data is time consuming and requires considerable expertise. SSL methods use a few labeled samples and leverage the information from many unlabeled samples to train an accurate classifier. AL queries the labels of the most informative samples and maximizes the gain from the labeling effort. Our primary contribution is in designing a query strategy that combines novel context information with classification uncertainty and feature similarity. Combining SSL and AL gives a robust segmentation method that: (1) optimally uses few labeled samples and many unlabeled samples; and (2) requires lower training time. Experimental results show our method achieves higher segmentation accuracy than FSL methods with fewer samples and reduced training effort.
Active learning based segmentation of Crohn's disease from abdominal MRI
S0169260716000079
Background and objective Neuroimaging studies have demonstrated dysfunction in the brain reward circuit in individuals with online gaming addiction (OGA). We hypothesized that virtual reality therapy (VRT) for OGA would improve the functional connectivity (FC) of the cortico-striatal-limbic circuit by stimulating the limbic system. Methods Twenty-four adults with OGA were randomly assigned to a cognitive behavior therapy (CBT) group or VRT group. Before and after the four-week treatment period, the severity of OGA was evaluated with Young's Internet Addiction Scale (YIAS). Using functional magnetic resonance imaging, the amplitude of low-frequency fluctuation (ALFF) and FC from the posterior cingulate cortex (PCC) seed to other brain areas were evaluated. Twelve casual game users were also recruited and underwent only baseline assessment. Results After treatment, both CBT and VRT groups showed reductions in YIAS scores. At baseline, the OGA group showed a smaller ALFF within the right middle frontal gyrus and reduced FC in the cortico-striatal-limbic circuit. In the VRT group, connectivity from the PCC seed to the left middle frontal and bilateral temporal lobe increased after VRT. Conclusion VRT seemed to reduce the severity of OGA, showing effects similar to CBT, and enhanced the balance of the cortico-striatal-limbic circuit.
The effects of a virtual reality treatment program for online gaming addiction
S0169260716000080
Registration of mammograms plays an important role in breast cancer computer-aided diagnosis systems. Radiologists usually compare mammogram images in order to detect abnormalities, and this comparison requires a registration between them. A temporal mammogram registration method is proposed in this paper. It is based on curvilinear coordinates, which are utilized to cope with both global and local deformations in the breast area. Temporal mammogram pairs are used to validate the proposed method. After registration, the similarity between the mammograms is maximized and the distance between manually defined landmarks is decreased. In addition, a thorough comparison with state-of-the-art mammogram registration methods is performed to show its effectiveness.
Temporal mammogram image registration using optimized curvilinear coordinates
S0169260716000092
Vibroarthrographic (VAG) signals emitted from the knee joint provide an early diagnostic tool for knee joint disorders. The nonstationary and nonlinear nature of the VAG signal makes feature extraction an important aspect of its analysis. In this work, we investigate VAG signals by proposing a wavelet-based decomposition. The VAG signals are decomposed into sub-band signals of different frequencies. Nonlinear features such as recurrence quantification analysis (RQA), approximate entropy (ApEn) and sample entropy (SampEn) are extracted as features of the VAG signal. A total of twenty-four features form a vector to characterize a VAG signal. Two feature selection (FS) techniques, the apriori algorithm and the genetic algorithm (GA), select six and four features, respectively, as the most significant features. Least squares support vector machines (LS-SVM) and random forest are proposed as classifiers to evaluate the performance of the FS techniques. Results indicate that the classification accuracy was higher with features selected by the FS algorithms. LS-SVM using the apriori algorithm gives the highest accuracy of 94.31% with a false discovery rate (FDR) of 0.0892. The proposed work also provided better classification accuracy than previous studies, which reported an accuracy of 88%. This work can enhance the performance of existing technology for accurately distinguishing normal and abnormal VAG signals, and the proposed methodology could provide an effective non-invasive diagnostic tool for knee joint disorders.
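Of the nonlinear features listed, sample entropy is the easiest to illustrate; the sketch below is a deliberately simple O(N^2) SampEn implementation with the common m = 2 and r = 0.2·std defaults, not the authors' implementation or parameter choices, and the variable name in the usage comment is hypothetical.

import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    # Straightforward O(N^2) sample entropy SampEn(m, r) with r = r_factor * std(x)
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    N = len(x)

    def matching_pairs(k):
        # Template vectors of length k (same number of templates for k = m and k = m + 1)
        emb = np.array([x[i:i + k] for i in range(N - m)])
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)   # Chebyshev distance
        return ((dist <= r).sum() - len(emb)) / 2                          # exclude self-matches

    B, A = matching_pairs(m), matching_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# e.g. sample_entropy(vag_subband) for each wavelet sub-band of a VAG recording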
Feature selection and classification methodology for the detection of knee-joint disorders
S0169260716300876
Background and objective Optimal experimental design approaches are seldom used in preclinical drug discovery. The objective is to develop an optimal design software tool specifically designed for preclinical applications in order to increase the efficiency of in vivo drug discovery studies. Methods Several realistic experimental design case studies were collected and many preclinical experimental teams were consulted to determine the design goals of the software tool. The tool obtains an optimized experimental design by solving a constrained optimization problem, where each experimental design is evaluated using some function of the Fisher Information Matrix. The software was implemented in C++ using the Qt framework to ensure a responsive user-software interaction through a rich graphical user interface while achieving the desired computational speed. In addition, a discrete global optimization algorithm was developed and implemented. Results The software design goals were simplicity, speed and intuition. Based on these design goals, we have developed the publicly available software PopED lite (http://www.bluetree.me/PopED_lite). Optimization computation was, on average over 14 test problems, 30 times faster in PopED lite than in an existing optimal design software tool. PopED lite is now used in real drug discovery projects and a few of these case studies are presented in this paper. Conclusions PopED lite is designed to be simple, fast and intuitive. Simple, to give many users access to basic optimal design calculations. Fast, to fit a short design-execution cycle and allow interactive experimental design (test one design, discuss the proposed design, test another design, etc.). Intuitive, so that the input to and output from the software tool can easily be understood by users without knowledge of the theory of optimal design. In this way, PopED lite is highly useful in practice and complements existing tools.
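PopED lite itself is a C++/Qt application; as a language-agnostic illustration of evaluating a design "using some function of the Fisher Information Matrix", the Python sketch below scores candidate sampling-time sets for a one-compartment PK model by D-optimality (the log-determinant of an FIM built from a finite-difference Jacobian). The model, parameter values and exhaustive search are illustrative assumptions, not PopED lite's algorithm.

import numpy as np
from itertools import combinations

def one_comp_model(t, params):
    # Illustrative one-compartment oral PK model with unit dose
    ka, ke, V = params
    return (ka / (V * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

def fisher_information(times, params, sigma=0.1, eps=1e-6):
    # FIM for additive-error nonlinear regression: J^T J / sigma^2 with a finite-difference Jacobian
    times, p = np.asarray(times, dtype=float), np.asarray(params, dtype=float)
    J = np.empty((len(times), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (one_comp_model(times, p + dp) - one_comp_model(times, p - dp)) / (2 * eps)
    return J.T @ J / sigma ** 2

def best_d_optimal_design(candidate_times, n_samples, params):
    # Exhaustively score every n-sample design by log det(FIM), i.e. D-optimality
    score = lambda ts: np.linalg.slogdet(fisher_information(ts, params))[1]
    return max(combinations(candidate_times, n_samples), key=score)

print(best_d_optimal_design([0.25, 0.5, 1, 2, 4, 8, 12, 24], n_samples=3, params=(1.0, 0.2, 5.0)))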
PopED lite: An optimal design software for preclinical pharmacokinetic and pharmacodynamic studies
S0169260716301067
Aim Medical data mining (also called the knowledge discovery process in medicine) aims to extract patterns from large datasets. In the current study, we assess different medical data mining approaches to predict ischemic stroke. Materials and methods The dataset, collected from Turgut Ozal Medical Centre, Inonu University, Malatya, Turkey, comprised the medical records of 80 patients and 112 healthy individuals with 17 predictors and a target variable. As data mining approaches, support vector machine (SVM), stochastic gradient boosting (SGB) and penalized logistic regression (PLR) were employed. The 10-fold cross-validation resampling method was utilized, and the model performance evaluation metrics were accuracy, area under the ROC curve (AUC), sensitivity, specificity, positive predictive value and negative predictive value. The grid search method was used to optimize the tuning parameters of the models. Results The accuracy values with 95% CI were 0.9789 (0.9470–0.9942) for SVM, 0.9737 (0.9397–0.9914) for SGB and 0.8947 (0.8421–0.9345) for PLR. The AUC values with 95% CI were 0.9783 (0.9569–0.9997) for SVM, 0.9757 (0.9543–0.9970) for SGB and 0.8953 (0.8510–0.9396) for PLR. Conclusions The results of the current study demonstrate that SVM produced the best predictive performance compared to the other models according to the majority of evaluation metrics. The SVM and SGB models described in the current study could yield remarkable predictive performance in the classification of ischemic stroke.
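A minimal scikit-learn sketch of the reported workflow (a grid-searched SVM evaluated with 10-fold cross-validation) is given below; the feature matrix, grid values and scoring choices are placeholders, since the study's actual tuning grids and software are not stated in the abstract.

import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for the 192 subjects x 17 predictors described above
rng = np.random.default_rng(1)
X = rng.standard_normal((192, 17))
y = rng.integers(0, 2, 192)

pipe = make_pipeline(StandardScaler(), SVC())
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]}   # illustrative grid
search = GridSearchCV(pipe, param_grid, cv=10, scoring="roc_auc")                # inner tuning loop

# Outer 10-fold cross-validation gives the performance estimate for the tuned model
print(cross_val_score(search, X, y, cv=10, scoring="accuracy").mean())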
Different medical data mining approaches based prediction of ischemic stroke
S0169260716301420
The purpose of this study was to evaluate the use of fractional-order (FrOr) modeling in asthma. To this end, three FrOr models were compared with traditional parameters and an integer-order (InOr) model. We investigated which model best fit the data, the correlation with traditional lung function tests and the contribution to the diagnosis of airway obstruction. The data consisted of forced oscillation (FO) measurements obtained from healthy volunteers (n = 22) and asthmatic volunteers with mild (n = 22), moderate (n = 19) and severe (n = 19) obstruction. The first part of this study showed that a FrOr model best fit the data (relative distance: FrOr = 4.3 ± 2.4; InOr = 5.1 ± 2.6%). The correlation analysis resulted in reasonable (R = 0.36) to very good (R = 0.77) associations between FrOr parameters and spirometry. The closest associations were observed for parameters related to peripheral airway obstruction, showing a clear relationship between the FrOr models and lung mechanics. Receiver-operator analysis showed that FrOr parameters present a high potential to contribute to the detection of mild obstruction in a clinical setting. The accuracy [area under the receiver operating characteristic curve (AUC)] observed for these parameters (AUC = 0.954) was higher than that observed for traditional FO parameters (AUC = 0.732) and that obtained from the InOr model (AUC = 0.861). Patients with moderate and severe obstruction were identified with high accuracy (AUC = 0.972 and 0.977, respectively). In conclusion, the results obtained are in close agreement with asthma pathology and provide evidence that FO measurement associated with FrOr models is a non-invasive, simple and radiation-free method for the detection of biomechanical abnormalities in asthma.
Forced oscillation, integer and fractional-order modeling in asthma
S0169260716301614
Background and objective Lung sound auscultation is one of the most commonly used methods to evaluate respiratory diseases. However, the effectiveness of this method depends on the physician's training: without proper training, the physician will be unable to distinguish between normal and abnormal sounds generated by the human body. Thus, the aim of this study was to implement a pattern recognition system to classify lung sounds. Methods We used a dataset composed of five types of lung sounds: normal, coarse crackle, fine crackle, monophonic and polyphonic wheezes. We used higher-order statistics (HOS) to extract features (second-, third- and fourth-order cumulants), Genetic Algorithms (GA) and Fisher's Discriminant Ratio (FDR) to reduce dimensionality, and k-Nearest Neighbors and Naive Bayes classifiers to recognize the lung sound events in a tree-based system. We used the cross-validation procedure to analyze the classifiers' performance and Tukey's Honestly Significant Difference criterion to compare the results. Results Our results showed that the Genetic Algorithm outperformed Fisher's Discriminant Ratio for feature selection. Moreover, each lung sound class had a different signature pattern according to its cumulants, showing that HOS is a promising feature extraction tool for lung sounds. In addition, the proposed divide-and-conquer approach can accurately classify the different types of lung sounds. The best tree-based classifier achieved a classification accuracy of 98.1% on training data and 94.6% on validation data. Conclusions The proposed approach achieved good results even using only one feature extraction tool (higher-order statistics). Additionally, the implementation of the proposed classifier in an embedded system is feasible.
Classification of lung sounds using higher-order statistics: A divide-and-conquer approach
S0169260716301833
Background and objective Feature extraction of the electroencephalogram (EEG) plays a vital role in brain–computer interfaces (BCIs). In recent years, common spatial pattern (CSP) has proven to be an effective feature extraction method. However, traditional CSP has the disadvantages of requiring many input channels and lacking frequency information. In order to remedy these defects, wavelet packet decomposition (WPD) and CSP are combined to extract effective features. The WPD-CSP method, however, gives little consideration to extracting features that are fitted to a specific subject. Therefore, a subject-based feature extraction method using fisher WPD-CSP is proposed in this paper. Methods The idea of the proposed method is to adapt fisher WPD-CSP to each subject separately. It mainly includes the following six steps: (1) original EEG signals from all channels are decomposed into a series of sub-bands using WPD; (2) the average power values of the obtained sub-bands are computed; (3) the sub-bands with larger values of the fisher distance according to average power are selected for that particular subject; (4) each selected sub-band is reconstructed and regarded as a new EEG channel; (5) all new EEG channels are used as input to the CSP, and a six-dimensional feature vector is obtained by the CSP, forming the subject-based feature extraction model; (6) the probabilistic neural network (PNN) is used as the classifier and the classification accuracy is obtained. Results Data from six subjects were processed by the subject-based fisher WPD-CSP, the non-subject-based fisher WPD-CSP and WPD-CSP, respectively. Compared with non-subject-based fisher WPD-CSP and WPD-CSP, the results show that the proposed method yields better performance (sensitivity: 88.7 ± 0.9%, specificity: 91 ± 1%), and the classification accuracy from subject-based fisher WPD-CSP is increased by 6–12% and 14%, respectively. Conclusions The proposed subject-based fisher WPD-CSP method not only remedies the disadvantages of CSP through WPD but also discards unhelpful sub-bands for each subject and keeps the remaining, fewer sub-bands better separable through the fisher distance, which leads to higher classification accuracy than the WPD-CSP method.
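As a sketch of the CSP core used in step (5), the Python code below computes common spatial pattern filters from two classes of (sub-band) trials and the usual log-variance features; choosing three filter pairs yields the six-dimensional feature vector mentioned above. The WPD, fisher-distance sub-band selection and PNN stages are omitted, and the implementation details are assumptions rather than the authors' code.

import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    # trials_*: arrays of shape (n_trials, n_channels, n_samples) for the two classes,
    # e.g. reconstructed WPD sub-bands stacked as surrogate channels
    mean_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    eigvals, eigvecs = eigh(Ca, Ca + Cb)               # generalized eigenproblem Ca w = lambda (Ca + Cb) w
    order = np.argsort(eigvals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]   # extreme eigenvalues discriminate best
    return eigvecs[:, picks].T                         # (2 * n_pairs, n_channels) spatial filters

def csp_features(trial, W):
    # Log-variance features of one trial projected onto the CSP filters
    z = W @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())                     # six features when n_pairs = 3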
Subject-based feature extraction by using fisher WPD-CSP in brain–computer interfaces
S0169260716303418
Background and objectives Because skin cancer affects millions of people worldwide, computational methods for the segmentation of pigmented skin lesions in images have been developed in order to assist dermatologists in their diagnosis. This paper aims to present a review of the current methods, and outline a comparative analysis with regards to several of the fundamental steps of image processing, such as image acquisition, pre-processing and segmentation. Methods Techniques that have been proposed to achieve these tasks were identified and reviewed. As to the image segmentation task, the techniques were classified according to their principle. Results The techniques employed in each step are explained, and their strengths and weaknesses are identified. In addition, several of the reviewed techniques are applied to macroscopic and dermoscopy images in order to exemplify their results. Conclusions The image segmentation of skin lesions has been addressed successfully in many studies; however, there is a demand for new methodologies in order to improve the efficiency.
Computational methods for the image segmentation of pigmented skin lesions: A review
S0198971513000240
Iterative proportional fitting (IPF) is a widely used method for spatial microsimulation. The technique results in non-integer weights for individual rows of data. This is problematic for certain applications and has led many researchers to favour combinatorial optimisation approaches such as simulated annealing. An alternative to this is ‘integerisation’ of IPF weights: the translation of the continuous weight variable into a discrete number of unique or ‘cloned’ individuals. We describe four existing methods of integerisation and present a new one. Our method – ‘truncate, replicate, sample’ (TRS) – recognises that IPF weights consist of both ‘replication weights’ and ‘conventional weights’, the effects of which need to be separated. The procedure consists of three steps: (1) separate replication and conventional weights by truncation; (2) replication of individuals with positive integer weights; and (3) probabilistic sampling. The results, which are reproducible using supplementary code and data published alongside this paper, show that TRS is fast, and more accurate than alternative approaches to integerisation.
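Because the three steps are stated explicitly, TRS is straightforward to sketch; the Python function below truncates the IPF weights, uses the integer parts as replication counts, and probabilistically samples the remaining individuals in proportion to the fractional parts. Sampling without replacement is one common reading of step (3); the supplementary code published with the paper remains the authoritative version.

import numpy as np

def trs_integerise(weights, rng=None):
    # 'Truncate, replicate, sample': turn fractional IPF weights into integer replication counts
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, dtype=float)
    counts = np.floor(weights).astype(int)            # 1. truncate; 2. each row is replicated counts[i] times
    remainder = weights - counts                      #    leftover 'conventional' (fractional) weights
    n_extra = int(round(weights.sum())) - counts.sum()
    if n_extra > 0:                                   # 3. probabilistic sampling without replacement,
        extra = rng.choice(len(weights), size=n_extra, replace=False,
                           p=remainder / remainder.sum())
        counts[extra] += 1
    return counts

w = np.array([1.8, 0.2, 2.5, 0.5])                       # toy IPF weights summing to 5
print(trs_integerise(w, rng=np.random.default_rng(0)))   # four integer counts that sum to 5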
‘Truncate, replicate, sample’: A method for creating integer weights for spatial microsimulation
S0198971513000835
Segregation models often focus on private racial preference but overlook the institutional context. This paper represents an effort to move beyond the preference centricity. In this paper, an ideal Pigovian regulatory intervention is emulated and added into Schelling’s (1971) classic spatial proximity model of racial segregation, with an aim to preserve collective welfare against the negative externalities induced by the changing local racial compositions after individual relocations. A key discovery from a large number of cellular automata is that the Pigovian regulation tends to result in less segregated but also less efficient (in terms of aggregate utility) residential patterns than laissez faire. This finding, albeit from a highly stylized model, bears intellectual relations to an important practical question: What are the potential racial effects of Pigovian local planning interventions, such as financially motivated anti-density zoning or the collection of a development impact fee? On top of its modest policy implications, this paper demonstrates a bottom-up computational modelling approach to reconcile the preference-based and institution-orientated academic perspectives regarding racial residential segregation.
Beyond preference: Modelling segregation under regulation
S0198971514001112
Models that simulate land-use patterns often use either inductive, data-driven approaches or deductive, theory-based methods to describe the relative strength of the social, economic and biophysical forces that drive the various sectors in the land system. An integrated framework is proposed here that incorporates both approaches based on a unified assessment for local land suitability following a monetary, utility-based logic. The framework is illustrated with a hedonic pricing analysis of urban land values and a net present value assessment for agricultural production system in combination with statistics-based assessments of land suitability for other sectors. The results show that limited difference exists between the most commonly applied inductive approaches that use either multinomial or binomial logistic regression specifications of suitability. Land-use simulations following the binomial regression based suitability values that were rescaled to bid prices (reflecting relative competitiveness) perform better for all individual land-use types. Performance improves even further when a land value based description of urban bid prices is added to this approach. Interestingly enough the better fitting description of suitability for urban areas also improves the ability of the model to simulate correct locations for business estates and greenhouses. The simulation alternatives that consider the net present values for agricultural types of land use show the relevance of this approach for understanding the spatial distribution of these types of land use. The combined use of urban land values and net present values for agricultural land use in defining land suitability performs best in our validation exercise. The proposed methodology can also be used to incorporate information from other research frameworks that describe the utility of land for different types of use.
A utility-based suitability framework for integrated local-scale land-use modelling
S0198971514001355
Over the last few years, much online volunteered geographic information (VGI) has emerged and has been increasingly analyzed to understand places and cities, as well as human mobility and activity. However, there are concerns about the quality and usability of such VGI. In this study, we demonstrate a complete process that comprises the collection, unification, classification and validation of a type of VGI—online point-of-interest (POI) data—and develop methods to utilize such POI data to estimate disaggregated land use (i.e., employment size by category) at a very high spatial resolution (census block level) using part of the Boston metropolitan area as an example. With recent advances in activity-based land use, transportation, and environment (LUTE) models, such disaggregated land use data become important to allow LUTE models to analyze and simulate a person’s choices of work location and activity destinations and to understand policy impacts on future cities. These data can also be used as alternatives to explore economic activities at the local level, especially as government-published census-based disaggregated employment data have become less available in the recent decade. Our new approach provides opportunities for cities to estimate land use at high resolution with low cost by utilizing VGI while ensuring its quality with a certain accuracy threshold. The automatic classification of POI can also be utilized for other types of analyses on cities.
Mining point-of-interest data from social networks for urban land use classification and disaggregation
S0198971514001367
With the exponential growth in the world population and the constant increase in human mobility, the possible impact of outbreaks of epidemics on cities is increasing, especially in high-density urban areas such as public transportation and transfer points. The volume and proximity of people in these areas can lead to an observed dramatic increase in the transmission of airborne viruses and related pathogens. Due to the critical role these areas play in transmission, it is vital that we have a comprehensive understanding of the ‘transmission highways’ in these areas to predict or prevent the spreading of infectious diseases in general. The principled approach of this paper is to combine and utilize as much information as possible from relevant sources and to integrate these data in a simulated environment that allows for scenario testing and decision support. In this paper, we describe a novel approach to study the spread of airborne diseases in cities by combining traffic information with geo-spatial data, infection dynamics and spreading characteristics. The system is currently being used in an attempt to understand the outbreak of influenza in densely populated cities in China.
Simulating city-level airborne infectious diseases
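A minimal sketch, under simplified assumptions, of how an SEIR-type infection dynamic could be coupled to a district-to-district travel matrix in the spirit of the abstract above. The network size, travel fractions and epidemiological rates are placeholders, not the parameters of the system described in the paper.

```python
# Sketch: SEIR dynamics on city districts coupled by a daily travel matrix.
# The travel matrix, population sizes and epidemiological rates are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                      # districts / transfer hubs (assumed)
pop = np.full(n, 100_000.0)
travel = rng.random((n, n)) * 0.02         # fraction of district i visiting j per day
np.fill_diagonal(travel, 0.0)

beta, sigma, gamma = 0.4, 1 / 3, 1 / 5     # transmission, incubation, recovery rates
S, E, I, R = pop.copy(), np.zeros(n), np.zeros(n), np.zeros(n)
I[0] = 10; S[0] -= 10                      # seed an outbreak in one hub

for day in range(120):
    # Infectious pressure felt in each district, including incoming visitors.
    pressure = (I + travel.T @ I) / (pop + travel.T @ pop)
    new_E = beta * S * pressure
    new_I = sigma * E
    new_R = gamma * I
    S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

print("final attack rate per district:", np.round(R / pop, 3))
```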
S0198971515300077
Forecasting the variability of dwellings and residential land is important for estimating the future potential of environmental technologies. This paper presents an innovative method of converting average residential density into a set of one-hectare 3D tiles to represent the dwelling stock. These generic tiles capture residential land as well as dwelling characteristics. The method was based on a detailed analysis of the English House Condition Survey data, with density calculated as the inverse of the plot area per dwelling. This analysis found that, when disaggregated by age band, urban morphology and area type, the frequency distribution of plot density per dwelling type can be represented by the gamma distribution. The shape parameter revealed interesting characteristics of the dwelling stock and how it has changed over time: older dwellings consistently show greater variability in plot density than newer dwellings, and apartments and detached dwellings show greater variability than terraced and semi-detached dwellings. Once calibrated, the shape parameter of the gamma distribution was used to convert the average density per housing type into a frequency distribution of plot density, which was then approximated by systematically selecting a set of generic tiles. These tiles are particularly useful as a medium for multidisciplinary research on decentralized environmental technologies or climate adaptation, which requires this understanding of the variability of dwellings, occupancies and urban space. The method thereby links the socioeconomic modeling of city regions with the physical modeling of dwellings and associated infrastructure across spatial scales. The tiles method has been validated by comparing results against English regional housing survey data and dwelling footprint area data. The next step would be to explore the possibility of generating generic residential area types and to adapt the method to other countries with similar housing survey data. (A code sketch of the gamma calibration and conversion step follows this entry.)
Representing the dwelling stock as 3D generic tiles estimated from average residential density
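A minimal sketch of the calibration and conversion step described in the abstract above: fit a gamma distribution to surveyed plot densities for one dwelling type, then reuse the calibrated shape parameter to turn an average density into a full frequency distribution. The survey file, column names, dwelling-type label, mean density and bin edges are assumptions for illustration.

```python
# Sketch: gamma-distribution representation of plot density per dwelling type.
# Survey column names, the dwelling-type label and the density classes are assumptions.
import numpy as np
import pandas as pd
from scipy import stats

survey = pd.read_csv("housing_survey.csv")                 # hypothetical survey extract
density = survey.loc[survey["dwelling_type"] == "detached", "plot_density"]

# Calibrate: fit the gamma shape parameter (k) with the location fixed at zero.
k, _, scale = stats.gamma.fit(density, floc=0)

# Convert: given only an average density for an area, keep the calibrated shape
# and rescale so the distribution has that mean (mean = k * scale).
mean_density = 25.0                                        # dwellings per hectare, assumed
dist = stats.gamma(a=k, scale=mean_density / k)

bins = np.arange(0, 101, 10)                               # plot-density classes, assumed
share_per_bin = np.diff(dist.cdf(bins))                    # fraction of dwellings per class
print(np.round(share_per_bin, 3))
```

Each per-class share would then be matched to a generic one-hectare tile; the key design point is that only the shape parameter and the area's mean density are needed to reconstruct the distribution.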
S0198971515300090
Urbanisation, environmental risks and resource scarcity are but three of the many challenges that cities must address if they are to become more sustainable. However, the policies and spatial development strategies implemented to achieve individual sustainability objectives frequently interact and conflict, presenting decision-makers with a multi-objective spatial optimisation problem. This work presents a spatial optimisation framework that optimises the location of future residential development against several sustainability objectives. The framework is applied to a case study of Middlesbrough in the North East of the United Kingdom. In this context, the framework optimises five sustainability objectives for the case study site: (i) minimising risk from heat waves, (ii) minimising risk from flood events, (iii) minimising travel costs to reduce transport emissions, (iv) minimising the expansion of urban sprawl and (v) preventing development on green spaces. A series of optimised spatial configurations of future development strategies is presented. The results compare strategies that are optimal against individual, pairs of and multiple sustainability objectives, such that each optimal strategy out-performs all other development strategies in at least one sustainability objective. Moreover, the resulting spatial strategies significantly outperform the current local authority strategy for all objectives, with, for example, a relative improvement of up to 68% in the distance-to-CBD objective. These results suggest that spatial optimisation can provide a powerful decision-support tool to help planners identify spatial development strategies that satisfy multiple sustainability objectives. (A code sketch of Pareto filtering across such objectives follows this entry.)
Optimised spatial planning to meet long term urban sustainability objectives
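A minimal sketch of the multi-objective comparison behind the abstract above: candidate development strategies are scored on several objectives and filtered down to the non-dominated (Pareto-optimal) set. The objective scores here are random stand-ins; the paper's actual heat, flood, travel, sprawl and green-space measures are not reproduced.

```python
# Sketch: Pareto filtering of candidate spatial development strategies.
# Objective scores are random stand-ins for heat risk, flood risk, travel cost,
# sprawl and green-space loss (all treated as quantities to be minimised).
import numpy as np

rng = np.random.default_rng(1)
n_strategies, n_objectives = 200, 5
scores = rng.random((n_strategies, n_objectives))   # rows: strategies, cols: objectives

def pareto_front(costs):
    """Return a boolean mask of non-dominated rows (all objectives minimised)."""
    keep = np.ones(len(costs), dtype=bool)
    for i, c in enumerate(costs):
        if keep[i]:
            # A row is dominated by c if it is no better anywhere and worse somewhere.
            dominated = np.all(costs >= c, axis=1) & np.any(costs > c, axis=1)
            keep &= ~dominated
    return keep

front = scores[pareto_front(scores)]
print(f"{len(front)} non-dominated strategies out of {n_strategies}")
```

Each surviving row corresponds to a strategy that out-performs every other candidate in at least one objective, which is the sense of optimality used in the abstract.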
S0198971515300107
We study the impact of settlement sizes on network connectivity in a spatial setting. First, we develop a model of geometric urban networks that posits a positive relationship between connectivity and size. Empirical evidence is then presented validating the model prediction that local links exhibit super-linear scaling, with an exponent greater than 1, while long-range connections scale linearly, with a unit exponent. The scaling exponents thus suggest that the impact of population size on connectivity is stronger within cities than between cities. We next combine the geometric framework with a computational model of interacting agents to generate a realistic settlement distribution and urban networks from the bottom up. Calibrated simulation results demonstrate the consistency between a hierarchical rank-size distribution and scale-free connectivity. Finally, coupling the spatial network with a tipping diffusion model allows us to consolidate the evolution of network connectivity, city sizes and social practices in a unified computational framework. (A code sketch of estimating such a scaling exponent follows this entry.)
Size, connectivity, and tipping in spatial networks: Theory and empirics
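A minimal sketch of the scaling-exponent estimate referred to in the abstract above: regress log link counts on log settlement size and read off the slope. The data are synthetic and the super-linear exponent of 1.2 is an assumption used only to show the mechanics, not an empirical result from the paper.

```python
# Sketch: estimating the scaling exponent of links vs. settlement size
# from a log-log least-squares fit. The synthetic exponent 1.2 is an assumption.
import numpy as np

rng = np.random.default_rng(2)
population = rng.lognormal(mean=10, sigma=1, size=500)            # settlement sizes
local_links = 0.01 * population ** 1.2 * rng.lognormal(0, 0.2, 500)

# Fit log(links) = beta * log(population) + c; beta > 1 indicates
# super-linear scaling, beta close to 1 indicates linear scaling.
beta, c = np.polyfit(np.log(population), np.log(local_links), 1)
print(f"estimated scaling exponent: {beta:.2f}")
```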
S0198971515300223
Increasingly realistic virtual three-dimensional (3D) models have been created that demonstrate a variety of landscape designs, supporting a more collaborative and participative approach to planning and design. However, these 3D landscape models are often developed for use in bespoke virtual reality labs that tie the models to expensive graphics hardware or complex arrays of screens, with the viewer spatially detached from the actual site. Given the increasing prevalence of advanced smartphone and tablet technology with GPS and compass functionality, this paper demonstrates two methods for on-demand dissemination of existing virtual 3D landscape models: (1) a touch-based interface with integrated mapping; (2) a standard web browser interface on mobile phones. The latter method demonstrates the potential to reduce the complexity of accessing an existing 3D landscape model on-site to simply pointing a smartphone in a particular direction, loading a web page and seeing the relevant view of the model as an image. A prototype system successfully demonstrated both methods, but it was also found that the accuracy of GPS positional data can have a negative effect on the browser-based method. Finally, potential developments are presented exploring the future of the underlying technology and possible extensions to the prototype as a technique for increasing public participation in planning and design. (A code sketch of serving a view image for a given position and heading follows this entry.)
Getting virtual 3D landscapes out of the lab
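A minimal sketch, under stated assumptions, of the browser-based method outlined in the abstract above: the phone's GPS position and compass heading arrive as URL query parameters and the server returns the closest pre-rendered view of the 3D model. The endpoint, parameter names and view index are assumptions for illustration, not the prototype's actual interface.

```python
# Sketch: serve a pre-rendered view of a 3D landscape model for the phone's
# position and heading. Endpoint, parameters and the view index are assumptions.
from flask import Flask, request, send_file
import json
import math

app = Flask(__name__)
# Hypothetical index of pre-rendered views:
# [{"lat": ..., "lon": ..., "heading": ..., "file": "view_001.jpg"}, ...]
views = json.load(open("views_index.json"))

@app.route("/view")
def view():
    lat = float(request.args["lat"])
    lon = float(request.args["lon"])
    heading = float(request.args["heading"])   # compass bearing in degrees

    def score(v):
        # Crude nearest-view score: planar distance plus normalised angular difference.
        d = math.hypot(v["lat"] - lat, v["lon"] - lon)
        a = min(abs(v["heading"] - heading), 360 - abs(v["heading"] - heading)) / 360
        return d + a

    best = min(views, key=score)
    return send_file(best["file"], mimetype="image/jpeg")

if __name__ == "__main__":
    app.run()
```

A request such as /view?lat=54.57&lon=-1.23&heading=240 would then return the image whose stored viewpoint best matches where the phone is pointing; GPS inaccuracy would degrade this matching, as the abstract notes.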