Columns: FileName (string, 17 characters); Abstract (string, 163 to 6.01k characters); Title (string, 12 to 421 characters)
S0169260715000267
A normal cardiac activation starts in the sinoatrial node and then spreads throughout the atrial myocardium, thus defining the P-wave of the electrocardiogram. However, as the onset of paroxysmal atrial fibrillation (PAF) approaches, a highly disturbed electrical activity occurs within the atria, producing fragmented and eventually longer P-waves. Although this altered atrial conduction has been successfully quantified just before PAF onset from the signal-averaged P-wave spectral analysis, its evolution during the hours preceding the arrhythmia has not been assessed yet. This work focuses on quantifying the variability of the P-wave spectral content over the 2 h preceding PAF onset, with the aim of anticipating the arrhythmic episode as early as possible. For that purpose, the time course of several metrics estimating absolute energy and ratios of high- to low-frequency power in different bands between 20 and 200 Hz has been computed from the P-wave autoregressive spectral estimation. All the analyzed metrics showed an increasing variability trend as PAF onset approached, with the P-wave high-frequency energy (between 80 and 150 Hz) providing a diagnostic accuracy of around 80% in discerning between healthy subjects, patients far from PAF and patients less than 1 h away from a PAF episode. This discriminant power was similar to that provided by the most classical time-domain approach, i.e., the P-wave duration. Furthermore, the linear combination of both metrics improved the diagnostic accuracy up to 88.07%, thus constituting a reliable noninvasive harbinger of PAF onset with a reasonable anticipation. The information provided by this methodology could be very useful in clinical practice both to optimize the antiarrhythmic treatment in patients at high risk of PAF onset and to limit drug administration in low-risk patients.
Role of the P-wave high frequency energy and duration as noninvasive cardiovascular predictors of paroxysmal atrial fibrillation
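The high- to low-frequency power ratios described above can be illustrated with a minimal sketch. The paper relies on autoregressive spectral estimation of the signal-averaged P-wave; here Welch's periodogram is used as a simple stand-in, and the band limits are only example values.

```python
import numpy as np
from scipy.signal import welch

def band_power_ratio(p_wave, fs, low_band=(20, 80), high_band=(80, 150)):
    """Ratio of high- to low-frequency power in a signal-averaged P-wave.
    Welch's periodogram is a stand-in for the AR spectral estimator."""
    freqs, psd = welch(p_wave, fs=fs, nperseg=min(len(p_wave), 256))
    def band_power(band):
        mask = (freqs >= band[0]) & (freqs < band[1])
        return psd[mask].sum()          # constant bin width cancels in the ratio
    return band_power(high_band) / band_power(low_band)

# Example: a noisy synthetic 120 ms P-wave sampled at 1 kHz
fs = 1000
t = np.arange(0, 0.12, 1 / fs)
p_wave = np.hanning(t.size) + 0.05 * np.random.randn(t.size)
print(band_power_ratio(p_wave, fs))
```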
S0169260715000279
Background and objective Insulin bolus calculators are simple decision support software tools incorporated in most commercially available insulin pumps and some capillary blood glucose meters. Although their clinical benefit has been demonstrated, their utilisation has not been widespread and their performance remains suboptimal, mainly because of their lack of flexibility and adaptability. One of the difficulties that people with diabetes, clinicians and carers face when using bolus calculators is having to set parameters and adjust them on a regular basis according to changes in insulin requirements. In this work, we propose a novel method that aims to automatically adjust the parameters of a bolus calculator. Periodic usage of a continuous glucose monitoring device is required for this purpose. Methods To test the proposed method, an in silico evaluation under real-life conditions was carried out using the FDA-accepted Type 1 diabetes mellitus (T1DM) UVa/Padova simulator. Since the T1DM simulator does not incorporate intra-subject variability and uncertainty, a set of modifications were introduced to emulate them. Ten adult and ten adolescent virtual subjects were assessed over a 3-month scenario with realistic meal variability. The glycaemic metrics: mean blood glucose; percentage time in target; percentage time in hypoglycaemia; risk index, low blood glucose index; and blood glucose standard deviation, were employed for evaluation purposes. A t-test statistical analysis was carried out to evaluate the benefit of the presented algorithm against a bolus calculator without automatic adjustment. Results The proposed method statistically improved (p <0.05) all glycemic metrics evaluating hypoglycaemia on both virtual cohorts: percentage time in hypoglycaemia (i.e. BG<70mg/dl) (adults: 2.7±4.0 vs. 0.4±0.7, p =0.03; adolescents: 7.1±7.4 vs. 1.3±2.4, p =0.02) and low blood glucose index (LBGI) (adults: 1.1±1.3 vs. 0.3±0.2, p =0.002; adolescents: 2.0±2.19 vs. 0.7±1.4, p =0.05). A statistically significant improvement was also observed on the blood glucose standard deviation (BG SDmg/dL) (adults: 33.5±13.7 vs. 29.2±8.3, p =0.01; adolescents: 63.7±22.7 vs. 44.9±23.9, p =0.01). Apart from a small increase in mean blood glucose on the adult cohort (129.9±11.9 vs. 133.9±11.6, p =0.03), the rest of the evaluated metrics, despite showing an improvement trend, did not experience a statistically significant change. Conclusions A novel method for automatically adjusting the parameters of a bolus calculator has the potential to improve glycemic control in T1DM diabetes management.
Method for automatic adjustment of an insulin bolus calculator: In silico robustness evaluation under intra-day variability
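For context, the parameters such a method adjusts are those of the standard bolus-calculator formula (insulin-to-carbohydrate ratio, insulin sensitivity factor, glucose target). A minimal sketch of that formula follows; the paper's adaptation law itself is not reproduced, and the example numbers are illustrative only.

```python
def suggested_bolus(carbs_g, bg_mgdl, target_mgdl, icr, isf, iob=0.0):
    """Standard bolus-calculator formula: meal bolus plus correction bolus,
    reduced by insulin on board (IOB).

    carbs_g : planned carbohydrate intake (g)
    icr     : insulin-to-carbohydrate ratio (g per unit)
    isf     : insulin sensitivity factor (mg/dl per unit)
    """
    meal_bolus = carbs_g / icr
    correction = (bg_mgdl - target_mgdl) / isf
    return max(0.0, meal_bolus + correction - iob)

# Example: 60 g meal, BG 180 mg/dl, target 110 mg/dl, ICR 10 g/U, ISF 40 mg/dl/U
print(round(suggested_bolus(60, 180, 110, icr=10, isf=40, iob=1.0), 2))
```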
S0169260715000280
Alternative splicing plays a key role in the regulation of the central dogma. Four major types of alternative splicing have been classified: intron retention, exon skipping, alternative 5′ splice sites (alternative donor sites), and alternative 3′ splice sites (alternative acceptor sites). A few algorithms have been developed to detect splice junctions from RNA-Seq reads. However, there are few tools targeting the major alternative splicing types at the exon/intron level. This type of analysis may reveal subtle, yet important events of alternative splicing, and thus help gain deeper understanding of the mechanism of alternative splicing. This paper describes SplicingTypesAnno, a user-friendly R package for extracting, annotating and analyzing alternative splicing types in sequence alignment files from RNA-Seq. SplicingTypesAnno can: (1) provide annotation for major alternative splicing at the exon/intron level and, by comparison with the annotation from a GTF/GFF file, identify novel alternative splicing sites; (2) offer a convenient two-level analysis: genome-scale annotation for users with a high-performance computing environment, and gene-scale annotation for users with personal computers; (3) generate a user-friendly web report and additional BED files for IGV visualization. SplicingTypesAnno is a user-friendly R package for extracting, annotating and analyzing alternative splicing types at the exon/intron level in sequence alignment files from RNA-Seq. It is publicly available at https://sourceforge.net/projects/splicingtypes/files/ or http://genome.sdau.edu.cn/research/software/SplicingTypesAnno.html.
SplicingTypesAnno: Annotating and quantifying alternative splicing events for RNA-Seq data
S0169260715000292
The prediction of substantially short survivability in patients is an extremely high-stakes task. In this study, we proposed a probabilistic model using a Bayesian network (BN) to predict the short survivability of patients with brain metastasis from lung cancer. A nationwide cancer patient database from 1996 to 2010 in Taiwan was used. The cohort consisted of 438 patients with brain metastasis from lung cancer. We utilized the synthetic minority over-sampling technique (SMOTE) to address the class imbalance embedded in the problem. The proposed BN was compared with three competitive models, namely, naive Bayes (NB), logistic regression (LR), and support vector machine (SVM). Statistical analysis showed that the performances of BN, LR, NB, and SVM were statistically the same in terms of all indices, with low sensitivity, when these models were applied to an imbalanced data set. Results also showed that SMOTE can improve the performance of the four models in terms of sensitivity, while keeping high accuracy and specificity. Further, the proposed BN is more effective than NB, LR, and SVM in three respects: its transparency and ability to show the relations among factors affecting brain metastasis from lung cancer; its ability to let decision makers compute probabilities despite incomplete evidence and information; and its sensitivity, which is the highest among all the compared machine learning methods.
Probabilistic modeling of short survivability in patients with brain metastasis from lung cancer
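A minimal sketch of how SMOTE is typically combined with a downstream classifier, as described above; logistic regression stands in for the proposed Bayesian network, and the data are synthetic, not the Taiwanese cohort.

```python
# pip install scikit-learn imbalanced-learn
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced stand-in data (438 cases, ~15% minority class)
X, y = make_classification(n_samples=438, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training fold, then fit any of the compared classifiers
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print("sensitivity:", recall_score(y_te, clf.predict(X_te)))
```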
S0169260715000309
Today, more and more biological laboratories use 3D cell cultures and tissues grown in vitro as a 3D model of in vivo tumours and metastases. In recent decades, it has been extensively established that multicellular spheroids represent an efficient model for validating the effects of drugs and treatments in human care applications. However, a lack of methods for quantitative analysis limits the usage of spheroids as models for routine experiments. Several methods have been proposed in the literature to perform high-throughput experiments employing spheroids by automatically computing different morphological parameters, such as diameter, volume and sphericity. Nevertheless, these systems are typically based on expensive automated technologies, which make the suggested solutions affordable only for a limited subset of laboratories, frequently those performing high-content screening analysis. In this work we propose AnaSP, an open source software suitable for automatically estimating several morphological parameters of spheroids by simply analyzing brightfield images acquired with a standard widefield microscope, even one not equipped with a motorized stage. The experiments performed proved the sensitivity and precision of the proposed segmentation method, and the excellent reliability of AnaSP in computing several morphological parameters of spheroids imaged in different conditions. AnaSP is distributed as an open source software tool. Its modular architecture and graphical user interface make it attractive also for researchers who do not work in areas of computer vision, and suitable for both high-content screenings and occasional spheroid-based experiments.
AnaSP: A software suite for automatic image analysis of multicellular spheroids
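A minimal sketch of the kind of morphological parameters mentioned above (diameter, volume, sphericity), computed from a binary spheroid mask with scikit-image; this is not AnaSP's own code, and the volume and sphericity estimates are simple approximations.

```python
import numpy as np
from skimage import measure

def spheroid_metrics(mask, pixel_size_um=1.0):
    """Morphological parameters from a binary mask containing one spheroid.
    Sphericity is approximated by 2D circularity (4*pi*area/perimeter^2);
    volume is estimated by rotating the equivalent circle into a sphere."""
    props = measure.regionprops(mask.astype(int))[0]
    area = props.area * pixel_size_um ** 2
    perimeter = props.perimeter * pixel_size_um
    diameter = props.equivalent_diameter * pixel_size_um
    circularity = 4 * np.pi * area / perimeter ** 2
    volume = (np.pi / 6.0) * diameter ** 3
    return {"diameter_um": diameter, "volume_um3": volume, "circularity": circularity}

# Example on a synthetic circular mask (60-pixel radius, 2 um pixels)
yy, xx = np.mgrid[:200, :200]
mask = (yy - 100) ** 2 + (xx - 100) ** 2 < 60 ** 2
print(spheroid_metrics(mask, pixel_size_um=2.0))
```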
S0169260715000449
Retinal image registration is a necessary step in the diagnosis and monitoring of diabetic retinopathy (DR), which is one of the leading causes of blindness. Long-term diabetes affects the retinal blood vessels and capillaries, eventually causing blindness. This progressive damage to the retina and subsequent blindness can be prevented by periodic retinal screening. The extent of damage caused by DR can be assessed by comparing retinal images captured during periodic retinal screenings. During image acquisition at the time of periodic screenings, translation, rotation and scale (TRS) differences are introduced in the retinal images. Therefore, retinal image registration is an essential step in an automated system for screening, diagnosis, treatment and evaluation of DR. This paper presents an algorithm for registration of retinal images using orthogonal moment invariants as features for determining the correspondence between the dominant points (vessel bifurcations) in the reference and test retinal images. As orthogonal moments are invariant to TRS, the moment invariant features around a vessel bifurcation are unaltered by TRS and can be used to determine the correspondence between reference and test retinal images. The vessel bifurcation points are located in segmented, thinned (mono-pixel vessel width) retinal images and labeled in the corresponding grayscale retinal images. The correspondence between vessel bifurcations in the reference and test retinal images is established based on the moment invariant features. Further, the TRS in the test retinal image with respect to the reference retinal image is estimated using a similarity transformation. The test retinal image is aligned with the reference retinal image using the estimated registration parameters. The accuracy of registration is evaluated in terms of the mean error and standard deviation of the labeled vessel bifurcation points in the aligned images. The experimentation is carried out on the DRIVE database, STARE database, VARIA database and a database provided by a local government hospital in Pune, India. The experimental results exhibit the effectiveness of the proposed algorithm for registration of retinal images.
Orthogonal moments for determining correspondence between vessel bifurcations for retinal image registration
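A minimal sketch of the final alignment step described above: once bifurcation correspondences are established, translation, rotation and scale can be estimated by a least-squares similarity transform. The point coordinates below are hypothetical, and the moment-invariant matching itself is not reproduced.

```python
import numpy as np
from skimage.transform import SimilarityTransform

# Hypothetical matched vessel-bifurcation coordinates (x, y) in the reference
# and test images, e.g. obtained from moment-invariant feature matching.
ref_pts  = np.array([[120.0, 85.0], [310.0, 140.0], [205.0, 290.0], [90.0, 230.0]])
test_pts = np.array([[131.5, 70.2], [322.8, 118.6], [224.1, 270.3], [106.9, 217.4]])

# Least-squares estimate of the TRS parameters mapping test coords to reference
tform = SimilarityTransform()
tform.estimate(test_pts, ref_pts)
print("scale:", tform.scale, "rotation (rad):", tform.rotation,
      "translation:", tform.translation)

# The test image could then be resampled into the reference frame with
# skimage.transform.warp(test_img, tform.inverse)
```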
S0169260715000450
Gene expression data analysis is based on the assumption that co-expressed genes imply co-regulated genes. This assumption is being reformulated because the co-expression of a group of genes may be the result of an independent activation with respect to the same experimental condition and not due to the same regulatory regime. For this reason, traditional techniques are recently being improved with the use of prior biological knowledge from open-access repositories together with gene expression data. Biclustering is an unsupervised machine learning technique that searches for patterns in gene expression data matrices. A scatter search-based biclustering algorithm that integrates biological information is proposed in this paper. In addition to the gene expression data matrix, the input of the algorithm is only a direct annotation file that relates each gene to a set of terms from a biological repository where genes are annotated. Two different biological measures, FracGO and SimNTO, are proposed to integrate this information by adding them to the to-be-optimized fitness function in the scatter search scheme. The FracGO measure is based on biological enrichment, and SimNTO is based on the overlap among the GO annotations of pairs of genes. Experimental results evaluate the proposed algorithm on two datasets and show that the algorithm performs better when biological knowledge is integrated. Moreover, an analysis and comparison of the two biological measures is presented, and it is concluded that the differences depend both on the data source and on how the annotation file has been built when GO is used. It is also shown that the proposed algorithm obtains a greater number of enriched biclusters than other classical biclustering algorithms typically used as benchmarks, and an analysis of the overlap among biclusters reveals that the obtained biclusters present low overlap. The proposed methodology is a general-purpose algorithm which allows the integration of biological information from several sources and can be extended to other biclustering algorithms based on the optimization of a merit function.
Integrating biological knowledge based on functional annotations for biclustering of gene expression data
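A minimal sketch of a fitness function of the kind described above, combining bicluster coherence (mean squared residue) with a biological term; the exact definitions of FracGO and SimNTO are not given in the abstract, so the enrichment fraction below is only an assumed, simplified stand-in.

```python
import numpy as np

def mean_squared_residue(X, rows, cols):
    """Mean squared residue (MSR) of a bicluster: lower means more coherent."""
    sub = X[np.ix_(rows, cols)]
    row_mean = sub.mean(axis=1, keepdims=True)
    col_mean = sub.mean(axis=0, keepdims=True)
    residue = sub - row_mean - col_mean + sub.mean()
    return float((residue ** 2).mean())

def enrichment_fraction(rows, gene2terms):
    """FracGO-like score (assumed form): fraction of bicluster genes annotated
    with the most frequent GO term among them."""
    terms = [t for g in rows for t in gene2terms.get(g, [])]
    if not terms:
        return 0.0
    best = max(set(terms), key=terms.count)
    return sum(best in gene2terms.get(g, []) for g in rows) / len(rows)

def fitness(X, rows, cols, gene2terms, w=0.5):
    # To be minimised by the scatter search: coherence penalised, enrichment rewarded
    return mean_squared_residue(X, rows, cols) - w * enrichment_fraction(rows, gene2terms)

# Toy example: 4x3 expression matrix and hypothetical annotations
X = np.array([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [1.5, 2.5, 3.5], [5.0, 1.0, 0.0]])
ann = {0: ["GO:a"], 1: ["GO:a"], 2: ["GO:a", "GO:b"], 3: ["GO:b"]}
print(fitness(X, rows=[0, 1, 2], cols=[0, 1, 2], gene2terms=ann))
```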
S0169260715000553
Recent studies show that the tendon-sheath system (TSS) has great potential in the development of surgical robots for endoscopic surgery. It is able to deliver adequate power in a light-weight and compact package, and the flexibility and compliance of the tendon-sheath system make it capable of adapting to the long and winding path in a flexible endoscope. However, the main difficulties in the precise control of such a system lie in the nonlinearities of the system behavior and the absence of the necessary sensory feedback at the surgical end-effectors. Since accurate position control of the tool is a prerequisite for efficacy, safety and an intuitive user experience in robotic surgery, in this paper we propose a system modeling approach for motion compensation. Based on a bidirectionally actuated system using two separate tendon-sheaths, motion transmission is first characterized. Two types of positional errors, due to system backlash and environment loading, are defined and modeled. Then a model-based feedforward compensation method is proposed for open-loop control, giving the system the ability to adjust to changes in the transmission route configuration without any information feedback from the distal end. A dedicated experimental platform emulating a bidirectional TSS robotic system for endoscopic surgery is built for testing. The proposed positional errors are identified and verified. The performance of the proposed motion compensation is evaluated by trajectory tracking under different environment loading conditions, and the results demonstrate that accurate position control can be achieved even if the transmission route configuration is updated.
Modeling and motion compensation of a bidirectional tendon-sheath actuated system for robotic endoscopic surgery
S0169260715000565
Objective Stroke is a prominent life-threatening disease in the world. The current study was performed to predict the outcome of stroke using knowledge discovery process (KDP) methods, namely artificial neural network (ANN) and support vector machine (SVM) models. Materials and methods The records of 297 (130 sick and 167 healthy) individuals were acquired from the databases of the department of emergency medicine. Nine predictors (coronary artery disease, diabetes mellitus, hypertension, history of cerebrovascular disease, atrial fibrillation, smoking, the findings of carotid Doppler ultrasonography [normal, plaque, plaque+stenosis≥50%], and the levels of cholesterol and C-reactive protein) were used for predicting stroke. Feature selection based on Cramer's V test was carried out to reduce the predictors. A multilayer perceptron (MLP) ANN and an SVM with a radial basis function (RBF) kernel were used for the prediction based on the selected predictors. Results The accuracy values were 81.82% for the ANN and 80.38% for the SVM in the training dataset (n =209), and 85.9% for the ANN and 84.62% for the SVM in the testing dataset (n =78). The ANN and SVM models yielded area under the curve (AUC) values of 0.905 and 0.899 in the training dataset, and 0.928 and 0.91 in the testing dataset, respectively. Conclusion The findings of the current study indicate that the ANN had better predictive performance than the SVM in predicting stroke. The proposed ANN model would be useful when making clinical decisions regarding stroke.
Application of knowledge discovery process on the prediction of stroke
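A minimal sketch of the Cramer's V statistic used above for feature selection, computed from a contingency table; the example arrays are made up.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(x, y):
    """Cramer's V between two categorical variables (0 = independent, 1 = perfect)."""
    table = np.asarray(
        [[np.sum((x == a) & (y == b)) for b in np.unique(y)] for a in np.unique(x)]
    )
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k))

# Example: association between hypertension (0/1) and stroke outcome (0/1)
htn    = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0])
stroke = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0])
print(round(cramers_v(htn, stroke), 3))
```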
S0169260715000577
Classifying imbalanced data in medical informatics is challenging. Motivated by this issue, this study develops a classifier approach denoted as BSMAIRS. This approach combines borderline synthetic minority oversampling technique (BSM) and artificial immune recognition system (AIRS) as global optimization searcher with the nearest neighbor algorithm used as a local classifier. Eight electronic medical datasets collected from University of California, Irvine (UCI) machine learning repository were used to evaluate the effectiveness and to justify the performance of the proposed BSMAIRS. Comparisons with several well-known classifiers were conducted based on accuracy, sensitivity, specificity, and G-mean. Statistical results concluded that BSMAIRS can be used as an efficient method to handle imbalanced class problems. To further confirm its performance, BSMAIRS was applied to real imbalanced medical data of lung cancer metastasis to the brain that were collected from National Health Insurance Research Database, Taiwan. This application can function as a supplementary tool for doctors in the early diagnosis of brain metastasis from lung cancer.
A hybrid classifier combining Borderline-SMOTE with AIRS algorithm for estimating brain metastasis from lung cancer: A case study in Taiwan
S0169260715000589
Background Gastric cancer is among the most common gastrointestinal cancers worldwide. Patients who have undergone surgery for gastric cancer may suffer from malnutrition and potential consequences such as gastrointestinal complications, surgical stress, and cancer cachexia. A tablet PC-based intervention via a mobile application might enhance the early recovery of postgastrectomy patients. Objective The aim of this study was to develop and test a tablet personal computer (PC)-assisted intervention to hasten the recovery of postgastrectomy cancer patients with respect to nutritional status. Methods This single-arm pilot study investigated a tablet PC application developed to serve the functions of nutritional monitoring, medical information management, drainage follow-up, and wound care. All services were delivered by medical professionals. Results Twenty consecutive gastrectomy patients at the National Taiwan University Hospital received perioperative care via this application (App group). During the study period, we retrospectively collected an additional 20 demographically matched gastrectomy cases as a control group. The App group had a lower body weight loss percentage relative to the control group during a 6-month follow-up period (4.8±1.2% vs. 8.7±2.4%; p <0.01). However, the patients in the App group had more outpatient clinic (OPC) visits than did those in the control group (9.8±0.9 vs. 5.6±0.8; p <0.01). Conclusions This study supported the feasibility of a tablet PC-based application for the perioperative care of gastric cancer subjects to promote a lower body weight loss and the collection of comprehensive surgical records.
Tablet PC-enabled application intervention for patients with gastric cancer undergoing gastrectomy
S0169260715000590
The assessment of the state of the acrosome is a priority in artificial insemination centres, since acrosome damage is one of the main causes of function loss. In this work, boar spermatozoa present in grayscale images acquired with a phase-contrast microscope have been classified as acrosome-intact or acrosome-damaged, after using fluorescent images to create the ground truth. Based on shape prior criteria combined with Otsu's thresholding, regional minima and the watershed transform, the spermatozoa heads were segmented and registered. One of the main novelties of this proposal is that, unlike what previous works stated, the obtained results show that the contour information of the spermatozoon head is important for improving description and classification. Another novelty of this work is that it confirms that combining different texture descriptors and contour descriptors yields the best classification rates reported for this problem to date. The classification was performed with a Support Vector Machine backed by a Least Squares training algorithm and a linear kernel. Using the biggest acrosome intact/damaged dataset ever created, the early fusion approach followed provides a 0.9913 F-score, outperforming all previous related works.
Acrosome integrity assessment of boar spermatozoa images using an early fusion of texture and contour descriptors
S0169260715000607
The paper presents a computer-based assessment for facioscapulohumeral dystrophy (FSHD) diagnosis through characterisation of the fat and oedema percentages in the muscle region. A novel multi-slice method for muscle-region segmentation in T1-weighted magnetic resonance images is proposed, using principles of the live-wire technique to find the path representing the muscle-region border. For this purpose, an exponential cost function is used that incorporates the edge information obtained after applying an edge-enhancement algorithm originally designed for fingerprint enhancement. The difference between the automatic segmentation and the manual segmentation performed by a medical specialist is characterised using the Zijdenbos similarity index, indicating a high accuracy of the proposed method. Finally, the fat and oedema are quantified from the muscle region in the T1-weighted and T2-STIR magnetic resonance images, respectively, using the fuzzy c-means clustering approach for 10 FSHD patients.
Computer-based assessment for facioscapulohumeral dystrophy diagnosis
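The Zijdenbos similarity index mentioned above is the Dice overlap between the automatic and manual segmentations; a minimal sketch:

```python
import numpy as np

def zijdenbos_similarity_index(auto_mask, manual_mask):
    """ZSI = 2|A ∩ M| / (|A| + |M|), i.e. the Dice coefficient between the
    automatic (A) and manual (M) muscle-region segmentations."""
    a = np.asarray(auto_mask, dtype=bool)
    m = np.asarray(manual_mask, dtype=bool)
    return 2.0 * np.logical_and(a, m).sum() / (a.sum() + m.sum())

# Toy example with two overlapping square regions
auto = np.zeros((50, 50), bool);   auto[10:40, 10:40] = True
manual = np.zeros((50, 50), bool); manual[12:42, 12:42] = True
print(round(zijdenbos_similarity_index(auto, manual), 3))
```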
S0169260715000619
The Gene Ontology (GO) is a structured repository of concepts (GO terms) that are associated with one or more gene products. The process of association is referred to as annotation. The relevance and the specificity of both GO terms and annotations are evaluated by a measure defined as information content (IC). The analysis of annotated data is thus an important challenge for bioinformatics. Different approaches of analysis exist. Among those, the use of association rules (AR) may provide useful knowledge, and it has been used in some applications, e.g. improving the quality of annotations. Nevertheless, classical association rule algorithms take into account neither the source of annotation nor the importance (IC) of the terms involved, leading to the generation of candidate rules with low IC. This paper presents GO-WAR (Gene Ontology-based Weighted Association Rules), a methodology for extracting weighted association rules. GO-WAR can extract association rules with a high level of IC without loss of support and confidence from a dataset of annotated data. A case study on the use of GO-WAR on publicly available GO annotation datasets is used to demonstrate that our method outperforms current state-of-the-art approaches.
Using GO-WAR for mining cross-ontology weighted association rules
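A minimal sketch of the information content measure referred to above, under the common definition IC(t) = -log p(t) based on annotation frequency; GO-WAR's exact formulation (for instance, whether descendant terms in the GO hierarchy are counted) may differ.

```python
import math
from collections import Counter

def information_content(annotations):
    """IC(t) = -log2 p(t), where p(t) is the annotation frequency of GO term t.
    `annotations` is an iterable of (gene, go_term) pairs; the GO hierarchy is
    ignored in this simplified version."""
    counts = Counter(term for _, term in annotations)
    total = sum(counts.values())
    return {term: -math.log2(c / total) for term, c in counts.items()}

# Toy annotation set: rarer terms receive a higher IC (more informative)
ann = [("g1", "GO:0008150"), ("g2", "GO:0008150"), ("g3", "GO:0008150"),
       ("g4", "GO:0006915"), ("g5", "GO:0006915"), ("g6", "GO:0007049")]
print(information_content(ann))
```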
S0169260715000760
In recent years several computational methods have been developed to predict RNA-binding sites in protein. Most of these methods do not consider interacting partners of a protein, so they predict the same RNA-binding sites for a given protein sequence even if the protein binds to different RNAs. Unlike the problem of predicting RNA-binding sites in protein, the problem of predicting protein-binding sites in RNA has received little attention mainly because it is much more difficult and shows a lower accuracy on average. In our previous study, we developed a method that predicts protein-binding nucleotides from an RNA sequence. In an effort to improve the prediction accuracy and usefulness of the previous method, we developed a new method that uses both RNA and protein sequence data. In this study, we identified effective features of RNA and protein molecules and developed a new support vector machine (SVM) model to predict protein-binding nucleotides from RNA and protein sequence data. The new model that used both protein and RNA sequence data achieved a sensitivity of 86.5%, a specificity of 86.2%, a positive predictive value (PPV) of 72.6%, a negative predictive value (NPV) of 93.8% and Matthews correlation coefficient (MCC) of 0.69 in a 10-fold cross validation; it achieved a sensitivity of 58.8%, a specificity of 87.4%, a PPV of 65.1%, a NPV of 84.2% and MCC of 0.48 in independent testing. For comparative purpose, we built another prediction model that used RNA sequence data alone and ran it on the same dataset. In a 10 fold-cross validation it achieved a sensitivity of 85.7%, a specificity of 80.5%, a PPV of 67.7%, a NPV of 92.2% and MCC of 0.63; in independent testing it achieved a sensitivity of 67.7%, a specificity of 78.8%, a PPV of 57.6%, a NPV of 85.2% and MCC of 0.45. In both cross-validations and independent testing, the new model that used both RNA and protein sequences showed a better performance than the model that used RNA sequence data alone in most performance measures. To the best of our knowledge, this is the first sequence-based prediction of protein-binding nucleotides in RNA which considers the binding partner of RNA. The new model will provide valuable information for designing biochemical experiments to find putative protein-binding sites in RNA with unknown structure.
Predicting protein-binding RNA nucleotides with consideration of binding partners
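A minimal sketch of the performance measures reported above (sensitivity, specificity, PPV, NPV and MCC), computed from confusion-matrix counts; the example counts are made up.

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV and MCC from confusion-matrix counts."""
    se  = tp / (tp + fn)
    sp  = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(sensitivity=se, specificity=sp, PPV=ppv, NPV=npv, MCC=mcc)

# Example with made-up counts (not the paper's results)
print(binary_metrics(tp=86, fp=33, tn=205, fn=13))
```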
S0169260715000875
Background and objective Cardiac arrhythmias are disorders in terms of speed or rhythm in the heart's electrical system. Atrial fibrillation (AFib) is the most common sustained arrhythmia, affecting a large number of persons. Electrophysiologic study (EPS) procedures are used to study fibrillation in patients; they consist of inducing a controlled fibrillation in the surgical room to analyze electrical heart reactions or to decide on implanting medical devices (i.e., a pacemaker). Nevertheless, the spontaneous induction may generate an undesired AFib, which poses a risk to the patient and is thus a critical issue for physicians. We study the unexpected AFib onset, aiming to identify signal patterns occurring in the time interval preceding an event of spontaneous (i.e., not induced) fibrillation. Profiling such signal patterns allowed us to design and implement an AFib prediction algorithm able to identify a spontaneous fibrillation early. The objective is to increase the reliability of EPS procedures. Methods We gathered data signals collected by a General Electric Healthcare CardioLab electrophysiology recording system (i.e., a polygraph). We extracted superficial and intracavitary cardiac signals from 50 different patients studied at the University Magna Graecia Cardiology Department. By studying the waveform (i.e., amplitude and energy) of intracavitary signals before the onset of the arrhythmia, we were able to define patterns related to AFib onsets that are side effects of an induced fibrillation. Results A framework for atrial fibrillation prediction during electrophysiological studies has been developed. It includes a prediction algorithm to alert of an upcoming AFib onset. Tests have been performed on an intracavitary cardiac signal data set related to patients studied in the electrophysiological room. Also, the results have been validated by the clinicians, proving that the framework can be useful when integrated with the polygraph, helping physicians in managing and monitoring patient status during EPS.
A framework for the atrial fibrillation prediction in electrophysiological studies
S0169260715000887
We present a software system called “Polyp-Alert” to assist the endoscopist in finding polyps by providing visual feedback during colonoscopy. Polyp-Alert employs our previous edge-cross-section visual features and a rule-based classifier to detect a polyp edge, i.e., an edge along the contour of a polyp. The technique tracks detected polyp edge(s) to group a sequence of images covering the same polyp(s) as one polyp shot. In our experiments, the software correctly detected 97.7% (42 of 43) of polyp shots in 53 randomly selected video files of entire colonoscopy procedures. Meanwhile, Polyp-Alert incorrectly marked only 4.3% of a full-length colonoscopy procedure as showing a polyp when none was present. The test data set consists of about 18 h of video data from Olympus and Fujinon endoscopes, and the technique is extensible to other brands of colonoscopes. Furthermore, Polyp-Alert can provide as many as ten feedback updates per second for a smooth display of feedback. The performance of our system is by far the most promising for potentially assisting the endoscopist in finding more polyps in clinical practice during a routine screening colonoscopy.
Polyp-Alert: Near real-time feedback during colonoscopy
S0169260715000899
In this paper a new automatic approach to accurately measuring human vertebrae is proposed. The aim is to speed up the measurement process and to reduce the uncertainties that typically affect measurements carried out with traditional approaches. The proposed method uses a 3D model of the vertebra, obtained from CT scans or 3D scanning, from which some characteristic dimensions are detected. For this purpose, specific rules to identify morphological features, from which dimensional features can be detected unambiguously and accurately, are put forward and implemented in original software. The proposed automatic method is verified by analysing real vertebrae and is then compared with state-of-the-art methods for vertebra measurement.
A new method for the automatic identification of the dimensional features of vertebrae
S0169260715000905
Patients use nurse call systems to signal nurses for medical help. Traditional push button and flashing lamp call systems are not integrated with other hospital automation systems; therefore, nurse response time becomes a matter of personal discretion. The improvement obtained by integrating a pager system into nurse call systems does not increase care efficiency, because unnecessary visits are still not eliminated. To obtain an immediate response and a purposeful visit by a nurse, regardless of the nurse's location in the hospital, traditional systems have to be improved by intelligent telephone system integration. The results of the developed Nurse Call System Software (NCSS), the Wireless Phone System Software (WPSS), the Location System Software (LSS) and the communication protocol are provided, together with detailed XML message structures. The benefits of the proposed system are also discussed and the direction of future work is presented.
Improving communication among nurses and patients
S0169260715000917
Computational fluid dynamics (CFD) modeling of the pulmonary vasculature has the potential to reveal continuum metrics associated with the hemodynamic stress acting on the vascular endothelium. It is widely accepted that the endothelium responds to flow-induced stress by releasing vasoactive substances that can dilate and constrict blood vessels locally. The objectives of this study are to examine the extent of patient specificity required to obtain a significant association of CFD output metrics and clinical measures in models of the pulmonary arterial circulation, and to evaluate the potential correlation of wall shear stress (WSS) with established metrics indicative of right ventricular (RV) afterload in pulmonary hypertension (PH). Right heart catheterization (RHC) hemodynamic data and contrast-enhanced computed tomography (CT) imaging were retrospectively acquired for 10 PH patients and processed to simulate blood flow in the pulmonary arteries. While conducting CFD modeling of the reconstructed patient-specific vasculatures, we experimented with three different outflow boundary conditions to investigate the potential for using computationally derived spatially averaged wall shear stress (SAWSS) as a metric of RV afterload. SAWSS was correlated with both pulmonary vascular resistance (PVR) (R² = 0.77, P < 0.05) and arterial compliance (C) (R² = 0.63, P < 0.05), but the extent of the correlation was affected by the degree of patient specificity incorporated in the fluid flow boundary conditions. We found that decreasing the distal PVR alters the flow distribution and changes the local velocity profile in the distal vessels, thereby increasing the local WSS. Nevertheless, implementing generic outflow boundary conditions still resulted in statistically significant SAWSS correlations with respect to both metrics of RV afterload, suggesting that the CFD model could be executed without the need for complex outflow boundary conditions that require invasively obtained patient-specific data. A preliminary study investigating the relationship between outlet diameter and flow distribution in the pulmonary tree offers a potential computationally inexpensive alternative to pressure-based outflow boundary conditions. Nomenclature: computational fluid dynamics; fluid–structure interaction; pulmonary hypertension; right ventricle; coefficient of determination; right heart catheterization; main pulmonary artery; mean pulmonary arterial pressure, mmHg; systolic pulmonary arterial pressure, mmHg; diastolic pulmonary arterial pressure, mmHg; pulse pressure = sPAP − dPAP, mmHg; pulmonary vascular resistance, dyn s/cm5; wall shear stress, dyn/cm2; spatially averaged wall shear stress, dyn/cm2; cardiac output, cm3/s; body surface area, m2; cardiac index, cm3/s/m2; heart rate, beats/min; pulmonary capillary wedge pressure, mmHg; compliance, cm3/mmHg; vascular luminal area, cm2; hydraulic diameter, cm.
Patient-specific computational modeling of blood flow in the pulmonary arterial circulation
S0169260715000929
Pharmacokinetics can be a challenging topic to teach due to the complex relationships inherent between physiological parameters, mathematical descriptors and equations, and their combined impact on shaping the blood fluid concentration vs. time curves of drugs. A computer program was developed within Microsoft Excel for Windows, designed to assist in the instruction of basic pharmacokinetics within an entry-to-practice pharmacy class environment. The program is composed of a series of spreadsheets (modules) linked by Visual Basic for Applications, intended to illustrate the relationships between pharmacokinetic and in some cases physiological parameters, doses and dose rates and the drug blood fluid concentration vs. time curves. Each module is accompanied by a simulation user's guide, prompting the user to change specific independent parameters and then observe the impact of the change(s) on the drug concentration vs. time curve and on other dependent parameters. “Slider” (or “scroll”) bars can be selected to readily see the effects of repeated changes on the dependencies. Topics covered include one compartment single dose administration (iv bolus, oral, short infusion), intravenous infusion, repeated doses, renal and hepatic clearance, nonlinear elimination, two compartment model, plasma protein binding and the relationship between pharmacokinetics and drug effect. The program has been used in various forms in the classroom over a number of years, with positive ratings generally being received from students for its use in the classroom.
uSIMPK. An Excel for Windows-based simulation program for instruction of basic pharmacokinetics principles to pharmacy students
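One of the relationships such modules typically illustrate is the one-compartment model with first-order oral absorption (the Bateman equation); a minimal sketch follows, with illustrative parameter values. This is not the program's own Excel/VBA code.

```python
import numpy as np

def oral_concentration(t, dose, F, ka, ke, V):
    """One-compartment model with first-order absorption (Bateman equation):
    C(t) = F*dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), for ka != ke."""
    return F * dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Example: 500 mg oral dose, F = 0.9, ka = 1.2 /h, ke = 0.15 /h, V = 40 L
t = np.linspace(0, 24, 7)                                            # hours
print(np.round(oral_concentration(t, 500, 0.9, 1.2, 0.15, 40), 2))   # mg/L
```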
S0169260715000930
Purpose Clinical pathways fall under the process perspective of health care quality. For care providers, clinical pathways can be compared to improve health care quality. The objective of this study was to design a convenient physician order set comparison system based on claim records from the National Health Insurance Research Database (NHIRD) of Taiwan. Methods Data were retrieved from the NHIRD for the period 2003–2007 for frequent physician order sets found in hospital inpatient claim records for surgical hernia repair. The derived frequent physician order sets were divided into five frequency thresholds: 80%, 85%, 90%, 95% and 100%. A consistency index was defined and calculated to capture each care provider's adherence to clinical pathways. In addition, the average count of physician orders, average cost, Charlson comorbidity index, and recurrence rate were calculated; these variables were considered in the comparison of frequent physician order sets. Results Records for 3262 patients from 257 hospitals were retrieved. The frequent physician order sets of the various frequency thresholds, Charlson comorbidities, and recurrence rates were extracted and computed for comparison among hospitals. A recurrence rate threshold of 2% was established to separate low and high quality of herniorrhaphy at each hospital. Univariable analysis showed that a low recurrence rate was associated with a high consistency index (70.99±23.88 vs. 52.60±20.30; P <.001), few surgeons at each hospital (3.50±4.41 vs. 7.09±6.57; P <.001), and non-medical-center facility type (P =.042). A multivariable Cox regression analysis indicated an association of low recurrence rates with the consistency index only (per one-percentage-point increase: OR=0.973; CI: 0.957–0.990; P =.002). Conclusions The proposed system leveraged the claim records to generate frequent physician order sets at hospitals, thus solving the difficulty of obtaining clinical pathway data. This allows medical professionals and management to conveniently and effectively compare and query similarities and differences in clinical pathways among hospitals.
A cross-hospital cost and quality assessment system by extracting frequent physician order set from a nationwide Health Insurance Research Database
S0169260715000942
We propose the signal processing technique of calculating a cross-correlation function and an average deviation between the continuous blood glucose and the interpolation of limited blood glucose samples to evaluate blood glucose monitoring frequency in a self-aware patient software agent model. The diabetic patient software agent model [1] is a 24-h circadian, self-aware, stochastic model of a diabetic patient's blood glucose levels in a software agent environment. The purpose of this work is to apply a signal processing technique to assist patients and physicians in understanding the extent of a patient's illness using a limited number of blood glucose samples. A second purpose of this work is to determine an appropriate blood glucose monitoring frequency in order to have a minimum number of samples taken that still provide a good understanding of the patient's blood glucose levels. For society in general, the monitoring cost of diabetes is an extremely important issue, and these costs can vary tremendously depending on monitoring approaches and monitoring frequencies. Due to the cost and discomfort associated with blood glucose monitoring, today, patients expect monitoring frequencies specific to their health profile. The proposed method quantitatively assesses various monitoring protocols (from 6 times per day to 1 time per week) in nine predefined categories of patient agents in terms of risk factors of health status and age. Simulation results show that sampling 6 times per day is excessive, and not necessary for understanding the dynamics of the continuous signal in the experiments. In addition, patient agents in certain conditions only need to sample their blood glucose 1 time per week to have a good understanding of the characteristics of their blood glucose. Finally, an evaluation scenario is developed to visualize this concept, in which appropriate monitoring frequencies are shown based on the particular conditions of patient agents. This base line can assist people in determining an appropriate monitoring frequency based on their personal health profile.
A signal processing application for evaluating self-monitoring blood glucose strategies in a software agent model
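A minimal sketch of the comparison described above: the continuous blood glucose signal is compared with the interpolation of a few samples via a normalised cross-correlation and an average deviation. The exact statistics and the patient-agent model are not reproduced; the signal below is synthetic.

```python
import numpy as np

def sampling_agreement(t, continuous_bg, sample_times, sample_values):
    """Compare a continuous BG signal with the linear interpolation of sparse
    samples using a normalised cross-correlation peak and the mean absolute
    deviation (the statistics used in the paper may differ in detail)."""
    interp = np.interp(t, sample_times, sample_values)
    a = continuous_bg - continuous_bg.mean()
    b = interp - interp.mean()
    xcorr = np.correlate(a, b, mode="full") / (np.linalg.norm(a) * np.linalg.norm(b))
    avg_dev = np.mean(np.abs(continuous_bg - interp))
    return xcorr.max(), avg_dev

# Example: a synthetic 24-h profile at 5-min resolution vs. 4 finger-stick samples
t = np.arange(0, 24, 5 / 60)
bg = 120 + 30 * np.sin(2 * np.pi * t / 24) + 5 * np.random.randn(t.size)
sample_t = np.array([7.0, 12.0, 18.0, 22.0])
sample_v = np.interp(sample_t, t, bg)
print(sampling_agreement(t, bg, sample_t, sample_v))
```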
S0169260715000954
Background and objective Several abnormal brain regions are known to be linked to depression, including the amygdala, orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), and dorsolateral prefrontal cortex (DLPFC). The aim of this study is to apply EEG (electroencephalogram) data analysis to investigate, with respect to mild depression, whether there exists dysregulation in these brain regions. Methods EEG sources were assessed from 9 healthy and 9 mildly depressed subjects who were classified according to the Beck Depression Inventory (BDI) criteria. A t-test was used to analyze the eye movement data, and standardized low resolution tomography (sLORETA) was used to localize EEG activity. Results A comparison of eye movement data between the healthy and mildly depressed subjects showed that mildly depressed subjects spent more time viewing negative emotional faces. Comparison of the EEG from the two groups indicated higher theta activity in Brodmann area (BA) 6 and higher alpha activity in BA38. Conclusions The EEG source localization results suggested that temporal pole activity is dysregulated, and the eye-movement data analysis showed that mildly depressed subjects paid much more attention to negative face expressions, which is in accordance with the results of the EEG source localization.
A study on EEG-based brain electrical source of mild depressed subjects
S0169260715000966
Background and objectives The continuous subcutaneous insulin infusion (CSII) pump is widely considered a convenient and promising option for type 1 diabetes mellitus (T1DM) subjects, who need exogenous insulin infusion. In standard insulin pump therapy, there are two modes of insulin infusion: basal and bolus insulin. The basal-bolus therapy should be individualized and optimized in order to keep the subject's blood glucose (BG) level within the normal range; however, the optimization procedure is troublesome and places a considerable burden on patients. Therefore, an automatic adjustment method is needed to reduce the burden on the patients, and a run-to-run (R2R) control algorithm can be used to handle this significant task. Methods In this study, two kinds of high-order R2R control methods are presented to adjust the basal and bolus insulin simultaneously. For clarity, a second-order R2R control algorithm is first derived and studied. Furthermore, considering the differences between weekdays and weekends, a seventh-order R2R control algorithm is also proposed and tested. Results In order to simulate real situations, the proposed method has been tested with uncertainties in measurement noise, drifts, meal size, meal time and snacks. The proposed method can converge even when there are ±60 min random variations in meal timing or ±50% random variations in meal size. Conclusions According to the robustness analysis, the proposed high-order R2R scheme has excellent robustness and could be a promising candidate for optimizing insulin pump therapy.
Optimization of insulin pump therapy based on high order run-to-run control scheme
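A generic run-to-run update of the kind referred to above can be sketched as follows; the gains, the error definition and the second-order form are illustrative assumptions and do not reproduce the authors' tuned algorithm.

```python
def r2r_update(u_k, e_k, e_km1, k1=0.002, k2=0.001):
    """Generic second-order run-to-run update: u_{k+1} = u_k + k1*e_k + k2*e_{k-1}.
    Here u is e.g. a daily basal rate (U/h) and e a once-per-run glucose error
    (e.g. mean BG minus target, mg/dl). Gains are illustrative assumptions."""
    return u_k + k1 * e_k + k2 * e_km1

# Example: successive runs (days) with decreasing glucose errors
basal, errors = 0.90, [20.0, 12.0, 5.0, 2.0]
for e_k, e_km1 in zip(errors[1:], errors[:-1]):
    basal = r2r_update(basal, e_k, e_km1)
    print(round(basal, 4))
```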
S0169260715001108
Telemedicine is the practice of exchanging medical information from one location to another through electronic communications to improve the delivery of health care services. This research article describes a telemedicine framework with knowledge engineering using taxonomic reasoning of ontology modeling and semantic similarity. In addition to being a precious support in the procedure of medical decision-making, this framework can be used to strengthen the significant collaborations and traceability that are important for the development of official deployment of telemedicine applications. Adequate mechanisms for information management with traceability of the reasoning process are also essential in the fields of epidemiology and public health. In this paper we enrich the case-based reasoning process by taking into account former evidence-based knowledge. We use the regular four-step approach and implement an additional step (iii): (i) establish diagnosis, (ii) retrieve treatment, (iii) apply evidence, (iv) adaptation, (v) retain. Each step is performed using tools from knowledge engineering and information processing (natural language processing, ontology, indexation, algorithms, etc.). The case representation is done by the taxonomy component of a medical ontology model. The proposed approach is illustrated with an example from the oncology domain. The medical ontology allows a good and efficient modeling of the patient and his treatment. We point up the role of evidence and specialists' opinions in the effectiveness and safety of care.
Telemedicine framework using case-based reasoning with evidences
S0169260715001133
We provide a continuation of the existing Activity Table Modeling methodology with a modular spreadsheet simulation. The simulation model developed comprises 28 modeling elements for the abdominal surgery cycle process. The simulation of a two-week patient flow in an abdominal clinic with 75 beds demonstrates the applicability of the methodology. The simulation does not include macros, so programming experience is not essential for replicating or upgrading the model. Unlike the existing methods, the proposed solution employs a modular approach for modeling the activities, which ensures better readability, the possibility of easily upgrading the model with other activities, and its easy extension and connectivity with other similar models. We propose a first-in-first-served approach for simulating the servicing of multiple patients. The uncertain time duration of the activities is modeled using the function “rand()”. The patients' movements from one activity to the next are tracked with nested “if()” functions, thus allowing easy re-creation of the process without the need for complex programming.
Abdominal surgery process modeling framework for simulation using spreadsheets
S0169260715001376
Background and objective Periodontitis involves progressive loss of alveolar bone around the teeth. Hence, automatic alveolar bone-loss (ABL) measurement in periapical radiographs can assist dentists in diagnosing such disease. In this paper, we propose an effective method for ABL area localization and denote it as ABLIfBm. Method ABLIfBm is a threshold segmentation method that uses a hybrid feature fused from both intensity and texture, the latter measured by the H-value of the fractional Brownian motion (fBm) model, where the H-value is the Hurst coefficient in the expectation function of an fBm curve (intensity change) and is directly related to the value of the fractal dimension. Adopting a leave-one-out cross-validation training and testing mechanism, ABLIfBm trains weights for both features using a Bayesian classifier and transforms the radiograph image into a feature image obtained from a weighted average of both features. Finally, by Otsu's thresholding, it segments the feature image into normal and bone-loss regions. Results Experimental results on 31 periodontitis radiograph images in terms of mean true positive fraction and false positive fraction are about 92.5% and 14.0%, respectively, where the ground truth is provided by a dentist. The results also demonstrate that ABLIfBm outperforms (a) the threshold segmentation method using either feature alone or a weighted average of the same two features but with weights trained differently; (b) a level set segmentation method presented earlier in the literature; and (c) segmentation methods based on a Bayesian, K-NN, or SVM classifier using the same two features. Conclusion Our results suggest that the proposed method can effectively localize alveolar bone-loss areas in periodontitis radiograph images and hence would be useful for dentists in evaluating the degree of bone loss in periodontitis patients.
Alveolar bone-loss area localization in periodontitis radiographs based on threshold segmentation with a hybrid feature fused of intensity and the H-value of fractional Brownian motion model
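A minimal sketch of the fusion-and-threshold step described above: an intensity feature image and an H-value (Hurst) feature image are combined by a weighted average and segmented with Otsu's threshold. The weights below are placeholders for those trained with the Bayesian classifier, and the feature images are random stand-ins.

```python
import numpy as np
from skimage.filters import threshold_otsu

def fuse_and_threshold(intensity_img, hurst_img, w_intensity, w_hurst):
    """Fuse two feature images by a weighted average (after min-max scaling),
    then segment the fused feature image with Otsu's threshold."""
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    feature = w_intensity * norm(intensity_img) + w_hurst * norm(hurst_img)
    return feature > threshold_otsu(feature)

# Example with random stand-in feature images
rng = np.random.default_rng(0)
intensity = rng.random((128, 128))
hurst = rng.random((128, 128))
mask = fuse_and_threshold(intensity, hurst, w_intensity=0.6, w_hurst=0.4)
print(mask.mean())   # fraction of pixels labelled as bone-loss candidates
```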
S0169260715001492
Background and objective The evaluation of the clinical status of a patient is frequently based on the temporal evolution of some parameters, making the detection of temporal patterns a priority in data analysis. Temporal abstraction (TA) is a methodology widely used in medical reasoning for summarizing and abstracting longitudinal data. Methods This paper describes JTSA (Java Time Series Abstractor), a framework including a library of algorithms for time series preprocessing and abstraction and an engine to execute a workflow for temporal data processing. The JTSA framework is grounded on a comprehensive ontology that models temporal data processing from both the data storage and the abstraction computation perspective. The JTSA framework is designed to allow users to build their own analysis workflows by combining different algorithms. Thanks to the modular structure of a workflow, simple to highly complex patterns can be detected. The JTSA framework has been developed in Java 1.7 and is distributed under the GPL as a jar file. Results JTSA provides: a collection of algorithms to perform temporal abstraction and preprocessing of time series, a framework for defining and executing data analysis workflows based on these algorithms, and a GUI for workflow prototyping and testing. The whole JTSA project relies on a formal model of the data types and of the algorithms included in the library. This model is the basis for the design and implementation of the software application. Taking into account this formalized structure, the user can easily extend the JTSA framework by adding new algorithms. Results are shown in the context of the EU project MOSAIC, where JTSA is used to extract relevant patterns from data related to the long-term monitoring of diabetic patients. Conclusions The proof that JTSA is a versatile tool that can be adapted to different needs is given by its possible uses, both as a standalone tool for data summarization and as a module to be embedded into other architectures to select specific phenotypes based on TAs in a large dataset.
JTSA: An open source framework for time series abstractions
S0169260715001509
Background and objectives Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. Methods WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal rule set that satisfies the requirements of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization. Results Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. Conclusion WSO provides accurate and concise rule sets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between the prediction accuracy and the size of the rule base is optimized during the design and development of the rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and the classification accuracy.
A Swarm Optimization approach for clinical knowledge mining
S0169260715001510
The histopathological examination of tissue specimens is necessary for the diagnosis and grading of colon cancer. However, the process is subjective and leads to significant inter/intra observer variation in diagnosis as it mainly relies on the visual assessment of histopathologists. Therefore, a reliable computer-aided technique, which can automatically classify normal and malignant colon samples, and determine grades of malignant samples, is required. In this paper, we propose a novel colon cancer diagnostic (CCD) system, which initially classifies colon biopsy images into normal and malignant classes, and then automatically determines the grades of colon cancer for malignant images. To this end, various novel structural descriptors, which mathematically model and quantify the variation among the structure of normal colon tissues and malignant tissues of various cancer grades, have been employed. Radial basis function (RBF) kernel of support vector machines (SVM) has been employed as classifier in order to classify/grade colon samples based on these descriptors. The proposed system has been tested on 92 malignant and 82 normal colon biopsy images. The classification performance has been measured in terms of various performance measures, and quite promising performance has been observed. Compared with previous techniques, the proposed system has demonstrated better cancer detection (classification accuracy=95.40%) and grading (classification accuracy=93.47%) capability. Therefore, the proposed CCD system can provide a reliable second opinion to the histopathologists.
Novel structural descriptors for automated colon cancer detection and grading
S0169260715001522
Background and objective The accurate identification of fat droplets is a prerequisite for the automatic quantification of steatosis in histological images. A major challenge in this regard is the distinction between clustered fat droplets and vessels or tissue cracks. Methods We present a new method for the identification of fat droplets that utilizes adjacency statistics as shape features. Adjacency statistics are simple statistics on neighbor pixels. Results The method accurately identified fat droplets with sensitivity and specificity values above 90%. Compared with commonly-used shape features, adjacency statistics greatly improved the sensitivity toward clustered fat droplets by 29% and the specificity by 17%. On a standard personal computer, megapixel images were processed in less than 0.05s. Conclusions The presented method is simple to implement and can provide the basis for the fast and accurate quantification of steatosis.
Fast and accurate identification of fat droplets in histological images
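A minimal sketch of adjacency statistics in the sense described above (simple statistics on neighbour pixels), here the histogram of foreground-neighbour counts over an object's pixels; the paper's exact statistics may differ.

```python
import numpy as np
from scipy.ndimage import convolve

def adjacency_statistics(mask):
    """Distribution of foreground-neighbour counts (0-8) over an object's pixels.
    Compact, round droplets and elongated cracks/vessels give clearly different
    histograms, which is the intuition behind adjacency-based shape features."""
    mask = np.asarray(mask, dtype=np.uint8)
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    neighbours = convolve(mask, kernel, mode="constant", cval=0)
    counts = neighbours[mask.astype(bool)].astype(int)
    hist = np.bincount(counts, minlength=9)
    return hist / hist.sum()

# A disc (droplet-like) has most pixels with 8 foreground neighbours
yy, xx = np.mgrid[:41, :41]
disc = ((yy - 20) ** 2 + (xx - 20) ** 2) < 15 ** 2
print(np.round(adjacency_statistics(disc), 3))
```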
S0169260715001534
There are various medical image sharing and electronic whiteboard systems available for diagnosis and discussion purposes. However, most of these systems require clients to install special software tools or web plug-ins to support whiteboard discussion, special medical image formats, and customized decoding algorithms for the transmission of HRIs (high-resolution images). This limits the accessibility of the software on different devices and operating systems. In this paper, we propose a solution based on pure web pages for lossless sharing and e-whiteboard discussion of medical HRIs, and have set up a medical HRI sharing and e-whiteboard system with a four-layered design: (1) HRI access layer: we improved a tile-pyramid model, named unbalanced ratio pyramid structure (URPS), to rapidly share lossless HRIs and to adapt to the reading habits of users; (2) format conversion layer: we designed a format conversion engine (FCE) on the server side to convert and cache, in real time, the DICOM tiles that clients request with window-level parameters, ensuring browser compatibility and maintaining server-client response efficiency; (3) business logic layer: we built an XML behavior relationship storage structure to store and share users' behavior and to support real-time co-browsing and discussion between clients; (4) web-user-interface layer: AJAX technology and the Raphael toolkit were used to combine HTML and JavaScript to build a client RIA (rich Internet application), providing clients with desktop-like interaction in any pure web page. This system can be used to quickly browse lossless HRIs, and supports smooth discussion and co-browsing in any web browser in a diversified network environment. The proposed methods provide a way to share HRIs safely, and may be used in the fields of regional health, telemedicine and remote education at a low cost.
Medical high-resolution image sharing and electronic whiteboard system: A pure-web-based system for accessing and discussing lossless original images in telemedicine
S0169260715001546
Background and objective Meaningful targeting of brain structures is required in a number of experimental designs in neuroscience. Current technological developments, such as high-density electrode arrays for parallel electrophysiological recordings and optogenetic tools that allow fine control of activity in specific cell populations, provide powerful means to investigate brain physio-pathology. However, to extract the maximum yield from these developments, increased precision, reproducibility and cost-efficiency in experimental procedures are also required. Methods We introduce here a framework based on magnetic resonance imaging (MRI) and digitized brain atlases to produce customizable 3D environments for brain navigation. It allows the use of individualized anatomical and/or functional information from multiple MRI modalities to assist experimental neurosurgery planning and in vivo tissue processing. Results As a proof of concept, we show three examples of experimental designs facilitated by the presented framework, with broad applicability in neuroscience. Conclusions The obtained results illustrate its feasibility for identifying and selecting functionally and/or anatomically connected neuronal populations in vivo and directing electrode implantations to targeted nodes in the intricate system of brain networks.
Neurosurgery planning in rodents using a magnetic resonance imaging assisted framework to target experimentally defined networks
S0169260715001558
Background and objectives Dose-finding trials using model-based methods have the ability to handle the increasingly complex landscape being seen in clinical trials. Issues such as patient heterogeneity in trial populations are important to address in the design of a trial, in addition to the inclusion/exclusion criteria. Designs accommodating patient heterogeneity have been described using the continual reassessment method (CRM) and time-to-event CRM (TITE-CRM), yet the implementation of these trials in practice has been limited. These and other model-based methods generally need statisticians to help design and conduct the trials. However, the currently available statistical programs which facilitate the use of these methods focus on estimation in the one-sample case. Methods A SAS program to accommodate two groups using the TITE-CRM and likelihood estimation has been developed. The program consists of macros that assist with the planning and implementation of a trial accounting for patient heterogeneity. Results A description of the program is given, as well as examples of its use. For planning purposes, an example is provided showing how the program can be used to guide sample size estimates for the trial. Conclusions This program provides researchers with a valuable tool for designing dose-finding studies that account for the presence of patient heterogeneity and for conducting such a trial, illustrated here with a hypothetical example.
Implementation of a two-group likelihood time-to-event continual reassessment method using SAS
S0169260715001571
Automatic detection of the QRS complexes/R-peaks in an electrocardiogram (ECG) signal is the most important step preceding any kind of ECG processing and analysis, and the performance of such systems relies heavily on the accuracy of the QRS detector. The objective of the present work is to derive a new robust method based on the stationary wavelet transform (SWT) for R-peak detection. Because the decimation of the coefficients at each level of the transformation algorithm is omitted, more samples in the coefficient sequences are available and hence better outlier detection can be performed. Using the information of local maxima, minima and zero crossings of the fourth-level SWT detail coefficients, the proposed algorithm identifies the significant points for detection and delineation of the QRS complexes, as well as detection and identification of the peaks of the individual QRS waves, in the pre-processed ECG signal. Various experimental results show that the proposed algorithm exhibits reliable QRS detection as well as accurate ECG delineation, achieving excellent performance on different databases: on the MIT-BIH database (Se =99.84%, P =99.88%), on the QT database (Se =99.94%, P =99.89%) and on the MIT-BIH Noise Stress Test database (Se =95.30%, P =93.98%). Reliability and accuracy are close to the highest among those obtained in other studies. The experimental results being satisfactory, the SWT may represent a novel QRS detection tool for robust ECG signal analysis.
R-peaks detection based on stationary wavelet transform
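As a rough illustration of the general idea described above (not the authors' full delineation logic), a minimal sketch can take the 4th-level SWT detail coefficients of the ECG and pick R-peaks from their squared envelope with a physiological refractory period. The sampling rate, wavelet choice ('db4') and the thresholding heuristic below are assumptions.

import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs=360, wavelet="db4", level=4):
    n = len(ecg)
    pad = (-n) % (2 ** level)                      # SWT needs length divisible by 2**level
    x = np.pad(ecg, (0, pad), mode="edge")
    coeffs = pywt.swt(x, wavelet, level=level)     # [(cA_level, cD_level), ..., (cA_1, cD_1)]
    d4 = coeffs[0][1][:n]                          # 4th-level detail coefficients
    envelope = d4 ** 2                             # emphasise QRS energy
    min_dist = int(0.25 * fs)                      # ~250 ms refractory period
    peaks, _ = find_peaks(envelope, distance=min_dist,
                          height=4 * np.mean(envelope))
    return peaks                                   # sample indices of candidate R-peaks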
S0169260715001583
Background and objective Understanding the causes of disagreement among experts in clinical decision making has been a challenge for decades. In particular, a high amount of variability exists in the diagnosis of retinopathy of prematurity (ROP), which is a disease affecting low birth weight infants and a major cause of childhood blindness. A possible cause of variability, which has been mostly neglected in the literature, is related to discrepancies in the sets of important features considered by different experts. In this paper we propose a methodology which makes use of machine learning techniques to understand the underlying causes of inter-expert variability. Methods The experiments are carried out on a dataset consisting of 34 retinal images, each with diagnoses provided by 22 independent experts. Feature selection techniques are applied to discover the most important features considered by a given expert. The features selected by each expert are then compared to the features selected by other experts by applying similarity measures. Finally, an automated diagnosis system is built in order to check if this approach can be helpful in solving the problem of understanding high inter-rater variability. Results The experimental results reveal that some features are mostly selected by the feature selection methods regardless of the considered expert. Moreover, for pairs of experts with high percentage agreement among them, the feature selection algorithms also select similar features. By using the relevant selected features, the classification performance of the automatic system was improved or maintained. Conclusions The proposed methodology provides a handy framework to identify important features for experts and check whether the selected features reflect the pairwise agreements/disagreements. These findings may lead to improved diagnostic accuracy and standardization among clinicians, and pave the way for the application of this methodology to other problems which present inter-expert variability.
Dealing with inter-expert variability in retinopathy of prematurity: A machine learning approach
S0169260715001595
With the aim of controlling drug-resistant Plasmodium falciparum, a computational attempt at designing novel adduct antimalarial drugs through molecular docking, by combining chloroquine individually with five alkaloids, is presented. These alkaloids were obtained from the medicinal plant Adhatoda vasica. From the individual docking values obtained for important derivatives of quinine and chloroquine, as well as for the individual alkaloids and the adduct agents of chloroquine with Adhatoda alkaloids as ligands, it was discernible that the ‘adduct agent-1 with chloroquine and adhatodine’ combination had the minimum energy of interaction, with a docking score of −11.144kcal/mol against the target protein triosephosphate isomerase (TIM), the key enzyme of the glycolytic pathway. Drug resistance of P. falciparum is due to a mutation in the polypeptide of TIM. Inhibition of mutant TIM would disrupt this metabolism and thereby help control drug-resistant P. falciparum. This in silico work helped to identify the ‘adduct agent-1 with chloroquine and adhatodine’, which could be taken up by pharmacology for further development as a new drug against drug-resistant Plasmodium.
In silico attempt for adduct agent(s) against malaria: Combination of chloroquine with alkaloids of Adhatoda vasica
S0169260715001601
Taiwan is an area where chronic hepatitis is endemic. Liver cancer is so common that it has ranked first among cancer mortality rates since the early 1980s in Taiwan, and liver cirrhosis and chronic liver diseases are the sixth or seventh leading causes of death. Therefore, as shown by the active research on hepatitis, the disease is not only a health threat, but also a huge medical cost for the government. The estimated total number of hepatitis B carriers in the general population aged more than 20 years is 3,067,307. Thus, a case record review was conducted of all patients with a diagnosis of acute hepatitis admitted to the Emergency Department (ED) of a well-known teaching-oriented hospital in Taipei. The cost of medical resource utilization is defined as the total medical fee. In this study, a fuzzy neural network (FNN) is employed to develop the cost forecasting model. A total of 110 patients met the inclusion criteria. The computational results indicate that the FNN model can provide more accurate forecasts than support vector regression (SVR) or an artificial neural network (ANN). In addition, unlike SVR and ANN, the FNN can also provide fuzzy IF–THEN rules for interpretation.
A medical cost estimation with fuzzy neural network of acute hepatitis patients in emergency room
S0169260715001613
Background Proteomics, the study of proteomes, has been increasingly utilized in a wide variety of biological problems. The Two-Dimensional Gel Electrophoresis (2D-PAGE) technique is a powerful proteomics technique aiming at separation of the complex protein mixtures. Spot detection and segmentation are fundamental components of 2D-gel image analysis but remain arduous and difficult tasks. Several software packages and academic approaches are available for 2D-gel image spot detection and segmentation. Each one has its respective advantages and disadvantages and achieves a different level of success in dealing with the challenges of 2D-gel image analysis. A common characteristic of the available methods is their dependency on user intervention in order to achieve optimal results, a process that can lead to subjective and non-reproducible results. In this work, the authors propose a novel spot detection and segmentation methodology for 2D-gel images. Methods This work introduces a novel spot detection and spot segmentation methodology that is based on a multi-thresholding scheme applied on overlapping regions of the image, a custom grow-cut algorithm, a region growing scheme and morphological operators. The performance of the proposed methodology is evaluated on real as well as synthetic 2D-gel images using well established statistical measures, including precision, sensitivity, and their weighted measure, F-measure, as well as volumetric overlap, volumetric error and volumetric overlap error. Results Experimental results show that the proposed methodology outperforms state-of-the-art software packages and methods proposed in the literature and results in more plausible spot boundaries and more accurate segmentation. The proposed method achieved the highest F-measure (94.8%) for spot detection and the lowest volumetric overlap error (8.3%) for the segmentation process. Conclusions Evaluation against state-of-the-art 2D-gel image analysis software packages and techniques proposed in the literature, including Melanie 7, Delta2D, PDQuest and Scimo, demonstrates that the proposed approach outperforms the other methods evaluated in this work and constitutes an advantageous and reliable solution for 2D-gel image analysis.
2D-gel spot detection and segmentation based on modified image-aware grow-cut and regional intensity information
S0169260715001625
Electrocardiography (ECG) has been recently proposed as a biometric trait for identification purposes. Intra-individual variations of the ECG might affect identification performance. These variations are mainly due to Heart Rate Variability (HRV). In particular, HRV causes changes in the QT intervals along the ECG waveforms. This work is aimed at analysing the influence of seven QT interval correction methods (based on population models) on the performance of ECG-fiducial-based identification systems. In addition, we have also considered the influence of training set size, classifier, classifier ensemble, as well as the number of consecutive heartbeats in a majority voting scheme. The ECG signals used in this study were collected from thirty-nine subjects within the Physionet open access database. Public domain software was used for fiducial point detection. Results suggested that QT correction is indeed required to improve the performance. However, there is no clear choice among the seven explored approaches for QT correction (identification rate between 0.97 and 0.99). MultiLayer Perceptron and Support Vector Machine seemed to have better generalization capabilities, in terms of classification performance, with respect to Decision Tree-based classifiers. No such strong influence of the training-set size and the number of consecutive heartbeats has been observed on the majority voting scheme.
Subject identification via ECG fiducial-based systems: Influence of the type of QT interval correction
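The abstract above does not list the seven population-based correction models, so the sketch below only illustrates four widely used formulas (Bazett, Fridericia, Framingham, Hodges) that are commonly considered in such comparisons; whether they coincide with the paper's choices is an assumption. QT and RR are expressed in seconds.

import numpy as np

def qt_corrections(qt, rr):
    qt, rr = np.asarray(qt, float), np.asarray(rr, float)
    hr = 60.0 / rr                                   # heart rate in beats per minute
    return {
        "bazett":      qt / np.sqrt(rr),
        "fridericia":  qt / np.cbrt(rr),
        "framingham":  qt + 0.154 * (1.0 - rr),
        "hodges":      qt + 0.00175 * (hr - 60.0),   # 1.75 ms per bpm, expressed in seconds
    }

# Example: qt_corrections(0.38, 0.80) gives QTc of roughly 0.425 s (Bazett),
# 0.409 s (Fridericia), 0.411 s (Framingham) and 0.406 s (Hodges).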
S0169260715001753
In this paper, we propose a robust semi-autonomous algorithm for 3D vessel segmentation and tracking based on an active contour model and a Kalman filter. For each computed tomography angiography (CTA) slice, we use the active contour model to segment the vessel boundary and the Kalman filter to track position and shape variations of the vessel boundary between slices. For successful segmentation via active contour, we select an adequate number of initial points from the contour of the first slice. The points are set manually by user input for the first slice. For the remaining slices, the initial contour position is estimated autonomously based on segmentation results of the previous slice. To obtain refined segmentation results, an adaptive control spacing algorithm is introduced into the active contour model. Moreover, a block search-based initial contour estimation procedure is proposed to ensure that the initial contour of each slice can be near the vessel boundary. Experiments were performed on synthetic and real chest CTA images. Compared with the well-known Chan-Vese (CV) model, the proposed algorithm exhibited better performance in segmentation and tracking. In particular, receiver operating characteristic analysis on the synthetic and real CTA images demonstrated the time efficiency and tracking robustness of the proposed model. In terms of computational time redundancy, processing time can be effectively reduced by approximately 20%.
Adaptive Kalman snake for semi-autonomous 3D vessel tracking
S0169260715001765
Breast cancer is one of the most perilous diseases among women. Breast screening is a method of detecting breast cancer at a very early stage, which can reduce the mortality rate. Mammography is a standard method for the early diagnosis of breast cancer. In this paper, a new algorithm is proposed for breast cancer detection and classification in digital mammography based on the Non-Subsampled Contourlet Transform (NSCT) and Super Resolution (SR). The presented algorithm includes three main parts: pre-processing, feature extraction and classification. In the pre-processing stage, after determining the region of interest (ROI) by an automatic technique, the quality of the image is improved using NSCT and the SR algorithm. In the feature extraction part, several features of the image components are extracted and the skewness of each feature is calculated. Finally, the AdaBoost algorithm is used to classify and determine the probability of benign and malignant disease. The results obtained on the Mammographic Image Analysis Society (MIAS) database indicate the significant performance and superiority of the proposed method in comparison with state-of-the-art approaches. According to these results, the proposed technique achieves a mean accuracy of 91.43% and an FPR of 6.42%.
Breast cancer detection and classification in digital mammography based on Non-Subsampled Contourlet Transform (NSCT) and Super Resolution
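A minimal sketch of the feature-extraction and classification stage described above. The NSCT itself is not available in common Python libraries, so the decomposition is assumed to be precomputed: 'subbands' is a hypothetical list of 2-D coefficient arrays per ROI, from which one skewness value each is taken before AdaBoost classification.

import numpy as np
from scipy.stats import skew
from sklearn.ensemble import AdaBoostClassifier

def roi_features(subbands):
    # One skewness value per subband/component, as the abstract describes.
    return np.array([skew(band, axis=None) for band in subbands])

def train_detector(list_of_subband_sets, labels):
    X = np.vstack([roi_features(s) for s in list_of_subband_sets])
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)                  # labels: 0 = benign, 1 = malignant
    return clf                          # clf.predict_proba gives the class probabilities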
S0169260715001777
Premature ventricular contraction (PVC) is a common type of abnormal heartbeat. Without early diagnosis and proper treatment, PVC may result in serious harm. Diagnosis of PVC is of great importance for goal-directed treatment and preoperative prognosis. This paper proposes a novel diagnostic method for PVC based on the Lyapunov exponents of electrocardiogram (ECG) beats. The methodology consists of preprocessing, feature extraction and classification integrated into one system. PVC beats can be classified and differentiated from other types of abnormal heartbeats by analyzing Lyapunov exponents and training a learning vector quantization (LVQ) neural network. Our algorithm can obtain a good diagnostic result with few features using single-lead ECG data. The sensitivity, positive predictivity, and overall accuracy of the automatic diagnosis of PVC are 90.26%, 92.31%, and 98.90%, respectively. The effectiveness of the new method is validated through extensive tests using data from the MIT-BIH database. The experimental results show that the proposed method is efficient and robust.
Automatic diagnosis of premature ventricular contraction based on Lyapunov exponents and LVQ neural network
S0169260715001789
Background “Our lives are connected by a thousand invisible threads and along these sympathetic fibers, our actions run as causes and return to us as results”. This is Herman Melville's famous quote describing the connections among human lives. To paraphrase Melville, diseases are connected by many functional threads, and along these sympathetic fibers diseases run as causes and return as results. This motivates research on disease–disease similarity and disease networks. Measuring similarities between diseases and constructing a disease network can play an important role in disease function research and in disease treatment. To estimate disease–disease similarities, we proposed a novel literature-based method. Methods and results The proposed method extracted disease–gene relations and disease–drug relations from the literature and used the frequencies of occurrence of the relations as features to calculate similarities among diseases. We also constructed a disease network with the top-ranking disease pairs from our method. The proposed method discovered a larger number of answer disease pairs than other comparable methods and showed the lowest p-value. Conclusions We presume that our method showed good results because it uses literature data, uses all possible gene symbols and drug names as features of a disease, and determines the feature values of diseases from the frequencies of co-occurrence of two entities. The disease–disease similarities from the proposed method can be used in computational biology research that makes use of similarities among diseases.
A literature-driven method to calculate similarities among diseases
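A minimal sketch of the idea above: each disease is represented by the frequencies with which it co-occurs in the literature with gene symbols and drug names, and diseases are then compared pairwise. The abstract does not name the similarity measure, so cosine similarity is an assumed, commonly used choice; the vocabulary below is a toy example.

import numpy as np

def disease_vector(cooccurrence_counts, vocabulary):
    """cooccurrence_counts: dict mapping gene/drug name -> frequency for one disease."""
    return np.array([cooccurrence_counts.get(term, 0.0) for term in vocabulary])

def cosine_similarity(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

vocabulary = ["BRCA1", "TP53", "insulin", "metformin"]          # toy vocabulary
d1 = disease_vector({"BRCA1": 12, "TP53": 30}, vocabulary)
d2 = disease_vector({"TP53": 8, "metformin": 2}, vocabulary)
print(cosine_similarity(d1, d2))   # higher values -> candidate edge in the disease network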
S0169260715001790
This study applied a simulation method to map the temperature distribution based on magnetic resonance imaging (MRI) of individual patients, and investigated the influence of different pelvic tissue types as well as the choice of thermal property parameters on the efficiency of endorectal cooling balloon (ECB). MR images of four subjects with different prostate sizes and pelvic tissue compositions, including fatty tissue and venous plexus, were analyzed. The MR images acquired using endorectal coil provided a realistic geometry of deformed prostate that resembled the anatomy in the presence of ECB. A single slice with the largest two-dimensional (2D) cross-sectional area of the prostate gland was selected for analysis. The rectal wall, prostate gland, peri-rectal fatty tissue, peri-prostatic fatty tissue, peri-prostatic venous plexus, and urinary bladder were manually segmented. Pennes’ bioheat thermal model was used to simulate the temperature distribution dynamics, by using an in-house finite element mesh based solver written in MATLAB. The results showed that prostate size and periprostatic venous plexus were two major factors affecting ECB cooling efficiency. For cases with negligible amount of venous plexus and small prostate, the average temperature in the prostate and neurovascular bundles could be cooled down to 25°C within 30min. For cases with abundant venous plexus and large prostate, the temperature could not reach 25°C at the end of 3h cooling. Large prostate made the cooling difficult to propagate through. The impact of fatty tissue on cooling effect was small. The filling of bladder with warm urine during the ECB cooling procedure did not affect the temperature in the prostate or NVB. In addition to the 2D simulation, in one case a 3D pelvic model was constructed for volumetric simulation. It was found that the 2D slice with the largest cross-sectional area of prostate had the most abundant venous plexus, and was the most difficult slice to cool, thus it may provide a conservative prediction of the cooling effect. This feasibility study demonstrated that the simulation tool could potentially be used for adjusting the setting of ECB for individual patients during hypothermic radical prostatectomy. Further studies using MR thermometry are required to validate the in silico results obtained using simulation.
Investigation of factors affecting hypothermic pelvic tissue cooling using bio-heat simulation based on MRI-segmented anatomic models
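For illustration, a minimal explicit finite-difference step of the Pennes bioheat equation (rho*c*dT/dt = k*laplacian(T) + w_b*rho_b*c_b*(T_a - T) + Q_m) on a uniform 2-D grid is sketched below. The tissue property values and the toy cooled region are generic placeholders, not the patient-specific, MRI-segmented parameters or finite-element solver used in the study.

import numpy as np

def pennes_step(T, dt, dx, k=0.5, rho=1050.0, c=3600.0,
                w_b=0.0005, rho_b=1060.0, c_b=3770.0, T_a=37.0, Q_m=400.0):
    # 5-point Laplacian with periodic boundaries, kept simple for brevity.
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx ** 2
    dTdt = (k * lap + w_b * rho_b * c_b * (T_a - T) + Q_m) / (rho * c)
    return T + dt * dTdt

T = np.full((100, 100), 37.0)         # body temperature everywhere
T[45:55, 45:55] = 10.0                # cooled region next to the balloon (toy geometry)
for _ in range(1000):                 # dt chosen well below the explicit stability limit
    T = pennes_step(T, dt=0.05, dx=0.001)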
S0169260715001807
Breast cancer is the most deadly disease affecting women, and thus it is natural for women aged 40–49 years who have a family history of breast cancer or other related cancers to assess their personal risk of developing familial breast cancer (FBC). Because each individual woman possesses a different level of risk of developing breast cancer, depending on her family history, genetic predispositions and personal medical history, individualized care setting mechanisms need to be identified so that appropriate risk assessment, counseling, screening, and prevention options can be determined by health care professionals. The presented work aims at developing a soft-computing-based medical decision support system using a Fuzzy Cognitive Map (FCM) that assists health care professionals in deciding the individualized care setting mechanism based on the FBC risk level of a given woman. The FCM-based FBC risk management system uses nonlinear Hebbian learning (NHL) to learn causal weights from 40 patient records and achieves a 95% diagnostic accuracy. The results obtained from the proposed model are in concurrence with the comprehensive risk evaluation tool based on the Tyrer–Cuzick model for 38/40 patient cases (95%). Besides, the proposed model identifies high-risk women with higher prediction accuracy than the standard Gail and NSABP models. The testing accuracy of the proposed model using the 10-fold cross-validation technique outperforms other standard machine-learning-based inference engines as well as previous FCM-based risk prediction methods for BC.
A risk management model for familial breast cancer: A new application using Fuzzy Cognitive Map method
S0169260715001819
The paper addresses the issue of non-invasive real-time prediction of hidden inner body temperature variables during therapeutic cooling or heating and proposes a solution that uses computer simulations and machine learning. The proposed approach is applied on a real-world problem in the domain of biomedicine – prediction of inner knee temperatures during therapeutic cooling (cryotherapy) after anterior cruciate ligament (ACL) reconstructive surgery. A validated simulation model of the cryotherapeutic treatment is used to generate a substantial amount of diverse data from different simulation scenarios. We apply machine learning methods on the simulated data to construct a predictive model that provides a prediction for the inner temperature variable based on other system variables whose measurement is more feasible, i.e. skin temperatures. First, we perform feature ranking using the RReliefF method. Next, based on the feature ranking results, we investigate the predictive performance and time/memory efficiency of several predictive modeling methods: linear regression, regression trees, model trees, and ensembles of regression and model trees. Results have shown that using only temperatures from skin sensors as input attributes gives excellent prediction for the temperature in the knee center. Moreover, satisfying predictive accuracy is also achieved using short history of temperatures from just two skin sensors (placed anterior and posterior to the knee) as input variables. The model trees perform the best with prediction error in the same range as the accuracy of the simulated data (0.1°C). Furthermore, they satisfy the requirements for small memory size and real-time response. We successfully validate the best performing model tree with real data from in vivo temperature measurement from a patient undergoing cryotherapy after ACL reconstruction.
Non-invasive real-time prediction of inner knee temperatures during therapeutic cooling
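A minimal sketch of the input/output arrangement described above: predict the inner-knee temperature from a short history of two skin-sensor temperatures. scikit-learn has no M5 model trees, so an ordinary regression tree stands in for them here, and the synthetic signals below are placeholders for the simulated cryotherapy scenarios.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def make_lagged_features(t_anterior, t_posterior, history=5):
    """Build rows [ant(t-h+1..t), post(t-h+1..t)] for every time index t."""
    X = []
    for t in range(history - 1, len(t_anterior)):
        X.append(np.r_[t_anterior[t - history + 1:t + 1],
                       t_posterior[t - history + 1:t + 1]])
    return np.array(X)

rng = np.random.default_rng(0)
t_ant, t_post = rng.normal(20, 2, 500), rng.normal(22, 2, 500)   # toy skin temperatures
t_knee = 0.5 * t_ant + 0.4 * t_post + rng.normal(0, 0.1, 500)    # toy inner-knee target

X = make_lagged_features(t_ant, t_post, history=5)
y = t_knee[4:]
model = DecisionTreeRegressor(min_samples_leaf=20).fit(X, y)
print(model.predict(X[:1]))          # real-time prediction for the latest window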
S0169260715001947
Heat Shock Proteins (HSPs) are essential ingredients for cell growth and viability and are found in all living organisms. HSPs manage the folding and unfolding of proteins, control the quality of newly synthesized proteins, and protect cellular homeostatic processes from environmental stress. On the basis of functionality, HSPs are categorized into six major families, namely: (i) HSP20 or sHSP, (ii) HSP40 or J-protein types, (iii) HSP60 or GroEL/ES, (iv) HSP70, (v) HSP90 and (vi) HSP100. Identification of HSP families and sub-families through conventional approaches is expensive and laborious. It is therefore highly desirable to establish an automatic, robust and accurate computational method for predicting HSPs quickly and reliably. In this regard, a computational model is developed for the prediction of HSP families. In this model, protein sequences are formulated using three discrete methods, namely: Split Amino Acid Composition, Pseudo Amino Acid Composition, and Dipeptide Composition. Several learning algorithms are utilized in order to choose the best one for a high-throughput computational model. A leave-one-out test is applied to assess the performance of the proposed model. The empirical results showed that the support vector machine achieved quite promising results using the Dipeptide Composition feature space. The predicted outcomes of the proposed model are 90.7% accuracy for the HSP dataset and 97.04% accuracy for J-protein types, which are higher than those of existing methods in the literature so far.
Identification of Heat Shock Protein families and J-protein types by incorporating Dipeptide Composition into Chou's general PseAAC
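A minimal sketch of the dipeptide-composition encoding reported above as the best-performing feature space: each protein sequence becomes a 400-dimensional vector of normalised dipeptide frequencies, which can then be fed to an SVM. The kernel and other settings below are assumptions, not the paper's exact configuration.

from itertools import product
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]   # 400 ordered pairs

def dipeptide_composition(sequence):
    seq = sequence.upper()
    counts = dict.fromkeys(DIPEPTIDES, 0)
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair in counts:
            counts[pair] += 1
    total = max(len(seq) - 1, 1)
    return np.array([counts[d] / total for d in DIPEPTIDES])

# X = np.vstack([dipeptide_composition(s) for s in sequences])
# clf = SVC(kernel="rbf").fit(X, family_labels)   # HSP family labels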
S0169260715001959
In this study, we propose a new adaptive method for fusing multiple emotional modalities to improve the performance of an emotion recognition system. Three-channel forehead biosignals along with peripheral physiological measurements (blood volume pressure, skin conductance, and interbeat intervals) were utilized as emotional modalities. Six basic emotions, i.e., anger, sadness, fear, disgust, happiness, and surprise, were elicited by displaying preselected video clips to each of the 25 participants in the experiment; the physiological signals were collected simultaneously. In our multimodal emotion recognition system, the recorded signals were used to form several classification units, each of which identified the emotions independently. The results were then fused using an adaptive weighted linear model to produce the final result. Each classification unit is assigned a weight that is determined dynamically by considering the performance of the unit during the testing phase as well as the training phase results. This dynamic weighting scheme enables the emotion recognition system to adapt itself to each new user. The results showed that the suggested method outperformed the conventional fusion of the features and of the classification units using the majority voting method. In addition, a considerable improvement was also shown compared to systems that used static weighting schemes for fusing the classification units. Using support vector machine (SVM) and k-nearest neighbors (KNN) classifiers, overall classification accuracies of 84.7% and 80%, respectively, were obtained in identifying the emotions. In addition, applying the forehead or the physiological signals alone in the proposed scheme indicates that designing a reliable emotion recognition system is feasible without the need for additional emotional modalities.
Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals
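A minimal sketch of fusing several classification units with a weighted linear model, as described above. The exact dynamic weighting rule is not given in the abstract, so weights proportional to each unit's recent accuracy estimate are an assumption used only for illustration.

import numpy as np

def fuse(unit_scores, unit_accuracies):
    """
    unit_scores: list of per-unit arrays, each of shape (n_classes,), e.g. class
                 scores from the forehead-biosignal unit, the skin-conductance
                 unit, the blood-volume-pressure unit, ...
    unit_accuracies: one accuracy estimate per unit, used as its (dynamic) weight.
    """
    w = np.asarray(unit_accuracies, float)
    w = w / w.sum()                                  # normalise the weights per user
    combined = sum(wi * si for wi, si in zip(w, unit_scores))
    return int(np.argmax(combined))                  # index of the predicted emotion

scores = [np.array([0.1, 0.7, 0.2]), np.array([0.4, 0.3, 0.3])]
print(fuse(scores, unit_accuracies=[0.85, 0.60]))    # -> 1 (second class wins)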
S0169260715001960
This study discusses the development of a computer-aided program that meets the requirements of people with physical disabilities. A number of control modes, such as electrical signals recorded on the scalp and blink control, were combined with a scanning human–machine interface to improve the external input/output device. Moreover, a novel and precise algorithm, which filters noise and reduces misrecognition by the system, is proposed. A convenient assistive device can help people with physical disabilities meet their requirements for independent living and communication with the outside world. The traditional scanning keyboard is modified so that only phonetic notations are typed instead of characters; the time for tone and function selection is thereby saved, and the typing time is also reduced. Barrier-free computer assistive devices and interfaces for people with physical disabilities affecting typing or speech allow them to use a scanning keyboard to select phonetic symbols instead of Chinese characters to express their thoughts. The human–machine interface controls obtain reliable results, with a 99.8% connection success rate and a 95% typing success rate.
The designs and applications of a scanning interface with electrical signal detection on the scalp for the severely disabled
S0169260715001972
Background and objective Early childhood caries (ECC) is a potentially severe disease affecting children all over the world. The available findings are mostly based on a logistic regression model, but data mining, in particular association rule mining, could be used to extract more information from the same data set. Methods ECC data was collected in a cross-sectional analytical study of the 10% sample of preschool children in the South Bačka area (Vojvodina, Serbia). Association rules were extracted from the data by association rule mining. Risk factors were extracted from the highly ranked association rules. Results Discovered dominant risk factors include male gender, frequent breastfeeding (with other risk factors), high birth order, language, and low body weight at birth. Low health awareness of parents was significantly associated to ECC only in male children. Conclusions The discovered risk factors are mostly confirmed by the literature, which corroborates the value of the methods.
Using association rule mining to identify risk factors for early childhood caries
S0169260715001984
In this paper, a new image segmentation method based on Particle Swarm Optimization (PSO) and outlier rejection combined with level sets is proposed. A traditional approach to the segmentation of Magnetic Resonance (MR) images is the Fuzzy C-Means (FCM) clustering algorithm. The membership function of this conventional algorithm is sensitive to outliers and does not integrate the spatial information in the image, so the algorithm is very sensitive to noise and inhomogeneities in the image; moreover, it depends on the initialization of the cluster centers. To improve the outlier rejection and to reduce the noise sensitivity of the conventional FCM clustering algorithm, a novel extended FCM algorithm for image segmentation is presented. In the standard FCM algorithm the initial cluster centers are chosen randomly; with the help of the PSO algorithm, the cluster centers are chosen optimally. Our algorithm also takes into consideration the spatial neighborhood information. These priors are used in the cost function to be optimized. For MR images, the resulting fuzzy clustering is used to set the initial level-set contour. The results confirm the effectiveness of the proposed algorithm.
Improved Fuzzy C-Means based Particle Swarm Optimization (PSO) initialization and outlier rejection with level set methods for MR brain image segmentation
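For reference, the core FCM iteration that the proposed extension builds on is sketched below; the spatial term, outlier rejection and PSO-based centre initialisation described above are deliberately left out of this minimal sketch.

import numpy as np

def fcm(pixels, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """pixels: 1-D float array of intensities, e.g. image.ravel().astype(float)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, n_clusters, replace=False).astype(float)
    for _ in range(n_iter):
        d = np.abs(pixels[:, None] - centers[None, :]) + 1e-12      # (N, C) distances
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
        centers = (u ** m).T @ pixels / np.sum(u ** m, axis=0)       # centre update
    return u, centers        # u[i, k]: membership of pixel i in cluster k

# For the MR image, np.argmax(u, axis=1) gives the fuzzy labelling that can then
# be used to initialise the level-set contour.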
S0169260715001996
Glaucoma is an optic neuropathy that is one of the main causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for the detection of glaucoma from digital fundus images. In the proposed work, the discriminatory parameters of glaucoma infection, such as the cup-to-disc ratio (CDR), neuro-retinal rim (NRR) area and blood vessels in different regions of the optic disc, are used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which show discriminatory changes with the occurrence of glaucoma, are strategically used for training the classifiers to improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm uses an adaptive threshold that exploits local features from the fundus image for segmentation of the optic cup and optic disc, making it invariant to the quality of the image and the noise content, which may lead to wider acceptability. The experimental results indicate that such features are more significant than the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison of the proposed work with existing methods indicates that the proposed approach has improved accuracy in classifying glaucoma from digital fundus images, which may be considered clinically significant.
An adaptive threshold based image processing technique for improved glaucoma detection and classification
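A minimal sketch of two of the discriminatory parameters above, once the optic disc and cup have been segmented as binary masks. CDR is computed here as an area ratio; a vertical-diameter ratio is an equally common convention, so this particular choice is an assumption.

import numpy as np

def cdr_and_nrr(disc_mask, cup_mask):
    disc_mask = disc_mask.astype(bool)
    cup_mask = cup_mask.astype(bool)
    disc_area = np.count_nonzero(disc_mask)
    cup_area = np.count_nonzero(cup_mask)
    cdr = cup_area / disc_area if disc_area else 0.0
    nrr_mask = disc_mask & ~cup_mask            # neuro-retinal rim = disc minus cup
    nrr_area = np.count_nonzero(nrr_mask)
    return cdr, nrr_area

# A larger CDR and a thinner rim (smaller NRR area) are the glaucoma-indicative
# changes that, together with vessel features, feed the classifiers.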
S0169260715002011
Background and objective The discrimination of Alzheimer's disease (AD) and its prodromal stage known as mild cognitive impairment (MCI) from normal control (NC) is important for patients’ timely treatment. The simultaneous use of multi-modality data has been demonstrated to be helpful for more accurate identification. The current study focused on extending a multi-modality algorithm and evaluating the method by identifying AD/MCI. Methods In this study, sparse representation-based classification (SRC), a well-developed method in pattern recognition and machine learning, was extended to a multi-modality classification framework named as weighted multi-modality SRC (wmSRC). Data including three modalities of volumetric magnetic resonance imaging (MRI), fluorodeoxyglucose (FDG) positron emission tomography (PET) and florbetapir PET from the Alzheimer's disease Neuroimaging Initiative database were adopted for AD/MCI classification (113 AD patients, 110 MCI patients and 117 NC subjects). Results Adopting wmSRC, the classification accuracy achieved 94.8% for AD vs. NC, 74.5% for MCI vs. NC, and 77.8% for progressive MCI vs. stable MCI, superior to or comparable with the results of some other state-of-the-art models in recent multi-modality researches. Conclusions The wmSRC method is a promising tool for classification with multi-modality data. It could be effective for identifying diseases from NC with neuroimaging data, which could be helpful for the timely diagnosis and treatment of diseases.
Multi-modality sparse representation-based classification for Alzheimer's disease and mild cognitive impairment
S0169260715002023
This paper proposes an integrated modelling approach for location planning of radiotherapy treatment services based on cancer incidence and road network-based accessibility. Previous research efforts have established travel distance/time barriers as a key factor affecting access to cancer treatment services, as well as epidemiological studies have shown that cancer incidence rates vary with population demography. Our study is built on the evidence that the travel distances to treatment centres and demographic profiles of the accessible regions greatly influence the uptake of cancer radiotherapy (RT) services. An integrated service planning approach that combines spatially-explicit cancer incidence projections, and the placement of new RT services based on road network based accessibility measures have never been attempted. This research presents a novel approach for the location planning of RT services, and demonstrates its viability by modelling cancer incidence rates for different age–sex groups in New South Wales, Australia based on observed cancer incidence trends; and estimations of the road network-based access to current NSW treatment centres. Using three indices (General Efficiency, Service Availability and Equity), we show how the best location for a new RT centre may be chosen when there are multiple competing locations.
An approach to plan and evaluate the location of radiotherapy services and its application in the New South Wales, Australia
S0169260715002163
Cardiopulmonary resuscitation (CPR) is a key first-aid survival technique used to stimulate breathing and keep blood flowing to the heart. Its effective administration can significantly increase the chances of survival for victims of cardiac arrest. LISSA is a serious game designed to complement CPR teaching and also to refresh CPR skills in an enjoyable way. The game presents an emergency situation in a 3D virtual environment, and the player has to save the victim by applying the CPR actions. In this paper, we describe LISSA and its evaluation in a population of 109 nursing undergraduate students enrolled in the Nursing degree of our university. To evaluate LISSA, we performed a randomized controlled trial that compares the classical teaching methodology, composed of self-directed learning for theory plus laboratory sessions with a mannequin for practice, with one that uses LISSA after self-directed learning for theory and before laboratory sessions with a mannequin. From our evaluation we observed that students using LISSA (Groups 2 and 3) obtained significantly better learning acquisition scores than those following traditional classes (Group 1). To evaluate the differences between students of these groups, we performed paired-samples t-tests between Groups 1 and 2 (μ1 = 35.67, μ2 = 47.50, p < 0.05) and between students of Groups 1 and 3 (μ1 = 35.67, μ3 = 50.58, p < 0.05). From these tests we observed that there are significant differences in both cases. We also evaluated student performance on the main steps of the CPR protocol. Students that used LISSA performed better than the ones that did not use it.
Using a serious game to complement CPR instruction in a nurse faculty
S0169260715002175
The aims of this study are summarized in the following items: first, to investigate the class discrimination power of long-term heart rate variability (HRV) features for risk assessment in patients suffering from congestive heart failure (CHF); second, to introduce the most discriminative features of HRV to discriminate low risk patients (LRPs) and high risk patients (HRPs), and third, to examine the influence of feature dimension reduction in order to achieve desired accuracy of the classification. We analyzed two public Holter databases: 12 data of patients suffering from mild CHF (NYHA class I and II), labeled as LRPs and 32 data of patients suffering from severe CHF (NYHA class III and IV), labeled as HRPs. A K-nearest neighbor classifier was used to evaluate the performance of feature set in the classification. Moreover, to reduce the number of features as well as the overlap of the samples of two classes in feature space, we used generalized discriminant analysis (GDA) as a feature extraction method. By applying GDA to the discriminative nonlinear features, we achieved sensitivity and specificity of 100% having the least number of features. Finally, the results were compared with other similar conducted studies regarding the performance of feature selection procedure and classifier besides the number of features used in training.
Generalized discriminant analysis for congestive heart failure risk assessment based on long-term heart rate variability
S0169260715002187
Immunization saves millions of lives against vaccine-preventable diseases. Yet, 24 million children born every year do not receive proper immunization during their first year. UNICEF and WHO have emphasized the need to strengthen the immunization surveillance and monitoring in developing countries to reduce childhood deaths. In this regard, we present a software application called Jeev to track the vaccination coverage of children in rural communities. Jeev synergistically combines the power of smartphones and the ubiquity of cellular infrastructure, QR codes, and national identification cards. We present the design of Jeev and highlight its unique features along with a detailed evaluation of its performance and power consumption using the National Immunization Survey datasets. We are in discussion with a non-profit organization in Haiti to pilot test Jeev in order to study its effectiveness and identify socio-cultural issues that may arise in a large-scale deployment.
A prototype of a novel cell phone application for tracking the vaccination coverage of children in rural communities
S0169260715002199
Setting Infection with Mycobacterium tuberculosis gives a delayed immune response, measured by the tuberculin skin test. We present a new technique for evaluation based on automatic detection and measurement of skin temperature due to infrared emission. Design 34 subjects (46.8±16.9 years) (12/22, M/F) with suspected tuberculosis disease were examined with an IR thermal camera, 48h after tuberculin skin injection. Results In 20 subjects, IR analysis was positive for the tuberculin test. The mean temperature of the injection area was higher, by around 1°C, in the positive group (36.2±1.1°C positive group; 35.1±1.6°C negative group; p <0.02, t-test for unpaired groups). Conclusion IR image analysis achieves a similar estimation of the tuberculin reaction to the visual evaluation, based on the higher temperature due to increased heat radiation from the skin lesion.
Tuberculine reaction measured by infrared thermography
S0169260715002205
The design of a novel non-contact multimedia controller is proposed in this study. Nowadays, multimedia controllers are generally used by patients and nursing assistants in hospitals. Conventional multimedia controllers usually involve manual operation or other physical movements. However, it is difficult for disabled patients to operate a conventional multimedia controller by themselves; they might depend totally on others. Different from other multimedia controllers, the proposed system provides a novel concept of controlling multimedia via visual stimuli, without manual operation. Disabled patients can easily operate the proposed multimedia system by focusing on the control icons of a visual stimulus device, where a commercial tablet is used as the visual stimulus device. Moreover, a wearable and wireless electroencephalogram (EEG) acquisition device is also designed and implemented to easily monitor the user's EEG signals in daily life. Finally, the proposed system has been validated. The experimental results show that the proposed system can effectively measure and extract the EEG features related to visual stimuli, and its information transfer rate is also good. Therefore, the proposed non-contact multimedia controller provides a good prototype of a novel multimedia control scheme.
Design of novel non-contact multimedia controller for disability by using visual stimulus
S0169260715002229
Gait function is traditionally assessed using well-lit, unobstructed walkways with minimal distractions. In patients with subclinical physiological abnormalities, these conditions may not provide enough stress on their ability to adapt to walking. The introduction of challenging walking conditions in gait can induce responses in physiological systems in addition to the locomotor system. There is a need for a device that is capable of monitoring multiple physiological systems in various walking conditions. To address this need, an Android-based gait-monitoring device was developed that enabled the recording of a patient's physiological systems during walking. The gait-monitoring device was tested during self-regulated overground walking sessions of fifteen healthy subjects that included 6 females and 9 males aged 18–35 years. The gait-monitoring device measures the patient's stride interval, acceleration, electrocardiogram, skin conductance and respiratory rate. The data is stored on an Android phone and is analyzed offline through the extraction of features in the time, frequency and time–frequency domains. The analysis of the data depicted multisystem physiological interactions during overground walking in healthy subjects. These interactions included locomotion-electrodermal, locomotion-respiratory and cardiolocomotion couplings. The current results depicting strong interactions between the locomotion system and the other considered systems (i.e., electrodermal, respiratory and cardiovascular systems) warrant further investigation into multisystem interactions during walking, particularly in challenging walking conditions with older adults.
Assessing interactions among multiple physiological systems during walking outside a laboratory: An Android based gait monitor
S0169260715002230
Background and objective Sperm morphology analysis (SMA) is an important factor in the diagnosis of human male infertility. This study presents an automatic algorithm for sperm morphology analysis (to detect malformation) using images of human sperm cells. Methods The SMA method was used to detect and analyze the different parts of the human sperm. First, SMA removes image noise and greatly enhances the contrast of the image. It then recognizes the different parts of the sperm (e.g., head, tail) and analyzes the size and shape of each part. Finally, the algorithm classifies each sperm as normal or abnormal. Malformations in the head, midpiece, and tail of a sperm can be detected by the SMA method. In contrast to other similar methods, the SMA method can work with low-resolution and non-stained images. Furthermore, an image collection created for the SMA is also described in this study. This benchmark consists of 1457 sperm images from 235 patients and is known as the human sperm morphology analysis dataset (HSMA-DS). Results The proposed algorithm was tested on HSMA-DS. The experimental results show the high ability of SMA to detect morphological deformities from sperm images. In this study, the SMA algorithm produced above 90% accuracy in the sperm abnormality detection task. Another advantage of the proposed method is its low computation time (less than 9s), so the expert can quickly decide whether to choose the analyzed sperm or to select another one. Conclusions Automatic and fast analysis of human sperm morphology can be useful during intracytoplasmic sperm injection, helping embryologists to select the best sperm in real time.
An efficient method for automatic morphological abnormality detection from human sperm images
S0169260715002242
Three-dimensional reconstruction of the lung and vessel tree has great significance for 3D observation and quantitative analysis of lung diseases. This paper presents non-sheltered 3D models of the lung and vessel tree based on a supervised semi-3D lung tissue segmentation method. A recursive strategy based on geometric active contours is proposed, instead of the “coarse-to-fine” framework in the existing literature, to extract lung tissues from the volumetric CT slices. In this model, the segmentation of the current slice is supervised by the result of the previous slice, owing to the slight changes in lung tissue between adjacent slices. Through this mechanism, the lung tissues in all the slices are segmented quickly and accurately. The serious problems of left and right lung fusion, caused by partial volume effects, and of the segmentation of pleural nodules can be addressed at the same time during the semi-3D process. The proposed scheme is evaluated on fifteen scans, from eight healthy participants and seven participants suffering from early-stage lung tumors. The results validate the good performance of the proposed method compared with the “coarse-to-fine” framework. The segmented datasets are utilized to reconstruct the non-sheltered 3D models of the lung and vessel tree.
Supervised recursive segmentation of volumetric CT images for 3D reconstruction of lung and vessel tree
S0169260715002254
Objective To survey researchers’ efforts in response to the new and disruptive technology of smartphone medical apps, mapping the research landscape from the literature into a coherent taxonomy, and determining the basic characteristics of this emerging field, namely: the motivations for using smartphone apps in medicine and healthcare, the open challenges that hinder their utility, and the recommendations in the literature to improve the acceptance and use of medical apps. Methods We performed a focused search for every article on (1) smartphone, (2) medical or health-related, (3) app in four major databases: MEDLINE, Web of Science, ScienceDirect, and IEEE Xplore. These databases are deemed broad enough to cover both the medical and the technical literature. Results The final set included 133 articles. Most articles (68/133) are reviews and surveys that refer to actual apps or to the literature to describe medical apps for a specific specialty, disease, or purpose, or to provide a general overview of the technology. Another group (43/133) carried out various studies, from the evaluation of apps to the exploration of desired features when developing them. Few researchers (17/133) presented actual attempts to develop medical apps or shared their experiences in doing so. The smallest portion (5/133) proposed general frameworks addressing the production or operation of apps. Discussion Since 2010, researchers have followed the trend of medical apps in several ways, though leaving areas and aspects in need of further attention. Regardless of their category, the articles focus on the challenges that hinder the full utility of medical apps and recommend mitigations for them. Conclusions Research on smartphone medical apps is active and varied. We hope that this survey contributes to the understanding of the available options and gaps for other researchers joining this line of research.
The landscape of research on smartphone medical apps: Coherent taxonomy, motivations, open challenges and recommendations
S0169260715002266
Breast cancer is the most commonly occurring type of cancer among women, and it is the major cause of female cancer-related deaths worldwide. Its incidence is increasing in developed as well as developing countries. Efficient strategies to reduce the high death rates due to breast cancer include early detection and tumor removal in the initial stages of the disease. Clinical and mammographic examinations are considered the best methods for detecting the early signs of breast cancer; however, these techniques are highly dependent on breast characteristics, equipment quality, and physician experience. Computer-aided diagnosis (CADx) systems have been developed to improve the accuracy of mammographic diagnosis; usually such systems may involve three steps: (i) segmentation; (ii) parameter extraction and selection of the segmented lesions and (iii) lesions classification. Literature considers the first step as the most important of them, as it has a direct impact on the lesions characteristics that will be used in the further steps. In this study, the original contribution is a microcalcification segmentation method based on the geodesic active contours (GAC) technique associated with anisotropic texture filtering as well as the radiologists’ knowledge. Radiologists actively participate on the final step of the method, selecting the final segmentation that allows elaborating an adequate diagnosis hypothesis with the segmented microcalcifications presented in a region of interest (ROI). The proposed method was assessed by employing 1000 ROIs extracted from images of the Digital Database for Screening Mammography (DDSM). For the selected ROIs, the rate of adequately segmented microcalcifications to establish a diagnosis hypothesis was at least 86.9%, according to the radiologists. The quantitative test, based on the area overlap measure (AOM), yielded a mean of 0.52±0.20 for the segmented images, when all 2136 segmented microcalcifications were considered. Moreover, a statistical difference was observed between the AOM values for large and small microcalcifications. The proposed method had better or similar performance as compared to literature for microcalcifications with maximum diameters larger than 460μm. For smaller microcalcifications the performance was limited.
Evaluating geodesic active contours in microcalcifications segmentation on mammograms
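For the quantitative evaluation mentioned above, a minimal sketch of the area overlap measure (AOM) between an automatically segmented microcalcification and its reference outline is given below, taken here, as is common, as the intersection over the union of the two binary masks; the paper's exact definition is not restated in the abstract, so this form is an assumption.

import numpy as np

def area_overlap_measure(segmented, reference):
    segmented = segmented.astype(bool)
    reference = reference.astype(bool)
    intersection = np.count_nonzero(segmented & reference)
    union = np.count_nonzero(segmented | reference)
    return intersection / union if union else 0.0

# AOM = 1 means perfect agreement; the study reports a mean of 0.52 +/- 0.20 over
# all 2136 segmented microcalcifications.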
S0169260715002278
Increased heterogeneity of the lung disturbs pulmonary gas exchange. During bronchoconstriction, inflammation of lung parenchyma or acute respiratory distress syndrome, inhomogeneous lung ventilation can become bimodal and increase the risk of ventilator-induced lung injury during mechanical ventilation. A simple index sensitive to ventilation heterogeneity would be very useful in clinical practice. In the case of bimodal ventilation, the index (H) can be defined as the ratio between the longer and shorter time constant characterising regions of contrary mechanical properties. These time constants can be derived from the Otis model fitted to input impedance (Z in ) measured using forced oscillations. In this paper we systematically investigated properties of the aforementioned approach. The research included both numerical simulations and real experiments with a dual-lung simulator. Firstly, a computational model mimicking the physical simulator was derived and then used as a forward model to generate synthetic flow and pressure signals. These data were used to calculate the input impedance and then the Otis inverse model was fitted to Z in by means of the Levenberg–Marquardt (LM) algorithm. Finally, the obtained estimates of model parameters were used to compute H. The analysis of the above procedure was performed in the frame of Monte Carlo simulations. For each selected value of H, forward simulations with randomly chosen lung parameters were repeated 1000 times. Resulting signals were superimposed by additive Gaussian noise. The estimated values of H properly indicated the increasing level of simulated inhomogeneity, however with underestimation and variation increasing with H. The main factor responsible for the growing estimation bias was the fixed starting vector required by the LM algorithm. Introduction of a correction formula perfectly reduced this systematic error. The experimental results with the dual-lung simulator confirmed potential of the proposed procedure to properly deduce the lung heterogeneity level. We conclude that the heterogeneity index H can be used to assess bimodal ventilation imbalances in cases when this phenomenon dominates lung properties, however future analyses, including the impact of lung tissue viscoelasticity and distributed airway or tissue inhomogeneity on H estimates, as well as studies in the time domain, are advisable.
Analysis of the method for ventilation heterogeneity assessment using the Otis model and forced oscillations
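An illustrative sketch of the estimation step described above: fit an Otis-type two-compartment model to the measured input impedance and compute the heterogeneity index H as the ratio of the longer to the shorter time constant. The parameterisation used here (two parallel R-C branches, Z_i = R_i + 1/(j*w*C_i)) and the starting vector are assumptions; the paper uses the Levenberg-Marquardt algorithm, which scipy's least_squares also provides.

import numpy as np
from scipy.optimize import least_squares

def otis_impedance(params, omega):
    R1, C1, R2, C2 = params
    Z1 = R1 + 1.0 / (1j * omega * C1)
    Z2 = R2 + 1.0 / (1j * omega * C2)
    return Z1 * Z2 / (Z1 + Z2)          # two compartments in parallel

def residuals(params, omega, Z_measured):
    diff = otis_impedance(params, omega) - Z_measured
    return np.concatenate([diff.real, diff.imag])     # LM needs a real residual vector

def heterogeneity_index(omega, Z_measured, start=(2.0, 0.01, 2.0, 0.01)):
    fit = least_squares(residuals, start, args=(omega, Z_measured), method="lm")
    R1, C1, R2, C2 = fit.x
    tau1, tau2 = R1 * C1, R2 * C2
    return max(tau1, tau2) / min(tau1, tau2)           # H >= 1; H = 1 means homogeneous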
S0169260715002291
Group-independent component analysis (GICA) is a well-established blind source separation technique that has been widely applied to study multi-subject functional magnetic resonance imaging (fMRI) data. The group-independent components (GICs) represent the commonness of all of the subjects in the group. Similar to independent component analysis on the single-subject level, the performance of GICA can be improved for multi-subject fMRI data analysis by incorporating a priori information; however, a priori information is not always considered while looking for GICs in existing GICA methods, especially when no obvious or specific knowledge about an unknown group is available. In this paper, we present a novel method to extract the group intrinsic reference from all of the subjects of the group and then incorporate it into the GICA extraction procedure. Comparison experiments between FastICA and GICA with intrinsic reference (GICA-IR) are implemented on the group level with regard to the simulated, hybrid and real fMRI data. The experimental results show that the GICs computed by GICA-IR have a higher correlation with the corresponding independent component of each subject in the group, and the accuracy of activation regions detected by GICA-IR was also improved. These results have demonstrated the advantages of the GICA-IR method, which can better reflect the commonness of the subjects in the group.
A novel fMRI group data analysis method based on data-driven reference extracting from group subjects
S0169260715002308
A confocal microscope provides a sequence of images of the corneal layers and structures at different depths, from which medical clinicians can extract clinical information on the state of health of the patient's cornea. A hybrid model based on snake and particle swarm optimisation (S-PSO) is proposed in this paper to analyse confocal endothelium images. The proposed system is able to pre-process images (including quality enhancement and noise reduction), detect cells, measure cell densities and identify abnormalities in the analysed data sets. Three normal corneal data sets acquired using a confocal microscope and three abnormal confocal endothelium images associated with diseases have been investigated with the proposed system. Promising results are presented, and the performance of this system is compared with manual analysis and two morphology-based approaches. The average differences between the manual cell densities and the automatic cell densities calculated using S-PSO and the two morphology-based approaches are 5%, 7% and 13%, respectively. The developed system will be deployable as a clinical tool to underpin the expertise of ophthalmologists in analysing confocal corneal images.
An efficient intelligent analysis system for confocal corneal endothelium images
S0169260715002321
Background and objective Wireless Capsule Endoscopy (WCE) can image the portions of the human gastrointestinal tract that were previously unreachable for conventional endoscopy examinations. A major drawback of this technology is that a large volume of data must be analyzed in order to detect a disease, which can be time-consuming and burdensome for the clinicians. Consequently, there is a pressing need for computer-aided disease detection schemes to assist the clinicians. In this paper, we propose a real-time, computationally efficient and effective computerized bleeding detection technique applicable to WCE technology. Methods The development of our proposed technique is based on the observation that characteristic patterns appear in the frequency spectrum of WCE frames due to the presence of a bleeding region. Building on these discriminating patterns, we develop a texture-feature-descriptor-based algorithm that operates on the Normalized Gray Level Co-occurrence Matrix (NGLCM) of the magnitude spectrum of the images. A new local texture descriptor called difference average, which operates on the NGLCM, is also proposed. We also perform statistical validation of the proposed scheme. Results The proposed algorithm was evaluated using a publicly available WCE database. The training set consisted of 600 bleeding and 600 non-bleeding frames and was used to train the SVM classifier. On the other hand, 860 bleeding and 860 non-bleeding images were selected from the rest of the extracted images to form the test set. The accuracy, sensitivity and specificity obtained from our method are 99.19%, 99.41% and 98.95%, respectively, which are significantly higher than those of state-of-the-art methods. In addition, the low computational cost of our method makes it suitable for real-time implementation. Conclusion This work proposes a bleeding detection algorithm that employs textural features from the magnitude spectrum of WCE images. Experimental outcomes backed by statistical validations prove that the proposed algorithm is superior to the existing ones in terms of accuracy, sensitivity, specificity and computational cost.
Computer-aided gastrointestinal hemorrhage detection in wireless capsule endoscopy videos
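A hedged sketch of the feature-extraction and classification pipeline described in the abstract. The "difference average" computed here (the NGLCM-weighted mean of |i−j|) is an assumption standing in for the descriptor defined in the paper, and the training data are placeholders to be supplied by the caller.

```python
import numpy as np
from sklearn.svm import SVC

def nglcm_features(frame, levels=32):
    """Texture features from the normalized GLCM (NGLCM) of the FFT
    magnitude spectrum of a grayscale WCE frame."""
    mag = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(frame.astype(float)))))
    q = np.clip((mag / mag.max() * (levels - 1)).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # horizontal neighbour pairs
    glcm /= glcm.sum()                                          # normalized GLCM
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    energy = np.sum(glcm ** 2)
    diff_average = np.sum(glcm * np.abs(i - j))   # assumed form of 'difference average'
    return np.array([contrast, homogeneity, energy, diff_average])

def train_bleeding_detector(bleeding_frames, normal_frames):
    """bleeding_frames / normal_frames: lists of grayscale frames (placeholders)."""
    X = np.array([nglcm_features(f) for f in bleeding_frames + normal_frames])
    y = np.array([1] * len(bleeding_frames) + [0] * len(normal_frames))
    return SVC(kernel="rbf", gamma="scale").fit(X, y)
```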
S0169260715002333
Background Drug–drug interactions have long been an active research area in clinical medicine. In Taiwan, however, the widespread use of traditional Chinese medicines (TCM) presents additional complexity to the topic. Therefore, it is important to examine the interactions between traditional Chinese and western medicine. Objective (1) To create a comprehensive database of multi-herb/western drug interactions indexed according to the ways in which physicians actually practice and (2) to measure this database's impact on the detection of adverse effects between traditional Chinese medicine compounds and western medicines. Methods First, a multi-herb/western medicine drug interactions database was created by separating each TCM compound into its constituent herbs. Each individual herb was then checked against an existing single-herb/western drug interactions database. The data source was the National Health Insurance research database, which spans the years 1998–2011. This study estimated the interaction prevalence rate and further separated the rates according to patient characteristics, distribution by county, and hospital accreditation levels. Finally, this new database was integrated into a computer order entry module of the electronic medical records system of a regional teaching hospital, and its effects were measured for two months. Results The most commonly interacting Chinese herbs were Ephedrae Herba and Angelicae Sinensis Radix/Angelicae Dahuricae Radix. Ephedrae Herba contains active ingredients similar to ephedrine; 15 kinds of traditional Chinese medicine compounds contain Ephedrae Herba. Angelicae Sinensis Radix and Angelicae Dahuricae Radix contain ingredients similar to coumarin, a blood thinner; 9 kinds of traditional Chinese medicine compounds contain Angelicae Sinensis Radix/Angelicae Dahuricae Radix. In the period from 1998 to 2011, the prevalence of herb–drug interactions related to Ephedrae Herba was 0.18%. The most commonly prescribed traditional Chinese compounds were MA SHING GAN SHYR TANG (23.1%), followed by SHEAU CHING LONG TANG (15.5%) and DINQ CHUAN TANG (13.2%). The prevalence of herb–drug interactions related to Angelicae Sinensis Radix/Angelicae Dahuricae Radix was 4.59%. The most common traditional Chinese compound formulas were TSANG EEL SAAN (32%), followed by HUOH SHIANG JENQ CHIH SAAN (31.4%) and SHY WUH TANG (10.7%). Once the multi-herb drug interaction database was deployed in the hospital system, 480 prescriptions indicated a TCM–western drug interaction. Physicians were alerted 24 times during the two months, and these alerts resulted in a prescription change four times (16.7%). Conclusion Due to the unique cultural factors that have resulted in widespread acceptance of both western and traditional Chinese medicine, Taiwan stands well positioned to report on the prevalence of interactions between western drugs and traditional Chinese medicine and to devise ways to reduce their incidence. This study built a multi-herb/western drug interactions database, embedded it inside a hospital clinical information system, and then examined the effects that drug interaction alerts had on clinician prescribing behaviour. The results demonstrated that western drug/traditional Chinese medicine interactions are prevalent and that western-trained physicians tend to change their prescribing behaviour more than traditional Chinese medicine physicians in response to medication interaction alerts.
Interactions between traditional Chinese medicine and western drugs in Taiwan: A population-based study
S0169260715002345
Breast ultrasound (BUS) image segmentation is a challenging task due to speckle noise, the poor quality of ultrasound images, and the size and location of breast lesions. In this paper, we propose a new BUS image segmentation algorithm based on a neutrosophic similarity score (NSS) and a level set algorithm. First, the input BUS image is transformed into the NS domain via three membership subsets T, I and F; then, a similarity score (NSS) is defined and employed to measure the degree of belonging to the true tumor region. Finally, the level set method is used to segment the tumor from the background tissue region in the NSS image. Experiments have been conducted on a variety of clinical BUS images, and several measurements are used to evaluate and compare the proposed method's performance. The experimental results demonstrate that the proposed method is able to segment BUS images effectively and accurately.
A novel breast ultrasound image segmentation algorithm based on neutrosophic similarity score and level set
S0169260715002357
Over the past two decades, the use of telemedicine as a way to provide medical services has grown as communication technologies advance and patients seek more convenient ways to receive care. Because developments within this field are still rapidly evolving, identifying trends within the telemedicine literature is an important task to help delineate future directions of telemedicine research. In this study, we analyzed 7960 telemedicine-related publication records found in the Science Citation Index Expanded database between 1993 and 2012. Bibliometric analyses revealed that while the total growth in telemedicine literature has been significant in the last twenty years, publication activity per country and over time has been variable. While the United States led the world in the cumulative number of telemedicine publications, Norway ranked highest when countries were ordered by publications per capita. We also observed that the growth in the number of publications per year has been inconsistent over the past two decades. Our results identified neuroscience/neurology and nursing as two fields of telemedicine research that have seen considerable growth in interest and are poised to be the focus of research activity in the near future.
Trends in the growth of literature of telemedicine: A bibliometric analysis
S0169260715002369
The visualization of multiple 3D objects has been increasingly required for recent applications in medical fields. Due to the heterogeneity in data representation and data configuration, it is difficult to efficiently render multiple medical objects in high quality. In this paper, we present a novel intermixing scheme for fusion rendering of multiple medical objects that preserves real-time performance. First, we present an in-slab visibility interpolation method for the representation of subdivided slabs. Second, we introduce virtual zSlab, which extends an infinitely thin boundary (such as polygonal objects) into a slab with a finite thickness. Finally, based on virtual zSlab and in-slab visibility interpolation, we propose a slab-based visibility intermixing method with a newly proposed rendering pipeline. Experimental results demonstrate that the proposed method delivers more effective multiple-object renderings in terms of rendering quality, compared to conventional approaches. The proposed intermixing scheme also provides high-quality intermixing results for the visualization of intersecting and overlapping surfaces by resolving aliasing and z-fighting problems. Moreover, two case studies are presented that apply the proposed method to real clinical applications. These case studies show that the proposed method offers the advantages of rendering independence and reusability.
High-quality slab-based intermixing method for fusion rendering of multiple medical objects
S0169260715002382
Enhancing 2D angiography while maintaining a low radiation dose has become an important research topic. However, it is difficult to enhance images while preserving vessel-structure details because X-ray noise and contrast blood vessels in 2D angiography have similar intensity distributions, which can lead to ambiguous images of vessel structures. In this paper, we propose a novel and fast vessel-enhancement method for 2D angiography. We apply filtering in the principal component analysis domain for vessel regions and background regions separately, using assumptions based on energy compaction. First, we identify an approximate vessel region using a Hessian-based method. Vessel and non-vessel regions are then represented sparsely by calculating their optimal bases separately. This is achieved by identifying periodic motion in the vessel region caused by the flow of the contrast medium through the blood vessels when viewed on the time axis. Finally, we obtain noise-free images by removing noise in the new coordinate domain for the optimal bases. Our method was validated for an X-ray system, using 10 low-dose sets for training and 20 low-dose sets for testing. The results were compared with those for a high-dose dataset with respect to noise-free images. The average enhancement rate was 93.11±0.71%. The average processing time for enhancing video comprising 50–70 frames was 0.80±0.35s, which is much faster than the previously proposed technique. Our method is applicable to 2D angiography procedures such as catheterization, which requires rapid and natural vessel enhancement.
Low-dose 2D X-ray angiography enhancement using 2-axis PCA for the preservation of blood-vessel region and noise minimization
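A minimal sketch of the energy-compaction idea behind the method: keeping only the leading principal components of the frame sequence suppresses noise. The paper additionally separates Hessian-detected vessel regions from background and applies a 2-axis scheme; that part is omitted here, so this is an illustrative simplification under those assumptions rather than the authors' algorithm.

```python
import numpy as np

def pca_temporal_denoise(frames, n_components=5):
    """Denoise an angiography sequence by keeping the leading principal
    components of the time-by-pixel matrix (energy-compaction assumption).

    frames : ndarray of shape (T, H, W), the low-dose frame sequence.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, -1).astype(float)          # time x pixels
    mean = X.mean(axis=0, keepdims=True)
    Xc = X - mean
    # SVD of the centred data; rows of Vt are spatial principal components
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = min(n_components, len(S))
    X_hat = U[:, :k] @ np.diag(S[:k]) @ Vt[:k] + mean
    return X_hat.reshape(T, H, W)
```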
S0169260715002394
Introduction Understanding the basic concepts of the physiology and biophysics of cardiac cells can be improved by virtual experiments that illustrate the complex excitation–contraction coupling process in cardiac cells. The aim of this study is to propose a rat cardiac myocyte simulator with which calcium dynamics in the excitation–contraction coupling of an isolated cell can be observed. This model has been used in the course "Mathematical Modeling and Simulation of Biological Systems". In this paper we present the didactic utility of the simulator MioLab®. Methods The simulator enables virtual experiments that can help in studying inhibitors and activators of the sarcoplasmic reticulum and the sodium–calcium exchanger, thereby supporting a better understanding of the effects that medications used to treat arrhythmias have on these compartments. The graphical interfaces were developed not only to facilitate the use of the simulator, but also to promote constructive learning on the subject, since there are animations and videos for each stage of the simulation. The effectiveness of the simulator was tested by a group of graduate students. Results Some examples of simulations are presented in order to describe the overall structure of the simulator. Part of these virtual experiments became an activity for Biomedical Engineering graduate students, who evaluated the simulator based on its didactic quality. As a result, students answered a questionnaire on the usability and functionality of the simulator as a teaching tool. All students performed the proposed activities and classified the simulator as an optimal or good learning tool. In their written answers, students indicated some problems with visualizing graphs as negative characteristics; as positive characteristics, they indicated the simulator's didactic function, especially the tutorials and videos on the topic of this study. Conclusions The results show that the simulator complements the study of the physiology and biophysics of the cardiac cell.
MioLab, a rat cardiac contractile force simulator: Applications to teaching cardiac cell physiology and biophysics
S0169260715002400
Automated fracture detection is an essential part of a computer-aided tele-medicine system. In this paper, we have proposed a unified technique for the detection and evaluation of orthopaedic fractures in long-bone digital X-ray image. We have also developed a software tool that can be conveniently used by paramedics or specialist doctors. The proposed tool first segments the bone region of an input digital X-ray image from its surrounding flesh region and then generates the bone-contour using an adaptive thresholding approach. Next, it performs unsupervised correction of bone-contour discontinuities that might have been generated because of segmentation errors, and finally detects the presence of fracture in the bone. Moreover, the method can also localize the line-of-break for easy visualization of the fracture, identify its orientation, and assess the extent of damage in the bone. Several concepts from digital geometry such as relaxed straightness and concavity index are utilized to correct contour imperfections, and to detect fracture locations and type. Experiments on a database of several long-bone digital X-ray images show satisfactory results.
Long-bone fracture detection in digital X-ray images based on digital-geometric techniques
S0169260715002412
Breast thermography still has inherent limitations that prevent it from being fully accepted as a breast screening modality in medicine. The main challenges of breast thermography are to reduce false positive results and to increase the sensitivity of a thermogram. Further, it is still difficult to obtain information about tumour parameters such as metabolic heat, tumour depth and diameter from a thermogram. However, infrared technology and image processing have advanced significantly and recent clinical studies have shown increased sensitivity of thermography in cancer diagnosis. The aim of this paper is to study numerically the possibilities of extracting information about the tumour depth from steady state thermography and transient thermography after cold stress with no need to use any specific inversion technique. Both methods are based on the numerical solution of Pennes bioheat equation for a simple three-dimensional breast model. The effectiveness of two approaches used for depth detection from steady state thermography is assessed. The effect of breast density on the steady state thermal contrast has also been studied. The use of a cold stress test and the recording of transient contrasts during rewarming were found to be potentially suitable for tumour depth detection during the rewarming process. Sensitivity to parameters such as cold stress temperature and cooling time is investigated using the numerical model and simulation results reveal two prominent depth-related characteristic times which do not strongly depend on the temperature of the cold stress or on the cooling period.
Potentialities of steady-state and transient thermography in breast tumour depth detection: A numerical study
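The study rests on numerical solution of the Pennes bioheat equation; the sketch below solves a 1D explicit finite-difference version of that equation with illustrative tissue constants and boundary conditions, not the 3D breast model, cooling protocol or parameter values used in the paper.

```python
import numpy as np

# Illustrative tissue constants (assumed, not from the study)
rho, c, k = 1000.0, 3600.0, 0.5            # density [kg/m3], heat capacity [J/kg/K], conductivity [W/m/K]
rho_b, c_b, w_b = 1060.0, 3600.0, 1.8e-3   # blood density, blood heat capacity, perfusion rate [1/s]
T_a, Q_m = 37.0, 450.0                     # arterial temperature [degC], metabolic heat [W/m3]

L, N = 0.03, 61                            # 3 cm of tissue, number of grid points
dx = L / (N - 1)
dt = 0.05                                  # time step [s]; k*dt/(rho*c*dx^2) << 0.5, so explicit scheme is stable
alpha = k / (rho * c)

T = np.full(N, 37.0)
T[0] = 20.0                                # cold-stressed skin surface (Dirichlet boundary)
T[-1] = 37.0                               # body-core boundary

for step in range(int(600 / dt)):          # simulate 10 minutes
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2            # discrete Laplacian
    perfusion = w_b * rho_b * c_b * (T_a - T[1:-1])          # Pennes perfusion term
    T[1:-1] += dt * (alpha * lap + (perfusion + Q_m) / (rho * c))

print("temperature one node below the surface after 10 min:", round(T[1], 2), "degC")
```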
S0169260715002424
The Point of Care (PoC) version of the interoperability standard ISO/IEEE11073 (X73) provided a mechanism to remotely control agents through documents X73-10201 and X73-20301. The newer version of X73, oriented to Personal Health Devices (PHD), has no such mechanism. The authors are working toward a common proposal with the PHD Working Group (PHD-WG) in order to adapt the remote control capabilities from X73PoC to X73PHD. However, this theoretical adaptation has to be implemented and tested to evaluate whether or not its inclusion entails an acceptable overhead and extra cost. Such a proof-of-concept assessment is the main objective of this paper. For the sake of simplicity, a weighing scale with a configurable operation was chosen as the use case. First, in a previous stage of the research, the model was defined. Second, the implementation methodology, both in terms of hardware and software, was defined and executed. Third, an evaluation methodology to test the remote control features was defined. Then, a thorough comparison between a weighing scale with and without remote control was performed. The results obtained indicate that, when implementing remote control in a weighing scale, the relative weight of such a feature represents an overhead of as much as 53%, whereas the number of Implementation Conformance Statements (ICSs) to be satisfied by the manufacturer represents as much as 34% with respect to the implementation without remote control. The new feature facilitates remote control of PHDs but, at the same time, increases overhead and costs, and, therefore, manufacturers need to weigh this trade-off. As a conclusion, this proof-of-concept helps in fostering the evolution of the remote control proposal to extend X73PHD and promotes its inclusion as part of the standard, and it illustrates the methodological steps for its extrapolation to other specializations.
Lessons learned from the implementation of remote control for the interoperability standard ISO/IEEE11073-20601 in a standard weighing scale
S0169260715002436
This paper presents an improved wave-based bilateral teleoperation scheme for rehabilitation therapies assisted by robot manipulators. The main feature of this bilateral teleoperator is that both robot manipulators, master and slave, are controlled by impedance. Thus, a pair of motion-based adaptive impedance controllers are integrated into a wave-based configuration, in order to guarantee a stable human–robot interaction and to compensate the position drift, characteristic of the available schemes of bilateral teleoperation. Moreover, the teleoperator stability, in the presence of time delays in the communication channel, is guaranteed because the wave-variable approach is included to encode the force and velocity signals. It should be noted that the proposed structure enables the implementation of several teleoperator schemes, from passive therapies, without the intervention of a human operator on the master side, to fully active therapies where both manipulators interact with humans in a stable manner. The suitable performance of the proposed teleoperator is verified through some results obtained from the simulation of the passive and active-constrained modes, by considering typical tasks in motor-therapy rehabilitation, where an improved behavior is observed when compared to implementations of the classical wave-based approach.
Impedance control in a wave-based teleoperator for rehabilitation motor therapies assisted by robots
S0169260715002448
Fat deposits surrounding the heart are correlated with several health risk factors such as atherosclerosis, carotid stiffness, coronary artery calcification, atrial fibrillation and many others. These deposits vary independently of obesity, which reinforces the need for their direct segmentation and further quantification. However, manual segmentation of these fats has not been widely deployed in clinical practice due to the required human workload and the consequential high cost of physicians and technicians. In this work, we propose a unified method for the autonomous segmentation and quantification of two types of cardiac fat. The segmented fats are termed epicardial and mediastinal, and are separated from each other by the pericardium. Much effort was devoted to achieving minimal user intervention. The proposed methodology mainly comprises registration and classification algorithms to perform the desired segmentation. We compare the performance of several classification algorithms on this task, including neural networks, probabilistic models and decision tree algorithms. Experimental results of the proposed methodology show that the mean accuracy for both epicardial and mediastinal fats is 98.5% (99.5% if the features are normalized), with a mean true positive rate of 98.0%. On average, the Dice similarity index was equal to 97.6%.
A novel approach for the automated segmentation and volume quantification of cardiac fats on computed tomography
S0169260715002461
This paper presents an integrated system for the automatic analysis of mammograms to assist radiologists in confirming their diagnosis in mammography screening. The proposed automated confirmatory system (ACS) can process a digitalized mammogram online, and generates a high-quality filtered segmentation of an image for biological interpretation and a texture-feature based diagnosis. We use a series of image pre-processing and segmentation techniques, including 2D median filtering, a seeded region growing (SRG) algorithm and image contrast enhancement, to remove noise, delete radiopaque artifacts and eliminate the projection of the pectoral muscle from a digitalized mammogram. We also develop an entire-image texture-feature based classification method by combining a Rough-set approach, to extract five fundamental texture features from images, with an Artificial Neural Network technique, to classify a mammogram as: normal; indicating the presence of a benign lump; or representing a malignant tumor. Here, 222 random images from the Mammographic Image Analysis Society (MIAS) database are used for the offline ACS training. Once the system is tuned and trained, it is ready for automated use in the analysis and diagnosis of new mammograms. To test the trained system, a separate set of 100 random images from the MIAS database and another set of 100 random images from the independent BancoWeb database were selected. The proposed ACS is shown to be successful in confirming the diagnosis of mammograms from the two independent databases.
An automated confirmatory system for analysis of mammograms
S0169260715002473
Background and objectives The broad adoption of clinical decision support systems within clinical practice has been hampered mainly by the difficulty in expressing domain knowledge and patient data in a unified formalism. This paper presents a semantic-based approach to the unified representation of healthcare domain knowledge and patient data for practical clinical decision making applications. Methods A four-phase knowledge engineering cycle is implemented to develop a semantic healthcare knowledge base based on an HL7 reference information model, including an ontology to model domain knowledge and patient data and an expression repository to encode clinical decision making rules and queries. A semantic clinical decision support system is designed to provide patient-specific healthcare recommendations based on the knowledge base and patient data. Results The proposed solution is evaluated in the case study of type 2 diabetes mellitus inpatient management. The knowledge base is successfully instantiated with relevant domain knowledge and testing patient data. Ontology-level evaluation confirms model validity. Application-level evaluation of diagnostic accuracy reaches a sensitivity of 97.5%, a specificity of 100%, and a precision of 98%; an acceptance rate of 97.3% is given by domain experts for the recommended care plan orders. Conclusions The proposed solution has been successfully validated in the case study as providing clinical decision support at a high accuracy and acceptance rate. The evaluation results demonstrate the technical feasibility and application prospect of our approach.
Integrating HL7 RIM and ontology for unified knowledge and data representation in clinical decision support systems
S0169260715002631
Because of the increased volume of information available to physicians from advanced medical technology, the information obtained for each symptom with respect to a disease may contain truth, falsity and indeterminacy components. Since a single-valued neutrosophic set (SVNS) consists of the three terms known as the truth-membership, indeterminacy-membership and falsity-membership functions, it is very suitable for representing indeterminate and inconsistent information. Similarity measures play an important role in pattern recognition and medical diagnosis. However, existing medical diagnosis methods can only handle single-period medical diagnosis problems and cannot deal with multi-period medical diagnosis problems involving neutrosophic information. Hence, the purpose of this paper is to propose similarity measures between SVNSs based on the tangent function, and a multi-period medical diagnosis method based on this similarity measure and the weighted aggregation of multi-period information, to solve multi-period medical diagnosis problems with single-valued neutrosophic information. We then compare the tangent similarity measures of SVNSs with existing similarity measures of SVNSs in a numerical example concerning pattern recognition to indicate the effectiveness and rationality of the proposed similarity measures. In the multi-period medical diagnosis method, a proper diagnosis for a patient can be found using the proposed similarity measure between the symptoms and the considered diseases, both represented by SVNSs, together with the weighted aggregation of multi-period information. A multi-period medical diagnosis example is then presented to demonstrate the application of the proposed diagnosis method and to indicate its effectiveness through a comparative analysis. The diagnosis results showed that the developed multi-period medical diagnosis method can help doctors make a proper diagnosis using the comprehensive information of multiple periods.
Multi-period medical diagnosis method using a single valued neutrosophic similarity measure based on tangent function
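An illustrative sketch of a tangent-based SVNS similarity and its weighted multi-period aggregation. The exact form of the measure (here 1 − mean tan(π·(|ΔT|+|ΔI|+|ΔF|)/12)), the symptom/disease values and the period weights are assumptions made for the example, not the paper's definitions or data.

```python
import numpy as np

def tangent_similarity(A, B):
    """Tangent similarity between two SVNSs given as (n, 3) arrays of
    (truth, indeterminacy, falsity) triples. The pi/12 scaling keeps the
    tangent argument within [0, pi/4] and is an assumed form."""
    diff = np.abs(np.asarray(A, float) - np.asarray(B, float)).sum(axis=1)
    return 1.0 - np.mean(np.tan(np.pi * diff / 12.0))

def multi_period_diagnosis(patient_periods, diseases, period_weights):
    """Weighted aggregation of per-period similarities between a patient's
    symptom SVNSs and each candidate disease SVNS."""
    scores = {}
    for name, disease in diseases.items():
        sims = [tangent_similarity(p, disease) for p in patient_periods]
        scores[name] = float(np.dot(period_weights, sims))
    return max(scores, key=scores.get), scores

# Illustrative (made-up) data: 4 symptoms x (T, I, F), observed over 2 periods
patient = [np.array([[0.8, 0.1, 0.1], [0.6, 0.2, 0.2], [0.3, 0.3, 0.5], [0.2, 0.2, 0.7]]),
           np.array([[0.9, 0.1, 0.0], [0.7, 0.2, 0.1], [0.2, 0.3, 0.6], [0.1, 0.2, 0.8]])]
diseases = {"disease A": np.array([[0.9, 0.1, 0.1], [0.7, 0.2, 0.1], [0.2, 0.2, 0.7], [0.1, 0.1, 0.8]]),
            "disease B": np.array([[0.3, 0.3, 0.5], [0.2, 0.4, 0.5], [0.8, 0.1, 0.1], [0.7, 0.2, 0.1]])}
best, scores = multi_period_diagnosis(patient, diseases, period_weights=[0.4, 0.6])
print(best, scores)
```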
S0169260715002643
Bone drilling is a common procedure in many types of surgeries, including orthopedic, neurological and otologic surgeries. Several technologies and control algorithms have been developed to help the surgeon automatically stop the drill before it goes through the boundary of the tissue being drilled. However, most of them rely on thrust force and cutting torque to detect bone layer transitions which has many drawbacks that affect the reliability of the process. This paper describes in detail a bone-drilling algorithm based only on the position control of the drill bit that overcomes such problems and presents additional advantages. The implication of each component of the algorithm in the drilling procedure is analyzed and the efficacy of the algorithm is experimentally validated with two types of bones.
Using an admittance algorithm for bone drilling procedures
S0169260715002655
It is desirable to reduce the excessive radiation exposure to patients in repeated medical CT applications. One of the most effective ways is to reduce the X-ray tube current (mAs) or tube voltage (kVp). However, it is difficult to achieve accurate reconstruction from the resulting noisy measurements. Compared with the conventional filtered back-projection (FBP) algorithm, which leads to excessive noise in the reconstructed images, approaches using statistical iterative reconstruction (SIR) at low mAs provide greater image quality. To eliminate the undesired artifacts and improve reconstruction quality, we propose in this work an improved SIR algorithm for low-dose CT reconstruction, constrained by a modified Markov random field (MRF) regularization. Specifically, the edge-preserving total generalized variation (TGV), which is a generalization of total variation (TV) and can measure image characteristics up to a certain degree of differentiation, was introduced to modify the MRF regularization. In addition, a modified alternating iterative algorithm was utilized to optimize the cost function. Experimental results demonstrated that images reconstructed by the proposed method not only exhibit high accuracy and good resolution properties, but also achieve a higher peak signal-to-noise ratio (PSNR) than those obtained using existing methods.
Low-dose CT statistical iterative reconstruction via modified MRF regularization
S0169260715002667
Background and objective This study proposes an infrastructure with a reporting workflow optimization algorithm (RWOA) in order to interconnect facilities, reporting units and radiologists on a single access interface, to increase the efficiency of the reporting process by decreasing the medical report turnaround time, and to increase the quality of medical reports by determining the optimum match between the inspection and the radiologist in terms of subspecialty, workload and response time. Methods A workflow-centric network architecture with an enhanced caching, querying and retrieving mechanism is implemented by seamlessly integrating a Grid Agent and a Grid Manager into conventional digital radiology systems. The inspection and radiologist attributes are modelled using a hierarchical ontology structure. Attribute preferences rated by radiologists and technical experts are formed into reciprocal matrices, and weights for entities are calculated utilizing the Analytic Hierarchy Process (AHP). The assignment alternatives are processed by relation-based semantic matching (RBSM) and Integer Linear Programming (ILP). Results The results are evaluated based on both real case applications and simulated process data in terms of subspecialty, response time and workload success rates. Results obtained using simulated data are compared with the outcomes obtained by applying Round Robin, Shortest Queue and Random distribution policies. The proposed algorithm is also applied to process data from a real teleradiology application in which the medical reporting workflow was based on manual assignments by the chief radiologist for 6225 inspections. Conclusions RBSM gives the highest subspecialty success rate, and integrating ILP with RBSM ratings as RWOA provides a better response time and workload distribution success rate. RWOA-based image delivery also prevents bandwidth-, storage- or hardware-related bottlenecks and latencies. When compared with the real teleradiology application in which inspection assignments were performed manually, the proposed solution was found to increase the experience success rate by 13.25%, the workload success rate by 63.76% and the response time success rate by 120%. The total response time in the real case application data was improved by 22.39%.
A novel approach to optimize workflow in grid-based teleradiology applications
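The AHP weighting step mentioned in the abstract can be sketched as follows: priority weights are the normalized principal eigenvector of a reciprocal pairwise-comparison matrix, with Saaty's consistency ratio as a sanity check. The comparison values and criteria below are illustrative assumptions, not the ratings collected in the study.

```python
import numpy as np

def ahp_weights(M):
    """Priority weights from a reciprocal pairwise-comparison matrix:
    the normalized principal eigenvector (standard AHP procedure)."""
    M = np.asarray(M, float)
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w /= w.sum()
    n = M.shape[0]
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index, n <= 5
    CI = (vals.real.max() - n) / (n - 1) if n > 2 else 0.0
    CR = CI / RI if RI else 0.0                            # consistency ratio
    return w, CR

# Illustrative comparison of three assignment criteria:
# subspecialty match vs. workload vs. response time
M = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
weights, CR = ahp_weights(M)
print("criterion weights:", np.round(weights, 3), "consistency ratio:", round(CR, 3))
```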
S0169260715002679
Cataract is defined as a lenticular opacity presenting usually with poor visual acuity. It is one of the most common causes of visual impairment worldwide. Early diagnosis demands the expertise of trained healthcare professionals, which may present a barrier to early intervention due to underlying costs. To date, studies reported in the literature utilize a single learning model for retinal image classification in grading cataract severity. We present an ensemble learning based approach as a means to improving diagnostic accuracy. Three independent feature sets, i.e., wavelet-, sketch-, and texture-based features, are extracted from each fundus image. For each feature set, two base learning models, i.e., Support Vector Machine and Back Propagation Neural Network, are built. Then, the ensemble methods, majority voting and stacking, are investigated to combine the multiple base learning models for final fundus image classification. Empirical experiments are conducted for cataract detection (two-class task, i.e., cataract or non-cataractous) and cataract grading (four-class task, i.e., non-cataractous, mild, moderate or severe) tasks. The best performance of the ensemble classifier is 93.2% and 84.5% in terms of the correct classification rates for cataract detection and grading tasks, respectively. The results demonstrate that the ensemble classifier outperforms the single learning model significantly, which also illustrates the effectiveness of the proposed approach.
Exploiting ensemble learning for automatic cataract detection and grading
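A hedged sketch of the base-learner-per-feature-set plus majority-voting scheme described above, using scikit-learn; the MLP stands in for the back-propagation neural network, the feature arrays and labels are placeholders, and the hyperparameters are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def train_ensemble(feature_sets, y):
    """Train one SVM and one back-propagation network (MLP) per feature set.

    feature_sets : dict name -> (n_samples, n_features) array
    y            : integer class labels (e.g. 0..3 for the grading task)
    """
    models = []
    for X in feature_sets.values():
        models.append(SVC(kernel="rbf", gamma="scale").fit(X, y))
        models.append(MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y))
    return models

def predict_majority(models, feature_sets):
    """Majority vote over the 2 x (number of feature sets) base learners."""
    X_list = list(feature_sets.values())
    votes = []
    for m_idx, model in enumerate(models):
        X = X_list[m_idx // 2]          # each feature set feeds two base models
        votes.append(model.predict(X))
    votes = np.stack(votes)             # shape: (n_models, n_samples)
    # most frequent integer label per sample
    return np.array([np.bincount(col).argmax() for col in votes.T])
```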
S0169260715002680
Despite the widespread use of cardiotocography in foetal monitoring, the evaluation of foetal status suffers from considerable inter- and intra-observer variability. In order to overcome the main limitations of visual cardiotocographic assessment, computerised methods to analyse cardiotocographic recordings have recently been developed. In this study, a new software package for automated analysis of the foetal heart rate is presented. It provides an automatic procedure for measuring the most relevant parameters derivable from cardiotocographic traces. Simulated and real cardiotocographic traces were analysed to test software reliability. In the artificial traces, we simulated a set number of events (accelerations, decelerations and contractions) to be recognised. In the case of real signals, the results of the computerised analysis were compared with the visual assessment performed by 18 expert clinicians, and three performance indexes were computed to gain information about the performance of the proposed software. The software showed preliminary performance that we judged satisfactory, in that the results completely matched the requirements, as proved by the tests on artificial signals in which all simulated events were detected by the software. Performance indexes computed in comparison with the obstetricians' evaluations are, on the contrary, less satisfactory; they yielded the following values: sensitivity of 93%, positive predictive value of 82% and accuracy of 77%. This most probably arises from the high variability of the trace annotations carried out by clinicians.
Software for computerised analysis of cardiotocographic traces
S0169260715002692
Background Accurate segmentation of the human head on medical images is an important process in a wide array of applications such as diagnosis, facial surgery planning, prosthesis design, and forensic identification. Objectives In this study, a Bayesian method for segmentation of facial tissues is presented. Segmentation classes include muscle, bone, fat, air and skin. Methods The method presented incorporates information fusion from multiple modalities, modelling of image resolution (measurement blurring) and image noise, and two priors helping to reduce noise and partial volume effects. The image resolution modelling employed facilitates resolution enhancement and super-resolution capabilities during image segmentation. Regularization based on isotropic and directional Markov Random Field priors is integrated. The Bayesian model is solved iteratively, yielding tissue class labels at every voxel of the image. Sub-methods, as variations of the main method, are generated by using combinations of the models. Results Testing of the sub-methods is performed on two patients using single-modality three-dimensional (3D) images (magnetic resonance, MR, or computerized tomography, CT) as well as registered MR-CT images with information fusion. Numerical, visual and statistical analyses of the methods are conducted. High segmentation accuracy values are obtained by the use of the image resolution and partial volume models as well as information fusion from MR and CT images. The methods are also compared with the Bayesian segmentation method we proposed in a previous study. The performance is found to be similar to our previous Bayesian approach, but the methods presented here eliminate the ad hoc parameter tuning needed by the previous approach, which is dependent on the imaging system and data acquisition settings. Conclusions The Bayesian approach presented provides resolution-enhanced segmentation of very thin structures of the human head. Meanwhile, the free parameters of the algorithm can be adjusted for different imaging systems and data acquisition settings in a more systematic way compared with our previous study.
Bayesian segmentation of human facial tissue using 3D MR-CT information fusion, resolution enhancement and partial volume modelling
S0169260715002709
Glaucoma is a disease of the retina and one of the most common causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for glaucoma diagnosis from digital fundus images. Wavelet feature extraction is followed by optimized genetic feature selection combined with several learning algorithms and various parameter settings. Unlike existing research works, where features are extracted from the complete fundus image or a sub-image of the fundus, this work extracts features from the segmented, blood-vessel-removed optic disc to improve the accuracy of identification. The experimental results presented in this paper indicate that the wavelet features of the segmented optic disc image are clinically more significant than features of the whole or sub fundus image in the detection of glaucoma from fundus images. The accuracy of glaucoma identification achieved in this work is 94.7%, and a comparison with existing methods of glaucoma detection from fundus images indicates that the proposed approach has improved classification accuracy.
Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image
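A minimal sketch of wavelet-energy feature extraction from a segmented, vessel-removed optic-disc image followed by a classifier. The wavelet family, decomposition level and use of an SVM are assumptions for illustration; the paper's genetic feature selection step is omitted.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(disc_roi, wavelet="db3", level=2):
    """Sub-band energy features from a 2-level 2D DWT of the optic-disc ROI."""
    coeffs = pywt.wavedec2(disc_roi.astype(float), wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]                 # approximation-band energy
    for cH, cV, cD in coeffs[1:]:                     # detail sub-bands per level
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.array(feats)

def train_glaucoma_classifier(discs, labels):
    """discs  : list of segmented, blood-vessel-removed optic-disc images (placeholder)
    labels : glaucoma / normal annotation per image (placeholder)"""
    X = np.array([wavelet_energy_features(d) for d in discs])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)
```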
S0169260715002710
Eye blinks are one of the most influential artifact sources in electroencephalogram (EEG) recorded from frontal channels, and thereby detecting and rejecting eye blink artifacts is regarded as an essential procedure for improving the quality of EEG data. In this paper, a novel method to detect eye blink artifacts from a single-channel frontal EEG signal was proposed by combining digital filters with a rule-based decision system, and its performance was validated using an EEG dataset recorded from 24 healthy participants. The proposed method has two main advantages over the conventional methods. First, it uses single-channel EEG data without the need for electrooculogram references. Therefore, this method could be particularly useful in brain–computer interface applications using headband-type wearable EEG devices with a few frontal EEG channels. Second, this method could estimate the ranges of eye blink artifacts accurately. Our experimental results demonstrated that the artifact range estimated using our method was more accurate than that from the conventional methods, and thus, the overall accuracy of detecting epochs contaminated by eye blink artifacts was markedly increased as compared to conventional methods. The MATLAB package of our library source codes and sample data, named Eyeblink Master, is open for free download.
Detection of eye blink artifacts from single prefrontal channel electroencephalogram
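A simplified sketch of the filter-plus-rule idea for single-channel blink detection: low-pass filtering isolates the slow blink waveform, and supra-threshold runs of plausible duration are reported as artifact ranges. The cut-off frequency, amplitude threshold and duration bounds are assumed values, not the paper's tuned rule set.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_blinks(eeg, fs, amp_thresh=75.0, min_dur=0.1, max_dur=0.5):
    """Detect eye-blink artifact ranges in one frontal EEG channel.

    eeg        : 1-D signal in microvolts
    fs         : sampling rate [Hz]
    amp_thresh : amplitude threshold [uV]            (assumed value)
    min_dur, max_dur : plausible blink durations [s] (assumed values)
    Returns a list of (start_sample, end_sample) artifact ranges.
    """
    b, a = butter(4, 10.0 / (fs / 2.0), btype="low")    # blinks live below ~10 Hz
    slow = filtfilt(b, a, eeg)
    above = slow > amp_thresh
    padded = np.r_[False, above, False]                  # pad so runs always close
    changes = np.flatnonzero(np.diff(padded.astype(int)))
    ranges = []
    for s, e in zip(changes[::2], changes[1::2]):        # (rise, fall) index pairs
        if min_dur <= (e - s) / fs <= max_dur:
            ranges.append((int(s), int(e)))
    return ranges
```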
S0169260715002722
This paper presents a tool for automatic assessment of skeletal bone age according to a modified version of the Tanner and Whitehouse (TW2) clinical method. The tool is able to provide an accurate bone age assessment in the range 0–6 years by processing epiphysial/metaphysial ROIs with image-processing techniques and assigning a TW2 stage to each ROI by means of hidden Markov models. The system was evaluated on a set of 360 X-rays (180 for males and 180 for females), achieving a high success rate in bone age evaluation (mean error of 0.41±0.33 years, comparable to human error) as well as outperforming other effective methods. The paper also describes the graphical user interface of the tool, which is also released, to support and speed up clinicians' practice when dealing with bone age assessment.
Modeling skeletal bone development with hidden Markov models
S0169260715002734
To facilitate the performance comparison of new methods for sleep pattern analysis, publicly available datasets with quality content are very important and useful. We introduce an open-access comprehensive sleep dataset called ISRUC-Sleep. The data were obtained from human adults, including healthy subjects, subjects with sleep disorders, and subjects under the effect of sleep medication. Each recording was randomly selected from among PSG recordings acquired by the Sleep Medicine Centre of the Hospital of Coimbra University (CHUC). The dataset comprises three groups of data: (1) data concerning 100 subjects, with one recording session per subject; (2) data gathered from 8 subjects, with two recording sessions per subject; and (3) data collected from one recording session for each of 10 healthy subjects. The polysomnography (PSG) recordings associated with each subject were visually scored by two human experts. Compared with existing sleep-related public datasets, ISRUC-Sleep provides data from a reasonable number of subjects with different characteristics, such as data useful for studies involving changes in the PSG signals over time, and data from healthy subjects useful for studies comparing healthy subjects with patients suffering from sleep disorders. This dataset was created to complement existing datasets by providing easily usable recordings with some characteristics not yet covered. ISRUC-Sleep can be useful for the analysis of new contributions: (i) in biomedical signal processing; (ii) in the development of ASSC methods; and (iii) in sleep physiology studies. To allow new contributions that use this dataset as a benchmark to be evaluated and compared, results of applying a subject-independent automatic sleep stage classification (ASSC) method to the ISRUC-Sleep dataset are presented.
ISRUC-Sleep: A comprehensive public dataset for sleep researchers
S0169260715002746
Background and objective Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated in a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective is to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician's point of view. Methods Three latent Dirichlet allocation models (50 topics, 100 topics, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians and graduate students. Wilcoxon Signed-Rank Tests for Paired Samples were used to evaluate differences between different topic models, while differences in performance between students and primary care physicians (PCPs) were tested using Mann–Whitney U tests for each of the tasks. Results While the 150-topic model produced the best log likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides better relative granularity of discovered semantic themes for the data set used in this study. Models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students in both tasks. Conclusion This work establishes a baseline of interpretability for topic models trained with clinical reports, and provides insights on the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find discovered topics more coherent and representative of clinical reports relative to students, warranting further research into their use for automatic summarization.
Evaluating topic model interpretability from a primary care physician perspective
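For orientation, a short scikit-learn sketch of fitting LDA models of several sizes to a collection of clinical reports and extracting the top words per topic for manual coherence review; the vectorizer settings are assumptions and the corpus is a placeholder supplied by the caller.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def fit_topic_models(reports, topic_counts=(50, 100, 150), n_top_words=10):
    """Fit LDA models of several sizes to a list of clinical report strings
    and return the top words per topic for manual coherence review."""
    vec = CountVectorizer(stop_words="english", min_df=5)   # assumed preprocessing
    X = vec.fit_transform(reports)
    vocab = vec.get_feature_names_out()
    summaries = {}
    for k in topic_counts:
        lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
        topics = []
        for comp in lda.components_:
            top = comp.argsort()[::-1][:n_top_words]        # highest-weight words
            topics.append([vocab[i] for i in top])
        summaries[k] = topics
    return summaries
```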
S0169260715002758
Anatomical cine cardiovascular magnetic resonance (CMR) imaging is widely used to assess systolic cardiac function because of its high soft tissue contrast. Assessment of diastolic LV function has not been performed routinely due to the complex and time-consuming procedures involved. This study presents a semi-automated assessment of left ventricular (LV) diastolic function using anatomical short-axis cine CMR images. The proposed method is based on three main steps: (1) non-rigid registration, which yields a sequence of endocardial boundary points over the cardiac cycle based on a user-provided contour on the first frame; (2) LV volume and filling rate computations over the cardiac cycle; and (3) automated detection of the peak values of the early (E) and late ventricular (A) filling waves. In 47 patients, cine CMR imaging and Doppler-echocardiographic imaging were performed. CMR measurements of the peak values of the E and A waves as well as the deceleration time were compared with the corresponding values obtained with Doppler echocardiography. For the E/A ratio, the proposed algorithm for CMR yielded a Cohen's kappa measure of 0.70 and a Gwet's AC1 coefficient of 0.70. Conclusion: Semi-automated assessment of LV diastolic function using anatomical short-axis cine CMR images provides mitral inflow measurements comparable to Doppler echocardiography.
Detecting left ventricular impaired relaxation in cardiac MRI using moving mesh correspondences
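Steps (2) and (3) can be sketched as follows: the filling-rate curve is the time derivative of the LV volume curve, and the E and A peaks are picked from its diastolic portion. The end-systole and peak-selection heuristics below are simplified assumptions, not the paper's detection logic.

```python
import numpy as np
from scipy.signal import find_peaks

def ea_filling_peaks(volume, frame_times):
    """Early (E) and atrial (A) peak filling rates from an LV volume curve.

    volume      : LV volume per cine frame [ml]
    frame_times : acquisition time of each frame [s]
    Returns (E, A, E/A); heuristics are illustrative.
    """
    dvdt = np.gradient(volume, frame_times)      # filling-rate curve [ml/s]
    es = int(np.argmin(volume))                  # end-systole ~ minimum volume
    diastolic = dvdt[es:]                        # filling occurs after end-systole
    peaks, props = find_peaks(diastolic, height=0.0)
    if len(peaks) < 2:
        raise ValueError("could not identify two distinct filling waves")
    heights = props["peak_heights"]
    E = heights[0]                               # first wave: early filling
    A = heights[-1]                              # last wave before the next systole
    return E, A, E / A
```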
S0169260715002783
In this paper, we present the dfcomb R package for the implementation of a single prospective clinical trial or simulation studies of phase I combination trials in oncology. The aim is to present the features of the package and to illustrate how to use it in practice through different examples. The use of combination clinical trials is growing, but the implementation of existing model-based methods is complex, so this package should promote the use of innovative adaptive designs for early-phase combination trials.
dfcomb: An R-package for phase I/II trials of drug combinations
S0169260715002795
Although direct volume rendering (DVR) has become a commodity, effective rendering of interesting features is still a challenge. In medicine, one of the most active DVR application fields, radiologists have used DVR for the diagnosis of lesions or diseases that should be visualized distinguishably from other surrounding anatomical structures. One of the most frequent and important radiologic tasks is the detection of lesions, usually constrictions, in complex tubular structures. In this paper, we propose a 3D spatial field for the effective visualization of constricted tubular structures, termed a stenosis map, which stores the degree of constriction at each voxel. Constrictions within tubular structures are quantified using newly proposed measures (i.e. a line similarity measure and a constriction measure) based on localized structure analysis, and classified with a proposed transfer function mapping the degree of constriction to color and opacity. We show the results of applying our method to the visualization of coronary artery stenoses. We present performance evaluations using twenty-eight clinical datasets, demonstrating the high accuracy and efficacy of our proposed method. The ability of our method to saliently visualize constrictions within tubular structures and to interactively adjust the visual appearance of the constrictions proves to deliver a substantial aid in radiologic practice.
Stenosis map for volume visualization of constricted tubular structures: Application to coronary artery stenosis
S0169260715002953
Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The “immersiveness” provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience.
A cross-platform solution for light field based 3D telemedicine
S0169260715002965
Today, smart mobile devices (telephones and tablets) are very commonly used due to their powerful hardware and useful features. According to an eMarketer report, in 2014 there were 1.76 billion smartphone users (excluding users of tablets) in the world; it is predicted that this number will rise by 15.9% to 2.04 billion in 2015. It is thought that these devices can be used successfully in biomedical applications. A wireless blood pressure measuring device used together with a smart mobile device was developed in this study. By means of an interface developed for smart mobile devices with Android and iOS operating systems, a smart mobile device was used both as an indicator and as a control device. The cuff communicating with this device through Bluetooth was designed to measure blood pressure via the arm. A digital filter was used on the cuff instead of the traditional analog signal processing and filtering circuit. The newly developed blood pressure measuring device was tested on 18 patients and 20 healthy individuals of different ages under a physician's supervision. When the test results were compared with the measurements made using a sphygmomanometer, it was shown that an average 93.52% accuracy in sick individuals and 94.53% accuracy in healthy individuals could be achieved with the new device.
Development of a wireless blood pressure measuring device with smart mobile device
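The abstract does not detail the device's estimation algorithm; as background, the sketch below shows a generic oscillometric approach (band-pass the cuff-pressure signal, take the oscillation envelope, read MAP at maximum oscillation and systolic/diastolic pressures at fixed envelope fractions). The characteristic ratios are assumed textbook values, not the device's.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def oscillometric_bp(cuff_pressure, fs, sys_ratio=0.55, dia_ratio=0.85):
    """Generic oscillometric estimate of SBP, DBP and MAP during cuff deflation.

    cuff_pressure : sampled cuff pressure [mmHg] during deflation
    fs            : sampling rate [Hz]
    The characteristic ratios are assumed values, not the device's algorithm.
    """
    # isolate the small oscillometric pulses riding on the deflation ramp
    b, a = butter(2, [0.5 / (fs / 2), 10.0 / (fs / 2)], btype="band")
    pulses = filtfilt(b, a, cuff_pressure)
    envelope = np.abs(hilbert(pulses))               # oscillation amplitude envelope
    i_map = int(np.argmax(envelope))
    MAP = cuff_pressure[i_map]
    # systolic: pressure where the envelope first reaches sys_ratio * max (before MAP)
    thr_s = sys_ratio * envelope[i_map]
    i_sys = int(np.argmax(envelope[:i_map] >= thr_s)) if i_map > 0 else 0
    # diastolic: pressure where the envelope falls to dia_ratio * max (after MAP)
    thr_d = dia_ratio * envelope[i_map]
    after = envelope[i_map:] <= thr_d
    i_dia = i_map + (int(np.argmax(after)) if after.any() else len(after) - 1)
    return cuff_pressure[i_sys], cuff_pressure[i_dia], MAP
```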