Dataset columns: FileName, Abstract, Title
S0167947315002017
We study the properties of the Fused Lasso Signal Approximator (FLSA) for estimating a blocky signal sequence with additive noise. We transform the FLSA into an ordinary Lasso problem and find that, in general, the resulting design matrix does not satisfy the irrepresentable condition, which is known to be an almost necessary and sufficient condition for exact pattern recovery. We give necessary and sufficient conditions on the expected signal pattern such that the irrepresentable condition holds in the transformed Lasso problem. However, these conditions turn out to be very restrictive. We apply the newly developed preconditioning method, the Puffer Transformation (Jia and Rohe, 2015), to the transformed Lasso and call the new procedure the preconditioned fused Lasso. We give non-asymptotic results for this method, showing that as long as the signal-to-noise ratio is not too small, our preconditioned fused Lasso estimator always recovers the correct pattern with high probability. The theoretical results give insight into what controls the ability to recover the pattern: it is the noise level rather than the length of the signal sequence. Simulations further confirm our theorems and visualize the significant improvement of the preconditioned fused Lasso estimator over the vanilla FLSA in exact pattern recovery.
On stepwise pattern recovery of the fused Lasso
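Illustration (not from the paper): the Puffer transformation mentioned in the abstract above can be sketched in a few lines of Python. Given a design matrix with thin SVD X = U D V^T, the transform F = U D^{-1} U^T is applied to both X and y before an ordinary Lasso fit; the toy design, penalty level and data below are illustrative assumptions and do not reproduce the paper's FLSA-to-Lasso construction.

import numpy as np
from sklearn.linear_model import Lasso

def puffer_lasso(X, y, alpha=0.1):
    # Thin SVD of the design: X = U diag(d) Vt (assumes n >= p and full column rank)
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    F = U @ np.diag(1.0 / d) @ U.T      # Puffer transformation F = U D^{-1} U^T (Jia and Rohe, 2015)
    model = Lasso(alpha=alpha, fit_intercept=False).fit(F @ X, F @ y)   # preconditioned Lasso
    return model.coef_

rng = np.random.default_rng(0)
n, p = 200, 50
Sigma = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)          # correlated toy design
X = rng.standard_normal((n, p)) @ np.linalg.cholesky(Sigma).T
beta = np.zeros(p); beta[:5] = 2.0                       # sparse truth
y = X @ beta + 0.3 * rng.standard_normal(n)
print(np.flatnonzero(np.abs(puffer_lasso(X, y)) > 1e-8)) # estimated support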
S0167947315002066
Financial data are often thick-tailed and exhibit skewness. The versatile Generalized Tukey Lambda (GTL) distribution is able to capture varying degrees of skewness in thin- or thick-tailed data. Such versatility makes the GTL distribution potentially useful in the area of financial risk measurement. Moreover, for GTL-distributed random variables, the familiar risk measures of Value at Risk (VaR) and Expected Shortfall (ES) may be expressed in simple analytical forms. It turns out that, both analytically and through Monte Carlo simulations, GTL’s VaR and ES differ significantly from those of other flexible distributions. The asymptotic properties of the maximum likelihood estimator of the GTL parameters are also examined. In order to study risk in financial data, the GTL distribution is inserted into a GARCH model. This GTL-GARCH model is estimated with data on daily returns of GE stock, demonstrating that, for certain data sets, GTL may capture risk measurements better than other distributions. Online supplementary materials consist of appendices with proofs and additional Monte Carlo results, the data used in this study, an R script for fitting GTL densities by maximum likelihood, and an R script for estimation of the GTL-GARCH model (see Appendix A).
Linking Tukey’s legacy to financial risk measurement
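Illustration (not from the paper): for any continuous loss distribution given by its quantile function Q, the VaR described in the abstract above is Q(alpha) and the ES is the tail average ES_alpha = (1/(1-alpha)) * integral from alpha to 1 of Q(u) du. The sketch below computes both by numerical quadrature; a Student-t quantile is used as a heavy-tailed stand-in because the GTL quantile function is not reproduced here.

import numpy as np
from scipy import stats, integrate

def var_es_from_quantile(Q, alpha=0.99):
    # VaR_alpha = Q(alpha); ES_alpha = (1/(1-alpha)) * int_alpha^1 Q(u) du (continuous losses)
    var = Q(alpha)
    tail_integral, _ = integrate.quad(Q, alpha, 1.0)
    return var, tail_integral / (1.0 - alpha)

Q = lambda u: stats.t.ppf(u, df=4)     # stand-in heavy-tailed loss quantile (not the GTL itself)
print(var_es_from_quantile(Q, alpha=0.99))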
S0167947315002595
In large scale genomic analyses dealing with detecting genotype–phenotype associations, such as genome wide association studies (GWAS), it is desirable to have numerically and statistically robust procedures to test the stochastic independence null hypothesis against certain alternatives. Motivated by a special case in a GWAS, a novel test procedure called the Correlation Profile Test (CPT) is developed for testing genomic associations with failure-time phenotypes subject to right censoring and competing risks. The performance and operating characteristics of CPT are investigated and compared with existing approaches through a simulation study and on a real dataset. Compared to popular choices of semiparametric and nonparametric methods, CPT has three advantages: it is numerically more robust because it relies solely on sample moments; it is more robust against violation of the proportional hazards condition; and it is more flexible in handling various failure and censoring scenarios. CPT is a general approach to testing the null hypothesis of stochastic independence between a failure event point process and any random variable; thus it is widely applicable beyond genomic studies.
Exploratory failure time analysis in large scale genomics
S0167947315002716
Testing whether two or more independent samples arise from a common distribution is a classic problem in statistics. Several multivariate two-sample tests of equality are based on graphs such as the minimum spanning tree, nearest neighbor, and optimal nonbipartite perfect matching. Here, the samples are pooled and the test statistic is the number of edges in the graph that connect points with different sample identities. These tests are typically unbiased and perform well when estimates of underlying probability densities are poor. However, these tests have not been thoroughly studied when the data are very high dimensional or in the multisample case. We introduce the use of orthogonal perfect matchings for testing equality in distribution. A suite of Monte Carlo simulations on artificial and real data shows that orthogonal perfect matchings and spanning trees typically have higher power than other graphs and are also more effective at discerning differences in covariance structure between samples than other nonparametric tests such as the energy and triangle tests.
Graph-theoretic multisample tests of equality in distribution for high dimensional data
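Illustration (not from the paper): the graph-based statistic described above, the number of edges joining points with different sample identities, can be sketched with a minimum spanning tree (the classical Friedman-Rafsky construction) rather than the orthogonal perfect matchings the paper introduces; the graph choice, permutation count and toy data are assumptions.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_cross_edges(X, labels):
    # count MST edges that connect points from different samples
    T = minimum_spanning_tree(squareform(pdist(X))).tocoo()
    return int(np.sum(labels[T.row] != labels[T.col]))

def permutation_pvalue(X, labels, n_perm=499, seed=0):
    rng = np.random.default_rng(seed)
    obs = mst_cross_edges(X, labels)
    perm = [mst_cross_edges(X, rng.permutation(labels)) for _ in range(n_perm)]
    # few cross edges indicate well-separated samples, so small values are evidence against equality
    return (1 + sum(p <= obs for p in perm)) / (n_perm + 1)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(0.8, 1, (30, 5))])
labels = np.repeat([0, 1], 30)
print(permutation_pvalue(X, labels))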
S0167947315002935
Change point models seek to fit a piecewise regression model with unknown breakpoints to a data set whose parameters are suspected to change through time. However, the exponential number of possible solutions to a multiple change point problem requires an efficient algorithm if long time series are to be analyzed. A sequential Bayesian change point algorithm is introduced that provides uncertainty bounds on both the number and location of change points. The algorithm is able to quickly update itself in linear time as each new data point is recorded and uses the exact posterior distribution to infer whether or not a change point has been observed. Simulation studies illustrate how the algorithm performs under various parameter settings, including detection speeds and error rates, and allow for comparison with several existing multiple change point algorithms. The algorithm is then used to analyze two real data sets, including global surface temperature anomalies over the last 130 years.
An exact approach to Bayesian sequential change point detection
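Illustration (not the authors' algorithm): the sequential Bayesian change point idea described above can be sketched with a minimal Adams-MacKay-style online run-length recursion for a Gaussian mean-shift model with known observation variance, which updates the exact run-length posterior in one pass over the data. The hazard rate, prior and simulated data are illustrative assumptions, and the paper's algorithm and uncertainty bounds are richer than this sketch.

import numpy as np
from scipy import stats

def online_changepoint(x, hazard=1 / 100, sigma=1.0, m0=0.0, v0=10.0):
    # R[t, r] = P(run length r at time t | x_1..x_t); m, v are per-run posterior mean/variance of the level
    T = len(x)
    R = np.zeros((T + 1, T + 1)); R[0, 0] = 1.0
    m, v = np.array([m0]), np.array([v0])
    for t, xt in enumerate(x, start=1):
        pred = stats.norm.pdf(xt, loc=m, scale=np.sqrt(v + sigma ** 2))   # posterior predictive
        growth = R[t - 1, :t] * pred * (1 - hazard)                       # run continues
        cp = np.sum(R[t - 1, :t] * pred * hazard)                         # change point resets the run
        R[t, 1:t + 1], R[t, 0] = growth, cp
        R[t, :t + 1] /= R[t, :t + 1].sum()
        v_new = 1.0 / (1.0 / v + 1.0 / sigma ** 2)                        # conjugate Gaussian update
        m_new = v_new * (m / v + xt / sigma ** 2)
        m, v = np.concatenate(([m0], m_new)), np.concatenate(([v0], v_new))
    return R

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
print(online_changepoint(x)[200].argmax())   # most probable current run length, about 100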
S0167947315003047
A common problem in modern genetic research is that of comparing the mean vectors of two populations, typically in settings in which the data dimension is larger than the sample size, where Hotelling’s test cannot be applied. Recently, a test using random subspaces was proposed, in which the data are randomly projected into several lower-dimensional subspaces where Hotelling’s test is well defined. Superior performance over competing tests was demonstrated when the variables were correlated. Following the random-subspaces line of research, a modified test is proposed that may make more efficient use of the covariance structure in high dimensions. Hierarchical clustering is performed first so that highly correlated variables are clustered together. Next, Hotelling’s statistics are computed for every cluster subspace and summed to form the new test statistic. High performance is demonstrated via simulations and real data analysis.
A high-dimension two-sample test for the mean using cluster subspaces
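Illustration (not from the paper): the cluster-subspace idea described above can be sketched by clustering variables on correlation, computing Hotelling's T^2 within each cluster, and summing. The linkage method, the 1 - |correlation| distance, the number of clusters and the toy data are assumptions; calibrating the summed statistic (e.g. by permutation) is omitted.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_hotelling_stat(X, Y, n_clusters=5):
    pooled = np.vstack([X, Y])
    dist = squareform(1 - np.abs(np.corrcoef(pooled, rowvar=False)), checks=False)
    labels = fcluster(linkage(dist, method='average'), n_clusters, criterion='maxclust')
    n1, n2 = len(X), len(Y)
    stat = 0.0
    for c in np.unique(labels):
        Xc, Yc = X[:, labels == c], Y[:, labels == c]
        d = Xc.mean(0) - Yc.mean(0)
        S = ((n1 - 1) * np.cov(Xc, rowvar=False) + (n2 - 1) * np.cov(Yc, rowvar=False)) / (n1 + n2 - 2)
        stat += (n1 * n2 / (n1 + n2)) * d @ np.linalg.solve(np.atleast_2d(S), d)   # per-cluster Hotelling T^2
    return stat

rng = np.random.default_rng(3)
X, Y = rng.normal(0, 1, (40, 30)), rng.normal(0.3, 1, (40, 30))
print(cluster_hotelling_stat(X, Y))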
S0167947315003163
The generalized Pareto distribution (GPD) has been widely used in modelling heavy tail phenomena in many applications. The standard practice is to fit the tail region of the dataset to the GPD separately, a framework known as peaks-over-threshold (POT) in the extreme value literature. In this paper we propose a new GPD parameter estimator, under the POT framework, to estimate common tail risk measures, the Value-at-Risk (VaR) and the Conditional Tail Expectation (also known as Tail-VaR), for heavy-tailed losses. The proposed estimator is based on a nonlinear weighted least squares method that minimizes the sum of squared deviations between the empirical distribution function and the theoretical GPD for the data exceeding the tail threshold. The proposed method properly addresses a caveat of a similar estimator previously advocated, and further improves performance by introducing appropriate weights in the optimization procedure. Using various simulation studies and a realistic heavy-tailed model, we compare alternative estimators and show that the new estimator is highly competitive, especially when the tail risk measures are evaluated at extreme confidence levels.
Estimating extreme tail risk measures with generalized Pareto distribution
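Illustration (not the paper's estimator in full): the POT approach described above can be sketched by fitting the GPD to threshold exceedances via least squares between the empirical distribution function and the GPD CDF, then computing the standard POT tail quantile VaR_p = u + (sigma/xi) * [((n/n_u)(1 - p))^(-xi) - 1]. Uniform weights, the plotting positions, the optimizer and the simulated losses are assumptions; the paper's weighting scheme is not reproduced.

import numpy as np
from scipy.optimize import minimize

def gpd_cdf(x, xi, sigma):
    if abs(xi) < 1e-12:
        return 1 - np.exp(-x / sigma)
    return 1 - np.maximum(1 + xi * x / sigma, 0.0) ** (-1 / xi)

def fit_gpd_ls(exceedances):
    # minimize the sum of squared deviations between the empirical CDF and the GPD CDF
    y = np.sort(exceedances)
    ecdf = (np.arange(1, len(y) + 1) - 0.5) / len(y)
    loss = lambda th: np.sum((ecdf - gpd_cdf(y, th[0], np.exp(th[1]))) ** 2)
    res = minimize(loss, x0=[0.1, np.log(y.mean())], method='Nelder-Mead')
    return res.x[0], np.exp(res.x[1])           # xi, sigma

def pot_var(u, xi, sigma, n, n_u, p):
    return u + sigma / xi * (((n / n_u) * (1 - p)) ** (-xi) - 1)

rng = np.random.default_rng(4)
losses = rng.pareto(3.0, 5000)                  # heavy-tailed toy losses (true xi = 1/3)
u = np.quantile(losses, 0.95)                   # tail threshold
exc = losses[losses > u] - u
xi, sigma = fit_gpd_ls(exc)
print(xi, sigma, pot_var(u, xi, sigma, len(losses), len(exc), 0.999))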
S0167947315003187
Hierarchical centering has been described as a reparameterization method applicable to random effects models. It has been shown to improve mixing of models in the context of Markov chain Monte Carlo (MCMC) methods. A hierarchical centering approach is proposed for reversible jump MCMC (RJMCMC) chains which builds upon the hierarchical centering methods for MCMC chains and uses them to reparameterize models in an RJMCMC algorithm. Although these methods may be applicable to models with other error distributions, the case is described for a log-linear Poisson model where the expected value λ includes fixed effect covariates and a random effect for which normality is assumed with a zero-mean and unknown standard deviation. For the proposed RJMCMC algorithm including hierarchical centering, the models are reparameterized by modeling the mean of the random effect coefficients as a function of the intercept of the λ model and one or more of the available fixed effect covariates depending on the model. The method is appropriate when fixed-effect covariates are constant within random effect groups. This has an effect on the dynamics of the RJMCMC algorithm and improves model mixing. The methods are applied to a case study of point transects of indigo buntings where, without hierarchical centering, the RJMCMC algorithm had poor mixing and the estimated posterior distribution depended on the starting model. With hierarchical centering on the other hand, the chain moved freely over model and parameter space. These results are confirmed with a simulation study. Hence, the proposed methods should be considered as a regular strategy for implementing models with random effects in RJMCMC algorithms; they facilitate convergence of these algorithms and help avoid false inference on model parameters.
Using hierarchical centering to facilitate a reversible jump MCMC algorithm for random effects models
S0167947316000165
Excess zeroes are often thought of as a cause of data over-dispersion (i.e. when the variance exceeds the mean); this claim is not entirely accurate. In actuality, excess zeroes reduce the mean of a dataset, thus inflating the dispersion index (i.e. the variance divided by the mean). While this results in an increased chance for data over-dispersion, the implication is not guaranteed. Thus, one should consider a flexible distribution that not only can account for excess zeroes, but can also address potential over- or under-dispersion. A zero-inflated Conway–Maxwell–Poisson (ZICMP) regression allows for modeling the relationship between explanatory and response variables, while capturing the effects due to excess zeroes and dispersion. This work derives the ZICMP model and illustrates its flexibility, extrapolates the corresponding likelihood ratio test for the presence of significant data dispersion, and highlights various statistical properties and model fit through several examples.
A flexible zero-inflated model to address data dispersion
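Illustration (not from the paper): the zero-inflated Conway-Maxwell-Poisson probability mass function described above, P(Y = 0) = p + (1 - p)/Z and P(Y = k) = (1 - p) * lambda^k / ((k!)^nu * Z) with Z(lambda, nu) = sum_j lambda^j / (j!)^nu. In the regression setting the abstract describes, lambda (and possibly p) would be linked to covariates; the parameter values below are arbitrary.

import numpy as np

def cmp_log_z(lam, nu, max_terms=200):
    # truncated series for the CMP normalizing constant, computed in log space for stability
    j = np.arange(max_terms)
    log_terms = j * np.log(lam) - nu * np.cumsum(np.log(np.maximum(j, 1)))
    m = log_terms.max()
    return m + np.log(np.exp(log_terms - m).sum())

def zicmp_pmf(y, p, lam, nu):
    log_cmp = y * np.log(lam) - nu * np.sum(np.log(np.arange(1, y + 1))) - cmp_log_z(lam, nu)
    return (p if y == 0 else 0.0) + (1 - p) * np.exp(log_cmp)

# nu < 1 allows over-dispersion, nu > 1 under-dispersion, nu = 1 recovers a zero-inflated Poisson
print([round(zicmp_pmf(y, p=0.2, lam=2.0, nu=0.8), 4) for y in range(6)])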
S0167947316000232
Mahalanobis distance may be used as a measure of the disparity between an individual’s profile of scores and the average profile of a population of controls. The degree to which the individual’s profile is unusual can then be equated to the proportion of the population who would have a larger Mahalanobis distance than the individual. Several estimators of this proportion are examined. These include plug-in maximum likelihood estimators, medians, the posterior mean from a Bayesian probability matching prior, an estimator derived from a Taylor expansion, and two forms of polynomial approximation, one based on Bernstein polynomials and one on a quadrature method. Simulations show that some estimators, including the commonly used plug-in maximum likelihood estimators, can have substantial bias for small or moderate sample sizes. The polynomial approximations yield estimators that have low bias, with the quadrature method marginally preferred over Bernstein polynomials. However, the polynomial estimators sometimes yield infeasible estimates that are outside the 0–1 range. While none of the estimators is perfectly unbiased, the median estimators match their definition: in simulations their estimates of the proportion have a median error close to zero. The standard median estimator can give unrealistically small estimates (including 0), and an adjustment is proposed that ensures estimates are always credible. This latter estimator has much to recommend it when unbiasedness is not of paramount importance, while the quadrature method is recommended when bias is the dominant issue.
On point estimation of the abnormality of a Mahalanobis index
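Illustration (not from the paper): the plug-in maximum likelihood estimator mentioned in the abstract above, which the simulations show can be biased in small samples. Under multivariate normality with known parameters, the squared Mahalanobis distance of a random member of the population is chi-squared with k degrees of freedom, so the estimated proportion with a larger distance is a chi-squared tail probability at the individual's plug-in distance; the toy data are assumptions.

import numpy as np
from scipy import stats

def abnormality_proportion(case, controls):
    mu = controls.mean(axis=0)
    S = np.cov(controls, rowvar=False, bias=True)        # maximum likelihood covariance (divide by n)
    d2 = (case - mu) @ np.linalg.solve(S, case - mu)     # squared Mahalanobis distance of the individual
    return stats.chi2.sf(d2, df=controls.shape[1])       # P(chi^2_k > d2)

rng = np.random.default_rng(5)
controls = rng.multivariate_normal(np.zeros(4), np.eye(4), size=50)
print(abnormality_proportion(np.array([2.0, -1.5, 0.5, 1.0]), controls))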
S0167947316300184
At about the same time (approximately 1989), R. Liu introduced the notion of simplicial depth and R. Randles the notion of interdirections. These completely independent and seemingly unrelated initiatives, serving different purposes in nonparametric multivariate analysis, have spawned significant activity within their quite different respective domains. A surprising and fruitful connection between the two notions is shown. Exploiting the connection, statistical procedures based on interdirections can be modified to use simplicial depth instead, at considerable reduction of computational burden in the case of dimensions 2, 3, and 4. Implications regarding multivariate sign test statistics are discussed in detail, and several other potential applications are noted.
On Liu’s simplicial depth and Randles’ interdirections
S0167947316300287
Quantile inference with adjustment for covariates has not been widely investigated on competing risks data. We propose covariate-adjusted quantile inferences based on the cause-specific proportional hazards regression of the cumulative incidence function. We develop the construction of confidence intervals for quantiles of the cumulative incidence function given a value of covariates and for the difference of quantiles based on the cumulative incidence functions between two treatment groups with common covariates. Simulation studies show that the procedures perform well. We illustrate the proposed methods using early stage breast cancer data.
Covariate-adjusted quantile inference with competing risks
S0167947316300408
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia–Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
A fast and objective multidimensional kernel density estimation method: fastKDE
S0167947316300500
We propose Bayesian shrinkage methods for coefficient estimation in high-dimensional vector autoregressive (VAR) models, using scale mixtures of multivariate normal distributions for independently sampled additive noise. We also suggest an efficient selection procedure for the shrinkage parameter as a computationally feasible alternative to traditional MCMC sampling methods for high-dimensional data. The shrinkage parameter is selected at the minimum point of a newly proposed score function which is asymptotically equivalent to the mean squared error of the model coefficients. The selected shrinkage parameter is presented in closed form as a function of sample size, level of noise, and non-normality in the data, and it can be efficiently estimated by using a suggested variation of cross validation. Consistency of both the cross-validation estimator and the proposed shrinkage estimator is proved. The competitiveness of the proposed methods is demonstrated with comprehensive experimental results using simulated data and high-dimensional plant gene expression data, in the context of coefficient estimation and structural inference for VAR models. The proposed methods are applicable to high-dimensional stationary time series with or without near unit roots.
Bayes shrinkage estimation for high-dimensional VAR models with scale mixture of normal distributions for noise
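Illustration (not the paper's Bayesian procedure): the ridge-type shrinkage backbone of the VAR coefficient estimation described above, B = (Z'Z + lambda I)^(-1) Z'X with Z the lagged predictors. The scale-mixture noise model and the score-based selection of the shrinkage parameter are not reproduced; the lag order, the fixed shrinkage value and the simulated series are assumptions.

import numpy as np

def shrinkage_var(Y, p=1, lam=1.0):
    # Y is T x k; returns the stacked (k*p) x k coefficient matrix of a ridge-penalised VAR(p)
    T, k = Y.shape
    Z = np.hstack([Y[p - i - 1:T - i - 1] for i in range(p)])   # lagged predictors
    X = Y[p:]                                                   # responses
    return np.linalg.solve(Z.T @ Z + lam * np.eye(k * p), Z.T @ X)

rng = np.random.default_rng(6)
A = np.array([[0.5, 0.1], [0.0, 0.4]])          # true VAR(1) coefficient matrix
Y = np.zeros((300, 2))
for t in range(1, 300):
    Y[t] = Y[t - 1] @ A.T + 0.5 * rng.standard_normal(2)
print(shrinkage_var(Y, p=1, lam=5.0).T.round(2))   # approximately recovers A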
S0168169913002135
Supply chains are increasingly virtualised in response to market challenges and to opportunities offered by new technologies that are now affordable. Virtual supply chain management no longer requires physical proximity, which implies that control and coordination can take place in other locations and by other partners. This paper assesses how the Internet of Things concept can be used to enhance virtualisation of supply chains in the floricultural sector. Virtualisation is expected to have a big impact in this sector, where most products currently still pass physically through auction houses on their fixed routes from (inter)national growers to (inter)national customers. The paper defines the concept of virtualisation and describes different perspectives on virtualisation in the literature, i.e. the organisational, team, information technology, virtual reality and virtual things perspectives. Subsequently, it develops a conceptual framework for the analysis of virtualisation in supply chains. This framework is applied in the Dutch floriculture sector to investigate the existing situation and to define future challenges for virtualisation in this sector.
Virtualisation of floricultural supply chains: A review from an Internet of Things perspective
S0168169913002512
Site selection for companies is a complex and unstructured problem that must be analyzed carefully and properly, since a location error could drive a company to bankruptcy. This problem has been discussed widely and effectively using multi-attribute methods in a manufacturing context, but it has been little studied in agribusiness. The goal of this work is a methodological approach for evaluating optimal locations for new agri-food warehouses. Furthermore, a literature review is developed, analyzing the location problem and the attributes and techniques most widely applied in agribusiness, and a case study is presented in order to exemplify the methodological proposal. The multi-attribute technique called the Analytic Hierarchy Process has been selected as the basis for the research, and it is applied to the real case study analyzed: the selection of a site for a new banana distribution warehouse. Six generic criteria have been analyzed: accessibility to the area, distance, cost, security of the region, local acceptance of the company, and its needs. The process includes the assignment of attributes to each one of the generic criteria, as well as the assessment of their importance levels. Three different areas of Guadalajara, Jalisco, and Mexico DF have been evaluated for the case study, and the methodological proposal has been utilized to determine the best option.
Multi-attribute evaluation and selection of sites for agricultural product warehouses based on an Analytic Hierarchy Process
S0168169915000022
Optimal design and operation of a planned full-scale UASB reactor at a dairy farm are determined using optimization algorithms based on steady-state simulations of a dynamic AD process model combined with models of the reactor temperature and heat exchanger temperatures based on energy balances. The available feedstock is 6 m3/d of dairy manure produced by the herd. Three alternative optimization problems are solved: maximization of produced methane gas flow, minimization of reactor volume, and maximization of power surplus. Constraints of the optimization problems are an upper limit on the VFA concentration and an upper limit on the feed rate corresponding to normal animal waste production at the farm. The most appropriate optimization problem appears to be minimization of the reactor volume, assuming that the feed rate is fixed at its upper limit and that the VFA concentration is at its upper limit. The optimal result is a power surplus of 49.8 MWh/y, a hydraulic retention time of 6.1 d, and a reactor temperature of 35.9 °C, assuming heat recovery with a heat exchanger and perfect reactor heat transfer insulation. In general, the optimal solutions are improved if the ratio of the solids (biomass) retention time to the hydraulic retention time is increased.
Optimal design and operation of a UASB reactor for dairy cattle manure
S0168169915000459
The electronic identification of sheep and goats has been obligatory in the European Union since 2010 by means of low-frequency radio-frequency identification systems. The identification of pigs and cattle is currently based on a visual ear tag, but electronic animal identification is gaining in importance. The European Union already allows the additional use of electronic identification systems for cattle in its council regulation. Besides low-frequency radio-frequency identification, an ultra-high-frequency ear tag is a possibility for electronic animal identification. The benefits of the latter frequency band are the high range, the possibility of quasi-simultaneous reading and a high data transmission rate. First systematic laboratory tests were carried out before testing the ear tags in practice. To this end, a dynamic test bench was built. The aim of the experiments presented in this study was to compare different ear tags under standardised conditions and to select the most suitable for practical use. The influence of different parameters was tested and a standard test procedure to evaluate the quality of the transponder ear tag was developed. The experiments showed that neither the transponder holder material (polyvinyl chloride vs. extruded polystyrene) nor the reader settings examined (triggered read vs. presence sensing) had a significant influence on the average readings of the different transponder types. The parameter ‘number of rounds’ (10 vs. 15 vs. 20) did not show a significant effect either. However, significant differences between speeds (1.5 m s−1 vs. 3.0 m s−1), transponder orientations and the fourteen transponder types were found. The two most suitable transponder ear tags for cattle and pigs were determined by comparison.
Methodology of a dynamic test bench to test ultra-high-frequency transponder ear tags in motion
S0168169915000575
Detailed and timely information on crop area, production and yield is important for the assessment of environmental impacts of agriculture, for the monitoring of land use and management practices, and for food security early warning systems. A machine learning approach is proposed to model crop rotations which can predict with good accuracy, at the beginning of the agricultural season, the crops most likely to be present in a given field using the crop sequence of the previous 3–5 years. The approach is able to learn from data and to integrate expert knowledge represented as first-order logic rules. Its accuracy is assessed using the French Land Parcel Information System implemented in the frame of the EU’s Common Agricultural Policy. This assessment is done using different settings in terms of temporal depth and spatial generalization coverage. The obtained results show that the proposed approach is able to predict the crop type of each field, before the beginning of the crop season, with an accuracy as high as 60%, which is better than the results obtained with current approaches based on remote sensing imagery.
Assessment of a Markov logic model of crop rotations for early crop mapping
S0168169915002069
An advanced, proof-of-concept real-time plant discrimination system is presented that employs two visible (red) laser diodes (635 nm, 685 nm) and one near-infrared (NIR) laser diode (785 nm). The lasers sequentially illuminate the target ground area and a linear sensor array measures the intensities of the reflected laser beams. The spectral reflectance measurements are then processed by an embedded microcontroller running a discrimination algorithm based on dual Normalised Difference Vegetation Indices (NDVI). Pre-determined plant spectral signatures are used to define unique regions-of-classification for use by the discrimination algorithm. Measured aggregated NDVI values that fall within a region-of-classification (RoC) representing an unwanted plant generate a spray control signal that activates an external spray module, thus allowing for a targeted spraying operation. Dynamic outdoor evaluation of the advanced, proof-of-concept real-time plant discrimination system, using three different plant species and control data determined under static laboratory conditions, shows that the system can perform green-from-green plant detection and accomplish practical discrimination for a vehicle speed of 3 km/h.
A real-time plant discrimination system utilising discrete reflectance spectroscopy
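Illustration (not from the paper): the dual-NDVI decision logic described above, with one NDVI per red wavelength against the shared NIR channel and a simple rectangular region-of-classification standing in for the regions the paper derives from measured plant spectral signatures; the reflectance values and region bounds are assumptions.

def dual_ndvi(red_635, red_685, nir_785):
    # one Normalised Difference Vegetation Index per red laser wavelength
    return ((nir_785 - red_635) / (nir_785 + red_635),
            (nir_785 - red_685) / (nir_785 + red_685))

def spray_decision(ndvi_pair, region):
    # True if the aggregated NDVI pair falls inside the rectangular region-of-classification
    (lo1, hi1), (lo2, hi2) = region
    return lo1 <= ndvi_pair[0] <= hi1 and lo2 <= ndvi_pair[1] <= hi2

pair = dual_ndvi(red_635=0.08, red_685=0.05, nir_785=0.45)
weed_region = ((0.55, 0.85), (0.70, 0.95))        # illustrative region for an unwanted plant
print(pair, spray_decision(pair, weed_region))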
S0168169916301260
In this study we assess the interchangeability and statistical agreement of two prevalent instruments implementing the non-invasive “sniffer” method and compare their precision. Furthermore, we develop and validate an effective algorithm for aligning time series data from multiple instruments to remove the effects of variable and fixed time shifts from the instrument comparison. The CH4 and CO2 gas concentrations from the two instruments were found to differ in population means (P < 0.05), in intra-cow variation (precision) (P < 0.05) and in inter-cow variation (P < 0.05). The CH4 and CO2 gas concentrations from both instruments can be used interchangeably to increase statistical power, for example in genetic evaluations, provided that sources of disagreement are corrected through calibration and standardisation. Additionally, averaging readings of cows over a longer period of time (one week) is an effective noise reduction technique which provides phenotypes with considerable inter-cow variation.
Interchangeability between methane measurements in dairy cows assessed by comparing precision and agreement of two non-invasive infrared methods
S0168169916301296
Smart farming is a management style that includes smart monitoring, planning and control of agricultural processes. This management style requires the use of a wide variety of software and hardware systems from multiple vendors. Adoption of smart farming is hampered by poor interoperability and data exchange between ICT components, which hinders integration. Software Ecosystems are a recently emerged concept in software engineering that addresses these integration challenges. Currently, several Software Ecosystems for farming are emerging. To guide and accelerate these developments, this paper provides a reference architecture for Farm Software Ecosystems. This reference architecture should be used to map, assess, design and implement Farm Software Ecosystems. A key feature of this architecture is a particular configuration approach to connect ICT components developed by multiple vendors in a meaningful, feasible and coherent way. The reference architecture is evaluated by verifying the design against the requirements and by mapping two existing Farm Software Ecosystems onto the Farm Software Ecosystem Reference Architecture. This mapping showed that the reference architecture provides insight into Farm Software Ecosystems, as it can describe their similarities and differences. A main conclusion is that the two existing Farm Software Ecosystems can improve the configuration of different ICT components. Future research is needed to enhance configuration in Farm Software Ecosystems.
A reference architecture for Farm Software Ecosystems
S0168169916301399
Corn height measured manually has shown promising results in improving the relationship between active-optical (AO) sensor readings and crop yield. Manual measurement of corn height is not practical in US commercial corn production, so an alternative automatic method must be found in order to capture the benefit of including canopy height in in-season yield estimates and, from there, in in-season nitrogen (N) fertilizer applications. One existing alternative for measuring canopy height is an acoustic height sensor. A commercial acoustic height sensor was utilized in these experiments at two corn growth stages (V6 and V12) along with AO sensors. Eight corn N rate sites in North Dakota, USA, were used to evaluate the acoustic height sensor as a practical alternative to manual height measurement and as an additional parameter to strengthen the relationship between AO sensor readings and corn yield. Six N treatments, 0, 45, 90, 134, 179, and 224 kg ha−1, were applied before planting in a randomized complete block experimental design with four replications. Height measurement using the acoustic sensor provided an improved yield relationship compared with manual height at all locations. The improvement in the relationship between AO readings multiplied by acoustic sensor readings and yield was greater at the V6 growth stage than at the V12 growth stage. At V12, corn height measured manually and with the acoustic sensor, multiplied by AO readings, provided similar improvement in the relationship with yield compared with relating AO readings alone to yield at most locations. The acoustic height sensor may increase the usefulness of AO sensor corn yield prediction algorithms for on-the-go in-season N application to corn, particularly if the sensor height is normalized within site before combining multiple locations.
Use of corn height measured with an acoustic sensor improves yield estimation with ground based active optical sensors
S0168874X13000802
It has been a great challenge for many scientists and engineers to compute elastic–plastic solutions for dynamically loaded cracked structures, because the solutions are much more complicated and computationally time-consuming than the corresponding static problems. The path-independent integral Ĵ_F, originally developed for a two-dimensional dynamically loaded stationary circular arc crack in a homogeneous and isotropic material, is evaluated in the present study using elastic–plastic material properties to quantify the crack problem. All the numerical results presented in this study were evaluated using the general-purpose commercial finite element software ANSYS together with a post-processing program (written in FORTRAN) developed by the authors. The numerical results thus obtained are compared and show good agreement within the limits of computational accuracy, and the path-invariance property of Ĵ_F is well preserved over the integration contours. (A nomenclature accompanies the original abstract, defining the body force, traction and displacement vectors, stress and strain tensors, strain energy density function, crack geometry quantities and material density; symbols not listed there are defined as they appear in the text.)
Elastic–plastic dynamic fracture analysis for stationary curved cracks
S0169260713002435
This study aimed to focus on medical knowledge representation and reasoning using probabilistic and fuzzy influence processes, implemented in the semantic web, for decision support tasks. Bayesian belief networks (BBNs) and fuzzy cognitive maps (FCMs), as dynamic influence graphs, were applied to handle the task of medical knowledge formalization for decision support. In order to perform reasoning on these knowledge models, a general purpose reasoning engine, EYE, with the necessary plug-ins was developed in the semantic web. The two formal approaches constitute the proposed decision support system (DSS), which aims to recognize the appropriate guidelines for a medical problem and to propose easily understandable courses of action to guide practitioners. The urinary tract infection (UTI) problem was selected as the proof-of-concept example to examine the proposed formalization techniques implemented in the semantic web. The medical guidelines for UTI treatment were formalized into BBN and FCM knowledge models. To assess the formal models’ performance, 55 patient cases were extracted from a database and analyzed. The results showed that the suggested approaches formalized medical knowledge efficiently in the semantic web, and provided a front-end decision on antibiotic suggestions for UTI.
Application of probabilistic and fuzzy cognitive approaches in semantic web framework for medical decision support
S0169260714001266
This paper proposes a fast weighted horizontal visibility graph constructing algorithm (FWHVA) to identify seizures from EEG signals. The performance of the FWHVA is evaluated by comparison with the Fast Fourier Transform (FFT) and sample entropy (SampEn) methods. Two noise-robust graph features based on the FWHVA, mean degree and mean strength, are investigated using two chaotic signals and five groups of EEG signals. Experimental results show that feature extraction using the FWHVA is faster than that of SampEn and FFT. The mean strength feature associated with ictal EEG is significantly higher than that of healthy and inter-ictal EEGs. In addition, a 100% classification accuracy in identifying seizure from healthy EEGs shows that the features based on the FWHVA are more promising than the frequency features based on FFT and the entropy indices based on SampEn for time series classification.
Epileptic seizure detection in EEG signals using a fast weighted horizontal visibility algorithm
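Illustration (not the paper's fast algorithm): a naive horizontal visibility graph construction for the series described above, together with its mean degree, one of the two features the abstract names. Two samples are linked when every sample strictly between them lies below both; the weighted variant needed for mean strength and the FWHVA speed-ups are not reproduced, and the spiky surrogate series is only a crude stand-in for ictal EEG.

import numpy as np

def hvg_degrees(x):
    # O(n^2) horizontal visibility graph: i and j see each other if all intermediate samples are lower
    n, deg = len(x), np.zeros(len(x), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                deg[i] += 1; deg[j] += 1
    return deg

rng = np.random.default_rng(7)
noise = rng.standard_normal(512)
spiky = noise + 8.0 * (rng.random(512) < 0.05)     # crude spiky surrogate
print(hvg_degrees(noise).mean(), hvg_degrees(spiky).mean())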
S0169260714001278
Background and objective: Patients who visit the emergency department (ED) may have symptoms of occult cancers. Methods: We studied a random cohort of one million subjects from the Taiwan National Health Insurance Research Database between 2000 and 2008 to evaluate the ED utilization of individuals who were subsequently diagnosed with digestive tract cancers. The case group consisted of digestive tract cancer patients and the control group of traumatic fracture patients. We reviewed records of ED visits only from 4 to 15 months before the cancer diagnoses. Results: There were 2635 and 6665 patients in the case and control groups, respectively. The adjusted odds ratios (95% confidence intervals) for the case group were 1.36 (1.06–1.74) for abdominal ultrasound, 2.16 (1.61–2.90) for pan-endoscopy, 1.72 (1.33–2.22) for the guaiac fecal-occult blood test, 1.42 (1.28–1.58) for plain abdominal X-rays, 1.20 (1.09–1.32) for SGOT, 1.27 (1.14–1.40) for SGPT, 1.66 (1.41–1.95) for total bilirubin, 2.41 (1.89–3.08) for direct bilirubin, 1.21 (1.01–1.46) for hemoglobin and 3.63 (2.66–4.94) for blood transfusion. Blood transfusion in the ED was a significant predictor of a subsequent diagnosis of digestive tract cancer. Conclusions: The health system could identify high-risk patients early by real-time review of their ED utilization before the diagnosis of digestive tract cancers. We propose a follow-up methodology for daily screening of patients at high risk of digestive tract cancer by an informatics system in the ED.
Emergency department utilization can indicate early diagnosis of digestive tract cancers: A population-based study in Taiwan
S0169260714001461
In this paper, the gHRV software tool is presented. It is a simple, free and portable tool developed in Python for analysing heart rate variability. It includes a graphical user interface, and it can import files in multiple formats, analyse time intervals in the signal, test statistical significance and export the results. This paper also contains, as an example of use, a clinical analysis performed with the gHRV tool, namely to determine whether heart rate variability indexes change across different stages of sleep. Results from tests completed by researchers who have tried gHRV are also reported: in general the application was positively valued and the results reflect a high level of satisfaction. gHRV is in continuous development and new versions will include suggestions made by testers.
gHRV: Heart rate variability analysis made easy
S0169260714001473
This paper presents a novel method for QRS detection in electrocardiograms (ECG). It is based on the S-Transform, a time–frequency representation (TFR) that provides frequency-dependent resolution while maintaining a direct relationship with the Fourier spectrum. We exploit the advantages of the S-Transform to isolate the QRS complexes in the time–frequency domain. The Shannon energy of each obtained local spectrum is then computed in order to localize the R waves in the time domain. Significant performance enhancement is confirmed when the proposed approach is tested on the MIT-BIH arrhythmia database (MITDB). The obtained results show a sensitivity of 99.84%, a positive predictivity of 99.91% and an error rate of 0.25%. Furthermore, the detection parameters are illustrated for certain ECG segments with complicated patterns.
QRS detection using S-Transform and Shannon energy
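Illustration (not the paper's method): SciPy has no built-in S-Transform, so the sketch below uses a short-time Fourier transform as a stand-in for the local spectra described above and then computes the Shannon energy, -sum a^2 log a^2, of each normalised local spectrum to build an envelope whose peaks mark candidate R waves. The window length, peak-picking thresholds and synthetic impulse train are all illustrative assumptions.

import numpy as np
from scipy import signal

def shannon_energy_envelope(ecg, fs):
    f, t, Z = signal.stft(ecg, fs=fs, nperseg=64, noverlap=48)
    A = np.abs(Z) / (np.abs(Z).max() + 1e-12)                 # normalise amplitudes to [0, 1]
    return t, -(A ** 2 * np.log(A ** 2 + 1e-12)).sum(axis=0)  # Shannon energy per local spectrum

fs = 360.0
n = int(10 * fs)
ecg = np.zeros(n)
ecg[::int(fs)] = 1.0                                          # synthetic 1 Hz "R waves"
ecg += 0.05 * np.random.default_rng(8).standard_normal(n)
t_frames, se = shannon_energy_envelope(ecg, fs)
peaks, _ = signal.find_peaks(se, height=se.mean(), distance=10)
print(t_frames[peaks][:5])                                    # approximate R-wave times (s)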
S0169260714001497
Breast cancer continues to be a significant public health problem in the world. Early detection is the key to improving breast cancer prognosis. Mammography (breast X-ray) is considered the most reliable method for early detection of breast cancer. However, it is difficult for radiologists to provide both accurate and uniform evaluation of the enormous number of mammograms generated in widespread screening. Microcalcification clusters (MCCs) and masses are the two most important signs of breast cancer, and their automated detection is very valuable for early breast cancer diagnosis. The main objective is to present a computer-aided detection system proposed to assist radiologists in detecting these specific abnormalities and to improve diagnostic accuracy in making diagnostic decisions. The applied techniques form a three-step procedure: enhancement using histogram equalization (HE) and morphological enhancement; segmentation of the region of interest based on Otsu's threshold for the identification of microcalcifications and mass lesions; and a classification stage that first distinguishes normal from microcalcification patterns and then classifies microcalcifications as benign or malignant. In the classification stage, three methods were used: the voting K-Nearest Neighbor classifier (K-NN) with a prediction accuracy of 73%, the Support Vector Machine classifier (SVM) with a prediction accuracy of 83%, and the Artificial Neural Network classifier (ANN) with a prediction accuracy of 77%.
Computer aided detection system for micro calcifications in digital mammograms
S0169260714001503
Identifying the abnormal changes of mental workload (MWL) over time is quite crucial for preventing the accidents due to cognitive overload and inattention of human operators in safety-critical human–machine systems. It is known that various neuroimaging technologies can be used to identify the MWL variations. In order to classify MWL into a few discrete levels using representative MWL indicators and small-sized training samples, a novel EEG-based approach by combining locally linear embedding (LLE), support vector clustering (SVC) and support vector data description (SVDD) techniques is proposed and evaluated by using the experimentally measured data. The MWL indicators from different cortical regions are first elicited by using the LLE technique. Then, the SVC approach is used to find the clusters of these MWL indicators and thereby to detect MWL variations. It is shown that the clusters can be interpreted as the binary class MWL. Furthermore, a trained binary SVDD classifier is shown to be capable of detecting slight variations of those indicators. By combining the two schemes, a SVC–SVDD framework is proposed, where the clear-cut (smaller) cluster is detected by SVC first and then a subsequent SVDD model is utilized to divide the overlapped (larger) cluster into two classes. Finally, three-class MWL levels (low, normal and high) can be identified automatically. The experimental data analysis results are compared with those of several existing methods. It has been demonstrated that the proposed framework can lead to acceptable computational accuracy and has the advantages of both unsupervised and supervised training strategies.
Identification of temporal variations in mental workload using locally-linear-embedding-based EEG feature reduction and support-vector-machine-based clustering and classification techniques
S0169260714001515
This paper proposes new combined methods to classify normal and epileptic seizure EEG signals using the wavelet transform (WT), phase-space reconstruction (PSR), and Euclidean distance (ED) based on a neural network with weighted fuzzy membership functions (NEWFM). WT, PSR, ED, and statistical methods that include frequency distributions and variation were implemented to extract 24 initial features to use as inputs. Of the 24 initial features, the 4 minimum features with the highest accuracy were selected using a non-overlap area distribution measurement method supported by the NEWFM. These 4 minimum features were used as inputs for the NEWFM, and this resulted in performance sensitivity, specificity, and accuracy of 96.33%, 100%, and 98.17%, respectively. In addition, the area under the Receiver Operating Characteristic (ROC) curve was used to measure the performance of the NEWFM both without and with feature selection.
Classification of normal and epileptic seizure EEG signals using wavelet transform, phase-space reconstruction, and Euclidean distance
S0169260714001680
This paper presents a method for fast computation of Hessian-based enhancement filters, whose conditions for identifying particular structures in medical images are associated only with the signs of Hessian eigenvalues. The computational costs of Hessian-based enhancement filters come mainly from the computation of Hessian eigenvalues corresponding to image elements to obtain filter responses, because computing eigenvalues of a matrix requires substantial computational effort. High computational cost has become a challenge in the application of Hessian-based enhancement filters. Using a property of the characteristic polynomial coefficients of a matrix and the well-known Routh–Hurwitz criterion in control engineering, it is shown that under certain conditions, the response of a Hessian-based enhancement filter to an image element can be obtained without having to compute Hessian eigenvalues. The computational cost can thus be reduced. Experimental results on several medical images show that the method proposed in this paper can reduce significantly the number of computations of Hessian eigenvalues and the processing times of images. The percentage reductions of the number of computations of Hessian eigenvalues for enhancing blob- and tubular-like structures in two-dimensional images are approximately 90% and 65%, respectively. For enhancing blob-, tubular-, and plane-like structures in three-dimensional images, the reductions are approximately 97%, 75%, and 12%, respectively. For the processing times, the percentage reductions for enhancing blob- and tubular-like structures in two-dimensional images are approximately 31% and 7.5%, respectively. The reductions for enhancing blob-, tubular-, and plane-like structures in three-dimensional images are approximately 68%, 55%, and 3%, respectively.
Fast computation of Hessian-based enhancement filters for medical images
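Illustration (not from the paper): the core idea described above, deciding eigenvalue signs from characteristic-polynomial coefficients, shown for the 2-D bright-blob condition. For a symmetric 2x2 Hessian the eigenvalues are the roots of s^2 - tr(H) s + det(H), so both are negative exactly when tr(H) < 0 and det(H) > 0, a Routh-Hurwitz-type sign check that needs no eigenvalue computation; the paper's treatment covers further structure types and three-dimensional images.

import numpy as np

def is_bright_blob_2d(H):
    # both eigenvalues negative  <=>  trace < 0 and determinant > 0 (no eigen-decomposition needed)
    tr = H[0, 0] + H[1, 1]
    det = H[0, 0] * H[1, 1] - H[0, 1] * H[1, 0]
    return tr < 0 and det > 0

H_blob = np.array([[-3.0, 0.5], [0.5, -2.0]])    # concave in both directions: blob-like
H_edge = np.array([[-3.0, 0.0], [0.0, 0.1]])     # mixed curvature signs: not blob-like
print(is_bright_blob_2d(H_blob), is_bright_blob_2d(H_edge))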
S0169260714001692
Intestinal abnormalities and ischemia are medical conditions in which inflammation and injury of the intestine are caused by inadequate blood supply. Acute ischemia of the small bowel can be life-threatening. Computed tomography (CT) is currently a gold standard for the diagnosis of acute intestinal ischemia in the emergency department. However, the assessment of the diagnostic performance of CT findings in the detection of intestinal abnormalities and ischemia has been a difficult task for both radiologists and surgeons. Little effort has been found in developing computerized systems for the automated identification of these types of complex gastrointestinal disorders. In this paper, a geostatistical mapping of spatial uncertainty in CT scans is introduced for medical image feature extraction, which can be effectively applied for diagnostic detection of intestinal abnormalities and ischemia from control patterns. Experimental results obtained from the analysis of clinical data suggest the usefulness of the proposed uncertainty mapping model.
Identification of intestinal wall abnormalities and ischemia by modeling spatial uncertainty in computed tomography imaging findings
S0169260714001837
Objective: Many regional programs in various countries educate asthmatic children and their families to manage healthcare data. This study aims to establish a Web-based self-management system, eAsthmaCare, to promote electronic healthcare (e-Healthcare) services for asthmatic children in Taiwan. The platform performs real-time online functionality based upon a five-tier infrastructure with mutually supportive components to acquire asthma diaries, quality of life assessments and health education. Methods: We designed five multi-disciplinary portions on the interactive interface, supported by analytical diagrams: (1) online asthma diary, (2) remote asthma assessment, (3) instantaneous asthma alert, (4) diagrammatical clinic support, and (5) asthma health education. The Internet-based asthma diary and assessment program was developed for patients to perform self-management healthcare at home. In addition, the online analytical charts can help healthcare professionals to evaluate multi-domain health information of patients immediately. Results: eAsthmaCare was developed with Java™ Servlet/JSP technology upon an Apache Tomcat™ web server and an Oracle™ database. Forty-one asthmatic children (and their parents) volunteered to examine the proposed system. A satisfaction assessment covering seven domains of system use was applied to validate the development. The average scores fell within the acceptable range for each domain, supporting the feasibility of the proposed system. Conclusion: The study detailed the system infrastructure and the developed functions that can help asthmatic children in healthcare self-management and enhance communication between patients and hospital professionals.
Development of online diary and self-management system on e-Healthcare for asthmatic children in Taiwan
S0169260714002041
The three-parameter Rayleigh damping (RD) model applied to time-harmonic Magnetic Resonance Elastography (MRE) has potential to better characterise fluid-saturated tissue systems. However, it is not uniquely identifiable at a single frequency. One solution to this problem involves simultaneous inverse problem solution of multiple input frequencies over a broad range. As data is often limited, an alternative elegant solution is a parametric RD reconstruction, where one of the RD parameters (μ_I or ρ_I) is globally constrained allowing accurate identification of the remaining two RD parameters. This research examines this parametric inversion approach as applied to in vivo brain imaging. Overall, success was achieved in reconstruction of the real shear modulus (μ_R) that showed good correlation with brain anatomical structures. The mean and standard deviation shear stiffness values of the white and gray matter were found to be 3 ± 0.11 kPa and 2.2 ± 0.11 kPa, respectively, which are in good agreement with values established in the literature or measured by mechanical testing. Parametric results with globally constrained μ_I indicate that selecting a reasonable value for the μ_I distribution has a major effect on the reconstructed ρ_I image and concomitant damping ratio (ξ_d). More specifically, the reconstructed ρ_I image using a realistic μ_I = 333 Pa value representative of a greater portion of the brain tissue showed more accurate differentiation of the ventricles within the intracranial matter compared to μ_I = 1000 Pa, and ξ_d reconstruction with μ_I = 333 Pa accurately captured the higher damping levels expected within the vicinity of the ventricles. Parametric RD reconstruction shows potential for accurate recovery of the stiffness characteristics and overall damping profile of the in vivo living brain despite its underlying limitations. Hence, a parametric approach could be valuable with RD models for diagnostic MRE imaging with single frequency data.
Parametric-based brain Magnetic Resonance Elastography using a Rayleigh damping material model
S0169260714002053
In this paper, a passive planar micromixer with ellipse-like micropillars is proposed to operate in the laminar flow regime for high mixing efficiency. With a splitting and recombination (SAR) concept, the diffusion distance of the fluids in a micromixer with ellipse-like micropillars was decreased. Thus, the space required for the micromixer in an automatic sample collection system is also minimized. Numerical simulation was conducted to evaluate the performance of the proposed micromixer by solving the governing Navier–Stokes equation and convection–diffusion equation. With computational fluid dynamics (CFD) software (COMSOL 4.3) we simulated the mixing of fluids in a micromixer with ellipse-like micropillars and in a basic T-type mixer in a laminar flow regime. The efficiency of the proposed micromixer is shown in the numerical results and is verified by measurement results.
An efficient passive planar micromixer with ellipse-like micropillars for continuous mixing of human blood
S0169260714002065
In this paper we propose a class of flexible weight functions for use in comparison of two cumulative incidence functions. The proposed weights allow the users to focus their comparison on an early or a late time period post treatment or to treat all time points with equal emphasis. These weight functions can be used to compare two cumulative incidence functions via their risk difference, their relative risk, or their odds ratio. The proposed method has been implemented in the R-CIFsmry package which is readily available for download and is easy to use as illustrated in the example.
Weighted comparison of two cumulative incidence functions with R-CIFsmry package
S0169260714002077
Active contours are image segmentation methods that minimize the total energy of the contour to be segmented. Among active contour methods, radial methods have lower computational complexity and can be applied in real time. This work presents a new radial active contour technique, called pSnakes, using the 1D Hilbert transform as external energy. The pSnakes method is based on the fact that the beams in ultrasound equipment diverge from a single point of the probe, thus enabling the use of polar coordinates in the segmentation. The control points, or nodes, of the active contour are obtained in pairs and are called twin nodes. The internal energies, as well as the external (Hilbertian) energy, are redefined. The results showed that pSnakes can be used in the segmentation of short-axis echocardiogram images and that it was effective in segmentation of the left ventricle. Comparison against the echocardiologist's gold standard showed that pSnakes was the best of the compared methods. The main contributions of this work are the use of pSnakes and of the Hilbertian energy, computed by the 1D Hilbert transform, as the external energy in image segmentation. Compared with traditional methods, the pSnakes method is more suitable for ultrasound images because it is not affected by variations in image contrast or by noise. The experimental results obtained for left ventricle segmentation of echocardiographic images demonstrate the advantages of the proposed model, which are explained by the improved performance of the Hilbertian energy in the presence of speckle noise.
pSnakes: A new radial active contour model and its application in the segmentation of the left ventricle from echocardiographic images
S0169260714002089
Interpenetrated polymer networks (IPNs), composed of two independent polymeric networks that spatially interpenetrate, are considered valuable systems for controlling the permeability and mechanical properties of hydrogels for biomedical applications. Specifically, poly(ethyl acrylate) (PEA)–poly(2-hydroxyethyl acrylate) (PHEA) IPNs have been explored as good hydrogels for mimicking articular cartilage. These networks are proposed as matrix implants in damaged cartilage areas to avoid discontinuity in flow uptake and thus prevent its deterioration. The permeability of these implants is a key parameter that influences their success, by affecting oxygen and nutrient transport and the removal of cellular waste products to healthy cartilage. Experimental trial-and-error approaches are mostly used to optimize the composition of such structures. However, computational simulation may offer a more exhaustive tool to test and screen biomaterials mimicking cartilage, avoiding expensive and time-consuming experimental tests. An accurate and efficient prediction of a material's permeability and of the internal directionality and magnitude of the fluid flow could be highly useful when optimizing biomaterial design processes. Here we present a 3D computational model based on the Sussman–Bathe hyperelastic material behaviour. A fluid–structure analysis is performed with the ADINA software, considering these materials as two-phase composites in which the solid part is saturated by the fluid. The model is able to simulate the behaviour of three non-biodegradable hydrogel compositions in which the percentages of PEA and PHEA are varied. Specifically, the aims of this study are (i) to verify the validity of the Sussman–Bathe material model to simulate the response of the PEA–PHEA biomaterials; (ii) to predict the fluid flux and the permeability of the proposed IPN hydrogels; and (iii) to study the material domains where the passage of nutrients and cellular waste products is reduced, leading to an inadequate flux distribution in healthy cartilage tissue. The obtained results show how the model predicts the permeability of the PEA–PHEA hydrogels, simulates the internal behaviour of the samples, and provides the distribution and quantification of the fluid flux.
Computational analysis of cartilage implants based on an interpenetrated polymer network for tissue repairing
S0169260714002107
In this paper the model predictive control (MPC) technology is used to tackle the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of the minimum toxic concentration (MTC) constraints. A whole-body physiologically-based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma concentrations are assumed to be measured on-line. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in the presence of modelling errors and inaccurate measurements.
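For readers unfamiliar with constrained MPC, the Python sketch below shows one receding-horizon step on a toy two-compartment state-space model with upper concentration bounds, assuming the cvxpy package for the quadratic program; the matrices, horizon and limits are illustrative assumptions, not the paper's 7-compartment PBPK model.

```python
# Minimal sketch of one constrained MPC step for drug dosing on a toy
# two-compartment discrete-time model (NOT the paper's PBPK model).
import numpy as np
import cvxpy as cp

A = np.array([[0.90, 0.05],
              [0.08, 0.92]])          # assumed inter-compartment kinetics
B = np.array([[0.10],
              [0.00]])                # drug enters via the plasma compartment
N = 10                                # prediction horizon (assumed)
x0 = np.array([0.0, 0.0])             # current (estimated) concentrations
target = np.array([1.0, 0.8])         # desired concentrations
mtc = np.array([2.0, 1.5])            # minimum toxic concentration bounds
u_max = 5.0                           # maximum infusion rate

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1] - target) + 0.01 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    u[:, k] >= 0, u[:, k] <= u_max,
                    x[:, k + 1] <= mtc]            # never exceed toxic levels

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first dose to apply:", float(u.value[0, 0]))  # receding-horizon action
```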
Robust model predictive control for optimal continuous drug administration
S0169260714002351
Pressure ulcers (PrU) are considered one of the most challenging problems that nursing professionals have to deal with in their daily practice. Nowadays, education on PrUs is mainly based on traditional lecturing, seminars and face-to-face instruction, sometimes with the support of photographs of wounds as teaching material. This traditional educational methodology suffers from some important limitations, which could affect the efficacy of the learning process. The current study was designed to introduce information and communication technologies (ICT) into education on PrUs for undergraduate students, with the main objective of evaluating the advantages and disadvantages of using ICT by comparing the learning results obtained from an e-learning tool with those from a traditional teaching methodology. In order to meet this major objective, a web-based learning system named ePULab has been designed and developed as an adaptive e-learning tool for the autonomous acquisition of knowledge on PrU evaluation. This innovative system has been validated by means of a randomized controlled trial that compares its learning efficacy with that of a control group receiving traditional face-to-face instruction. Students using ePULab achieved significantly better (p < 0.01) learning acquisition scores (from a pre-test mean of 8.27 (SD 1.39) to a post-test mean of 15.83 (SD 2.52)) than those following traditional lecture-style classes (from a pre-test mean of 8.23 (SD 1.23) to a post-test mean of 11.6 (SD 2.52)). In this article, the ePULab software is described in detail and the results from that experimental educational validation study are presented and analyzed.
A web-based e-learning application for wound diagnosis and treatment
S0169260714002405
Introduction Paroxysmal versus persistent atrial fibrillation (AF) can be distinguished based on differences in the spectral parameters of fractionated atrial electrograms. Maximization of these differences would improve characterization of the arrhythmogenic substrate. A novel spectral estimator (NSE) has been shown previously to provide greater distinction in AF spectral parameters as compared with the Fourier transform estimator. Herein, it is described how the differences in NSE spectral parameters can be further improved. Method In 10 persistent and 9 paroxysmal AF patients undergoing an electrophysiologic study, fractionated electrograms were acquired from the distal bipolar ablation electrode. A total of 204 electrograms were recorded from the pulmonary vein (PV) antra and from the anterior and posterior left atrial free wall. The following spectral parameters were measured: the dominant frequency (DF), which reflects the local activation rate, the DF amplitude (DA), and the mean spectral profile (MP), which represents background electrical activity. To optimize differences in parameters between paroxysmal and persistent AF patients, the NSE was varied by selectively removing subharmonics, using a threshold. The threshold was altered in steps to determine the optimal subharmonic removal. Results At the optimal threshold level, mean differences in persistent versus paroxysmal AF spectral parameters were: ΔDA=+0.371mV, ΔDF=+0.737Hz, and ΔMP=−0.096mV. When subharmonics were not removed, the differences were substantially less: ΔDA=+0.301mV, ΔDF=+0.699Hz, and ΔMP=−0.063mV. Conclusions NSE optimization produces greater spectral parameter differences between persistent and paroxysmal AF data. Quantifying spectral parameter differences can assist in characterizing the arrhythmogenic substrate.
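As a generic illustration of the three spectral parameters named above (DF, DA, MP), the Python sketch below extracts them from an ordinary Welch spectrum; the novel spectral estimator and its subharmonic-removal threshold are not reproduced, and the analysis band and window length are assumptions.

```python
# Hedged sketch: DF, DA and MP from a plain Welch spectrum (not the NSE).
import numpy as np
from scipy.signal import welch

def spectral_parameters(egm, fs, band=(3.0, 12.0)):
    f, pxx = welch(egm, fs=fs, nperseg=fs * 2)     # 2-s windows (assumption)
    amp = np.sqrt(pxx)                             # amplitude spectrum
    in_band = (f >= band[0]) & (f <= band[1])      # assumed physiologic AF band
    i = np.argmax(amp * in_band)
    df = f[i]                                      # dominant frequency (Hz)
    da = amp[i]                                    # dominant-frequency amplitude
    mp = amp[in_band].mean()                       # mean spectral profile
    return df, da, mp

if __name__ == "__main__":
    fs = 1000
    t = np.arange(0, 8, 1 / fs)
    egm = np.sin(2 * np.pi * 6.0 * t) + 0.3 * np.random.randn(t.size)  # toy electrogram
    print(spectral_parameters(egm, fs))
```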
Optimization of novel spectral estimator for fractionated electrogram analysis is helpful to discern atrial fibrillation type
S0169260714002417
Proteins control all biological functions in living species. Protein structures fall into four major classes: all-α, all-β, α+β, and α/β. Each class performs a different function according to its nature. Owing to the rapid growth of protein sequences in the databanks, identifying protein structure classes with conventional methods is costly and time-consuming. Given the importance of protein structure classes, it is thus highly desirable to develop a computational model for discriminating protein structure classes with high accuracy. For this purpose, we propose an in silico method incorporating Pseudo Average Chemical Shift and a Support Vector Machine. Two feature extraction schemes, namely Pseudo Amino Acid Composition and Pseudo Average Chemical Shift, are used to extract valuable information from protein sequences. The performance of the proposed model is assessed using four benchmark datasets, 25PDB, 1189, 640 and 399, employing the jackknife test. The success rates of the proposed model are 84.2%, 85.0%, 86.4%, and 89.2%, respectively, on the four datasets. The empirical results show that the proposed model compares favourably with existing models in the literature and might be useful for future research.
Discriminating protein structure classes by incorporating Pseudo Average Chemical Shift to Chou's general PseAAC and Support Vector Machine
S0169260714002429
Objectives To compare the risk of infection for rheumatoid arthritis (RA) patients who took etanercept or adalimumab medication in a nationwide population. Methods RA patients who took etanercept or adalimumab were identified in Taiwan's National Health Insurance Research Database. The composite outcome of serious infections, including hospitalization for infection, reception of an antimicrobial injection, and tuberculosis, was followed for 365 days. A Kaplan–Meier survival curve with a log-rank test and Cox proportional hazards regression were used to compare risks of infection between the two cohorts of tumor necrosis factor (TNF)-α antagonist users. Hazard ratios (HRs) were obtained and adjusted with propensity scores and clinical factors. Sensitivity analyses and subgroup analyses were also performed. Results In total, 1660 incident etanercept users and 484 incident adalimumab users were eligible for the analysis. The unadjusted HR for infection of the etanercept users was significantly higher than that of the adalimumab users (HR: 1.93; 95% confidence interval (CI): 1.09–3.42; p =0.024). The HRs were 2.04 (95% CI: 1.14–3.65; p =0.016) and 2.02 (95% CI: 1.13–3.61; p =0.018) after adjusting for propensity scores and for propensity scores in addition to clinical factors, respectively. The subgroup analyses revealed that the HR for the composite infection outcome was significantly higher in the subgroups of older and female patients, as well as in patients without DM, COPD, or a hospitalization history at baseline. Conclusion In this head-to-head cohort study involving a nationwide population of patients with RA, etanercept users demonstrated a higher risk of infection than adalimumab users. Results of this study suggest the possible existence of an intra-class difference in infection risk among TNF-α antagonists.
Infection risk in patients with rheumatoid arthritis treated with etanercept or adalimumab
S0169260714002442
The purpose of this study was to develop a clustering methodology for arterial pressure waveform (APW) parameters to be used in cardiovascular risk assessment. One hundred and sixteen subjects were monitored and divided into two groups. The first one (23 hypertensive subjects) was analyzed using APW and biochemical parameters, while the remaining 93 healthy subjects were only evaluated through APW parameters. The expectation maximization (EM) and k-means algorithms were used in the cluster analysis, and the risk scores commonly used in clinical practice (the Framingham Risk Score (FRS), the Systematic COronary Risk Evaluation (SCORE) project, the Assessing cardiovascular risk using Scottish Intercollegiate Guidelines Network (ASSIGN) and the PROspective Cardiovascular Münster (PROCAM)) were selected for cluster risk validation. The results of the clustering risk analysis showed a very significant correlation with ASSIGN (r =0.582, p <0.01) and a significant correlation with FRS (r =0.458, p <0.05). The comparison of both groups also allowed identification of the cluster with the higher cardiovascular risk within the healthy group. These results provide new insights and motivate further exploration of this methodology in future scoring trials.
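The clustering step described above can be illustrated with standard tools; the Python sketch below runs k-means and an EM-fitted Gaussian mixture on placeholder APW-like features and correlates cluster membership with a toy risk score. All data, feature counts and the correlation check are assumptions made for illustration.

```python
# Illustrative sketch only: k-means and EM (Gaussian mixture) clustering of
# placeholder APW features, then correlation of cluster labels with a toy risk score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
apw_features = rng.normal(size=(116, 5))          # 116 subjects x 5 APW parameters (toy)
risk_score = apw_features[:, 0] * 2 + rng.normal(scale=0.5, size=116)  # toy ASSIGN-like score

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(apw_features)
em_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(apw_features)

for name, labels in [("k-means", km_labels), ("EM", em_labels)]:
    r, p = pearsonr(labels, risk_score)
    print(f"{name}: correlation with risk score r={r:.3f}, p={p:.3f}")
```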
Cardiovascular risk analysis by means of pulse morphology and clustering methodologies
S0169260714002454
Positron emission tomography (PET) with 18fluorodeoxyglucose (18F-FDG) is increasingly used in neurology. The measurement of cerebral arterial inflow (QA) using 18F-FDG complements the information provided by standard brain PET imaging. Here, injections were performed after the beginning of dynamic acquisitions and the time to arrival (t0) of activity in the gantry's field of view was computed. We performed a phantom study using a branched tube (internal diameter: 4mm) and a 18F-FDG solution injected at 240mL/min. Data processing consisted of (i) reconstruction of the first 3s after t0, (ii) vascular signal enhancement and (iii) clustering. This method was then applied in four subjects. We measured the volumes of the tubes or vascular trees and calculated the corresponding flows. In the phantom, the flow was calculated to be 244.2mL/min. In each subject, our QA value was compared with that obtained by quantitative cine-phase contrast magnetic resonance imaging; the mean QA value of 581.4±217.5mL/min calculated with 18F-FDG PET was consistent with the mean value of 593.3±205.8mL/min calculated with quantitative cine-phase contrast magnetic resonance imaging. Our 18F-FDG PET method constitutes a novel, fully automatic means of measuring QA.
Cerebral arterial inflow assessment with 18F-FDG PET: Methodology and feasibility
S0169260714002478
This paper demonstrates the utility of a differencing technique to transform surface EMG signals measured during both static and dynamic contractions such that they become more stationary. The technique was evaluated by three stationarity tests consisting of the variation of two statistical properties, i.e., mean and standard deviation, and the reverse arrangements test. As a result of the proposed technique, the first difference of EMG time series became more stationary compared to the original measured signal. Based on this finding, the performance of time-domain features extracted from raw and transformed EMG was investigated via an EMG classification problem (i.e., eight dynamic motions and four EMG channels) on data from 18 subjects. The results show that the classification accuracies of all features extracted from the transformed signals were higher than features extracted from the original signals for six different classifiers including quadratic discriminant analysis. On average, the proposed differencing technique improved classification accuracies by 2–8%.
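A minimal Python sketch of the differencing idea follows: the first difference of the EMG time series is taken before extracting time-domain features. The specific features shown (mean absolute value, waveform length, zero crossings) are common choices assumed for illustration, not necessarily the paper's exact feature set.

```python
# Minimal sketch: first-difference an EMG time series, then extract
# time-domain features from both the raw and the differenced signal.
import numpy as np

def time_domain_features(x):
    mav = np.mean(np.abs(x))                               # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))                        # waveform length
    zc = np.sum(np.diff(np.signbit(x).astype(int)) != 0)   # zero crossings
    return np.array([mav, wl, zc])

def features_raw_and_differenced(emg):
    d_emg = np.diff(emg)                  # first difference -> more stationary series
    return time_domain_features(emg), time_domain_features(d_emg)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    emg = np.cumsum(rng.normal(size=2000)) * 1e-3     # toy non-stationary signal
    print(features_raw_and_differenced(emg))
```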
Feature extraction of the first difference of EMG time series for EMG pattern recognition
S0169260714002491
Statistics show that heart disease is one of the main causes of mortality in highly developed societies today. These diseases change the physiology of the heart, and this change gives useful information about the characteristics and severity of the defect. A fast and reliable diagnosis is the basis for successful therapy. As a first step towards the recognition of such heart remodeling processes, this work proposes a fully automatic processing pipeline for regional classification of the left ventricular wall in ultrasound images of small animals. The pipeline is based on state-of-the-art methods from computer vision and pattern classification. The myocardial wall is segmented and its motion is estimated. Features are then extracted from the segmented data to automatically classify image regions as normal or abnormal myocardial tissue. The performance of the proposed pipeline is evaluated, and a comparison of common classification algorithms on ultrasound data of living mice before and after artificially induced myocardial infarction is given. The results of this work, reaching a maximum accuracy of 91.46%, are an encouraging basis for further investigation.

Automatic classification of left ventricular wall segments in small animal ultrasound imaging
S0169260714002521
Semen analysis is the first step in the evaluation of an infertile couple. Within this process, an accurate and objective morphological analysis becomes more critical as it is based on the correct detection and segmentation of human sperm components. In this paper, we present an improved two-stage framework for detection and segmentation of human sperm head characteristics (including acrosome and nucleus) that uses three different color spaces. The first stage detects regions of interest that define sperm heads using k-means; candidate heads are then refined using mathematical morphology. In the second stage, we work on each region of interest to accurately segment the sperm head as well as the nucleus and acrosome, using clustering and histogram statistical analysis techniques. Our proposal is fully automatic and requires no user intervention. Our experimental evaluation shows that the proposed method outperforms the state-of-the-art, as supported by the results of different evaluation metrics. In addition, we propose a gold standard, built in cooperation with an expert in the field, for comparing methods that detect and segment sperm cells. Our results show a notable improvement, reaching above 98% in sperm head detection while producing significantly fewer false positives than the state-of-the-art method. Our results also show accurate head, acrosome and nucleus segmentation, achieving over 80% overlap with the hand-segmented gold standard. Our method achieves a higher Dice coefficient, a lower Hausdorff distance and less dispersion than the state-of-the-art method.
Gold-standard and improved framework for sperm head segmentation
S0169260714002533
Background The report from the Institute of Medicine, To Err Is Human: Building a Safer Health System in 1999 drew a special attention towards preventable medical errors and patient safety. The American Reinvestment and Recovery Act of 2009 and federal criteria of ‘Meaningful use’ stage 1 mandated e-prescribing to be used by eligible providers in order to access Medicaid and Medicare incentive payments. Inappropriate prescribing has been identified as a preventable cause of at least 20% of drug-related adverse events. A few studies reported system-related errors and have offered targeted recommendations on improving and enhancing e-prescribing system. Objective This study aims to enhance efficiency of the e-prescribing system by shortening the medication list, reducing the risk of inappropriate selection of medication, as well as in reducing the prescribing time of physicians. Method 103.48 million prescriptions from Taiwan's national health insurance claim data were used to compute Diagnosis-Medication association. Furthermore, 100,000 prescriptions were randomly selected to develop a smart medication recommendation model by using association rules of data mining. Results and conclusion The important contribution of this model is to introduce a new concept called Mean Prescription Rank (MPR) of prescriptions and Coverage Rate (CR) of prescriptions. A proactive medication list (PML) was computed using MPR and CR. With this model the medication drop-down menu is significantly shortened, thereby reducing medication selection errors and prescription times. The physicians will still select relevant medications even in the case of inappropriate (unintentional) selection.
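In the spirit of the rule-mining step described above, the Python sketch below derives simple diagnosis-to-medication association rules (support and confidence) from toy prescription records; the transactions, thresholds, and the paper's MPR/CR ranking metrics are assumptions and are not reproduced here.

```python
# Hedged sketch: mining diagnosis -> medication rules (support, confidence)
# from toy prescription "transactions", as one building block of a proactive
# medication list for an e-prescribing drop-down menu.
from collections import Counter
from itertools import combinations

prescriptions = [                                  # toy records: diagnosis code + drugs
    {"dx:J06.9", "amoxicillin", "paracetamol"},
    {"dx:J06.9", "amoxicillin"},
    {"dx:E11.9", "metformin"},
    {"dx:E11.9", "metformin", "glimepiride"},
    {"dx:J06.9", "paracetamol"},
]

n = len(prescriptions)
item_count = Counter(i for rx in prescriptions for i in rx)
pair_count = Counter(frozenset(p) for rx in prescriptions for p in combinations(sorted(rx), 2))

def rules(min_support=0.2, min_confidence=0.6):
    """Yield (diagnosis, drug, support, confidence) rules above the thresholds."""
    for pair, c in pair_count.items():
        dx = [i for i in pair if i.startswith("dx:")]
        drug = [i for i in pair if not i.startswith("dx:")]
        if len(dx) == 1 and len(drug) == 1:
            support, confidence = c / n, c / item_count[dx[0]]
            if support >= min_support and confidence >= min_confidence:
                yield dx[0], drug[0], support, confidence

for dx, drug, s, conf in sorted(rules(), key=lambda r: -r[3]):
    print(f"{dx} -> {drug}: support={s:.2f}, confidence={conf:.2f}")
```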
A smart medication recommendation model for the electronic prescription
S0169260714002545
Background and objective The degeneration of the balance control system in the elderly and in many pathologies requires frequent measurement of equilibrium conditions. In clinical practice, equilibrium control is commonly evaluated by using a force platform (stabilometric platform) in a clinical environment. In this paper, we demonstrate how a simple movement analysis system, based on a 3D video camera and a 3D real time model reconstruction of the human body, can be used to collect information usually recorded by a physical stabilometric platform. Methods The algorithm used to reconstruct the human body model as a set of spheres is described and discussed. Moreover, experimental measurements and comparisons with data collected by a physical stabilometric platform are also reported. The measurements were collected on a set of 6 healthy subjects in whom a change in equilibrium condition was induced by performing an equilibrium task. Results The experimental results showed that more than 95% of the data collected by the proposed method were not significantly different from those collected by the classic platform, thus confirming the usefulness of the proposed system. Conclusions The proposed virtual balance assessment system can be implemented at low cost (about $500) and, for this reason, can be considered a home-use medical device. By contrast, a stabilometric platform costs about $10,000 and requires periodic calibration. The proposed system does not require the periodic calibration needed by stabilometric force platforms, and it is easy to use. In the future, with little additional integration, the proposed system could be used not only as an emulator of a stabilometric platform but also to recognize and track, in real time, the head, legs, arms and trunk, that is, to collect information currently obtained by sophisticated optoelectronic systems.
A low-cost real time virtual system for postural stability assessment at home
S0169260714002557
The domain of cancer treatment is a promising field for the implementation and evaluation of a protocol-based clinical decision support system, because of the algorithmic nature of treatment recommendations. However, many factors can limit such systems’ potential to support the decision of clinicians: technical challenges related to the interoperability with existing electronic patient records and clinical challenges related to the inherent complexity of the decisions, often collectively taken by panels of different specialists. In this paper, we evaluate the performances of an Asbru-based decision support system implementing treatment protocols for breast cancer, which accesses data from an oncological electronic patient record. Focusing on the decision on the adjuvant pharmaceutical treatment for patients affected by early invasive breast cancer, we evaluate the matching of the system's recommendations with those issued by the multidisciplinary panel held weekly in a hospital.
Implementation and evaluation of an Asbru-based decision support system for adjuvant treatment in breast cancer
S0169260714002569
Patients who suffer from chronic renal failure (CRF) tend to suffer from an associated anemia as well. Therefore, it is essential to know the hemoglobin (Hb) levels in these patients. The aim of this paper is to predict the hemoglobin (Hb) value using a database of European hemodialysis patients provided by Fresenius Medical Care (FMC), in order to improve the treatment of these patients. For the prediction of Hb, both analytical measurements and medication dosages of patients suffering from chronic renal failure (CRF) are used. Two kinds of models were trained: global and local models. In the case of local models, clustering techniques based on hierarchical approaches and the adaptive resonance theory (ART) were used as a first step, and then a different predictor was used for each obtained cluster. Different global models were applied to the dataset, such as Linear Models, Artificial Neural Networks (ANNs), Support Vector Machines (SVM) and Regression Trees, among others. A relevance analysis was also carried out for each predictive model to identify the features most relevant to the prediction.
Prediction of the hemoglobin level in hemodialysis patients using machine learning techniques
S0169260714002909
Background The use of open source software in health informatics is increasingly advocated by authors in the literature. Although there is no clear evidence of the superiority of the current open source applications in the healthcare field, the number of available open source applications online is growing and they are gaining greater prominence. This repertoire of open source options is of great value for any planner interested in adopting an electronic medical/health record system, whether selecting an existing application or building a new one. The following questions arise. How do the available open source options compare to each other with respect to functionality, usability and security? Can an implementer of an open source application find sufficient support both as a user and as a developer, and to what extent? Does the available literature provide adequate answers to such questions? This review attempts to shed some light on these aspects. Objective The objective of this study is to provide more comprehensive guidance from an implementer perspective on the available open source healthcare software alternatives, particularly in the field of electronic medical/health records. Methods The design of this study is twofold. In the first part, we profile the published literature on a sample of existing and active open source software in the healthcare area. The purpose of this part is to provide a summary of the available guides and studies relative to the sampled systems, and to identify any gaps in the published literature with respect to our research questions. In the second part, we investigate those alternative systems relative to a set of metrics, by actually installing the software and reporting a hands-on experience of the installation process, usability, as well as other factors. Results The literature covers many aspects of open source software implementation and utilization in healthcare practice. Roughly, those aspects could be distilled into a basic taxonomy, making the literature landscape easier to survey. Nevertheless, the surveyed articles fall short of fulfilling the targeted objective of providing a clear reference for potential implementers. The hands-on study contributed a more detailed comparative guide relative to our set of assessment measures. Overall, no system seems to satisfy an industry-standard measure, particularly in security and interoperability. The systems, as software applications, feel similar from a usability perspective and share a common set of functionality, though they vary considerably in community support and activity. Conclusion More detailed analysis of popular open source software can benefit potential implementers of electronic health/medical records systems. The number of examined systems and the measures by which to compare them vary across studies, but rewarding insights are nevertheless starting to emerge. Our work is one step toward that goal. Our overall conclusion is that open source options in the medical field are still far behind the highly acknowledged open source products in other domains, e.g. in operating systems market share.
Open source EMR software: Profiling, insights and hands-on analysis
S0169260714002910
We develop an autonomous system to detect and evaluate physical therapy exercises using wearable motion sensors. We propose the multi-template multi-match dynamic time warping (MTMM-DTW) algorithm as a natural extension of DTW to detect multiple occurrences of more than one exercise type in the recording of a physical therapy session. While allowing some distortion (warping) in time, the algorithm provides a quantitative measure of similarity between an exercise execution and previously recorded templates, based on DTW distance. It can detect and classify the exercise types, and count and evaluate the exercises as correctly/incorrectly performed, identifying the error type, if any. To evaluate the algorithm's performance, we record a data set consisting of one reference template and 10 test executions of three execution types of eight exercises performed by five subjects. We thus record a total of 120 and 1200 exercise executions in the reference and test sets, respectively. The test sequences also contain idle time intervals. The accuracy of the proposed algorithm is 93.46% for exercise classification only and 88.65% for simultaneous exercise and execution type classification. The algorithm misses 8.58% of the exercise executions and demonstrates a false alarm rate of 4.91%, caused by some idle time intervals being incorrectly recognized as exercise executions. To test the robustness of the system to unknown exercises, we employ leave-one-exercise-out cross validation. This results in a false alarm rate lower than 1%, demonstrating the robustness of the system to unknown movements. The proposed system can be used for assessing the effectiveness of a physical therapy session and for providing feedback to the patient.
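The building block of such template-based detection is the DTW distance; the Python sketch below implements plain DTW with a nearest-template classifier. The full MTMM-DTW algorithm described above (multiple templates, multiple matches, idle-interval handling, execution-type scoring) is not reproduced; the toy templates are assumptions.

```python
# Illustrative sketch: classic DTW distance plus nearest-template classification,
# the core ideas that MTMM-DTW extends for physical therapy exercise detection.
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two sequences (1D or samples x channels)."""
    a = np.atleast_2d(a).reshape(len(a), -1)
    b = np.atleast_2d(b).reshape(len(b), -1)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(execution, templates):
    """Return the label of the template with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(execution, templates[label]))

if __name__ == "__main__":
    t = np.linspace(0, 1, 60)
    templates = {"ex1": np.sin(2 * np.pi * t), "ex2": np.sin(4 * np.pi * t)}  # toy templates
    test = np.sin(2 * np.pi * np.linspace(0, 1, 75)) + 0.05 * np.random.randn(75)
    print("detected exercise:", classify(test, templates))
```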
Automated evaluation of physical therapy exercises using multi-template dynamic time warping on wearable sensor signals
S0169260714002922
Insulin pharmacokinetics is not well understood during continuous subcutaneous insulin infusion in type 2 diabetes (T2D). We analyzed data collected in 11 subjects with T2D [6 male, 9 white European and two of Indian ethnicity; age 59.7(12.1) years, BMI 30.1(3.9)kg/m2, fasting C-peptide 1002.2(365.8)pmol/l, fasting plasma glucose 9.6(2.2)mmol/l, diabetes duration 8.0(6.2) years and HbA1c 8.3(0.8)%; mean(SD)] who underwent a 24-h study investigating closed-loop insulin delivery at the Wellcome Trust Clinical Research Facility, Cambridge, UK. Subcutaneous delivery of insulin lispro was modulated every 15min according to a model predictive control algorithm. Two complementary insulin assays facilitated discrimination between exogenous (lispro) and endogenous plasma insulin concentrations measured every 15–60min. Lispro pharmacokinetics was represented by a linear two-compartment model whilst parameters were estimated using a Bayesian approach applying a closed-form model solution. The time-to-peak of lispro absorption (t max) was 109.6 (75.5–120.5)min [median (interquartile range)] and the metabolic clearance rate (MCR I ) 1.26 (0.87–1.56)×10−2 l/kg/min. MCR I was negatively correlated with fasting C-peptide (r s =−0.84; P =.001) and with fasting plasma insulin concentration (r s =−0.79; P =.004). In conclusion, compartmental modelling adequately represents lispro kinetics during continuous subcutaneous insulin infusion in T2D. Fasting plasma C-peptide or fasting insulin may be predictive of lispro metabolic clearance rate in T2D but further investigations are warranted.
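A hedged sketch of a linear two-compartment subcutaneous absorption model is given below in Python; the rate constants and distribution volume are placeholders, not the Bayesian estimates reported above, and the example only shows how a time-to-peak could be read off a simulated plasma profile.

```python
# Hedged sketch: linear two-compartment subcutaneous lispro kinetics (depot -> plasma)
# integrated numerically. All parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

ka1, ka2, ke = 0.025, 0.020, 0.012      # 1/min, assumed absorption/elimination rates
V = 0.12                                # l/kg, assumed distribution volume

def lispro_ode(t, y, u):
    q1, q2, c = y                       # two subcutaneous depots and plasma concentration
    dq1 = u - ka1 * q1                  # infusion u enters the first depot
    dq2 = ka1 * q1 - ka2 * q2
    dc = ka2 * q2 / V - ke * c
    return [dq1, dq2, dc]

# Response to a 15-min bolus-like infusion, mimicking delivery modulated every 15 min.
infusion = lambda t: 1.0 if t < 15 else 0.0
sol = solve_ivp(lambda t, y: lispro_ode(t, y, infusion(t)),
                t_span=(0, 360), y0=[0, 0, 0], max_step=1.0)
t_max = sol.t[np.argmax(sol.y[2])]
print(f"time-to-peak plasma lispro (toy parameters): {t_max:.0f} min")
```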
Pharmacokinetics of insulin lispro in type 2 diabetes during closed-loop insulin delivery
S0169260714002934
We propose a fast seed detection method for automatic tracking of coronary arteries in coronary computed tomographic angiography (CCTA). To detect vessel regions, Hessian-based filtering is combined with a new local geometric feature that is based on the similarity of consecutive cross-sections perpendicular to the vessel direction. The feature is in turn founded on the prior knowledge that a vessel segment is shaped like a cylinder in axial slices. To improve computational efficiency, an axial slice containing part of the three main coronary arteries is selected and regions of interest (ROIs) are extracted in the slice. The proposed geometric feature is calculated only for voxels belonging to the ROIs. With the seed points, which are the centroids of the detected vessel regions, and their vessel directions, a vessel tracking method can then be used for artery extraction. Here a particle filtering-based tracking algorithm is tested. Using 19 clinical CCTA datasets, it is demonstrated that the proposed method detects seed points and can be used for fully automatic coronary artery extraction. ROC (receiver operating characteristic) curve analysis shows the advantages of the proposed method.
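As a stand-in for the Hessian-based step, the Python sketch below applies scikit-image's Frangi vesselness filter to a toy 2D slice and returns region centroids as candidate seeds; the paper's cross-section similarity feature, ROI selection and particle-filter tracking are not reproduced, and the threshold is an assumption.

```python
# Illustrative sketch: Hessian-based (Frangi) vessel enhancement of one slice,
# then candidate seed points as centroids of the detected vessel regions.
import numpy as np
from skimage.filters import frangi
from scipy import ndimage

def detect_seeds(axial_slice, vesselness_threshold=0.2):
    vesselness = frangi(axial_slice, black_ridges=False)   # enhance bright tubular structures
    mask = vesselness > vesselness_threshold * vesselness.max()
    labels, n = ndimage.label(mask)                        # connected vessel regions
    seeds = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    return [tuple(map(int, s)) for s in seeds]

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[60:64, 20:110] = 1.0                               # toy bright vessel segment
    img += 0.05 * np.random.default_rng(0).normal(size=img.shape)
    print("candidate seed points:", detect_seeds(img))
```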
A fast seed detection using local geometrical feature for automatic tracking of coronary arteries in CTA
S0169260714002946
In this study, we developed an integrated hospital-associated urinary tract infection (HAUTI) surveillance information system (called iHAUTISIS) based on existing electronic medical records (EMR) systems for improving the work efficiency of infection control professionals (ICPs) in a 730-bed, tertiary-care teaching hospital in Taiwan. The iHAUTISIS automatically collects data relevant to HAUTI surveillance from the different EMR systems, and provides a visualization dashboard that helps ICPs make better surveillance plans and facilitates their surveillance work. In order to measure the system performance, we also created a generic model for comparing the ICPs' work efficiency when using the existing electronic culture-based surveillance information system (eCBSIS) and iHAUTISIS. This model represents a patient's state (unsuspected, suspected, and confirmed) and the corresponding time ICPs spend on surveillance tasks for a patient in that state. The study results showed that iHAUTISIS performed better than eCBSIS in terms of ICPs' time cost. On average, ICPs spent 114.26s per patient with iHAUTISIS versus 187.53s with eCBSIS, a reduction of 73.27s. With increased adoption of EMR systems, the development of integrated HAI surveillance information systems will become increasingly cost-effective. Moreover, iHAUTISIS adopts web-based technology that enables ICPs to access patients' surveillance information online using laptops or mobile devices. Therefore, our system can further facilitate HAI surveillance and reduce ICPs' surveillance workloads.
Improving the work efficiency of healthcare-associated infection surveillance using electronic medical records
S0169260714002995
The cure fraction models have been widely used to analyze survival data in which a proportion of the individuals is not susceptible to the event of interest. In this article, we introduce a bivariate model for survival data with a cure fraction based on the three-parameter generalized Lindley distribution. The joint distribution of the survival times is obtained by using copula functions. We consider three types of copula function models, the Farlie–Gumbel–Morgenstern (FGM), Clayton and Gumbel–Barnett copulas. The model is implemented under a Bayesian framework, where the parameter estimation is based on Markov Chain Monte Carlo (MCMC) techniques. To illustrate the utility of the model, we consider an application to a real data set related to an invasive cervical cancer study.
Bayesian bivariate generalized Lindley model for survival data with a cure fraction
S0169260714003009
This study was performed to evaluate the influence of myocardial bridges on plaque initiation and progression in the coronary arteries. The presence of plaque changes the wall structure, which can be the cause of multiple heart malfunctions. Using a simplified parametric finite element (FE) model of a coronary artery with a myocardial bridge, and analyzing different mechanical parameters of blood circulation through the artery (wall shear stress, oscillatory shear index, residence time), we investigated the prediction of the most likely position for plaque progression. We chose six patients from the angiography records and used data from DICOM images to generate FE models with our software tools for FE preprocessing, solving and post-processing. We found a good correlation between the real positions of the plaque and those predicted, from wall shear stress, oscillatory shear index and residence time, to develop at the proximal part of the myocardial bridges. This computer model could be an additional predictive tool for the everyday clinical examination of patients with a myocardial bridge.
Prediction of coronary plaque location on arteries having myocardial bridge, using finite element models
S0169260714003010
Analyzing the acceleration photoplethysmogram (APG) is becoming increasingly important for diagnosis. However, processing an APG signal is challenging, especially if the goal is to detect its small components (c, d, and e waves). Accurate detection of c, d, and e waves is an important first step for any clinical analysis of APG signals. In this paper, a novel algorithm was developed that can detect c, d, and e waves simultaneously in APG signals of healthy subjects, even when the signals have low-amplitude waves, contain fast-rhythm heart beats, and suffer from non-stationary effects. The performance of the proposed method was tested on 27 records collected during rest, resulting in 97.39% sensitivity and 99.82% positive predictivity.
Detection of c, d, and e waves in the acceleration photoplethysmogram
S0169260714003034
Studies in the health domain have shown that health websites provide imperfect information and give recommendations that are not up to date with the recent literature, even when their last-modified dates are quite recent. In this paper, we propose a framework that automatically assesses the timeliness of the content of health websites using evidence-based medicine. Our aim is to assess the accordance of website content with the current literature and its timeliness, disregarding the update time stated on the websites. The proposed method is based on automatic term recognition, relevance feedback and information retrieval techniques in order to generate time-aware structured queries. We tested the framework on diabetes health websites which were archived between 2006 and 2013 by Archive-It, using the American Diabetes Association's (ADA) guidelines. The results showed that the proposed framework achieves 65% and 77% accuracy in detecting the timeliness of the web content according to years and pre-determined time intervals, respectively. Information seekers and website owners may benefit from the proposed framework in finding relevant and up-to-date diabetes websites.
Automatic information timeliness assessment of diabetes web sites by evidence based medicine
S0169260714003058
This study developed a computerised method for fovea centre detection in fundus images. In the method, the centre of the optic disc was localised first by the template matching method, the disc–fovea axis (a line connecting the optic disc centre and the fovea) was then determined by searching the vessel-free region, and finally the fovea centre was detected by matching the fovea template around the centre of the axis. Adaptive Gaussian templates were used to localise the centres of the optic disc and fovea for the images with different resolutions. The proposed method was evaluated using three publicly available databases (DIARETDB0, DIARETDB1 and MESSIDOR), which consisted of a total of 1419 fundus images with different resolutions. The proposed method obtained the fovea detection accuracies of 93.1%, 92.1% and 97.8% for the DIARETDB0, DIARETDB1 and MESSIDOR databases, respectively. The overall accuracy of the proposed method was 97.0% in this study.
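The template-matching step can be illustrated with OpenCV, as in the hedged Python sketch below, which matches a Gaussian fovea-like template on a synthetic image; the full pipeline described above (optic disc localization, disc–fovea axis, vessel-free-zone search, resolution-adaptive template sizing) is not reproduced, and the template size and sigma are assumptions.

```python
# Hedged sketch: locating a dark, roughly Gaussian fovea-like blob with
# OpenCV template matching; a stand-in for the adaptive Gaussian template step.
import numpy as np
import cv2

def gaussian_template(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return (1.0 - g).astype(np.float32)          # fovea is darker than the background

def match_fovea(image, template):
    res = cv2.matchTemplate(image.astype(np.float32), template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    h, w = template.shape
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)   # (x, y) of the fovea centre

if __name__ == "__main__":
    img = np.full((256, 256), 0.8, np.float32)
    tpl = gaussian_template(41, 8.0)
    img[100:141, 150:191] *= tpl                 # embed a dark fovea-like blob
    print("estimated fovea centre (x, y):", match_fovea(img, tpl))
```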
Automated detection of fovea in fundus images based on vessel-free zone and adaptive Gaussian template
S0169260714003186
The main goal of this study was to numerically quantify the risk of duodenal stump blowout after Billroth II (BII) gastric resection. Our hypothesis was that the geometry of the reconstructed tract after BII resection is one of the key factors that can lead to duodenal dehiscence. We used computational fluid dynamics (CFD) with finite element (FE) simulations of various models of the BII-reconstructed gastrointestinal (GI) tract, as well as non-perfused, ex vivo, porcine experimental models. As the main geometrical parameters of the postoperative FE models, we used the duodenal stump length and the inclination between the gastric remnant and the duodenal stump. Virtual gastric resection was performed on each of the 3D FE models, which were based on multislice computed tomography (CT) DICOM images. According to our computer simulations, the difference between the maximal duodenal stump pressures for the models with the most and least favourable geometries of the reconstructed GI tract is about 30%. We compared the resulting postoperative duodenal pressure from the computer simulations with the duodenal stump dehiscence pressure from the experiment. The duodenal stump pressure after BII resection obtained by computer simulation is 4–5 times lower than the dehiscence pressure measured in our experiment on an isolated bowel segment. Our conclusion is that if the surgery is performed technically correctly, geometry variations of the reconstructed GI tract by themselves are not sufficient to cause duodenal stump blowout. The pressure that develops in the duodenal stump after BII resection using an omega loop can cause duodenal dehiscence only in conjunction with other risk factors. Increased duodenal pressure after BII resection is a risk factor. Hence, we recommend the routine use of a Roux-en-Y anastomosis as a safer solution in terms of the resulting intraluminal pressure. However, if the surgeon decides to perform a BII reconstruction, the results obtained with this methodology can be valuable.
Numerical and experimental analysis of factors leading to suture dehiscence after Billroth II gastric resection
S0169260714003198
This research focuses on scheduling patients in emergency department laboratories according to the priority of patients’ treatments, determined by the triage factor. The objective is to minimize the total waiting time of patients in the emergency department laboratories with emphasis on patients with severe conditions. The problem is formulated as a flexible open shop scheduling problem and a mixed integer linear programming model is proposed. A genetic algorithm (GA) is developed for solving the problem. Then, the response surface methodology is applied for tuning the GA parameters. The algorithm is tested on a set of real data from an emergency department. Simulation results show that the proposed algorithm can significantly improve the efficiency of the emergency department by reducing the total waiting time of prioritized patients.
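A greatly simplified illustration of the GA component follows: a permutation-encoded genetic algorithm that orders patients for a single laboratory by triage-weighted waiting time. The paper's flexible open shop setting, MILP formulation and response-surface parameter tuning are not reproduced; all data and GA settings are toy assumptions.

```python
# Toy sketch: permutation-encoding GA minimizing triage-weighted total waiting time
# for a single lab (a simplification of the flexible open shop problem).
import random

random.seed(0)
service = [12, 5, 20, 8, 15, 7]          # minutes per patient (toy data)
weight = [3, 1, 5, 2, 4, 1]              # triage priority weights (toy data)

def weighted_wait(order):
    t, total = 0, 0
    for p in order:
        total += weight[p] * t            # waiting time before this patient starts
        t += service[p]
    return total

def ox_crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(order, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(len(service)), len(service)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=weighted_wait)
    elite = pop[:10]                       # elitist selection
    pop = elite + [mutate(ox_crossover(*random.sample(elite, 2))) for _ in range(20)]

best = min(pop, key=weighted_wait)
print("best order:", best, "weighted waiting time:", weighted_wait(best))
```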
Scheduling prioritized patients in emergency department laboratories
S0169260714003204
Background and objective Parkinson's disease (PD) is the second most common neurodegenerative disease, affecting a significant portion of the elderly population. One of the most frequent hallmarks, and usually also the first manifestation, of PD is a deterioration of handwriting characterized by micrographia and changes in handwriting kinematics. There is no objective quantitative method for the clinical diagnosis of PD. It is thought that PD can only be definitively diagnosed at postmortem, which further highlights the complexities of diagnosis. Methods We exploit the fact that movement during handwriting of a text consists not only of the on-surface movements of the hand, but also of the in-air trajectories performed when the hand moves in the air from one stroke to the next. We used a digitizing tablet to assess both in-air and on-surface kinematic variables during handwriting of a sentence in 37 PD patients on medication and 38 age- and gender-matched healthy controls. Results By applying feature selection algorithms and support vector machine learning methods to separate PD patients from healthy controls, we demonstrated that assessing the in-air/on-surface hand movements led to accurate classifications in 84% and 78% of subjects, respectively. Combining both modalities improved the accuracy by another 1% over the evaluation of in-air features alone and provided a medically relevant diagnosis with 85.61% prediction accuracy. Conclusions Assessment of in-air movements during handwriting has a major impact on disease classification accuracy. This study confirms that handwriting can be used as a marker for PD and can be used to advantage in decision support systems for the differential diagnosis of PD.
Analysis of in-air movement in handwriting: A novel marker for Parkinson's disease
S0169260714003216
Background Overall survival (OS) and progression free survival (PFS) are key outcome measures for head and neck cancer as they reflect treatment efficacy, and have implications for patients and health services. The UK has recently developed a series of national cancer audits which aim to estimate survival and recurrence by relying on institutions manually submitting interval data on patient status, a labour-intensive method. However, nationally, data are routinely collected on hospital admissions, surgery, radiotherapy and chemotherapy. We have developed a technique to automate the interpretation of these routine datasets, allowing us to derive patterns of treatment in head and neck cancer patients from routinely acquired data. Methods We identified 122 patients with head and neck cancer and extracted treatment histories from hospital notes to provide a gold standard dataset. We obtained routinely collected local data on inpatient admissions and procedures, chemotherapy and radiotherapy for these patients and analysed them with a computer algorithm which identified relevant time points and then calculated OS and PFS. We validated these by comparison with the gold standard dataset. The algorithm was then optimised to maximise correct identification of each time point and to minimise false identification of recurrence events. Results Of the 122 patients, 82% had locally advanced disease. OS was 88% at 1 year and 77% at 2 years, and PFS was 75% and 66% at 1 and 2 years. 40 patients developed recurrent disease. Our automated method provided an estimated OS of 87% and 77% and PFS of 87% and 78% at 1 and 2 years; 98% and 82% of patients showed good agreement between the automated technique and the gold standard dataset for OS and PFS, respectively (ratio of gold standard to routine intervals between 0.8 and 1.2). The automated technique correctly assigned recurrence in 101 out of 122 (83%) of the patients: 21 of the 40 patients with recurrent disease were correctly identified; 19 were too unwell to receive further treatment and were missed. Of the 82 patients who did not develop a recurrence, 77 were correctly identified and 2 were incorrectly identified as having recurrent disease when they did not. Conclusions We have demonstrated that our algorithm can be used to automate the interpretation of routine datasets to extract survival information for this sample of patients. It currently underestimates recurrence rates due to many patients not being well enough to be treated for recurrent disease. With some further optimisation, this technique could be extended to a national level, providing a new approach to measuring outcomes on a larger scale than is currently possible. This could have implications for healthcare provision and policy for a range of different disease types.
Automated estimation of disease recurrence in head and neck cancer using routine healthcare data
S0169260714003228
With advances in computer technology, virtual screening of small molecules has started to be used in drug discovery. Since thousands of compounds are involved in the early phase of drug discovery, a fast classification method that can distinguish between active and inactive molecules is needed for screening large compound collections. In this study, we used Support Vector Machines (SVM) for this type of classification task. SVM is a powerful classification tool that is becoming increasingly popular in various machine-learning applications. The data sets consist of 631 compounds for the training set and 216 compounds for a separate test set. In the data pre-processing step, the Pearson correlation coefficient was used as a filter to eliminate redundant features. After application of the correlation filter, a single SVM was applied to this reduced data set. Moreover, we investigated the performance of SVM with different feature selection strategies, including SVM–Recursive Feature Elimination, the Wrapper Method and Subset Selection. All feature selection methods generally performed better than a single SVM, and Subset Selection outperformed the other feature selection methods. We tested SVM as a classification tool in a real-life drug discovery problem, and our results revealed that it could be a useful method for classification tasks in the early phase of drug discovery.
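A hedged Python sketch of this workflow on synthetic data is shown below: a Pearson-correlation filter followed by SVM recursive feature elimination (SVM-RFE) with a linear-kernel SVM; the descriptor counts, correlation threshold and kernel settings are assumptions, and the data are generated rather than taken from the study.

```python
# Hedged sketch: correlation filter + SVM-RFE on synthetic "compound descriptor" data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=847, n_features=60, n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=216, random_state=0)

# Correlation filter: greedily drop features highly correlated with ones already kept.
corr = np.corrcoef(X_tr, rowvar=False)
keep = []
for j in range(corr.shape[0]):
    if all(abs(corr[j, k]) < 0.9 for k in keep):
        keep.append(j)
X_tr, X_te = X_tr[:, keep], X_te[:, keep]

# SVM-RFE on the filtered descriptors, then final classification on the test set.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=20).fit(X_tr, y_tr)
y_pred = selector.predict(X_te)
print(f"kept {len(keep)} features after the filter, "
      f"test accuracy with SVM-RFE: {accuracy_score(y_te, y_pred):.3f}")
```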
Drug/nondrug classification using Support Vector Machines with various feature selection strategies
S0169260714003241
Current electrocardiogram (ECG) signal quality assessment studies have aimed to provide a two-level classification: clean or noisy. However, clinical usage demands more specific noise level classification for varying applications. This work outlines a five-level ECG signal quality classification algorithm. A total of 13 signal quality metrics were derived from segments of ECG waveforms, which were labeled by experts. A support vector machine (SVM) was trained to perform the classification, tested on a simulated dataset and validated using data from the MIT-BIH arrhythmia database (MITDB). The simulated training and test datasets were created by selecting clean segments of the ECG in the 2011 PhysioNet/Computing in Cardiology Challenge database, and adding three types of real ECG noise at different signal-to-noise ratio (SNR) levels from the MIT-BIH Noise Stress Test Database (NSTDB). The MITDB was re-annotated for five levels of signal quality. Different combinations of the 13 metrics were trained and tested on the simulated datasets, and the best combination that produced the highest classification accuracy was selected and validated on the MITDB. Performance was assessed using classification accuracy (Ac) and a single-class overlap accuracy (OAc), which treats an individual segment classified into an adjacent class as acceptable. An Ac of 80.26% and an OAc of 98.60% were obtained on the test set by selecting 10 metrics, while an Ac of 57.26% and an OAc of 94.23% were obtained on the unseen MITDB validation data without retraining. With fivefold cross-validation, an Ac of 88.07±0.32% and an OAc of 99.34±0.07% were obtained on the validation folds of the MITDB.
A machine learning approach to multi-level ECG signal quality classification
S0169260714003253
Vascularity evaluation on breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has potential diagnostic value, but it is a time-consuming procedure affected by intra- and inter-observer variability. This study tests the application of a recently published method to reproducibly quantify breast vascularity, and evaluates whether the vascular volume of the cancer-bearing breast, calculated from automatic vascular maps (AVMs), correlates with pathologic tumor response after neoadjuvant chemotherapy (NAC). Twenty-four patients with unilateral locally advanced breast cancer underwent DCE-MRI before and after NAC, 8 responders and 16 non-responders. A validated algorithm, based on multiscale 3D Hessian matrix analysis, provided AVMs and allowed the calculation of vessel volume before the initiation and after the last NAC cycle for each breast. For the cancer-bearing breasts, the difference in vascular volume before and after NAC was compared in responders and non-responders using the Wilcoxon two-sample test. A radiologist evaluated the vascularity on the subtracted images (first enhanced minus unenhanced), before and after treatment, assigning a vascular score for each breast according to the number of vessels with length ≥30mm and maximal transverse diameter ≥2mm. The same evaluation was repeated with the support of the simultaneous visualization of the AVMs. The two evaluations were compared in terms of the mean number of vessels and the mean vascular score per breast, in responders and non-responders, by use of the Wilcoxon two-sample test. For all analyses, the statistical significance level was set at 0.05. For the breasts harboring the cancer, evidence of a difference in vascular volume before and after NAC for responders (median=1.71cc) and non-responders (median=0.41cc) was found (p =0.003). A significant difference was also found in the number of vessels (p =0.03) and vascular score (p =0.02) before or after NAC, according to the evaluation supported by the AVMs. The encouraging, although preliminary, results of this study suggest the use of AVMs as a new biomarker to evaluate pathologic response after NAC, and also support their application in other breast DCE-MRI vessel analyses that await a reliable quantification method.
A new algorithm for automatic vascular mapping of DCE-MRI of the breast: Clinical application of a potential new biomarker
S0169260714003435
Cell counting is one of the basic needs of most biological experiments. Numerous methods and systems have been studied to improve the reliability of counting. However, at present, manual cell counting performed with a hemocytometer still represents the gold standard, despite several problems that limit the reproducibility and repeatability of the counts and ultimately jeopardize their reliability. We present our own approach, based on image processing techniques, to improve counting reliability. It works in two stages: first building a high-resolution image of the hemocytometer's grid, then counting the live and dead cells by tagging the image with flags of different colours. In particular, we introduce GridMos (http://sourceforge.net/p/gridmos), a fully-automated mosaicing method to obtain a mosaic representing the whole hemocytometer's grid. In addition to offering more significant statistics, the mosaic "freezes" the culture status, thus permitting analysis by more than one operator. Finally, the resulting mosaic can be tagged using an image editor, thus markedly improving counting reliability. The experiments performed confirm the improvements brought about by the proposed counting approach in terms of both reproducibility and repeatability, and suggest the use of a mosaic of an entire hemocytometer's grid, labelled through an image editor, as the most likely candidate for a new gold standard method in cell counting.
Improving reliability of live/dead cell counting through automated image mosaicing
S0169260714003447
Since falls are a major public health problem in an aging society, there is considerable demand for low-cost fall detection systems. One of the main reasons for the non-acceptance of currently available solutions by seniors is that fall detectors using only inertial sensors generate too many false alarms. This means that some daily activities are erroneously signaled as falls, which in turn leads to frustration of the users. In this paper we present how to design and implement a low-cost system for reliable fall detection with a very low false alarm ratio. Fall detection is performed on the basis of accelerometric data and depth maps. A tri-axial accelerometer is used to indicate a potential fall as well as to indicate whether the person is in motion. If the measured acceleration is higher than an assumed threshold value, the algorithm extracts the person, calculates the features and then executes the SVM-based classifier to authenticate the fall alarm. It is a 24/7/365 embedded system that permits unobtrusive fall detection and preserves the privacy of the user.
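The two-stage decision logic can be sketched as follows in Python: an acceleration-magnitude threshold triggers a (pre-trained) SVM that authenticates the alarm from depth-derived features; the threshold value, the feature set and the tiny training set are assumptions made for illustration.

```python
# Simplified sketch of the two-stage trigger-then-classify logic:
# accelerometer threshold first, SVM on depth-map features second.
import numpy as np
from sklearn.svm import SVC

ACC_THRESHOLD_G = 2.5                        # assumed impact threshold in g

# Toy training set: rows = [head_height_m, bbox_width_to_height, torso_tilt_deg]
X_train = np.array([[1.6, 0.4, 5], [1.5, 0.5, 10], [0.3, 2.0, 85], [0.4, 1.8, 80]])
y_train = np.array([0, 0, 1, 1])             # 0 = daily activity, 1 = fall
svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

def process_sample(acc_xyz_g, depth_features):
    """Return True only if the accelerometer trigger AND the SVM agree on a fall."""
    magnitude = float(np.linalg.norm(acc_xyz_g))
    if magnitude < ACC_THRESHOLD_G:          # no violent motion -> no alarm to verify
        return False
    return bool(svm.predict([depth_features])[0] == 1)

if __name__ == "__main__":
    print(process_sample([0.1, 0.9, 0.2], [1.6, 0.4, 4]))    # walking -> False
    print(process_sample([1.5, 2.8, 1.0], [0.35, 1.9, 82]))  # impact + lying -> True
```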
Human fall detection on embedded platform using depth maps and wireless accelerometer
S0169260714003459
Telecare medicine information systems provide a communication platform for accessing remote medical resources through public networks, and help health care workers and medical personnel to rapidly make correct clinical decisions and treatments. An authentication scheme for data exchange in telecare medicine information systems enables legal users in hospitals and medical institutes to establish a secure channel and exchange electronic medical records or electronic health records securely and efficiently. This investigation develops an efficient and secure verifier-based three-party authentication scheme using extended chaotic maps for data exchange in telecare medicine information systems. The proposed scheme does not require the server's public keys and avoids the time-consuming modular exponentiations and elliptic-curve scalar multiplications used in previous related approaches. Additionally, the proposed scheme is proven secure in the random oracle model, and achieves the lower bounds on the number of messages and communication rounds. Compared to related verifier-based approaches, the proposed scheme not only possesses higher security, but also has lower computational cost and fewer transmissions.
Verifier-based three-party authentication schemes using extended chaotic maps for data exchange in telecare medicine information systems
S0169260714003472
Many children with motor impairments cannot participate in the games and play activities that contribute to their development. Currently, among commercial computer games there are few software options and sufficiently flexible access devices to meet the needs of this group of children. In this study, a peripheral access device and a 3D computerized game that do not require the actions of dragging, clicking, or activating various keys at the same time were developed. The peripheral access device consists of a webcam and a supervisory system that processes the images. This method provides a field of action that can be adjusted to various types of motor impairments. To analyze the sensitivity of the commands, a virtual course was developed using the scenario of a path of straight lines and curves. A volunteer with good ability in virtual games performed a short training with the virtual course and, after 15min of training, obtained similar results with a standard keyboard and the adapted peripheral device. A 3D game set in the Amazon forest was developed using the Blender 3D tool. This free software was used to model the characters and scenarios. To evaluate the usability of the 3D game, the game was tested by 20 volunteers without motor impairments (group A) and 13 volunteers with severe motor limitations of the upper limbs (group B). All the volunteers (groups A and B) could easily execute all the actions of the game using the adapted peripheral device. The majority evaluated the usability questions positively and expressed their satisfaction. The computerized game coupled to the adapted device will offer the option of leisure and learning to people with severe motor impairments who previously lacked this possibility. It also provides equality in this activity for all users.
The design and evaluation of a peripheral device for use with a computer game intended for children with motor disabilities
S0169260714003484
In PET/CT thoracic imaging, respiratory motion reduces image quality. A solution is to perform respiratory-gated PET acquisitions. The aim of this study was to generate clinically realistic Monte-Carlo respiratory PET data, obtained using the 4D-NCAT numerical phantom and the GATE simulation tool, to assess the impact of respiratory motion and respiratory-motion compensation in PET on lesion detection and volume measurement. To obtain reconstructed images as close as possible to those obtained in clinical conditions, particular attention was paid to applying to the simulated data the same correction and reconstruction processes as those applied to real clinical data. The simulations required 140,000 CPU hours and generated 1.5 TB of data (98 respiratory-gated and 49 ungated scans). Calibration phantom and patient images reconstructed from the simulated data were visually and quantitatively very similar to those obtained in clinical studies. Lesion detectability was highest for the best trade-off between limiting lesion movement (compared to ungated acquisitions) and preserving image statistics (sampling of the respiratory cycle into 3 frames). We then compared the lesion volumes measured on conventional PET acquisitions versus respiratory-gated acquisitions, using an automatic segmentation method and a 40%-threshold approach. The time-consuming initial manual exclusion of noisy structures needed with the 40%-threshold was not necessary when the automatic method was used. Lesion detectability, along with the accuracy of tumor volume estimates, was largely improved in the gated compared with the ungated PET images.
Monte-Carlo simulations of clinically realistic respiratory gated 18F-FDG PET: Application to lesion detectability and volume measurements
S0169260714003496
Virtual colon flattening (VF) is a minimally invasive viewing mode used to detect colorectal polyps on the colonic inner surface in virtual colonoscopy. Compared with conventional colonoscopy, inspecting a flattened colonic inner surface is faster and results in fewer uninspected regions. Unfortunately, the deformation distortions of the flattened colonic inner surface impede the performance of VF. Conventionally, these deformation distortions are corrected by using the colonic inner surface. However, colonic curvatures and haustral folds make correcting deformation distortions using only the colonic inner surface difficult. Therefore, we propose a VF method that is based on the colonic outer surface. The proposed method includes two novel algorithms, namely, a colonic outer surface extraction algorithm and a colonic outer surface-based distortion correction algorithm. Sixty scans involving 77 annotated polyps were used for validation. The flattened colons were independently inspected by three operators, and the results were compared with three existing VF methods. The correct detection rates of the proposed method and the three existing methods were 79.6%, 67.1%, 71.9%, and 72.7%, respectively, and the false positives per scan were 0.16, 0.32, 0.21, and 0.26, respectively. The experimental results demonstrate that our proposed method has better performance than existing methods that are based on the colonic inner surface.
Virtual colon flattening method based on colonic outer surface
S0169260714003502
Background Developing countries are confronting a steady growth in the prevalence of infectious diseases. Mobile technologies are widely available and can play an important role in health care at the regional, community, and individual levels. Although laboratories are usually able to complete a requested blood test and produce the results within two days of receiving the samples, the time for the results to be delivered back to clinics is quite variable, depending on how often the motorbike transport makes trips between the clinic and the laboratory. Objective In this study, we sought to assess the factors facilitating and hindering the adoption of mobile devices in Swazi healthcare by evaluating the end-users of the LabPush system. Methods A qualitative study with semi-structured, in-depth, one-on-one interviews was conducted over a two-month period (July–August 2012). Purposive sampling was used; participants were those operating and using the LabPush system at the remote clinics and at the national laboratory, as well as the supervisors of users in Swaziland. Interview questions focused on the perceived ease of use and usefulness of the system. All interviews were recorded and then transcribed. Results The study's primary focus was on reducing turnaround time (TAT), promoting prompt patient care, and reducing the bouncing and defaulting of patients, which were long-standing challenges for the clinicians. The results revealed several barriers to and facilitators of the adoption of mobile devices by healthcare providers in Swaziland. The themes, sorted from facilitators to barriers, were: shortens TAT, technical support, patient-centered care, mindset, improved communication, missing reports, workload, workflow, security of the smart phone, human error, and ownership. Conclusion From the end-users' perspective, prompt patient care, reduced bouncing of patients, technical support, better communication, willing participants and social influence were facilitators of the adoption of m-health in Swazi healthcare.
LabPush: A pilot study of providing remote clinics with laboratory results via short message service (SMS) in Swaziland, Africa – A qualitative study
S0169260714003514
This research examines the precision of an adaptive neuro-fuzzy computing technique in estimating the anti-obesity effect of a potent medicinal plant in a clinical dietary intervention. Although a number of statistical approaches, such as SPSS-based analyses, have been proposed for modeling the estimation of anti-obesity effects in terms of reduction in body mass index (BMI), body fat percentage, and body weight, these models have disadvantages, such as being very demanding in terms of calculation time. Because this is a crucial problem, in this paper a process was constructed that models the anti-obesity activity of caraway (Carum carvi), a traditional medicine, in obese women using the adaptive neuro-fuzzy inference system (ANFIS) method. The ANFIS results are compared with support vector regression (SVR) results using the root-mean-square error (RMSE) and the coefficient of determination (R²). The experimental results show that the ANFIS approach improves predictive accuracy and the capability of generalization. The following statistical characteristics were obtained for BMI-loss estimation: RMSE = 0.032118 and R² = 0.9964 in ANFIS testing, versus RMSE = 0.47287 and R² = 0.361 in SVR testing. For fat-loss estimation: RMSE = 0.23787 and R² = 0.8599 in ANFIS testing, versus RMSE = 0.32822 and R² = 0.7814 in SVR testing. For weight-loss estimation: RMSE = 0.00000035601 and R² = 1 in ANFIS testing, versus RMSE = 0.17192 and R² = 0.6607 in SVR testing. The method can therefore be applied for practical purposes.
Appraisal of adaptive neuro-fuzzy computing technique for estimating anti-obesity properties of a medicinal plant
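The abstract above compares two regressors with RMSE and the coefficient of determination. The following minimal sketch shows how such a comparison could be computed; the synthetic data, the chosen predictors and the use of SVR alone (ANFIS is not available in scikit-learn) are illustrative assumptions, not the study's data or models.

```python
# Sketch: comparing a regressor against held-out data with RMSE and R^2,
# as done in the abstract above. Data and model are placeholders only.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(120, 3))                               # hypothetical predictors
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 120)  # hypothetical response (e.g., BMI loss)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = SVR(kernel="rbf").fit(X_tr, y_tr)      # SVR shown as the comparator model
pred = model.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5   # root-mean-square error
r2 = r2_score(y_te, pred)                      # coefficient of determination
print(f"RMSE = {rmse:.4f}, R^2 = {r2:.4f}")
```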
S0169260714003526
Mechanical stimuli play a significant role in the process of long bone development, as evidenced by clinical observations and in vivo studies. Up to now, approaches to understanding these stimuli have been limited to the first stages of epiphyseal development. Furthermore, the mechanical behavior of the growth plate has not been widely studied. In order to better understand mechanical influences on bone growth, we used the Carter and Wong biomechanical approximation to analyze growth plate mechanical behavior and to explore stress patterns for different morphological stages of the growth plate. To the best of our knowledge, this work is the first attempt to study the stress distribution on the growth plate during the different possible stages of bone development, from gestation to adolescence. Stress distribution analysis on the epiphysis and growth plate was performed using axisymmetric (3D) finite element analysis on a simplified generic epiphyseal geometry, with a linear elastic model as a first approximation. We took into account different growth plate locations, morphologies and widths, as well as different epiphyseal developmental stages. We found that the stress distribution during bone development establishes osteogenic index patterns that appear to influence the growth of epiphyseal structures locally and coincide with the histological arrangement of the growth plate.
Growth plate stress distribution implications during bone development: A simple framework computational approach
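The Carter and Wong approach referenced above scores candidate ossification sites with an osteogenic index that combines octahedral shear stress and hydrostatic (dilatational) stress. The sketch below is a minimal illustration of such an index for a single stress state; the empirical weight k and the example stress tensor are assumptions for illustration, not values from the study.

```python
# Sketch: a Carter-Wong-style osteogenic index for one stress state.
# The weight k and the example stress tensor are illustrative assumptions.
import numpy as np

def osteogenic_index(stress, k=0.5):
    """stress: 3x3 Cauchy stress tensor. Returns tau_oct + k * sigma_hyd."""
    s1, s2, s3 = np.linalg.eigvalsh(stress)          # principal stresses
    tau_oct = np.sqrt((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 3.0  # octahedral shear
    sigma_hyd = (s1 + s2 + s3) / 3.0                 # hydrostatic (dilatational) stress
    return tau_oct + k * sigma_hyd

# Hypothetical compressive stress state at a growth-plate element (MPa, for illustration)
sigma = np.array([[-0.8,  0.1, 0.0],
                  [ 0.1, -1.2, 0.0],
                  [ 0.0,  0.0, -0.5]])
print(f"osteogenic index: {osteogenic_index(sigma):.3f} MPa")
```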
S0169260714003538
Non-invasive treatment of neurodegenerative diseases is particularly challenging in Western countries, where the population age is increasing. In this work, magnetic field propagation in the human head is modelled with the Finite-Difference Time-Domain (FDTD) method, taking into account the specific characteristics of Transcranial Magnetic Stimulation (TMS) in neurodegenerative diseases. The model uses a realistic high-resolution three-dimensional human head mesh. The numerical method is applied to the analysis of the magnetic radiation distribution in the brain using two realistic magnetic source models: a circular coil and a figure-8 coil, both commonly employed in TMS. The complete model was applied to the study of magnetic stimulation in Alzheimer's and Parkinson's diseases (AD, PD). The results show the electric field distribution when magnetic stimulation is delivered to the brain areas of specific interest for each disease. The current approach therefore has high potential for establishing the still underdeveloped TMS dosimetry in its emerging application to AD and PD.
FDTD-based Transcranial Magnetic Stimulation model applied to specific neurodegenerative disorders
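To illustrate what the FDTD method named above actually iterates, the sketch below shows the core Yee update loop in one dimension and free space. This is a generic textbook-style illustration under simplifying assumptions; the study's 3D head mesh, tissue properties and coil sources are not reproduced here.

```python
# Sketch: the core FDTD (Yee) leapfrog update in 1D free space with a soft
# Gaussian source. Grid size, time step and source are illustrative choices.
import numpy as np

c0 = 299_792_458.0            # speed of light (m/s)
mu0, eps0 = 4e-7 * np.pi, 8.854e-12
nz, nt = 400, 800             # grid points, time steps
dz = 1e-3                     # spatial step (m)
dt = dz / (2 * c0)            # time step satisfying the Courant condition

Ex = np.zeros(nz)             # electric field samples
Hy = np.zeros(nz - 1)         # magnetic field samples, staggered half a cell

for n in range(nt):
    Hy += (dt / (mu0 * dz)) * (Ex[:-1] - Ex[1:])        # update H from the curl of E
    Ex[1:-1] += (dt / (eps0 * dz)) * (Hy[:-1] - Hy[1:]) # update E from the curl of H
    Ex[nz // 4] += np.exp(-((n - 60) / 15.0) ** 2)      # soft Gaussian pulse source

print(f"peak |Ex| after {nt} steps: {np.abs(Ex).max():.3e}")
```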
S0169260714003708
The purpose of this study was to develop automatic classifiers to simplify the clinical use and increase the accuracy of the forced oscillation technique (FOT) in the categorisation of airway obstruction level in patients with chronic obstructive pulmonary disease (COPD). The data consisted of FOT parameters obtained from 168 volunteers (42 healthy and 126 COPD subjects with four different levels of obstruction). The first part of this study showed that FOT parameters do not provide adequate accuracy in identifying COPD subjects in the first levels of obstruction, as well as in discriminating between close levels of obstruction. In the second part of this study, different supervised machine learning (ML) techniques were investigated, including k-nearest neighbour (KNN), random forest (RF) and support vector machines with linear (SVML) and radial basis function kernels (SVMR). These algorithms were applied only in situations where high categorisation accuracy [area under the Receiver Operating Characteristic curve (AUC) ≥ 0.9] was not achieved with the FOT parameter alone. It was observed that KNN and RF classifiers improved categorisation accuracy. Notably, in four of the six cases studied, an AUC ≥ 0.9 was achieved. Even in situations where an AUC ≥ 0.9 was not achieved, there was a significant improvement in categorisation performance (AUC ≥ 0.83). In conclusion, machine learning classifiers can help in the categorisation of COPD airway obstruction. They can assist clinicians in tracking disease progression, evaluating the risk of future disease exacerbations and guiding therapy.
Machine learning algorithms and forced oscillation measurements to categorise the airway obstruction severity in chronic obstructive pulmonary disease
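The classifier families named above (KNN, RF, linear and RBF-kernel SVM) can be compared with cross-validated AUC along the lines of the minimal sketch below. The synthetic two-class data stand in for FOT parameters of healthy versus obstructed subjects; hyperparameters are illustrative, not those of the study.

```python
# Sketch: cross-validated AUC for KNN, RF, linear-SVM and RBF-SVM classifiers.
# Synthetic data stand in for FOT parameters; settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=168, n_features=8, n_informative=5, random_state=0)

models = {
    "KNN":  make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "RF":   RandomForestClassifier(n_estimators=300, random_state=0),
    "SVML": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "SVMR": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```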
S0169260714003836
Neuropsychological assessment tests play an important role in the early detection of dementia. We therefore designed and implemented a test battery for mobile devices that can be used for mobile cognitive screening (MCS). This battery consists of 33 questions from 14 types of tests for the assessment of 8 different cognitive functions: arithmetic, orientation, abstraction, attention, memory, language, visual, and executive functions. The test battery is implemented as an application for mobile devices running Android OS. In order to validate the effectiveness of the neuropsychological test battery, it was applied to a group of 23 elderly persons. Within this group, 9 (age 81.78±4.77 years) were healthy and 14 (age 72.55±9.95 years) had already been diagnosed with dementia. The education levels of the control (healthy) and dementia groups were comparable, with 13.66±5.07 and 13.71±4.14 years of schooling, respectively. For comparison, a validated paper-and-pencil test (the Montreal Cognitive Assessment, MoCA) was administered along with the proposed MCS battery. The proposed test was able to differentiate individuals in the control and dementia groups for executive, visual, memory, attention, and orientation functions with statistical significance (p < 0.05). Results for the remaining functions (language, abstraction, and arithmetic) were statistically insignificant (p > 0.05). The results of the MCS and MoCA were compared, and individuals' scores on the two tests were correlated (r² = 0.57).
A mobile application for cognitive screening of dementia
S0169260714003848
Background and objectives Document annotation is a key task in the development of Text Mining methods and applications. High-quality annotated corpora are invaluable, but their preparation requires a considerable amount of resources and time. Although existing annotation tools offer good user interfaces to domain experts, their project management and quality control abilities are still limited. Therefore, the current work introduces Marky, a new Web-based document annotation tool equipped to manage multi-user and iterative projects and to evaluate annotation quality throughout the project life cycle. Methods At its core, Marky is a Web application based on the open source CakePHP framework. The user interface relies on HTML5 and CSS3 technologies. The Rangy library assists in the browser-independent implementation of common DOM range and selection tasks, and Ajax and JQuery technologies are used to enhance user–system interaction. Results Marky provides solid management of inter- and intra-annotator work. Most notably, its annotation tracking system supports systematic and on-demand agreement analysis and annotation amendment. Each annotator may work on documents as usual, but all the annotations made are saved by the tracking system and may be further compared. Thus, the project administrator is able to evaluate annotation consistency among annotators and across rounds of annotation, while annotators are able to reject or amend subsets of annotations made in previous rounds. As a side effect, the tracking system minimises resource and time consumption. Conclusions Marky is a novel environment for managing multi-user and iterative document annotation projects. Compared to other tools, Marky offers a similarly intuitive visual annotation experience while providing unique means to minimise annotation effort and enforce annotation quality, and therefore corpus consistency. Marky is freely available for non-commercial use at http://sing.ei.uvigo.es/marky.
Marky: A tool supporting annotation consistency in multi-user and iterative document annotation projects
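The agreement analysis mentioned above is, at its simplest, a pairwise comparison of two annotators' labels. The sketch below shows one common chance-corrected agreement measure (Cohen's kappa) as a generic illustration of that step; Marky itself is a PHP web application, and the token-level labels here are made-up placeholders.

```python
# Sketch: pairwise inter-annotator agreement with Cohen's kappa.
# Labels are hypothetical token-level tags from two annotators on one document.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["O", "B", "I", "O", "O", "B", "O", "O", "B", "I"]
annotator_b = ["O", "B", "I", "O", "B", "B", "O", "O", "B", "O"]

kappa = cohen_kappa_score(annotator_a, annotator_b)  # chance-corrected agreement
print(f"Cohen's kappa between annotators: {kappa:.2f}")
```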
S0169260714003861
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems in which the temperature varies greatly, such as laser thawing of frozen tissues. A numerical simulation was developed that uses the Monte Carlo method for photon transport to simulate the thermal response of a system whose optical and thermal properties may be temperature-dependent. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation in a feedback loop that selects local properties based on the current temperatures at each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. The simulation was validated by comparison with established Monte Carlo simulations using constant properties and with the Beer–Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can exceed 100 K. This simulation returns more accurate estimates of optical irradiation absorption in a material that undergoes a large change in temperature, leading to better thermal predictions in living tissues and supporting improved planning and better experimental and procedural outcomes.
Monte Carlo method for photon heating using temperature-dependent optical properties
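The key idea above is that optical properties are looked up from the local temperature as the photon advances. The sketch below illustrates that lookup in an absorption-only 1D slab, marching voxel by voxel so that properties stay locally constant; the property law mu_a(T), the temperature profile and all numbers are assumptions, and the full coupling to a heat-transfer solver is not reproduced.

```python
# Sketch: absorption-only Monte Carlo walk through a 1D slab, with the absorption
# coefficient looked up from the local temperature in every voxel (segmented steps).
# mu_a(T), the temperature field and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_vox, dx, n_photons = 200, 5e-5, 20_000         # voxels, voxel size (m), photons
temperature = np.linspace(250.0, 330.0, n_vox)   # hypothetical temperature profile (K)
absorbed = np.zeros(n_vox)                       # photons absorbed per voxel

def mu_a(T):
    """Hypothetical temperature-dependent absorption coefficient (1/m)."""
    return 600.0 + 5.0 * (T - 273.15)

for _ in range(n_photons):
    for k in range(n_vox):                       # march voxel by voxel
        p_abs = 1.0 - np.exp(-mu_a(temperature[k]) * dx)   # local absorption probability
        if rng.random() < p_abs:
            absorbed[k] += 1                     # deposit the photon's energy here
            break                                # photon terminated; otherwise it transmits

print(f"absorbed fraction: {absorbed.sum() / n_photons:.3f}, peak voxel: {absorbed.argmax()}")
```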
S0169260714003873
Registration of pre-clinical images to physical space is indispensable for computer-assisted endoscopic interventions in operating rooms. Electromagnetically navigated endoscopic interventions are increasingly performed in current diagnosis and treatment. Such interventions use an electromagnetic tracker with a miniature sensor, usually attached at the endoscope's distal tip, to track endoscope movements in real time in the pre-clinical image space. Spatial alignment between the electromagnetic tracker (or sensor) and the pre-clinical images must be performed to navigate the endoscope to target regions. This paper proposes an adaptive marker-free registration method that uses a multiple point selection strategy. This method seeks to relax the assumption that the endoscope is operated along the centerline of an intraluminal organ, which is easily violated during interventions. We introduce an adaptive strategy that generates multiple candidate points from the sensor measurements and the endoscope tip-center calibration. From these generated points, we adaptively choose the optimal point, i.e., the one closest to its assigned centerline of the hollow organ, to perform the registration. The experimental results demonstrate that, compared to currently available methods, our adaptive strategy significantly reduced the target registration error from 5.32 to 2.59 mm in static phantom validation, and from at least 7.58 mm to 4.71 mm in dynamic phantom validation.
Adaptive marker-free registration using a multiple point strategy for real-time and robust endoscope electromagnetic navigation
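The selection step described above, choosing the candidate point closest to the organ centerline, can be illustrated with the minimal sketch below. The candidate points and the sampled centerline are made-up examples; the actual candidate generation from sensor measurements and tip-center calibration is not reproduced.

```python
# Sketch: pick the candidate point with the smallest distance to a sampled centerline.
# Candidate points and centerline samples below are hypothetical.
import numpy as np

def closest_to_centerline(candidates, centerline):
    """candidates: (N, 3) candidate tip positions; centerline: (M, 3) sampled centerline."""
    # pairwise distances between every candidate and every centerline sample
    d = np.linalg.norm(candidates[:, None, :] - centerline[None, :, :], axis=2)
    dist_to_line = d.min(axis=1)          # each candidate's distance to the centerline
    best = int(dist_to_line.argmin())
    return candidates[best], float(dist_to_line[best])

cands = np.array([[10.2, 4.1, 33.0], [9.8, 3.7, 32.5], [11.0, 5.0, 34.2]])   # mm
cl = np.stack([np.linspace(9, 12, 50), np.linspace(3, 6, 50), np.linspace(32, 35, 50)], axis=1)

point, dist = closest_to_centerline(cands, cl)
print(f"selected point {point} at {dist:.2f} mm from the centerline")
```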
S0169260714003885
We present a new SAS macro %pshreg that can be used to fit a proportional subdistribution hazards model for survival data subject to competing risks. Our macro first modifies the input data set appropriately and then applies SAS's standard Cox regression procedure, PROC PHREG, to the modified data set, using weights and a counting-process style of specifying survival times. The modified data set can also be used to estimate cumulative incidence curves for the event of interest. The application of PROC PHREG has several advantages; e.g., it directly enables the user to apply the Firth correction, which has been proposed as a solution to the problem of undefined (infinite) maximum likelihood estimates in Cox regression, frequently encountered in small-sample analyses. Deviation from proportional subdistribution hazards can be detected by inspecting Schoenfeld-type residuals and testing the correlation of these residuals with time, or by including interactions of covariates with functions of time. We illustrate the application of these extended methods for competing-risks regression using our macro, which is freely available at http://cemsiis.meduniwien.ac.at/en/kb/science-research/software/statistical-software/pshreg, by means of the analysis of a real chronic kidney disease study. We discuss differences in the features and capabilities of %pshreg and the recent (January 2014) SAS PROC PHREG implementation of proportional subdistribution hazards modelling.
PSHREG: A SAS macro for proportional and nonproportional subdistribution hazards regression
S0169260714003897
Metabolic Engineering (ME) aims to design microbial cell factories for the production of valuable compounds. In this endeavor, one important task is the search for the most suitable heterologous pathway(s) to add to the selected host. Different algorithms have been developed in the past towards this goal, following distinct approaches spanning constraint-based modeling, graph-based methods and knowledge-based systems based on chemical rules. While some of these methods search for pathways optimizing specific objective functions, here the focus is on methods that enumerate the pathways able to convert a set of source compounds into desired targets, and on their subsequent evaluation according to different criteria. Two pathway enumeration algorithms based on (hyper)graph representations are selected as the most promising and analyzed in more detail: the Solution Structure Generation and the Find Path algorithms. Their capabilities and limitations when designing novel heterologous pathways are evaluated by applying these methods to three case studies of synthetic ME related to the production of non-native compounds in E. coli and S. cerevisiae: 1-butanol, curcumin and vanillin. Some targeted improvements are implemented, extending both methods to address limitations that impair their scalability and improving their ability to extract potential pathways from large-scale databases. In all case studies, the algorithms were able to find already described pathways for the production of the target compounds, as well as alternative pathways that may represent novel ME solutions after further evaluation.
Development and application of efficient pathway enumeration algorithms for metabolic engineering applications
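Pathway enumeration of the kind discussed above amounts to finding all routes from source compounds to a target through a reaction network. The sketch below reduces this to simple-path enumeration on a toy directed compound graph (one edge per reaction, using the well-known clostridial 1-butanol route as the example); the actual SSG and Find Path algorithms operate on (hyper)graph representations that also track co-substrates, which this illustration ignores.

```python
# Sketch: enumerating candidate routes from a source compound to a target compound
# in a simplified directed compound graph. The toy network is an illustrative assumption.
import networkx as nx

g = nx.DiGraph()
g.add_edge("acetyl-CoA", "acetoacetyl-CoA", reaction="thiolase")
g.add_edge("acetoacetyl-CoA", "3-hydroxybutyryl-CoA", reaction="hbd")
g.add_edge("3-hydroxybutyryl-CoA", "crotonyl-CoA", reaction="crt")
g.add_edge("crotonyl-CoA", "butyryl-CoA", reaction="ter")
g.add_edge("butyryl-CoA", "butanal", reaction="adhE2 (aldehyde step)")
g.add_edge("butanal", "1-butanol", reaction="adhE2 (alcohol step)")
g.add_edge("acetyl-CoA", "malonyl-CoA", reaction="acc")   # branch irrelevant to this target

for path in nx.all_simple_paths(g, source="acetyl-CoA", target="1-butanol", cutoff=8):
    steps = [g.edges[u, v]["reaction"] for u, v in zip(path, path[1:])]
    print(" -> ".join(path))
    print("   via:", ", ".join(steps))
```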
S0169260714003903
Interest in dermoscopy imaging has increased significantly in recent years, and skin lesion images are nowadays routinely acquired for a number of skin disorders. An important finding in the assessment of skin lesion severity is the existence of dark dots and globules, which are hard to locate and count using existing image software tools. In this work we present a novel methodology for detecting, segmenting and counting dark dots and globules in dermoscopy images. Segmentation is performed using a multi-resolution approach based on inverse non-linear diffusion. Subsequently, a number of features are extracted from the segmented dots/globules, and their diagnostic value in the automatic classification of dermoscopy images of skin lesions into melanoma and non-malignant nevus is evaluated. The proposed algorithm is applied to a number of images of skin lesions with known histopathology. Results show that the proposed algorithm is very effective in automatically segmenting dark dots and globules. Furthermore, it was found that the features extracted from the segmented dots/globules can enhance the performance of classification algorithms that discriminate between malignant and benign skin lesions when they are combined with other region-based descriptors.
Enhancing classification accuracy utilizing globules and dots features in digital dermoscopy
S0169260714003915
In this paper, the problem of predicting blood glucose concentrations (BG) for the treatment of patients with type 1 diabetes is addressed. Predicting BG is of very high importance, as most treatments, which consist of exogenous insulin injections, rely on the availability of BG predictions. Many models that can be used for predicting BG are available in the literature. However, it is widely acknowledged that it is almost impossible to perfectly model blood glucose dynamics while still being able to identify model parameters using only blood glucose measurements. The main contribution of this work is to propose a simple and identifiable linear dynamical model, which is based on the static prediction model of standard therapy. It is shown that the model parameters are intrinsically correlated with physician-set therapy parameters, and that the reduction in the number of model parameters to identify leads to inferior data fits but to equivalent or slightly improved prediction capabilities compared with state-of-the-art models: a sign of an appropriate model structure and superior reliability. The validation of the proposed dynamic model is performed using data from the UVa simulator and real clinical data, and potential uses of the proposed model for state estimation and BG control are discussed.
A therapy parameter-based model for predicting blood glucose concentrations in patients with type 1 diabetes
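As a loose illustration of the kind of simple, input-driven linear dynamics discussed above, the sketch below simulates BG as a discrete-time state driven by decaying insulin and carbohydrate actions. The state structure, gains and time constants are illustrative assumptions only, not the identified therapy-parameter-based model of the paper.

```python
# Sketch: a minimal discrete-time linear BG model driven by insulin and carbs.
# All parameters and initial conditions are hypothetical.
import numpy as np

dt = 5.0                      # sampling period (min)
horizon = 60                  # number of steps (5 h)
isf = 2.0                     # hypothetical insulin effect gain (mg/dL per U of active insulin)
csf = 0.3                     # hypothetical carb effect gain (mg/dL per g of active carbs)
tau_i, tau_c = 60.0, 40.0     # first-order decay times of insulin / carb action (min)

bg = np.empty(horizon); bg[0] = 150.0      # initial BG (mg/dL)
insulin_on_board = 3.0                     # U, e.g. a meal bolus at time 0
carbs_on_board = 40.0                      # g, the corresponding meal

for k in range(horizon - 1):
    # BG changes proportionally to the currently active insulin and carbs
    bg[k + 1] = bg[k] + dt * (csf * carbs_on_board / tau_c - isf * insulin_on_board / tau_i)
    # both inputs decay with first-order dynamics
    insulin_on_board *= np.exp(-dt / tau_i)
    carbs_on_board *= np.exp(-dt / tau_c)

print(f"predicted BG after {horizon * dt:.0f} min: {bg[-1]:.1f} mg/dL")
```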
S0169260714003927
Volume is one of the most important features for the characterization of a tumour on a macroscopic scale. It is often used to assess the effectiveness of care treatments, making its correct evaluation a crucial issue for patient care. Similarly, volume is a key feature on a microscopic scale. Multicellular cancer spheroids are 3D tumour models widely employed in pre-clinical studies to test the effects of drugs and radiotherapy treatments. Very few methods have been proposed to estimate the tumour volume from a 2D projection of multicellular spheroids, and even fewer have been designed to provide a 3D reconstruction of the tumour shape. In this work, we propose Reconstruction and Visualization from a Single Projection (ReViSP), an automatic method conceived to reconstruct the 3D surface and estimate the volume of single cancer multicellular spheroids, or even of spheroid cultures. As input, ReViSP requires only one 2D projection, which can be a widefield microscope image. We assessed the effectiveness of our method by comparing it with other approaches. For this purpose, we used a new strategy that allowed us to achieve accurate volume measurements based on the analysis of home-made 3D objects built to mimic the spheroid morphology. The results confirmed the effectiveness of our method for both 3D reconstruction and volume assessment. The ReViSP software is distributed as an open source tool.
Cancer multicellular spheroids: Volume assessment from a single 2D projection
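To make the single-projection idea above concrete, the sketch below estimates volume from one binary 2D mask by treating each image row as a disk of revolution. This is only the classical solid-of-revolution approximation under an assumed rotational symmetry; it is not the ReViSP reconstruction itself, and the synthetic mask is a placeholder.

```python
# Sketch: volume from a single binary projection via the disk (solid-of-revolution)
# approximation. The spherical test mask checks the estimate against the analytical volume.
import numpy as np

def volume_from_mask(mask, pixel_size_um=1.0):
    """mask: 2D boolean array (True inside the spheroid projection)."""
    widths = mask.sum(axis=1) * pixel_size_um                    # projected width of each row (um)
    radii = widths / 2.0
    return float(np.sum(np.pi * radii ** 2) * pixel_size_um)     # sum of per-row disk volumes (um^3)

# Synthetic test: the projection of a sphere of radius 50 px should give ~(4/3)*pi*50^3
yy, xx = np.mgrid[-60:61, -60:61]
mask = xx ** 2 + yy ** 2 <= 50 ** 2
est = volume_from_mask(mask)
true = 4.0 / 3.0 * np.pi * 50 ** 3
print(f"estimated {est:.3e} vs analytical {true:.3e} ({100 * est / true:.1f}%)")
```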
S0169260714003939
The prediction of the number of clusters in a dataset, in particular for microarrays, is a fundamental task in biological data analysis, usually performed via validation measures. Unfortunately, it has received very little attention, and there is a growing need for software tools/libraries dedicated to it. Here we present ValWorkBench, a software library consisting of eleven well-known validation measures, together with novel heuristic approximations for some of them. The main objective of this paper is to provide the interested researcher with the full software documentation of an open source cluster validation platform whose main features are that it is easily extendible in a homogeneous way and that it offers software components that can be readily re-used. Consequently, the focus of the presentation is on the architecture of the library, since it provides an essential map that can be used to access the full software documentation, which is available at the supplementary material website [1]. The mentioned main features of ValWorkBench are also discussed and exemplified, with emphasis on software abstraction design and re-usability. A comparison with existing cluster validation software libraries, mainly in terms of the mentioned features, is also offered. It suggests that ValWorkBench is a much-needed contribution to the microarray software development/algorithm engineering community. For completeness, it is important to mention that previous accurate algorithmic experimental analyses of the relative merits of each of the implemented measures [19,23,25], carried out specifically on microarray data, give researchers in the microarray community useful insights into the effectiveness of ValWorkBench for cluster validation.
ValWorkBench: An open source Java library for cluster validation, with applications to microarray data analysis
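The task addressed above, predicting the number of clusters with a validation measure, is illustrated generically below using one well-known internal index (the silhouette). ValWorkBench itself is a Java library implementing eleven such measures; this Python sketch on synthetic blobs is only a language-neutral illustration of the task, not the library's API.

```python
# Sketch: choosing the number of clusters with the silhouette index on synthetic data.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.0, random_state=0)

scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)        # internal validation measure

best_k = max(scores, key=scores.get)
print({k: round(s, 3) for k, s in scores.items()})
print(f"predicted number of clusters: {best_k}")
```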
S0169260715000024
Background and objective Biofilms are receiving increasing attention from the biomedical community. Biofilm-like growth within the human body is considered one of the key microbial strategies to augment resistance and persistence during infectious processes. The Biofilms Experiment Workbench is a novel software workbench for the operation and analysis of biofilms experimental data. The goal is to promote the interchange and comparison of data among laboratories, providing systematic, harmonised and large-scale data computation. Methods The workbench was developed with AIBench, an open-source Java desktop application framework for scientific software development in the domain of translational biomedicine. The implementation favours free and open-source third-party software, such as the R statistical package, and uses the Web services of the BiofOmics database to enable public experiment deposition. Results First, we summarise the novel, free, open, XML-based interchange format for encoding biofilms experimental data. Then, we describe the execution of common scenarios of operation with the new workbench, such as the creation of new experiments, the importation of data from Excel spreadsheets, the computation of analytical results, the on-demand and highly customised construction of Web-publishable reports, and the comparison of results between laboratories. Conclusions A considerable and varied amount of biofilms data is being generated, and there is a critical need to develop bioinformatics tools that expedite the interchange and comparison of microbiological and clinical results among laboratories. We propose a simple, open-source software infrastructure which is effective, extensible and easy to understand. The workbench is freely available for non-commercial use at http://sing.ei.uvigo.es/bew under the LGPL license.
Enabling systematic, harmonised and large-scale biofilms data computation: The Biofilms Experiment Workbench
S0169260715000036
The importance of evaluating complications and toxicity during and following treatment has been stressed in many publications. In most studies, these endpoints are presented descriptively and summarized by numbers and percentages, but descriptive methods are rarely sufficient to evaluate treatment-related complications. Pepe and Lancar developed the Prevalence and Weighted Prevalence functions, which, unlike conventional survival-analysis or competing-risks methods that are limited to the time to first event, take into account the duration and the severity of a complication. The purpose of this paper is to describe the features and use of two R functions, main.preval.func and main.wpreval.func, which were designed for the analysis of survival adjusted for quality of life. These functions compute descriptive statistics, survival and competing-risks analyses and, in particular, Prevalence and Weighted Prevalence estimates with confidence intervals and associated test statistics. The use of these functions is illustrated by several examples.
Assessment of health status over time by Prevalence and Weighted Prevalence functions: Interface in R
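As a rough illustration of what a prevalence function measures, the sketch below computes, at each time t, the proportion of subjects still alive who are currently experiencing the complication. This naive version ignores censoring, severity weighting and confidence intervals, all of which the Pepe–Lancar estimators implemented in the R functions above handle; the data are made up.

```python
# Sketch: a naive empirical prevalence function (no censoring handling, no weighting).
import numpy as np

# per-subject (complication_start, complication_end, death_time); np.inf = not observed
subjects = [
    (2.0, 5.0, 9.0),
    (1.0, np.inf, 6.0),       # complication persists until death
    (np.inf, np.inf, 12.0),   # never experiences the complication
    (4.0, 7.0, 10.0),
    (3.0, 4.0, np.inf),       # still alive at end of follow-up
]

def prevalence(t):
    alive = [(s, e, d) for s, e, d in subjects if d > t]
    if not alive:
        return float("nan")
    with_complication = sum(1 for s, e, d in alive if s <= t < min(e, d))
    return with_complication / len(alive)

for t in (1, 3, 5, 8, 11):
    print(f"t = {t:>2}: prevalence = {prevalence(t):.2f}")
```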
S0169260715000048
Background There is a growing demand for women to be classified into different risk groups for developing breast cancer (BC). The focus of the reported work is on the development of an integrated risk prediction model using a two-level fuzzy cognitive map (FCM) model. The proposed model combines the results of the initial screening mammogram of a given woman with her demographic risk factors to predict the post-screening risk of developing BC. Methods The level-1 FCM models the demographic risk profile. A nonlinear Hebbian learning (NHL) algorithm is used to train this model and thus to help predict the BC risk grade based on demographic risk factors identified by domain experts. The risk grades estimated by the proposed model are validated using two standard BC risk assessment models, viz. Gail and Tyrer–Cuzick. The level-2 FCM models the features of the screening mammogram concerning normal, benign and malignant cases. The data-driven nonlinear Hebbian learning (DDNHL) algorithm is used to train this model in order to predict the BC risk grade based on these mammographic image features. An overall risk grade is calculated by combining the outcomes of the two FCMs. Results The proposed model overcomes the main limitation of the Gail model, namely its underestimation of the risk level of women with a strong family history. IBIS, a hard-computing tool based on the Tyrer–Cuzick model, is comprehensive in covering a wide range of demographic risk factors, including family history, but it generates results as a numeric risk score based on predefined formulae, making the outcome difficult for naive users to interpret. Moreover, both models are based only on demographic details and do not take into account the findings of the screening mammogram. The proposed integrated model overcomes the above limitations of the existing models and predicts the risk level in terms of qualitative grades. The predictions of the proposed NHL-FCM model agree with the Tyrer–Cuzick model for 36 out of 40 patient cases. With respect to tumor grading, the overall classification accuracy of the DDNHL-FCM using 70 real screening mammograms is 94.3%. The testing accuracy of the proposed model using the 10-fold cross-validation technique outperforms other standard machine-learning-based inference engines. Conclusion From the perspective of clinical oncologists, this is a comprehensive front-end medical decision support system that assists them in efficiently assessing the expected post-screening BC risk level of a given individual and hence in prescribing individualized preventive interventions and more intensive surveillance for high-risk women.
An integrated breast cancer risk assessment and management model based on fuzzy cognitive maps
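To illustrate the modelling machinery named above, the sketch below runs a sigmoid FCM inference step together with an Oja-style nonlinear Hebbian weight update. The concept set, initial weights, learning constants and the exact form of the update rule are illustrative assumptions; the paper's NHL/DDNHL formulations may differ in their details.

```python
# Sketch: FCM inference plus an Oja-style nonlinear Hebbian weight update.
# Concepts, weights and learning constants are hypothetical.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# 4 hypothetical demographic concepts feeding one output concept "BC risk"
concepts = ["age", "family history", "age at menarche", "hormone therapy", "BC risk"]
A = np.array([0.6, 0.8, 0.3, 0.5, 0.0])           # initial activations in [0, 1]
W = np.zeros((5, 5))                              # W[j, i]: influence of concept j on concept i
W[0, 4], W[1, 4], W[2, 4], W[3, 4] = 0.4, 0.7, -0.2, 0.3   # expert-style initial weights

eta, gamma = 0.05, 0.98                           # learning rate and weight decay
for _ in range(20):                               # iterate until activations stabilise
    A_new = sigmoid(A + A @ W)                    # FCM inference step
    for j, i in zip(*np.nonzero(W)):              # Hebbian update of the nonzero weights only
        W[j, i] = gamma * W[j, i] + eta * A_new[j] * (A_new[i] - W[j, i] * A_new[j])
    A = A_new

print(f"steady-state '{concepts[4]}' activation: {A[4]:.3f}")
```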
S0169260715000206
The aim of this study is to design a robust feature extraction method for the classification of multiclass EEG signals, in order to determine valuable features from original epileptic EEG data and to discover an efficient classifier for those features. An optimum-allocation-based principal component analysis method, named OA_PCA, is developed for feature extraction from epileptic EEG data. As EEG data from different channels are correlated and huge in number, the optimum allocation (OA) scheme is used to discover the most favorable representatives with minimal variability from a large amount of EEG data. Principal component analysis (PCA) is applied to construct uncorrelated components and to reduce the dimensionality of the OA samples for enhanced recognition. In order to choose a suitable classifier for the OA_PCA feature set, four popular classifiers are applied and tested: the least squares support vector machine (LS-SVM), the naïve Bayes classifier (NB), the k-nearest neighbor algorithm (KNN), and linear discriminant analysis (LDA). Furthermore, our approaches are compared with some recent research work. The experimental results show that the LS-SVM_1v1 approach yields 100% overall classification accuracy (OCA), improving on the existing algorithms by up to 7.10% for the epileptic EEG data. The major finding of this research is that the LS-SVM with the 1v1 system is the best technique for the OA_PCA features in epileptic EEG signal classification, outperforming all the recently reported methods in the literature.
Designing a robust feature extraction method based on optimum allocation and principal component analysis for epileptic EEG signal classification
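The pipeline idea above, dimensionality reduction followed by a one-vs-one multiclass SVM, can be sketched as below. Note the assumptions: scikit-learn's SVC is used as a stand-in for LS-SVM, random multiclass data stand in for the optimum-allocation EEG samples, and the optimum allocation step itself is not reproduced.

```python
# Sketch: PCA feature reduction followed by a one-vs-one RBF-kernel SVM classifier.
# SVC is a stand-in for LS-SVM; synthetic data stand in for the OA EEG samples.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=64, n_informative=20,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),                              # uncorrelated, lower-dimensional features
    SVC(kernel="rbf", decision_function_shape="ovo"),  # one-vs-one multiclass SVM
)

acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"overall classification accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")
```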
S0169260715000218
Background and objectives Post-genomic clinical trials require the participation of multiple institutions and the collection of data from several hospitals, laboratories and research facilities. This paper presents a standards-based solution to provide a uniform access endpoint to patient data involved in current clinical research. Methods The proposed approach exploits well-established standards such as HL7 v3 or SPARQL and medical vocabularies such as SNOMED CT, LOINC and HGNC. A novel mechanism to exploit semantic normalization among HL7-based data models and biomedical ontologies has been created using Semantic Web technologies. Results Different types of queries have been used to test the semantic interoperability solution described in this paper. The execution times obtained in the tests enable the development of end-user tools within a framework that requires efficient retrieval of integrated data. Conclusions The proposed approach has been successfully tested by applications within the INTEGRATE and EURECA EU projects. These applications have been deployed and tested for: (i) patient screening, (ii) trial recruitment, and (iii) retrospective analysis, exploiting semantically interoperable access to clinical patient data from heterogeneous data sources.
Enabling semantic interoperability in multi-centric clinical trials on breast cancer
S0169260715000231
Background Chronic hypoxemia has deleterious effects on psychomotor function that can affect daily life. There are no clear results regarding short-term therapy with low concentrations of O2 in hypoxemic patients. We sought to demonstrate these effects on the psychomotor function of hypoxemic patients treated with O2 by measuring the characteristics of drawing. Methods Eight patients (7 M/1 F; age 69.5 (9.9) years, mean (SD)) with hypoxemia (PaO2 62.2 (6.9) mmHg) each drew two pictures. Tests were performed before and after 30 min of breathing O2. Results Stroke velocity increased after O2 for the house drawing (velocity 27.6 (5.5) mm/s at baseline vs. 30.9 (7.1) mm/s with O2, mean (SD), p < 0.025, Wilcoxon test). The 'pen-down' time, i.e. the time the pen is touching the paper during the drawing phase, decreased (20.7 (6.6) s at baseline vs. 17.4 (6.3) s with O2, p < 0.017, Wilcoxon test). Conclusions This study shows that in patients with chronic hypoxemia, a short period of oxygen therapy produces changes in psychomotor function that can be measured by means of drawing analysis.
Short term oxygen therapy effects in hypoxemic patients measured by drawing analysis
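The paired comparison used above (baseline versus O2, Wilcoxon signed-rank test) can be reproduced in a few lines, as in the sketch below; the stroke-velocity values for eight subjects are made up for illustration.

```python
# Sketch: paired Wilcoxon signed-rank test, baseline vs. oxygen, on hypothetical values.
from scipy.stats import wilcoxon

velocity_baseline = [26.1, 29.3, 21.8, 27.5, 33.0, 25.2, 30.1, 27.8]   # mm/s, hypothetical
velocity_oxygen   = [29.0, 31.2, 24.5, 30.8, 36.1, 27.9, 34.4, 33.2]   # mm/s, hypothetical

stat, p = wilcoxon(velocity_baseline, velocity_oxygen)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
```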
S0169260715000243
Mathematical models that predict the complex dynamic behaviour of cellular networks are fundamental in systems biology, and provide an important basis for biomedical and biotechnological applications. However, obtaining reliable predictions from large-scale dynamic models is commonly a challenging task due to lack of identifiability. The present work addresses this challenge by presenting a methodology for obtaining high-confidence predictions from dynamic models using time-series data. First, to preserve the complex behaviour of the network while reducing the number of estimated parameters, model parameters are combined in sets of meta-parameters, which are obtained from correlations between biochemical reaction rates and between concentrations of the chemical species. Next, an ensemble of models with different parameterizations is constructed and calibrated. Finally, the ensemble is used for assessing the reliability of model predictions by defining a measure of convergence of model outputs (consensus) that is used as an indicator of confidence. We report results of computational tests carried out on a metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for recombinant protein production. Using noisy simulated data, we find that the aggregated ensemble predictions are on average more accurate than the predictions of individual ensemble models. Furthermore, ensemble predictions with high consensus are statistically more accurate than ensemble predictions with large variance. The procedure provides quantitative estimates of the confidence in model predictions and enables the analysis of sufficiently complex networks as required for practical applications.
A consensus approach for estimating the predictive accuracy of dynamic models in biology
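The ensemble idea above, aggregating many calibrated models and using their agreement as a confidence indicator, is illustrated by the minimal sketch below. The "models" are random perturbations of a common trajectory, and the consensus score (inverse of the mean spread) is an illustrative choice; the paper's consensus measure for the CHO-cell model ensemble may be defined differently.

```python
# Sketch: aggregated ensemble prediction and a simple consensus (agreement) score.
# The ensemble members are perturbed trajectories standing in for calibrated models.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
true_output = np.exp(-0.3 * t)                       # a stand-in "true" model output

ensemble = np.array([true_output * (1 + rng.normal(0, 0.05)) +
                     rng.normal(0, 0.02, t.size) for _ in range(30)])

aggregated = ensemble.mean(axis=0)                   # aggregated ensemble prediction
spread = ensemble.std(axis=0)                        # disagreement at each time point
consensus = 1.0 / (1.0 + spread.mean())              # higher = more agreement

rmse = np.sqrt(np.mean((aggregated - true_output) ** 2))
print(f"consensus = {consensus:.3f}, aggregated-prediction RMSE = {rmse:.4f}")
```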
S0169260715000255
The goal of our study is to develop a fast parallel implementation of group independent component analysis (ICA) for functional magnetic resonance imaging (fMRI) data using graphics processing units (GPUs). Although ICA has become a standard method to identify brain functional connectivity in fMRI data, it is computationally intensive, and group data analysis is especially costly. GPUs, with their high parallel computation power and low cost, are widely used for general-purpose computing and can contribute significantly to fMRI data analysis. In this study, a parallel group ICA (PGICA) on GPU, consisting mainly of GPU-based PCA using SVD and Infomax ICA, is presented. In comparison to serial group ICA, the proposed method demonstrated both a significant speedup of 6–11 times and comparable accuracy of functional networks in our experiments. The proposed method is expected to enable real-time post-processing for fMRI data analysis.
GPU-based parallel group ICA for functional magnetic resonance data
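The two stages that the abstract above parallelises on GPU, PCA via SVD followed by ICA on temporally concatenated subject data, are sketched below as a CPU reference. Note the assumptions: FastICA is used instead of the paper's Infomax algorithm, and random matrices stand in for preprocessed fMRI data (time x voxels).

```python
# Sketch: group ICA by temporal concatenation, PCA (SVD) reduction, then ICA.
# FastICA substitutes for Infomax; random data substitute for fMRI time series.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects, n_time, n_voxels, n_components = 5, 120, 2000, 10

# temporally concatenate all subjects: (n_subjects * n_time, n_voxels)
group = np.vstack([rng.laplace(size=(n_time, n_voxels)) for _ in range(n_subjects)])
group -= group.mean(axis=0)

# PCA by SVD: keep the top components of the concatenated data
U, S, Vt = np.linalg.svd(group, full_matrices=False)
reduced = U[:, :n_components] * S[:n_components]       # reduced time courses

ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
timecourses = ica.fit_transform(reduced)               # independent time courses
spatial_maps = ica.mixing_.T @ Vt[:n_components]       # back-project components to voxel space

print("spatial maps:", spatial_maps.shape, "time courses:", timecourses.shape)
```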