S1746809413001055
In this paper, a new filtering method is presented to remove Rician noise from magnetic resonance images (MRI) acquired with a single-coil MRI acquisition system. The filter is based on a nonlocal neutrosophic set (NLNS) approach to Wiener filtering. A neutrosophic set (NS), a part of neutrosophy theory, studies the origin, nature, and scope of neutralities, as well as their interactions with different ideational spectra. Here, the neutrosophic set is applied to the image domain, and some concepts and operators are defined for image denoising. First, the nonlocal means filter is applied to the noisy MRI. The resultant image is transformed into the NS domain, described by three membership sets: true (T), indeterminacy (I) and false (F). The entropy of the neutrosophic set is defined and employed to measure the indeterminacy. The ω-Wiener filtering operation is applied to T and F to decrease the set indeterminacy and remove the noise. Experiments were conducted on simulated MR images from the BrainWeb database and on clinical MR images. The results show that the NLNS Wiener filter produces better denoising results, in terms of qualitative and quantitative measures, than other denoising methods such as the classical Wiener filter, the anisotropic diffusion filter, total variation minimization and the nonlocal means filter. The visual and diagnostic quality of the denoised image is well preserved.
MRI denoising using nonlocal neutrosophic set approach of Wiener filtering
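The T/I/F mapping of the abstract above can be sketched as follows. This is a minimal illustration using a common neutrosophic-image formulation (normalized local mean for T, normalized local deviation for I, F = 1 − T), not necessarily the exact definitions of the paper; `neutrosophic_transform` and the window size `w` are illustrative choices.

```python
import numpy as np

def neutrosophic_transform(img, w=3):
    """Map a grayscale image into the neutrosophic domain (T, I, F).

    T: normalized local mean; I: normalized absolute deviation of each
    pixel from its local mean; F = 1 - T.
    """
    img = np.asarray(img, dtype=float)
    pad = w // 2
    padded = np.pad(img, pad, mode='edge')
    # local mean over a w x w neighbourhood (shift-and-add)
    local = np.zeros_like(img)
    for dy in range(w):
        for dx in range(w):
            local += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    local /= w * w
    T = (local - local.min()) / (local.max() - local.min() + 1e-12)
    delta = np.abs(img - local)
    I = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)
    F = 1.0 - T
    return T, I, F
```

The ω-Wiener filtering would then operate on T and F, with the indeterminacy entropy computed from I.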
S1746809413001067
Patients with tremor can benefit from wearable robots that manage their tremor during daily living. To achieve this, the interfaces controlling such robotic systems must be able to estimate the user's intention to move and to distinguish it from the undesired tremor. In this context, analysis of electroencephalographic activity is of special interest, since it provides information on the planning and execution of voluntary movements. This paper proposes an adaptive and asynchronous EEG-based system for online detection of the intention to move in patients with tremor. An experimental protocol with separate self-paced wrist extensions was used to test the ability of the system to detect the intervals preceding voluntary movements. Six healthy subjects and four essential tremor patients took part in the experiments. The system predicted 60±10% of the movements of the control subjects and 42±27% of the movements of the patients. The ratio of false detections was low in both cases (1.5±0.1 and 1.4±0.5 false activations per minute for the controls and patients, respectively). The prediction period with which the movements were detected was longer than in previous similar studies (1.06±1.02s for the controls and 1.01±0.99s for the patients). Additionally, adaptive and fixed designs were compared, with the adaptive design yielding a higher number of movement detections. The system is expected to lead to further development of more natural interfaces between assistive devices and the patients wearing them.
Online detector of movement intention based on EEG—Application in tremor patients
S1746809413001079
Muscle thickness is one of the most widely used parameters for quantifying muscle function in both diagnosis and rehabilitation assessment. Ultrasound imaging has frequently been used as a reliable method for non-invasively studying the thickness of human muscles. However, the measurement is traditionally conducted by manual digitization of reference points on the superior and inferior muscle fascias, which is subjective and time-consuming. In this paper, a novel method is proposed to measure muscle thickness automatically. The superficial and deep fascias of the muscle are detected by a line detection algorithm in the first ultrasound frame, and image regions of interest (ROIs) around the fascias are subsequently located and tracked by an optical flow technique. The muscle thickness is obtained geometrically from the locations of the fascias in each frame. Six ultrasound sequences (250 frames each) were used to evaluate the method. The correlation coefficient between the detection results of the proposed method and the manual method is 0.95±0.01, and the difference is −0.05±0.22mm. Linear regression over the total of 1500 detections shows a good linear correlation between the results of the two methods (R² = 0.981). The automated method proposed here provides an accurate, highly repeatable and efficient approach for estimating muscle thickness during human motion, justifying its application in the biological sciences.
Continuous thickness measurement of rectus femoris muscle in ultrasound image sequences: A completely automated approach
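Once the fascias have been tracked, the per-frame thickness computation described above is straightforward. The sketch below assumes the tracking stage has already produced per-column pixel coordinates for both fascias; `thickness_series` and `mm_per_px` are hypothetical names.

```python
import numpy as np

def thickness_series(sup_tracks, deep_tracks, mm_per_px=0.1):
    """Per-frame muscle thickness from tracked fascia y-coordinates.

    sup_tracks, deep_tracks: arrays of shape (n_frames, n_columns) with
    the vertical (pixel) position of the superficial and deep fascia in
    each image column.  The thickness of a frame is taken as the mean
    vertical separation, converted to millimetres.
    """
    sup = np.asarray(sup_tracks, float)
    deep = np.asarray(deep_tracks, float)
    return np.mean(deep - sup, axis=1) * mm_per_px
```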
S1746809413001092
A necessary but not sufficient prerequisite of malignant arrhythmias is the existence of elevated static or dynamic repolarization dispersion (RD) of the ventricular myocardium. Early detection of this type of spatiotemporal impairment might have clinical importance. Body surface potential mapping, using high-resolution QRST integral maps, provides a unique noninvasive method for beat-to-beat exploration of spatial RD. The theoretically sound relationship between ventricular RD and QRST integral maps has been known for many years, offering a novel electrocardiological imaging possibility. However, a "yes or no" type of risk assessment can be achieved even without solving the ill-posed electrocardiological inverse problem, by computing the non-dipolarity index (NDI) of the body surface potential QRST integral map. In this study, a multi-element numerical heart and a piecewise homogeneous chest model were used to estimate the sensitivity of the NDI to pathological ventricular RD patterns. The tests included the physiological RD pattern as a reference and 83 additional pathological ones classified into 3 major types. All the local action potential (AP) modulation types were systematically swept through the anterior, lateral, posterior, septal and apical segments of the left ventricle. Additionally, the impact of an impaired conduction system and the involvement of the right ventricle were tested. It was concluded that the source-level origin of the extreme NDIs was located in the apical part of the heart, due to permanent myocardial necrosis, temporarily refractory regions or reversed transmural AP patterns.
Model interpretation of body surface potential QRST integral map variability in arrhythmia patients
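A common way to compute a non-dipolarity index of the kind used above is via a Karhunen–Loève (SVD) expansion of a set of QRST integral maps, taking the energy fraction outside the first few eigenmaps. The sketch below keeps three "dipolar" components; published NDI definitions vary in this choice, so treat it as an assumption.

```python
import numpy as np

def non_dipolarity_index(maps):
    """NDI-style index for a set of QRST integral maps.

    maps: (n_maps, n_electrodes).  The index is the fraction of total
    map power not captured by the first three eigenmaps.
    """
    X = np.asarray(maps, float)
    X = X - X.mean(axis=0)
    # singular values give the energy carried by each eigenmap
    s = np.linalg.svd(X, compute_uv=False)
    energy = s ** 2
    return float(energy[3:].sum() / energy.sum())
```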
S1746809413001110
The hidden Markov random field (HMRF) model has been widely used in image segmentation, as it provides a spatially constrained clustering scheme over two sets of random variables. However, in many HMRF-based segmentation approaches, both the latent class labels and the statistical parameters have been estimated by deterministic techniques, which usually lead to local convergence and less accurate segmentation. In this paper, we incorporate the immune-inspired clonal selection algorithm (CSA) and the Markov chain Monte Carlo (MCMC) method into HMRF model estimation, and thus propose the HMRF–CSA algorithm for brain MR image segmentation. Our algorithm employs a three-step iterative process consisting of MCMC-based class label estimation, bias field correction and CSA-based statistical parameter estimation. Since both MCMC and the CSA are global optimization techniques, the proposed algorithm has the potential to overcome the drawbacks of traditional HMRF-based segmentation approaches. We compared our algorithm to the state-of-the-art GA–EM algorithm, the deformable cosegmentation algorithm, and the segmentation routines in the widely used statistical parametric mapping (SPM) software package and the FMRIB Software Library (FSL), on both simulated and clinical brain MR images. Our results show that the proposed HMRF–CSA algorithm is robust to image artifacts and differentiates major brain structures more accurately than the other approaches.
Hidden Markov random field model based brain MR image segmentation using clonal selection algorithm and Markov chain Monte Carlo method
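The MCMC label-estimation step of the pipeline above can be illustrated with a plain Gibbs sampler under a Gaussian likelihood and a Potts prior. This is a didactic sketch of one ingredient only, with no bias-field correction or CSA parameter estimation; all names and parameter values are illustrative.

```python
import numpy as np

def gibbs_labels(img, means, sigma, beta, n_iter=5, rng=None):
    """Gibbs sampling of pixel labels for an HMRF with a Gaussian
    likelihood and a 4-neighbourhood Potts prior of strength beta."""
    if rng is None:
        rng = np.random.default_rng(0)
    img = np.asarray(img, float)
    K = len(means)
    H, W = img.shape
    labels = rng.integers(0, K, size=(H, W))
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                logp = np.empty(K)
                for k in range(K):
                    # Gaussian log-likelihood of the observed intensity
                    ll = -0.5 * ((img[i, j] - means[k]) / sigma) ** 2
                    # Potts prior: reward agreeing neighbours
                    same = 0
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] == k:
                            same += 1
                    logp[k] = ll + beta * same
                p = np.exp(logp - logp.max())
                labels[i, j] = rng.choice(K, p=p / p.sum())
    return labels
```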
S1746809413001122
Electroencephalogram (EEG) signals are generally used in brain–computer interfaces (BCIs), including motor imagery, mental task, steady-state evoked potential (SSEP) and P300 paradigms. To complement existing motor-based control paradigms, this paper proposes a novel imagery mode: speech imagery. Chinese characters are monosyllabic, and one Chinese character can express one meaning. Accordingly, eight Chinese subjects were asked to silently read two Chinese characters in this experiment; the two characters differed in shape, pronunciation and meaning. Feature vectors of the EEG signals were extracted by common spatial patterns (CSP) and then classified by a support vector machine (SVM). The accuracy of discriminating between the two characters was not high. However, it was still effective to distinguish whether subjects were silently reading a character, with accuracies between 73.65% and 95.76%. These results were better than those of vowel speech imagery and are suitable for asynchronous BCI. In the future, BCI systems may also be extended from motor imagery alone to a combination of motor imagery and speech imagery.
Analysis and classification of speech imagery EEG for BCI
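The CSP feature-extraction step mentioned above can be sketched as a whitening of the composite covariance followed by diagonalization of one class covariance, with log-variance features of the projected signals. `csp_filters` and `log_var_features` are illustrative names, and the trial layout is an assumption.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial pattern filters from two classes of EEG trials.

    trials_*: (n_trials, n_channels, n_samples).  Returns the 2*n_pairs
    filters with the most extreme eigenvalues.
    """
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))  # normalized spatial covariance
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # whiten the composite covariance, then diagonalize Ca
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, B = np.linalg.eigh(P @ Ca @ P.T)
    order = np.argsort(d)[::-1]
    W = B[:, order].T @ P
    idx = list(range(n_pairs)) + list(range(W.shape[0] - n_pairs, W.shape[0]))
    return W[idx]

def log_var_features(trial, W):
    """Normalized log-variance features of the spatially filtered trial."""
    z = W @ trial
    v = np.var(z, axis=1)
    return np.log(v / v.sum())
```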
S1746809413001134
Respiratory cycle related EEG change (RCREC) is characterized by significant relative EEG power changes within different stages of respiration during sleep. RCREC has been demonstrated to predict sleepiness in patients with obstructive sleep apnoea and is hypothesized to represent microarousals; as such, RCREC may provide a sensitive marker of respiratory arousals. A key step in the quantification of RCREC is respiratory signal segmentation, which is conventionally based on the local maxima and minima of the nasal flow signal. We have investigated an alternative respiratory cycle segmentation method based on inspiratory/expiratory transitions. Sixty-two healthy paediatric participants were recruited through staff of local universities in Bolivia. Subjects underwent attended polysomnography on a single night (Compumedics PS2 system), and studies were sleep staged according to standard criteria. The C3/A2 EEG channel and time-locked nasal flow (thermistor) were used in RCREC quantification. Forty-seven subjects aged 7–17 (11.4±3) years (24M:23F) were found to have polysomnographs usable for RCREC calculation. Respiratory cycles were segmented using both the conventional and the novel (transition) method, and the RCREC values derived from the two methods were compared in each frequency band. The significance of the transition RCREC, as measured by Fisher's F value through analysis of variance (ANOVA), was found to be significantly higher than that of the conventional RCREC in all frequency bands (P < 0.05) except beta. This increase in statistical significance of RCREC with the novel transition segmentation approach suggests better alignment of the respiratory cycle segments with the underlying physiology driving RCREC.
Respiratory cycle related EEG changes: Modified respiratory cycle segmentation
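The transition-based segmentation described above essentially reduces to locating zero crossings of the nasal-flow signal. A minimal sketch, assuming the sign convention that positive flow corresponds to inspiration:

```python
def transition_indices(flow):
    """Indices where the nasal-flow signal changes sign, i.e. candidate
    inspiratory/expiratory transitions."""
    idx = []
    for i in range(1, len(flow)):
        if flow[i - 1] < 0 <= flow[i] or flow[i - 1] >= 0 > flow[i]:
            idx.append(i)
    return idx
```

In practice the flow signal would be low-pass filtered first so that sensor noise does not create spurious crossings.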
S1746809413001146
Obtaining precise locations of target tumors helps ensure ablation of cancerous tissue and avoid unwanted destruction of healthy tissue in high-intensity focused ultrasound (HIFU) treatment systems. Because of speckle noise and spurious boundaries in ultrasound images, traditional image segmentation methods are not suitable for precisely locating target tumors in HIFU ablation. In this paper, a multi-step directional generalized gradient vector flow snake model is introduced for target tumor segmentation. In the first step, the traditional generalized gradient vector flow (GGVF) snake is used to obtain an approximate contour of the tumor, from which a new distance map is generated. Subsequently, a new directional edge map is created by calculating the scalar product of the gradients of the distance map and the initial image. In this process, the gradient direction information and the magnitude information of the distance map are used to attenuate unwanted edges and highlight the real edges in the new directional edge map. Finally, a refined GGVF field is derived from a diffusion operation on the gradient vectors of the directional edge map; this field refines the tumor's contour by directing the approximate contour to edges with the desired gradient directionality. With the newly developed snake model, the influence of spurious boundaries and speckle noise on ultrasound image segmentation is significantly reduced. Experimental results indicate that this technique is highly useful for target tumor segmentation in HIFU treatment systems.
A multi-step directional generalized gradient vector flow snake for target tumor segmentation in US-guided high-intensity focused ultrasound ablation
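The directional edge map described above, i.e. the scalar product of the gradients of the distance map and the image, can be sketched directly. Suppressing negative responses (edges of the "wrong" polarity) with `np.maximum` is an illustrative reading of how directionality attenuates unwanted edges.

```python
import numpy as np

def directional_edge_map(image, dist_map):
    """Directional edge map: pixel-wise dot product of the gradients of
    the distance map and the image, keeping only responses whose
    polarity agrees with the distance-map gradient."""
    gy_d, gx_d = np.gradient(np.asarray(dist_map, float))
    gy_i, gx_i = np.gradient(np.asarray(image, float))
    dot = gx_d * gx_i + gy_d * gy_i
    return np.maximum(dot, 0.0)
```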
S1746809413001158
Experimental and clinical studies have shown that beat-to-beat variability of ventricular repolarization morphology, which can be measured by the T-wave spectral variance (TSV) index based on the two-dimensional Fourier transform, is associated with an increased risk of developing malignant ventricular arrhythmias. In the present study we tested the TSV index during percutaneous coronary intervention (PCI) in the 12 standard ECG leads and in the orthogonal X, Y and Z leads. In addition, we analyzed the intrasubject and intersubject variability of the TSV index, in order to determine reliable limits of significant repolarization variability due to an ischemic cardiac process. Recordings were obtained from a total of 62 patients, with two control ECGs and one ECG recorded during the PCI procedure for each patient. Results indicate that the TSV index showed significant differences during the PCI procedure with respect to the control situation in all ECG leads (p < 0.0001). The relative change between the PCI procedure and the control situation showed that there is a preferential ECG lead for analyzing the TSV index, depending on the occlusion site. Moreover, the TSV index presented high stability within each patient and significantly larger variability among patients. Finally, we conclude that the TSV index offers a robust tool for evaluating beat-to-beat repolarization variability during acute myocardial ischemia.
Beat-to-beat ventricular repolarization variability evaluated during acute myocardial ischemia
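A TSV-style index can be sketched from a matrix of aligned T waves via the 2-D Fourier transform, measuring the fraction of spectral energy at non-zero inter-beat frequencies. The published TSV definition may differ in normalization and windowing; this is only an illustration of the idea that constant morphology concentrates all energy in the zero inter-beat frequency row.

```python
import numpy as np

def t_wave_spectral_variance(beats):
    """TSV-style index from aligned T waves (n_beats x n_samples).

    After the 2-D FFT, rows other than the first carry inter-beat
    (beat-to-beat) variation; their energy fraction is returned.
    A recording with identical beats gives 0.
    """
    B = np.asarray(beats, float)
    F = np.abs(np.fft.fft2(B)) ** 2
    return float(F[1:, :].sum() / F.sum())
```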
S174680941300116X
We study the theoretical performance of using Electrical Impedance Tomography (EIT) to measure the conductivities of the main tissues of the head. The governing equations are solved using the Finite Element Method for realistically shaped head models with isotropic and anisotropic electrical conductivities. We focus on the electroencephalography (EEG) signal frequency range, since EEG source localization is the assumed application. We obtain the Cramér–Rao lower bound (CRLB) to find the minimum conductivity estimation error expected with EIT measurements. The most convenient electrode pairs for current injection from a typical EEG array are determined from the CRLB. Moreover, using simulated data, the maximum likelihood estimator of the conductivity parameters is shown to be close to the CRLB for a relatively low number of measurements. The results support the idea of using EIT as a low-cost and practical tool for individually measuring the conductivities of the head tissues and using them when solving the EEG source localization problem. Even when the conductivity of the soft tissues of the head is available from diffusion tensor imaging, EIT can complement the electrical model with estimates of the skull and scalp conductivities.
Analysis of parametric estimation of head tissue conductivities using Electrical Impedance Tomography
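For i.i.d. Gaussian measurement noise, the CRLB used above follows directly from the Jacobian of the forward model: the Fisher information is F = JᵀJ/σ² and the bound is the diagonal of its inverse. A minimal sketch (the Jacobian itself would come from the finite-element forward solver):

```python
import numpy as np

def crlb(jacobian, noise_std):
    """Cramer-Rao lower bound on the variance of unbiased estimates of
    the parameters, for i.i.d. Gaussian noise of standard deviation
    noise_std: diag((J^T J / sigma^2)^{-1})."""
    J = np.asarray(jacobian, float)
    fisher = J.T @ J / noise_std ** 2
    return np.diag(np.linalg.inv(fisher))
```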
S1746809413001171
A short inter-stimulus interval (ISI) is an inherent characteristic of high stimulus-rate (HSR) paradigms for studying auditory evoked potentials (AEPs). At short ISIs, the AEPs to adjacent stimuli overlap, and resolving the AEP to a specific stimulus requires inverting this overlap. Inverse filtering (also called deconvolution) has commonly been employed to achieve this goal. However, the resulting signal may be severely distorted, as inverse filtering can substantially amplify undesired components such as noise and artifacts in the raw EEG recordings. In practice, even when care is taken to obtain quality EEG recordings, noise and artifacts are unavoidable. It is thus critical to remove, or at least suppress, these undesired components in studies using HSR paradigms. In this paper, we propose a systematic approach to EEG signal enhancement based on empirical mode decomposition (EMD) and threshold filtering/rejection. Using synthetic and real data, we test the effectiveness of our approach. Results for both types of data consistently demonstrate that our method can significantly improve the quality of the recovered AEPs, according to visual inspection and SNRs estimated using two metrics.
EMD-based EEG signal enhancement for auditory evoked potential recovery under high stimulus-rate paradigm
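Assuming the EMD itself is computed with an external package, the threshold-filtering/reconstruction step might look like the following. The MAD-based noise estimate and the soft threshold are illustrative choices, not necessarily the thresholding rule of the paper.

```python
import numpy as np

def denoise_from_imfs(imfs, k=1.0):
    """Reconstruct a signal from its IMFs after amplitude thresholding.

    Each IMF is soft-thresholded at k times a robust (median absolute
    deviation based) noise estimate, then the IMFs are summed.
    """
    imfs = np.asarray(imfs, float)
    out = np.zeros(imfs[0].shape)
    for imf in imfs:
        sigma = np.median(np.abs(imf)) / 0.6745  # robust noise scale
        t = k * sigma
        out += np.sign(imf) * np.maximum(np.abs(imf) - t, 0.0)
    return out
```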
S1746809413001183
The human cerebral cortex may be subdivided into architectonic fields according to variations within its laminar structure. Studies have shown correspondences between the locations of functional activation foci and architectonic regions. To localize functional activation foci to architectonic regions accurately, a parcellation algorithm capable of segmenting architectonic regions in in vivo imaging datasets is required. This paper presents a novel 3D model-based approach to directly detect cortical layers and classify architectonic fields. The column-like structure of the cortex is modeled using a Laplace-equation method, which generates a collection of intensity profiles spanning the cortical mantle. Bayesian evidence is gathered for intensity profile elements belonging to hyper- or hypo-intense bands, which represent cell- or myelin-poor or -rich layers in imaging data. A non-isotropic Markov random field model is used to encourage contiguous bands, together with a penalty term that completes bands across highly curved cortical regions where neighbouring evidence for banding is strong. The algorithm is validated on a 3D histological dataset of a macaque brain with visible layering, at a resolution intermediate between high-resolution MRI and histology. The algorithm detects the myelin-rich Stria of Gennari and uses this as the basis for finding the Brodmann area 17/18 boundary.
3D model-based approach to identification of laminar structures of the cerebral cortex: Application to Brodmann areas 17 and 18
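The Laplace-equation construction of cortical profiles mentioned above can be illustrated with a simple relaxation solver: the potential is fixed to 0 on the inner boundary and 1 on the outer one, and profiles follow the gradient of the solution. A minimal 2-D Jacobi sketch (the actual method works in 3D on the segmented cortical ribbon):

```python
import numpy as np

def laplace_potential(mask, inner, outer, n_iter=500):
    """Jacobi relaxation of Laplace's equation on a gridded ribbon.

    mask, inner, outer: boolean arrays of the same shape marking the
    solution domain and the two fixed boundaries (phi = 0 and 1).
    """
    phi = np.zeros(mask.shape)
    phi[outer] = 1.0
    for _ in range(n_iter):
        # 4-neighbour average (periodic wrap is harmless because the
        # boundary rows are re-fixed on every iteration)
        new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                      + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        new[inner] = 0.0
        new[outer] = 1.0
        phi = np.where(mask, new, phi)
    return phi
```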
S1746809413001201
Frequency compounding (FC) is commonly used to reduce the speckle variance and thereby enhance contrast resolution by averaging two or more uncorrelated sub-band images. However, due to frequency-dependent attenuation, the contrast resolution cannot be enhanced to the theoretical limit when imaging deep-lying tissue. In this paper, we propose the frequency equalized compounding (FEC) method to achieve contrast enhancement over the whole imaging area. In the proposed method, a sub-band signal is divided into several zones along the imaging depth (or time), and the center frequencies and weighting factors for each zone are estimated; the estimated values are used in dynamic quadrature demodulation (DQDM) and image compounding, respectively. The performance of the proposed method was evaluated through simulations and experiments. During the evaluation, the contrast resolution was quantified by the speckle signal-to-noise ratio (SSNR) in speckle regions and the contrast-to-noise ratio (CNR) in hyper- and hypoechoic regions. Theoretical values of the SSNR and CNR for FC were computed by multiplying the SSNR and CNR values measured from the original image by √N, where N is the number of sub-bands used in the compounding. In vitro phantom experiments showed that the SSNR and CNR values from the proposed method were similar to the theoretical values; the maximum and minimum errors from the theoretical value were 9% and 1%, while those of the conventional FC (CFC) method were 25% and 7%. Similar results were obtained from in vivo experiments with RF data acquired from the liver and the kidney. In addition, the signal-to-noise ratio (SNR) improvement was measured; the SNR also improved due to the DQDM, with maximum improvements for the in vitro and in vivo experiments of 2.3 dB and 4.8 dB over the results from the CFC method. These results demonstrate that the proposed FEC method can improve the contrast resolution up to the theoretically achievable value and may be useful in imaging technically difficult patients.
Frequency equalized compounding for effective speckle reduction in medical ultrasound imaging
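The √N behaviour of compounding uncorrelated speckle images is easy to demonstrate numerically. The sketch below uses synthetic Rayleigh-distributed envelope images as a stand-in for real sub-band data.

```python
import numpy as np

def ssnr(img):
    """Speckle signal-to-noise ratio: mean over standard deviation."""
    return img.mean() / img.std()

def compound(subimages):
    """Average N (ideally uncorrelated) sub-band envelope images; for
    fully developed, uncorrelated speckle the SSNR improves by sqrt(N)."""
    return np.mean(subimages, axis=0)
```

For N = 16 independent Rayleigh images, the SSNR of the compounded image should be about 4 times that of a single image.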
S1746809413001213
Atrial fibrillation (AF) and atrial flutter (AFL) are two common atrial arrhythmias encountered in clinical practice, and the electrocardiogram (ECG) is widely used to diagnose them. Conventional linear time- and frequency-domain methods cannot decipher the hidden complexity present in these signals: the ECG is inherently non-linear, non-stationary and non-Gaussian, and non-linear models can provide improved results and capture minute variations present in the time series. Higher order spectra (HOS) is a non-linear dynamical method that is highly robust to noise. In the present study, the performances of two methods are compared: (i) 3rd-order HOS cumulants and (ii) the HOS bispectrum. The 3rd-order cumulant and bispectrum coefficients are subjected to dimensionality reduction using independent component analysis (ICA) and classified using classification and regression tree (CART), random forest (RF), artificial neural network (ANN) and k-nearest neighbor (KNN) classifiers to select the best classifier. The ICA components of the cumulant coefficients provided an average accuracy, sensitivity, specificity and positive predictive value (PPV) of 99.50%, 100%, 99.22% and 99.72%, respectively, using the KNN classifier. Similarly, the ICA components of the HOS bispectrum coefficients yielded an average accuracy, sensitivity, specificity and PPV of 97.65%, 98.16%, 98.75% and 99.53%, respectively, using KNN. Thus, ICA performed on the 3rd-order HOS cumulants coupled with the KNN classifier performed better than the HOS bispectrum method. The proposed methodology is robust and can be used in mass screening of cardiac patients.
Application of higher order statistics for atrial arrhythmia classification
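The 3rd-order cumulant features above can be sketched as a direct (biased) estimate of C3(τ1, τ2) = E[x(n)·x(n+τ1)·x(n+τ2)]; the lag range and normalization here are illustrative choices.

```python
import numpy as np

def third_order_cumulant(x, max_lag):
    """Biased estimate of the 3rd-order cumulant of a zero-mean series
    for lags tau1, tau2 in [-max_lag, max_lag]."""
    x = np.asarray(x, float)
    x = x - x.mean()
    n = len(x)
    lags = range(-max_lag, max_lag + 1)
    C = np.zeros((2 * max_lag + 1, 2 * max_lag + 1))
    for i, t1 in enumerate(lags):
        for j, t2 in enumerate(lags):
            s = 0.0
            for k in range(n):
                if 0 <= k + t1 < n and 0 <= k + t2 < n:
                    s += x[k] * x[k + t1] * x[k + t2]
            C[i, j] = s / n  # biased normalization
    return C
```

The flattened matrix would then feed the ICA dimensionality-reduction stage.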
S1746809413001225
Discrete high-frequency oscillations (HFOs) in the range of 80–500Hz have previously been recorded from human epileptic brains using intracerebral EEG and appear to be a reliable biomarker of the seizure onset zone in patients with intractable epilepsy. Visual marking of HFO bursts is tedious, highly time-consuming (particularly for long-term multichannel EEG recordings), inevitably subjective and error prone. Thus, the development of automatic, fast and robust detectors is necessary and crucial for HFO investigation and for propelling their eventual clinical applications. This paper presents an algorithm for the detection and classification of HFOs that combines a smoothed Hilbert–Huang transform (HHT) with a root mean square (RMS) feature. In the validation, performance in terms of sensitivity and false discovery rate (FDR) was 90.72% and 8.23%, respectively. The proposed method thus achieved high sensitivity, correctly detecting the majority of HFOs visually identified by experienced reviewers, with a low FDR, meaning that only a small proportion of detected events were misclassified as candidate HFO events. The presented software is extremely fast and can be considered a valuable clinical tool for HFO investigation.
Automated detection and classification of high frequency oscillations (HFOs) in human intracerebral EEG
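An RMS-based detection stage of the kind described above can be sketched as follows. The window length, threshold factor `k` and minimum duration are illustrative parameters, and band-pass filtering to 80–500 Hz is assumed to have been done upstream.

```python
import numpy as np

def detect_hfo(band_signal, fs, win_ms=10, k=3.0, min_ms=6):
    """Candidate HFO events from a band-pass-filtered channel.

    Slides an RMS window over the signal, thresholds it at
    mean + k * std of the whole-record RMS, and keeps supra-threshold
    runs lasting at least min_ms.  Returns (start, end) sample pairs.
    """
    x = np.asarray(band_signal, float)
    w = max(1, int(fs * win_ms / 1000))
    rms = np.sqrt(np.convolve(x ** 2, np.ones(w) / w, mode='same'))
    thr = rms.mean() + k * rms.std()
    above = rms > thr
    events, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= fs * min_ms / 1000:
                events.append((start, i))
            start = None
    if start is not None and len(above) - start >= fs * min_ms / 1000:
        events.append((start, len(above)))
    return events
```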
S1746809413001237
The quantitative analysis of vocal disorders by nonlinear signal processing methods has been used extensively over the last two decades. In this work, two algorithms for nonlinear time-series analysis, Sample Entropy and cross-Sample Entropy, are applied to electroglottogram (EGG) and microphone (MIC) signals recorded from 51 normal and 80 dysphonic subjects, to obtain summary measures of voice disorders through SampEn and cross-SampEn indices. These parameters quantify, respectively, the degree of irregularity (in the sense of self-dissimilarity) within a time series and of asynchrony (in the sense of cross-dissimilarity) between two distinct time series. The aims of this work are: to determine whether statistically significant differences in signal irregularity, as quantified by SampEn, occur between normal and pathological subjects, and whether such differences can be seen equally in EGG and MIC; and to assess whether cross-SampEn reveals different degrees of asynchrony between the EGG and MIC signals in the two groups. Results show that SampEn in pathological subjects is higher than in normal subjects for both EGG and MIC time series, with a statistically significant difference detectable from both signals (p < 10⁻⁴ for EGG and p < 10⁻⁷ for MIC). Cross-SampEn also exhibits a statistically significant difference, showing a higher degree of cross-dissimilarity between the EGG and MIC time series for pathological subjects (p < 10⁻⁴). In conclusion, SampEn and cross-SampEn effectively quantify the increased complexity of both EGG and MIC signals, and the decreased cross-similarity between them, in the presence of vocal disorders. Thanks to the complementarity of nonlinear indicators to the traditionally considered linear ones, SampEn and cross-SampEn appear to be suitable candidates to enter the pool of approaches for investigating speech pathologies and obtaining potentially new insights into their nature.
Voice disorders assessed by (cross-) Sample Entropy of electroglottogram and microphone signals
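Sample Entropy itself is compact to implement. The sketch below uses the Chebyshev distance and excludes self-matches; the boundary handling of the template counts is slightly simplified relative to the canonical definition.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample Entropy: -ln(A/B), where B counts template matches of
    length m and A of length m+1 within tolerance r * std(x)."""
    x = np.asarray(x, float)
    tol = r * x.std()
    n = len(x)

    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        c = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (no self-match)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += int(np.sum(d <= tol))
        return c

    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```

A strongly regular series should score lower than white noise of the same length.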
S1746809413001249
Kinetic parameters of compartment models carry important physiological information. These parameters are estimated from time activity curves (TACs) obtained from dynamic positron emission tomography (PET). As the signal-to-noise ratio (SNR) of dynamic PET is low, the estimated kinetic parameters have low precision. Parametric images are formed from the kinetic parameters estimated for each pixel; typically, they have large spatial variance due to the low SNR and the high variance of the estimated kinetic parameters. Many methods have been developed to reduce this variance, mostly concentrating on TAC denoising and population-based constraints. Spatial regularization in the kinetic parameter domain is not commonly used, and its effects on the parametric images have not been investigated. The aim of this paper is to investigate the effect, in terms of bias and variance, of a quadratic spatial regularization applied directly to the parametric images. To this end, bias and variance are investigated using two simulated datasets at different noise and spatial regularization levels. The results on the simulated phantom indicate that the effects of spatial regularization on bias and variance depend on the size of the region and on the difference in parameter values between neighbouring regions. Hence, spatial regularization should be used carefully if the region of interest is small and differs greatly in kinetic parameter value from its surrounding regions. In addition, the effects of noise level on the bias and variance of the estimated kinetic parameters decrease with increasing spatial regularization.
Effects of spatial regularization on kinetic parameter estimation for dynamic PET
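A quadratic spatial penalty applied directly to a parametric image has a closed-form solution; the 1-D sketch below shows the smoothing operator and illustrates the bias–variance trade-off (variance shrinks while the mean is preserved). The first-difference penalty is an illustrative choice of spatial operator.

```python
import numpy as np

def regularize_parametric(y, lam):
    """Quadratic spatial regularization of a 1-D parametric image:
    minimizes ||x - y||^2 + lam * ||D x||^2 with D the first-difference
    operator, i.e. x = (I + lam * D^T D)^{-1} y."""
    y = np.asarray(y, float)
    n = len(y)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```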
S1746809413001262
In this paper, a robust algorithm for disease type determination in brain magnetic resonance images (MRI) is presented. The proposed method classifies an MRI as normal or as one of seven different diseases. First, the two-level two-dimensional discrete wavelet transform (2D DWT) of the input image is computed. Our analysis shows that the wavelet coefficients of the detail sub-bands can be modeled by the generalized autoregressive conditional heteroscedasticity (GARCH) statistical model, and the parameters of the GARCH model are taken as the primary feature vector. After feature vector normalization, principal component analysis (PCA) and linear discriminant analysis (LDA) are used to extract the proper features and remove redundancy from the primary feature vector. Finally, the extracted features are applied separately to K-nearest neighbor (KNN) and support vector machine (SVM) classifiers to determine whether the image is normal or which disease type it shows. Experimental results indicate that the proposed algorithm achieves a high classification rate and outperforms recently introduced methods while requiring fewer features for classification.
Robust algorithm for brain magnetic resonance image (MRI) classification based on GARCH variances series
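The GARCH feature idea above can be illustrated with the GARCH(1,1) conditional-variance recursion; in the described method, the (ω, α, β) parameters fitted to the wavelet detail coefficients would form the primary feature vector. Parameter fitting (maximum likelihood) is omitted here, so this only shows the model being fitted.

```python
import numpy as np

def garch11_variances(x, omega, alpha, beta):
    """Conditional variance series of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * x[t-1]**2 + beta * sigma2[t-1]."""
    x = np.asarray(x, float)
    s2 = np.empty(len(x))
    s2[0] = x.var()  # simple initialization
    for t in range(1, len(x)):
        s2[t] = omega + alpha * x[t - 1] ** 2 + beta * s2[t - 1]
    return s2
```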
S1746809413001274
An increasing number of studies use the spectrum of cardiac signals for analyzing the spatiotemporal dynamics of complex cardiac arrhythmias. However, the relationship between the spectrum of cardiac signals and the spatiotemporal dynamics of the underlying cardiac sources remains to date unclear. In this paper, by following a multivariate signal analysis approach we identify the relationship between the spectrum of cardiac signals, the spatiotemporal dynamics of cardiac sources, and the measurement characteristics of the lead systems. Then, by using analytical methods and computer simulations we analyze the spectrum of cardiac signals measured by idealized lead systems during correlated and uncorrelated spatiotemporal dynamics. Our results show that lead systems can have distorting effects on the spectral envelope of cardiac signals, which depend on the spatial resolution of the lead systems and on the degree of spatiotemporal correlation of the underlying cardiac sources. In addition to this, our results indicate that the spectral features that do not depend on the spectral envelope behave robustly against different choices of lead systems.
Relating the spectrum of cardiac signals to the spatiotemporal dynamics of cardiac sources
S1746809413001286
A major obstacle in effective local drug delivery to a target site is the non-invasive measurement of the concentration distribution after local administration. Herein, the development of a non-invasive in vitro magnetic resonance imaging (MRI) method is described to quantitatively study the distribution of a drug surrogate and to measure fluid flow velocity in the region of interest (ROI). Dynamic contrast-enhanced MRI was used to study the diffusion–convection transport of a magnetic resonance contrast agent, Gd-DTPA, in 1% agarose gel. The relationship between the concentration of Gd-DTPA and the T1 relaxation time was determined using an inversion recovery MRI technique, and the concentration distribution of Gd-DTPA was estimated from a calibration curve relating the two. Using the estimated concentration profiles, center-of-mass points were calculated at a series of time points in order to determine the fluid flow velocity, which correlated well with the real volumetric flow velocity at early time points. In this study, we thus developed a method to analyze MR images quantitatively and to determine fluid flow velocity through a tissue in vivo.
A novel center-of-mass method to measure fluid velocity with MRI
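The center-of-mass velocity estimate above can be sketched directly from a time series of concentration maps; `com_velocity` and `mm_per_px` are hypothetical names.

```python
import numpy as np

def center_of_mass(conc):
    """Intensity-weighted centroid (row, col) of a 2-D concentration map."""
    c = np.asarray(conc, float)
    total = c.sum()
    rows = np.arange(c.shape[0])
    cols = np.arange(c.shape[1])
    return (rows @ c.sum(axis=1)) / total, (cols @ c.sum(axis=0)) / total

def com_velocity(maps, dt, mm_per_px=1.0):
    """Mean speed of the centre of mass across a series of maps taken
    dt apart."""
    pts = np.array([center_of_mass(m) for m in maps])
    steps = np.diff(pts, axis=0)
    return float(np.mean(np.hypot(steps[:, 0], steps[:, 1])) * mm_per_px / dt)
```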
S1746809413001298
The accommodative ability of the natural eye lens is lost when it is replaced by an artificial intraocular lens in cataract surgery. While monofocal intraocular lenses are usually intended for distance vision, multifocal lenses allow good vision at various distances, enabling the patient to live without glasses. In this work, several innovative multifocal intraocular lens design concepts are analyzed in terms of vision quality. Results from simulations of these lenses in different human eye models are compared with results from in vitro measurements. We further demonstrate how the choice of eye model impacts intraocular lens design, and we show how biometric parameters that are usually not considered in intraocular lens power calculation may influence vision quality.
Increased quality of vision by innovative intraocular lens and human eye modeling
S1746809413001316
In biomedical and psychological applications dealing with EEG, a suitable selection of the most relevant electrodes is useful for lightening the data acquisition and facilitating the signal processing. Therefore, an efficient method for extracting and selecting features from EEG channels is desirable. Classification methods are increasingly applied for obtaining important conclusions from diverse psychological processes, and specifically for emotional processing. In this work, an original and straightforward method, inspired by the spectral turbulence (ST) measure from electrocardiography and the support vector machine-recursive feature elimination (SVM-RFE) algorithm, is proposed for classifying EEG signals. The goal of this study is to introduce the ST concept in applications of artificial intelligence related to cognitive processes and to determine the best EEG channels for distinguishing between two different experimental conditions. By means of this method, the left temporal region of the brain has been revealed to be greatly involved in the processing of affective valence elicited by visual stimuli.
Spectral turbulence measuring as feature extraction method from EEG on affective computing
S1746809413001328
Visualization of the inner ear has been performed using magnetic resonance imaging (MRI) to investigate benign paroxysmal positional vertigo (BPPV). Our recent findings indicate that, in patients with BPPV, some internal auditory canal (IAC) nerves are narrower than in healthy subjects. The thicknesses of the IAC and its nerves are measured from brain MR images, with the cross-sectional area (CSA) of a nerve taken as a measure of its thickness. Statistical measurements and a statistical classification are performed on the CSA data to investigate any relation between the IAC, its nerves and BPPV.
An investigation on magnetic imaging findings of the inner ear: A relationship between the internal auditory canal, its nerves and benign paroxysmal positional vertigo
S174680941300133X
In this paper we have made a humble attempt to automate an ophthalmologic diagnostic system based on signal processing using wavelets. Electroretinographic signals reflect the activity of retinal cells from different layers of the inner retina, and these signals can therefore be used to predict various serious diseases. In this work we have analyzed 95 subjects from four different classes, viz. controls, Congenital Stationary Night Blindness (CSNB), rod-cone dystrophy and Central Retinal Vein Occlusion (CRVO). The signal features extracted by wavelets are used for morphological and statistical analysis and for obtaining subtle parameters such as entropy. The results comprise differences in the values of wavelet coefficients and in a-wave and b-wave amplitudes between normal and pathological signals. The colour intensity distribution of the scalograms shows marked variations in the maximum response and oscillatory potentials of the ERG signals for specific types of disease. Furthermore, we propose an Electroretinographic Index (ERI) derived from different entropy parameters which can be used to distinguish between the normal and abnormal classes. This new method based on ERG signal analysis can be reliable enough to build a solution for the constraints in the field of ophthalmology.
Wavelet based electroretinographic signal analysis for diagnosis
S1746809413001353
This paper presents an activity mode recognition approach to identify the motions of the human torso. The intent recognizer is based on decision tree classification in order to leverage its computational efficiency. The recognizer uses surface electromyography as the input and CART (classification and regression tree) as the classifier. The experimental results indicate that the recognizer can extract the user's intent within 215ms, which is below the latency a user will perceive. The approach achieves a low recognition error rate and a user-unperceived latency by using a sliding overlapped analysis window. The intent recognizer is envisioned as part of a high-level supervisory controller for a powered backbone exoskeleton.
Activity recognition of the torso based on surface electromyography for exoskeleton control
S1746809413001377
Pathological tremor is a roughly sinusoidal movement that impairs individuals’ activities of daily living. Biomechanical loading is a potential method for tremor suppression, provided the amplitude and frequency of the tremor signal can be estimated accurately. In this paper, a study on tremor is conducted and the characteristics of tremor signals are analyzed. An adaptive sliding bandlimited multiple Fourier linear combiner (ASBMFLC) algorithm is proposed to estimate the different desired signals with a sliding frequency band and zero phase lag. This method incorporates a digital filter, the weighted-frequency Fourier linear combiner (WFLC) and the bandlimited multiple Fourier linear combiner (BMFLC), with modification of the fundamental frequency and limitation of the frequency range. WFLC, BMFLC and the proposed algorithm are each evaluated on experimental tremor signals. The experimental results show that the developed algorithm can adapt to an unknown dominant frequency and determine the frequency band of interest without prior information. Furthermore, compared with WFLC and BMFLC, the improved method provides more accurate estimation of tremor and better extraction of the voluntary components from the measured signals.
Adaptive sliding bandlimited multiple Fourier linear combiner for estimation of pathological tremor
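A bandlimited multiple Fourier linear combiner models the tremor as a sum of sine/cosine pairs on a fixed grid of frequencies inside the band, with weights adapted by an LMS rule. The following is a minimal single-band sketch; parameter values and the function name are illustrative, and the paper's ASBMFLC additionally slides the band and corrects the fundamental frequency:

```python
import numpy as np

def bmflc_estimate(y, fs, f_lo, f_hi, df=0.1, mu=0.01):
    """Track a bandlimited quasi-periodic signal with an LMS-adapted
    bank of sine/cosine components (BMFLC-style combiner)."""
    freqs = np.arange(f_lo, f_hi + df, df)
    w = np.zeros(2 * len(freqs))          # adaptive weights
    est = np.zeros(len(y))
    for k, yk in enumerate(y):
        t = k / fs
        x = np.concatenate([np.sin(2 * np.pi * freqs * t),
                            np.cos(2 * np.pi * freqs * t)])
        est[k] = w @ x                    # a-priori estimate
        err = yk - est[k]
        w += 2 * mu * err * x             # LMS weight update
    return est
```

Because each frequency contributes a sine/cosine pair, the instantaneous regressor norm is constant, which makes the LMS recursion well behaved for a fixed step size.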
S1746809413001389
This work proposes a functionality for computerized tomography (CT) based investigation of diffuse lung disease diagnosis that enables the evaluation of the disease from lung anatomical structures. Automated methods for segmenting several anatomical structures in chest CT are proposed: namely the lung lobes, the airway tree and the pulmonary vessel tree. The airway and pulmonary vessel trees are segmented using a failure tracking and recovery algorithm. The algorithm checks the consistency of intermediate results and backtracks to a previous position in its history if a failure is detected. The quality of the result is improved while the processing time is reduced, even for subjects with lung diseases. The pulmonary vessels are segmented by the same algorithm with different seed points: the seed for the airway tree segmentation is within the tracheal tube, and the seed for the pulmonary vessel segmentation is within the heart. The algorithm is tested with CT images acquired from four distinct types of subjects: healthy, idiopathic interstitial pneumonias (IIPs), usual interstitial pneumonia (UIP) and chronic obstructive pulmonary disease (COPD). The main bronchi are found in the segmented airway and the associated lung lobes are determined. Combining the segmented lung lobes and the diffuse lung disease classification, it is possible to quantify how much of each lobe is injured and where. The results were compared with a conventional 3D region growing algorithm and with commercial systems. Several results were compared to evaluations by medical doctors: inter-lobe fissures, the percentage of each lung lobe that is injured, and lung and lobe volumes. The proposed algorithm was evaluated to be robust enough to segment the cases studied.
Integrated lung field segmentation of injured region with anatomical structure analysis by failure–recovery algorithm from chest CT images
S1746809413001390
Eye activity is one of the main sources of artifacts in electroencephalogram (EEG) recordings, and ocular artifacts can seriously distort the recorded EEG. It is an open issue to remove the ocular artifact as completely as possible without losing useful EEG information. Independent Component Analysis (ICA) has been one of the approaches used in practice to correct ocular artifacts. However, an ICA-based approach may over- or under-remove the artifacts when the EEG sources and ocular sources cannot be represented in different independent components (ICs). In this paper, a new approach combining ICA and Auto-Regressive eXogenous (ARX) modeling (ICA-ARX) is proposed for a more robust removal of ocular artifacts. In the proposed approach, to lower the negative effect induced by ICA, ARX is used to build multiple models based on the ICA-corrected signals and reference EEG selected before the contamination period for each channel; the optimal model is then selected for further artifact removal. The results on both simulated signals and actual EEG recordings demonstrate the effectiveness of the proposed approach for ocular artifact removal, and its potential for use in EEG-related studies.
Robust removal of ocular artifacts by combining Independent Component Analysis and system identification
S1746809413001407
Over the past several years, although the resolution, signal-to-noise ratio and acquisition speed of magnetic resonance imaging (MRI) technology have increased, MR images are still affected by artifacts and noise. A tradeoff between noise reduction and the preservation of actual detail features has to be made in a way that enhances the diagnostically relevant image content. Therefore, noise reduction remains a difficult task. A variety of techniques have been presented in the literature on denoising MR images, and each technique has its own assumptions, advantages and limitations. The purpose of this paper is to present a survey of the published literature on denoising methods for MR images. After a brief introduction to magnetic resonance imaging and the characteristics of noise in MRI, the popular approaches are classified into different groups and an overview of the various methods is provided. The advantages and limitations of each denoising method are also discussed.
A survey on the magnetic resonance image denoising methods
S1746809413001559
Analysis of respiratory sounds can help in the recognition of various respiratory diseases. Due to acoustic noise in hospital environments, the recorded sounds are contaminated. The noise can corrupt the analysis and should therefore be removed. Because of the chaotic nature of respiratory sounds, traditional noise reduction methods may not be efficient, so taking advantage of algorithms especially devised for noise reduction in chaotic signals can lead to better results. In this paper, a new method based on an original local projection algorithm is presented to reduce the noise in respiratory sounds.
A chaotic viewpoint on noise reduction from respiratory sounds
S1746809413001572
In this work, we are interested in developing an efficient voice disorder classification system using the discrete wavelet packet transform (DWPT), multi-class linear discriminant analysis (MC-LDA), and a multilayer neural network (ML-NN). The characteristics of normal and pathologic voices are well described by the energy and Shannon entropy extracted from the coefficients in the output nodes of the best wavelet packet tree with eight decomposition levels. The separately extracted wavelet packet-based features, energy and Shannon entropy, are reduced to a 2-dimensional feature vector using multi-class linear discriminant analysis. The experimental implementation uses 258 data samples including normal voices and speech signals impaired by three sorts of disorders: A–P squeezing, gastric reflux, and hyperfunction. The voice disorder classification results achieved on the Kay Elemetrics database, developed by the Massachusetts Eye and Ear Infirmary (MEEI), show average classification accuracies of 96.67% and 97.33% for the structures composed of wavelet packet-based energy and entropy features, respectively. In these structures, the feature vectors are optimized by multi-class linear discriminant analysis and finally classified by the multilayer neural network. The results obtained from the confusion matrix and cross-validation tests show that this novel voice pathology classification system is capable of significant classification improvement with low complexity. These results suggest that the proposed voice pathology classification tool can be employed for the early detection of laryngeal pathology and for the assessment of vocal improvement following voice therapy in a clinical setting.
An efficient voice pathology classification scheme based on applying multi-layer linear discriminant analysis to wavelet packet-based features
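The two per-node features used here are simple to state. Assuming the wavelet packet coefficients of a node have already been computed by any DWPT routine, the energy is the sum of squared coefficients and the Shannon entropy is computed on the normalized squared coefficients. A sketch (the function name is our own; this is not tied to any particular wavelet library):

```python
import numpy as np

def node_features(coeffs):
    """Energy and Shannon entropy (bits) of one wavelet-packet node.

    Entropy is computed on the squared coefficients normalized to a
    probability distribution, so a flat node gives maximal entropy and
    a single dominant coefficient gives zero entropy.
    """
    c = np.asarray(coeffs, dtype=float)
    energy = float(np.sum(c * c))
    p = c * c / energy
    p = p[p > 0]                      # 0 * log(0) treated as 0
    entropy = float(-np.sum(p * np.log2(p)))
    return energy, entropy
```

Collecting these two values over the terminal nodes of the best tree yields the raw feature vector that MC-LDA then reduces.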
S1746809413001602
In this paper, a method for the detection of steady-state visual evoked potentials (SSVEPs) in a non-invasive, multiple-channel, asynchronous brain–computer interface is proposed. It is based on the canonical correlation analysis spatial filter for identifying optimal weighted combinations of electrode signals, followed by a cluster analysis of its coefficients for fast and accurate SSVEP detection. High information transfer rates can be achieved after a short calibration session. The proposed algorithm, a standard spectrum analysis approach, and two competitive spatial filtering and detection methods were evaluated in a series of experiments using data from 21 subjects. The obtained results showed a significant improvement in classification accuracy and in average detection time for a large group of users.
Cluster analysis of CCA coefficients for robust detection of the asynchronous SSVEPs in brain–computer interfaces
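The CCA stage of such SSVEP detectors is commonly implemented as follows: for each candidate stimulus frequency, build a reference matrix of sine/cosine harmonics and take the largest canonical correlation with the multichannel EEG segment. A minimal sketch, with function names and defaults of our own choosing; the cluster analysis of CCA coefficients proposed in the paper is not reproduced here:

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    qx, _ = np.linalg.qr(X - X.mean(axis=0))
    qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return float(s[0])

def ssvep_classify(eeg, fs, stim_freqs, n_harm=2):
    """Pick the stimulus frequency whose sine/cosine reference set
    correlates best with the multichannel EEG segment (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in stim_freqs:
        ref = np.column_stack(
            [fn(2 * np.pi * (h + 1) * f * t)
             for h in range(n_harm) for fn in (np.sin, np.cos)])
        scores.append(max_canon_corr(eeg, ref))
    return stim_freqs[int(np.argmax(scores))], scores
```

The QR-then-SVD route computes canonical correlations without forming covariance matrices explicitly, which keeps the sketch short and numerically stable.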
S1746809413001626
A nonlinear stochastic state-space model of HIV-1 infection, with the concentrations of healthy cells, infected cells and virions (free virus particles) as state variables, is utilized to design a control method. In this paper, a new optimal nonlinear stochastic controller is presented, based on the bacterial foraging optimization (BFO) method, to decrease the number of infected cells in the presence of stochastic parameters of the HIV dynamics. Bacterial foraging optimization sigmoid nonlinear control (BFO-SNC) is a novel nonlinear robust optimal method that can control the biological characteristics of the nonlinear stochastic HIV dynamics through drug dosage management. The BFO algorithm optimizes the three parameters of this controller, searching the parameter domain so as to minimize the stochastic expected value of a cost function. Simulation results show that the proposed BFO-SNC scheme improves the treatment performance compared to other control methods; for comparison, a modified PID controller is chosen as an alternative controller structure.
Optimal sigmoid nonlinear stochastic control of HIV-1 infection based on the bacterial foraging optimization method
S1746809413001699
Glaucoma is a group of diseases that often cause visual impairment without any prior symptoms. It is usually caused by high intraocular pressure (IOP), which can result in blindness by damaging the optic nerve. Hence, diagnosing glaucoma at an early stage can prevent vision loss. This paper proposes a novel automated glaucoma diagnosis system using higher order spectra (HOS) cumulants extracted from the Radon transform (RT) applied to digital fundus images. In this work, the images are classified into three classes: normal, mild glaucoma and moderate/severe glaucoma. The 3rd-order HOS cumulant features are subjected to linear discriminant analysis (LDA) to reduce the number of features, and these clinically significant linear discriminant (LD) features are then fed to support vector machine (SVM) and Naïve Bayesian (NB) classifiers for automated diagnosis. This work is validated using 272 fundus images (100 normal, 72 mild glaucoma and 100 moderate/severe glaucoma) with a ten-fold cross-validation method. The proposed system can detect the early glaucoma stage with an average accuracy of 84.72%, and the three classes with an average accuracy of 92.65%, sensitivity of 100% and specificity of 92%, using the NB classifier. This automated system can be used during mass screening for glaucoma.
Automated classification of glaucoma stages using higher order cumulant features
S1746809413001705
Estimation of the genome copy number variations (CNVs) measured using high-resolution array-comparative genomic hybridization (HR-CGH) microarrays is commonly performed in the presence of large Gaussian noise with white properties and different segmental variances. Medical experts must thus be highly concerned about the confidence limits for CNVs in order to make correct decisions about genomic changes. We carry out a probabilistic analysis of CNVs in HR-CGH microarray measurements and show that jitter in the breakpoints can be approximated with the discrete skew Laplace distribution. Using this distribution, we find the upper and lower confidence boundaries that guarantee the existence of genomic changes at a confidence level of 99.73%. We suggest combining these boundaries with the estimates to give medical experts more information about the actual CNVs. Experimental verification of the theory is provided by simulation and by using real HR-CGH microarray-based measurements.
Confidence limits for genome DNA copy number variations in HR-CGH array measurements
S1746809413001717
The analysis of heart rate variability (HRV) provides a non-invasive tool for assessing the autonomic regulation of the cardiovascular system. Quadratic time–frequency distributions (TFDs) have been used to account for the non-stationarity of HRV signals, but their performance is affected by cross-terms. This study presents an improved type of quadratic TFD with a lag-independent kernel (LIK-TFD) by introducing a new parameter defined as the minimal frequency distance among signal components. The resulting TFD with this LIK can effectively suppress the cross-terms while maintaining the time–frequency (TF) resolution needed for accurate characterization of HRV signals. Results of quantitative and qualitative tests on both simulated and real HRV signals show that the proposed LIK-TFDs outperform other TFDs commonly used in HRV analysis. The findings of the study indicate that these LIK-TFDs provide more reliable TF characterization of HRV signals for extracting new instantaneous frequency (IF) based, clinically related features. These IF-based measurements were shown to be important in detecting perinatal hypoxic insult, a severe cause of morbidity and mortality in newborns.
Improved characterization of HRV signals based on instantaneous frequency features estimated from quadratic time–frequency distributions with data-adapted kernels
S1746809413001742
In this paper, an innovative electroencephalogram-based brain–computer interface (BCI), known as the “Character Plotter”, is presented. The proposed design uses steady-state visually evoked potentials. The subjects generate characters by drawing lines, step by step, between six circles on the computer screen. Additionally, there are three circles for controlling the drawing procedure. The features obtained from canonical correlation analysis are used for classification. The proposed synchronous BCI design was tested on 16 subjects in offline and online experimental tasks using support vector machines, linear discriminant analysis and the nearest mean classifier. The obtained classification performances indicate that the proposed design can be used successfully for classification. The Character Plotter resembles the natural writing scheme of humans; therefore, subjects can adapt to the BCI design in a short training session.
A novel steady-state visually evoked potential-based brain–computer interface design: Character Plotter
S1746809413001754
Background Many investigations based on nonlinear methods have been carried out in the research of seizure detection. However, some of these nonlinear measures cannot achieve satisfying performance without considering the basic rhythms of epileptic EEGs. New method To overcome these defects, this paper proposes a framework based on wavelet-based nonlinear features and an extreme learning machine (ELM) for seizure detection. Three nonlinear measures, i.e., approximate entropy (ApEn), sample entropy (SampEn) and recurrence quantification analysis (RQA), were computed from the original EEG signals and the corresponding wavelet-decomposed sub-bands separately. The wavelet-based energy was measured as a baseline for comparison. The combination of sub-band features was then fed to ELM and SVM classifiers, respectively. Results The decomposed sub-band signals show significant discrimination between interictal and ictal states, and the union of sub-band features helps to achieve better detection. All three nonlinear methods show higher sensitivity than the wavelet-based energy analysis using the proposed framework. The wavelet-based SampEn-ELM detector reaches the best performance, with a sensitivity of 92.6% and a false detection rate (FDR) of 0.078. Compared with SVM, the ELM detector is better in terms of detection accuracy and learning efficiency. Comparison with existing method(s) The decomposition of the original signals into sub-bands leads to better identification of seizure events compared with existing nonlinear methods that do not consider the time–frequency decomposition. Conclusions The proposed framework achieves not only a high detection accuracy but also a very fast learning speed, which makes it feasible for the further development of an automatic seizure detection system.
A framework on wavelet-based nonlinear features and extreme learning machine for epileptic seizure detection
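Of the three nonlinear measures, sample entropy is the easiest to sketch. Below is a compact, quadratic-time implementation of SampEn(m, r) following the usual Richman–Moorman definition; it is an illustrative baseline (applied per sub-band in a framework like the one above), not the authors' code:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series.
    r defaults to the common choice of 0.2 * std(x)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def count_matches(mm):
        # All templates of length mm; pairwise Chebyshev distances.
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        # Count matching pairs, excluding self-matches on the diagonal.
        return (np.sum(d <= r) - len(templ)) / 2

    b, a = count_matches(m), count_matches(m + 1)
    return float(np.log(b / a)) if a > 0 and b > 0 else float('inf')
```

Regular, rhythmic signals (e.g. ictal spiking) yield low SampEn, while irregular background activity yields high SampEn, which is what makes the measure discriminative between states.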
S1746809413001766
Background The pressure dependent recruitment model (PRM) is a comprehensive mathematical description of pulmonary mechanics in acute respiratory distress syndrome (ARDS). However, previous investigations of the PRM suggested that the number of model parameters may cause inaccurate parameter estimation. Methods PRM models were evaluated for 12 ARDS patients who underwent a low-flow recruitment manoeuvre. The identified parameter set formed the basis of a parameter reduction investigation of the PRM, which measured the mean cohort residual error (ψ) yielded by each possible combination of the identified parameter set with the non-identified parameter values set to a priori population constants. Results Reducing the five-variable PRM to a particular three-variable model configuration produced a limited increase in the model fit-to-data residuals (ψ5=22.68, ψ3=29.21mbar). The reduced model evaluates airway resistance, compliance and distension as model variables and uses population values for the alveolar opening pressure and the ratio of open alveoli at end expiration. Conclusions The reduced PRM captures all major pressure–volume response features in the ARDS patients. Reduced parameterisation allows more robust parameter identification and thus more reliable parameter estimates that may prove more useful in a clinical setting.
Reformulation of the pressure-dependent recruitment model (PRM) of respiratory mechanics
S1746809413001791
This paper presents an algorithm, in the context of speech analysis and the evaluation of pathologic/dysphonic voices, which splits the signal of the glottal excitation into harmonic and noise components. The algorithm uses a harmonic and noise splitter and glottal inverse filtering. The combination of these two functionalities leads to an improved estimation of the glottal excitation and its components. The results demonstrate this improvement of the estimates of the glottal excitation in comparison to a known inverse filtering method (IAIF). These results comprise performance tests with synthetic voices and application to natural voices, showing the waveforms of the harmonic and noise components of the glottal excitation. This enhances the retrieval of glottal information, such as waveform patterns with physiological meaning.
The harmonic and noise information of the glottal pulses in speech
S1746809413001808
Single molecule fluorescence microscopy is a powerful technique for uncovering detailed information about biological systems, both in vitro and in vivo. In such experiments, the inherently low signal to noise ratios mean that accurate algorithms to separate true signal and background noise are essential to generate meaningful results. To this end, we have developed a new and robust method to reduce noise in single molecule fluorescence images by using a Gaussian Markov random field (GMRF) prior in a Bayesian framework. Two different strategies are proposed to build the prior: an intrinsic GMRF, with a stationary relationship between pixels, and a heterogeneous intrinsic GMRF, with a differently weighted relationship between pixels classified as molecules and background. Testing with synthetic and real experimental fluorescence images demonstrates that the heterogeneous intrinsic GMRF is superior to other conventional denoising approaches.
Statistical denoising scheme for single molecule fluorescence microscopic images
S174680941300181X
The analysis of heart rate variability (HRV) is central to cardiac diagnostics, but its essential nonstationarity has started to gain attention only recently. The aim of this work is to develop a method for finding mathematical indicators of HRV spectral properties that takes frequency nonstationarity into account. The analysis is performed both for a new model of the rhythmogram that accounts for frequency modulation and for a real rhythmogram recorded during a head-up tilt test. The continuous wavelet transform (CWT) of the frequency-modulated signal has been derived in analytical form. The local frequency of the heart rhythm giving the maximum of the CWT has been determined. Treated as another non-stationary signal, this frequency has then itself been subjected to CWT, following a double CWT (DCWT) procedure. The transient periods of the local frequency, the frequencies of local frequency fluctuations about the main trend, and the periods of emergence and attenuation of such fluctuations have been defined by estimating the spectral integrals in the ranges {ULF, VLF, LF, HF}. The presented technique allows HRV monitoring in cases of arrhythmia, ectopic beats, heart rate turbulence and other non-stationary disturbances of the heart rhythm, as well as in the study of long-term cardiac records.
Analysis of non-stationary HRV as a frequency modulated signal by double continuous wavelet transformation method
S1746809413001821
Automated detection of the various features of an electrocardiogram (ECG) waveform has wide applications in clinical diagnosis. Although detection of typical QRS waveforms has been widely studied, detection of atypical waveforms with complex morphologies remains challenging. The importance of detecting these complex waveforms and their patterns has grown recently due to their clinical implications. In this paper, we propose a novel algorithm for detecting the various peaks of such complex ECG waveforms. It is identified that most of the well-formed ECG waveforms – both typical and complex – fall into nine broad categories according to the standard nomenclature. Motivated by this ECG waveform classification, our algorithm uses signal analysis techniques such as first and second derivatives and adaptive thresholds to classify these waveforms accordingly by detecting the various features present in them. Temporal coherence along a single lead as well as spatial coherence across the 12 leads are used to improve performance. For waveform and pattern analysis, data from 50 healthy subjects and 50 patients with myocardial infarction were randomly selected. Results with an overall sensitivity of 99.06% and overall positive predictive value of 98.89% validate the effectiveness of the approach. Further, the algorithm gives true detections even on waveforms with fluctuations in baseline and wave amplitudes, proving its robustness against such variations.
Automated analysis of ECG waveforms with atypical QRS complex morphologies
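To illustrate only the derivative-plus-adaptive-threshold idea (the paper's nine-category waveform classification and 12-lead spatial coherence are not reproduced), a toy R-peak detector on a clean signal might look like the following; all names, window sizes and the threshold rule are our assumptions:

```python
import numpy as np

def detect_r_peaks(sig, fs, refractory=0.2):
    """Toy R-peak detector: squared first derivative, a threshold
    adapted to the segment's slope energy, and a refractory period."""
    d = np.diff(sig)
    energy = d * d                      # emphasizes steep QRS slopes
    thr = 0.5 * energy.max()            # adaptive to this segment
    peaks, last = [], -int(refractory * fs)
    for i in np.flatnonzero(energy > thr):
        if i - last >= refractory * fs:
            # Refine: local maximum of the raw signal near the crossing.
            lo = max(0, i - int(0.05 * fs))
            hi = min(len(sig), i + int(0.05 * fs))
            peaks.append(lo + int(np.argmax(sig[lo:hi])))
        last = i
    return peaks
```

The refractory period suppresses the multiple threshold crossings produced by the up- and down-slopes of a single QRS complex; a second-derivative stage and per-lead coherence checks, as described above, would be needed for atypical morphologies.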
S1746809413001833
Intensive insulin therapy has previously been shown to reduce mortality when blood glucose is lowered to between 4.4 and 6.1mmol/l. However, presumably due to fear of hypoglycemia, the currently recommended glycemic target is 7.8–10mmol/l. This study evaluates the effect of modifications to the Glucosafe system on the glycemic outcomes of an in silico cohort, and which modifications are necessary to lower mean blood glucose under 6.1mmol/l without hypoglycemic incidents. Based on data from 12 real patients from a previous clinical trial, 12 virtual patients were constructed; the groups were compared and the results of the modifications evaluated. Results indicate that virtual patients are applicable in evaluating modifications to advice generation, and that it is possible to lower mean blood glucose below 6.1mmol/l with no hypoglycemic incidents. In some patients increased insulin use did not achieve this, and decreasing nutritional intake was necessary.
Evaluating modifications to the Glucosafe decision support system for tight glycemic control in the ICU using virtual patients
S1746809414000020
A large number of biomedical and surveillance applications aim to identify specific events from sensor recordings. These events can be defined as rare and relevant occurrences with a limited duration. When possible, human annotation is available, and developed techniques generally adopt the standard recognition approach in which a statistical model is built for the event and non-event classes. However, the goal is not to detect the event in its complete length precisely, but rather to identify the presence of an event, which leads to an inconsistency in the standard framework. This paper proposes an approach in which labels and features are modified so that they are suited for temporal event detection. The technique consists of an iterative process made of two steps: finding the most discriminant segment inside each event, and synchronizing the features. Both steps are performed using a mutual information-based criterion. Experiments are conducted in the context of audio-based automatic cough detection. Results show that the proposed method enhances the process of feature selection and significantly increases the event detection capabilities compared to the baseline, providing an absolute reduction of the revised event error rate of between 4 and 8%. Thanks to these improvements, the audio-only cough detection algorithm outperforms a commercial system using 4 sensors, with an absolute gain of 26% in terms of sensitivity, while preserving the same specificity performance.
Using mutual information in supervised temporal event detection: Application to cough detection
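The mutual information criterion used for segment selection and feature ranking can be estimated from a joint histogram. A minimal sketch for a continuous feature against discrete event/non-event labels; the binning scheme and names are our assumptions, not the paper's estimator:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """MI (in bits) between a continuous feature x and discrete labels y,
    estimated from a joint histogram over quantized feature values."""
    # Quantize x into `bins` equal-width cells.
    edges = np.histogram_bin_edges(x, bins=bins)[1:-1]
    xq = np.digitize(x, edges)
    labels = {l: j for j, l in enumerate(sorted(set(y)))}
    joint = np.zeros((bins, len(labels)))
    for xi, yi in zip(xq, y):
        joint[xi, labels[yi]] += 1
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))
```

A feature (or event segment) that separates the classes well approaches the label entropy in bits, while an uninformative one stays near zero, which is exactly what makes MI usable as a ranking criterion in the iterative procedure above.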
S1746809414000032
A simple, real-time, differential operator based spike detection algorithm is described, which can efficiently detect seizure spikes in noisy ECoG signals. Simultaneous spikes and inverted spikes have been detected across all focal ECoG channels during the preictal, ictal and postictal periods. Out of 79 seizures recorded from 21 patients, about 80% showed a higher occurrence of simultaneous spikes and inverted spikes across focal channels after the seizure offset than during the seizure in the 0–40Hz range, where the duration studied after the offset is equal to the duration of the seizure. This is an important finding because it runs contrary to the prevailing wisdom that an epileptic seizure is a hyper-synchronous phenomenon.
Simultaneous multi-channel spikes and inverted spikes in focal epileptic ECoG are more after offset than during the seizure
S1746809414000044
In Single Photon Emission Computed Tomography (SPECT), projections over 2π with non-uniform attenuation contain redundant information; the scanning angle can thus be reduced to π, which can be exploited to improve the imaging quality. The precondition for this reduction is the application, during reconstruction, of corrections for gamma photon attenuation and for the distance-dependent spatial resolution of the detector. Despite the existing redundancy, projections in opposite directions cannot be treated identically because of statistical differences between the corresponding views. In this work we propose a new approximation method to select an optimal set of views over π with the best statistical properties. Using this set of views during data acquisition, the image quality can be improved. The paper also contains the verification and evaluation results of the proposed approximation method using a mathematical phantom and two realistic physical phantoms.
Practical estimation method of the optimal scanning protocol for 180° data acquisition in parallel SPECT imaging
S1746809414000056
Ultrasound imaging is one of the most important and cheapest instruments used for diagnostic purposes by clinicians. Due to inherent limitations of acquisition methods and systems, ultrasound images are corrupted by multiplicative speckle noise that degrades the quality and, most importantly, the texture information present in the ultrasound image. In this paper, we propose an algorithm based on a new multiscale geometric representation, the discrete ripplet transform, combined with a non-linear bilateral filter, in order to reduce speckle noise in ultrasound images. The ripplet transform, with its features of anisotropy, localization, directionality and multiscale representation, is employed to provide an effective representation of the noisy coefficients of log-transformed ultrasound images. The bilateral filter is applied to the approximation ripplet coefficients to improve denoising efficiency and preserve edge features effectively. The performance of the proposed method is evaluated by conducting extensive simulations using both synthetic speckled and real ultrasound images. Experiments show that the proposed method removes speckle and preserves edges and image details better than several existing methods.
Ripplet domain non-linear filtering for speckle reduction in ultrasound medical images
S174680941400007X
This work proposes a comparative study of a pair of electrocardiographic 2D representations: the frontal plane (FP) and a preferential plane (PP) obtained from ECG data. During depolarization and repolarization, the main electrical vectors were analyzed and compared between healthy subjects and patients referred for percutaneous transluminal coronary angioplasty (PTCA). Recordings were obtained at rest. Many patients from the latter group presented normal ECGs; thus, the hypothesis to prove was that the electrical axis in either of the studied planes would effectively discriminate silent ischemia records from healthy ones. The FP was constructed with leads I and aVF, while the PP used the first two eigenvectors of the spatial correlation matrix of the ECG. Although the depolarization and repolarization vectors from both groups were normal, those from the silent ischemia group were strongly biased to the left, closer to the limit of the normality range. This slight change produced a significant separation between health and disease in the FP. Here, most of the parameters were highly informative, even those related to the depolarization phase. The cardiac vector, integrating both depolarization and repolarization information, presented the highest performance (AUC=0.89). Parameters in the PP, however, did not produce an acceptable discrimination, except for the amplitude of the T-wave (AUC=0.79). Additionally, the repolarization orientation in the FP was the only marker that simultaneously discriminated three different groups of patients according to their occlusion sites (p <0.0001). In conclusion, the FP offered a 2D representation general enough to enable the separation of silent ischemia versus healthy populations while the PP did not, due mainly to its individually optimized nature, which fails to provide a unique referential frame for all subjects.
2D ECG differences in frontal vs preferential planes in patients referred for percutaneous transluminal coronary angioplasty
S1746809414000081
The four main functions available in current clinical prostheses (e.g. Otto Bock DMC Plus®) are power grasp, hand open, wrist pronation and wrist supination. Improving the control of these two DoFs (hand open/close and wrist rotation) is therefore of great clinical and commercial interest. This study investigates whether control performance can be improved by targeting the wrist rotator muscles by means of intramuscular EMG. Nine able-bodied subjects were evaluated using offline metrics and during a real-time control task. Two intramuscular (targeted) and four surface EMG channels were recorded concurrently from the right forearm. The control was derived either from the four surface sources or by combining two surface channels with the two intramuscular channels located in the pronator and supinator muscles (combined EMG). Five metrics (Throughput, Path Efficiency, Average Speed, Overshoot and Completion Rate) were used to quantify real-time performance. A significant improvement of 20% in Throughput was obtained with combined EMG (0.90±0.12bit/s) compared to surface EMG alone (0.75±0.10bit/s). Furthermore, combined EMG performed significantly better than surface EMG in terms of Overshoot, Path Efficiency and offline classification error. No significant difference was found for Completion Rate or Average Speed. The results obtained in this study imply that targeting muscles involved in the rotation of the forearm could improve the performance of myoelectric control systems that include both wrist rotation and opening/closing of a terminal device.
Combined surface and intramuscular EMG for improved real-time myoelectric control performance
S1746809414000093
Low signal-to-noise ratio and low contrast are the major limitations for segmentation, analysis and interpretation of medical thermal images. In this work, an attempt has been made to improve and preserve the inter-region edges by effectively removing the noise without blurring, and hence to extract the breast tissues from infrared images using level sets based on the improved edge information. The Gaussian filter is a linear, homogeneous diffusion process that performs smoothing at each location, which blurs edge information and makes detection and localization of edges difficult. To avoid smoothing across boundaries, an anisotropic diffusion based smoothing filter is used. This enables smoothing within each region while preserving sharp region boundaries. The performance improvement of the diffusion filter is verified and validated by extracting the breast tissues. The segmentation of regions of interest (ROIs) is performed by evolving the initial level set function based on this improved edge information. The extracted ROIs are compared with the corresponding four sets of ground truth images. The results show good agreement of the segmented ROIs with the ground truths. Further, the performance of the segmentation method is analyzed across inter-person variations by calculating quantitative measures based on overlap and the statistics of regional similarities. It is observed that the segmentation method was able to delineate accurate regions of interest irrespective of the limitations of thermal images, such as the lack of clear edges. An average regional similarity of 98% is obtained between segmented ROIs and ground truth images. Therefore, the enhanced edge detail appears useful for improving the performance of the segmentation algorithm, which could be used during breast cancer screening for early detection of tumors.
Anisotropic diffusion filter based edge enhancement for segmentation of breast thermogram using level sets
S1746809414000111
This work investigates the use of switching linear Gaussian state-space models for the segmentation and automatic labelling of Stage 2 sleep EEG data characterised by spindles and K-complexes. The advantage of this approach is that it offers a unified framework for detecting multiple transient events within background EEG data. Specifically, for the identification of background EEG, spindles and K-complexes, true positive rates (false positive rates) of 76.04% (33.47%), 83.49% (47.26%) and 52.02% (7.73%), respectively, were obtained on a sample-by-sample basis. A novel semi-supervised model allocation approach is also proposed, allowing new unknown modes to be learnt in real time.
Automatic detection of spindles and K-complexes in sleep EEG using switching multiple models
S1746809414000123
Delivery of electroporation pulses in electroporation-based treatments could potentially induce heart-related effects. The objective of our work was to develop a software tool for electrocardiogram (ECG) analysis to facilitate detection of such effects in pre-selected ECG or heart rate variability (HRV) parameters. Our software tool consists of five distinct modules for: (i) preprocessing; (ii) learning; (iii) detection and classification; (iv) selection and verification; and (v) ECG and HRV analysis. Its key features are: automated selection of ECG segments from the ECG signal according to specific user-defined requirements (e.g., selection of relatively noise-free ECG segments); automated detection of prominent heartbeat features, such as the Q, R and T wave peaks; automated classification of individual heartbeats as normal or abnormal; display of heartbeat annotations; quick manual screening of the analyzed ECG signal; and manual correction of annotation and classification errors. The performance of the detection and classification module was evaluated on 19 two-hour-long ECG records from the Long-Term ST database. On average, the QRS detection algorithm had high sensitivity (99.78%), high positive predictivity (99.98%) and a low detection error rate (0.35%). The classification algorithm correctly classified 99.45% of all normal QRS complexes. For normal heartbeats, a positive predictivity of 99.99% and a classification error rate of 0.01% were achieved. The software tool provides reliable and effective detection and classification of heartbeats and calculation of ECG and HRV parameters. It will be used to clarify issues concerning patient safety during the electroporation-based treatments used in clinical practice.
Preventing the electroporation pulses from interfering with the heart is becoming increasingly important because new applications of electroporation-based treatments are being developed which use endoscopic, percutaneous or surgical means to access internal tumors or tissues, and in which the target tissue can be located in the immediate vicinity of the heart.
Matlab-based tool for ECG and HRV analysis
S1746809414000135
Objectives A pathological voice detection and classification method based on MPEG-7 audio low-level features is proposed in this paper. MPEG-7 features were originally designed for multimedia indexing, covering both video and audio. Indexing is related to event detection, and as pathological voice is a distinct event from normal voice, we show that MPEG-7 part-4 audio low-level features perform very well in detecting pathological voices, as well as in binary classification of the pathologies. Patients and methods The experiments were performed on a subset of sustained vowel (“AH”) recordings from healthy and voice pathological subjects, from the Massachusetts Eye and Ear Infirmary (MEEI) database. For classification, a support vector machine (SVM) was applied. An optional feature selection method, namely the Fisher discrimination ratio, was applied. Results The proposed method with MPEG-7 audio features and SVM classification was evaluated on voice pathology detection, as well as binary pathology classification. The proposed method achieves an accuracy of 99.994% with a standard deviation of 0.0105% for detecting pathological voices, and an accuracy of up to 100% for binary pathology classification. Conclusion MPEG-7 descriptors can reliably be used for automatic voice pathology detection and classification.
Pathological voice detection and binary classification using MPEG-7 audio features
S1746809414000147
Oscillometric blood pressure (BP) monitors are omnipresent and used on a daily basis for personalized healthcare. Nevertheless, physicians generally approach these devices cautiously, since the mercury Korotkoff sphygmomanometer remains the gold standard. Various reasons explain the hesitant attitude of the medical world towards automated BP monitors: (i) their principle is based on the pressure pulsations arriving at the cuff with each cardiac cycle, rather than on the audio wave, triggered by turbulence in the artery, that physicians listen for; (ii) the actual computation of the systolic and diastolic BP from the measured oscillometry is manufacturer-dependent and not based on general scientific principles; (iii) the quality of oscillometric monitors is certified by a trial such that the devices correspond well to the Korotkoff method for the average healthy patient but deviate for patients suffering from hypo- or hypertension. In this paper, we develop a statistical learning technique to calibrate and correct an oscillometric monitor so that the device corresponds better to the Korotkoff method regardless of the health status of the patient. The technique is based on logistic regression, which allows correcting and eliminating systematic errors caused by hyper- or hypotension. No user interaction is required since the technique is able to train and validate the calibration procedure in an unsupervised way. In our case study, the systematic error is reduced by nearly 50%, corresponding to the performance specifications of the device.
Logistic ordinal regression for the calibration of oscillometric blood pressure monitors
S1746809414000159
Electroencephalogram (EEG) signals are nonlinear time series, which are generally very noisy, nonstationary, and contaminated with artifacts that can degrade classification methods. This contribution presents an efficient scheme to extract features in phase space, exploiting theoretical results derived in nonlinear dynamics, for motor imagery task recognition. Phase space reconstruction (PSR) remodels a single nonlinear sequence to reveal the variety of information hidden in the original series while maintaining its continuity. The phase space features (PSF) were extracted by the amplitude–frequency analysis (AFA) method in the state space of EEG signals (AFAPS). Linear discriminant analysis (LDA) classifiers based on the AFAPS features are used to classify the two Graz datasets used in the BCI Competitions of 2003 and 2005. The results show that the LDA classifiers based on PSF outperformed most other similar studies on the same Graz datasets in terms of the competition criteria. The features extracted by the proposed scheme contain nonlinear information which helps to improve classification results in BCI.
Phase space reconstruction for improving the classification of single trial EEG
S1746809414000172
When a nonlinear biological system is under analysis, one may employ linear and nonlinear tools. Linear tools such as fractional-order lumped impedance models have not previously been employed to characterize differences between healthy volunteers and patients diagnosed with kyphoscoliosis (KS). Nonlinear tools such as detection lines from nonlinear contributions in the frequency domain have also not been employed previously on KS patient data. KS is an irreversible restrictive disease of genetic origin, which manifests as deformation of the spine and thorax. The forced oscillation technique (FOT) is a non-invasive, simple lung function test suitable for this class of patients with breathing difficulties, since it does not require any special manoeuvre. In this work we show that the FOT method combined with both linear and nonlinear tools reveals important information which may be used as complementary to the standardized lung function tests (i.e. spirometry).
Modelling respiratory impedance in patients with kyphoscoliosis
S1746809414000184
Multichannel electromyography (EMG) signals are one of the common inputs used in human motion pattern recognition. In exoskeleton robot control, EMG signals are measured during dynamic or isometric muscle contractions. The type of contraction can cause EMG signals to vary, affecting recognition performance. A motion pattern recognition model using EMG signals from either dynamic or isometric muscle contractions has not yet been fully investigated. In this study, a novel feature extraction method, the short-time Fourier transform ranking (STFT-ranking) feature, was employed to characterize multichannel EMG signals. The performance of the novel feature and of conventional features for motion pattern recognition using EMG signals, including time-domain and frequency-domain features, was compared during dynamic and isometric muscle contractions. Experiments were conducted using an exoskeleton robotic arm to aid users in generating EMG signals for designated motion patterns. Among the features tested, the STFT-ranking feature yielded an accuracy rate exceeding 90% when the EMG signals used in the training and validation feature data sets were of the same type of muscle contraction. After examining the STFT-ranking feature projected onto the PCA space, the STFT-ranking feature was determined to offer more satisfactory performance than the other features tested for motion pattern recognition, because the feature data it collected from various motion patterns were more separable. The experimental results also revealed that it is preferable for EMG signals from the same type of muscle contraction, whether dynamic or isometric, to be used consistently in both the training and validation (control) phases. Inconsistent EMG signals between the training and validation phases had a negative effect on motion pattern recognition performance. The methodology developed in this study has potential applications in exoskeleton robot control and rehabilitation.
A comparison of upper-limb motion pattern recognition using EMG signals during dynamic and isometric muscle contractions
S1746809414000196
Detection of early-stage liver disease is a challenge in the medical field. Automated diagnostics based on machine learning could therefore be very important for liver tests of patients. This paper investigates 225 liver function test records (each record includes 14 features), a subset of 1000 patients’ liver function test records from a community hospital that includes the records of 25 patients with liver disease. We combine support vector data description (SVDD) with data visualisation techniques and the glowworm swarm optimisation (GSO) algorithm to improve diagnostic accuracy. The results show that the proposed method can achieve 96% sensitivity, 86.28% specificity, and 84.28% accuracy. The new method is thus well-suited for diagnosing early liver disease.
Early detection of liver disease using data visualisation and classification method
S1746809414000202
Eye state analysis in real time is a main input source for fatigue detection systems and human–computer interaction applications. This paper presents a novel eye state analysis design aimed at human fatigue evaluation systems. The design is based on an interdependence and adaptive scale mean shift (IASMS) algorithm. IASMS uses moment features to track and estimate the iris area in order to quantify the state of the eye. The proposed system is shown to substantially improve non-rigid eye tracking performance, robustness and reliability. For evaluating the design performance, an established eye blink database for blink frequency analysis was used. The design performance was further assessed using the newly formed Strathclyde Facial Fatigue (SFF) video footage database of controlled sleep-deprived volunteers. (The SFF database was developed in collaboration with the Psychology Department, University of Strathclyde and the Sleep Centre, University of Glasgow, and was approved by the Ethics Committee of the University of Strathclyde.)
Eye-state analysis using an interdependence and adaptive scale mean shift (IASMS) algorithm
S1746809414000342
When a nonlinear biological system is under analysis, one may employ linear and nonlinear tools. Linear tools such as fractional-order impedance models have not previously been employed to characterize differences between healthy volunteers and patients diagnosed with cystic fibrosis (CF). Nonlinear tools such as detection lines from nonlinear contributions in the frequency domain have also not been employed previously on CF patient data. CF is an irreversible inflammatory disease of genetic origin. The forced oscillation technique (FOT) is a non-invasive, simple lung function test suitable for this class of patients with breathing difficulties, since it does not require any special manoeuvre. In this work we bring additional evidence that the FOT method combined with both linear and nonlinear tools reveals important information which may be used as complementary to the standardized lung function tests.
Respiratory mechanics in children with cystic fibrosis
S1746809414000354
Gait analysis is an important aspect of biomedical engineering. In the recent past, researchers have applied several signal processing methods to the analysis of gait activities. Sensors such as accelerometers, gyroscopes and pressure sensors are commonly used to identify gait activities remotely. Most applications place multiple sensors on a single board used for gait assessment. However, the problem with multiple sensors is the crosstalk introduced in one sensor by another. Some applications use a single sensor, such as a dual-axis accelerometer measuring gait activity in the horizontal and vertical planes. Depending on the orientation of the accelerometer, the two axial outputs can have overlapping spectra which are very difficult to separate. Spectral and temporal filtering is not suitable here because of the overlapping spectra caused by simultaneous movements of the foot in the horizontal and vertical planes. To reliably identify gait activities, there is a need to decompose and separate the vertical and horizontal acceleration signals. Earlier research described a novel method which can be used remotely to identify gait in ITW children. This paper discusses a lab-based automated classification method using a Blind Source Separation (BSS) technique to distinguish toe-walking gait from normal gait in children who are Idiopathic Toe Walkers (ITW). The outcome of the study reveals that BSS techniques in association with a K-means classifier can distinguish toe-walking gait from normal gait in ITW children with 97.9±0.2% accuracy.
Using Blind Source Separation on accelerometry data to analyze and distinguish the toe walking gait from normal gait in ITW children
S1746809414000366
Wireless telemonitoring of physiological signals is an important topic in eHealth. In order to reduce on-chip energy consumption and extend sensor life, recorded signals are usually compressed before transmission. In this paper, we adopt compressed sensing (CS) as a low-power compression framework, and propose a fast block sparse Bayesian learning (BSBL) algorithm to reconstruct original signals. Experiments on real-world fetal ECG signals and epilepsy EEG signals showed that the proposed algorithm has good balance between speed and data reconstruction fidelity when compared to state-of-the-art CS algorithms. Further, we implemented the CS-based compression procedure and a low-power compression procedure based on a wavelet transform in field programmable gate array (FPGA), showing that the CS-based compression can largely save energy and other on-chip computing resources.
Energy efficient telemonitoring of physiological signals via compressed sensing: A fast algorithm and power consumption evaluation
S1746809414000378
This paper presents a computationally efficient method for detection of optic nerve head in both color and fluorescein angiography retinal fundus images. It involves Radon transformation of multi-overlapping windows within an optimization framework in order to achieve computational efficiency as well as high detection rates in the presence of various structural, color, and intensity variations in such images. Three databases of STARE, DRIVE, and a local database have been examined. It is shown that this method provides high detection rates while achieving faster processing speeds than the existing methods that have reported comparable detection rates. For example, the detection rate for the STARE database which is the most widely used database is found to be 96.3% with a processing time of about 3s per image.
Computationally efficient optic nerve head detection in retinal fundus images
S174680941400038X
Complex biological systems such as the human brain can be expected to be inherently nonlinear and hence difficult to model. Most of the previous studies on investigations of brain function have either used linear models or parametric nonlinear models. In this paper, we propose a novel application of a nonlinear measure of phase synchronization based on recurrences, correlation between probabilities of recurrence (CPR), to study seizures in the brain. The advantage of this nonparametric method is that it makes very few assumptions thus making it possible to investigate brain functioning in a data-driven way. We have demonstrated the utility of CPR measure for the study of phase synchronization in multichannel seizure EEG recorded from patients with global as well as focal epilepsy. For the case of global epilepsy, brain synchronization using thresholded CPR matrix of multichannel EEG signals showed clear differences in results obtained for epileptic seizure and pre-seizure. Brain headmaps obtained for seizure and pre-seizure cases provide meaningful insights about synchronization in the brain in those states. The headmap in the case of focal epilepsy clearly enables us to identify the focus of the epilepsy which provides certain diagnostic value. Comparative studies with linear correlation have shown that the nonlinear measure CPR outperforms the linear correlation measure.
Study of phase synchronization in multichannel seizure EEG using nonlinear recurrence measure
S1746809414000391
Current research on neuro-prosthetics is aimed at designing computational models and techniques to trigger neuro-motor rehabilitative aids. Researchers are taking a keen interest in accurately classifying stimulated electroencephalography (EEG) signals to interpret motor imagery tasks. In this paper we aim to classify finger, elbow and shoulder movements, along with left- and right-hand movements, in order to move a simulated robot arm in 3D space towards a target at a known location. The contribution of the paper lies in the design of an energy-optimal trajectory planner, based on differential evolution, which decides the optimal path for the robot arm to move towards the target based on the classifier output. Each set of movements has a trajectory planner that is activated by the classifier output. The energy distribution of the wavelet coefficients of the incoming EEG signals is used as the feature input to a naïve Bayesian classifier to discriminate among the different mental tasks. The average training classification accuracy obtained is 76.88%, and the success rate of the simulated robot arm reaching the target is 85%.
A differential evolution based energy trajectory planner for artificial limb control using motor imagery EEG signal
S1746809414000408
Purpose Spectral analysis of heart rate variability (HRV) constitutes a useful tool for the evaluation of autonomic function. However, it is difficult to compare published data because different mathematical approaches are applied to the calculation of the frequency bands. Our aim was to compare the HRV frequency domain parameters obtained by applying two parametric and two non-parametric spectral methods in a group of patients with chronic epilepsy. Methods Sixty-eight patients and 69 healthy controls underwent a 5-min recording of the RR signal, which was analyzed off-line in the time and frequency domains. Results The time domain parameters – the variation RR ratio, the standard deviation of normal-to-normal RR intervals and the coefficient of variation – were significantly lower in patients than in controls. In the spectral analysis of the patient group, the Low Frequency band (p =0.034) and Total Power (p =0.013) measures deviated in opposite directions depending on the method used. The results of Burg's and Yule-Walker's parametric methods fitted best to those of the time domain estimates for both the control and patient groups. Conclusions Epilepsy-related abnormalities of HRV were disclosed by time as well as by frequency domain analysis. In the present setting, the parametric methods proved superior to the non-parametric ones in matching the time domain parameters of patients and healthy subjects, and at the same time in detecting abnormalities in the frequency domain measures of patients with epilepsy.
Methodological issues in the spectral analysis of the heart rate variability: Application in patients with epilepsy
S174680941400041X
A novel quadrature clutter rejection approach based on multivariate empirical mode decomposition (MEMD), an extension of empirical mode decomposition (EMD) to the multivariate case for processing multichannel signals, is proposed in this paper to suppress the quadrature clutter signals induced by the vascular wall and the surrounding stationary or slowly moving tissues in composite Doppler ultrasound signals, and to extract more blood flow components with low velocities. In this approach, the MEMD algorithms, which include bivariate empirical mode decomposition with a nonuniform sampling scheme for adaptive selection of projection directions (NS_BEMD) and trivariate empirical mode decomposition with noise assistance (NA_TEMD), are directly employed to adaptively decompose the complex-valued quadrature composite signals, echoed from both bidirectional blood flow and moving wall, into a small number of zero-mean rotation components, defined as complex intrinsic mode functions (CIMFs). The CIMFs contributing to blood flow components are then automatically distinguished in terms of the break in the CIMFs’ power, and directly summed to give the quadrature blood flow signal. Simulation and human subject experiments are carried out to demonstrate the advantages and limitations of this novel method for quadrature clutter rejection in bidirectional Doppler ultrasound signals. By eliminating the extra errors induced by the Hilbert transform or complex FIR filter algorithms used in traditional clutter rejection approaches based on directional separation, the proposed method provides improved accuracy for clutter rejection and preserves more slow blood flow components, which could be helpful for the early diagnosis of arterial diseases.
A novel quadrature clutter rejection approach based on the multivariate empirical mode decomposition for bidirectional Doppler ultrasound signals
S1746809414000433
Recent studies have revealed that contrast-enhanced ultrasound (CEUS) correlates with the presence and degree of intraplaque neovascularization as determined histologically. However, most studies used a qualitative system to visually grade CEUS images. A computer-aided method is proposed for objective and convenient quantification of contrast agent spatial distribution within plaques in CEUS image sequences. It consists of three algorithms: cardiac cycle retrieval and sub-sequence selection, temporal mean image segmentation, and texture feature extraction. The first algorithm automatically retrieves and selects cardiac cycles from CEUS frames without electrocardiogram gating. The second is composed of three steps, i.e., temporal averaging, interactive plaque delineation, and automatic neovascularization segmentation. The third extracts eight texture features from the grayscale temporal mean images and the binary segmented images. The capability of the quantitative features to discriminate between qualitative grades is examined via t-tests, analysis of variance (ANOVA), the Fisher criterion of inter-intra class variance ratio, and logistic regression classification with leave-one-out cross-validation. Experimental results on 33 carotid plaques demonstrated that the optimal feature, namely the combined area ratio, exhibited significant differences among the three qualitative grades (P <0.001, ANOVA). When distinguishing between low-grade and high-grade plaques, the features improved the area under the receiver operating characteristic curve, sensitivity and specificity by 36.4%, 16.7%, and 11.1%, respectively, compared with a classic feature, the traditional area ratio. These results demonstrate the usefulness and advantage of the proposed method in quantifying the spatial distribution of contrast agents in CEUS.
Computer-aided quantification of contrast agent spatial distribution within atherosclerotic plaque in contrast-enhanced ultrasound image sequences
S1746809414000445
A training strategy for simultaneous and proportional myoelectric control of multiple degrees of freedom (DOFs) is proposed. Ten subjects participated in this work in which wrist flexion–extension, abduction–adduction, and pronation–supination were investigated. Subjects were prompted to elicit contractions corresponding and proportional to the excursion of a moving cursor on a computer screen. Artificial neural networks (ANNs) were used to map the electromyogram (EMG) signals obtained from forearm muscles, to the target cursor displacement. Subsequently, a real-time target acquisition test was conducted during which the users controlled a cursor using muscular contractions to reach targets. The results show that the proposed method provided controllability comparable (p >0.1) with the previously reported mirrored bilateral training approach, as measured by completion rate, completion time, target overshoot and path efficiency. Unlike the previous approach, however, the proposed strategy requires no force or position sensing equipment and is readily applicable to both unilateral and bilateral amputees.
Real-time, simultaneous myoelectric control using visual target-based training paradigm
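As an illustration of the mapping stage described in the abstract above, the sketch below trains a small ANN to regress EMG features onto a 2-DOF cursor displacement. All data, the network size and the linear ground-truth mapping are invented for demonstration; the study's actual features, channel count and training details may differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
# Hypothetical time-domain features from 8 forearm EMG channels (300 windows);
# the target 2-DOF cursor displacement is a noisy linear mix, purely illustrative.
X = rng.random((300, 8))
W = rng.normal(size=(8, 2))
Y = X @ W + 0.05 * rng.normal(size=(300, 2))

# ANN mapping EMG features -> simultaneous, proportional displacement for two DOFs.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                   random_state=0).fit(X, Y)
r2 = net.score(X, Y)  # training R^2 of the learned mapping
```

Once trained, `net.predict` would be called on each incoming feature window to drive the cursor proportionally, without any force or position sensors.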
S1746809414000457
The detection of seizure activity in electroencephalogram (EEG) signals is crucial for the classification of epileptic seizures. Because epileptic seizures occur irregularly and unpredictably, automatic seizure detection in EEG recordings is highly desirable. In this work, we present a new technique for seizure classification of EEG signals using the Hilbert–Huang transform (HHT) and a support vector machine (SVM). In our method, the HHT-based time-frequency representation (TFR) is treated as a time-frequency image (TFI); the TFI is segmented according to the frequency bands of the EEG rhythms, and the histogram of each grayscale sub-image is computed. Statistical features such as the mean, variance, skewness and kurtosis of the pixel intensities in the histogram are extracted. An SVM with a radial basis function (RBF) kernel is employed for classification of seizure and nonseizure EEG signals. Classification accuracy and the receiver operating characteristic (ROC) curve are used to evaluate the performance of the classifier. Experimental results show that the best average classification accuracy of this algorithm reaches 99.125% with the theta rhythm of EEG signals.
Classification of seizure based on the time-frequency image of EEG signals using HHT and SVM
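A hedged sketch of the feature-and-classifier stage of the abstract above: statistical moments of pixel intensities from time-frequency sub-images feed an RBF-kernel SVM. The HHT/TFI computation itself is omitted, and the random "sub-images" below merely stand in for rhythm-band segments of real TFIs.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC

def histogram_features(sub_image):
    """Mean, variance, skewness and kurtosis of pixel intensities in a
    grayscale time-frequency sub-image (one EEG rhythm band)."""
    p = sub_image.ravel().astype(float)
    return np.array([p.mean(), p.var(), skew(p), kurtosis(p)])

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=60)  # 0 = nonseizure, 1 = seizure (synthetic labels)
tfis = [rng.normal(loc=2.0 * c, scale=1.0, size=(16, 16)) for c in y]
X = np.vstack([histogram_features(t) for t in tfis])

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)  # RBF-kernel SVM classifier
acc = clf.score(X, y)
```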
S1746809414000470
Background Arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C) is characterized by a delay in depolarization of the right ventricle, detected as prolonged terminal activation duration (TAD) in V1–V3. However, manual ECG measurements have shown moderate-to-low intra- and inter-reader agreement. The goal of this study was to assess the reproducibility of automated ECG measurements in the right precordial leads. Methods Pairs of ECGs recorded on the same day from Johns Hopkins ARVD/C Registry participants [n =247, mean age 35.2±15.6 years, 58% men, 92% whites, 11 (4.5%) with definite ARVD/C] were retrospectively analyzed. QRS duration, intrinsicoid deflection, TAD, and T-wave amplitude in the right precordial leads, as well as QRS duration averaged across all leads, QRS axis, T axis, QTc interval, and heart rate, were measured automatically using the 12SL™ algorithm (GE Healthcare, Wauwatosa, WI, USA). Intrinsicoid deflection was measured as the time from QRS complex onset to the alignment point of the QRS complex. TAD was calculated as the difference between QRS duration and intrinsicoid deflection in V1–V3. Reproducibility was quantified by Bland–Altman analysis (bias with 95% limits of agreement), Lin's concordance coefficient, and the Bradley–Blackwood procedure. Results Bland–Altman analysis revealed satisfactory reproducibility of the tested parameters. V1 QRS duration bias was −0.10ms [95% limits of agreement −12.77 to 12.56ms], V2 QRS duration bias −0.09ms [−11.13 to 10.96ms]; V1 TAD bias 0.14ms [−13.23 to 13.51ms], V2 TAD bias 0.008ms [−12.42 to 12.44ms]. Conclusion Comprehensive statistical evaluation of the reproducibility of automated ECG measurements is important for appropriate interpretation of the ECG. Automated ECG measurements are reproducible to within 25%.
Statistical evaluation of reproducibility of automated ECG measurements: An example from arrhythmogenic right ventricular dysplasia/cardiomyopathy clinic
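The Bland–Altman quantities reported in the abstract above (bias with 95% limits of agreement) reduce to a few lines; the paired "measurements" below are simulated, not registry data.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Simulated same-day repeated QRS-duration measurements in ms (not registry data).
rng = np.random.default_rng(1)
first = rng.normal(100.0, 10.0, size=247)
second = first + rng.normal(0.0, 6.0, size=247)
bias, loa_low, loa_high = bland_altman(first, second)
```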
S1746809414000482
Segmentation of the lung is often performed as an important preprocessing step for quantitative analysis of chest computed tomography (CT) imaging. However, the presence of juxtapleural nodules and pulmonary vessels, image noise or artifacts, and individual anatomical variety make lung segmentation very complex. To address these problems, a fast and fully automatic scheme based on iterative weighted averaging and adaptive curvature threshold is proposed in this study to facilitate accurate lung segmentation for inclusion of juxtapleural nodules and pulmonary vessels and ensure the smoothness of the lung boundary. Our segmentation scheme consists of four main stages: image preprocessing, thorax extraction, lung identification and lung contour correction. The aim of the preprocessing stage is to encourage intra-region smoothing and preserve the inter-region edges of CT images. In the thorax extraction stage, the artifacts external to the patient's body are discarded. Then, a fuzzy c-means clustering method is used in the thorax region and all lung parenchyma is identified according to fuzzy membership value and connectivity. Finally, the segmented lung contour is smoothed and corrected with iterative weighted averaging and adaptive curvature threshold on each axial slice, respectively. Our method was validated on 20 datasets of chest CT scans containing 65 juxtapleural nodules. Experimental results show that our method can re-include all juxtapleural nodules and achieve an average volume overlap ratio of 95.81±0.89% and an average mean absolute border distance of 0.63±0.09 mm compared with the manually segmented results. The average processing time for segmenting one slice was 2.56s, which is over 20 times faster than manual segmentation.
Automated lung segmentation and smoothing techniques for inclusion of juxtapleural nodules and pulmonary vessels on chest CT images
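A minimal 1-D fuzzy c-means sketch of the clustering step used for lung identification in the abstract above; real CT segmentation operates on full image volumes with connectivity analysis, and the intensity values here are only illustrative HU-like numbers.

```python
import numpy as np

def fuzzy_c_means_1d(x, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on 1-D intensities: a sketch of the clustering
    step that separates lung parenchyma from the rest of the thorax."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, c))
    u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)        # membership-weighted centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Illustrative HU-like intensities: air-filled lung vs. soft tissue.
x = np.concatenate([np.full(100, -900.0), np.full(100, 40.0)])
centers, u = fuzzy_c_means_1d(x)
```

Voxels would then be assigned to the cluster with highest membership, after which connectivity filtering keeps only the lung regions.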
S1746809414000512
In this paper, a fuzzy rule based filter is proposed for speckle reduction in ultrasound (US) images. Considering a relevant US noise model, this filter uses local gradients of the image and fuzzy inference to categorize image regions according to their noise and structural characteristics. Then, in the restoration step, each pixel is restored using fuzzy similarity criteria to weight its similar neighborhood pixels. Quantitative results on synthetic data show the performance of the proposed method compared to state-of-the-art methods. Results on real clinical images demonstrate that the proposed method is able to accurately preserve edges and structural details. As an application, this filter is used as a preprocessing step for a well-established US segmentation method known as Disk Expansion (DE). The results show improved true diagnosis of lesions in both simulated and clinical US images.
An ultrasound image enhancement method using local gradient based fuzzy similarity
S1746809414000585
Electrocardiogram (ECG) is a P-QRS-T wave, representing the depolarization and repolarization mechanism of the heart. Among the different cardiac abnormalities, atrial fibrillation (AF) and atrial flutter (AFL) are frequently encountered medical emergencies with life threatening complications. The clinical features of the ECG, such as the amplitudes and intervals of the different peaks, depict the functioning of the heart. The changes in these morphological features during various pathological conditions help the physician to diagnose the abnormality. These changes, however, are very subtle and difficult to correlate with the abnormalities, and demand a lot of clinical acumen. Hence a computer aided diagnosis (CAD) tool can help physicians significantly. In this paper, a general methodology is presented for automatic detection of the normal, AF and AFL beats of ECG. Four different methods are investigated for feature extraction: (1) the principal components (PCs) of discrete wavelet transform (DWT) coefficients, (2) the independent components (ICs) of DWT coefficients, (3) the PCs of discrete cosine transform (DCT) coefficients, and (4) the ICs of DCT coefficients. Three different classification techniques are explored: (1) K-nearest neighbor (KNN), (2) decision tree (DT), and (3) artificial neural network (ANN). The methodology is tested using data from the MIT BIH arrhythmia and atrial fibrillation databases. DCT coupled with ICA and KNN yielded the highest average sensitivity of 99.61%, average specificity of 100%, and classification accuracy of 99.45% using ten-fold cross-validation. Thus, the proposed automated diagnosis system is reliable enough to be used by clinicians. The method can be extended to detection of other abnormalities of the heart and to other physiological signals.
Computer aided diagnosis of atrial arrhythmia using dimensionality reduction methods on transform domain representation
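One of the four pipelines investigated above (transform-domain coefficients, dimensionality reduction, KNN) can be sketched as below. Synthetic two-class "beats" replace the MIT-BIH data, and PCA stands in for the reported PCA/ICA options.

```python
import numpy as np
from scipy.fft import dct
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 180)
beats, labels = [], []
for k in range(120):                        # synthetic two-class "beats"
    c = k % 2
    beats.append(np.sin(2 * np.pi * (3 + 2 * c) * t)
                 + 0.3 * rng.normal(size=t.size))
    labels.append(c)

X = dct(np.vstack(beats), norm="ortho")     # transform-domain representation
Xp = PCA(n_components=10).fit_transform(X)  # dimensionality reduction
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3),
                         Xp, np.array(labels), cv=10)
```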
S1746809414000597
Sparsity regularized iterative reconstruction is an important and promising method for ECG-gated tomographic reconstruction of coronary artery during intervention treatment of cardiovascular diseases. As the reconstruction suffers from the problems of background overlay and data truncation, the background of angiogram should be well suppressed to obtain high reconstruction quality. Considering the deficiency of the commonly applied background suppression methods, this work proposes a strategy of alternate reconstruction and segmentation. During reconstruction, while the image intensity is iteratively updated, a contour is also evolved to segment the reconstructed vascular tree based on level set segmentation method. When the structure of the vascular tree is completely detected, the segmented vascular tree is re-projected to generate projection mask which is used to further reduce the projection background. Several experiments were performed to quantitatively evaluate the proposed method and the method is also compared with a state-of-the-art method. Experimental results show that the proposed strategy could effectively improve the reconstruction quality.
Improved C-arm cardiac cone beam CT based on alternate reconstruction and segmentation
S1746809414000603
This paper introduces an improved physiological animal model with diabetic Göttingen minipigs, which focuses on the application of human therapy devices and metabolic system analysis. Based on measurement data-sets collected by metabolic test procedures, a new mathematical minipig model is developed. The model consists of 16 differential equations and describes the diabetic porcine glucose metabolism according to the human model published by Sorensen. In the future, the mathematical model will be used as a basis for controller design and the physiological model for experimental control performance evaluation.
Analysis and modelling of glucose metabolism in diabetic Göttingen minipigs
S1746809414000615
This work addresses the problem of reconstructing EEG signals from lower dimensional projections. Unlike previous studies, we propose to reconstruct the EEG signal using an analysis prior formulation. Moreover, we exploit the inter-channel correlation during reconstruction, which leads to a row-sparse analysis prior multiple measurement vector (MMV) recovery problem. To improve the reconstruction, we formulate the recovery as a non-convex optimization problem. Such a non-convex row-sparse MMV recovery problem had not been encountered before; this work derives an efficient algorithm to solve it. The proposed reconstruction technique is compared with state-of-the-art methods, and we find that our technique yields significant improvement over them.
Non-convex row-sparse multiple measurement vector analysis prior formulation for EEG signal reconstruction
S1746809414000743
Rationale and objectives Diffusion weighted imaging (DWI) is always influenced by both thermal noise and spatially/temporally varying artifacts such as subject motion and cardiac pulsation. Motion artifacts are particularly prevalent, especially when scanning an uncooperative population with several disorders. Some motion between acquisitions can be corrected by co-registration approaches. However, automated and accurate motion outlier detection of brain DWIs is an integral component of the analysis and interpretation of tensor estimation. Many different and innovative methods have been proposed to improve upon this technology. In this study, we proposed a classifier framework which can classify DWIs as normal images or motion artifacts. Materials and methods The procedure contains the following stages: first, the wavelet transform was used to extract features from the original DWIs; second, principal component analysis was used to reduce the features; third, a forward neural network (FNN) was employed to construct the classifier; fourth, a Rossler-based chaotic particle swarm optimization method was proposed to train the FNN; fifth, the cost matrix was set so that the cost of a false negative (FN) was 10 times that of a false positive (FP); and finally, K-fold cross validation was used to avoid overfitting. We applied this method to 60 DWI datasets, including 50 training datasets and 10 test datasets. Results The experimental results based on our DWI database showed that the proposed method can effectively extract the global features from images and achieve better performance in tensor estimation by automatic unvoxelwise outlier rejection compared with manual and visual inspection, and previous voxelwise outlier rejection methods. We found that the motion artifact detection accuracy on both the training and test datasets was over 95.8%, while the computation time per DWI slice was only 0.0149s.
Conclusion The proposed method could potentially remove the influence of unexpected motion artifacts in DWI acquisitions and should be applicable to other magnetic resonance imaging modalities.
Outlier detection based on the neural network for tensor estimation
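A sketch of the artifact-classification pipeline described above: a one-level 2-D Haar transform (a stand-in for the unspecified wavelet), PCA reduction, and a small neural network trained with ordinary backpropagation rather than the paper's chaotic PSO. The slices and the "artifact" pattern are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def haar_level1(img):
    """One level of a 2-D Haar transform (stand-in for the unspecified
    wavelet): returns LL, LH, HL, HH coefficients as one feature vector."""
    a = (img[::2, :] + img[1::2, :]) / 2
    d = (img[::2, :] - img[1::2, :]) / 2
    split = lambda m: ((m[:, ::2] + m[:, 1::2]) / 2,
                       (m[:, ::2] - m[:, 1::2]) / 2)
    ll, lh = split(a)
    hl, hh = split(d)
    return np.concatenate([b.ravel() for b in (ll, lh, hl, hh)])

rng = np.random.default_rng(3)
pattern = np.zeros((16, 16)); pattern[4:8, :] = 1.0  # fake signal-dropout band
y = rng.integers(0, 2, size=60)                      # 1 = motion artifact
slices = [rng.normal(size=(16, 16)) + 2.0 * c * pattern for c in y]

X = np.vstack([haar_level1(s) for s in slices])      # wavelet features
Xp = PCA(n_components=5).fit_transform(X)            # feature reduction
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(Xp, y)       # plain backprop, not PSO
acc = clf.score(Xp, y)
```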
S1746809414000755
Magnetic Resonance Elastography (MRE) is able to identify mechanical properties of biological tissues in vivo based on underlying assumptions of the model used for inversion. Models such as the linearly elastic or viscoelastic (VE) model can be used with single-input-frequency data and can produce reasonable estimates of the identified parameters associated with mechanical properties. However, more complex models, such as the Rayleigh damping (RD) model, are not identifiable given single-frequency data without significant a priori information under certain conditions, thus limiting diagnostic potential. To overcome this limitation, two approaches have been postulated: simultaneous inversion across multiple input frequencies, and a parametric approach when only single-frequency data are available. This research compares simultaneous multi-frequency (MF) RD reconstructions using both zero-order and power-law (PL) models with parametric reconstructions for a series of tissue-simulating phantoms, made of tofu and gelatine materials, tested at 4 frequencies (50Hz, 75Hz, 100Hz and 125Hz) that are commonly applied in clinical MRE examinations. Results indicate that accurate delineation of RD-based properties and the concomitant damping ratio (ξd) using MF inversion is still a challenging task. Specific results showed that the real shear modulus (μR) can be reconstructed well, while the imaginary components representing attenuation (μI and ρI) had much lower quality. However, overall trends correlate well with the expected higher damping levels within the saturated tofu material compared to stiff gelatine in both phantoms. Depending on the phantom configuration, measured μR values within the tofu and gelatine materials ranged from 4.77 to 7kPa and 15.5 to 16.3kPa, respectively, while damping levels were 11–19% and 3.1–4.3%, as expected.
Correlation of the μR and ξd values with previously reported results measured by independent mechanical testing and VE-based MRE is acceptable, ranging from 48 to 60%. Both the PL and zero-order models produced similar qualitative and quantitative results; thus, no significant advantage of the PL model was noted in accounting for the dispersion characteristics of these types of materials. The relatively narrow range of frequencies used in this study limited practical identifiability and can thus produce a potentially false assurance of identifiability of the model parameters. We conclude that application of multiple input frequencies over a wide range, as well as selection of an appropriate model that can accurately account for the dispersion characteristics of the given materials, is required for achieving robust practical identifiability of the RD model in time-harmonic MRE.
Multi-frequency inversion in Rayleigh damped Magnetic Resonance Elastography
S1746809414000779
Knee osteoarthritis (OA) is one of the most common diseases among elderly people. Typically, medical attention is not sought until the disease has progressed to a point at which it is not possible to treat effectively, often due to concerns over the cost of detection at an earlier stage. Ultrasound (US) imaging has a number of advantages as an imaging technique; apart from being a low-cost diagnostic method, it is also non-invasive, uses non-ionizing radiation and is portable. As knee OA progresses, the cartilage undergoes a significant change in shape and becomes degenerated. After image processing of US medical images, it is possible to detect this change in the cartilage shape of the knee joint. Low contrast ratio and speckle noise are the two main disadvantages of US imaging. The aim of this paper is to present a method for enhancing the contrast of US images of knee joint cartilage for detecting early stages of knee OA. Conventional contrast-enhancement methods are known to have limitations, most of them emphasizing only one characteristic; the proposed method is designed to overcome these limitations by jointly considering the optimum values of contrast, brightness and detail preservation. The proposed method finds the optimum separating point for segmenting the histogram of the US image such that optimum contrast, brightness and detail preservation are achieved. Three metrics are also defined: the Preservation of Brightness Score (PBS), the Optimum Contrast Score (OCS) and the Preservation of Detail Score (PDS).
Contrast enhancement of ultrasound imaging of the knee joint cartilage for early detection of knee osteoarthritis
S1746809414000780
Patients with Parkinson's disease (PD) show characteristic abnormalities in the performance of simple repetitive movements, which can also be observed in speech rate and rhythm. The aim of the current study was to assess whether patients with early PD already show impairments of steady vocal pace performance, based upon a simple syllable repetition paradigm. N =50 patients with PD with mild to moderate motor impairment and n =32 age-matched healthy controls were tested. Participants had to repeat a single syllable or a pair of alternating syllables at a self-chosen steady pace or at a given pace of 80/min. The coefficient of variation was taken as the measure of stability of repetition. As the main and novel result, vocal pace performance was observed to be irregular in all patients, even in the subgroup of PD patients with only very mild motor impairment (Hoehn & Yahr stage 1), although the capacity for rapid syllable repetition was preserved. Weak correlations were found between the maximum repetition rate (but not the steadiness of repetition) and some distinctive Parkinsonian motor features such as speech impairment and gait. Provided that subsequent studies confirm these preliminary results, analysis of the steadiness of syllable repetition might be a promising non-invasive tool for detecting subtle abnormalities of motor speech performance even in the early motor stages of PD.
Steadiness of syllable repetition in early motor stages of Parkinson's disease
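The steadiness measure used above, a coefficient of variation of the inter-syllable intervals, reduces to a few lines; the onset times below are fabricated at the paced rate of 80 syllables/min (0.75 s apart).

```python
import numpy as np

def pace_variability(onsets_s):
    """Coefficient of variation (%) of inter-syllable intervals,
    the steadiness measure for the repetition task."""
    iti = np.diff(np.asarray(onsets_s, float))
    return 100.0 * iti.std(ddof=1) / iti.mean()

# Fabricated syllable-onset times: perfectly steady vs. jittered pacing.
steady = np.arange(0.0, 15.0, 0.75)
rng = np.random.default_rng(4)
jittered = steady + rng.normal(0.0, 0.08, size=steady.size)

cv_steady = pace_variability(steady)      # ~0% for a metronomic speaker
cv_jittered = pace_variability(jittered)  # elevated for irregular pacing
```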
S1746809414000792
Robust pattern recognition is critical for myoelectric prosthesis developed in the laboratory to be used in real life. This study focused on the effect of arm movements on surface electromyography (sEMG) pattern recognition for 7 kinds of hand and wrist motions. An experiment was conducted with four static arm conditions and three dynamic arm conditions. Results showed that the arm movements impacted classification performance when the classifier, linear discriminant analysis (LDA), was trained in one arm condition and tested in another arm condition. Inter-condition classification errors (training data and testing data are from different arm conditions) were greater than intra-condition classification errors (training data and testing data are from the same arm condition; average 20.98% vs. 5.26%). Three metrics – repeatability index (RI), mean semi-principal axis (MSA) and mean centroid bias (MCB) – were used to quantify changes in sEMG pattern characteristics of hand and wrist motions. A multi-condition training scheme was explored to improve the robustness of sEMG pattern recognition for hand and wrist motions by reducing the average classification error from 18.73% (LDA trained in single-condition) to 8.20% (LDA trained in multi-condition). Furthermore, a novel classifier, conditional Gaussian mixture model (CGMM) was proposed under this training scheme and yielded a lower classification error than LDA (average 5.92% vs. 8.20%, p =0.0078).
Quantification and solutions of arm movements effect on sEMG pattern recognition
S1746809414000809
Using a realistic nonlinear mathematical model of melanoma dynamics and the technique of optimal dynamic inversion (exact feedback linearization with static optimization), a multimodal automatic drug dosage strategy is proposed in this paper for complete regression of melanoma in humans. The proposed strategy computes the different drug dosages and gives a nonlinear state feedback solution for driving the number of cancer cells to zero. It is observed, however, that once the tumor has regressed to a certain value, no further external drug dosage is needed, as the immune system and the other therapeutic states are able to regress the tumor at a sufficiently fast (more than exponential) rate. As the model has three different drug inputs, after applying the dynamic inversion philosophy the drug dosages can be selected in an optimized manner without crossing their toxicity limits. The combination of drug dosages is decided by appropriately selecting the control design parameter values based on physical constraints. The process is automated for all possible combinations of the chemotherapy and immunotherapy drug dosages, with preferential emphasis on having the maximum possible variety of drug inputs at any given point in time. A simulation study with a standard patient model shows that the tumor is regressed from 2×10^7 cells to the order of 10^5 cells by the external drug dosages in 36.93 days. After this, no external drug dosages are required, as the immune system and the other therapeutic states regress the tumor at a greater-than-exponential rate; hence, the tumor goes to zero (fewer than 0.01 cells) in 48.77 days and the healthy immune system of the patient is restored. A study with different chemotherapy drug resistance values is also carried out.
Multimodal therapy for complete regression of malignant melanoma using constrained nonlinear optimal dynamic inversion
S1746809414000810
In automated heart sound analysis and diagnosis, a set of clinically valued parameters, including sound intensity, frequency content, timing, duration, shape, systolic and diastolic intervals, the ratio of the first to the second heart sound amplitude (S1/S2), and the ratio of diastolic to systolic duration (D/S), is measured from the PCG signal. The quality of these clinical feature parameters relies heavily on accurate determination of the boundaries of the acoustic events (heart sounds S1, S2, S3, S4 and murmurs) and the systolic/diastolic pause periods in the PCG signal. Therefore, in this paper we propose a new automated robust heart sound activity detection (HSAD) method based on total variation filtering, Shannon entropy envelope computation, instantaneous-phase-based boundary determination, and boundary location adjustment. The proposed HSAD method is validated using different clean and noisy pathological and non-pathological PCG signals. Experiments on a large PCG database show that the HSAD method achieves an average sensitivity (Se) of 99.43% and positive predictivity (+P) of 93.56%. The HSAD method accurately determines the boundaries of the major acoustic events of the PCG signal at a signal-to-noise ratio of 5dB. Unlike other existing methods, the proposed HSAD method does not use any search-back algorithms. The proposed HSAD method is quite straightforward and is thus suitable for real-time wireless cardiac health monitoring and electronic stethoscope devices.
A novel heart sound activity detection framework for automated heart sound analysis
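A sketch of the envelope-computation step from the abstract above, using the classical frame-wise average Shannon energy (the paper's exact entropy definition may differ); the PCG is synthetic, with two Gaussian-windowed bursts standing in for S1 and S2.

```python
import numpy as np

def shannon_energy_envelope(x, frame=50, hop=10):
    """Average Shannon energy -mean(s * log s), s = normalized x^2, per frame.
    A classical PCG envelope that emphasizes S1/S2 over low-level noise."""
    x = np.asarray(x, float)
    x = x / (np.abs(x).max() + 1e-12)
    env = []
    for i in range(0, x.size - frame + 1, hop):
        s = x[i:i + frame] ** 2 + 1e-12
        env.append(-np.mean(s * np.log(s)))
    return np.array(env)

# Synthetic 1-s PCG: low-level noise plus two bursts standing in for S1 and S2.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
pcg = 0.02 * np.random.default_rng(5).normal(size=t.size)
for c in (0.2, 0.6):
    pcg += np.exp(-((t - c) / 0.01) ** 2) * np.sin(2 * np.pi * 60 * t)
env = shannon_energy_envelope(pcg)
```

Thresholding `env` (followed by boundary adjustment) would then yield candidate heart sound activity segments.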
S1746809414000822
In cognitive neuroscience, extracting characteristic textures and features from functional imaging modalities which could be useful in identifying particular cognitive states across different conditions is still an important field of study. This paper explores the potential of two-dimensional ensemble empirical mode decomposition (2DEEMD) to extract such textures, so-called bidimensional intrinsic mode functions (BIMFs), of functional biomedical images, especially functional magnetic resonance images (fMRI) taken while performing a contour integration task. To identify most informative textures, i.e. BIMFs, a support vector machine (SVM) as well as a random forest (RF) classifier is trained for two different stimulus/response conditions. Classification performance is used to estimate the discriminative power of extracted BIMFs. The latter are then analyzed according to their spatial distribution of brain activations related with contour integration. Results distinctly show the participation of frontal brain areas in contour integration. Employing features generated from textures represented by BIMFs exhibit superior classification performance when compared with a canonical general linear model (GLM) analysis employing statistical parametric mapping (SPM).
Bidimensional ensemble empirical mode decomposition of functional biomedical images taken during a contour integration task
S1746809414000846
A new hybrid control method for blood glucose concentration is developed which switches between two operation modes. The effects of meal-induced disturbances on blood glucose concentrations are expected to be more serious than the time-varying behaviour of glucose metabolism during nocturnal phases. Thus, the control method determines insulin impulses (boli) as a manipulated variable during the day and calculates continuous insulin infusion (basal rates) at night. To test the controller-based insulin therapy in vivo, animal trials with diabetic Göttingen minipigs are used as a proxy for human studies. The controller parameters are selected by in silico studies based on mathematical minipig models, and the resulting individualised controllers are experimentally verified. For this, two control performance requirements must be taken into account: blood glucose concentrations below the critical threshold of 50mg/dl have to be avoided, and blood glucose peaks caused by ingestion of minipig diet have to be quickly counteracted. Results from these animal experiments show that the closed-loop system satisfies both control requirements and improves insulin therapy compared with a standard therapeutic protocol.
A switching hybrid control method for automatic blood glucose regulation in diabetic Göttingen minipigs
S1746809414000858
This paper introduces a novel variational method for ultrasound image denoising, aimed at speckle suppression and edge enhancement. The method is designed to combine the favorable denoising properties of framelet regularization with the edge enhancement of the backward diffusion technique. The sparsity and multiresolution properties of the framelet are well suited for speckle noise reduction. The fidelity term of the method is obtained by maximum a posteriori (MAP) estimation. The introduction of backward diffusion and framelet regularization makes the variational energy function difficult to minimize; to simplify the minimization problem, a Split Bregman algorithm is derived for the proposed model and applied to ultrasound image denoising. Experimental results validate the usefulness of the proposed method for ultrasound image denoising.
Ultrasound image denoising using backward diffusion and framelet regularization
S174680941400086X
A minimal recruitment model can be used to guide mechanical ventilation PEEP selection for acute respiratory distress syndrome (ARDS) patients. However, implementation of this model requires a specific clinical protocol and is computationally expensive, and thus not suitable for bedside application. This work aims to improve the performance and bedside utility of the minimal recruitment model by simplifying the model and improving the identification algorithm without compromising the model's physiological relevance to the disease. Identifying the model requires fitting 8 unique parameters to pressure–volume data at multiple levels of positive end-expiratory pressure (PEEP). A minimal algorithm is proposed to improve the model's performance. The algorithm utilises a non-linear least-squares solver to estimate a global set of the parameters from a pressure–volume curve at one PEEP level. These global parameters were then used to estimate the remaining parameters for the pressure–volume curves at the other PEEP levels. The accuracy and computational performance of the minimal algorithm is compared to the grid search algorithm for 2 ARDS patient cohorts. The median [IQR] absolute percentage curve fitting error over all patients for the grid search algorithm is 1.40% [0.55–3.75], and for the minimal algorithm is 2.43% [0.83–8.09] (p <0.005). The median [IQR] computational time over all patients for the grid search algorithm is 394.51s [284.79–630.45], and for the minimal algorithm is 0.72s [0.39–1.46], a more than 500-fold reduction in computational time. The parameters estimated using the minimal algorithm are correlated with those of the grid search algorithm, with a median Pearson correlation of R² > 0.9. The model fitting error for the minimal algorithm is higher than for the grid search algorithm. However, the model was able to capture similar trends in physiologically relevant parameters without the loss of important clinical information.
The minimal algorithm is less computationally intensive than the grid search algorithm, whilst still providing a means of selecting PEEP with only a small increase in model fitting error. It improves computational performance while maintaining the ability to capture physiological parameters as well as the grid search algorithm does. The significant reduction in computational time encourages its clinical application at the bedside for decision making.
A minimal algorithm for a minimal recruitment model–model estimation of alveoli opening pressure of an acute respiratory distress syndrome (ARDS) lung
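The non-linear least-squares identification step described in the abstract can be sketched in Python with `scipy.optimize.curve_fit`. The sigmoidal pressure–volume relation, parameter values and noise level below are illustrative assumptions, not the authors' 8-parameter recruitment model:

```python
import numpy as np
from scipy.optimize import curve_fit

def pv_sigmoid(p, v_max, p50, k):
    """Hypothetical sigmoidal pressure-volume relation (illustrative only,
    not the authors' 8-parameter recruitment model)."""
    return v_max / (1.0 + np.exp(-(p - p50) / k))

# Synthetic pressure-volume data at one PEEP level
rng = np.random.default_rng(0)
pressure = np.linspace(5.0, 40.0, 30)            # cmH2O (assumed range)
volume = pv_sigmoid(pressure, 1.5, 20.0, 4.0)    # true params: 1.5, 20.0, 4.0
volume += rng.normal(0.0, 0.01, pressure.size)   # measurement noise

# Non-linear least-squares identification of the global parameter set
popt, _ = curve_fit(pv_sigmoid, pressure, volume, p0=[1.0, 15.0, 3.0])
```

In the paper's scheme, the parameters fitted at one PEEP level would then seed the fits at the remaining PEEP levels, which is what replaces the exhaustive grid search.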
S1746809414000871
This paper presents an on-line myoelectric control system which can classify eight prehensile hand gestures with only two electrodes. An overlapping windowing scheme is adopted in the system, yielding a continuous flow of decisions. We choose mean absolute value (MAV), variance (VAR), the fourth-order autoregressive (AR) coefficients and sample entropy (SampEn) as the feature set and utilize linear discriminant analysis (LDA) to reduce the dimension and obtain the projected feature sets. The current projected feature set and the previous one are “pre-smoothed” before the classification, and then a decision is generated by the LDA classifier. To get the final decision from the decision flow, the current decision and the m previous decisions are “post-smoothed”. This method achieves a 99.04% off-line accuracy rate and a 97.35% on-line accuracy rate for individual gestures. By choosing a proper value of m, it also achieves a 99.79% accuracy rate for on-line recognition of complex sequences of hand gestures without interruption. In addition, a virtual hand has been developed to display the on-line recognition result visually, and a suitable control strategy is proposed to realize continuous switching between hand gestures.
Realtime recognition of multi-finger prehensile gestures
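The "post-smoothing" of the decision flow described above amounts to combining the current decision with the m previous ones. A minimal sketch, assuming a simple majority-vote rule over that window (the authors' exact smoothing rule may differ), and assuming shorter windows for the first m decisions:

```python
from collections import Counter

def post_smooth(decisions, m):
    """Majority vote over the current decision and the m previous ones.
    Start-up handling (shorter windows for the first m samples) is an
    assumption for this sketch."""
    smoothed = []
    for i in range(len(decisions)):
        window = decisions[max(0, i - m): i + 1]
        smoothed.append(Counter(window).most_common(1)[0][0])
    return smoothed

# A decision flow with two isolated misclassifications (gesture labels 1-3)
stream = [1, 1, 3, 1, 1, 2, 1, 2, 2, 2]
smoothed = post_smooth(stream, 4)
```

The isolated outliers are voted away, while a sustained change of gesture still propagates to the output after roughly m/2 windows, which is the latency/accuracy trade-off governed by the choice of m.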
S1746809414000883
In myoelectric prosthesis design, it is normally assumed that the necessary control information can be extracted from the surface myoelectric signals. In the pattern classification paradigm for controlling a myoelectric prosthesis, the autoregressive (AR) model coefficients are generally considered an efficient and robust feature set. However, no formal statistical methodologies or tests are reported in the literature to analyze and model the myoelectric signal as an AR process. We analyzed the myoelectric signal as a stochastic time series and found that the signal is heteroscedastic, i.e., the AR modeling residuals exhibit a time-varying variance. Heteroscedasticity is a major concern in statistical modeling because it can invalidate statistical tests of significance which may assume that the modeling errors are uncorrelated and that the error variances do not vary with the effects being modeled. We subsequently proposed to model the myoelectric signal as an autoregressive-generalized autoregressive conditional heteroscedastic (AR-GARCH) process and used the model parameters as a feature set for signal classification. Multiple statistical tests, including the Ljung–Box Q-test, Engle's test for heteroscedasticity, the Kolmogorov–Smirnov test and the goodness of fit test, were performed to show the validity of the proposed model. Our experimental results show that the proposed AR-GARCH model coefficients, when used as a feature set in two different classification schemes, significantly outperformed (p <.01) the conventional AR model coefficients.
Surface myoelectric signal classification using the AR-GARCH model
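As background to the AR modeling step discussed above, AR coefficients can be estimated by ordinary least squares; the residuals of this fit are what heteroscedasticity tests such as Engle's test examine. The sketch below is a generic AR estimator applied to a synthetic AR(2) process, not the paper's full AR-GARCH procedure:

```python
import numpy as np

def ar_fit(x, order):
    """Ordinary least-squares AR coefficient estimate:
    x[n] ~ a[0]*x[n-1] + ... + a[order-1]*x[n-order].
    A generic AR estimator, not the paper's AR-GARCH fit."""
    X = np.column_stack([x[order - i - 1: len(x) - i - 1] for i in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, y - X @ a          # coefficients and modeling residuals

# Synthetic AR(2) process: x[n] = 0.5*x[n-1] - 0.3*x[n-2] + e[n]
rng = np.random.default_rng(4)
x = np.zeros(5000)
for n in range(2, len(x)):
    x[n] = 0.5 * x[n - 1] - 0.3 * x[n - 2] + rng.normal()

coeffs, resid = ar_fit(x, 2)
```

For a homoscedastic process like this synthetic one the residual variance is constant; the paper's point is that real myoelectric residuals are not, motivating the GARCH layer on top of the AR part.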
S1746809414000895
In the signal processing community, characterization of dynamic behavior and detection of singularities are key steps, because dynamics and singularities carry most of a signal's information. The wavelet zoom is very good at localizing singularities. The Lipschitz exponent (LE) is the most popular measure of the singularity characteristics of a signal. The LE of a function is measured as the slope of a log-log plot of scale s versus the wavelet transform modulus maxima (WTMM). In this paper, we measure the singularity using WTMM, Inter-Scale Wavelet Maximum (ISWM) and Wavelet Leaders (WL) after adding white Gaussian noise to the human EEG signal. The statistical performances (mean, standard deviation (SD), skewness, SD/mean, number of singular points (NSP)) are assessed and compared by means of a non-parametric hypothesis test (the Mann–Whitney U-test). Highly significant differences have been found between WTMM, ISWM and WL using Receiver Operating Characteristic (ROC) curves. The WL method measures singularities well even when stronger noise corrupts the EEG signal. The experimental results demonstrate that wavelet leaders are more precise and robust.
Singularity detection in human EEG signal using wavelet leaders
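The LE estimate described above is the slope of a log-log plot of scale versus wavelet maxima. A minimal sketch with synthetic maxima obeying the power law |W(s)| ∝ s^α (the wavelet transform itself, and the leader/maxima selection, are omitted here):

```python
import numpy as np

# Synthetic WTMM amplitudes obeying the power law |W(s)| ~ s**alpha
alpha_true = 0.6                      # LE of a hypothetical singularity
scales = 2.0 ** np.arange(1, 7)       # dyadic scales s = 2^j
wtmm = scales ** alpha_true           # modulus maxima along one maxima line

# The LE is the slope of log2(WTMM) versus log2(scale)
alpha_est = np.polyfit(np.log2(scales), np.log2(wtmm), 1)[0]
```

The WTMM, ISWM and WL methods compared in the paper differ in which wavelet coefficients feed this regression, not in the regression itself.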
S1746809414000901
An implementation of the independent component analysis (ICA) technique for three-dimensional (3D) statistical shape analysis is presented. The capabilities of the ICA approach to uncover inherent shape features are first demonstrated through analysis of sets of artificially generated surfaces, and the nature of these features is compared to a more traditional proper orthogonal decomposition (POD) technique. For the surfaces generated, the ICA approach is shown to consistently extract surface features that closely resemble the original basis surfaces used to generate the artificial dataset, while the POD approach produces features that clearly mix the original basis. The details of an implementation of the ICA approach within a statistical shape analysis framework are then presented. Results are shown for the ICA decomposition of a collection of clinically obtained human right ventricle endocardial surfaces (RVES) segmented from cardiac computed tomography imaging, and these results are again compared with an analogous statistical shape analysis framework utilizing POD in lieu of ICA. The ICA approach is shown to produce shape features for the RVES that capture more localized variations in the shape across the set compared to the POD approach. Overall, the ICA approach represents the RVES variation throughout the set in a considerably different manner than the more traditional POD approach, providing a potentially useful alternative for statistically analyzing such a set of shapes.
An implementation of independent component analysis for 3D statistical shape analysis
S1746809414000913
A new Barker coded excitation using a linear frequency modulated (LFM) carrier (called LFM-Barker) is proposed for improving ultrasound imaging quality in terms of axial resolution and signal-to-noise ratio (SNR). The LFM-Barker coded excitation has two independent parameters: the bandwidth of the LFM carrier and the chip duration of the Barker code. To improve the axial resolution, the bandwidth of the LFM carrier is increased; to improve the SNR, the chip duration of the Barker code is increased. In this study, an LFM pulse with a proper (<5.5) time–bandwidth product is used as the carrier in order to avoid sidelobes inside the mainlobe of the matched filter output. A pulse compression scheme for the LFM-Barker coded excitation is developed, consisting of an LFM matched filter and a Barker code mismatched filter. In the simulations, the impulse response of the transducer is approximated by a Gaussian-shaped sinusoid with a 5MHz central frequency and a 60% −6dB fractional bandwidth. The pulse compression filter suppresses sidelobes to roughly below −40dB, which is acceptable in medical imaging. Simulation results show that, in comparison with the conventional Barker coded excitation using a sinusoid carrier (called Sinusoid-Barker), the axial resolution of the LFM-Barker coded excitation system can be doubled, and the SNR can be improved by about 3dB. Simulation of B-mode images with the Field II program demonstrates that the axial resolution is improved from 0.7mm to 0.4mm. In addition, the LFM-Barker coded excitation is robust to frequency dependent attenuation of tissues.
Barker coded excitation with linear frequency modulated carrier for ultrasonic imaging
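The LFM-Barker excitation can be sketched as a Barker-13 sequence of ±LFM chips followed by pulse compression. The sampling rate, chip duration and sweep band below are assumptions for illustration; compression here is plain matched filtering by correlation, without the paper's mismatched Barker filter for sidelobe suppression:

```python
import numpy as np
from scipy.signal import chirp

fs = 100e6                               # sampling rate (assumed)
chip_dur = 1e-6                          # Barker chip duration (assumed)
n_chip = int(fs * chip_dur)
t = np.arange(n_chip) / fs
# LFM chip sweeping 3-7 MHz, i.e. a band around a 5 MHz carrier (assumed)
lfm_chip = chirp(t, f0=3e6, t1=chip_dur, f1=7e6)

# Barker-13 code modulates the sign of successive LFM chips
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
excitation = np.concatenate([b * lfm_chip for b in barker13])

# Pulse compression of a noise-free echo by correlation with the excitation
compressed = np.correlate(excitation, excitation, mode="full")
peak = int(np.argmax(np.abs(compressed)))
```

The compressed mainlobe width is set by the LFM bandwidth (axial resolution), while the total coded-pulse energy grows with the chip duration (SNR), which is the decoupling the abstract describes.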
S1746809414000925
In vitro cultures of cardiac cells are valuable models for studying the mechanisms of arrhythmias at the cellular level. However, the dynamics of these experimental models cannot be characterized precisely, as they involve many parameters that depend on experimental conditions. This paper investigates the dynamics of an in vitro model using phase space reconstruction. Our model, based on the heart cells of newborn rats, generates electrical field potentials acquired using microelectrode technology, which are analyzed under normal and external stimulation conditions. Phase space reconstructions of the electrical field potential signals in normal and arrhythmic cases are performed after characterizing the nonlinearity of the model and computing the embedding dimension and the time lag. A non-parametric statistical test (the Kruskal–Wallis test) shows that the time lag τ can be used as an indicator to detect arrhythmia, while the embedding dimension does not differ significantly between the normal and arrhythmic cases. The phase space reconstructions highlight attractors whose dimension reveals that they are strange, indicating deterministic dynamics of a chaotic nature in our in vitro model.
Analysis of an experimental model of in vitro cardiac tissue using phase space reconstruction
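Phase space reconstruction from a scalar signal is done by time-delay embedding with an embedding dimension dim and a time lag tau. A minimal sketch on a synthetic sinusoid; in practice dim and tau would be chosen by methods such as those mentioned in the abstract (e.g. false nearest neighbours, mutual information), which are not shown here:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: row i is the reconstructed phase space point
    [x[i], x[i + tau], ..., x[i + (dim - 1) * tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[j * tau: j * tau + n] for j in range(dim)])

# Synthetic scalar signal; a periodic signal embeds onto a closed curve
signal = np.sin(np.linspace(0, 8 * np.pi, 400))
points = delay_embed(signal, dim=3, tau=10)
```

Each row of `points` is one point of the reconstructed attractor, on which dimension estimates and the Kruskal–Wallis comparison of embedding parameters can then be computed.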
S1746809414000937
Array-comparative genomic hybridization (aCGH) and next generation sequencing technologies enable cost-efficient, high resolution detection of DNA copy number variations (CNVs). However, while the CNV estimates provided by different methods are often inconsistent with each other, little is known about their estimation errors. Based on our recent studies of confidence limits for stepwise signals measured in noise, we develop an efficient algorithm for computing confidence upper and lower boundary masks that guarantee the existence of genomic changes with a required probability. We suggest combining these masks with the estimates in order to give medical experts more information about the true CNV structures. Applications to high-resolution CGH microarray measurements show that, with a certain probability, some changes predicted by an estimator may not exist.
Confidence masks for genome DNA copy number variations in applications to HR-CGH array measurements
S1746809414000949
Electrochemical impedance spectroscopy (EIS) allows measuring the properties of a system as a function of frequency as well as distinguishing between the processes that could be involved: resistance, reactance, relaxation times, amplitudes, etc. Although literature on in vitro and in vivo experiments to estimate glucose concentration is available, no clear information regarding the condition and precision of the measurements is easily found. This article first addresses the problem of the condition and precision of the measurements, as well as the effect of glucose on the impedance spectra at physiological (normal and pathological) levels. The significance of the measurements and of the glucose effect on the impedance is assessed with respect to the noise level of the system, the experimental error and the effect of using different sensors. Once the measurements are analyzed, the problem of glucose estimation is addressed. A rational parametric model in the Laplace domain is proposed to track the glucose concentration. The electrochemical spectrum is measured employing odd random phase multisine excitation signals. This type of signal provides short acquisition times and broadband measurements, and allows identifying the best linear approximation of the impedance as well as estimating the levels of noise and non-linearity present in the system. All the experiments were repeated five times, employing three different sensors from the same brand, in order to estimate the significance of the experimental error, the effect of the sensors and the effect of glucose on the impedance.
Measurement and characterization of glucose in NaCl aqueous solutions by electrochemical impedance spectroscopy
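An odd random phase multisine excites only odd harmonics, each with an independent uniformly random phase, so that even-order non-linear contributions fall on unexcited (even) bins and can be separated from the linear response. A minimal sketch; the harmonic grid and record length are chosen for illustration, not taken from the paper:

```python
import numpy as np

def odd_multisine(n_samples, odd_harmonics, rng):
    """One period of an odd random phase multisine: a sum of odd harmonics,
    each with an independent uniform random phase."""
    t = np.arange(n_samples) / n_samples          # normalized time over one period
    phases = rng.uniform(0, 2 * np.pi, len(odd_harmonics))
    return sum(np.cos(2 * np.pi * k * t + p)
               for k, p in zip(odd_harmonics, phases))

rng = np.random.default_rng(1)
harmonics = np.arange(1, 64, 2)                   # excite odd bins 1, 3, ..., 63
u = odd_multisine(1024, harmonics, rng)

# Spectrum check: energy sits only on the excited odd harmonics
spectrum = np.abs(np.fft.rfft(u))
```

In an EIS measurement, the response energy that appears on the unexcited even bins then quantifies the non-linear distortion level of the sensor/system.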
S1746809414000950
Despeckling is of great interest in ultrasound medical imaging. The inherent limitations of acquisition techniques and systems introduce speckle in ultrasound images. This speckle is the main factor that degrades the quality and, most importantly, the texture information present in ultrasound images. Because of speckle, experts may not be able to extract correct and useful information from the images. This paper presents an edge-preserving despeckling approach that combines the nonsubsampled shearlet transform (NSST) with improved nonlinear diffusion equations. As an image representation method with the features of localization, directionality and multiscale analysis, the NSST is utilized to provide an effective representation of the image coefficients. The anisotropic diffusion approach is applied to the noisy coarser NSST coefficients to improve the noise reduction efficiency while effectively preserving edge features. In the diffusion process, an adaptive gray variance is also incorporated with the gradient information of the eight connected neighboring pixels to preserve edges effectively. The performance of the proposed method is evaluated through extensive simulations using both standard test images and several ultrasound medical images. Experiments show that the proposed method provides an improvement not only in noise reduction but also in edge preservation compared to several existing methods.
Despeckling of ultrasound medical images using nonlinear adaptive anisotropic diffusion in nonsubsampled shearlet domain
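The anisotropic diffusion step can be illustrated with the classic Perona–Malik scheme on a noisy step edge. This is only the generic diffusion building block: the paper applies diffusion to NSST coefficients with an adaptive gray-variance term, which is not reproduced here:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classic Perona-Malik diffusion: the conductance
    g = exp(-(|grad|/kappa)**2) lets flat regions smooth while diffusion
    across strong edges is nearly stopped. Not the paper's NSST-domain
    variant; boundaries are handled periodically via np.roll."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # One-sided differences to the four neighbours
        diffs = (np.roll(u, 1, 0) - u, np.roll(u, -1, 0) - u,
                 np.roll(u, 1, 1) - u, np.roll(u, -1, 1) - u)
        u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u

# Noisy vertical step edge as a stand-in for a speckled image
rng = np.random.default_rng(2)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
denoised = perona_malik(noisy, n_iter=30, kappa=0.3)
```

With kappa well above the noise-induced gradients but well below the edge step, the flat regions smooth out while the edge survives, which is the behavior the NSST-domain scheme pushes further.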
S1746809414000962
The empirical mode decomposition (EMD) decomposes non-stationary signals that may stem from nonlinear systems, in a local and fully data-driven manner. Noise-assisted versions have been proposed to alleviate the so-called “mode mixing” phenomenon, which may appear when real signals are analyzed. Among them, the complete ensemble EMD with adaptive noise (CEEMDAN) recovered the completeness property of EMD. In this work we present improvements on this last technique, obtaining components with less noise and more physical meaning. Artificial signals are analyzed to illustrate the capabilities of the new method. Finally, several real biomedical signals are decomposed, obtaining components that represent physiological phenomena.
Improved complete ensemble EMD: A suitable tool for biomedical signal processing
S1746809414000974
Measurement of renal function in awake rats or mice can be accomplished by an intelligent plaster device that fits on the back of the animal. The device performs a percutaneous measurement of the kinetics of a labeled fluorescent dye eliminated exclusively by the kidney. During the measurement, relative motion between the plaster and the skin leads to a variation of the illumination conditions, which emerges as artifacts in the data. In this paper, a novel strategy to detect and eliminate these artifacts is suggested. The method combines cluster analysis and nonlinear regression with a priori knowledge about the signal morphology to correct the data. The performance of the proposed method is demonstrated on real and simulated data. Simulations were performed on data with two artifact amplitude ranges: (1) shifts in the recorded data with amplitude exceeding 3% of the signal amplitude for a combined duration of 10% of the total measurement time, and (2) shifts greater than 3% for approximately 30% of the total measurement time. Prior to artifact removal, the mean absolute error (MAE) was calculated to be 10.3% and 21.9%, respectively. Following artifact removal using the proposed method, when determining the half-life, the MAE was 0.88% for type 1 artifacts and 10.4% for the more substantial type 2 artifacts. When examining real data, the mean difference (bias) in the determined half-life was 7.5%. Results show that the novel technique outperforms a number of state-of-the-art techniques when removing artifacts from signals recorded while an animal is allowed to move freely. In this case, the signal acquires shifts and random changes with large amplitudes, which make it impossible to use standard methods.
Automatic artifact removal from GFR measurements
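Determining the dye half-life from the corrected kinetics typically reduces to a mono-exponential fit via nonlinear regression. The model form, time base and noise level below are illustrative assumptions, not the paper's signal model:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a, k, c):
    """Assumed mono-exponential elimination model for the fluorescence signal."""
    return a * np.exp(-k * t) + c

# Synthetic artifact-free kinetics with a true half-life of 25 min (assumed)
rng = np.random.default_rng(3)
t = np.linspace(0.0, 120.0, 200)                 # minutes (assumed time base)
signal = mono_exp(t, 1.0, np.log(2) / 25.0, 0.1)
signal += rng.normal(0.0, 0.005, t.size)

# Nonlinear regression, then half-life from the elimination rate k
popt, _ = curve_fit(mono_exp, t, signal, p0=[1.0, 0.05, 0.0])
half_life = np.log(2) / popt[1]
```

The half-life errors reported in the abstract are computed on exactly this kind of fitted quantity, which is why uncorrected baseline shifts bias the result so strongly.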