FileName | Abstract | Title |
---|---|---|
S1746809415000506 | Objective: In this study we develop a new complexity measure of time series by combining ordinal patterns and Lempel-Ziv complexity (LZC) for quantifying the dynamical changes of EEG. Methods: A neural mass model (NMM) was used to simulate EEG data and test the performance of the permutation Lempel-Ziv complexity (PLZC) in tracking the dynamical changes of signals against different white noise levels. Then, the PLZC was applied to real EEG data to investigate whether it was able to detect the different states of anesthesia and epileptic seizures. The Z-score model, two-way ANOVA and t-test were used to estimate the significance of the results. Results: PLZC could successfully track the dynamical changes of EEG series generated by the NMM. Compared with the other four classical LZC based methods, the PLZC was most robust against white noise. In real data analysis, PLZC was effective in differentiating the different anesthesia states and sensitive in detecting epileptic seizures. Conclusions: PLZC is simple, robust and effective for quantifying the dynamical changes of EEG. Significance: We suggest that PLZC is a potential nonlinear method for characterizing the changes in EEG signal. | A permutation Lempel-Ziv complexity measure for EEG analysis |
S1746809415000518 | This paper investigated the ability of a hybrid time-delayed artificial neural network (TDANN)/autoregressive TDANN (AR-TDANN) to predict clenching movements during mastication from surface electromyography (SEMG) signals. Actual jaw motions and SEMG signals from the masticatory muscles were recorded and used as output and input, respectively. Three separate TDANNs/AR-TDANNs were used to predict displacement (in terms of position/orientation), velocity, and acceleration. The optimal number of neurons in the hidden layer and total duration of delays were obtained for each TDANN/AR-TDANN and each subject through a genetic algorithm (GA). The kinematic modeling of a human-like masticatory robot, based on a 6-universal-prismatic-spherical parallel robot, is described. The structure and motion variables of the robot were determined. The closed-form solution of the inverse kinematic problem (IKP) of the robot was found by vector analysis. Thereafter, the framework for an EMG-based human mastication robot interface is explained. Predictions by AR-TDANN were superior to those by TDANN. SEMG signals from mastication muscles contained important information about the mandibular kinematic parameters. This information can be employed to develop control systems for rehabilitation robots. Thus, by predicting the subject's movement and solving the IKP, we provide applicable tools for EMG-based masticatory robot control. | SEMG-based prediction of masticatory kinematics in rhythmic clenching movements |
S174680941500052X | A novel feedback controlled hydrodynamic human circulatory system simulator, well-suited for in-vitro validation of cardiac assist devices, is presented in this paper. The cardiovascular system simulator consists of high-bandwidth actuators allowing a high precision hardware-in-the-loop hydrodynamic interface in connection with physiological circulatory models calculated in real-time. The hydrodynamically coupled process dynamics consist of several actuator loops and demand a multivariable control design approach in the face of system nonlinearities and uncertainties. Based on a detailed model employing the Lagrange formalism, a robust decentralised controller is designed. Fixed structural constraints and the minimisation of the H∞-norm necessitate the application of nonsmooth optimisation techniques. The robust decentralised norm-optimal controller is tested in extensive in-vitro experiments and shows good performance with regard to reference tracking and system coupling. In-vitro experiments include multivariable reference step tests and frequency analysis tests of the vascular impedance transfer function. | Robust decentralised control of a hydrodynamic human circulatory system simulator |
S1746809415000531 | Liver cancer is one of the leading causes of cancer-related mortality worldwide. Non-invasive techniques of medical imaging such as Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) are often used by radiologists for diagnosis and surgery planning. With the aim of assuring the most reliable intervention planning to surgeons, new accurate methods and tools must be provided to locate and segment the regions of interest. Automated liver segmentation is a challenging problem for which promising results have been achieved mostly for CT. However, MRI is required by radiologists, since it offers better information for diagnosis purposes. MRI liver segmentation represents a challenge due to the presence of characteristic artifacts, such as partial volumes, noise, low contrast and poorly defined edges of the liver in relation to adjacent organs. In this paper, we present a method for automatic 3D liver segmentation in MRI by means of an active contour model extended to 3D and minimized by a total variation dual approach, also extended to 3D. A new approach to enhance the contrast in the input MRI image is proposed, allowing more accurate segmentation. The proposed methodology allows replacing the input image by a probability map obtained by means of a previously generated statistical model of the liver. An accuracy of 98.89 and a Dice similarity coefficient of 90.19 are in line with other state-of-the-art methodologies. | Automatic 3D model-based method for liver segmentation in MRI based on active contours and total variation minimization |
S1746809415000634 | In this work we employ a nonlinear data analysis method called recurrence quantification analysis (RQA) to analyze differences between sleep stages and wake using cardio-respiratory signals only. The data were recorded during full-night polysomnographies of 313 healthy subjects in nine different sleep laboratories. The raw signals are first normalized to common time bases and ranges. Thirteen different RQA and cross-RQA features derived from ECG, respiratory effort, heart rate and their combinations are additionally reconditioned with windowed standard deviation filters and z-score normalization procedures, leading to a total feature count of 195. The discriminative power between Wake, NREM and REM of each feature is evaluated using Cohen's kappa coefficient. Besides kappa performance, the sensitivity, specificity, accuracy and inter-correlations of the best 20 features with high discriminative power are also analyzed. The best kappa values for each class versus the other classes are 0.24, 0.12 and 0.31 for NREM, REM and Wake, respectively. Significance is tested with the ANOVA F-test (mostly p <0.001). The results are compared to known cardio-respiratory features for sleep analysis. We conclude that many RQA features are suited to discriminating between Wake and Sleep, whereas the differentiation between REM and the other classes remains in the midrange. | Recurrence quantification analysis across sleep stages |
S1746809415000646 | This paper introduces an approach to classify EEG signals using wavelet transform and a fuzzy standard additive model (FSAM) with tabu search learning mechanism. Wavelet coefficients are ranked based on statistics of the Wilcoxon test. The most informative coefficients are assembled to form a feature set that serves as inputs to the tabu-FSAM. Two benchmark datasets, named Ia and Ib, downloaded from the brain-computer interface (BCI) competition II are employed for the experiments. Classification performance is evaluated using accuracy, mutual information, Gini coefficient and F-measure. Widely-used classifiers, including feedforward neural network, support vector machine, k-nearest neighbours, ensemble learning Adaboost and adaptive neuro-fuzzy inference system, are also implemented for comparisons. The proposed tabu-FSAM method considerably dominates the competitive classifiers, and outperforms the best performance on the Ia and Ib datasets reported in the BCI competition II. | Fuzzy system with tabu search learning for classification of motor imagery data |
S1746809415000658 | Angiogenesis is the phenomenon by which new blood vessels are created from preexisting ones. Yet this natural process is also involved, in a chaotic way, in tumor development. Many molecules have shown particular efficiency in inhibiting this phenomenon, hopefully leading to either: (i) a reorganization of the neovessels allowing a better tumor uptake of cytotoxic molecules (such as chemotherapy) or (ii) a deprivation of the tumor vascular network with a view to starving it. However, characterizing the anti-angiogenic effects of a molecule remains difficult, mainly because the proposed physical modeling approaches have rarely been confronted with in vivo data, which are not directly available. This paper presents an original approach to characterize and analyze anti-angiogenic responses in cancerology that allows biologists to account for the spatial and dynamical dimensions of the problem. The proposed solution relies on the association of a specific biological in vivo protocol using skinfold chambers, image processing and dynamic system identification. An empirical model structure of the anti-angiogenic effect of a tested molecule is selected according to experimental data. Finally, the model is identified and its parameters are used to characterize and compare responses of the tested molecule. | Data-driven modeling and characterization of anti-angiogenic molecule effects on tumoral vascular density |
S174680941500066X | Pulse transit time (PTT) and pulse wave velocity (PWV) are the markers most widely used to evaluate the vascular effects of aging, hypertension, arterial stiffness and atherosclerosis. To calculate these markers it is necessary to determine the location of the onset and systolic peak of the arterial pulse wave (APW). In this paper, a method employed for electrocardiography (ECG) R peak detection is applied, with a slight modification, to both onset and systolic peak detection in the APW. The method employs a Shannon energy envelope (SEE) estimator, the Hilbert transform (HT) and a moving average (MA) filter. The minimum value and the positive zero-crossing points of the odd-symmetry function of the HT correspond to the locations of the onset and systolic peak, respectively. The algorithm was evaluated against an expert's annotations, using 10 records of 5min length and different signal-to-noise ratios (15, 12 and 9dB), and achieved good performance and precision. Compared to the expert's annotations, the algorithm detected these fiducial points with average sensitivity, positive predictivity and accuracy of 100% and presented errors of less than 10ms. In APW signals contaminated with noise, the relative error in both cases is less than 2% with respect to pulse wave periods of 800ms. The performance of the algorithm was compared with both the foot approximation and adaptive threshold methods, and the results show that the algorithm outperforms these reported methods with respect to the manual annotations. The results are promising, suggesting that the method provides simple but accurate onset and systolic peak detection and can be used in the measurement of pulse transit time, pulse wave velocity and pulse rate variability. | Automated detection of the onset and systolic peak in the pulse wave using Hilbert transform |
S1746809415000737 | One of the main approaches to classifying ECG signals is the use of the wavelet transform. In this paper, a method is presented for classifying ECG signals by means of new wavelet functions (WFs). The considered approach for generating the new WFs relies on the degree of similarity between the shapes of the WFs and the ECG signals. Thus, by formulating the wavelet design problem in a hybrid GA-PSO framework, and using the Euclidean, Dynamic Time Warping, Signed Correlation Index, and Adaptive Signed Correlation Index similarity measures as wavelet design criteria, six WFs corresponding to six common arrhythmias have been designed. Decomposition of the ECG signal using the designed WFs, followed by application of PCA and a multilayer perceptron (MLP) classifier, provides a classification scheme for ECG signals. The feature vector is obtained by applying all designed WFs to every single beat; thus, the main advantage of this method is that the set of WFs used to decompose the beats always includes a WF similar to those beats. Therefore, the generated features better resolve the various classes. The effects of the number of neurons in the hidden layer and of the different training methods of the MLP have also been investigated. Tests on the benchmark MIT-BIH arrhythmia database using the proposed method and also the common WFs demonstrate the superiority of the proposed approach in overall accuracy as well as in the accuracy of each class. | A multi-wavelet optimization approach using similarity measures for electrocardiogram signal classification |
S1746809415000749 | Traditional techniques for the diagnosis of neurological disorders are recently complemented by contact-less methods that provide a semi-quantitative assessment of the patient status. In this framework, the assessment of infant behaviour based on the analysis of audio and video recordings is appealing thanks to its unobtrusiveness and to the affordable costs of the equipment. This paper presents the architecture of a system, named AVIM, conceived for supporting clinical diagnosis in newborns with contact-less techniques. Its most innovative aspect is the ability to merge into a single tool the management of medical records and reports; audio/video data acquisition, handling and analysis; and the editing and filling out of customized tests. Moreover, unlike other commercial or open source software tools, AVIM allows adding markers and notes while recording audio and video signals and provides detailed reports with both perceptual scores and acoustical and kinematical parameters of clinical interest computed through dedicated innovative techniques. AVIM is therefore a unique and flexible system that could successfully support the clinician during the entire process from the acquisition of the signals to the results. In addition to providing an appreciable decrease in investigation time, costs and errors, AVIM could support the diagnosis integrating clinicians' qualitative analysis, based on subjective skills, with objective measurements. To highlight its capabilities, AVIM is applied here to the management and analysis of personal and clinical data of newborns audio/video recorded at 5 time points from 10 days to the 24th week of age, according to a specific protocol. Patient data, results of customized tests, tables and plots are provided in a user-friendly environment. | AVIM—A contactless system for infant data acquisition and analysis: Software architecture and first results |
S1746809415000762 | Background: The total cosine R-to-T (TCRT) is a vectorcardiographic measure of differences in the propagation of depolarization and repolarization in the heart. A high TCRT value reflects increased heterogeneity of the repolarization process. The aim of the study was to assess the prognostic value of the TCRT measured during an exercise stress test. Methods: High-resolution body surface potential maps were recorded at rest and during an exercise test in a group of 90 cardiac patients and 33 healthy volunteers. The TCRT was computed from time-averaged ECG signals. The prognostic value of the TCRT parameter was determined based on the results of SPECT and echocardiography. Results: TCRT was heart rate dependent, and its magnitude decreased when measured at peak exercise in comparison to TCRT measured at rest. Significant differences in TCRT were obtained between patients with low (LVEF≤40%) and high (LVEF>40%) left ventricular ejection fraction. These differences were greater if the TCRT was determined from a 67-lead ECG rather than from the standard ECG leads, and were more pronounced for TCRT determined at peak exercise than at rest. The separation capacity was further improved for TCRT normalized to mean heart rate. Simulation and experimental results showed that the presence and localization of myocardial ischemia or post-infarction scar tissue have negligible impact on TCRT. Conclusions: A wider R-to-T angle was associated with lower LVEF. The statistically significant differences in TCRT between patients with low and high LVEF may suggest a high prognostic value of the TCRT parameter in the risk assessment of ventricular arrhythmia. | Prognostic value of the total cosine R to T measured in high resolution body surface potential mapping during exercise test |
S1746809415000774 | The problem of frequency-by-frequency cross-correlation detection is studied using two characteristics of blood microcirculation: blood flow and skin temperature. Skin blood flow variations are measured by laser Doppler flowmetry (LDF) and skin temperature (ST) by a high-resolution (0.001K) temperature recorder. The wavelet cross-correlation (WCC) is compared with Fourier coherence and demonstrates certain advantages for the frequency-by-frequency analysis of nonstationary biomedical signals. The WCC analysis of the LDF and ST signals performed for 17 healthy subjects reveals a high correlation in the low-frequency range 0.01 < ν < 0.1Hz. The WCC function provides the phase difference between the oscillations of the LDF and ST signals. This phase shift is used to estimate the effective depth of temperature wave generation. Taking into account the decay rate of temperature oscillations at each frequency and the LDF-ST phase shift, we perform an inverse wavelet transform for the frequency band that corresponds to active mechanisms of vascular tone regulation. This technique allows us to recover the filtered ST signal from the LDF signal and vice versa. It is shown that the ST pulsations mirror the functional state of the microcirculation system and that ST monitoring can be used for microvessel tone control within the frequency ranges corresponding to endothelial, neurogenic and myogenic activities. | Skin temperature variations as a tracer of microvessel tone |
S1746809415000786 | The presence of noise deteriorates the quality of magnetic resonance (MR) images and thus limits visual inspection and influences quantitative measurements from the data. In this work, an efficient two-stage linear minimum mean square error (LMMSE) method is proposed for the enhancement of magnitude MR images, in which the data in the presence of noise follow a Rician distribution. The conventional Rician LMMSE estimator determines a closed-form analytical solution to the aforementioned inverse problem. Even though computationally efficient, this approach fails to take advantage of data redundancy in the 3D MR data and hence leads to suboptimal filtering performance. Motivated by this observation, we put forward a nonlocal implementation of the LMMSE estimation method. To select appropriate samples for the nonlocal version of the LMMSE estimation, the similarity weights are computed using the Euclidean distance between either the gray-level values in the spatial domain or the coefficients in the transformed domain. Assuming that the signal-dependent component of the noise is optimally suppressed by this filtering and the rest is white noise uncorrelated with the image, we adopt a second-stage LMMSE filtering in the principal component analysis (PCA) domain to further enhance the image, with the noise variance adaptively adjusted. Experiments on both simulated and real data show that the proposed filters have excellent filtering performance compared with other state-of-the-art methods. | Nonlocal linear minimum mean square error methods for denoising MRI |
S1746809415000798 | Fascicle orientation is one of the most widely used parameters for quantifying muscle function in mechanical analysis, clinical diagnosis, and rehabilitation assessment. Ultrasonography has frequently been used as a reliable way to measure the changes in fascicle orientation of human muscles non-invasively. Conventionally, most such measurements are conducted by a manual analysis of ultrasound images. This manual approach is time consuming, subjective and not suitable for measuring dynamic changes. In this study, we developed an automated tracking method based on a frequency domain Radon transform. The goal of the study was to evaluate the performance of the proposed method by comparing it with the manual approach and by determining its repeatability. A real-time B-mode ultrasound scanner was used to examine the medial gastrocnemius muscle during contraction. The coefficient of multiple correlation (CMC) was used to quantify the level of agreement between the two methods and the repeatability of the proposed method. The two methods were also compared by linear regression and a Bland–Altman analysis. The findings indicated that the results obtained using the proposed method were in good agreement with those obtained using the manual approach (CMC=0.94±0.03, difference=−0.23±0.68°) and were highly repeatable (CMC=0.91±0.04). In conclusion, the new method presented here may provide an accurate, highly repeatable, and efficient approach for estimating fascicle orientation during muscle contraction. | Continuous fascicle orientation measurement of medial gastrocnemius muscle in ultrasonography using frequency domain Radon transform |
S1746809415000804 | Cough is a common symptom of almost all childhood respiratory diseases. In a typical consultation session, physicians may seek qualitative information (e.g., wetness) and quantitative information (e.g., cough frequency) either by listening to voluntary coughs or by interviewing the patients/carers. This information is useful in the differential diagnosis and in assessing the treatment outcome of the disease. Manual cough assessment is tedious, subjective, and not suitable for long-term recording. Researchers have attempted to develop automated systems for cough assessment, but none of the existing systems have specifically targeted the pediatric population. In this paper we address these issues and develop a method to automatically identify cough segments from pediatric sound recordings. Our method is based on extracting mathematical features such as non-Gaussianity, Shannon entropy, and cepstral coefficients to describe cough characteristics. These features were then used to train an artificial neural network to detect cough segments in the sound recordings. Working on a prospective data set of 14 subjects (sound recording length 840min), the proposed method achieved sensitivity, specificity, and Cohen's kappa of 93%, 98%, and 0.65, respectively. These results indicate that the proposed method has the potential to be developed into an automated pediatric cough counting device as well as the front-end of a cough analysis system. | Automatic cough segmentation from non-contact sound recordings in pediatric wards |
S1746809415000816 | The paper proposes a two-layer pattern recognition system architecture for asthma wheezing detection in recorded children's respiratory sounds. The first layer consists of two SVM classifiers specifically designed as a cascade stacked in parallel to emphasize the differences among signals with similar acoustic properties, such as wheezes and inspiratory stridors. The second layer is realized using a digital detection threshold, which further upgrades the proposed structure with the aim of improving the process of wheezing detection. The results were experimentally evaluated on the data acquired from the General Hospital of Dubrovnik, Croatia. Classification results obtained on the test data sets revealed that the central frequency of wheezes included in the training data is important for the success of classification. | Two-level coarse-to-fine classification algorithm for asthma wheezing recognition in children's respiratory sounds |
S1746809415000828 | Pre-shock waveform analysis for optimizing the timing of shock delivery could be immensely helpful to emergency medical personnel in treating ventricular fibrillation. For this purpose, our proposed method resolves the pre-shock surface electrocardiogram into independent sources using a blind source separation approach. The electrocardiogram pre-shock waveforms were transformed into the wavelet domain and the independent sources were extracted using independent component analysis. A database consisting of 50 pre-shock waveforms from 50 pigs was used in this study. The pre-shock waveforms were obtained using a controlled protocol. After ventricular fibrillation was induced and left untreated for 2–5min, cardiopulmonary resuscitation was administered for 3min, followed by defibrillation. Energy-based features were extracted from the independent sources, and a linear discriminant analysis based pattern classifier was used to evaluate the features for their ability to discriminate between successful and unsuccessful shock outcomes. The proposed method achieved a classification accuracy of 68% (P <0.02), and the classification results were cross-validated using the leave-one-out method. A comparative study demonstrated that the proposed approach performed relatively well compared to existing methods for the given database. | Analysis of electrocardiogram pre-shock waveforms during ventricular fibrillation |
S174680941500083X | Feature extraction and automatic classification of mental states is an interesting and open area of research in the field of brain–computer interfacing (BCI). A well-trained classifier would allow the BCI system to control an external assistive device in real-world problems. Sometimes, standard existing classifiers fail to generalize the components of a non-stationary signal such as the electroencephalogram (EEG), which may pose one or more problems during real-time usage of the BCI system. In this paper, we aim to tackle this issue by designing an interval type-2 fuzzy classifier that deals with the uncertainties of the EEG signal over various sessions. Our designed classifier is used to decode various movements concerning the wrist (extension and flexion) and fingers (opening and closing of a fist). For this purpose, we have employed the extreme energy ratio (EER) to construct the feature vector. The average classification accuracies achieved during offline training and online testing over eight subjects are 86.45% and 78.44%, respectively. In comparison with other related works, our designed IT2FS classifier shows better performance. | An interval type-2 fuzzy approach for real-time EEG-based control of wrist and finger movement |
S1746809415000841 | Synchronization provides insight into the underlying interaction mechanisms between bivariate time series and has recently become a focus of increasing interest. In this study, we propose a new cross entropy measure, named cross fuzzy measure entropy (C-FuzzyMEn), to detect the synchronization of bivariate time series. The performances of C-FuzzyMEn, as well as two existing cross entropy measures, i.e., cross sample entropy (C-SampEn) and cross fuzzy entropy (C-FuzzyEn), were first tested and compared using three coupled simulation models (coupled Gaussian noise, coupled MIX(p) and the coupled Henon model) by changing the time series length, the threshold value for entropy and the coupling degree. The results from the simulation models showed that, compared with C-SampEn, C-FuzzyEn and C-FuzzyMEn had better statistical stability, and compared with C-FuzzyEn, C-FuzzyMEn had better discrimination ability. These three measures were then applied to a cardiovascular coupling problem: synchronization analysis of RR and pulse transit time (PTT) series in both normal subjects and heart failure patients. The results showed that the heart failure group had lower cross entropy values than the normal group for all three cross entropy measures, indicating that the synchronization between the RR and PTT time series increases in the heart failure group. Further analysis showed that there was no significant difference between the normal and heart failure groups for C-SampEn (normal 2.13±0.37 vs. heart failure 2.07±0.16, P =0.36). However, C-FuzzyEn showed a significant difference between the two groups (normal 1.42±0.25 vs. heart failure 1.31±0.12, P <0.05). The statistical difference between the two groups was larger for C-FuzzyMEn (normal 2.40±0.26 vs. heart failure 2.15±0.13, P <0.01). | Measuring synchronization in coupled simulation and coupled cardiovascular time series: A comparison of different cross entropy measures |
S1746809415000853 | Photoplethysmography (PPG)-based heart rate (HR) monitoring is a promising feature in modern wearable devices. However, it is difficult to accurately track HR during physical exercise since PPG signals are vulnerable to motion artifacts (MA). In this paper, an algorithm is presented that combines ensemble empirical mode decomposition (EEMD) with spectrum subtraction (SS) to track HR changes during subjects' physical activities. In this algorithm, EEMD decomposes a PPG signal and an acceleration signal into intrinsic mode functions (IMFs). Noise-related IMFs are then removed. Next, the correlation coefficient is computed between the spectrum of the acceleration signal and that of the PPG signal in the band of [0.4Hz–5Hz]. If the coefficient is above 0.5, SS is used to remove the spectrum of the acceleration signal from the PPG's spectrum. Finally, a spectral peak selection method is used to find the peak corresponding to HR. Experimental results on datasets recorded from 12 subjects during fast running showed the superior performance of the proposed algorithm compared with a benchmark method termed TROIKA. The average absolute error of HR estimation was 1.83 beats per minute (BPM), and the Pearson correlation was 0.989 between the ground-truth and the estimated HR. | Combining ensemble empirical mode decomposition with spectrum subtraction technique for heart rate monitoring using wrist-type photoplethysmography |
S1746809415000865 | In electroencephalogram (EEG)-based brain–computer interface (BCI) systems, classification is an important signal processing step for controlling external devices using brain activity. However, scalp-recorded EEG signals have inherently non-stationary characteristics; thus, classification performance is degraded by changes in the background activity of the EEG during the BCI experiment. Recently, the sparse representation based classification (SRC) method has shown robust classification performance in many pattern recognition fields, including BCI. In this study, we analyze the noise robustness of the SRC method to evaluate its capability for non-stationary EEG signal classification. For this purpose, we generate noisy test signals by adding a noise source, such as random Gaussian noise or scalp-recorded background noise, to the original motor imagery based EEG signals. Using the noisy test signals and a real online-experimental dataset, we compare the classification performance of the SRC and the support vector machine (SVM). Furthermore, we analyze the unique classification mechanism of the SRC. We observed that the SRC method provided better classification accuracy and noise robustness than the SVM method. In addition, the SRC has an inherent adaptive classification mechanism that makes it suitable for time-varying EEG signal classification in online BCI systems. | Noise robustness analysis of sparse representation based classification method for non-stationary EEG signal classification |
S1746809415000877 | Brain–computer interface (BCI) systems based on electroencephalography have been increasingly used in different contexts, engendering applications from entertainment to rehabilitation in a non-invasive framework. In this study, we perform a comparative analysis of different signal processing techniques for each BCI system stage concerning steady state visually evoked potentials (SSVEP), which includes: (1) feature extraction performed by different spectral methods (bank of filters, Welch's method and the magnitude of the short-time Fourier transform); (2) feature selection by means of an incremental wrapper, a filter using Pearson's method and a cluster measure based on the Davies–Bouldin index, in addition to a scenario with no selection strategy; (3) classification schemes using linear discriminant analysis (LDA), support vector machines (SVM) and extreme learning machines (ELM). The combination of such methodologies leads to a representative and helpful comparative overview of robustness and efficiency of classical strategies, in addition to the characterization of a relatively new classification approach (defined by ELM) applied to the BCI-SSVEP systems. | Comparative analysis of strategies for feature extraction and classification in SSVEP BCIs |
S1746809415000889 | Photoacoustic tomography (PAT) is a noninvasive, non-ionizing optical biomedical imaging method that offers good soft-tissue contrast and excellent spatial resolution. To build a full-view photoacoustic (PA) image, a large number of ultrasound sensors is needed to assure image quality, which complicates data acquisition and real-time image display. Compressed sensing (CS) theory breaks the restriction of the Nyquist sampling theorem and can reconstruct signals from fewer measurements. In this contribution, we propose an optimization method to increase the image quality of full-view PAT with fewer ultrasound sensors, combining CS theory with a circularly distributed asymmetric data-acquisition frame. Hardware cost can be saved and the time efficiency of PAT raised without sacrificing image quality. The feasibility of our proposed method is verified in simulation experiments, which yield the expected results. Our study might be helpful for clinical medical imaging such as early-stage breast cancer detection, endoscopic imaging and in vivo monitoring. | Full-view photoacoustic tomography using asymmetric distributed sensors optimized with compressed sensing method
S1746809415000890 | In this study, we have investigated the evidence of fetal heart rate asymmetry and how it changes before and after 35 weeks of gestation. Noninvasive fetal electrocardiogram (fECG) signals from 45 pregnant women with normal single pregnancies, at gestational ages from 16 to 41 weeks, were analysed. A nonlinear parameter called the heart rate asymmetry (HRA) index, which measures the time asymmetry of the RR-interval time series, was used to understand the changes of HRA in the early and late fetus groups. Results indicate that fetal HRA measured by Porta's Index (PI) consistently increases after 35 weeks of gestation compared to fetuses before 32 weeks of gestation. This might be due to significant changes of the sympatho-vagal balance towards delivery, with a greater sympathetic surge. On the other hand, Guzik's Index (GI) showed a mixed effect, i.e., it increased at lower lags and decreased at higher lags. Finally, fHRA could potentially help identify normal and pathological autonomic nervous system development. | Analysis of fetal heart rate asymmetry before and after 35 weeks of gestation
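The two asymmetry indices named in this abstract have widely used definitions in the heart rate asymmetry literature: Guzik's Index is the percentage of the total squared RR increments contributed by positive increments, and Porta's Index is the percentage of negative increments among the nonzero ones. A minimal NumPy sketch (an illustrative implementation, not the authors' code; the `lag` parameter and the handling of zero increments follow the common conventions):

```python
import numpy as np

def hra_indices(rr, lag=1):
    """Guzik's Index (GI) and Porta's Index (PI) for heart rate asymmetry.

    rr  : 1-D array of RR intervals (e.g. in ms)
    lag : delay m used to form the increments dRR_i = RR[i+m] - RR[i]
    Both indices are in percent; 50% corresponds to a symmetric series.
    """
    rr = np.asarray(rr, dtype=float)
    drr = rr[lag:] - rr[:-lag]
    drr = drr[drr != 0]  # zero increments are conventionally excluded
    gi = 100.0 * np.sum(drr[drr > 0] ** 2) / np.sum(drr ** 2)
    pi = 100.0 * np.sum(drr < 0) / drr.size
    return gi, pi

# toy RR series (ms): alternating accelerations and decelerations
rr = np.array([800, 810, 805, 820, 815, 830, 825, 840], dtype=float)
gi, pi = hra_indices(rr, lag=1)
```

Varying `lag` reproduces the multi-lag analysis the abstract refers to when it contrasts GI behaviour at lower and higher lags.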
S1746809415000907 | In clinical medicine, multidimensional time series data can be used to find the rules of disease progression via data mining technologies such as classification and prediction. However, in multidimensional time series data mining problems, excessive data dimensionality makes the estimated probability density distribution inaccurate and increases the computational complexity. Besides, information redundancy and irrelevant features may lead to high computational complexity and over-fitting. The combination of these two factors can reduce classification performance. To reduce computational complexity and to eliminate information redundancy and irrelevant features, we improved upon a multidimensional time series feature selection method to achieve dimension reduction. The improved method selects features by combining the Kozachenko–Leonenko (K–L) information entropy estimation method for mutual-information-based feature extraction with a feature selection algorithm based on class separability. We performed experiments on an electroencephalogram (EEG) dataset for verification and on a non-small cell lung cancer (NSCLC) clinical dataset for application. The results show that, compared with CLeVer, Corona and AGV, the improved method can effectively reduce the dimensions of multidimensional time series for clinical data. | Feature selection method based on mutual information and class separability for dimension reduction in multidimensional time series for clinical data
S1746809415000919 | The goal of this study was to develop a hybrid mental speller that can effectively prevent unexpected typing errors in the steady-state visual evoked potential (SSVEP)-based mental speller by simultaneously using the information of eye-gaze direction detected by a low-cost webcam without calibration. In the implemented hybrid mental speller, a character corresponding to the strongest SSVEP response was typed only when the position of the selected character coincided with the horizontal eye-gaze direction (‘left’, ‘no direction’, or ‘right’) detected by the webcam-based eye tracker. When the character detected by the SSVEP-based mental speller was located in the direction opposite the eye-gaze direction, the character was not typed at all (a beep sound was generated instead), and thus the users of the speller did not need to correct the mistyped character using a ‘BACKSPACE’ key. To verify the feasibility and usefulness of the developed hybrid mental spelling system, we conducted online experiments with ten healthy participants, each of whom was asked to type 15 English words consisting of a total of 68 characters. As a result, 16.6 typing errors could be prevented on average, demonstrating that the proposed hybrid strategy could effectively enhance the performance of the SSVEP-based mental spelling system. | Development of a hybrid mental spelling system combining SSVEP-based brain–computer interface and webcam-based eye tracking |
S1746809415000920 | General anesthesia is required for some patients in the intensive care units (ICUs) with acute respiratory distress syndrome. Critically ill patients who are assisted by mechanical ventilators require moderate sedation for several days to ensure cooperative and safe treatment in the ICU, reduce anxiety and delirium, facilitate sleep, and increase patient tolerance to endotracheal tube insertion. However, most anesthetics affect cardiac and respiratory functions. Hence, it is important to monitor and control the infusion of anesthetics to meet sedation requirements while keeping patient vital parameters within safe limits. The critical task of anesthesia administration also necessitates that drug dosing be optimal, patient specific, and robust. In this paper, the concept of reinforcement learning (RL) is used to develop a closed-loop anesthesia controller using the bispectral index (BIS) as a control variable while concurrently accounting for mean arterial pressure (MAP). In particular, the proposed framework uses these two parameters to control propofol infusion rates to regulate the BIS and MAP within a desired range. Specifically, a weighted combination of the error of the BIS and MAP signals is considered in the proposed RL algorithm. This reduces the computational complexity of the RL algorithm and consequently the controller processing time. | Closed-loop control of anesthesia and mean arterial pressure using reinforcement learning |
S1746809415000932 | Background We proposed a novel classification system to distinguish among elderly subjects with Alzheimer's disease (AD), mild cognitive impairment (MCI), and normal controls (NC), based on 3D magnetic resonance imaging (MRI) scans. Methods The method employed 3D data of 178 subjects consisting of 97 NCs, 57 MCIs, and 24 ADs. First, all 3D MR images were preprocessed with atlas-registered normalization to form an averaged volumetric image. Then, the 3D discrete wavelet transform (3D-DWT) was used to extract wavelet coefficients from the volumetric image. The triplets (energy, variance, and Shannon entropy) of the coefficients of all 3D-DWT subbands were used as the feature vector. Afterwards, principal component analysis (PCA) was applied for feature reduction. On the basis of the reduced features, we proposed nine classification methods: three individual classifiers (linear SVM, kernel SVM, and kernel SVM trained by PSO with a time-varying acceleration coefficient (PSOTVAC)), combined with three multiclass strategies (Winner-Takes-All (WTA), Max-Wins-Voting, and Directed Acyclic Graph). Results The 5-fold cross-validation results showed that "WTA-KSVM+PSOTVAC" performed best over the OASIS benchmark dataset, with an overall accuracy of 81.5% among all nine proposed classifiers. Moreover, "WTA-KSVM+PSOTVAC" significantly exceeded existing state-of-the-art methods (whose accuracies were less than or equal to 74.0%). Conclusion We validate the effectiveness of 3D-DWT. The proposed approach has the potential to assist in the early diagnosis of AD and MCI. | Detection of Alzheimer's disease and mild cognitive impairment based on structural volumetric MR images using 3D-DWT and WTA-KSVM trained by PSOTVAC
S1746809415001020 | The electrically evoked auditory brainstem response (eABR) is one of the clinically employed objective evaluation tools for cochlear implant (CI) subjects. It is commonly obtained by averaging responses, but because of the electric CI stimulation, some artifacts are phase locked to the stimulus and do not average out by increasing repetitions. A series of artifact reduction methods, such as general post-processing procedures for all subjects and individual post-processing procedures for some subjects, were developed in this study, aiming at reducing CI stimulation coherent artifacts. Seven bilateral CI subjects were recruited, and both monaural and binaural multi-channel eABRs were recorded in this study. The results show that the CI stimulation pulse artifacts can be efficiently removed by the general post-processing procedure, using alternating polarity stimuli combined with linear interpolation. Recordings obtained with non-alternating polarity show a strong exponential decay; exponential fitting and subtraction worked reasonably well in this case. For eABR recordings contaminated with facial nerve stimulation (FNS) artifacts, principal component analysis was introduced to minimize the FNS artifacts for potential clinical application in the future. | Reduction of stimulation coherent artifacts in electrically evoked auditory brainstem responses
S1746809415001032 | Detection of QRS complexes in ECG signals is required to determine heart rate, and it is an important step in the study of cardiac disorders. ECG signals are usually affected by noise of low and high frequency. To improve the accuracy of QRS detectors, several methods have been proposed to filter out the noise and detect the characteristic pattern of the QRS complex. Most of the existing methods suffer from relatively high computational complexity or high resource requirements, making them less suitable for implementation on portable embedded systems, wearable devices or ultra-low-power chips. We present a new method to detect the QRS complex in a simple way, with minimal computational cost and resource needs, using a novel non-linear filter. | Simple real-time QRS detector with the MaMeMi filter
S1746809415001044 | Hip bone fracture is one of the most important causes of morbidity and mortality in older adults, so a prediction model that provides guidance for the elderly is needed. A total of 725 subjects were involved, including 228 patients with a first low-trauma hip fracture and 497 age-, sex-, and living-area-matched controls (215 from the same hospital and 282 from the community). All subjects were interviewed with the same questionnaire, and the interviewees' answers were recorded in a database. Three-layer back-propagation artificial neural network (ANN) models were applied separately for females and males to predict the risk of hip bone fracture in the elderly. Furthermore, to improve the accuracy and generalization of the models, an ensemble ANN method was applied. To understand variable contributions and find the important variables for predicting hip fracture, sensitivity analysis and the connection-weights approach were applied. In this study, three ANN prediction models with different architectures were tested. With performance evaluated by fivefold cross-validation, one of the three models turned out to be the best prediction model and achieved high predictive performance. The best area under the receiver operating characteristic (ROC) curve and the accuracy of the prediction model were 0.91±0.028 (mean±SD) and 0.85±0.029 for females, and 0.99±0.015 and 0.93±0.020 for males. Using sensitivity analysis and connection weights, the input variables were ranked according to their contributions, and the top 10 variables accounted for a large share of the contribution to predicting hip fracture. The top 10 important variables for hip fracture in both females and males are similar to our previous results obtained from a logistic regression model and to other related research. 
In conclusion, ANNs have successfully been used to establish models for predicting the risk of hip bone fracture in both female and male older adults, and to identify, from 74 input variables, the top 10 most important variables for predicting hip bone fracture in the elderly. This study verified the performance of ANNs as a highly efficient prediction approach. | Ensemble artificial neural networks applied to predict the key risk factors of hip bone fracture for elders
S1746809415001056 | Averaging of nonlinearly aligned (time-warped) signal cycles is an important method for suppressing noise in quasi-periodic or event-related signals. However, in this paper we show that the operation of time warping violates the requirements that should be satisfied for effective averaging and, as a result, causes poor suppression of noise. To limit these effects, we redefine the matrix of alignment costs. To improve the results of averaging in cases of variable-energy noise, we apply weighting of the summed signal samples. The derived formula gives smaller weights to noisier signal cycles and in this way limits their influence on the constructed template. The proposed modifications caused a significant increase of the Noise Reduction Factor (NRF) in experiments on simulated evoked potentials. Whereas the greatest NRF obtained by the reference methods in a nonstationary white noise environment was 1.55, the proposed method achieved 4.44. For non-stationary colored noise, the corresponding values were 1.44 and 2.99. Moreover, application of the developed method to ECG signal processing, prior to measurement of the QT interval, significantly improved the measurements' immunity to noise. | Averaging of nonlinearly aligned signal cycles for noise suppression
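The core idea of down-weighting noisier cycles when building a template can be sketched as follows. The inverse-variance weighting shown here, with the per-cycle noise estimated from residuals about a first-pass ensemble mean, is an illustrative assumption, not the paper's exact derived formula:

```python
import numpy as np

def weighted_cycle_average(cycles):
    """Noise-weighted average of time-aligned signal cycles.

    cycles : (K, N) array, K aligned cycles of N samples each.
    Each cycle k is weighted proportionally to 1/sigma_k^2, where
    sigma_k^2 is the variance of its residual about a plain ensemble
    mean, so noisier cycles contribute less to the template.
    """
    cycles = np.asarray(cycles, dtype=float)
    mean0 = cycles.mean(axis=0)                       # first-pass template
    resid_var = ((cycles - mean0) ** 2).mean(axis=1)  # per-cycle noise estimate
    w = 1.0 / np.maximum(resid_var, 1e-12)            # inverse-variance weights
    w /= w.sum()
    return w @ cycles                                 # weighted template (N,)
```

For instance, a cycle whose residual variance is four times larger than the others receives a quarter of their weight, which is the behaviour the abstract describes for variable-energy noise.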
S1746809415001068 | Objective Magnetic resonance imaging (MRI) is the primary imaging technique for evaluation of the brain tumor progression before and after radiotherapy or surgery. The purpose of the current study is to exploit conventional MR modalities in order to identify and segment brain images with neoplasms. Methods Four conventional MR sequences, namely, T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid attenuation inversion recovery, are combined with machine learning techniques to extract global and local information of brain tissues and model the healthy and neoplastic imaging profiles. Healthy tissue clustering, outlier detection and geometric and spatial constraints are applied to perform a first segmentation which is further improved by a modified multiparametric Random Walker segmentation method. The proposed framework is applied on clinical data from 57 brain tumor patients (acquired by different scanners and acquisition parameters) and on 25 synthetic MR images with tumors. Assessment is performed against expert-defined tissue masks and is based on sensitivity analysis and Dice coefficient. Results The results demonstrate that the proposed multiparametric framework differentiates neoplastic tissues with accuracy similar to most current approaches while it achieves lower computational cost and higher degree of automation. Conclusion This study might provide a decision-support tool for neoplastic tissue segmentation, which can assist in treatment planning for tumor resection or focused radiotherapy. | A low cost approach for brain tumor segmentation based on intensity modeling and 3D Random Walker |
S174680941500107X | The response of the brain to a sensory stimulus may present itself in the electroencephalogram (EEG) as evoked and/or induced activity. While the evoked response is given by peaks and troughs in the signal, time-locked and phase-locked to the stimuli, the induced response is time- but not phase-locked, and can be considered as an increase or a decrease in the power of EEG in a specific frequency band at a specific time range with regard to the stimulus onset. The induced response does not have the same phase following successive stimuli. It is believed that cognition and perception of a stimulus present themselves primarily as the induced response in the EEG. In this paper, the induced response of the brain to auditory speech stimuli is investigated and different approaches to detect induced activity are compared. It is shown that there is an increase in theta and delta power in response to words compared to the baseline, starting around 500ms after their onset. During this time, there is also an increase in pairwise coherence between the posterior electrodes. In response to tone bursts, a change in pairwise coherence was observed in the beta band starting around 200ms. To the best of our knowledge, this is the first time such responses have been described using simple protocols without complex stimulus manipulations being involved. Responses in the EEG to speech rather than the more conventional tone-bursts or clicks suggests that it may be feasible to use the EEG as an objective means to demonstrate brain activation to salient real world stimuli. This would be of particular benefit in investigating access to speech in patients who are unable or unwilling to reliably respond to conventional subjective experimental protocols, such as infants. | Induced activity in EEG in response to auditory stimulation |
S1746809415001081 | When choosing a representation for the classification of heartbeats, a common solution is to use the coefficients of a linear combination of basis functions, such as Hermite functions. Among the advantages of this representation is the possibility of using model selection criteria to choose the optimal representation, a property that is missing in other heartbeat representation schemes. However, to date none of the authors who have used basis functions has studied the optimal model length (the number of functions in the linear combination). This length is usually chosen using ad hoc techniques such as visual inspection of the reconstruction obtained for a few beats. This has led to choices as different as representing the QRS of the beats by as few as 3 or as many as 20 Hermite functions. This paper studies the optimal number of Hermite functions to use when representing the QRS. The Hermite characterization of the QRS complex was calculated using from 2 to 30 functions. To determine the optimal number of functions, AIC and BIC were calculated for all the heartbeats in the MIT-BIH database, obtaining the optimum model length for each QRS. The features of the Hermite characterization have been studied using feature selection techniques. Data on the impact of the chosen representation length on computational resources are also presented. Using this information, we have developed a clustering algorithm based on mixture models that has a misclassification rate of 0.96% and 0.36% over the MIT-BIH database and the AHA database, respectively. | A study on the representation of QRS complexes with the optimum number of Hermite functions
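The procedure this abstract describes — least-squares fitting of a QRS segment with an increasing number of Hermite functions, then selecting the model length by an information criterion — can be sketched with NumPy. This is an illustrative implementation, not the authors' code: the width parameter `sigma` and the Gaussian-likelihood form of AIC are assumptions.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_basis(t, n_funcs, sigma=1.0):
    """Columns are the first n_funcs orthonormal Hermite functions
    phi_n(t) = H_n(t/sigma) exp(-t^2/(2 sigma^2)) / sqrt(sigma 2^n n! sqrt(pi))."""
    phi = np.empty((t.size, n_funcs))
    x = t / sigma
    for n in range(n_funcs):
        coef = np.zeros(n + 1)
        coef[n] = 1.0                      # select H_n
        norm = 1.0 / sqrt(sigma * 2.0**n * factorial(n) * sqrt(pi))
        phi[:, n] = norm * hermval(x, coef) * np.exp(-x**2 / 2.0)
    return phi

def best_order_aic(qrs, t, max_order=30, sigma=1.0):
    """Fit each model length by least squares and pick the AIC minimum."""
    best_aic, best_k = np.inf, None
    for k in range(2, max_order + 1):
        phi = hermite_basis(t, k, sigma)
        coefs, *_ = np.linalg.lstsq(phi, qrs, rcond=None)
        rss = np.sum((qrs - phi @ coefs) ** 2)
        aic = t.size * np.log(rss / t.size + 1e-15) + 2 * k
        if aic < best_aic:
            best_aic, best_k = aic, k
    return best_k
```

Replacing the `2 * k` penalty with `k * log(N)` gives the BIC variant the abstract also mentions.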
S174680941500110X | Muscle fiber conduction velocity (MFCV) can be measured by estimating the time delay between surface EMG signals recorded by electrodes aligned with the fiber direction. In the case of dynamic contractions, the EMG signal is highly non-stationary and the time delay between recording sites may vary rapidly over time. Thus, the processing methods usually applied in the case of static contractions no longer hold, and the delay estimation requires processing techniques adapted to non-stationary conditions. The current paper investigates several methods based on time-frequency approaches or adaptive filtering in order to solve the time-varying delay estimation problem. These approaches are theoretically analyzed and compared by Monte Carlo simulations in order to determine whether their performance is sufficient for practical applications. Moreover, results obtained on experimental signals recorded from the vastus medialis muscle during cycling are also shown. The study presents for the first time a set of approaches for instantaneous delay estimation from two-channel EMG signals. | Time-varying delay estimators for measuring muscle fiber conduction velocity from the surface electromyogram
S1746809415001111 | The physiological artifacts such as electromyogram (EMG) and electrooculogram (EOG) remain a major problem in electroencephalogram (EEG) research. A number of techniques are currently in use to remove these artifacts with the hope that the process does not unduly degrade the quality of the obscured EEG. In this paper, a new method has been proposed by combining two techniques: a canonical correlation analysis (CCA) followed by a stationary wavelet transform (SWT) to remove EMG artifacts and a second-order blind identification (SOBI) technique followed by SWT to remove EOG artifacts. The simulation results show that these combinations are more effective than either using the individual techniques alone or using other combinations of techniques. The quality of the artifact removal is evaluated by calculating correlations between processed and unprocessed data, and the practicability of the technique is judged by comparing execution times of the algorithms. | Artifacts-matched blind source separation and wavelet transform for multichannel EEG denoising |
S1746809415001123 | This paper presents the application of a bio-inspired method for optimizing a lifelike vectorcardiographic (VCG) model. During the model estimation, a Particle Swarm Optimization (PSO) seeks the optimal combination of all parameters that maximize the correlation coefficient (r) and minimize the Mean Squared Error (MSE) between the synthetic and directly measured VCG leads. The proposed method was tested on 52 different VCG records annotated as healthy control (HC) from the PTB database. 156 models were individualized without any previous analysis of the waves of the original records. The PSO method automatically provides very realistic models with a correlation coefficient r > 0.995 and MSE < 0.0005 mV² for 152 of the 156 VCG signals. | Individualization of a vectorcardiographic model by a particle swarm optimization
S1746809415001135 | Ultrasounds, besides their well-established medical imaging role, influence the homeostasis of complex anatomical systems including the physiology of neurons and glia and the permeability of the blood brain barrier. In this study, neurons and microglial cells were treated with ultrasounds (commonly used in diagnostics) and differences in cell proliferation and morphology were evaluated in comparison to control, untreated cells. Cell proliferation was evaluated by standard viability assessment, while the quantitative analysis of cell morphology, usually performed by edge and line detection algorithms, required the development of a new special algorithm. In fact, traditional software methodologies do not provide the appropriate tools for morphological analysis of neurons and microglial cells, typically characterized by a roughly triangular body and numerous elongations of different lengths resulting in a complex neuron–microglia network. This new method, based on a modified Hough Transform algorithm using a matching operator instead of the common gradient filter, enabled the automatic identification of cell elongations and branches, the extraction of related information, and the comparison of the data between control and treated neurons, as well as microglial cells. Results, based on the development of the new algorithm, showed that in ultrasound-treated cells, the number of elongations, as well as their maximum and mean lengths, increased significantly in comparison to control, untreated cells. These results were consistent with the standard microscopic evaluation. Furthermore, a significant correlation between cell morphology and proliferation suggested that ultrasounds induced cell differentiation affecting cell morphology, as well as the ability of neurons and microglial cells to form complex networks. 
Our results suggest the possibility of using ultrasounds, currently utilized in diagnostics, to reconstitute neuronal and microglial circuits that are often altered in neurodegenerative and neurodevelopmental disorders. | Effect of ultrasounds on neurons and microglia: Cell viability and automatic analysis of cell morphology |
S1746809415001147 | In this paper, multilead electrocardiogram (MECG) data compression using singular value decomposition in multiresolution domain is proposed. It ensures a high compression ratio by exploiting both intra-beat and inter-lead correlations. A new thresholding technique based on multiscale root fractional energy contribution is proposed. It selects the singular values depending on the clinical importance of the wavelet subbands. The proposed method is evaluated with the PTB Diagnostic ECG database. This compression method is embedded with a pulse amplitude modulated direct sequence-ultra wideband technology for transmission of the MECG data. This may be useful in telemonitoring services for the wireless body sensor network. A comparative study of computational time complexity has also been carried out. The results show that the proposed method can be executed at least three times faster than the existing methods. The storage efficiency is enhanced by 19 times using this method. | Multilead ECG data compression using SVD in multiresolution domain |
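The core compression step this abstract builds on — truncating the singular value decomposition of a multilead ECG data matrix — can be sketched with NumPy. This is an illustrative sketch of plain time-domain SVD truncation, with the standard compression-ratio and percentage-RMS-difference (PRD) measures; the paper's method additionally applies the SVD in the wavelet (multiresolution) domain with its own thresholding, which is not reproduced here:

```python
import numpy as np

def svd_compress(X, r):
    """Truncated-SVD compression of a multilead ECG data matrix X
    (rows = leads or beats, columns = samples).

    Keeps the r largest singular values. Returns the reconstruction,
    the compression ratio (original values vs. stored factor values),
    and the PRD reconstruction error in percent.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = U[:, :r] * s[:r] @ Vt[:r, :]                # rank-r reconstruction
    cr = X.size / (r * (X.shape[0] + X.shape[1] + 1))  # compression ratio
    prd = 100.0 * np.linalg.norm(X - Xr) / np.linalg.norm(X)
    return Xr, cr, prd
```

The intra-beat and inter-lead correlations the abstract exploits are what make the singular value spectrum of such matrices decay quickly, so a small `r` already yields a low PRD.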
S1746809415001160 | Auditory Brain Machine Interfaces were designed for patients with severe neurofunctional disabilities, such as those suffering from late-stage amyotrophic lateral sclerosis (ALS), who have impaired eye movements or are unable to maintain gaze, preventing them from using visual strategies for communication. This study explores the three-stimulus auditory oddball paradigm with binary choices (yes/no) associated with Empirical Mode Decomposition to extract features used to train and test a Support Vector Machine classifier. Data from standard EEG channels and from the N200-anterior-contralateral (N2ac) response signal were tested. Ten healthy male subjects, aged 20 to 27 years, participated in the experiment. The best performance (average classification accuracy of 87.41% and information transfer rate of 6.48 bit/min) was achieved when features extracted from the N2ac response were added to features extracted from the EEG channels. Also, the results showed that using target stimuli with larger frequency separation helps the subjects focus better on the desired answer. | The anterior contralateral response improves performance in a single trial auditory oddball BMI
S1746809415001251 | Average Rectified Value (ARV) and Root Mean Square (RMS) are amplitude indicators commonly used in the field of EMG either in time or space. These two indicators are compared (a) analytically for a one dimensional sinusoid, sum of sinusoids, two dimensional sinusoids, and (b) numerically by simulating a high density detection system, sampling in space the distribution of propagating surface action potentials generated by a muscle motor unit (MU). For any signal sampled above the Nyquist frequency the estimated RMS does not depend on the sampling rate while the estimated ARV does. The surface potential is often sampled in space below the Nyquist frequency, by high density surface EMG detection systems (HDsEMG), generating aliasing in space. For point-like electrodes, the lowest spatial sampling frequency corresponding to the largest inter-electrode distance (IED), which avoids spatial aliasing for a simulated MU action potential, is 100 samples/m (IED=10mm). Therefore, IEDs below this value are recommended for measurements of EMG image features. From the theoretical point of view, the spatial RMS of sEMG images is more robust than the ARV with respect to the IED and should be preferred. | Amplitude indicators and spatial aliasing in high density surface electromyography recordings |
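The two amplitude indicators compared in this abstract have simple definitions (ARV is the mean absolute value, RMS the root of the mean square), and the sampling-rate sensitivity of ARV is easy to demonstrate: for a unit sinusoid sampled over whole periods above the Nyquist rate, RMS equals 1/√2 regardless of the sampling density, while ARV only approaches its continuous value 2/π as the density grows. An illustrative NumPy sketch:

```python
import numpy as np

def arv(x):
    """Average Rectified Value: mean absolute amplitude."""
    return np.mean(np.abs(np.asarray(x, dtype=float)))

def rms(x):
    """Root Mean Square amplitude."""
    return np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))

# Unit sinusoid sampled over one full period at two densities,
# both above the Nyquist rate of 2 samples per period.
x_coarse = np.sin(2 * np.pi * np.arange(4) / 4)      # 4 samples/period
x_fine = np.sin(2 * np.pi * np.arange(1000) / 1000)  # 1000 samples/period
```

Here `rms(x_coarse)` and `rms(x_fine)` both equal 1/√2, whereas `arv(x_coarse)` is 0.5 rather than 2/π ≈ 0.637 — the sampling-rate dependence of ARV that the abstract points out, which in space becomes a dependence on the inter-electrode distance.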
S1746809415001263 | The multifocal visual evoked potential (mfVEP) test measures the potentials obtained by simultaneous stimulation of multiple areas of the visual field. The principal mfVEP signal characteristics analysed are intensity and latency. A new parameter named Percentage of Energy (PoE), based on the energy distribution along the recording, is defined to evaluate mfVEP response intensity. mfVEP signals from controls and from subjects classified according to the risk of developing Multiple Sclerosis (MS) are used: 24 controls, 15 with Radiologically Isolated Syndrome (RIS), 28 with Clinically Isolated Syndrome (CIS) and 28 with diagnosed MS. Eyes from the CIS and MS groups were classified into eyes affected by optic neuritis (NO) and eyes not affected by optic neuritis (nON). The intensity of the mfVEP signals is analysed using the classic signal-to-noise ratio (SNR) and the PoE parameter. Discrimination indices between controls and the different groups of diagnosed patients, based on the area under the curve, are calculated using SNR and PoE. Lastly, the reproducibility of SNR and PoE is analysed. The mean coefficient of variability (CV) over all subjects is CV = 0.34 for SNR and CV = 0.17 for PoE (p < 0.05). Significant differences between diagnosed groups are found in eight cases when SNR is used to quantify the intensity of the mfVEP signals; when the PoE parameter is used, significant differences are found in 10 cases. The mean area under the curve between controls and diagnosed patients increases from AUC = 0.79 with SNR to AUC = 0.83 with PoE (p < 0.05). Better concordance in the test–retest measurements is obtained using the PoE parameter. In conclusion, the PoE parameter presents advantages over the SNR parameter when characterizing mfVEP signal amplitude. | A new method for quantifying mfVEP signal intensity in multiple sclerosis
S1746809415001275 | Previous studies have shown that the underlying process of speech generation exhibits nonlinear characteristics. Since linear features cannot represent a nonlinear system thoroughly, this paper employs new sets of non-linear measurements for assessing the quality of recorded voices. Such measurements could be exploited to implement efficient and convenient systems for diagnosing laryngeal diseases without using invasive methods. Three sets of features based on mutual information, false neighbor fraction, and the Lyapunov spectrum are investigated to this end. Furthermore, the distributions of the proposed features and their discriminative properties are investigated. Moreover, the described procedure benefits from the synergy between different concepts of pattern recognition. First, a genetic algorithm (GA) is invoked to find a near-optimum subset of features. Second, linear discriminant analysis (LDA) is applied to remove remaining redundancies and correlations between selected features. Finally, a support vector machine (SVM) is employed for learning decision boundaries. Sensitivity and specificity of 99.3% and 94%, respectively, were achieved in the simulation results. | Detection of vocal disorders based on phase space parameters and Lyapunov spectrum
S1746809415001287 | The selection of suitable antidepressants for Major Depressive Disorder (MDD) has been challenging and is mainly based on subjective assessments that include minimal scientific evidence. Objective measures that are extracted from neuroimaging modalities such as electroencephalograms (EEGs) could be a potential solution to this problem. This approach is achieved by the successful prediction of antidepressant treatment efficacy early in the patient's care. EEG-based relevant research studies have shown promising results. These studies are based on derived measures from EEG and event-related potentials (ERPs), which are called neurophysiological predictive biomarkers for MDD. This paper seeks to provide a detailed review on such research studies along with their possible limitations. In addition, this paper provides a comparison of these methods based on EEG/ERP common datasets from MDD and healthy controls. This paper also proposes recommendations to improve these methods, e.g., EEG integration with other modalities such as functional magnetic resonance imaging (fMRI) and magnetoencephalograms (MEG), to achieve better evidence of the efficacy than EEG alone, to eventually improve the treatment selection process. | Review on EEG and ERP predictive biomarkers for major depressive disorder |
S1746809415001299 | ECG steganography allows secured transmission of patient data that are tagged to the ECG signals. Signal deterioration leading to loss of diagnosis information and inability to retrieve patient data fully are the major challenges with ECG steganography. In this work, an attempt has been made to use curvelet transforms which permit identifying the coefficients that store the crucial information about diagnosis. The novelty of the proposed approach is the usage of curvelet transform for ECG steganography, adaptive selection of watermark location and a new threshold selection algorithm. It is observed that when coefficients around zero are modified to embed the watermark, the signal deterioration is the least. In order to avoid overlap of watermark, an n × n sequence is used to embed the watermark. The imperceptibility of the watermark is measured using metrics such as Peak Signal to Noise Ratio, Percentage Residual Difference and Kullback-Leibler distance. The ability to extract the patient data is measured by the Bit Error Rate. Performance of the proposed approach is demonstrated on the MIT-BIH database and the results validate that coefficients around zero are ideal for watermarking to minimize deterioration and there is no loss in the data retrieved. For an increased patient data size, the cover signal deteriorates but the Bit Error Rate is zero. Therefore the proposed approach does not affect diagnosability and allows reliable steganography. | ECG steganography using curvelet transform |
S1746809415001305 | In this paper, we propose a modeling technique for the QRS complex based on fractional linear prediction (FLP). As a result of FLP modeling, each QRS complex is represented by a vector of three coefficients. The FLP modeling evaluation is achieved in two steps. In the first step, the ability of the FLP coefficients to efficiently model QRS complex waves is assessed by comparison with the Linear Prediction (LP) coefficients through the signal-to-error ratio (SER) values evaluated between the original waves and the predicted ones. In the second step, the performance of several classifiers is used to evaluate the effectiveness and robustness of FLP modeling over LP modeling. Classifiers are fed with the three estimated coefficients in order to discriminate premature ventricular contraction (PVC) arrhythmia from normal beats. The study has successfully demonstrated that FLP modeling can be an alternative to LP modeling in the field of QRS complex modeling. | Detection of PVC in ECG signals using fractional linear prediction
S1746809415001317 | Conventional clustering methods for analyzing fMRI data usually meet difficulties such as huge sample sizes, slow processing speed, and serious noise effects. In this study, a novel adaptive RV measure based fuzzy weighting subspace clustering (ARV-FWSC) is proposed for fMRI data analysis. In this approach, the adaptive RV measure, different from traditional distance measures like the Euclidean distance or the Pearson correlation coefficient, is applied to the clustering process, where the distance measure between two single voxels is converted into the adaptive RV measure between two sets of multiple voxels contained in the correspondingly generated cubes, whose shape is automatically updated by setting a threshold on the weighted template. Meanwhile, a simple denoising mechanism is also used to find noise points, whose generated cube contains only the center voxel, and to directly exclude those noise voxels from the clusters. Furthermore, a modified fuzzy weighting subspace clustering is introduced to measure the importance of each dimension to a particular cluster, where the proposed algorithm takes the influence of different time points in each clustering process into account, besides having the advantages of ordinary fuzzy clustering like FCM (fuzzy c-means). Several evaluation metrics, e.g., coverage degree, ROC curve, and the number of clustering iterations, are adopted to assess the performance of ARV-FWSC on real fMRI data compared with those of GLM (general linear model), ICA (independent component analysis), and FCM. Extensive experimental results show that the proposed ARV-FWSC for fMRI data analysis can effectively improve the clustering speed and raise the clustering accuracy. | An adaptive RV measure based fuzzy weighting subspace clustering (ARV-FWSC) for fMRI data analysis
S1746809415001329 | Monitoring patient-specific respiratory mechanics can be used to guide mechanical ventilation (MV) therapy in critically ill patients. However, many patients can exhibit spontaneous breathing (SB) efforts during ventilator supported breaths, altering airway pressure waveforms and hindering model-based (or other) identification of the true, underlying respiratory mechanics necessary to guide MV. This study aims to accurately assess respiratory mechanics for breathing cycles masked by SB efforts. A cumulative pressure reconstruction method is used to ameliorate SB by identifying SB affected waveforms and reconstructing unaffected pressure waveforms for respiratory mechanics identification using a single-compartment model. Performance is compared to conventional identification without reconstruction, where identified values from reconstructed waveforms should be less variable. Results are validated with 9485 breaths affected by SB, including periods of muscle paralysis that eliminates SB, as a validation test set where reconstruction should have no effect. In this analysis, the patients are their own control, with versus without reconstruction, as assessed by breath-to-breath variation using the non-parametric coefficient of variation (CV) of respiratory mechanics. Pressure reconstruction successfully estimates more consistent respiratory mechanics. CV of estimated respiratory elastance is reduced up to 78% compared to conventional identification (p <0.05). Pressure reconstruction is comparable (p >0.05) to conventional identification during paralysis, and generally performs better as paralysis weakens, validating the algorithm's purpose. Pressure reconstruction provides less-affected pressure waveforms, ameliorating the effect of SB, resulting in more accurate respiratory mechanics identification, thus providing the opportunity to use respiratory mechanics to guide mechanical ventilation without additional muscle relaxants, simplifying clinical care and reducing risk. Australian New Zealand Trial Registry Number: ACTRN12613001006730. | Respiratory mechanics assessment for reverse-triggered breathing cycles using pressure reconstruction
S1746809415001330 | Previous research reveals the presence of relatively strong spatial correlations of spontaneous activity over the cortex in Electroencephalography (EEG) and Magnetoencephalography (MEG) measurements. A critical obstacle in MEG current source mapping is that strong background activity masks the relatively weak local information. In this paper, the hypothesis is that the dominant components of this background activity can be captured by the first Principal Component (PC) after employing Principal Component Analysis (PCA); thus, discarding the first PC before the back projection would enhance the exposure of the information carried by a subset of sensors that reflects the local neuronal activity. By recording MEG signals densely (one measurement per 2×2 mm²) in three piglet neocortical models over an area of 18×26 mm² with a specially shaped lesion by means of a μSQUID, this basic idea was demonstrated: a strong activity could be imaged in the lesion region after removing the first PC in the Delta, Theta and Alpha bands, while the original recordings did not show such activity clearly. Thus, the PCA decomposition can be employed to expose the local activity, which is around the lesion in the piglet neocortical models, by removing the dominant components of the background activity. | Current source mapping by spontaneous MEG and ECoG in piglets model
S1746809415001342 | Non-invasive quantitative MRI methods, such as Diffusion Tensor Imaging (DTI), can offer insights into diverse developmental brain disorders such as dyslexia, the most prevalent reading disorder in childhood. In this article, we quantified the microstructural attributes of the main fascicles of both hemispheres related to the reading network in three groups of Spanish children: typically developing readers (TDR or controls), dyslexic readers (DXR) and readers with monocular vision due to ocular motility disorders (MVR), to assess whether the neuronal network for reading in dyslexic children shares similarities with that in children with impaired binocular vision due to ocular motility disorders. Diffusion anisotropy, and mean, radial and axial diffusivity of cross-sectional subregions of the main fascicles studied were computed using a validated DTI methodology. Our results reveal differences in fractional anisotropy (FA) values between the DXR and the non-dyslexic readers, with a decreased FA for the DXR and no significant differences between TDR and MVR groups in the left Arcuate fasciculus, and a tendency to higher FA values in the DXR group compared to the other two groups in the genu of the Corpus Callosum (CC). In the splenium of the CC a trend towards higher FA values was observed in the DXR and MVR groups versus the TDR. This study reveals a different brain connectivity pattern for reading in Spanish children with dyslexia from those with impaired binocular vision due to ocular motility disorders, which would support the hypothesis that ocular motility disorders are not a causal factor of dyslexia. | Differences in effective connectivity between children with dyslexia, monocular vision and typically developing readers: A DTI study
S1746809415001354 | Speckle reduction is an important pre-processing stage for ultrasound medical image processing. In this paper, an adaptive fuzzy logic approach for speckle noise reduction in ultrasound images is presented. In the proposed method, adaptiveness is incorporated at two levels. In the first level, image regions are classified by applying fuzzy logic to the coefficients of variation computed from the noisy image. The most suitable filter for the particular image region is adaptively selected by the system, yielding appreciable improvement in noise suppression and preservation of image structural details. At the second level, to distinguish between edges and noise, the proposed method uses a weighted averaging filter. The structural similarity measure, which depends on the nature of the image and the quantity of noise present in the image, is used as the tuning parameter. Thus, with two levels of adaptiveness, the proposed method has better edge preservation compared to existing methods. Experimental results of the proposed method for natural images, Field II simulated images and real ultrasound images show that the proposed denoising algorithm has better noise suppression and is able to preserve edges and image structural details compared with existing methods. | Adaptive speckle reduction in ultrasound images using fuzzy logic on Coefficient of Variation
S1746809415001366 | Wrist pulse has been a physical health indicator in Traditional Chinese Medicine (TCM) throughout a long history. With the development of sensor technology and bioinformatics, quantifying pulse diagnosis by using signal processing technology has attracted increasing attention in recent years. Since wrist pulse signals collected by the sensors are often corrupted by artifacts in real situations, many approaches to wrist pulse preprocessing, including pulse de-noising and baseline wander removal, have been introduced for more accurate wrist pulse analysis. However, these scattered methods are incomplete and have limitations when used to preprocess our pulse data for clinical applications. This paper presents a robust signal preprocessing framework for wrist pulse analysis. The cascade filter based on frequency-dependent analysis (FDA) is first introduced to remove the high frequency noises and to select the significant pulse intervals. Then the curve fitting method is developed to adjust the direction and the baseline drift with minimum signal distortion. Last, period segmentation and pulse normalization are applied for the feature extraction. The effectiveness of the proposed pulse preprocessing is validated through experiments on actual pulse records with biochemical markers. In contrast with the traditional methods, the proposed preprocessing framework is effective in extracting more accurate pulse features, and the highest classification rate, 91.6%, is obtained in diabetes diagnosis. The results demonstrate that our method is superior to previous pulse preprocessing approaches and practical for wrist pulse analysis. | A robust signal preprocessing framework for wrist pulse analysis
S1746809415001378 | Evaluation of human kinematic performance is essential in rehabilitation and skill assessment. These services are in high demand where the improvements made due to exercises need to be regularly assessed. Some industries also need to evaluate their employees' capabilities quantitatively for accident compensation and insurance purposes. In particular, these assessments are preferred to be based on more quantifiable measures in a standardized form ensuring accuracy, reliability, ease of use and anywhere anytime information to the clinician. Therefore, it is necessary to have an efficient mechanism for evaluation and assessment of human kinematic movements, as the current motion matching and recognition algorithms fall short due to characteristically strict specifications required in numerous health care applications. In this paper, we propose a summative approach using a double integral to define a closeness measure between two trajectories typically generated by human movement. This approach can be considered as a spatial scoring mechanism in the evaluation of human kinematic performance as well as in movement recognition applications. Several experiments based on computer simulations as well as real data were set up to examine the performance of the proposed approach as a scoring mechanism for the evaluation of human kinematic performances. The results demonstrated better characterization of the movement assessment and motion recognition ability, with a recognition rate of 86.19%, than the currently used methods such as Gaussian mixture models and pose normalization employed in motion recognition tasks. Finally, we use the scoring mechanism to analyze the proximity in human kinematic performance. | A summative scoring system for evaluation of human kinematic performance
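The double-integral closeness described in the abstract above can be approximated discretely as an average over all point pairs of two sampled trajectories. This is a simplified, hypothetical stand-in for the paper's definition (the function name `closeness`, the Euclidean metric and the uniform averaging are illustrative assumptions, not the authors' exact formulation):

```python
def closeness(p, q):
    """Average Euclidean distance over all point pairs of two trajectories.

    A discrete analogue of a double integral over both time axes;
    smaller values mean the two trajectories are closer in space.
    Each trajectory is a list of points (equal-dimension sequences).
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    total = sum(dist(a, b) for a in p for b in q)
    return total / (len(p) * len(q))
```

Because every point of one trajectory is compared against every point of the other, the score is insensitive to how the two recordings are aligned in time, which fits the spatial-scoring use described above.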
S174680941500138X | Permutation entropy (PE) is a well-known and fast method extensively used in many physiological signal processing applications to measure the irregularity of time series. Multiscale PE (MPE) is based on assessing the PE for a number of coarse-grained sequences representing temporal scales. However, the stability of the conventional MPE may be compromised for short time series. Here, we propose an improved MPE (IMPE) to reduce the variability of entropy measures over long temporal scales, leading to more reliable and stable results. Using a set of synthetic signals, we gain insight into the dependency of MPE and IMPE on several straightforward signal processing concepts that appear in biomedical activity. We also apply these techniques to real biomedical signals via publicly available electroencephalogram (EEG) recordings acquired with eyes open and closed and to ictal and non-ictal intracranial EEGs. We conclude that IMPE improves the reliability of the entropy estimations in comparison with the traditional MPE and that it is a promising technique to characterize physiological changes affecting several temporal scales. We provide the source codes of IMPE and the synthetic data in the public domain. | Improved multiscale permutation entropy for biomedical signal analysis: Interpretation and application to electroencephalogram recordings
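The coarse-graining plus ordinal-pattern pipeline that MPE builds on can be sketched as follows. This is a minimal illustration of conventional MPE, not the authors' IMPE code (which refines the coarse-graining to stabilise long scales); function names and parameter defaults are illustrative:

```python
import math

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def permutation_entropy(x, order=3):
    """Normalised Shannon entropy of ordinal patterns of length `order`."""
    counts = {}
    for i in range(len(x) - order + 1):
        # Ordinal pattern: indices of the window samples in ascending order
        pattern = tuple(sorted(range(order), key=lambda k: x[i + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum(c / total * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))  # scaled to [0, 1]

def multiscale_pe(x, order=3, max_scale=5):
    """Conventional MPE: one PE value per coarse-graining scale."""
    return [permutation_entropy(coarse_grain(x, s), order)
            for s in range(1, max_scale + 1)]
```

A strictly monotone series yields PE 0 (a single ordinal pattern), while irregular series approach 1; IMPE, as described above, reduces the variance of such estimates at large scales.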
S1746809415001391 | A fundamental problem of retinal fundus image registration is the determination of corresponding points. The scale-invariant feature transform (SIFT) is a well-known algorithm in this regard. However, SIFT suffers from problems in the quantity and quality of the detected points when faced with high-resolution, low-contrast retinal fundus images. On the other hand, the human visual system directs attention to regions instead of points for feature matching. Being aware of these issues, this paper presents a new structure-based region detector, which identifies stable and distinctive regions, to find correspondences. Meanwhile, it describes a robust retinal fundus image registration framework. The region detector is based on a robust watershed segmentation that obtains closed-boundary regions within a clean vascular structure map. Since vascular structure maps are relatively stable in partially overlapping and temporal image pairs, the regions are unaffected by viewpoint, content and illumination variations of retinal images. The regions are approximated by convex polygons, so that robust boundary descriptors are achieved to match them. Finally, correspondences determine the parameters of the geometric transformation between input images. Experimental results on four datasets including temporal and partially overlapping image pairs show that our approach is comparable or superior to SIFT-based methods in terms of efficiency, accuracy and speed. The proposed method successfully registered 92.30% of 130 temporal image pairs and 91.42% of 70 different-field-of-view image pairs. | A structure-based region detector for high-resolution retinal fundus image registration
S1746809415001482 | Speech-language pathologists traditionally count the number of speech dysfluencies to measure the rate of stuttering severity. Subjective stuttering assessment is time consuming and highly dependent on the clinician's experience. The present study proposes an objective evaluation of speech dysfluencies (sound prolongation; syllable, word and phrase repetition) in continuous speech signals. The proposed method is based on finding similarity in successive frames of speech features for sound prolongation detection and in close segments of speech for repetition detection. Speech signals are initially parameterized into MFCC, PLP or filter bank energy feature sets. Then, a similarity matrix is calculated based on the similarities of all pairs of frames using a cross-correlation or Euclidean criterion. The similarity matrix is treated as an image and highly similar components are extracted using a proper threshold. By employing morphological image processing tools, irrelevant parts of the similarity matrix are removed and dysfluent parts are detected. The effects of different feature sets and similarity measures on classification results were examined. Promising classification accuracies of 99.84%, 98.07% and 99.87% were achieved for the detection of prolongation, syllable/word repetition and phrase repetition, respectively. | Automatic classification of speech dysfluencies in continuous speech based on similarity measures and morphological image processing tools
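The frame-similarity construction in the abstract above can be sketched as a pairwise similarity matrix that is then thresholded like a binary image. The cosine similarity used here is an illustrative stand-in for the cross-correlation and Euclidean criteria named in the abstract, and the morphological clean-up step is omitted:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def similarity_matrix(frames):
    """Pairwise similarity between per-frame feature vectors (e.g. MFCCs)."""
    n = len(frames)
    return [[cosine(frames[i], frames[j]) for j in range(n)] for i in range(n)]

def binarize(sim, threshold=0.95):
    """Keep only highly similar frame pairs, treating the matrix as an image."""
    return [[1 if v >= threshold else 0 for v in row] for row in sim]
```

In this representation, a prolonged sound appears as a solid block of 1s along the diagonal (many consecutive frames resembling each other), while a repetition appears as an off-diagonal block, which is what the morphological post-processing isolates.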
S1746809415001494 | Bio-signal classification has been used for various purposes in the literature, as it is versatile in diagnosing anomalies, improving overall health and sport performance, and creating intuitive human-computer interfaces. However, automatic identification of signal patterns on a streaming real-time signal requires a series of complex procedures. A plethora of heuristic methods, such as neural networks and fuzzy systems, have been proposed as solutions. These methods stipulate certain conditions, such as preconditioning the signals, manual feature selection and a large number of training samples. In this study, we introduce a novel variant and application of Collaborative Representation based Classification (CRC) in the spectral domain for recognition of hand gestures using raw surface electromyography (EMG) signals. CRC-based methods do not require a large number of training samples for efficient pattern classification. Additionally, we present a training procedure in which a high-end subspace clustering method is employed for clustering the representative samples into their corresponding class labels. Thereby, the need for feature extraction and manually spotting patterns on the training samples is obviated. We present the intuitive use of spectral features via circulant matrices. The proposed Spectral Collaborative Representation based Classification (SCRC) is able to recognize gestures with higher levels of accuracy for a fairly rich gesture set compared to the available methods. The worst recognition result among the four sets of experiments for each hand gesture, 97.3%, is still the best reported in the literature. The recognition results are reported with a substantial number of experiments and labeling computations. | Spectral Collaborative Representation based Classification for hand gestures recognition on electromyography signals
S1746809415001500 | Computer-aided sleep staging based on single channel electroencephalogram (EEG) is a prerequisite for a feasible low-power wearable sleep monitoring system. It can also eliminate the burden of the clinicians during analyzing a high volume of data by making sleep scoring less onerous, time-consuming and error-prone. Most of the prior studies focus on multichannel EEG based methods which hinder the aforementioned goals. Among the limited number of single-channel based methods, only a few yield good performance in automatic sleep staging. In this article, a single-channel EEG based method for sleep staging using recently introduced Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Bootstrap Aggregating (Bagging) is proposed. At first, EEG signal segments are decomposed into intrinsic mode functions. Higher order statistical moments computed from these functions are used as features. Bagged decision trees are then employed to classify sleep stages. This is the first time that CEEMDAN is employed for automatic sleep staging. Experiments are carried out using the well-known Sleep-EDF database and the results show that the proposed method is superior as compared to the state-of-the-art methods in terms of accuracy. In addition, the proposed scheme gives high detection accuracy for sleep stages S1 and REM. | Computer-aided sleep staging using Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and bootstrap aggregating |
S1746809415001512 | Glaucoma is an eye disease that results in irreversible loss of vision. The manual examination of the optic disk (OD) is a standard procedure used for detecting glaucoma. This paper presents a glaucoma expert system based on the segmentations of the OD and optic cup obtained from color fundus images. A novel implicit region based active contour model is proposed for OD segmentation which incorporates the image information at the point of interest from multiple image channels to have robustness against the variations found in and around the OD region. A novel optic cup segmentation method is also proposed based on the structural and gray level properties of the cup. Based on the precise information about the contours of the OD and cup, different parameters are calculated for glaucoma assessment. The proposed system is evaluated on 59 retinal images comprising 17 normal and 42 glaucomatous images against the ground truths given by an experienced ophthalmologist. The proposed OD segmentation method achieved an average F-score of 0.975, average boundary distance of 10.112 pixels and average correlation coefficient of 0.916. The cup segmentation method attained an average F-score of 0.89, average boundary distance of 18.927 pixels and average correlation coefficient of 0.835. The mean error and standard deviation of the error σ for all the parameters are much smaller in glaucomatous images compared to normal images. This indicates high sensitivity of the proposed method in glaucoma assessment. | Segmentation of optic disk and optic cup from digital fundus images for the assessment of glaucoma
S1746809415001536 | Deformable image registration remains a challenging research area due to difficulties associated with local intensity variation and large motion. In this paper, an Accurate Inverse-consistent Symmetric Optical Flow (AISOF) method is proposed to overcome these difficulties. The two main contributions of AISOF include the following: (1) a coarse-to-fine strategy for an inverse-consistent symmetric method and (2) a novel Hybrid Local Binary Pattern (HLBP) to the classical Lucas–Kanade optical flow method. The HLBP consists of a median binary pattern and a generalised centre-symmetric local binary pattern. The generalised centre-symmetric local binary pattern has two thresholds, and this pattern can capture more information than the classical centre-symmetric local binary pattern, which has one threshold. The proposed HLBP can cope well with high contrast intensity and local intensity variation. Because the inverse-consistent symmetric method can reduce inverse consistency errors in Markov random fields based registration methods, we adopted this method to improve the accuracy of registration. In addition, a coarse-to-fine strategy was adopted to handle large motion. The proposed AISOF method was evaluated on 10 publicly available 4D CT lung datasets from the DIR-Lab. The mean target registration error of the AISOF method is 1.16 mm, which is significantly superior to the error of the classical Lucas–Kanade optical flow method, 2.83 mm. Moreover, this error is also the smallest of all unmasked registration methods using these datasets. | Accurate inverse-consistent symmetric optical flow for 4D CT lung registration
S1746809415001548 | The joint estimation of the parameters and the states of the hemodynamic model from the blood oxygen level dependent (BOLD) signal is a challenging problem. In the functional magnetic resonance imaging (fMRI) literature, quite interestingly, many proposed algorithms work only as filtering methods. This makes the estimation of hidden states and parameters less reliable compared with algorithms that use smoothing. In standard implementations, smoothing is performed only once. However, joint state and parameter estimation can be improved substantially by iterating smoothing schemes such as the extended Kalman smoother (IEKS). In the fMRI literature, extended Kalman filtering is thought to be less accurate than standard particle filtering (PF). We compared the EKF with PF and observed that the contrary is true. We improved the EKF performance by adding a smoother. With the iterative scheme, joint hemodynamic state and parameter estimation is improved substantially. We compared IEKS performance with the square-root cubature Kalman smoother (SCKS) algorithm. We show that it is more accurate in state and parameter estimation and much faster than the iterative SCKS. SCKS was found to be a better estimator than the dynamic expectation maximization (DEM), EKF, local linearization filter (LLF) and PF methods. We show in this paper that IEKS is a better estimator than the iterative SCKS under different process and measurement noise conditions. As a result, IEKS seems to be the best method we evaluated in all aspects. | Joint parameter and state estimation of the hemodynamic model by iterative extended Kalman smoother
S174680941500155X | Recent psychoacoustic studies found that across-band envelope correlation (ABEC) carried important information for speech intelligibility. This motivated the present work to propose an ABEC-based intelligibility measure that could be used to non-intrusively predict speech intelligibility in noise using only temporal envelope waveforms extracted from the noise-corrupted speech. The proposed ABEC-based metric (denoted as ABECm) was computed by averaging the correlation coefficients of mean-removed envelope waveforms from adjacent frequency bands of the noise-corrupted speech. The ABECm measures were evaluated with intelligibility scores obtained from normal-hearing listeners presented with sentences corrupted by four types of maskers in a total of 22 conditions. High correlation (r =0.96) was obtained between ABECm values and listeners’ sentence recognition scores, and this correlation was comparable to those computed with existing intrusive and non-intrusive intelligibility measures. This suggests that across-band envelope correlation may work as a simple but efficient predictor of speech intelligibility in noise, whose computation does not need access to the clean reference signal. | Predicting the intelligibility of noise-corrupted speech non-intrusively by across-band envelope correlation |
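A minimal sketch of the kind of computation ABECm describes, averaging Pearson correlations of mean-removed envelope waveforms from adjacent frequency bands, assuming the band envelopes have already been extracted (band-pass filtering and envelope extraction are not shown, and the function names are illustrative):

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def abec(envelopes):
    """Average correlation of envelope waveforms from adjacent bands.

    `envelopes` is a list of temporal envelope sequences, ordered by
    band centre frequency; only adjacent pairs are compared.
    """
    rs = [pearson(lo, hi) for lo, hi in zip(envelopes[:-1], envelopes[1:])]
    return sum(rs) / len(rs)
```

Because the metric is computed entirely from the noise-corrupted envelopes, it needs no clean reference signal, which is what makes it a non-intrusive predictor as described above.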
S1746809415001561 | Electrocardiography is considered a useful diagnostic tool for evaluating the heart's condition. The QRS complex, which is produced by depolarization of the heart ventricles, is the main graphical deflection seen on a typical electrocardiogram tracing. Detection of the QRS complexes is the first step toward analyzing the electrocardiogram signal. In this regard, many different algorithms have been proposed so far. In the present work an algorithm based on an entropy measure is proposed which uses the calculation of the time-dependent entropy for QRS complex detection. The algorithm is implemented in a way that the entropy of the electrocardiogram can be calculated at different temporal resolutions to improve the detection rate for different QRS morphologies. The MIT-BIH arrhythmia and CSE databases are selected to test the performance of the proposed algorithm. The precision and sensitivity of the proposed method for the MIT-BIH database are 99.85% and 99.75%, respectively. A detection rate of 99.82% is also achieved for the CSE database. Additionally, the proposed algorithm is fast enough to be applied in real-time. | A multiresolution time-dependent entropy method for QRS complex detection
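The notion of a time-dependent entropy can be sketched as the Shannon entropy of an amplitude histogram computed over a sliding window, with the window length acting as the temporal resolution. This is an illustrative sketch under those assumptions, not the authors' exact formulation:

```python
import math

def window_entropy(window, bins=10):
    """Shannon entropy of the amplitude histogram of one window."""
    lo, hi = min(window), max(window)
    if hi == lo:  # a flat window carries no amplitude information
        return 0.0
    counts = [0] * bins
    for v in window:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    n = len(window)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def time_dependent_entropy(signal, win=8):
    """Entropy profile over a sliding window; abrupt amplitude changes
    such as QRS complexes show up as local peaks."""
    return [window_entropy(signal[i:i + win])
            for i in range(len(signal) - win + 1)]
```

Running the profile at several window lengths gives the multiresolution view alluded to above, since short windows resolve narrow QRS complexes while longer windows smooth over baseline noise.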
S1746809415001639 | Polynomial regression is the most common method to estimate the relationship between muscle signals and torque during muscle contraction, but it is not capable of characterizing important transient patterns in the signal–torque relationship that only exist during short bursts of torque but may convey detailed information about muscle behavior. In this study, we proposed an integrated data analysis approach based on local polynomial regression (LPR) to identify transient patterns in the signal–torque relationship. For each subject, the LPR method can represent electromyography (EMG), mechanomyography (MMG) and ultrasonography (US) features as nonlinear functions of torque and can further estimate the derivatives of these signal–torque nonlinear functions. Further, a number of break points can be detected from the derivatives of the signal–torque relationships at the group level, and they can segment the signal–torque relationships into several stages, where multimodal features change with torque in different dynamic manners. Eight subjects performed isometric ramp contraction of the knee up to 90% of the maximal voluntary contraction (MVC). EMG, MMG and US were simultaneously recorded from the rectus femoris muscle. Results showed that, for each feature, the whole torque range was clearly segmented into several distinct stages by the proposed method and the feature–torque relationship could be approximately described by a piecewise linear function with different slopes at different stages. A critical break-point of 20% MVC was detected during the isometric contraction for all muscle signals. As compared with conventional regression methods, the proposed LPR-based data analysis approach can effectively identify stage-dependent transient patterns in the feature–torque relationships, providing deeper insights into the motor unit activation strategy. | Identifying transient patterns of in vivo muscle behaviors during isometric contraction by local polynomial regression
S1746809415001640 | The reshape scale (RS) method computes the transition of Sample Entropy (SEn) from low to large values as the scale is increased. At the largest scale, SEn asymptotically converges to the maximum SEn evaluated using the average of SEn for the fully randomly re-ordered realizations of the original signal. The entropic half-life (EnHL) characterizes such a transition. The EnHL is the scale representing the midpoint of the transition and yields a measure for the temporal degradation of the regularity in a signal. In postural balance studies, the EnHL of the center of pressure (COP) signal can be interpreted as the time elapsed before the old sensory state is no longer utilized by the postural control system to adjust the current COP position. Other equally sensitive measures of regularity, such as de-trended fluctuation analysis (DFA), can be interpreted in the same way; however, DFA results in a dimensionless measure of regularity and complexity. The primary objective of this study was to experimentally demonstrate the correlation between the scaling exponent α calculated using DFA and the inverse of the EnHL. When the COP signal was studied in the Fourier domain, a non-random structure was observed in the phase of the COP signal, which might be related to the neuromuscular characteristics of the postural control system. The second objective of this paper was to demonstrate that the EnHL and α both retain information contained in the phase of the Fourier-transformed signal. It is shown in this paper that the EnHL and α are both sensitive to non-random structures in the phase of the Fourier-transformed signal. Contrary to α, which is a dimensionless number, the EnHL measures the regularity and complexity of the signal in units of time. Therefore, it was concluded that the EnHL provides a more physically interpretable and intuitively understandable measure of the properties of the control of the COP signal. | Is sample entropy based entropic half-life and de-trended fluctuation analysis correlated and do they reflect phase regularity of center of pressure measurements?
S1746809415001664 | In this paper, we apply image decomposition to image denoising by considering the speckle noise in the optical coherence tomography (OCT) image as texture or oscillatory patterns. A novel second order total generalised variation (TGV) decomposition model is proposed to remove noise (texture) from the OCT image. The incorporation of the TGV regularisation in the proposed model can eliminate the staircase side effect in the resulting denoised image (structure). By introducing auxiliary splitting variables and Bregman iterative parameters, a fast Fourier transform based split Bregman algorithm is developed to solve the proposed model explicitly and efficiently. Extensive experiments are conducted on both synthetic and real OCT images to demonstrate that the proposed model outperforms state-of-the-art speckle noise reduction methods. | Denoising optical coherence tomography using second order total generalized variation decomposition
S1746809415001676 | Polysomnography (PSG) is the recording during sleep of multiple physiological parameters, enabling clinicians to diagnose sleep disorders and to characterize sleep fragmentation. From the PSG, several sleep characteristics can be deduced, such as the micro arousal rate (MAR), the number of sleep stage shifts (SSS) and the rate of intra-sleep awakenings (ISA), each having its own fragmentation threshold value and each being more or less important (weight) in the clinician's diagnosis according to his or her specialization (pulmonologist, neurophysiologist or technical expert). In this work we propose a mathematical model of sleep fragmentation diagnosis based on these three main sleep characteristics (MAR, SSS, ISA), each having its own threshold and weight values for each clinician. Then, a database of 111 PSGs, consisting of 55 healthy adults and 56 adult patients with a suspicion of obstructive sleep apnoea syndrome (OSAS), was diagnosed by nine clinicians divided into three groups (three pulmonologists, three neurophysiologists and three technical experts) representing a panel of polysomnography experts usually working in a hospital. This has enabled us to determine statistically the threshold and weight values which characterize each clinician's diagnosis. Thus, we show that the agreement between each clinician's diagnosis and each corresponding mathematical model goes from substantial (κ >61%) to almost perfect (κ >81%), according to their specialization, and that the mean value of the agreements of each group is also substantial (κ >73%) despite the existing variability between clinicians. It follows from this result that our mathematical model of sleep fragmentation diagnosis is a posteriori validated for each clinician. | Mathematical modelling of sleep fragmentation diagnosis
S1746809415001688 | Microwave Doppler radar has received considerable attention as a non-contact form of measuring human respiration; in particular for long term monitoring. One of the main challenges in converting this into a viable application is to suppress or separate the artefacts and other interfering signals from the desired respiration signal using a less complex and practically feasible design for regular and potentially real time use. Existing systems either require complex experimental setups or multiple Doppler radar modules to achieve this. In this paper, we propose an approach based on EMD-ICA and approximate entropy ideas to systematically separate received Doppler shifted signal into distinct components and reconstruct the desired respiration pattern pertaining to respective physiological activity. Indeed this allows suppression of the undesirable artefacts and interference from other competing signals. Practical experiments confirmed comparable performance of the proposed method to the measurements obtained through chest straps which are widely used clinically for monitoring respiration. | Suppression of interference in continuous wave Doppler radar based respiratory measurements |
S1746809415001706 | This article addresses the problem of designing therapies for the myeloma bone disease that optimize in a systematic way a compromise between drug toxicity and tumor repression. For that sake, the techniques of optimal control are applied to the dynamics of tumor growth, and the necessary conditions of Pontryagin's minimum principle are solved using a numerical relaxation algorithm. A therapy to accelerate bone mass recovery is applied in parallel, based on a PI control rule. Since the optimal controller provides an open-loop control, it is turned into a feedback law by following a receding horizon strategy. For that sake, an optimal manipulated variable profile is first computed over a time horizon, but only the initial part of this function is applied. The whole optimization procedure is then repeated starting at a time instant that corresponds to the end of the previously applied control. | Optimal and receding horizon control of tumor growth in myeloma bone disease |
S1746809415001718 | This paper presents an automatic system for computer-aided balance assessment reflecting the Berg Balance Scale (BBS). The system employs inertial sensors for data acquisition. A set of features is extracted from multiple signal representations during the examination. A multilevel Fisher's linear discriminant is used to select the most suitable features for each of the BBS tasks. The feature space dimensionality reduction and the multilayer perceptron classifier training both involve expert scoring of the observed examinations. The system is verified using data acquired during the BBS scoring of 64 elderly patients. Both assessment modes, the entire examination as well as separate BBS items, are evaluated and discussed using the introduced assessment metrics. | Automatic Berg Balance Scale assessment system based on accelerometric signals
S174680941500172X | Geriatric depression is a pathological process that causes a great number of symptoms resulting in limited mental and physical functionality. The computation of oscillatory and synchronization patterns in certain brain areas may facilitate the early and robust identification of depressive symptoms. In this study electroencephalographic (EEG) data were recorded from 34 participants suffering from both cognitive impairment and geriatric depression (mean age 69.81) and 32 control subjects (mean age 70.33). Both groups were matched according to their cognitive status. The study aims at evaluating neurophysiological features of elderly participants suffering from depression and neurodegeneration. The current work focuses on the identification of depression symptoms that coexist with cognitive decline, the correlation of the examined neurophysiological features with geriatric depression combined with cognitive impairment, and the investigation of the role of data mining techniques in the analysis of EEG data. The EEG features were estimated through synchronization analysis (Orthogonal Discrete Wavelet Transform). Depressive patterns were detected through data mining techniques. Random Forest, Random Tree, Multilayer Perceptron (MLP network) and Support Vector Machines (SVM) were employed for data classification. The efficiency of the classifiers varied from 92.42 to 95.45%. Random Forest demonstrated the highest accuracy (95.45%). Both synchronization and oscillatory features contributed to the decision trees' formation, with the former prevailing. In line with previous neuroscientific findings, synchronization among right frontal–midline anteriofrontal regions showed great correlation with depressive symptoms. Evaluation of the classifiers indicated Random Forest as the most robust algorithm. Synchronization of certain brain regions is more indicative of depression symptoms than oscillatory activity, since synchronization features contributed the most to the formation of the classification trees. | Geriatric depression symptoms coexisting with cognitive decline: A comparison of classification methodologies
S1746809415001731 | Currently, heart rate variability (HRV) is commonly evaluated using time and frequency domain analysis in clinical practice. Since the cardiovascular system is regulated by the autonomic nervous system (ANS), which influences HRV but exhibits rather nonlinear behaviour, it appears more appropriate to apply nonlinear methods to evaluate the functioning of the ANS. This study presents recurrence analysis as a tool to test for the ANS dysfunction that is responsible, e.g., for orthostatic (vasovagal) syncope, in which abnormal HRV has been demonstrated in the past. The study included 18 patients who had experienced vasovagal syncope (mean age 23.7±5.2 years) and 18 healthy subjects (mean age 24.5±3.2, p =0.85). In all tested subjects, ECG recording was performed during an active orthostatic test that comprised two phases (5min of resting in a supine position and 5min of active standing). The sequence of R-R intervals (time intervals between two consecutive heart beats derived from the ECG) was analysed using standard time (mean RR, mean HR, SDNN, SDHR, RMSSD, NN50 and pNN50) and frequency domain (LF, HF and LF/HF ratio) analysis. Moreover, recurrence analysis was performed (RATIO, DIV, AVDL, MAXV, DET, ENTR, LMAX, TT and LAM). Frequency domain analysis did not demonstrate a significant difference between the two groups in any of the parameters during either phase of the test. On the contrary, both time domain analysis and recurrence analysis showed comparable findings in both groups during the resting phase of the orthostatic test, with a significant change of most tested parameters after standing up. As the use of time domain HRV measures may be perceived as problematic regarding their interpretation in short ECG recordings, recurrence analysis appears to be a sensitive tool for detecting ANS dysfunction in patients with vasovagal syncope. | Recurrence plot of heart rate variability signal in patients with vasovagal syncopes
S1746809415001743 | The purpose of this paper is the classification of the ECG heartbeats of a patient into five heartbeat types according to the AAMI recommendation, using an artificial neural network. In this paper a Block-based Neural Network (BBNN) has been used as the classifier. The BBNN is created from a 2-D array of blocks which are connected to each other. The internal structure of each block depends on the number of incoming and outgoing signals. The overall construction of the network is determined by the flow of signals through the network blocks. The network structure and the weights are optimized using the Particle Swarm Optimization (PSO) algorithm. The input of the BBNN is a vector whose elements are the features extracted from the ECG signals. In this paper Hermite function coefficients and temporal features extracted from the ECG signals create the input vector of the BBNN. The BBNN parameters have been optimized by the PSO algorithm, which can overcome the possible time-to-time and/or person-to-person variations of ECG signals. Therefore the trained BBNN has a unique structure for each person. The performance evaluation using the MIT-BIH arrhythmia database shows a high classification accuracy of 97%. | A new personalized ECG signal classification algorithm using Block-based Neural Network and Particle Swarm Optimization
S1746809415001755 | One crucial part of an image registration algorithm is the use of an appropriate similarity metric. For common similarity metrics such as cross-correlation (CC) or mutual information (MI), it is assumed that the intensities of image pixels are independent from each other and stationary. Under these assumptions, image registration becomes difficult in the presence of spatially varying intensity distortion. In Myronenko et al. [5] a solution based on minimization of residual complexity (RC) is introduced to solve this problem. In this work, the weakness of the RC method is investigated for more complex spatially varying intensity distortions, and a modification of this method is presented to improve its performance in such conditions. The proposed method reduces the error with respect to the other methods. Experimental results on synthetic and real-world data sets demonstrate the effectiveness of the proposed method for image registration tasks. | Intensity based image registration by minimizing the complexity of weighted subtraction under illumination changes
S1746809415001767 | Electroencephalography (EEG) signals have been commonly used for assessing the level of anesthesia during surgery. However, the collected EEG signals are usually corrupted with artifacts which can seriously reduce the accuracy of the depth of anesthesia (DOA) monitors. In this paper, the main purpose is to compare five different EEG based anesthesia indices, namely median frequency (MF), 95% spectral edge frequency (SEF), approximate entropy (ApEn), sample entropy (SampEn) and permutation entropy (PeEn), for their artifacts rejection ability in order to measure the DOA accurately. The current analysis is based on synthesized EEG corrupted with four different types of artificial artifacts and real data collected from patients undergoing general anesthesia during surgery. The experimental results demonstrate that all indices could discriminate awake from anesthesia state (p <0.05), however PeEn is superior to other indices. Furthermore, a combined index is obtained by applying these five indices as inputs to train, validate and test a feed-forward back-propagation artificial neural network (ANN) model with bispectral index (BIS) as target. The combined index via ANN offers more advantages with higher correlation of 0.80±0.01 for real time DOA monitoring in comparison with single indices. | A comparison of five different algorithms for EEG signal analysis in artifacts rejection for monitoring depth of anesthesia |
S1746809415001780 | The optic disc (OD) nerve head region in general, and the OD center coordinates in particular, form the basis for the study and analysis of various eye pathologies. The shape, contour and size of the OD are vital in the classification and grading of retinal diseases like glaucoma. There is a need to develop fast and efficient algorithms for large-scale retinal disease screening. With this in mind, this paper presents a novel framework for fast and fully automatic detection of the OD and its accurate segmentation in digital fundus images. The methodology involves optic disc center localization followed by removal of the vascular structure by accurate inpainting of blood vessels in the optic disc region. An adaptive threshold based region growing technique is then employed for reliable segmentation of fundus images. The proposed technique achieved significant results when tested on standard test databases like MESSIDOR and DRIVE, with average overlapping ratios of 89% and 87%, respectively. Validation experiments were done on a labeled dataset containing healthy and pathological images obtained from a local eye hospital, achieving an appreciable 91% average OD segmentation accuracy. | Blood vessel inpainting based technique for efficient localization and segmentation of optic disc in digital fundus images
S1746809415001792 | This paper presents a new method consisting of two stages for automatic detection and segmentation of coronary arteries in X-ray angiograms. In the first stage, multiscale Gabor filters are used to detect vessel structures in the angiograms. The results of multiscale Gabor filtering are compared with those obtained by applying multiscale methods based on the top-hat operator, Hessian matrix, and Gaussian matched filters. The performance of the vessel-detection methods is evaluated through the area (A z ) under the receiver operating characteristic (ROC) curve. In the second stage, coronary arteries are segmented by binarizing the magnitude response of Gabor filters using a new thresholding method based on multiobjective optimization, which is compared with seven thresholding methods. Measures of sensitivity, specificity, accuracy, and positive predictive value are used to analyze the segmentation methods, by comparing the results to the ground-truth markings of the vessels drawn by a specialist. Finally, the proposed method is compared with seven state-of-the-art vessel segmentation methods. The result of vessel detection using multiscale Gabor filters demonstrated high accuracy with A z =0.961 with a training set of 40 angiograms and A z =0.952 with an independent test set of 40 angiograms. The results of vessel segmentation with the multiobjective thresholding method provided an average accuracy of 0.881 with the test set of angiograms. | Automatic segmentation of coronary arteries using Gabor filters and thresholding based on multiobjective optimization |
S1746809415001809 | The watershed is an efficient algorithm for the segmentation of images. However, over-segmentation, which contains so many tiny regions that regions of interest cannot be identified easily, decreases the effectiveness. In this paper, pre-processing of images and the modification of watershed algorithm are both studied to restrain the over-segmentation. In the process of pre-processing, a kind of multi-scaled transform, contrast à trous wavelet based contourlet transform, is proposed and constructed to get sparse representation. In the aspect of modifying watershed, the “texture gradient” is defined, and the texture gradient is combined with marker-based watershed algorithm to reduce the number of segmented regions. The proposed method is tested by 36 prostate MR images and compared with several image segmentation algorithms; the experiment and comparison results show that the proposed method consistently restrains the number of segmented regions. The segmentation results correctly correspond to the main tissues in the images, and each tissue is integrally segmented, respectively with the elimination of small regions. The segmentation accuracy rate is 87.29%, which is higher than other methods under comparison. | Automatic multi-organ segmentation of prostate magnetic resonance images using watershed and nonsubsampled contourlet transform |
S1746809415001810 | 2D-gel electrophoresis (2DGE) is an important technique in proteomics for analyzing protein expression. However, the analysis of 2DGE images is still a cumbersome and tedious task. One of the main reasons is the presence of a large amount of inhomogeneity in the foreground and background intensities. In this paper, we have proposed a novel approach for segmentation of the protein spots in the non-separable wavelet domain. It utilizes the inter-scale relationship among enhanced wavelet coefficients, which can easily distinguish the different features of the image: the interior region of spots, the edges and the background. This technique is based on a single threshold and is independent of the gray value of the image. It copes with the inhomogeneities in the 2DGE images to a great extent, which is helpful for finding the protein spots accurately. The artifacts are further removed using a non-threshold based method comprising a weighted Gaussian energy distribution model. Experimental results show that our method outperforms the available commercial software and previously reported works. | Analysis of 2D-gel images for detection of protein spots using a novel non-separable wavelet based method
S1746809415001883 | This paper deals with the estimation of glucose levels in ICU patients by the application of statistical filter theory to the data provided by a commercial continuous glucose monitoring system using a microdialysis sensor. Kalman and particle filtering are applied to simple models of the glucose dynamics. The particle filter enables the joint filtering and calibration of the sensor. The results show that the proposed filters lead to significant reduction in the estimation error with computational cost well within the capabilities of modern digital equipment. Additionally, the filters can be used for the automatic recognition of sensor faults. These results show that suitable filters can help in the construction of an artificial pancreas. | Calibration of a microdialysis sensor and recursive glucose level estimation in ICU patients using Kalman and particle filtering |
S1746809415001895 | Automatic segmentation of anterior segment optical coherence tomography images provides an important tool to aid management of ocular diseases. Previous studies have mainly focused on 2D segmentation of these images. A novel technique capable of producing 3D maps of the anterior segment is presented here. This method uses graph theory and dynamic programming with a shape constraint to segment the anterior and posterior surfaces in individual 2D images. Genetic algorithms are then used to align the 2D images to produce a full 3D representation of the anterior segment. In order to validate the results of the 2D segmentation, comparison is made to manual segmentation over a set of 39 images. For the 3D reconstruction a data set of 17 eyes is used. These have each been imaged twice so that a repeatability measurement can be made. Good agreement was found with manual segmentation, the 2D segmentation method achieving a Dice similarity coefficient of 0.96, which is comparable to the inter-observer agreement. Good repeatability of results was demonstrated with the 3D registration method. A mean difference of 1.77 pixels was found between the anterior surfaces obtained from repeated scans of the same eye. | Reconstruction of 3D surface maps from anterior segment optical coherence tomography images using graph theory and genetic algorithms
S1746809415001913 | The limitations of the available imaging modalities for prostate cancer (PCa) localization result in suboptimal protocols for management of the disease. In response, several dynamic contrast-enhanced imaging modalities have been developed, which aim at cancer detection through the assessment of the changes occurring in the tumor microenvironment due to angiogenesis. In this context, novel magnetic resonance dispersion imaging (MRDI) enables the estimation of parameters related to the microvascular architecture and leakage, by describing the contrast agent kinetics with a dispersion model. Although a preliminary validation of MRDI on PCa has shown promising results, parameter estimation can become burdensome due to the convolution integral present in the dispersion model. To overcome this limitation, in this work we provide analytical solutions of the dispersion model in the time and frequency domains, and we implement three numerical methods to increase the time-efficiency of parameter estimation. The proposed solutions are tested for PCa localization. A reduction by about 50% of the computation time could be obtained, without significant changes in the estimation performance or in the clinical results. With the continuous development of new technological solutions to boost the spatiotemporal resolution of DCE-MRI, solutions to improve the computational efficiency of parameter estimation are highly required. | Time-efficient estimation of the magnetic resonance dispersion model parameters for quantitative assessment of angiogenesis
S1746809415001925 | Obstructive Sleep Apnea (OSA) is a serious sleep disorder in which the patient experiences frequent upper airway collapse leading to breathing obstructions and arousals. Severity of OSA is assessed by averaging the number of incidences throughout sleep. In a routine OSA diagnosis test, overnight sleep is broadly categorized into rapid eye movement (REM) and non-REM (NREM) stages and the number of events is considered accordingly to calculate the severity. A typical respiratory event is mostly accompanied by sounds such as loud breathing or snoring interrupted by choking and gasps for air. However, respiratory control and ventilation are known to differ with sleep states. In this study, we assumed that the effect of sleep on respiration would alter the characteristics of respiratory sounds, as well as snoring, in OSA patients. Our objective is to investigate whether these characteristics are sufficient to label snores of REM and NREM sleep. For this investigation, we collected overnight audio recordings from 12 patients undergoing a routine OSA diagnostic test. We derived features from snoring sounds and the surrounding audio signal. We computed time series statistics such as mean, variance and inter-quartile range to capture distinctive patterns from REM and NREM snores. We designed a Naïve Bayes classifier to explore the usability of these patterns to predict the corresponding sleep states. Our method achieved a sensitivity of 92% (±9%) and specificity of 81% (±9%) in labeling snores into REM/NREM groups, which indicates the potential of snoring sounds to differentiate sleep states. This may be valuable for developing non-contact snore based technology for OSA diagnosis. | Characterization of REM/NREM sleep using breath sounds in OSA
S1746809415001937 | Most usual electrocardiogram (ECG) signals keep the signal energy within the 0.05–100Hz band, but higher frequencies containing valuable diagnostic information are also present in wide band (WB) ECG signals. Existing studies on computer-assisted myocardial infarction (MI) diagnosis are mostly based on the usual ECG signals, and this valuable diagnostic information has not yet been used sufficiently. Multivariable autoregressive coefficients were extracted from WB orthogonal ECG (OECG) signals for the classification purpose in this research. The data for the analysis were taken from the Physikalisch-Technische Bundesanstalt diagnostic ECG database, including healthy controls, MI in the early stage and MI in the acute stage. In order to further investigate the performance of WB ECG signals, standard ECG signals with a wide frequency band were utilized for feature extraction and classification in the same way. The experimental results showed that the MI classification accuracy could be improved by introducing WB ECG signals and that the features extracted from WB OECG signals with a frequency band of 0–250Hz would be the most efficient representation for discriminating different MI stages. Classifiable and efficient features can be extracted from WB OECG signals for the classification of MI stages with a high accuracy. | Discrimination of different myocardial infarction stages using wide band electrocardiogram
S1746809415001949 | This paper considers the problem of removing unwanted noise from a gaze tracking signal in real time. The proposed remedy is a linear dynamic model for the gaze and a Kalman filter for estimating its optimal solution in closed form. The location and velocity of the gaze are treated as independent parameters of the model. Two alternative methods for estimating the velocity are presented; the first is based on the difference between subsequent eye images and the second on a PCA model and an affine mapping from the principal component space to the gaze space. The covariance matrix of the measurement noise distribution is modified in real time based on the estimated velocity. The presented filtering algorithm can be utilized with any eye camera based gaze tracker. Here, its ability to decrease the noise of two published gaze tracking methods is demonstrated. | An advanced Kalman filter for gaze tracking signal
S1746809415001962 | Beat-to-beat variability of the QT interval (QTV) has been used as a marker of repolarization lability and sympathetic activation. The aim of this study was to establish ECG sampling rate requirements for reliable QT interval variability measurement. We measured QTV in high resolution simulated (1000Hz) and real ECG (1600Hz; in the supine position during rest and during sympathetic activation upon standing), using time and frequency domain metrics as well as measures of symbolic dynamics for complexity assessment. We successively halved the sampling rate and investigated its effect on the QTV metrics. Reduction in sampling rate below 400Hz and 500Hz, respectively, resulted in a significant overestimation of QTV and also affected complexity measurement of QTV. QTV increased during standing compared to the supine measurement. At 100Hz, the posture related change in QTV was completely masked by the measurement noise introduced by the low sampling rate. In conclusion, an ECG sampling rate of 500Hz yields a reliable QTV measurement, while sampling rates of 200Hz and below should be avoided. | Effects of ECG sampling rate on QT interval variability measurement
S1746809415001974 | Electrocardiogram (ECG) is a widely used non-invasive method to study the rhythmic activity of the heart. These signals, however, are often obscured by artifacts/noises from various sources and minimization of these artifacts is of paramount importance for detecting anomalies. This paper presents a thorough analysis of the performance of two hybrid signal processing schemes ((i) Ensemble Empirical Mode Decomposition (EEMD) based method in conjunction with the Block Least Mean Square (BLMS) adaptive algorithm (EEMD-BLMS), and (ii) Discrete Wavelet Transform (DWT) combined with the Neural Network (NN), named the Wavelet NN (WNN)) for denoising the ECG signals. These methods are compared to the conventional EMD (C-EMD), C-EEMD, EEMD-LMS as well as the DWT thresholding (DWT-Th) based methods through extensive simulation studies on real as well as noise corrupted ECG signals. Results clearly show the superiority of the proposed methods. | A comprehensive performance analysis of EEMD-BLMS and DWT-NN hybrid algorithms for ECG denoising |
S1746809415001986 | A multi-wavelength analysis for pulse waveform extraction using laser speckle is conducted. The proposed system consists of three coherent light sources (532nm, 635nm, 850nm). A bench test composed of a moving skin-like phantom (silicone membrane) is used to compare the results obtained from the different wavelengths. The system is able to identify the skin-like phantom's vibration frequency, within physiological values, with a minimum error of 0.5mHz for the 635nm and 850nm wavelengths and a minimum error of 1.3mHz for the 532nm wavelength using an FFT-based algorithm. The phantom velocity profile is estimated with an error ranging from 27% to 9% using a bidimensional correlation coefficient-based algorithm. An in vivo trial is also conducted using the 532nm and 635nm laser sources; the 850nm light source was not able to extract the pulse waveform. The heart rate is identified with a minimum error of 0.48 beats per minute for the 532nm light source and a minimum error of 1.15 beats per minute for the 635nm light source. Our work reveals that a laser speckle-based system with a 532nm wavelength extracts the arterial pulse waveform with better results than a 635nm laser. | Which wavelength is the best for arterial pulse waveform extraction using laser speckle imaging?
S1746809415001998 | Parkinson's disease (PD) has been reported to involve postganglionic sympathetic failure and a wide spectrum of autonomic dysfunctions including cardiovascular, sexual, bladder, gastrointestinal and sudo-motor abnormalities. While these symptoms may have a significant impact on daily activities, as well as quality of life, the evaluation of autonomic nervous system (ANS) dysfunctions relies on a large and expensive battery of autonomic tests only accessible in highly specialized laboratories. In this paper we aim to devise a comprehensive computational assessment of disease-related heartbeat dynamics based on instantaneous, time-varying estimates of spontaneous (resting state) cardiovascular oscillations in PD. To this end, we combine standard ANS-related heart rate variability (HRV) metrics with measures of instantaneous complexity (dominant Lyapunov exponent and entropy) and higher-order statistics (bispectra). Such measures are computed over 600-s recordings acquired at rest in 29 healthy subjects and 30 PD patients. The only significant group-wise differences were found in the variability of the dominant Lyapunov exponent. Also, the best PD vs. healthy controls classification performance (balanced accuracy: 73.47%) was achieved only when retaining the time-varying, non-stationary structure of the dynamical features, whereas classification performance dropped significantly (balanced accuracy: 61.91%) when excluding variability-related features. Additionally, both linear and nonlinear model features correlated with both clinical and neuropsychological assessments of the considered patient population. Our results demonstrate the added value and potential of instantaneous measures of heartbeat dynamics and its variability in characterizing PD-related disabilities in motor and cognitive domains. | Assessment of spontaneous cardiovascular oscillations in Parkinson's disease |
S1746809415002001 | Denoising of biomedical signals using the wavelet transform is a widely used technique. The undecimated wavelet transform (UWT) assures better denoising results but implies higher complexity than the discrete wavelet transform (DWT). Several implementation schemes have been proposed to perform the UWT; one of them is Cycle Spinning (CS). CS computes the DWT of several circularly shifted versions of the signal to analyse. The present work addresses reducing the number of shifted versions of the biomedical signal used during the denoising process. This paper presents a variant of CS with a reduced number of shifts, called Partial Cycle Spinning (PCS), applied to ultrasonic trace denoising. The influence of the choice of PCS shifts on the quality of the denoised registers is studied. Several shift selection rules are proposed, compared and evaluated. Denoising results over a set of ultrasonic registers are provided for PCS with different shift selection rules, CS and DWT. The work shows that PCS with an appropriate choice of shifts can be the best option for denoising biomedical ultrasonic traces. | Shift selection influence in partial cycle spinning denoising of biomedical signals
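The PCS scheme can be sketched as follows, using a single-level Haar transform in place of the multi-level DWT of the paper; the threshold value, shift subset and test signal are illustrative assumptions, not the paper's choices.

```python
import random

def haar_denoise(x, thr):
    """One-level Haar analysis, soft-threshold the detail coefficients,
    then invert (a sketch; practical denoising is multi-level)."""
    n = len(x) - len(x) % 2
    out = list(x)
    for i in range(0, n, 2):
        a = (x[i] + x[i + 1]) / 2.0           # approximation coefficient
        d = (x[i] - x[i + 1]) / 2.0           # detail coefficient
        d = max(abs(d) - thr, 0.0) * (1.0 if d >= 0 else -1.0)  # soft threshold
        out[i], out[i + 1] = a + d, a - d     # inverse Haar step
    return out

def pcs_denoise(x, thr, shifts):
    """Partial Cycle Spinning: circularly shift, denoise, unshift and
    average over a *subset* of shifts; using all len(x) shifts would be
    full CS, equivalent to undecimated-wavelet denoising."""
    n = len(x)
    acc = [0.0] * n
    for s in shifts:
        shifted = x[s:] + x[:s]
        den = haar_denoise(shifted, thr)
        den = den[n - s:] + den[:n - s]       # undo the circular shift
        acc = [a + v for a, v in zip(acc, den)]
    return [a / len(shifts) for a in acc]

if __name__ == "__main__":
    rng = random.Random(1)
    clean = [0.0] * 50 + [5.0] * 50           # step signal, like an echo edge
    noisy = [c + rng.uniform(-0.5, 0.5) for c in clean]
    den = pcs_denoise(noisy, 0.4, [0, 1, 2, 3])
    mse = lambda y: sum((a - b) ** 2 for a, b in zip(y, clean)) / len(clean)
    print("noisy MSE:", mse(noisy), " PCS-denoised MSE:", mse(den))
```

Even with only four of the one hundred possible shifts, averaging over the shift subset suppresses the blocking artifacts of a single decimated DWT pass, which is the trade-off the paper's shift selection rules explore.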
S1746809415002013 | The model-based algorithm for photoacoustic imaging (PAI) has been proved to be stable and accurate. However, its reconstruction is computationally burdensome, which limits its application in practical PAI. In this paper, we propose a block-sparse discrete cosine transform (BS-DCT) model-based PAI reconstruction algorithm to improve the computational efficiency of model-based PAI reconstruction. We adopt the discrete cosine transform (DCT) to eliminate the minor coefficients and reduce the data scale. A block-sparse iterative method is proposed to accomplish the image reconstruction. Due to its block-independent nature, we use a CPU-based parallel implementation to accelerate the reconstruction. During the iterative reconstruction, the number of required iterations is reduced by adopting the fast-converging Barzilai–Borwein optimization method. Numerical simulations and in-vitro experiments were carried out. The results show that the reconstruction quality is equivalent to that of state-of-the-art iterative algorithms, while our algorithm requires fewer iterations, operates on a reduced data scale and achieves significant acceleration through the parallel implementation. In conclusion, the BS-DCT algorithm may be an effective accelerated algorithm for practical PAI reconstruction. | Efficient block-sparse model-based algorithm for photoacoustic image reconstruction
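The DCT truncation step that reduces the data scale can be illustrated with a textbook orthonormal DCT-II/DCT-III pair (this sketches only the coefficient-elimination idea, not the paper's block-sparse model-based solver):

```python
import math

def dct2(x):
    """Orthonormal DCT-II of a 1-D block (direct O(n^2) textbook form)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct2(c):
    """Inverse of the orthonormal DCT-II (i.e. a DCT-III)."""
    n = len(c)
    out = []
    for i in range(n):
        s = c[0] * math.sqrt(1.0 / n)
        s += sum(c[k] * math.sqrt(2.0 / n) * math.cos(math.pi * (i + 0.5) * k / n)
                 for k in range(1, n))
        out.append(s)
    return out

def compress(x, keep):
    """Zero all but the `keep` largest-magnitude DCT coefficients,
    then reconstruct — the 'eliminate minor coefficients' idea."""
    c = dct2(x)
    order = sorted(range(len(c)), key=lambda k: -abs(c[k]))
    kept = set(order[:keep])
    return idct2([v if k in kept else 0.0 for k, v in enumerate(c)])
```

For a smooth signal, most of the energy sits in a few low-order DCT coefficients, so discarding the rest barely changes the reconstruction while shrinking the data that the iterative solver must handle.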
S1746809415002025 | Active contours, or snakes, have a wide range of applications in medical image segmentation. The gradient vector flow (GVF) field, the generalized GVF field and other external force fields have been proposed to address the problems of traditional snake models, such as low segmentation accuracy and poor convergence into indentations. To further address these two problems, we put forward a novel generalized gradient vector flow snake model using a minimal surface and a component-normalized method. We adopt a minimal surface function instead of the Laplace operator to address the problem of low segmentation accuracy. We also use a component-based normalization method instead of the conventional vector-based normalization to improve the ability of the snake curve to converge into long and thin indentations. Experimental results and comparisons against other methods indicate that the proposed snake model is able to protect weak borders and effectively solve the incorrect segmentation problem. Meanwhile, our method performs much better than the generalized GVF snake model on long and thin indentations. | A novel generalized gradient vector flow snake model using minimal surface and component-normalized method for medical image segmentation
S1746809415002037 | Robust and sparse modeling are two important issues in brain–computer interface systems. L1-norm-based common spatial patterns (CSP-L1) method is a recently developed technique that seeks robust spatial filters by using L1-norm-based dispersions. However, the spatial filters obtained are still dense, and thus lack interpretability. This paper presents a regularized version of CSP-L1 with sparsity, termed as sp-CSPL1. It produces sparse spatial filters, which eliminate redundant channels and retain meaningful EEG signals. The sparsity is induced by penalizing the objective function of CSP-L1 with the L1-norm. The sp-CSPL1 approach uses the L1-norm twice for inducing sparsity and defining dispersions simultaneously. The presented sp-CSPL1 algorithm is evaluated on two publicly available EEG data sets, on which it shows significant improvement in classification accuracy. | Robust common spatial patterns with sparsity |
S1746809415002049 | Non-invasive estimation of arterial oxygen saturation (SpO2) and heart rate using pulse oximeters is widely used in hospitals. Pulse oximeters rely on photoplethysmographic (PPG) signals from a peripherally placed optical sensor. However, pulse oximeters can be less accurate if the sensor site is relatively cold. This research investigates the effects on PPG signal quality of local site temperatures for 20 healthy adult volunteers (24.5±4.1 years of age). Raw PPG data, composed of Infrared (IR) and Red (RD) signals, was obtained from a transmittance finger probe using a custom pulse oximeter (PO) system. Three tests were performed with the subject's hand surface temperature maintained at baseline (29±2°C), cold (19±2°C), and warm (33±2°C) conditions. Median root mean square (RMS) of PPG signal during the Cold test dropped by 54.0% for IR and 30.6% for RD from the baseline values. In contrast, the PPG RMS increased by 64.4% and 60.2% for RD and IR, respectively, during the Warm test. Mean PPG pulse amplitudes decreased by 59.5% for IR and 46.1% for RD in the cold test when compared to baseline, but improved by 70.1% for IR and 59.0% for RD in the warm test. This improvement of up to 4× in signal quality during the warm condition was associated with a closer match (median difference of 1.5%) between the SpO2 values estimated by the PO system and a commercial pulse oximeter. The differences measured in RMS and mean amplitudes for the three tests were statistically significant (p <0.001). Overall, warm temperatures significantly improve PPG signal quality and SpO2 estimation accuracy. Sensor site temperature is recommended to be maintained near 33°C for reliable transmittance pulse oximetry. | Analysing the effects of cold, normal, and warm digits on transmittance pulse oximetry |
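The SpO2 estimation referred to in the pulse-oximetry abstract conventionally relies on the "ratio of ratios" of the RD and IR PPG components. Below is a minimal sketch; the linear calibration SpO2 ≈ 110 − 25R is a textbook approximation (real oximeters use device-specific empirical curves), and taking the AC component as the RMS of the mean-removed signal is our simplification.

```python
import math

def ratio_of_ratios(red, ir):
    """Classic pulse-oximetry 'ratio of ratios'
    R = (AC_red/DC_red) / (AC_ir/DC_ir) from raw PPG samples.
    AC is taken as the RMS of the mean-removed signal, DC as the mean."""
    def ac_dc(x):
        m = sum(x) / len(x)
        return math.sqrt(sum((v - m) ** 2 for v in x) / len(x)), m
    ac_r, dc_r = ac_dc(red)
    ac_i, dc_i = ac_dc(ir)
    return (ac_r / dc_r) / (ac_i / dc_i)

def spo2_estimate(red, ir):
    # common linear calibration approximation; device-specific in practice
    return 110.0 - 25.0 * ratio_of_ratios(red, ir)

if __name__ == "__main__":
    # synthetic PPG: equal DC levels, RD pulsation half the IR pulsation
    red = [2.0 + 0.02 * math.sin(2 * math.pi * k / 50) for k in range(500)]
    ir = [2.0 + 0.04 * math.sin(2 * math.pi * k / 50) for k in range(500)]
    print("R =", ratio_of_ratios(red, ir), " SpO2 ~", spo2_estimate(red, ir))
```

This also makes the abstract's finding intuitive: cold-induced vasoconstriction shrinks both AC amplitudes, and once they approach the noise floor the ratio R (and hence SpO2) becomes unreliable, whereas warming restores the pulsatile amplitude.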
S1746809415002050 | In this work, an attempt has been made to analyze the progression of muscle fatigue using surface electromyography (sEMG) signals and modified B distribution (MBD) based time–frequency analysis. For this purpose, signals are recorded from the biceps brachii muscles of fifty healthy adult volunteers during dynamic contractions. The recorded signals are preprocessed and then subjected to MBD based time–frequency distribution (TFD). The instantaneous median frequency (IMDF) is extracted from the time–frequency matrix for different values of the kernel parameter. The linear regression technique is used to model the temporal variations of IMDF, and the correlation coefficient is computed in order to select an appropriate value for the kernel parameter of the MBD based TFD. Further, extended versions of frequency-domain features, namely the instantaneous spectral ratio (InstSPR) at the low frequency band (LFB), medium frequency band (MFB) and high frequency band (HFB), are extracted from the time–frequency spectrum. In addition to these features, IMDF and the instantaneous mean frequency (IMNF) are also calculated. The least-squares-error based linear regression technique is used to track the slope variations of these features. The results show that the MBD based time–frequency spectrum is able to provide the instantaneous variations of frequency components associated with fatiguing contractions. The values of InstSPR in the MFB and HFB regions, IMDF and IMNF show a decreasing trend during the progression of muscle fatigue, whereas an increasing trend is observed in the LFB region. Further, the coefficient of variation is calculated for all the features. It is found that IMDF, IMNF and InstSPR in the LFB region have the lowest variability across subjects in comparison with the other two features. It appears that this method could be useful in analyzing various neuromuscular activities in normal and abnormal conditions. | Surface electromyography based muscle fatigue progression analysis using modified B distribution time–frequency features
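A batch median-frequency estimator gives the flavor of the IMDF feature when applied over sliding windows (this uses a plain DFT power spectrum, not the MBD kernel of the paper; window length and sampling rate below are illustrative):

```python
import cmath
import math

def median_frequency(x, fs):
    """Median (power) frequency of a signal segment via a direct DFT:
    the frequency below which half of the spectral power lies.
    Applied per sliding window, its downward trend over time is the
    classic sEMG fatigue indicator that IMDF generalizes."""
    n = len(x)
    half = n // 2
    power = []
    for k in range(half + 1):                 # one-sided power spectrum
        X = sum(x[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
        power.append(abs(X) ** 2)
    total = sum(power)
    cum = 0.0
    for k, p in enumerate(power):
        cum += p
        if cum >= total / 2:                  # first bin crossing half power
            return k * fs / n
    return half * fs / n

if __name__ == "__main__":
    fs = 1000.0
    fresh = [math.sin(2 * math.pi * 50 * i / fs) for i in range(200)]
    tired = [math.sin(2 * math.pi * 30 * i / fs) for i in range(200)]
    print("fresh window MDF:", median_frequency(fresh, fs))
    print("tired window MDF:", median_frequency(tired, fs))
```

As the spectrum compresses toward lower frequencies during fatigue, the per-window median frequency decreases, matching the decreasing IMDF trend reported in the abstract.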
S1746809415002062 | Photoacoustic Microscopy (PAM) has developed into a powerful tool for deep tissue imaging with good spatial resolution. However, the long data acquisition time in PAM poses a great challenge for real-time imaging. In this paper, a new PAM data acquisition and image recovery method, called the Compressive Sampling PAM System based on Low Rank Matrix Completion (CSLRM-PAM), is proposed to obtain a high-resolution PAM image at relatively low sampling rates. The two key problems in setting up a CSLRM-PAM system are the design of the compressive sampling scheme and of the corresponding image recovery algorithm. In this paper, two compressive sampling schemes based on expander graphs are proposed to replace the conventional point-by-point scanning scheme and implement fast data acquisition. Then, low rank matrix completion is utilized to obtain a high-resolution PAM image directly from the compressively sampled data. The effectiveness of the proposed scheme is validated using both numerical analysis and PAM experiments. In contrast with the conventional system, the proposed CSLRM-PAM system dramatically decreases the total number of sampling points required for a relatively high-resolution PAM image and thus accelerates data acquisition. | Compressive Sampling Photoacoustic Microscope System based on Low Rank Matrix Completion
S1746809415002074 | One of humans’ auditory abilities is differentiation between sounds with slightly different frequencies. Recently, the auditory image model (AIM) was developed to numerically explain this auditory phenomenon. Acoustic analyses of snore sounds have been performed recently by using non-contact microphones. Snore/non-snore classification techniques have been required at the front-end of snore analyses. The performances of sound classification methods can be evaluated based on human hearing, which is considered to be the gold standard. In this paper, we propose a novel method of automatically extracting snore sounds from sleep sounds by using an AIM-based snore/non-snore classification system. We report that the proposed automatic classification method could achieve a sensitivity of 97.2% and specificity of 96.3% when analyzing snore and non-snore sounds from 40 subjects. It is anticipated that our findings will contribute to the development of an automated snore analysis system to be used in sleep studies. | Automatic snore sound extraction from sleep sound recordings via auditory image modeling |
S1746809415002086 | The main idea of a traditional Steady State Visually Evoked Potentials (SSVEP) BCI is the activation of commands through gaze control. However, such “dependent” SSVEP-BCIs might not be applicable for patients with ocular motor impairments or severe neuromuscular problems, whereas an “independent” SSVEP-BCI is a potential approach to solve this problem. This study presents a novel independent BCI based on SSVEP using Figure-Ground Perception (FGP), a concept widely known and used in Gestalt psychology for object recognition by means of changes in perception. This BCI aims to identify two different targets that represent commands in a limited visual space, without the need to shift gaze, through the paradigm of covert attention. For that purpose, the well-known example of Rubin's face-vase in FGP was used. The traditional EEG signal analysis consists of three steps: filtering, feature extraction and classification. In this work, two techniques were used for performance comparison, and the classification was obtained through a criterion of maxima for both techniques. Ten subjects participated in offline tests and five subjects in online tests. The flickering frequencies were 15.0Hz (vase) and 11.0Hz (faces). Our results demonstrate that electrode Oz is the best channel for characterizing visual perception, from a quantitative point of view based on canonical correlation, after an independent channel analysis. Regarding classification, the MSI technique was more accurate than CCA in all cases under the same conditions, whether using three electrodes or a single electrode (Oz), even for different window lengths. The online performance decreased as participants switched from the Face (82.7%) to the Vase (76%) stimulus, consistent with our offline results. Muscular activity related to eye movements was also evaluated using a commercial eye-tracking device (Eye Tribe). These findings strongly support the hypothesis of visual selectivity by means of perception and a neural mechanism of spatial attention. | An independent-BCI based on SSVEP using Figure-Ground Perception (FGP)
S1746809416000021 | The aim was to develop and to investigate the technical feasibility of a novel smartphone-based mobile system for feedback control of heart rate during outdoor running. Accurate control is important because heart rate can be used for prescription of exercise intensity for development and maintenance of cardiorespiratory fitness. An Android smartphone was employed together with wearable, wireless sensors for heart rate and running speed. A simple feedback design algorithm appropriate for embedded mobile applications was developed. Controller synthesis uses a low-order, physiologically-validated plant model and requires a single bandwidth-related tuning parameter. Twenty real time controller tests demonstrated highly accurate tracking of target heart rate with a mean root-mean-square tracking error (RMSE) of less than 2 beats per minute (bpm); a sufficient level of robustness was demonstrated within the range of conditions tested. Adjustment of the tuning parameter towards lower closed-loop bandwidth gave markedly lower control signal power (0.0008 vs. 0.0030m2/s2, p <0.0001, low vs. high bandwidth), but at the cost of a significantly lower heart rate tracking accuracy (RMSE 1.99 vs. 1.67bpm, p <0.01). The precision achieved suggests that the system might be applicable for accurate achievement of prescribed exercise intensity for development and maintenance of cardiorespiratory fitness. High-accuracy feedback control of heart rate during outdoor running using smartphone technology is deemed feasible. | Feedback control of heart rate during outdoor running: A smartphone implementation |
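The closed loop described in the heart-rate-control abstract can be sketched with a discrete PI controller driving a first-order heart-rate plant. All parameters below (resting heart rate, gain, time constant, controller gains) are invented for illustration and are not the paper's identified model or synthesis procedure.

```python
def simulate_hr_control(target, minutes=10, dt=1.0, kp=0.02, ki=0.004):
    """Toy closed-loop control of heart rate via running speed.

    Plant: first-order model hr' = (-hr + hr_rest + gain * speed) / tau,
    a crude stand-in for a low-order physiological model.
    Controller: discrete PI acting on the tracking error, with the
    speed command clamped to be non-negative.
    Returns the RMSE (bpm) over the final third of the run, i.e. after
    the initial transient has settled.
    """
    hr_rest, gain, tau = 70.0, 25.0, 40.0   # bpm, bpm per (m/s), seconds
    hr, integ = hr_rest, 0.0
    errors = []
    steps = int(minutes * 60 / dt)
    for _ in range(steps):
        err = target - hr
        integ += err * dt
        speed = max(0.0, kp * err + ki * integ)      # command in m/s
        hr += dt * (-hr + hr_rest + gain * speed) / tau
        errors.append(err)
    tail = errors[-len(errors) // 3:]
    return (sum(e * e for e in tail) / len(tail)) ** 0.5

if __name__ == "__main__":
    print("steady-state RMSE (bpm):", simulate_hr_control(140.0))
```

Even this crude loop settles to within a couple of beats per minute of the target, which is the kind of steady-state tracking accuracy (RMSE under 2bpm) the study reports; the paper's single bandwidth-related tuning parameter plays the role the fixed gains play here.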