FileName | Abstract | Title |
---|---|---|
S0885230815000212 | We introduce a Bayesian approach for adapting the log-linear weights present in state-of-the-art statistical machine translation systems. Typically, these weights are estimated by optimising a given translation quality criterion, taking into account only a certain set of development data (e.g., the adaptation data). In this article, we show that the Bayesian framework provides appropriate estimates of such weights in conditions where adaptation data is scarce. The theoretical framework is presented, alongside a thorough experimentation and comparison with other weight estimation methods. We provide a comparison of different sampling strategies, including an effective heuristic strategy and a theoretically sound Markov chain Monte Carlo algorithm. Experimental results show that Bayesian predictive adaptation (BPA) outperforms re-estimation from scratch in conditions where adaptation data is scarce. Further analysis reveals that the improvements obtained are due to the greater stability of the estimation procedure. In addition, the proposed BPA framework has a much lower computational cost than raw re-estimation. | Improving translation quality stability using Bayesian predictive adaptation |
S0885230815000224 | This work develops a method of subspace-based direction of arrival (DOA) estimation that uses two proposed preprocesses. The method can be used in applications involving interactive robots to calculate the direction of a noise-contaminated signal in noisy environments. The proposed method can be divided into two parts: linear phase approximation and frequency bin selection. Linear phase approximation rectifies the phases of the two-channel signals that are affected by noise, and reconstructs the covariance matrix of the received signals according to the compensated phases using phase line regression. To increase the accuracy of the DOA result, a method of frequency bin selection based on eigenvalue decomposition (EVD) is utilized to detect and filter out the noisy frequency bins of the microphone signals. The proposed techniques are adopted in a method of subspace-based DOA estimation called multiple signal classification (MUSIC). Experimental results reveal that the mean estimation error obtained using the proposed method is reduced by 7.61° relative to the conventional MUSIC method. The proposed method is also compared with the covariance-based DOA method called the minimum variance distortionless response (MVDR), improving the mean estimation accuracy by 4.98° relative to the conventional MVDR method. The experimental results demonstrate that both subspace-based and covariance-based DOA algorithms with the proposed preprocessing outperform their conventional counterparts in detecting the direction of a signal in a noisy environment. | Subspace-based DOA with linear phase approximation and frequency bin selection preprocessing for interactive robots in noisy environments |
S0885230815000236 | This paper provides a survey of the state of the art in sound source localization in robotics. Notably, this context raises original constraints (e.g. embeddability, real-time operation, broadband environments, noise and reverberation) which are seldom taken into account simultaneously in acoustics or signal processing. A comprehensive review of recent robotics achievements is proposed, be they binaural or rooted in array processing techniques. Connections are highlighted with the underlying theory as well as with elements of the physiology and neurology of human hearing. | A survey on sound source localization in robotics: From binaural to array processing methods |
S0885230815000339 | How the speech production and perception systems evolved in humans still remains a mystery today. Previous research suggests that human auditory systems are able, and have possibly evolved, to preserve maximal information about the speaker's articulatory gestures. This paper attempts an initial step toward answering the complementary question of whether speakers’ articulatory mechanisms have also evolved to produce sounds that can be optimally discriminated by the listener's auditory system. To this end we explicitly model, using computational methods, the extent to which derived representations of “primitive movements” of speech articulation can be used to discriminate between broad phone categories. We extract interpretable spatio-temporal primitive movements as recurring patterns in a data matrix of human speech articulation, i.e., representing the trajectories of vocal tract articulators over time. To this end, we propose a weakly-supervised learning method that attempts to find a part-based representation of the data in terms of recurring basis trajectory units (or primitives) and their corresponding activations over time. For each phone interval, we then derive a feature representation that captures the co-occurrences between the activations of the various bases over different time-lags. We show that this feature, derived entirely from activations of these primitive movements, is able to achieve a greater discrimination relative to using conventional features on an interval-based phone classification task. We discuss the implications of these findings in furthering our understanding of speech signal representations and the links between speech production and perception systems. | Directly data-derived articulatory gesture-like representations retain discriminatory information about phone categories |
S0885230815000340 | This article investigates the use of statistical mapping techniques for the conversion of articulatory movements into audible speech with no restriction on the vocabulary, in the context of a silent speech interface driven by ultrasound and video imaging. As a baseline, we first evaluated the GMM-based mapping considering dynamic features, proposed by Toda et al. (2007) for voice conversion. Then, we proposed a ‘phonetically-informed’ version of this technique, based on full-covariance HMM. This approach aims (1) at modeling explicitly the articulatory timing for each phonetic class, and (2) at exploiting linguistic knowledge to regularize the problem of silent speech conversion. Both techniques were compared on continuous speech, for two French speakers (one male, one female). For modal speech, the HMM-based technique showed a lower spectral distortion (objective evaluation). However, perceptual tests (transcription and XAB discrimination tests) showed a better intelligibility of the GMM-based technique, probably related to its less fluctuant quality. For silent speech, a perceptual identification test revealed a better segmental intelligibility for the HMM-based technique on consonants. | Statistical conversion of silent articulation into audible speech using full-covariance HMM |
S0885230815000352 | The paper deals with the automatic analysis of real-life telephone conversations between an agent and a customer of a customer care service (ccs). The application domain is the public transportation system in Paris, and the purpose is to collect statistics about customer problems in order to monitor the service and decide priorities of intervention for improving user satisfaction. Of primary importance for the analysis is the detection of themes that are the object of customer problems. Themes are defined in the application requirements and are part of the application ontology that is implicit in the ccs documentation. Due to the variety of the customer population, the structure of conversations with an agent is unpredictable. A conversation may be about one or more themes. Theme mentions can be interleaved with mentions of facts that are irrelevant for the application purpose. Furthermore, in certain conversations theme mentions are localized in specific conversation segments, while in others they cannot be localized. As a consequence, approaches to feature extraction with and without mention localization are considered. Application-domain-relevant themes identified by an automatic procedure are expressed by specific sentences whose words are hypothesized by an automatic speech recognition (asr) system. The asr system is error prone: word error rates can be very high for many reasons, among them unpredictable background noise, speaker accent, and various types of speech disfluency. As the application task requires computing proportions of theme mentions, a sequential decision strategy is introduced in this paper for performing a survey of the large number of conversations made available in a given time period. The strategy has to sample the conversations to form a survey containing enough data analyzed with high accuracy so that proportions can be estimated reliably. Due to the unpredictable type of theme mentions, it is appropriate to consider methods for theme hypothesization based on global as well as local feature extraction. Two systems based on each type of feature extraction are considered by the strategy. One of the four methods is novel: it is based on a new definition of density of theme mentions and on the localization of high-density zones whose boundaries do not need to be precisely detected. The sequential decision strategy starts by grouping theme hypotheses into sets of different expected accuracy and coverage levels. For those sets whose accuracy can be improved with a consequent increase in coverage, a new system with new features is introduced. Its execution is triggered only when specific preconditions are met by the hypotheses generated by the four basic systems. Experimental results are provided on a corpus collected in the call center of the Paris transportation system known as ratp. The results show that surveys with high accuracy and coverage can be composed with the proposed strategy and systems. This makes it possible to apply a previously published proportion estimation approach that takes hypothesization errors into account. | Multiple topic identification in human/human conversations |
S0885230815000364 | This paper investigates some conditions under which polarized user appraisals, gathered throughout the course of a vocal interaction between a machine and a human, can be integrated into a reinforcement learning-based dialogue manager. More specifically, we discuss how this information can be cast into socially-inspired rewards for speeding up policy optimisation for both efficient task completion and user adaptation in an online learning setting. For this purpose, a potential-based reward shaping method is combined with a sample-efficient reinforcement learning algorithm to offer a principled framework to cope with these potentially noisy interim rewards. The proposed scheme will greatly facilitate the system's development by allowing the designer to teach the system through explicit positive/negative feedback given as hints about task progress in the early stage of training. At a later stage, the approach will be used as a way to ease the adaptation of the dialogue policy to specific user profiles. Experiments carried out using a state-of-the-art goal-oriented dialogue management framework, the Hidden Information State (HIS), support our claims in two configurations: firstly, with a user simulator in the tourist information domain (and thus simulated appraisals), and secondly, in the context of human–robot dialogue with real user trials. | Reinforcement-learning based dialogue system for human–robot interactions with socially-inspired rewards |
S0885230815000376 | This paper proposes an emotion transplantation method capable of modifying a synthetic speech model through the use of CSMAPLR adaptation in order to incorporate emotional information learned from a different speaker model while maintaining the identity of the original speaker as much as possible. The proposed method relies on learning both emotional and speaker identity information by means of their adaptation functions from an average voice model, and combining them into a single cascade transform capable of imbuing the desired emotion into the target speaker. The method is applied to the task of transplanting four emotions (anger, happiness, sadness and surprise) into 3 male and 3 female speakers and evaluated in a number of perceptual tests. The evaluations show that perceived naturalness for emotional text significantly favors the proposed transplanted emotional speech synthesis over traditional neutral speech synthesis, evidenced by a large increase in the perceived emotional strength of the synthesized utterances at a slight cost in speech quality. A final evaluation with a robotic laboratory assistant application shows that using emotional speech significantly increases students' satisfaction with the dialog system, demonstrating that the proposed emotion transplantation system provides benefits in real applications. | Emotion transplantation through adaptation in HMM-based speech synthesis |
S0885230815000388 | Autonomous human–robot interaction ultimately requires an artificial audition module that allows the robot to process and interpret a combination of verbal and non-verbal auditory inputs. A key component of such a module is the acoustic localization. The acoustic localization not only enables the robot to simultaneously localize multiple persons and auditory events of interest in the environment, but also provides input to auditory tasks such as speech enhancement and speech recognition. The use of microphone arrays in robots is an efficient and commonly applied approach to the localization problem. In this paper, moving away from simulated environments, we look at the acoustic localization under real-world conditions and limitations. Our approach proposes a series of enhancements, taking into account the imperfect frequency response of the array microphones and addressing the influence of the robot's shape and surface material. Motivated by the importance of the signal's phase information, we introduce a novel pre-processing step for enhancing the acoustic localization. Results show that the proposed approach improves the localization performance in joint noisy and reverberant conditions and allows a humanoid robot to locate multiple speakers in a real-world environment. | Robust speaker localization for real-world robots |
S0885230815000406 | This paper proposes a singing style control technique based on multiple regression hidden semi-Markov models (MRHSMMs) for changing singing styles and their intensities appearing in synthetic singing voices. In the proposed technique, singing styles and their intensities are represented by low-dimensional vectors called style vectors and are modeled in accordance with the assumption that mean parameters of acoustic models are given as multiple regressions of the style vectors. In the synthesis process, we can weaken or emphasize the intensities of singing styles by setting a desired style vector. In addition, the idea of pitch adaptive training is extended to the case of the MRHSMM to improve the modeling accuracy of pitch associated with musical notes. A novel vibrato modeling technique is also presented to extract vibrato parameters from singing voices that sometimes have unclear vibrato expressions. Subjective evaluations show that we can intuitively control singing styles and their intensities while maintaining the naturalness of synthetic singing voices comparable to the conventional HSMM-based singing voice synthesis. | HMM-based expressive singing voice synthesis with singing style control and robust pitch modeling |
S0885230815000418 | Text-to-speech synthesis systems have been widely studied for many languages. However, speech synthesis for the Arabic language has not made sufficient progress and is still in its early stages. Statistical parametric synthesis based on hidden Markov models has been the most commonly applied approach for Arabic. Recently, synthesized speech based on deep neural networks has been found to be as intelligible as the human voice. This paper describes a Text-To-Speech (TTS) synthesis system for Modern Standard Arabic based on the statistical parametric approach and Mel-cepstral coefficients. Deep neural networks have achieved state-of-the-art performance in a wide range of tasks, including speech synthesis. Our TTS system includes a diacritization system, which is very important for Arabic TTS applications and is also based on deep neural networks. In addition to the use of deep learning techniques, different methods are proposed to model the acoustic parameters in order to improve acoustic model accuracy. They are based on linguistic and acoustic characteristics (e.g. a letter-position-based diacritization system, a unit-type-based synthesis system, a diacritic-mark-based synthesis system) and on deep learning techniques (stacked generalization). Experimental results show that our diacritization system can generate diacritized text with high accuracy. As regards the speech synthesis system, the experimental results and subjective evaluation show that the proposed method can generate intelligible and natural speech. | Text-to-speech synthesis system with Arabic diacritic recognition system |
S088523081500042X | Phonological studies suggest that the typical subword units such as phones or phonemes used in automatic speech recognition systems can be decomposed into a set of features based on the articulators used to produce the sound. Most of the current approaches to integrate articulatory feature (AF) representations into an automatic speech recognition (ASR) system are based on a deterministic knowledge-based phoneme-to-AF relationship. In this paper, we propose a novel two stage approach in the framework of probabilistic lexical modeling to integrate AF representations into an ASR system. In the first stage, the relationship between acoustic feature observations and various AFs is modeled. In the second stage, a probabilistic relationship between subword units and AFs is learned using transcribed speech data. Our studies on a continuous speech recognition task show that the proposed approach effectively integrates AFs into an ASR system. Furthermore, the studies show that either phonemes or graphemes can be used as subword units. Analysis of the probabilistic relationship captured by the parameters has shown that the approach is capable of adapting the knowledge-based phoneme-to-AF representations using speech data; and allows different AFs to evolve asynchronously. | Articulatory feature based continuous speech recognition using probabilistic lexical modeling |
S0885230815000431 | In this study we explore opinion summarization on spontaneous conversations using unsupervised and supervised approaches. We annotate a phone conversation corpus with reference extractive and abstractive summaries for a speaker's opinion on a given topic. We investigate two methods: the first is an unsupervised graph-based method, which incorporates topic and sentiment information, as well as sentence-to-sentence relations extracted based on dialogue structure; the second is a supervised method that casts the summarization problem as a classification problem. Furthermore, we investigate the use of pronoun resolution in this summarization task. We develop various features based on pronoun coreference and incorporate them in the supervised opinion summarization system. Our experimental results show that both the graph-based method and the supervised method outperform the baseline approach, and the pronoun related features can help to generate better summaries. | Opinion summarization on spontaneous conversations |
S0885230815000443 | This paper describes an optimal algorithm using continuous-state hidden Markov models for solving the HMS decoding problem: recovering an underlying sequence of phonetic units from measurements of smoothly varying acoustic features, thus inverting the speech generation process described by Holmes, Mattingly and Shearme in a well-known paper (Speech synthesis by rule. Lang. Speech 7 (1964)). | Application of continuous state Hidden Markov Models to a classical problem in speech recognition |
S0885230815000455 | The steered response power phase transform (SRP-PHAT) is one of the widely used algorithms for sound source localization. Since it must examine a large number of candidate sound source locations, conventional SRP-PHAT approaches may not be used in real time. To overcome this problem, an effort was made previously to parallelize the SRP-PHAT on graphics processing units (GPUs). However, the full capacities of the GPU were not exploited since on-chip memory usage was not addressed. In this paper, we propose GPU-based parallel algorithms of the SRP-PHAT both in the frequency domain and time domain. The proposed methods optimize the memory access patterns of the SRP-PHAT and efficiently use the on-chip memory. As a result, the proposed methods demonstrate a speedup of 1276 times in the frequency domain and 80 times in the time domain compared to CPU-based algorithms, and 1.5 times in the frequency domain and 6 times in the time domain compared to conventional GPU-based methods. | Parallel SRP-PHAT for GPUs |
S0885230815000467 | We propose a practical, feature-level and score-level fusion approach by combining acoustic and estimated articulatory information for both text independent and text dependent speaker verification. From a practical point of view, we study how to improve speaker verification performance by combining dynamic articulatory information with the conventional acoustic features. On text independent speaker verification, we find that concatenating articulatory features obtained from measured speech production data with conventional Mel-frequency cepstral coefficients (MFCCs) improves the performance dramatically. However, since directly measuring articulatory data is not feasible in many real world applications, we also experiment with estimated articulatory features obtained through acoustic-to-articulatory inversion. We explore both feature level and score level fusion methods and find that the overall system performance is significantly enhanced even with estimated articulatory features. Such a performance boost could be due to the inter-speaker variation information embedded in the estimated articulatory features. Since the dynamics of articulation contain important information, we included inverted articulatory trajectories in text dependent speaker verification. We demonstrate that the articulatory constraints introduced by inverted articulatory features help to reject wrong password trials and improve the performance after score level fusion. We evaluate the proposed methods on the X-ray Microbeam database and the RSR 2015 database, respectively, for the aforementioned two tasks. Experimental results show that we achieve more than 15% relative equal error rate reduction for both speaker verification tasks. | Speaker verification based on the fusion of speech acoustics and inverted articulatory signals |
S0885230815000546 | Articulatory data can nowadays be obtained using a wide range of techniques, with a notable emphasis on imaging modalities such as ultrasound and real-time magnetic resonance, resulting in large amounts of image data. One of the major challenges posed by these large datasets concerns how they can be efficiently analysed to extract relevant information to support speech production studies. Traditional approaches, including the superposition of vocal tract profiles, provide only a qualitative characterisation of notable properties and differences. While providing valuable information, these methods are rather inefficient and inherently subjective. Therefore, analysis must evolve towards a more automated, replicable and quantitative approach. To address these issues we propose the use of objective measures to compare the configurations assumed by the vocal tract during the production of different sounds. The proposed framework provides quantitative normalised data regarding differences covering meaningful regions under the influence of various articulators. An important part of the framework is the visual representation of the data, proposed to support analysis, and depicting the differences found and corresponding direction of change. The normalised nature of the computed data allows comparison among different sounds and speakers in a common representation. Representative application examples, concerning the articulatory characterisation of European Portuguese vowels, are presented to illustrate the capabilities of the proposed framework, both for static configurations and the assessment of dynamic aspects during speech production. | Quantitative systematic analysis of vocal tract data |
S0885230815000558 | Hybrid deep neural network–hidden Markov model (DNN-HMM) systems have become the state-of-the-art in automatic speech recognition. In this paper we experiment with DNN-HMM phone recognition systems that use measured articulatory information. Deep neural networks are used both to compute phone posterior probabilities and to perform acoustic-to-articulatory mapping (AAM). The AAM processes we propose are based on deep representations of the acoustic and the articulatory domains. Such representations allow us to: (i) create different pre-training configurations of the DNNs that perform AAM; (ii) perform AAM on a transformed (through DNN autoencoders) articulatory feature (AF) space that captures strong statistical dependencies between articulators. Traditionally, neural networks that approximate the AAM are used to generate AFs that are appended to the observation vector of the speech recognition system. Here we also study a novel approach (AAM-based pretraining) where a DNN performing the AAM is instead used to pretrain the DNN that computes the phone posteriors. Evaluations on both the MOCHA-TIMIT msak0 and the mngu0 datasets show that: (i) the recovered AFs reduce phone error rate (PER) in both clean and noisy speech conditions, with a maximum 10.1% relative phone error reduction in clean speech conditions obtained when autoencoder-transformed AFs are used; (ii) AAM-based pretraining could be a viable strategy to exploit the available small articulatory datasets to improve acoustic models trained on large acoustic-only datasets. | Integrating articulatory data in deep neural network-based acoustic modeling |
S0885230815000601 | In this paper, we propose a single-channel speech enhancement method based on the combination of the wavelet packet transform and an improved version of principal component analysis (PCA). Our method combines the ability of PCA to decorrelate coefficients by extracting a linear relationship with the ability of wavelet packet analysis to derive feature vectors used for speech enhancement. This allows us to apply a convenient shrinkage function to these new coefficients, removing the noise without degrading the speech. Then, the enhanced speech obtained by the inverse wavelet packet transform is decomposed into three subspaces: low-rank, sparse, and remaining noise components. Finally, we calculate the components as a segregation problem. The performance evaluation shows that our method provides higher noise reduction and lower signal distortion, even in highly noisy conditions, without introducing artifacts. | Speech enhancement based on wavelet packet of an improved principal component analysis |
S0885230815000613 | Several modification algorithms that alter natural or synthetic speech with the goal of improving intelligibility in noise have been proposed recently. A key requirement of many modification techniques is the ability to predict intelligibility, both offline during algorithm development, and online, in order to determine the optimal modification for the current noise context. While existing objective intelligibility metrics (OIMs) have good predictive power for unmodified natural speech in stationary and fluctuating noise, little is known about their effectiveness for other forms of speech. The current study evaluated how well seven OIMs predict listener responses in three large datasets of modified and synthetic speech which together represent 396 combinations of speech modification, masker type and signal-to-noise ratio. The chief finding is a clear reduction in predictive power for most OIMs when faced with modified and synthetic speech. Modifications introducing durational changes are particularly harmful to intelligibility predictors. OIMs that measure masked audibility tend to over-estimate intelligibility in the presence of fluctuating maskers relative to stationary maskers, while OIMs that estimate the distortion caused by the masker to a clean speech prototype exhibit the reverse pattern. | Evaluating the predictions of objective intelligibility metrics for modified and synthetic speech |
S0885230815000625 | Probabilistic linear discriminant analysis (PLDA) with i-vectors as features has become one of the state-of-the-art methods in speaker verification. Discriminative training (DT) has proven to be effective for improving PLDA's performance but suffers more from data insufficiency than generative training (GT). In this paper, we achieve robustness against data insufficiency in DT in two ways. First, we compensate for statistical dependencies in the training data by adjusting the weights of the training trials in order for the training loss to be an accurate estimate of the expected loss. Second, we propose three constrained DT schemes, among which the best was a discriminatively trained transformation of the PLDA score function having four parameters. Experiments on the male telephone part of the NIST SRE 2010 confirmed the effectiveness of our proposed techniques. For various numbers of training speakers, the combination of weight adjustment and the constrained DT scheme gave between 7% and 19% relative improvements in Ĉllr over GT followed by score calibration. Compared to another baseline, DT of all the parameters of the PLDA score function, the improvements were larger. | Robust discriminative training against data insufficiency in PLDA-based speaker verification |
S0885230815000637 | In most wavelet-based speech enhancement methods, it is assumed that the wavelet coefficients are independent of each other. However, investigating the joint histogram of the wavelet coefficients reveals some dependencies among them. In this regard, Sendur proposed a probability density function (pdf) that models the relation between a wavelet coefficient of an image signal and its parent. This pdf was then used to derive a bivariate shrinkage function which exploits the dependencies between child–parent wavelet coefficients of image signals to enhance noisy images. In this paper, we intend to find wavelet structures which are more suitable for speech enhancement based on bivariate shrinkage. We show that the dependencies between child–parent wavelet coefficients can only be modeled rather easily up to two stages of the two-channel discrete wavelet transform using Sendur's pdf. However, the bivariate shrinkage function works better in a three-channel redundant wavelet filter-bank with dilation 2, since it has a joint distribution which is similar to Sendur's pdf up to the fourth stage of decomposition for speech signals. Furthermore, we show that the three-channel higher-density wavelet obtained by eliminating the downsampling part of the third channel is more suitable for the bivariate shrinkage function when it is utilized for speech enhancement. Appropriate filter values for the three-channel higher-density wavelet filter-bank are then found. Moreover, we propose a four-channel double-density discrete wavelet filter-bank which leads to some improvement in speech enhancement results. Since the probability of speech presence is higher at lower frequencies, we suggest level-dependent bivariate shrinkage. Finally, Sendur's bivariate shrinkage is optimized for speech enhancement and new methods are proposed by combining former successful methods with the bivariate shrinkage function. | New features for speech enhancement using bivariate shrinkage based on redundant wavelet filter-banks |
S0885230815000649 | Speech that has been distorted by introducing spectral or temporal gaps is still perceived as continuous and complete by human listeners, so long as the gaps are filled with additive noise of sufficient intensity. When such perceptual restoration occurs, the speech is also more intelligible compared to the case in which noise has not been added in the gaps. This observation has motivated so-called ‘missing data’ systems for automatic speech recognition (ASR), but there have been few attempts to determine whether such systems are a good model of perceptual restoration in human listeners. Accordingly, the current paper evaluates missing data ASR in a perceptual restoration task. We evaluated two systems that use a new approach to bounded marginalisation in the cepstral domain, and a bounded conditional mean imputation method. Both methods model available speech information as a clean-speech posterior distribution that is subsequently passed to an ASR system. The proposed missing data ASR systems were evaluated using distorted speech, in which spectro-temporal gaps were optionally filled with additive noise. Speech recognition performance of the proposed systems was compared against a baseline ASR system, and with human speech recognition performance on the same task. We conclude that missing data methods improve speech recognition performance in a manner that is consistent with perceptual restoration in human listeners. | Comparing human and automatic speech recognition in a perceptual restoration experiment |
S0885230815000650 | This paper describes the ALISA tool, which implements a lightly supervised method for sentence-level alignment of speech with imperfect transcripts. Its intended use is to enable the creation of new speech corpora from a multitude of resources in a language-independent fashion, thus avoiding the need to record or transcribe speech data. The method is designed to require minimum user intervention and expert knowledge, and it is able to align data in languages which employ alphabetic scripts. It comprises a GMM-based voice activity detector and a highly constrained grapheme-based speech aligner. The method is evaluated objectively against a gold standard segmentation and transcription, as well as subjectively through building and testing speech synthesis systems from the retrieved data. Results show that on average, 70% of the original data is correctly aligned, with a word error rate of less than 0.5%. In one case, subjective listening tests show a statistically significant preference for voices built on the gold transcript, but the difference is small; in the other tests, no statistically significant differences are found between systems built from the fully supervised training data and those built using the proposed method. | ALISA: An automatic lightly supervised speech segmentation and alignment tool
S0885230815000662 | Recent years have seen rapid growth in the deployment of statistical methods for computational language and speech processing. The current popularity of such methods can be traced to the convergence of several factors, including the increasing amount of data now accessible, sustained advances in computing power and storage capabilities, and ongoing improvements in machine learning algorithms. The purpose of this contribution is to review the state of the art in both areas, point out the top trends in statistical modelling across a wide range of problems, and identify their most salient characteristics. The paper concludes with some prognostications regarding the likely impact on the field going forward. | State of the art in statistical methods for language and speech processing |
S0885230815000674 | In this paper, a new statistical method for detecting bilabial closure gestures is proposed based on articulatory data. This can be surprisingly challenging, since mere proximity of the lips does not imply their involvement in a directed phonological goal. This segment-based bilabial closure detection scheme uses principal differential analysis (PDA) to extract articulatory gestures. The dynamic patterns of the tract variables (TVs) lip aperture, lip protrusion, and their derivatives, are captured with PDA and used to detect and quantify bilabial closure gestures. The proposed feature sets, which are optimized using sequential forward floating selection (SFFS), are combined and used in binary classification. Experimental results using the articulatory database MOCHA-TIMIT show the effectiveness of the proposed method demonstrating promising performance in terms of high classification accuracy (95%), sensitivity (95%), and specificity (95%). | Principal differential analysis for detection of bilabial closure gestures from articulatory data |
S0885230815000686 | Spoken language, especially conversational speech, is characterized by great variability in word pronunciation, including many variants that differ grossly from dictionary prototypes. This is one factor in the poor performance of automatic speech recognizers on conversational speech, and it has been very difficult to mitigate in traditional phone-based approaches to speech recognition. An alternative approach, which has been studied by ourselves and others, is one based on sub-phonetic features rather than phones. In such an approach, a word's pronunciation is represented as multiple streams of phonological features rather than a single stream of phones. Features may correspond to the positions of the speech articulators, such as the lips and tongue, or may be more abstract categories such as manner and place. This article reviews our work on a particular type of articulatory feature-based pronunciation model. The model allows for asynchrony between features, as well as per-feature substitutions, making it more natural to account for many pronunciation changes that are difficult to handle with phone-based models. Such models can be efficiently represented as dynamic Bayesian networks. The feature-based models improve significantly over phone-based counterparts in terms of frame perplexity and lexical access accuracy. The remainder of the article discusses related work and future directions. | Articulatory feature-based pronunciation modeling
S0885230815000698 | For summary readers, coherence is no less important than informativeness and is ultimately measured in human terms. Taking a human cognitive perspective, this paper aims to generate coherent summaries of narrative text by developing a cognitive model. To model coherence with a cognitive background, we simulate long-term human memory by building a semantic network from a large corpus such as Wikipedia and design algorithms to account for the information flow among different compartments of human memory. The proposition is the basic processing unit for the model. After processing a whole narrative in a cyclic way, our model supplies information to be used for extractive summarization at the proposition level. Experimental results on two kinds of narrative text, newswire articles and fairy tales, show the superiority of our proposed model to several representative and popular methods. | Coherent narrative summarization with a cognitive model
S0885230815000704 | Spectral imputation and classifier modification are the two main missing data approaches for robust automatic speech recognition (ASR). Despite their potential, little attention has been paid to classifier modification techniques. In this paper, we show that transferring bounded marginalization, a classifier modification method, from the spectral to the cepstral domain is beneficial for robust ASR. We also propose improvements to this transfer toward better performance. Two such techniques are presented. The first approach still does not need the training of any extra model. It benefits from an observed characteristic of cepstral features and raises the accuracy of the previously proposed method to a level comparable with that of a classic imputation method. The second technique combines our originally proposed method with an imputation technique but replaces spectral reconstruction with a simpler and faster estimation of the possible range of missing components. We show that the resulting method improves the accuracies of either of the two combined methods. The proposed techniques also show good robustness when implemented with an inaccurate spectrographic mask. | Bounded cepstral marginalization of missing data for robust speech recognition
S0885230815000716 | Discriminative criteria have been widely used for training acoustic models for automatic speech recognition (ASR). Many discriminative criteria have been proposed including maximum mutual information (MMI), minimum phone error (MPE), and boosted MMI (BMMI). Discriminative training is known to provide significant performance gains over conventional maximum-likelihood (ML) training. However, as discriminative criteria aim at direct minimization of the classification error, they strongly rely on having accurate reference labels. Errors in the reference labels directly affect the performance. Recently, the differenced MMI (dMMI) criterion has been proposed for generalizing conventional criteria such as BMMI and MPE. dMMI can approach BMMI or MPE if its hyper-parameters are properly set. Moreover, dMMI introduces intermediate criteria that can be interpreted as smoothed versions of BMMI or MPE. These smoothed criteria are robust to errors in the reference labels. In this paper, we demonstrate the effect of dMMI on unsupervised speaker adaptation where the reference labels are estimated from a first recognition pass and thus inevitably contain errors. In particular, we introduce dMMI-based linear regression (dMMI-LR) adaptation and demonstrate significant gains in performance compared with MLLR and BMMI-LR in two large vocabulary lecture recognition tasks. | Differenced maximum mutual information criterion for robust unsupervised acoustic model adaptation |
S0885230815000728 | A multi-component emotion model is proposed to describe affective states comprehensively and provide more detail about emotion for the application of expressive speech synthesis. Four types of components from different perspectives are involved: cognitive appraisal, psychological feeling, physical response and utterance manner. The interactions among them are also considered, by which the four components constitute a multi-layered structure. Based on this description model, a detection method is proposed to extract affective states from text, as this is the requisite first step for the automatic generation of expressive synthetic speech. The deep stacking network is adopted and integrated with the hypothesized production process of the four components, by which the intermediate layers of the network become visible and explicable. In addition, the affective states at document level and paragraph level are regarded as contextual features to extend the information available for emotion detection at sentence level. The effectiveness of the proposed method is validated through experiments. At sentence level, an F-value of 0.59 is achieved for predictions of utterance manner. | Detecting affective states from text based on a multi-component emotion model
S088523081500073X | Non-verbal communication involves the encoding, transmission and decoding of non-lexical cues, realized through vocal (e.g. prosody) or visual (e.g. gaze, body language) channels during conversation. These cues serve to maintain conversational flow, express emotions, and mark personality and interpersonal attitude. In particular, non-verbal cues in speech such as paralanguage and non-verbal vocal events (e.g. laughter, sighs, cries) are used to nuance meaning and convey emotions, mood and attitude. For instance, laughter is associated with affective expressions while fillers (e.g. um, ah) are used to hold the floor during a conversation. In this paper we present an automatic non-verbal vocal event detection system focusing on the detection of laughter and fillers. We extend our system presented at the Interspeech 2013 Social Signals Sub-challenge (the winning entry in the challenge) for frame-wise event detection and test several schemes for incorporating local context during detection. Specifically, we incorporate context at two separate levels in our system: (i) the raw frame-wise features and (ii) the output decisions. Furthermore, our system processes the output probabilities based on a few heuristic rules in order to reduce erroneous frame-based predictions. Our overall system achieves an Area Under the Receiver Operating Characteristic curve of 95.3% for detecting laughter and 90.4% for fillers on the test set drawn from the data specifications of the Interspeech 2013 Social Signals Sub-challenge. We perform further analysis to understand the interrelation between the features and the obtained results. Specifically, we conduct a feature sensitivity analysis and correlate it with each feature's stand-alone performance. The observations suggest that the trained system is more sensitive to features carrying higher discriminability, with implications for better system design. | Detecting paralinguistic events in audio stream using context in features and probabilistic decisions
S0885230815000790 | In this paper, a systematic review of relevant published studies on computer-based speech therapy systems or virtual speech therapists (VSTs) for people with speech disorders is presented. We structured this work based on the PRISMA framework. The advancements in speech technology and the increased number of successful real-world projects in this area point to a thriving market for VSTs in the near future; however, there is no standard roadmap to pinpoint how these systems should be designed, implemented, customized, and evaluated with respect to the various speech disorders. The focus of this systematic review is on articulation and phonological impairments. This systematic review addresses three research questions: what types of articulation and phonological disorders do VSTs address, how effective are virtual speech therapists, and what technological elements have been utilized in VST projects. The reviewed papers were sourced from comprehensive digital libraries, and were published in English between 2004 and 2014. All the selected studies involve computer-based intervention in the form of a VST regarding articulation or phonological impairments, followed by qualitative and/or quantitative assessments. To generate this review, we encountered several challenges. Studies were heterogeneous in terms of disorders, type and frequency of therapy, sample size, level of functionality, etc. Thus, overall conclusions were difficult to draw. Commonly, publications with rigorous study designs did not describe the technical elements used in their VST, and publications that did describe technical elements had poor study designs. Despite this heterogeneity, the selected studies reported the effectiveness of computers as a more engaging type of intervention with more tools to enrich intervention programs, particularly for children; however, it was emphasized that virtual therapists should not drive the intervention but must be used as a medium to deliver the intervention planned by speech-language pathologists. Based on the reviewed papers, VSTs are significantly effective in training people with a variety of speech disorders; however, it cannot be claimed that a consensus exists on the superiority of VSTs over speech-language pathologists regarding rehabilitation outcomes. Our review shows that hearing-impaired cases were the most frequently addressed disorder in the reviewed studies. Automatic speech recognition, speech corpora, and speech synthesizers were the most popular technologies used in the VSTs. | Systematic review of virtual speech therapists for speech disorders
S0885230815000807 | Stridence, a form of speech disorder in the Serbian language, is manifested by the appearance of an intense and sharp whistling. Its acoustic characteristics significantly affect the quality of verbal communication. Although various forms of stridence manifestation are successfully diagnosed by speech therapists, there is a need for automatic detection and evaluation of stridence. In this paper, an algorithm for stridence detection using Patterson's auditory model is presented. The algorithm consists of three processing stages. In the first stage, spectral analysis and masking effects are applied using Patterson's auditory model. In the second stage, a contour of spectral peaks that best fits the characteristic features of stridence is selected in the time-frequency (TF) representation of the signal obtained by Patterson's auditory model. In the third stage, hypothesis testing is performed with three decisions: D0 (no stridence), D1 (stridence), and D2 (unable to decide). The reliability of stridence detection is tested on a speech corpus of 16 speakers without stridence (with correct speech), 16 speakers without stridence but with some other speech sound disorders, and 16 speakers with stridence. Test results show high correspondence between subjective measures and automatic detection. | Automatic detection of stridence in speech using the auditory model
S0885230815000856 | Many under-resourced languages such as Arabic diglossia or Hindi sub-dialects do not have sufficient in-domain text to build strong language models for use with automatic speech recognition (ASR). Semi-supervised language modeling uses a speech-to-text system to produce automatic transcripts from a large amount of in-domain audio typically to augment a small amount of manual transcripts. In contrast to the success of semi-supervised acoustic modeling, conventional language modeling techniques have provided only modest gains. This paper first explains the limitations of back-off language models due to their dependence on long-span n-grams, which are difficult to accurately estimate from automatic transcripts. From this analysis, we motivate a more robust use of the automatic counts as a prior over the estimated parameters of a log-linear language model. We demonstrate consistent gains for semi-supervised language models across a range of low-resource conditions. | Getting more from automatic transcripts for semi-supervised language modeling |
S0885230815000868 | In speech enhancement, Gaussian Mixture Models (GMMs) can be used to model the Probability Density Function (PDF) of the periodograms of speech and different noise types. These GMMs are created by applying the Expectation Maximization (EM) algorithm to large datasets of speech and noise periodograms, thereby classifying them into a small number of clusters whose centroid periodograms are the mean vectors of the GMMs. These GMMs are used to realize Maximum A-Posteriori (MAP) estimation of the speech and noise periodograms present in a noisy speech observation. To realize the MAP estimation, the use of a constrained optimization algorithm is proposed, which attains relatively good enhancement results at high processing times. Due to the constraints in the optimization algorithm, incorrect estimation results may arise from possible local maxima. A simple analytic MAP algorithm is proposed to attain global maxima in lower computation times. With the new method the complicated MAP formula is simplified as much as possible so that the maxima are found by solving a set of equations rather than through the conventional numerical methods used in optimization. This method results in excellent speech enhancement with a relatively short processing time. | Speech enhancement using Maximum A-Posteriori and Gaussian Mixture Models for speech and noise Periodogram estimation
S088523081500087X | Due to the increasingly aging population in modern society and the proliferation of smart devices, there is a need to enhance speech recognition on smart devices in order to make information as easily accessible to the elderly as it is to the younger population. In general, speech recognition systems are optimized for an average adult's voice and tend to exhibit a lower accuracy rate when recognizing an elderly person's voice, due to the effects of speech articulation and speaking style. Additional costs are bound to be incurred when adding modifications to current speech recognition systems for better speech recognition among elderly users. Thus, using a preprocessing application on a smart device can not only deliver better speech recognition but also substantially reduce any added costs. Audio samples of 50 words uttered by 80 elderly and young adults were collected and comparatively analyzed. The speech patterns of the elderly show a slower speech rate with longer inter-syllabic silence length and slightly lower speech intelligibility. The speech recognition rate for elderly adults could be improved by increasing the speech rate (a 1.5% increase in accuracy), eliminating silence periods (another 4.2% increase), and boosting the energy of the formant frequency bands (a 6% boost in accuracy). After all the preprocessing, a 12% increase in the accuracy of elderly speech recognition was achieved. Through this study, we show that speech recognition of elderly voices can be improved by modifying specific aspects of differences in speech articulation and speaking style. In the future, we will conduct studies on methods that can precisely measure and adjust speech rate and find additional factors that impact intelligibility. | Preprocessing for elderly speech recognition of smart devices
S0885230815000923 | Child engagement is defined as the interaction of a child with his/her environment in a contextually appropriate manner. Engagement behavior in children is linked to socio-emotional and cognitive state assessment with enhanced engagement identified with improved skills. A vast majority of studies however rely solely, and often implicitly, on subjective perceptual measures of engagement. Access to automatic quantification could assist researchers/clinicians to objectively interpret engagement with respect to a target behavior or condition, and furthermore inform mechanisms for improving engagement in various settings. In this paper, we present an engagement prediction system based exclusively on vocal cues observed during structured interaction between a child and a psychologist involving several tasks. Specifically, we derive prosodic cues that capture engagement levels across the various tasks. Our experiments suggest that a child's engagement is reflected not only in the vocalizations, but also in the speech of the interacting psychologist. Moreover, we show that prosodic cues are informative of the engagement phenomena not only as characterized over the entire task (i.e., global cues), but also in short term patterns (i.e., local cues). We perform a classification experiment assigning the engagement of a child into three discrete levels achieving an unweighted average recall of 55.8% (chance is 33.3%). While the systems using global cues and local level cues are each statistically significant in predicting engagement, we obtain the best results after fusing these two components. We perform further analysis of the cues at local and global levels to achieve insights linking specific prosodic patterns to the engagement phenomenon. We observe that while the performance of our model varies with task setting and interacting psychologist, there exist universal prosodic patterns reflective of engagement. | Analysis of engagement behavior in children during dyadic interactions using prosodic cues
S0885230815000935 | The field of Cross-Language Information Retrieval draws on techniques close to both the Machine Translation and Information Retrieval fields, although in a context with characteristics of its own. The present study seeks to widen our knowledge about the effectiveness and applicability to that field of non-classical translation mechanisms that work at the character n-gram level. For the purpose of this study, an n-gram based system of this type has been developed. This system requires only a bilingual machine-readable dictionary of n-grams, automatically generated from parallel corpora, which serves to translate queries previously n-grammed in the source language. n-Gramming is then used as an approximate string matching technique to perform monolingual text retrieval on the set of n-grammed documents in the target language. The tests for this work have been performed on CLEF collections for seven European languages, taking English as the target language. After an initial tuning phase to analyze the most effective way of applying it, the results obtained, close to the upper baseline, not only confirm the consistency across languages of this kind of character n-gram based approach, but also constitute further proof of its validity and applicability, these not being tied to a given implementation. | On the feasibility of character n-grams pseudo-translation for Cross-Language Information Retrieval tasks
S0885230815000947 | In this paper, automatic assessment models are developed for two perceptual variables: speech intelligibility and voice quality. The models are developed and tested on a corpus of Dutch tracheoesophageal (TE) speakers. In this corpus, each speaker read a text passage of approximately 300 syllables and two speech therapists provided consensus scores for the two perceptual variables. Model accuracy and stability are investigated as a function of the amount of speech that is made available for speaker assessment (clinical setting). Five sets of automatically generated acoustic-phonetic speaker features are employed as model inputs. In Part I, models taking complete feature sets as inputs are compared to models taking only the features which are expected to have sufficient support in the speech available for assessment. In Part II, the impact of phonetic content and stimulus length on the computer-generated scores is investigated. Our general finding is that a text encompassing circa 100 syllables is long enough to achieve close to asymptotic accuracy. | Computing scores of voice quality and speech intelligibility in tracheoesophageal speech for speech stimuli of varying lengths |
S0885230815000959 | Language transfer creates a challenge for Chinese (L1) speakers in acquiring English (L2) rhythm. This appears to be a widely encountered difficulty among foreign learners of English, and is a major obstacle in acquiring a near-native oral proficiency. This paper presents a system named MusicSpeak, which strives to capitalize on musical rhythm for prosodic training in second language acquisition. This is one of the first efforts that develop an automatic procedure which can be applied to arbitrary English sentences, to cast rhythmic patterns in speech into rhythmic patterns in music. Learners can practice by speaking in synchrony with the musical rhythm. Evaluation results suggest that after practice, the learners’ speech generally achieves higher durational variability and better approximates stress-timed rhythm. | Capitalizing on musical rhythm for prosodic training in computer-aided language learning |
S0885230815000960 | Automatic speech recognition applications can benefit from a confidence measure (CM) to predict the reliability of the output. Previous works showed that a word-dependent naïve Bayes (NB) classifier outperforms the conventional word posterior probability as a CM. However, a discriminative formulation usually renders improved performance due to the available training techniques. Taking this into account, we propose a logistic regression (LR) classifier defined with simple input functions to approximate to the NB behaviour. Additionally, as a main contribution, we propose to adapt the CM to the speaker in cases in which it is possible to identify the speakers, such as online lecture repositories. The experiments have shown that speaker-adapted models outperform their non-adapted counterparts on two difficult tasks from English (videoLectures.net) and Spanish (poliMedia) educational lectures. They have also shown that the NB model is clearly superseded by the proposed LR classifier. | Speaker-adapted confidence measures for speech recognition of video lectures |
S0885230815000972 | Incremental dialogue systems are often perceived as more responsive and natural because they are able to address phenomena of turn-taking and overlapping speech, such as backchannels or barge-ins. Previous work in this area has often identified distinctive prosodic features, or features relating to syntactic or semantic completeness, as marking appropriate places of turn-taking. In a separate strand of work, psycholinguistic studies have established a connection between information density and prominence in language—the less expected a linguistic unit is in a particular context, the more likely it is to be linguistically marked. This has been observed across linguistic levels, including the prosodic, which plays an important role in predicting overlapping speech. In this article, we explore the hypothesis that information density (ID) also plays a role in turn-taking. Specifically, we aim to show that humans are sensitive to the peaks and troughs of information density in speech, and that overlapping speech at ID troughs is perceived as more acceptable than overlaps at ID peaks. To test our hypothesis, we collect human ratings for three models of generating overlapping speech based on features of: (1) prosody and semantic or syntactic completeness, (2) information density, and (3) both types of information. Results show that over 50% of users preferred the version using both types of features, followed by a preference for information density features alone. This indicates a clear human sensitivity to the effects of information density in spoken language and provides a strong motivation to adopt this metric for the design, development and evaluation of turn-taking modules in spoken and incremental dialogue systems. | Information density and overlap in spoken dialogue |
S0885230815001072 | In this paper, we investigate an ensemble of deep neural networks (DNNs) using an acoustic environment classification (AEC) technique for statistical model-based voice activity detection (VAD). From an investigation of statistical model-based VAD, it is known that the traditional decision rule is based on the geometric mean of the likelihood ratio or on a support vector machine (SVM), a shallow model with zero or one hidden layer. Since shallow models cannot take advantage of the diversity of the feature space distribution, in the training step we build multiple DNNs according to the different noise types by employing the parameters of the statistical model-based VAD algorithm. In addition, a separate DNN is designed for the AEC algorithm in order to choose the best DNN for each noise. In the on-line noise-aware VAD step, AEC is first performed on a frame-by-frame basis using the separate DNN, so that the a posteriori probabilities identifying the noise are obtained. Once these probabilities are obtained for each noise, the environmental knowledge allows us to combine the speech presence probabilities derived from the ensemble of DNNs trained for the individual noise types. Our approach to VAD was evaluated in terms of objective measures and showed significant improvement over the conventional algorithm. | Ensemble of deep neural networks using acoustic environment classification for statistical model-based voice activity detection
S0885230815001084 | Multi-document summarization (MDS) is becoming a crucial task in natural language processing. MDS aims to condense the most important information from a set of documents into a brief summary. Most existing extractive multi-document summarization methods employ different sentence selection approaches to obtain the summary as a subset of sentences from the given document set. The ability of weighted hierarchical archetypal analysis to select “the best of the best” summary sentences motivates us to use this method in our solution to multi-document summarization tasks. In this paper, we propose a new framework for various multi-document summarization tasks based on weighted hierarchical archetypal analysis. The paper demonstrates how four variant summarization tasks, including general, query-focused, update, and comparative summarization, can be modeled as different versions derived from the proposed framework. Experiments on summarization data sets (DUC04-07, TAC08) are conducted to demonstrate the efficiency and effectiveness of our framework for all four kinds of multi-document summarization tasks. | Weighted hierarchical archetypal analysis for multi-document summarization
S0885230815001096 | This paper proposes a new probabilistic synchronous context-free grammar model for statistical machine translation. The model labels nonterminals with classes of boundary words on the target side of aligned phrase pairs. Labeling of the rules is performed with coarse-grained and fine-grained nonterminals using POS tags and word clusters trained on the target language corpus. Considering the large size of the proposed model due to the diversity of nonterminals, we have also proposed a novel approach for filtered rule extraction based on the alignment pattern of phrase pairs. Using limited patterns of rules, the extraction of hierarchical rules is restricted to phrase pairs that are decomposable into two aligned subphrases. The proposed filtered rule extraction decreases the model size and the decoding time considerably with no significant impact on translation quality. Using BLEU as a metric, the proposed model achieved a notable improvement over the state-of-the-art hierarchical phrase-based model in translation from Persian, French and Spanish to English. The approach is applicable to all languages, even under-resourced ones with no linguistic tools. | Phrase-boundary model for statistical machine translation
S0885230815001102 | We review the impact of word representations and classification methods on the task of theme identification in telephone conversation services with highly imperfect automatic transcriptions. We first compare two word-based representations: the classical Term Frequency-Inverse Document Frequency with Gini purity criteria (TF-IDF-Gini) method and the latent Dirichlet allocation (LDA) approach. We then introduce a classification method that takes advantage of the LDA topic space representation, highlighted as the best word representation. To do so, two assumptions about topic representation led us to choose a Gaussian Process (GP) based method. Its performance is compared with a classical Support Vector Machine (SVM) classification method. Experiments showed that the GP approach is a better solution to deal with the multiple-theme complexity of a dialogue, regardless of the conditions studied (manual or automatic transcriptions) (Morchid et al., 2014). In order to better understand the results obtained using different word representation methods and classification approaches, we then discuss the impact of discriminative and non-discriminative words extracted by both word representation methods in terms of transcription accuracy (Morchid et al., 2014). Finally, we propose a novel study that evaluates the impact of the Word Error Rate (WER) on the LDA topic space learning process as well as on the theme identification task. This qualitative study points out that selecting a small subset of words having the lowest WER (instead of using all the words) allows the system to better classify automatic transcriptions, with an absolute gain of 0.9 point over the best previous performance on this dialogue classification task (precision of 83.3%). | Impact of Word Error Rate on theme identification task of highly imperfect human–human conversations
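For reference, plain TF-IDF term weighting, the baseline that the TF-IDF-Gini representation above extends with a Gini purity criterion, can be sketched as follows (the tokenised toy documents are invented for illustration):

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights: tf(t, d) * log(N / df(t)).

    docs is a list of token lists; terms occurring in every document get
    weight zero, terms specific to few documents get high weight.
    """
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t])
                    for t, c in tf.items()})
    return out
```

The Gini-weighted variant additionally favours terms whose occurrences concentrate in a single theme.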
S0885230815001126 | A Concept-to-Speech (CTS) system converts the conceptual representation of a sentence-to-be-spoken into speech. While some CTS systems consist of independently built text generation and Text-to-Speech (TTS) modules, the majority of the existing CTS systems enhance the connection between these two modules with a prosodic prediction module that utilizes linguistic knowledge from the text generator to predict prosodic features for TTS generation. However, knowledge embodied within the individual modules has the potential to be shared in more ways. This paper describes knowledge sharing for acoustic modelling and utterance filtering in a Mandarin CTS system. First, syntactic information generated by the text generator is propagated to a hidden Markov model (HMM) based acoustic model within the TTS module and replaces the symbolic prosodic phrasing features therein. Our experimental results show that this approach alleviates the local hard-decision problem in automatic prosodic phrasing for Mandarin CTS systems and achieves a comparable performance to the traditional approach without explicit prosodic phrasing. Second, the acoustic features of multiple synthetic utterances expressing the same input concept are utilized to evaluate the utterance candidates. With this ‘post-processing’ mechanism, our CTS system is able to filter out inferior synthetic utterances and find an acceptable candidate to express the input concept. | Concept-to-Speech generation with knowledge sharing for acoustic modelling and utterance filtering |
S0885230815300255 | In this paper we present a silent speech interface (SSI) system aimed at restoring speech communication for individuals who have lost their voice due to laryngectomy or diseases affecting the vocal folds. In the proposed system, articulatory data captured from the lips and tongue using permanent magnet articulography (PMA) are converted into audible speech using a speaker-dependent transformation learned from simultaneous recordings of PMA and audio signals acquired before laryngectomy. The transformation is represented using a mixture of factor analysers, which is a generative model that allows us to efficiently model non-linear behaviour and perform dimensionality reduction at the same time. The learned transformation is then deployed during normal usage of the SSI to restore the acoustic speech signal associated with the captured PMA data. The proposed system is evaluated using objective quality measures and listening tests on two databases containing PMA and audio recordings for normal speakers. Results show that it is possible to reconstruct speech from articulator movements captured by an unobtrusive technique without an intermediate recognition step. The SSI is capable of producing speech of sufficient intelligibility and naturalness that the speaker is clearly identifiable, but problems remain in scaling up the process to function consistently for phonetically rich vocabularies. | A silent speech system based on permanent magnet articulography and direct synthesis |
S088523081530036X | In this work, we present a comprehensive study on the use of deep neural networks (DNNs) for automatic language identification (LID). Motivated by the recent success of using DNNs in acoustic modeling for speech recognition, we adapt DNNs to the problem of identifying the language in a given utterance from its short-term acoustic features. We propose two different DNN-based approaches. In the first one, the DNN acts as an end-to-end LID classifier, receiving as input the speech features and providing as output the estimated probabilities of the target languages. In the second approach, the DNN is used to extract bottleneck features that are then used as inputs for a state-of-the-art i-vector system. Experiments are conducted in two different scenarios: the complete NIST Language Recognition Evaluation dataset 2009 (LRE'09) and a subset of the Voice of America (VOA) data from LRE'09, in which all languages have the same amount of training data. Results for both datasets demonstrate that the DNN-based systems significantly outperform a state-of-the-art i-vector system when dealing with short-duration utterances. Furthermore, the combination of the DNN-based and the classical i-vector system leads to additional performance improvements (up to 45% relative improvement in both EER and Cavg on the 3s and 10s conditions, respectively). | On the use of deep feedforward neural networks for automatic language identification
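The end-to-end configuration above scores an utterance by combining frame-level DNN posteriors. A hedged sketch of that pooling step, with the network itself omitted and raw score vectors standing in for DNN outputs:

```python
import math

def utterance_language_scores(frame_logits):
    """Combine frame-level score vectors into utterance-level language scores.

    Each frame's scores are passed through a log-softmax, and the utterance
    score per language is the mean of the per-frame log posteriors, mirroring
    the frame-averaging commonly used by end-to-end DNN LID systems.
    """
    n_lang = len(frame_logits[0])
    acc = [0.0] * n_lang
    for logits in frame_logits:
        m = max(logits)                                  # stabilise exp
        z = sum(math.exp(l - m) for l in logits)
        for i, l in enumerate(logits):
            acc[i] += (l - m) - math.log(z)              # log softmax
    return [a / len(frame_logits) for a in acc]
```

The predicted language is then the argmax over the returned score vector.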
S0885230816000024 | Previous studies have demonstrated the benefits of PLDA–SVM scoring with empirical kernel maps for i-vector/PLDA speaker verification. The method not only performs significantly better than the conventional PLDA scoring and utilizes the multiple enrollment utterances of target speakers effectively, but also opens up opportunity for adopting sparse kernel machines in PLDA-based speaker verification systems. This paper proposes taking the advantages of empirical kernel maps by incorporating them into a more advanced kernel machine called relevance vector machines (RVMs). The paper reports extensive analyses on the behaviors of RVMs and provides insight into the properties of RVMs and their applications in i-vector/PLDA speaker verification. Results on NIST 2012 SRE demonstrate that PLDA–RVM outperforms the conventional PLDA and that it achieves a comparable performance as PLDA–SVM. Results also show that PLDA–RVM is much sparser than PLDA–SVM. | Sparse kernel machines with empirical kernel maps for PLDA speaker verification |
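The empirical kernel map referred to above can be sketched as representing each i-vector by its kernel evaluations against a fixed anchor set. The RBF kernel and gamma below are illustrative assumptions; the papers use PLDA-derived score functions rather than a plain RBF.

```python
import math

def empirical_kernel_map(x, anchors, gamma=0.5):
    """Map x to the vector of kernel values k(x, a_i) over a fixed anchor
    set (e.g. enrollment i-vectors), turning any kernel into an explicit
    feature representation usable by SVMs or RVMs."""
    def rbf(a, b):
        return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))
    return [rbf(x, a) for a in anchors]
```

A sparse kernel machine such as an RVM is then trained on these explicit feature vectors.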
S0885230816000036 | In this paper, we present an approach to multilingual Spoken Language Understanding based on a process of generalization of multiple translations, followed by a specific methodology to perform a semantic parsing of these combined translations. A statistical semantic model, which is learned from a segmented and labeled corpus, is used to represent the semantics of the task in a language. Our goal is to allow the users to interact with the system using other languages different from the one used to train the semantic models, avoiding the cost of segmenting and labeling a training corpus for each language. In order to reduce the effect of translation errors and to increase the coverage, we propose an algorithm to generate graphs of words from different translations. We also propose an algorithm to parse graphs of words with the statistical semantic model. The experimental results confirm the good behavior of this approach using French and English as input languages in a spoken language understanding task that was developed for Spanish. | Multilingual Spoken Language Understanding using graphs and multiple translations |
S0885230816000048 | The degree of similarity between sentences is assessed by sentence similarity methods. Sentence similarity methods play an important role in areas such as summarization, search, and categorization of texts, machine translation, etc. The current methods for assessing sentence similarity are based only on the similarity between the words in the sentences. Such methods either represent sentences as bag of words vectors or are restricted to the syntactic information of the sentences. Two important problems in language understanding are not addressed by such strategies: the word order and the meaning of the sentence as a whole. The new sentence similarity assessment measure presented here largely improves and refines a recently published method that takes into account the lexical, syntactic and semantic components of sentences. The new method was benchmarked using Li–McLean, showing that it outperforms the state of the art systems and achieves results comparable to the evaluation made by humans. Besides that, the method proposed was extensively tested using the SemEval 2012 sentence similarity test set and in the evaluation of the degree of similarity between summaries using the CNN-corpus. In both cases, the measure proposed here was proved effective and useful. | Assessing sentence similarity through lexical, syntactic and semantic analysis |
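The word-order component that distinguishes such measures from bag-of-words methods can be sketched with the common order-vector formulation. This is an illustration of the idea, not the exact measure of the paper:

```python
import math

def word_order_similarity(s1, s2):
    """Order-vector similarity: 1 - ||r1 - r2|| / ||r1 + r2||.

    r_k[i] is the (1-based) position of the i-th joint word in sentence k,
    or 0 when absent, so reordering lowers the score even when the
    bag-of-words content is identical.
    """
    joint = list(dict.fromkeys(s1 + s2))      # joint word set, order kept
    def order_vec(sent):
        return [sent.index(w) + 1 if w in sent else 0 for w in joint]
    r1, r2 = order_vec(s1), order_vec(s2)
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
    summ = math.sqrt(sum((a + b) ** 2 for a, b in zip(r1, r2)))
    return 1.0 - diff / summ if summ else 1.0
```

A full measure would combine this score with lexical and semantic components.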
S088523081600005X | Traditional concept retrieval is based on ordinary word-definition dictionaries, which simply map words to their definitions. This approach is mostly helpful for readers and language students, but writers sometimes need to find a word that encompasses a set of ideas that they have in mind. For this task, inverse dictionaries are ready to help; however, in some cases a sought word does not correspond to a single definition but to a composite meaning of several concepts. A language producer then tends to require a concept search that starts with a group of words or a series of related terms, looking for a target word. This paper aims to assist in this task by presenting a new approach to concept blending through the development of a search-by-concept method based on vector space representation using semantic analysis and statistical natural language processing techniques. Words are represented as numeric vectors based on different semantic similarity measures and probabilistic measures; the semantic properties of a word are captured in the vector elements determined by a given linguistic context. Three different sources are used as context for word vector construction: WordNet, a distributional thesaurus, and the Latent Dirichlet Allocation algorithm; each source is used to build a different semantic vector space. The concept-blender input then consists of a set of n nouns. All input members are read and substituted by their corresponding vectors. Then, a semantic space analysis including a filtering and ranking process is carried out to produce a list of target words. A test set of 50 concepts was created in order to evaluate the system's performance. A group of 30 evaluators found our integrated concept blending model to provide better results for finding an adequate word for the provided set of concepts. | Integrated concept blending with vector space models
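A minimal sketch of the vector-space side of this idea: sum the input word vectors into a blended concept and rank candidate target words by cosine similarity. The toy vectors and vocabulary below are invented for illustration; the paper builds its spaces from WordNet, a distributional thesaurus and LDA.

```python
import math

def blend_and_rank(query_words, vectors, top_k=3):
    """Blend the query word vectors by summation and rank the remaining
    vocabulary by cosine similarity to the blended concept vector."""
    dim = len(next(iter(vectors.values())))
    blend = [0.0] * dim
    for w in query_words:
        for i, v in enumerate(vectors[w]):
            blend[i] += v
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0
    ranked = sorted((w for w in vectors if w not in query_words),
                    key=lambda w: cos(blend, vectors[w]), reverse=True)
    return ranked[:top_k]
```

The filtering and ranking process of the paper would operate on exactly such a ranked candidate list.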
S0885230816000061 | Nowadays natural language processing plays an important and critical role in the domain of intelligent computing, pattern recognition, semantic analysis and machine intelligence. For Chinese information processing, constructing predictive models of different semantic word-formation patterns with a large-scale corpus can significantly improve the efficiency and accuracy of the paraphrasing of unregistered or new words, ambiguity elimination, automatic lexicography, machine translation and other applications. It is therefore required to find the relationship between word-formation patterns and different influential factors, which can be denoted as a classification problem. However, due to noise, anomalies, imprecision, polysemy, ambiguity, nonlinear structure, and class imbalance in semantic word-formation data, multi-criteria optimization classifiers (MCOC), support vector machines (SVM) and other traditional classification approaches yield poor predictive performance. In this paper, according to a characteristic analysis of Chinese word-formations, we first propose a novel layered semantic graph for each disyllabic word, a layer-weighted graph edit distance (GED) and its similarity kernel embedded into a new vector space; then, on the normalized data, MCOC with kernel, fuzzification and penalty factors (KFP-MCOC) and SVM are employed to predict Chinese semantic word-formation patterns. Our experimental results and comparison with SVM show that KFP-MCOC based on the layer-weighted semantic graphs can increase the separation of different patterns, the predictive accuracy of target patterns and the generalization of semantic pattern classification on new compound words. | Prediction of Chinese word-formation patterns using the layer-weighted semantic graph-based KFP-MCO classifier
S0885230816000140 | To automatically build, from scratch, the language processing component for a speech synthesis system in a new language, a purified text corpus is needed in which any words and phrases from other languages are clearly identified or excluded. When using found data, and where there is no inherent linguistic knowledge of the language or languages contained in the data, identifying the pure data is a difficult problem. We propose an unsupervised language identification approach based on Latent Dirichlet Allocation where we take the raw n-gram counts as features without any smoothing, pruning or interpolation. The Latent Dirichlet Allocation topic model is reformulated for the language identification task and Collapsed Gibbs Sampling is used to train an unsupervised language identification model. In order to find the number of languages present, we compared four kinds of measure and also the Hierarchical Dirichlet Process on several configurations of the ECI/UCI benchmark. Experiments on the ECI/MCI data and a Wikipedia-based Swahili corpus show that this LDA method, without any annotation, has precisions, recalls and F-scores comparable to state-of-the-art supervised language identification techniques. | Unsupervised language identification based on Latent Dirichlet Allocation
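The raw n-gram count features used above can be obtained very simply; a character-bigram illustration follows (the actual n and tokenisation of the paper may differ):

```python
from collections import Counter

def char_ngram_counts(text, n=2):
    """Raw character n-gram counts with no smoothing, pruning or
    interpolation, matching the feature regime described above."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))
```

Each document's count vector then plays the role of the bag-of-words input to the LDA model.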
S0885230816300572 | Parallel corpora are essential resources for statistical machine translation (SMT) and cross language information retrieval (CLIR) systems. Creating parallel corpora is highly expensive in terms of both time and cost. In this paper, we propose a novel approach to automatically extract parallel sentences from aligned documents. To do so, we first train a Maximum Entropy binary classifier to compute the local similarity between each pair of sentences in different languages. To consider global information (e.g., the position of sentence pairs in the aligned documents), we define an objective function to penalize cross alignments and then propose an integer linear programming approach to optimize the objective function. In our experiments, we focus on English and Persian Wikipedia articles. The experimental results on manually aligned test data indicate that the proposed method significantly outperforms the baselines. Furthermore, the extrinsic evaluations of the corpus extracted from Wikipedia on both SMT and CLIR systems demonstrate the quality of the extracted parallel sentences. In addition, experiments on the English–German language pair demonstrate that the proposed ILP method is a language-independent sentence alignment approach. The extracted English–Persian parallel corpus is freely available for research purposes. | Sentence alignment using local and global information
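The global objective's penalty on cross alignments can be illustrated by simply counting crossing links in a candidate alignment. This is a sketch of the idea being penalized, not the paper's ILP formulation:

```python
def crossing_penalty(alignment):
    """Count crossing links in a sentence alignment given as (i, j) pairs,
    where i indexes source sentences and j target sentences. Two links
    cross when their source and target orders disagree."""
    cross = 0
    for a in range(len(alignment)):
        for b in range(a + 1, len(alignment)):
            (i1, j1), (i2, j2) = alignment[a], alignment[b]
            if (i1 - i2) * (j1 - j2) < 0:
                cross += 1
    return cross
```

The ILP then trades local classifier similarity against this kind of global crossing cost.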
S0885230816300614 | We propose an information theoretic region selection algorithm from the real time magnetic resonance imaging (rtMRI) video frames for a broad phonetic class recognition task. Representations derived from these optimal regions are used as the articulatory features for recognition. A set of connected and arbitrary shaped regions are selected such that the articulatory features computed from such regions provide maximal information about the broad phonetic classes. We also propose a tree-structured greedy region splitting algorithm to further segment these regions so that articulatory features from these split regions enhance the information about the phonetic classes. We find that some of the proposed articulatory features correlate well with the articulatory gestures from the Articulatory Phonology theory of speech production. Broad phonetic class recognition experiment using four rtMRI subjects reveals that the recognition accuracy with optimal split regions is, on average, higher than that using only acoustic features. Combining acoustic and articulatory features further reduces the error-rate by ∼8.25% (relative). | Information theoretic optimal vocal tract region selection from real time magnetic resonance images for broad phonetic class recognition |
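The information-theoretic criterion above can be illustrated with a plug-in estimate of mutual information between a discretized articulatory feature and the phonetic class labels. This is a sketch; the paper's estimator over connected image regions is more involved.

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """Plug-in estimate of I(X;Y) in nats from paired discrete samples;
    regions whose features maximise this quantity would be preferred."""
    n = len(feature)
    pxy = Counter(zip(feature, labels))
    px, py = Counter(feature), Counter(labels)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

A greedy region-selection loop would repeatedly keep the candidate region whose feature maximises this score.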
S0888327013003865 | To meet the main requirements of output displacement, bandwidth frequency and accuracy in noncircular turning, a fast tool servo (FTS) system based on piezoelectric (PZT) voltage feedback is developed. A flexure hinge structure is designed to amplify the output of PZT actuator and topology optimization is done to reduce the mass and compliance of the structure so that the output displacement and response frequency of FTS can be improved. A compound controller, into which repetitive techniques, PI and feed-forward control, are introduced, and the method that PZT voltage of the actuator is treated as feedback are put forward. The feasibility and reliability of this system are proved well by turning experiments. | Development of a fast tool servo in noncircular turning and its control |
S0888327014002520 | This work details the Bayesian identification of a nonlinear dynamical system using a novel MCMC algorithm: ‘Data Annealing’. Data Annealing is similar to Simulated Annealing in that it allows the Markov chain to easily clear ‘local traps’ in the target distribution. To achieve this, training data is fed into the likelihood such that its influence over the posterior is introduced gradually; this allows the annealing procedure to be conducted with reduced computational expense. Additionally, Data Annealing uses a proposal distribution which allows it to conduct a local search accompanied by occasional long jumps, reducing the chance that it will become stuck in local traps. Here it is used to identify an experimental nonlinear system. The resulting Markov chains are used to approximate the covariance matrices of the parameters in a set of competing models before the issue of model selection is tackled using the Deviance Information Criterion. | Bayesian system identification of a nonlinear dynamical system using a novel variant of Simulated Annealing
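A hedged sketch of the Data Annealing idea for a scalar parameter: a Metropolis sampler whose likelihood sees the training data one point at a time, with occasional long-jump proposals. The step sizes, stage lengths, flat prior and Gaussian toy model are all illustrative assumptions, not the paper's setup.

```python
import math
import random

def data_annealing(data, log_lik, theta0=0.0, steps_per_stage=200,
                   step=0.5, jump_prob=0.1, seed=0):
    """Metropolis chain in which data points enter the likelihood one per
    stage, flattening the early-stage posterior so the chain clears local
    traps; occasional 5x 'long jump' proposals mimic the local search
    plus occasional long jumps described above."""
    rng = random.Random(seed)
    theta = theta0
    chain = []
    for n in range(1, len(data) + 1):
        subset = data[:n]
        lp = sum(log_lik(theta, d) for d in subset)   # flat prior assumed
        for _ in range(steps_per_stage):
            scale = step * (5.0 if rng.random() < jump_prob else 1.0)
            prop = theta + rng.gauss(0.0, scale)
            lp_prop = sum(log_lik(prop, d) for d in subset)
            if math.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta)
    return chain
```

With all data included, the final stage samples from the full posterior, so the late chain can be used to approximate parameter covariances.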
S0888327014003975 | This paper is concerned with the Bayesian system identification of structural dynamical systems using experimentally obtained training data. It is motivated by situations where, from a large quantity of training data, one must select a subset to infer probabilistic models. To that end, using concepts from information theory, expressions are derived which allow one to approximate the effect that a set of training data will have on parameter uncertainty as well as the plausibility of candidate model structures. The usefulness of this concept is then demonstrated through the system identification of several dynamical systems using both physics-based and emulator models. The result is a rigorous scientific framework which can be used to select ‘highly informative’ subsets from large quantities of training data. | Bayesian system identification of dynamical systems using highly informative training data |
S0888327014004610 | Piston slap is a major source of vibration and noise in internal combustion engines. Therefore, better understanding of the conditions favouring piston slap can be beneficial for the reduction of engine Noise, Vibration and Harshness (NVH). Past research has attempted to determine the exact position of piston slap events during the engine cycle and correlate them to the engine block vibration response. Validated numerical/analytical models of the piston assembly can be very useful towards this aim, since extracting the relevant information from experimental measurements can be a tedious and complicated process. In the present work, a coupled simulation of piston dynamics and engine tribology (tribodynamics) has been performed using quasi-static and transient numerical codes. Thus, the inertia and reaction forces developed in the piston are calculated. The occurrence of piston slap events in the engine cycle is monitored by introducing six alternative concepts: (i) the quasi-static lateral force, (ii) the transient lateral force, (iii) the minimum film thickness occurrence, (iv) the maximum energy transfer, (v) the lubricant squeeze velocity and (vi) the piston-impact angular duration. The validation of the proposed methods is achieved using experimental measurements taken from a single cylinder petrol engine in laboratory conditions. The surface acceleration of the engine block is measured at the thrust- and anti-thrust side locations. The correlation between the theoretically predicted events and the measured acceleration signals has been satisfactory in determining piston slap incidents, using the aforementioned concepts. The results also exhibit good repeatability throughout the set of measurements obtained in terms of the number of events occurring and their locations during the engine cycle. | On the identification of piston slap events in internal combustion engines using tribodynamic analysis |
S0888327015000990 | This paper deals with the condition monitoring of wind turbine gearboxes under varying operating conditions. Generally, gearbox systems include nonlinearities so a simplified nonlinear gear model is developed, on which the time–frequency analysis method proposed is first applied for the easiest understanding of the challenges faced. The effect of varying loads is examined in the simulations and later on in real wind turbine gearbox experimental data. The Empirical Mode Decomposition (EMD) method is used to decompose the vibration signals into meaningful signal components associated with specific frequency bands of the signal. The mode mixing problem of the EMD is examined in the simulation part and the results in that part of the paper suggest that further research might be of interest in condition monitoring terms. For the amplitude–frequency demodulation of the signal components produced, the Hilbert Transform (HT) is used as a standard method. In addition, the Teager–Kaiser energy operator (TKEO), combined with an energy separation algorithm, is a recent alternative method, the performance of which is tested in the paper too. The results show that the TKEO approach is a promising alternative to the HT, since it can improve the estimation of the instantaneous spectral characteristics of the vibration data under certain conditions. | A time–frequency analysis approach for condition monitoring of a wind turbine gearbox under varying load conditions |
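The discrete Teager–Kaiser energy operator mentioned above is simply Ψ[x(n)] = x²(n) − x(n−1)·x(n+1); for a pure tone A·sin(Ωn) it returns the constant A²·sin²Ω, which is what makes it useful for amplitude–frequency demodulation. A minimal sketch:

```python
import math

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1]*x[n+1], defined for interior samples."""
    return [x[n] * x[n] - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]
```

In the paper this operator feeds an energy separation algorithm that recovers instantaneous amplitude and frequency.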
S0888327015001521 | Fully electric vehicles with multiple drivetrains allow a significant variation of the steady-state and transient cornering responses through the individual control of the electric motor drives. As a consequence, alternative driving modes can be created that provide the driver the option to select the preferred dynamic vehicle behavior. This article presents a torque-vectoring control structure based on the combination of feedforward and feedback contributions for the continuous control of vehicle yaw rate. The controller is specifically developed to be easily implementable on real-world vehicles. A novel model-based procedure for the definition of the control objectives is described in detail, together with the automated tuning process of the algorithm. The implemented control functions are demonstrated with experimental vehicle tests. The results show the possibilities of torque-vectoring control in designing the vehicle understeer characteristic. | Driving modes for designing the cornering response of fully electric vehicles with multiple motors |
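The feedforward-plus-feedback structure above can be sketched on a first-order yaw model I_z·ṙ = −c·r + M_z. All gains and model parameters below are illustrative assumptions, not the tuned values of the paper:

```python
def track_yaw_rate(r_ref, c=2.0, i_z=1.0, k_p=5.0, k_i=2.0,
                   dt=0.01, steps=3000):
    """Feedforward (steady-state inverse M_ff = c * r_ref) plus PI feedback
    on the yaw rate error, driving a first-order yaw model by explicit
    Euler integration; returns the final yaw rate."""
    r, integ = 0.0, 0.0
    for _ in range(steps):
        e = r_ref - r
        integ += e * dt
        m_z = c * r_ref + k_p * e + k_i * integ     # feedforward + PI
        r += dt * (-c * r + m_z) / i_z              # yaw dynamics step
    return r
```

The feedforward term supplies the bulk of the yaw moment, so the feedback only has to correct model mismatch, which is what makes the continuous yaw-rate control practical on a real vehicle.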
S0888327015003647 | Transfer Path Analysis (TPA) designates the family of test-based methodologies to study the transmission of mechanical vibrations. Since the first adaptation of electric network analogies in the field of mechanical engineering a century ago, a multitude of TPA methods have emerged and found their way into industrial development processes. Nowadays the TPA paradigm is largely commercialised into out-of-the-box testing products, making it difficult to articulate the differences and underlying concepts that are paramount to understanding the vibration transmission problem. The aim of this paper is to derive and review a wide repertoire of TPA techniques from their conceptual basics, liberating them from their typical field of application. A selection of historical references is provided to align methodological developments with particular milestones in science. Eleven variants of TPA are derived from a unified framework and classified into three categories, namely classical, component-based and transmissibility-based TPA. Current challenges and practical aspects are discussed and reference is made to related fields of research. | General framework for transfer path analysis: History, theory and classification of techniques
S0888327015005439 | An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring systems (SHM). Acoustic emission (AE) is a viable technique that can be used for SHM and one of the most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps. These are used to locate subsequent AE sources. However operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the “Minimum Difference” approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment is conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed excellent reduction in running time as well as improved accuracy of locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique due to the prevention of the potential source of error related to manual manipulation. | Acoustic emission source location in complex structures using full automatic delta T mapping technique
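The “Minimum Difference” lookup at the heart of the technique can be sketched as choosing the training-grid point whose stored arrival-time differences best match the observed ones. Grid coordinates and delta-T values below are invented for illustration:

```python
def locate_source(observed_dts, training_map):
    """Return the grid point whose stored sensor-pair arrival-time
    differences (delta-Ts) are closest, in L1 distance, to the observed
    delta-Ts of a new AE event."""
    best, best_cost = None, float("inf")
    for point, stored_dts in training_map.items():
        cost = sum(abs(o - s) for o, s in zip(observed_dts, stored_dts))
        if cost < best_cost:
            best, best_cost = point, cost
    return best
```

The paper's contribution is automating, via clustering, the selection of the training data that populates this map.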
S0888327015005452 | Fully electric vehicles with individually controlled drivetrains can provide a high degree of drivability and vehicle safety, all while increasing the cornering limit and the ‘fun-to-drive’ aspect. This paper investigates a new approach on how sideslip control can be integrated into a continuously active yaw rate controller to extend the limit of stable vehicle cornering and to allow sustained high values of sideslip angle. The controllability-related limitations of integrated yaw rate and sideslip control, together with its potential benefits, are discussed through the tools of multi-variable feedback control theory and non-linear phase-plane analysis. Two examples of integrated yaw rate and sideslip control systems are presented and their effectiveness is experimentally evaluated and demonstrated on a four-wheel-drive fully electric vehicle prototype. Results show that the integrated control system allows safe operation at the vehicle cornering limit at a specified sideslip angle independent of the tire-road friction conditions. | Enhancing vehicle cornering limit through sideslip and yaw rate control |
S0888327015006007 | In general, vehicle vibration is non-stationary and has a non-Gaussian probability distribution; yet existing testing methods for packaging design employ Gaussian distributions to represent vibration induced by road profiles. This frequently results in over-testing and/or over-design of the packaging to meet a specification and correspondingly leads to wasteful packaging and product waste, which represent $15bn per year in the USA and €3bn per year in the EU. The purpose of the paper is to enable a measured non-stationary acceleration signal to be replaced by a constructed signal that includes as far as possible any non-stationary characteristics from the original signal. The constructed signal consists of a concatenation of decomposed shorter duration signals, each having its own kurtosis level. Wavelet analysis is used for the decomposition process into inner and outlier signal components. The constructed signal has a similar PSD to the original signal, without incurring excessive acceleration levels. This allows an improved and more representative simulated input signal to be generated that can be used on the current generation of shaker tables. The wavelet decomposition method is also demonstrated experimentally through two correlation studies. It is shown that significant improvements over current international standards for packaging testing are achievable; hence the potential for more efficient packaging system design is possible. | Wavelet analysis to decompose a vibration simulation signal to improve pre-distribution testing of packaging |
S0888327016000029 | Control-based continuation is a technique for tracking the solutions and bifurcations of nonlinear experiments. The idea is to apply the method of numerical continuation to a feedback-controlled physical experiment such that the control becomes non-invasive. Since in an experiment it is not (generally) possible to set the state of the system directly, the control target becomes a proxy for the state. Control-based continuation enables the systematic investigation of the bifurcation structure of a physical system, much as if it were a numerical model. However, stability information (and hence bifurcation detection and classification) is not readily available due to the presence of stabilising feedback control. This paper uses a periodic auto-regressive model with exogenous inputs (ARX) to approximate the time-varying linearisation of the experiment around a particular periodic orbit, thus providing the missing stability information. This method is demonstrated using a physical nonlinear tuned mass damper. | Control-based continuation: Bifurcation and stability analysis for physical experiments |
S0888327016300048 | Rolling bearings are widely used in rotary machinery systems. The measured vibration signal of any part linked to rolling bearings contains fault information when failure occurs, differing only in energy level. Bearing failure will cause vibration of other components, and therefore the collected bearing vibration signals are mixed with the vibration signals of other parts and noise. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can avoid the loss of local information. Subsequently using multivariate empirical mode decomposition (multivariate EMD) to simultaneously analyze the multivariate signal is beneficial for extracting fault information, especially for weak fault characteristics during the period of early failure. This paper proposes a novel method for fault feature extraction of rolling bearings based on multivariate EMD. The nonlocal means (NL-means) denoising method is used to preprocess the multivariate signal, and correlation analysis is employed to calculate fault correlation factors to select effective intrinsic mode functions (IMFs). Finally, characteristic frequencies are extracted from the selected IMFs by spectrum analysis. The numerical simulations and applications to bearing monitoring verify the effectiveness of the proposed method and indicate that this novel method is promising in the field of signal decomposition and fault diagnosis. | Multivariate empirical mode decomposition and its application to fault diagnosis of rolling bearing |
S088832701630005X | This work presents a conceptually simple experiment consisting of a cantilever beam with a nonlinear spring at the tip. The configuration allows manipulation of the relative spacing between the modal frequencies of the underlying linear structure, and this permits the deliberate introduction of internal resonance. A 3:1 resonance is studied in detail; the response around the first mode shows a classic stiffening response, with the addition of more complex dynamic behaviour and an isola region. Quasiperiodic responses are also observed but in this work the focus remains on periodic responses. Predictions using Normal Form analysis and continuation methods show good agreement with experimental observations. The experiment provides valuable insight into frequency responses of nonlinear modal structures, and the implications of nonlinearity for vibration tests. | Periodic responses of a structure with 3:1 internal resonance |
S0888327016300681 | Although linear modal analysis has proved itself to be the method of choice for the analysis of linear dynamic structures, its extension to nonlinear structures has proved to be a problem. A number of competing viewpoints on nonlinear modal analysis have emerged, each of which preserves a subset of the properties of the original linear theory. From the geometrical point of view, one can argue that the invariant manifold approach of Shaw and Pierre is the most natural generalisation. However, the Shaw–Pierre approach is rather demanding technically, depending as it does on the analytical construction of a mapping between spaces, which maps physical coordinates into invariant manifolds spanned by independent subsets of variables. The objective of the current paper is to demonstrate a data-based approach motivated by Shaw–Pierre method which exploits the idea of statistical independence to optimise a parametric form of the mapping. The approach can also be regarded as a generalisation of the Principal Orthogonal Decomposition (POD). A machine learning approach to inversion of the modal transformation is presented, based on the use of Gaussian processes, and this is equivalent to a nonlinear form of modal superposition. However, it is shown that issues can arise if the forward transformation is a polynomial and can thus have a multi-valued inverse. The overall approach is demonstrated using a number of case studies based on both simulated and experimental data. | A machine learning approach to nonlinear modal analysis |
S0893608014002378 | Nonnegative Matrix Factorization (NMF) has been a popular representation method for pattern classification problems. It tries to decompose a nonnegative matrix of data samples as the product of a nonnegative basis matrix and a nonnegative coefficient matrix. The columns of the coefficient matrix can be used as new representations of these data samples. However, traditional NMF methods ignore the class labels of the data samples. In this paper, we propose a novel supervised NMF algorithm to improve the discriminative ability of the new representation by using the class labels. Using the class labels, we separate all the data sample pairs into within-class pairs and between-class pairs. To improve the discriminative ability of the new NMF representations, we propose to minimize the maximum distance of the within-class pairs in the new NMF space, and meanwhile to maximize the minimum distance of the between-class pairs. With this criterion, we construct an objective function and optimize it with regard to the basis matrix, the coefficient matrix, and the slack variables alternately, resulting in an iterative algorithm. The proposed algorithm is evaluated on three pattern classification problems and experimental results show that it outperforms the state-of-the-art supervised NMF methods. | Max–min distance nonnegative matrix factorization |
S0895611113001225 | A new methodology for detecting the fovea center position in digital retinal images is presented in this paper. A pixel is first searched for within the foveal region according to its known anatomical position relative to the optic disc and vascular tree. Then, this pixel is used to extract a fovea-containing subimage on which thresholding and feature extraction techniques are applied to find the fovea center. The methodology was evaluated on 1200 fundus images from the publicly available MESSIDOR database, 660 of which present signs of diabetic retinopathy. In 93.92% of these images, the distance between the methodology-provided and actual fovea center position remained below 1/4 of one standard optic disc radius (i.e., 17, 26, and 27 pixels for MESSIDOR retinas of 910, 1380 and 1455 pixels in size, respectively). These results outperform all the reviewed methodologies available in the literature. Its effectiveness and robustness under different illness conditions makes this proposal suitable for retinal image computer analyses such as automated screening for early diabetic retinopathy detection. | Locating the fovea center position in digital fundus images using thresholding and feature extraction techniques |
S0895611114000652 | Images of ocular fundus are routinely utilized in ophthalmology. Since an examination using fundus camera is relatively fast and cheap procedure, it can be used as a proper diagnostic tool for screening of retinal diseases such as the glaucoma. One of the glaucoma symptoms is progressive atrophy of the retinal nerve fiber layer (RNFL) resulting in variations of the RNFL thickness. Here, we introduce a novel approach to capture these variations using computer-aided analysis of the RNFL textural appearance in standard and easily available color fundus images. The proposed method uses the features based on Gaussian Markov random fields and local binary patterns, together with various regression models for prediction of the RNFL thickness. The approach allows description of the changes in RNFL texture, directly reflecting variations in the RNFL thickness. Evaluation of the method is carried out on 16 normal (“healthy”) and 8 glaucomatous eyes. We achieved significant correlation (normals: ρ =0.72±0.14; p ≪0.05, glaucomatous: ρ =0.58±0.10; p ≪0.05) between values of the model predicted output and the RNFL thickness measured by optical coherence tomography, which is currently regarded as a standard glaucoma assessment device. The evaluation thus revealed good applicability of the proposed approach to measure possible RNFL thinning. | Thickness related textural properties of retinal nerve fiber layer in color fundus images |
S0895611114000664 | Modern medical information retrieval systems are paramount to manage the insurmountable quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably to other systems in 2013 ImageCLEFMedical. | Multimodal medical information retrieval with unsupervised rank fusion |
S0895611114000895 | Although Latent Semantic Analysis (LSA) has been used successfully in text retrieval, when applied to content-based image retrieval (CBIR) it induces scalability issues with large image collections. The method has so far been used only with small collections due to the high cost of storage and computational time for solving the SVD problem for a large and dense feature matrix. Here we present an effective and efficient approach to applying LSA that skips the SVD solution of the feature matrix, thereby overcoming the deficiencies of the method with large-scale datasets. Early and late fusion techniques are tested and their performance is calculated. The study demonstrates that early fusion of several composite descriptors with visual words increases retrieval effectiveness. It also combines well in a late fusion for mixed (textual and visual) ad hoc retrieval and modality classification. The results reported are comparable to state-of-the-art algorithms without including additional knowledge from the medical domain. | Applying latent semantic analysis to large-scale medical image databases |
S0895611114000962 | As there is an increasing need for the effective computer-aided management of pathology in the lumbar spine, we have developed a computer-aided diagnosis and characterization framework using lumbar spine MRI that provides radiologists a second opinion. In this paper, we propose a left spinal canal boundary extraction method, based on dynamic programming, in lumbar spine MRI. Our method fuses the absolute intensity difference of T1-weighted and T2-weighted sagittal images and the inverted gradient of the difference image into a dynamic programming scheme and works in a fully automatic fashion. The boundaries generated by our method are compared against reference boundaries in terms of the Euclidean distance and the Chebyshev distance. The experimental results from 85 clinical datasets show that our method finds the boundary with a mean Euclidean distance of 3 mm, achieving a speedup factor of 167 compared with manual landmark extraction. The proposed method successfully extracts landmarks automatically and fits well with our framework for computer-aided diagnosis of the lumbar spine. | Automatic spinal canal detection in lumbar MR images in the sagittal view using dynamic programming |
S0895611114000974 | This work presents an automatic method for distortion correction and calibration of intra-operative spine X-ray images, a fundamental step for the use of this modality in computer and robotic assisted surgeries. Our method is based on a prototype calibration drum, attached to the c-arm intensifier during the intervention. The projections of its embedded fiducial beads onto the X-ray images are segmented by the proposed method, which uses its calculated centroids to undo the distortion and, afterwards, calibrate the c-arm. For the latter purpose, we propose the use of a constrained version of the well known Direct Linear Transform (DLT) algorithm, reducing its degrees of freedom from 11 to 3. Experimental evaluation of our method is included in this work, showing that it is fast and more accurate than other existing methods. The low segmentation error level also ensures accurate calibration of the c-arm, with an expected error of 4% in the computation of its focal distance. | Distortion correction and calibration of intra-operative spine X-ray images using a constrained DLT algorithm |
S0895611114000986 | In this paper, we present the approach that we applied to the medical modality classification tasks at the ImageCLEF evaluation forum. More specifically, we used the modality classification databases from the ImageCLEF competitions in 2011, 2012 and 2013, described by four visual and one textual types of features, and combinations thereof. We used local binary patterns, color and edge directivity descriptors, fuzzy color and texture histogram and scale-invariant feature transform (and its variant opponentSIFT) as visual features and the standard bag-of-words textual representation coupled with TF-IDF weighting. The results from the extensive experimental evaluation identify the SIFT and opponentSIFT features as the best performing features for modality classification. Next, the low-level fusion of the visual features improves the predictive performance of the classifiers. This is because the different features are able to capture different aspects of an image, their combination offering a more complete representation of the visual content in an image. Moreover, adding textual features further increases the predictive performance. Finally, the results obtained with our approach are the best results reported on these databases so far. | Improved medical image modality classification using a combination of visual and textual features |
S0895611114000998 | Literature-based image informatics techniques are essential for managing the rapidly increasing volume of information in the biomedical domain. Compound figure separation, modality classification, and image retrieval are three related tasks useful for enabling efficient access to the most relevant images contained in the literature. In this article, we describe approaches to these tasks and the evaluation of our methods as part of the 2013 medical track of ImageCLEF. In performing each of these tasks, the textual and visual features used to represent images are an important consideration often left unaddressed. Therefore, we also describe a gradient-based optimization strategy for determining meaningful combinations of features and apply the method to the image retrieval task. An evaluation of our optimization strategy indicates the method is capable of producing statistically significant improvements in retrieval performance. Furthermore, the results of the 2013 ImageCLEF evaluation demonstrate the effectiveness of our techniques. In particular, our text-based and mixed image retrieval methods ranked first among all the participating groups. | Literature-based biomedical image classification and retrieval |
S0895611114001025 | This paper presents a fractional anisotropy asymmetry (FAA) method to detect the asymmetry of white matter (WM) integrity and its correlation with the side of seizure origin for partial onset temporal lobe epilepsy (TLE) using diffusion tensor imaging (DTI). In this study, FAA analysis is applied to 30 patients with partial TLE (15 left, 15 right) and 14 matched normal controls. Specifically, after registering all the images with the JHU-DTI-MNI template, the average FA value of each FA skeleton section is calculated using the tract-based spatial statistics (TBSS) method. Then, FAA is calculated to quantify the WM diffusivity asymmetry of the corresponding region-pairs between the left and right hemispheres. Using FAA, the regional asymmetry contributing significantly to the group differences between controls and left/right TLE, as well as between the left and right TLE, is identified. As a comparison, the ROI-based average FA values for WM and corresponding FAAs are also calculated. TBSS-based analysis reflects the average of local maximal FA values along the white matter skeleton sections, and ROI-based analysis shows the average of WM FA values within each anatomical region. The FAA statistical results indicated that the FA values of anatomical region-pairs are asymmetric in the ipsilateral hemisphere with seizure origin against the contralateral hemisphere. In particular, FAA values within the temporal lobe (superior, middle, and inferior temporal WM) are significantly different between the left and right TLE patients, as consistently found with both analysis methods. The study suggests that FAA values can potentially be used to identify the side of seizure origin of TLE and to help understand the relationship between fiber tracts and the side of seizure origin of TLE. | Fractional anisotropy asymmetry and the side of seizure origin for partial onset-temporal lobe epilepsy |
S0895611114001050 | This paper presents novel pre-processing image enhancement algorithms for retinal optical coherence tomography (OCT). These images contain a large amount of speckle causing them to be grainy and of very low contrast. To make these images valuable for clinical interpretation, we propose a novel method to remove speckle, while preserving useful information contained in each retinal layer. The process starts with multi-scale despeckling based on a dual-tree complex wavelet transform (DT-CWT). We further enhance the OCT image through a smoothing process that uses a novel adaptive-weighted bilateral filter (AWBF). This offers the desirable property of preserving texture within the OCT image layers. The enhanced OCT image is then segmented to extract inner retinal layers that contain useful information for eye research. Our layer segmentation technique is also performed in the DT-CWT domain. Finally we describe an OCT/fundus image registration algorithm which is helpful when two modalities are used together for diagnosis and for information fusion. | Adaptive-weighted bilateral filtering and other pre-processing techniques for optical coherence tomography |
S0895611114001086 | 2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-the-art visualization software are usually related to the strong compromise between 2D and 3D visibility or the lack of depth perception. In this paper, we investigate several concepts enabling improvement of current image fusion visualization found in the operating room. First, a contour-enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique, both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire which included 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray. Further, when integrating an RGB or RB color-depth encoding in the image fusion, both perception and intuitiveness are improved. | Augmented depth perception visualization in 2D/3D image fusion |
S0895611114001098 | Segmentation of needles in ultrasound images remains a challenging problem. In this paper, we introduce a machine learning-based method for needle segmentation in 2D beam-steered ultrasound images. We used a statistical boosting approach to train a pixel-wise classifier for needle segmentation. The Radon transform was then used to find the needle position and orientation from the segmented image. We validated our method with data from ex vivo specimens and clinical nerve block procedures, and compared the results to those obtained using previously reported needle segmentation methods. Results show improved localization success and accuracy using the proposed method. For the ex vivo datasets, assuming that the needle orientation was known a priori, the needle was successfully localized in 86.2% of the images, with a mean targeting error of 0.48 mm. The robustness of the proposed method to a lack of a priori knowledge of needle orientation was also demonstrated. For the clinical datasets, assuming that the needle orientation was closely aligned with the beam steering angle selected by the physician, the needle was successfully localized in 99.8% of the images, with a mean targeting error of 0.19 mm. These results indicate that the learning-based segmentation method may allow for increased targeting accuracy and enhanced visualization during ultrasound-guided needle procedures. | Enhanced needle localization in ultrasound using beam steering and learning-based segmentation |
S0895611114001104 | Magnetic resonance imaging (MRI), particularly dynamic contrast enhanced (DCE) imaging, has shown great potential in prostate cancer diagnosis and staging. In the current practice of DCE-MRI, diagnosis is based on quantitative parameters extracted from the series of T1-weighted images acquired after the injection of a contrast agent. To calculate these parameters, a pharmacokinetic model is fitted to the T1-weighted intensities. Most models make simplistic assumptions about the perfusion process. Moreover, these models require accurate estimation of the arterial input function, which is challenging. In this work we propose a data-driven approach to characterization of the prostate tissue that uses the time series of DCE T1-weighted images without pharmacokinetic modeling. This approach uses a number of model-free empirical parameters and also the principal component analysis (PCA) of the normalized T1-weighted intensities, as features for cancer detection from DCE MRI. The optimal set of principal components is extracted with sparse regularized regression through least absolute shrinkage and selection operator (LASSO). A support vector machine classifier was used with leave-one-patient-out cross validation to determine the ability of this set of features in cancer detection. Our data is obtained from patients prior to radical prostatectomy and the results are validated based on histological evaluation of the extracted specimens. Our results, obtained on 449 tissue regions from 16 patients, show that the proposed data-driven features outperform the traditional pharmacokinetic parameters with an area under ROC of 0.86 for LASSO-isolated PCA parameters, compared to 0.78 for pharmacokinetic parameters. This shows that our novel approach to the analysis of DCE data has the potential to improve the multiparametric MRI protocol for prostate cancer detection. | A data-driven approach to prostate cancer detection from dynamic contrast enhanced MRI |
S0895611114001116 | Introduction Dynamic image acquisition protocols are increasingly used in emission tomography for drug development and clinical research. As such, there is a need for computational phantoms to accurately describe both the spatial and temporal distribution of radiotracers, also accounting for periodic and non-periodic physiological processes occurring during data acquisition. Methods A new 5D anthropomorphic digital phantom was developed based on a generic simulation platform, for accurate parametric imaging simulation studies in emission tomography. The phantom is based on high spatial and temporal information derived from real 4D MR data and a detailed multi-compartmental pharmacokinetic modelling simulator. Results The proposed phantom is comprised of three spatial and two temporal dimensions, including periodic physiological processes due to respiratory motion and non-periodic functional processes due to tracer kinetics. Example applications are shown in parametric [18F]FDG and [15O]H2O PET imaging, successfully generating realistic macro- and micro-parametric maps. Conclusions The envisaged applications of this digital phantom include the development and evaluation of motion correction and 4D image reconstruction algorithms in PET and SPECT, development of protocols and methods for tracer and drug development as well as new pharmacokinetic parameter estimation algorithms, amongst others. Although the simulation platform is primarily developed for generating dynamic phantoms for emission tomography studies, it can easily be extended to accommodate dynamic MR and CT imaging simulation protocols. | A 5D computational phantom for pharmacokinetic simulation studies in dynamic emission tomography |
S0895611114001128 | Aim To assess the quality of three-dimensional volume rendered computer tomography (3D-CTVR), multi-planar reformation (MPR) and CT section plane in the fine diagnosis of ossicular chain in middle ear cholesteatoma. Methods Sixty patients with middle ear cholesteatoma were selected in this retrospective study. All cases underwent pre-operative CT scan. The respective radiologic reports of the ossicles status via three protocols were then compared to surgical findings. Results Quality assessment of these three protocols in the fine diagnosis of fine ossicles buried inside the soft tissue showed that both CTVR and MPR are more superior to conventional section plane, especially CTVR. Conclusion The uses of CTVR and MPR, in conjunction with conventional section plane, are better able to show where the true and fine ossicular chain in the cholesteatoma mass is. In the final analysis, we believe that the use of CTVR and MPR techniques can have profound contributive value in future clinical work. | Quality assessment of 3D-CTVR, MPR and section plane techniques in ossicular chain reconstruction in middle ear cholesteatoma |
S089561111400113X | Shape-based 3D surface reconstruction methods for liver vessels struggle with the limited contrast of medical images and the intrinsic complexity of multi-furcation parts. In this paper, we propose an effective and robust technique, called Gap Border Pairing (GBP), to reconstruct the surface of liver vessels with complicated multi-furcations. The proposed method starts from a tree-like skeleton which is extracted from segmented liver vessel volumes and preprocessed into a number of simplified smooth branching lines. Secondly, for each center point of any branching line, an optimized elliptic cross-section ring (contour) is generated by optimally fitting its actual cross-section outline based on its tangent vector. Thirdly, a tubular surface mesh is generated for each branching line by weaving all of its adjacent rings. Then, for every multi-furcation part, a transitional regular mesh is effectively and regularly reconstructed using GBP. An initial model is generated after reconstructing all multi-furcation parts. Finally, the model is refined using a single subdivision step, and its topologies can be maintained by grouping its facets according to the skeleton, providing high-level editability. Our method can be implemented automatically and in parallel if the segmented vessel volume and corresponding skeletons are provided. The experimental results show that the GBP model is sufficiently accurate in terms of the boundary deviations between the segmented volume and the model. | An effective and robust method for modeling multi-furcation liver vessel by using Gap Border Pairing
S0895611114001141 | Contrast-enhanced C-arm CT is routinely used for intra-operative guidance during trans-catheter aortic valve implantation (TAVI); however, the requirement for contrast agent injection is undesirable, especially for patients with renal insufficiencies. To address this problem, we present a novel framework for fully automatic registration of pre-operative CT and non-contrast-enhanced C-arm CT. The proposed framework provides an improved workflow and minimizes the usage of contrast agent in the TAVI procedure. Our framework consists of three steps: coarse rigid-body alignment, anatomical knowledge-based prior deformation field generation, and fine deformable registration. We validated the proposed framework on 20 real patient data sets, on which the mesh-to-mesh errors at the aortic root were measured for the different methods. Our proposed method significantly outperforms the other state-of-the-art methods. Specifically, we achieve a registration accuracy of 1.76±0.43mm, which is clinically plausible. Quantitative evaluation on real non-contrast enhanced C-arm CT data sets confirms its applicability for clinical usage. The proposed heart registration method is generic and hence can be easily applied to other cardiac applications. | A pre-operative CT and non-contrast-enhanced C-arm CT registration framework for trans-catheter aortic valve implantation
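The first step of the framework above, coarse rigid-body alignment, can be illustrated with the classic least-squares solution for corresponding point sets (the Kabsch/Procrustes algorithm). This is only a sketch of that one step under the assumption that point correspondences are given; the paper's full pipeline (prior deformation field, deformable registration) is not reproduced, and the point cloud here is synthetic.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid-body transform (R, t) mapping points P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(1)
P = rng.standard_normal((50, 3))              # e.g. sampled surface points (synthetic)
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
Q = P @ R_true.T + t_true                     # rigidly transformed copy
R, t = rigid_align(P, Q)
err = np.abs(Q - (P @ R.T + t)).max()
print("max alignment error:", err)
```

With noise-free correspondences the recovered transform matches the ground truth to machine precision; in practice this coarse estimate would initialize the subsequent deformable stages.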
S0895611114001153 | In this paper we report how thickness and density vary over the calvarium region of a collection of human skulls. Most previous reports involved a limited number of skulls, with a limited number of measurement sites per skull, so data in the literature are sparse. We collected computer tomography (CT) scans of 51 ex vivo human calvaria, and analyzed these in silico using over 2000 measurement sites per skull. Thickness and density were calculated at these sites, for the three skull layers separately and combined, and were mapped parametrically onto the skull surfaces to examine the spatial variations per skull. These were found to be highly variable, and unique descriptors of the individual skulls. Of the three skull layers, the thickness of the inner cortical layer was found to be the most variable, while the least variable was the outer cortical density. | Parametric mapping and quantitative analysis of the human calvarium |
S0895611114001165 | The task of microscopy cell detection is of great biological and clinical importance. However, existing algorithms for microscopy cell detection usually ignore the large variations of cells and focus only on the design of shape features/descriptors. Here we propose a new two-layer structure prediction framework for cell centre detection, which combines a classification layer that implicitly exploits rich appearance and contextual information with a layer that encodes explicit structural information about the cells. Experimental results demonstrate the efficiency and effectiveness of the proposed method over competing state-of-the-art methods, providing a viable alternative for microscopy cell detection. | A two-layer structure prediction framework for microscopy cell detection
S0895611114001177 | The detection of MRI abnormalities that can be associated with seizures in the study of temporal lobe epilepsy (TLE) is a challenging task. In many cases, patients with a record of epileptic activity do not present any discernible MRI findings. In this domain, we propose a method that combines quantitative relaxometry and diffusion tensor imaging (DTI) with support vector machines (SVMs), aiming to improve TLE detection. The main contribution of this work is two-fold: on one hand, the feature selection process, principal component analysis (PCA) transformations of the feature space, and SVM parameterization are analyzed as factors constituting a classification model and influencing its quality. On the other hand, several of these classification models are studied to determine the optimal strategy for the identification of TLE patients using data collected from multi-parametric quantitative MRI. A total of 17 TLE patients and 19 control volunteers were analyzed. Four images were considered for each subject (T1 map, T2 map, fractional anisotropy, and mean diffusivity), generating 936 regions of interest per subject; 8 different classification models were then studied, each comprising a distinct set of factors. Subjects were correctly classified with an accuracy of 88.9%. Further analysis revealed that the heterogeneous nature of the disease impeded an optimal outcome. After dividing patients into cohesive groups (9 left-sided seizure onset, 8 right-sided seizure onset), perfect classification for the left group was achieved (100% accuracy) whereas the accuracy for the right group remained the same (88.9%). We conclude that a linear SVM combined with an ANOVA-based feature selection+PCA method is a good alternative in scenarios like ours where feature spaces are high-dimensional and the sample size is limited. The good accuracy results and the localization of the respective features in the temporal lobe suggest that a multi-parametric quantitative MRI, ROI-based, SVM classification could be used for the identification of TLE patients. This method has the potential to improve the diagnostic assessment, especially for patients who do not have any obvious lesions in standard radiological examinations. | Detection of temporal lobe epilepsy using support vector machines in multi-parametric quantitative MR imaging
S0895611114001189 | We present an image processing approach to automatically analyze duo-channel microscopic images of muscular fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscular fibers, as changes in nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm is thus of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of image processing steps to segment and delineate cytoplasm and identify nuclei in two-channel images. Morphological operations such as skeletonization are applied to extract the length of the cytoplasm for quantification. We tested the approach on real images and found that it achieves high accuracy, objectivity, and robustness. | An image processing approach to analyze morphological features of microscopic images of muscle fibers
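Once a skeleton has been extracted, the length measurement mentioned above reduces to summing inter-pixel distances along the skeleton (1 for axial steps, √2 for diagonal steps in an 8-connected grid). The sketch below assumes the skeletonization itself has already been done; `skeleton_length` is a hypothetical helper, not a function from the paper.

```python
import numpy as np

def skeleton_length(mask):
    """Approximate length of a 1-pixel-wide skeleton in a binary mask:
    sum of inter-pixel distances (1 for axial, sqrt(2) for diagonal steps)."""
    ys, xs = np.nonzero(mask)
    pts = set(zip(ys.tolist(), xs.tolist()))
    length = 0.0
    for (y, x) in pts:
        # scan only 4 of the 8 neighbour directions so each pair is counted once
        for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):
            if (y + dy, x + dx) in pts:
                length += (dx * dx + dy * dy) ** 0.5
    return length

# a diagonal skeleton of 5 pixels has 4 diagonal steps => length 4*sqrt(2)
mask = np.eye(5, dtype=bool)
print(skeleton_length(mask))
```

Multiplying the pixel count of each step by the physical pixel spacing would convert this into micrometers for actual quantification.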
S0895611114001190 | Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. | Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI |
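The random walker step in the pipeline above solves a combinatorial Dirichlet problem on an image graph: seeded pixels are boundary conditions, and each unseeded pixel gets the probability that a random walker starting there first reaches a seed of each label. A minimal 1D, two-label sketch (not the paper's multi-slice implementation; the image, seeds, and `beta` are made-up):

```python
import numpy as np

def random_walker_1d(intensity, seeds, beta=90.0):
    """Two-label random walker on a 1D image.
    seeds: dict {index: label in {0, 1}}. Returns a label per pixel by
    solving the Dirichlet problem L_u x = -B m for the unseeded pixels."""
    n = len(intensity)
    W = np.zeros((n, n))
    for i in range(n - 1):                     # chain graph, Gaussian edge weights
        w = np.exp(-beta * (intensity[i] - intensity[i + 1]) ** 2)
        W[i, i + 1] = W[i + 1, i] = w
    L = np.diag(W.sum(axis=1)) - W             # graph Laplacian
    seeded = sorted(seeds)
    unseeded = [i for i in range(n) if i not in seeds]
    Lu = L[np.ix_(unseeded, unseeded)]
    B = L[np.ix_(unseeded, seeded)]
    m = np.array([float(seeds[s]) for s in seeded])   # P(label 1) at seeds
    x = np.linalg.solve(Lu, -B @ m)            # P(label 1) at unseeded pixels
    prob = np.zeros(n)
    prob[unseeded] = x
    for s in seeded:
        prob[s] = seeds[s]
    return (prob > 0.5).astype(int)

img = np.array([0.1, 0.12, 0.11, 0.9, 0.92, 0.88])   # step edge between pixels 2 and 3
labels = random_walker_1d(img, seeds={0: 0, 5: 1})
print(labels)
```

The Gaussian weighting makes edges across the intensity step nearly zero, so the walker's label probabilities split cleanly at the boundary; in 2D/3D the same linear system is built over a lattice graph.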
S0895611114001207 | Dynamic contrast-enhanced (DCE)–magnetic resonance imaging (MRI) represents an emerging method for the prediction of biomarker responses in cancer. However, DCE images remain difficult to analyze and interpret. Although pharmacokinetic approaches, which involve multi-step processes, can provide a general framework for the interpretation of these data, they are still too complex for robust and accurate implementation. Therefore, statistical data analysis techniques were recently suggested as another valid interpretation strategy for DCE–MRI. In this context, we propose a spectral clustering approach for the analysis of DCE–MRI time–intensity signals. This graph theory-based method allows for the grouping of signals after spatial transformation. Subsequently, these data clusters can be labeled following comparison to arterial signals. Here, we have performed experiments with simulated (i.e., generated via pharmacokinetic modeling) and clinical (i.e., obtained from patients scanned during prostate cancer diagnosis) data sets in order to demonstrate the feasibility and applicability of this kind of unsupervised and non-parametric approach. | Spectral clustering applied for dynamic contrast-enhanced MR analysis of time–intensity curves |
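The graph-theoretic grouping of time-intensity signals described above can be sketched with a basic spectral clustering recipe: a Gaussian affinity matrix over the curves, the normalized graph Laplacian, and a two-way split from the sign of the Fiedler vector. This is a generic sketch on synthetic curves, not the authors' method; the two curve shapes, noise level, and kernel width `sigma` are assumptions.

```python
import numpy as np

def spectral_cluster_two(X, sigma=1.0):
    """Split rows of X into 2 clusters via the normalized Laplacian's Fiedler vector."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    A = np.exp(-d2 / (2 * sigma ** 2))                    # Gaussian affinity
    np.fill_diagonal(A, 0.0)
    Dm = 1.0 / np.sqrt(A.sum(axis=1))
    Lsym = np.eye(len(X)) - (Dm[:, None] * A) * Dm[None, :]  # normalized Laplacian
    vals, vecs = np.linalg.eigh(Lsym)
    fiedler = vecs[:, 1]                                  # 2nd-smallest eigenvector
    return (fiedler > 0).astype(int)

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 20)
# two hypothetical time-intensity patterns: wash-out vs. plateau after uptake
washout = np.concatenate([t[:10] * 2, 2 - t[10:] * 1.5])
plateau = np.minimum(2 * t, 1.0)
X = np.vstack([washout + 0.05 * rng.standard_normal(20) for _ in range(15)] +
              [plateau + 0.05 * rng.standard_normal(20) for _ in range(15)])
labels = spectral_cluster_two(X, sigma=0.5)
print(labels)
```

In the clinical setting described by the abstract, the resulting clusters would then be labeled by comparison to arterial reference signals rather than inspected by eye.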
S0895611114001384 | A virtual reality (VR) based vascular intervention simulation system is introduced in this paper, which helps trainees develop surgical skills and experience complications safely, away from patients. The system simulates interventional radiology procedures, in which flexible-tipped guidewires are employed to advance diagnostic or therapeutic catheters into the vascular anatomy of a patient. A real-time physically-based modeling approach grounded on the Kirchhoff elastic rod is proposed to simulate the complicated behaviors of guidewires and catheters. The slender bodies of the guidewire and catheter are modeled using the more efficient special case of naturally straight, isotropic Kirchhoff rods, and the shorter flexible tip, with a straight or angled design, is modeled using the more complex generalized Kirchhoff rods. The motion equations for the guidewire and catheter were derived from continuous elastic energy, followed by discretization using a linear implicit scheme that guarantees stability and robustness. In addition, we used a fast-projection method to enforce the inextensibility of the guidewire and catheter. An adaptive sampling algorithm was also implemented to improve the simulation efficiency without loss of accuracy. Experimental results revealed that our system is both robust and efficient, with real-time performance. | A robust and real-time vascular intervention simulation based on Kirchhoff elastic rod
S0895611114001451 | Personalized resection guides (PRG) have been recently proposed in the domain of knee replacement, demonstrating clinical outcome similar or even superior to both manual and navigated interventions. Among the mandatory pre-surgical steps for PRG prototyping, the measurement of clinical landmarks (CL) on the bony surfaces is recognized as a key issue due to lack of standardized methodologies, operator-dependent variability and time expenditure. In this paper, we focus on the reliability and repeatability of an anterior–posterior axis, also known as Whiteside line (WL), of the distal femur proposing automatic surface processing and modeling methods aimed at overcoming some of the major concerns related to the manual identification of such CL on 2D images and 3D models. We show that the measurement of WL, exploiting the principle of mean-shifting surface curvature, is highly repeatable and coherent with clinical knowledge. | Automating the design of resection guides specific to patient anatomy in knee replacement surgery by enhanced 3D curvature and surface modeling of distal femur shape models |
S0895611114001463 | Computational fluid dynamics (CFD) is a widely used method in mechanical engineering to solve complex problems by analysing fluid flow, heat transfer, and associated phenomena using computer simulations. In recent years, CFD has been increasingly used in biomedical research of coronary artery disease, owing to the availability of high-performance hardware and software. CFD techniques have been applied to study cardiovascular haemodynamics through simulation tools to predict the behaviour of circulatory blood flow in the human body. CFD simulation based on 3D luminal reconstructions can be used to analyse the local flow fields and flow profiling due to changes of coronary artery geometry, thus identifying risk factors for development and progression of coronary artery disease. This review aims to provide an overview of the CFD applications in coronary artery disease, including biomechanics of atherosclerotic plaques, plaque progression and rupture; regional haemodynamics relative to plaque location and composition. A critical appraisal is given to a more recently developed application, fractional flow reserve based on CFD computation, with regard to its diagnostic accuracy in the detection of haemodynamically significant coronary artery disease. | Computational fluid dynamics in coronary artery disease
S0895611114001475 | This work introduces a self-contained framework for endoscopic camera tracking by combining 3D ultrasonography with endoscopy. The approach can be readily incorporated into surgical workflows without installing external tracking devices. By fusing the ultrasound-constructed scene geometry with endoscopic vision, this integrated approach addresses issues related to initialization, scale ambiguity, and interest point inadequacy that may be faced by conventional vision-based approaches when applied to fetoscopic procedures. Vision-based pose estimations were demonstrated by phantom and ex vivo monkey placenta imaging. The potential contribution of this method may extend beyond fetoscopic procedures to include general augmented reality applications in minimally invasive procedures. | Vision-based endoscope tracking for 3D ultrasound image-guided surgical navigation |