FileName | Abstract | Title
S0885230813000284
This paper presents a novel approach to Sentiment Polarity Classification in Twitter posts, based on extracting a vector of weighted nodes from the WordNet graph. These weights are combined with SentiWordNet to compute a final estimate of the polarity. The method therefore offers an unsupervised, domain-independent solution. Evaluation on a generated corpus of tweets shows that the technique is promising.
Ranked WordNet graph for Sentiment Polarity Classification in Twitter
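As a rough illustration of combining WordNet-derived word information with SentiWordNet scores, the sketch below averages SentiWordNet polarity over a tweet's tokens using NLTK; it is a minimal stand-in rather than the paper's graph-ranking method, and the first-sense heuristic is an assumption.

```python
# Illustrative sketch (not the paper's graph-ranking method): averaging
# SentiWordNet polarity scores over the tokens of a tweet with NLTK.
# Requires: nltk.download('wordnet'); nltk.download('sentiwordnet')
from nltk.corpus import sentiwordnet as swn

def tweet_polarity(tokens):
    """Return a crude positive-minus-negative polarity estimate."""
    scores = []
    for tok in tokens:
        synsets = list(swn.senti_synsets(tok))
        if synsets:
            s = synsets[0]  # first (most frequent) sense as a rough proxy
            scores.append(s.pos_score() - s.neg_score())
    return sum(scores) / len(scores) if scores else 0.0

print(tweet_polarity(["great", "movie", "terrible", "ending"]))
```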
S0885230813000296
This paper investigates advanced channel compensation techniques for improving i-vector speaker verification performance in the presence of high intersession variability, using the NIST 2008 and 2010 SRE corpora. The performance of four channel compensation techniques is investigated: (a) weighted maximum margin criterion (WMMC), (b) source-normalized WMMC (SN-WMMC), (c) weighted linear discriminant analysis (WLDA), and (d) source-normalized WLDA (SN-WLDA). We show that, by extracting the discriminatory information between pairs of speakers as well as capturing the source variation information in the development i-vector space, the SN-WLDA-based cosine similarity scoring (CSS) i-vector system provides over 20% improvement in EER for NIST 2008 interview and microphone verification and over 10% improvement in EER for NIST 2008 telephone verification, compared to the SN-LDA-based CSS i-vector system. Further, score-level fusion techniques are analyzed to combine the best channel compensation approaches, providing over 8% improvement in DCF over the best single approach, SN-WLDA, for the NIST 2008 interview/telephone enrolment-verification condition. Finally, we demonstrate that the improvements found in the context of CSS also generalize to state-of-the-art GPLDA, with up to 14% relative improvement in EER for NIST SRE 2010 interview and microphone verification and over 7% relative improvement in EER for NIST SRE 2010 telephone verification.
I-vector based speaker recognition using advanced channel compensation techniques
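A minimal sketch of cosine similarity scoring between two i-vectors after a linear channel-compensation projection; the projection matrix A and the i-vectors below are random placeholders, not a trained WLDA/SN-WLDA model.

```python
# Minimal sketch of cosine similarity scoring (CSS) between two i-vectors
# after applying a channel-compensation projection (e.g. an LDA/WLDA matrix A).
import numpy as np

def css_score(w_enrol, w_test, A):
    """Cosine similarity between projected i-vectors."""
    x, y = A.T @ w_enrol, A.T @ w_test
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

rng = np.random.default_rng(0)
A = rng.standard_normal((400, 150))      # e.g. 400-dim i-vectors -> 150-dim subspace
score = css_score(rng.standard_normal(400), rng.standard_normal(400), A)
print(score)  # compared against a decision threshold tuned on development data
```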
S0885230813000302
This paper presents a two-stage mixed language model technique for detecting and recognizing words that are not included in the vocabulary of a large vocabulary continuous speech recognition system. The main idea is to spot the out-of-vocabulary words and to produce a transcription for these words in terms of subword units with the help of a mixed word/subword language model in the first stage, and to convert the subword transcriptions to word hypotheses by means of a look-up table in the second stage. The performance of the proposed approach is compared to that of the state-of-the-art hybrid method reported in the literature, both on in-domain and on out-of-domain Dutch spoken material, where the term ‘domain’ refers to the ensemble of topics that were covered in the material from which the lexicon and language model were retrieved. It turns out that the proposed approach is at least as effective as a hybrid approach when it comes to recognizing in-domain material, and significantly more effective when applied to out-of-domain data. This shows that the proposed approach is easily adaptable to new domains and to new words (e.g. proper names) in the same domain. On the out-of-domain recognition task, the word error rate could be reduced by 12% relative over a baseline system incorporating a 100k word vocabulary and a basic garbage OOV word model.
An improved two-stage mixed language model approach for handling out-of-vocabulary words in large vocabulary continuous speech recognition
S0885230813000314
Reproducing smooth vocal tract trajectories is critical for high-quality articulatory speech synthesis. This paper presents an adaptive neural control scheme for this task using fuzzy logic and neural networks. The control scheme estimates motor commands from trajectories of flesh-points on selected articulators. These motor commands are then used to reproduce the trajectories of the underlying articulators in a second-order dynamical system. Initial experiments show that the control scheme is able to manipulate the mass-spring based elastic tract walls in a two-dimensional articulatory synthesizer and to realize efficient speech motor control. The proposed controller achieves high accuracy during on-line tracking of the lips, the tongue, and the jaw in the simulation of consonant–vowel sequences. It also offers salient features such as generality and adaptability for future developments of control models in articulatory synthesis.
An adaptive neural control scheme for articulatory synthesis of CV sequences
S088523081300034X
In this paper, we suggest a list of high-level features and study their applicability to the detection of cyberpedophiles. We used a corpus of chats downloaded from http://www.perverted-justice.com and two negative datasets of different nature: cybersex logs available online, and the NPS chat corpus. The classification results show that the NPS data and the pedophiles’ conversations can be accurately discriminated from each other with character n-grams, while in the more complicated case of cybersex logs there is a need for high-level features to reach good accuracy levels. In this latter setting, our results show that features that model behaviour and emotion significantly outperform the low-level ones, achieving 97% accuracy.
Exploring high-level features for detecting cyberpedophilia
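For concreteness, here is a hedged sketch of a character n-gram text classifier in scikit-learn; the texts, labels and hyper-parameters are invented placeholders, not the paper's corpus or feature set.

```python
# Hedged sketch: character n-gram text classification with scikit-learn,
# in the spirit of discriminating chat logs; data and labels are toy placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["hey how r u today", "asl? wanna chat", "meeting notes attached", "see you at lunch"]
labels = [1, 1, 0, 0]  # toy labels: 1 = one chat style, 0 = another

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["r u alone?"]))
```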
S0885230813000363
A set of words labeled with their prior emotion is an obvious place to start on the automatic discovery of the emotion of a sentence, but it is clear that context must also be considered. It may be that no simple function of the labels on the individual words captures the overall emotion of the sentence; words are interrelated and they mutually influence their affect-related interpretation. It happens quite often that a word which invokes emotion appears in a neutral sentence, or that a sentence with no emotional word carries an emotion; the same can happen across different emotion classes. The goal of this work is to distinguish automatically between prior and contextual emotion, with a focus on exploring features important to this task. We present a set of features which enable us to take the contextual emotion of a word and the syntactic structure of the sentence into account when assigning sentences to emotion classes. The evaluation includes assessing the performance of different feature sets across multiple classification methods. We present the features and a promising learning method that significantly outperforms two reasonable baselines. We group our features by the similarity of their nature; another facet of our evaluation therefore considers each feature group separately and investigates how well it contributes to the result. The experiments show that all features contribute to the result, but it is the combination of all the features that gives the best performance.
Prior and contextual emotion of words in sentential context
S0885230813000375
This paper presents our research on automatic annotation of a five-billion-word corpus of Japanese blogs with information on affect and sentiment. We first survey emotion blog corpora and find that no large-scale emotion corpus has been available for the Japanese language. We choose the largest blog corpus for the language and annotate it using two systems for affect analysis: ML-Ask for word- and sentence-level affect analysis and CAO for detailed analysis of emoticons. The annotated information includes affective features such as sentence subjectivity (emotive/non-emotive) and emotion classes (joy, sadness, etc.), useful in affect analysis. The annotations are also generalized on a two-dimensional model of affect to obtain information on sentence valence (positive/negative), useful in sentiment analysis. The annotations are evaluated in several ways: first, on a test set of a thousand sentences extracted randomly and evaluated by over forty respondents; secondly, by comparing the annotation statistics to other existing emotion blog corpora; and finally, by applying the corpus in several tasks, such as generation of an emotion object ontology or retrieval of emotional and moral consequences of actions.
Automatically annotating a five-billion-word corpus of Japanese blogs for sentiment and affect analysis
S0885230813000387
Language models are crucial for many tasks in NLP (Natural Language Processing) and n-grams are the best way to build them. Huge effort is being invested in improving n-gram language models. By introducing external information (morphology, syntax, partitioning into documents, etc.) into the models, a significant improvement can be achieved. The models can, however, also be improved with no external information; smoothing is an excellent example of such an improvement. In this article we show another way of improving the models that also requires no external information. We examine patterns that can be found in large corpora by building semantic spaces (HAL, COALS, BEAGLE and others described in this article). These semantic spaces have never been tested in language modeling before. Our method uses semantic spaces and clustering to build classes for a class-based language model. The class-based model is then coupled with a standard n-gram model to create a very effective language model. Our experiments show that our models reduce the perplexity and improve the accuracy of n-gram language models with no external information added. Training of our models is fully unsupervised. Our models are very effective for inflectional languages, which are particularly hard to model. We show results for five different semantic spaces with different settings and different numbers of classes. The perplexity tests are accompanied by machine translation tests that prove the ability of the proposed models to improve the performance of a real-world application.
Semantic spaces for improving language modeling
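A minimal sketch of how a class-based model can be coupled with a standard n-gram model by linear interpolation; the probabilities and interpolation weight are toy values, and the exact coupling used in the paper may differ.

```python
# Minimal sketch of coupling a word n-gram model with a class-based model by
# linear interpolation:  P(w|h) = lam * P_ngram(w|h) + (1-lam) * P(c(w)|c(h)) * P(w|c(w))
# Probabilities below are toy placeholders; a real system estimates them from a corpus
# whose classes come from clustering words in a semantic space (HAL, COALS, ...).

def interpolated_prob(p_ngram, p_class_trans, p_word_given_class, lam=0.7):
    return lam * p_ngram + (1.0 - lam) * p_class_trans * p_word_given_class

print(interpolated_prob(p_ngram=0.012, p_class_trans=0.20, p_word_given_class=0.05))
```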
S0885230813000399
We present our approach to unsupervised training of speech recognizers. Our approach iteratively adjusts sound units that are optimized for the acoustic domain of interest. We thus enable the use of speech recognizers for applications in speech domains where transcriptions do not exist. The resulting recognizer is a state-of-the-art recognizer on the optimized units. Specifically we propose building HMM-based speech recognizers without transcribed data by formulating the HMM training as an optimization over both the parameter and transcription sequence space. Audio is then transcribed into these self-organizing units (SOUs). We describe how SOU training can be easily implemented using existing HMM recognition tools. We tested the effectiveness of SOUs on the task of topic classification on the Switchboard and Fisher corpora. On the Switchboard corpus, the unsupervised HMM-based SOU recognizer, initialized with a segmental tokenizer, performed competitively with an HMM-based phoneme recognizer trained with 1h of transcribed data, and outperformed the Brno University of Technology (BUT) Hungarian phoneme recognizer (Schwartz et al., 2004). We also report improvements, including the use of context dependent acoustic models and lattice-based features, that together reduce the topic verification equal error rate from 12% to 7%. In addition to discussing the effectiveness of the SOU approach, we describe how we analyzed some selected SOU n-grams and found that they were highly correlated with keywords, demonstrating the ability of the SOU technology to discover topic relevant keywords.
Unsupervised training of an HMM-based self-organizing unit recognizer with applications to topic classification and keyword discovery
S0885230813000466
This paper describes speech intelligibility enhancement for Hidden Markov Model (HMM) generated synthetic speech in noise. We present a method for modifying the Mel cepstral coefficients generated by statistical parametric models that have been trained on plain speech. We update these coefficients such that the glimpse proportion – an objective measure of the intelligibility of speech in noise – increases, while keeping the speech energy fixed. An acoustic analysis reveals that the modified speech is boosted in the region 1–4kHz, particularly for vowels, nasals and approximants. Results from listening tests employing speech-shaped noise show that the modified speech is as intelligible as a synthetic voice trained on plain speech whose duration, Mel cepstral coefficients and excitation signal parameters have been adapted to Lombard speech from the same speaker. Our proposed method does not require these additional recordings of Lombard speech. In the presence of a competing talker, both modification and adaptation of spectral coefficients give more modest gains.
Intelligibility enhancement of HMM-generated speech in additive noise by modifying Mel cepstral coefficients to increase the glimpse proportion
S088523081300048X
Language is being increasingly harnessed not only to create natural human–machine interfaces but also to infer social behaviors and interactions. In the same vein, we investigate a novel spoken language task: inferring social relationships in two-party conversations, namely whether the two parties are related as family, are strangers, or are involved in business transactions. For our study, we created a corpus of all incoming and outgoing calls from a few homes over the span of a year. On this unique naturalistic corpus of everyday telephone conversations, which is unlike Switchboard or any other public domain corpora, we demonstrate that standard natural language processing techniques can achieve accuracies of about 88%, 82%, 74% and 80% in differentiating business from personal calls, family from non-family calls, familiar from unfamiliar calls, and family from other personal calls, respectively. Through a series of experiments with our classifiers, we characterize the properties of telephone conversations and find: (a) that 30 words of openings (beginnings) are sufficient to predict business from personal calls, which could potentially be exploited in designing context-sensitive interfaces in smart phones; (b) that our corpus-based analysis does not support Schegloff and Sacks's manual analysis of exemplars, in which they conclude that pre-closings differ significantly between business and personal calls – closings fared no better than a random segment; and (c) that the distribution of different types of calls is stable over durations as short as 1–2 months. In summary, our results show that social relationships can be inferred automatically in two-party conversations with sufficient accuracy to support practical applications.
Inferring social nature of conversations from words: Experiments on a corpus of everyday telephone conversations
S0885230813000491
We present work on understanding natural language in a situated domain in an incremental, word-by-word fashion. We explore a set of models specified as Markov Logic Networks and show that a model that has access to information about the visual context during an utterance, its discourse context, the words of the utterance, as well as the linguistic structure of the utterance performs best and is robust to noisy speech input. We explore the incremental properties of the models and offer some analysis. We conclude that MLNs provide a promising framework for specifying such models in a general, possibly domain-independent way.
Situated incremental natural language understanding using Markov Logic Networks
S0885230813000508
This study investigated whether the signal-to-noise ratio (SNR) of the interlocutor (speech partner) influences a speaker's vocal intensity in conversational speech. Twenty participants took part in artificial conversations with controlled levels of interlocutor speech and background noise. Three different levels of background noise were presented over headphones and the participant engaged in a “live interaction” with the experimenter. The experimenter's vocal intensity was manipulated in order to modify the SNR. The participants’ vocal intensity was measured. As observed previously, vocal intensity increased as background noise level increased. However, the SNR of the interlocutor did not have a significant effect on participants’ vocal intensity. These results suggest that increasing the signal level of the other party at the earpiece would not reduce the tendency of telephone users to talk loudly.
Does the signal-to-noise ratio of an interlocutor influence a speaker's vocal intensity?
S088523081300051X
This paper describes a noisy-channel approach for the normalization of informal text, such as that found in emails, chat rooms, and SMS messages. In particular, we introduce two character-level methods for the abbreviation modeling aspect of the noisy channel model: a statistical classifier using language-based features to decide whether a character is likely to be removed from a word, and a character-level machine translation model. A two-phase approach is used; in the first stage the possible candidates are generated using the selected abbreviation model and in the second stage we choose the best candidate by decoding using a language model. Overall we find that this approach works well and is on par with current research in the field.
Normalization of informal text
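As a hedged illustration of the two-phase noisy-channel decoding described above, the snippet below scores candidate normalizations by combining a channel-model probability with a language-model probability; all candidates and probabilities are invented.

```python
# Hedged sketch of noisy-channel normalization: candidate generation by an
# abbreviation/channel model, then selection with a language model.
import math

def normalize(token, candidates, channel_logp, lm_logp):
    """Pick argmax_w  log P(token|w) + log P(w)."""
    return max(candidates, key=lambda w: channel_logp[(token, w)] + lm_logp[w])

candidates = ["tomorrow", "to morrow", "tumor"]
channel_logp = {("2moro", "tomorrow"): math.log(0.6),
                ("2moro", "to morrow"): math.log(0.3),
                ("2moro", "tumor"): math.log(0.1)}
lm_logp = {"tomorrow": math.log(0.01), "to morrow": math.log(0.0001), "tumor": math.log(0.001)}
print(normalize("2moro", candidates, channel_logp, lm_logp))  # -> "tomorrow"
```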
S0885230813000521
This paper proposes the use of neutral reference models to detect local emotional prominence in the fundamental frequency. A novel approach based on functional data analysis (FDA) is presented, which aims to capture the intrinsic variability of F0 contours. The neutral models are represented by a basis of functions and the testing F0 contour is characterized by the projections onto that basis. For a given F0 contour, we estimate the functional principal component analysis (PCA) projections, which are used as features for emotion detection. The approach is evaluated with lexicon-dependent (i.e., one functional PCA basis per sentence) and lexicon-independent (i.e., a single functional PCA basis across sentences) models. The experimental results show that the proposed system can lead to accuracies as high as 75.8% in binary emotion classification, which is 6.2% higher than the accuracy achieved by a benchmark system trained with global F0 statistics. The approach can be implemented at sub-sentence level (e.g., 0.5s segments), facilitating the detection of localized emotional information conveyed within the sentence. The approach is validated with the SEMAINE database, which is a spontaneous corpus. The results indicate that the proposed scheme can be effectively employed in real applications to detect emotional speech.
Shape-based modeling of the fundamental frequency contour for emotion detection in speech
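A rough sketch of using principal component projections of F0 contours as features, assuming contours resampled to a fixed length; this stands in for, but does not reproduce, the paper's functional data analysis pipeline.

```python
# Illustrative sketch: build a PCA basis from neutral F0 contours and use the
# projections of a test contour as emotion-detection features (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
neutral_contours = rng.standard_normal((50, 100))   # 50 neutral F0 contours, 100 samples each
test_contour = rng.standard_normal(100)

pca = PCA(n_components=5).fit(neutral_contours)     # "neutral reference" basis
features = pca.transform(test_contour[None, :])[0]  # projections used as features
print(features)
```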
S0885230813000533
Since 2008, interview-style speech has become an important part of the NIST speaker recognition evaluations (SREs). Unlike telephone speech, interview speech has a lower signal-to-noise ratio, which necessitates robust voice activity detectors (VADs). This paper highlights the characteristics of interview speech files in NIST SREs and discusses the difficulties in performing speech/non-speech segmentation in these files. To overcome these difficulties, this paper proposes using speech enhancement techniques as a pre-processing step for enhancing the reliability of energy-based and statistical-model-based VADs. A decision strategy is also proposed to overcome the undesirable effects caused by impulsive signals and sinusoidal background signals. The proposed VAD is compared with the ASR transcripts provided by NIST, the VAD in the ETSI-AMR Option 2 coder, statistical-model (SM) based VAD, and Gaussian mixture model (GMM) based VAD. Experimental results based on the NIST 2010 SRE dataset suggest that the proposed VAD outperforms these conventional ones whenever interview-style speech is involved. This study also demonstrates that (1) noise reduction is vital for energy-based VAD under low SNR; (2) the ASR transcripts and ETSI-AMR speech coder do not produce accurate speech and non-speech segmentations; and (3) spectral subtraction makes better use of background spectra than the likelihood-ratio tests in the SM-based VAD. The segmentation files produced by the proposed VAD can be found at http://bioinfo.eie.polyu.edu.hk/ssvad.
A study of voice activity detection techniques for NIST speaker recognition evaluations
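A minimal energy-based VAD sketch with a crude noise-floor estimate, to make the energy-thresholding idea concrete; the frame length, margin and noise-floor heuristic are assumptions, not the paper's settings.

```python
# Minimal energy-based VAD sketch; parameters are illustrative only.
import numpy as np

def energy_vad(signal, frame_len=400, margin_db=6.0):
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    noise_floor = np.percentile(energy_db, 10)        # assume quietest frames are noise
    return energy_db > noise_floor + margin_db        # True = speech frame

rng = np.random.default_rng(0)
sig = np.concatenate([0.01 * rng.standard_normal(8000),   # "noise only"
                      0.3 * rng.standard_normal(8000)])   # "speech-like" energy
print(energy_vad(sig).astype(int))
```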
S0885230813000545
We report progress towards developing a sensor module that categorizes types of laughter for application in dialogue systems or social-skills training situations. The module will also function as a component to measure discourse engagement in natural conversational speech. This paper presents the results of an analysis of the sounds of human laughter in a very large corpus of naturally occurring conversational speech and our classification of the laughter types according to social function. The various types of laughter were categorized as either polite or genuinely mirthful, and the analysis of these laughs forms the core of this report. Statistical analysis of the acoustic features of each laugh was performed, and a Principal Component Analysis and a Classification Tree analysis were carried out to determine the main contributing factors in each case. A statistical model was then trained using a Support Vector Machine to predict the most likely category for each laugh in both a speaker-specific and a speaker-independent manner. Better than 70% accuracy was obtained in automatic classification tests.
Classification of social laughter in natural conversational speech
S0885230813000557
What makes speech produced in the presence of noise (Lombard speech) more intelligible than conversational speech produced in quiet conditions? This study investigates the hypothesis that speakers modify their speech in the presence of noise in such a way that acoustic contrasts between their speech and the background noise are enhanced, which would improve speech audibility. Ten French speakers were recorded while playing an interactive game first in a quiet condition, then in two types of noisy conditions with different spectral characteristics: a broadband noise (BB) and a cocktail-party noise (CKTL), both played over loudspeakers at 86dB SPL. In line with Lu and Cooke (2009b), our results suggest no systematic “active” adaptation of the whole speech spectrum or vocal intensity to the spectral characteristics of the ambient noise. Regardless of the type of noise, the gender or the type of speech segment, the primary strategy was to speak louder in noise, with a greater adaptation in BB noise and an emphasis on vowels rather than any type of consonants. Active strategies were evidenced, but were subtle and of second order to the primary strategy of speaking louder: for each gender, fundamental frequency (f0) and first formant frequency (F1) were modified in cocktail-party noise in a way that optimized the release in energetic masking induced by this type of noise. Furthermore, speakers showed two additional modifications as compared to shouted speech, which therefore cannot be interpreted in terms of vocal effort only: they enhanced the modulation of their speech in f0 and vocal intensity, and they boosted their speech spectrum specifically around 3kHz, in the region of maximum ear sensitivity associated with the actor's or singer's formant.
Speaking in noise: How does the Lombard effect improve acoustic contrasts between speech and ambient noise?
S0885230813000570
Obstructive sleep apnoea (OSA) is a highly prevalent disease affecting an estimated 2–4% of the adult male population that is difficult and very costly to diagnose because symptoms can remain unnoticed for years. The reference diagnostic method, Polysomnography (PSG), requires the patient to spend a night at the hospital monitored by specialized equipment. Therefore, fast and less costly screening techniques are normally applied to set priorities for proceeding to the polysomnography diagnosis. In this article the use of speech analysis is proposed as an alternative or complement to existing screening methods. A set of voice features that could be related to apnoea are defined, based on previous results from other authors and our own analysis. These features are analyzed first in isolation and then in combination to assess their discriminative power to classify voices as corresponding to apnoea patients or healthy subjects. This analysis is performed on a database containing three repetitions of four carefully designed sentences read by 40 healthy subjects and 42 subjects suffering from severe apnoea. As a result of the analysis, a linear discriminant analysis (LDA) model was defined including a subset of eight features (signal-to-disperiodicity ratio, a nasality measure, harmonic-to-noise ratio, jitter, difference between third and second formants on a specific vowel, duration of two of the sentences and the percentage of silence in one of the sentences). This model was tested on a separate database containing 20 healthy and 20 apnoea subjects, yielding a sensitivity of 85% and a specificity of 75%, with an F1-measure of 81%. These results indicate that the proposed method, which only requires a few minutes to record and analyze the patient's voice during the visit to the specialist, could help in the development of a non-intrusive, fast and convenient PSG-complementary screening technique for OSA.
Analysis of voice features related to obstructive sleep apnoea and their application in diagnosis support
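A hedged sketch of fitting a linear discriminant model on an eight-feature table and reporting sensitivity and specificity on held-out subjects; the feature values below are random placeholders rather than real voice measurements.

```python
# Hedged sketch: LDA screening model with sensitivity/specificity on held-out data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((82, 8)), rng.integers(0, 2, 82)   # toy training table
X_test, y_test = rng.standard_normal((40, 8)), rng.integers(0, 2, 40)     # toy held-out table

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, lda.predict(X_test)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```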
S0885230813000582
Speech output technology is finding widespread application, including in scenarios where intelligibility might be compromised – at least for some listeners – by adverse conditions. Unlike most current algorithms, talkers continually adapt their speech patterns as a response to the immediate context of spoken communication, where the type of interlocutor and the environment are the dominant situational factors influencing speech production. Observations of talker behaviour can motivate the design of more robust speech output algorithms. Starting with a listener-oriented categorisation of possible goals for speech modification, this review article summarises the extensive set of behavioural findings related to human speech modification, identifies which factors appear to be beneficial, and goes on to examine previous computational attempts to improve intelligibility in noise. The review concludes by tabulating 46 speech modifications, many of which have yet to be perceptually or algorithmically evaluated. Consequently, the review provides a roadmap for future work in improving the robustness of speech output.
The listening talker: A review of human and algorithmic context-induced modifications of speech
S0885230813000594
Automatic emotion recognition from speech signals is an important research area that adds value to machine intelligence. Pitch, duration, energy and Mel-frequency cepstral coefficients (MFCC) are widely used features in the field of speech emotion recognition. A single classifier or a combination of classifiers is used to recognize emotions from the input features. The present work investigates the performance of features based on Autoregressive (AR) parameters, which include gain and reflection coefficients, in addition to the traditional linear prediction coefficients (LPC), for recognizing emotions from speech signals. The classification performance of the AR-parameter features is studied using discriminant, k-nearest neighbor (KNN), Gaussian mixture model (GMM), back-propagation artificial neural network (ANN) and support vector machine (SVM) classifiers, and we find that the reflection-coefficient features recognize emotions better than the LPC. To improve the emotion recognition accuracy, we propose a class-specific multiple classifiers scheme, composed of multiple parallel classifiers, each of which is optimized for one class. The classifier for each emotional class is built from a feature identified from a pool of features and a classifier identified from a pool of classifiers that together optimize the recognition of that particular emotion. The outputs of the classifiers are combined by a decision-level fusion technique. The experimental results show that the proposed scheme improves the emotion recognition accuracy. Further improvement in recognition accuracy is obtained when the scheme is built by including MFCC features in the pool of features.
Class-specific multiple classifiers scheme to recognize emotions from speech signals
S0885230813000600
Since Sag et al. (2002) highlighted a key problem that had been underappreciated in the past in natural language processing (NLP), namely idiosyncratic multiword expressions (MWEs) such as idioms, quasi-idioms, clichés, quasi-clichés, institutionalized phrases, proverbs and old sayings, and how to deal with them, many attempts have been made to extract these expressions from corpora and construct a lexicon of them. However, no extensive, reliable solution has yet been realized. This paper presents an overview of a comprehensive lexicon of Japanese multiword expressions (Japanese MWE Lexicon: JMWEL), which has been compiled in order to realize linguistically precise and wide-coverage natural Japanese processing systems. The JMWEL is characterized by significant notational, syntactic, and semantic diversity as well as a detailed description of the syntactic functions, structures, and flexibilities of MWEs. The lexicon contains about 111,000 header entries written in kana (phonetic characters) and their almost 820,000 variants written in kana and kanji (ideographic characters). The paper demonstrates the JMWEL's validity, supported mainly by comparing the lexicon with a large-scale Japanese N-gram frequency dataset, namely the LDC2009T08 generated by Google Inc. (Kudo and Kazawa, 2009). The present work is an attempt to provide a tentative answer for Japanese, from outside statistical empiricism, to the question posed by Church (2011): “How many multiword expressions do people know?”
A lexicon of multiword expressions for linguistically precise, wide-coverage natural language processing
S0885230813000612
This paper proposes a domain-independent statistical methodology to develop dialog managers for spoken dialog systems. Our methodology employs a data-driven classification procedure to generate abstract representations of system turns taking into account the previous history of the dialog. A statistical framework is also introduced for the development and evaluation of dialog systems created using the methodology, which is based on a dialog simulation technique. The benefits and flexibility of the proposed methodology have been validated by developing statistical dialog managers for four spoken dialog systems of different complexity, designed for different languages (English, Italian, and Spanish) and application domains (from transactional to problem-solving tasks). The evaluation results show that the proposed methodology allows rapid development of new dialog managers as well as exploration of new dialog strategies, which permits the development of enhanced versions of already existing systems.
A domain-independent statistical methodology for dialog management in spoken dialog systems
S0885230813000764
Automatic detection of a user's interest in spoken dialog plays an important role in many applications, such as tutoring systems and customer service systems. In this study, we propose a decision-level fusion approach using acoustic and lexical information to accurately sense a user's interest at the utterance level. Our system consists of three parts: acoustic/prosodic model, lexical model, and a model that combines their decisions for the final output. We use two different regression algorithms to complement each other for the acoustic model. For lexical information, in addition to the bag-of-words model, we propose new features including a level-of-interest value for each word, length information using the number of words, estimated speaking rate, silence in the utterance, and similarity with other utterances. We also investigate the effectiveness of using more automatic speech recognition (ASR) hypotheses (n-best lists) to extract lexical features. The outputs from the acoustic and lexical models are combined at the decision level. Our experiments show that combining acoustic evidence with lexical information improves level-of-interest detection performance, even when lexical features are extracted from ASR output with high word error rate.
Level of interest sensing in spoken dialog using decision-level fusion of acoustic and lexical evidence
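A minimal sketch of decision-level fusion: the per-utterance scores of an acoustic model and a lexical model are combined with a weight that would be tuned on development data; the scores and weight here are purely illustrative.

```python
# Hedged sketch of decision-level fusion of two per-utterance level-of-interest scores.
def fuse(acoustic_score, lexical_score, w=0.6):
    """Weighted combination; w would be tuned on a development set."""
    return w * acoustic_score + (1.0 - w) * lexical_score

print(fuse(acoustic_score=0.72, lexical_score=0.55))
```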
S0885230813000776
This paper summarizes the results of our investigations into estimating the shape of the glottal excitation source from speech signals. We employ the Liljencrants–Fant (LF) model describing the glottal flow and its derivative. The one-dimensional glottal source shape parameter Rd describes the transition in voice quality from a tense to a breathy voice. The parameter Rd has been derived from a statistical regression of the R waveshape parameters which parameterize the LF model. First, we introduce a variant of our recently proposed adaptation and range extension of the Rd parameter regression. Secondly, we discuss in detail the aspects of estimating the glottal source shape parameter Rd using the phase minimization paradigm. Based on the analysis of a large number of speech signals we describe the major conditions that are likely to result in erroneous Rd estimates. Based on these findings we investigate means to increase the robustness of the Rd parameter estimation. We use Viterbi smoothing to suppress unnatural jumps of the estimated Rd parameter contours within short time segments. Additionally, we propose to steer the Viterbi algorithm by exploiting the covariation of other voice descriptors to improve Viterbi smoothing. The novel Viterbi steering is based on a Gaussian Mixture Model (GMM) that represents the joint density of the voice descriptors and the Open Quotient (OQ) estimated from corresponding electroglottographic (EGG) signals. A conversion function derived from the mixture model predicts OQ from the voice descriptors. Converted to Rd, it defines an additional prior probability to adapt the partial probabilities of the Viterbi algorithm accordingly. Finally, we evaluate the performance of the phase-minimization-based methods using both variants to adapt and extend the Rd regression on a synthetic test set, as well as in combination with Viterbi smoothing and each variant of the novel Viterbi steering on a test set of natural speech. The experimental findings exhibit improvements for both Viterbi approaches.
On the use of voice descriptors for glottal source shape parameter estimation
S088523081300079X
Discriminative confidence based on multi-layer perceptrons (MLPs) and multiple features has shown a significant advantage compared to the widely used lattice-based confidence in spoken term detection (STD). Although the MLP-based framework can handle any features derived from a multitude of sources, choosing all possible features may lead to overly complex models and hence less generality. In this paper, we design an extensive set of features and analyze their contribution to STD individually and as a group. The main goal is to choose a small set of features that are sufficiently informative while keeping the model simple and generalizable. We employ two established models to conduct the analysis: one is linear regression, which targets the most relevant features, and the other is logistic linear regression, which targets the most discriminative features. We find that the most informative features are those derived from diverse sources (ASR decoding, duration and lexical properties) and that the two models deliver highly consistent feature ranks. STD experiments on both English and Spanish data demonstrate significant performance gains with the proposed feature sets.
Feature analysis for discriminative confidence estimation in spoken term detection
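As a hedged example of the regression-based feature analysis described above, the sketch below ranks candidate confidence features by the magnitude of their standardized logistic-regression coefficients; the feature names and data are invented.

```python
# Hedged sketch: rank confidence features by standardized logistic-regression coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

names = ["lattice_posterior", "duration", "n_phones", "acoustic_score", "lm_score"]
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.standard_normal((500, len(names))))
y = rng.integers(0, 2, 500)   # 1 = true hit, 0 = false alarm (toy labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in sorted(zip(names, clf.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:20s} {coef:+.3f}")
```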
S0885230813000806
We compared the performance of an automatic speech recognition system using n-gram language models, HMM acoustic models, as well as combinations of the two, with the word recognition performance of human subjects who either had access to only acoustic information, had information only about local linguistic context, or had access to a combination of both. All speech recordings used were taken from Japanese narration and spontaneous speech corpora. Humans have difficulty recognizing isolated words taken out of context, especially when taken from spontaneous speech, partly due to word-boundary coarticulation. Our recognition performance improves dramatically when one or two preceding words are added. Short words in Japanese mainly consist of post-positional particles (i.e. wa, ga, wo, ni, etc.), which are function words located just after content words such as nouns and verbs. So the predictability of short words is very high within the context of the one or two preceding words, and thus recognition of short words is drastically improved. Providing even more context further improves human prediction performance under text-only conditions (without acoustic signals). It also improves speech recognition, but the improvement is relatively small. Recognition experiments using an automatic speech recognizer were conducted under conditions almost identical to the experiments with humans. The performance of the acoustic models without any language model, or with only a unigram language model, was greatly inferior to human recognition performance with no context. In contrast, prediction performance using a trigram language model was superior or comparable to human performance when given a preceding and a succeeding word. These results suggest that we must improve our acoustic models rather than our language models to make automatic speech recognizers comparable to humans in recognition performance under conditions where the recognizer has limited linguistic context.
Effect of acoustic and linguistic contexts on human and machine speech recognition
S0885230813000843
This paper presents a study on the importance of short-term speech parameterizations for expressive statistical parametric synthesis. Assuming a source-filter model of speech production, the analysis is conducted over spectral parameters, here defined as features which represent a minimum-phase synthesis filter, and some excitation parameters, which are features used to construct a signal that is fed to the minimum-phase synthesis filter to generate speech. In the first part, different spectral and excitation parameters that are applicable to statistical parametric synthesis are tested to determine which ones are the most emotion dependent. The analysis is performed through two methods proposed to measure the relative emotion dependency of each feature: one based on K-means clustering, and another based on Gaussian mixture modeling for emotion identification. Two commonly used forms of parameters for the short-term speech spectral envelope, the Mel cepstrum and the Mel line spectrum pairs are utilized. As excitation parameters, the anti-causal cepstrum, the time-smoothed group delay, and band-aperiodicity coefficients are considered. According to the analysis, the line spectral pairs are the most emotion dependent parameters. Among the excitation features, the band-aperiodicity coefficients present the highest correlation with the speaker's emotion. The most emotion dependent parameters according to this analysis were selected to train an expressive statistical parametric synthesizer using a speaker and language factorization framework. Subjective test results indicate that the considered spectral parameters have a bigger impact on the synthesized speech emotion when compared with the excitation ones.
On the impact of excitation and spectral parameters for expressive statistical parametric speech synthesis
S0885230813000855
We examine the similarities and differences in the expression of emotion in the singing and the speaking voice. Three internationally renowned opera singers produced “vocalises” (using a schwa vowel) and short nonsense phrases in different interpretations for 10 emotions. Acoustic analyses of emotional expression in the singing samples show significant differences between the emotions. In addition to the obvious effects of loudness and tempo, spectral balance and perturbation make significant contributions (high effect sizes) to this differentiation. A comparison of the emotion-specific patterns produced by the singers in this study with published data for professional actors portraying different emotions in speech generally shows a very high degree of similarity. However, singers tend to rely more than actors on the use of voice perturbation, specifically vibrato, in particular in the case of high arousal emotions. It is suggested that this may be due to the restrictions and constraints imposed by the musical structure.
Comparing the acoustic expression of emotion in the speaking and the singing voice
S0885230813000867
One of the aims of Assistive Technologies is to help people with disabilities to communicate with others and to provide means of access to information. As an aid to Deaf people, we present in this work a production-quality rule-based machine translation system from Spanish to Spanish Sign Language (LSE) glosses, which is a necessary precursor to building a full machine translation system that eventually produces animation output. The system implements a transfer-based architecture from the syntactic functions of dependency analyses. A sketch of LSE is also presented. Several topics regarding translation to sign languages are addressed: the lexical gap, the bootstrapping of a bilingual lexicon, the generation of word order for topic-oriented languages, and the treatment of classifier predicates and classifier names. The system has been evaluated with an open-domain testbed, reporting a 0.30 BLEU (BiLingual Evaluation Understudy) and 42% TER (Translation Error Rate). These results show consistent improvements over a statistical machine translation baseline, and some improvements over the same system preserving the word order in the source sentence. Finally, the linguistic analysis of errors has identified some differences due to a certain degree of structural variation in LSE.
A rule-based translation from written Spanish to Spanish Sign Language glosses
S0885230813000879
While there is great potential for sign language animation generation software to improve the accessibility of information for deaf individuals with low written-language literacy, the understandability of current sign language animation systems is limited. Data-driven methodologies using annotated sign language corpora encoding detailed human movement have enabled some researchers to address several key linguistic challenges in ASL generation. This article motivates and describes our current research on collecting a motion-capture corpus of American Sign Language (ASL). As an evaluation of our motion-capture configuration, calibration, and recording protocol, we have conducted several rounds of evaluation studies with native ASL signers, and we have made use of our collected data to synthesize novel animations of ASL, which have also been evaluated in experimental studies with native signers.
Collecting and evaluating the CUNY ASL corpus for research on American Sign Language animation
S0885230813000880
Most research in the automatic assessment of free-text answers written by students addresses the English language. This paper handles the assessment task for Arabic. This research focuses on applying multiple similarity measures, separately and in combination. Several techniques that depend on translation are introduced to overcome the lack of text-processing resources for Arabic, such as extracting model answers automatically from an already built database and applying K-means clustering to scale the obtained similarity values. Additionally, this research presents the first benchmark Arabic data set that contains 610 students’ short answers together with their English translations.
Automatic scoring for answers to Arabic test questions
S0885230813001101
A speech pre-processing algorithm is presented that improves the speech intelligibility in noise for the near-end listener. The algorithm improves intelligibility by optimally redistributing the speech energy over time and frequency according to a perceptual distortion measure, which is based on a spectro-temporal auditory model. Since this auditory model takes into account short-time information, transients will receive more amplification than stationary vowels, which has been shown to be beneficial for intelligibility of speech in noise. The proposed method is compared to unprocessed speech and two reference methods using an intelligibility listening test. Results show that the proposed method leads to significant intelligibility gains while still preserving quality. Although one of the methods used as a reference obtained higher intelligibility gains, this happened at the cost of decreased quality. Matlab code is provided.
Speech energy redistribution for intelligibility improvement in noise based on a perceptual distortion measure
S0885230813001113
This study focuses on feature selection in paralinguistic analysis and presents recently developed supervised and unsupervised methods for feature subset selection and feature ranking. Using the standard k-nearest-neighbors (kNN) rule as the classification algorithm, the feature selection methods are evaluated individually and in different combinations in seven paralinguistic speaker trait classification tasks. In each analyzed data set, the overall number of features highly exceeds the number of data points available for training and evaluation, making a well-generalizing feature selection process extremely difficult. The performance of feature sets on the feature selection data is observed to be a poor indicator of their performance on unseen data. The studied feature selection methods clearly outperform a standard greedy hill-climbing selection algorithm by being more robust against overfitting. When the selection methods are suitably combined with each other, the performance in the classification task can be further improved. In general, it is shown that the use of automatic feature selection in paralinguistic analysis can be used to reduce the overall number of features to a fraction of the original feature set size while still achieving a comparable or even better performance than baseline support vector machine or random forest classifiers using the full feature set. The most typically selected features for recognition of speaker likability, intelligibility and five personality traits are also reported.
Feature selection methods and their combinations in high-dimensional classification of speaker likability, intelligibility and personality traits
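A hedged sketch of combining a filter-style feature selector with a kNN classifier to shrink a very large feature set; the data is synthetic and the univariate F-test selector is an illustrative choice, not one of the paper's methods.

```python
# Hedged sketch: filter-based feature selection followed by a kNN classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Many more features than samples, mimicking the high-dimensional paralinguistic setting.
X, y = make_classification(n_samples=200, n_features=1000, n_informative=20, random_state=0)
pipe = make_pipeline(SelectKBest(f_classif, k=50), KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(pipe, X, y, cv=5).mean())
```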
S0885230813001125
The maximum a posteriori (MAP) criterion is popularly used for feature compensation (FC) and acoustic model adaptation (MA) to reduce the mismatch between training and testing data sets. MAP-based FC and MA require prior densities of mapping function parameters, and designing suitable prior densities plays an important role in obtaining satisfactory performance. In this paper, we propose to use an environment structuring framework to provide suitable prior densities for facilitating MAP-based FC and MA for robust speech recognition. The framework is constructed in a two-stage hierarchical tree structure using environment clustering and partitioning processes. The constructed framework is highly capable of characterizing local information about complex speaker and speaking acoustic conditions. The local information is utilized to specify hyper-parameters in prior densities, which are then used in MAP-based FC and MA to handle the mismatch issue. We evaluated the proposed framework on Aurora-2, a connected digit recognition task, and Aurora-4, a large vocabulary continuous speech recognition (LVCSR) task. On both tasks, experimental results showed that with the prepared environment structuring framework, we could obtain suitable prior densities for enhancing the performance of MAP-based FC and MA.
Incorporating local information of the acoustic environments to MAP-based feature compensation and acoustic model adaptation
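For reference, the standard MAP update of a Gaussian mixture mean illustrates where prior hyper-parameters enter; in this framework the environment-structuring tree would supply the prior mean, though the paper's exact formulation may differ:

\[ \hat{\mu}_k = \frac{\tau\,\mu_{0,k} + \sum_{t}\gamma_k(t)\,x_t}{\tau + \sum_{t}\gamma_k(t)} \]

Here \(\mu_{0,k}\) is the prior mean of component \(k\), \(\tau\) is the relevance factor (a prior hyper-parameter), and \(\gamma_k(t)\) is the posterior occupancy of component \(k\) for observation \(x_t\).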
S0885230813001137
Laryngeal high-speed videoendoscopy is a state-of-the-art technique to examine physiological vibrational patterns of the vocal folds. With sampling rates of thousands of frames per second, high-speed videoendoscopy produces a large amount of data that is difficult to analyze subjectively. In order to visualize high-speed video in a straightforward and intuitive way, many methods have been proposed to condense the three-dimensional data into a few static images that preserve characteristics of the underlying vocal fold vibratory patterns. In this paper, we propose the “glottaltopogram,” which is based on principal component analysis of changes over time in the brightness of each pixel in consecutive video images. This method reveals the overall synchronization of the vibrational patterns of the vocal folds over the entire laryngeal area. Experimental results showed that this method is effective in visualizing pathological and normal vocal fold vibratory patterns.
The glottaltopogram: A method of analyzing high-speed images of the vocal folds
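An illustrative sketch, under the assumption that the video is available as a (time, height, width) brightness array: principal component analysis of per-pixel brightness changes across consecutive frames, loosely following the glottaltopogram idea; the frame stack is random stand-in data.

```python
# Illustrative sketch: PCA of per-pixel brightness changes in consecutive frames.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.random((2000, 64, 64))                 # (time, height, width) brightness
diffs = np.diff(frames, axis=0).reshape(1999, -1)   # brightness change per pixel over time

pca = PCA(n_components=1).fit(diffs)
spatial_map = pca.components_[0].reshape(64, 64)    # dominant spatial pattern of change
print(spatial_map.shape)
```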
S0885230813001149
For several decades now, there has been sporadic interest in automatically characterizing the speech impairment due to Parkinson's disease (PD). Most early studies were confined to quantifying a few speech features that were easy to compute. More recent studies have adopted a machine learning approach where a large number of potential features are extracted and the models are learned automatically from the data. In the same vein, here we characterize the disease using a relatively large cohort of 168 subjects, collected from multiple (three) clinics. We elicited speech using three tasks – the sustained phonation task, the diadochokinetic task and a reading task, all within a time budget of 4 min, prompted by a portable device. From these recordings, we extracted 1582 features for each subject using openSMILE, a standard feature extraction tool. We compared the effectiveness of three strategies for learning a regularized regression and found that ridge regression performs better than lasso and support vector regression for our task. We refined the feature extraction to capture pitch-related cues, including jitter and shimmer, more accurately using a time-varying harmonic model of speech. Our results show that the severity of the disease can be inferred from speech with a mean absolute error of about 5.5, explaining 61% of the variance and consistently well above chance across all clinics. Of the three speech elicitation tasks, we find that the reading task is significantly better at capturing cues than the diadochokinetic or sustained phonation tasks. In all, we have demonstrated that the data collection and inference can be fully automated, and the results show that speech-based assessment has promising practical application in PD. The techniques reported here are more widely applicable to other paralinguistic tasks in the clinical domain.
Fully automated assessment of the severity of Parkinson's disease from speech
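A hedged sketch of the regularized-regression setup described above: ridge regression with cross-validated mean absolute error over an openSMILE-sized feature matrix; the features and severity scores are synthetic placeholders.

```python
# Hedged sketch: ridge regression with cross-validated MAE on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.standard_normal((168, 1582))          # 168 subjects x 1582 openSMILE-style features
y = rng.uniform(0, 100, 168)                  # stand-in severity scores

pred = cross_val_predict(Ridge(alpha=10.0), X, y, cv=5)
print("MAE:", mean_absolute_error(y, pred))
```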
S0885230813001150
This article describes an evaluation of a POMDP-based spoken dialogue system (SDS), using crowdsourced calls with real users. The evaluation compares a “Hidden Information State” POMDP system which uses a hand-crafted compression of the belief space, with the same system instead using an automatically computed belief space compression. Automatically computed compressions are a way of introducing automation into the design process of statistical SDSs and promise a principled way of reducing the size of the very large belief spaces which often make POMDP approaches intractable. This is the first empirical comparison of manual and automatic approaches on a problem of realistic scale (restaurant, pub and coffee shop domain) with real users. The evaluation took 2193 calls from 85 users. After filtering for minimal user participation the two systems were compared on more than 1000 calls.
Real user evaluation of a POMDP spoken dialogue system using automatic belief compression
S0885230813001162
Spoken content retrieval will be very important for retrieving and browsing multimedia content over the Internet, and spoken term detection (STD) is one of the key technologies for spoken content retrieval. In this paper, we show that acoustic feature similarity between spoken segments, used with pseudo-relevance feedback and graph-based re-ranking, can improve the performance of STD. This is based on the concept that spoken segments similar in acoustic feature vector sequences to those with higher/lower relevance scores should have higher/lower scores, while graph-based re-ranking further uses a graph to consider the similarity structure among all the segments retrieved in the first pass. These approaches are formulated on both word and subword lattices, and a complete framework for using them in open-vocabulary retrieval of spoken content is presented. Significant improvements for these approaches with both in-vocabulary and out-of-vocabulary queries were observed in preliminary experiments.
Improved open-vocabulary spoken content retrieval with word and subword lattices using acoustic feature similarity
S0885230813001174
In this paper, we describe several approaches to language-independent spoken term detection and compare their performance on a common task, namely “Spoken Web Search”. The goal of this part of the MediaEval initiative is to perform low-resource language-independent audio search using audio as input. The data was taken from “spoken web” material collected over mobile phone connections by IBM India as well as from the LWAZI corpus of African languages. As part of the 2011 and 2012 MediaEval benchmark campaigns, a number of diverse systems were implemented by independent teams, and submitted to the “Spoken Web Search” task. This paper presents the 2011 and 2012 results, and compares the relative merits and weaknesses of approaches developed by participants, providing analysis and directions for future research, in order to improve voice access to spoken information in low resource settings.
Language independent search in MediaEval's Spoken Web Search task
S0885230813001186
Increasing amounts of informal spoken content are being collected, e.g. recordings of meetings, lectures and personal data sources. The amount of this content being captured and the difficulties of manually searching audio data mean that efficient automated search tools are of increasing importance if its full potential is to be realized. Much existing work on speech search has focused on retrieval of clearly defined document units in ad hoc search tasks. We investigate search of informal speech content using an extended version of the AMI meeting collection. A retrieval collection was constructed by augmenting the AMI corpus with a set of ad hoc search requests and manually identified relevant regions of the recorded meetings. Unlike standard ad hoc information retrieval, which focuses primarily on precision, we assume a recall-focused search scenario of a user seeking to retrieve a particular incident occurring within meetings relevant to the query. We explore the relationship between automatic speech recognition (ASR) accuracy, automated segmentation of the meeting into retrieval units and retrieval behaviour with respect to both precision and recall. Experimental retrieval results show that, while averaged retrieval effectiveness for automatically extracted segments is generally comparable in terms of precision between manual content transcripts and ASR transcripts with high recognition accuracy, segments with poor recognition quality become very hard to retrieve and may fall below the retrieval rank position to which a user is willing to search. These changes impact on system effectiveness for recall-focused search tasks. Varied ASR quality across the relevant and non-relevant data means that the rank of some well-recognized relevant segments is actually promoted for ASR transcripts compared to manual ones. This effect is not revealed by the averaged precision-based retrieval evaluation metrics typically used for evaluation of speech retrieval. However, such variations in the ranks of relevant segments can impact considerably on the experience of the user in terms of the order in which retrieved content is presented. Analysis of our results reveals that while relevant longer segments are generally more robust to ASR errors, and consequently retrieved at higher ranks, this is often at the expense of the user needing to engage in longer content playback to locate the relevant content in the audio recording. Our overall conclusion is that it is desirable to minimize the length of retrieval units containing relevant content while seeking to maintain high ranking of these items.
Exploring speech retrieval from meetings using the AMI corpus
S0885230814000114
This article investigates speech feature enhancement based on deep bidirectional recurrent neural networks. The Long Short-Term Memory (LSTM) architecture is used to exploit a self-learnt amount of temporal context in learning the correspondence between noisy, reverberant speech features and their undistorted counterparts. The resulting networks are applied to feature enhancement in the context of the 2013 2nd Computational Hearing in Multisource Environments (CHiME) Challenge track 2 task, which consists of the Wall Street Journal (WSJ-0) corpus distorted by highly non-stationary, convolutive noise. In extensive test runs, different feature front-ends, network training targets, and network topologies are evaluated in terms of frame-wise regression error and speech recognition performance. Furthermore, we consider gradually refined speech recognition back-ends from baseline ‘out-of-the-box’ clean models to discriminatively trained multi-condition models adapted to the enhanced features. As a result, deep bidirectional LSTM networks processing log Mel filterbank outputs deliver the best results with clean models, reaching down to 42% word error rate (WER) at signal-to-noise ratios ranging from −6 to 9dB (multi-condition CHiME Challenge baseline: 55% WER). Discriminative training of the back-end using LSTM enhanced features is shown to further decrease WER to 22%. To our knowledge, this is the best result reported to date for the 2nd CHiME Challenge WSJ-0 task.
Feature enhancement by deep LSTM networks for ASR in reverberant multisource environments
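The following is a minimal sketch, not the authors' implementation, of the kind of bidirectional LSTM regression front-end described in the abstract above: a deep BLSTM maps noisy/reverberant log-Mel frames to clean targets under a frame-wise mean-squared-error criterion. Layer sizes, the 40-dimensional feature assumption and the random stand-in data are illustrative choices.

import torch
import torch.nn as nn

class BLSTMEnhancer(nn.Module):
    def __init__(self, n_mel=40, hidden=256, layers=2):
        super().__init__()
        self.blstm = nn.LSTM(input_size=n_mel, hidden_size=hidden,
                             num_layers=layers, bidirectional=True,
                             batch_first=True)
        self.proj = nn.Linear(2 * hidden, n_mel)   # map to a clean log-Mel frame

    def forward(self, noisy):                      # noisy: (batch, time, n_mel)
        h, _ = self.blstm(noisy)
        return self.proj(h)                        # enhanced log-Mel features

model = BLSTMEnhancer()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                             # frame-wise regression error

# one illustrative training step on random stand-in data
noisy = torch.randn(8, 200, 40)                    # 8 utterances, 200 frames each
clean = torch.randn(8, 200, 40)
optimiser.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
optimiser.step()

In a real setup the enhanced features would then be passed to the recognition back-end, possibly after retraining or adapting its acoustic models on the enhanced data.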
S0885230814000126
We discuss the use of low-dimensional physical models of the voice source for speech coding and processing applications. A class of waveform-adaptive dynamic glottal models and parameter identification procedures are illustrated. The model and the identification procedures are assessed by addressing signal transformations on recorded speech, achievable by fitting the model to the data, and then acting on the physically oriented parameters of the voice source. The class of models proposed provides in principle a tool for both the estimation of glottal source signals, and the encoding of the speech signal for transformation purposes. The application of this model to time stretching and to fundamental frequency control (pitch shifting) is also illustrated. The experiments show that copy synthesis is perceptually very similar to the target, and that time stretching and “pitch extrapolation” effects can be obtained by simple control strategies.
Speaker adaptive voice source modeling with applications to speech coding and processing
S0885230814000138
We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker-specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotions and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of the individual methods. Furthermore, on the spontaneous data, ranking and standard classification are complementary, and we obtain a marked improvement when we combine the two classifiers by late-stage fusion.
Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech
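A common way to realise a ranking SVM is the pairwise-difference reduction, and the sketch below illustrates that general idea for one per-emotion ranker with each speaker treated as a separate query, as in the abstract above. The data, feature dimension and relevance labels are invented placeholders, and the reduction shown is a generic technique rather than the authors' exact training procedure.

import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

def pairwise_transform(X, relevance, speakers):
    """Build signed difference vectors within each speaker (query)."""
    diffs, labels = [], []
    for spk in np.unique(speakers):
        idx = np.where(speakers == spk)[0]
        for i, j in combinations(idx, 2):
            if relevance[i] == relevance[j]:
                continue                       # skip ties
            sign = 1.0 if relevance[i] > relevance[j] else -1.0
            diffs.append(sign * (X[i] - X[j]))
            labels.append(1)
            diffs.append(-sign * (X[i] - X[j]))
            labels.append(-1)
    return np.array(diffs), np.array(labels)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))                  # 60 utterances, 20 acoustic features
relevance = rng.integers(0, 3, size=60)        # degree to which one emotion is expressed
speakers = rng.integers(0, 5, size=60)         # 5 speakers = 5 queries

Xp, yp = pairwise_transform(X, relevance, speakers)
ranker = LinearSVC(C=1.0).fit(Xp, yp)
scores = X @ ranker.coef_.ravel()              # ranking score of this emotion per utterance
# training one such ranker per emotion and taking the argmax over their scores
# gives a simple multi-class prediction rule in the spirit of the paper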
S088523081400014X
Pathological speech usually refers to the condition of speech distortion resulting from atypicalities in voice and/or in the articulatory mechanisms owing to disease, illness or other physical or biological insult to the production system. Although automatic evaluation of speech intelligibility and quality could be valuable in these scenarios to assist experts in diagnosis and treatment design, the many sources and types of variability often make it a very challenging computational processing problem. In this work we propose novel sentence-level features to capture abnormal variation in the prosodic, voice quality and pronunciation aspects of pathological speech. In addition, we propose a post-classification posterior smoothing scheme which refines the posterior of a test sample based on the posteriors of other test samples. Finally, we perform feature-level fusion and subsystem decision fusion to arrive at a final intelligibility decision. Performance is evaluated on two pathological speech datasets, the NKI CCRT Speech Corpus (advanced head and neck cancer) and the TORGO database (cerebral palsy or amyotrophic lateral sclerosis), using classification accuracy with no overlap of subjects’ data between the training and test partitions. Results show that the feature sets of each of the voice quality subsystem, prosodic subsystem, and pronunciation subsystem offer significant discriminating power for binary intelligibility classification. We observe that the proposed posterior smoothing in the acoustic space can further reduce classification errors. The smoothed posterior score fusion of subsystems shows the best classification performance (73.5% unweighted and 72.8% weighted average recall over the binary classes).
Automatic intelligibility classification of sentence-level pathological speech
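The sketch below illustrates the general idea of post-classification posterior smoothing mentioned in the abstract above: the posterior of each test sample is blended with the posteriors of its nearest neighbours in an acoustic feature space. The neighbourhood size, blending weight and stand-in data are illustrative assumptions, not the paper's exact formulation.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def smooth_posteriors(acoustic_feats, posteriors, k=5, alpha=0.5):
    """Blend each sample's posterior with the mean posterior of its k neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(acoustic_feats)
    _, idx = nn.kneighbors(acoustic_feats)      # idx[:, 0] is the sample itself
    neighbour_mean = posteriors[idx[:, 1:]].mean(axis=1)
    smoothed = alpha * posteriors + (1.0 - alpha) * neighbour_mean
    return smoothed / smoothed.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
feats = rng.normal(size=(30, 12))               # sentence-level acoustic features
post = rng.dirichlet([1.0, 1.0], size=30)       # binary intelligibility posteriors
print(smooth_posteriors(feats, post)[:3])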
S0885230814000151
Traditional dialogue systems use a fixed silence threshold to detect the end of users’ turns. Such a simplistic model can result in system behaviour that is both interruptive and unresponsive, which in turn affects user experience. Various studies have observed that human interlocutors take cues from speaker behaviour, such as prosody, syntax, and gestures, to coordinate smooth exchange of speaking turns. However, little effort has been made towards implementing these models in dialogue systems and verifying how well they model the turn-taking behaviour in human–computer interactions. We present a data-driven approach to building models for online detection of suitable feedback response locations in the user's speech. We first collected human–computer interaction data using a spoken dialogue system that can perform the Map Task with users (albeit using a trick). On this data, we trained various models that use automatically extractable prosodic, contextual and lexico-syntactic features for detecting response locations. Next, we implemented a trained model in the same dialogue system and evaluated it in interactions with users. The subjective and objective measures from the user evaluation confirm that a model trained on speaker behavioural cues offers both smoother turn-transitions and more responsive system behaviour.
Data-driven models for timing feedback responses in a Map Task dialogue system
S0885230814000163
This paper describes a new kind of language model based on possibility theory. The purpose of these new models is to better use the data available on the Web for language modeling. The models aim to integrate information about impossible word sequences. We address the two main problems of using this kind of model: how to estimate the measures for word sequences and how to integrate this kind of model into the ASR system. We propose a word-sequence possibilistic measure and a practical estimation method based on word-sequence statistics, which is particularly suited to estimation from Web data. We develop several strategies and formulations for using these models in a classical automatic speech recognition engine, which relies on a probabilistic modeling of the speech recognition process. This work is evaluated on two typical usage scenarios: broadcast news transcription with very large training sets and transcription of medical videos, in a specialized domain, with only very limited training data. The results show that the possibilistic models provide a significantly lower word error rate on the specialized domain task, where classical n-gram models fail due to the lack of training material. For broadcast news, the probabilistic models remain better than the possibilistic ones. However, a log-linear combination of the two kinds of models outperforms all the models used individually, which indicates that possibilistic models bring information that is not modeled by probabilistic ones.
Web-based possibilistic language models for automatic speech recognition
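As a toy illustration of the ideas in the abstract above, and explicitly not the paper's measure, the sketch below assigns a word sequence a possibility score equal to the minimum over its bigrams of a capped, count-based membership value, and then combines that score log-linearly with a probabilistic LM score for rescoring. The corpus, threshold and weights are invented placeholders.

import math
from collections import Counter

corpus = "the patient shows signs of improvement . the patient is stable .".split()
bigram_counts = Counter(zip(corpus, corpus[1:]))

def possibility(sentence, theta=2):
    """Pi(W) = min over bigrams of min(1, count/theta); unseen bigrams give 0."""
    words = sentence.split()
    return min(min(1.0, bigram_counts[bg] / theta)
               for bg in zip(words, words[1:]))

def combined_score(logprob_lm, sentence, lam=0.7, eps=1e-6):
    """Log-linear combination of probabilistic and possibilistic evidence."""
    return lam * logprob_lm + (1 - lam) * math.log(possibility(sentence) + eps)

print(possibility("the patient is stable"))          # 0.5 with this toy corpus
print(combined_score(-12.3, "the patient is stable"))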
S0885230814000175
This paper presents a simplified and supervised i-vector modeling approach with applications to robust and efficient language identification and speaker verification. First, by concatenating the label vector and the linear regression matrix at the end of the mean supervector and the i-vector factor loading matrix, respectively, the traditional i-vectors are extended to label-regularized supervised i-vectors. These supervised i-vectors are optimized to not only reconstruct the mean supervectors well but also minimize the mean square error between the original and the reconstructed label vectors, making the supervised i-vectors more discriminative in terms of the label information. Second, factor analysis (FA) is performed on the pre-normalized centered GMM first order statistics supervector to ensure that each Gaussian component's statistics sub-vector is treated equally in the FA, which reduces the computational cost by a factor of 25 in the simplified i-vector framework. Third, since the entire matrix inversion term in the simplified i-vector extraction depends only on a single variable (the total frame number), we build a global table of the resulting matrices against the frame numbers' log values. Using this lookup table, each utterance's simplified i-vector extraction is further sped up by a factor of 4 and suffers only a small quantization error. Finally, the simplified version of the supervised i-vector modeling is proposed to enhance both robustness and efficiency. The proposed methods are evaluated on the DARPA RATS dev2 task, the NIST LRE 2007 general task and the NIST SRE 2010 female condition 5 task for noisy channel language identification, clean channel language identification and clean channel speaker verification, respectively. For language identification on the DARPA RATS task, the simplified supervised i-vector modeling achieved 2%, 16%, and 7% relative equal error rate (EER) reductions on three different feature sets, and a speed-up by a factor of more than 100 over the baseline i-vector method for the 120s task. Similar results were observed on the NIST LRE 2007 30s task with 7% relative average cost reduction. Results also show that the use of Gammatone frequency cepstral coefficients, Mel-frequency cepstral coefficients and spectro-temporal Gabor features in conjunction with shifted-delta-cepstral features improves the overall language identification performance significantly. For speaker verification, the proposed supervised i-vector approach outperforms the i-vector baseline by a relative 12% and 7% in terms of EER and normalized old minDCF values, respectively.
Simplified supervised i-vector modeling with application to robust and efficient language identification and speaker verification
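The sketch below illustrates the lookup-table idea from the abstract above: when the matrix to be inverted during i-vector extraction depends only on the total frame count n, the inverses can be precomputed on a grid of log(n) values and reused at extraction time, at the cost of a small quantisation error. The dimensions, the assumption of pre-normalised statistics (identity covariance) and the grid resolution are illustrative, not the paper's exact configuration.

import numpy as np

rng = np.random.default_rng(2)
R, D = 50, 400                                    # i-vector dim, supervector dim
T = rng.normal(scale=0.1, size=(D, R))            # total variability matrix (stand-in)
TtT = T.T @ T                                     # assumes pre-normalised statistics

# precompute inverses of (I + n * T'T) over a grid of log-frame-count values
log_grid = np.linspace(np.log(10), np.log(10000), 200)
table = [np.linalg.inv(np.eye(R) + np.exp(g) * TtT) for g in log_grid]

def ivector(first_order_stats, n_frames):
    """Extract a simplified i-vector using the nearest precomputed inverse."""
    k = int(np.argmin(np.abs(log_grid - np.log(n_frames))))
    return table[k] @ (T.T @ first_order_stats)

w = ivector(rng.normal(size=D), n_frames=750)
print(w.shape)                                    # (50,)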
S0885230814000187
This paper outlines a comprehensive system for automatically generating a phonetic transcription of a given Arabic text which closely matches the pronunciation of the speakers. The presented system is based on a set of (language-dependent) pronunciation rules that convert fully diacriticised Arabic text into the actual sounds, along with a lexicon for exceptional words. This is a two-phase process: one-to-one grapheme-to-phoneme conversion and then phoneme-to-allophone conversion using a set of “phonological rules”. Phonological rules operate on the phonemes and convert them to the actual sounds, considering the neighbouring phones or the containing syllable or word. This system is developed for the purpose of delivering a robust Automatic Arabic Speech Recognition (AASR) system which is able to handle speech variation resulting from the mismatch between the text and the pronunciation. We anticipate that it could also be used for producing natural-sounding speech from an Arabic text-to-speech (ATTS) system as well, but we have not extensively tested it in this application.
Generation of a phonetic transcription for modern standard Arabic: A knowledge-based model
S0885230814000199
Voice conversion functions based on Gaussian mixture models and parametric speech signal representations are opaque in the sense that it is not straightforward to interpret the physical meaning of the conversion parameters. Following the line of recent works based on the frequency warping plus amplitude scaling paradigm, in this article we show that voice conversion functions can be designed according to physically meaningful constraints in such a manner that they become highly informative. The resulting voice conversion method can be used to visualize the differences between source and target voices or styles in terms of formant location in frequency, spectral tilt and amplitude in a number of spectral bands.
Interpretable parametric voice conversion functions based on Gaussian mixture models and constrained transformations
S0885230814000205
Accurate phonetic transcription of proper nouns can be an important resource for commercial applications that embed speech technologies, such as audio indexing and vocal phone directory lookup. However, an accurate phonetic transcription is more difficult to obtain for proper nouns than for regular words. Indeed, the phonetic transcription of a proper noun depends on both the origin of the speaker pronouncing it and the origin of the proper noun itself. This work proposes a method that allows the extraction of phonetic transcriptions of proper nouns using actual utterances of those proper nouns, thus yielding transcriptions based on practical use instead of mere pronunciation rules. The proposed method consists of a process that first extracts phonetic transcriptions, and then iteratively filters them. In order to initialize the process, an alignment dictionary is used to detect word boundaries. A rule-based grapheme-to-phoneme generator (LIA_PHON), a knowledge-based approach (JSM), and a Statistical Machine Translation based system were evaluated for this alignment. As a result, compared to our reference dictionary (BDLEX supplemented by LIA_PHON for missing words) on the ESTER 1 French broadcast news corpus, we were able to significantly decrease the Word Error Rate (WER) on segments of speech with proper nouns, without negatively affecting the WER on the rest of the corpus.
Improving recognition of proper nouns in ASR through generating and filtering phonetic transcriptions
S0885230814000217
This paper investigates the temporal excitation patterns of creaky voice. Creaky voice is a voice quality frequently used as a phrase-boundary marker, but also as a means of portraying attitude, affective states and even social status. Consequently, the automatic detection and modelling of creaky voice may have implications for speech technology applications. The acoustic characteristics of creaky voice are, however, rather distinct from modal phonation. Further, several acoustic patterns can bring about the perception of creaky voice, thereby complicating the strategies used for its automatic detection, analysis and modelling. The present study is carried out on a variety of languages and speakers, and on both read and conversational data, and involves a mutual information-based assessment of the various acoustic features proposed in the literature for detecting creaky voice. These features are then exploited in classification experiments where we achieve an appreciable improvement in detection accuracy compared to the state of the art. Both experiments clearly highlight the presence of several creaky patterns. A subsequent qualitative and quantitative analysis of the identified patterns is provided, which reveals considerable speaker-dependent variability in the usage of these creaky patterns. We also investigate how creaky voice detection systems perform across creaky patterns.
Data-driven detection and analysis of the patterns of creaky voice
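A mutual-information-based assessment of candidate features, as described in the abstract above, can be sketched as follows with scikit-learn. The feature names and stand-in data are placeholders, not the exact feature set studied in the paper.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)
feature_names = ["H2-H1", "f0_variability", "power_peak", "residual_peak_prominence"]
X = rng.normal(size=(500, len(feature_names)))    # frame-level feature vectors
y = rng.integers(0, 2, size=500)                  # 1 = creaky frame, 0 = non-creaky

mi = mutual_info_classif(X, y, random_state=0)
for name, score in sorted(zip(feature_names, mi), key=lambda p: -p[1]):
    print(f"{name}: {score:.4f}")                 # features ranked by MI with the label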
S0885230814000229
The great majority of current voice technology applications rely on acoustic features, such as the widely used MFCC or LP parameters, which characterize the vocal tract response. Nonetheless, the major source of excitation, namely the glottal flow, is expected to convey useful complementary information. The glottal flow is the airflow passing through the vocal folds at the glottis. Unfortunately, glottal flow analysis from speech recordings requires specific and complex processing operations, which explains why it has generally been avoided. This paper gives a comprehensive overview of techniques for glottal source processing. Starting from analysis tools for pitch tracking, glottal closure instant detection, and glottal flow estimation and modeling, the paper discusses how these tools and techniques might be properly integrated into various voice technology applications.
Glottal source processing: From analysis to applications
S0885230814000230
In command-and-control applications, a vocal user interface (VUI) is useful for handsfree control of various devices, especially for people with a physical disability. The spoken utterances are usually restricted to a predefined list of phrases or to a restricted grammar, and the acoustic models work well for normal speech. While some state-of-the-art methods allow for user adaptation of the predefined acoustic models and lexicons, we pursue a fully adaptive VUI by learning both vocabulary and acoustics directly from interaction examples. A learning curve usually has a steep rise in the beginning and an asymptotic ceiling at the end. To limit tutoring time and to guarantee good performance in the long run, the word learning rate of the VUI should be fast and the learning curve should level off at a high accuracy. In order to deal with these performance indicators, we propose a multi-level VUI architecture and we investigate the effectiveness of alternative processing schemes. In the low-level layer, we explore the use of MIDA features (Mutual Information Discrimination Analysis) against conventional MFCC features. In the mid-level layer, we enhance the acoustic representation by means of phone posteriorgrams and clustering procedures. In the high-level layer, we use the NMF (Non-negative Matrix Factorization) procedure which has been demonstrated to be an effective approach for word learning. We evaluate and discuss the performance and the feasibility of our approach in a realistic experimental setting of the VUI-user learning context.
Fast vocabulary acquisition in an NMF-based self-learning vocal user interface
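The sketch below illustrates only the high-level NMF layer mentioned in the abstract above: utterances are represented as non-negative histogram-like vectors (for instance, phone-posteriorgram co-occurrence counts), and NMF factorises them into recurring parts that can be associated with vocabulary items. The data, dimensions and initialisation are illustrative assumptions, not the paper's pipeline.

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
n_utts, n_bins, n_words = 40, 300, 8
V = rng.poisson(lam=1.0, size=(n_utts, n_bins)).astype(float)  # utterance histograms

model = NMF(n_components=n_words, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)       # utterance-by-word activations
H = model.components_            # word-by-bin "acoustic" patterns

# a command could then be recognised by checking which learnt components are
# strongly activated in a new utterance's histogram
activations = model.transform(V[:1])
print(activations.round(2))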
S0885230814000242
A new approach to intonation stylization that enables the extraction of an intonation representation from prosodically unlabeled data is introduced. This approach yields global and local intonation contour classes arising from a contour-based, parametric and superpositional intonation stylization. Based on findings about the linguistic interpretation of the contour classes derived from corpus statistics and perception experiments, we created simple prediction models for the partial generation of intonation contours from discourse structure, defined by discourse segment boundaries and the information status of nouns within these segments. The predicted intonation contours were evaluated by human judgments of adequacy, which yielded a high level of agreement.
Linking bottom-up intonation stylization to discourse structure
S0885230814000254
In this paper, we propose a new front-end for Acoustic Event Classification (AEC) tasks. First, we study the spectral characteristics of different acoustic events in comparison with the structure of speech spectra. Second, from the findings of this study, we propose a new parameterization for AEC, which is an extension of the conventional Mel-Frequency Cepstral Coefficients (MFCC) and is based on high-pass filtering of the acoustic event signal. The proposed front-end has been tested in clean and noisy conditions and compared to conventional MFCC in an AEC task. Results support the conclusion that high-pass filtering of the audio signal is, in general terms, beneficial for the system, showing that the removal of frequencies below 100–275Hz in the feature extraction process in clean conditions, and below 400–500Hz in noisy conditions, significantly improves the performance of the system with respect to the baseline.
Feature extraction based on the high-pass filtering of audio signals for Acoustic Event Classification
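In practice, discarding the lowest frequencies in an MFCC-style front-end, as described above, amounts to raising the lower edge of the Mel filterbank. The sketch below shows that modification with librosa on a synthetic stand-in signal; the 250 Hz cut-off is one illustrative value within the ranges reported in the abstract.

import numpy as np
import librosa

sr = 16000
y = np.random.default_rng(8).normal(size=2 * sr).astype(np.float32)  # stand-in recording

mfcc_baseline = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_mels=40,
                                     fmin=0.0)       # conventional MFCC
mfcc_highpass = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_mels=40,
                                     fmin=250.0)     # high-pass-filtered variant

print(mfcc_baseline.shape, mfcc_highpass.shape)      # (13, n_frames) each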
S0885230814000266
In this article we investigate which representations of acoustics and word usage are most suitable for predicting dimensions of affect—arousal, valence, power and expectancy—in spontaneous interactions. Our experiments are based on the AVEC 2012 challenge dataset. For lexical representations, we compare corpus-independent features based on psychological word norms of emotional dimensions, as well as corpus-dependent representations. We find that a corpus-dependent bag-of-words approach with mutual information between word and emotion dimensions is by far the best representation. For the analysis of acoustics, we focus on the question of granularity. We confirm on our corpus that utterance-level features are more predictive than word-level features. Further, we study more detailed representations in which the utterance is divided into regions of interest (ROI), each with a separate representation. We introduce two ROI representations, which significantly outperform less informed approaches. In addition we show that acoustic models of emotion can be improved considerably by taking into account annotator agreement and training the model on a smaller but more reliable dataset. Finally we discuss the potential for improving prediction by combining the lexical and acoustic modalities. Simple fusion methods do not lead to consistent improvements over lexical classifiers alone but do improve over acoustic models.
Acoustic and lexical representations for affect prediction in spontaneous conversations
S0885230814000278
This paper presents a new approach to speech enhancement from single-channel measurements involving both noise and channel distortion (i.e., convolutional noise), and demonstrates its applications for robust speech recognition and for improving noisy speech quality. The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. The approach adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise for speech estimation. Third, we present an iterative algorithm which updates the noise and channel estimates of the corpus data model. In experiments using speech recognition as a test case with the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In another comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to those of the other methods for noisy speech enhancement.
An iterative longest matching segment approach to speech enhancement with additive noise and channel distortion
S088523081400028X
Natural languages are known for their expressive richness. Many sentences can be used to represent the same underlying meaning. Modelling only the observed surface word sequence can result in poor context coverage and generalization, for example, when using n-gram language models (LMs). This paper proposes a novel form of language model, the paraphrastic LM, that addresses these issues. A phrase-level paraphrase model statistically learned from standard text data with no semantic annotation is used to generate multiple paraphrase variants. LM probabilities are then estimated by maximizing their marginal probability. Multi-level language models estimated at both the word level and the phrase level are combined. An efficient weighted finite state transducer (WFST) based paraphrase generation approach is also presented. Significant error rate reductions of 0.5–0.6% absolute were obtained over the baseline n-gram LMs on two state-of-the-art recognition tasks for English conversational telephone speech and Mandarin Chinese broadcast speech using a paraphrastic multi-level LM modelling both word and phrase sequences. When this model is further combined with word- and phrase-level feed-forward neural network LMs, significant error rate reductions of 0.9% absolute (9% relative) and 0.5% absolute (5% relative) were obtained over the baseline n-gram and neural network LMs, respectively.
Paraphrastic language models
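The core idea in the abstract above, estimating n-gram statistics not only from the observed sentence but also from automatically generated paraphrase variants weighted by their probabilities, can be sketched as follows. The tiny paraphrase table and single-substitution variant generation are illustrative simplifications, not the paper's WFST-based machinery.

from collections import defaultdict

paraphrase_table = {                      # phrase -> [(variant, probability), ...]
    "a number of": [("several", 0.6), ("a number of", 0.4)],
}

def variants(sentence):
    """Yield (variant sentence, weight); here at most one phrase substitution."""
    yield sentence, 1.0
    for phrase, alts in paraphrase_table.items():
        if phrase in sentence:
            for alt, p in alts:
                if alt != phrase:
                    yield sentence.replace(phrase, alt), p

def add_weighted_bigram_counts(sentence, counts):
    vs = list(variants(sentence))
    z = sum(w for _, w in vs)             # normalise over the variant set
    for sent, w in vs:
        toks = sent.split()
        for bg in zip(toks, toks[1:]):
            counts[bg] += w / z           # fractional counts over paraphrases

counts = defaultdict(float)
add_weighted_bigram_counts("we ran a number of experiments", counts)
print(sorted(counts.items(), key=lambda kv: -kv[1])[:5])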
S0885230814000369
South African English is currently considered an under-resourced variety of English. Extensive speech resources are, however, available for North American (US) English. In this paper we consider the use of these US resources in the development of a South African large vocabulary speech recognition system. Specifically we consider two research questions. Firstly, we determine the performance penalties that are incurred when using US instead of South African language models, pronunciation dictionaries and acoustic models. Secondly, we determine whether US acoustic and language modelling data can be used in addition to the much more limited South African resources to improve speech recognition performance. In the first case we find that using a US pronunciation dictionary or a US language model in a South African system results in fairly small penalties. However, a substantial penalty is incurred when using a US acoustic model. In the second investigation we find that small but consistent improvements over a baseline South African system can be obtained by the additional use of US acoustic data. Larger improvements are obtained when complementing the South African language modelling data with US and/or UK material. We conclude that, when developing resources for an under-resourced variety of English, the compilation of acoustic data should be prioritised, language modelling data has a weaker effect on performance, and the pronunciation dictionary has the smallest effect.
Capitalising on North American speech resources for the development of a South African English large vocabulary speech recognition system
S0885230814000382
This paper proposes an efficient speech data selection technique that can identify those data that will be well recognized. Conventional confidence measure techniques can also identify well-recognized speech data. However, those techniques require a lot of computation time for speech recognition processing to estimate confidence scores. Speech data with low confidence should not go through the time-consuming recognition process since they will yield erroneous spoken documents that will eventually be rejected. The proposed technique can select the speech data that will be acceptable for speech recognition applications. It rapidly selects speech data with high prior confidence based on acoustic likelihood values and using only speech and monophone models. Experiments show that the proposed confidence estimation technique is over 50 times faster than the conventional posterior confidence measure while providing equivalent data selection performance for speech recognition and spoken document retrieval.
Efficient data selection for speech recognition based on prior confidence estimation using speech and monophone models
S088523081400059X
This paper regards social question-and-answer (Q&A) collections such as Yahoo! Answers as knowledge repositories and investigates techniques to mine knowledge from them to improve sentence-based complex question answering (QA) systems. Specifically, we present a question-type-specific method (QTSM) that extracts question-type-dependent cue expressions from social Q&A pairs in which the question types are the same as the submitted questions. We compare our approach with the question-specific and monolingual translation-based methods presented in previous works. The question-specific method (QSM) extracts question-dependent answer words from social Q&A pairs in which the questions resemble the submitted question. The monolingual translation-based method (MTM) learns word-to-word translation probabilities from all of the social Q&A pairs without considering the question or its type. Experiments on the extension of the NTCIR 2008 Chinese test data set demonstrate that our models that exploit social Q&A collections are significantly more effective than baseline methods such as LexRank. The performance ranking of these methods is QTSM>{QSM, MTM}. The largest F3 improvements in our proposed QTSM over QSM and MTM reach 6.0% and 5.8%, respectively.
Leveraging social Q&A collections for improving complex question answering
S0885230814000606
In this paper, we present unsupervised language model (LM) adaptation approaches using latent Dirichlet allocation (LDA) and latent semantic marginals (LSM). The LSM is the unigram probability distribution over words that is calculated using LDA-adapted unigram models. The LDA model is used to extract topic information from a training corpus in an unsupervised manner. The LDA model yields a document–topic matrix that describes the number of words assigned to topics for the documents. A hard-clustering method is applied to the document–topic matrix of the LDA model to form topics. An adapted model is created by using a weighted combination of the n-gram topic models. The stand-alone adapted model outperforms the background model. The interpolation of the background model and the adapted model gives a further improvement. We modify the above models using the LSM. The LSM is used to form a new adapted model by means of the minimum discriminant information (MDI) adaptation approach called unigram scaling, which minimizes the distance between the new adapted model and the other model. The unigram scaling of the adapted model using the LSM yields better results than a conventional unigram scaling approach. The unigram scaling of the interpolation of the background and the adapted model using the LSM outperforms the background model, the unigram scaling of the background model, the unigram scaling of the adapted model, and the interpolation of the background and adapted models. We perform experiments using the '87–89 Wall Street Journal (WSJ) corpus and a multi-pass continuous speech recognition (CSR) system. In the first pass, we use the background n-gram language model for lattice generation, and we then apply the LM adaptation approaches for lattice rescoring in the second pass.
Unsupervised language model adaptation using LDA-based mixture models and latent semantic marginals
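For reference, one standard way of writing the unigram-scaling (MDI) adaptation referred to above is given below; this is a generic formulation with a tunable exponent β, not a verbatim reproduction of the paper's equations.

\[
P_{\mathrm{adapted}}(w \mid h)
  = \frac{\alpha(w)\, P_{\mathrm{background}}(w \mid h)}
         {\sum_{w'} \alpha(w')\, P_{\mathrm{background}}(w' \mid h)},
\qquad
\alpha(w) = \left( \frac{P_{\mathrm{LSM}}(w)}{P_{\mathrm{background}}(w)} \right)^{\beta},
\quad 0 < \beta \le 1 .
\]

Here the scaling factor α(w) boosts or attenuates each word according to how its LSM (adapted unigram) probability compares with the background unigram probability, and the denominator renormalises the distribution for each history h.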
S0885230814000618
Probabilistic approaches are now widespread in most natural language processing applications, and the selection of a particular approach usually depends on the task at hand. Targeting speech semantic interpretation in a multilingual context, this paper presents a comparison between the state-of-the-art methods used for machine translation and for speech understanding. This comparison justifies our proposal of a unified framework for both tasks based on a discriminative approach. We demonstrate that this framework can be used to perform joint translation-understanding decoding, which allows translation and semantic tagging scores of a sentence to be combined in the same process. A cascade of finite-state transducers is used to compose the translation and understanding hypothesis graphs (1-bests, word graphs or confusion networks). Not only is this proposal competitive with the state-of-the-art techniques, but its framework is also attractive in that it can be generalized to other components of human–machine vocal interfaces (e.g. the speech recognizer) so as to allow a richer transmission of information between them.
A unified framework for translation and understanding allowing discriminative joint decoding for multilingual speech semantic interpretation
S0885230814000722
Random Indexing based extractive text summarization has already been proposed in the literature. This paper examines the above technique in detail and proposes several improvements. The improvements concern both the formation of the index (word) vectors of the document and the construction of context vectors, using convolution instead of addition operations on the index vectors. Experiments have been conducted using both angular and linear distances as metrics for proximity. As a consequence, three improved versions of the algorithm, viz. RISUM, RISUM+ and MRISUM, were obtained. These algorithms have been applied to the DUC 2002 documents, and their comparative performance has been studied. Different ROUGE metrics have been used for performance evaluation. While RISUM and RISUM+ perform almost on par, MRISUM is found to significantly outperform both RISUM and RISUM+. MRISUM also outperforms the LSA+TRM based summarization approach. The study reveals that all three Random Indexing based techniques proposed in this study produce consistent results when linear distance is used for measuring proximity.
Random Indexing and Modified Random Indexing based approach for extractive text summarization
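The sketch below shows plain Random Indexing with two ways of building context vectors, addition of neighbouring index vectors (the classic formulation) and circular convolution (in the spirit of the modified variants above). The corpus, dimensionality and window size are illustrative, and the sentence-ranking step is only indicated in a comment.

import numpy as np

rng = np.random.default_rng(5)
DIM, NONZERO, WINDOW = 512, 6, 2

def index_vector():
    """Sparse ternary random index vector with a few +1/-1 entries."""
    v = np.zeros(DIM)
    pos = rng.choice(DIM, size=NONZERO, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

def circular_convolution(a, b):
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

sentence = "random indexing builds context vectors from index vectors".split()
index = {w: index_vector() for w in set(sentence)}

context_add = {w: np.zeros(DIM) for w in index}
context_conv = {w: np.zeros(DIM) for w in index}
for i, w in enumerate(sentence):
    for j in range(max(0, i - WINDOW), min(len(sentence), i + WINDOW + 1)):
        if i != j:
            context_add[w] += index[sentence[j]]                            # addition
            context_conv[w] += circular_convolution(index[w], index[sentence[j]])  # convolution

# sentences could then be scored for extraction by, e.g., cosine similarity
# between their summed context vectors and the document centroid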
S0885230814000734
Recently, re-ranking algorithms have been successfully applied to statistical machine translation (SMT) systems. Due to errors in the hypothesis alignment, varying word order between the source and target sentences, and the lack of sufficient resources such as parallel corpora, decoding may result in ungrammatical or non-fluent outputs. This paper proposes a re-ranking system based on swarm algorithms, which makes use of sophisticated non-syntactic features to re-rank the n-best translation candidates. We introduce a number of easily computed non-syntactic features to deal with SMT system errors, together with the quantum-behaved particle swarm optimization (QPSO) algorithm to adjust the feature weights. We have evaluated the proposed approach on two translation tasks in different language pairs (Persian→English and German→English) and genres (news and novel books). In comparison with PSO-, GA-, Perceptron- and averaged Perceptron-style re-ranking systems, the experimental study demonstrates the superiority of the proposed system in terms of translation quality on both translation tasks. In addition, the impact of the proposed features on translation quality has been analyzed, and the most beneficial ones have been identified. Finally, the impact of the n-best list size on the proposed system is investigated.
A swarm-inspired re-ranker system for statistical machine translation
S0885230814000746
After total laryngectomy, the placement of a tracheoesophageal (TE) prosthesis offers the possibility to recover a new voice. However, the quality of the resulting TE speech is known to be degraded. To assess a patient's voice, current approaches rely either on quality-of-life questionnaires or on a perceptual evaluation carried out by speech therapists. These two methods exhibit the disadvantage of being both subjective and time-consuming. In this paper, we propose a dedicated scale, called A4S, for the objective Automatic Acoustic Assessment of Alaryngeal Speech. For this purpose, we first identify the artefacts existing in TE speech. These are linked to the periodicity, regularity, high-frequency noise and gargling noise/creakiness of the signal, as well as to the speaking rate. Specific acoustic features are proposed for the characterization of each artefact. A statistical study shows that TE speakers have a significantly worse voice compared to the control group, except for the speaking rate. Based on these advances, the A4S scale is proposed. This scale is made of five normalized dimensions, related to the five identified artefacts. A given patient's phonation can then be represented by a pentagon in a radar chart, which allows a fast and intuitive visualization of the strengths and flaws of the voice. A4S can then be seen as a useful tool for speech therapists to design tailored exercises specific to the patient's voice. In addition, we show the applicability of A4S for the follow-up of patients, as well as to study the impact of the type of surgery (open neck, robot and flap reconstruction) used for total laryngectomy and of a pre-surgical radiotherapy on various aspects of the TE voice.
Tracheoesophageal speech: A dedicated objective acoustic assessment
S0885230814000758
This paper proposes a hybrid refinement scheme for more accurate localization of phonetic boundaries by combining three different post-processing techniques, including statistical correction, fusion, and predictive models. A statistical method based on state-level correction is proposed to improve the segmentation results. Effects of search ranges on the statistical correction process are studied and a state selection scheme is used to enhance the correction results. This paper also examines the effects of time resolution, i.e., stepsize, of acoustic models on the accuracy of segmentation. A multi-resolution fusion process is proposed to further refine the statistically corrected results. Finally, predictive models are designed to improve the segmentation accuracy by incorporating various acoustic features and searching around the preliminary boundary with a smaller stepsize. By applying the hybrid refinement scheme on a well-known corpus, significant improvements of segmentation results in terms of segmentation accuracy with different tolerances, mean absolute error (MAE), and root-mean-square error (RMSE) can be observed. Furthermore, a scenario of cross-corpora segmentation is examined in generating the segmentation results for a new corpus with a small set of labeled data. Experimental results show that the proposed refinement procedure can generate segmentation results comparable to those given by well-trained acoustic models obtained from the new corpus.
A hybrid refinement scheme for intra- and cross-corpora phonetic segmentation
S0885230814000783
In this paper, the production characteristics of laughter are analysed at call and bout levels. Data of natural laughter is examined using electroglottograph (EGG) and acoustic signals. Nonspeech-laugh and laughed-speech are analysed in comparison with normal speech using features derived from the EGG and acoustic signals. Analysis of the EGG signal is used to derive the average closed phase quotient in glottal cycles and the average instantaneous fundamental frequency (F0). Excitation source characteristics are analysed from the acoustic signal using a modified zero-frequency filtering (mZFF) method. Excitation impulse density and the strength of impulse-like excitation are extracted from the mZFF signal. Changes in the vocal tract system characteristics are examined in terms of the first two dominant frequencies derived using linear prediction (LP) analysis. Additional excitation information present in the acoustic signal is examined using a measure of the sharpness of peaks in the Hilbert envelope of the LP residual at the glottal closure instants. Parameters representing the degree of change and temporal changes in the production features are also derived to study the characteristics that discriminate laughter from normal speech. Changes with reference to normal speech are larger for nonspeech-laugh than for laughed-speech.
Analysis of production characteristics of laughter
S0885230814000795
Owing to the growing need to acquire medical data from clinical records, processing such documents is an important topic in natural language processing (NLP). However, for general NLP methods to work, proper, normalized input is required; otherwise the system is overwhelmed by the unusually high amount of noise generally characteristic of this kind of text. The different types of noise originate from non-standard language use: short fragments instead of proper sentences, usage of Latin words, many acronyms and very frequent misspellings. In this paper, a method is described for the automated correction of spelling errors in Hungarian clinical records. First, a word-based algorithm was implemented to generate a ranked list of correction candidates for word forms regarded as incorrect. Second, the problem of spelling correction was modelled as a translation task, where the source language is the erroneous text and the target language is the corrected one. A Statistical Machine Translation (SMT) decoder performed the task of error correction. Since no orthographically correct, proofread text from this domain is available, we could not use such a corpus for training the system. Instead, the word-based system was used to create the translation models. In addition, a 3-gram token-based language model was used to model lexical context. Due to the high number of abbreviations and acronyms in the texts, the behaviour of these abbreviated forms was further examined for both the context-unaware word-based and the SMT-decoder-based implementations. The results show that the SMT-based method outperforms the word-based ranking system in terms of first-candidate accuracy. However, the normalization of abbreviations should be handled as a separate task.
Context-aware correction of spelling errors in Hungarian medical documents
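A word-based first stage of the kind described above can be sketched as ranking lexicon entries by a combined channel (string-similarity) and unigram language-model score. The lexicon, counts and weights below are invented placeholders, and the SMT-decoder stage and abbreviation handling are not shown.

from difflib import SequenceMatcher
import math

lexicon_counts = {"beteg": 50, "betegség": 20, "vizsgálat": 30, "lelet": 10}
total = sum(lexicon_counts.values())

def candidates(word):
    """Rank lexicon entries as corrections for a (possibly misspelled) word."""
    scored = []
    for cand, count in lexicon_counts.items():
        sim = SequenceMatcher(None, word, cand).ratio()   # crude channel model
        lm = math.log(count / total)                      # unigram LM score
        scored.append((cand, 2.0 * sim + 0.5 * lm))       # illustrative weighting
    return sorted(scored, key=lambda x: -x[1])

print(candidates("vizsgalat"))    # "vizsgálat" should rank first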
S0885230814000801
Synchronous context-free grammars (SCFGs) can be learned from parallel texts that are annotated with target-side syntax, and can produce translations by building target-side syntactic trees from source strings. Ideally, producing syntactic trees would entail that the translation is grammatically well-formed, but in reality, this is often not the case. Focusing on translation into German, we discuss various ways in which string-to-tree translation models over- or undergeneralise. We show how these problems can be addressed by choosing a suitable parser and modifying its output, by introducing linguistic constraints that enforce morphological agreement and constrain subcategorisation, and by modelling the productive generation of German compounds.
A tree does not make a well-formed sentence: Improving syntactic string-to-tree statistical machine translation with more linguistic knowledge
S088523081400093X
In this paper, we present a survey on the application of recurrent neural networks to the task of statistical language modeling. Although it has been shown that these models obtain good performance on this task, often superior to other state-of-the-art techniques, they suffer from some important drawbacks, including a very long training time and limitations on the number of context words that can be taken into account in practice. Recent extensions to recurrent neural network models have been developed in an attempt to address these drawbacks. This paper gives an overview of the most important extensions. Each technique is described and its performance on statistical language modeling, as described in the existing literature, is discussed. Our structured overview makes it possible to detect the most promising techniques in the field of recurrent neural networks, applied to language modeling, but it also highlights the techniques for which further research is required.
A survey on the application of recurrent neural networks to statistical language modeling
S0885230814000953
It is acknowledged that Hidden Markov Models (HMMs) with Gaussian Mixture Models (GMMs) as the observation density functions achieve excellent digit recognition performance at high signal to noise ratios (SNRs). Moreover, many years of research have led to good techniques to reduce the impact of noise, distortion and mismatch between training and test conditions on the recognition accuracy. Nevertheless, we still await systems that are truly robust against these confounding factors. The present paper extends recent work on acoustic modeling based on Reservoir Computing (RC), a concept that has its roots in Machine Learning. A novel analysis of reservoirs as non-linear dynamical systems yields new insights, which are translated into a new reservoir design recipe that is extremely simple and highly comprehensible in terms of the dynamics of the acoustic features and the modeled acoustic units. By tuning the reservoir to these dynamics, one can create RC-based systems that not only compete well with conventional systems in clean conditions, but also degrade more gracefully in noisy conditions. Control experiments show that noise-robustness follows from the random fixation of the reservoir neurons, whereas tuning the reservoir dynamics increases the accuracy without compromising noise-robustness.
Robust continuous digit recognition using Reservoir Computing
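The sketch below shows a basic reservoir (echo state network) acoustic model: a fixed random recurrent layer whose spectral radius and leak rate shape the dynamics, with only a linear readout trained by ridge regression. It illustrates the RC concept in general, under invented dimensions and stand-in data, not the specific design recipe of the paper.

import numpy as np

rng = np.random.default_rng(6)
N_IN, N_RES, N_OUT = 39, 300, 11           # e.g. MFCCs in, digit/state targets out

W_in = rng.uniform(-0.1, 0.1, size=(N_RES, N_IN))
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

def run_reservoir(U, leak=0.3):
    """Collect leaky-integrated reservoir states for an input sequence U (T, N_IN)."""
    x = np.zeros(N_RES)
    states = []
    for u in U:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

U = rng.normal(size=(200, N_IN))            # stand-in feature sequence
Y = rng.normal(size=(200, N_OUT))           # stand-in targets (e.g. state posteriors)
X = run_reservoir(U)

ridge = 1e-2                                # ridge-regression readout (the only trained part)
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ Y)
print((X @ W_out).shape)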
S0885230814000965
Despite a research history of more than 20 years, English to Hindi machine translation often suffers badly from incorrect translations of noun compounds. The problems can be of various types, such as the absence of proper postpositions, inappropriate word order, and incorrect semantics. Different existing English to Hindi machine translation systems show this vulnerability, irrespective of the underlying technique. A potential solution to this problem lies in understanding the semantics of the noun compounds. The present paper proposes a scheme based on semantic relations to address this issue. The scheme works in three steps: identification of the noun compounds in a given text, determination of the semantic relationship(s) between them, and finally, selection of the right translation pattern. The scheme first provides translation patterns for different semantic relations for 2-word noun compounds. These patterns are used recursively to find the semantic relations and the translation patterns for 3-word and 4-word noun compounds. Frequency and probability based adjacency and dependency models are used for bracketing (grouping) the constituent words of 3-word and 4-word noun compounds into 2-word noun compounds. The semantic relations and the translation patterns generated for 2-word, 3-word and 4-word noun compounds are evaluated. The proposed scheme is compared with some well-known English to Hindi translators, viz. AnglaMT, Anuvadaksh, Bing, Google, and also with the Moses baseline system. The results obtained show a significant improvement over the Moses baseline system. The scheme also performs better than the other online MT systems in terms of recall and precision.
Translating noun compounds using semantic relations
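A frequency-based adjacency model for bracketing a 3-word noun compound, of the kind mentioned in the abstract above, can be sketched as follows: if (w1, w2) co-occurs more often than (w2, w3), prefer left bracketing [[w1 w2] w3], otherwise right bracketing [w1 [w2 w3]]. The counts would normally come from a large corpus; here they are illustrative.

bigram_counts = {
    ("computer", "science"): 1200,
    ("science", "department"): 300,
}

def bracket(w1, w2, w3, counts):
    left = counts.get((w1, w2), 0)
    right = counts.get((w2, w3), 0)
    if left >= right:
        return ((w1, w2), w3)        # [[w1 w2] w3]: treat "w1 w2" as a 2-word compound first
    return (w1, (w2, w3))            # [w1 [w2 w3]]

print(bracket("computer", "science", "department", bigram_counts))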
S0885230814000977
In this paper, we study methods to discover words and extract their pronunciations from audio data for non-written and under-resourced languages. We examine the potential and the challenges of pronunciation extraction from phoneme sequences through cross-lingual word-to-phoneme alignment. In our scenario, a human translator produces utterances in the (non-written) target language from prompts in a resource-rich source language. We add the resource-rich source language prompts to help the word discovery and pronunciation extraction process. By aligning the source language words to the target language phonemes, we segment the phoneme sequences into word-like chunks. The resulting chunks are interpreted as putative word pronunciations but are very prone to alignment and phoneme recognition errors. We therefore propose an alignment model, Model 3P, that is specifically designed for cross-lingual word-to-phoneme alignment. We present and compare two different methods (source word dependent and independent clustering) that extract word pronunciations from word-to-phoneme alignments. We show that both methods compensate for phoneme recognition and alignment errors. We also extract a parallel corpus consisting of 15 different translations in 10 languages from the Christian Bible to evaluate our alignment model and error recovery methods. For example, based on noisy target language phoneme sequences with 45.1% errors, we build a dictionary for an English Bible using a Spanish Bible translation, with a 4.5% OOV rate and with 64% of the extracted pronunciations containing no more than one wrong phoneme. Finally, we use the extracted pronunciations in an automatic speech recognition system for the target language and report promising word error rates, given that the pronunciation dictionary and language model are learned in a completely unsupervised manner and no written form of the target language is required for our approach.
Word segmentation and pronunciation extraction from phoneme sequences through cross-lingual word-to-phoneme alignment
S0885230814001016
This paper explores the use of linguistic information for the selection of data to train language models. We take as our starting point the state-of-the-art method for perplexity-based data selection and extend it to use word-level linguistic units (i.e. lemmas, named entity categories and part-of-speech tags) instead of surface forms. We then present two methods that combine the different types of linguistic knowledge as well as the surface forms: (1) naïve selection of the top-ranked sentences selected by each method; (2) linear interpolation of the datasets selected by the different methods. The paper presents detailed results and analysis for four languages with different levels of morphological complexity (English, Spanish, Czech and Chinese). The interpolation-based combination outperforms the purely statistical baseline in all the scenarios, resulting in language models with lower perplexity. In relative terms the improvements are similar regardless of the language, with perplexity reductions achieved in the range 7.72–13.02%. In absolute terms the reduction is higher for languages with a high type-token ratio (Chinese, 202.16) or rich morphology (Czech, 81.53) and lower for the remaining languages, Spanish (55.2) and English (34.43 on the English side of the same parallel dataset as for Czech and 61.90 on the same parallel dataset as for Spanish).
Linguistically-augmented perplexity-based data selection for language models
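Perplexity-based selection is commonly implemented as a cross-entropy difference between an in-domain and a general model, and the sketch below illustrates that idea over lemmatised text, in the spirit of the abstract above. Add-one unigram models and the tiny corpora are illustrative simplifications; real systems would use higher-order n-gram models.

import math
from collections import Counter

def unigram_model(sentences):
    counts = Counter(tok for s in sentences for tok in s.split())
    total, vocab = sum(counts.values()), len(counts) + 1
    return lambda tok: (counts[tok] + 1) / (total + vocab)   # add-one smoothing

def cross_entropy(sentence, model):
    toks = sentence.split()
    return -sum(math.log2(model(t)) for t in toks) / len(toks)

in_domain = ["the patient receive treatment", "the doctor examine the patient"]   # lemmatised
general = ["the weather be nice today", "the patient wait in the room",
           "stock price fall sharply"]

p_in, p_gen = unigram_model(in_domain), unigram_model(general)
# rank general-corpus sentences by H_in(s) - H_gen(s); lower means more in-domain
ranked = sorted(general, key=lambda s: cross_entropy(s, p_in) - cross_entropy(s, p_gen))
print(ranked[0])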
S0885230814001028
Statistical and rule-based methods are complementary approaches to machine translation (MT) that have different strengths and weaknesses. This complementarity has, over the last few years, resulted in the consolidation of a growing interest in hybrid systems that combine both data-driven and linguistic approaches. In this paper, we address the situation in which the amount of bilingual resources that is available for a particular language pair is not sufficiently large to train a competitive statistical MT system, but the cost and slow development cycles of rule-based MT systems cannot be afforded either. In this context, we formalise a new method that uses scarce parallel corpora to automatically infer a set of shallow-transfer rules to be integrated into a rule-based MT system, thus avoiding the need for human experts to handcraft these rules. Our work is based on the alignment template approach to phrase-based statistical MT, but the definition of the alignment template is extended to encompass different generalisation levels. It is also greatly inspired by the work of Sánchez-Martínez and Forcada (2009) in which alignment templates were also considered for shallow-transfer rule inference. However, our approach overcomes many relevant limitations of that work, principally those related to the inability to find the correct generalisation level for the alignment templates, and to select the subset of alignment templates that ensures an adequate segmentation of the input sentences by the rules eventually obtained. Unlike previous approaches in literature, our formalism does not require linguistic knowledge about the languages involved in the translation. Moreover, it is the first time that conflicts between rules are resolved by choosing the most appropriate ones according to a global minimisation function rather than proceeding in a pairwise greedy fashion. Experiments conducted using five different language pairs with the free/open-source rule-based MT platform Apertium show that translation quality significantly improves when compared to the method proposed by Sánchez-Martínez and Forcada (2009), and is close to that obtained using handcrafted rules. For some language pairs, our approach is even able to outperform them. Moreover, the resulting number of rules is considerably smaller, which eases human revision and maintenance.
A generalised alignment template formalism and its application to the inference of shallow-transfer machine translation rules from scarce bilingual corpora
S088523081400103X
Globalization has dramatically increased the need to translate information from one language to another. Frequently, such translation needs have to be satisfied under very tight time constraints. Machine translation (MT) techniques can constitute a solution to this overly complex problem. However, the documents to be translated in real scenarios are often limited to a specific domain, such as a particular type of medical or legal text. This situation seriously hinders the applicability of MT, since it is usually expensive to build a reliable translation system, no matter what technology is used, due to the linguistic resources that are required to build it, such as dictionaries, translation memories or parallel texts. In order to solve this problem, we propose the application of automatic post-editing in an online learning framework. Our proposed technique allows the human expert to translate in a specific domain by using a base translation system designed to work in a general domain, whose output is corrected (or adapted to the specific domain) by means of an automatic post-editing module. This automatic post-editing module learns to make its corrections from user feedback in real time by means of online learning techniques. We have validated our system using different translation technologies to implement the base translation system, as well as several texts involving different domains and languages. In most cases, our results show significant improvements in terms of BLEU (up to 16 points) with respect to the baseline systems. The proposed technique works effectively when the n-grams of the document to be translated present a certain rate of repetition, a situation which is common according to the document-internal repetition property.
Translating without in-domain corpus: Machine translation post-editing with online learning techniques
S0885230814001041
Languages such as English need to be morphologically analyzed in translation into morphologically rich languages such as Persian. Analyzing the output of English to Persian machine translation systems illustrates that Persian morphology poses many challenges, especially in verb conjugation. In this paper, we investigate three ways to deal with the morphology of the Persian verb in machine translation (MT): no morphology generation in statistical MT, rule-based morphology generation in rule-based MT, and hybrid, model-independent morphology generation. By model-independent we mean that it is not based on statistical or rule-based MT and could be applied to any English to Persian MT system as a post-processor. We select Google translator (translate.google.com) to show the performance of a statistical MT system without any morphology generation component for verb conjugation. Rule-based morphology generation is implemented as a part of a rule-based MT system. Finally, we enrich the rule-based approach with statistical methods and information to present a hybrid model. A set of linguistically motivated features is defined using both English and Persian linguistic knowledge obtained from a parallel corpus. We then build a model to predict six morphological features of the Persian verb using a decision tree classifier and generate an inflected verb form. In a real translation process, by applying our model to the output of Google translator and a rule-based MT system as a post-processor, we achieve an improvement of about 0.7% absolute BLEU score in the best case. When we are given the gold lemma in our reference experiments, we obtain an improvement of almost 2.8% absolute BLEU score over a baseline that uses the most common feature values, on a test set containing 15K sentences.
Using decision tree to hybrid morphology generation of Persian verb for English–Persian translation
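As an illustration of the classifier component mentioned in the abstract above, the sketch below trains a decision tree to predict a single Persian verb feature (person/number) from English-side cues; the paper's six-feature model would repeat this per feature. The feature names, values and training examples are invented for the example and do not come from the paper's corpus.

# Illustrative sketch only: predicting one Persian verb feature (e.g. person/number)
# from English-side linguistic cues with a decision tree.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

train = [
    ({"en_subject": "I",    "en_tense": "past",    "en_plural": False}, "1sg"),
    ({"en_subject": "we",   "en_tense": "past",    "en_plural": True},  "1pl"),
    ({"en_subject": "they", "en_tense": "present", "en_plural": True},  "3pl"),
    ({"en_subject": "he",   "en_tense": "present", "en_plural": False}, "3sg"),
]
X_dicts, y = zip(*train)

vec = DictVectorizer(sparse=False)      # one-hot encodes the categorical cues
X = vec.fit_transform(X_dicts)

clf = DecisionTreeClassifier().fit(X, y)
test = vec.transform([{"en_subject": "she", "en_tense": "past", "en_plural": False}])
print(clf.predict(test))                # predicted person/number class for the Persian verb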
S0885230814001053
This paper proposes a new set of speech features called Locally-Normalized Cepstral Coefficients (LNCC) that are based on Seneff's Generalized Synchrony Detector (GSD). First, an analysis of the GSD frequency response is provided to show that it generates spurious peaks at harmonics of the detected frequency. Then, the GSD frequency response is modeled as a quotient of two filters centered at the detected frequency. The numerator is a triangular band-pass filter centered around a particular frequency, similar to the ordinary Mel filters. The denominator term is a filter that responds maximally to frequency components on either side of the numerator filter. As a result, a local normalization is performed without the spurious peaks of the original GSD. Speaker verification results demonstrate that the proposed LNCC features are of low computational complexity and compensate for spectral tilt far more effectively than ordinary MFCC features. LNCC features do not require the computation and storage of a moving average of the feature values, and they provide relative reductions in Equal Error Rate (EER) as high as 47.7%, 34.0% and 25.8% when compared with MFCC, MFCC+CMN and MFCC+RASTA, respectively, in one case of variable spectral tilt.
A perceptually-motivated low-complexity instantaneous linear channel normalization technique applied to speaker verification
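The sketch below is a rough, editor-added illustration of the local-normalization idea summarized in the abstract above (a triangular band-pass numerator divided by flanking denominator filters so that spectral tilt cancels locally), not the published LNCC recipe; filter shapes, widths and constants are assumptions.

# Rough sketch of a locally-normalized filterbank: each band energy is divided by the
# energy captured by flanking filters on either side of the band.
import numpy as np
from scipy.fftpack import dct

def triangle(n_bins, center, half_width):
    """Triangular weighting function over FFT bins."""
    bins = np.arange(n_bins)
    return np.clip(1.0 - np.abs(bins - center) / half_width, 0.0, None)

def locally_normalized_features(power_spectrum, centers, half_width=8, n_ceps=13):
    n_bins = len(power_spectrum)
    log_ratios = []
    for c in centers:
        num = triangle(n_bins, c, half_width)                   # band-pass numerator
        den = (triangle(n_bins, c - half_width, half_width) +   # flanking denominator filters
               triangle(n_bins, c + half_width, half_width))
        e_num = np.dot(num, power_spectrum)
        e_den = np.dot(den, power_spectrum) + 1e-10
        log_ratios.append(np.log(e_num / e_den + 1e-10))
    return dct(np.array(log_ratios), norm="ortho")[:n_ceps]     # cepstral-like coefficients

spectrum = np.abs(np.fft.rfft(np.random.randn(400) * np.hamming(400))) ** 2
print(locally_normalized_features(spectrum, centers=np.linspace(10, 180, 24)))

Because each band is normalized only by its immediate spectral neighbourhood, a slowly varying tilt multiplies numerator and denominator almost equally and largely cancels, which is the effect the abstract contrasts with CMN- and RASTA-style compensation.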
S0885230814001065
Arabic is a highly inflected, morpho-syntactically complex language that differs in many respects from the languages that have been most heavily studied. It may thus require good pre-processing as it presents significant challenges for Natural Language Processing (NLP), specifically for Machine Translation (MT). This paper aims to examine how Statistical Machine Translation (SMT) can be improved using rule-based pre-processing and language analysis. We describe a hybrid translation approach coupling an Arabic–French statistical machine translation system using the Moses decoder with additional morphological rules that reduce the morphology of the source language (Arabic) to a level that makes it closer to that of the target language (French). Moreover, we introduce additional swapping rules for a structural matching between the source language and the target language. Two structural changes involving the positions of the pronouns and verbs in both the source and target languages have been attempted. The results show an improvement in the quality of translation and a gain in terms of BLEU score after introducing a pre-processing scheme for Arabic and applying these rules based on morphological variations and verb re-ordering (VS into SV constructions) in the source language (Arabic) according to their positions in the target language (French). Furthermore, a learning curve shows the improvement in terms of BLEU score under scarce- and large-resource conditions. The proposed approach is achieved without increasing the amount of training data or radically changing the algorithms that can affect the translation or training engines.
Hybrid Arabic–French machine translation using syntactic re-ordering and morphological pre-processing
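A toy illustration of the VS-into-SV re-ordering mentioned in the abstract above: if a clause starts with a verb followed by a subject noun phrase, the subject is moved before the verb to match French word order. The tag set, the approximation of the subject as the first determiner-plus-noun group and the transliterated example are simplifying assumptions, not the paper's actual rules.

# Toy VS -> SV re-ordering rule over a POS-tagged Arabic clause.
def reorder_vs_to_sv(tagged_tokens):
    """tagged_tokens: list of (token, pos) pairs; the subject is approximated as the
    first determiner/adjective sequence plus head noun following the clause-initial verb."""
    if tagged_tokens and tagged_tokens[0][1] == "V":
        i = 1
        while i < len(tagged_tokens) and tagged_tokens[i][1] in ("DET", "ADJ"):
            i += 1
        if i < len(tagged_tokens) and tagged_tokens[i][1] == "N":
            subj_end = i + 1
            subject = tagged_tokens[1:subj_end]
            return subject + tagged_tokens[:1] + tagged_tokens[subj_end:]
    return tagged_tokens

sent = [("kataba", "V"), ("al-waladu", "N"), ("risalatan", "N")]  # "wrote the-boy a-letter"
print(reorder_vs_to_sv(sent))
# -> [('al-waladu', 'N'), ('kataba', 'V'), ('risalatan', 'N')], i.e. subject before verb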
S0885230814001077
This survey on hybrid machine translation (MT) is motivated by the fact that hybridization techniques have become popular as they attempt to combine the best characteristics of highly advanced pure rule-based or corpus-based MT approaches. Existing research typically covers either simple or more complex architectures guided by either rule-based or corpus-based approaches, the goal being to combine the best properties of each type. This survey provides a detailed overview of the modification of the standard rule-based architecture to include statistical knowledge, the introduction of rules in corpus-based approaches, and the hybridization of approaches within this last single category. The principal aim here is to cover the leading research and progress in this field of MT and in several related applications.
Latest trends in hybrid machine translation and its applications
S088523081400120X
In recent years, the use of rhythm-based features in speech processing systems has received growing interest. This approach uses a wide array of rhythm metrics that have been developed to capture speech timing differences between and within languages. However, the reliability of rhythm metrics is being increasingly called into question. In this paper, we propose two modifications to this approach. First, we describe a model that is based on auditory cues that simulate the external, middle and inner parts of the ear. We evaluate this model by performing experiments to discriminate between native and non-native Arabic speech. Data are from the West Point Arabic Speech Corpus; testing is done on standard classifiers based on Gaussian Mixture Models (GMMs), Support Vector Machines (SVMs) and a hybrid GMM/SVM. Results show that the auditory-based model consistently outperforms a traditional rhythm-metric approach that includes both duration- and intensity-based metrics. Second, we propose a framework that combines the rhythm metrics and the auditory-based cues in the context of a Logistic Regression (LR) method that can optimize feature combination. Further results show that the proposed LR-based method improves performance over the standard classifiers in the discrimination between the native and non-native Arabic speech.
Native and non-native class discrimination using speech rhythm- and auditory-based cues
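A small sketch of the feature-combination step described in the abstract above: per-utterance rhythm metrics and auditory-based statistics are concatenated and fed to a logistic regression that learns how to weight them for native/non-native classification. The feature dimensions and the random data are placeholders, not measurements from the West Point corpus.

# Sketch of combining rhythm metrics with auditory-model statistics via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
rhythm_feats   = rng.normal(size=(40, 5))   # e.g. %V, deltaC, deltaV, rPVI, nPVI per utterance
auditory_feats = rng.normal(size=(40, 12))  # e.g. statistics of auditory-filter outputs
X = np.hstack([rhythm_feats, auditory_feats])
y = np.repeat([0, 1], 20)                   # 0 = native, 1 = non-native (toy labels)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict_proba(X[:3]))             # class posteriors for the first utterances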
S0885230814001211
This paper addresses the issue of language model adaptation for Recurrent Neural Network Language Models (rnnlms), which have recently emerged as a state-of-the-art method for language modeling in the area of speech recognition. Curriculum learning is an established machine learning approach that achieves better models by applying a curriculum, i.e., a well-planned ordering of the training data, during the learning process. Our contribution is to demonstrate the importance of curriculum learning methods for adapting rnnlms and to provide key insights on how they should be applied. rnnlms model language in a continuous space and can theoretically exploit word-dependency information over arbitrarily long distances. These characteristics give rnnlms the ability to learn patterns robustly with relatively little training data, implying that they are well suited for adaptation. In this paper, we focus on two related challenges facing language models: within-domain adaptation and limited-data within-domain adaptation. We propose three types of curricula that start with general data, i.e., characterizing the domain as a whole, and move towards specific data, i.e., characterizing the sub-domain targeted for adaptation. Effectively, these curricula result in a model that can be considered to represent an implicit interpolation between general data and sub-domain-specific data. We carry out an extensive set of experiments that investigates how adapting rnnlms using curriculum learning can improve their performance. Our first set of experiments addresses the within-domain adaptation challenge, i.e., creating models that are adapted to specific sub-domains that are part of a larger, heterogeneous domain of speech data. Under this challenge, all training data is available to the system at the time when the language model is trained. First, we demonstrate that curriculum learning can be used to create effective sub-domain-adapted rnnlms. Second, we show that a combination of sub-domain-adapted rnnlms can be used if the sub-domain of the target data is unknown at test time. Third, we explore the potential of applying combinations of sub-domain-adapted rnnlms to data for which sub-domain information is unknown at training time and must be inferred. Our second set of experiments addresses limited-data within-domain adaptation, i.e., adapting an existing model trained on a large set of data using a smaller amount of data from the target sub-domain. Under this challenge, data from the target sub-domain is not available at the time when the language model is trained, but rather becomes available little by little over time. We demonstrate that the implicit interpolation carried out by applying curriculum learning methods to rnnlms outperforms conventional interpolation and has the potential to make more of less adaptation data.
Recurrent neural network language model adaptation with curriculum learning
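A minimal sketch of a general-to-specific curriculum of the kind the abstract above describes: training documents are ordered so that material most similar to the target sub-domain is presented last. The tf-idf cosine similarity to a sub-domain centroid used here is an illustrative choice of ordering criterion, not necessarily the one used in the paper, and the toy documents are invented.

# Order training documents from general to sub-domain-specific before incremental LM training.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_docs  = ["parliamentary debate on the budget", "football match report",
               "weather forecast for tomorrow", "interview about the league results"]
target_docs = ["the striker scored twice in the second half"]   # sample from the target sub-domain

vec = TfidfVectorizer().fit(train_docs + target_docs)
centroid = np.asarray(vec.transform(target_docs).mean(axis=0))
sims = cosine_similarity(vec.transform(train_docs), centroid).ravel()

curriculum = [train_docs[i] for i in np.argsort(sims)]  # least similar first, most similar last
for doc in curriculum:
    pass  # feed `doc` to the RNN LM trainer in this order
print(curriculum)

Training incrementally on this ordering means the final parameter updates are dominated by sub-domain material, which is the implicit interpolation between general and sub-domain data that the abstract refers to.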
S0885230814001223
This article assesses the impact of translation on the acquisition of vocabulary for higher-intermediate level students of English for Speakers of Other Languages (ESOL). The use of translation is a relevant issue in research on Second Language (L2) acquisition, and different authors provide arguments on both sides of the issue. Language technologies bear on this issue both through the usability of automatic translation and through the automatic detection of lexical phenomena. This paper explores translation as it affects the acquisition of new words in context when students are given real texts retrieved from the Internet in a web-based interface. The students can instantly obtain the dictionary definition of a word and its translation to their native language. This platform can accurately measure how much each student relies on translation and compare this to their accuracy and fluency on a lexical retrieval task using words seen in the texts. Results show that abundant use of translation may increase accuracy in the short term, but in the longer term it negatively affects accuracy and possibly fluency. However, students who use translation in moderation seem to benefit the most in this lexical task. This paper provides a focused and precise way to measure the relevant variables for each individual student, and new findings that contribute to our understanding of the impact of the use of translation in language learning.
Measuring the impact of translation on the accuracy and fluency of vocabulary acquisition of English
S0885230814001247
This paper presents a data-driven approach to improving Spoken Dialog System (SDS) performance by automatically finding the most appropriate terms to be used in system prompts. The literature shows that speakers use one another's terms (entrain) when trying to create common ground during a spoken dialog. Those terms are commonly called “primes”, since they influence the interlocutors’ linguistic decision-making. This approach emulates human interaction, with a system built to propose primes to the user and accept the primes that the user proposes. These primes are chosen on the fly during the interaction, based on a set of features that indicate good candidate primes. A good candidate is one that we know is easily recognized by the speech recognizer, and is also a normal word choice given the context. The system is trained to follow the user's choice of prime if system performance is not negatively affected. When system performance is affected, the system proposes a new prime. In our previous work we have shown how we can identify the prime candidates and how the system can select primes using rules. In this paper we go further, presenting a data-driven method to perform the same task. Live tests with this method show that use of on-the-fly entrainment reduces the out-of-vocabulary rate and word error rate, and also increases the number of correctly transferred concepts.
From rule-based to data-driven lexical entrainment models in spoken dialog systems
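A heavily simplified sketch of the prime-selection idea in the abstract above: among interchangeable terms, prefer the one the recognizer handles well and that is a natural word choice in context. The scores, the weighting and the interpolation constant are invented; the paper's data-driven model uses a richer feature set and is trained rather than hand-weighted.

# Choose a "prime" among synonymous terms from ASR confidence and a language-model score.
def select_prime(candidates, asr_confidence, lm_logprob, alpha=0.6):
    """candidates: list of interchangeable terms; the two dicts map term -> score."""
    def score(term):
        return alpha * asr_confidence.get(term, 0.0) + (1 - alpha) * lm_logprob.get(term, -10.0)
    return max(candidates, key=score)

asr_conf = {"depart": 0.93, "leave": 0.78, "take off": 0.55}   # average past recognition confidence
lm_lp    = {"depart": -3.2, "leave": -2.1, "take off": -4.0}   # contextual LM log-probability
print(select_prime(["depart", "leave", "take off"], asr_conf, lm_lp))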
S0885230814001259
This paper examines the individual and combined impacts of various front-end approaches on the performance of deep neural network (DNN) based speech recognition systems in distant talking situations, where acoustic environmental distortion degrades the recognition performance. Training of a DNN-based acoustic model consists of generation of state alignments followed by learning the network parameters. This paper first shows that the network parameters are more sensitive to the speech quality than the alignments and thus this stage requires improvement. Then, various front-end robustness approaches to addressing this problem are categorised based on functionality. The degree to which each class of approaches impacts the performance of DNN-based acoustic models is examined experimentally. Based on the results, a front-end processing pipeline is proposed for efficiently combining different classes of approaches. Using this front-end, the combined effects of different classes of approaches are further evaluated in a single distant microphone-based meeting transcription task with both speaker independent (SI) and speaker adaptive training (SAT) set-ups. By combining multiple speech enhancement results, multiple types of features, and feature transformation, the front-end shows relative performance gains of 7.24% and 9.83% in the SI and SAT scenarios, respectively, over competitive DNN-based systems using log mel-filter bank features.
Environmentally robust ASR front-end for deep neural network acoustic models
S0885230814001260
In this paper, we present a framework for facilitation robots that regulate imbalanced engagement density in a four-participant conversation as the fourth participant, with proper procedures for obtaining initiatives. Four is a special number in multiparty conversations. In three-participant conversations, the minimum unit of multiparty conversation, social imbalance sometimes occurs, in which a participant is left behind in the current conversation. In such scenarios, a conversational robot has the potential to objectively observe and control situations as the fourth participant. Consequently, we present model procedures for obtaining conversational initiatives in incremental steps to harmonize such four-participant conversations. During the procedures, a facilitator must be aware of both the presence of dominant participants leading the current conversation and the status of any participant that is left behind. We model and optimize these situations and procedures as a partially observable Markov decision process (POMDP), which is suitable for real-world sequential decision processes. The results of experiments conducted to evaluate the proposed procedures show evidence of their acceptability and of the feeling of groupness they create.
Four-participant group conversation: A facilitation robot controlling engagement density as the fourth participant
S0885230814001272
The linear source-filter model of speech production assumes that the source of the speech sounds is independent of the filter. However, acoustic simulations based on physical speech production models show that when the fundamental frequency of the source harmonics approaches the first formant of the vocal tract filter, the filter has significant effects on the source due to the nonlinear coupling between them. In this study, two interactive system models are proposed under the quasi-steady Bernoulli flow and linear vocal tract assumptions. An algorithm is developed to estimate the model parameters. The glottal flow and the linear vocal tract parameters are found by conventional methods. The Rosenberg model is used to synthesize the glottal waveform. A recursive optimization method is proposed to find the parameters of the interactive model. Finally, the glottal flow produced by the nonlinear interactive system is computed. The experimental results show that the interactive system model reproduces the fine details of the glottal flow source accurately.
Nonlinear interactive source-filter models for speech
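For reference, the sketch below generates one common discrete form of the Rosenberg glottal pulse mentioned in the abstract above (raised-cosine opening phase, quarter-cosine closing phase). The opening/closing fractions and the pulse-train example are arbitrary parameter choices, and this is only the excitation waveform, not the interactive source-filter model itself.

# One common discrete form of the Rosenberg glottal pulse.
import numpy as np

def rosenberg_pulse(period, open_frac=0.4, close_frac=0.16):
    n1 = int(open_frac * period)          # opening-phase length in samples
    n2 = int(close_frac * period)         # closing-phase length in samples
    g = np.zeros(period)
    n = np.arange(n1)
    g[:n1] = 0.5 * (1.0 - np.cos(np.pi * n / n1))        # opening: raised cosine
    m = np.arange(n2)
    g[n1:n1 + n2] = np.cos(np.pi * m / (2.0 * n2))       # closing: quarter cosine
    return g                                             # closed phase stays zero

pulse = rosenberg_pulse(period=160)       # one cycle at 100 Hz for fs = 16 kHz
glottal_flow = np.tile(pulse, 5)          # a short train of identical cycles
print(glottal_flow.shape, glottal_flow.max())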
S0885230814001284
Advances in real-time magnetic resonance imaging (RT-MRI) make it suitable for studying the dynamic aspects of the upper airway. One of the main challenges concerns how to deal with the large amount of data resulting from these studies, particularly how to extract relevant features for analysis such as the vocal tract profiles. A method is proposed, based on a modified active appearance model (AAM) approach, for unsupervised segmentation of the vocal tract from midsagittal RT-MRI sequences. The described approach was designed taking into account the low inter-frame difference. As a result, when compared to a traditional AAM approach, segmentation is performed faster and model convergence is improved, attaining good results using small training sets. The main goal is to extract the vocal tract profiles automatically, over time, providing identification of different regions of interest, to allow the study of the dynamic features of the vocal tract, for example, during speech production. The proposed method has been evaluated against vocal tract delineations manually performed by four observers, yielding good agreement.
Unsupervised segmentation of the vocal tract from real-time MRI sequences
S0885230814001296
A vocal tract model based on a digital waveguide is presented in which the vocal tract is decomposed into a number of convergent and divergent ducts. The divergent duct is modeled by a 2D-featured 1D digital waveguide and the convergent duct by a one-dimensional waveguide. The modeling of the divergent duct is based on splitting the volume velocity into axial and radial components. The combination of separate modeling of the divergent and convergent ducts forms the foundation of the current approach. The advantage of this approach is the ability to obtain a transfer function in zero-pole form that eliminates the need to perform numerical calculations on a discrete 2D mesh. In this way the present model, named the 2D-featured 1D digital waveguide model, has been found to be more efficient than the standard 2D waveguide model while agreeing closely with it in the formant frequency patterns of the vowels /a/, /e/, /i/, /o/ and /u/. The model has two control parameters, the wall and glottal reflection coefficients, which can be effectively employed for bandwidth tuning. The model also shows its ability to generate smooth dynamic changes in the vocal tract during the transition of vowels.
Two dimensional featured one dimensional digital waveguide model for the vocal tract
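As background to the duct-based model in the abstract above, the sketch below shows the classic lossless-tube view on which such waveguide models build: area ratios between adjacent sections give Kelly-Lochbaum reflection coefficients, which define an all-pole filter whose spectral peaks approximate the formants. The area values are arbitrary rather than a fitted vowel, and the paper's 2D-featured treatment of divergent ducts goes beyond this purely 1D picture.

# Lossless concatenated-tube model: areas -> reflection coefficients -> all-pole filter.
import numpy as np

def areas_to_lpc(areas):
    """Kelly-Lochbaum reflection coefficients converted to an LPC polynomial (step-up recursion)."""
    k = [(areas[i + 1] - areas[i]) / (areas[i + 1] + areas[i]) for i in range(len(areas) - 1)]
    a = np.array([1.0])
    for ki in k:
        a = np.concatenate([a, [0.0]]) + ki * np.concatenate([[0.0], a[::-1]])
    return a                                   # denominator coefficients of H(z) = 1/A(z)

def resonances(a, fs=16000, n_fft=4096):
    """Pick spectral peaks of 1/|A(e^jw)| as rough resonance (formant) estimates."""
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    spec = 1.0 / np.abs(np.fft.rfft(a, n_fft))
    return [freqs[i] for i in range(1, len(spec) - 1)
            if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]][:4]

tube_areas = [2.6, 1.8, 1.0, 0.8, 1.2, 2.4, 3.0, 4.0]   # cm^2, section areas (illustrative)
print(resonances(areas_to_lpc(tube_areas)))             # first few resonance estimates in Hz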
S0885230814001302
In this paper, we experiment with a discriminative possibilistic classifier that uses a reweighting model for the morphological disambiguation of Arabic texts. The main idea is to provide a possibilistic classifier that automatically acquires disambiguation knowledge from vocalized corpora and is tested on non-vocalized texts. Initially, we determine all the possible analyses of vocalized words using a morphological analyzer. The values of their morphological features are exploited to train the classifier. The testing phase consists of identifying the correct class value (i.e., a morphological feature) using the features of the preceding and the following words. The appropriate class is the one having the greatest value of a possibilistic measure computed over the training set. To discriminate the effect of each feature, we add the weights of the training attributes to this measure. To assess this approach, we carry out experiments on a corpus of Arabic stories and on the Arabic Treebank. We present results concerning all the morphological features, and we discern to which degree the discriminative approach improves disambiguation rates and extract the dependency relationships among the features. The results reveal the contribution of possibility theory to resolving ambiguities in real applications. We also compare the success rates in modern versus classical Arabic texts. Finally, we evaluate the impact of the lexical likelihood in morphological disambiguation.
Experimenting a discriminative possibilistic classifier with reweighting model for Arabic morphological disambiguation
S0885230815000029
We present a new modelling framework for dialogue management based on the concept of probabilistic rules. Probabilistic rules are defined as structured mappings between logical conditions and probabilistic effects. They function as high-level templates for probabilistic graphical models and may include unknown parameters whose values are estimated from data using Bayesian inference. Thanks to their use of logical abstractions, probabilistic rules are able to encode the probability and utility models employed in dialogue management in a compact and human-readable form. As a consequence, they can reduce the amount of dialogue data required for parameter estimation and allow system designers to directly incorporate their expert domain knowledge into the dialogue models. Empirical results of a user evaluation in a human–robot interaction task with 37 participants show that a dialogue manager structured with probabilistic rules outperforms both purely hand-crafted and purely statistical methods on a range of subjective and objective quality metrics. The framework is implemented in a software toolkit called OpenDial, which can be used to develop various types of dialogue systems based on probabilistic rules.
A hybrid approach to dialogue management based on probabilistic rules
S0885230815000030
In this paper, we present Scusi?, an anytime numerical mechanism for the interpretation of spoken referring expressions. Our contributions are: (1) an anytime interpretation process that considers multiple alternatives at different interpretation stages (speech, syntax, semantics and pragmatics), which enables Scusi? to defer decisions to the end of the interpretation process; (2) a mechanism that combines scores associated with the output of the different interpretation stages, taking into account the uncertainty arising from a variety of sources, such as ambiguity or inaccuracy in a description, speech recognition errors and out-of-vocabulary terms; and (3) distance-based functions with probabilistic semantics that represent lexical similarity between objects’ names and similarity between stated requirements and physical properties of objects (viz colour, size and positional relations). We considered two approaches for combining these descriptive attributes, viz multiplicative and additive, and determined whether prioritizing certain interpretation stages and descriptive attributes affects interpretation performance. We conducted two experiments to evaluate different aspects of Scusi?'s performance: Interpretive, where we compared Scusi?'s understanding of descriptions that are mainly ambiguous or inaccurate with people's understanding of these descriptions, and Generative, where we assessed Scusi?'s understanding of naturally occurring spoken descriptions. Our results show that Scusi?'s understanding of the descriptions in the Interpretive trial is comparable to that of people; and that its performance is encouraging when given arbitrary spoken descriptions in diverse scenarios, and excellent for the corresponding written descriptions. In both experiments, Scusi? significantly outperformed a baseline system that maintains only top same-score interpretations.
Employing distance-based semantics to interpret spoken referring expressions
S0885230815000042
We address a spoken dialogue system that conducts information navigation in the style of small talk. The system uses Web news articles as an information source, and the user can receive information about the news of the day through interaction. The goal and procedure of this kind of dialogue are not well defined. An empirical approach based on a partially observable Markov decision process (POMDP) has recently been widely used for dialogue management, but it assumes a definite task goal and information slots, which does not hold in our application. In this work, we formulate the problem of dialogue management as a selection of modules and optimize it with a POMDP by tracking the dialogue state and focus of attention. The POMDP-based dialogue manager receives a user intention that is classified by a spoken language understanding (SLU) component based on logistic regression (LR). The manager also receives a user focus that is detected by the SLU component based on conditional random fields (CRFs). These dialogue states are used for selecting appropriate modules by a policy function, which is optimized by reinforcement learning. The reward function is defined by the quality of interaction to encourage long interactions of information navigation with users. The module that responds to user queries is based on the similarity of predicate-argument (P-A) structures that are automatically defined from a domain corpus. It allows for flexible response generation even if the system cannot find information exactly matching the user query. The system also proactively presents information by following the user focus and retrieving a news article based on the similarity measure even if the user does not make any utterance. Experimental evaluations with real dialogue sessions demonstrate that the proposed system outperformed the conventional rule-based system in terms of dialogue state tracking and action selection. The effect of focus detection in the POMDP framework is also confirmed.
Conversational system for information navigation based on POMDP with user focus tracking
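A minimal, editor-added sketch of the dialogue-management loop summarized in the abstract above: a belief over whether the user currently has a focus is updated from noisy SLU observations, and a module is selected from the belief and the classified intention. In the paper the policy is optimized by reinforcement learning; here it is a hand-written placeholder, the module names are invented, and all probabilities are toy values.

# Belief tracking over user focus plus a placeholder module-selection policy.
def update_belief(belief_focus, p_obs_given_focus, p_obs_given_nofocus):
    """Bayes update of P(user has a focus) from one SLU observation likelihood pair."""
    num = p_obs_given_focus * belief_focus
    den = num + p_obs_given_nofocus * (1.0 - belief_focus)
    return num / den

def select_module(belief_focus, user_intention):
    if user_intention == "question":
        return "answer_by_predicate_argument_match"
    if belief_focus > 0.7:
        return "present_related_news_on_focus"      # proactive presentation following the focus
    return "present_new_topic"

belief = 0.5
for p_f, p_nf, intent in [(0.8, 0.3, "silence"), (0.9, 0.2, "silence"), (0.6, 0.4, "question")]:
    belief = update_belief(belief, p_f, p_nf)
    print(round(belief, 3), "->", select_module(belief, intent))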
S0885230815000054
This paper investigates three different sources of information and their integration into language modelling. Global semantics is modelled by Latent Dirichlet allocation and brings long range dependencies into language models. Word clusters given by semantic spaces enrich these language models with short range semantics. Finally, our own stemming algorithm is used to further enhance the performance of language modelling for inflectional languages. Our research shows that these three sources of information enrich each other and their combination dramatically improves language modelling. All investigated models are acquired in a fully unsupervised manner. We show the efficiency of our methods for several languages such as Czech, Slovenian, Slovak, Polish, Hungarian, and English, proving their multilingualism. The perplexity tests are accompanied by machine translation tests that prove the ability of the proposed models to improve the performance of a real-world application.
Latent semantics in language models
S0885230815000066
Due to the mobile Internet revolution, people tend to browse the Web while driving their car, which puts the driver's safety at risk. Therefore, an intuitive and non-distractive in-car speech interface to the Web needs to be developed. Before developing a new speech dialog system (SDS) in a new domain, developers have to examine the user's preferred interaction style and its influence on driving safety. This paper reports a driving simulation study, which was conducted to compare different speech-based in-car human–machine interface concepts concerning usability and driver distraction. The applied SDS prototypes were developed to perform an online hotel booking by speech while driving. The speech dialog prototypes were based on different speech dialog strategies: a command-based and a conversational dialog. Different graphical user interface (GUI) concepts (one including a human-like avatar) were designed in order to best support the respective dialog strategy and to evaluate the effect of the GUI on usability and driver distraction. The results show that only a few differences concerning speech dialog quality were found when comparing the speech dialog strategies. The command-based dialog was slightly better accepted than the conversational dialog, which seems to be due to the high concept error rate of the conversational dialog. An SDS without a GUI also seems to be feasible for the driving environment and was accepted by the users. The comparison of speech dialog strategies did not reveal differences in driver distraction. However, the use of a GUI impaired the driving performance and increased gaze-based distraction. The presence of an avatar was not appreciated by participants and did not affect the dialog performance. Concerning driver distraction, the virtual agent neither negatively affected the driving performance nor increased visual distraction. The results imply that in-car SDS developers should take both speaking styles into consideration when designing an SDS for information exchange tasks. Furthermore, developers have to consider reducing the content presented on the screen in order to reduce driver distraction. A human-like avatar was not appreciated by users while driving. Research should further investigate whether other kinds of avatars might achieve different results.
Evaluation of speech-based HMI concepts for information exchange tasks: A driving simulator study
S0885230815000078
In this paper we study the effect of different lexical resources for selecting synonyms and strategies for word sense disambiguation in a lexical simplification system for the Spanish language. The resources used for the experiments are the Spanish EuroWordNet, the Spanish Open Thesaurus and a combination of both. As for the synonym selection strategies, we have used both local and global contexts for word sense disambiguation. We present a novel evaluation framework in lexical simplification that takes into account the level of ambiguity of the word to be simplified. The evaluation compares various instances of the lexical simplification system, a gold standard, and a baseline. The paper presents an in-depth qualitative error analysis of the results.
Simplifying words in context. Experiments with two lexical resources in Spanish
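A toy sketch of context-based synonym selection of the kind evaluated in the abstract above: a Lesk-style overlap between the sentence and each candidate sense chooses the sense, and the simplest synonym of that sense (here, simply the shortest; a frequency list would be the more usual criterion) replaces the target word. The miniature Spanish "thesaurus" is invented and merely stands in for EuroWordNet or the Open Thesaurus.

# Context-aware synonym selection for lexical simplification (Lesk-style sense choice).
def simplify(target, context_words, senses):
    """senses: list of dicts with 'gloss_words' (set) and 'synonyms' (list)."""
    best = max(senses, key=lambda s: len(s["gloss_words"] & set(context_words)))
    candidates = [w for w in best["synonyms"] if w != target]
    return min(candidates, key=len) if candidates else target

senses_of_abonar = [
    {"gloss_words": {"pagar", "dinero", "cuenta"},  "synonyms": ["pagar", "costear"]},
    {"gloss_words": {"tierra", "cultivo", "campo"}, "synonyms": ["fertilizar", "nutrir"]},
]
context = ["hay", "que", "abonar", "la", "cuenta", "del", "hotel"]
print(simplify("abonar", context, senses_of_abonar))   # -> "pagar"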
S0885230815000194
In this paper, we address issues in situated language understanding in a moving car, which has the additional challenge of being a rapidly changing environment. More specifically, we propose methods for understanding user queries regarding specific target buildings in their surroundings. Unlike previous studies on physically situated interactions, such as interactions with mobile robots, the task at hand is very time sensitive because the spatial relationship between the car and target changes while the user is speaking. We collected situated utterances from drivers using our research system called Townsurfer, which was embedded in a real vehicle. Based on this data, we analyzed the timing of user queries, the spatial relationships between the car and the targets, the head pose of the user, and linguistic cues. Based on this analysis, we further propose methods to optimize timing and spatial distances and to make use of linguistic cues. Finally, we demonstrate that our algorithms improved the target identification rate by 24.1% absolute.
Situated language understanding for a spoken dialog system within vehicles
S0885230815000200
The conventional approach for data-driven articulatory synthesis consists of modeling the joint acoustic-articulatory distribution with a Gaussian mixture model (GMM), followed by a post-processing step that optimizes the resulting acoustic trajectories. This final step can significantly improve the accuracy of the GMM frame-by-frame mapping but is computationally intensive and requires that the entire utterance be synthesized beforehand, making it unsuited for real-time synthesis. To address this issue, we present a deep neural network (DNN) articulatory synthesizer that uses a tapped-delay input line, allowing the model to capture context information in the articulatory trajectory without the need for post-processing. We characterize the DNN as a function of the context size and number of hidden layers, and compare it against two GMM articulatory synthesizers, a baseline model that performs a simple frame-by-frame mapping, and a second model that also performs trajectory optimization. Our results show that a DNN with a 60-ms context window and two 512-neuron hidden layers can synthesize speech at four times the frame rate – comparable to frame-by-frame mappings, while improving the accuracy of trajectory optimization (a 9.8% reduction in Mel Cepstral distortion). Subjective evaluation through pairwise listening tests also shows a strong preference toward the DNN articulatory synthesizer when compared to GMM trajectory optimization.
Data driven articulatory synthesis with deep neural networks
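A small sketch of the tapped-delay-line input described in the abstract above: each training example stacks the articulatory frames in a +/- k window around the current frame, so the network captures context without a separate trajectory-optimization pass. The data are random placeholders, and the scikit-learn MLP below merely stands in for the paper's two 512-neuron hidden-layer network.

# Context-stacked ("tapped-delay") input for a frame-by-frame articulatory-to-acoustic DNN.
import numpy as np
from sklearn.neural_network import MLPRegressor

def stack_context(frames, k):
    """frames: (T, D) articulatory trajectories -> (T, (2k+1)*D) context-stacked input."""
    padded = np.pad(frames, ((k, k), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + len(frames)] for i in range(2 * k + 1)])

T, D_art, D_ac, k = 200, 14, 25, 3          # frames, articulatory dims, acoustic dims, context
art = np.random.randn(T, D_art)             # e.g. articulator position trajectories
acoustic = np.random.randn(T, D_ac)         # e.g. mel-cepstral coefficients

X = stack_context(art, k)                   # (T, 7 * 14) tapped-delay input
dnn = MLPRegressor(hidden_layer_sizes=(512, 512), max_iter=50)
dnn.fit(X, acoustic)                        # frame-by-frame mapping with built-in context
print(dnn.predict(X[:1]).shape)             # one acoustic frame predicted per input frame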