FileName | Abstract | Title |
---|---|---|
S0965997816300515 | Most of the existing methods for dam behavior modeling presuppose temporal immutability of the modeled structure and require a persistent set of input parameters. In real-world applications, permanent structural changes and failures of measuring equipment can lead to a situation in which a selected model becomes unusable. Hence, the development of a system capable of automatically generating the most adequate dam model for a given situation is a necessity. In this paper, we present a self-tuning system for dam behavior modeling based on artificial neural networks (ANN) optimized for given conditions using genetic algorithms (GA). Through an evolutionary process, the system performs near real-time adjustment of the ANN architecture according to the currently active sensors and the present measurement dataset. The model was validated using the Grancarevo dam case study (on the Trebisnjica river in the Republic of Srpska), where radial displacements of a point inside the dam structure were modeled as a function of headwater, temperature, and ageing. The performance of the system was compared to that of an equivalent hybrid model based on multiple linear regression (MLR) and GA. The results of the analysis show that the ANN/GA hybrid can achieve better accuracy than the MLR/GA hybrid. On the other hand, the ANN/GA hybrid shows higher computational demands and noticeable sensitivity to the temperature phase offset present at different geographical locations. | A self-tuning system for dam behavior modeling based on evolving artificial neural networks |
S0965997816300588 | A wind turbine operating in the wake of another turbine has reduced power production because of the lower wind speed behind the rotor. The flow field in the wake behind the first-row turbines is characterized by a significant deficit in wind velocity and increased levels of turbulence intensity. To maximize the wind farm net profit, the number of turbines installed in the wind farm should depend on the wind farm project investment parameters. Therefore, modeling the wake effect is necessary because it has a great influence on the actual energy output of a wind farm. In this paper, the extreme learning machine (ELM) coupled with the wavelet transform (ELM-WAVELET) is used for the prediction of the wind turbine wake effect in a wind farm. Estimation and prediction results of the ELM-WAVELET model are compared with the ELM, genetic programming (GP), support vector machine (SVM) and artificial neural network (ANN) models. The following error and correlation functions are applied to evaluate the proposed models: Root Mean Square Error (RMSE), Coefficient of Determination (R²) and Pearson coefficient (r). The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the ELM-WAVELET approach (RMSE=0.269) in comparison with the ELM (RMSE=0.27), SVM (RMSE=0.432), ANN (RMSE=0.432) and GP model (RMSE=0.433). | Extreme learning approach with wavelet transform function for forecasting wind turbine wake effect to improve wind farm efficiency |
S096599781630059X | In a previous work, we presented algorithms that allow three-dimensional models to be obtained from graphs representing a projection in conical parallel perspectives and conical oblique perspectives of polyhedral models of normalon and quasi-normalon typology. In this paper, the new advances we have achieved in this field are presented, extending the set of models that can be reconstructed to typologies other than normalon and quasi-normalon. Moreover, we present a new technique that extends the previous work to conical perspectives with three vanishing points, and the method proposed for detecting the type of conical perspective represented by the graph, including the detection and subsequent reconstruction of graphs that represent a flat shape, has been improved. The results obtained on a total of 336 tests, with a success ratio of 100%, make the method a proposal to be considered for obtaining models from conical perspectives automatically. | New advances in obtaining three-dimensional models from conical perspectives |
S0965997816300606 | Differential Evolution (DE) is one of the most powerful stochastic real-parameter optimizers. An alternative adaptive DE algorithm called Expected Improvement (EI)-High Dimensional Model Representation (HDMR)-DE is suggested. The EI criterion and the Kriging-HDMR are used to adjust the scale factor F and the crossover constant Cr, respectively. Considering the expensive computational cost of evaluation, Kriging is integrated to evaluate the objective function when an accuracy criterion is met. The suggested method is compared with four popular adaptive DE algorithms on 25 standard numerical benchmarks derived from the IEEE Congress on Evolutionary Computation 2005 competition. To verify the feasibility of the suggested algorithm, a real-world application, the time-dependent variable Blank Hold Force (BHF) optimization problem, is also solved by the EI-HDMR-DE. The results show that the EI-HDMR-DE improves the performance of adaptive DE and has the potential capability to solve some complicated real-world applications. | An alternative adaptive differential evolutionary Algorithm assisted by Expected Improvement criterion and cut-HDMR expansion and its application in time-based sheet forming design |
S0965997816300618 | A prerequisite for simulating the biophysics of complex biological tissues and whole organisms is a computational description of biological matter that is flexible and can interface with materials of different viscosities, such as liquids. The landscape of software that is easily available to do such work is limited and lacks essential features necessary for combining elastic matter with simulations of liquids. Here we present an open source software package called Sibernetic, designed for the physical simulation of biomechanical matter (membranes, elastic matter, contractile matter) and environments (liquids, solids and elastic matter with variable physical properties). At its core, Sibernetic is built as an extension to Predictive–Corrective Incompressible Smoothed Particle Hydrodynamics (PCISPH). Sibernetic is built on top of OpenCL, making it possible to run simulations on CPUs or GPUs, and has 3D visualization support built on top of OpenGL. Several test examples of the software running and reproducing physical experiments, as well as performance benchmarks, are presented and future directions are discussed. | Application of smoothed particle hydrodynamics to modeling mechanisms of biological tissue |
S096599781630062X | The Colliding Bodies Optimization (CBO) algorithm is a metaheuristic algorithm inspired by the physical laws of collision, in which each candidate solution is modeled as a body whose mass is proportional to the fitness of the solution. In this paper, a modified version of CBO, denoted MCBO, is utilized to optimize the cost of bridge superstructures. The problem consists of 17 variables and 101 implicit constraints based on AASHTO standard specifications and construction limitations. The optimization is performed for bridges with different span lengths and deck widths, and with various unit costs of concrete. A comparison among the PSO, CBO, and MCBO algorithms is conducted, which shows the efficiency and robustness of the MCBO algorithm. | Cost optimum design of post-tensioned concrete bridges using a modified colliding bodies optimization algorithm |
S0965997816300631 | This paper presents a virtual training system with the aim of making assembly training easy and realistic through the use of haptics and visual fidelity. The paper gives a detailed description of setting basic constraints to address the shortcomings that arise when a physics engine and haptic feedback are both integrated into the training system. The basic constraints not only simplify the assembly operation and shorten the assembly time, but also increase the visual realism when a physics engine is integrated into assembly training. Assembly sequence generation based on the disassembly process is also provided. Moreover, apart from the display and physics engine modules, the whole system is made up of widgets, each of which is activated only when its function is needed. An experiment on the Slider Mechanism is presented, in which 24 participants were distributed into three groups (haptic-based group, mouse-based group and traditional training group). The results show that the haptic-based training system received good evaluations on the ease of operation and assembly cues. No significant differences in training time were found between the haptic group and the traditional training group, but the training of the mouse-based group was significantly slower. Moreover, no significant differences in learning of the task were found among the three groups. | A new constraint-based virtual environment for haptic assembly training |
S0965997816300643 | Model-Based Systems Engineering (MBSE) is a promising methodology for the design of complex mechatronic systems. Some tools have been developed to support MBSE in complex system design and modeling; however, none of them supports system optimization. This work is motivated precisely by this gap and aims to develop effective methods to support automatic system optimization for MBSE. Specifically, a pattern-based method is proposed to support the integration of system optimization into mechatronic system design. In such a method, optimization problems, their solving methods and computation results are formally defined in each pattern based on the Systems Modeling Language (SysML). In addition, a model description scheme termed an optimization profile is proposed based on SysML to include the components for formalizing different kinds of optimization problems and optimization methods. After an optimization profile is created for an optimization problem, system optimization methods can be chosen automatically from the pattern library based on a semantic similarity evaluation. Then, optimization results are provided to users to support decision-making and the pattern library is updated using the relevant information obtained in this process. A system design example is used to demonstrate the feasibility and effectiveness of the proposed methods. | Pattern-based integration of system optimization in mechatronic system design |
S0965997816300667 | This paper presents geometrical modeling methods for braided structures overbraiding non-cylindrical prisms. The modeling method comes from a motion analysis of the braiding process. The motion of strands in the braiding process can be decomposed into circumferential motion, radial motion and axial motion based on the movements of the carriers and the take-up roller. These motions can be re-composed to form two independent surfaces, the braiding surface and the helical surface, which include all information about strand movements in the braiding process. Therefore, the strands can be obtained as the intersections of these two surfaces. The helical surfaces define the position of the strands and the pitch, while the braiding surfaces define the interlacing patterns and the outline of the braids. For different braids overbraiding complicated structures, the simulated braids can be obtained by changing the braiding surfaces. Based on this theory, this paper illustrates the modeling methods for braids overbraiding prisms generated by extruding, sweeping, revolving and lofting, respectively. | Computer-aided modeling of braided structures overbraiding non-cylindrical prisms based on surface transformation |
S096599781630076X | One of the most important activities in software project planning involves scheduling tasks and assigning them to developers. Project managers must decide who will do what and when in a software project, with the aim of minimizing both its duration and cost. However, project managers often struggle to efficiently allocate developers and schedule tasks in a way that balances these conflicting goals. Furthermore, the different criteria used to select developers could lead to inaccurate estimation of the duration and cost of tasks, resulting in budget overruns, delays, or reduced software quality. This paper proposes an approach that makes use of multi-objective optimization to handle the simultaneous minimization of project cost and duration, taking into account several productivity-related attributes for better estimation of task duration and cost. In particular, we focus on dealing with the non-interchangeable nature of human resources and the different ways in which teams carry out work by considering the relationship between the type of task interdependence and the productivity rate of developers, as well as the communication overhead incurred among developers. The approach is applied to four well-known optimization algorithms, whose performance and scalability are compared using generated software project instances. Additionally, several real-world case studies are explored to help discuss the implications of such an approach in the software development industry. The results and observations show positive indications that using a productivity-based multi-objective optimization approach has the potential to provide software project managers with more accurate developer allocation and task scheduling solutions in a more efficient manner. | Investigating the impact of developer productivity, task interdependence type and communication overhead in a multi-objective optimization approach for software project planning |
S0965997816300783 | This paper describes our new hybrid parallelization of the Finite Element Tearing and Interconnecting (FETI) method for multi-socket and multi-core computer clusters. This is an essential step in our development of the Hybrid FETI solver, where a small number of neighboring subdomains is aggregated into clusters and each cluster is processed by a single compute node. In our previous work we implemented the FETI solver using MPI parallelization in our ESPRESO solver. The proposed hybrid implementation provides better utilization of the resources of modern HPC machines using advanced shared memory runtime systems such as the Cilk++ runtime. Cilk++, an alternative to OpenMP, is used by ESPRESO for shared memory parallelization. We have compared the performance of the hybrid parallelization to MPI-only parallelization. The results show that we have reduced both solver runtime and memory utilization. This allows the solver to use a larger number of smaller sub-domains in order to solve larger problems using a limited number of compute nodes. This feature is essential for users with smaller computer clusters. In addition, we have evaluated this approach with large-scale benchmarks of up to 1.3 billion unknowns to show that the hybrid parallelization also reduces the runtime of the FETI solver for these types of problems. | Hybrid parallelization of the total FETI solver |
S0965997816300813 | In the paper a general and direct method for implementation of influence lines in finite element software is provided. Generally influence lines are applied to identify the most critical location and combination of live loads in civil engineering structures. The proposed method is based on the Müller-Breslau principle and the basic idea is to equate discontinuous displacement fields with consistent nodal forces, thus obtaining influence functions only applying a single load case without changing the geometry or boundary conditions of the finite element model. Initially the method is developed by means of some illustrative beam problems, where the consistent nodal forces for angular, lateral and axial displacement discontinuities for a Bernoulli-Euler beam element are derived. Finally it is shown that the method is fully general and efficient in identifying the influence functions of generalized stresses in e.g. plates and shells. | A direct and fully general implementation of influence lines/surfaces in finite element software |
S0965997816300837 | The quality of computer-aided-design (CAD) generated ‘As-built’ documentation is evaluated for a High Voltage Switchgear System (HVSS), which forms part of a Supervisory Control and Data Acquisition upgrade within a geothermal power plant. A total of 267 CAD drawings for the HVSS were used to create a Systems Information Model (SIM) whereby the physical components and associated connections were constructed in an object-oriented database. Throughout the modelling process a considerable number of errors and a large amount of information redundancy were identified, and examples are presented. The production of the CAD drawings took 10,680 man-h, in stark contrast to the 80 man-h required to construct the SIM, illustrating the efficiency and effectiveness of SIM compared to CAD for the documentation of electrical instrumentation and control systems (EICS). Realising this significant potential cost and productivity saving requires a shift in mindset and a move beyond the use of CAD to an object-oriented SIM, with a 1:1 relationship between objects in the model and components in the real world. | Moving beyond CAD to an object-oriented approach for electrical control and instrumentation systems |
S0965997816300849 | The arrangement design of a submarine depends on the data of the parent ships and the knowledge of experts. Some delay in design can occur when data or experts are absent. An arrangement design problem of a submarine can also be difficult to solve due to the number of compartments and equipment placed in the limited space, as well as the numerous potential alternatives for the arrangement design. Thus, a compelling need arises to accumulate data regarding the parent ships, the knowledge of experts, and design rules as a systematic structure, increasing the demand for optimization of the arrangement design. In this study, we proposed an arrangement method of the submarine compartments and equipment based on an expert system and a multistage optimization. For this task, we used a template model for the arrangement design of a submarine proposed by the authors in the previous study to store the arrangement data. We also improved and used an expert system that can systematically computerize the knowledge of experts, previously developed by the authors. Then, we proposed an optimization method that can yield a better arrangement design after formulating a submarine arrangement problem as an optimization problem, solving it with the use of an efficient optimization algorithm. To evaluate the applicability of the proposed method, we developed a prototype program consisting of an arrangement template model, an expert system module, and an optimization module. Finally, we applied the developed program to a problem with regard to the arrangement design of a small submarine. The results showed that the developed program can be used as a new tool for the arrangement design of a submarine. | A submarine arrangement design program based on the expert system and the multistage optimization |
S1047320313000679 | Object segmentation of unknown objects with arbitrary shape in cluttered scenes is an ambitious goal in computer vision and has gained great impetus with the introduction of cheap and powerful RGB-D sensors. We introduce a framework for segmenting RGB-D images where data is processed in a hierarchical fashion. After pre-clustering at the pixel level, parametric surface patches are estimated. Different relations between patch pairs, derived from perceptual grouping principles, are calculated, and support vector machine classification is employed to learn perceptual grouping. Finally, we show that object hypothesis generation with Graph-Cut finds a globally optimal solution and prevents wrong grouping. Our framework is able to segment objects even if they are stacked or jumbled in cluttered scenes. We also tackle the problem of segmenting objects when they are partially occluded. The work is evaluated on publicly available object segmentation databases and also compared with state-of-the-art work on object segmentation. | Learning of perceptual grouping for object segmentation on RGB-D data |
S1047320314001643 | In region-based image retrieval (RBIR), texture features are crucial in determining the class a region belongs to, since they can overcome the limitations of color and shape features. Two robust approaches to modeling texture features are Gabor and curvelet features. Although both features are close to human visual perception, sufficient information needs to be extracted from their sub-bands for effective texture classification. Moreover, shape irregularity can be a problem since the Gabor and curvelet transforms can only be applied to regular shapes. In this paper, we propose an approach that uses both the Gabor wavelet and the curvelet transforms on the transferred regular shapes of the image regions. We also apply a fitting method to encode the sub-bands’ information in the polynomial coefficients to create a texture feature vector with the maximum power of discrimination. Experiments on the texture classification task with the ImageCLEF and Outex databases demonstrate the effectiveness of the proposed approach. | Texture classification and discrimination for region-based image retrieval |
S105120041300064X | Automatic segmentation of non-stationary signals such as the electroencephalogram (EEG), electrocardiogram (ECG) and brightness of galactic objects has many applications. In this paper, an improved segmentation method for non-stationary signals based on fractal dimension (FD) and evolutionary algorithms (EAs) is proposed. After using a Kalman filter (KF) to reduce existing noise, the FD, which can detect changes in both the amplitude and frequency of the signal, is applied to reveal segments of the signal. In order to select two acceptable parameters of the FD, two well-established EAs, namely the genetic algorithm (GA) and the imperialist competitive algorithm (ICA), are used. The proposed approach is applied to synthetic multi-component signals, real EEG data, and brightness changes of galactic objects. The proposed methods are compared with some well-known existing algorithms such as the improved nonlinear energy operator (INLEO), Varri's, and wavelet generalized likelihood ratio (WGLR) methods. The simulation results demonstrate that segmentation using the KF, FD, and EAs has greater accuracy, which proves the significance of this algorithm. | A hybrid evolutionary approach to segmentation of non-stationary signals |
S1051200413000663 | Qi, the wireless power standard, has been proposed to allow low-power systems to receive power through wireless inductive power transfer. The standard outlines the essential, desired and optional requirements for developing the wireless power transfer platform. In this paper, we present the design and implementation results of a communication controller for a guided-positioning single transmitter–single receiver wireless power transfer platform. Apart from the basic design, additional processing and data storage capability is introduced to make the design adaptive in terms of response time and the size of control data transfer. The method of estimating the amount of power transfer is modified to reduce the design complexity and internal power consumption of the power transmitter and receiver. The implementation results help to assess the ratio of power transferred to resource utilization and the ratio of power transferred to power consumed in a simple wireless power transfer platform. | Communication controller and control unit design for Qi wireless power transfer |
S1051200413000705 | A new generalized conversion method of the MDCT to MDST coefficients directly in the frequency domain is proposed for an arbitrary symmetric windowing function. Based on the compact block matrix representation of the MDCT and MDST filter banks, on their properties and on relations among transform sub-matrices, a relation in matrix-vector form between the MDCT and MDST coefficients in the frequency domain is derived. Given the MDCT coefficients of three consecutive data blocks at a decoder, the MDST coefficients of the current data block can be obtained by combining the MDCT coefficients of the previous, current and next blocks via conversion matrices. Since the forms of the conversion matrices depend on the employed windowing function, a specific solution for each windowing function is derived. Because the conversion matrices have a very regular structure, the matrix-vector products are reduced to simple analytical formulas. The new generalized conversion method is more efficient and structurally simpler, both in terms of arithmetic complexity and memory requirements, than existing exact frequency-domain-based conversion methods. Although the new generalized conversion method enables us to compute the exact MDST coefficients in only one or more specified frequency ranges, the computation of the complete set of MDST coefficients still requires a high number of arithmetic operations. As an alternative, an efficient and flexible approximate conversion method is constructed. With properly selected parameters it can produce acceptable approximate results with much lower computational complexity. Therefore, the approximate conversion method has the potential to be used in many MDCT-based audio decoders, and particularly at resource-limited and low-cost decoders for spectral analysis to obtain magnitude and phase information. | New generalized conversion method of the MDCT to MDST coefficients in the frequency domain for arbitrary symmetric windowing function |
S1051200413000742 | A hybrid method of block truncation coding (BTC) and differential pulse code modulation (DPCM) offers better visual quality than the standard BTC for small block sizes due to its inherent multitone representation. Recently, a two-level quantizer design method has been proposed to increase the coding performance of the DPCM-BTC framework. However, the design method is near optimal in the sense that its coding performance depends on the initial bit plane patterns. In this paper, we propose a bit plane modification (BPM) algorithm to achieve further performance improvement. The BPM algorithm, inspired by error diffusion, effectively distributes large quantization error at a certain pixel to its neighboring pixels having small quantization errors by changing partial bit patterns. Experimental results show that the proposed algorithm successfully achieves much higher coding performance than various conventional BTC methods. The average PSNR performance of the proposed method is 2.31 dB, 5.15 dB, and 5.15 dB higher than that of BTC, DPCM-BTC, and a recently developed BTC scheme using error diffusion and bilateral filtering, respectively. | Bit plane modification for improving MSE-near optimal DPCM-based block truncation coding |
S1051200413000754 | Traditional minimum spanning tree-based clustering algorithms only make use of information about the edges contained in the tree to partition a data set. As a result, with limited information about the structure underlying a data set, these algorithms are vulnerable to outliers. To address this issue, this paper presents a simple yet efficient MST-inspired clustering algorithm. It works by finding a local density factor for each data point during the construction of an MST and discarding outliers, i.e., those whose local density factor is larger than a threshold, to increase the separation between clusters. This algorithm is easy to implement, requiring an implementation of iDistance as the only k-nearest neighbor search structure. Experiments performed on both small low-dimensional data sets and large high-dimensional data sets demonstrate the efficacy of our method. | Enhancing minimum spanning tree-based clustering by removing density-based outliers |
S1051200413000821 | This paper shows a new algorithm to improve the performance of IQ demodulators and frequency converters exhibiting gain and phase imbalances between their branches. This algorithm does not require any input calibration signal and is independent of the input signal level. It exploits the spectral coherence (SC) concept using a monobit kernel to achieve optimization targets with minimum time to convergence, low computational load, and a wide range of input levels. Its effectiveness is shown through a low-IF receiver that improves its image rejection ratio (IRR) from 30 dB to 60 dB. | Efficient adaptive compensation of I/Q imbalances using spectral coherence with monobit kernel |
S1051200413000833 | This paper presents the implementation of two hardware architectures, i.e., A2 Lattice Vector Quantization (LVQ) and Multistage A2LVQ (MA2LVQ), using a Field-Programmable Gate Array (FPGA). First, the renowned LVQ quantizer by Conway and Sloane is implemented, followed by a low-complexity A2LVQ based on a new A2LVQ algorithm. It is revealed that this implementation requires a high number of multiplier circuits. Then the implementation of the low-complexity A2LVQ is presented. This implementation uses only the first quadrant of the A2 lattice Voronoi region formed by the W and T regions. This paper also presents the implementation of a multistage A2LVQ (MA2LVQ) with an architecture built from successive A2 quantizer blocks. Synthesis results show that the execution time of the low-complexity A2LVQ reaches up to 35.97 ns. The MA2LVQ is implemented using both the low-complexity A2LVQ and the ordinary A2 architectures. The system with the former architecture uses 47% fewer logic and register elements. | Multistage A2LVQ architecture and implementation for image compression |
S1051200413000857 | A novel measure of the Autoregressive Causal Relation based on a multivariate autoregressive model is proposed. It reveals the strength of the connections among simultaneously recorded time series and also the direction of the information flow. It is defined in the frequency domain, similarly to previously published methods such as the Directed Transfer Function, Direct Directed Transfer Function, Partial Directed Coherence, and Generalized Partial Directed Coherence. Compared to the Granger causality concept, the frequency decomposition extends the possibility to reveal the frequency rhythms participating in the information flow in causal relations. The Autoregressive Causal Relation decomposes the diagonal elements of a spectral matrix and enables a user to distinguish between direct and indirect causal relations. The main advantage lies in its definition using power spectral densities, thus allowing for a clear interpretation of the strength of a causal relation in meaningful physical terms. The causal measures can be used in neuroscience applications, such as the analysis of the underlying structures of brain connectivity in multichannel neural time series during different tasks measured via electroencephalography or functional magnetic resonance imaging, or in other areas that use multivariate autoregressive models for causality modeling, such as econometrics or atmospheric physics; this paper, however, focuses on theoretical aspects and model data examples in order to illustrate the behavior of the methods in known situations. | Autoregressive causal relation: Digital filtering approach to causality measures in frequency domain |
S1051200413001127 | The amount of noise present in the Fiber Optic Gyroscope (FOG) signal limits its applications and has a negative impact on the navigation system. Existing algorithms such as the Discrete Wavelet Transform (DWT) and Kalman Filter (KF) denoise the FOG signal in a static environment; however, denoising fails in a dynamic environment. Therefore, in this paper an Adaptive Moving Average Dual Mode Kalman Filter (AMADMKF) is developed for denoising the FOG signal in both static and dynamic environments. The performance of the proposed algorithm is compared with the DWT and KF techniques. Further, a hardware Intellectual Property (IP) of the algorithm is developed for System on Chip (SoC) implementation using a Xilinx Virtex-5 Field Programmable Gate Array (Virtex-5FX70T-1136). The developed IP is interfaced as a Co-processor/Auxiliary Processing Unit (APU) with the PowerPC (PPC440) embedded processor of the FPGA. It is shown that the proposed system is an efficient solution for denoising the FOG signal in a real-time environment. The hardware acceleration of the developed Co-processor is 65× with respect to the equivalent software implementation of the AMADMKF algorithm on the PPC440 embedded processor. | Design and implementation of a realtime co-processor for denoising Fiber Optic Gyroscope signal |
S1051200413001140 | Multimedia-based hashing is considered an important technique for achieving authentication and copy detection in digital contents. However, 3D model hashing has not been as widely used as image or video hashing. In this study, we develop a robust 3D mesh-model hashing scheme based on a heat kernel signature (HKS) that can describe a multi-scale shape curve and is robust against isometric modifications. We further discuss the robustness, uniqueness, security, and spaciousness of the method for 3D model hashing. In the proposed hashing scheme, we calculate the local and global HKS coefficients of vertices through time scales and 2D cell coefficients by clustering HKS coefficients with variable bin sizes based on an estimated L2 risk function, and generate the binary hash through binarization of the intermediate hash values by combining the cell values and the random values. In addition, we use two parameters, bin center points and cell amplitudes, which are obtained through an iterative refinement process, to improve the robustness, uniqueness, security, and spaciousness further, and combine them in a hash with a key. By evaluating the robustness, uniqueness, and spaciousness experimentally, and through a security analysis based on the differential entropy, we verify that our hashing scheme outperforms conventional hashing schemes. | Key-dependent 3D model hashing for authentication using heat kernel signature |
S1051200413001164 | In this paper, the interaction and combination of the Fuzzy Fading Memory (FFM) technique and the Augmented Kalman Filtering (AUKF) method are presented for the state estimation of non-linear dynamic systems in the presence of maneuvers. It is shown that the AUKF method in conjunction with the FFM technique (FFM-AUKF) can estimate the target states appropriately, since the FFM tunes the covariance matrix of the AUKF method in the presence of unknown target accelerations by using a fuzzy system. In addition, the benefits of both the FFM technique and the AUKF method are employed in the scheme of the well-known Interacting Multiple Model (IMM) algorithm. The proposed Fuzzy IMM (FIMM) algorithm does not need the predefinition and adjustment of sub-filters with respect to the target maneuver and reduces the number of sub-filters required to cover the wide range of unknown target accelerations. The Monte Carlo simulation analysis shows the effectiveness of the above-mentioned methods in maneuvering target tracking. | An interacting Fuzzy-Fading-Memory-based Augmented Kalman Filtering method for maneuvering target tracking |
S1051200413001218 | Recovery of sparse signals from compressed measurements constitutes an ℓ0-norm minimization problem, which is impractical to solve. A number of sparse recovery approaches have appeared in the literature, including ℓ1 minimization techniques, greedy pursuit algorithms, Bayesian methods and nonconvex optimization techniques, among others. This manuscript introduces a novel two-stage greedy approach, called the Forward–Backward Pursuit (FBP). FBP is an iterative approach where each iteration consists of consecutive forward and backward stages. The forward step first expands the support estimate by the forward step size, while the following backward step shrinks it by the backward step size. The forward step size is larger than the backward step size, hence the initially empty support estimate is expanded at the end of each iteration. Forward and backward steps are iterated until the residual power of the observation vector falls below a threshold. This structure of FBP does not require the sparsity level to be known a priori, in contrast to the Subspace Pursuit or Compressive Sampling Matching Pursuit algorithms. FBP recovery performance is demonstrated via simulations, including recovery of random sparse signals with different nonzero coefficient distributions in noisy and noise-free scenarios, in addition to the recovery of a sparse image. | Compressed sensing signal recovery via forward–backward pursuit |
S1051200413001231 | For the detection of a weak known signal in additive white noise, a generalized correlation detector is considered. In the case of a large number of measurements, an asymptotic efficacy is analytically computed as a general measure of detection performance. The derivative of the efficacy with respect to the noise level is also analytically computed. Positivity of this derivative is the condition for enhancement of the detection performance by increasing the level of noise. The behavior of this derivative is analyzed in various important situations, especially showing when noise-enhanced detection is feasible and when it is not. | Weak signal detection: Condition for noise induced enhancement |
S1051200413001267 | We investigate channel equalization problem for time-varying flat fading channels under bounded channel uncertainties. We analyze three robust methods to estimate an unknown signal transmitted through a time-varying flat fading channel. These methods are based on minimizing certain mean-square error criteria that incorporate the channel uncertainties into their problem formulations instead of directly using the inaccurate channel information that is available. We present closed-form solutions to the channel equalization problems for each method and for both zero mean and nonzero mean signals. We illustrate the performances of the equalization methods through simulations. | Robust estimation in flat fading channels under bounded channel uncertainties |
S1051200413001280 | Visual secret sharing (VSS) is a variant form of secret sharing, and is efficient since secret decoding depends only on the human vision system. However, cheating in VSS, first shown by Horng et al., is a significant issue that has attracted much attention. Since then, many studies of cheating activities and cheating prevention visual secret sharing (CPVSS) schemes have been introduced. In this paper, we revisit some well-known cheating activities and CPVSS schemes, and then categorize cheating activities into meaningful cheating, non-meaningful cheating, and meaningful deterministic cheating. Moreover, we analyze the research challenges in CPVSS, and propose a new cheating prevention scheme which is better than the previous schemes with respect to several security requirements. | Visual secret sharing with cheating prevention revisited |
S1051200413001292 | The aim of this paper is to present the design of asymptotic unrestricted polar quantizers with square cells, since it is known that square cells give the minimal moment of inertia, which leads to minimization of distortion. Until now, polar quantizers with square cells have been designed only for the optimal companding function and their performance has been analyzed only for the stationary variance. In this paper, the design is done in a general manner, i.e. it is valid for any companding function, and performance is analyzed over a wide range of variances. After that, this general design is applied to the logarithmic μ-law companding function. It is important that expressions for the numbers of magnitude and phase levels are obtained in closed form, which simplifies the design. It is shown that the proposed polar quantizer has better performance (maximal and average SQNR (signal-to-quantization noise ratio) more than 3 dB higher) than the corresponding scalar quantizer with μ-law. Simulation is performed, and it is shown that the theoretical and simulation results match very well. It is shown that the proposed polar quantizer can be used for both stationary and non-stationary signals by choosing the appropriate value of μ. This quantizer can be widely used for all signals with a Gaussian distribution. | The general design of asymptotic unrestricted polar quantizers with square cells |
S1051200413001346 | This paper focuses on some critical issues in a recently reported approach [V. Singh, Improved LMI-based criterion for global asymptotic stability of 2-D state-space digital filters described by Roesser model using two's complement overflow arithmetic, Digital Signal Process. 22 (2012) 471–475] for the global asymptotic stability of two-dimensional (2-D) fixed-point state-space digital filters described by the Roesser model employing two's complement overflow arithmetic. In particular, it is highlighted that the situation where Singh's approach can be applied to ensure the global asymptotic stability of digital filters in the presence of two's complement overflow nonlinearities is not conceivable. | A note on the improved LMI-based criterion for global asymptotic stability of 2-D state-space digital filters described by Roesser model using two's complement overflow arithmetic |
S1051200413001371 | Estimation of the number of signals impinging on an array of sensors, also known as source enumeration, is usually required prior to direction-of-arrival (DOA) estimation. In challenging scenarios such as the presence of closely-spaced sources and/or high level of noise, using the true source number for nonlinear parameter estimation leads to the threshold effect which is characterized by an abnormally large mean square error (MSE). In cases that sources have distinct powers and/or are closely spaced, the error distribution among parameter estimates of different sources is unbalanced. In other words, some estimates have small errors while others may be quite inaccurate with large errors. In practice, we will be only interested in the former and have no concern on the latter. To formulate this idea, the concept of effective source number (ESN) is proposed in the context of joint source enumeration and DOA estimation. The ESN refers to the actual number of sources that are visible at a given noise level by a parameter estimator. Given the numbers of sensors and snapshots, number of sources, source parameters and noise level, a Monte Carlo method is designed to determine the ESN, which is the maximum number of available accurate estimates. The ESN has a theoretical value in that it is useful for judging what makes a good source enumerator in the threshold region and can be employed as a performance benchmark of various source enumerators. Since the number of sources is often unknown, its estimate by a source enumerator is used for DOA estimation. In an effort to automatically remove inaccurate estimates while keeping as many accurate estimates as possible, we define the matched source number (MSN) as the one which in conjunction with a parameter estimator results in the smallest MSE of the parameter estimates. We also heuristically devise a detection scheme that attains the MSN for ESPRIT based on the combination of state-of-the-art source enumerators. | Efficient source enumeration for accurate direction-of-arrival estimation in threshold region |
S1051200413001383 | This paper presents the formulation and performance analysis of four techniques for detection of a narrowband acoustic source in a shallow range-independent ocean using an acoustic vector sensor (AVS) array. The array signal vector is not known due to the unknown location of the source. Hence all detectors are based on a generalized likelihood ratio test (GLRT) which involves estimation of the array signal vector. One non-parametric and three parametric (model-based) signal estimators are presented. It is shown that there is a strong correlation between the detector performance and the mean-square signal estimation error. Theoretical expressions for probability of false alarm and probability of detection are derived for all the detectors, and the theoretical predictions are compared with simulation results. It is shown that the detection performance of an AVS array with a certain number of sensors is equal to or slightly better than that of a conventional acoustic pressure sensor array with thrice as many sensors. | Narrowband signal detection techniques in shallow ocean by acoustic vector sensor array |
S1051200413001401 | In this paper, a local weighted interpolation method for intra-field deinterlacing is proposed as an improved version of the DCS (deinterlacing with awareness of closeness and similarity) algorithm. The original DCS method is derived from bilateral filter which takes the local spatial closeness and pixel similarity into account when calculating the weight of interpolation. The proposed algorithm achieves three improvements: 1) instead of the line average, a more accurate interpolation filter is used to estimate the center missing pixel; 2) the center-independent interpolation method is proposed to replace the center-dependent interpolation strategy; 3) the adaptive weighted interpolation method is used to improve the accuracy of interpolation. Experimental results show that the proposed algorithm provides superior performance in terms of both objective and subjective image qualities when compared with other conventional benchmarks, including DCS algorithms with low complexity. | A local adaptive weighted interpolation for deinterlacing |
S1051200413001528 | In order to improve the detection performance of constant false alarm rate (CFAR) detectors in multiple-target situations, a CFAR detector based on the maximal reference cell (MRC), named MRC-CFAR, is proposed. In MRC-CFAR, a comparison threshold is generated by multiplying the amplitude of the MRC by a scaling factor. The number of remaining reference cells whose amplitudes are smaller than the comparison threshold is counted and compared with a threshold integer. Based on the comparison result, proper reference cells are selected for detection threshold computation. A closed-form analysis of MRC-CFAR in both homogeneous and non-homogeneous environments is presented. The performance of MRC-CFAR is evaluated and compared with other CFAR detectors. MRC-CFAR exhibits a very low CFAR loss in a homogeneous environment and performs robustly during clutter power transitions. In multiple-target situations, MRC-CFAR achieves much better detection performance than switching CFAR (S-CFAR) and order-statistic CFAR (OS-CFAR). Experimental results from an X-band linear frequency modulated continuous wave radar system are given to demonstrate the efficiency of MRC-CFAR. Because ranking of reference cells is not required for MRC-CFAR, its computational load is low, and it is easy to implement the detector in a radar system in practice. | Constant false alarm rate detector based on the maximal reference cell |
S105120041300184X | A simple physical model consisting of a point source displaced from its center of rotation, in combination with a directivity model that includes backwards emitted energy, is considered for the problem of estimating the orientation of a directional acoustic source. Such a problem arises, for instance, in voice-commanded devices in a smart room and is usually tackled with a large or distributed microphone array. We show, however, that when the time difference of arrival is also taken into account, a small array of only two microphones is sufficiently robust against unaccounted factors such as microphone directivity variation and mild reverberation. This is shown by comparing predicted and measured values of binaural cues, and by using them and pairwise frame energies as inputs for an artificial neural network (ANN) in order to estimate source orientation. | Directional acoustic source orientation estimation using only two microphones |
S1051200413001863 | Recently, mobile landmark recognition has become one of the emerging applications in mobile media, offering landmark information and e-commerce opportunities to both mobile users and business owners. Existing mobile landmark recognition techniques mainly use GPS (Global Positioning System) location information to obtain a shortlist of database landmark images near the query image, followed by visual content analysis within the shortlist. This is insufficient since (i) GPS data often has large errors in densely built-up areas, and (ii) direction data that can be acquired from mobile devices is underutilized to further improve recognition. In this paper, we propose to integrate content and context in an effective and efficient vocabulary tree framework. Specifically, visual content and two types of mobile context, location and direction, can be integrated by the proposed Context-aware Discriminative Vocabulary Tree Learning (CDVTL) algorithm. The experimental results show that the proposed mobile landmark recognition method outperforms the state-of-the-art methods by about 6%, 21% and 13% on NTU Landmark-50, PKU Landmark-198 and the large-scale San Francisco landmark dataset, respectively. | Context-aware Discriminative Vocabulary Tree Learning for mobile landmark recognition |
S1051200413001875 | Change detection and segmentation methods have gained considerable attention in scientific research and appear to be a central issue in various application areas. The objective of the paper is to present a segmentation method, based on the maximum a posteriori probability (MAP) estimator, with application in seismic signal processing; some interpretations and connections with other approaches to change detection and segmentation, as well as computational aspects in this field, are also discussed. The experimental results obtained by Monte Carlo simulations for signal segmentation using different signal models, including models with changes in the mean and in FIR, AR and ARX model parameters, as well as comparisons with other methods, are presented, and the effectiveness of the proposed approach is demonstrated. Finally, we discuss an application of segmentation to the analysis of the records of the Kocaeli earthquake, Turkey, August 1999, at the Arcelik station (ARC). The optimal segmentation results are compared with a time–frequency analysis using the reduced interference distribution (RID). The analysis results confirm the efficiency of the segmentation approach used, with the change instants obtained by MAP appearing clearly in the energy and frequency content of the time–frequency distribution. | Signal segmentation using changing regression models with application in seismic engineering |
S1051200413001905 | In this paper, we propose automatic image segmentation using constraint learning and propagation. Recently, kernel learning is receiving much attention because a learned kernel can fit the given data better than a predefined kernel. To effectively learn the constraints generated by initial seeds for image segmentation, we employ kernel propagation (KP) based on kernel learning. The key idea of KP is first to learn a small-sized seed-kernel matrix and then propagate it into a large-sized full-kernel matrix. By applying KP to automatic image segmentation, we design a novel segmentation method to achieve high performance. First, we generate pairwise constraints, i.e., must-link and cannot-link, from initially selected seeds to make the seed-kernel matrix. To select the optimal initial seeds, we utilize global k-means clustering (GKM) and self-tuning spectral clustering (SSC). Next, we propagate the seed-kernel matrix into the full-kernel matrix of the entire image, and thus image segmentation results are obtained. We test our method on the Berkeley segmentation database, and the experimental results demonstrate that the proposed method is very effective in automatic image segmentation. | Automatic image segmentation using constraint learning and propagation |
S1051200413002170 | This paper studies the detection of Least Significant Bit (LSB) steganography in digital media by using hypothesis testing theory. The main goal is threefold: first, to design a test whose statistical properties are known, which in particular allows a false-alarm probability to be guaranteed. Second, the quantization of samples is studied throughout the paper. Lastly, a linear parametric model of samples is used to estimate unknown parameters and to design a test which can be used when no information on the cover medium is available. To this end, the steganalysis problem is cast within the framework of hypothesis testing theory and digital media are considered as quantized signals. In a theoretical context where media parameters are assumed to be known, the Likelihood Ratio Test (LRT) is presented. Its statistical performance is analytically established; this highlights the impact of quantization on the most powerful steganalyzer. In a practical situation, when image parameters are unknown, a Generalized LRT (GLRT) is proposed based on a local linear parametric model of samples. The use of such a model allows us to establish the statistical properties of the GLRT in order to guarantee a prescribed false-alarm probability. Focusing on digital images, it is shown that the well-known WS (Weighted-Stego) detector is close to the proposed GLRT using a specific model of the cover image. Finally, numerical results on natural images show the relevance of the theoretical findings. | Hidden information detection using decision theory and quantized samples: Methodology, difficulties and results |
S1051200413002182 | With the rapidly rising interest in geographic information system (GIS) content, a large volume of valuable map data has been unlawfully distributed by pirates. Therefore, the secure storage and transmission of classified national digital map datasets have been increasingly threatened. As the importance of secure, large-volume map datasets has increased, vector map security techniques that focus on secure networks and data encryption have been studied. These techniques are required to ensure access control and prevent illegal copying of digital maps. This paper presents perceptual encryption in the vector compression domain for copy protection and access control of vector maps. Our algorithm compresses all vector data of polylines and polygons in lossless minimum coding object (MCO) units and perceptually encrypts them in two processes using the mean points and directions of the MCOs. The first process changes the position of the vector data by randomly permuting the mean points of the MCOs (so-called position encryption). The second process changes the geographic shape by circularly encrypting the directions of vertices in the MCOs with the XOR operator. Experimental results have verified that our algorithm can encrypt GIS digital maps effectively and simply and can also improve the compression ratio, unlike general data encryption techniques; thus, our algorithm is very effective for large volumes of GIS data. | Perceptual encryption with compression for secure vector map data processing |
S1051200413002303 | A previous paper [1] discussed the viability of functional analysis using a pair of generic functions as a basis, and hence vectorial decomposition. Here we complete the paradigm by exploiting one of the analysis methodologies developed there, but applied to phase coordinates, so that only one function is needed as a basis. It will be shown that, thanks to the novel iterative analysis, any function satisfying a rather loose requirement is ontologically a basis. This in turn generalizes the polar version of the Fourier theorem to a wide class of nonorthogonal bases. The main advantage of this generalization is that it inherits some of the properties of the original Fourier theorem. As a result the new transform has a wide range of applications and some remarkable consequences. The new tool will be compared with wavelets and frames. Examples of analysis and reconstruction of functions using the developed algorithms and generic bases will be given. Some of the properties, and applications that can readily benefit from the theory, will be discussed. The implementation of a matched filter for noise suppression will be used as an example of the potential of the theory. | Nonorthogonal bases and phase decomposition: Properties and applications |
S1051200413002352 | The histogram equalization (HE) method has proved to be a simple and effective technique for contrast enhancement of digital images. However, it does not preserve the brightness and natural appearance of the images, which is a major drawback. To overcome this limitation, several Bi- and Multi-HE methods have been proposed. Although the Bi-HE methods significantly enhance the contrast and may preserve the brightness, the natural appearance of the images is not preserved because these methods suffer from the problem of intensity saturation. Multi-HE methods further maintain the brightness and natural appearance of images, but at the cost of contrast enhancement. In this paper, two novel Multi-HE methods for contrast enhancement of natural images, which preserve the brightness and natural appearance of the images, are proposed. The technique involves decomposing the histogram of an input image into multiple segments based on mean or median values as thresholds. The narrow-range segments are identified and allocated the full dynamic range before HE is applied to each segment independently. Finally, the combined equalized histogram is normalized to avoid the saturation of intensities and the uneven distribution of bins. Simulation results show that, for a variety of test images (120 images), the proposed method enhances contrast while preserving brightness and natural appearance, and outperforms contemporary methods both qualitatively and quantitatively. The statistical consistency of the results has also been verified with the ANOVA statistical tool. | Segment dependent dynamic multi-histogram equalization for image contrast enhancement
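A minimal mean-based bi-histogram-equalization sketch in NumPy may help make the segment-and-equalize idea concrete; the function names are invented here, and the paper's additional steps (multi-segment splitting by mean/median thresholds, dynamic-range re-allocation for narrow segments, final bin normalization) are not reproduced.

```python
# Minimal sketch: split the histogram at the global mean and equalize each
# segment into its own output range (hypothetical helper names).
import numpy as np

def equalize_segment(img, mask, lo, hi):
    """Histogram-equalize the pixels selected by `mask` into the range [lo, hi]."""
    vals = img[mask]
    hist, _ = np.histogram(vals, bins=256, range=(0, 256))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                              # normalized CDF of the segment
    out = img.astype(float).copy()
    out[mask] = lo + cdf[vals] * (hi - lo)
    return out

def bi_histogram_equalization(img):
    """Two-segment HE with the global mean as the splitting threshold."""
    m = int(img.mean())
    low = equalize_segment(img, img <= m, 0, m)
    high = equalize_segment(img, img > m, m + 1, 255)
    return np.where(img <= m, low, high).astype(np.uint8)

img = (np.random.rand(64, 64) * 256).astype(np.uint8)
enhanced = bi_histogram_equalization(img)
```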
S105120041300242X | Fitting a pair of coupled geometric objects to a number of coordinate points is a challenging and important problem in many applications including coordinate metrology, petroleum engineering and image processing. This paper derives two asymptotically efficient estimators, one for concentric circles fitting and the other for concentric ellipses fitting, based on the weighted equation error formulation and non-linear parameter transformation. The Kanatani–Cramér–Rao (KCR) lower bounds for the parameter estimates of the concentric circles and concentric ellipses under zero-mean Gaussian noise are provided to serve as the performance benchmark. Small-noise analysis shows that the proposed estimators reach the KCR lower bound performance asymptotically. The accuracy of the proposed estimators is corroborated by experiments with synthetic data and realistic images. | Asymptotically efficient estimators for the fittings of coupled circles and ellipses |
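The equation-error idea behind the concentric-circles estimator can be illustrated with an ordinary (unweighted) algebraic least-squares fit; the sketch below assumes the points are already labelled per circle and omits the weighting and noise analysis that give the paper's estimator its asymptotic efficiency.

```python
# Algebraic fit of two circles sharing one center: x^2 + y^2 = 2ax + 2by + c_i,
# with c_i = r_i^2 - a^2 - b^2, solved as a linear least-squares problem.
import numpy as np

def fit_concentric_circles(pts1, pts2):
    """pts1, pts2: (N,2) arrays of points on circle 1 and circle 2."""
    def rows(pts, which):
        A = np.zeros((len(pts), 4))
        A[:, 0] = 2 * pts[:, 0]              # coefficient of a
        A[:, 1] = 2 * pts[:, 1]              # coefficient of b
        A[:, 2 + which] = 1.0                # coefficient of c_i
        rhs = (pts ** 2).sum(axis=1)         # x^2 + y^2
        return A, rhs
    A1, b1 = rows(np.asarray(pts1, float), 0)
    A2, b2 = rows(np.asarray(pts2, float), 1)
    A, rhs = np.vstack([A1, A2]), np.concatenate([b1, b2])
    a, b, c1, c2 = np.linalg.lstsq(A, rhs, rcond=None)[0]
    r1 = np.sqrt(c1 + a ** 2 + b ** 2)
    r2 = np.sqrt(c2 + a ** 2 + b ** 2)
    return np.array([a, b]), r1, r2

theta = np.linspace(0, 2 * np.pi, 50)
c_true = np.array([1.0, -2.0])
p1 = c_true + 3.0 * np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(50, 2)
p2 = c_true + 5.0 * np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(50, 2)
print(fit_concentric_circles(p1, p2))
```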
S1051200413002467 | A fast algorithm for matrix embedding steganography is proposed in this paper. Matrix embedding encodes the cover image and the secret message with an error correction code and modifies the cover image according to the coding result. The modification to the cover image is the coset leader of the error correction code, and it is computationally complex to find the coset leader. This paper proposes a fast algorithm to find the coset leader by using a lookup table algorithm. The proposed algorithm is suitable for matrix embedding steganography using Hamming code and random linear code. In our scheme, the syndrome of the coset is used to search for the coset leader in the standard array of the error correction code. For the Hamming code, we improved the parity check matrix of the code in order to make the syndrome indicate the coset leader by itself. Therefore, it is not necessary to search for the coset leader in a table. For the random linear code, this method is effective for most cosets, and we only memorize the coset leaders that cannot be identified by their syndromes. With this approach, the size of the table can be reduced significantly, and the computational complexity of embedding can be decreased. The proposed fast embedding algorithm has the same embedding efficiency as the conventional matrix embedding. Compared with the existing fast matrix embedding algorithms, the computational complexity of the proposed scheme is decreased significantly for the steganographic systems with low and medium embedding rates. | A fast algorithm for matrix embedding steganography |
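For the Hamming-code case, the look-up-free embedding described above can be sketched in a few lines: with the parity-check columns ordered as the binary representations of 1..n, the syndrome value itself indexes the single bit to flip. The (k, n) = (3, 7) toy below only illustrates that idea and is not the authors' implementation.

```python
# Hamming matrix embedding: hide k message bits in n cover bits, flipping at
# most one cover bit; the syndrome directly names the bit to flip.
import numpy as np

k, n = 3, 7
# Column i (1-based) of H is the binary representation of i (LSB in row 0).
H = np.array([[(i >> j) & 1 for i in range(1, n + 1)] for j in range(k)])

def embed(cover_bits, msg_bits):
    s = (H @ cover_bits + msg_bits) % 2          # syndrome of the required change
    idx = int(''.join(str(b) for b in s[::-1]), 2)  # syndrome read as an integer
    stego = cover_bits.copy()
    if idx:                                      # idx == 0 means nothing to flip
        stego[idx - 1] ^= 1
    return stego

def extract(stego_bits):
    return (H @ stego_bits) % 2                  # recovered message

cover = np.random.randint(0, 2, n)
msg = np.random.randint(0, 2, k)
stego = embed(cover, msg)
assert np.array_equal(extract(stego), msg)
```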
S1051200413002716 | [Graphical abstract: the peaks of the posterior state probability density function correspond to the two groups G1 and G2 and indicate their positions.] In recent years there has been an increasing interest in tracking a number of objects moving in a coordinated and interacting fashion. There are many fields in which such situations are frequently encountered: video surveillance, sport events, biomedicine, neuroscience, meteorology, situation awareness and search and rescue operations, to mention but a few. Although individual objects in the group can exhibit independent movement at a certain level, overall the group moves as one whole, synchronously with respect to the individual entities and avoiding collisions. | Overview of Bayesian sequential Monte Carlo methods for group and extended object tracking
S1051200413002728 | The traditional design method for a digital audio graphic equalizer using infinite impulse response filters usually has some deficiencies, including center frequency shift, narrower bandwidth at high frequencies and inaccurate gain. In this paper, an improved method based on a modified bilinear transformation is proposed to design a digital audio graphic equalizer. The new bilinear mapping compensates for the center frequency shift, while pre-distorting the quality factors and optimizing the gains correct the bandwidth and gain of each sub-band, respectively. Experimental results reveal that both the center frequency and bandwidth of the proposed digital graphic equalizer are strictly equal to the desired ones, and the average gain error decreases by at least 2 dB. | A pre-distortion based design method for digital audio graphic equalizer
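For reference, a conventional peaking-EQ biquad (audio-EQ-cookbook style) is sketched below; this is the kind of unmodified bilinear-transform design whose centre-frequency and bandwidth distortions the proposed method corrects, so the formulas and parameter names here are the standard ones rather than the paper's.

```python
# Standard peaking biquad via the bilinear transform; at high f0/fs the realized
# bandwidth and gain deviate from the specification, which motivates the paper.
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(f0, gain_db, Q, fs):
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * Q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000.0
b, a = peaking_biquad(f0=1000.0, gain_db=6.0, Q=1.4, fs=fs)
x = np.random.randn(int(fs))          # one second of white noise
y = lfilter(b, a, x)                  # equalized output of this single band
```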
S1051200413002753 | This paper discusses DNA watermarking for copyright protection and authentication of a DNA sequence. We propose a DNA watermarking method that provides mutation resistance, amino acid residue conservation, and watermark security. Our method allocates codons to random circular angles using a random mapping table and selects a number of codons as embedding targets using the Lipschitz regularity measured from the evolution across scales of the local modulus maxima of the codon circular angles. We then embed the watermark into the random circular angles of the codons without changing the amino acid residues. The length and location of the target codons depend on the random mapping table and on the singularity detected by the Lipschitz regularity. This table is used as the watermark key and can be applied to any codon sequence regardless of sequence length. Without knowledge of this table, it is very difficult to detect the length and location of the sequences used for extracting the watermark. Experimental results at comparable watermark capacities verified that our method has a lower bit error rate under point mutations than previous methods. Further, we established that the entropies of the random mapping table and of the locations of the target codons are high, indicating that the watermark is secure. | DNA sequence watermarking based on random circular angle
S1051200413002789 | Methods based on sparse coding have been used successfully in single-image super-resolution reconstruction. However, they tend to reconstruct the edge structure incorrectly and to lose the differences among the image patches to be reconstructed. To overcome these problems, we propose a new approach based on a global non-zero gradient penalty and non-local Laplacian sparse coding. First, we assume that the high-resolution image consists of two components: an edge component and a texture component. Second, we develop the global non-zero gradient penalty to reconstruct the edge component correctly and the non-local Laplacian sparse coding to preserve the differences among the texture component patches to be reconstructed. Finally, we apply a global and local optimization to the initial image, which is composed of the reconstructed edge and texture components, to remove possible artifacts. Experimental results demonstrate that the proposed approach achieves more competitive single-image super-resolution quality compared with other state-of-the-art methods. | Single-image super-resolution reconstruction based on global non-zero gradient penalty and non-local Laplacian sparse coding
S1051200413002790 | The problem of reconstructing a known high-resolution signal from a set of its low-resolution parts exposed to additive white Gaussian noise is addressed in this paper from the perspective of statistical multirate signal processing. To enhance the performance of the existing high-resolution signal reconstruction procedure that is based on using a set of linear periodically time-varying (LPTV) Wiener filter structures, we propose two empirical methods combining empirical mode decomposition- and least squares support vector machine regression-based noise reduction schemes with these filter structures. The methods originate from the idea of reducing the effects of white Gaussian noise present in the low-resolution observations before applying them directly to the LPTV Wiener filters. Performances of the proposed methods are evaluated over one-dimensional simulated signals and two-dimensional images. Simulation results show that, under certain conditions, considerable improvements have been achieved by the proposed methods when compared with the previous study that only uses a set of LPTV Wiener filter structures for the signal reconstruction process. | Two empirical methods for improving the performance of statistical multirate high-resolution signal reconstruction |
S1051200413002832 | The accuracy of pseudo-Zernike moments (PZMs) suffers from various errors, such as the geometric error, numerical integration error, and discretization error. Moreover, high order moments are vulnerable to numerical instability. In this paper, we present a method for the accurate calculation of PZMs which not only removes the geometric error and numerical integration error, but also provides numerical stability to PZMs of high orders. The geometric error is removed by taking square-grids and arc-grids, the ensemble of which maps exactly onto the circular domain of the PZM calculation. Gaussian numerical integration is used to eliminate the numerical integration error. The recursive methods for the calculation of pseudo-Zernike polynomials not only reduce the computational complexity, but also provide numerical stability to high order moments. A simple computational framework to implement the proposed approach is also discussed. Detailed experimental results are presented which prove the accuracy and numerical stability of PZMs. | Accurate calculation of high order pseudo-Zernike moments and their numerical stability
S1051200413002893 | Condition assessment is one of the most important techniques for realizing equipment health management and condition based maintenance (CBM). This paper introduces a preprocessing model for bearings using wavelet packet–empirical mode decomposition (WP-EMD) for feature extraction. It then uses a self-organizing map (SOM) for the condition assessment of performance degradation. To verify the superiority of the proposed method, it is compared with some traditional features, such as RMS, kurtosis, crest factor and entropy. Meanwhile, seventeen datasets from bearing run-to-failure tests are used to validate the proposed method. The analysis results for bearing signals with multiple faults show that the proposed assessment model can effectively indicate the degradation state and help estimate the remaining useful life (RUL) of the bearings. | Condition assessment for the performance degradation of bearing based on a combinatorial feature extraction method
S1051200413003126 | The accuracy of a source location estimate is very sensitive to the presence of the random noise in the known sensor positions. This paper investigates the use of calibration sensors, each of which is capable of broadcasting calibration signals to other sensors as well as receiving the signals from the source and other calibration sensors, to reduce the loss in the source localization accuracy due to uncertainties in sensor positions. We begin the study with deriving the Cramer–Rao lower bound (CRLB) for source localization using time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurements when a single calibration sensor is available. The obtained CRLB result is then extended to the more general case with multiple calibration sensors. The performance improvement due to the use of calibration sensors is established analytically. We then propose a closed-form algorithm that can explore efficiently the calibration sensors to improve the source localization accuracy when the sensor positions are subject to random errors. We prove analytically that the newly developed localization method attains the CRLB accuracy under some mild approximations. Simulations verify the theoretical developments. | On the use of calibration sensors in source localization using TDOA and FDOA measurements |
S1051200414000025 | This paper reports an experimental result obtained by using unlabeled data together with labeled data to improve the classification accuracy of dissimilarity-based methods, namely, dissimilarity-based classifications (DBC) [25]. In DBC, classifiers among classes are not based on the feature measurements of individual objects, but on a suitable dissimilarity measure among the objects. In order to measure the dissimilarity between pairs of objects, an approach using the one-shot similarity (OSS) [30] measuring technique instead of the Euclidean distance is investigated in this paper. In DBC using OSS, the unlabeled set can be used to extend the set of prototypes as well as to compute the OSS distance. The experimental results, obtained with artificial and real-life benchmark datasets, demonstrate that designing the classifiers in the OSS dissimilarity matrices, instead of expanding the set of prototypes, can further improve the classification accuracy in comparison with the traditional Euclidean approach. Moreover, the results demonstrate that the proposed setting does not work with non-Euclidean data. | An empirical study on improving dissimilarity-based classifications using one-shot similarity measure
S1051200414000074 | For linear time invariant transforms, multiplications in the transformed domains are referred to as filtering. On the other hand, multiplications in linear time varying transformed domains are referred to as mask operations. Discrete fractional Fourier transforms (DFrFTs) are linear time varying transforms which map signals from the time domain to rotated time–frequency domains. In this paper, the effects of the rotational angles of the DFrFTs on the output signals after applying the mask operations are studied. It is proved in this paper that if the rotational angles of the DFrFTs are not integer multiples of π, and are not odd integer multiples of π/2 when the signal lengths are odd, then there is only one degree of freedom for designing the mask coefficients. Otherwise, there are N degrees of freedom for designing the mask coefficients. Moreover, it is proved in this paper that satisfying the conditions for obtaining real valued output signals will automatically satisfy the conditions for obtaining wide sense stationary (WSS) output signals. Based on this result, designs of the mask coefficients are formulated as optimization problems with nonconvex L1-norm objective functions subject only to the conditions for obtaining real valued output signals. These constrained optimization problems are further reformulated as unconstrained optimization problems by a vector space approach. Finally, when there is only one degree of freedom for designing the mask coefficients, the globally optimal solutions of the unconstrained optimization problems are derived analytically. Computer numerical simulation results are presented for illustration. | Mask operations in discrete fractional Fourier transform domains with nearly white real valued wide sense stationary output signals
S1051200414000116 | In this paper we propose to solve a range of computational imaging problems under a unified perspective of a regularized weighted least-squares (RWLS) framework. These problems include data smoothing and completion, edge-preserving filtering, gradient-vector flow estimation, and image registration. Although originally very different, they are special cases of the RWLS model using different data weightings and regularization penalties. Numerically, we propose a preconditioned conjugate gradient scheme which is particularly efficient in solving RWLS problems. We provide a detailed analysis of the system conditioning justifying our choice of the preconditioner that improves the convergence. This numerical solver, which is simple, scalable and parallelizable, is found to outperform most of the existing schemes for these imaging problems in terms of convergence rate. | Fast solver for some computational imaging problems: A regularized weighted least-squares approach |
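A small one-dimensional instance of the RWLS framework (weighted data smoothing with a first-difference penalty) solved by preconditioned conjugate gradients may clarify the setup; the Jacobi preconditioner used here is only a stand-in for the preconditioner analyzed in the paper, and all names and values are illustrative.

```python
# Solve (W + lambda * D^T D) x = W b by CG with a diagonal preconditioner,
# i.e. weighted smoothing/completion as one instance of the RWLS model.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 500
t = np.linspace(0, 1, n)
b = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.randn(n)   # noisy data
w = np.ones(n); w[200:250] = 0.0                            # zero-weight (missing) samples
lam = 50.0

W = sp.diags(w)
D = sp.diags([np.ones(n - 1), -np.ones(n - 1)], [0, 1], shape=(n - 1, n))  # first differences
A = (W + lam * D.T @ D).tocsr()                             # normal-equation matrix
rhs = w * b

dA = A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / dA)         # Jacobi preconditioner
x, info = cg(A, rhs, M=M)
assert info == 0                                            # converged
```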
S1051200414000360 | Low density parity check (LDPC) codes exhibit near-capacity performance in terms of error correction. Large hardware costs, limited flexibility in terms of code length/code rate and considerable power consumption limit the use of belief-propagation based LDPC decoders in area and energy sensitive mobile environments. Serial bit flipping algorithms offer a trade-off between resource utilization and error correction performance at the expense of an increased number of decoding iterations required for convergence. Parallel weighted bit flipping decoding and its variants aim at reducing the decoding iterations and time by flipping the potentially erroneous bits in parallel. However, in most of the existing parallel decoding methods, the flipping threshold requires complex computations. In this paper, Hybrid Weighted Bit Flipping (HWBF) decoding is proposed to allow multiple bit flips in each decoding iteration. To compute the number of bits that can be flipped in parallel, a criterion for determining the relationship between the erroneous bits in the received code word is proposed. Using the proposed relation, the scheme can detect and correct a maximum of 3 erroneous hard decision bits in an iteration. The simulation results show that, as compared to existing serial bit flipping decoding methods, the number of iterations required for convergence is reduced by 45% and the decoding time is reduced by 40% by the use of the proposed HWBF decoding. As compared to existing parallel bit flipping decoding methods, the proposed HWBF decoding can achieve a similar bit error rate (BER) with the same number of iterations and lower computational complexity. Due to the reduced number of decoding iterations, lower computational complexity and reduced decoding time, the proposed HWBF decoding can be useful in energy sensitive mobile platforms. | Hybrid weighted bit flipping low density parity check decoding
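A single-flip weighted bit-flipping iteration, sketched below, shows the kind of flipping metric such decoders use; the HWBF extension to multiple coordinated flips per iteration is not reproduced, and the toy parity-check matrix serves only to exercise the function.

```python
# Weighted bit flipping: each failed check votes against its least reliable bit;
# the bit accumulating the largest vote is flipped, then the syndrome is re-checked.
import numpy as np

def wbf_decode(H, y, max_iter=50):
    """H: (M,N) parity-check matrix; y: received soft values (BPSK plus noise)."""
    z = (y < 0).astype(int)                                    # hard decisions
    reliab = np.where(H == 1, np.abs(y), np.inf).min(axis=1)   # per-check min |y_n|
    for _ in range(max_iter):
        syndrome = (H @ z) % 2
        if not syndrome.any():
            break                                              # valid codeword found
        E = ((2 * syndrome - 1) * reliab) @ H                  # flipping metric per bit
        z[np.argmax(E)] ^= 1                                   # flip the worst bit
    return z

# toy (7,4) Hamming parity-check matrix, used only as an illustration
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, int)
y = 1.0 - 2.0 * codeword + 0.6 * np.random.randn(7)            # BPSK over AWGN
print(wbf_decode(H, y))
```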
S1051200414000372 | Biological pathways can be modeled as a nonlinear system described by a set of nonlinear ordinary differential equations (ODEs). A central challenge in computational modeling of biological systems is the determination of the model parameters. In such cases, estimating these variables or parameters from other easily obtained measurements can be extremely useful. For example, time-series dynamic genomic data can be used to develop models representing dynamic genetic regulatory networks, which can be used to design intervention strategies to cure major diseases and to better understand the behavior of biological systems. Unfortunately, biological measurements are usually highly affected by errors that hide the important characteristics in the data. Therefore, these noisy measurements need to be filtered to enhance their usefulness in practice. This paper addresses the problem of state and parameter estimation of biological phenomena modeled by S-systems using Bayesian approaches, where the nonlinear observed system is assumed to progress according to a probabilistic state space model. The performances of various conventional and state-of-the-art state estimation techniques are compared. These techniques include the extended Kalman filter (EKF), unscented Kalman filter (UKF), particle filter (PF), and the developed improved particle filter (IPF). Specifically, two comparative studies are performed. In the first comparative study, the state variables (the enzyme CadA, the transport protein CadB, the regulatory protein CadC and lysine Lys for a model of the Cad System in E. coli (CSEC)) are estimated from noisy measurements of these variables, and the various estimation techniques are compared by computing the estimation root mean square error (RMSE) with respect to the noise-free data. In the second comparative study, the state variables as well as the model parameters are simultaneously estimated. In this case, in addition to comparing the performances of the various state estimation techniques, the effect of the number of estimated model parameters on the accuracy and convergence of these techniques is also assessed. The results of both comparative studies show that the UKF provides a higher accuracy than the EKF due to the limited ability of the EKF to accurately estimate the mean and covariance matrix of the estimated states through linearization of the nonlinear process model. The results also show that the IPF provides a significant improvement over the PF because, unlike the PF, which depends on the choice of the sampling distribution used to estimate the posterior distribution, the IPF yields an optimum choice of the sampling distribution, which also accounts for the observed data. The results of the second comparative study show that, for all techniques, estimating more model parameters affects the estimation accuracy as well as the convergence of the estimated states and parameters. However, the IPF can still provide both convergence and accuracy advantages over the other estimation methods. | State and parameter estimation for nonlinear biological phenomena modeled by S-systems
S1051200414000426 | A class of weighted orthogonal constrained independent component analysis (ICA) algorithms, which use weighted orthogonalization to achieve this constraint, has been proposed recently. It has been proved in the literature that weighted orthogonal constrained ICA algorithms keep the equivariance property and have much better convergence speed, separation ability and steady state misadjustment, but their convergence has not yet been analyzed in the published literature. The goal of this paper is to fill this gap. Firstly, a characterization of the stationary points corresponding to these algorithms using the symmetric Minimum Distance Weighted Unitary Mapping (MDWUM) for achieving the weighted orthogonalization is obtained. Secondly, the monotonic convergence of the weighted orthogonal constrained fixed point ICA algorithms using the symmetric MDWUM for convex contrast functions is proved, which is further extended to the nonconvex contrast function case by adding a weighted orthogonal constraint term to the contrast function. Together with the boundedness of the contrast function, the convergence of fixed point ICA algorithms with weighted orthogonal constraint using the symmetric MDWUM is implied. Simulation experiment results show that the adaptive ICA algorithms using the symmetric MDWUM are more accurate than the ones with pre-whitening, and the fixed-point ICA algorithms using the symmetric MDWUM converge monotonically. | On the convergence of ICA algorithms with weighted orthogonal constraint
S1051200414000438 | This paper deals with the problem of global asymptotic stability of fixed-point state-space digital filters under various combinations of quantization and overflow nonlinearities and for the situation where quantization occurs after summation only. Utilizing the structural properties of the nonlinearities in greater detail, a new global asymptotic stability criterion is proposed. A unique feature of the presented approach is that it exploits the information about the maximum normalized quantization error of the quantizer and the maximum representable number for a given wordlength. The approach leads to an enhanced stability region in the parameter-space, as compared to several previously reported criteria. | An improved criterion for the global asymptotic stability of fixed-point state-space digital filters with combinations of quantization and overflow |
S1051200414000451 | The Otsu method is one of the most popular image thresholding methods. The segmentation results of the Otsu method are in general acceptable for gray level images with bimodal histogram patterns that can be approximated with mixtures of Gaussian models. However, it is difficult for the Otsu method to determine reliable thresholds for images with mixtures of non-Gaussian models, such as mixtures of Rayleigh, extreme value, Beta or uniform models, or comb-like models. In order to determine automatically robust and optimum thresholds for images with various histogram patterns, this paper proposes a new global thresholding method based on a maximum-image-similarity idea. The idea is inspired by analyzing the relationship between the Otsu method and the Pearson correlation coefficient (PCC), which provides a novel interpretation of the Otsu method from the perspective of maximizing image similarity. It is then natural to construct a maximum similarity thresholding (MST) framework by generalizing the Otsu method with the maximum-image-similarity concept. As an example, a novel MST method is directly designed according to this framework, and its robustness and effectiveness are confirmed by experimental results on 41 synthetic images and 86 real world images with various histogram shapes. Its extension to the multilevel thresholding case is also discussed briefly. | Maximum similarity thresholding
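Since the MST framework is presented as a generalization of Otsu's criterion via the Pearson correlation coefficient, the baseline Otsu computation is sketched below; the code implements only the classical between-class-variance maximization, not the new MST variant.

```python
# Otsu threshold: pick t maximizing the between-class variance
# sigma_b^2(t) = (mu_T * omega(t) - mu(t))^2 / (omega(t) * (1 - omega(t))).
import numpy as np

def otsu_threshold(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                          # class-0 probability for each t
    mu = np.cumsum(p * np.arange(256))            # cumulative first moment
    mu_T = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_T * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

# toy bimodal image: two Gaussian-like modes around 60 and 180
img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(180, 12, 5000)]).clip(0, 255)
img = img.astype(np.uint8).reshape(100, 100)
t = otsu_threshold(img)
binary = img > t
```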
S1051200414000487 | Compressive sensing (CS) is an emerging approach for the acquisition of sparse or compressible signals. For natural images, block compressive sensing (BCS) has been designed to reduce the size of the sensing matrix and the complexity of sampling and reconstruction. On the other hand, image blocks with varying structures are too different to share the same sampling rate and sensing matrix. Motivated by this, a novel framework of adaptive acquisition and reconstruction is proposed to assign the sampling rate adaptively. The framework contains three aspects. First, a small part of the sampling rate is employed to pre-sense each block, and a novel approach is proposed to estimate its compressibility only from the pre-sensed measurements. Next, two assignment schemes are proposed to assign the remaining part of the sampling rate adaptively to each block based on its estimated compressibility. A higher sampling rate is assigned to incompressible blocks and a lower one to compressible blocks. The sensing matrix is constructed based on the assigned sampling rates. The pre-sensed measurements and the adaptive ones are concatenated to form the final measurements. Finally, the reconstruction is modeled as a multi-objective optimization problem which involves the structured sparsity and the non-local total variation priors together. It is simplified into a 3-stage alternating optimization problem and is solved by an augmented Lagrangian method. Experiments on four categories of real natural images and medical images demonstrate that the proposed framework captures local and nonlocal structures and outperforms the state-of-the-art methods. | Self-adaptive sampling rate assignment and image reconstruction via combination of structured sparsity and non-local total variation priors
S105120041400075X | We propose a new method to incorporate priors on the solution of nonnegative matrix factorization (NMF). The NMF solution is guided to follow the minimum mean square error (MMSE) estimates of the weight combinations under a Gaussian mixture model (GMM) prior. The proposed algorithm can be used for denoising or single-channel source separation (SCSS) applications. NMF is used in SCSS in two main stages, the training stage and the separation stage. In the training stage, NMF is used to decompose the training data spectrogram for each source into a multiplication of a trained basis and gains matrices. In the separation stage, the mixed signal spectrogram is decomposed as a weighted linear combination of the trained basis matrices for the source signals. In this work, to improve the separation performance of NMF, the trained gains matrices are used to guide the solution of the NMF weights during the separation stage. The trained gains matrix is used to train a prior GMM that captures the statistics of the valid weight combinations that the columns of the basis matrix can receive for a given source signal. In the separation stage, the prior GMMs are used to guide the NMF solution of the gains/weights matrices using MMSE estimation. The NMF decomposition weights matrix is treated as a distorted image by a distortion operator, which is learned directly from the observed signals. The MMSE estimate of the weights matrix under the trained GMM prior and log-normal distribution for the distortion is then found to improve the NMF decomposition results. The MMSE estimate is embedded within the optimization objective to form a novel regularized NMF cost function. The corresponding update rules for the new objectives are derived in this paper. The proposed MMSE estimates based regularization avoids the problem of computing the hyper-parameters and the regularization parameters. MMSE also provides a better estimate for the valid gains matrix. Experimental results show that the proposed regularized NMF algorithm improves the source separation performance compared with using NMF without a prior or with other prior models. | Source separation using regularized NMF with MMSE estimates under GMM priors with online learning for the uncertainties |
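The separation stage that the proposed prior regularizes can be illustrated with plain KL-divergence NMF weight updates against fixed, pre-trained bases; the GMM-prior MMSE guidance of the weights, which is the paper's contribution, is omitted, and all array shapes below are arbitrary.

```python
# Separation-stage NMF: V ~ [B1 | B2] G with the stacked bases held fixed and
# only the weights G updated (multiplicative KL-divergence updates).
import numpy as np

def nmf_weights(V, B, n_iter=200, eps=1e-12):
    G = np.abs(np.random.rand(B.shape[1], V.shape[1]))
    col_sums = B.sum(axis=0)[:, None] + eps          # B^T 1, reused every iteration
    for _ in range(n_iter):
        G *= (B.T @ (V / (B @ G + eps))) / col_sums  # multiplicative update
    return G

F, T, K = 129, 100, 20
B1 = np.abs(np.random.rand(F, K))                    # stand-ins for trained bases
B2 = np.abs(np.random.rand(F, K))
V = np.abs(np.random.rand(F, T))                     # stand-in for a mixture magnitude STFT
G = nmf_weights(V, np.hstack([B1, B2]))
S1_hat = B1 @ G[:K]                                  # per-source spectrogram estimates
S2_hat = B2 @ G[K:]                                  # (Wiener-style masking could follow)
```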
S1051200414000761 | In this paper we introduce a robust feature extractor, dubbed as robust compressive gammachirp filterbank cepstral coefficients (RCGCC), based on an asymmetric and level-dependent compressive gammachirp filterbank and a sigmoid shape weighting rule for the enhancement of speech spectra in the auditory domain. The goal of this work is to improve the robustness of speech recognition systems in additive noise and real-time reverberant environments. As a post processing scheme we employ a short-time feature normalization technique called short-time cepstral mean and scale normalization (STCMSN), which, by adjusting the scale and mean of cepstral features, reduces the difference of cepstra between the training and test environments. For performance evaluation, in the context of speech recognition, of the proposed feature extractor we use the standard noisy AURORA-2 connected digit corpus, the meeting recorder digits (MRDs) subset of the AURORA-5 corpus, and the AURORA-4 LVCSR corpus, which represent additive noise, reverberant acoustic conditions and additive noise as well as different microphone channel conditions, respectively. The ETSI advanced front-end (ETSI-AFE), the recently proposed power normalized cepstral coefficients (PNCC), conventional MFCC and PLP features are used for comparison purposes. Experimental speech recognition results demonstrate that the proposed method is robust against both additive and reverberant environments. The proposed method provides comparable results to that of the ETSI-AFE and PNCC on the AURORA-2 as well as AURORA-4 corpora and provides considerable improvements with respect to the other feature extractors on the AURORA-5 corpus. | Robust feature extraction based on an asymmetric level-dependent auditory filterbank and a subband spectrum enhancement technique |
S1051200414000979 | The close proximity between airports and residential areas has drawn growing attention to noise pollution. The noise abatement procedures established by the aeronautical authorities and the models for computing noise contours around airports are proof of that. There are also models for identifying aircraft taking off which have focused on the correlation between the aircraft position and the noise signal. However, this correlation has so far been made without spatial information. The present study proposes a method to estimate the geo-referenced flight path followed by an aircraft taking off, using the spatio-temporal information extracted from the noise signal and improved with a smoothing algorithm. A microphone array with twelve sensors is used in order to evaluate different sensor spacings and the spatial aliasing effect when working with take-off noise signals. The flight path estimation method assumes that the aircraft follows a ground track collinear with the runway and was compared against radar information and Automatic Dependent Surveillance-Broadcast (ADS-B) data. The average accuracy of the method was between 3 and 6 meters. The estimated flight path has a ground length of about two kilometers, including locations at least one kilometer away from the measurement point. | Geo-referenced flight path estimation based on spatio-temporal information extracted from aircraft take-off noise
S1051200414000992 | This paper presents a stochastic model for the normalized least-mean-square (NLMS) algorithm operating in a nonstationary environment with complex-valued Gaussian input data. To derive this model, several approximations commonly used in the modeling of algorithms with normalized step size are avoided, thus giving rise to very accurate model expressions describing the algorithm behavior in both transient and steady-state phases. Such accuracy comes mainly from the strategy used for computing the normalized autocorrelation-like matrices arising from the model development, for which analytical solutions are also derived here. In addition, based on the proposed model expressions, the impact of the algorithm parameters on its performance is discussed, clarifying the tracking properties of the NLMS algorithm in a nonstationary environment. Through simulation results, the effectiveness of the proposed model is assessed for different operating scenarios. | Stochastic modeling of the NLMS algorithm for complex Gaussian input data and nonstationary environment |
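For readers who want the algorithm being modeled, a bare complex-valued NLMS identification loop is sketched below (regularized normalized step size); the stochastic model expressions themselves are not reproduced, and the plant coefficients and signal lengths are arbitrary.

```python
# Complex NLMS identification: w(n+1) = w(n) + mu * e(n) * conj(u(n)) / (eps + ||u(n)||^2),
# with y(n) = w^T u(n) so that the converged taps approximate the plant h directly.
import numpy as np

def nlms(x, d, num_taps, mu=0.5, eps=1e-6):
    w = np.zeros(num_taps, dtype=complex)
    e = np.zeros(len(x), dtype=complex)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]      # regressor [x[n], x[n-1], ...]
        y = np.dot(w, u)                         # filter output
        e[n] = d[n] - y
        w += (mu / (eps + np.vdot(u, u).real)) * e[n] * np.conj(u)
    return w, e

rng = np.random.default_rng(0)
x = (rng.standard_normal(5000) + 1j * rng.standard_normal(5000)) / np.sqrt(2)
h = np.array([0.8 + 0.2j, -0.4j, 0.25, 0.1 - 0.1j])          # unknown plant
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, err = nlms(x, d, num_taps=4)
```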
S1051200414001018 | Tone mapping algorithms are used in image processing to reduce the dynamic range of an image so that it can be displayed on low dynamic range (LDR) devices. The Retinex, which was developed using multi-scale and luminance-based methods, is one of the tone mapping algorithms for dynamic range compression, color constancy and color rendition. Retinex algorithms still have drawbacks, such as lower contrast and desaturation. This paper proposes a multi-scale luminance adaptation transform (MLAT) based on visual brightness functions for enhancing the contrast and saturation of rendered images. In addition, the proposed algorithm estimates the minimum and maximum luminance and a visual gamma function for locally adapted viewing conditions. MLAT showed enhanced contrast and better color representation than the conventional methods in the objective evaluations (CIEDE2000 and VCM). | Luminance adaptation transform based on brightness functions for LDR image reproduction
S1051200414001043 | In this paper, the problem of designing weighted fusion robust time-varying Kalman predictors is considered for multisensor time-varying systems with uncertainties of noise variances. Using the minimax robust estimation principle and the unbiased linear minimum variance (ULMV) rule, based on the worst-case conservative system with the conservative upper bounds of noise variances, the local and five weighted fused robust time-varying Kalman predictors are designed, which include a robust weighted measurement fuser, three robust weighted state fusers, and a robust covariance intersection (CI) fuser. Their actual prediction error variances are guaranteed to have the corresponding minimal upper bounds for all admissible uncertainties of noise variances. Their robustness is proved based on the proposed Lyapunov equation approach. The concept of the robust accuracy is presented, and the robust accuracy relations are proved. The corresponding steady-state robust local and fused Kalman predictors are also presented, and the convergence in a realization between the time-varying and steady-state robust Kalman predictors is proved by the dynamic error system analysis (DESA) method and the dynamic variance error system analysis (DVESA) method. Simulation results show the effectiveness and correctness of the proposed results. | Robust weighted fusion Kalman predictors with uncertain noise variances |
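One of the fusers mentioned above, covariance intersection, is simple enough to sketch: for two local estimates the fused information matrix is a convex combination of the local ones, with the weight chosen by a scalar search over the trace of the fused covariance. This is a generic CI sketch, not the robust time-varying predictor construction of the paper.

```python
# Covariance intersection of two estimates (x1, P1) and (x2, P2):
# P_f^{-1} = w P1^{-1} + (1-w) P2^{-1}, with w minimizing trace(P_f).
import numpy as np
from scipy.optimize import minimize_scalar

def ci_fuse(x1, P1, x2, P2):
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    def fused_trace(w):
        return np.trace(np.linalg.inv(w * I1 + (1 - w) * I2))
    w = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method='bounded').x
    P = np.linalg.inv(w * I1 + (1 - w) * I2)
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
    return x, P

x1, P1 = np.array([1.0, 0.0]), np.array([[2.0, 0.4], [0.4, 1.0]])
x2, P2 = np.array([1.2, -0.1]), np.array([[1.0, 0.1], [0.1, 3.0]])
x_f, P_f = ci_fuse(x1, P1, x2, P2)
```

The CI weight avoids assuming knowledge of the cross-covariance between the two local errors, which is why it appears as the most conservative of the fusion rules compared in the paper.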
S1051200414001316 | This paper investigates the permutation ambiguity problem in frequency-domain blind source separation and proposes a robust permutation alignment algorithm based on inter-frequency dependency, which is measured by the correlation coefficient between the time activity sequences of separated signals. To calculate a global reference for permutation alignment, a multi-band multi-centroid clustering algorithm is proposed where at first the permutation inside each subband is aligned with multi-centroid clustering and then the permutation among subbands is aligned sequentially. The multi-band scheme can reduce the dynamic range of the activity sequence and improve the efficiency of clustering, while the multi-centroid clustering scheme can improve the precision of the reference and reduce the risk of wrong permutation among subbands. The combination of two techniques enables to capture the variation of the time–frequency activity of a speech signal precisely, promising robust permutation alignment performance. Extensive experiments are carried out in different testing scenarios (up to reverberation time of 700 ms and 4 × 4 mixtures) to investigate the influence of two parameters, the number of subbands and the number of clustering-centroids, on the performance of the proposed algorithm. Comparison with existing permutation alignment algorithms proves that the proposed algorithm can improve the robustness in challenging scenarios and can reduce block permutation errors effectively. | Multi-band multi-centroid clustering based permutation alignment for frequency-domain blind speech separation |
S1051200414001353 | This paper presents a method based on graph behaviour analysis for the evaluation of descriptor graphs (applied to image/video datasets) for descriptor performance analysis and ranking. Starting from the Erdős–Rényi model on uniform random graphs, the paper presents results of investigating random geometric graph behaviour in relation with the appearance of the giant component as a basis for ranking descriptors based on their clustering properties. We analyse the phase transition and the evolution of components in such graphs, and based on their behaviour, the corresponding descriptors are compared, ranked, and validated in retrieval tests. The goal is to build an evaluation framework where descriptors can be analysed for automatic feature selection. | Component evolution analysis in descriptor graphs for descriptor ranking |
S1051200414001389 | In this paper, a low-complexity algorithm SAGE-USL is presented for 3-dimensional (3-D) localization of multiple acoustic sources in a shallow ocean with non-Gaussian ambient noise, using a vertical and a horizontal linear array of sensors. In the proposed method, noise is modeled as a Gaussian mixture. Initial estimates of the unknown parameters (source coordinates, signal waveforms and noise parameters) are obtained by known/conventional methods, and a generalized expectation maximization algorithm is used to update the initial estimates iteratively. Simulation results indicate that convergence is reached in a small number of (≤10) iterations. Initialization requires one 2-D search and one 1-D search, and the iterative updates require a sequence of 1-D searches. Therefore the computational complexity of the SAGE-USL algorithm is lower than that of conventional techniques such as 3-D MUSIC by several orders of magnitude. We also derive the Cramér–Rao Bound (CRB) for 3-D localization of multiple sources in a range-independent ocean. Simulation results are presented to show that the root-mean-square localization errors of SAGE-USL are close to the corresponding CRBs and significantly lower than those of 3-D MUSIC. | Three-dimensional localization of multiple acoustic sources in shallow ocean with non-Gaussian noise |
S1051200414001407 | Automatic recognition of children's speech is a challenging topic in computer-based speech recognition systems. The conventional feature extraction method, namely the Mel-frequency cepstral coefficient (MFCC), is not efficient for children's speech recognition. This paper proposes a novel fuzzy-based discriminative feature representation to address the recognition of Malay vowels uttered by children. Owing to the age-dependent variation of acoustical speech parameters, the performance of automatic speech recognition (ASR) systems degrades on children's speech. To solve this problem, this study addresses the representation of relevant and discriminative features for children's speech recognition. The addressed methods include extraction of MFCCs with a narrower filter bank followed by a fuzzy-based feature selection method. The proposed feature selection provides relevant, discriminative, and complementary features. For this purpose, conflicting objective functions for measuring the goodness of the features have to be fulfilled. To this end, a fuzzy formulation of the problem and fuzzy aggregation of the objectives are used to address the uncertainties involved with the problem. The proposed method can diminish the dimensionality without compromising the speech recognition rate. To assess the capability of the proposed method, the study analyzed six Malay vowels from recordings of 360 children, ages 7 to 12. Upon extracting the features, two well-known classification methods, namely MLP and HMM, were employed for the speech recognition task. Optimal parameter adjustment was performed for each classifier to adapt them for the experiments. The experiments were conducted in a speaker-independent manner. The proposed method performed better than the conventional MFCC and a number of conventional feature selection methods in the children's speech recognition task. The fuzzy-based feature selection allowed the flexible selection of the MFCCs with the best discriminative ability to enhance the difference between the vowel classes. | Fuzzy-based discriminative feature representation for children's speech recognition
S1051200414001572 | Owing to its excellent ability to characterize the sparsity of natural images, ℓ1-norm sparse representation (SR) is widely applied to face hallucination. However, the determination of two key parameters, the patch size and the regularization parameter, has not been satisfactorily resolved yet. To this end, we propose a novel parameter estimation method that identifies them in an analytical way. In particular, the optimal patch size is derived from the sufficient condition for reliable sparse signal recovery established in compressive sensing theory. Furthermore, by interpreting ℓ1-norm SR as the corresponding maximum a posteriori estimator with a Laplace prior constraint, we obtain an explicit expression for the regularization parameter in terms of the statistics of the reconstruction errors and coefficients. Our proposed method significantly reduces the computational cost of parameter determination without sacrificing numerical precision or the eventual face hallucination performance. Experimental results on degraded images in simulated and real-world scenarios validate its effectiveness. | Parameter estimation in sparse representation based face hallucination
S1051200414001596 | Most speech enhancement methods based on short-time spectral modification are generally expressed as a spectral gain depending on the estimate of the local signal-to-noise ratio (SNR) in each frequency bin. Several studies have analyzed the performance of a priori SNR estimation algorithms to improve speech quality and to reduce speech distortions. In this paper, we concentrate on the analysis of over- and under-estimation of the a priori SNR in speech enhancement and noise reduction systems. We first show that conventional approaches such as the decision-directed approach proposed by Ephraim and Malah lead to a biased estimator of the a priori SNR. To reduce this bias, our strategy relies on the introduction of a correction term in the a priori SNR estimate depending on the current state of both the available a posteriori SNR and the estimated a priori one. The proposed solution leads to a bias-compensated a priori SNR estimate, and allows the output speech signal to be estimated very close to the original one in each frequency bin. Such a refinement of the a priori SNR estimate can be inserted in any type of spectral gain function to improve the output speech quality. Objective tests under various environments in terms of the Normalized Covariance Metric (NCM) criterion, the Coherence Speech Intelligibility Index (CSII) criterion, the segmental SNR criterion and the Perceptual Evaluation of Speech Quality (PESQ) measure are presented, showing the superiority of the proposed method compared to competitive algorithms. | Reducing over- and under-estimation of the a priori SNR in speech enhancement techniques
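The baseline recursion being analyzed, the decision-directed a priori SNR estimator, can be sketched as follows; the proposed bias-compensation correction term is not included, and the smoothing constant, gain rule and variable names are generic choices rather than the paper's.

```python
# Decision-directed a priori SNR: xi(k,l) = alpha * |Shat(k,l-1)|^2 / sigma_n^2(k)
#                                         + (1 - alpha) * max(gamma(k,l) - 1, 0).
import numpy as np

def decision_directed_snr(noisy_power, noise_power, alpha=0.98, gain_floor=0.1):
    """noisy_power: |Y(k,l)|^2, shape (frames, bins); noise_power: noise PSD per bin."""
    frames, n_bins = noisy_power.shape
    xi = np.zeros_like(noisy_power)
    prev_clean = np.zeros(n_bins)                            # |Shat|^2 of previous frame
    for l in range(frames):
        gamma = noisy_power[l] / noise_power                 # a posteriori SNR
        xi[l] = alpha * prev_clean / noise_power + (1 - alpha) * np.maximum(gamma - 1.0, 0.0)
        gain = np.maximum(xi[l] / (1.0 + xi[l]), gain_floor)  # Wiener-type gain
        prev_clean = (gain ** 2) * noisy_power[l]
    return xi

rng = np.random.default_rng(1)
noisy = rng.exponential(1.0, size=(100, 257))                # toy spectrogram powers
xi = decision_directed_snr(noisy, noise_power=np.ones(257))
```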
S1051200414001602 | This paper presents a cuckoo search algorithm (CSA) based adaptive infinite impulse response (IIR) system identification scheme. The proposed scheme prevents the local minima problem encountered in conventional IIR modeling mechanisms. The performance of the new method has been compared with that obtained by other evolutionary computing algorithms like genetic algorithm (GA) and particle swarm optimization (PSO). The superior system identification capability of the proposed scheme is evident from the results obtained through an exhaustive simulation study. | On a cuckoo search optimization approach towards feedback system identification |
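A condensed cuckoo-search loop for IIR model fitting is sketched below to make the idea concrete: each nest holds candidate coefficients, the fitness is the output mean square error, and nests are perturbed by Lévy flights. The second-order plant, step size and population settings are toy values, not the paper's experimental setup.

```python
# Cuckoo search over IIR coefficients (b0, b1, a1, a2) minimizing the MSE
# between the unknown plant output and the candidate model output.
import numpy as np
from math import gamma, pi, sin
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = lfilter([0.05, -0.4], [1.0, -1.1314, 0.25], x)   # toy "unknown" plant output

def mse(theta):
    b0, b1, a1, a2 = theta
    y = lfilter([b0, b1], [1.0, a1, a2], x)
    err = np.mean((d - y) ** 2)
    return err if np.isfinite(err) else np.inf        # guard against unstable candidates

def levy(dim, beta=1.5):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

n_nests, dim, pa = 25, 4, 0.25
nests = rng.uniform(-1, 1, (n_nests, dim))
fit = np.array([mse(nst) for nst in nests])
for _ in range(300):
    best = nests[np.argmin(fit)]
    for i in range(n_nests):
        cand = nests[i] + 0.01 * levy(dim) * (nests[i] - best)   # Levy-flight move
        f = mse(cand)
        if f < fit[i]:
            nests[i], fit[i] = cand, f
    abandon = rng.random(n_nests) < pa                           # abandon a fraction of nests
    nests[abandon] = rng.uniform(-1, 1, (int(abandon.sum()), dim))
    fit[abandon] = [mse(nst) for nst in nests[abandon]]
print(nests[np.argmin(fit)], fit.min())
```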
S105120041400164X | A text-independent speaker recognition system using a hybrid Probabilistic Principal Component Analysis (PPCA) and conventional i-vector modeling technique is proposed. In this framework, the total variability space (TVS) is estimated using PPCA while the i-vectors of target speakers and test utterances are extracted using the conventional method. This leads to an appreciable decrease in development time, while the time required for training and testing remains unchanged. In this paper, an algorithmic optimization of the PPCA's EM algorithm is developed. This is observed to provide a speed-up of 3.7×. To simplify the testing procedure, two different approximation procedures are proposed for use in this framework. The first approximation assumes a covariance matrix computed based on the PPCA framework. The second approximation proposes an optimization to avoid inverting the precision matrix of the i-vector. Comparing the time taken by these approximations with the baseline i-vector extraction procedure shows speed gains with some deterioration in performance in terms of the Equal Error Rate (EER). Among the proposed techniques, the best trade-off is obtained with a speed-up of 81.2× and a deterioration in performance of 0.7% in absolute terms. Speaker recognition performance is studied on the telephone conditions of the benchmark NIST SRE 2010 dataset with systems built on the Mel Frequency Cepstral Coefficient (MFCC) feature. A trade-off in the performance is observed when the proposed approximations are used. The scalability of these trade-offs is tested on the Mel Filterbank Slope (MFS) feature. The trade-offs observed with the approximations are reduced when the two systems are fused. | A fast and scalable hybrid FA/PPCA-based framework for speaker recognition
S1051200414001924 | Selecting the order of autoregressions when the parameters of the model are estimated with least-squares algorithms (LSA) is a well researched topic. This type of approach implicitly assumes that the analyzed time series is stationary, which is rarely true in practical applications. It has long been known that, in the case of nonstationary signals, it is recommended to employ forgetting factor least-squares algorithms (FF-LSA) instead of LSA. This makes it necessary to modify the selection criteria originally designed for LSA so that they become compatible with FF-LSA. Sequentially normalized maximum likelihood (SNML), which is one of the newest model selection criteria, has been modified independently by two groups of researchers so that it can be used in conjunction with FF-LSA. As the proposals coming from the two groups have not been compared in the previous literature, we conduct in this work a theoretical and empirical study to clarify the relationship between the existing solutions. As part of our study, we also investigate some possibilities to further modify the criteria. Based on our findings, we provide guidance which can potentially be useful for practitioners. | New insights on AR order selection with information theoretic criteria based on localized estimators
S1051200414001997 | In this paper, a new robust and secure digital image watermarking scheme that can be used for copyright protection is proposed. The scheme uses the integer wavelet transform (IWT) and singular value decomposition (SVD). The grey image watermark pixel values are embedded directly into the singular values of the 1-level IWT decomposed sub-bands. Experimental results demonstrate the effectiveness of the proposed scheme in terms of robustness, imperceptibility and capacity, owing to the IWT and SVD properties. A challenge due to the false positive problem, which is faced by most SVD-based watermarking schemes, has been solved in this work by adopting a digital signature into the watermarked image. The proposed digital signature mechanism is applied to generate and embed a digital signature after embedding the watermarks; the ownership is then authenticated before extracting the watermarks. Thus, the proposed scheme resolves the security issue of false positives, and in addition it is a blind scheme. A computer simulation is used to verify the feasibility of the proposed scheme and its robustness against various types of attacks and to compare it with some previous schemes. Furthermore, the statistical Wilcoxon signed rank test is employed to certify the effectiveness of the proposed scheme. | A new robust and secure digital image watermarking scheme based on the integer wavelet transform and singular value decomposition
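A stripped-down version of SVD-in-a-wavelet-sub-band embedding is sketched below using the PyWavelets package; a float Haar DWT stands in for the integer wavelet transform, the scheme shown is non-blind for brevity, and the digital-signature authentication stage is omitted.

```python
# Additive embedding of watermark values into the singular values of the LL
# sub-band; extraction here needs the reference singular values (non-blind sketch).
import numpy as np
import pywt

def embed_watermark(cover, watermark, alpha=0.05):
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    S_marked = S + alpha * np.asarray(watermark, float)[:len(S)]
    LL_marked = (U * S_marked) @ Vt                 # U @ diag(S_marked) @ Vt
    marked = pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')
    return marked, S                                # keep S as the extraction reference

def extract_watermark(marked, original_S, alpha=0.05):
    LL, _ = pywt.dwt2(marked.astype(float), 'haar')
    S_marked = np.linalg.svd(LL, compute_uv=False)
    return (S_marked - original_S) / alpha

cover = np.random.randint(0, 256, (64, 64)).astype(float)
wm = np.random.rand(32)                             # toy grey-level watermark values
marked, S_ref = embed_watermark(cover, wm)
recovered = extract_watermark(marked, S_ref)
```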
S1051200414002206 | In this paper, we extend the concept of the optimal similarity measure, originally developed for Zernike moments (ZMs) which belong to a class of orthogonal rotation invariant moments (ORIMs), to the angular radial transform (ART) which is non-orthogonal. The proposed distance measure not only uses the magnitude of the ART coefficients but also incorporates the phase component, unlike the existing L1-distance and L2-distance measures which use only the magnitude of the ART in image matching problems. Experimental results show that the new distance measure outperforms the L2-distance measure. The performance of the proposed method is highly robust to Gaussian noise and salt-and-pepper noise even at very high noise levels. The results are compared with the ZMs-based optimal similarity measure. It is shown that the recognition rate of the proposed distance measure is comparable to that of the ZMs, however at much lower computational complexity. | A noise resistant image matching method using angular radial transform
S105120041400222X | This study deals with the asymptotic performance of a multiple-spur cancellation scheme. Radio frequency transceivers are now multi-standard and specific impairment can occur. The clock harmonics, called spurs, can leak into the signal band of the reception stage, and thus degrade the performance. The performance of a fully digital approach is presented here. A one-spur cancellation scheme is first described, for which we exploit the a priori knowledge of the spur frequency to create a reference of the polluting tone with the same frequency. A least-mean-square (LMS) algorithm block that uses this reference to mitigate the polluter is designed. However, due to imperfections in the physical components, there is a shift between the a priori frequency and the actual frequency of the spur, and the spur is affected by Brownian phase noise. Under these circumstances, we study the asymptotic and transient performance of the algorithm. We next improve the transient performance by adding a previously proposed adaptive-step-size process. In a second part of this paper, we present a multiple-spur parallel approach that is based on the one-spur cancellation scheme, for which we provide a closed-form expression of the asymptotic signal-plus-noise interference ratio in the presence of frequency shifts and phase noise. | On the performance of digital adaptive spur cancellation for multi-standard radio frequency transceivers |
S1051200414002309 | We propose a new methodology for designing decentralized random field estimation schemes that takes the tradeoff between the estimation accuracy and the cost of communications into account. We consider a sensor network in which nodes perform bandwidth limited two-way communications with other nodes located in a certain range. The in-network processing starts with each node measuring its local variable and sending messages to its immediate neighbors followed by evaluating its local estimation rule based on the received messages and measurements. Local rule design for this two-stage strategy can be cast as a constrained optimization problem with a Bayesian risk capturing the cost of transmissions and penalty for the estimation errors. A similar problem has been previously studied for decentralized detection. We adopt that framework for estimation, however, the corresponding optimization schemes involve integral operators that are impossible to evaluate exactly, in general. We employ an approximation framework using Monte Carlo methods and obtain an optimization procedure based on particle representations and approximate computations. The procedure operates in a message-passing fashion and generates results for any distributions if samples can be produced from, e.g., the marginals. We demonstrate graceful degradation of the estimation accuracy as communication becomes more costly. | Optimization of decentralized random field estimation networks under communication constraints through Monte Carlo methods |
S1051200414002322 | This paper mainly focuses on the multi-sensor distributed fusion estimation problem for networked systems with time delays and packet losses. Measurements of individual sensors are transmitted to local processors over different communication channels with different random delay and packet loss rates. Several groups of Bernoulli distributed random variables are employed to depict the phenomena of different time delays and packet losses. Based on received measurements of individual sensors, local processors produce local estimates that have been developed in a new recent literature. Then local estimates are transmitted to the fusion center over a perfect connection, where a distributed fusion filter is obtained by using the well-known matrix-weighted fusion estimation algorithm in the linear minimum variance sense. The filtering error cross-covariance matrices between any two local filters are derived. The steady-state property of the proposed distributed fusion filter is analyzed. A simulation example verifies the effectiveness of the algorithm. | Multi-sensor distributed fusion filtering for networked systems with different delay and loss rates |
S1051200414002498 | Interference mitigation is one of the main challenges in wireless communication, especially in ad hoc networks. In such a context, the Multiple Access Interference (MAI) is known to be of an impulsive nature. Therefore, the conventional Gaussian assumption is inadequate to model this type of interference. Nevertheless, it can be accurately modeled by stable distributions. In fact, it has been shown in the literature that the α-stable distribution is a useful tool for modeling impulsive data. In this paper, we tackle the problem of noise compensation in ad hoc networks. More precisely, this issue is addressed within an Orthogonal Frequency Division Multiplexing (OFDM) transmission link, assuming a symmetric α-stable model for the signal distortion due to MAI. Based on Bayesian estimation, the proposed approach estimates the transmitted OFDM symbols in the time domain using Sequential Monte Carlo (SMC) methods. Unlike existing schemes, we consider the more realistic case where the impulsive noise parameters are assumed to be unknown at the receiver. Consequently, our approach also deals with the difficult task of noise parameter estimation, which can be very useful for other purposes such as target tracking in wireless sensor networks or channel estimation. Simulation results, provided in terms of Mean Square Error (MSE) and Bit Error Rate (BER), illustrate the efficiency and robustness of this scheme. | Joint estimation of state and noise parameters in a linear dynamic system with impulsive measurement noise: Application to OFDM systems
S1051200414002528 | Real-world signals are often not band-limited, and in many cases of practical interest sampling points are not always measured regularly. The purpose of this paper is to propose an irregular sampling theorem for the fractional Fourier transform (FRFT), which places no restrictions on the input signal. First, we construct frames for function spaces associated with the FRFT. Then, we introduce a unified framework for sampling and reconstruction in the function spaces. Based upon the proposed framework, an FRFT-based irregular sampling theorem without band-limiting constraints is established. The theoretical derivations are validated via numerical results. | Sampling expansion for irregularly sampled signals in fractional Fourier transform domain |
S1051200414002541 | Rotating bearing degradation is a physical process that typically evolves in stages characterized by different speeds of evolution of the characteristic health indicators. Therefore, it is opportune to apply different predictive models in the different stages, with the aim of balancing accuracy and calculation complexity in light of the varying needs and constraints of each stage. This paper proposes a condition-based adaptive trend prediction method for rotating bearings. The empirical mode decomposition–self-organizing map (EMD–SOM) method is applied to analyze vibration signals and calculate a confidence value (CV) on the bearing health state. Four different degradation stages (normal, slight degradation, severe degradation and failure) are identified by using the CV and its change rate. At each stage, we develop a different prediction strategy tailored to the specific degradation profile. In operation, upon recognition of the stage, the corresponding prognostic models are selected to estimate the health trend. A case study on datasets from 17 test bearings demonstrates and validates the feasibility of the proposed method. The experimental results show that the adaptive prediction method is accurate and reduces computational complexity, which can be important for online applications, especially in the case of limited computing resources. | An adaptive method for health trend prediction of rotating bearings
S1051200414002760 | This paper describes a system for separating multiple moving sound sources from two-channel recordings based on spatial cues and a model adaptation technique. We employ a statistical model of observed interaural level and phase differences, where maximum likelihood estimation of model parameters is achieved through an expectation-maximization algorithm. This model is used to partition spectrogram points into several clusters (one cluster per source) and generate spectrogram masks accordingly for isolating individual sound sources. We follow a maximum likelihood linear regression (MLLR) approach for tracking source relocations and adapting model parameters accordingly. The proposed algorithm is able to separate more sources than input channels, i.e., in the underdetermined setting. In simulated anechoic and reverberant environments with two and three speakers, the proposed model-adaptation algorithm yields more than 10 dB of signal-to-noise-ratio improvement for azimuthal source relocations of 15° or more. Moreover, this performance gain is achievable with only 0.6 seconds of input mixture received after relocation. | Binaural source separation based on spatial cues and maximum likelihood model adaptation
S1051200414002826 | The enhancement of monitoring biosignals plays a crucial role in successful computer-assisted diagnosis; hence, the development of outstanding approaches is an ongoing research demand. In the present article, a computational prototype for preprocessing short daytime polysomnographic (sdPSG) recordings based on advanced estimation techniques is introduced. The postulated model is capable of performing data segmentation, baseline correction, whitening, embedded-artefact removal and noise cancellation on multivariate sdPSG data sets. The methodological framework includes the Karhunen–Loève Transformation (KLT), Blind Source Separation with Second Order Statistics (BSS-SOS) and the Wavelet Packet Transform (WPT) to attain low model order, time-to-diagnosis efficiency and modular autonomy. The data collected from 10 voluntary subjects were preprocessed by the model in order to evaluate the removal of noisy and artefactual activity from electroencephalographic (EEG) and electrooculographic (EOG) channels. The performance is assessed both qualitatively (visual inspection) and quantitatively, using metrics such as the Signal-to-Interference Ratio (SIR), Root Mean Square Error (RMSE) and Signal-to-Noise Ratio (SNR). The computational model demonstrated complete artefact rejection in 80% of the preprocessed epochs, a residual error of 4 to 8 dB and a signal-to-noise gain of 12 to 30 dB after the denoising trial. In comparison to previous approaches, N-way ANOVA tests were conducted to attest to the system's ability to improve electrophysiological signals for forthcoming processing and classification stages. | Advanced daytime polysomnographic preprocessing: A versatile approach for stream-wise estimation
S1051200414002863 | A novel robust method for surface tracking in range-image sequences is presented, which combines a clustering method based on surface models with a particle-filter-based 2-D affine-motion estimator. Segmented regions obtained at previous time steps are used to create seed areas by comparing measured depth values with those obtained from surface-model fitting. The seed areas are further refined using a motion-probability region estimated by the particle-filter-based tracker through prediction of future states. This helps resolve ambiguities that arise when surfaces belonging to different objects are in physical contact with each other, for example during hand-object manipulations. Region growing allows recovering the complete segment area. The obtained segmented regions are then used to improve the predictions of the tracker for the next frame. The algorithm runs in quasi real-time and uses on-line learning, eliminating the need for a priori knowledge about the surface being tracked. We apply the method to in-house depth videos acquired with both time-of-flight and structured-light sensors, demonstrating object tracking in real-world scenarios, and we compare the results with those of an ICP-based tracker. | Robust surface tracking in range image sequences
S1051200414002899 | Stochastic resonance (SR) has been proven to be an effective approach for weak signal detection. In this paper, an underdamped step-varying second-order SR (USSSR) method is proposed to further improve the output signal-to-noise ratio (SNR). In the method, by selecting a proper underdamped damping factor and a proper calculation step, the weak periodic signal, the noise and the potential can be matched with each other in the regime of second-order SR to generate an optimal dynamical system. The proposed method has three distinct merits: 1) a secondary filtering effect produces a low-noise output waveform; 2) a good band-pass filtering effect attenuates the multiscale noise located in the high- and/or low-frequency domains; and 3) good anti-noise capability allows detecting a weak signal submerged in heavy background noise. Numerical analysis and application verification are performed to confirm the effectiveness and efficiency of the proposed method in comparison with a traditional SR method. | Effects of underdamped step-varying second-order stochastic resonance for weak signal detection
S1051200414002917 | Joint detection and estimation (JDE) of a target refers to determining the existence of the target and estimating the state of the target, if the target exists. This paper studies the error bounds for JDE of an unresolved target-group in the presence of clutter and missed detection using the random finite set (RFS) framework. We define a meaningful distance error for JDE of the unresolved target-group by modeling the state as a Bernoulli RFS. We derive the single- and multiple-sensor bounds on the distance error for an unresolved target-group observation model, which is based on the concept of the continuous individual target number. Maximum a posteriori (MAP) detection criteria and unbiased estimation criteria are used in deriving the bounds. Examples 1 and 2 show the variation of the bounds with the probability of detection and clutter density for single and multiple sensors. Example 3 verifies the effectiveness of the bounds by indicating the performance limitations of an unresolved target-group cardinalized probability hypothesis density (UCPHD) filter. | Joint detection and estimation error bounds for an unresolved target-group using single or multiple sensors
S1051200414003108 | This paper proposes a new modeling framework for estimating single-trial event-related potentials (ERPs). Existing studies based on the state-space approach use discrete-time random-walk models. We propose to use a continuous-time, partially observed diffusion process, which is more natural and appropriate for describing the continuous dynamics underlying ERPs that are discretely observed in noise as single trials. Moreover, the flexibility of the continuous-time model, which is specified and analyzed independently of the observation intervals, enables more efficient handling of irregularly or variably sampled ERPs than its discrete-time counterpart, which is fixed to a particular interval. We consider the Ornstein–Uhlenbeck (OU) process for the inter-trial parameter dynamics and further propose a nonlinear Cox–Ingersoll–Ross (CIR) process with a heavy-tailed density to better capture abrupt changes. We also incorporate a single-trial trend component using the mean-reversion variant, and a stochastic volatility noise process. The proposed method is applied to the analysis of auditory brainstem responses (ABRs). Simulation shows that both diffusions give satisfactory tracking performance, particularly of the abrupt ERP parameter variations by the CIR process. Evaluation on real ABR data across different subjects, stimulus intensities and hearing conditions demonstrates the superiority of our method in extracting the latent single-trial dynamics with significantly improved SNR, and in detecting wave V, which is critical for the diagnosis of hearing loss. Estimation results on data with variable sampling frequencies and missing single trials show that the continuous-time diffusion model captures the inter-trial dynamics between varying observation intervals more accurately than the discrete-time model. | Modeling and estimation of single-trial event-related potentials using partially observed diffusion processes
S1051200414003133 | We study the problem of estimating an unknown deterministic signal that is observed through an unknown deterministic data matrix under additive noise. In particular, we present a minimax optimization framework to the least squares problems, where the estimator has imperfect data matrix and output vector information. We define the performance of an estimator relative to the performance of the optimal least squares (LS) estimator tuned to the underlying unknown data matrix and output vector, which is defined as the regret of the estimator. We then introduce an efficient robust LS estimation approach that minimizes this regret for the worst possible data matrix and output vector, where we refrain from any structural assumptions on the data. We demonstrate that minimizing this worst-case regret can be cast as a semi-definite programming (SDP) problem. We then consider the regularized and structured LS problems and present novel robust estimation methods by demonstrating that these problems can also be cast as SDP problems. We illustrate the merits of the proposed algorithms with respect to the well-known alternatives in the literature through our simulations. | Robust least squares methods under bounded data uncertainties |
S1051200414003145 | This paper focuses on the parameter estimation problems of output error autoregressive systems and output error autoregressive moving average systems (i.e., the Box–Jenkins systems). Two recursive least squares parameter estimation algorithms are proposed by using the data filtering technique and the auxiliary model identification idea. The key is to use a linear filter to filter the input–output data. The proposed algorithms can identify the parameters of the system models and the noise models interactively and can generate more accurate parameter estimates than the auxiliary model based recursive least squares algorithms. Two examples are given to test the proposed algorithms. | Recursive least squares parameter identification algorithms for systems with colored noise using the filtering technique and the auxiliary model
S1051200414003157 | Conventional methods are neither effective nor efficient for image multi-level thresholding because of their time-consuming and computationally expensive nature. The multi-level thresholding problem can be posed as an optimization problem that optimizes some thresholding criterion. In this paper, membrane computing is introduced to propose an efficient and robust multi-level thresholding method, where a cell-like P system with a nested structure of three layers is designed as its computing framework. Moreover, an improved velocity-position model is developed to evolve the objects in membranes based on the special membrane structure and the communication mechanism of objects. Under the control of the evolution-communication mechanism of objects, the cell-like P system can efficiently find the best multi-level thresholds for an image. Simulation experiments on nine standard images compare the proposed multi-level thresholding method with several state-of-the-art multi-level thresholding methods and demonstrate its superiority. | Optimal multi-level thresholding with membrane computing
S1051200414003169 | A recently reported Lyapunov based criterion (Singh (2014) [7]) for the asymptotic stability of two-dimensional (2-D) linear discrete systems described by the Fornasini–Marchesini second local state-space (FMSLSS) model is reviewed. It is established in this paper that, despite utilizing a more general Lyapunov matrix, Singh's criterion will never lead to an enhanced stability region in the parameter-space as compared to that obtainable via Hinamoto–Lu's criterion. | A note on stability analysis of 2-D linear discrete systems based on the Fornasini–Marchesini second model: Stability with asymmetric Lyapunov matrix |
S1051200414003170 | Stochastic resonance offers the possibility of signal amplification through the addition of noise. This curious and interesting phenomenon has received considerable attention since the 1990s. Since this effect has the potential to improve signal processing performance, intensive work has been done on this topic. One of the most effective implementations of stochastic resonance is the Collins network, which can provide outstanding performance in that the network output consists of an amplified version of a weak, sub-threshold signal. In practical situations, the sub-threshold signal is easily buried in external noise from the environment. The present paper focuses on the discrete-time system (as well as the continuous-time system) and analyzes this situation to clarify the degradation of the amplification effect. As a countermeasure, we propose a novel delay network. The present analysis indicates that the proposed scheme produces an amplification effect in the presence of external noise. The results of the analysis are used to determine the conditions under which the delay network is effective, and the results of an experimental evaluation verify the validity of the analysis. | Concept, analysis, and demonstration of a novel delay network exhibiting stochastic resonance induced by external noise
S1051200414003418 | In this paper, we propose a fast local image inpainting algorithm based on the Allen–Cahn model. The proposed algorithm is applied only to the inpainting domain and has two features. The first feature is that the pixel values in the inpainting domain are obtained by curvature-driven diffusion, utilizing the image information from outside the inpainting region. The second feature is that the pixel values outside the inpainting region are the same as those in the original input image, since no computation is performed outside the inpainting region. Thus, the proposed method is computationally efficient. We split the governing equation into one linear equation and one nonlinear equation by using an operator splitting technique. The linear equation is discretized by using a fully implicit scheme, and the nonlinear equation is solved analytically. We prove the unconditional stability of the proposed scheme. To demonstrate the robustness and accuracy of the proposed method, various numerical results on real and synthetic images are presented. | Fast local image inpainting based on the Allen–Cahn model
S105120041400342X | This paper introduces a method to perform a Time-Scale Local Hurst Exponent (TS-LHE) analysis for time series. The traditional Hurst exponent methods usually analyze time series as a whole, providing a single value that characterizes their global behavior. In contrast, the methods based on the Local Hurst Exponent allow the evaluation of the fractal structure of a time series on local events. However, a critical parameter in these methods is the selection of scale. Here, a TS-LHE method is presented, based on a systematic implementation of the rescaled-range (R/S) method, in a set of sliding windows of different sizes. This method allows calculating instantaneous values of Local Hurst Exponents at different scales, associating them with individual samples of a time series. This paper is organized as follows: first, an overview of the TS-LHE is provided; then, a proof-of-concept of this analysis is presented, considering (a) different fractional Brownian motion series, (b) a synthetic seismic signal under different noise conditions, and (c) a group of real seismic traces. Finally, the obtained results show that the TS-LHE analysis is particularly sensitive to sudden behavior changes of the time series, such as frequency or phase variations. This sensitivity is independent of the amplitude of the data, and thus, it can be used to identify pattern changes as well as long- and short-range correlations within a time series. | Application of a Time-Scale Local Hurst Exponent analysis to time series |