FileName | Abstract | Title |
---|---|---|
S1051200414003431 | This paper deals with the design of a low-complexity and efficient dynamic spectrum learning and access (DSLA) scheme for next-generation heterogeneous decentralized Cognitive Radio Networks (CRNs) such as Long Term Evolution-Advanced and 5G. Existing DSLA schemes for decentralized CRNs focus predominantly on decision-making policies that orthogonalize secondary users to optimum vacant subbands of fixed bandwidth. The focus of this paper is the design of a DSLA scheme for decentralized CRNs that supports the tunable vacant-bandwidth requirements of the secondary users while minimizing computationally intensive subband switchings. We first propose a new low-complexity variable digital filter (VDF), designed by modifying a second-order frequency transformation and subsequently combining it with an interpolation technique. It is referred to as the Interpolation and Modified Frequency Transformation based VDF (IMFT-VDF), and it provides tunable bandpass responses anywhere over the Nyquist band with complete control over both the bandwidth and the center frequency. Second, we propose a tunable decision-making policy, ρt_rand, consisting of a learning and access unit, designed to take full advantage of the exclusive frequency-response control offered by IMFT-VDF. The simulation results verify the superiority of the proposed DSLA scheme over existing DSLA schemes, while complexity comparisons indicate total gate-count savings from 11% to as high as 87% over various existing schemes. Also, the lower number of subband switchings makes the proposed scheme power-efficient and suitable for battery-operated cognitive radio terminals. | Low complexity and efficient dynamic spectrum learning and tunable bandwidth access for heterogeneous decentralized cognitive radio networks |
S1051200414003479 | The last few years have witnessed increased interest in robust lossless data hiding schemes because they can satisfy the main requirements of lossless data hiding (i.e., reversibility, capacity, and invisibility) and at the same time provide robustness against attacks. Reversibility is one of the important requirements of these methods; another is robustness against attacks, and methods that improve robustness typically do so at the cost of reduced capacity and invisibility. Taking into consideration the need to improve the four requirements mentioned above, this paper presents a novel robust lossless data hiding method in the transform domain. The proposed algorithm depends on transforming non-overlapping blocks of the host image using the Slantlet transform (SLT) matrix and embedding data bits by modifying the difference between the mean values of the SLT coefficients in the high-frequency subbands. As a practical application, the proposed algorithm has been adjusted so that it can be applied to color medical images. The data bits can be embedded not only in a single channel but also in all three channels of the RGB color image, thus further improving the embedding capacity. The results of the experiments that were conducted and the comparisons with previous robust lossless data hiding (i.e., robust reversible watermarking) methods prove the effectiveness of the proposed algorithm. | A new robust lossless data hiding scheme and its application to color medical images |
S1051200414003509 | The representation of sound signals at the cochlea and auditory cortical level has been studied as an alternative to classical analysis methods. In this work, we put forward a recently proposed feature extraction method called approximate auditory cortical representation, based on an approximation to the statistics of discharge patterns at the primary auditory cortex. The approach here proposed estimates a non-negative sparse coding with a combined dictionary of atoms. These atoms represent the spectro-temporal receptive fields of the auditory cortical neurons, and are calculated from the auditory spectrograms of clean signal and noise. The denoising is carried out on noisy signals by the reconstruction of the signal discarding the atoms corresponding to the noise. Experiments are presented using synthetic (chirps) and real data (speech), in the presence of additive noise. For the evaluation of the new method and its variants, we used two objective measures: the perceptual evaluation of speech quality and the segmental signal-to-noise ratio. Results show that the proposed method improves the quality of the signals, mainly under severe degradation. | Denoising sound signals in a bioinspired non-negative spectro-temporal domain |
S1051200414003522 | Many systems and physical processes require non-Gaussian probabilistic models to accurately capture their dynamic behaviour. In this paper, we present a random-coefficient mathematical form that can be used to simulate a third-order Laplace autoregressive (AR) process. The mathematical structure of the random-coefficient AR process has a Markovian property that makes it flexible and simple to implement. A detailed derivation of its parameters as well as its pseudo-code implementation is provided. Moreover, it is shown that the process has an autocorrelation property that satisfies Yule–Walker type of equations. Having such an autocorrelation property makes the developed AR process, particularly, convenient for deriving mathematical models for dynamic systems, as well as signals, whose parameters of interest are Laplace distributed. | A random-coefficient third-order autoregressive process |
S1051200414003546 | Two reference indices used to characterize left ventricular (LV) global chamber function are the end-systolic peak elastance (E_max) and the time-constant of the relaxation rate (τ). However, these two indices are very difficult to obtain in the clinical setting as they require invasive high-fidelity catheterization procedures. We have previously demonstrated that it is possible to approximate these indices noninvasively by digitally processing color-Doppler M-mode (CDMM) images. The aim of the present study was twofold: (1) to study which feature extraction from linearly reduced input spaces yields the most useful information for the prediction of the haemodynamic variables from CDMM images; (2) to verify whether the use of nonlinear versions of those linear methods actually improves the estimation. We studied the performance and interpretation of different linearly transformed input spaces (raw image, discrete cosine transform (DCT) coefficients, partial least squares, and principal components regression), and we compared whether nonlinear versions of the above methods provided significant improvement in the estimation quality. Our results showed that very few input features suffice for providing a good (medium) quality estimator for E_max (for τ), which can be readily interpreted in terms of the measured flows. Additional covariates should be included to improve the prediction accuracy of both reference indices, but especially in the τ models. The use of efficient nonlinear kernel algorithms does improve the estimation quality of LV indices from CDMM images when using DCT input spaces that capture almost all of the energy. | On feature extraction for noninvasive kernel estimation of left ventricular chamber function indices from echocardiographic images |
S1051200414003571 | This paper presents a tutorial review of recent advances in the field of time–frequency (t, f) signal processing with focus on exploiting (t, f) image feature information using pattern recognition techniques for detection and classification applications. This is achieved by (1) revisiting and streamlining the design of high-resolution quadratic time–frequency distributions (TFDs) so as to produce adequate (t, f) images, (2) using image enhancement techniques to improve the resolution of TFDs, and (3) defining new (t, f) features such as (t, f) flatness and (t, f) entropy by extending time-domain or frequency-domain features. Comparative results indicate that the new (t, f) features give better performance as compared to time-only or frequency-only features for the detection of abnormalities in newborn EEG signals. Defining high-resolution TFDs for the extraction of new (t, f) features further improves performance. The findings are corroborated by new experimental results, theoretical derivations and conceptual insights. | Time–frequency features for pattern recognition using high-resolution TFDs: A tutorial review |
S1051200415000226 | The goal of this paper is to design a statistical test for the camera model identification problem from JPEG images. The approach focuses on extracting information in the Discrete Cosine Transform (DCT) domain. The main motivation is that the statistics of DCT coefficients change with different sensor noises in combination with various in-camera processing algorithms. To accurately capture this information, this paper relies on the state-of-the-art model of DCT coefficients proposed in our previous work. The DCT coefficient model is characterized by two parameters (α, β). The parameters (c, d) that characterize the simplified relation between these parameters are exploited as a camera fingerprint for camera model identification. The camera model identification problem is cast in the framework of hypothesis testing theory. In an ideal context where all model parameters are perfectly known, the Likelihood Ratio Test is presented and its performance is theoretically established. For practical use, two Generalized Likelihood Ratio Tests are designed to deal with unknown model parameters such that they can meet a prescribed false alarm probability while ensuring a high detection performance. Numerical results on simulated and real JPEG images highlight the relevance of the proposed approach. | Camera model identification based on DCT coefficient statistics |
S1051200415000238 | In this paper, a novel region of interest (ROI) query method is proposed for image retrieval by combining a mean shift tracking (MST) algorithm and an improved expectation–maximisation (EM)-like (IEML) method. In the proposed combination, the MST is used to seek the initial location of the target candidate model, and then IEML is used to adaptively change the location and scale of the target candidate model so as to include the relevant region and exclude the irrelevant region as far as possible. In order to improve the performance and effectiveness of using IEML to track the target candidate model, a new similarity measure is built based on spatial and colour features, and a new image retrieval framework for this new environment is proposed. Extensive experiments confirm that, compared with recently developed approaches such as the generalized Hough transform (GHT) and EM-like tracking methods, our method provides much better effectiveness. On the other hand, for the IEML, the new similarity measure model also substantially decreases computational complexity and improves the tracking precision of the target candidate model. Compared with conventional ROI-based image retrieval methods, the most significant highlight is that the proposed method can directly find the target candidate model in the candidate image without pre-segmentation. | ROI image retrieval based on multiple features of mean shift and expectation–maximisation |
S105120041500024X | We consider the problem of joint tracking and classification using information from radar and electronic support measures (ESM). For each target class, a separate filter is operated in parallel, and each class-dependent filter is implemented by an interacting multiple model regularized particle filter. The speed likelihood for each class is defined using a priori information about speed constraints and combined with the likelihoods from the two sensors to improve tracking and classification. Moreover, the output of the classifier is also used for particle reassignment of different classes, which might lead to better performance. Simulations show that our proposed method can provide reliable tracking and correct classification. | Joint tracking and classification with constraints and reassignment by radar and ESM |
S1051200415000263 | This paper proposes an effective composite image detection method that uses the feature inconsistency of image components of the composite image to detect tampered regions. The composite image is first divided into image components. Next, the variance of the noise remaining after de-noising in each image component is calculated and used as a feature. Finally, tampered regions are detected using this feature based on a tampering detection rule. Experimental results show that the proposed method has good composite image detection performance. | Effective composite image detection method based on feature inconsistency of image components |
S1051200415000494 | High bandwidth-efficiency quadrature amplitude modulation (QAM) signaling widely adopted in high-rate communication systems suffers from a drawback of high peak-to-average power ratio, which may cause the nonlinear saturation of the high power amplifier (HPA) at transmitter. Thus, practical high-throughput QAM communication systems exhibit nonlinear and dispersive channel characteristics that must be modeled as a Hammerstein channel. Standard linear equalization becomes inadequate for such Hammerstein communication systems. In this paper, we advocate an adaptive B-Spline neural network based nonlinear equalizer. Specifically, during the training phase, an efficient alternating least squares (LS) scheme is employed to estimate the parameters of the Hammerstein channel, including both the channel impulse response (CIR) coefficients and the parameters of the B-spline neural network that models the HPA's nonlinearity. In addition, another B-spline neural network is used to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard LS algorithm based on the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Nonlinear equalisation of the Hammerstein channel is then accomplished by the linear equalization based on the estimated CIR as well as the inverse B-spline neural network model. Furthermore, during the data communication phase, the decision-directed LS channel estimation is adopted to track the time-varying CIR. Extensive simulation results demonstrate the effectiveness of our proposed B-Spline neural network based nonlinear equalization scheme. | Adaptive B-spline neural network based nonlinear equalization for high-order QAM systems with nonlinear transmit high power amplifier |
S1051200415000676 | Recently, the two-dimensional (2D) adaptive filter, which can self-adjust its filter coefficients by using an optimization algorithm driven by an error function, has attracted much attention from researchers and practitioners, because 2D adaptive filtering can be employed in many image processing applications, such as image denoising, enhancement and deconvolution. In this paper, a novel 2D artificial bee colony (2D-ABC) adaptive filter algorithm is proposed; to the best of our knowledge, there is no study in the literature describing a 2D adaptive filter algorithm based on metaheuristic algorithms. In the first stage, in order to analyze the performance and computational efficiency of the novel 2D-ABC adaptive filter algorithm, it was used in a 2D adaptive noise cancellation (ANC) setup as recommended in the literature. For a fair comparison, the competing 2D adaptive filter algorithms were applied to the same 2D-ANC setup under the same conditions, such as the same Gaussian noise, filter order and test images. The results of the novel 2D-ABC adaptive filter algorithm were compared with those of the 2D affine projection algorithm (APA), 2D normalized least mean square (NLMS) and 2D least mean square (LMS) adaptive filter algorithms. In the second stage, to demonstrate the robustness of the novel 2D-ABC adaptive filter algorithm, it was applied to speckle noise filtering on noisy clinical ultrasound images. The results show that the novel 2D-ABC adaptive filter algorithm has better performance than the other classical adaptive filter algorithms and that its denoising efficiency is quite good on noisy images with different characteristics. | A novel 2D-ABC adaptive filter algorithm: A comparative study |
S1051200415000688 | Both image enhancement and image segmentation are important pre-processing steps for various image processing fields including autonomous navigation, remote sensing, computer vision, and biomedical image analysis. Both methods have their merits and their shortcomings. It then becomes obvious to ask the question: is it possible to develop a new, better image enhancement method that has the key elements of both segmentation and image enhancement techniques? The choice of the threshold level is a key task in image segmentation, and there are other challenges as well; for example, it is very difficult to perform image segmentation on poor data containing shadows and noise. Recently, a homothetic-curve, Fibonacci-based cross-sections thresholding method has been developed for de-noising purposes. Is it possible to develop a new image cross-sections thresholding method that can be used for both segmentation and image enhancement purposes? This paper a) describes a unified approach to signal thresholding; b) extends the cross-sections concept by generating and using a new class of monotonic, piecewise-linear sequences of numbers (growing more slowly or faster than the Fibonacci numbers); and c) applies the extended cross-sections concept to image enhancement and segmentation applications. Extensive experimental evaluation demonstrates that the newly proposed monotonic sequences have great potential in image processing applications, including image segmentation and image enhancement. Moreover, the study has shown that the generalized cross-section techniques are invariant under morphological transformations such as erosion, dilation, and median filtering, can be described analytically, and can be implemented using look-up table methods. | Monotonic sequences for image enhancement and segmentation |
S1051200415000706 | Empirical mode decomposition (EMD) is an adaptive (data-driven) method to decompose non-linear and non-stationary signals into AM-FM components. Despite its well-known usefulness, one of the major EMD drawbacks is its lack of mathematical foundation, being defined as an algorithm output. In this paper we present an alternative formulation for the EMD method, based on unconstrained optimization. Unlike previous optimization-based efforts, our approach is simple, with an analytic solution, and its algorithm can be easily implemented. By making no explicit use of envelopes to find the local mean, possible inherent problems of the original EMD formulation (such as the under- and overshoot) are avoided. Classical EMD experiments with artificial signals overlapped in both time and frequency are revisited, and comparisons with other optimization-based approaches to EMD are made, showing advantages for our proposal both in recovering known components and computational times. A voice signal is decomposed by our method evidencing some advantages in comparison with traditional EMD and noise-assisted versions. The new method here introduced catches most flavors of the original EMD but with a more solid mathematical framework, which could lead to explore analytical properties of this technique. | An unconstrained optimization approach to empirical mode decomposition |
S1051200415000718 | Clustering is the task of classifying patterns or observations into clusters or groups. Generally, clustering in high-dimensional feature spaces involves several complications, such as: the unknown data shape, which is typically non-Gaussian and follows different distributions; the unknown number of clusters in the case of unsupervised learning; and the existence of noisy, redundant, or uninformative features, which normally compromise modeling capability and speed. Therefore, high-dimensional data clustering has been a subject of extensive research in data mining, pattern recognition, image processing, computer vision, and other areas for several decades. However, most existing research tackles one or two problems at a time, which is unrealistic because all of these problems are connected and should be tackled simultaneously. Thus, in this paper, we propose two novel inference frameworks for unsupervised non-Gaussian feature selection, in the context of finite asymmetric generalized Gaussian (AGG) mixture-based clustering. The choice of the AGG distribution is mainly due to its ability not only to approximate a large class of statistical distributions (e.g. impulsive, Laplacian, Gaussian and uniform distributions) but also to account for asymmetry. In addition, the two frameworks perform model parameter estimation and model complexity determination (i.e., both model and feature selection) simultaneously in the same step. This is done by incorporating a minimum message length (MML) penalty in the model learning step and by fading out the redundant densities in the mixture using the rival penalized EM (RPEM) algorithm, for the first and second frameworks, respectively. Furthermore, for both algorithms, we tackle the problem of noisy and uninformative features by determining a set of relevant features for each data cluster. The efficiency of the proposed algorithms is validated by applying them to real, challenging problems, namely action and facial expression recognition. | Model-based approach for high-dimensional non-Gaussian visual data clustering and feature weighting |
S105120041500072X | Baseline correction is an important pre-processing technique used to separate true spectra from interference effects or to remove baseline effects. In this paper, an adaptive iteratively reweighted genetic programming based on excellent community information (GPEXI) is proposed to model baselines from spectra. Excellent community information, which is abstracted from the present excellent community, includes an automatic common threshold and normal global and local slope information. Significant peaks can first be detected by the automatic common threshold. Then, based on the characteristic that a baseline varies slowly with respect to wavelength, normal global and local slope information is used to further confirm whether a point is in a peak region. Moreover, the slope information is also used to determine the range of baseline curve fluctuation in peak regions. The proposed algorithm is more robust for different kinds of baselines, and its curvature and slope can be automatically adjusted without prior knowledge. Experimental results on both simulated and real data demonstrate the effectiveness of the algorithm. | A robust baseline elimination method based on community information |
S1051200415000779 | Accurate frequency estimation of a sinusoidal signal is commonly required in many practical engineering scenarios. In this paper, an iterative frequency estimation algorithm is proposed based on the interpolation of Fourier coefficients of weighted samples. This algorithm is applicable to nearly all conventional window functions. Systematic errors for various windows are presented and the performance of the proposed algorithm is investigated in the presence of white Gaussian noise. The simulation results demonstrate that errors caused by a mistaken location of the spectral line can be significantly reduced. The proposed algorithm is straightforward to implement, has high precision and good compatibility, and is robust against additive noise, all of which make it ideal for accurate frequency estimation in spectral analysis. | Frequency estimation of the weighted real tones or resolved multiple tones by iterative interpolation DFT algorithm |
S1051200415000883 | This work proposes a new method of extracting texture descriptors from digital images based on local scaling properties of the greyscale function using constraints to define connected local sets. The texture is first mapped onto a three-dimensional cloud of points and the local coarseness under different scales is assigned to each point p. This measure is obtained from the size of the largest “connected” set of points within a cube centred at p. Here, the “connected set” is defined as the set of points such that for each point in the local domain there is at least one other point at a distance smaller than a threshold t. Finally, the Bouligand–Minkowski fractal descriptors of the local coarseness of each pixel are computed. The classificatory power of the descriptors on the Brodatz, Vistex, UIUC and UMD databases showed an improvement over the results obtained with other well-known texture descriptors reported in the literature. The performance achieved also suggests possible applications to real-world problems where the images are best analysed as textures. | Texture descriptors by a fractal analysis of three-dimensional local coarseness |
S1051200415001128 | In this paper, the noise-enhanced system performance in a binary hypothesis testing problem is investigated when the additive noise is a convex combination of the optimal noise probability density functions (PDFs) obtained in two limit cases, which are the minimization of false-alarm probability (P_FA) without decreasing detection probability (P_D) and the maximization of P_D without increasing P_FA, respectively. Existing algorithms do not fully consider the relationship between the two limit cases and the optimal noise is often deduced according to only one limit case or Bayes criterion. We propose a new optimal noise framework which utilizes the two limit cases and deduce the PDFs of the new optimal noise. Furthermore, the sufficient conditions are derived to determine whether the performance of the detector can be improved or not via the new noise. In addition, the effects of the new noise are analyzed according to Bayes criterion. Rather than adjusting the additive noise again as shown in other algorithms, we just tune one parameter of the new optimal noise PDF to meet the different requirements under the Bayes criterion. Finally, an illustrative example is presented to study the theoretical results. | Noise enhanced binary hypothesis-testing in a new framework |
S1051200415001153 | Research on high dimension, low sample size (HDLSS) data has revealed its neighborless nature. This paper addresses the classification of HDLSS image or video data for human activity recognition. Existing approaches often use off-the-shelf classifiers such as nearest neighbor techniques or support vector machines and tend to ignore the geometry of the underlying feature distributions. Addressing this issue, we investigate different geometric classifiers and affirm the lack of neighborhoods within HDLSS data. As this undermines proximity-based methods and may cause over-fitting for discriminant methods, we propose a QR factorization approach to Nearest Affine Hull (NAH) classification which remedies the HDLSS dilemma and noticeably reduces the time and memory requirements of existing methods. We show that the resulting non-parametric models provide smooth decision surfaces and yield efficient and accurate solutions in multiclass HDLSS scenarios. On several action recognition benchmarks, the proposed NAH classifier outperforms other instance-based methods and shows performance competitive with or superior to SVMs. In addition, for online settings, the proposed NAH method is faster than online SVMs. | High dimensional low sample size activity recognition using geometric classifiers |
S1051200415001165 | A novel signal compression and reconstruction procedure suitable for guided wave based structural health monitoring (SHM) applications is presented. The proposed approach combines the wavelet packet transform and frequency warping to generate a sparse decomposition of the acquired dispersive signal. The sparsity of the signal in the considered representation is exploited to develop a data compression strategy based on the Best-Basis Compressive Sensing (CS) theory. The proposed data compression strategy has been compared with the transform encoder based on the Embedded Zerotree (EZT), a well-known data compression algorithm. These approaches are tested on experimental Lamb wave signals obtained by acquiring acoustic emissions in an aluminum plate with conventional piezoelectric sensors. The performance of the two methods is analyzed by varying the compression ratio in the range 40–80% and measuring the discrepancy between the original and the reconstructed signal. Results show the improvement in signal reconstruction achieved with the modified CS framework with respect to transform encoders such as the EZT algorithm with Huffman coding. | Best basis compressive sensing of guided waves in structural health monitoring |
S1051200415001177 | This paper examines the application of binary integration to X-band maritime surveillance radar, with a view to enhancing detection performance. The clutter is assumed to follow a Pareto distribution, since this model has been validated for high resolution X-band maritime clutter returns. The binary integration process is based upon an order statistic detection scheme, which has the constant false alarm rate property with respect to the Pareto shape parameter. An optimisation procedure is outlined, which results in ideal choices for the binary integration factor and the order statistic index. Performance of the resultant detection process is analysed, with homogeneous and heterogeneous simulated clutter, whose parameters are matched to those obtained from real clutter data sets. A direct application to real data is also included. | Optimised binary integration with order statistic CFAR in Pareto distributed clutter |
S1051200415001414 | Conventional Fuzzy C-means (FCM) algorithm uses Euclidean distance to describe the dissimilarity between data and cluster prototypes. Since the Euclidean distance based dissimilarity measure only characterizes the mean information of a cluster, it is sensitive to noise and cluster divergence. In this paper, we propose a novel fuzzy clustering algorithm for image segmentation, in which the Mahalanobis distance is utilized to define the dissimilarity measure. We add a new regularization term to the objective function of the proposed algorithm, reflecting the covariance of the cluster. We experimentally demonstrate the effectiveness of the proposed algorithm on a generated 2D dataset and a subset of Berkeley benchmark images. | Mahalanobis distance based on fuzzy clustering algorithm for image segmentation |
S1051200415001475 | This paper considers the problem of joint detection and tracking in mixed line-of-sight (LOS) and non-line-of-sight (NLOS) environments by using received signal strength (RSS) measurements. A nonlinear target tracking model with multiple switching parameters has been proposed, in which multiple independent Markov chains are used to describe the switching of target maneuvers and the transition of LOS/NLOS measurements, respectively. Based on the proposed tracking model, a multi-sensor multiple model Bernoulli filter (MMBF) has been developed by employing the random finite set theory which can formulate the joint detection and tracking in a unified framework. To derive a closed-form expression to the MMBF, the Gaussian mixture implementations have been provided by applying the extended Kalman filter technique. A numerical example is provided involving tracking a maneuvering target by a sensor network with 30 nodes. Simulation results confirm the effectiveness of the proposed filter. | RSS-based joint detection and tracking in mixed LOS and NLOS environments |
S1051200415001487 | Locally linear embedding (LLE) has been widely used in data processing, such as data clustering, video identification and face recognition, but its application in image hashing is still limited. In this work, we investigate the use of LLE in image hashing and find that embedding vector variances of LLE are approximately linearly changed by content-preserving operations. Based on this observation, we propose a novel LLE-based image hashing. Specifically, an input image is firstly mapped to a normalized matrix by bilinear interpolation, color space conversion, block mean extraction, and Gaussian low-pass filtering. The normalized matrix is then exploited to construct a secondary image. Finally, LLE is applied to the secondary image and the embedding vector variances of LLE are used to form image hash. Hash similarity is determined by correlation coefficient. Many experiments are conducted to validate our efficiency and the results illustrate that our hashing is robust to content-preserving operations and reaches a good discrimination. Comparisons of receiver operating characteristics (ROC) curve indicate that our hashing outperforms some notable hashing algorithms in classification between robustness and discrimination. | Robust image hashing with embedding vector variance of LLE |
S1051200415001505 | This work proposes to enhance well-known descriptors of texture images by extracting such descriptors both directly from pixel intensities and from the local non-additive entropy of the image. The method can be divided into four steps. 1) The descriptors are computed for the original image according to what is described in the literature. 2) The image is transformed by computing the non-additive entropy at each pixel, considering its neighborhood. 3) Similarly to step 1, the descriptors are computed from the transformed image. 4) Descriptors from the original and transformed images are combined by means of a Karhunen–Loève transform. Four texture descriptors widely used in the literature were considered: Gabor wavelets, Gray-Level Co-occurrence Matrix, Local Binary Patterns and Bouligand–Minkowski fractal descriptors. The proposal is assessed by comparing the performance of the descriptors alone and after being combined with the non-additive entropy. The results demonstrate that the combination achieved the best results in both image retrieval and classification tasks. The entropy is still more efficient in local-based methods: Local Binary Patterns and Gray-Level Co-occurrence Matrix. | Enhancing texture descriptors by a neighborhood approach to the non-additive entropy |
S1051200415001517 | In this paper, we investigate the impacts of frequency increment errors on frequency diverse array (FDA) multiple-input and multiple-output (MIMO) radar in adaptive beamforming and target localization. Since a small frequency increment, as compared to the carrier frequency, is applied between the transmit elements, FDA MIMO radar offers a range-dependent transmit beampattern to suppress range-dependent interferences and thus yields better direction-of-arrival (DOA) estimation performance than conventional MIMO radar. But the frequency increment errors will degrade FDA MIMO performance such as adaptive beamforming and target localization precision. In adaptive transmit beamforming analysis, we analyze the FDA MIMO radar beampattern mainlobe offset in range dimension, angle dimension and signal to interference and noise ratio (SINR) performance under different frequency increment error cases. While for the target localization investigation, the performance analysis is based on an explicit expansion of the estimation error in the signal subspace and frequency increment error matrix. Simulation results show that the impacts of frequency increment errors are mainly on the range dimension, i.e., the range offset of mainlobe in beamforming and range estimation in target localization, especially for large frequency increment errors. | Impact of frequency increment errors on frequency diverse array MIMO in adaptive beamforming and target localization |
S1051200415001529 | This paper considers the problem in which a resource-limited client, such as a smartphone, wants to outsource chaotic selective image encryption to the cloud without revealing the plain image to the cloud. A general solution is proposed with the help of steganography. The client first selects the important data to be selectively encrypted, embeds it into a cover image, and sends the stego image to the cloud for outsourced encryption; after receiving the encrypted stego image from the cloud, the client can extract the secret data in its encrypted form and obtain the selectively encrypted image. Theoretical analysis and extensive experiments are conducted to validate the correctness, security, and performance of the proposed scheme. It is shown that the client can fulfill the task of selective image encryption securely while saving considerable overhead. | Outsourcing chaotic selective image encryption to the cloud with steganography |
S1051200415001670 | Radio tomographic imaging (RTI) is a promising technique to localize and track the target without wearing any electronic device. However, the performance of traditional shadowing-based RTI (SRTI) degrades in indoor environments due to the existence of interference links caused by multipath. The interference links can bring false spots in the imaging results of RTI and make the true spot drift, resulting in position estimation error of the target. In this paper, we propose an interference link canceling technique to improve the performance of RTI where temporal and spatial properties of shadowed links are jointly used to detect the interference links. Since the spatial detection relies on the prior knowledge of the position of the target, we use Kalman filter to provide the position estimation. Moreover, a mean-shift clustering method is adopted to obtain the initial position estimation of the target. The experimental results demonstrate that the proposed enhanced SRTI (ESRTI) method outperforms the existing methods in terms of both image quality and tracking accuracy. | Enhancing indoor radio tomographic imaging based on interference link elimination |
S1051200415001840 | It is well known that the two-dimensional (2D) filter bank is far from a straightforward extension of the one-dimensional (1D) filter bank. There are many challenging problems in the theory and design methods for the 2D filter bank. Among these problems, the perfect-reconstruction (PR) theory of the 2D DFT modulated filter bank with arbitrary modulation and decimation matrices remains an unsolved difficulty, which is the focus of this paper. The necessary and sufficient condition for PR is derived by using the polyphase decomposition of the analysis and synthesis filters, as well as the fast implementation structure of the filter bank. Then, the PR condition in the frequency domain is transformed into a set of quadratic equations with respect to the prototype filter (PF), which is utilized to formulate the design problem as an unconstrained optimization problem. An efficient iterative algorithm is proposed to solve the problem. Numerical examples are included to verify the validity of the PR condition and the effectiveness of the design method. | Theory and design of two-dimensional DFT modulated filter bank with arbitrary modulation and decimation matrices |
S1051200415001852 | This paper presents a stochastic analysis of the transform-domain least-mean-square (TDLMS) algorithm operating in a nonstationary environment (time-varying plant) with real-valued correlated Gaussian input data, from which the analysis for the stationary environment follows as a special case. In this analysis, accurate model expressions are derived describing the transformed mean weight-error behavior, learning curve, transformed weight-error covariance matrix, steady-state excess mean-square error (EMSE), misadjustment, step size for minimum EMSE, degree of nonstationarity, as well as a relationship between misadjustment and degree of nonstationarity. Based on these model expressions, the impact of the algorithm parameters on its performance is discussed, clarifying the behavior of the algorithm vis-à-vis the nonstationary environment considered. Simulation results for the TDLMS algorithm are shown by using the discrete cosine transform, which confirm the accuracy of the proposed model for both transient and steady-state phases. | Analysis of the TDLMS algorithm operating in a nonstationary environment |
S1051200415001943 | This work shows an example of the application of Bayesian dynamic linear models in fMRI analysis. Estimating the error variances of such a model, we are able to obtain samples from the posterior distribution of the signal-to-noise ratio for each voxel, which is used as a criterion for the detection of brain activity. The benefits of this approach are: (i) a reduced number of parameters; (ii) no assumptions about the stimulation paradigm; (iii) an interpretable, model-based approach; and (iv) flexibility. The performance of the proposed method is shown by simulations, and further results are presented on the application of the model to the analysis of a real fMRI data set, in order to illustrate some practical issues and to compare with previously proposed techniques. The results obtained demonstrate the ability of the model to detect brain activity, even when the stimulus paradigm is unknown, constituting an alternative to data-driven approaches when dealing with resting-state fMRI. | Brain activity detection by estimating the signal-to-noise ratio of fMRI time series using dynamic linear models |
S1051200415001967 | Voice transformation, which has been integrated in many audio (speech) processing tools, is a common operation to change a person's voice and to conceal his or her identity. It can easily deceive human beings and automatic speaker verification (ASV) systems, and thus it presents threats to security. Until now, few efforts have been reported on the recognition of hidden speakers from such disguised voices. In this paper, we propose concrete countermeasures to erase the disguise effects and verify the speaker's identity from voices disguised by voice transformation. The proposed system is tested with commonly used audio editors and voice transformation algorithms. The experimental results show that the performance of the baseline ASV system without our proposed countermeasures is entirely destroyed by voice transformation disguise, with equal error rates (EERs) higher than 40%, while with our proposed countermeasures the verification performance is improved significantly, with EERs lowered to 3%–4%. | Verification of hidden speaker behind transformation disguised voices |
S1051200415002213 | A demodulation technique based on improved local mean decomposition (LMD) is investigated in this paper. LMD heavily depends on the local mean and envelope estimate functions in the sifting process. It is well known that the moving average (MA) approach suffers from many problems (such as step-size selection, inaccurate results and long computation time). Aiming at these drawbacks of MA in the smoothing process, this paper proposes a new self-adaptive analysis algorithm called optimized LMD (OLMD). In the OLMD method, an alternative approach called rational Hermite interpolation is proposed to calculate the local mean and envelope estimate functions using the upper and lower envelopes of a signal. Meanwhile, a reasonable bandwidth criterion is introduced to select the optimum product function (OPF) from pre-OPFs derived from rational Hermite interpolation with different shape-controlling parameters in each rank. Subsequently, the orthogonality criterion (OC) is taken as the product function (PF) iteration stopping condition. The effectiveness of the OLMD method is validated by numerical simulations and applications to gearbox and roller bearing fault diagnosis. Results demonstrate that the OLMD method has better fault identification capability and is effective in rotating machinery fault diagnosis. | A new rotating machinery fault diagnosis method based on improved local mean decomposition |
S1051200415002249 | In computer systems used for action recognition, human movements are often represented by three-dimensional coordinates of body joints that are tracked by motion capture hardware. The motivation of our research was to propose a novel method for automatic generation of the knowledge base of a syntactic Gesture Description Language (GDL) classifier by analyzing unsegmented data recordings of gestures. We propose a novel unsupervised learning approach to deal with this task. Because this process resembles reverse engineering of the GDL approach, the learning algorithm we introduce in this paper is called Reverse-GDL (R-GDL). The R-GDL machine-learning approach for full-body movement recognition is a novel method for the classification of time-varying multidimensional signals. The description of R-GDL and its validation are our original, previously unpublished contribution. The evaluation of R-GDL was performed with k-fold cross-validation on a large dataset that contains 770 complete movement samples of 9 gym exercises performed by 14 persons, and was compared with results from a multivariate normal continuous-density hidden Markov model classifier. Depending on the exercise type, GDL obtained recognition rates between 91% and 100%. | Full body movements recognition – unsupervised learning approach with heuristic R-GDL method |
S1051200415002262 | The presence of jamming usually degrades the detection performance of a detector. Moreover, sufficient information about the jamming may be difficult to obtain. To overcome the problem of adaptive array signal detection in noise and completely unknown jamming, we temporarily assume, at the detector design stage, that the jamming belongs to a subspace which is orthogonal to the signal steering vector. Consequently, by resorting to the generalized likelihood ratio test (GLRT) and Wald test criteria, we propose two adaptive detectors which can achieve signal detection and jamming suppression. It is shown, by Monte Carlo simulations, that the two proposed adaptive detectors have improved detection performance over existing ones. | Adaptive array detection in noise and completely unknown jamming |
S1051200415002493 | A novel signal processing method for the analysis of financial and commodity price time series is here introduced to assess the predictability of financial markets. Our technique, exploiting the maximum entropy method (MEM), predicts the entropy of the next future time interval of the time series under investigation by a least square minimization approach. Like in conventional ex-post analysis based on estimated entropy, high entropy values characterize unpredictable series, while more stable series exhibit lower entropy values. We first evaluate (by theory and simulation) the performance of our method in terms of mean and variance of the predictions. Then, we apply our technique to several sets of historical financial data, correlating the entropy trend to contemporary socio-political events. The efficiency of our technique for application to financial engineering analysis is shown in comparison with the conventional approximate entropy method (usually applied in econometrics). | A maximum entropy method to assess the predictability of financial and commodity prices |
S1051200415002535 | Although the least mean pth power (LMP) and normalized LMP (NLMP) algorithms of adaptive Volterra filters outperform the conventional least mean square (LMS) algorithm in the presence of α-stable noise, they still exhibit slow convergence and high steady-state kernel error in nonlinear system identification. To overcome these limitations, an enhanced recursive least mean pth power algorithm with logarithmic transformation (RLogLMP) is proposed in this paper. The proposed algorithm is adjusted to minimize the new cost function with the p-norm logarithmic transformation of the error signal. The logarithmic transformation, which can diminish the significance of outliers in an α-stable noise environment, increases the robustness of the proposed algorithm and reduces the steady-state kernel error. Moreover, the proposed method improves the convergence rate by the enhanced recursive scheme. Finally, simulation results demonstrate that the proposed algorithm is superior to the LMP, NLMP, normalized least mean absolute deviation (NLMAD), recursive least squares (RLS) and nonlinear iteratively reweighted least squares (NIRLS) algorithms in terms of convergence rate and steady-state kernel error. | Adaptive recursive algorithm with logarithmic transformation for nonlinear system identification in α-stable noise |
S1051200415002547 | The Bayesian approach has become a commonly used method for inverse problems arising in signal and image processing. One of the main advantages of the Bayesian approach is the possibility of proposing unsupervised methods where the likelihood and prior model parameters can be estimated jointly with the main unknowns. In this paper, we consider linear inverse problems in which the noise may be non-stationary and where we are looking for a sparse solution. To account for both of these requirements, we propose to use a Student-t prior model for both the noise of the forward model and the unknown signal or image. The main interest of the Student-t prior model is its Infinite Gaussian Scale Mixture (IGSM) property. Using the resulting hierarchical prior models, we obtain a joint posterior probability distribution of the unknowns of interest (input signal or image) and their associated hidden variables. To be able to propose practical methods, we use either a Joint Maximum A Posteriori (JMAP) estimator or an appropriate Variational Bayesian Approximation (VBA) technique to compute the Posterior Mean (PM) values. The proposed method is applicable to many inverse problems such as deconvolution, image restoration and computed tomography. In this paper, we show only some results on signal deconvolution and on the estimation of periodic components of some biological signals related to circadian clock dynamics for cancer studies. | Bayesian sparse solutions to linear inverse problems with non-stationary noise with Student-t priors |
S1051200415002572 | Harmony search (HS) and its variants have found successful applications; however, they exhibit poor solution accuracy and convergence performance for high-dimensional (≥200) multimodal optimization problems. The reason is mainly the huge search space and multiple local minima. To tackle the problem, we present a new HS algorithm called DIHS, which is based on Dynamic-Dimensionality-Reduction-Adjustment (DDRA) and a dynamic fret width (fw) strategy. The former avoids generating invalid solutions, and the latter balances global exploration and local exploitation. A theoretical analysis of the DDRA strategy with respect to the success rate of the update operation is given, and the influence of the related parameters on solution accuracy is investigated. Our experiments include comparisons of solution accuracy and CPU time with seven typical HS algorithms and four widely used evolutionary algorithms (SaDE, CoDE, CMAES and CLPSO), as well as statistical comparison by the Wilcoxon Signed-Rank Test with the same seven HS algorithms and four evolutionary algorithms. The test problems include twelve multimodal and four complex unimodal functions with high dimensionality. Experimental results indicate that the proposed approach provides a significant improvement in solution accuracy with less CPU time in solving high-dimensional multimodal optimization problems, and the higher the dimensionality of the optimization problem, the greater the benefit it provides. | A harmony search algorithm for high-dimensional multimodal optimization problems |
S1051200415002584 | We investigate how and when to diversify capital over assets, i.e., the portfolio selection problem, from a signal processing perspective. To this end, we first construct portfolios that achieve the optimal expected growth in i.i.d. discrete-time two-asset markets under proportional transaction costs. We then extend our analysis to cover markets having more than two stocks. The market is modeled by a sequence of price relative vectors with arbitrary discrete distributions, which can also be used to approximate a wide class of continuous distributions. To achieve the optimal growth, we use threshold portfolios, where we introduce a recursive update to calculate the expected wealth. We then demonstrate that under the threshold rebalancing framework, the achievable set of portfolios elegantly form an irreducible Markov chain under mild technical conditions. We evaluate the corresponding stationary distribution of this Markov chain, which provides a natural and efficient method to calculate the cumulative expected wealth. Subsequently, the corresponding parameters are optimized yielding the growth optimal portfolio under proportional transaction costs in i.i.d. discrete-time two-asset markets. As a widely known financial problem, we also solve the optimal portfolio selection problem in discrete-time markets constructed by sampling continuous-time Brownian markets. For the case that the underlying discrete distributions of the price relative vectors are unknown, we provide a maximum likelihood estimator that is also incorporated in the optimization framework in our simulations. | Growth optimal investment in discrete-time markets with proportional transaction costs |
S1051200415002602 | The performance of space–time adaptive processing (STAP) radar degrades dramatically when the target occurs in the training data. Traditional robust linearly constrained minimum variance (LCMV) STAP method uses magnitude constraint to maintain the mainlobe of the STAP beamformer. In this paper, a joint magnitude and phase constrained (MPC) STAP method is proposed with the phase constraint incorporated in the response vector of the beamformer. The explicit expression of the phase constraint is derived by exploring the conjugate symmetric characteristic of the adaptive weights. With joint magnitude and phase constraints imposed on several discrete points in the mainlobe region, the MPC-STAP approach has good robustness against target contamination. In addition, the linear-phase response can be guaranteed by the proposed method, which provides distortionless response in both spatial and temporal domains. Simulation results are provided to demonstrate the effectiveness of the proposed method. | Joint magnitude and phase constrained STAP approach |
S1051200415002663 | Based on random finite set (RFS) theory and an information inequality, this paper derives an error bound for joint detection and estimation (JDE) of multiple unresolved target-groups in the presence of clutter and missed detections. The JDE here refers to determining the number of unresolved target-groups and estimating their states. To obtain the results of this paper, the states of the unresolved target-groups are first modeled as a multi-Bernoulli RFS. The point cluster model proposed by Mahler is used to describe the observation likelihood of each group. Then, the error metric between the true and estimated state sets of the groups is defined by the optimal sub-pattern assignment distance rather than the usual Euclidean distance. The maximum a posteriori detection and unbiased estimation criteria are used in deriving the bound. Finally, we discuss some special cases of the bound when the number of unresolved target-groups is known a priori or is at most one. Example 1 shows the variation of the bound with respect to the probability of detection and clutter density. Example 2 verifies the effectiveness of the bound by indicating the performance limitations of the cardinalized probability hypothesis density and cardinality balanced multi-target multi-Bernoulli filters for unresolved target-groups. Example 3 compares the bound of this paper with the (single-sensor) bound of [4] for the case of JDE of a single unresolved target-group. At present, this paper only addresses the static JDE problem of multiple unresolved target-groups. Our future work will study the recursive extension of the bound proposed in this paper to filtering problems by considering the group state evolutions. | Error bounds for joint detection and estimation of multiple unresolved target-groups
S1051200415002699 | This paper presents an anisotropic diffusion (AD)-based noise reduction that extends the diffusion dimensions of a typical AD by producing diffusion paths on inter-color planes. To properly utilize an inter-color correlation for the AD-based noise reduction, inter-color planes from different color planes are predicted by adjusting their local mean values. Then, diffusion path-based kernels (DPKs) for the current color plane and predicted inter-color planes (PIPs) are generated to transform the iterative AD into a single-pass smoothing, which can avoid iterative region analysis. Simultaneously, a regionally and directionally varying diffusion threshold is adopted for the current color plane to preserve image details and to improve the quality of noise elimination near strong edges. For the PIPs, diffusion thresholds are regionally adjusted depending on local correlations between the current color plane and each of the PIPs to optimize the performance of noise reduction obtained from the extended diffusion dimension. Lastly, DPK-based filtering is performed in the current color plane and PIPs by using selected diffusion thresholds for the noise reduction. The experimental results demonstrate that the proposed method successfully improves the quality of denoising by greatly increasing the peak signal-to-noise ratio and structural similarity indexes by up to 4.921 dB and 0.090, respectively, compared with benchmark methods. In addition, the proposed method effectively reduces the computational complexity by avoiding the use of an expensive region analysis. | Extended-dimensional anisotropic diffusion using diffusion paths on inter-color planes for noise reduction |
S105120041500281X | Anisotropic diffusion is an efficient smoothing process that is widely used, through different schemes, for noise removal and edge preservation. In this paper, based on a mathematical background and an existing efficient anisotropic function from the literature, we develop a new anisotropic diffusion function that overcomes the drawbacks of the traditional process, such as loss of detail and image blur. Simulation results and a comparative study with other recent techniques show that the proposed scheme yields a substantial improvement in the quality of the restored images. This improvement is demonstrated subjectively, in terms of visual quality, and objectively, with reference to the computation of several criteria. The simulated images are well de-noised and, most importantly, details and structural information are kept intact. In addition, the proposed function is similar in form to the conventional model and converges faster, which allows it to be implemented efficiently in our de-noising process. | Rapid and efficient image restoration technique based on new adaptive anisotropic diffusion function
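For reference, here is a minimal sketch of the classic Perona–Malik anisotropic diffusion that schemes like the one above modify; the conductance function g and all parameter values are the conventional choices, not the new function proposed in the paper, and boundary handling is simplified via periodic wrapping.

```python
import numpy as np

def perona_malik(img, n_iter=30, kappa=20.0, lam=0.2):
    """Classic Perona–Malik anisotropic diffusion (baseline sketch only; the
    paper replaces the conductance function g() below with its own adaptive
    function to better preserve details)."""
    u = img.astype(np.float64).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # conventional conductance
    for _ in range(n_iter):
        # finite-difference gradients toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # diffuse each direction, weighted by its conductance
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.default_rng(0).normal(0.5, 0.1, size=(64, 64))
denoised = perona_malik(noisy)
```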
S1051200415002833 | A modified quantized kernel least mean square (M-QKLMS) algorithm is proposed in this paper. It is an improvement of the quantized kernel least mean square (QKLMS) algorithm in which the gradient descent method is used to update the filter coefficients. Unlike the QKLMS method, which only considers the prediction error, the M-QKLMS method uses both the new training data and the prediction error to adjust the coefficient of the closest center in the dictionary. Therefore, the proposed method fully utilizes the knowledge hidden in the new training data and achieves better accuracy. In addition, the energy conservation relation and a sufficient condition for mean-square convergence of the proposed method are obtained. Simulations on the prediction of chaotic time series show that the M-QKLMS method outperforms the QKLMS method in terms of steady-state mean square error. | A modified quantized kernel least mean square algorithm for prediction of chaotic time series
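Below is a compact sketch of the baseline QKLMS filter (Gaussian kernel, online quantization of the dictionary) applied to one-step prediction; the M-QKLMS modification of the closest-centre update described in the abstract is not reproduced, and the step size, kernel width and quantization threshold are illustrative values.

```python
import numpy as np

class QKLMS:
    """Quantized kernel LMS with a Gaussian kernel (baseline sketch; the
    M-QKLMS of the paper further modifies how the coefficient of the closest
    dictionary centre is adjusted)."""
    def __init__(self, step=0.5, sigma=1.0, eps_q=0.1):
        self.step, self.sigma, self.eps_q = step, sigma, eps_q
        self.centers, self.alpha = [], []

    def _kernel(self, u, v):
        return np.exp(-np.sum((u - v) ** 2) / (2.0 * self.sigma ** 2))

    def predict(self, u):
        return sum(a * self._kernel(c, u) for c, a in zip(self.centers, self.alpha))

    def update(self, u, d):
        err = d - self.predict(u)
        if not self.centers:
            self.centers.append(u); self.alpha.append(self.step * err); return err
        dists = [np.linalg.norm(u - c) for c in self.centers]
        j = int(np.argmin(dists))
        if dists[j] <= self.eps_q:              # quantize: reuse closest centre
            self.alpha[j] += self.step * err
        else:                                   # otherwise grow the dictionary
            self.centers.append(u); self.alpha.append(self.step * err)
        return err

# One-step prediction of a toy noisy series from its 5 past samples.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 60, 1200)) + 0.05 * rng.standard_normal(1200)
f = QKLMS()
errors = [f.update(x[i - 5:i], x[i]) for i in range(5, len(x))]
print(np.mean(np.square(errors[-200:])))        # steady-state MSE estimate
```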
S1051200415002845 | Many applications require the detection of unknown nonlinear frequency modulated (FM) signals in noise. In this paper, a nonlinear FM signal in one time interval is approximated by linear FM (LFM) segments in successive subintervals. Each LFM segment is parameterized by a 2-dimensional (2D) state vector and its evolution from a subinterval to the next one is modeled as a dynamic system of unknown statistics with linear state transition equations and nonlinear measurement equations. A forward–backward cost-reference particle filter (FB-CRPF) is proposed to estimate the state sequence. From the estimated state sequence, the generalized likelihood ratio test (GLRT) statistic and the total variation (TV) statistic are computed for signal detection. In the 2D feature plane of the GLRT versus TV, the decision region of the null hypothesis at a given false alarm rate is determined by the 2D convex hull learning algorithm from noise-only training samples. Two kinds of simulated signals are used to test the proposed detector and results show that the proposed detector attains better performance than the two existing detectors. | Detection of nonlinear FM signals via forward–backward cost-reference particle filter
S1051200415002973 | Estimation of the number of harmonics in multidimensional sinusoids is studied in this paper. The ESTimation ERror (ESTER) is a subspace based detection approach that is robust against colored noise. However, the number of signals it can detect is very limited. To improve the identifiability, we propose to combine multidimensional folding (MDF) techniques with ESTER for multidimensional sinusoidal order selection. Our algorithm development is inspired by the shift invariance properties of the two matrix slices resulting from multidimensional folding and unfolding, which have been exploited to extract the spatial frequencies in the literature. The maximum identifiable number of signals of the MDF-ESTER is of the order of magnitude of the product of the lengths of all spatial dimensions with uniform spacing, which is significantly larger than that of the conventional multidimensional ESTER methods. Meanwhile, it inherits the robustness of the ESTER against colored noise, and performs comparably to state-of-the-art schemes when the number of signals is small. | Multidimensional folding for sinusoidal order selection
S1051200415003000 | This paper proposes a novel algorithm for the voltage fluctuation detection of a power system based on sparse representation modeling. The contents of this research mainly include: (1) By first constructing a proper objective function of the frequency and phase, we convert the fundamental signal estimation problem to a simple mathematical convex optimization problem, which can be easily solved using an exhaustive search strategy. (2) From the viewpoint of signal restoration, we regard the voltage fluctuation detection as a signal inpainting problem and then develop an l0-norm-based optimization equation that exploits the sparsity prior of fluctuation component to recover the desired representation vector. (3) With the assumption that the voltage envelope changes smoothly, we establish an l2-norm-based regularization equation to further improve the regularity of the result. Experimental results show that the proposed algorithm performs well on demodulating the fundamental signal and voltage fluctuation component, with good ability of noise robustness, when compared to the classical Hilbert transform-based detection method and square demodulation method. | A sparse representation-based algorithm for the voltage fluctuation detection of a power system
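As a sketch of step (1), the code below estimates the fundamental by exhaustive search over a frequency grid, with the amplitude and phase fitted by least squares at each candidate; the sparse (l0/l2) recovery of the fluctuation component in steps (2)–(3) is not reproduced, and the 50 Hz test signal and grid resolution are illustrative assumptions.

```python
import numpy as np

def estimate_fundamental(v, fs, f_grid):
    """Exhaustive-search estimate of the fundamental's frequency, amplitude
    and phase by least squares (illustration of the 'convex objective plus
    exhaustive search' step described in the abstract)."""
    n = np.arange(len(v)) / fs
    best = None
    for f in f_grid:
        # For a fixed frequency the fit over amplitude/phase is linear:
        # v ≈ a*cos(2πft) + b*sin(2πft)
        A = np.column_stack([np.cos(2 * np.pi * f * n), np.sin(2 * np.pi * f * n)])
        coef, *_ = np.linalg.lstsq(A, v, rcond=None)
        cost = np.sum((v - A @ coef) ** 2)
        if best is None or cost < best[0]:
            best = (cost, f, coef)
    _, f_hat, (a, b) = best
    amp, phase = np.hypot(a, b), np.arctan2(-b, a)   # v ≈ amp*cos(2πft + phase)
    return f_hat, amp, phase

fs = 3200.0
t = np.arange(0, 0.2, 1 / fs)
v = (1 + 0.05 * np.sin(2 * np.pi * 8 * t)) * np.cos(2 * np.pi * 50.1 * t)
print(estimate_fundamental(v, fs, np.arange(49.0, 51.0, 0.01)))
```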
S1051200415003012 | The goal of this paper is to design a statistical test for the camera model identification problem. The approach is based on the generalized noise model that is developed by following the image processing pipeline of the digital camera. More specifically, this model is given by starting from the heteroscedastic noise model that describes the linear relation between the expectation and variance of a RAW pixel and taking into account the non-linear effect of gamma correction. The generalized noise model characterizes more accurately a natural image in TIFF or JPEG format. The present paper is similar to our previous work that was proposed for camera model identification from RAW images based on the heteroscedastic noise model. The parameters that are specified in the generalized noise model are used as camera fingerprint to identify camera models. The camera model identification problem is cast in the framework of hypothesis testing theory. In an ideal context where all model parameters are perfectly known, the Likelihood Ratio Test is presented and its statistical performances are theoretically established. In practice when the model parameters are unknown, two Generalized Likelihood Ratio Tests are designed to deal with this difficulty such that they can meet a prescribed false alarm probability while ensuring a high detection performance. Numerical results on simulated images and real natural JPEG images highlight the relevance of the proposed approach. | Camera model identification based on the generalized noise model in natural images |
S1051200415003024 | This paper proposes a novel diffusion subband adaptive filtering algorithm for distributed networks. To achieve a fast convergence rate and small steady-state errors, a variable step size and a new combination method are developed. For the adaptation step, the upper bound of the mean-square deviation (MSD) of the algorithm is derived, and the step size is adapted by minimizing this bound in order to attain the fastest convergence rate at every iteration. Furthermore, for the combination step, realized by a convex combination of the neighbor-node estimates, the proposed algorithm uses the MSD, which contains information on the reliability of the estimates, to determine the combination coefficients. Simulation results show that the proposed algorithm outperforms the existing algorithms in terms of the convergence rate and the steady-state errors. | A diffusion subband adaptive filtering algorithm for distributed estimation using variable step size and new combination method based on the MSD
S1051200415003048 | The CV (Chan–Vese) model is a piecewise constant approximation of the Mumford and Shah model. It assumes that the original image can be segmented into two regions such that each region can be represented by a constant grayscale value. The objective functional of the CV model finds a segmentation of the image such that the within-class variance is minimized; this is equivalent to the Otsu image thresholding algorithm, which also aims to minimize the within-class variance. Similarly, cross entropy is another widely used image thresholding criterion: it finds a segmentation such that the cross entropy between the segmented image and the original image is minimized. Inspired by cross entropy, a new active contour image segmentation algorithm is proposed. The region term in the new objective functional is the integral of the logarithm of the ratio between the grayscale of the original image and the mean value computed from the segmented image, weighted by the grayscale of the original image. The new objective functional can be solved by the level set evolution method. A distance regularization term is added to the level set evolution equation so that the level set need not be reinitialized periodically. A fast global minimization algorithm for the objective functional is also proposed, which incorporates the edge term originating from the geodesic active contour model. Experimental results show that the proposed algorithm segments images more accurately than the CV model and that the fast global minimization algorithm runs quickly. | A new active contour remote sensing river image segmentation algorithm inspired from the cross entropy
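To make the analogy concrete, the sketch below contrasts the two discrete thresholding criteria the abstract refers to: Otsu's within-class variance (the counterpart of the CV fitting term) and minimum cross entropy (the counterpart of the proposed region term of the form integral of I·log(I/mean)). This is a thresholding illustration only, not the level-set algorithm of the paper, and the random test image is an assumption.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Threshold minimizing the within-class variance (the discrete analogue
    of the CV fitting term discussed in the paper)."""
    g = np.clip((img * (levels - 1)).astype(int), 0, levels - 1)
    best_t, best_cost = 0.0, np.inf
    for t in range(1, levels):
        lo, hi = img[g < t], img[g >= t]
        if lo.size == 0 or hi.size == 0:
            continue
        cost = lo.size * lo.var() + hi.size * hi.var()
        if cost < best_cost:
            best_t, best_cost = t / (levels - 1), cost
    return best_t

def cross_entropy_threshold(img, levels=256):
    """Threshold minimizing sum(I * log(I / region_mean)) over both regions,
    the criterion that inspires the paper's region term."""
    eps = 1e-12
    g = np.clip((img * (levels - 1)).astype(int), 0, levels - 1)
    best_t, best_cost = 0.0, np.inf
    for t in range(1, levels):
        lo, hi = img[g < t], img[g >= t]
        if lo.size == 0 or hi.size == 0:
            continue
        cost = np.sum(lo * np.log((lo + eps) / (lo.mean() + eps))) \
             + np.sum(hi * np.log((hi + eps) / (hi.mean() + eps)))
        if cost < best_cost:
            best_t, best_cost = t / (levels - 1), cost
    return best_t

img = np.random.default_rng(0).random((32, 32))
print(otsu_threshold(img), cross_entropy_threshold(img))
```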
S1051200415003061 | This paper deals with the state estimation problem for linear uncertain systems with correlated noises and incomplete measurements. Multiplicative noises enter the state and measurement equations to account for the stochastic uncertainties, and one-step autocorrelated and cross-correlated process noises and measurement noises are taken into consideration. Using the latest received measurement to compensate for lost packets, the modified multi-step random delay and packet dropout model is adopted in the present paper. By augmenting the system states, measurements and newly defined variables, the original system is transformed into a stochastic parameter one. On this basis, the optimal linear estimators in the minimum variance sense are designed via the projection theory. They depend on the variances of the multiplicative noises, the one-step correlation coefficient matrices, and the probabilities of delays and packet losses. The sufficient condition on the existence of steady-state estimators is then given. Finally, simulation results illustrate the performance of the developed algorithms. | Minimum variance estimation for linear uncertain systems with one-step correlated noises and incomplete measurements
S1051200415003231 | The main objective of active noise control (ANC) is to provide attenuation of environmental acoustic noise. The adaptive algorithms for ANC systems work well in attenuating Gaussian noise; however, their performance may degrade for non-Gaussian impulsive noise sources. Recently, we have proposed variants of the most famous ANC algorithm, the filtered-x least mean square (FxLMS) algorithm, where an improved performance has been realized by thresholding the input data or by efficiently normalizing the step-size. In this paper, we propose a modified binormalized data-reusing (BNDR)-based adaptive algorithm for impulsive ANC. The proposed algorithm is derived by minimizing a modified cost function, and is based on reusing the past and present samples of data. The main contribution of the paper is to develop a practical DR-type adaptive algorithm, which incorporates an efficiently normalized step-size, and is well suited for ANC of impulsive noise sources. Computer simulations are carried out to demonstrate the effectiveness of the proposed algorithm. It is shown that an improved performance has been realized with a reasonable increase in the computational complexity. | Binormalized data-reusing adaptive filtering algorithm for active control of impulsive sources
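For context, here is a minimal filtered-x LMS loop with optional thresholding of the filtered reference, one common way of robustifying FxLMS against impulsive noise as mentioned in the abstract; the proposed binormalized data-reusing update is not reproduced, the secondary path is assumed known, and all signals and parameter values are synthetic.

```python
import numpy as np

def fxlms_anc(x, d, s, L=32, mu=5e-4, clip=None):
    """Filtered-x LMS for active noise control. `clip` optionally thresholds
    the filtered reference used in the weight update (a common robustification
    against impulsive noise; the paper's BNDR algorithm is a further
    refinement).  x: reference noise, d: primary disturbance at the error
    microphone, s: impulse response of the known secondary path."""
    fx = np.convolve(x, s)[: len(x)]          # reference filtered by sec. path
    if clip is not None:
        fx = np.clip(fx, -clip, clip)         # threshold against impulses
    w = np.zeros(L)                           # adaptive control filter
    xbuf, fxbuf = np.zeros(L), np.zeros(L)
    ybuf = np.zeros(len(s))                   # anti-noise history through s
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1);  xbuf[0] = x[n]
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx[n]
        y = w @ xbuf                          # anti-noise sample
        ybuf = np.roll(ybuf, 1);  ybuf[0] = y
        e[n] = d[n] - s @ ybuf                # residual at the error microphone
        w += mu * e[n] * fxbuf                # filtered-x LMS update
    return e

# Toy impulsive reference and a primary path that delays and scales it.
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
x[rng.integers(0, 20000, 40)] += 20 * rng.standard_normal(40)    # impulses
d = 0.8 * np.concatenate([np.zeros(5), x[:-5]])                  # primary path
s = np.array([0.0, 0.5, 0.2])                                    # secondary path
e = fxlms_anc(x, d, s, clip=3.0)
print(np.mean(e[:2000] ** 2), "->", np.mean(e[-2000:] ** 2))
```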
S105120041500336X | An acoustic echo canceller (AEC) is used in communication and teleconferencing systems to reduce undesirable echoes resulting from the coupling between the loudspeaker and the microphone. In this paper, we propose an improved variable step-size normalized least mean square (VSS-NLMS) algorithm for acoustic echo cancellation applications based on adaptive filtering. The steady-state error of the NLMS algorithm with a fixed step-size (FSS-NLMS) is very large for a non-stationary input; variable step-size (VSS) algorithms can be used to decrease this error. The proposed algorithm, named MESVSS-NLMS (mean error sigmoid VSS-NLMS), combines the generalized sigmoid variable step-size NLMS (GSVSS-NLMS) with the ratio of the estimation error to the mean history of the estimation error values. Single-talk and double-talk scenarios using speech signals from the TIMIT database show that the proposed algorithm achieves better performance, with more than 3 dB of additional attenuation in the misalignment evaluation compared to the GSVSS-NLMS, non-parametric VSS-NLMS (NPVSS-NLMS) and standard NLMS algorithms for a non-stationary input in noisy environments. | Improved variable step-size NLMS adaptive filtering algorithm for acoustic echo cancellation
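Below is a hedged sketch of an NLMS echo canceller whose step size follows a sigmoid of the instantaneous error magnitude, the general flavour of sigmoid VSS-NLMS schemes; the MESVSS-NLMS rule of the paper additionally weights this by the ratio of the error to its mean history, which is omitted here, and the echo path and signals are synthetic stand-ins for the TIMIT experiments.

```python
import numpy as np

def sigmoid_vss_nlms(x, d, L=64, mu_max=1.0, alpha=2.0, delta=1e-6):
    """NLMS with a sigmoid-shaped variable step size: large errors push the
    step toward mu_max for fast convergence, small errors shrink it for a low
    steady-state error (sketch only; the exact rule of the paper differs)."""
    w = np.zeros(L)
    xbuf = np.zeros(L)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        e[n] = d[n] - w @ xbuf
        mu = mu_max * (2.0 / (1.0 + np.exp(-alpha * abs(e[n]))) - 1.0)  # sigmoid
        w += mu * e[n] * xbuf / (xbuf @ xbuf + delta)                   # NLMS step
    return w, e

# Toy echo-cancellation setup: unknown echo path h plus near-end noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(30000)                        # far-end signal
h = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = sigmoid_vss_nlms(x, d)
print(10 * np.log10(np.sum((w - h) ** 2) / np.sum(h ** 2)))  # misalignment, dB
```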
S1051200415003425 | An equivalent multi-input multi-output (MIMO) linear model of the standard transceiver for OFDM/OQAM systems is derived. Specifically, the impulse response of the equivalent filter that determines the effect on each output of the symbol sequence transmitted on each subcarrier, both with reference to AWGN and multipath channels, is obtained. The derived model is exploited for the design of the optimum single-tap per-subcarrier equalizer according to the maximum signal-to-interference ratio criterion. It is shown that the optimum single-tap equalizer in the absence of guard subcarriers is the inverse of a specific coefficient of an impulse response of the derived MIMO model. This coefficient under particular conditions leads to the conventionally used single-tap equalizer given by the inverse of the channel frequency response at the frequency of the considered subcarrier. The effects of guard subcarriers and the inclusion of the noise term in the cost function are also studied. | Optimum single-tap per-subcarrier equalization for OFDM/OQAM systems |
S1051200415003589 | Empirical mode decomposition (EMD) is an adaptive decomposition method that is widely used in time-frequency analysis. As a bidimensional extension of EMD, bidimensional empirical mode decomposition (BEMD) has many useful applications in image processing and computer vision. In this paper, we define the mean points in the BEMD ‘sifting’ process as the centroids of neighbouring extrema points in a Delaunay triangulation and propose using this mean approximation instead of the envelope mean in ‘sifting’. The proposed method improves the decomposition result and reduces the average computation time of the ‘sifting’ process. Furthermore, a BEMD-based image fusion approach is presented. Experimental results show that our method achieves more orthogonal and physically meaningful components and more effective results in the image fusion application. | A mean approximation based bidimensional empirical mode decomposition with application to image fusion
S1051200415003590 | Secret image sharing (SIS) is a method to decompose a secret image into shadow images (shadows) so that only a qualified subset of shadows can be used to reconstruct the secret image. Usually, all shadows have the same importance. Recently, an essential SIS (ESIS) scheme in which shadows have different importance was proposed. All shadows are divided into two groups: essential shadows and non-essential shadows. In reconstruction, the involved shadows should contain at least a required number of shadows, including at least a required number of essential shadows. However, there are two problems in the previous ESIS scheme: unequal shadow sizes and concatenation of sub-shadow images. These two problems may lead to security vulnerabilities and complicate the reconstruction. In this paper, we propose a novel ESIS scheme based on derivative polynomials and Birkhoff interpolation. A single shadow of the same size is generated for each essential and non-essential participant. The experimental results demonstrate that our scheme effectively avoids the above two problems. | Essential secret image sharing scheme with the same size of shadows
S1051200415003668 | Intensity inhomogeneity in images makes automated segmentation of these images difficult. As intensity inhomogeneity is often caused by inhomogeneous light reflection, the Retinex theory can be used to reduce inhomogeneity. We introduce the Retinex theory into the active contour model, which is commonly used for image segmentation. The segmentation procedure is then guided by the image intensity and light reflection. In order to solve the proposed model efficiently, we develop a new fast Split Bregman algorithm. Experimental results on a variety of real images with inhomogeneity validate the performance of the proposed methods. | Retinex theory based active contour model for segmentation of inhomogeneous images |
S1051200416000026 | During the last decade, audio information hiding has attracted considerable attention due to its ability to provide a covert communication channel. On the other hand, various audio steganalysis schemes have been developed to detect the presence of secret messages. Audio steganography methods generally attempt to hide their messages in areas of the time or frequency domains that the human auditory system (HAS) does not perceive. Considering this fact, we propose a reliable audio steganalysis system based on reversed Mel-frequency cepstral coefficients (R-MFCC), which aims to provide a model with maximum deviation from the HAS model. A genetic algorithm is deployed to optimize the dimension of the R-MFCC-based features, which both speeds up feature extraction and reduces the complexity of classification. The final decision is made by a trained support vector machine (SVM) to detect suspicious audio files. The proposed method achieves detection rates of 97.8% and 94.4% in the targeted and universal scenarios, respectively. These results are 17.3% and 20.8% higher, respectively, than the previous D2-MFCC based method. | Audio steganalysis based on reversed psychoacoustic model of human hearing
S105120041600004X | This paper introduces a new variation of the p-norm detector, which is designed for application to coherent multilook detection in compound Gaussian clutter with inverse Gamma texture. By applying what is termed a compensator, enhanced detection performance can be achieved independently of the number of looks used. This is particularly useful in the case of a fast scan rate radar where the number of looks may be quite small. Conventional coherent detectors tend to experience saturation in such scenarios, and so this new detection process complements recent advances in this area. Further validation is provided by applying this new decision rule to synthetic target detection in real X-band radar clutter. | An enhanced p-norm energy detector for coherent multilook detection in X-band maritime surveillance radar |
S1051200416000063 | Detecting the number of components of the CANDECOMP/PARAFAC (CP) model, also known as CP model order selection, is an essential task in signal processing and data mining applications. Existing multilinear detection algorithms for handling N-dimensional data, where N ≥ 3, e.g., the CORe CONsistency DIAgnostic, rely on the CP decomposition, which is computationally very expensive. An alternative solution is to rearrange the tensor as a matrix using the unfolding operation and then utilize the eigenvalues of the resultant matrices for CP model order selection. We propose to employ the eigenvalues associated with the unfolding along merged dimensions, namely, the multi-mode eigenvalues, as well as the n-mode eigenvalues for accurate rank detection. These multiple sets of eigenvalues are combined via the information theoretic criterion. By designing a sequential detection scheme starting from the most squared unfolded matrix, the identifiable rank is increased to the square root of the product of all dimension lengths, which enables the detection algorithm to estimate a rank that can exceed any individual dimension length. The conditions under which the proposed multilinear detection algorithm correctly detects the tensor rank are theoretically investigated and its computational efficiency and detection performance are verified. | Detection of number of components in CANDECOMP/PARAFAC models via minimum description length
S1051200416000117 | Recently a transformation approach for noncoherent radar detector design has been introduced, where the classical constant false alarm rate detectors for Exponentially distributed clutter are modified to operate in any clutter intensity model of interest. Recent applications of this approach have introduced new decision rules for target detection in Pareto and Weibull distributed clutter. These transformed detectors tended to lose the constant false alarm rate property with respect to one of the clutter parameters. A closer examination of this transformation process yields conditions under which the constant false alarm rate property can be retained. Based upon this, a new model for X-band maritime radar returns is investigated, and corresponding detectors are developed. The relative merits of this new development are investigated with synthetic and real X-band data. | The constant false alarm rate property in transformed noncoherent detection processes |
S1051200416000130 | A chain code is a common, compact and size-efficient way to represent the contour shape of an object. When a group of objects is studied using chain codes, previous works require one chain code for each object. In this paper we assign a single chain to a group of objects, in such a way that all the properties of each object of the group can be recovered from the single chain. In order to achieve higher levels of compression, we propose a lossless method that consists of representing a group of objects by means of a single chain and then applying a context-mixing algorithm. Compared with other state-of-the-art compression methods, our experiments demonstrate that the best compression performance is achieved when the proposed lossless method is applied, improving the compression level by more than 15%. | Single chains to represent groups of objects
S1051200416000154 | This paper deals with a joint estimation algorithm that is dedicated to digital compensation of transmitter leakage pollution in frequency division duplexing transceivers. These transceivers are affected by transmitter-to-receiver signal leakage. Combined with the nonlinearity of the amplifying components in the receiver path, this leakage can severely pollute the received baseband signal with a baseband polluting term. This term is based on the square modulus of the transmitted signal, and it depends on the equivalent transmitter leakage channel, which models the leakages and the receiver path. Here, we consider a nonconvolutive, time-varying channel that is modeled by a time-varying complex gain and the presence of a fractional delay, modeling the propagation effects into the receiver. The complex gain is a sum of two components: a constant term that models static effects, and a first-order autoregressive model that approximates the time variation of the transmitter leakage channel. We focus here on a fully digital approach, using digital signal processing techniques and knowledge of the transmitted samples to mitigate the pollution. We first express the asymptotic performance of a transmitter leakage gain estimator piloted by a reference-based least-mean-square (LMS) approach in the synchronized case, and then we derive the influence of the fractional delay. We show that, in practice, the fractional delay cannot be neglected, and we propose a joint estimation of the fractional delay and the transmitter leakage gain to perform digital compensation. The proposed method is adaptive, recursive and online, and it has low complexity. This algorithm, which is developed for the flat transmitter leakage channel case, is shown to be robust in a typical selective channel simulation case and more suitable than a classic multi-tap LMS scheme proposed in the literature. | Performance of a digital transmitter leakage LMS-based cancellation algorithm for multi-standard radio-frequency transceivers
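As a simplified illustration of the reference-based cancellation idea, the sketch below estimates a single complex leakage gain applied to |tx|² with an LMS loop in the synchronized, flat-channel case; the joint fractional-delay estimation and the AR(1) gain model of the paper are not reproduced, and the gain g0, step size and signal statistics are arbitrary assumptions.

```python
import numpy as np

def tx_leakage_lms(rx, tx, mu=0.01):
    """Reference-based LMS cancellation of transmitter-leakage pollution,
    modelled as a single complex gain applied to |tx|^2 (sketch of the
    synchronized, flat-channel case only)."""
    ref = np.abs(tx) ** 2
    ref = ref - ref.mean()                 # remove DC of the squared envelope
    g = 0.0 + 0.0j                         # estimated complex leakage gain
    clean = np.empty_like(rx)
    for n in range(len(rx)):
        err = rx[n] - g * ref[n]           # polluted sample minus estimate
        g += mu * err * ref[n]             # complex-gain LMS update (ref real)
        clean[n] = err
    return clean, g

# Toy scenario: useful baseband signal plus a leakage term g0*(|tx|^2 - mean).
rng = np.random.default_rng(0)
N = 50000
useful = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
tx = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g0 = 0.3 - 0.2j
rx = useful + g0 * (np.abs(tx) ** 2 - 1.0)
clean, g_hat = tx_leakage_lms(rx, tx)
print(g_hat)                               # should approach g0
```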
S1051200416000166 | In this paper we propose a novel method for color image segmentation. This method uses only the hue and intensity components of the image (which are chosen rationally) and combines them, using adaptively tuned weights, in a specially defined fuzzy c-means cost function. The tuned weights indicate how informative each color component (hue and intensity) is. Obtaining the tuned weights begins with finding the peaks of the hue and intensity histograms and continues by building the table of frequencies of hue and intensity values and computing the entropy and contrast of each color component. The method also specifies proper initial values for the cluster centers, with the aim of reducing the overall number of iterations and avoiding convergence of FCM to wrong centroids. Experimental results demonstrate that our algorithm achieves better segmentation performance and also runs faster than similar methods. | Robust color image segmentation using fuzzy c-means with weighted hue and intensity
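Here is a sketch of fuzzy c-means with per-feature weights in the distance term, which is the core of the weighted cost function described above; the histogram-peak initialization and the entropy/contrast computation of the weights are not reproduced, so the feature weights and toy two-feature data below are assumptions.

```python
import numpy as np

def weighted_fcm(X, c, feat_w, m=2.0, iters=100, tol=1e-5, seed=0):
    """Fuzzy c-means with per-feature weights in the squared distance (sketch
    of the weighted-cost idea; the paper derives the hue/intensity weights
    from histogram entropy and contrast, which is not reproduced here).
    X: (n_samples, n_features), feat_w: per-feature weights summing to 1."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)                 # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # weighted squared distances from every sample to each centre
        d2 = np.array([np.sum(feat_w * (X - v) ** 2, axis=1) for v in centers]).T
        d2 = np.maximum(d2, 1e-12)
        U_new = 1.0 / np.sum((d2[:, :, None] / d2[:, None, :]) ** (1 / (m - 1)), axis=2)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return U, centers

# Toy "hue + intensity" features for a 3-cluster image, intensity weighted higher.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.05, size=(200, 2))
               for mu in ([0.2, 0.2], [0.5, 0.8], [0.8, 0.4])])
U, centers = weighted_fcm(X, c=3, feat_w=np.array([0.4, 0.6]))
print(centers)
```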
S1051200416000208 | Texture is a prominent feature of an image and is very useful in feature extraction for image retrieval applications. Statistical and structural patterns have been proposed for image retrieval and browsing. In the proposed work, a new texture feature descriptor is developed. The proposed method uses the local intensity of pixels based on three directions in the neighborhood and is named the local tri-directional pattern (LTriDP). In addition, one magnitude pattern is merged for better feature extraction. The proposed method is tested on three databases: the first two, the Brodatz texture image database and the MIT VisTex database, are texture image databases, and the third is the AT&T face database. Further, the effectiveness of the proposed method is demonstrated by comparing it with existing algorithms for the image retrieval application. | Local tri-directional patterns: A new texture feature descriptor for image retrieval
S105120041600021X | First-order Riesz transform based monogenic signal representation has been widely used in image processing and computer vision; however, it only characterizes the intrinsic one-dimensional structure of an image and is incapable of describing intrinsic two-dimensional structure. To this end, a novel feature extraction approach, named Riesz Binary Pattern (RBP), is proposed for face recognition based on image multi-scale analysis and multi-order Riesz transforms. RBP consists of two complementary components, i.e., the local Riesz binary pattern (LRBP) and the global Riesz binary pattern (GRBP). LRBP is obtained by applying a local binary coding operator to each Riesz transform response to extract the intrinsic two-dimensional structure features of the image, while GRBP is the global binary coding of the joint information of image pixel multi-scale analysis and multi-order Riesz transforms. Histograms of LRBP and GRBP are concatenated to form the RBP description of the face image. Experimental results on three databases demonstrate that the proposed RBP descriptor is more discriminative in extracting image information and provides a higher classification rate than some state-of-the-art image representation methods. | Face recognition with Riesz binary pattern
S1051200416000300 | For the problem of multi-target tracking, utilization of Doppler information can greatly improve the tracking performance. However, the presence of the Doppler blind zone (DBZ) strongly deteriorates the performance. To overcome this problem, we first employ a well-known detection probability model that incorporates the minimum detectable velocity (MDV), which determines the width of the DBZ. Then, by substituting the resulting detection probability into the update equation of the standard Gaussian mixture probability hypothesis density (GM-PHD) filter, we derive the update expression for a novel GM-PHD filter in which the MDV and Doppler information are fully exploited. Moreover, we present an approximate filter with detailed implementation steps to reduce the amount of calculation without a significant degradation in performance. It is demonstrated through numerical examples that the proposed approach outperforms the existing GM-PHD filter, which does not incorporate the MDV or Doppler information. In particular, it has significantly better performance at small MDV values. Main abbreviations: DBZ – Doppler blind zone; MDV – minimum detectable velocity; GM – Gaussian mixture; MTT – multi-target tracking; GMTI – ground moving target indication; PF – particle filter; IMM – interacting multiple model; PHD – probability hypothesis density; MD(s) – missed detection(s); CPHD – cardinalized PHD. | GM-PHD filter-based multi-target tracking in the presence of Doppler blind zone
S1051200416000312 | The multiple signal classification (MUSIC) algorithm based on spatial time-frequency distribution (STFD) has been investigated for direction of arrival (DOA) estimation of closely-spaced sources. However, the bilinear time-frequency based MUSIC (TF-MUSIC) algorithm has two limitations: it suffers from heavy implementation complexity, and its performance strongly depends on appropriate selection of the auto-term locations of the sources in the time-frequency (TF) domain for the formulation of a group of STFD matrices, which is practically difficult, especially when the sources are spectrally overlapped. In order to relax these limitations, this paper aims to develop a novel DOA estimation algorithm. Specifically, we build a MUSIC algorithm based on the spatial short-time Fourier transform (STFT), which effectively reduces the implementation cost. More importantly, we propose an efficient method to precisely select single-source auto-term locations for constructing the STFD matrices of each source. In addition to low complexity, the main advantage of the proposed STFT-MUSIC algorithm compared to some existing ones is that it can better deal with closely-spaced sources whose spectral contents are highly overlapped in the TF domain. | DOA estimation of closely-spaced and spectrally-overlapped sources using a STFT-based MUSIC algorithm
S105120041600035X | In order to protect patient privacy in medical images, and in contrast to traditional reversible data hiding (RDH) methods, which preferentially embed the message into smooth areas in pursuit of a high PSNR value, the proposed method preferentially embeds the message into the texture areas of medical images in order to improve the quality of the detail information and support accurate diagnosis. Furthermore, in order to decrease the embedding distortion while enhancing the contrast of the texture area, this paper also proposes a message sparse representation method. Experiments on medical images show that the proposed method enhances the contrast of the texture area compared with previous methods. | Reversible data hiding in medical images with enhanced contrast in texture area
S1051200416000373 | Identity recognition faces several challenges, especially in extracting an individual's unique features from biometric modalities and in pattern classification. Electrocardiogram (ECG) waveforms, for instance, have unique identity properties for human recognition, and their signals are not periodic. At present, in order to generate a significant ECG feature set, non-fiducial methodologies based on autocorrelation (AC) in conjunction with linear dimension reduction methods are used. This paper proposes a new non-fiducial framework for ECG biometric verification that uses kernel methods to reduce the high dimensionality of the autocorrelation vectors used by the recognition system, after denoising the signals of 52 subjects with the Discrete Wavelet Transform (DWT). The effects of different dimensionality reduction techniques for use in feature extraction were investigated to evaluate the verification performance of a multi-class Support Vector Machine (SVM) with the One-Against-All (OAA) approach. The experimental results demonstrated higher test recognition rates for Gaussian OAA SVMs on random unknown ECG data sets when using Kernel Principal Component Analysis (KPCA) compared to Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA). | ECG biometric authentication based on non-fiducial approach using kernel methods
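Below is a minimal end-to-end sketch of the verification pipeline described above: autocorrelation features, kernel PCA for nonlinear dimensionality reduction, and one-against-all Gaussian SVMs. The ECG windows are synthetic stand-ins (the paper uses DWT-denoised recordings of 52 subjects), and the lag count, KPCA gamma and SVM C are illustrative values.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def ac_features(window, n_lags=60):
    """Normalized autocorrelation of an ECG window (the non-fiducial feature
    used in this line of work); DWT denoising is assumed done beforehand."""
    w = window - window.mean()
    ac = np.correlate(w, w, mode="full")[len(w) - 1 : len(w) - 1 + n_lags]
    return ac / (ac[0] + 1e-12)

# Synthetic stand-in for per-subject ECG windows (real data: 52 subjects).
rng = np.random.default_rng(0)
n_subjects, n_windows, win_len = 5, 40, 500
X, y = [], []
for s in range(n_subjects):
    template = np.sin(2 * np.pi * (1 + 0.1 * s) * np.linspace(0, 5, win_len)) ** 3
    for _ in range(n_windows):
        X.append(ac_features(template + 0.2 * rng.standard_normal(win_len)))
        y.append(s)
X, y = np.array(X), np.array(y)

# KPCA for nonlinear dimensionality reduction, then one-against-all Gaussian SVMs.
clf = make_pipeline(StandardScaler(),
                    KernelPCA(n_components=20, kernel="rbf", gamma=0.01),
                    OneVsRestClassifier(SVC(kernel="rbf", C=10.0)))
clf.fit(X[::2], y[::2])
print(clf.score(X[1::2], y[1::2]))
```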
S1051200416000488 | This paper considers the suitability of string transformation techniques for lossless chain codes' compression. The more popular chain codes are compressed including the Freeman chain code in four and eight directions, the vertex chain code, the three orthogonal chain code, and the normalised directional chain code. A testing environment consisting of the constant 0-symbol Run-Length Encoding (RLE0L), Move-To-Front Transformation (MTFT), and Burrows–Wheeler Transform (BWT) is proposed in order to develop a more suitable configuration of these techniques for each type of the considered chain code. Finally, a simple yet efficient entropy coding is proposed consisting of MTFT, followed by the chain code symbols' binarisation and the run-length encoding. PAQ8L compressor is also an option that can be considered in the final compression stage. Comparisons were done between the state-of-the-art including the Universal Chain Code Compression algorithm, Move-To-Front based algorithm, and an algorithm, based on the Markov model. Interesting conclusions were obtained from the experiments: the sequential uses of MTFT, RLE0L, and BWT are reasonable only in the cases of shorter chain codes' alphabets as with the vertex chain code and the three orthogonal chain code. For the remaining chain codes, BWT alone provided the best results. The experiments confirm that the proposed approach is comparable against other lossless chain code compression methods, while in total achieving higher compression rates. | Chain code compression using string transformation techniques
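To illustrate the string-transformation building blocks, the sketch below applies a move-to-front transform to a Freeman 8-direction chain code and then run-length encodes only the zero runs (the RLE0L idea); the BWT stage, the binarisation and the final entropy/PAQ8L coding of the paper are omitted, and the example chain code is arbitrary.

```python
def move_to_front(symbols, alphabet):
    """Move-to-front transform of a chain-code symbol sequence."""
    table = list(alphabet)
    out = []
    for s in symbols:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))    # move the used symbol to the front
    return out

def rle0(indices):
    """Run-length encode only the zero runs: after MTF, repeated chain-code
    symbols become runs of zeros, which is what RLE0L-style coding targets."""
    out, run = [], 0
    for v in indices:
        if v == 0:
            run += 1
        else:
            if run:
                out.append(("Z", run)); run = 0
            out.append(("L", v))
    if run:
        out.append(("Z", run))
    return out

# Freeman 8-direction chain code of a mostly-straight contour (illustrative).
chain = "000000011222222233344444444"
mtf = move_to_front(chain, "01234567")
print(mtf)
print(rle0(mtf))
```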
S1051200416300197 | Two-dimensional (2D) integer transforms are used in all the profiles of the H.264/MPEG-4 Advanced Video Coding (AVC) standard. This paper presents a resource-shared architecture for the 2D integer transforms of both supported block sizes in H.264/MPEG-4 AVC coders. Existing architectures use separate designs to compute the 2D forward/inverse integer transform of each block size. A shared-resource architecture for the two transform sizes can be used to reduce the implementation area without sacrificing high throughput. Matrix decomposition is used to show that the 2D forward/inverse integer transform of two independent data blocks can be obtained from one 2D forward/inverse integer transform circuit. A high-throughput architecture is used as the base design for the implementation of the 2D forward/inverse transform. Data rearrangement stages are added to the base design to compute the 2D forward/inverse transform. The proposed dual-clock pipelined architecture does not require any transpose memory. As compared to existing designs, the proposed design operates on two independent sub-blocks; hence, the overall throughput of the 2D forward/inverse transform computation increases by approximately 200% with less than a 5% increase in the gate count. The proposed design operates at a clock frequency of approximately 1.25 GHz and achieves throughputs of 7 G and 18.7 G pixels/sec for each block of the two forward integer transform sizes, respectively. Due to the resource-shared implementation and high throughput, the proposed design can be used for real-time H.264/MPEG-4 AVC processing. | High throughput resource shared 2D integer transform computation for H.264/MPEG-4 AVC
S1051200416300239 | This paper presents a novel methodology to estimate the frequency shift in chirp signals with SNRs as low as −17 dB through the use of an adaptive array of Duffing oscillators. The system used here is an array of five Duffing oscillators with each oscillator's response enhanced through a correlation with the reference signal. As a final result, a time-frequency depiction is provided by the Duffing array for further analysis of chirp signals. Using computer simulated experiments, it is found that the analysis of chirp signals with low SNR by means of the Duffing oscillator shows a markedly better performance than the conventional methods of time-frequency analysis. To this end, the results obtained from the proposed Duffing method are compared against some recent techniques in time-frequency analysis. Furthermore, to strengthen the proposed representation, Monte Carlo simulation is used. | High resolution time-frequency representation for chirp signals using an adaptive system based on Duffing oscillators
S1068520013001107 | This paper overviews recent development in gas detection with micro- and nano-engineered optical fibers, including hollow-core fibers, suspended-core fibers, tapered optical micro/nano fibers, and fiber-tip micro-cavities. Both direct absorption and photoacoustic spectroscopy based detection schemes are discussed. Emphasis is placed on post-processing stock optical fibers to achieve better system performance. Our recent demonstration of distributed methane detection with a ∼75-m long of hollow-core photonic bandgap fiber is also reported. | Gas detection with micro- and nano-engineered optical fibers |
S1068520013001338 | We propose a WDM MIMO system that uses a selective mode excitation technique to reduce MIMO DSP over a conventional GI-MMF. We show numerically that we can selectively excite low-order modes with the small DMD of GI-MMF and confirm experimentally that we can obtain a small DMD over a wide wavelength range under selective mode excitation conditions. We realize a C- and L-band WDM coherent optical 2×2 MIMO transmission over a 10-km 50μm-core GI-MMF, which enables us to reduce MIMO DSP complexity over a wide wavelength range. | Wideband WDM coherent optical MIMO transmission over 50 μm-core GI-MMF using selective mode excitation technique |
S1068520013001703 | We investigate optical prefiltering for 56Gbaud (224Gbit/s) electrical time-division multiplexed (ETDM) dual polarization (DP) quaternary phase shift keying (QPSK) transmission. Different transmitter-side optical filter shapes are tested and their bandwidths are varied. Comparison of studied filter shapes shows an advantage of a pre-emphasis filter. Subsequently, we perform a fiber transmission of the 56Gbaud DP QPSK signal filtered with the 65GHz pre-emphasis filter to fit the 75GHz transmission grid. Bit error rate (BER) of the signal remains below forward error correction (FEC) limit after 300km of fiber propagation. | Experimental evaluation of prefiltering for 56Gbaud DP-QPSK signal transmission in 75GHz WDM grid |
S1068520014000108 | We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR) and against differential group delay (DGD) in an experiment involving 112Gbit/s polarization-division multiplexed (PDM) 16-ary quadrature amplitude modulation (16QAM) and quaternary phase-shift keying (QPSK). | Experimental demonstration of the maximum likelihood-based chromatic dispersion estimator for coherent receivers |
S1068520014001084 | We demonstrate a great variability of single-pulse (with only one pulse/wave-packet traveling along the cavity) generation regimes in fiber lasers passively mode-locked by non-linear polarization evolution (NPE) effect. Combining extensive numerical modeling and experimental studies, we identify multiple very distinct lasing regimes with a rich variety of dynamic behavior and a remarkably broad spread of key parameters (by an order of magnitude and more) of the generated pulses. Such a broad range of variability of possible lasing regimes necessitates developing techniques for control/adjustment of such key pulse parameters as duration, radiation spectrum, and the shape of the auto-correlation function. From a practical view point, availability of pulses/wave-packets with such different characteristics from the same laser makes it imperative to develop variability-aware designs with control techniques and methods to select appropriate application-oriented regimes. | Mode-locked fiber lasers with significant variability of generation regimes |
S1068520015000474 | The joint iterative detection and decoding (JIDD) technique has been proposed by Barbieri et al. (2007) with the objective of compensating the time-varying phase noise and constant frequency offset experienced in satellite communication systems. The application of JIDD to optical coherent receivers in the presence of laser frequency fluctuations has not been reported in prior literature. Laser frequency fluctuations are caused by mechanical vibrations, power supply noise, and other mechanisms. They significantly degrade the performance of the carrier phase estimator in high-speed intradyne coherent optical receivers. This work investigates the performance of the JIDD algorithm in multi-gigabit optical coherent receivers. We present simulation results of bit error rate (BER) for non-differential polarization division multiplexing (PDM)-16QAM modulation in a 200Gb/s coherent optical system that includes an LDPC code with 20% overhead and net coding gain of 11.3dB at BER = 10⁻¹⁵. Our study shows that JIDD with a pilot rate ⩽ 5% compensates for both laser phase noise and laser frequency fluctuation. Furthermore, since JIDD is used with non-differential modulation formats, we find that gains in excess of 1dB can be achieved over existing solutions based on an explicit carrier phase estimator with differential modulation. The impact of the fiber nonlinearities in dense wavelength division multiplexing (DWDM) systems is also investigated. Our results demonstrate that JIDD is an excellent candidate for application in next generation high-speed optical coherent receivers. | On the performance of joint iterative detection and decoding in coherent optical channels with laser frequency fluctuations
S1071581913001407 | Analysis of the usability of an interactive system requires both an understanding of how the system is to be used and a means of assessing the system against that understanding. Such analytic assessments are particularly important in safety-critical systems as latent vulnerabilities may exist which have negative consequences only in certain circumstances. Many existing approaches to assessment use tasks or scenarios to provide explicit representation of their understanding of use. These normative user behaviours have the advantage that they clarify assumptions about how the system will be used but have the disadvantage that they may exclude many plausible deviations from these norms. Assessments of how a design fails to support these user behaviours can be a matter of judgement based on individual experience rather than evidence. We present a systematic formal method for analysing interactive systems that is based on constraints rather than prescribed behaviour. These constraints capture precise assumptions about what information resources are used to perform action. These resources may either reside in the system itself or be external to the system. The approach is applied to two different medical device designs, comparing two infusion pumps currently in common use in hospitals. Comparison of the two devices is based on these resource assumptions to assess consistency of interaction within the design of each device. | Analysing interactive devices based on information resource constraints |
S1071581913002012 | Recovery is a necessary factor in avoiding work-related strain and in feeling prepared for the next day of work. In order for recovery to be successful, an individual must experience psychological detachment from work, relaxation, mastery experiences and a sense of control, all of which have been argued to be assisted by digital game use. However, it is unclear whether these associations will be greater for certain digital game genres, or whether this would extend to other recovery-related outcomes, for instance work home interference (WHI), where the stress from work interferes with home-life. These factors may be vital in determining whether interventions aimed at improving recovery using digital games would be effective, and what form these should take. The present research surveyed 491 participants and found that the total number of hours spent playing digital games per week was positively correlated with overall recovery. Correlations varied with genre, highlighting the importance of game characteristics in this relationship: first person shooters and action games were most highly correlated with recovery. Moreover, digital game use was not related to a reduction in work–home interference. When restricting the analysis to gamers who report to have developed online relationships, online social support mediated the relationship between digital game use and recovery. Results are discussed in terms of how digital games may be utilised to improve recovery and reduce work-related stress. | Switch on to games: Can digital games aid post-work recovery? |
S1071581914001013 | The process of authoring ontologies appears to be fragmented across several tools and workarounds, and there exists no well accepted framework for common authoring tasks such as exploring ontologies, comparing versions, debugging, and testing. This lack of an adequate and seamless tool chain potentially hinders the broad uptake of ontologies, especially OWL, as a knowledge representation formalism. We start to address this situation by presenting insights from an interview-based study with 15 ontology experts. We uncover the tensions that may emerge between ontology authors including antagonistic ontology building styles (definition-driven vs. manually crafted hierarchies). We identify the problems reported by the ontology authors and the strategies they employ to solve them. These data are mapped to a set of key design recommendations, which should inform and guide future efforts for improving ontology authoring tool support, thus opening up ontology authoring to a new generation of users. We discuss future research avenues in light of these results. | Overcoming the pitfalls of ontology authoring: Strategies and implications for tool design |
S1071581914001220 | We investigate the diversity of participatory design research practice, based on a review of ten years of participatory design research published as full research papers at the Participatory Design Conferences (PDC) 2002–2012, and relate this body of research to five fundamental aspects of PD from classic participatory design literature. We identify five main categories of research contributions: Participatory Design in new domains, Participatory Design methods, Participatory Design and new technology, Theoretical contributions to Participatory Design, and Basic concepts in Participatory Design. Moreover, we identify how participation is defined, and how participation is conducted in experimental design cases, with a particular focus on interpretation, planning, and decision-making in the design process. | The diversity of participatory design research practice at PDC 2002–2012 |
S1071581914001232 | The field of Participatory Design (PD) has greatly diversified and we see a broad spectrum of approaches and methodologies emerging. However, to foster its role in designing future interactive technologies, a discussion about accountability and rigour across this spectrum is needed. Rejecting the traditional, positivistic framework, we take inspiration from related fields such as Design Research and Action Research to develop interpretations of these concepts that are rooted in PD's own belief system. We argue that unlike in other fields, accountability and rigour are nuanced concepts that are delivered through debate, critique and reflection. A key prerequisite for having such debates is the availability of a language that allows designers, researchers and practitioners to construct solid arguments about the appropriateness of their stances, choices and judgements. To this end, we propose a “tool-to-think-with” that provides such a language by guiding designers, researchers and practitioners through a process of systematic reflection and critical analysis. The tool proposes four lenses to critically reflect on the nature of a PD effort: epistemology, values, stakeholders and outcomes. In a subsequent step, the coherence between the revealed features is analysed and shows whether they pull the project in the same direction or work against each other. Regardless of the flavour of PD, we argue that this coherence of features indicates the level of internal rigour of PD work and that the process of reflection and analysis provides the language to argue for it. We envision our tool to be useful at all stages of PD work: in the planning phase, as part of a reflective practice during the work, and as a means to construct knowledge and advance the field after the fact. We ground our theoretical discussions in a specific PD experience, the ECHOES project, to motivate the tool and to illustrate its workings. | In pursuit of rigour and accountability in participatory design
S1071581914001268 | Social micro-blogging systems such as Twitter are designed for rapid and informal communication from a large potential number of participants. Due to the volume of content received, human users must typically skim their timeline of received content and exercise judgement in selecting items for consumption, necessitating a selection process based on heuristics and content meta-data. This selection process is not well understood, yet is important due to its potential use in content management systems. In this research we have conducted an open online experiment in which participants are shown quantitative and qualitative meta-data describing two pieces of Twitter content. Without revealing the text of the tweet, participants are asked to make a selection. We observe the decisions made from 239 surveys and discover insights into human behaviour on decision making for content selection. We find that for qualitative meta-data consumption decisions are driven by online friendship and for quantitative meta-data the largest numerical value presented influences choice. Overall, the ‘number of retweets’ is found to be the most influential quantitative meta-data, while displaying multiple cues about an author's identity provides the strongest qualitative meta-data. When both quantitative and qualitative meta-data is presented, it is the qualitative meta-data (friendship information) that drives selection. The results are consistent with application of the Recognition heuristic, which postulates that when faced with constrained decision-making, humans will tend to exercise judgement based on cues representing familiarity. These findings are useful for future interface design for content filtering and recommendation systems. | Human content filtering in Twitter: The influence of metadata
S1071581915000270 | While urban computing has often been envisaged as bridging place, technology and people, there is a gap between the micro-level of urban computing which focuses on the solitary user with technological solutions and the macro-level which proposes grand visions of making better cities for the public. The gap is one of scale of audience as well as scale of normative ambition. To bridge this gap the paper proposes a transdisciplinary approach that brings together actor–network theory with critical and participatory design to create prototypes that engage people and build publics. The theoretical discussion examines a way of thinking about size as performative and shiftable through practical design methods. The micro/macro prototyping approach is demonstrated via an empirical case study of a series of provocative prototypes which attempt to build a material public around the issue of community noise at Heathrow airport. The paper suggests that this approach allows issues to be followed and engaged with, and their dynamics re-designed across different scales. This proposes a new role and scope for the researcher/designer as proactively engaging in normative shaping and supporting of real world settings which bridge place, technology and people. | Micro/macro prototyping |
S1071581915000592 | Cultural aspects such as values, beliefs, and behavioral patterns influence the way technology is understood and used, and the impact it may cause on the environment and on people. Although there is influential literature devoted to the subject of values and culture in Human–Computer Interaction, there is still a lack of principled and practical artifacts and methods to support researchers and practitioners in their activities. In this paper, we present a Value-oriented and Culturally Informed Approach (VCIA) to sensitize and support Computer Science and Engineering professionals in taking values and culture into consideration throughout the design of interactive systems. The approach is grounded on theoretical and methodological bases of Organizational Semiotics, Building Blocks of Culture, and Socially Aware Computing. VCIA offers a set of artifacts and methods articulated to support the design process in its different stages and activities: from the identification of stakeholders and their values, to the organization of requirements and the evaluation of the designed solution. In this paper, we present VCIA’s principles, artifacts, and illustrate its usefulness in bringing values into consideration, supporting a socially aware system design. | A value-oriented and culturally informed approach to the design of interactive systems |
S1071581915001238 | A tool for the multisensory stylus-based exploration of virtual textures was used to investigate how different feedback modalities (static or dynamically deformed images, vibration, sound) affect exploratory gestures. To this end, we ran an experiment where participants had to steer a path with the stylus through a curved corridor on the surface of a graphic tablet/display, and we measured steering time, dispersion of trajectories, and applied force. Despite the variety of subjective impressions elicited by the different feedback conditions, we found that only nonvisual feedback induced significant variations in trajectories and an increase in movement time. In a post-experiment, using a paper-and-wood physical realization of the same texture, we recorded a variety of gestural behaviors markedly different from those found with the virtual texture. With the physical setup, movement time was shorter and texture-dependent lateral accelerations could be observed. This work highlights the limits of multisensory pseudo-haptic techniques in the exploration of surface textures. | Multisensory texture exploration at the tip of the pen |
S1071581916000021 | Anthropometrics show that the lengths of many human body segments follow a common proportional relationship. To know the length of one body segment – such as a thumb – potentially provides a predictive route to other physical characteristics, such as overall standing height. In this study, we examined whether it is feasible that the length of a person's thumb could be revealed from the way in which they complete swipe gestures on a touchscreen-based smartphone. From a corpus of approx. 19,000 swipe gestures captured from 178 volunteers, we found that people with longer thumbs complete swipe gestures with shorter completion times, higher speeds and with higher accelerations than people with shorter thumbs. These differences were also observed to exist between our male and female volunteers, along with additional differences in the amount of touch pressure applied to the screen. Results are discussed in terms of linking behavioural and physical biometrics. | Different strokes for different folks? Revealing the physical characteristics of smartphone users from their swipe gestures
S1071581916000045 | Mobile applications have the ability to present information to users that is influenced by their surroundings, activities and interests. Such applications have the potential to influence the likelihood of individuals experiencing ‘serendipity’, through a combination of information, context, insight and activity. This study reports the deployment of a system that sends push text suggestions to users throughout the day, where the content of those messages is informed by users’ experience and interests. We investigated the responses to and interactions with messages that varied in format and relevance, and which were received at different times throughout the day. Sixteen participants were asked to use a mobile diary application to record their experiences and thoughts regarding information that was received over a period of five consecutive days. Results suggest that participants’ perception of the received suggestions was influenced by the relevance of the suggestion to their interests, but that there were also positive attitudes towards seemingly irrelevant information. Qualitative data indicates that participants, if in an appropriate time and place, are willing to accept and act upon push suggestions as long as the number of suggestions that they receive is not overwhelming. This study contributes towards an understanding of how mobile users make connections with new information, furthering our understanding of how serendipitous connections and insightful thinking could be accommodated using technology. | Encouraging serendipity in research: Designing technologies to support connection-making |
S1071581916000392 | Experiencing stress during training is a way to prepare professionals for real-life crises. With the help of feedback tools, professionals can train to recognize and overcome negative effects of stress on task performances. This paper reports two studies that empirically examined the effect of such a feedback system. The system, based on the COgnitive Performance and Error (COPE) model, provides its users with physiological, predicted performance and predicted error-chance feedback. The first experiment focussed on creating stressful scenarios and establishing the parameters for the predictive models for the feedback system. Participants (n=9) performed fire-extinguishing tasks on a virtual ship. By altering time pressure, information uncertainty and consequences of performance, stress was induced. COPE variables were measured and models were established that predicted performance and the chances of specific errors. In the second experiment a new group of participants (n=29) carried out the same tasks while receiving eight different combinations of the three feedback types in a counterbalanced order. Performance scores improved when feedback was provided during the task. The number of errors made did not decrease. The usability score for the system with physiological feedback was significantly higher than that for a system without physiological feedback, unless combined with error feedback. This paper shows effects of feedback on performances and usability. To improve the effectiveness of the feedback system it is suggested to provide more in-depth tutorial sessions. Design changes are recommended that would make the feedback system more effective in improving performances. | Effects of different real-time feedback types on human performance in high-demanding work conditions
S1077314213000520 | For the purpose of content-based image retrieval (CBIR), image classification is important to help improve the retrieval accuracy and speed of the retrieval process. However, CBIR systems that employ image classification suffer from the problem of hidden classes. Queries associated with hidden classes cannot be accurately answered using a traditional CBIR system. To address this problem, a robust CBIR scheme is proposed that incorporates a novel query detection technique and a self-adaptive retrieval strategy. A number of experiments carried out on two popular image datasets demonstrate the effectiveness of the proposed scheme. | Robust image retrieval with hidden classes
S1077314213000532 | Most color cameras are fitted with a single sensor that provides color filter array (CFA) images, in which each pixel is characterized by one of the three color components (either red, green, or blue). To produce a color image, the two missing color components have to be estimated at each pixel of the corresponding CFA image. This process is commonly referred to as demosaicing, and its result as the demosaiced color image. Since demosaicing methods intend to produce “perceptually satisfying” demosaiced color images, they attempt to avoid color artifacts. Because this is often achieved by filtering, demosaicing schemes tend to alter the local texture information that is, however, useful to discriminate texture images. To avoid this issue while exploiting color information for texture classification, it may be relevant to compute texture descriptors directly from CFA images. From chromatic co-occurrence matrices (CCMs) that capture the spatial interaction between color components, we derive new descriptors (CFA CCMs) for CFA texture images. Color textures are then compared by means of the similarity between their CFA CCMs. Experimental results achieved on benchmark color texture databases show the efficiency of this approach for texture classification. | Color texture analysis using CFA chromatic co-occurrence matrices |
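The CFA CCM idea above lends itself to a compact sketch. The Python snippet below is only an illustration of the general principle, not the authors' implementation: it assumes a hypothetical 2×2 Bayer pattern (defaulting to "GRBG"), 8-bit CFA values, and a single displacement vector, and it simply counts value pairs in which one pixel carries colour component a and its displaced neighbour carries component b. The displacement set, quantization and similarity measure used in the paper may differ.

```python
import numpy as np

def bayer_component_map(h, w, pattern="GRBG"):
    """Map each CFA pixel to its colour component index: 0=R, 1=G, 2=B.
    Assumes a 2x2 Bayer pattern string such as 'GRBG' (illustrative default)."""
    lut = {"R": 0, "G": 1, "B": 2}
    tile = np.array([[lut[pattern[0]], lut[pattern[1]]],
                     [lut[pattern[2]], lut[pattern[3]]]])
    return np.tile(tile, (h // 2 + 1, w // 2 + 1))[:h, :w]

def cfa_ccm(cfa, comp_map, comp_a, comp_b, dy, dx, levels=256):
    """Chromatic co-occurrence matrix between components comp_a and comp_b for the
    displacement (dy, dx), computed directly on the CFA image (assumed uint8)."""
    h, w = cfa.shape
    ccm = np.zeros((levels, levels), dtype=np.int64)
    ys, xs = np.nonzero(comp_map == comp_a)          # pixels carrying component a
    ys2, xs2 = ys + dy, xs + dx                      # their displaced neighbours
    ok = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
    ys, xs, ys2, xs2 = ys[ok], xs[ok], ys2[ok], xs2[ok]
    keep = comp_map[ys2, xs2] == comp_b              # neighbour must carry component b
    np.add.at(ccm, (cfa[ys, xs][keep], cfa[ys2, xs2][keep]), 1)
    return ccm / max(ccm.sum(), 1)                   # normalise to a joint frequency
```

A texture descriptor could then be a stack of such matrices over a few component pairs and displacements, compared with any histogram-similarity measure.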
S1077314213000544 | Video shot boundary detection (SBD) is a fundamental step in automatic video content analysis toward video indexing, summarization and retrieval. Despite much beneficial previous work in the literature, reliable detection of video shots is still a challenging issue with many unsolved problems. In this paper, we focus on the problem of hard cut detection and propose an automatic algorithm to accurately determine abrupt transitions from video. We suggest a fuzzy rule-based scene cut identification approach in which a set of fuzzy rules are evaluated to detect cuts. The main advantage of the proposed method is that we incorporate spatial and temporal features to describe video frames and model cut situations, according to the temporal dependency of video frames, as a set of fuzzy rules. Also, while existing cut detection algorithms are mainly threshold-dependent, our method identifies cut transitions using fuzzy logic, which is more flexible. The proposed algorithm is evaluated on a variety of video sequences from different genres. Experimental results, in comparison with the most standard cut detection algorithms, confirm that our method is more robust to object and camera movements as well as illumination changes. | AVCD-FRA: A novel solution to automatic video cut detection using fuzzy-rule-based approach
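As a rough illustration of the fuzzy-rule idea (not the paper's actual feature set, rule base or membership functions), the sketch below scores each frame pair with a normalised histogram-difference feature and fires a single rule "IF d(t) is HIGH AND d(t−1) is LOW AND d(t+1) is LOW THEN cut", using min for AND. The ramp-shaped memberships and the 0.5 firing level are illustrative assumptions.

```python
import numpy as np

def frame_dissimilarity(f1, f2, bins=32):
    """Normalised histogram-difference between two grayscale frames, in [0, 1]."""
    h1, _ = np.histogram(f1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(f2, bins=bins, range=(0, 256))
    h1 = h1 / max(h1.sum(), 1)
    h2 = h2 / max(h2.sum(), 1)
    return 0.5 * np.abs(h1 - h2).sum()

def mu_high(x, lo=0.2, hi=0.6):
    """Ramp membership for 'high dissimilarity' (illustrative shape and bounds)."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def mu_low(x, lo=0.2, hi=0.6):
    return 1.0 - mu_high(x, lo, hi)

def detect_cuts(frames):
    """Declare a cut between frames t and t+1 when the rule strength exceeds 0.5."""
    d = [frame_dissimilarity(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    cuts = []
    for t in range(1, len(d) - 1):
        strength = min(mu_high(d[t]), mu_low(d[t - 1]), mu_low(d[t + 1]))
        if strength > 0.5:
            cuts.append(t)
    return cuts
```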
S1077314213000556 | In this paper, we propose a method to extend the multi-phase piecewise-constant segmentation method of Mumford and Shah to the multi-channel case. To this effect, we show that it is crucial to find an agreement between the syntactic constraint of obtaining regions that form a partition of the image space and the semantic constraint that attributes a formal meaning to the segmented regions. We elaborate from the work of Sandberg et al. that addresses the same problem in the binary (2-phase) case and we show that the agreement principle presented there, based on De Morgan’s law, cannot be generalized to the multi-phase case. Therefore, we base the agreement between syntactic and semantic constraints on another mathematical principle, namely the fundamental theorem of equivalence relation. After we give some details regarding the implementation of the method, we show results on brain MR T1-weighted and T2-weighted images, which illustrate the good behavior of our method, leading to robust joint segmentation of brain structures and tumors. | Conciliating syntactic and semantic constraints for multi-phase and multi-channel region segmentation |
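For orientation, the piecewise-constant multi-phase energy being extended to the multi-channel case has the following generic form, written here for a K-channel image u = (u_1, …, u_K) and n regions; the channel weights and regularizer follow the standard Mumford–Shah/Chan–Vese literature rather than the paper's exact notation.

```latex
E\bigl(\{c_j\},\{\Omega_j\}\bigr)
  = \sum_{j=1}^{n} \sum_{k=1}^{K} \lambda_k \int_{\Omega_j} \bigl(u_k(x) - c_{j,k}\bigr)^2 \, dx
  \;+\; \mu \sum_{j=1}^{n} \operatorname{Per}(\Omega_j),
\qquad
\bigcup_{j=1}^{n} \Omega_j = \Omega,\quad \Omega_i \cap \Omega_j = \emptyset \;\; (i \neq j)
```

Here c_{j,k} is the mean of channel k over region Ω_j and Per(·) penalizes region boundaries; the partition constraint on the Ω_j is the "syntactic" requirement the abstract refers to, while how the per-channel terms are combined into region labels is where the "semantic" constraint enters.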
S1077314213000623 | Image patterns at different spatial levels are well organized, such as regions within one image and feature points within one region. These classes of spatial structures are hierarchical in nature. The appropriate integration and utilization of such relationships are important to improve the performance of region tagging. Inspired by recent advances in sparse coding methods, we propose an approach, called Unified Dictionary Learning and Region Tagging with Hierarchical Sparse Representation. This approach consists of two steps: region representation and region reconstruction. In the first step, rather than using the ℓ1-norm as is commonly done in sparse coding, we add a hierarchical structure to the process of sparse coding and form a framework of tree-guided dictionary learning. In this framework, the hierarchical structures among feature points, regions, and images are encoded by forming a tree-guided multi-task learning process. With the learned dictionary, we obtain a better representation of training and testing regions. In the second step, we propose to use a sub-hierarchical structure to guide the sparse reconstruction for testing regions, i.e., the structure between regions and images. Thanks to this hierarchy, the obtained reconstruction coefficients are more discriminative. Finally, tags are propagated to testing regions by the learned reconstruction coefficients. Extensive experiments on three public benchmark image data sets demonstrate that the proposed approach achieves better region-tagging performance than current state-of-the-art methods. | Unified Dictionary Learning and Region Tagging with Hierarchical Sparse Representation
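The switch from a plain ℓ1 penalty to a tree-guided one can be summarized by the generic tree-structured group-lasso dictionary-learning objective from the structured-sparsity literature; the specific groups, weights and formulation used in the paper may differ.

```latex
\min_{D,\,A}\;\; \frac{1}{2}\,\lVert X - D A \rVert_F^2
  \;+\; \lambda \sum_{i} \sum_{g \in \mathcal{T}} w_g \,\lVert \alpha_{i,g} \rVert_2
```

Here the columns of X are region features, each column α_i of A codes one region, and 𝒯 is a set of coefficient groups that are either nested or disjoint (i.e., arranged as a tree), mirroring the feature-point/region/image hierarchy; taking all groups as singletons recovers the usual ℓ1 penalty.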
S1077314213000635 | We propose a novel calibration method for catadioptric systems made up of an axial symmetrical mirror and a pinhole camera with its optical center located at the mirror axis. The calibration estimates the relative camera/mirror position and the extrinsic rotation and translation w.r.t. the world frame. The procedure requires a single image of a (possibly planar) calibration object. We show how most of the calibration parameters can be estimated using linear methods (Direct-Linear-Transformation algorithm) and cross-ratio. Two remaining parameters are obtained by using non-linear optimization. We present experimental results on simulated and real images. | Calibration of mirror position and extrinsic parameters in axial non-central catadioptric systems |
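The linear (DLT-style) estimation step mentioned above is, in its generic form, a null-space problem solved by SVD. The sketch below shows the textbook DLT for a planar homography from point correspondences; it is meant only to illustrate the kind of linear estimation involved, not the paper's specific mirror/camera parameterization or its cross-ratio step.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the standard DLT:
    two linear equations per correspondence, null space taken by SVD.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # right singular vector of the smallest singular value
    return H / H[2, 2]
```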
S1077314213000647 | In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object’s 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region’s motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method’s application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. | 2D/3D image registration using regression learning |
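The registration stage can be caricatured as "learn a linear map from intensity residues to parameter updates, then iterate". The sketch below assumes a hypothetical `render(p)` function that returns a flattened DRR for a parameter vector p, and learns a single-scale operator by least squares over random perturbations; the actual method uses a learned low-order shape space and multi-scale regressions, so this is only a schematic.

```python
import numpy as np

def learn_linear_operator(render, p0, n_samples=200, scale=1.0, rng=None):
    """Fit a matrix W mapping projection-intensity residues to parameter updates.
    `render(p)` (assumed) returns the DRR, as a flat vector, for parameters p."""
    if rng is None:
        rng = np.random.default_rng(0)
    I0 = render(p0)
    dP = rng.normal(0.0, scale, size=(n_samples, p0.size))   # sampled parameter offsets
    dI = np.stack([render(p0 + dp) - I0 for dp in dP])        # corresponding residues
    # Least-squares solve of dI @ W.T ≈ dP (minimum-norm if underdetermined).
    X, *_ = np.linalg.lstsq(dI, dP, rcond=None)
    return X.T                                                # shape: (n_params, n_pixels)

def register(render, target, p_init, W, n_iter=10):
    """Iteratively correct the parameters from the current intensity residue."""
    p = p_init.copy()
    for _ in range(n_iter):
        residue = target - render(p)
        p = p + W @ residue
    return p
```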