FileName (string, 17 chars) | Abstract (string, 163–6.01k chars) | Title (string, 12–421 chars) |
---|---|---|
S0262885614001012 | Facial expression is central to human experience. Its efficiency and valid measurement are challenges that automated facial image analysis seeks to address. Most publicly available databases are limited to 2D static images or video of posed facial behavior. Because posed and un-posed (aka “spontaneous”) facial expressions differ along several dimensions including complexity and timing, well-annotated video of un-posed facial behavior is needed. Moreover, because the face is a three-dimensional deformable object, 2D video may be insufficient, and therefore 3D video archives are required. We present a newly developed 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both 2D and 3D domains. To the best of our knowledge, this new database is the first of its kind for the public. The work promotes the exploration of 3D spatiotemporal features in subtle facial expression, better understanding of the relation between pose and motion dynamics in facial action units, and deeper understanding of naturally occurring facial action. | BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database |
S0262885614001024 | In this paper, initially, the impact of mask spoofing on face recognition is analyzed. For this purpose, one baseline technique is selected for both 2D and 3D face recognition. Next, novel countermeasures, which are based on the analysis of different shape, texture and reflectance characteristics of real faces and mask faces, are proposed to detect mask spoofing. In this paper, countermeasures are developed using both 2D data (texture images) and 3D data (3D scans) available in the mask database. The results show that each of the proposed countermeasures is successful in detecting mask spoofing, and the fusion of these countermeasures further improves the results compared to using a single countermeasure. Since there is no publicly available mask database, studies on mask spoofing are limited. This paper provides significant results by proposing novel countermeasures to protect face recognition systems against mask spoofing. | Mask spoofing in face recognition and countermeasures |
S0262885614001036 | Visual speech information plays an important role in automatic speech recognition (ASR) especially when audio is corrupted or even inaccessible. Despite the success of audio-based ASR, the problem of visual speech decoding remains widely open. This paper provides a detailed review of recent advances in this research area. In comparison with the previous survey [97] which covers the whole ASR system that uses visual speech information, we focus on the important questions asked by researchers and summarize the recent studies that attempt to answer them. In particular, there are three questions related to the extraction of visual features, concerning speaker dependency, pose variation and temporal information, respectively. Another question is about audio-visual speech fusion, considering the dynamic changes of modality reliabilities encountered in practice. In addition, the state-of-the-art on facial landmark localization is briefly introduced in this paper. Those advanced techniques can be used to improve the region-of-interest detection, but have been largely ignored when building a visual-based ASR system. We also provide details of audio-visual speech databases. Finally, we discuss the remaining challenges and offer our insights into the future research on visual speech decoding. | A review of recent advances in visual speech decoding |
S0262885614001048 | Locally affine transformation with globally elastic interpolation is a common strategy for non-rigid registration. Current techniques improve the registration accuracy by only processing the sub-images that contain well-defined structures quantified by Moran's spatial correlation. As an indicator, Moran's metric successfully excludes noisy structures that result in misleading global optimum in terms of similarity. However, some well-defined structures with intensity only varying in one direction may also cause mis-registration. In this paper, we propose a new metric based on the response of a similarity function to quantify the ability of being correctly registered for each sub-image. Using receiver operating characteristic analysis, we show that the proposed metric more accurately reflects such ability than Moran's metric. Incorporating the proposed metric into a hierarchical non-rigid registration scheme, we show that registration accuracy is improved relative to Moran's metric. | Non-rigid registration using gradient of self-similarity response |
S0262885614001061 | This paper introduces a novel topic model for learning a robust object model. In this hierarchical model, the layout topic is used to capture the local relationships among a limited number of parts, while the part topic is used to locate the potential part regions. Naturally, an object model is represented as a probability distribution over a set of parts with certain layouts. Rather than a monolithic model, our object model is composed of multiple sub-category models designed to capture the significant variations in appearance and shape of an object category. Given a set of object instances with a bounding box, an iterative learning process is proposed to divide them into several sub-categories and learn the corresponding sub-category models without any supervision. Through an experiment in object detection, the learned object model is examined and the results highlight the advantages of our method compared with others. | Automatic sub-category partitioning and parts localization for learning a robust object model |
S0262885614001073 | Improper camera orientation produces convergent vertical lines (keystone distortion) and skewed horizon lines (horizon distortion) in digital pictures; a-posteriori processing is then necessary to obtain appealing pictures. We show here that, after accurate calibration, the camera on-board accelerometer can be used to automatically generate an alternative perspective view from a virtual camera, leading to images with residual keystone and horizon distortions that are essentially imperceptible at visual inspection. Furthermore, we describe the uncertainty in the position of each pixel in the corrected image with respect to the accelerometer noise. Experimental results show a similar accuracy for a smartphone and for a digital reflex camera. The method can find application in consumer imaging devices as well as in the computer vision field, especially when reference vertical and horizontal features are not easily detectable in the image. | Accelerometer-based correction of skewed horizon and keystone distortion in digital photography |
S0262885614001085 | Gender and ethnicity are both key demographic attributes of human beings and play a fundamental role in automatic machine-based face analysis; consequently, face-based gender and ethnicity classification has received increasing attention in recent years. In this paper, we present an effective and efficient approach to this issue by combining boosted local texture and shape features extracted from 3D face models, in contrast to existing methods that depend only on either 2D texture or 3D shape of faces. In order to comprehensively represent the difference between different genders or ethnicities, we propose a novel local descriptor, namely local circular patterns (LCP). LCP improves the widely utilized local binary patterns (LBP) and its variants by replacing the binary quantization with a clustering-based one, resulting in higher discriminative power as well as better robustness to noise. Meanwhile, the subsequent Adaboost-based feature selection finds the most discriminative gender- and race-related features and assigns them different weights to highlight their importance in classification, which not only further raises the performance but also reduces the time and memory cost. Experimental results achieved on the FRGC v2.0 and BU-3DFE datasets clearly demonstrate the advantages of the proposed method. | Local circular patterns for multi-modal facial gender and ethnicity classification |
S0262885614001097 | The difficulty face recognition (FR) systems have in operating efficiently in diverse environments, e.g. day and night time, is mitigated by employing sensors covering different spectral bands (i.e. visible and infrared). Biometric practitioners have identified a framework of band-specific algorithms, which can contribute to both assessment and intervention. While these methods are proven to improve identification performance, they traditionally result in solutions that fail to work efficiently across multiple spectra. In this work, we designed and developed an efficient, fully automated, direct matching-based FR approach that can operate on face data captured using either visible or passive infrared (IR) sensors. Thus, it can be applied in both daytime and nighttime environments. First, input face images are geometrically normalized using our pre-processing pipeline prior to feature extraction. Then, face-based features including wrinkles, veins, as well as edges of facial characteristics, are detected and extracted for each operational band (visible, MWIR, and LWIR). Finally, global and local face-based matching is applied, before fusion is performed at the score level. Our approach achieves a rank-1 identification rate of at least 99.43%, regardless of the spectrum of operation. This suggests that our approach performs better than other tested standard commercial and academic face-based matchers, on all spectral bands used. | A spectral independent approach for physiological and geometric based face recognition in the visible, middle-wave and long-wave infrared bands |
S0262885614001103 | Recent studies demonstrate the success of Bag-of-Features (BoF) frameworks for video-based human action recognition. The detection and description of local interest regions are two fundamental problems in the BoF framework. In this paper, we propose a motion boundary based sampling strategy and spatial-temporal (3D) co-occurrence descriptors for action video representation and recognition. Our sampling strategy is partly inspired by the recent success of dense trajectory (DT) based features [Wang et al., 2013] for action recognition. Compared with DT, we densely sample spatial-temporal cuboids along motion boundaries, which greatly reduces the number of valid trajectories while preserving discriminative power. Moreover, we develop a set of 3D co-occurrence descriptors which take into account the spatial-temporal context within local cuboids and deliver rich information for recognition. Furthermore, we decompose each 3D co-occurrence descriptor at pixel level and bin level and integrate the decomposed components with a multi-channel framework, which improves the performance significantly. To evaluate the proposed methods, we conduct extensive experiments on three benchmarks including KTH, YouTube and HMDB51. The results show that our sampling strategy significantly reduces the computational cost of point tracking without degrading performance. Meanwhile, we achieve performance superior to state-of-the-art methods, reporting 95.6% on KTH, 87.6% on YouTube and 51.8% on HMDB51. | Motion boundary based sampling and 3D co-occurrence descriptors for action recognition |
S0262885614001115 | Texture classification is one of the most important tasks in the computer vision field and has been extensively investigated over the last several decades. Previous texture classification methods mainly used template-matching-based methods such as the Support Vector Machine and k-Nearest-Neighbour for classification. Given enough training images, state-of-the-art texture classification methods can achieve very high classification accuracies on some benchmark databases. However, when the number of training images is limited, which usually happens in real-world applications because of the high cost of obtaining labelled data, the classification accuracies of those state-of-the-art methods deteriorate due to overfitting. In this paper, we aim to develop a novel framework that can correctly classify texture images with only a small number of training images. By taking into account the repetition and sparsity properties of textures, we propose a sparse representation based multi-manifold analysis framework for texture classification from few training images. A set of new training samples is generated from each training image by a scale and spatial pyramid, and the training samples belonging to each class are then modelled by a manifold based on sparse representation. We learn a dictionary of sparse representation and a projection matrix for each class and classify the test images based on the projected reconstruction errors. The framework provides a more compact model than template-matching-based texture classification methods and mitigates the overfitting effect. Experimental results show that the proposed method achieves reasonably high generalization capability even with as few as 3 training images, and significantly outperforms state-of-the-art texture classification approaches on three benchmark datasets. | Sparse representation with multi-manifold analysis for texture classification from few training images |
S0262885614001127 | This paper proposes a novel robust texture descriptor based on Gaussian Markov random fields (GMRFs). A spatially localized parameter estimation technique using local linear regression is performed, and the distributions of local parameter estimates are constructed to formulate the texture features. The inconsistencies arising in localized parameter estimation are addressed by applying a generalized inverse, regularization and an estimation window size selection criterion. The texture descriptors are named local parameter histograms (LPHs) and are used in texture segmentation with the k-means clustering algorithm. The segmentation results on general texture datasets demonstrate that LPH descriptors significantly improve the performance of classical GMRF features and achieve better results than state-of-the-art texture descriptors based on local feature distributions. Impressive natural image segmentation results are also achieved, and comparisons to other standard natural image segmentation algorithms are presented. LPH descriptors produce promising texture features that integrate both statistical and structural information about a texture. The region boundary localization can be further improved by integrating colour information and using advanced segmentation algorithms. | Gaussian Markov random field based improved texture descriptor for image segmentation |
S0262885614001139 | Given a surveillance video of a moving person, we present a novel method of estimating the layout of a cluttered indoor scene. We propose the idea that trajectories of a moving person can be used to generate features to segment an indoor scene into different areas of interest. We assume a static uncalibrated camera. Using pixel-level color and perspective cues of the scene, each pixel is assigned to a particular class: a sitting place, the ground floor, or static background areas such as walls and ceiling. The pixel-level cues are integrated with the global topological order of the classes (e.g., sitting objects and background areas lie above the ground floor) into a conditional random field via an ordering constraint. The proposed method yields very accurate segmentation results on challenging real-world scenes. We focus on videos with people walking in the scene and show the effectiveness of our approach through quantitative and qualitative results. The proposed method shows better estimation results than state-of-the-art scene layout estimation methods. We are able to correctly segment 90.3% of background, 89.4% of sitting areas and 74.7% of the ground floor. | Estimating layout of cluttered indoor scenes using trajectory-based priors |
S0262885614001280 | This paper deals with the problem of estimating the human upper body orientation. We propose a framework which integrates estimation of the human upper body orientation and the human movements. Our human orientation estimator utilizes a novel approach which hierarchically employs partial least squares-based models of the gradient and texture features, coupled with the random forest classifier. The movement predictions are done by projecting detected persons into 3D coordinates and running an Unscented Kalman Filter-based tracker. The body orientation results are then fused with the movement predictions to build a more robust estimation of the human upper body orientation. We carry out comprehensive experiments and provide comparison results to show the advantages of our system over the other existing methods. | Partial least squares-based human upper body orientation estimation with combined detection and tracking |
S0262885614001292 | This paper presents an unsupervised deep learning framework that derives spatio-temporal features for human–robot interaction. The respective models extract high-level features from low-level ones through a hierarchical network, viz. the Hierarchical Temporal Memory (HTM), providing at the same time a solution to the curse of dimensionality in shallow techniques. The presented work incorporates the tensor-based framework within the operation of the nodes and thus enhances the feature derivation procedure. This is due to the fact that tensors allow the preservation of the initial data format and their respective correlations and, moreover, attain more compact representations. The computational nodes form spatial and temporal groups by exploiting multilinear algebra and subsequently express the samples according to those groups in terms of proximity. This generic framework may be applied to a diverse range of visual data; it has been examined on sequences of color and depth images, exhibiting remarkable performance. | A tensor-based deep learning framework |
S0262885614001309 | Fully automatic annotation of tennis games using broadcast video is a task with great potential but enormous challenges. In this paper we describe our approach to this task, which integrates computer vision, machine listening, and machine learning. At the low-level processing stage, we improve upon our previously proposed state-of-the-art tennis ball tracking algorithm and employ audio signal processing techniques to detect key events and construct features for classifying the events. At the high-level analysis stage, we model event classification as a sequence labelling problem, and investigate four machine learning techniques using simulated event sequences. Finally, we evaluate our proposed approach on three real-world tennis games, and discuss the interplay between audio, vision and learning. To the best of our knowledge, our system is the only one that can annotate tennis games at such a detailed level. | Automatic annotation of tennis games: An integration of audio, vision, and learning |
S0262885614001322 | Extracting local keypoints and keypoint descriptions from images is a primary step for many computer vision and image retrieval applications. In the literature, many researchers have proposed methods for representing local texture around keypoints with varying levels of robustness to photometric and geometric transformations. Gradient-based descriptors such as the Scale Invariant Feature Transform (SIFT) are among the most consistent and robust descriptors. The SIFT descriptor, a 128-element vector consisting of multiple gradient histograms computed from local image patches around a keypoint, is widely considered as the gold standard keypoint descriptor. However, SIFT descriptors require at least 128 bytes of storage per descriptor. Since images are typically described by thousands of keypoints, it may require more space to store the SIFT descriptors for an image than the original image itself. This may be prohibitive in extremely large-scale applications and applications on memory-constrained devices such as tablets and smartphones. In this paper, with the goal of reducing the memory requirements of keypoint descriptors such as SIFT, without affecting their performance, we propose BIG-OH, a simple yet extremely effective method for binary quantization of any descriptor based on gradient orientation histograms. BIG-OH's memory requirements are very small—when it uses SIFT's default parameters for the construction of the gradient orientation histograms, it only requires 16 bytes per descriptor. BIG-OH quantizes gradient orientation histograms by computing a bit vector representing the relative magnitudes of local gradients associated with neighboring orientation bins. In a series of experiments on keypoint matching with different types of keypoint detectors under various photometric and geometric transformations, we find that the quantized descriptor has performance comparable to or better than other descriptors, including BRISK, CARD, BRIEF, D-BRIEF, SQ, and PCA-SIFT. Our experiments also show that BIG-OH is extremely effective for image retrieval, with modestly better performance than SIFT. BIG-OH's drastic reduction in memory requirements, obtained while preserving or improving the image matching and image retrieval performance of SIFT, makes it an excellent descriptor for large image databases and applications running on memory-constrained devices. | BIG-OH: BInarization of gradient orientation histograms |
S0262885614001334 | This paper proposes a novel approach to recognize object and scene categories in depth images. We introduce a Bag of Words (BoW) representation in 3D, the Selective 3D Spatial Pyramid Matching Kernel (3DSPMK). It starts by quantizing 3D local descriptors, computed from point clouds, to build a vocabulary of 3D visual words. This codebook is used to build the 3DSPMK, which partitions a working volume into fine sub-volumes and computes a hierarchical weighted sum of histogram intersections of visual words at each level of the 3D pyramid structure. With the aim of increasing both the classification accuracy and the computational efficiency of the kernel, we propose two selective hierarchical volume decomposition strategies, based on representative and discriminative sub-volume selection processes, which drastically reduce the portion of the pyramid to be considered. Results on different RGBD datasets show that our approaches obtain state-of-the-art results for both object recognition and scene categorization. | Recognizing in the depth: Selective 3D Spatial Pyramid Matching Kernel for object and scene categorization |
S0262885614001346 | Recently, sparse representation has been applied to object tracking, where each candidate target is approximately represented as a sparse linear combination of target templates. In this paper, we present a new tracking algorithm based on sparse representation which is faster and more robust than other tracking algorithms. First, through an analysis of many typical tracking examples with various degrees of corruption, we model the corruption as a Laplacian distribution. Then, a LAD–Lasso optimisation model is proposed based on Bayesian Maximum A Posteriori (MAP) estimation theory. Compared with the L1 Tracker and APG-L1 Tracker, the number of optimisation variables is reduced greatly; it is equal to the number of target templates, regardless of the dimensions of the feature. Finally, we use the Alternating Direction Method of Multipliers (ADMM) to solve the proposed optimisation problem. Experiments on some challenging sequences demonstrate that our proposed method performs better than state-of-the-art methods in terms of accuracy and robustness. | Robust object tracking using least absolute deviation |
S0262885614001358 | This paper introduces an adaptive visual tracking method that combines the adaptive appearance model and the optimization capability of the Markov decision process. Most tracking algorithms are limited due to variations in object appearance from changes in illumination, viewing angle, object scale, and object shape. This paper is motivated by the fact that tracking performance degradation is caused not only by changes in object appearance but also by the inflexible controls of tracker parameters. To the best of our knowledge, optimization of tracker parameters has not been thoroughly investigated, even though it critically influences tracking performance. The challenge is to equip an adaptive tracking algorithm with an optimization capability for a more flexible and robust appearance model. In this paper, the Markov decision process, which has been applied successfully in many dynamic systems, is employed to optimize an adaptive appearance model-based tracking algorithm. The adaptive visual tracking is formulated as a Markov decision process based dynamic parameter optimization problem with uncertain and incomplete information. The high computation requirements of the Markov decision process formulation are solved by the proposed prioritized Q-learning approach. We carried out extensive experiments using realistic video sets, and achieved very encouraging and competitive results. | Adaptive visual tracking using the prioritized Q-learning algorithm: MDP-based parameter learning approach |
S0262885614001371 | The rotation, scaling and translation invariant property of image moments has high significance in image recognition. Legendre moments, as classical orthogonal moments, have been widely used in image analysis and recognition. Since Legendre moments are defined in Cartesian coordinates, rotation invariance is difficult to achieve. In this paper, we first derive two types of transformed Legendre polynomials: substituted and weighted radial shifted Legendre polynomials. Based on these two types of polynomials, two radial orthogonal moments, named substituted radial shifted Legendre moments and weighted radial shifted Legendre moments (SRSLMs and WRSLMs), are proposed. The proposed moments are orthogonal in the polar coordinate domain and can be thought of as generalized and orthogonalized complex moments. They have better image reconstruction performance, lower information redundancy and higher noise robustness than existing radial orthogonal moments. Finally, a mathematical framework for obtaining the rotation, scaling and translation invariants of these two types of radial shifted Legendre moments is provided. Theoretical and experimental results show the superiority of the proposed methods in terms of image reconstruction capability and invariant recognition accuracy under both noisy and noise-free conditions. | Radial shifted Legendre moments for image analysis and invariant image recognition |
S0262885614001383 | This article focuses on the usability evaluation of biometric recognition systems in mobile devices. In particular, a behavioural modality has been used: the dynamic handwritten signature. Testing usability in behavioural modalities involves a big challenge due to the number of degrees of freedom that users have in interacting with sensors, as well as the variety of capture devices that may be used. In this context we propose a usability evaluation that allows users to interact freely with the system while minimizing errors at the same time. The participants signed on a smartphone with a stylus through the different phases in the use of a biometric system: training, enrolment and verification. In addition, a thorough study of automating the evaluation process has been carried out, so as to reduce the resources employed. The influence of users' stress has also been studied, to draw conclusions on its impact on the usability of biometric systems in scenarios where the user may experience a certain level of stress, such as in courts, banks or even while shopping. In brief, the results shown in this paper prove not only that the dynamic handwritten signature is a trustworthy solution for a large number of applications in the real world, but also that the usability evaluation of biometric systems can be carried out at lower cost and in shorter time. | Automatic usability and stress analysis in mobile biometrics |
S0262885614001395 | The basic goal of scene understanding is to organize the video into sets of events and to find the associated temporal dependencies. Such systems aim to automatically interpret activities in the scene, as well as detect unusual events that could be of particular interest, such as traffic violations and unauthorized entry. The objective of this work, therefore, is to learn behaviors of multi-agent actions and interactions in a semi-supervised manner. Using tracked object trajectories, we organize similar motion trajectories into clusters using the spectral clustering technique. This set of clusters depicts the different paths/routes, i.e., the distinct events taking place at various locations in the scene. A temporal mining algorithm is used to mine interval-based frequent temporal patterns occurring in the scene. A temporal pattern indicates a set of events that are linked based on their relationship with other events in the set, and we use Allen's interval-based temporal logic to describe these relations. The resulting frequent patterns are used to generate temporal association rules, which convey the semantic information contained in the scene. Our overall aim is to generate rules that govern the dynamics of the scene and perform anomaly detection. We apply the proposed approach on two publicly available complex traffic datasets and demonstrate considerable improvements over the existing techniques. | Dynamic scene understanding using temporal association rules |
S0262885614001401 | When a videometric system operates over a long period, temperature variations in the camera and its environment will affect the measurement results, and these effects cannot be ignored. How to eliminate or compensate for the effects of such temperature variations is a pressing problem. Starting with the image drift phenomenon, this paper presents an image-drift model that analyzes the relationship between variations in the camera parameters and drift in the coordinates of the image. A simplified model is then introduced by analyzing the coupling relationships among the variations in the camera parameters. Furthermore, a model of the relationship between the camera parameters and temperature variations is established with the system identification method. Finally, several compensation experiments on image drift are carried out, using the parameter–temperature relationship model calibrated with one arbitrary data set to compensate the others. The analyses and experiments demonstrate the feasibility and efficiency of the proposed method. | The effects of temperature variation on videometric measurement and a compensation method |
S0262885614001413 | In this paper, we present an accelerated system for segmenting flower images based on the graph-cut technique, which formulates the segmentation problem as an energy function minimization. The contribution of this paper is an improvement of the classically used energy function, which is composed of a data-consistent term and a boundary term: we integrate an additional data-consistent term based on the spatial prior and add gradient information to the boundary term. We then propose an automated coarse-to-fine segmentation method composed mainly of two levels: coarse segmentation and fine segmentation. First, the coarse segmentation level is based on minimizing the proposed energy function. Then, the fine segmentation is done by optimizing the energy function through the standard graph-cut technique. Experiments were performed on a subset of the Oxford flower database and the obtained results are compared to the reimplemented method of Nilsback et al. [1]. The evaluation shows that our method consumes less CPU time and has satisfactory accuracy compared with the above-mentioned method [1]. | Model-based graph-cut method for automatic flower segmentation with spatial constraints |
S0262885614001425 | The evaluation of the scale of an object in a cluttered background is a serious problem in computer vision. Most existing contour-based approaches to object detection address this problem by descriptor normalization or multi-scale searching, such as sliding-window searching, spatial pyramid models, etc. In addition, the Hough-voting framework can predict the scale of an object from meaningful fragments. However, utilizing scale-variant descriptors or complicated structures in these measures reduces the efficiency of detection. In the present paper, we propose a novel shape feature called the scale-invariant contour segment context (CSC). This feature is based on the angle between contour line segments and remains unchanged as scale varies. Most importantly, it evaluates the scale of objects located in cluttered images and simultaneously facilitates localization of the object boundary in unseen images. In this way, we need to focus on just the shape matching algorithm without considering the varying scale of the object in an image, a procedure which differs fundamentally from voting and sliding-window searching. We conduct experiments on the ETHZ shape dataset, the Weizmann horses dataset, and the bottle subset of the PASCAL datasets. The results confirm that the present model of object detection, based on CSC, outperforms state-of-the-art shape-based detection methods. | Scale-invariant contour segment context in object detection |
S0262885614001449 | Representation of facial expressions using continuous dimensions has been shown to be inherently more expressive and psychologically meaningful than using categorized emotions, and has thus gained increasing attention over recent years. Many sub-problems have arisen in this new field that remain only partially understood. Two of these are a comparison of the regression performance of different texture and geometric features, and the investigation of the correlations between continuous dimensional axes and basic categorized emotions. This paper presents empirical studies addressing these problems, and reports results from an evaluation of different methods for detecting spontaneous facial expressions within the arousal–valence (AV) dimensional space. The evaluation compares the performance of texture features (SIFT, Gabor, LBP) against geometric features (FAP-based distances), and the fusion of the two. It also compares the prediction of arousal and valence, obtained using the best fusion method, to the corresponding ground truths. Spatial distribution, shift, similarity, and correlation are considered for the six basic categorized emotions (i.e. anger, disgust, fear, happiness, sadness, surprise). Using the NVIE database, results show that the fusion of LBP and FAP features performs best. The results from the NVIE and FEEDTUM databases reveal novel findings about the correlations of arousal and valence dimensions to each of the six basic emotion categories. | Representation of facial expression categories in continuous arousal–valence space: Feature and correlation |
S0262885614001450 | One of the leading time of flight imaging technologies for depth sensing is based on Photonic Mixer Devices (PMD). In PMD sensors each pixel samples the correlation between emitted and received light signals. Current PMD cameras compute eight correlation samples per pixel in four sequential stages to obtain depth with invariance to signal amplitude and offset variations. With motion, PMD pixels capture different depths at each stage. As a result, correlation samples are not coherent with a single depth, producing artifacts. We propose to detect and remove motion artifacts from a single frame taken by a PMD camera. The algorithm we propose is very fast, simple and can be easily included in camera hardware. We recover depth of each pixel by exploiting consistency of the correlation samples and local neighbors of the pixel. In addition, our method obtains the motion flow of occluding contours in the image from a single frame. The system has been validated in real scenes using a commercial low-cost PMD camera and high speed dynamics. In all cases our method produces accurate results and it highly reduces motion artifacts. | Single frame correction of motion artifacts in PMD-based time of flight cameras |
S0262885614001462 | In this paper, a local approach for 3D object recognition is presented. It is based on the topological invariants provided by the critical points of the 3D object. The critical points and the links between them are represented by a set of size functions obtained after splitting the 3D object into portions. A suitable similarity measure is used to compare the sets of size functions associated with the 3D objects. In order to validate our approach's recognition performance, we used different collections of 3D objects. The obtained scores compare favourably with those of related work. | A local approach for 3D object recognition through a set of size functions |
S0262885614001474 | In this paper, a novel and effective lip-based biometric identification approach with the Discrete Hidden Markov Model Kernel (DHMMK) is developed. Lips are described by shape features (both geometrical and sequential) on two different grid layouts: rectangular and polar. These features are then specifically modeled by a DHMMK, and learnt by a support vector machine classifier. Our experiments are carried out in a ten-fold cross validation fashion on three different datasets: the GPDS-ULPGC Face Dataset, the PIE Face Dataset and the RaFD Face Dataset. Results show that our approach has achieved an average classification accuracy of 99.8%, 97.13%, and 98.10%, using only two training images per class, on these three datasets, respectively. Our comparative studies further show that the DHMMK achieved a 53% improvement over the baseline HMM approach. The comparative ROC curves also confirm the efficacy of the proposed lip contour based biometrics learned by DHMMK. We also show that the performance of linear and RBF SVMs is comparable under the framework of DHMMK. | Using a Discrete Hidden Markov Model Kernel for lip-based biometric identification |
S0262885614001565 | We describe an Eikonal-based algorithm for computing dense oversegmentation of an image, often called superpixels. This oversegmentation respects local image boundaries while limiting undersegmentation. The proposed algorithm relies on a region growing scheme, where the potential map used is not fixed and evolves during the diffusion. Refinement steps are also proposed to enhance the first oversegmentation at low cost. Quantitative comparisons on the Berkeley dataset show good performance on traditional metrics over current state-of-the-art superpixel methods. | Eikonal-based region growing for efficient clustering |
S0262885614001589 | This paper proposes a new method to extract a gait feature directly from a raw gait video. Space–Time Interest Points (STIPs) are detected where there are significant movements of the human body along both spatial and temporal directions in local spatio-temporal volumes of a raw gait video. Then, a histogram of STIP descriptors (HSD) is constructed as a gait feature. In the classification stage, the support vector machine (SVM) is applied to recognize gaits based on HSDs. Standard multi-class (i.e. multi-subject) classification can often be computationally infeasible at test time, when gait recognition is performed using every possible classifier (i.e. SVM model) trained for each individual subject. In this paper, attribute-based classification is applied to reduce the number of SVM models needed for recognizing each probe gait. This significantly reduces the test-time computational complexity while retaining or even improving the recognition accuracy. When compared with other existing methods in the literature, the proposed method is shown to have promising performance for the case of normal walking, and outstanding performance for walking with variations, such as carrying a bag or wearing different types of clothes. | Attribute-based learning for gait recognition using spatio-temporal interest points |
S0262885614001590 | In this paper, we study the problem of Face Recognition (FR) when using Single Sensor Multi-Wavelength (SSMW) imaging systems that operate in the Short-Wave Infrared (SWIR) band. The contributions of our work are fourfold. First, a SWIR database is collected using our developed SSMW system under the following scenario: Multi-Wavelength (MW) multi-pose images were captured with the camera focused at either 1150, 1350 or 1550 nm. Second, an automated quality-based score level fusion scheme is proposed for the classification of input MW images. Third, a weighted quality-based score level fusion scheme is proposed for the automated classification of full-frontal (FF) vs. non-full-frontal (NFF) face images. Fourth, a set of experiments is performed indicating that our proposed algorithms, for the classification of (i) multi-wavelength images and (ii) FF vs. NFF face images, are beneficial when designing different steps of multi-spectral FR systems, including face detection, eye detection and face recognition. We also determined that when our SWIR-based system is focused at 1350 nm, the identification performance increases compared to focusing the camera at any of the other available SWIR wavelengths. This outcome is particularly important for unconstrained FR scenarios, where imaging at 1550 nm, at long distances and in night-time environments, is preferable to other SWIR wavelengths. | Face recognition in the SWIR band when using single sensor multi-wavelength imaging systems |
S0262885614001607 | Visual tracking is an important task in various computer vision applications including visual surveillance, human computer interaction, event detection, and video indexing and retrieval. Recent state-of-the-art sparse representation (SR) based trackers show better robustness than many of the other existing trackers. One of the issues with these SR trackers is low execution speed. The particle filter framework, common to most of the existing SR trackers, is one of the major aspects responsible for slow execution. In this paper, we propose a robust interest point based tracker in an l1-minimization framework that runs in real time with performance comparable to state-of-the-art trackers. (An earlier brief version of this paper appeared in ICIP'13: R. Venkatesh Babu and P. Priti, “Interest points based object tracking via sparse representation”, in Proceedings of the International Conference on Image Processing (ICIP), Melbourne, Australia, 2013.) In the proposed tracker, the target dictionary is obtained from the patches around target interest points. Next, the interest points from the candidate window of the current frame are obtained. The correspondence between target and candidate points is obtained by solving the proposed l1-minimization problem. In order to prune noisy matches, a robust matching criterion is proposed, where only the reliable candidate points that mutually match with target and candidate dictionary elements are considered for tracking. The object is localized by measuring the displacement of these interest points. The reliable candidate patches are used for updating the target dictionary. The performance and accuracy of the proposed tracker are benchmarked with several complex video sequences. The tracker is found to be considerably fast compared to the reported state-of-the-art trackers. The proposed tracker is further evaluated for various local patch sizes, numbers of interest points and regularization parameters. The performance of the tracker for various challenges, including illumination change, occlusion, and background clutter, has been quantified with a benchmark dataset containing 50 videos. | Robust tracking with interest points: A sparse representation approach |
S0262885614001619 | This paper presents an accurate and efficient eye detection method using the discriminatory Haar features (DHFs) and a new efficient support vector machine (eSVM). The DHFs are extracted by applying a discriminating feature extraction (DFE) method to the 2D Haar wavelet transform. The DFE method is capable of extracting multiple discriminatory features for two-class problems based on two novel measure vectors and a new criterion in the whitened principal component analysis (PCA) space. The eSVM significantly improves the computational efficiency upon the conventional SVM for eye detection without sacrificing the generalization performance. Experiments on the Face Recognition Grand Challenge (FRGC) database and the BioID face database show that (i) the DHFs exhibit promising classification capability for eye detection problem; (ii) the eSVM runs much faster than the conventional SVM; and (iii) the proposed eye detection method achieves near real-time eye detection speed and better eye detection performance than some state-of-the-art eye detection methods. | Eye detection using discriminatory Haar features and a new efficient SVM |
S0262885614001620 | Accurate reconstruction of 3D geometrical shape from a set of calibrated 2D multiview images is an active yet challenging task in computer vision. The existing multiview stereo methods usually perform poorly in recovering deeply concave and thinly protruding structures, and suffer from several common problems like slow convergence, sensitivity to initial conditions, and high memory requirements. To address these issues, we propose a two-phase optimization method for generalized reprojection error minimization (TwGREM), where a generalized framework of reprojection error is proposed to integrate stereo and silhouette cues into a unified energy function. For the minimization of the function, we first introduce a convex relaxation on 3D volumetric grids which can be efficiently solved using variable splitting and Chambolle projection. Then, the resulting surface is parameterized as a triangle mesh and refined using surface evolution to obtain a high-quality 3D reconstruction. Our comparative experiments with several state-of-the-art methods show that the performance of TwGREM based 3D reconstruction is among the highest with respect to accuracy and efficiency, especially for data with smooth texture and sparsely sampled viewpoints. | Multiview stereo and silhouette fusion via minimizing generalized reprojection error |
S0262885614001632 | An algorithm for fitting multiple models that characterize the projective relationships between point-matches in pairs of (or single) images is proposed herein. Specifically, the problem of estimating multiple algebraic varieties that relate the projections of 3-dimensional (3D) points in one or more views is predominantly turned into a problem of inference over a Markov random field (MRF) using labels that include outliers and a set of candidate models estimated from subsets of the point matches. Thus, not only can the MRF trivially incorporate the errors of fit in singleton factors, but a sheer benefit of this approach is the ability to consider the interactions between data points. The proposed method (CSAMMFIT) refines the outlier posterior over the course of consecutive inference sweeps, until the process settles at a local minimum. The inference “engine” employed is a Markov Chain Monte Carlo (MCMC) method which samples new labels from clusters of data points. The advantage of this technique is that cluster formation can be manipulated to favor common label assignments between points related to each other by image-based criteria. Moreover, although CSAMMFIT uses a Potts-like pairwise factor, the inference algorithm allows for arbitrary prior formulations, thereby accommodating the need for more elaborate feature-based constraints. | Fitting multiple projective models using clustering-based Markov chain Monte Carlo inference |
S0262885614001644 | Feature correspondence lays the foundation for many tasks in computer vision and pattern recognition. In this paper, the directed structural model is utilized to represent the feature set, and the correspondence problem is then formulated as structural model matching. Compared with the undirected structural model, the proposed directed model provides more discriminating ability and invariance against rotation and scale transformations. Finally, the recently proposed convex–concave relaxation procedure (CCRP) is generalized to approximately solve the problem. Extensive experiments on synthetic and real data demonstrate the effectiveness of the proposed method. | Feature correspondence based on directed structural model matching |
S0262885614001668 | We propose a holistic approach to the problem of re-identification in an environment of distributed smart cameras. We model the re-identification process in a distributed camera network as a distributed multi-class classifier, composed of spatially distributed binary classifiers. We treat the problem of re-identification as an open-world problem, and address novelty detection and forgetting. As there are many tradeoffs in design and operation of such a system, we propose a set of evaluation measures to be used in addition to the recognition performance. The proposed concept is illustrated and evaluated on a new many-camera surveillance dataset and SAIVT-SoftBio dataset. | Visual re-identification across large, distributed camera networks |
S0262885614001681 | This paper presents yet another algorithm for finding polygonal approximations of digital planar curves; however, with a significant distinction: the vertices of an approximating polygon need not lie on the contour itself. This approach gives us more flexibility to reduce the approximation error of the polygon compared to the conventional way, where the vertices of the polygon are restricted to lie on the contour. To compute the approximation efficiently, we adaptively define a local neighborhood of each point on the contour. The vertices of the polygonal approximation are allowed to ‘move around’ in the neighborhoods. In addition, we demonstrate a general approach where the error measure of an already computed polygonal approximation can possibly be reduced further by vertex relocation, without increasing the number of dominant points. Moreover, the proposed method is non-parametric, requiring no parameter to set for any particular application. Suitability of the proposed algorithm is validated by testing on several databases and comparing with existing methods. | Optimized polygonal approximations through vertex relocations in contour neighborhoods |
S0262885614001693 | The Bag-of-Words (BoW) framework is well known in image classification. In the framework, there are two essential steps: 1) coding, which encodes local features using a visual vocabulary, and 2) pooling, which pools the responses of all features into an image representation. Many coding and pooling methods have been proposed, and how best to apply them under different conditions has become a practical problem. In this paper, to make better use of BoW in different applications, we study the relation between many typical coding methods and two popular pooling methods. Specifically, complete combinations of coding and pooling are evaluated over an extremely large range of vocabulary sizes (16 to 1M) on five primary and popular datasets. Three typical ones are 15 Scenes, Caltech 101 and PASCAL VOC 2007, while the other two large-scale ones are Caltech 256 and ImageNet. Based on this systematic evaluation, some interesting conclusions are drawn. Some conclusions extend previous viewpoints, while some are different but important for understanding the BoW model. Based on these conclusions, we provide detailed application criteria by evaluating coding and pooling in terms of precision, efficiency and memory requirements in different applications. We hope that this study can be helpful in evaluating different coding and pooling methods, that the conclusions can lead to a better understanding of BoW, and that the application criteria can be valuable for using BoW better in different applications. | How to use Bag-of-Words model better for image classification |
S0262885614001784 | Finger vein identification is a new biometric identification technology. While many existing works approach the problem using shape matching, which is a generative method, in this paper we introduce a joint discriminative and generative algorithm for the task. Our method considers both the discriminative appearance of local image patches and their generative spatial layout. The method is based on the popular vocabulary tree model, where we utilize the hidden leaf node layer to calculate a generative confidence to weight the discriminative vote from the leaf node. The training process remains the same as building a conventional vocabulary tree, while the prediction process utilizes a proposed point set matching method to support non-parametric patch layout matching. In this way, the entire model retains the efficiency of the vocabulary tree model, which is much lighter than other similar models such as the constellation model (Fergus et al., 2003). The overall estimation follows Bayesian theory. Experimental results show that our proposed joint model outperforms the purely generative or discriminative counterparts, and offers competitive performance compared with existing methods for both the vein authentication and recognition tasks. | Discriminative and generative vocabulary tree: With application to vein image authentication and recognition |
S0262885614001851 | This paper presents a novel stereo disparity estimation method, which combines three different cost metrics, defined using RGB information, the CENSUS transform, as well as Scale-Invariant Feature Transform coefficients. The selected cost metrics are aggregated based on an adaptive weight approach, in order to calculate their corresponding cost volumes. The resulting cost volumes are then merged into a combined one, following a novel two-phase strategy, which is further refined by exploiting scanline optimization. A mean-shift segmentation-driven approach is exploited to deal with outliers in the disparity maps. Additionally, low-textured areas are handled using disparity histogram analysis, which allows for reliable disparity plane fitting on these areas. Finally, an efficient two-step approach is introduced to refine disparity discontinuities. Experiments performed on the four images of the Middlebury benchmark demonstrate the accuracy of this methodology, which currently ranks first among published methods. Moreover, this algorithm is tested on 27 additional Middlebury stereo pairs for evaluating thoroughly its performance. The extended comparison verifies the efficiency of this work. | Enhanced disparity estimation in stereo images |
S0262885614001863 | Researchers have recently been performing region of interest detection in such applications as object recognition, object segmentation, and adaptive coding. In this paper, a novel region of interest detection model based on visually salient regions is introduced, utilizing frequency and space domain features in very high resolution remote sensing images. First, frequency domain features based on a multi-scale spectrum residual algorithm are extracted to yield intensity features. Next, we extract color and orientation features by generating space dynamic pyramids. Then, spectral features are obtained by analyzing spectral information content. In addition, a multi-scale feature fusion method is proposed to generate a saliency map. Finally, the detected visually salient regions are described using adaptive threshold segmentation. Compared with existing models, our model eliminates background information effectively and highlights the salient detected results with well-defined boundaries and shapes. Moreover, an experimental evaluation indicates that our model achieves promising detection accuracy. | Multi-scale hybrid saliency analysis for region of interest detection in very high resolution remote sensing images |
S0262885614001875 | 3D face recognition and emotion analysis play important roles in many fields of communication and edutainment. An effective facial descriptor, with higher discriminating capability for face recognition and higher descriptiveness for facial emotion analysis, is a challenging goal. However, in practical applications, descriptiveness and discrimination are independent and often contradict each other. 3D facial data provide a promising way to balance these two aspects. In this paper, a robust regional bounding spherical descriptor (RBSR) is proposed to facilitate 3D face recognition and emotion analysis. In our framework, we first segment a group of regions on each 3D facial point cloud by shape index and spherical bands on the human face. Then the corresponding facial areas are projected to regional bounding spheres to obtain our regional descriptor. Finally, a regional and global regression mapping (RGRM) technique is applied to the weighted regional descriptor to boost the classification accuracy. The three largest available databases, FRGC v2, CASIA and BU-3DFE, are used for the performance comparison, and the experimental results show consistently better performance for 3D face recognition and emotion analysis. | Robust regional bounding spherical descriptor for 3D face recognition and emotion analysis
S0262885615000086 | Surface defects are important factors in the surface quality of industrial products. Most traditional machine-vision-based methods for surface defect recognition have shortcomings such as a low detection rate of defects and a high rate of false alarms. Different types of defects carry distinctive information at particular directions and scales of their images, while traditional feature extraction methods, such as the wavelet transform, are unable to capture information in all directions. In this study, the Shearlet transform is introduced to provide an efficient multi-scale directional representation, and a general framework has been developed to analyze and represent surface defect images with anisotropic information. The metal surface images captured from production lines are decomposed into multiple directional subbands with the Shearlet transform, and features are extracted from all subbands and combined into a high-dimensional feature vector. Kernel Locality Preserving Projection is applied for dimension reduction of the feature vector. The proposed method is tested with surface images captured from different production lines, and the results show that the classification rates for surface defects of continuous casting slabs, hot-rolled steels, and aluminum sheets are 94.35%, 95.75% and 92.5% respectively. | Application of Shearlet transform to classification of surface defects for metals
S0262885615000098 | IDR/QR, an incremental dimension reduction algorithm based on linear discriminant analysis (LDA) and QR decomposition, has been successfully employed for feature extraction and incremental learning. IDR/QR can update the discriminant vectors with light computation when new training samples are inserted into the training data set. However, IDR/QR has two limitations: 1) IDR/QR can only process new samples one instance at a time, even if a chunk of training samples is available; and 2) an approximation trick is used in IDR/QR, so there exists a gap in performance between the incremental and batch IDR/QR solutions. To address these problems, in this paper we propose a new chunk IDR method which is capable of processing multiple data instances at a time and can accurately update the discriminant vectors when new data items are added dynamically. Experiments on several real databases demonstrate the effectiveness of the proposed algorithm over the original one. | Incremental learning from chunk data for IDR/QR
S0262885615000104 | In this paper, we propose a robust dense stereo reconstruction algorithm using a random walk with restart. The pixel-wise matching costs are aggregated into superpixels and the modified random walk with restart algorithm updates the matching cost for all possible disparities between the superpixels. In comparison to the majority of existing stereo methods using the graph cut, belief propagation, or semi-global matching, our proposed method computes the final reconstruction through the determination of the best disparity at each pixel in the matching cost update. In addition, our method also considers occlusion and depth discontinuities through the visibility and fidelity terms. These terms assist in the cost update procedure in the calculation of the standard smoothness constraint. The method results in minimal computational costs while achieving high accuracy in the reconstruction. We test our method on standard benchmark datasets and challenging real-world sequences. We also show that the processing time increases linearly in relation to an increase in the disparity search range. | Robust stereo matching using adaptive random walk with restart algorithm |
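The random-walk-with-restart update underlying the record above has a standard iterative form. Below is a minimal generic sketch in Python/NumPy, not the paper's superpixel-specific cost update; the affinity matrix W, restart probability, and tolerance are illustrative assumptions.

```python
import numpy as np

def random_walk_with_restart(W, seed, restart_prob=0.15, tol=1e-8, max_iter=1000):
    """Iterate r = (1 - c) * P^T r + c * e until convergence.

    W    : (n, n) non-negative affinity matrix (e.g., between superpixels).
    seed : index of the node whose score is being diffused.
    """
    n = W.shape[0]
    # Row-normalise W into a transition matrix P (guard against empty rows).
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    e = np.zeros(n)
    e[seed] = 1.0
    r = e.copy()
    for _ in range(max_iter):
        r_new = (1 - restart_prob) * P.T @ r + restart_prob * e
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r
```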
S0262885615000116 | We present a novel method that tackles the problem of facial landmarking in unconstrained conditions within the part-based framework. Part-based methods alternate the evaluation of local appearance models to produce a per-point response map and a shape fitting step which finds a valid face shape that maximises the sum of the per-point responses. Our approach focuses on obtaining better appearance models for the creation of the response maps, and it can be used in combination with any shape fitting strategy. Local appearance models need to tackle very heterogeneous data when dealing with in-the-wild imagery due to factors such as varying head poses, facial expressions, identity, lighting conditions, and image quality. Pose-wise experts are typically used in this scenario so that each expert deals with more homogeneous data. However, the computational cost at test time is significantly increased. Furthermore, choosing the right expert is not straightforward, which can lead to gross errors. We propose instead to dynamically select the training examples used for inference at test time: a global similarity measure selects the most adequate training examples, and a single test-sample-specific expert is created using a localised inference technique. To illustrate the validity of these ideas, we capitalise on the recently proposed use of regression to generate local appearance models. In particular, we use Gaussian processes, as their non-parametric nature easily allows for localised regression. This novel way of constructing the response maps is combined with two state-of-the-art standard shape fitting algorithms, the popular Constrained Local Models framework and the Consensus of Exemplars method. We validate our method on two publicly available datasets as well as on a cross-dataset experiment, showing a considerable performance improvement of the proposed approach. | Facial landmarking for in-the-wild images with local inference based on global appearance
S0262885615000128 | Video segmentation is a fundamental problem in computer vision and aims to extract meaningful entities from a video. One of the most useful cues in this quest is motion, as described by the trajectories of tracked points. In this paper we present a motion segmentation method attempting to address some of the major issues in the area. Namely, we propose an efficient framework where more complex motion models can be seamlessly integrated, both maintaining computational tractability and not penalizing non-translational motion. Moreover, we expose in depth the problem of object leakage due to occlusion and highlight that motion segmentation can be treated as a graph coloring problem. Our algorithm uses an approach based on graph theory and resolves occlusion cases in a robust manner. To endow our method with scalability, we follow the previously presented subsequence architecture and test it in a streaming setup. Extensive experiments demonstrate the flexibility and robustness of the method. The segmentation results are competitive with the state of the art. | Incorporating higher order models for occlusion resilient motion segmentation in streaming videos
S0262885615000141 | In this paper, a novel method is proposed for real-world pose-invariant face recognition from only a single image in a gallery. A 3D Facial Expression Generic Elastic Model (3D FE-GEM) is proposed to reconstruct a 3D model of each human face using only a single 2D frontal image. Then, for each person in the database, a Sparse Dictionary Matrix (SDM) is created from all face poses by rotating the 3D reconstructed models and extracting features from the rotated faces. Each SDM is subsequently rendered based on triplet angles of face poses. Before matching to the SDM, an initial estimate of the triplet angles of face poses is obtained for the probe face image using an automatic head pose estimation approach. Then, an array of the SDM is selected based on the estimated triplet angles for each subject. Finally, the selected arrays from the SDMs are compared with the probe image by sparse representation classification. The proposed method achieved convincing results in handling pose changes on the FERET, CMU PIE, LFW and video face databases compared to several state-of-the-art pose-invariant face recognition methods. | Unrestricted pose-invariant face recognition by sparse dictionary matrix
S0262885615000153 | This paper proposes a novel method to address the registration of images with affine transformation. Firstly, the Maximally Stable Extremal Region (MSER) detection method is performed on the reference image and the image to be registered, and a coarse affine transformation matrix between the two images is estimated from the matched MSER pairs. Two circular regions containing roughly the same image content are also obtained by fitting and normalizing the centroids of the matched MSERs from the two images. Secondly, a scale-invariant and approximately affine-invariant feature point detection algorithm based on Gabor filter decomposition and phase congruency is performed on the two coarsely aligned regions, yielding two feature point sets. Finally, the affine transformation matrix between the two feature point sets is obtained by using a probabilistic point set registration algorithm, and the final affine transformation matrix between the reference image and the image to be registered is computed from the coarse affine transformation matrix and the affine transformation matrix between the two feature point sets. Several sets of experiments demonstrate that our proposed method performs competitively with the classical scale-invariant feature transform (SIFT) method for images with scale changes, and performs better than the traditional MSER and Affine-SIFT (ASIFT) methods for images with affine distortions. Moreover, the proposed method shows higher computational efficiency and robustness to illumination change than some existing area-based or feature-based methods. | Registration of images with affine geometric distortion based on Maximally Stable Extremal Regions and phase congruency
S0262885615000165 | We propose a method to produce near laser-scan quality 3-D face models of a freely moving user with a low-cost, low resolution range sensor in real-time. Our approach does not require any prior knowledge about the geometry of a face and can produce faithful geometric models of any star-shaped object. We use a cylindrical representation, which enables us to efficiently process the 3-D mesh by applying 2-D filters. We use the first frame as a reference and incrementally build the model by registering each subsequent cloud of 3-D points to the reference using the ICP (Iterative Closest Point) algorithm implemented on a GPU (Graphics Processing Unit). The registered point clouds are merged into a single image through a cylindrical representation. The noise from the sensor and from the pose estimation error is removed with a temporal integration and a spatial smoothing of the successively incremented model. To validate our approach, we quantitatively compare our model to laser scans, and show comparable accuracy. (This paper extends the method presented in [15].) | Near laser-scan quality 3-D face reconstruction from a low-quality depth stream
S0262885615000232 | In this paper, we propose a method for face recognition using the two-dimensional discrete wavelet transform (2D-DWT) and a new patch strategy. Based on the average image of all training samples, and using an integral projection technique on the two top-level high-frequency sub-bands of the 2D-DWT, we propose a non-uniform patch strategy for the top-level low-frequency sub-band. This patch strategy better reflects the structure of the face image and better retains the integrity of local information. By applying the obtained patch strategy to all samples, we obtain patches of the training and testing samples; the final decision is then given by the nearest neighbor classifier with majority voting. Experiments are run on the AR, FERET, Extended Yale B and LFW face databases. The numerical results show that the new face recognition method outperforms the traditional 2D-DWT method and some state-of-the-art patch-based methods. | Non-uniform patch based face recognition via 2D-DWT
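The two ingredients named above (a two-level 2D-DWT and integral projections of the top-level high-frequency sub-bands) can be sketched as follows, assuming the PyWavelets package; the Haar wavelet and the use of row/column profiles as patch-boundary cues are illustrative assumptions, not the paper's exact strategy.

```python
import numpy as np
import pywt

def top_level_projections(face, wavelet="haar", level=2):
    """Two-level 2D-DWT of a grey face image; integral projections of the
    top-level high-frequency sub-bands hint at where to split the
    low-frequency sub-band into non-uniform patches."""
    coeffs = pywt.wavedec2(face.astype(float), wavelet, level=level)
    cA = coeffs[0]           # top-level low-frequency sub-band
    cH, cV, _cD = coeffs[1]  # top-level high-frequency sub-bands
    # Integral (marginal) projections: row profile of cH, column profile of cV.
    row_profile = np.abs(cH).sum(axis=1)
    col_profile = np.abs(cV).sum(axis=0)
    # Peaks in the profiles mark strong horizontal/vertical facial structure
    # (eyes, nose, mouth) and can serve as patch boundaries for cA.
    return cA, row_profile, col_profile
```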
S0262885615000311 | Wide field-of-view panoramic videos have recently become popular due to the availability of high-resolution displays. These panoramic videos are generated by stitching video frames captured from a panoramic video acquisition system, typically comprising multiple video cameras arranged on a static or mobile platform. A mobile panoramic video acquisition system may suffer from global mechanical vibrations as well as independent inter-camera vibrations, resulting in a jittery panoramic video. While existing stabilization schemes generally tackle single-camera vibrations, they do not account for these inter-camera vibrations. In this paper, we propose a video stabilization technique for multi-camera panoramic videos under the consideration that independent jitter may be exhibited by the content of each camera. The proposed method comprises three steps: the first step removes the global jitter in the video by estimating collective motion and subsequently removing its high-frequency component. The second step removes the independent, i.e. local, jitter of each camera by estimating the motion of each camera's content separately. Pixels located in the overlapping regions of the panoramic video are contributed by neighboring cameras; therefore, the estimated camera motion for these pixels is weighted using the blend masks generated by the stitching process. The final step applies local geometric warping to the stitched frames and removes any residual jitter induced by parallax. Experimental results show that the proposed scheme performs better than existing panoramic stabilization schemes. | Stabilization of panoramic videos from mobile multi-camera platforms
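The first step above (removing the high-frequency component of the collective motion) is typically implemented as low-pass filtering of the cumulative motion trajectory. A minimal sketch under that assumption; the moving-average filter and window size are illustrative choices.

```python
import numpy as np

def jitter_correction(frame_shifts, window=15):
    """Split estimated per-frame motion into intended path plus jitter.

    frame_shifts : (n, 2) array of inter-frame (dx, dy) estimates.
    Returns the per-frame correction that cancels the high-frequency
    component of the cumulative trajectory.
    """
    trajectory = np.cumsum(frame_shifts, axis=0)
    kernel = np.ones(window) / window
    smooth = np.column_stack([
        np.convolve(trajectory[:, i], kernel, mode="same") for i in range(2)
    ])
    return smooth - trajectory  # warp each frame by this offset to stabilise
```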
S0262885615000323 | Sudden illumination changes are a fundamental problem in background modeling applications. Most strategies to solve it are based on determining the particular form of the color transformation which the pixels undergo when an illumination change occurs. Here we present an approach which does not assume any specific form of the color transformation. It is based on a quantitative assessment of the smoothness of the local color transformation from one frame to the background model. In addition to this, an assessment of the obtained illumination states of the pixels is carried out with the help of fuzzy logic. Experimental results are presented, which demonstrate the performance of our approach in a range of situations. | Local color transformation analysis for sudden illumination change detection |
S0262885615000335 | Recent research trends in Content-based Video Retrieval have shown topic models to be an effective tool to deal with the semantic gap challenge. In this scenario, this paper has a dual target: (1) to study how the use of different topic models (pLSA, LDA and FSTM) affects video retrieval performance; (2) to present a novel incremental topic model (IpLSA) that copes with incremental scenarios in an effective and efficient way. A comprehensive comparison among these four topic models using two different retrieval systems and two reference benchmarking video databases is provided. Experiments revealed that pLSA is the best model in sparse conditions, LDA tends to outperform the rest of the models in a dense space, and IpLSA is able to work properly in both cases. | Incremental probabilistic Latent Semantic Analysis for video retrieval
S0262885615000347 | In robot localization, particle filtering can estimate the position of a robot in a known environment with the help of sensor data. In this paper, we present an approach based on particle filtering, for accurate stereo matching. The proposed method consists of three parts. First, we utilize multiple disparity maps in order to acquire a very distinctive set of features called landmarks, and then we use segmentation as a grouping technique. Secondly, we apply scan line particle filtering using the corresponding landmarks as a virtual sensor data to estimate the best disparity value. Lastly, we reduce the computational redundancy of particle filtering in our stereo correspondence with a Markov chain model, given the previous scan line values. More precisely, we assist particle filtering convergence by adding a proportional weight in the predicted disparity value estimated by Markov chains. In addition to this, we optimize our results by applying a plane fitting algorithm along with a histogram technique to refine any outliers. This work provides new insights into stereo matching methodologies by taking advantage of global geometrical and spatial information from distinctive landmarks. Experimental results show that our approach is capable of providing high-quality disparity maps comparable to other well-known contemporary techniques. | A stereo matching approach based on particle filters and scattered control landmarks |
S0262885615000360 | In this paper, we address the document image binarization problem with a three-stage procedure. First, possible stains and general document background information are removed from the image through a background removal stage. The remaining misclassified background and character pixels are then separated using a Local Co-occurrence Mapping, local contrast and a two-state Gaussian Mixture Model. Finally, some isolated misclassified components are removed by a morphology operator. The proposed scheme offers robust and fast performance, especially for both handwritten and printed documents, which compares favorably with other binarization methods. | Document image binarization using local features and Gaussian mixture modeling |
S0262885615000372 | This paper describes how to generate optimal projection patterns to supplement general stereo camera systems. In contrast to structured light systems, which must detect and decode the projected patterns to compute depth, active stereo systems utilize the projected patterns only as auxiliary information in the correspondence search. The concept of non-recurring De Bruijn sequences is introduced, and several algorithms based on the non-recurring De Bruijn sequence are designed to build optimized projection patterns for several stereo parameters. When only the search window size of a stereo system is given, we show that a non-recurring De Bruijn sequence with corresponding parameters makes the longest functional pattern, and we present experimental results using real scenes to show the effectiveness of the proposed projection patterns. Additionally, if the pattern length is given in the form of a maximum disparity search range, an algorithm using a branch-and-bound search scheme to find an optimal sub-sequence of a non-recurring De Bruijn sequence is proposed. | Optimized projection patterns for stereo systems
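For reference, a standard De Bruijn sequence B(k, n) can be generated with the classic recursive (FKM) algorithm sketched below; the "non-recurring" variant introduced in the record above imposes additional constraints that are not reproduced here.

```python
def de_bruijn(k, n):
    """Standard De Bruijn sequence B(k, n): every length-n string over a
    k-symbol alphabet occurs exactly once when the sequence is read
    cyclically."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

print(de_bruijn(2, 3))  # -> [0, 0, 0, 1, 0, 1, 1, 1]
```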
S0262885615000384 | Various visual tracking approaches have been proposed for robust target tracking, among which using a sparse representation of the tracking target yields promising performance. Some earlier works in this line used a fixed subset of features to compress the target's appearance, which limits the capacity to model the target against the background and cannot accommodate appearance changes over long periods of time. In this paper, we propose a visual tracking method that models targets with online-learned sparse features. We first extract high-dimensional Haar-like features as an over-complete basis set, and then solve the feature selection problem in an efficient L1-regularized sparse-coding process. The selected low-dimensional representation best discriminates the target from its neighboring background. Next we use a naive Bayesian classifier to select the most likely target candidate in a binary classification process. The online feature selection process is triggered when significant appearance changes are identified by a thresholding strategy. In this way, our proposed method can handle long tracking tasks. At the same time, our comprehensive experimental evaluation shows that the proposed method achieves excellent running speed and higher accuracy than many state-of-the-art approaches. | Visual tracking based on online sparse feature learning
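A minimal sketch of the L1-regularized feature selection step described above, assuming scikit-learn's Lasso; the alpha value, the number of retained features, and the +1/-1 labelling of target versus background patches are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_sparse_features(X, y, n_keep=50, alpha=0.01):
    """Keep the Haar-like feature dimensions with the largest Lasso weights.

    X : (n_samples, n_features) high-dimensional Haar-like responses.
    y : (n_samples,) +1 for target patches, -1 for background patches.
    """
    model = Lasso(alpha=alpha, max_iter=5000).fit(X, y)
    idx = np.argsort(-np.abs(model.coef_))[:n_keep]
    return idx  # indices of the selected low-dimensional representation
```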
S0262885615000475 | With the increasing number of videos all over the Internet and the increasing number of cameras looking at people around the world, one of the most interesting applications is human activity recognition in videos. Much research has been conducted in the literature for this purpose, but recognizing activities in a video under unrestricted conditions remains a challenging problem. Moreover, finding the spatio-temporal location of the activity in the video is another issue. In this paper, we present a method based on a non-negative matrix completion framework that learns to label videos with activity classes and localizes the activity of interest spatio-temporally throughout the video. This approach has a multi-label weakly supervised setting for activity detection, with a convex optimization procedure. The experimental results show that the proposed approach is competitive with the state-of-the-art methods. | Non-negative matrix completion for action detection
S0262885615000487 | In this paper, a tracking method based on sequential Bayesian inference is proposed. The proposed method focuses on solving both the problem of tracking under partial occlusions and the problem of non-rigid object tracking in real-time on a desktop personal computer (PC). The proposed method is mainly composed of two parts: (1) modeling the target object using elastic structure of local patches for robust performance; and (2) efficient hierarchical diffusion method to perform the tracking procedure in real-time. The elastic structure of local patches allows the proposed method to handle partial occlusions and non-rigid deformations through the relationship among neighboring patches. The proposed hierarchical diffusion method generates samples from the region where the posterior is concentrated to reduce computation time. The method is extensively tested on a number of challenging image sequences with occlusion and non-rigid deformation. The experimental results show the real-time capability and the robustness of the proposed method under various situations. | Visual tracking of non-rigid objects with partial occlusion through elastic structure of local patches and hierarchical diffusion |
S0262885615000554 | This paper investigates the effects of adding texture to images with poorly-textured regions on optical flow performance, namely the accuracy of foreground boundary detection and computation time. Despite significant improvements in optical flow computations, poor texture still remains a challenge to even the most accurate methods. Accordingly, we explored the effects of simple modification of images, rather than the algorithms. To localize and add texture to poorly-textured regions in the background, which induce the propagation of foreground optical flow, we first perform a texture segmentation using Laws' masks and generate a texture map. Next, using a binary frame difference, we constrain the poorly-textured regions to those with negligible motion. Finally, we calculate the optical flow for the modified images with added texture using the best optical flow methods available. It is shown that if the threshold used for binarizing the frame difference is in a specific range determined empirically, variations in the final foreground detection will be insignificant. Employing the texture addition in conjunction with leading optical flow methods on multiple real and animation sequences with different texture distributions revealed considerable advantages, including improvement in the accuracy of foreground boundary preservation, prevention of object merging, and reduction in the computation time. The F-measure and the Boundary Displacement Error metrics were used to evaluate the similarity between detected and ground-truth foreground masks. Furthermore, preventing foreground optical flow propagation and reduction in the computation time are discussed using analysis of optical flow convergence. | Effects of texture addition on optical flow performance in images with poor texture |
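Laws' masks, used above to localize poorly-textured regions, are outer products of short 1-D vectors followed by local energy averaging. A minimal SciPy sketch; the 5-tap Level/Edge/Spot vectors are the standard ones, while the mask pair and averaging window are illustrative choices.

```python
import numpy as np
from scipy.signal import convolve2d

# Standard 1-D Laws vectors: Level, Edge, Spot (5-tap versions).
L5 = np.array([1, 4, 6, 4, 1], float)
E5 = np.array([-1, -2, 0, 2, 1], float)
S5 = np.array([-1, 0, 2, 0, -1], float)

def texture_energy(img, v1=E5, v2=L5, window=15):
    """Convolve with a 2-D Laws mask (outer product of two 1-D vectors) and
    box-average the absolute response; low energy flags poor texture."""
    mask = np.outer(v1, v2)
    response = np.abs(convolve2d(img.astype(float), mask, mode="same"))
    box = np.ones((window, window)) / window**2
    return convolve2d(response, box, mode="same")
```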
S0262885615000682 | Some human detection or tracking algorithms output a low-dimensional representation of the human body, such as a bounding box. Even though this representation is enough for some tasks, a more accurate and detailed point-wise representation of the human body is more useful for pose estimation and action recognition. The refinement process can produce a point-wise mask of the human body from its low-dimensional representation. In this paper, we tackle the problem of refining low-dimensional human shapes using RGB-D data with a novel and accurate method for this purpose. This algorithm combines low-level cues such as shape and color, and high level observations such as the estimated ground plane, in a multi-layer graph cut framework. In our algorithm, shape prior information is learned by training a classifier. Unlike some existing work, our method does not utilize or carry features from the internal steps of the methods which provide the bounding box, so our method can work on the outputs of any similar shape providers. Extensive experiments demonstrate that the proposed technique significantly outperforms other suitable methods. Moreover, a previously published refinement method is extended by incorporating more generic cues to serve this purpose. | Approaches for automatic low-dimensional human shape refinement with priors or generic cues using RGB-D data |
S0262885615000694 | Automatic optical inspection plays an important role in controlling the appearance quality of a wide range of products during production. Recently, the high popularity of smartphones and information appliances has driven significant demand for touch panels. However, traditional frequency-based methods, which exploit the line-structure features of texture images, are not effective for defect detection on touch panels. This paper presents a novel spatial-domain algorithm to inspect defects on touch panels. By utilizing the characteristics of the periodic patterns of the sensing circuits, an adaptive model for each pattern is learned online to effectively extract defects. The experimental results indicate that our proposed method achieves accurate detection with efficient computation. In addition, users need to spend very little effort when testing different panel products. | A novel algorithm for defect inspection of touch panels
S0262885615000700 | Pose estimation is a key concern in 3D urban surveying, mapping, and navigation. Although Global Positioning System (GPS) technologies can be used to estimate a robot's or vehicle's pose, there are many urban environments in which GPS functions poorly or not at all. For these situations, we offer a novel approach based on a careful fusion of panoramic camera data and 2D laser scanner input. First, a Constrained Bundle Adjustment (CBA) is introduced to handle scale and loop closure constraints. The fusion of a panoramic image series and laser data then enables an accurate scale to be estimated and loop closures detected. Finally, the two geometric constraints are enforced on the global CBA solution, which in turn produces a robust pose estimate. Experiments show that the proposed method is practicable and more accurate than vision-only methods, with an average error of just 0.2m in the horizontal plane over a 580m trajectory. | Fusion of a panoramic camera and 2D laser scanner data for constrained bundle adjustment in GPS-denied environments |
S0262885615000712 | Advanced segmentation techniques in the surveillance domain deal with shadows to avoid distortions when detecting moving objects. Most approaches for shadow detection are still typically restricted to penumbra shadows and cannot cope well with umbra shadows. Consequently, umbra shadow regions are usually detected as part of moving objects, thus affecting the performance of the final detection. In this paper we address the detection of both penumbra and umbra shadow regions. First, a novel bottom-up approach is presented based on gradient and colour models, which successfully discriminates between chromatic moving cast shadow regions and those regions detected as moving objects. In essence, those regions corresponding to potential shadows are detected based on edge partitioning and colour statistics. Subsequently (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for each potential shadow region for detecting the umbra shadow regions. Our second contribution refines even further the segmentation results: a tracking-based top-down approach increases the performance of our bottom-up chromatic shadow detection algorithm by properly correcting non-detected shadows. To do so, a combination of motion filters in a data association framework exploits the temporal consistency between objects and shadows to increase the shadow detection rate. Experimental results exceed current state-of-the-art in shadow accuracy for multiple well-known surveillance image databases which contain different shadowed materials and illumination conditions. | Chromatic shadow detection and tracking for moving foreground segmentation |
S0262885615000724 | The goals of this paper are: (1) to enhance the quality of images of faces, (2) to enable 3D Morphable Models (3DMMs) to cope with severely degraded images, and (3) to reconstruct textured 3D faces with details that are not in the input images. Details that are lost in the input images due to blur, low resolution or occlusions, are filled in by the 3DMM and an additional texture enhancement algorithm that adds high-resolution details from example faces. By leveraging class-specific knowledge, this restoration process goes beyond what general image operations such as deblurring or inpainting can achieve. The benefit of the 3DMM for image restoration is that it can be applied to any pose and illumination, unlike image-based methods. However, it is only with the new fitting algorithm that 3DMMs can produce realistic faces from severely degraded images. The new method includes the blurring or downsampling operator explicitly into the analysis-by-synthesis algorithm. | Hallucination of facial details from degraded images using 3D face models |
S0262885615000736 | Bipartite graph matching has been demonstrated to be one of the most efficient algorithms for solving error-tolerant graph matching. The algorithm is based on defining a cost matrix between all the nodes of both graphs and solving the node correspondence through a linear assignment method (for instance, the Hungarian or Jonker–Volgenant methods). Recently, two versions of this algorithm have been published, called Fast Bipartite and Square Fast Bipartite. They compute the same distance value as Bipartite but with a reduced runtime if some restrictions on the edit costs are satisfied. In this paper, we do not present a new algorithm; instead, we compare the three versions of the Bipartite algorithm and show that violating the theoretically imposed restrictions of Fast Bipartite and Square Fast Bipartite does not affect the algorithms' performance. That is, in practice, we show that these restrictions do not affect the optimality of the algorithm, and so the three algorithms obtain similar distances and recognition ratios in classification applications even when the restrictions do not hold. Moreover, we conclude that Square Fast Bipartite with the Jonker–Volgenant solver is the fastest algorithm. | Computation of graph edit distance: Reasoning about optimality and speed-up
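The core of the Bipartite family is a linear assignment over a node-level cost matrix. A minimal sketch using SciPy's linear_sum_assignment (a Jonker–Volgenant-style solver); building the (n1+n2)x(n1+n2) matrix from substitution, deletion, and insertion blocks is left to the caller.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def bipartite_ged(cost):
    """Approximate graph edit distance from a node-level cost matrix.

    cost : (n1 + n2, n1 + n2) matrix whose blocks hold node substitution,
           deletion, and insertion costs (plus local edge costs), as in
           the Bipartite algorithm.
    """
    rows, cols = linear_sum_assignment(cost)  # optimal node assignment
    return cost[rows, cols].sum(), list(zip(rows, cols))
```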
S0262885615000748 | Semi-supervised sparse feature selection, which can exploit large amounts of unlabeled data and small amounts of labeled data simultaneously, has played an important role in web image annotation. However, most semi-supervised sparse feature selection methods are developed for single-view data and cannot naturally deal with multi-view data, even though it has been shown that leveraging information contained in multiple views can dramatically improve feature selection performance. Recently, multi-view learning has received much research attention because it can reveal and leverage the correlated and complementary information between different views. In this paper, we therefore apply multi-view learning to semi-supervised sparse feature selection and propose a semi-supervised sparse feature selection method based on multi-view Laplacian regularization, namely multi-view Laplacian sparse feature selection (MLSFS). MLSFS utilizes multi-view Laplacian regularization to boost semi-supervised sparse feature selection performance. A simple iterative method is proposed to solve the objective function of MLSFS. We apply the MLSFS algorithm to the image annotation task and conduct experiments on two web image datasets. The experimental results show that the proposed MLSFS outperforms state-of-the-art single-view sparse feature selection methods. | Semi-supervised sparse feature selection based on multi-view Laplacian regularization
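A minimal sketch of the multi-view Laplacian regularizer named above, assuming k-NN affinity graphs and fixed view weights; the neighborhood size, uniform weights, and unnormalised Laplacian are illustrative assumptions, not the MLSFS objective itself.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def multiview_laplacian(views, weights=None, n_neighbors=10):
    """Combine one graph Laplacian per view into a single regulariser.

    views   : list of (n_samples, d_v) feature matrices, one per view.
    weights : optional non-negative view weights summing to one.
    """
    n = views[0].shape[0]
    weights = weights if weights is not None else [1.0 / len(views)] * len(views)
    L = np.zeros((n, n))
    for X, w in zip(views, weights):
        A = kneighbors_graph(X, n_neighbors, mode="connectivity")
        A = 0.5 * (A + A.T).toarray()           # symmetrise the k-NN graph
        L += w * (np.diag(A.sum(axis=1)) - A)   # unnormalised Laplacian
    return L
```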
S0262885615000761 | In this paper, we propose a novel contactless palmprint authentication system which uses a CCD camera to capture the user's hand at a distance, without any positioning restrictions and without touching the device. Furthermore, a novel and high-performance region of interest (ROI) extraction method, which makes use of nonlinear regression and a palm model to extract the ROIs with high success, is proposed. Comparative results indicate that the proposed ROI extraction method gives superior performance compared to previously proposed point-based approaches. To demonstrate the performance of the proposed system, a novel contactless database has also been created. This database includes images captured from users who present their hands with various hand positions and orientations in cluttered backgrounds. Experiments show that the proposed system achieves a recognition rate of 99.488% and an equal error rate of 0.277% on the contactless database of 145 people containing 1752 hand images. | Developing a contactless palmprint authentication system by introducing a novel ROI extraction method
S0262885615000773 | This paper proposes a unified multi-lateral filter to efficiently increase the spatial resolution of low-resolution and noisy depth maps in real-time. Time-of-Flight (ToF) cameras have become a very promising alternative to stereo-based range sensing systems as they provide depth measurements at a high frame rate. However, two main drawbacks restrict their use in a wide range of applications; namely, their fairly low spatial resolution and the amount of noise in the depth estimation. In order to address these drawbacks, we propose a new approach based on sensor fusion: we couple a low-resolution ToF camera with a higher-resolution 2-D camera, to which the low-resolution depth map is efficiently upsampled. In this paper, we first review the existing depth map enhancement approaches based on sensor fusion and discuss their limitations. We then propose a unified multi-lateral filter that accounts for the inaccurate position of depth edges in low-resolution ToF depth maps. By doing so, unwanted artefacts such as texture copying and edge blurring are almost entirely eliminated. Moreover, the proposed filter is configurable to behave like most of the alternative depth enhancement approaches. Using a convolution-based formulation together with data quantization and downsampling, the described filter has been effectively and efficiently implemented for dynamic scenes in real-time applications. The experimental results show a significant qualitative as well as quantitative improvement on raw depth maps, outperforming state-of-the-art multi-lateral filters. | Unified multi-lateral filter for real-time depth map enhancement
S0262885615000785 | Recently, a video representation based on dense trajectories has been shown to outperform other human action recognition methods on several benchmark datasets. The trajectories capture the motion characteristics of different moving objects in the spatial and temporal dimensions. In dense trajectories, points are sampled at uniform intervals in space and time and then tracked using a dense optical flow field over a fixed length of L frames (optimally 15), overlapping across the entire video. However, among these base (dense) trajectories, a few may continue for longer than duration L, capturing motion characteristics of objects that may be more valuable than the information from the base trajectories. Thus, we propose a technique that searches for trajectories with a longer duration, which we refer to as 'ordered trajectories'. Experimental results show that ordered trajectories perform much better than the base trajectories, both standalone and when combined. Moreover, the uniform sampling of dense trajectories does not discriminate objects of interest from the background or other objects; consequently, a great deal of information that may not actually be useful is accumulated, and this escalates as the number of action classes increases. We observe that our proposed trajectories also remove some background clutter. We use a Bag-of-Words framework to conduct experiments on the benchmark HMDB51, UCF50 and UCF101 datasets containing the largest number of action classes to date. Further, we also evaluate three state-of-the-art feature encoding techniques to study their performance on a common platform. | Ordered trajectories for human action recognition with large number of classes
S0262885615000797 | Modern appearance-based object recognition systems typically involve feature/descriptor extraction and matching stages. The extracted descriptors are expected to be robust to illumination changes and to reasonable (rigid or affine) image/object transformations. Some descriptors work well for general object matching, but gray-scale key-point-based methods may be suboptimal for matching line-rich color scenes/objects such as cars, buildings, and faces. We present a Rotation- and Scale-Invariant, Line-based Color-aware descriptor (RSILC), which allows matching of objects/scenes in terms of their key-lines, line-region properties, and line spatial arrangements. An important special application is face matching, since face characteristics are best captured by lines/curves. We tested RSILC performance on publicly available datasets and compared it with other descriptors. Our experiments show that RSILC is more accurate in line-rich object description than other well-known generic object descriptors. | RSILC: Rotation- and Scale-Invariant, Line-based Color-aware descriptor |
S0262885615000955 | In daily life, humans demonstrate an amazing ability to remember images they see on magazines, commercials, TV, web pages, etc. but automatic prediction of intrinsic memorability of images using computer vision and machine learning techniques has only been investigated very recently. Our goal in this article is to explore the role of visual attention and image semantics in understanding image memorability. In particular, we present an attention-driven spatial pooling strategy and show that considering image features from the salient parts of images improves the results of the previous models. We also investigate different semantic properties of images by carrying out an analysis of a diverse set of recently proposed semantic features which encode meta-level object categories, scene attributes, and invoked feelings. We show that these features which are automatically extracted from images provide memorability predictions as nearly accurate as those derived from human annotations. Moreover, our combined model yields results superior to those of state-of-the art fully automatic models. | Predicting memorability of images using attention-driven spatial pooling and image semantics |
S0262885615000967 | We present a multi-view face detector based on Cascade Deformable Part Models (CDPM). Over the last decade, there have been several attempts to extend the well-established Viola&Jones face detector algorithm to solve the problem of multi-view face detection. Recently a tree structure model for multi-view face detection was proposed. This method is primarily designed for facial landmark detection, with face detection provided as a consequence. However, the effort to model inner facial structures by using a detailed facial landmark labelling resulted in a complex and suboptimal system for face detection. Instead, we adopt CDPMs, where the models are learned from partially labelled images using Latent Support Vector Machines (LSVM). LSVM is enhanced with data-mining and bootstrapping procedures to enrich the models during training, and a post-optimization procedure is derived to improve performance. This semi-supervised methodology allows us to build models based on weakly labelled data while incrementally learning latent positive and negative samples. Our results show that the proposed model can deal with highly expressive and partially occluded faces while outperforming the state-of-the-art face detectors by a large margin on challenging benchmarks such as the Face Detection Data Set and Benchmark (FDDB) [1] and the Annotated Facial Landmarks in the Wild (AFLW) [2] databases. In addition, we validate the accuracy of our models under large head pose variation and facial occlusions in the Head Pose Image Database (HPID) [3] and Caltech Occluded Faces in the Wild (COFW) [4] datasets, respectively. We also outline the suitability of our models to support facial landmark detection algorithms. | Empirical analysis of cascade deformable models for multi-view face detection
S0262885615000979 | The detection of vanishing points in a monoscopic image is a first step towards the extraction of 3D data. This article presents a partition of the image space in order to determine the type of perspective present, thereby allowing the determination of the vanishing point associated with each of the axes of the spatial reference system (X, Y, Z). Additionally, Thales' second theorem allows us to determine the position of the vanishing points of the image and to automate the process. An algorithm has been used with the data provided by the selected edge detector (Prewitt, Roberts, Sobel, Frei-Chen, Kirsch, Robinson 3 levels, Robinson 5 levels and Frei-Chen directional), which provides the location of the vanishing points contained in the image plane. The comparative study includes two variables: the number of vanishing points and the image resolution. The results show that, in general, the Prewitt edge detector provides the best results, both positional and statistical. Increasing the image resolution improves the results, although the results obtained for resolutions of 640×480 pixels and 1024×768 pixels are very similar. | Application of gradient-based edge detectors to determine vanishing points in monoscopic images: Comparative study
S0262885615000980 | Current semantic video analysis systems are usually hierarchical and consist of several levels to overcome the semantic gap between low-level features and high-level concepts. In these systems, features, descriptors, objects or concepts are extracted at each level, and therefore the total computational complexity of such systems is huge. In this paper, we present a new general framework to impose attention control on a video analysis system using Q-learning. The proposed framework restructures a given system dynamically to direct attention to the blocks extracting the most informative features/concepts, thereby reducing the computational complexity of the system. In other words, the proposed framework actively directs the flow of processing using a learned attention control method. The proposed framework is evaluated for event detection in broadcast soccer videos using limited numbers of training samples. Experiments show that the proposed framework is able to learn how to direct attention to informative features/concepts and restructure the initial structure of the system dynamically to reach the final goal with less computational complexity. | A framework for dynamic restructuring of semantic video analysis systems based on learning attention control
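The Q-learning component above reduces to the standard tabular update. A minimal sketch; the comments describing states as sets of executed analysis blocks and rewards as an accuracy/computation trade-off are assumptions about how the framework might map onto the update, not the paper's exact design.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: here the state could encode which
    analysis blocks have run so far, the action picks the next block,
    and the reward trades detection accuracy against computation."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

def choose_action(Q, s, eps=0.1):
    """Epsilon-greedy selection over the candidate feature extractors."""
    if np.random.rand() < eps:
        return np.random.randint(Q.shape[1])
    return int(Q[s].argmax())
```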
S0262885615000992 | This paper proposes a globally rotation invariant multi-scale co-occurrence local binary pattern (MCLBP) feature for texture-relevant tasks. In MCLBP, we arrange all co-occurrence patterns into groups according to the properties of the co-patterns, and design three encoding functions (Sum, Moment, and Fourier Pooling) to extract features from each group. The MCLBP can effectively capture the correlation information between different scales and is also globally rotation invariant (GRI). The MCLBP is substantially different from most existing LBP variants, including the LBP, the CLBP, and the MSJ-LBP, which achieve rotation invariance through locally rotation invariant (LRI) encoding. We fully evaluate the properties of the MCLBP and compare it with some powerful features on five challenging databases. Extensive experiments demonstrate the effectiveness of the MCLBP compared to state-of-the-art LBP variants including the CLBP and the LBPHF. Meanwhile, the dimension and computational cost of the MCLBP are also lower than those of the CLBP_S/M/C and LBPHF_S_M. | Globally rotation invariant multi-scale co-occurrence local binary pattern
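For reference, the basic single-scale LBP code map that variants such as MCLBP build on can be computed as below; the radius-1, 8-neighbour configuration is the standard choice, and the co-occurrence grouping and encoding functions of MCLBP are not reproduced here.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP on the interior pixels of a grey image;
    multi-scale co-occurrence statistics would be collected from such
    code maps computed at several radii."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:,   2:],   img[2:,   1:-1],
                  img[2:,   0:-2], img[1:-1, 0:-2]]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, nb in enumerate(neighbours):
        code += (nb >= c).astype(np.int32) << bit  # one bit per neighbour
    return code
```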
S0262885615001067 | Time-of-flight (ToF) depth cameras have widely been used in many applications such as 3D imaging, 3D reconstruction, human interaction and robot navigation. However, conventional depth cameras are incapable of imaging a translucent object which occupies a substantial portion of a real world scene. Such a limitation prohibits realistic imaging using depth cameras. In this work, we propose a new skewed stereo ToF camera for detecting and imaging translucent objects under minimal prior of environment. We find that the depth calculation of a ToF camera with a translucent object presents a systematic distortion due to the superposed reflected light ray observation from multiple surfaces. We propose to use a stereo ToF camera setup and derive a generalized depth imaging formulation for translucent objects. Distorted depth value is refined using an iterative optimization. Experimental evaluation shows that our proposed method reasonably recovers the depth image of translucent objects. | Skewed stereo time-of-flight camera for translucent object imaging |
S0262885615001079 | We propose to represent a time-of-flight (ToF) camera by a map of "internal radial distances" (IRD), associating an intrinsic distance to each pixel, as an alternative to the classic pinhole model. This representation is more general than the perspective model and appears to be a natural concept for 3D reconstruction and other applications of ToF cameras. In this new framework, calibrating a ToF camera comes down to the determination of this IRD map. We show how this can be accomplished using images of flat surfaces, without performing any feature detection. We prove deterministic calibration formulas using one or more plane images. We also offer a numerical optimization method that in principle needs only one image of a flat surface. | Investigating new calibration methods without feature detection for TOF cameras
S0262885615001092 | We propose a new methodology for facial landmark detection. Similar to other state-of-the-art methods, we rely on cascaded regression to perform inference, and we use a feature representation obtained by concatenating 66 HOG descriptors, one per landmark. However, we propose a novel regression method that replaces the commonly used least-squares regressor. This new method makes use of the L2,1 norm, and it is designed to increase the robustness of the regressor to poor initialisations (e.g., due to large out-of-plane head poses) or partial occlusions. Furthermore, we propose to use multiple initialisations, consisting of both spatial translations and 4 head poses corresponding to different pan rotations. These estimates are aggregated into a single prediction in a robust manner. Both strategies are designed to improve the convergence behaviour of the algorithm, so that it can cope with the challenges of in-the-wild data. We further provide important experimental details, and show extensive performance comparisons highlighting the improvement attained by the proposed method. | L2,1-based regression and prediction accumulation across views for robust facial landmark detection
S0262885615001109 | The ability of most existing approaches to classify abandoned and removed objects (AROs) in images is affected by external environmental conditions such as illumination and traffic volume because the approaches use several pre-defined threshold values and generate many falsely-classified static regions. To reduce these effects, we propose an accurate ARO classification method using a hierarchical finite state machine (FSM) that consists of pixel-layer, region-layer, and event-layer FSMs, where the result of the lower-layer FSM is used as the input of the higher-layer FSM. Each FSM is defined by a Mealy state machine with three states and several state transitions, where a support vector machine (SVM) determines the state transition based on the current state and input features such as area, intensity, motion, shape, time duration, color and edge. Because it uses the hierarchical FSM (H-FSM) structure with features that are optimally trained by SVM classifiers, the proposed ARO classification method does not require threshold values and guarantees better classification accuracy under severe environmental changes. In experiments, the proposed ARO classification method provided much higher classification accuracy and lower false alarm rate than the state-of-the-art methods in both public databases and a commercial database. The proposed ARO classification method can be applied to many practical applications such as detection of littering, illegal parking, theft, and camouflaged soldiers. | Accurate abandoned and removed object classification using hierarchical finite state machine |
S0262885615001110 | In this work we deal with the problem of high-level event detection in video. Specifically, we study the challenging problems of i) learning to detect video events from solely a textual description of the event, without using any positive video examples, and ii) additionally exploiting very few positive training samples together with a small number of “related” videos. For learning only from an event's textual description, we first identify a general learning framework and then study the impact of different design choices for various stages of this framework. For additionally learning from example videos, when true positive training samples are scarce, we employ an extension of the Support Vector Machine that allows us to exploit “related” event videos by automatically introducing different weights for subsets of the videos in the overall training set. Experimental evaluations performed on the large-scale TRECVID MED 2014 video dataset provide insight on the effectiveness of the proposed methods. | Learning to detect video events from zero or very few video examples |
S0262885615001134 | This paper proposes to model an action as the output of a sequence of atomic Linear Time Invariant (LTI) systems. The sequence of LTI systems generating the action is modeled as a Markov chain, where a Hidden Markov Model (HMM) is used to model the transition from one atomic LTI system to another. In turn, the LTI systems are represented in terms of their Hankel matrices. For classification purposes, the parameters of a set of HMMs (one for each action class) are learned via a discriminative approach. This work proposes a novel method to learn the atomic LTI systems from training data, and analyzes in detail the action representation in terms of a sequence of Hankel matrices. Extensive evaluation of the proposed approach on two publicly available datasets demonstrates that the proposed method attains state-of-the-art accuracy in action classification from the 3D locations of body joints (skeleton). | Hankelet-based dynamical systems modeling for 3D action recognition |
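The Hankel-matrix representation above has a direct construction. A minimal sketch for a 1-D joint-coordinate series using scipy.linalg.hankel; the block-row count and the Frobenius normalisation are illustrative choices.

```python
import numpy as np
from scipy.linalg import hankel

def hankelet(trajectory, block_rows=5):
    """Stack a 1-D coordinate series into a Hankel matrix; its column
    space characterises the underlying LTI system, so nearby Hankel
    matrices indicate the same atomic dynamics."""
    trajectory = np.asarray(trajectory, float)
    H = hankel(trajectory[:block_rows], trajectory[block_rows - 1:])
    return H / (np.linalg.norm(H, "fro") + 1e-12)  # scale-normalised
```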
S0262885615001146 | In recent years, much effort has been put into the development of novel algorithms to solve the person re-identification problem. The goal is to match a given person's image against a gallery of people. In this paper, we propose a single-shot supervised method to compute a scoring function that, when applied to a pair of images, provides a score expressing the likelihood that they depict the same individual. The method is characterized by: (i) the usage of a set of local image descriptors based on Fisher vectors, (ii) the training of a pool of scoring functions based on the local descriptors, and (iii) the construction of a strong scoring function by means of an adaptive boosting procedure. The method has been tested on four data-sets and results have been compared with state-of-the-art methods clearly showing superior performance. | Boosting Fisher vector based scoring functions for person re-identification |
S0262885615001158 | An example-based face hallucination system is proposed, in which, given a low-resolution facial image, a corresponding high-resolution image is automatically obtained. In practice, this problem is extremely challenging since it is often the case that two distinct high-resolution images have similar low-resolution counterparts. To address this issue, this study proposes an ensemble of image feature representations, including various local patch- or block-based representations, a one-dimensional vector image representation, a two-dimensional matrix image representation, and a global matrix image representation. Notably, some of these representations are designed to preserve the global facial geometry of the low-resolution input, while others are designed to preserve the local detailed texture. For each feature representation, a regression function is constructed to synthesize a high-resolution image from the low-resolution input image. The synthesis process is conducted in a layer-by-layer fashion, with the output from one layer serving as the input to the following layer. Importantly, each regression function is associated with a classifier in order to determine which regression functions are required in the synthesis procedure in accordance with the particular characteristics of the input image. Furthermore, these classifiers also help to deal with the ambiguity of the low-resolution inputs. The experimental results show that the proposed framework is capable of synthesizing high-resolution images from low-resolution input images with a wide variety of facial poses, geometry misalignments and facial expressions, even when such images are not included within the original training dataset. | Robust face hallucination using ensemble of feature-based regression functions and classifiers
S0262885615001316 | Pedestrian detection is an important image understanding problem with many potential applications. There has been little success in creating an algorithm that exhibits a high detection rate while keeping the false alarm rate relatively low. This paper presents a method designed to resolve this problem. The proposed method uses the Kinect, or any similar type of sensor that facilitates the extraction of a distinct foreground. Potential regions that are candidates for the presence of humans are then detected by employing the widely used Histogram of Oriented Gradients (HOG) technique, which performs well in terms of detection rates but suffers from significantly high false alarm rates. Our method applies a sequence of operations to eliminate the false alarms produced by the HOG detector by investigating the fine details of local shape information. Local shape information is captured by efficient utilization of the edge points which, in this work, are used to formulate the so-called Shape Context (SC) model. The proposed detection framework is divided into four sequential stages, with each stage aiming at refining the detection results of the previous stage. In addition, our approach employs a pre-evaluation stage to pre-screen and restrict further detection results. Extensive experimental results on the dataset created by the authors, involving 673 images collected from 11 different scenes, demonstrate that the proposed method eliminates a large percentage of the false alarms produced by the HOG pedestrian detector. | A novel low false alarm rate pedestrian detection framework based on single depth images
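The two-stage cascade idea (HOG hits re-scored by edge-based shape cues) can be sketched as follows; the edge-point feature is a crude stand-in for the Shape Context model, and all training data here are random placeholders.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def window_features(window):
    """Stage-1 features: standard HOG over a 64x128 candidate window."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def edge_point_features(window, num_bins=32):
    """Crude stand-in for shape cues: histogram of edge-point rows,
    used only to re-score stage-1 hits."""
    gy, gx = np.gradient(window)
    ys = np.nonzero(np.hypot(gx, gy) > 0.5)[0]
    h, _ = np.histogram(ys, bins=num_bins, range=(0, window.shape[0]))
    return h / max(h.sum(), 1)

# Placeholder training windows (128x64 grayscale) and labels.
rng = np.random.default_rng(0)
windows = rng.random(size=(40, 128, 64))
labels = rng.integers(0, 2, size=40)
stage1 = LinearSVC().fit([window_features(w) for w in windows], labels)
stage2 = LinearSVC().fit([edge_point_features(w) for w in windows], labels)

def detect(window):
    # Cascade: stage 2 only re-examines windows accepted by stage 1.
    if stage1.predict([window_features(window)])[0] == 0:
        return False
    return bool(stage2.predict([edge_point_features(window)])[0])
```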
S0262885615001328 | This paper proposes a new method based on spatial filter banks and the discrete wavelet transform (DWT) for invariant texture classification. The method uses a multi-resolution analysis such as the DWT and applies the proposed filter bank at each resolution. A simple fusion of the features across resolutions is then used for invariant texture analysis. A comprehensive study examines the effectiveness of the proposed method on datasets with different properties, including Brodatz, Outex, and KTH-TIPS. Local binary pattern (LBP) methods have been among the most powerful approaches to invariant texture classification in recent years, so a comparative study is performed against several state-of-the-art LBP methods. This comparison indicates promising results for the proposed approach. | Invariant texture classification using a spatial filter bank in multi-resolution analysis
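A minimal PyWavelets sketch of multi-resolution feature extraction in this spirit: sub-band energies are computed per DWT level and concatenated; the paper's actual invariant filter bank and fusion rule are not reproduced here.

```python
import numpy as np
import pywt

def dwt_texture_features(image, wavelet="db2", levels=3):
    """Energy features from a multi-resolution DWT decomposition.

    At each resolution the sub-band energies play the role of the
    filter-bank responses that are fused across levels."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:                 # (cH, cV, cD) per level
        feats.extend(float(np.mean(np.abs(d))) for d in detail)
    feats.append(float(np.mean(np.abs(coeffs[0]))))  # approximation band
    return np.array(feats)

texture = np.random.default_rng(0).random((64, 64))
print(dwt_texture_features(texture).shape)    # (10,) for 3 levels
```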
S0262885615001341 | In this paper we present our solution to the 300 Faces in the Wild Facial Landmark Localization Challenge. We demonstrate how to achieve very competitive localization performance with a simple deep learning based system. A human study is conducted to show that the accuracy of our system is very close to human performance. We discuss how this finding affects our future directions for improving the system. | Approaching human level facial landmark localization by deep learning
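The abstract does not specify the network, but a toy landmark regressor of the same flavor can be sketched in PyTorch; the architecture, input size, and landmark count are assumptions.

```python
import torch
import torch.nn as nn

class LandmarkNet(nn.Module):
    """Toy CNN regressing num_landmarks (x, y) points from a 64x64 crop."""
    def __init__(self, num_landmarks=68):
        super().__init__()
        self.num_landmarks = num_landmarks
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_landmarks * 2)

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1)).view(-1, self.num_landmarks, 2)

net = LandmarkNet()
face = torch.randn(1, 1, 64, 64)          # a normalized grayscale face crop
print(net(face).shape)                    # torch.Size([1, 68, 2])
```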
S0262885615001353 | Automatic face alignment is a fundamental step in facial image analysis. However, this problem continues to be challenging due to the large variability of expression, illumination, occlusion, pose, and detection drift in real-world face images. In this paper, we present a multi-view, multi-scale and multi-component cascade shape regression (M³CSR) model for robust face alignment. Firstly, face view is estimated according to the deformable facial parts for learning a view-specific CSR, which can decrease the shape variance, alleviate the drift of face detection and accelerate shape convergence. Secondly, multi-scale HoG features are used as the shape-index features to incorporate local structure information implicitly, and a multi-scale optimization strategy is adopted to avoid becoming trapped in local optima. Finally, a component-based shape refinement process is developed to further improve the performance of face alignment. Extensive experiments on the IBUG dataset and the 300-W challenge dataset demonstrate the superiority of the proposed method over the state-of-the-art methods. | M³CSR: Multi-view, multi-scale and multi-component cascade shape regression
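One generic cascade-shape-regression stage can be sketched as follows: shape-indexed features are regressed onto the residual to the ground-truth shape, and the estimate is updated stage by stage. The pixel-sampling feature below is a crude stand-in for the multi-scale HoG features, and all data are placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge

def csr_stage(shapes, images, true_shapes, extract_features):
    """One cascade stage: learn a linear map from shape-indexed features
    to the residual between the current and true shapes."""
    X = np.array([extract_features(img, s) for img, s in zip(images, shapes)])
    Y = (true_shapes - shapes).reshape(len(shapes), -1)
    reg = Ridge(alpha=1.0).fit(X, Y)
    return reg, shapes + reg.predict(X).reshape(shapes.shape)

def sample_pixels(img, shape):
    """Shape-indexed feature: pixel values at the current landmark guesses."""
    idx = np.clip(shape.astype(int), 0, 31)
    return img[idx[:, 1], idx[:, 0]]

# Toy data: 50 "images" with 5-point shapes, initialized at the mean shape.
rng = np.random.default_rng(0)
images = rng.random(size=(50, 32, 32))
true_shapes = rng.uniform(4, 28, size=(50, 5, 2))
shapes = np.tile(true_shapes.mean(axis=0), (50, 1, 1))

for _ in range(3):                        # a short cascade of 3 stages
    _, shapes = csr_stage(shapes, images, true_shapes, sample_pixels)
```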
S0262885615001365 | Background modeling is widely used in visual surveillance systems that aim to facilitate the analysis of real-world video scenes. The goal is to discriminate between pixels from foreground objects and those from the background. However, real-world scenarios tend to exhibit temporal and spatial non-stationary variations, making it difficult to separate foreground and background entities in video data. Here, we propose a novel adaptive background modeling approach, termed Object-based Selective Updating with Correntropy (OSUC), to support video-based surveillance systems. Our approach, developed within an adaptive learning framework, unveils existing spatio-temporal pixel relationships, making use of a single Gaussian for the model representation stage. Moreover, we introduce a background updating scheme composed of an updating rule based on the stochastic gradient algorithm and a Correntropy cost function. As a result, this scheme can extract the temporal statistical pixel distribution while dealing with the non-stationary pixel value fluctuations that affect the background model. An automatic tuning strategy for the cost function's bandwidth parameter is employed to handle both Gaussian and non-Gaussian noise environments. Besides, to include pixel spatial relationships in the background modeling process, we introduce an object-based selective learning rate strategy for enhancing the background modeling accuracy. In particular, an object motion analysis stage is presented to detect and track foreground entities based on pixel intensities and motion direction obtained via optical flow computation. Testing is provided on well-known datasets for discriminating between foreground and background that include stationary and non-stationary behaviors. The achieved results show that OSUC outperforms, in most of the considered cases, the state-of-the-art approaches at an affordable computational cost. Therefore, the proposed approach is suitable for supporting real-world video-based surveillance systems. | Background modeling using Object-based Selective Updating and Correntropy adaptation
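The core of the updating rule can be illustrated directly: a stochastic-gradient step on the Correntropy cost, where the Gaussian kernel down-weights outliers. The fixed bandwidth `sigma` below is a simplification; the paper tunes the bandwidth automatically.

```python
import numpy as np

def correntropy_update(background, frame, lr=0.05, sigma=10.0):
    """One stochastic-gradient step on the Correntropy cost.

    The Gaussian kernel shrinks the effect of large errors, so foreground
    pixels and non-Gaussian noise barely corrupt the background model."""
    err = frame.astype(float) - background
    kernel = np.exp(-err**2 / (2.0 * sigma**2))
    return background + lr * kernel * err

rng = np.random.default_rng(0)
bg = np.full((4, 4), 100.0)
frame = bg + rng.normal(0, 2, size=(4, 4))
frame[0, 0] = 250.0                        # a foreground pixel
bg = correntropy_update(bg, frame)
# The outlier at (0, 0) moves the model far less than the inliers do.
```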
S0262885615001377 | A trustworthy protocol is essential to evaluate a text detection algorithm in order to, first, measure its efficiency and adjust its parameters and, second, compare its performance with that of other algorithms. However, current protocols do not give precise enough evaluations because they use coarse evaluation metrics and deal with inconsistent matchings between the output of detection algorithms and the ground truth, both often limited to rectangular shapes. In this paper, we propose a new evaluation protocol, named EvaLTex, that solves some of the problems associated with classical metrics and matching strategies. Our system deals with different kinds of annotations and detection shapes. It also considers different kinds of granularity between detections and ground-truth objects and hence provides more realistic and accurate evaluation measures. We use this protocol to evaluate text detection algorithms and highlight some key examples showing that the provided scores are more relevant than those of currently used evaluation protocols. | What is a good evaluation protocol for text localization systems? Concerns, arguments, comparisons and solutions
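For contrast with EvaLTex, here is the kind of coarse IoU-based one-to-one matching that classical protocols use, and whose granularity limitations the paper addresses; the threshold is illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_detections(dets, gts, thr=0.5):
    """Greedy one-to-one matching; many-to-one/one-to-many granularity
    (a key point of EvaLTex) is deliberately not handled here."""
    matched, used = 0, set()
    for d in dets:
        best = max(((iou(d, g), i) for i, g in enumerate(gts) if i not in used),
                   default=(0.0, -1))
        if best[0] >= thr:
            used.add(best[1])
            matched += 1
    precision = matched / len(dets) if dets else 0.0
    recall = matched / len(gts) if gts else 0.0
    return precision, recall

print(match_detections([(0, 0, 10, 10)], [(1, 1, 9, 9)]))  # (1.0, 1.0)
```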
S0262885615001390 | The article describes a reconstruction pipeline that generates piecewise-planar models of man-made environments using two calibrated views. The 3D space is sampled by a set of virtual cut planes that intersect the baseline of the stereo rig and implicitly define possible pixel correspondences across views. The likelihood of these correspondences being true matches is measured using signal symmetry analysis [1], which makes it possible to obtain profile contours of the 3D scene that become lines whenever the virtual cut planes intersect planar surfaces. The detection and estimation of these line cuts is formulated as a global optimization problem over the symmetry matching cost, and pairs of reconstructed lines are used to generate plane hypotheses that serve as input to PEARL clustering [2]. The PEARL algorithm alternates between a discrete optimization step, which merges planar surface hypotheses and discards detections with poor support, and a continuous optimization step, which refines the plane poses taking into account surface slant. The pipeline outputs an accurate semi-dense piecewise-planar reconstruction of the 3D scene. In addition, the input images can be segmented into piecewise-planar regions using a standard labeling formulation for assigning pixels to plane detections. Extensive experiments with both indoor and outdoor stereo pairs show significant improvements over state-of-the-art methods with respect to accuracy and robustness. | Piecewise-planar reconstruction using two views
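As a loose illustration of generating a single plane hypothesis from 3D data, a RANSAC fit is sketched below; note the paper generates hypotheses from pairs of reconstructed lines and refines them with PEARL, not RANSAC.

```python
import numpy as np

def fit_plane_ransac(points, iters=200, tol=0.01, rng=None):
    """RANSAC plane fit to a 3D point cloud -- a simple stand-in for
    the plane-hypothesis step, not the pipeline's actual method."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                   # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

pts = np.random.default_rng(1).random((500, 3))
pts[:, 2] = 0.5                            # synthetic plane z = 0.5
plane, inl = fit_plane_ransac(pts)
print(inl.sum())                           # 500: all points are inliers
```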
S0262885615001407 | The possibility of sharing multimedia content in an easy and ubiquitous way has led to the creation of multi-user photo albums. Pictures and video sequences taken by different people attending common social events (e.g., concerts and sport competitions) are gathered together into huge sets of heterogeneous multimedia data. These databases require effective compression strategies that exploit the visual information common to the scene while effectively compensating for the differences in acquiring viewpoints, camera models, and acquisition time instants. The paper presents a predictive coding strategy for multi-user photo galleries, which initially localizes each picture in terms of viewpoint, orientation, time, and acquired elements. This information permits ordering all the images in a prediction tree and associating a reference picture with each of them. From this structure, it is possible to build a predictive coding strategy that exploits the redundant elements between the image to be coded and its reference. Experimental results show an average bit rate reduction of up to 75% with respect to HEVC Intra low complexity coding. | Compression of multiple user photo galleries
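The prediction-tree idea can be approximated with a minimum spanning tree over pairwise prediction costs; the feature vectors and symmetric cost below are stand-ins for the paper's viewpoint/time-based localization.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical per-image features (e.g., location, time, a global descriptor);
# edge weights approximate the cost of predicting one image from another.
feats = np.random.default_rng(0).random((6, 8))
cost = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
tree = minimum_spanning_tree(cost).toarray()

# Each nonzero entry (i, j) makes image i the reference picture for image j,
# giving a prediction tree analogous to the one described in the abstract.
refs = {j: i for i, j in zip(*np.nonzero(tree))}
print(refs)
```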
S0262885615001419 | The process of creating a photo product, such as a photobook, calendar or collage, from a large personal image collection requires intensive user effort. The primary goal of the current research was to develop an end-to-end solution to the problem of photo product generation that enables the user to complete the process with minimal edits, where the system intelligently selects assets and groups them before presenting the output to the user. The automation is driven by metadata extracted both from individual images and from sets of assets in a collection. In particular, we use an automatically detected event hierarchy to establish meaningful groupings in the assets, and to determine an appropriate grouping and pagination for the final product. We propose a novel intermediate construct, called a storyboard, which can be translated to different product types without recomputing the underlying metadata. In addition to chronological storyboards, we also describe a novel hybrid storyboard that joins chronological image presentation with groups of images of a common theme. A pagination algorithm uses the information in the storyboard and the product constraints to generate a product. Finally, the user is provided with a metadata-driven editing mechanism that makes it easy to change the auto-populated product. Given that the proposed system envisions user interaction in creating the final product, user studies are conducted to judge its usefulness, in which consumers use the system to generate a photobook from their own images. | Event-enabled intelligent asset selection and grouping for photobook creation
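A minimal sketch of time-gap-based event grouping, one plausible ingredient of the event hierarchy described above; the 6-hour gap threshold is an arbitrary assumption.

```python
from datetime import datetime, timedelta

def group_into_events(timestamps, gap=timedelta(hours=6)):
    """Split a chronologically sorted photo collection into events
    whenever the capture-time gap exceeds a threshold -- a minimal
    stand-in for full event-hierarchy detection."""
    events, current = [], [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > gap:
            events.append(current)
            current = []
        current.append(t)
    events.append(current)
    return events

shots = [datetime(2024, 6, 1, 10), datetime(2024, 6, 1, 11),
         datetime(2024, 6, 3, 9), datetime(2024, 6, 3, 9, 30)]
print(len(group_into_events(shots)))  # 2 events
```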
S0262885616000020 | The emergence of large-scale human action datasets poses a challenge to efficient action labeling. Hand-labeling large-scale datasets is tedious and time-consuming; thus a more efficient labeling method would be beneficial. One possible solution is to make use of the knowledge of a known dataset to aid the labeling of a new dataset. To this end, we propose a new transfer learning method for cross-dataset human action recognition. Our method aims at learning a generalized feature representation for effective cross-dataset classification. We propose a novel dual many-to-one encoder architecture that extracts generalized features by mapping raw features from the source and target datasets to the same feature space. Benefiting from the favorable property of the proposed many-to-one encoder, cross-dataset action data are encouraged to possess identical encoded features if the actions share the same class labels. Experiments on pairs of benchmark human action datasets achieved state-of-the-art accuracy, demonstrating the efficacy of the proposed method. | Dual many-to-one-encoder-based transfer learning for cross-dataset human action recognition
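A PyTorch sketch of a dual many-to-one encoder: two dataset-specific branches map into one shared code space, and an alignment loss pulls same-class codes together. The dimensions and the loss are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ManyToOneEncoder(nn.Module):
    """Two dataset-specific branches mapped into one shared code space."""
    def __init__(self, dim_src, dim_tgt, dim_code=128):
        super().__init__()
        self.enc_src = nn.Sequential(nn.Linear(dim_src, 256), nn.ReLU(),
                                     nn.Linear(256, dim_code))
        self.enc_tgt = nn.Sequential(nn.Linear(dim_tgt, 256), nn.ReLU(),
                                     nn.Linear(256, dim_code))

    def forward(self, x_src, x_tgt):
        return self.enc_src(x_src), self.enc_tgt(x_tgt)

model = ManyToOneEncoder(dim_src=400, dim_tgt=300)
x_s, x_t = torch.randn(8, 400), torch.randn(8, 300)
z_s, z_t = model(x_s, x_t)
# Hypothetical alignment loss: assume the batches are paired by class label,
# so same-class source/target samples should share encoded features.
loss = ((z_s - z_t) ** 2).mean()
loss.backward()
```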
S0262885616000123 | People re-identification has recently been a very active research topic in computer vision. It is an important application in surveillance systems with disjoint cameras. In this paper, a framework is proposed to extract descriptors of people in videos, which are based on soft-biometric traits and can be further used for people re-identification or other applications. A soft-biometric based description is more invariant to changing factors than directly using low-level features such as color and texture, and the ensemble of a set of soft-biometric traits can achieve good performance in people re-identification. In the proposed method, the body of each detected person is divided into three parts and the selected soft-biometric traits are extracted from each part. All traits are then combined to form the final descriptor, and people re-identification is performed based on this descriptor and a Nearest Neighbor (NN) matching strategy. The experiments are carried out on the SAIVT-SoftBio database, which consists of videos from disjoint surveillance cameras, as well as on several static image based datasets. An open-set ID recognition problem is also evaluated for the proposed method, and comparisons with several state-of-the-art methods are provided. The experimental results show the good performance of the proposed framework. | A framework for semantic people description in multi-camera surveillance systems
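The final matching step reduces to nearest-neighbor search over the combined descriptors, as in the sketch below; the descriptor length and contents are placeholders.

```python
import numpy as np

def reidentify(probe, gallery):
    """Nearest-neighbor matching of concatenated soft-biometric descriptors.

    probe: (d,) descriptor; gallery: (n, d) descriptors of known people.
    Returns the best gallery index and its distance."""
    dists = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(dists)), float(dists.min())

# Toy descriptors: per-body-part traits (e.g., hair/torso/leg statistics)
# concatenated into one vector, as in the final descriptor above.
rng = np.random.default_rng(0)
gallery = rng.random((10, 24))
probe = gallery[3] + rng.normal(0, 0.01, 24)
print(reidentify(probe, gallery)[0])  # 3
```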
S0262885616000135 | This paper introduces an action recognition system based on a multiscale local part model. This model includes both a coarse primitive-level root patch covering global information and higher-resolution overlapping part patches incorporating local structure and temporal relations. Descriptors are then computed over the local part models by applying fast random sampling at very high density. We also improve the recognition performance using a discontinuity-preserving optical flow algorithm. The evaluation shows that the feature dimensionality can be reduced by 7/8 through PCA while preserving high accuracy. Our system achieves state-of-the-art results on large challenging realistic datasets, namely, 61.0% on HMDB51, 92.0% on UCF50, 86.6% on UCF101 and 65.3% on Hollywood2. | Local part model for action recognition
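The reported 7/8 dimensionality reduction corresponds to keeping one eighth of the PCA components; a minimal sketch on placeholder descriptors:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical local-part-model descriptors (96-dimensional here).
rng = np.random.default_rng(0)
descriptors = rng.random((1000, 96))

pca = PCA(n_components=96 // 8).fit(descriptors)   # keep 1/8 of the dims
reduced = pca.transform(descriptors)
print(reduced.shape)                               # (1000, 12)
print(float(pca.explained_variance_ratio_.sum()))  # retained variance
```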
S0262885616300014 | Many studies have confirmed gait as a robust biometric feature for the identification of individuals. However, direction changes cause difficulties for most gait recognition systems due to the resulting appearance changes. This study presents an efficient multi-view gait recognition method that allows curved trajectories on unconstrained paths in indoor environments. The recognition is based on volumetric analysis of the human gait in order to exploit most of the 3D information enclosed in it. Appearance-based gait descriptors are extracted from 3D gait volumes, and their temporal patterns are classified using a Support Vector Machine with a sliding temporal window for majority voting. The proposed approach is experimentally validated on the “AVA Multi-View Dataset (AVAMVG)” and the “Kyushu University 4D Gait Database (KY4D)”. The results show that this new approach is able to identify people walking on curved paths. | Viewpoint-independent gait recognition through morphological descriptions of 3D human reconstructions
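The sliding-window majority voting over per-frame SVM labels can be sketched as follows; the window length is an assumption.

```python
from collections import Counter

def majority_vote(frame_predictions, window=15):
    """Sliding temporal window majority voting over per-frame classifier
    labels, yielding a smoothed identity decision per window."""
    votes = []
    for i in range(len(frame_predictions) - window + 1):
        win = frame_predictions[i:i + window]
        votes.append(Counter(win).most_common(1)[0][0])
    return votes

frames = [2] * 10 + [5] + [2] * 10    # one mislabeled frame among subject 2
print(set(majority_vote(frames)))     # {2}: the outlier is voted away
```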