Columns: FileName (string, 17 characters), Abstract (string, 163–6.01k characters), Title (string, 12–421 characters)
S1077314214002082
Extracting rotation invariant features is a valuable technique for the effective classification of rotated textures. The Histograms of Oriented Gradients (HOG) algorithm is theoretically simple and has been applied in many areas, and the co-occurrence HOG (CoHOG) algorithm provides a unified description of both the statistical and differential properties of a texture patch. However, HOG and CoHOG have shortcomings: they discard some important texture information and are not invariant to rotation. In this paper, based on the original HOG and CoHOG algorithms, four novel feature extraction methods are proposed. The first method uses Gaussian derivative filters and is named GDF-HOG. The second and third methods use eigenvalues of the Hessian matrix and are named Eig(Hess)-HOG and Eig(Hess)-CoHOG, respectively. The fourth method exploits the Gaussian and mean curvatures to compute the curvature of the image surface and is named GM-CoHOG. We show empirically that the proposed extended HOG and CoHOG methods provide useful information for rotation invariance. Classification results on the CUReT, KTH-TIPS, KTH-TIPS2-a and UIUC datasets show that the four proposed methods achieve better classification results than the original HOG and CoHOG algorithms on all datasets. In addition, we make a comparison with several well-known descriptors. Rotation invariance experiments are carried out on the Brodatz dataset, with promising results.
Continuous rotation invariant features for gradient-based texture classification
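To make the Hessian-based variants above concrete, the sketch below computes the per-pixel eigenvalues of a Gaussian-smoothed Hessian, the rotation-invariant quantity that an Eig(Hess)-style descriptor would histogram in place of oriented gradients. The function name, sigma and histogramming step are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(image, sigma=1.0):
    """Per-pixel eigenvalues of the Hessian, via Gaussian derivative filters."""
    image = image.astype(np.float64)
    Ixx = gaussian_filter(image, sigma, order=(0, 2))   # d2/dx2 (axis 1)
    Iyy = gaussian_filter(image, sigma, order=(2, 0))   # d2/dy2 (axis 0)
    Ixy = gaussian_filter(image, sigma, order=(1, 1))   # mixed derivative
    # Closed-form eigenvalues of the symmetric 2x2 matrix [[Ixx, Ixy], [Ixy, Iyy]].
    half_trace = 0.5 * (Ixx + Iyy)
    root = np.sqrt(0.25 * (Ixx - Iyy) ** 2 + Ixy ** 2)
    return half_trace + root, half_trace - root

# The eigenvalue maps (rotation invariant, unlike gradient orientation) can then
# be binned into histograms in place of the oriented-gradient bins of HOG/CoHOG.
lam1, lam2 = hessian_eigenvalues(np.random.rand(64, 64))
hist, _ = np.histogram(lam1, bins=16)
```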
S1077314214002094
In this paper a 3D human pose tracking framework is presented. A new dimensionality reduction method (Hierarchical Temporal Laplacian Eigenmaps) is introduced to represent activities in hierarchies of low-dimensional spaces. Such a hierarchy provides increasing independence between limbs, allowing higher flexibility and adaptability that result in improved accuracy. Moreover, a novel deterministic optimisation method (Hierarchical Manifold Search) is applied to efficiently estimate the positions of the corresponding body parts. Finally, evaluation on public datasets such as HumanEva demonstrates that our approach achieves a 62.5–65 mm average joint error for the walking activity and outperforms state-of-the-art methods in terms of accuracy and computational cost.
Efficient tracking of human poses using a manifold hierarchy
S1077314214002100
Large-scale 3D shape retrieval has become an important research direction in content-based 3D shape retrieval. To promote this research area, two Shape Retrieval Contest (SHREC) tracks on large scale comprehensive and sketch-based 3D model retrieval have been organized by us in 2014. Both tracks were based on a unified large-scale benchmark that supports multimodal queries (3D models and sketches). This benchmark contains 13680 sketches and 8987 3D models, divided into 171 distinct classes. It was compiled to be a superset of existing benchmarks and presents a new challenge to retrieval methods as it comprises generic models as well as domain-specific model types. Twelve and six distinct 3D shape retrieval methods have competed with each other in these two contests, respectively. To measure and compare the performance of the participating and other promising Query-by-Model or Query-by-Sketch 3D shape retrieval methods and to solicit state-of-the-art approaches, we perform a more comprehensive comparison of twenty-six (eighteen originally participating algorithms and eight additional state-of-the-art or new) retrieval methods by evaluating them on the common benchmark. The benchmark, results, and evaluation tools are publicly available at our websites (http://www.itl.nist.gov/iad/vug/sharp/contest/2014/Generic3D/, 2014, http://www.itl.nist.gov/iad/vug/sharp/contest/2014/SBR/, 2014).
A comparison of 3D shape retrieval methods based on a large-scale benchmark supporting multimodal queries
S1077314214002112
Utilizing in situ measurements to build 3-D volumetric object models under a variety of turbidity conditions is highly desirable for the marine sciences. To address the ineffectiveness of feature-based structure from motion and stereo methods under poor visibility, we explore a multi-modal stereo imaging technique that utilizes coincident optical and forward-scan sonar cameras, a so-called opti-acoustic stereo imaging system. The challenges of establishing dense feature correspondences in either opti-acoustic or low-contrast optical stereo images are avoided by employing 2-D occluding contour correspondences, namely, the images of 3-D object occluding rims. By collecting opti-acoustic stereo pairs while circling an object, matching 2-D apparent contours in the optical and sonar views to construct the 3-D occluding rim, and computing the stereo rig trajectory by opti-acoustic bundle adjustment, we generate registered samples of the 3-D surface in a reference coordinate system. A surface interpolation then gives the 3-D object model. In addition to the key advantage of utilizing range measurements from sonar, the proposed paradigm requires no assumption about local surface curvature, as is traditionally made in 3-D shape reconstruction from occluding contours. The reconstruction accuracy is improved by computing both the 3-D positions and the local surface normals of the sampled contours. We also present (1) a simple calibration method to estimate and correct for small discrepancies from the desired relative stereo pose; and (2) an analysis of the degenerate configuration that enables special treatment when mapping (tall) elongated objects with dominantly vertical edges. We demonstrate the performance of our method through the 3-D surface rendering of certain objects imaged by an underwater opti-acoustic stereo system.
3-D object modeling from 2-D occluding contour correspondences by opti-acoustic stereo imaging
S1077314214002124
In this paper, we propose a novel plant leaf classification method comprising segmentation, a combination of new and well-known feature extraction methods, and classification. The aim of the proposed features is to distinguish leaf margins, which cannot be distinguished using commonly used geometric features. Additionally, a Linear Discriminant Classifier is used for classification, so using features that are noisy for some leaf types does not reduce the performance of the system. The proposed system outperforms the well-known geometric methods used for leaf classification.
Geometric leaf classification
S1077314214002227
In object segmentation by active contours, an initial contour provided by the user is often required. This paper extends the conventional active contour model by incorporating feature matching into the formulation for automatic object segmentation, yielding a novel matching-constrained active contour. The key to our formulation is a mathematical model of the relationship between interior feature points and object shape, called the interior-points-to-shape relation. According to this relation, we can achieve automatic object segmentation in two steps. Specifically, we first estimate the object boundary position given the matched interior feature points; we then further optimize the boundary position in the active contour framework. To obtain a unified optimization model for this task, we additionally formulate the matching score as a constraint on the active contour model, resulting in our matching-constrained active contour. We also derive the projected-gradient descent equations to solve the constrained optimization. In the experiments, we show that our method achieves automatic object segmentation and clearly outperforms related methods.
Matching-constrained active contours with affine-invariant shape prior
S1077314214002239
We consider the problem of multi-label classification, where a feature vector may belong to one or more different classes or concepts at the same time. Many existing approaches are devoted to solving the difficult estimation task of uncovering the relationship between features and active concepts solely from data, without taking into account any sensible functional structure. In this paper, we propose a novel probabilistic generative model that aims to describe the core generative process of how multiple active concepts can contribute to feature generation. Within our model, each concept is associated with multiple representative base feature vectors, which shares the central idea of sparse feature modeling with popular dictionary learning. However, by treating the weight coefficients as exclusive latent random variables encoding contribution levels, we effectively frame the coefficient learning task as probabilistic inference. We introduce two parameter learning algorithms for the proposed model: one based on standard maximum likelihood learning via the expectation–maximization algorithm, the other focusing on maximally separating the margin of the true concept configuration away from the class boundary. For the latter we suggest an efficient approximate optimization method in which each iteration admits a closed-form update with no line search. On several benchmark datasets, mostly from multi-label image classification, we demonstrate that our generative model with the proposed estimators can often yield superior prediction performance to existing methods.
Multiple-concept feature generative models for multi-label image classification
S1077314214002240
Image segmentation is a very important step in image analysis, and performance evaluation of segmentation algorithms plays a key role both in developing efficient algorithms and in selecting suitable methods for a given task. Although a number of publications have appeared on segmentation methodology and segmentation performance evaluation, little attention has been given to statistically bounding the performance of image segmentation algorithms. In this paper, to determine the performance limits of image segmentation algorithms, a modified Cramér–Rao bound combined with the affine bias model is employed. A fuzzy segmentation formulation is considered, of which hard segmentation is a special case. Experimental results comparing several representative image segmentation algorithms with the derived bound are presented on both synthetic and real-world image data.
On performance limits of image segmentation algorithms
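For orientation, the display below is a hedged sketch of the standard biased Cramér–Rao bound under an affine bias model, the kind of bound the abstract above builds on; the paper's exact form for fuzzy segmentation may differ.

```latex
% Affine bias model and the resulting covariance / MSE bounds (standard form).
\[
  \mathbb{E}[\hat{\theta}] = A\theta + b
  \quad\Longrightarrow\quad
  \operatorname{Cov}(\hat{\theta}) \succeq A\,J(\theta)^{-1}A^{\top},
  \qquad
  \operatorname{MSE}(\hat{\theta}) \succeq A\,J(\theta)^{-1}A^{\top}
  + (A\theta + b - \theta)(A\theta + b - \theta)^{\top},
\]
where $J(\theta)$ is the Fisher information matrix of the observed image data.
```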
S1077314214002252
Among example-based learning methods for image super-resolution (SR), the mapping function between a high-resolution (HR) image and its low-resolution (LR) version plays a critical role in the SR process. This paper presents a novel 2D tensor regression learning framework for single-image SR reconstruction. From an image statistics point of view, the statistical matching relationship between an HR image patch and its LR counterpart can be efficiently represented in tensor spaces. Specifically, we define a generalized 2D tensor regression framework between HR and LR image patch pairs to learn a set of tensor coefficients capturing the statistical dependency between HR and LR patches. The framework is augmented with different constraint terms, yielding an interesting interpretation of the linear mapping function relating the LR and HR image patch spaces. Finally, the HR image is synthesized from the patches of a single LR input image under the learned tensor regression model. Experimental results show that our algorithm generates HR images that are competitive with or even superior to those produced by other similar SR methods in both PSNR (peak signal-to-noise ratio) and visual quality.
Image super-resolution via 2D tensor regression learning
S1077314214002264
This paper presents a novel image classification method that uses the hierarchical structure of categories to produce more semantic predictions. This implies that our algorithm may not always yield a correct prediction, but the result is likely to be semantically close to the right category; the proposed method is therefore able to provide a more informative classification result. The main idea of our method is twofold. First, it uses a semantic representation, instead of low-level image features, enabling the construction of high-level constraints that exploit the relationships among semantic concepts in the category hierarchy. Second, from such constraints, an optimization problem is formulated to learn a semantic similarity function in a large-margin framework. This similarity function is then used to classify test images. Experimental results demonstrate that our method provides effective classification results on various real-image datasets.
Large margin learning of hierarchical semantic similarity for image classification
S1077314214002392
Automatic multiple object tracking with a single pan–tilt–zoom (PTZ) camera is a hard task, with few approaches in the literature, most of them addressing simplistic scenarios. In this paper, we present a novel PTZ camera management framework in which, at each time step, the next camera pose (pan, tilt, focal length) is chosen to support multiple object tracking. The policy can be myopic or non-myopic: the former analyzes exclusively the current frame when deciding the next camera pose, while the latter also takes into account plausible future target displacements and camera poses through a multiple look-ahead optimization. In both cases, occlusions, a variable number of subjects and genuine pedestrian detectors are taken into account, for the first time in the literature. Convincing comparative results on synthetic data, realistic simulations and real trials validate our proposal, showing that non-myopic strategies are particularly suited for PTZ camera management.
Non-myopic information theoretic sensor management of a single pan–tilt–zoom camera for multiple object detection and tracking
S1077314214002409
Recovering a deformable 3D surface from a single image is an ill-posed problem because of depth ambiguities. Resolving this ambiguity normally requires prior knowledge about the most probable deformations that the surface can undergo. Many methods that address this problem have been proposed in the literature. Some of them rely on physical properties, while others learn the principal deformations of the object or are based on a reference textured image. However, they have limitations such as high computational cost or the inability to recover the 3D shape. As an alternative to existing solutions, this paper provides a novel approach that simultaneously recovers the non-rigid 3D shape and the camera pose in real time from a single image. The proposal relies on an efficient particle filter that performs an intelligent search of a database of deformations. We present an exhaustive Design of Experiments to obtain the optimal parametrization of the particle filter, as well as a set of results to demonstrate the visual quality and the performance of our approach.
Real time non-rigid 3D surface tracking using particle filter
S1077314214002422
The momentum term has long been used in machine learning algorithms, especially back-propagation, to improve their speed of convergence. In this paper, we derive an expression to prove the O(1/k^2) convergence rate of the online gradient method with momentum-type updates, when the individual gradients are constrained by a growth condition. We then apply this type of update to video background modelling by using it in the update equations of the Region-based Mixture of Gaussians algorithm. Extensive evaluations are performed on both simulated data and challenging real-world scenarios with dynamic backgrounds, showing that these regularised updates help the mixtures converge faster than the conventional approach and consequently improve the algorithm's performance.
Fast convergence of regularised Region-based Mixture of Gaussians for dynamic background modelling
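The sketch below is a minimal momentum-type online gradient update of the kind analysed above, applied to a toy streaming objective; the step size, momentum factor and loss are placeholders, not the paper's Region-based Mixture of Gaussians update.

```python
import numpy as np

def online_momentum(grad_fn, theta0, steps=200, lr=0.05, beta=0.9):
    """Heavy-ball style online updates: velocity accumulates past gradients."""
    theta = np.asarray(theta0, dtype=float)
    velocity = np.zeros_like(theta)
    for k in range(steps):
        g = grad_fn(theta, k)               # gradient evaluated on the k-th sample
        velocity = beta * velocity - lr * g
        theta = theta + velocity
    return theta

# Toy example: track the mean of a noisy stream (gradient of 0.5*(theta - x_k)^2).
rng = np.random.default_rng(0)
estimate = online_momentum(lambda t, k: t - (1.0 + 0.1 * rng.standard_normal()), 0.0)
```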
S1077314214002434
Many computer vision applications involve modeling complex spatio-temporal patterns in high-dimensional motion data. Recently, restricted Boltzmann machines (RBMs) have been widely used to capture and represent spatial patterns in a single image or temporal patterns across several time slices. To model global dynamics and local spatial interactions, we propose to extend conventional RBMs by introducing an additional term in the energy function that explicitly models the local spatial interactions in the input data. A learning method is then proposed to perform efficient learning for the proposed model. We further introduce a new method for multi-class classification that can effectively estimate the infeasible partition functions of different RBMs, such that the RBM can be treated as a generative model for classification purposes. The improved RBM model is evaluated on two computer vision applications: facial expression recognition and human action recognition. Experimental results on benchmark databases demonstrate the effectiveness of the proposed algorithm.
A generative restricted Boltzmann machine based method for high-dimensional motion data modeling
S1077314214002446
We propose a novel approach to synthesizing images that are effective for training object detectors. Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object. These parameters can then be reused to generate an unlimited number of training images of the object of interest in arbitrary 3D poses, which can in turn be used to increase classification performance. A key insight of our approach is that the synthetically generated images should be similar to real images, not in terms of image quality, but rather in terms of the features used during detector training. We show in the context of drone, plane, and car detection that using such synthetically generated images yields significantly better performance than simply perturbing real images or even synthesizing images in such a way that they look very realistic, as is often done when only limited amounts of training data are available.
On rendering synthetic images for training an object detector
S1077314214002458
Foreground detection algorithms have sometimes relied on rather ad hoc procedures, even when probabilistic mixture models are defined. Moreover, the fact that the input features have different variances and that they are not independent from each other is often neglected, which hampers performance. Here we aim to obtain a background model which is not tied to any particular choice of features, and that accounts for the variability and the dependences among features. It is based on the stochastic approximation framework. A possible set of features is presented, and their suitability for this problem is assessed. Finally, the proposed procedure is compared with several state-of-the-art alternatives, with satisfactory results.
Features for stochastic approximation based foreground detection
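The sketch below illustrates a stochastic-approximation style background update of the general kind described above: per-pixel running means and variances of a feature vector with a Robbins-Monro step, plus a simple Mahalanobis-style foreground test. The diagonal covariance, step size and threshold are illustrative simplifications, not the paper's full model.

```python
import numpy as np

def update_background(mean, var, features, rho=0.01):
    """Running per-pixel statistics; mean/var/features are (H, W, D) arrays."""
    diff = features - mean
    mean = mean + rho * diff
    var = var + rho * (diff ** 2 - var)
    return mean, var

def foreground_mask(mean, var, features, thresh=9.0):
    """Flag pixels whose squared (diagonal) Mahalanobis distance is large."""
    d2 = np.sum((features - mean) ** 2 / (var + 1e-6), axis=-1)
    return d2 > thresh
```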
S1077314215000028
3D mesh segmentation has become a crucial part of many applications in 3D shape analysis. In this paper, a comprehensive survey of 3D mesh segmentation methods is presented. The existing methodologies are analyzed according to a new categorization, along with the performance evaluation frameworks that aim to support meaningful benchmarks both qualitatively and quantitatively. This survey aims to capture the essence of current trends in 3D mesh segmentation.
A comprehensive overview of methodologies and performance evaluation frameworks in 3D mesh segmentation
S107731421500003X
We present a method for efficiently generating dense, relative depth estimates from video without requiring any knowledge of the imaging system, either a priori or by estimating it during processing. Instead we only require that the epipolar constraint between any two frames is satisfied and that the fundamental matrix can be estimated. By tracking sparse features across many frames and aggregating the multiple depth estimates together, we are able to improve the overall estimate for any given frame. Once the depth estimates are available, we treat the generation of the depth maps as a label propagation problem. This allows us to combine the automatically generated depth maps with any user corrections and modifications (if so desired).
A framework for estimating relative depth in video
S1077314215000168
In this paper we propose a framework for tracking multiple interacting targets in a wide-area camera network consisting of both overlapping and non-overlapping cameras. Our method is motivated from observations that both individuals and groups of targets interact with each other in natural scenes. We associate each raw target trajectory (i.e., a tracklet) with a group state, which indicates if the trajectory belongs to an individual or a group. Structural Support Vector Machine (SSVM) is applied to the group states to decide if merge or split events occur in the scene. Information fusion between multiple overlapping cameras is handled using a homography-based voting scheme. The problem of tracking multiple interacting targets is then converted to a network flow problem, for which the solution can be obtained by the K-shortest paths algorithm. We demonstrate the effectiveness of the proposed algorithm on the challenging VideoWeb dataset in which a large amount of multi-person interaction activities are present. Comparative analysis with state-of-the-art methods is also shown.
Tracking multiple interacting targets in a camera network
S107731421500017X
The appearance model is a key component of tracking algorithms. To attain robustness, many complex appearance models have been proposed to capture discriminative information about the object. However, such models are difficult to maintain accurately and efficiently. In this paper, we observe that hashing techniques can be used to represent an object by a compact binary code that is efficient to process. However, during tracking, online updating of the hash functions is still inefficient when the number of samples is large. To deal with this bottleneck, a novel hashing method called two-dimensional hashing is proposed. In our tracker, samples and templates are hashed to binary matrices, and the Hamming distance is used to measure the confidence of candidate samples. In addition, an incremental learning model is designed to update the hash functions, both adapting to situation changes and saving training time. Experiments comparing our tracker with eight other state-of-the-art trackers demonstrate that the proposed algorithm is more robust in dealing with various types of scenarios.
Two dimensional hashing for visual tracking
S1077314215000181
We propose the use of explicitly identified image structure to guide the solution of the single image super-resolution (SR) problem. We treat the image as a layout of homogeneous regions, surrounded by ramp edges of a larger contrast. Ramps are characterized by the property that any path through any ramp pixel, monotonically leading from one to the other side, has monotonically increasing (or decreasing) intensity values along it. Such a ramp profile thus captures the large contrast between the two homogeneous regions. In this paper, the SR problem is viewed primarily as one of super-resolving these ramps, since the relatively homogeneous interiors can be handled using simpler methods. Our approach involves learning how these ramps transform across resolutions, and applying the learnt transformations to the ramps of a test image. To obtain our final SR reconstruction, we use the transformed ramps as priors in a regularization framework, where the traditional backprojection constraint is used as the data term. As compared to conventional edge based SR methods, our approach provides three distinct advantages: (1) Conventional edge based SR methods are based on gradients, which use 2D filters with heuristically chosen parameters and these choices result in different gradient values. This sensitivity adversely affects learning gradient domain correspondences across different resolutions. We show that ramp profiles are more adaptive, stable and therefore reliable representations for learning edge transformations across resolutions. (2) Existing gradient based SR methods are often unable to sufficiently constrain the absolute brightness levels in the image. Our approach on the other hand, operates directly in the image intensity domain, enforcing sharpness as well as brightness consistency. (3) Unlike previous gradient based methods, we also explicitly incorporate dependency between closely spaced edges while learning ramp correspondences. This allows for better recovery of contrast across thin structures such as in high spatial frequency areas. We obtain results that are sharper and more faithful to the true image color, and show almost no ringing artifacts.
Learning ramp transformation for single image super-resolution
S1077314215000193
The Bag of Words paradigm has been the baseline from which several successful image classification solutions were developed in the last decade. These represent images by quantizing local descriptors and summarizing their distribution. The quantization step introduces a dependency on the dataset which, even if it significantly boosts performance in some contexts, severely limits the generalization capabilities of the representation. In contrast, in this paper we propose to model the local feature distribution with a multivariate Gaussian, without any quantization. The full-rank covariance matrix, which lies on a Riemannian manifold, is projected onto the tangent Euclidean space and concatenated to the mean vector. The resulting representation, a Gaussian of Local Descriptors (GOLD), allows the dot product to closely approximate a distance between distributions without the need for expensive kernel computations. We describe an image by an improved spatial pyramid, which avoids boundary effects with soft assignment: local descriptors contribute to neighboring Gaussians, forming a weighted spatial pyramid of GOLD descriptors. In addition, we extend the model by leveraging dataset characteristics in a mixture of Gaussians formulation, further improving the classification accuracy. To deal with large-scale datasets and high-dimensional feature spaces, a Stochastic Gradient Descent solver is adopted. Experimental results on several publicly available datasets show that the proposed method obtains state-of-the-art performance.
GOLD: Gaussians of Local Descriptors for image representation
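The sketch below computes a GOLD-like descriptor: the mean of an image's local descriptors concatenated with the vectorised matrix logarithm of their covariance, i.e. a log-Euclidean projection onto the tangent space. The regularisation constant is an assumption, and the spatial pyramid and mixture-of-Gaussians extensions are omitted.

```python
import numpy as np
from scipy.linalg import logm

def gold_descriptor(local_descriptors, eps=1e-4):
    """local_descriptors: (N, D) array of descriptors extracted from one region."""
    X = np.asarray(local_descriptors, dtype=float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])  # keep it full rank
    log_cov = logm(cov).real                                   # tangent-space projection
    iu = np.triu_indices(X.shape[1])                           # symmetric: keep upper triangle
    return np.concatenate([mu, log_cov[iu]])

# Dot products between such vectors approximate a distance between the
# underlying Gaussian distributions, so a linear classifier can be used.
descriptor = gold_descriptor(np.random.rand(500, 64))
```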
S1077314215000211
Learning adaptive dictionaries for sparse coding has been the focus of recent research, as it provides a promising way to maximize the efficiency of sparse representation. In particular, learning discriminative dictionaries rather than reconstructive ones has demonstrated significantly improved performance in pattern recognition. In this paper, a powerful method is proposed for discriminative dictionary learning. During the dictionary learning process, we enhance the discriminability of sparse codes by promoting hierarchical group sparsity and reducing the linear prediction error on sparse codes. With the employment of joint within-class collaborative hierarchical sparsity, our method is able to learn adaptive dictionaries from labeled data for classification, which encourage coefficients to be sparse at both the group level and the singleton level and thus enforce the separability of sparse codes. Benefiting from joint dictionary and classifier learning, the discriminability of sparse codes is further strengthened. An efficient alternating iterative scheme is presented to solve the proposed model. We applied our method to face recognition, object recognition and scene classification. Experimental results demonstrate the excellent performance of our method in comparison with existing discriminative dictionary learning approaches.
Discriminative structured dictionary learning with hierarchical group sparsity
S1077314215000223
Modeling of visual saliency is an important domain of research in computer vision, given the significant role of attention mechanisms during neural processing of visual information. This work presents a new approach for the construction of image representations of salient locations, generally known as saliency maps. The developed method is based on an efficient comparison scheme for the local sparse representations deriving from non-overlapping image patches. The sparse coding stage is implemented via an overcomplete dictionary trained with a soft-competitive bio-inspired algorithm and the use of natural images. The resulting local sparse codes are pairwise compared using the Hamming distance as a gauge of their co-activation. The calculated distances are used to quantify the saliency strength for each individual patch, and then, the saliency values are non-linearly filtered to form the final map. The evaluation results obtained on four image databases, demonstrate the competitive performance of the proposed approach compared to several state-of-the-art saliency modeling algorithms. More importantly, the proposed scheme is simple, efficient, and robust under a variety of visual conditions. Thus, it appears as an ideal solution for a hardware implementation of a frontend saliency modeling module in a computer vision system.
Efficient modeling of visual saliency based on local sparse representation and the use of Hamming distance
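The sketch below captures the co-activation idea from the abstract: each non-overlapping patch's sparse code is binarised and its saliency is scored by the mean Hamming distance to all other patches. Random codes stand in for the dictionary-based sparse coding stage, and the final non-linear filtering is omitted.

```python
import numpy as np

def patch_saliency(codes, tol=1e-8):
    """codes: (P, K) sparse coefficients, one row per image patch."""
    active = (np.abs(codes) > tol).astype(np.uint8)            # binary activation patterns
    # Pairwise Hamming distances between the activation patterns of all patches.
    hamming = np.count_nonzero(active[:, None, :] != active[None, :, :], axis=-1)
    return hamming.mean(axis=1)                                # rare patterns score as salient

codes = np.random.rand(100, 256) * (np.random.rand(100, 256) > 0.95)
saliency_per_patch = patch_saliency(codes)
```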
S1077314215000235
Face image interpretation with generative models is done by reconstructing the input image as well as possible. A comparison between the target and the model-generated image is complicated by the fact that faces are surrounded by background. The standard likelihood formulation only compares within the modeled face region. Through this restriction, an unwanted but unavoidable background model appears in the likelihood. This implicitly present model is inappropriate for most backgrounds and leads to artifacts in the reconstruction, ranging from pose misalignment to shrinking of the face. We discuss the problem in detail for a probabilistic 3D Morphable Model and propose to use explicit image-based background models as a simple but fundamental solution. We also discuss common practical strategies which deal with the problem but suffer from limited applicability, which inhibits the fully automatic adaptation of such models. We integrate the explicit background model through a likelihood ratio correction of the face model and thereby remove the need to evaluate the complete image. The background models are generic and do not need to model background specifics. The corrected 3D Morphable Model directly leads to more accurate pose estimation and image interpretations at large yaw angles with strong self-occlusion.
Background modeling for generative image models
S1077314215000247
Deformable models are mathematical tools used in image processing to analyze the shape and movement of real objects, thanks to their ability to emulate physical features such as elasticity, stiffness, mass and damping. In the original approach, parametric models are obtained from the minimization of an energy functional by means of the Euler–Lagrange equation, with the finite element method used for spatial discretization. The shape and position of the model are governed by a second-order partial differential equation system, which is obtained by applying the calculus of variations. Subsequent work proposes a model formulation defined completely in the frequency domain, by translating the PDE system into the Fourier domain. This approach offers important computational efficiency and an easier generalization to multidimensional models, since each spectral component of the model is ruled by an independent PDE. This paper reviews the frequency-based formulation and analyzes the convergence and stability of these multidimensional parametric deformable models. Results show that the accuracy and speed of convergence depend on the dynamic parameters of the system and the spectrum of the data to be characterized, providing a procedure to speed up convergence by an appropriate choice of these parameters.
Convergence analysis of multidimensional parametric deformable models
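As a hedged sketch of the governing system the abstract refers to, the classical dynamic deformable-model equation and the reason a Fourier-domain formulation decouples it are shown below; the notation follows the standard snake model and need not match the paper's multidimensional generalisation exactly.

```latex
% Dynamic deformable model (standard snake notation).
\[
  \mu\,\frac{\partial^2 \mathbf{v}}{\partial t^2}
  + \gamma\,\frac{\partial \mathbf{v}}{\partial t}
  - \alpha\,\frac{\partial^2 \mathbf{v}}{\partial s^2}
  + \beta\,\frac{\partial^4 \mathbf{v}}{\partial s^4}
  = \mathbf{f}(\mathbf{v}).
\]
% Expanding v in a Fourier basis turns the spatial operator into a scalar per frequency,
\[
  \mathbf{v}(s,t) = \sum_k \mathbf{c}_k(t)\,e^{2\pi i k s}
  \;\;\Longrightarrow\;\;
  \mu\,\ddot{\mathbf{c}}_k + \gamma\,\dot{\mathbf{c}}_k
  + \bigl(\alpha\,(2\pi k)^2 + \beta\,(2\pi k)^4\bigr)\,\mathbf{c}_k = \mathbf{f}_k(t),
\]
% so each spectral coefficient evolves under an independent second-order equation
% whose stiffness grows with k, which governs the convergence speed and stability.
```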
S1077314215000259
Topic models have been shown to be effective for building image representations of general images. Recently, how to build better image representations for images in social media has become an interesting problem, where one key issue is how to leverage images' social contextual cues, e.g., the user tags associated with images. Nevertheless, most previous methods either exploit only image content and neglect user tags, or assume exact correspondences between image content and tags, i.e., that tags are closely related to image content. Thus, they cannot be applied to realistic scenarios where images are only weakly annotated with tags, i.e., tags are only loosely related to image content, as already manifested in real-world social media data. In this paper, we address the problem of building better image representations in social media, where images are weakly annotated with user tags. In particular, we organize a collection of images as an image network in which the relations between images are modeled by user tags. To model such an image network and build image representations, we further propose a network-structured topic model, namely the Visual Topic Network (VTN), where image content and image relations are modeled simultaneously. In this way, the weakly annotated tags can be effectively leveraged in building image representations. The proposed VTN model is inspired by the Relational Topic Model (RTM) recently introduced in the document analysis literature. Different from the binary article relations in RTM, the proposed VTN can model multiple-level image relations. Our extensive experiments on two social media datasets demonstrate the advantage of the proposed VTN model.
Visual Topic Network: Building better image representations for images in social media
S1077314215000260
Spatio-temporal saliency detection has attracted much research interest due to its competitive performance in a wide range of multimedia applications. For spatio-temporal saliency detection, existing bottom-up algorithms often over-simplify the fusion strategy, which results in inferior performance compared with the human visual system. In this paper, a novel bottom-up spatio-temporal saliency model is proposed to improve the accuracy of attentional region estimation in videos by fully exploiting the merit of fusion. In order to represent the space constructed by several types of features, such as location, appearance and temporal cues extracted from video, kernel regression in mixed feature spaces (KR-MFS) including three approximation entity-models is proposed. Using KR-MFS, a hybrid fusion strategy that considers the combination of the spatial and temporal saliency of each individual unit and incorporates the impact of neighboring units is presented and embedded into the spatio-temporal saliency model. The proposed model has been evaluated on a publicly available dataset. Experimental results show that the proposed spatio-temporal saliency model achieves better performance than state-of-the-art approaches.
Kernel regression in mixed feature spaces for spatio-temporal saliency detection
S1077314215000351
We propose the application of a phase-field framework to three-dimensional volume reconstruction from slice data. The proposed method is based on the Allen–Cahn and Cahn–Hilliard equations, and the algorithm consists of two steps. First, we perform image segmentation on the given raw data using a modified Allen–Cahn equation. Second, we reconstruct a three-dimensional volume using a modified Cahn–Hilliard equation, in which a fidelity term is introduced to keep the solution close to the slice data. The numerical scheme combines a hybrid method with an unconditionally stable nonlinear splitting scheme, and the resulting discrete equations are solved using a multigrid method. Experiments on synthetic and real medical images demonstrate the accuracy and efficiency of the proposed method.
Three-dimensional volume reconstruction from slice data using phase-field models
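For orientation, the display below gives hedged sketches of the standard equation forms underlying the two steps (Allen–Cahn for segmentation, Cahn–Hilliard with a fidelity term for reconstruction); the exact potentials, coefficients and fidelity weights used in the paper may differ.

```latex
% Segmentation step: Allen--Cahn type evolution with double-well potential F.
\[
  \frac{\partial \phi}{\partial t} = -\frac{F'(\phi)}{\epsilon^{2}} + \Delta\phi,
  \qquad F(\phi) = \tfrac{1}{4}\,\phi^{2}(1-\phi)^{2}.
\]
% Reconstruction step: Cahn--Hilliard evolution with a fidelity term that pins the
% solution to the slice data f on the slice planes (lambda large there, zero elsewhere).
\[
  \frac{\partial \phi}{\partial t}
  = \Delta\!\left(\frac{F'(\phi)}{\epsilon^{2}} - \Delta\phi\right)
  + \lambda(\mathbf{x})\,\bigl(f(\mathbf{x}) - \phi\bigr).
\]
```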
S1077314215000375
Free-hand sketch recognition has become increasingly popular due to the recent expansion of portable touchscreen devices. However, the problem is non-trivial due to the complexity of internal structures that leads to intra-class variations, coupled with the sparsity in visual cues that results in inter-class ambiguities. In order to address the structural complexity, a novel structured representation for sketches is proposed to capture the holistic structure of a sketch. Moreover, to overcome the visual cue sparsity problem and therefore achieve state-of-the-art recognition performance, we propose a Multiple Kernel Learning (MKL) framework for sketch recognition, fusing several features common to sketches. We evaluate the performance of all the proposed techniques on the most diverse sketch dataset to date (Mathias et al., 2012), and offer detailed and systematic analyses of the performance of different features and representations, including a breakdown by sketch-super-category. Finally, we investigate the use of attributes as a high-level feature for sketches and show how this complements low-level features for improving recognition performance under the MKL framework, and consequently explore novel applications such as attribute-based retrieval.
Free-hand sketch recognition by multi-kernel feature learning
S1077314215000387
A popular approach for finding the correspondence between two nonrigid shapes is to embed their two-dimensional surfaces into some common Euclidean space, defining the comparison task as a problem of rigid matching in that space. We propose to extend this line of thought and introduce a novel spectral embedding, which exploits gradient fields for point to point matching. With this new embedding, a fully automatic system for finding the correspondence between shapes is introduced. The method is demonstrated to accurately recover the natural maps between nearly isometric surfaces and shown to achieve state-of-the-art results on known shape matching benchmarks.
Spectral gradient fields embedding for nonrigid shape matching
S1077314215000399
In most stereo-matching algorithms, stereo similarity measures are used to determine which image patches in a left–right image pair correspond to each other. Different similarity measures may behave very differently on different kinds of image structures, for instance, some may be more robust to noise whilst others are more susceptible to small texture variations. As a result, it may be beneficial to use different similarity measures in different image regions. We present an adaptive stereo similarity measure that achieves this via a weighted combination of measures, in which the weights depend on the local image structure. Specifically, the weights are defined as a function of a confidence measure on the stereo similarities: similarity measures with a higher confidence at a particular image location are given higher weight. We evaluate the performance of our adaptive stereo similarity measure in both local and global stereo algorithms on standard benchmarks such as the Middlebury and KITTI data sets. The results of our experiments demonstrate the potential merits of our adaptive stereo similarity measure.
Adaptive stereo similarity fusion using confidence measures
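The sketch below shows confidence-weighted fusion of two patch similarity measures; the measures (a normalised SAD, assuming 8-bit patches, and NCC) and the exponential weighting are illustrative choices, not the paper's confidence measures.

```python
import numpy as np

def sad_similarity(a, b):
    """Similarity in [0, 1] from the mean absolute difference of 8-bit patches."""
    return 1.0 - np.abs(a.astype(float) - b.astype(float)).mean() / 255.0

def ncc_similarity(a, b):
    """Normalised cross-correlation mapped from [-1, 1] to [0, 1]."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-9
    return 0.5 * (1.0 + float((a * b).sum() / denom))

def fused_similarity(left_patch, right_patch, confidences):
    """confidences: one confidence value per measure at this image location."""
    scores = np.array([sad_similarity(left_patch, right_patch),
                       ncc_similarity(left_patch, right_patch)])
    w = np.exp(np.asarray(confidences, dtype=float))   # higher confidence -> larger weight
    return float(np.dot(w / w.sum(), scores))
```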
S1077314215000405
Attributes of objects such as "square", "metallic", and "red" give humans a way to explain or discriminate object categories. These attributes also provide a useful intermediate representation for object recognition, including support for zero-shot learning from textual descriptions of object appearance. However, manual selection of relevant attributes among thousands of potential candidates is labor intensive. Hence, there is increasing interest in mining attributes for object recognition. In this paper, we introduce two novel techniques for nominating attributes and a method for assessing the suitability of candidate attributes for object recognition. The first technique for attribute nomination estimates attribute qualities based on their ability to discriminate objects at multiple levels of the taxonomy. The second technique leverages the linguistic concept of distributional similarity to further refine the estimated qualities. Attribute nomination is followed by our attribute assessment procedure, which assesses the quality of the candidate attributes based on their performance in object recognition. Our evaluations demonstrate that both taxonomy and distributional similarity serve as useful sources of information for attribute nomination, and that our methods can effectively exploit them. We use the mined attributes in supervised and zero-shot learning settings to show their utility in object recognition. Our experimental results show that in the supervised case we can improve on a state-of-the-art classifier, while in the zero-shot scenario we make accurate predictions, outperforming previous automated techniques.
Identifying visual attributes for object recognition from text and taxonomy
S1077314215000417
The employment of visual sensor networks for video surveillance has brought in as many challenges as advantages. While the integration of multiple cameras into a network has the potential advantage of fusing complementary observations from sensors and enlarging visual coverage, it also increases the complexity of tracking tasks and poses challenges to system scalability. For real time performance, a key approach to tackling these challenges is the mapping of the global tracking task onto a distributed sensing and processing infrastructure. In this paper, we present an efficient and scalable multi-camera multi-people tracking system with a three-layer architecture, in which we formulate the overall task (i.e., tracking all people using all available cameras) as a vision based state estimation problem and aim to maximize utility and sharing of available sensing and processing resources. By exploiting the geometric relations between sensing geometry and people’s positions, our method is able to dynamically and adaptively partition the overall task into a number of nearly independent subtasks with the aid of occlusion reasoning, each of which tracks a subset of people with a subset of cameras (or agencies). The method hereby reduces task complexity dramatically and helps to boost parallelization and maximize the system’s real time throughput and reliability while accounting for intrinsic uncertainty induced, e.g., by visual clutter and occlusions. We demonstrate the efficiency of our decentralized tracker on challenging indoor and outdoor video sequences.
Dynamic task decomposition for decentralized object tracking in complex scenes
S1077314215000429
Optical flow estimation is one of the oldest and still most active research domains in computer vision. In 35 years, many methodological concepts have been introduced and have progressively improved performance, while opening the way to new challenges. In the last decade, the growing interest in evaluation benchmarks has stimulated a great amount of work. In this paper, we propose a survey of optical flow estimation that classifies the main principles elaborated during this evolution, with particular attention given to recent developments. It is conceived as a tutorial organizing current approaches and practices in a comprehensive framework. We give insights into the motivations, interests and limitations of modeling and optimization techniques, and we highlight similarities between methods to allow for a clear understanding of their behavior.
Optical flow modeling and computation: A survey
S1077314215000430
Local features, such as the scale-invariant feature transform (SIFT) and speeded up robust features (SURF), are widely used for describing an object in visual object recognition and classification applications. However, these approaches cannot be applied to transparent objects made of glass or plastic, as such objects take on the visual features of background scenes, and their appearance varies dramatically with changes in the scene. Indeed, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light field image as input and model the distortion of the light field caused by the refractive properties of a transparent object. We propose a new feature, called the light field distortion (LFD) feature. The proposed feature is background-invariant, so it can describe a transparent object without knowledge of the scene texture. The proposal incorporates this LFD feature into the bag-of-features approach for classifying transparent objects. We evaluate its performance and analyze its limitations in various settings.
Light field distortion feature for transparent object classification
S1077314215000442
Object instance detection is a fundamental problem in computer vision and has many applications. Compared with the problem of detecting a texture-rich object, the detection of a texture-less object is more involved because it is usually based on matching the shape of the object with the shape primitives extracted from an image, which is not as discriminative as matching appearance-based local features, such as the SIFT features. The Dominant Orientation Templates (DOT) method proposed by Hinterstoisser et al. is a state-of-the-art method for the detection of texture-less objects and can work in real time. However, it may well generate false detections in a cluttered background. In this paper, we propose a new method which has three contributions. Firstly, it augments the DOT method with a type of illumination insensitive color information. Since color is complementary to shape, the proposed method significantly outperforms the original DOT method in the detection of texture-less object in cluttered scenes. Secondly, we come up with a systematic way based on logistic regression to combine the color and shape matching scores in the proposed method. Finally, we propose a speed-up strategy to work with the proposed method so that it runs even faster than the original DOT method. Extensive experimental results are presented in this paper to compare the proposed method directly with the original DOT method and the LINE-2D method, and indirectly with another two state-of-the-art methods.
Combine color and shape in real-time detection of texture-less objects
S1077314215000454
The focus of this paper is on proposing new schemes based on score level and feature level fusion to fuse the face and iris modalities, employing several global and local feature extraction methods to effectively encode the face and iris. The proposed schemes are examined using different techniques for matching score level and feature level fusion on the CASIA Iris Distance database, the Print Attack face database, the Replay Attack face database and the IIIT-Delhi Contact Lens iris database. The proposed schemes employ Particle Swarm Optimization (PSO) and the Backtracking Search Algorithm (BSA) to select optimized features and weights, achieving a robust recognition system by reducing the number of features in the feature level fusion of the multimodal biometric system and by optimizing the weights assigned to the face-iris multimodal biometric scores in the score level fusion step. Additionally, in order to improve the face and iris recognition systems, and subsequently the recognition of the multimodal face-iris biometric system, the proposed methods attempt to correct and align the locations of both eyes by measuring the iris rotation angle. Results reported for both identification and verification rates show that the proposed fusion schemes obtain a significant improvement over unimodal and other multimodal methods implemented in this study. Furthermore, the robustness of the proposed multimodal schemes is demonstrated against spoof attacks on several face and iris spoofing datasets.
Selection of optimized features and weights on face-iris fusion using distance images
S1077314215000466
Not all frames are equal – selecting a subset of discriminative frames from a video can improve performance at detecting and recognizing human interactions. In this paper we present models for categorizing a video into one of a number of predefined interactions or for detecting these interactions in a long video sequence. The models represent the interaction by a set of key temporal moments and the spatial structures they entail. For instance: two people approaching each other, then extending their hands before engaging in a “handshaking” interaction. Learning the model parameters requires only weak supervision in the form of an overall label for the interaction. Experimental results on the UT-Interaction and VIRAT datasets verify the efficacy of these structured models for human interactions.
Discriminative key-component models for interaction detection and recognition
S1077314215000478
Given a general affine camera, we study the problem of finding the closest metric affine camera, where the latter is one of the orthographic, weak-perspective and paraperspective projection models. This problem typically arises in stratified Structure-from-Motion methods such as factorization-based methods. For each type of metric affine camera, we give a closed-form solution and its implementation through an algebraic procedure. Using our algebraic procedure, we can then provide a complete analysis of the problem’s generic ambiguity space. This also gives the means to generate the other solutions if any.
Metric corrections of the affine camera
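As a hedged illustration (not the paper's closed-form derivation), the sketch below projects a general 2x3 affine camera onto the weak-perspective model with an orthogonal Procrustes step: the SVD gives the closest matrix with orthonormal rows, and the mean singular value gives the isotropic scale.

```python
import numpy as np

def closest_weak_perspective(A):
    """A: 2x3 affine projection matrix (translation removed)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U diag(s) Vt, with Vt 2x3
    R = U @ Vt                                        # closest matrix with orthonormal rows
    scale = s.mean()                                  # single isotropic scale factor
    return scale * R

A = np.array([[1.10, 0.05, -0.20],
              [0.00, 0.95,  0.30]])
A_weak_perspective = closest_weak_perspective(A)
```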
S107731421500048X
Computer vision is hard because of a large variability in lighting, shape, and texture; in addition the image signal is non-additive due to occlusion. Generative models promised to account for this variability by accurately modelling the image formation process as a function of latent variables with prior beliefs. Bayesian posterior inference could then, in principle, explain the observation. While intuitively appealing, generative models for computer vision have largely failed to deliver on that promise due to the difficulty of posterior inference. As a result the community has favoured efficient discriminative approaches. We still believe in the usefulness of generative models in computer vision, but argue that we need to leverage existing discriminative or even heuristic computer vision methods. We implement this idea in a principled way with an informed sampler and in careful experiments demonstrate it on challenging generative models which contain renderer programs as their components. We concentrate on the problem of inverting an existing graphics rendering engine, an approach that can be understood as “Inverse Graphics”. The informed sampler, using simple discriminative proposals based on existing computer vision technology, achieves significant improvements of inference.
The informed sampler: A discriminative approach to Bayesian inference in generative computer vision models
S1077314215000508
The movement of a vehicle is strongly affected by the surrounding environment, such as the road shape and other traffic participants. This paper proposes a new method to predict the future motion of an on-road vehicle observed by a stereo camera system mounted on a moving vehicle. Our proposed algorithm considers not only the movement history of the observed vehicle, but also the environment configuration around it. To find feasible paths under a dynamic road environment, the Rapidly-Exploring Random Tree (RRT) is used. A simulation-based method is then applied to generate future trajectories by combining the results from the RRT with a motion prediction algorithm modelled as a Gaussian Mixture Model (GMM). Our experiments show that our approach predicts the future motion of a vehicle accurately and outperforms previous work in which only the motion history is considered for prediction.
A simulation based method for vehicle motion prediction
S107731421500051X
Recently, head pose estimation in real-world environments has been receiving attention in the computer vision community due to its applicability to a wide range of contexts. However, this task remains an open problem because of the challenges presented by real-world environments. Most approaches to this problem focus on estimation from single images or video frames, without leveraging the temporal information available in the entire video sequence. Other approaches frame the problem as classification into a set of very coarse pose bins. In this paper, we propose a hierarchical graphical model that probabilistically estimates continuous head pose angles from real-world videos by leveraging the temporal pose information over frames. The proposed graphical model is a general framework, which is able to use any type of feature and can be adapted to any facial classification task. Furthermore, the framework outputs the entire pose distribution for a given video frame. This permits robust temporal probabilistic fusion of pose information over the video sequence, as well as probabilistically embedding the head pose information into other inference tasks. Experiments on large, real-world video sequences reveal that our approach significantly outperforms alternative state-of-the-art pose estimation methods. The proposed framework is also evaluated on gender and facial hair estimation. By incorporating pose information into the proposed hierarchical temporal graphical model, superior results are achieved for attribute classification tasks.
Hierarchical temporal graphical model for head pose estimation and subsequent attribute classification in real-world videos
S1077314215000521
How to build a suitable image representation remains a critical problem in computer vision. Traditional Bag-of-Features (BoF) based models build image representations through a pipeline of local feature extraction, feature coding and spatial pooling. However, three major shortcomings hinder performance: the limitations of hand-designed features, the loss of discrimination in local appearance coding, and the lack of spatial information. To overcome these limitations, in this paper we propose a generalized BoF-based framework that is hierarchically learned using recently developed deep learning methods. First, with raw images as input, we densely extract local patches and learn local features with a stacked Independent Subspace Analysis network. The learned features are then transformed into appearance codes by sparse Restricted Boltzmann Machines. Second, we perform spatial max-pooling on a set of over-complete spatial regions, generated to cover various spatial distributions, to incorporate more flexible spatial information. Third, a structured sparse auto-encoder is proposed to aggregate the region representations into the image-level signature. To learn the proposed hierarchy, we pre-train the network layer-wise in an unsupervised manner, followed by supervised fine-tuning with image labels. Extensive experiments on different benchmarks, i.e., UIUC-Sports, Caltech-101, Caltech-256, Scene-15 and MIT Indoor-67, demonstrate the effectiveness of our proposed model.
Learning representative and discriminative image representation by deep appearance and spatial coding
S1077314215000533
Supervised learning approaches to image segmentation receive considerable interest due to their power and flexibility. However, the training phase is not painless, often being long and tedious. Accurate image labelling can take several hours of expert operators’ valuable time. User interfaces are often specifically designed to assist the user for the task at hand. This is clearly unfeasible for most application domains. We propose a simple segmentation framework based on classification and supervised incremental learning. A statistical model of pixel classes is learnt by incrementally adding new sample image patches to automatically-learned probability functions. Learning is iterated and refined in a number of steps rather than being executed in a one-shot training phase. We show that one-shot training and incremental labelling tend to produce similar statistical models, as the number of iterations grows. Comparable classification results are thus obtained with considerably less human effort.
Incremental learning to segment micrographs
S1077314215000545
Three-dimensional head pose estimation from a single 2D image is a challenging task with extensive applications. Existing approaches lack the capability to deal with multiple pose-related and -unrelated factors in a uniform way. Most of them can provide only one-dimensional yaw estimation and suffer from limited representation ability for out-of-sample testing inputs. These drawbacks lead to limited performance when extensive variations exist on faces in-the-wild. To address these problems, we propose a coarse-to-fine pose estimation framework, where the unit circle and 3-sphere are employed to model the manifold topology on the coarse and fine layers, respectively. It can uniformly factorize multiple factors in an instance parametric subspace, where novel inputs can be synthesized under a generative framework. Moreover, our approach can effectively avoid the manifold degradation problem when 3D pose estimation is performed. The results on both experimental and in-the-wild databases demonstrate the validity and superior performance of our approach compared with state-of-the-art methods.
From circle to 3-sphere: Head pose estimation by instance parameterization
S1077314215000673
This work proposes a novel part-based method for visual object tracking. In our model, keypoints are considered as elementary predictors localizing the target in a collaborative search strategy. While numerous methods have been proposed in the model-free tracking literature, finding the most relevant features to track remains a challenging problem. To distinguish reliable features from outliers and bad predictors, we evaluate feature saliency comprising three factors: the persistence, the spatial consistency, and the predictive power of a local feature. Saliency information is learned during tracking to be exploited in several algorithm components: local prediction, global localization, model update, and scale change estimation. By encoding the object structure via the spatial layout of the most salient features, the proposed method is able to accomplish successful tracking in difficult real life situations such as long-term occlusion, presence of distractors, and background clutter. The proposed method shows its robustness on challenging public video sequences, significantly outperforming recent state-of-the-art trackers. Our Salient Collaborating Features Tracker (SCFT) also demonstrates high accuracy even when only a few local features are available.
Collaborative part-based tracking using salient local predictors
S1077314215000685
Six methods for the accurate estimation of phase-correlation maxima are discussed and evaluated in this article for one- and two-dimensional signals. The evaluation was carried out under a rigid image registration framework, where artificially generated transformations were used in order to perform a quantitative assessment of the accuracy of each method and its robustness in the presence of noise, incomplete data, or extreme transformations. Another round of tests was performed with real cases where the true transformation is unknown, and not necessarily rigid; for these tests, quantitative evaluation was achieved by means of the root mean square error of the overlapping area between the two aligned images. While most methods behaved similarly under difficult conditions, three of the methods under study displayed clear advantages under mild levels of noise, low transformation complexity, and small percentages of missing data. These methods are the local center of mass, sinc function fitting, and minimization of the POC gradient magnitude. The other tested methods included quadratic fitting, linear fitting in the frequency domain, and up-sampling; however, these methods did not perform consistently well.
Phase correlation with sub-pixel accuracy: A comparative study in 1D and 2D
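To make one of the compared estimators concrete, the following is a minimal Python sketch (not the authors' code) of phase correlation with an integer-pixel peak refined by a local centre of mass; the window radius, wrap-around handling and shift sign convention are illustrative assumptions.

import numpy as np

def phase_correlation_shift(im1, im2, radius=2, eps=1e-12):
    """Estimate the translation between two same-sized grayscale images."""
    F1, F2 = np.fft.fft2(im1), np.fft.fft2(im2)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + eps            # normalised cross-power spectrum
    poc = np.abs(np.fft.ifft2(cross_power))             # phase-only correlation surface
    peak = np.unravel_index(np.argmax(poc), poc.shape)  # integer-pixel peak location

    # centre of mass in a small window around the peak (wrap-around handled by np.roll)
    centred = np.roll(poc, (radius - peak[0], radius - peak[1]), axis=(0, 1))
    window = centred[:2 * radius + 1, :2 * radius + 1]
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dy = (window * ys).sum() / window.sum()
    dx = (window * xs).sum() / window.sum()

    # convert the peak indices to signed shifts
    shift_y = peak[0] if peak[0] <= im1.shape[0] // 2 else peak[0] - im1.shape[0]
    shift_x = peak[1] if peak[1] <= im1.shape[1] // 2 else peak[1] - im1.shape[1]
    return shift_y + dy, shift_x + dx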
S1077314215000697
We propose a novel method for keeping track of multiple objects in provided regions of interest, i.e. object detections, specifically in cases where a single object results in multiple co-occurring detections (e.g. when objects exhibit unusual size or pose) or a single detection spans multiple objects (e.g. during occlusion). Our method identifies a minimal set of objects to explain the observed features, which are extracted from the regions of interest in a set of frames. Focusing on appearance rather than temporal cues, we treat video as an unordered collection of frames, and “unmix” object appearances from inaccurate detections within a Latent Dirichlet Allocation (LDA) framework, for which we propose an efficient Variational Bayes inference method. After the objects have been localized and their appearances have been learned, we can use the posterior distributions to “back-project” the assigned object features to the image and obtain segmentation at pixel level. In experiments on challenging datasets, we show that our batch method outperforms state-of-the-art batch and on-line multi-view trackers in terms of number of identity switches and proportion of correctly identified objects. We make our software and new dataset publicly available for non-commercial, benchmarking purposes.
Identifying multiple objects from their appearance in inaccurate detections
S1077314215000703
RGB-Depth (or RGB-D) cameras are increasingly being adopted in robotic and vision applications, including mobile robot localization and mapping, gesture recognition, and at-home healthcare monitoring. As with any other sensor, calibrating RGB-D cameras is needed to increase their sensing accuracy, especially since the manufacturer’s calibration parameters might change between models. In this paper, we present a novel RGB-D camera-calibration algorithm for the estimation of the full set of intrinsic and extrinsic parameters. Our method is easy to use, can be utilized with any arrangement of RGB and depth sensors, and only requires that a spherical object (e.g., a basketball) is moved in front of the camera for a few seconds. Our image-processing pipeline automatically and robustly detects the moving calibration object while rejecting noise and outliers in the image data. Our calibration method uses all the frames of the detected sphere and leverages novel analytical results on the multi-view projection of spheres to accurately estimate all the calibration parameters. Extensive numerical simulations and real-world experiments have been conducted to validate our algorithm and compare its performance with that of other state-of-the-art calibration methods. An RGB-D Calibration Toolbox for MATLAB is also made freely available for the scientific community.
Practical and accurate calibration of RGB-D cameras using spheres
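As a rough illustration of the basic per-frame measurement a sphere-based calibration builds on, the following Python sketch (an assumption, not the paper's pipeline) fits a sphere to a set of 3D points by linear least squares, yielding the sphere centre used as a calibration observation.

import numpy as np

def fit_sphere(points):
    """points: (N, 3) array of 3D points sampled on the sphere surface."""
    # ||p - c||^2 = r^2  rewritten as a linear system in (c, r^2 - c.c)
    A = np.hstack([2.0 * points, np.ones((points.shape[0], 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, k = sol[:3], sol[3]
    radius = np.sqrt(k + centre @ centre)
    return centre, radius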
S1077314215000715
Due to large variations in shape, appearance, and viewing conditions, object recognition is a key precursory challenge in the fields of object manipulation and robotic/AI visual reasoning in general. Recognizing object categories, particular instances of objects and viewpoints/poses of objects are three critical subproblems robots must solve in order to accurately grasp/manipulate objects and reason about their environments. Multi-view images of the same object lie on intrinsic low-dimensional manifolds in descriptor spaces (e.g. visual/depth descriptor spaces). These object manifolds share the same topology despite being geometrically different. Each object manifold can be represented as a deformed version of a unified manifold. The object manifolds can thus be parameterized by their homeomorphic mapping/reconstruction from the unified manifold. In this work, we develop a novel framework to jointly solve the three challenging recognition sub-problems, by explicitly modeling the deformations of object manifolds and factorizing them in a view-invariant space for recognition. We perform extensive experiments on several challenging datasets and achieve state-of-the-art results.
Factorization of view-object manifolds for joint object recognition and pose estimation
S1077314215000727
Face detection is one of the most studied topics in the computer vision literature, not only because of the challenging nature of face as an object, but also due to the countless applications that require face detection as a first step. During the past 15 years, tremendous progress has been made due to the availability of data in unconstrained capture conditions (so-called ‘in-the-wild’) through the Internet, the effort made by the community to develop publicly available benchmarks, as well as the progress in the development of robust computer vision algorithms. In this paper, we survey the recent advances in real-world face detection techniques, beginning with the seminal Viola–Jones face detector methodology. These techniques are roughly categorized into two general schemes: rigid templates, learned mainly via boosting based methods or by the application of deep neural networks, and deformable models that describe the face by its parts. Representative methods will be described in detail, along with a few additional successful methods that we briefly go through at the end. Finally, we survey the main databases used for the evaluation of face detection algorithms and recent benchmarking efforts, and discuss the future of face detection.
A survey on face detection in the wild: Past, present and future
S1077314215000831
We propose a block-based scene reconstruction method using multiple stereo pairs of spherical images. We assume that the urban scene consists of axis-aligned planar structures (Manhattan world). Captured spherical stereo images are converted into six central-point perspective images by cubic projection and façade alignment. Depth information is recovered by stereo matching between images. Semantic regions are segmented based on colour, edge and normal information. Independent 3D rectangular planes are constructed by fitting planes aligned with the principal axes of the segmented 3D points. Finally cuboid-based scene structure is recovered from multiple viewpoints by merging and refining planes based on connectivity and visibility. The reconstructed model efficiently shows the structure of the scene with a small amount of data.
Block world reconstruction from spherical stereo image pairs
S1077314215000843
In this paper, we introduce an original framework for computing local binary-like patterns on 2D mesh manifolds (i.e., surfaces in the 3D space). This framework, dubbed mesh-LBP, preserves the simplicity and the adaptability of the 2D LBP and has the capacity to handle both open and closed mesh surfaces without requiring normalization, unlike its 2D counterpart. We describe the foundations and the construction of mesh-LBP and showcase the different LBP patterns that can be generated on the mesh. In the experimentation, we provide evidence of the uniform patterns in the mesh-LBP, the repeatability of its descriptors, and its robustness to moderate shape deformations. Then, we show how the mesh-LBP descriptors can be adapted to a number of local and global surface analysis tasks, including 3D texture classification and retrieval, and 3D face matching. We also compare the performance of the mesh-LBP descriptors with a number of state-of-the-art surface descriptors.
Local binary patterns on triangular meshes: Concept and applications
S1077314215000855
Surveillance cameras have become customary security equipment in buildings and streets worldwide. It is up to the field of Computational Forensics to provide automated methods for extracting and analyzing relevant image data captured by such equipment. In this article, we describe an effective and semi-automated method for detecting vanishing points, with their subsequent application to the problem of computing heights in single images. Requiring no camera calibration, our method iteratively clusters segments in the bi-dimensional projective space, identifying all vanishing points – finite and infinite – in an image. We conduct experiments on images of man-made environments to evaluate the output of the proposed method and we also consider its application in a photogrammetry framework.
Efficient height measurements in single images based on the detection of vanishing points
S1077314215000867
The paper addresses structural decomposition of images by using a family of non-linear and non-convex objective functions. These functions rely on ℓp quasi-norm estimation costs in a piecewise constant regularization framework. These objectives make image decomposition into constant cartoon levels and rich textural patterns possible. The paper shows that these regularizing objectives yield image texture-versus-cartoon decompositions that cannot be reached by using standard penalized least-squares regularizations associated with smooth and convex objectives.
High order structural image decomposition by using non-linear and non-convex regularizing objectives
S1077314215000879
Motion segmentation and human face clustering are two fundamental problems in computer vision. The state-of-the-art algorithms employ the subspace clustering scheme when processing the two problems. Among these algorithms, sparse subspace clustering (SSC) achieves the state-of-the-art clustering performance via solving an ℓ1 minimization problem and employing the spectral clustering technique for clustering data points into different subspaces. In this paper, we propose an iterative weighting (reweighted) ℓ1 minimization framework which largely improves the performance of the traditional ℓ1 minimization framework. The reweighted ℓ1 minimization framework makes a better approximation to the ℓ0 minimization than the traditional ℓ1 minimization framework. Following the reweighted ℓ1 minimization framework, we propose a new subspace clustering algorithm, namely, reweighted sparse subspace clustering (RSSC). Through an extensive evaluation on three benchmark datasets, we demonstrate that the proposed RSSC algorithm significantly reduces the clustering errors over the SSC algorithm while the additional reweighting step has a moderate impact on the computational cost. The proposed RSSC also achieves the lowest clustering errors among recently proposed algorithms. On the other hand, as the majority of the algorithms were evaluated on the Hopkins155 dataset, which contains few non-rigid motion sequences, that dataset can hardly reflect the ability of the existing algorithms to process non-rigid motion segmentation. Therefore, we evaluate the performance of the proposed RSSC and state-of-the-art algorithms on the Freiburg-Berkeley Motion Segmentation Dataset, which mainly contains non-rigid motion sequences. The performance of these state-of-the-art algorithms, as well as RSSC, drops dramatically on this dataset with mostly non-rigid motion sequences. Though the proposed RSSC achieves better performance than the other algorithms, the results suggest that novel algorithms that focus on segmentation of non-rigid motions are still needed.
Reweighted sparse subspace clustering
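As a concrete illustration of the reweighting idea (a minimal Python sketch, not the authors' implementation), the weighted ℓ1 problem below is solved by plain ISTA, after which the per-coefficient weights are refreshed from the previous solution; the penalty value, iteration counts and the 1/(|x|+eps) weight update are assumed, illustrative choices.

import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1(A, y, lam=0.1, outer_iters=4, inner_iters=200, eps=1e-3):
    """Solve min 0.5*||y - A x||^2 + lam * sum_i w_i |x_i| with iterative reweighting."""
    n = A.shape[1]
    w = np.ones(n)                               # uniform weights -> plain l1 in round one
    x = np.zeros(n)
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)     # 1 / Lipschitz constant of the gradient
    for _ in range(outer_iters):
        for _ in range(inner_iters):             # ISTA on the current weighted lasso
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - step * grad, step * lam * w)
        w = 1.0 / (np.abs(x) + eps)              # reweighting: small coefficients get larger penalties
    return x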
S1077314215000880
Exemplar SVMs (E-SVMs, Malisiewicz et al., ICCV 2011), where an SVM is trained with only a single positive sample, have found applications in the areas of object detection and content-based image retrieval (CBIR), amongst others. In this paper we introduce a method of part based transfer regularization that boosts the performance of E-SVMs, with a negligible additional cost. This enhanced E-SVM (EE-SVM) improves the generalization ability of E-SVMs by softly forcing it to be constructed from existing classifier parts cropped from previously learned classifiers. In CBIR applications, where the aim is to retrieve instances of the same object class in a similar pose, the EE-SVM is able to tolerate increased levels of intra-class variation, including occlusions and truncations, over E-SVM, and thereby increases precision and recall. In addition to transferring parts, we introduce a method for transferring the statistics between the parts and also show that there is an equivalence between transfer regularization and feature augmentation for this problem and others, with the consequence that the new objective function can be optimized using standard libraries. EE-SVM is evaluated both quantitatively and qualitatively on the PASCAL VOC 2007 and ImageNet datasets for pose specific object retrieval. It achieves a significant performance improvement over E-SVMs, with greater suppression of negative detections and increased recall, whilst maintaining the same ease of training and testing.
Part level transfer regularization for enhancing exemplar SVMs
S1077314215000892
We describe a novel technique to combine motion data with scene information to capture activity characteristics of older adults using a single Microsoft Kinect depth sensor. Specifically, we describe a method to learn activities of daily living (ADLs) and instrumental ADLs (IADLs) in order to study the behavior patterns of older adults to detect health changes. To learn the ADLs, we incorporate scene information to provide contextual information to build our activity model. The strength of our algorithm lies in its generalizability to model different ADLs while adding more information to the model as we instantiate ADLs from learned activity states. We validate our results in a controlled environment and compare it with another widely accepted classifier, the hidden Markov model (HMM) and its variations. We also test our system on depth data collected in a dynamic unstructured environment at TigerPlace, an independent living facility for older adults. An in-home activity monitoring system would benefit from our algorithm to alert healthcare providers of significant temporal changes in ADL behavior patterns of frail older adults for fall risk, cognitive impairment, and other health changes.
Recognizing complex instrumental activities of daily living using scene information and fuzzy logic
S1077314215000909
In this paper we identify two types of problems caused by excessive feature sharing and the lack of discriminative learning in hierarchical compositional models: (a) similar category misclassifications and (b) phantom detections in background objects. We propose to overcome those issues by fully utilizing the discriminative features already present in the generative models of hierarchical compositions. We introduce a descriptor called the histogram of compositions to capture the information important for improving discriminative power, and use it with a classifier to learn distinctive features important for successful discrimination. The generative model of hierarchical compositions is combined with the discriminative descriptor by performing hypothesis verification of detections produced by the hierarchical compositional model. We evaluate the proposed descriptor on five datasets and show that it improves the misclassification rate between similar categories as well as the rate of phantom detections on backgrounds. Additionally, we compare our approach against a state-of-the-art convolutional neural network and show that our approach outperforms it under significant occlusions.
Adding discriminative power to a generative hierarchical compositional model using histograms of compositions
S1077314215000910
Monitoring large crowds using video cameras is a challenging task. Detecting humans in video is becoming essential for monitoring crowd behavior. However, occlusion and low resolution in the region of interest hinder accurate crowd segmentation. In such scenarios, it is likely that only the head is visible, and often very small. Most existing people-detection systems rely on low-level visual appearance features such as the Histogram of Oriented Gradients (HOG), and these are unsuitable for detecting human heads at low resolutions. In this paper, a novel head detector is presented using motion histogram features. The shape and the motion information, including crowd direction and magnitude, are learned and used to detect humans in occluded crowds. We introduce novel features based on a multi-level pyramid architecture for the Motion Boundary Histogram (MBH) and the Histogram of Oriented Optical Flow (HOOF), derived from the TV-L1 optical flow. In addition, a new feature, called Relative Motion Distance (RMD), is proposed to efficiently capture correlation statistics. To distinguish human heads from similar features, a two-stage Support Vector Machine (SVM) is used for classification, and an explicit kernel mapping on our motion histogram features is performed using Bhattacharyya-distance kernels. A second stage of classification is required to reduce the number of false positives. The proposed features and system were tested on videos from the PETS 2009 dataset and compared with state-of-the-art features, against which our system reported excellent results.
Head detection using motion features and multi level pyramid architecture
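As a rough illustration of one ingredient above, the following Python sketch computes a magnitude-weighted orientation histogram of optical flow for a single image cell (a simplified HOOF-like feature; the exact binning, pyramid levels and the underlying TV-L1 flow computation are assumed to be provided elsewhere).

import numpy as np

def hoof(flow_u, flow_v, n_bins=9):
    """flow_u, flow_v: 2D arrays of horizontal/vertical flow for one cell."""
    mag = np.sqrt(flow_u ** 2 + flow_v ** 2)
    ang = np.arctan2(flow_v, flow_u)                       # orientation in (-pi, pi]
    bins = np.floor((ang + np.pi) / (2 * np.pi) * n_bins).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-12)                     # L1 normalisation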
S1077314215000922
Deformable object models capture variations in an object’s appearance that can be represented as image deformations. Other effects such as out-of-plane rotations, three-dimensional articulations, and self-occlusions are often captured by considering mixtures of deformable models, one per object aspect. A more scalable approach is representing instead the variations at the level of the object parts, applying the concept of a mixture locally. Combining a few part variations can in fact cheaply generate a large number of global appearances. A limited version of this idea was proposed by Yang and Ramanan [1] for human pose detection. In this paper we apply it to the task of generic object category detection and extend it in several ways. First, we propose a model for the relationship between part appearances more general than the tree of Yang and Ramanan [1], which is more suitable for generic categories. Second, we treat part locations as well as their appearance as latent variables so that training does not need part annotations but only the object bounding boxes. Third, we modify the weakly-supervised learning of Felzenszwalb et al. and Girshick et al. [2], [3] to handle a significantly more complex latent structure. Our model is evaluated on standard object detection benchmarks and is found to improve over existing approaches, yielding state-of-the-art results for several object categories.
Factorized appearances for object detection
S1077314215000946
The majority of facial recognition systems depend on the correct location of both the left and right eye centers in an effort to geometrically normalize face images. We propose a novel eye detection algorithm that efficiently locates the eye centers in five different bands of the SWIR spectrum, ranging from 1150 nm up to 1550 nm in increments of 100 nm. Our eye detection methodology utilizes a combination of algorithmic steps, including 2D normalized correlation coefficients as well as summation range filters, to effectively find the eyes in the aforementioned SWIR wavelengths. We validate our method by comparing our approach with currently available eye detection algorithms, including commercial face recognition software whose capabilities include extraction of the eye locations, and a state-of-the-art academic approach. Eye detection results as well as face recognition studies show that our proposed approach outperforms all other approaches, including the state of the art (originally designed to work in the visible band), when operating in the SWIR spectrum. We also show that our approach is robust to typical image degradation factors including spatial resolution changes, image compression, and image blurring. This is an important achievement that also has practical value for biometric operators. It is impractical to manually annotate thousands to millions of eye centers; therefore, a quick and robust method for automatically determining the eye center locations is needed.
Accurate eye localization in the Short Waved Infrared Spectrum through summation range filters
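A minimal Python/OpenCV sketch of the 2D normalized correlation step referred to above (only one component of the pipeline; the summation range filters and any SWIR-specific processing are omitted, and 'eye_template' is a hypothetical reference patch).

import cv2
import numpy as np

def locate_eye_candidates(image_gray, eye_template, top_k=5):
    """image_gray and eye_template: single-channel arrays of matching dtype (8U or 32F)."""
    response = cv2.matchTemplate(image_gray, eye_template, cv2.TM_CCOEFF_NORMED)
    # pick the top correlation scores (no non-maximum suppression in this sketch)
    flat = np.argsort(response, axis=None)[::-1][:top_k]
    ys, xs = np.unravel_index(flat, response.shape)
    h, w = eye_template.shape[:2]
    # return candidate eye centres with their correlation scores
    return [(x + w // 2, y + h // 2, float(response[y, x])) for y, x in zip(ys, xs)]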
S1077314215000958
We address combinatorial problems that can be formulated as minimization of a partially separable function of discrete variables (energy minimization in graphical models, weighted constraint satisfaction, pseudo-Boolean optimization, 0–1 polynomial programming). For polyhedral relaxations of such problems it is generally not true that variables integer in the relaxed solution will retain the same values in the optimal discrete solution. Those which do are called persistent. Such persistent variables define a part of a globally optimal solution. Once identified, they can be excluded from the problem, reducing its size. To any polyhedral relaxation we associate a sufficient condition proving persistency of a subset of variables. We set up a specially constructed linear program which determines the set of persistent variables maximal with respect to the relaxation. The condition improves as the relaxation is tightened and possesses all its invariances. The proposed framework explains a variety of existing methods originating from different areas of research and based on different principles. A theoretical comparison is established that relates these methods to the standard linear relaxation and proves that the proposed technique identifies same or larger set of persistent variables.
Higher order maximum persistency and comparison theorems
S107731421500096X
In this paper, a new pipeline of structure-from-motion for ground-view images is proposed that uses feature points on an aerial image as references for removing accumulative errors. The challenge here is to design a method for discriminating correct matches from unreliable matches between ground-view images and an aerial image. If we depend on only local image features, it is not possible in principle to remove all the incorrect matches, because there frequently exist repetitive and/or similar patterns, such as road signs. In order to overcome this difficulty, we employ geometric consistency-verification of matches using the RANSAC scheme that comprises two stages: (1) sampling-based local verification focusing on the orientation and scale information extracted by a feature descriptor, and (2) global verification using camera poses estimated by the bundle adjustment using sampled matches.
Bundle adjustment using aerial images with two-stage geometric verification
S1077314215001071
Recently, the new Kinect One has been issued by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. Since the first Kinect version used a structured light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. In order to conduct the comparison, we propose a framework of seven different experimental setups, which is a generic basis for evaluating range cameras such as Kinect. The experiments have been designed with the goal of capturing the individual effects of the Kinect devices in as isolated a manner as possible, and in such a way that they can also be adopted for any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device. Thus, scientists who are interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected, specific benefits and potential problems of either device.
Kinect range sensing: Structured-light versus Time-of-Flight Kinect
S1077314215001083
With new depth sensing technology such as Kinect providing high quality synchronized RGB and depth images (RGB-D data), combining the two distinct views for object recognition has attracted great interest in the computer vision and robotics community. Recent methods mostly employ supervised learning methods for this new RGB-D modality based on the two feature sets. However, supervised learning methods always depend on a large amount of manually labeled data for training models. To address the problem, this paper proposes a semi-supervised learning method to reduce the dependence on large annotated training sets. The method can effectively learn from relatively plentiful unlabeled data, provided that powerful feature representations for both the RGB and depth views can be extracted. Thus, a novel and effective feature termed CNN-SPM-RNN is proposed in this paper, and four representative features (KDES [1], CKM [2], HMP [3] and CNN-RNN [4]) are evaluated and compared with ours under the unified semi-supervised learning framework. Finally, we verify our method on three popular and publicly available RGB-D object databases. The experimental results demonstrate that, with only 20% of the training set labeled, the proposed method can achieve competitive performance compared with the state of the art on most of the databases.
Semi-supervised learning and feature evaluation for RGB-D object recognition
S1077314215001174
Line segment detection is a fundamental procedure in computer vision, pattern recognition, or image analysis applications. This paper proposes a statistical method based on the Hough transform for line segment detection by considering quantization error, image noise, pixel disturbance, and peak spreading, also taking the choice of the coordinate origin into account. A random variable is defined in each column in a peak region. Statistical means and statistical variances are calculated; the statistical non-zero cells are analyzed and computed. The normal angle is determined by minimizing the function which fits the statistical variances; the normal distance is calculated by interpolating the function which fits the statistical means. Endpoint coordinates of a detected line segment are determined by fitting a sine curve (rather than searching for the first and last non-zero voting cells, and solving equations containing coordinates of such cells). Experimental results on simulated data and real world images validate the performance of the proposed method for line segment detection.
A statistical method for line segment detection
S1077314215001204
Tagging of visual content is becoming more and more widespread as web-based services and social networks have popularized tagging functionalities among their users. These user-generated tags are used to ease browsing and exploration of media collections, e.g. using tag clouds, or to retrieve multimedia content. However, not all media are equally tagged by users. With current systems it is easy to tag a single photo, and even tagging a part of a photo, like a face, has become common in sites like Flickr and Facebook. On the other hand, tagging a video sequence is more complicated and time consuming, so that users just tag the overall content of a video. In this paper we present a method for automatic video annotation that increases the number of tags originally provided by users, and localizes them temporally, associating tags to keyframes. Our approach exploits collective knowledge embedded in user-generated tags and web sources, and visual similarity of keyframes and images uploaded to social sites like YouTube and Flickr, as well as web sources like Google and Bing. Given a keyframe, our method is able to select “on the fly” from these visual sources the training exemplars that should be the most relevant for this test sample, and proceeds to transfer labels across similar images. Compared to existing video tagging approaches that require training classifiers for each tag, our system has few parameters, is easy to implement and can deal with an open vocabulary scenario. We demonstrate the approach on tag refinement and localization on DUT-WEBV, a large dataset of web videos, and show state-of-the-art results.
A data-driven approach for tag refinement and localization in web videos
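A minimal Python sketch of the basic label-transfer step described above (the feature representation, distance, vote threshold and retrieval of annotated web images are all assumptions; the paper's on-the-fly exemplar selection and temporal localization are not reproduced).

import numpy as np
from collections import Counter
from sklearn.neighbors import NearestNeighbors

def transfer_tags(keyframe_feat, web_feats, web_tags, k=10, min_votes=3):
    """web_feats: (N, D) image descriptors; web_tags: list of tag lists, one per image."""
    nn = NearestNeighbors(n_neighbors=k).fit(web_feats)
    _, idx = nn.kneighbors(keyframe_feat.reshape(1, -1))
    # each of the k nearest annotated images votes for its tags
    votes = Counter(tag for i in idx[0] for tag in web_tags[i])
    return [tag for tag, count in votes.most_common() if count >= min_votes]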
S1077314215001216
The recent successful commercialization of depth sensors has made it possible to effectively capture depth images in real time, and thus creates a new modality for many computer vision tasks including hand gesture recognition and activity analysis. Most existing depth descriptors simply encode depth information as intensities while ignoring the richer 3D shape information. In this paper, we propose a novel and effective descriptor, the Histogram of 3D Facets (H3DF), to explicitly encode the 3D shape information from depth maps. A 3D Facet associated with a 3D cloud point characterizes the local 3D support surface. By robustly coding and circularly pooling 3D Facets from a depth map, the proposed H3DF descriptor can effectively represent both 3D shapes and structures of various depth maps. To address the recognition problems of dynamic actions and gestures, we further extend the proposed H3DF by combining it with an N-gram model and dynamic programming. The proposed descriptor is extensively evaluated on two public 3D static hand gesture datasets, one dynamic hand gesture dataset, and one popular 3D action recognition dataset. The recognition results outperform or are comparable with state-of-the-art performances.
Histogram of 3D Facets: A depth descriptor for human action and hand gesture recognition
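To give a flavour of this kind of depth-shape encoding, here is a simplified Python sketch (not the actual H3DF coding or circular pooling) that estimates a local surface orientation at each depth pixel and pools the normal azimuths into a histogram; the gradient-based normal estimate and bin count are illustrative assumptions.

import numpy as np

def normal_histogram(depth, n_bins=8):
    """depth: (H, W) float depth map; returns a histogram over the azimuth of surface normals."""
    dzdy, dzdx = np.gradient(depth)
    # unnormalised normals of the depth surface z = depth(x, y)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    azimuth = np.arctan2(normals[..., 1], normals[..., 0])          # in (-pi, pi]
    bins = np.clip(((azimuth + np.pi) / (2 * np.pi) * n_bins).astype(int), 0, n_bins - 1)
    hist = np.bincount(bins.ravel(), minlength=n_bins).astype(float)
    return hist / hist.sum()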
S1077314215001228
This paper addresses the structure-and-motion problem, which requires finding camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and embraces instead a hierarchical approach. This method has several advantages, like a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method.
Hierarchical structure-and-motion recovery from uncalibrated images
S107731421500123X
Recent developments in low-cost CMOS cameras have created the opportunity of bringing imaging capabilities to sensor networks and a new field called visual sensor networks (VSNs) has emerged. VSNs consist of image sensors, embedded processors, and wireless transceivers which are powered by batteries. Since energy and bandwidth resources are limited, setting up a tracking system in VSNs is a challenging problem. In this paper, we present a framework for human tracking in VSN environments. The traditional approach of sending compressed images to a central node has certain disadvantages such as decreasing the performance of further processing (i.e., tracking) because of low quality images. Instead, in our decentralized tracking framework, each camera node performs feature extraction and obtains likelihood functions. We propose a sparsity-driven method that can obtain bandwidth-efficient representation of likelihoods extracted by the camera nodes. Our approach involves the design of special overcomplete dictionaries that match the structure of the likelihoods and the transmission of likelihood information in the network through sparse representation in such dictionaries. We have applied our method for indoor and outdoor people tracking scenarios and have shown that it can provide major savings in communication bandwidth without significant degradation in tracking performance. We have compared the tracking results and communication loads with a block-based likelihood compression scheme, a decentralized tracking method and a distributed tracking method. Experimental results show that our sparse representation framework is an effective approach that can be used together with any probabilistic tracker in VSNs.
Sparsity-driven bandwidth-efficient decentralized tracking in visual sensor networks
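The bandwidth-saving idea can be illustrated with a minimal Python sketch (assumed parameters, not the paper's dictionary design): a camera node sparsely codes a 1D likelihood over an overcomplete dictionary and transmits only the few non-zero coefficients and their indices, from which the fusion node reconstructs the likelihood.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def encode_likelihood(likelihood, dictionary, n_nonzero=8):
    """likelihood: (L,) vector; dictionary: (L, K) overcomplete basis with K > L."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(dictionary, likelihood)
    coefs = omp.coef_
    support = np.flatnonzero(coefs)
    return support, coefs[support]              # what the camera node would transmit

def decode_likelihood(support, values, dictionary):
    return dictionary[:, support] @ values      # reconstruction at the fusion node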
S1077314215001241
In general, computational methods to estimate the color of the light source are based on single, low-level image cues such as pixel values and edges. Only a few methods are proposed exploiting multiple cues for color constancy by incorporating pixel values, edge information and higher-order image statistics. However, expanding color constancy beyond these low-level image statistics (pixels, edges and n-jets) to include high-level cues and integrate all these cues together into a unified framework has not been explored. In this paper, the color of the light source is estimated using (low-level) image statistics, (intermediate-level) regions, and (high-level) scene characteristics. A Bayesian framework is proposed combining the different cues in a principled way. Our experiments show that the proposed algorithm outperforms the original Bayesian method. The mean error is reduced by 33.3% with respect to the original Bayesian method and the median error is reduced by 37.1% on the re-processed version of the Gehler color constancy dataset. Our method outperforms most of the state-of-the-art color constancy algorithms in mean angular error and obtains the highest accuracy in terms of median angular error.
Color constancy by combining low-mid-high level image cues
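For reference, two of the classic low-level (pixel- and edge-based) illuminant estimates that such frameworks typically build on can be sketched in a few lines of Python; this is an illustration of the cues, not the proposed Bayesian combination.

import numpy as np

def grey_world(image):
    """image: (H, W, 3) float RGB. Returns a unit-norm illuminant estimate."""
    est = image.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

def grey_edge(image):
    """First-order grey-edge: average gradient magnitude per colour channel."""
    gy, gx = np.gradient(image, axis=(0, 1))
    est = np.sqrt(gx ** 2 + gy ** 2).reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)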
S1077314215001253
An ongoing challenge in the area of image segmentation is in dealing with scenes exhibiting complex textural characteristics. While many approaches have been proposed to tackle this particular challenge, a related topic of interest that has not been fully explored for dealing with this challenge is stochastic texture models, particularly for characterizing textural characteristics within regions of varying sizes and shapes. Therefore, this paper presents a novel method for image segmentation based on the concept of multi-scale stochastic regional texture appearance models. In the proposed method, a multi-scale representation of the image is constructed using an iterative bilateral scale space decomposition. Local texture features are then extracted via image patches and random projections to generate stochastic texture features. A texton dictionary is built from the stochastic features, and used to represent the global texture appearance model. Based on this global texture appearance model, a regional texture appearance model can then be obtained based on the texton occurrence probability given a region within an image. Finally, a stochastic region merging algorithm that allows the computation of complex features is presented to perform image segmentation based on the proposed regional texture appearance model. Experimental results using the BSDS300 segmentation dataset showed that the proposed method achieves a Probabilistic Rand Index (PRI) of 0.83 and an F-measure of 0.77@(0.92, 0.68), and provides improved handling of color and luminance variation, as well as strong segmentation performance for images with highly textured regions when compared to a number of previous methods. These results suggest that the proposed stochastic regional texture appearance model is better suited for handling the texture variations of natural scenes, leading to more accurate segmentations, particularly in situations characterized by complex textural characteristics.
Image segmentation via multi-scale stochastic regional texture appearance models
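A minimal Python sketch of the texton dictionary construction described above, under assumed parameters (patch size, projection dimension, sample count and texton count are illustrative); the bilateral scale space decomposition and the stochastic region merging are not reproduced.

import numpy as np
from sklearn.cluster import KMeans

def build_texton_dictionary(gray_image, patch=7, n_proj=16, n_textons=32,
                            n_samples=5000, seed=0):
    """Sample patches, random-project them, and cluster the projections into textons."""
    rng = np.random.default_rng(seed)
    H, W = gray_image.shape
    ys = rng.integers(0, H - patch, n_samples)
    xs = rng.integers(0, W - patch, n_samples)
    patches = np.stack([gray_image[y:y + patch, x:x + patch].ravel()
                        for y, x in zip(ys, xs)])
    projection = rng.standard_normal((patch * patch, n_proj))   # random projection matrix
    features = patches @ projection                              # stochastic texture features
    kmeans = KMeans(n_clusters=n_textons, n_init=10, random_state=seed).fit(features)
    return kmeans.cluster_centers_, projection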
S1077314215001265
Semantic attributes enable a richer description of scenes than basic category labels. While traditionally scenes have been analyzed using global image features such as Gist, recent studies suggest that humans often describe scenes in ways that are naturally characterized by local image evidence. For example, humans often describe scenes by their functions or affordances, which are largely suggested by the objects in the scene. In this paper, we leverage a large collection of modern object detectors trained at the web scale to derive effective high-level features for scene attribute recognition. We conduct experiments using two modern object detection frameworks: a semi-supervised learner that continuously learns object models from web images, and a state-of-the-art deep network. The detector response features improve the state of the art on the standard scene attribute benchmark by 5% average precision, and also capture intuitive object-scene relationships, such as the positive correlation of castles with “vacationing/touring” scenes.
Improving scene attribute recognition using web-scale object detectors
S1077314215001277
The question of whether a caricature of a 2D sketch, or of an object in 3D, can be generated automatically is probably as old as the attempt to answer the question of what defines art. In an attempt to provide a partial answer, we propose a computational approach for automatic caricaturization. The idea is to rely on intrinsic geometric properties of a given model that are invariant to poses, articulations, and gestures. A property of a surface that is preserved while it undergoes such deformations is self-isometry. In other words, while smiling, running, and posing, we hardly change the intrinsic geometry of our facial surface, the area of our body, or the size of our hands. The proposed method locally amplifies the area of a given surface based on its Gaussian curvature. It is shown to produce a natural comic exaggeration effect which can be efficiently computed as a solution of a Poisson equation. We demonstrate the power of the proposed method by applying it to a variety of meshes such as human faces, statues, and animals. The results demonstrate enhancement and exaggeration of the shape’s features into an artistic caricature. As most poses and postures are almost isometries, the use of the Gaussian curvature as the scaling factor allows the proposed method to handle animated sequences while preserving the identity of the animated creature.
Computational caricaturization of surfaces
S1077314215001289
The point set registration problem confronts the challenge of large degrees of degradation, such as deformation, noise, occlusion and outliers. In this paper, we present a novel robust method for non-rigid point set registration, and it includes four important parts, as follows. First, we used a mixture of asymmetric Gaussians (MoAG) model (Kato et al. (2002) [1]), a new probability model which can capture spatially asymmetric distributions, to represent each point set. Second, based on the MoAG representation of the point sets, we used a soft assignment technique to recover the correspondences, and a correlation-based method to estimate the transformation parameters between two point sets. Point set registration is formulated as an optimization problem. Third, we solved the optimization problem under regularization theory in a feature space, i.e., a Reproducing Kernel Hilbert Space (RKHS). Finally, we chose control points to build a kernel using a low-rank kernel matrix approximation. Thus the computational complexity can be reduced to approximately O(N). Experimental results on 2D, 3D non-rigid point set, and real image registration demonstrate that our method is robust to a large degree of degradation, and it outperforms several state-of-the-art methods in most tested scenarios.
A robust non-rigid point set registration method based on asymmetric Gaussian representation
S1077314215001290
3D metric data of environmental structures is nowadays present in many information sources (maps, GIS) and can be easily acquired with modern depth sensing technology (RGBD, laser). This wealth of information can be readily used for single view calibration of 2D cameras with radial distortion, provided that image structures can be matched with the 3D data. In this paper we present an analysis of the level of accuracy that can be obtained when such calibration is performed with the 2D–3D DLT-Lines algorithm. The analysis propagates uncertainty in the detection of features at the image level to camera pose, and from there to 3D reconstruction. The analytic error propagation expressions are derived using first order uncertainty models, and are validated with Monte Carlo simulations in a virtual indoor environment. The method is general and can be applied to other calibration methods, as long as explicit or implicit expressions can be derived for the transformation from image coordinates to 3D reconstruction. We present results with real data for two applications: i) the 3D reconstruction of an outdoor building for which 3D information is given by a map, observed by a mobile phone camera; and ii) the uncertainty in the localization at the floor plane of points observed by a fixed camera calibrated by a robot equipped with an RGBD camera navigating in a typical indoor environment.
Uncertainty analysis of the DLT-Lines calibration algorithm for cameras with radial distortion
S1077314215001307
Many recent advances in multi-target tracking have drawn attention to latent correspondence relations among observations, e.g. social relationships. To handle long-term occlusion within groups and tracking failure caused by interaction of targets, various correlations among tracklets need to be exploited. In this paper, a paratactic–serial tracklet graph (PSTG) theory is proposed for inter-tracklet analysis in multi-target tracking to avoid tracking failure caused by long-term occlusion within groups or by crossing trajectories. Contrary to recent approaches, a novel PSTG is defined to describe the correlation among all tracklets in the spatio-temporal domain to model the mutual influence among trajectories. The paratactic tracklet graph extends the potential relationships among tracklets which show similar motion patterns in a spatio-temporal neighborhood. The serial tracklet graph enhances the integrity and continuity of trajectories which represent two trajectory fragments of a certain target in different periods. Furthermore, a PSTG-based multi-label optimization algorithm is presented to make the trajectory estimation more accurate. A PSTG energy is minimized by multi-label optimization, including group, integrity and spatio-temporal constraints. Experiments demonstrate the anti-occlusion performance of the proposed approach on several public datasets and actual surveillance sequences, and competitive results are achieved in quantitative evaluation.
PSTG-based multi-label optimization for multi-target tracking
S1077314215001319
The recent advancement of multi-sensor technologies and algorithms has boosted significant progress to human action recognition systems, especially for dealing with realistic scenarios. However, partial occlusion, as a major obstacle in real-world applications, has not received sufficient attention in the action recognition community. In this paper, we extensively investigate how occlusion can be addressed by multi-view fusion. Specifically, we propose a robust representation called local nearest neighbour embedding (LNNE). We then extend the LNNE method to 3 multi-view fusion scenarios. Additionally, we provide detailed analysis of the proposed voting strategy from the boosting point of view. We evaluate our approach on both synthetic and realistic occluded databases, and the LNNE method outperforms the state-of-the-art approaches in all tested scenarios.
Recognising occluded multi-view actions using local nearest neighbour embedding
S1077314215001320
The goal of multiple foreground cosegmentation (MFC) is to extract a finite number of foreground objects from an input image collection, while only an unknown subset of such objects is present in each image. In this paper, we propose a novel unsupervised framework for decomposing MFC into three distinct yet mutually related tasks: image segmentation, segment matching, and figure/ground (F/G) assignment. By our decomposition, image segments sharing similar visual appearances will be identified as foreground objects (or their parts), and these segments will also be separated from background regions. To relate the decomposed outputs for discovering high-level object information, we construct foreground object hypotheses, which allow us to determine the foreground objects in each individual image without any user interaction, the use of pre-trained classifiers, or prior knowledge of the number of foreground objects. In our experiments, we first evaluate our proposed decomposition approach on the iCoseg dataset for single foreground cosegmentation. Empirical results on the FlickrMFC dataset further verify the effectiveness of our approach for MFC problems.
Optimizing the decomposition for multiple foreground cosegmentation
S1077314215001332
Object detection using shape is interesting since it is well known that humans can recognize an object simply from its shape. Thus, shape-based methods have great promise for handling a large amount of shape variation using a compact representation. In this paper, we present a new algorithm for object detection that uses a single reasonably good sketch as a reference to build a model for the object. The method hierarchically segments a given sketch into parts using an automatic algorithm and estimates a different affine transformation for each part while matching. A Hough-style voting scheme collects evidence for the object from the leaves to the root in the part decomposition tree for robust detection. Missing edge segments, clutter and generic object deformations are handled by flexibly following the contour paths in the edge image that resemble the model contours. Efficient data structures and a two-stage matching approach assist in yielding an efficient and robust system. Experiments on ETHZ and several other popular image datasets yield promising results compared to the state of the art. A new dataset of real-life hand-drawn sketches for all the object categories in the ETHZ dataset is also used for evaluation.
Part-based deformable object detection with a single sketch
S1077314215001344
In this paper we propose a novel method for detecting and tracking facial landmark features on 3D static and 3D dynamic (a.k.a. 4D) range data. Our proposed method involves fitting a shape index-based statistical shape model (SI-SSM) with both global and local constraints to the input range data. Our proposed model makes use of the global shape of the facial data as well as local patches, consisting of shape index values, around landmark features. The shape index is used due to its invariance to both lighting and pose changes. The fitting is performed by finding the correlation between the shape model and the input range data. The performance of our proposed method is evaluated in terms of various geometric data qualities, including data with noise, incompletion, occlusion, rotation, and various facial motions. The accuracy of detected features is compared to the ground truth data as well as to state-of-the-art results. We test our method on five publicly available 3D/4D databases: BU-3DFE, BU-4DFE, BP4D-Spontaneous, FRGC 2.0, and the Eurecom Kinect Face Dataset. The efficacy of the detected landmarks is validated through applications to geometry-based facial expression classification for both posed and spontaneous expressions, and head pose estimation. The merit of our method is demonstrated by comparison with state-of-the-art feature tracking methods.
Landmark localization on 3D/4D range data using a shape index-based statistical shape model with global and local constraints
S1077314215001356
Most image encodings achieve orientation invariance by aligning the patches to their dominant orientations and translation invariance by completely ignoring patch position or by max-pooling. Albeit successful, such choices introduce too much invariance because they do not guarantee that the patches are rotated or translated consistently. In this paper, we propose a geometric-aware aggregation strategy, which jointly encodes the local descriptors together with their patch dominant angle or location. The geometric attributes are encoded in a continuous manner by leveraging explicit feature maps. Our technique is compatible with generic match kernel formulation and can be employed along with several popular encoding methods, in particular Bag-of-Words, VLAD and the Fisher vector. The method is further combined with an efficient monomial embedding to provide a codebook-free method aggregating local descriptors into a single vector representation. Invariance is achieved by efficient similarity estimation of multiple rotations or translations, offered by a simple trigonometric polynomial. This strategy is effective for image search, as shown by experiments performed on standard benchmarks for image and particular object retrieval, namely Holidays and Oxford buildings.
Rotation and translation covariant match kernels for image retrieval
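A minimal Python sketch of the kind of construction described above: the patch dominant angle is mapped through a trigonometric explicit feature map and coupled with the local descriptor by a Kronecker product, so that inner products factor into a descriptor similarity times an angle-difference kernel. The number of frequencies and the uniform weighting are illustrative assumptions, not the paper's exact embedding.

import numpy as np

def angle_feature_map(theta, n_freq=3):
    """Map an angle to a vector such that <phi(a), phi(b)> = 1/2 + sum_k cos(k (a - b))."""
    comps = [np.array([np.sqrt(0.5)])]
    for k in range(1, n_freq + 1):
        comps.append(np.array([np.cos(k * theta), np.sin(k * theta)]))
    return np.concatenate(comps)

def encode(descriptor, theta, n_freq=3):
    # The Kronecker product couples appearance and geometry, so the match kernel
    # between two encoded features factorises into descriptor similarity times
    # a shift-invariant kernel on the angle difference.
    return np.kron(angle_feature_map(theta, n_freq), descriptor)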
S1077314215001368
Processing 3D point cloud data is of primary interest in many areas of computer vision, including object grasping, robot navigation, and object recognition. The introduction of affordable RGB-D sensors has created a great interest in the computer vision community towards developing efficient algorithms for point cloud processing. Previously, capturing a point cloud required expensive specialized sensors such as lasers or dedicated range imaging devices; now, range data is readily available from low-cost sensors that provide easily extractable point clouds from a depth map. From here, an interesting challenge is to find different objects in the point cloud. Various descriptors have been introduced to match features in a point cloud. Cheap sensors are not necessarily designed to produce precise measurements, which means that the data is not as accurate as a point cloud provided from a laser or a dedicated range finder. Although some feature descriptors have been shown to be successful in recognizing objects from point clouds, there still exists opportunities for improvement. The aim of this paper is to introduce techniques from other fields, such as image processing, into 3D point cloud processing in order to improve rendering, classification, and recognition. Covariances have proven to be a success not only in image processing, but in other domains as well. This work develops the application of covariances in conjunction with 3D point cloud data.
Covariance based point cloud descriptors for object detection and recognition
S107731421500137X
Would it be possible to automatically associate ancient pictures to modern ones and create fancy cultural heritage city maps? We introduce here the task of recognizing the location depicted in an old photo given modern annotated images collected from the Internet. We present an extensive analysis on different features, looking for the most discriminative and most robust to the image variability induced by large time lags. Moreover, we show that the described task benefits from domain adaptation.
Location recognition over large time lags
S1077314215001381
This paper presents a smart surveillance system named CASSANDRA, aimed at detecting instances of aggressive human behavior in public environments. A distinguishing aspect of CASSANDRA is the exploitation of complementary audio and video cues to disambiguate scene activity in real-life environments. From the video side, the system uses overlapping cameras to track persons in 3D and to extract features regarding the limb motion relative to the torso. From the audio side, it classifies instances of speech, screaming, singing, and kicking-object. The audio and video cues are fused with contextual cues (interaction, auxiliary objects); a Dynamic Bayesian Network (DBN) produces an estimate of the ambient aggression level. Our prototype system is validated on a realistic set of scenarios performed by professional actors at an actual train station to ensure a realistic audio and video noise setting.
Multi-modal human aggression detection
S1077314215001393
Spatially variant contrast/offset deviations that preserve image appearance are frequent in practice and hinder classification based on signal co-occurrence statistics. Contrast/offset-invariant descriptors of ordinal signal relations, such as local binary or ternary patterns (LBP/LTP), are popular means to overcome this drawback. This paper extends conventional LBP/LTP-based classifiers towards learning, rather than prescribing, the most characteristic shapes, sizes, and numbers of these patterns for semi-supervised texture classification and retrieval. The goal is to discriminate a particular texture represented by a single training or query sample from other types of textures. The proposed learning framework models images as samples from a high-order ordinal Markov–Gibbs random field (MGRF). Approximate analytical estimates of the model parameters guide the selection of characteristic patterns of a given order, the higher order patterns being learned on the basis of the already found lower order ones. Comparative experiments on four texture databases confirmed that classifiers with the learned multiple LTPs from the 3rd to 8th order consistently outperform more conventional ones with the prescribed 9th-order fixed-shape LBP/LTPs or a few other filters.
Learnable high-order MGRF models for contrast-invariant texture recognition
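For reference, the sketch below computes the conventional fixed-shape 3x3 LBP codes that the learned higher-order patterns generalize; the function name and the simple border handling are illustrative choices, not part of the paper.

```python
import numpy as np

def lbp_codes(img):
    """Conventional 3x3 local binary pattern codes (the fixed-shape
    baseline generalized by the learned higher-order patterns).

    img: 2-D grayscale array. Returns 8-bit LBP codes for the interior
    pixels (borders are ignored for simplicity).
    """
    c = img[1:-1, 1:-1]                      # center pixels
    # 8 neighbors in a fixed circular order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy: img.shape[0] - 1 + dy,
                       1 + dx: img.shape[1] - 1 + dx]
        codes |= (neighbor >= c).astype(int) << bit
    return codes
```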
S107731421500140X
This paper proposes methods for people re-identification across non-overlapping cameras. We improve the robustness of re-identification by using additional group features acquired from the groups of people detected by each camera. People are grouped by discriminatively classifying the spatio-temporal features of their trajectories into those of grouped people and non-grouped people. Thereafter, three group features are obtained in each group and utilized with other general features of each person (e.g., color histogram, transit time between cameras, etc.) for people re-identification. Our experimental results have demonstrated improvements in people grouping and people re-identification when our proposed methods have been applied to a public dataset.
People re-identification across non-overlapping cameras using group features
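As a loose illustration of combining appearance with transit-time cues for re-identification, the following sketch fuses a histogram-intersection appearance score with a Gaussian transit-time term; the weights, the Gaussian time model, and the function names are hypothetical and do not reproduce the paper's group features.

```python
import numpy as np

def color_hist(img, bins=8):
    """Simple per-channel color histogram, L1-normalized."""
    hist = [np.histogram(img[..., ch], bins=bins, range=(0, 256))[0]
            for ch in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def reid_score(hist_a, hist_b, transit_time, expected_time, sigma=5.0):
    """Combine appearance similarity (histogram intersection) with a
    transit-time consistency term; the 0.7/0.3 weights and the Gaussian
    time model are illustrative choices, not the paper's."""
    appearance = np.minimum(hist_a, hist_b).sum()
    timing = np.exp(-0.5 * ((transit_time - expected_time) / sigma) ** 2)
    return 0.7 * appearance + 0.3 * timing
```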
S1077314215001551
The efficient search and retrieval of the increasing volume of stereo videos drives the need for semantic description of their content. The analysis and description of the disparity (depth) data available in such videos offers extra information, either for developing better video content search algorithms or for improving the 3D viewing experience. Taking the above into account, the purpose of this paper is twofold. First, to provide a mathematical analysis of the relation of object motion between world and display space and of how disparity changes affect the 3D viewing experience. Second, to propose algorithms for semantically characterizing the motion of an object or object ensembles along any of the X, Y, Z axes. Experimental results of the proposed algorithms for semantic motion description in stereo video content are given.
Object motion analysis description in stereo video content
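The standard pinhole-stereo relation Z = fB/d underlies any mapping between disparity changes and object motion along the Z axis; the snippet below applies it with illustrative focal-length and baseline values and is not the paper's full world-to-display-space analysis.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard pinhole-stereo relation Z = f * B / d: depth grows as
    disparity shrinks, so motion along Z shows up as a disparity change."""
    return focal_px * baseline_m / disparity_px

# An object whose disparity drops from 40 px to 20 px has moved away,
# doubling its depth (here from 2.5 m to 5 m with f = 1000 px, B = 0.1 m).
z1 = depth_from_disparity(40.0, 1000.0, 0.1)
z2 = depth_from_disparity(20.0, 1000.0, 0.1)
print(z1, z2)  # 2.5 5.0
```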
S1077314215001563
A probabilistic real-time tracking algorithm is proposed in which the target’s feature distribution is represented by a Gaussian mixture model (GMM). Target localization is achieved by maximizing its weighted likelihood in the image sequence. The role of the weight in the likelihood definition is important, as it allows gradient-based optimization to be performed, which would not be feasible in the context of standard likelihood representations. Moreover, the algorithm handles scale and rotation changes of the target, as well as appearance changes, which modify the components of the GMM. The real-time performance is experimentally confirmed, while the algorithm's performance is comparable to that of other state-of-the-art tracking algorithms.
Visual tracking using spatially weighted likelihood of Gaussian mixtures
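A generic sketch of evaluating a spatially weighted GMM log-likelihood over a candidate target region is given below; the choice of spatial kernel and the gradient-based optimization are the paper's contributions and are not reproduced here, and the function name is an assumption.

```python
import numpy as np
from scipy.stats import multivariate_normal

def weighted_gmm_loglik(features, weights, means, covs, mix):
    """Spatially weighted log-likelihood of pixel features under a GMM.

    features: (N, d) per-pixel feature vectors inside a candidate region
    weights:  (N,) spatial weights (e.g. a kernel favoring the center)
    means, covs, mix: GMM parameters for K components
    """
    pdf = np.zeros(len(features))
    for k in range(len(mix)):
        pdf += mix[k] * multivariate_normal.pdf(features, means[k], covs[k])
    return np.sum(weights * np.log(pdf + 1e-12))
```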
S1077314215001575
Motion segmentation refers to the task of segmenting moving objects according to their motion in order to distinguish and track them in a video. This is a challenging task in situations where different objects share similar movement patterns, or in cases where one object is occluded by others in part of the scene. In such cases, unsupervised motion segmentation fails and additional information is needed to boost performance. Based on a formulation of the clustering task as an optimization problem over a multi-labeled Markov Random Field, we develop a semi-supervised motion segmentation algorithm by setting up a framework for incorporating prior knowledge into the segmentation algorithm. Prior knowledge is given in the form of manually labelled trajectories belonging to the various objects in one or more frames of the video. Clearly, one wishes to limit the amount of manual labelling in order for the algorithm to be as autonomous as possible. Towards that end, we propose a particle matching procedure that extends the prior knowledge by automatically matching particles in frames over which fast motion or occlusion occurs. The performance of the proposed method is studied through a variety of experiments on videos involving fast and complicated motion, occlusion and re-appearance, and low-quality film. The qualitative and quantitative results confirm reliable performance on the types of applications our method is designed for.
Weakly supervised motion segmentation with particle matching
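As a rough stand-in for the particle matching step, the sketch below propagates manual trajectory labels between frames by plain nearest-neighbour matching of particle positions; the distance threshold and function names are assumptions, and the paper's actual matching criterion is more elaborate.

```python
import numpy as np

def propagate_labels(particles_a, labels_a, particles_b, max_dist=20.0):
    """Propagate manual labels from frame A to frame B by matching each
    particle in B to its nearest labelled particle in A (a plain
    nearest-neighbour stand-in for the paper's matching procedure).

    particles_*: (N, 2) arrays of particle positions; labels_a: (N,) ints.
    Returns labels for frame B, with -1 where no match is close enough.
    """
    labels_b = np.full(len(particles_b), -1, dtype=int)
    for i, p in enumerate(particles_b):
        d = np.linalg.norm(particles_a - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            labels_b[i] = labels_a[j]
    return labels_b
```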
S1077314215001587
We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU–3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images, along with their 3D face scans, are localized using a novel algorithm, namely the incremental Parallel Cascade of Linear Regression (iPar–CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor, in conjunction with the widely used first-order gradient based SIFT descriptor, is used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using first-order and second-order surface differential geometry quantities, i.e., the Histogram of mesh Gradients (meshHOG) and the Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature level and score level to further improve the accuracy. Comprehensive experimental results demonstrate that impressive complementary characteristics exist between the 2D and 3D descriptors. We use the BU–3DFE benchmark to compare our approach to state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.
An efficient multimodal 2D + 3D feature-based approach to automatic facial expression recognition
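Score-level fusion of per-descriptor classifier outputs can be as simple as a weighted sum of class scores, as in the sketch below; the equal default weights and the function name are illustrative assumptions rather than the paper's tuned fusion.

```python
import numpy as np

def score_level_fusion(score_matrices, weights=None):
    """Fuse per-descriptor classifier scores by a weighted sum.

    score_matrices: list of (n_samples, n_classes) score arrays, one per
    descriptor (e.g. SIFT, HSOG, meshHOG, meshHOS); weights would normally
    be tuned on a validation set.
    Returns fused scores and the predicted class per sample.
    """
    if weights is None:
        weights = np.ones(len(score_matrices)) / len(score_matrices)
    fused = sum(w * s for w, s in zip(weights, score_matrices))
    return fused, fused.argmax(axis=1)
```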
S1077314215001599
We propose a new graphical model, called a Sequential Interval Network (SIN), for parsing complex, structured activities whose composition can be represented by a stochastic grammar. By exploiting the grammar, the generated network captures an activity’s global temporal structure while avoiding a time-sliced model. In this network, the hidden variables are the start and end times of the component actions, which allows reasoning about duration and observations at the interval/segment level. Exact inference can be achieved and yields the posterior probabilities of the timing variables as well as each frame’s component label. Importantly, by using the uninformative expected value of future observations, the network can predict the probability distribution of the timing of future component actions. We demonstrate this framework on vision tasks such as recognition and temporal segmentation of action sequences, as well as online parsing and future prediction in streaming mode while observing an assembly task.
Sequential Interval Network for parsing complex structured activity
S1077314215001605
Automatic perception of facial expressions under scaling differences, pose variations and occlusions would greatly enhance natural human-robot interaction. This research proposes unsupervised automatic facial point detection integrated with regression-based intensity estimation for facial action units (AUs) and emotion clustering to deal with such challenges. The proposed facial point detector is able to detect 54 facial points in images of faces with occlusions, pose variations and scaling differences using Gabor filtering, BRISK (Binary Robust Invariant Scalable Keypoints), an Iterative Closest Point (ICP) algorithm and fuzzy c-means (FCM) clustering. In particular, in order to deal effectively with images with occlusions, ICP is first applied to generate neutral landmarks for the occluded facial elements. Then FCM is used to further reason about the shape of the occluded facial region by taking the prior knowledge of the non-occluded facial elements into account. Post landmark correlation processing is subsequently applied to derive the best-fitting geometry for the occluded facial element, further adjust the neutral landmarks generated by ICP, and reconstruct the occluded facial region. We then conduct AU intensity estimation using support vector regression and neural networks, respectively, for 18 selected AUs. FCM is subsequently employed to recognize seven basic emotions as well as neutral expressions. It also shows great potential for detecting compound emotions and newly arriving novel emotion classes. The overall system is integrated with a humanoid robot and enables it to deal with challenging real-life facial emotion recognition tasks.
Adaptive facial point detection and emotion recognition for a humanoid robot
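Since fuzzy c-means appears both in the landmark reasoning and in the emotion clustering, a minimal from-scratch FCM implementation is sketched below; the Euclidean distance, the fuzzifier m = 2, and the fixed iteration count are generic assumptions rather than the paper's settings.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, eps=1e-9, seed=0):
    """Minimal fuzzy c-means clustering with Euclidean distance.

    X: (N, d) data; c: number of clusters; m: fuzzifier.
    Returns cluster centers (c, d) and the membership matrix (N, c).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # standard FCM update
    return centers, U
```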
S1077314215001617
Hand detection has many important applications in Human-Computer Interaction, yet it is a challenging problem because the appearance of hands can vary greatly in images. In this paper, we present a new approach that exploits the inherent contextual information from structured hand labelling for pixel-level hand detection and hand part labelling. By using a random forest framework, our method can predict hand masks and hand part labels in an efficient and robust manner. Through experiments, we demonstrate that our method outperforms other state-of-the-art pixel-level detection methods in ego-centric videos, and is further able to parse hand parts in detail.
Structured forests for pixel-level hand detection and hand part labelling
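A plain per-pixel random forest baseline (not the structured forest of the paper) can be sketched as follows; the raw RGB pixel features, the forest hyperparameters, and the function names are purely illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_pixel_forest(images, masks):
    """Train a per-pixel hand/background classifier.

    images: list of (H, W, 3) uint8 arrays; masks: list of (H, W) arrays
    of 0/1 labels. Features are just RGB values, for illustration only.
    """
    X = np.concatenate([img.reshape(-1, 3) for img in images])
    y = np.concatenate([m.reshape(-1) for m in masks])
    clf = RandomForestClassifier(n_estimators=50, max_depth=12, n_jobs=-1)
    clf.fit(X, y)
    return clf

def predict_hand_mask(clf, image):
    """Predict a dense hand mask for a new image."""
    labels = clf.predict(image.reshape(-1, 3))
    return labels.reshape(image.shape[:2])
```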
S1077314215001629
A more natural, intuitive, user-friendly, and less intrusive Human–Computer interface for controlling an application by executing hand gestures is presented. For this purpose, a robust vision-based hand-gesture recognition system has been developed, and a new database has been created to test it. The system is divided into three stages: detection, tracking, and recognition. The detection stage searches every frame of a video sequence for potential hand poses using a binary Support Vector Machine classifier and Local Binary Patterns as feature vectors. These detections are employed as input to a tracker to generate a spatio-temporal trajectory of hand poses. Finally, the recognition stage segments a spatio-temporal volume of data using the obtained trajectories and computes a video descriptor called Volumetric Spatiograms of Local Binary Patterns (VS-LBP), which is delivered to a bank of SVM classifiers to perform the gesture recognition. The VS-LBP is a novel video descriptor that constitutes one of the most important contributions of the paper; it provides much richer spatio-temporal information than other existing approaches in the state of the art at a manageable computational cost. Excellent results have been obtained, outperforming other state-of-the-art approaches.
Human–computer interaction based on visual hand-gesture recognition using volumetric spatiograms of local binary patterns
S1077314215001630
Automatic recognition of fingerspelling postures in a live environment is a challenging task, primarily due to the complex computation of popular moment-based and spectral descriptors. The shape matrix offers a time-efficient alternative that samples the shape region through the intersection points of adjacent log-polar sections. However, sparse sampling of the region by discrete log-polar intersection points cannot capture the salience of the shape. This manuscript proposes modified forms of the shape matrix which capture the salience of fingerspelling postures through precise sampling of contours and regions. For effective segmentation and subsequent description, hand postures are acquired through a depth sensor. The proposed shape matrix variants are evaluated for fingerspelling recognition with one-handed and two-handed postures. Experiments are rigorously performed on three datasets including one-handed signs of American Sign Language (ASL), NTU hand digits, and both one-handed and two-handed signs of Indian Sign Language (ISL). The proposed shape matrix variants outperform the benchmark shape context and Gabor features by obtaining 94.15% accuracy on the ISL dataset with a minimum mean running time of 0.029 s. On the ASL and NTU datasets, 91.86% and 95.11% accuracies are obtained with 0.0172 and 0.0483 s mean running times, respectively.
A framework for live and cross platform fingerspelling recognition using modified shape matrix variants on depth silhouettes
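The conventional shape matrix that the proposed variants refine can be sketched as follows: a binary silhouette is sampled at the intersection points of a log-polar grid centred on its centroid. The number of rings and angles, and the particular radius spacing, are arbitrary illustrative values, not the paper's configuration.

```python
import numpy as np

def shape_matrix(mask, n_rings=8, n_angles=16):
    """Conventional shape matrix: sample a binary silhouette on a
    log-polar grid centred on its centroid (the proposed variants refine
    this sampling and are not shown here).

    mask: 2-D boolean array (hand silhouette). Returns (n_rings, n_angles).
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r_max = np.max(np.hypot(ys - cy, xs - cx))
    # log-spaced radii up to the farthest silhouette point, even angles
    radii = r_max * (np.logspace(0, 1, n_rings + 1, base=10)[1:] / 10.0)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    M = np.zeros((n_rings, n_angles), dtype=np.uint8)
    for i, r in enumerate(radii):
        for j, a in enumerate(angles):
            y = int(round(cy + r * np.sin(a)))
            x = int(round(cx + r * np.cos(a)))
            if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]:
                M[i, j] = mask[y, x]
    return M
```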
S1077314215001642
The recent literature on visual recognition and image classification has focused mainly on Deep Convolutional Neural Networks (Deep CNN) [A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, in: Advances in neural information processing systems, 2012, pp. 1097–1105.] and their variants, which has resulted in a significant improvement in the performance of these algorithms. Building on these recent advances, this paper proposes to explicitly add translation and scale invariance to Deep CNN-based local representations by introducing a new algorithm for image recognition that models image categories as collections of automatically discovered distinctive parts. These parts are matched across images while their visual model is learned, and are finally pooled to provide image signatures. The appearance model of the parts is learnt from the training images to allow the distinction between the categories to be recognized. A key ingredient of the approach is a softassign-like matching algorithm that simultaneously learns the model of each part and automatically assigns image regions to the model’s parts. Once the model of the category is trained, it can be used to classify new images by finding image regions similar to the learned parts and encoding them in a single compact signature. The experimental validation shows that the performance of the proposed approach is better than that of the latest Deep Convolutional Neural Network approaches, hence providing state-of-the-art results on several publicly available datasets.
Discriminative part model for visual recognition
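A generic softassign-style matching step, which exponentiates a region-to-part similarity matrix and alternately normalizes rows and columns, is sketched below; the temperature, iteration count, and function name are illustrative assumptions and do not reproduce the paper's joint learning of part models.

```python
import numpy as np

def softassign(similarity, temperature=0.1, n_iter=50):
    """Soft assignment of image regions to model parts: exponentiate the
    similarity matrix and alternately normalize rows and columns
    (Sinkhorn-style), a generic stand-in for the paper's matching step.

    similarity: (n_regions, n_parts) array. Returns an approximately
    doubly-normalized assignment matrix.
    """
    A = np.exp(similarity / temperature)
    for _ in range(n_iter):
        A /= A.sum(axis=1, keepdims=True)  # each region distributes over parts
        A /= A.sum(axis=0, keepdims=True)  # each part distributes over regions
    return A
```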
S1077314215001794
Successful efforts in hand gesture recognition research within the last two decades have paved the way for natural human–computer interaction systems. Unresolved challenges, such as reliable identification of the gesturing phase, sensitivity to size, shape, and speed variations, and issues due to occlusion, keep hand gesture recognition research very active. We provide a review of vision-based hand gesture recognition algorithms reported in the last 16 years. Methods using RGB and RGB-D cameras are reviewed with quantitative and qualitative comparisons of algorithms. The quantitative comparison is done using a set of 13 measures chosen from different attributes of the algorithm and the experimental methodology adopted in algorithm evaluation. We point out the need to consider these measures together with the recognition accuracy of the algorithm to predict its success in real-world applications. The paper also reviews 26 publicly available hand gesture databases and provides the web links for their download.
Recent methods and databases in vision-based hand gesture recognition: A review