FileName | Abstract | Title |
---|---|---|
S1077314213002439 | Example-based approaches have been very successful for human motion analysis, but their accuracy strongly depends on the similarity of the viewpoint in testing and training images. In practice, roof-top cameras are widely used for video surveillance and are usually placed at a significant angle from the floor, which differs from typical training viewpoints. We present a methodology for view-invariant monocular human motion analysis in man-made environments, in which we exploit properties of projective geometry and the presence of numerous easy-to-detect straight lines. We also assume that observed people move on a known ground plane. First, we model body poses and silhouettes using a reduced set of training views. Then, during the online stage, the homography that relates the selected training plane to the input image points is calculated using the dominant 3D directions of the scene, the location on the ground plane and the camera view in both training and testing images. This homographic transformation is used to compensate for the changes in silhouette due to the novel viewpoint. In our experiments, we show that it can be employed in a bottom-up manner to align the input image to the training plane and process it with the corresponding view-based silhouette model, or top-down to project a candidate silhouette and match it in the image. We present qualitative and quantitative results on the CAVIAR dataset using both bottom-up and top-down frameworks and demonstrate the significant improvements of the proposed homographic alignment over a commonly used similarity transform. | Exploiting projective geometry for view-invariant monocular human motion analysis in man-made environments |
S1077314214000083 | This article presents a novel method for estimating the dense three-dimensional motion of a scene from multiple cameras. Our method employs an interconnected patch model of the scene surfaces. The interconnected nature of the model means that we can incorporate prior knowledge about neighbouring scene motions through the use of a Markov Random Field, whilst the patch-based nature of the model allows the use of efficient techniques for estimating the local motion at each patch. An important aspect of our work is that the method takes account of the fact that local surface texture strongly dictates the accuracy of the motion that can be estimated at each patch. Even with simple squared-error cost functions, it produces results that are either equivalent to or better than results from a method based upon a state-of-the-art optical flow technique, which uses well-developed robust cost functions and energy minimisation techniques. | Estimating scene flow using an interconnected patch surface model with belief-propagation inference |
S1077314214000095 | Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude higher computational and storage resources. One way to alleviate this difficulty is to focus the computations on informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal self-similarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, and HOF), dictionary learning helps consider the saliency in a global setting (on the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence which can be used in a classification setting. Experiments on several benchmark datasets in video-based action classification demonstrate that our approach performs competitively with the state of the art. | Action recognition using global spatio-temporal features derived from sparse representations |
S1077314214000101 | In this paper, a novel methodology is presented aiming at the automatic identification of the writer of ancient inscriptions and Byzantine codices. This identification can offer unambiguous dating of these ancient manuscripts. The introduced methodology is also applicable to contours of complexes of letters or any class of similar curves. The method presented here initially estimates the normalized curvature at each pixel of a letter contour. Subsequently, it performs pair-wise comparisons of the curvature sequences that correspond to two realizations of the same alphabet symbol. Then, it introduces a new Proposition that, on the basis of the previous results, offers a closed-form solution to the problem of matching two equinumerous digital contours in the Least Squares sense. Next, a criterion is employed quantifying the similarity of two realizations of the same alphabet symbol. Finally, a number of statistical criteria are introduced for the automatic identification of the writer of ancient manuscripts. The introduced method employs neither a reference manuscript, nor the number of distinct hands that wrote the considered set of manuscripts, nor any related information whatsoever; it also performs quite efficiently even if a small number of realizations (fewer than 6) of certain alphabet symbols appear in a tested document. The only a priori knowledge is the alphabet of the language under consideration. We would like to stress that the method does not otherwise depend on the language itself; namely, it does not take into account whether the alphabet is Latin, Greek, Etruscan, etc. The methodology and the related information system developed have been applied to 46 ancient inscriptions of the Classical and Hellenistic era and 23 Byzantine codices, offering 100% accurate results, in the sense that the obtained results are in full agreement with prominent scholars in the fields of Archaeology, History and Classical Studies. | Identifying the writer of ancient inscriptions and Byzantine codices. A novel approach |
S1077314214000113 | Most background modeling techniques use a single learning rate of adaptation, which is inadequate for real scenes because the background model cannot then effectively deal with both slow and sudden illumination changes. This paper presents an algorithm based on a self-adaptive Gaussian mixture to model the background of a scene imaged by a static video camera. Such background modeling is used in conjunction with foreground detection to find objects of interest that do not belong to the background. The model uses a dynamic learning rate with adaptation to global illumination to cope with sudden variations of scene illumination. The algorithm performance is benchmarked using the video sequences created for the Background Models Challenge (BMC) [1]. Experimental results are compared with the performance of other algorithms benchmarked with the BMC dataset, and demonstrate comparable detection rates. (A minimal illustrative sketch of such an adaptive-learning-rate mixture update appears after this table.) | A self-adaptive Gaussian mixture model |
S1077314214000125 | In this paper we present ideas from computational topology that are applicable to the analysis of point cloud data. In particular, the point cloud can represent a feature space of a collection of objects such as images or text documents. Computing persistent homology reveals the global structure of similarities between the data. Furthermore, we argue that it is essential to incorporate higher-degree relationships between objects. Finally, we show that new computational topology algorithms exhibit much better practical performance than standard techniques. | Towards topological analysis of high-dimensional feature spaces |
S1077314214000137 | Face reconstruction from images has been a core topic over the last decades, and is now involved in many applications such as identity verification or human–computer interaction. The 3D Morphable Model introduced by Blanz and Vetter has been widely used to this end, because its specific 3D modeling offers robustness to pose variation and adaptability to the specificities of each face. To overcome the limitations of methods using a single image, and since video has become more and more affordable, we propose a new method which exploits video sequences to consolidate the 3D head shape estimation using successive frames. Based on particle filtering, our algorithm updates the model estimation at each instant and is robust to noisy observations. A comparison with the Levenberg–Marquardt global optimization approach on various sets of data shows visual improvements in both pose and shape estimation. Biometric performance confirms this trend with a mean reduction of 10% in the False Rejection Rate. | Recursive head reconstruction from multi-view video sequences |
S1077314214000149 | It is convenient to calibrate time-of-flight cameras by established methods, using images of a chequerboard pattern. The low resolution of the amplitude image, however, makes it difficult to detect the board reliably. Heuristic detection methods, based on connected image-components, perform very poorly on this data. An alternative, geometrically-principled method is introduced here, based on the Hough transform. The projection of a chequerboard is represented by two pencils of lines, which are identified as oriented clusters in the gradient-data of the image. A projective Hough transform is applied to each of the two clusters, in axis-aligned coordinates. The range of each transform is properly bounded, because the corresponding gradient vectors are approximately parallel. Each of the two transforms contains a series of collinear peaks; one for every line in the given pencil. This pattern is easily detected, by sweeping a dual line through the transform. The proposed Hough-based method is compared to the standard OpenCV detection routine, by application to several hundred time-of-flight images. It is shown that the new method detects significantly more calibration boards, over a greater variety of poses, without any overall loss of accuracy. This conclusion is based on an analysis of both geometric and photometric error. | Automatic detection of calibration grids in time-of-flight images |
S1077314214000150 | Dealing with high-dimensional data has always been a major problem in pattern recognition and machine learning research. Among dimensionality reduction techniques, Linear Discriminant Analysis (LDA) is one of the most popular methods and has been widely used in many classification applications. But LDA can only utilize labeled samples while neglecting unlabeled samples, which are abundant and easily obtained in the real world. In this paper, we propose a new dimensionality reduction method that uses unlabeled samples to enhance the performance of LDA. The new method first propagates label information from the labeled set to the unlabeled set via a label propagation process, whereby the predicted labels of unlabeled samples, called soft labels, are obtained. It then incorporates the soft labels into the construction of scatter matrices to find a transformation matrix for dimensionality reduction. In this way, the proposed method preserves more discriminative information, which is preferable when solving the classification problem. Extensive simulations are conducted on several datasets and the results show the effectiveness of the proposed method. | Soft label based Linear Discriminant Analysis for image recognition and retrieval |
S1077314214000228 | This paper presents a neuroscience inspired information theoretic approach to motion segmentation. Robust motion segmentation represents a fundamental first stage in many surveillance tasks. As an alternative to widely adopted individual segmentation approaches, which are challenged in different ways by imagery exhibiting a wide range of environmental variation and irrelevant motion, this paper presents a new biologically-inspired approach which computes the multivariate mutual information between multiple complementary motion segmentation outputs. Performance evaluation across a range of datasets and against competing segmentation methods demonstrates robust performance. | Biologically-inspired robust motion segmentation using mutual information |
S107731421400023X | Recently, new high-level features have been proposed to describe the semantic content of images. These features, which we call supervised, are obtained by exploiting the information provided by an additional set of labeled images. Supervised features have been successfully used in the context of image classification and retrieval, where they showed excellent results. In this paper, we demonstrate that they can also be used effectively for unsupervised image categorization, that is, for grouping semantically similar images. We have experimented with different state-of-the-art clustering algorithms on various standard data sets commonly used for supervised image classification evaluations. We have compared the results obtained by using four supervised features (namely, classemes, prosemantic features, object bank, and a feature obtained from a Canonical Correlation Analysis) against those obtained by using low-level features. The results show that supervised features exhibit a remarkable expressiveness which allows images to be effectively grouped into the categories defined by the data sets’ authors. | On the use of supervised features for unsupervised image categorization: An evaluation |
S1077314214000241 | Spectral hashing (SpH) is an efficient and simple binary hashing method, which assumes that data are sampled from a multidimensional uniform distribution. However, this assumption is too restrictive in practice. In this paper we propose an improved method, fitted spectral hashing (FSpH), to relax this distribution assumption. Our work is based on the fact that one-dimensional data of any distribution can be mapped to a uniform distribution without changing the local neighbor relations among data items. We have found that this mapping on each PCA direction follows a regular pattern and can be fitted well by an S-curve (sigmoid) function. With more parameters, a Fourier function also fits the data well. Thus, with the sigmoid and Fourier functions, we propose two binary hashing methods: SFSpH and FFSpH. Experiments show that our methods are efficient and outperform state-of-the-art methods. (A minimal sketch of the sigmoid-fitting idea appears after this table.) | FSpH: Fitted spectral hashing for efficient similarity search |
S1077314214000253 | This paper presents a general formulation, named ProJective Matrix Factorization with unified embedding (PJMF), by which social image retagging is transformed into the nearest tag-neighbor search for each image. We solve the proposed PJMF as an optimization problem, mainly considering the following issues. First, we attempt to find two latent representations in a unified space, for images and tags respectively, and explore the two representations to reconstruct the observed image-tag correlation in a nonlinear manner. In this case, the relevance between an image and a tag can be directly modeled as the pair-wise similarity in the unified space. Second, the image latent representation is assumed to be projected from its original visual feature representation with an orthogonal transformation matrix. The projection makes it convenient to embed any image, including out-of-sample ones, into the unified space, and naturally the image retagging problem can be solved by the nearest tag-neighbor search for those images in the unified space. Third, local geometry preservation of the image space and the tag space is enforced as a constraint in order to make image similarity (and tag relevance) consistent between the original space and the corresponding latent space. Experimental results on two publicly available benchmarks validate the encouraging performance of our work over the state of the art. | Projective Matrix Factorization with unified embedding for social image tagging |
S1077314214000289 | In this paper, we present a novel fast and accurate numerical method for surface embedding narrow volume reconstruction from unorganized points in ℝ³. Though the level set method prevails in image processing, it requires a redistancing procedure to maintain the desired shape of the level set function. Our method, in contrast, is based on the Allen–Cahn equation, which has been applied in image segmentation due to its motion-by-mean-curvature property. We modify the original Allen–Cahn equation by multiplying it by a control function to restrict the evolution to a narrow band around the given surface data set. To improve the numerical stability of the proposed model, we split the governing equation into linear and nonlinear terms and use an operator splitting technique. The linear equation is solved by the multigrid method, which is a fast solver, and the nonlinear equation is solved analytically. The unconditional stability of the proposed scheme is also proved. Various numerical results are presented to demonstrate the robustness and accuracy of the proposed method. | Surface embedding narrow volume reconstruction from unorganized points |
S1077314214000290 | Representing videos using vocabularies composed of concept detectors appears promising for generic event recognition. While many have recently shown the benefits of concept vocabularies for recognition, the characteristics of a universal concept vocabulary suited to representing events have so far been ignored. In this paper, we study how to create an effective vocabulary for arbitrary-event recognition in web video. We consider five research questions related to the number, the type, the specificity, the quality and the normalization of the detectors in concept vocabularies. A rigorous experimental protocol using a pool of 1346 concept detectors trained on publicly available annotations, two large arbitrary web video datasets and a common event recognition pipeline allows us to analyze the performance of various concept vocabulary definitions. From the analysis we arrive at the recommendation that for effective event recognition the concept vocabulary should (i) contain more than 200 concepts, (ii) be diverse by covering object, action, scene, people, animal and attribute concepts, (iii) include both general and specific concepts, (iv) increase the number of concepts rather than improve the quality of the individual detectors, and (v) contain detectors that are appropriately normalized. We consider these recommendations the most important contribution of the paper, as they provide guidelines for future work. | Recommendations for recognizing video events by concept vocabularies |
S1077314214000307 | We derive an explicit relation between the local affine approximations resulting from the matching of affine-invariant regions and the epipolar geometry in the case of two-view geometry. Most methods that employ the affine relations do so indirectly, by generating pointwise correspondences from the affine relations. In contrast, we show that each affine approximation between images is equivalent to 3 linear constraints on the fundamental matrix, and that these linear conditions guarantee the existence of a homography compatible with the fundamental matrix. We further show that two affine relations constrain the location of the epipole to a conic section; therefore, the location of the epipole can be extracted from 3 regions by intersecting conics. The result is further employed to derive a procedure for estimating the fundamental matrix, based on the estimated location of the epipole. It is shown to be more accurate and to require fewer iterations in LO-RANSAC-based estimation than the current point-based approaches, which employ the affine relation to generate pointwise correspondences and then calculate the fundamental matrix from the pointwise relations. | Conic epipolar constraints from affine correspondences |
S1077314214000319 | We propose a new Statistical Complexity Measure (SCM) to qualify edge maps without Ground Truth (GT) knowledge. The measure is the product of two indices: an Equilibrium index E, obtained by projecting the edge map onto a family of edge patterns, and an Entropy index H, defined as a function of the Kolmogorov–Smirnov (KS) statistic. This new measure can be used for performance characterization, which includes: (i) the specific evaluation of an algorithm (intra-technique process) in order to identify its best parameters and (ii) the comparison of different algorithms (inter-technique process) in order to classify them according to their quality. Results on images of the South Florida and Berkeley databases show that our approach significantly improves over Pratt’s Figure of Merit (PFoM), the standard for objective reference-based edge map evaluation, as it takes more features into account in its evaluation. | Unsupervised edge map scoring: A statistical complexity approach |
S1077314214000320 | The selection of discriminative features is an important and effective technique for many computer vision and multimedia tasks. Using irrelevant features in classification or clustering tasks can degrade performance. Thus, designing efficient feature selection algorithms to remove irrelevant features is one way to improve classification or clustering performance. With the successful use of sparse models in image and video classification and understanding, imposing structural sparsity in feature selection has been widely investigated in recent years. Motivated by the merits of sparse models, in this paper we propose a novel feature selection method using a sparse model. Different from the state of the art, our method is built upon the ℓ2,p-norm and simultaneously considers both the global and local (GLocal) structures of the data distribution. Our method is more flexible in selecting discriminative features, as it is able to control the degree of sparseness. Moreover, considering both the global and local structures of the data distribution makes our feature selection process more effective. An efficient algorithm is proposed to solve the ℓ2,p-norm joint sparsity optimization problem. Experimental results on real-world image and video datasets show the effectiveness of our feature selection method compared to several state-of-the-art methods. (A minimal sketch of the ℓ2,p-norm minimization appears after this table.) | GLocal tells you more: Coupling GLocal structural for feature selection with sparsity for image and video classification |
S1077314214000332 | We propose an efficient method to learn a compact and discriminative dictionary for visual categorization, in which dictionary learning is formulated as a graph partition problem. First, an approximate kNN graph is efficiently computed on the data set using a divide-and-conquer strategy. Then, dictionary learning is achieved by seeking a graph topology on the resulting kNN graph that maximizes a submodular objective function. Due to the diminishing-return property and monotonicity of the defined objective function, the problem can be solved by means of fast greedy optimization. By combining these two efficient ingredients, we obtain a genuinely fast algorithm for dictionary learning, which is promising for large-scale datasets. Experimental results demonstrate its encouraging performance over several recently proposed dictionary learning methods. | Efficient dictionary learning for visual categorization |
S1077314214000344 | In this paper we present a new approach to semantically segmenting a scene based on video activity and to transferring the semantic categories to other, different scenarios. In the proposed approach, a user annotates a few scenes by labeling each area with a functional category such as background, entry/exit, walking path or interest point. For each area, we calculate features derived from object tracks computed in real time on hours of video. The characteristics of each functional area learned in the labeled training sequences are then used to classify regions in different scenarios. We demonstrate the proposed approach on several hours of video from three different indoor scenes, where we achieve state-of-the-art classification results. | Semantic video scene segmentation and transfer |
S1077314214000356 | This work presents an objective performance analysis of statistical tests for edge detection which are suitable for textured or cluttered images. The tests are subdivided into two-sample parametric and non-parametric tests and are applied using a dual-region based edge detector which analyses local image texture difference. Through a series of experimental tests, objective results are presented across a comprehensive dataset of images using a Pixel Correspondence Metric (PCM). The results show that statistical tests can, in many cases, outperform the Canny edge detection method, giving robust edge detection, accurate edge localisation and improved edge connectivity throughout. A visual comparison of the tests is also presented using representative images taken from typical textured histological data sets. The results conclude that the non-parametric Chi-Square (χ²) and Kolmogorov–Smirnov (KS) statistical tests are the most robust edge detection tests where image statistical properties cannot be assumed a priori or where intensity changes in the image are nonuniform, and that the parametric Difference of Boxes (DoB) test and the Student’s t-test are the most suitable for intensity-based edges. Conclusions and recommendations are finally presented, contrasting the tests, giving guidelines for their practical use and confirming in which situations improved edge detection can be expected. | A performance evaluation of statistical tests for edge detection in textured images |
S1077314214000368 | Recent work has shown the advantages of using high-level representations, such as attribute-based descriptors, over low-level feature sets in face verification. However, in most work each attribute is coded with an extremely short information length (e.g., “is Male”, “has Beard”) and all the attributes belonging to the same object are assumed to be independent of each other when using them for prediction. To address these two problems, we propose a discriminative distributed representation for attribute description; on the basis of this description, we present a novel method to model the relationship between attributes and exploit this relationship to improve the performance of face verification, while taking uncertainty in attribute responses into account. Specifically, inspired by the vector representation of words in the text categorization literature, we first represent the meaning of each attribute as a high-dimensional vector in the subject space, then construct an attribute-relationship graph based on the distribution of attributes in that space. With this graph, we are able to explicitly constrain the search space of parameter values of a discriminative classifier to avoid over-fitting. The effectiveness of the proposed method is verified on two challenging face databases (i.e., LFW and PubFig) and the a-Pascal object dataset. Furthermore, we extend the proposed method to the case of continuous attributes, with promising results. | Exploiting relationship between attributes for improved face verification |
S107731421400037X | Tagging is nowadays the most prevalent and practical way to make images searchable. However, in reality many manually assigned tags are irrelevant to image content and hence are not reliable for applications. Much recent effort has therefore been devoted to refining image tags. In this paper, we approach tag refinement from the angle of topic modeling and present a novel graphical model, regularized latent Dirichlet allocation (rLDA). In the proposed approach, tag similarity and tag relevance are jointly estimated in an iterative manner, so that they can benefit from each other, and the multi-wise relationships among tags are explored. Moreover, both the statistics of tags and the visual affinities of images in the corpus are exploited to help topic modeling. We also analyze the superiority of our approach from the deep structure perspective. Experiments on tag ranking and image retrieval demonstrate the advantages of the proposed method. | Image tag refinement by regularized latent Dirichlet allocation |
S1077314214000381 | A hierarchical vision system, inspired by the functional architecture of the cortical motion pathway, is proposed to provide motion interpretation and to guide real-time actions in the real world. This neuromimetic architecture exploits (i) log-polar mapping for data reduction, (ii) a population of motion energy neurons to compute the optic flow, and (iii) a population of adaptive templates in the cortical domain to obtain the flow’s affine description. The time-to-contact and the surface orientations of points of interest in the real world are computed by directly combining the linear description of the cortical flow. The approach is validated through quantitative tests in synthetic environments, and in real-world automotive and robotics situations. | An integrated neuromimetic architecture for direct motion interpretation in the log-polar domain |
S1077314214000393 | We propose an image classification framework that leverages non-negative sparse coding and a correlation-constrained low-rank and sparse matrix decomposition technique (CCLR-Sc+SPM). First, we propose a new non-negative sparse coding along with max pooling and spatial pyramid matching method (Sc+SPM) to extract local features’ information in order to represent images, where non-negative sparse coding is used to encode local features. Max pooling along with spatial pyramid matching (SPM) is then utilized to obtain the feature vectors that represent images. Second, we leverage the correlation-constrained low-rank and sparse matrix recovery technique to decompose the feature vectors of images into a low-rank matrix and a sparse error matrix by considering the correlations between images. To incorporate the common and specific attributes into the image representation, we again adopt the idea of sparse coding to recode the Sc+SPM representation of each image. In particular, we collect the columns of both matrices as the bases and use the coding parameters as the updated image representation, learning them through locality-constrained linear coding (LLC). Finally, a linear SVM classifier is trained for the final classification. Experimental results show that the proposed method achieves or outperforms the state-of-the-art results on several benchmarks. | Image classification by non-negative sparse coding, correlation constrained low-rank and sparse decomposition |
S107731421400040X | This paper presents a disparity calculation algorithm based on stereo vision for obstacle detection and free-space calculation. The algorithm incorporates line segmentation, multi-pass aggregation and efficient local optimisation in order to produce accurate disparity values. It is specifically designed for traffic scenes, where most objects can be represented by planes in the disparity domain. The accurate horizontal disparity gradients of the side planes are also extracted during the disparity optimisation stage. Then, an obstacle detection algorithm based on the U–V-disparity is introduced. Instead of using the Hough transform for line detection, which is extremely sensitive to the parameter settings, the G-disparity image is proposed for the detection of side planes. The vertical planes are then detected separately after removing all the side planes. Faster detection speed, lower parameter sensitivity and improved performance are achieved compared with Hough-transform-based detection. After the obstacles are located and removed from the disparity map, most of the remaining pixels are projections from the road surface. Using a spline as the road model, the vertical profile of the road surface is estimated. Finally, the free space is calculated based on the vertical road profile, which is not restricted by the planar road surface assumption. | Robust obstacle detection based on a novel disparity calculation method and G-disparity |
S1077314214000538 | Approximate nearest neighbor search has attracted much attention recently, as it allows fast queries with a predictable sacrifice in search quality. Among the related works, k-means quantizers are possibly the most adaptive methods and have shown superior search accuracy compared with the others. However, a common problem shared by traditional quantizers is that, during the out-of-sample extension process, the naive strategy considers only similarities in Euclidean space without taking into account the statistical and geometrical properties of the data. To cope with this problem, in this paper a novel approach is proposed by formulating a generalized likelihood ratio analysis. In particular, the proposed method makes a physically meaningful discrimination of the affiliations of new samples with respect to the obtained Voronoi cells. This discrimination essentially imposes a measure of statistical consistency on out-of-sample extension. Experimental studies on two large data sets show that the proposed method is more effective than the benchmark algorithms. | Statistical quantization for similarity search |
S107731421400054X | In query-by-semantic-example image retrieval, images are ranked by similarity of semantic descriptors. These descriptors are obtained by classifying each image with respect to a pre-defined vocabulary of semantic concepts. In this work, we consider the problem of improving the accuracy of semantic descriptors through cross-modal regularization, based on auxiliary text. A cross-modal regularizer, composed of three steps, is proposed. Training images and text are first mapped to a common semantic space. A regularization operator is then learned for each concept in the semantic vocabulary. This is an operator which maps the semantic descriptors of images labeled with that concept to the descriptors of the associated texts. A convex formulation of the learning problem is introduced, enabling the efficient computation of concept-specific regularization operators. The third step is the selection of the most suitable operator for the image to regularize. This is implemented through a quantization of the semantic space, where a regularization operator is associated with each quantization cell. Overall, the proposed regularizer is a non-linear mapping, implemented as a piecewise linear transformation of the semantic image descriptors to regularize. This transformation is a form of cross-modal domain adaptation. It is shown to achieve better performance than recent proposals in the domain adaptation literature, while requiring much simpler optimization. | Cross-modal domain adaptation for text-based regularization of image semantics in image retrieval systems |
S1077314214000551 | This paper presents an unsupervised image segmentation approach for obtaining a set of silhouettes, along with the visual hull (VH), of an object observed from multiple viewpoints. The proposed approach can deal with almost any type of appearance characteristic, such as texture, similar background color, shininess and transparency, besides other phenomena such as shadows and color bleeding. Compared to more classical methods for silhouette extraction from multiple views, for which certain assumptions are made about the object or scene, neither the background nor the object appearance properties are modeled. The only assumption is the constancy of the unknown background for a given camera viewpoint while the object is in motion. The principal idea of the method is the estimation of the temporal evolution of each pixel, which provides a stability measurement and leads to its associated background likelihood. In order to cope with shadows and self-shadows, the object is captured under different lighting conditions. Furthermore, the information from the space, time and lighting domains is exploited and merged in an MRF framework, and the constructed energy function is minimized via graph cut. Experiments are performed on a light stage where the object is set on a turntable and observed from calibrated viewpoints on a hemisphere around the object. Real data experiments show that the proposed approach allows for robust and efficient VH reconstruction of a variety of challenging objects. | Unsupervised visual hull extraction in space, time and light domains |
S1077314214000563 | In this paper, we investigate the concept of projective depth and demonstrate its application and significance in view-invariant action recognition. We show that projective depths are invariant to camera internal parameters and orientation, and hence can be used to identify similar motion of body points from varying viewpoints. By representing the human body as a set of points, we decompose a body posture into a set of projective depths. The similarity between two actions is therefore measured by the motion of projective depths. We exhaustively investigate the different ways of extracting the planes used to estimate the projective depths for action recognition, including (i) the ground plane, (ii) body-point triplets, (iii) planes in time, and (iv) planes extracted from mirror symmetry, and analyze their efficacy in view-invariant action recognition. Experiments are performed on three categories of data: the CMU MoCap dataset, a Kinect dataset, and the IXMAS dataset. Results evaluated over semi-synthetic video data and real data confirm that our method can recognize actions even when they have dynamic timeline maps and the viewpoints and camera parameters are unknown and totally different. | View invariant action recognition using projective depth |
S1077314214000575 | The detection of image detail variation due to changes in illumination direction is a key issue in 3D shape and texture analysis. In this paper two approaches for estimating the optimal illumination direction for maximum enhancement of image detail and maximum suppression of shadows and highlights are presented. The methods are applicable both to single image/single illumination direction imaging and to photometric stereo imaging. This paper uses class-specific prior knowledge, where the distribution of the normals of the class of surfaces is used in the optimisation. Both the Lambertian and the Phong models are considered and the theoretical development is demonstrated with experimental results for both models. For each method experiments were performed using artificial images with isotropic and anisotropic distributions of normals, followed by experiments with real faces but synthesised images. Finally, results are presented using real objects and faces with and without ground-truth. | Optimal illumination directions for faces and rough surfaces for single and multiple light imaging using class-specific prior knowledge |
S107731421400068X | The minimum barrier distance, MBD, introduced recently in [1], is a pseudo-metric defined on a compact subset D of the Euclidean space ℝⁿ whose values depend on a fixed map (an image) f from D into ℝ. The MBD is defined as the minimal value of the barrier strength of a path between the points, which is the length of the smallest interval containing all values of f along the path. In this paper we present a polynomial-time algorithm that provably calculates the exact values of the MBD for digital images. We compare this new algorithm, theoretically and experimentally, with the algorithm presented in [1], which computes approximate values of the MBD. Moreover, we note that every generalized distance function can be naturally translated into an image segmentation algorithm. The algorithms that fall under this category include Relative Fuzzy Connectedness, and those associated with the minimum barrier, fuzzy distance, and geodesic distance functions. In particular, we compare these four algorithms experimentally on 2D and 3D natural and medical images with known ground truth and at varying levels of noise, blur, and inhomogeneity. (A minimal sketch of a Dijkstra-style approximation of the MBD appears after this table.) | Efficient algorithm for finding the exact minimum barrier distance |
S1077314214000745 | Although there are many excellent clustering algorithms, effective clustering remains very challenging for large datasets that contain many classes. Image clustering presents further problems because automatically computed image distances are often noisy. We address these challenges in two ways. First, we propose a new algorithm to cluster a subset of the images only (we call this subclustering), which will produce a few examples from each class. Subclustering will produce smaller but purer clusters. Then we make use of human input in an active subclustering algorithm to further improve results. We run experiments on a face image dataset and a leaf image dataset and show that our proposed algorithms perform better than baseline methods. | Active subclustering |
S1077314214000757 | In this paper we empirically analyze the importance of sparsifying representations for classification purposes. We focus on those obtained by convolving images with linear filters, which can be either hand designed or learned, and perform extensive experiments on two important Computer Vision problems, image categorization and pixel classification. To this end, we adopt a simple modular architecture that encompasses many recently proposed models. The key outcome of our investigations is that enforcing sparsity constraints on features extracted in a convolutional architecture does not improve classification performance, whereas it does so when redundancy is artificially introduced. This is very relevant for practical purposes, since it implies that the expensive run-time optimization required to sparsify the representation is not always justified, and therefore that computational costs can be drastically reduced. | On the relevance of sparsity for image classification |
S1077314214000769 | Hierarchical classification (HC) is a popular and efficient way of detecting semantic concepts in images. The conventional method always selects the branch with the highest classification response. This branch selection strategy risks propagating classification errors from higher levels of the hierarchy to the lower levels. We argue that this local strategy is too arbitrary, because the candidate nodes are considered individually, which ignores the semantic and contextual relationships among concepts. In this paper, we first propose a novel method for HC which is able to utilize the semantic relationships among candidate nodes and their children to recover the responses of unreliable classifiers of the candidate nodes. The error is thus expected to be reduced by a collaborative branch selection scheme. The approach is further extended to enable multiple branch selection, where other relationships (e.g., contextual information) are incorporated, with the hope of providing the branch selection with a more globally valid, semantically and contextually consistent view. An extensive set of experiments on three large-scale datasets shows that the proposed methods outperform the conventional HC method and achieve a satisfactory balance between effectiveness and efficiency. | Collaborative error reduction for hierarchical classification |
S1077314214000770 | The segmentation of objects has been an area of interest in numerous fields. The use of texture has been explored to improve convergence in the presence of cluttered backgrounds or objects with distinct textures, where intensity variations are insufficient. Additionally, saliency and feature maps have been applied for contour initialization. However, taking advantage of texture to improve initialization and convergence has not been extensively explored. To address this, we propose a hybrid structural and texture distinctiveness vector field convolution (STVFC) approach, where both the structural characteristics and the concept of texture distinctiveness are incorporated into a multi-functional vector field convolution (VFC) model. In this novel approach, texture distinctiveness is used to enable automatic initialization and is incorporated with intensity variation to improve and accelerate convergence towards the object boundary. Experiments using three datasets, containing natural images and Brodatz textures, demonstrated that STVFC achieved better or comparable segmentation accuracy. | Hybrid structural and texture distinctiveness vector field convolution for region segmentation |
S1077314214000782 | User-provided textual tags of web images are widely utilized for facilitating image management and retrieval. Yet they are usually incomplete and insufficient to describe the whole semantic content of the corresponding images, resulting in performance degradation of various tag-dependent applications. In this paper, we propose a novel method, denoted DLSR, for automatic image tag completion via Dual-view Linear Sparse Reconstructions. Given an incomplete initial tagging matrix with each row representing an image and each column representing a tag, DLSR performs tag completion from both the image and tag views, exploiting various available contextual information. Specifically, for a to-be-completed image, DLSR exploits image-image correlations by linearly reconstructing its low-level image features and initial tagging vector with those of others, and then utilizes them to obtain an image-view reconstructed tagging vector. Meanwhile, by linearly reconstructing the tagging column vector of each tag with those of others, DLSR exploits tag-tag correlations to obtain a tag-view reconstructed tagging vector from the initially labeled tags. Both image-view and tag-view reconstructed tagging vectors are then combined for better prediction of missing related tags. Extensive experiments conducted on benchmark datasets and real-world web images demonstrate the reasonableness and effectiveness of the proposed DLSR, which can be utilized to enhance a variety of tag-dependent applications such as image auto-annotation. | Image tag completion via dual-view linear sparse reconstructions |
S1077314214000794 | Due to the explosive growth of multimedia content in recent years, scalable similarity search has attracted considerable attention in many large-scale multimedia applications. Among the different similarity search approaches, hashing-based approximate nearest neighbor (ANN) search has become very popular owing to its computational and storage efficiency. However, most existing hashing methods adopt a single modality or simply integrate multiple modalities without exploiting the effects of the different features. To address the problem of learning compact hashing codes with multiple modalities, we propose a semi-supervised Multi-Graph Hashing (MGH) framework in this paper. Different from traditional methods, our approach can effectively integrate multiple modalities with optimized weights in a multi-graph learning scheme. In this way, the effects of the different modalities can be adaptively modulated. Besides, semi-supervised information is also incorporated into the unified framework, and a sequential learning scheme is adopted to learn complementary hash functions. The proposed framework enables direct and fast handling of query examples. Thus, the binary codes learned by our approach are more effective for fast similarity search. Extensive experiments are conducted on two large public datasets to evaluate the performance of our approach, and the results demonstrate that the proposed approach achieves promising results compared to the state-of-the-art methods. | Semi-supervised multi-graph hashing for scalable similarity search |
S1077314214000800 | In this paper, we formulate an adaptive Rao-Blackwellized particle filtering method with Gaussian mixture models to cope with significant variations of the target appearance during object tracking. By modeling target appearance as Gaussian mixture models, we introduce an efficient method for computing particle weights. We incrementally update the appearance models using an on-line Expectation–Maximization algorithm. To achieve robustness to outliers caused by tracking error or partial occlusion in updating the appearance models, we divide the target area into sub-regions and estimate the appearance models independently for each of those sub-regions. We demonstrate the robustness of the proposed method for object tracking using a number of publicly available datasets. | Rao-Blackwellized particle filtering with Gaussian mixture models for robust visual tracking |
S1077314214000812 | We study the 3D reconstruction of an isometric surface from point correspondences between a template and a single input image. The template shows the surface flat and fronto-parallel. We propose three new methods. The first two use a convex relaxation of isometry to inextensibility. They are formulated as Second Order Cone Programs (SOCP). The first proposed method is point-wise (it reconstructs only the input point correspondences) while the second proposed method uses a smooth and continuous surface model, based on Free-Form Deformations (FFD). The third proposed method uses the ‘true’ nonconvex isometric constraint and the same continuous surface model. It is formulated with Nonlinear Least-Squares and can thus be solved with the efficient Levenberg–Marquardt minimization method. The proposed approaches may be combined in a single pipeline whereby one of the convex approximations is used to initialize the nonconvex method. Our contributions solve two important limitations of current state of the art: our convex methods are the first ones to handle noise in both the template and image points, and our nonconvex method is the first one to use ‘true’ isometric constraints. Our experimental results on simulated and real data show that our convex point-wise method and our nonconvex method outperform respectively current initialization and refinement methods in 3D reconstructed surface accuracy. | Monocular template-based 3D surface reconstruction: Convex inextensible and nonconvex isometric methods |
S1077314214000824 | The core problem addressed in this article is the 3D position detection of a spherical object of known radius in a single image frame, obtained by a dioptric vision system consisting of only one fisheye-lens camera that follows the equidistant projection model. The central contribution is a bijection principle between a known-radius spherical object’s 3D world position and its 2D projected image curve, which we prove, thus establishing that for every possible 3D world position of the spherical object there exists a unique curve on the image plane if the object is projected through a fisheye lens that follows the equidistant projection model. Additionally, we present a setup for the experimental verification of the principle’s correctness. In previously published works we have applied this principle to detect and subsequently track a known-radius spherical object. | 3D to 2D bijection for spherical objects under equidistant fisheye projection |
S1077314214000836 | This paper proposes a novel online domain-shift appearance learning and object tracking scheme on a Riemannian manifold for visual and infrared videos, especially for video scenarios containing large deformable objects with fast out-of-plane pose changes that could be accompanied by partial occlusions. Although Riemannian manifolds and covariance descriptors are promising for visual object tracking, the use of the Riemannian mean from a window of observations, spatially insensitive covariance descriptors, fast significant out-of-plane (non-planar) pose changes, and long-term partial occlusions of large deformable objects in video limits the performance of such trackers. The proposed method tackles these issues with the following main contributions: (a) Proposing a Bayesian formulation on Riemannian manifolds by using particle filters on the manifold and using appearance particles in each time instant for computing the Riemannian mean, rather than using a window of observations. (b) Proposing a nonlinear dynamic model for online domain-shift learning on the manifold, where the model includes both manifold object appearance and its velocity. (c) Introducing a criterion-based partial occlusion handling approach in online learning. (d) Tracking the object bounding box by using affine parametric shape modeling with manifold appearance embedded. (e) Incorporating spatial, frequency and orientation information in the covariance descriptor by extracting Gabor features in a partitioned bounding box. (f) Applying effectively to both visual-band and thermal-infrared videos. To realize the proposed tracker, two particle filters are employed: one is applied on the Riemannian manifold for generating candidate appearance particles and another on vector space for generating candidate box particles. Further, tracking and online learning are performed in alternation to mitigate tracking drift. Experiments on both visual and infrared videos have shown robust tracking performance of the proposed scheme. Comparisons and evaluations with ten existing state-of-the-art trackers provide further support for the proposed scheme. | Online domain-shift learning and object tracking based on nonlinear dynamic models and particle filters on Riemannian manifolds |
S1077314214000848 | “Actions in the wild” is the term given to examples of human motion that are performed in natural settings, such as those harvested from movies [1] or Internet databases [2]. This paper presents an approach to the categorisation of such activity in video, which is based solely on the relative distribution of spatio-temporal interest points. Presenting the Relative Motion Descriptor, we show that the distribution of interest points alone (without explicitly encoding their neighbourhoods) effectively describes actions. Furthermore, given the huge variability of examples within action classes in natural settings, we propose to further improve recognition by automatically detecting outliers, and breaking complex action categories into multiple modes. This is achieved using a variant of Random Sampling Consensus (RANSAC), which identifies and separates the modes. We employ a novel reweighting scheme within the RANSAC procedure to iteratively reweight training examples, ensuring their inclusion in the final classification model. We demonstrate state-of-the-art performance on five human action datasets. | Capturing relative motion and finding modes for action recognition in the wild |
S1077314214000939 | The human face conveys to other human beings, and potentially to computer systems, information such as identity, intentions, emotional and health states, attractiveness, age, gender and ethnicity. In most cases, analyzing this information involves computer science as well as the human and medical sciences. The most studied multidisciplinary problems are analyzing emotions, estimating age and modeling aging effects. An emerging area is the analysis of human attractiveness. The purpose of this paper is to survey recent research on the computer analysis of human beauty. First we present results in the human sciences and medicine pointing to a largely shared and data-driven perception of attractiveness, which is a rationale for computer beauty analysis. After discussing practical application areas, we survey current studies on the automatic analysis of facial attractiveness aimed at: (i) relating attractiveness to particular facial features; (ii) assessing attractiveness automatically; (iii) improving the attractiveness of 2D or 3D face images. Finally we discuss open problems and possible lines of research. | Computer analysis of face beauty: A survey |
S1077314214000940 | Thanks to compact data representations and fast similarity computation, many binary code embedding techniques have been proposed for the large-scale similarity search used in many computer vision applications, including image retrieval. Most prior techniques have centered around optimizing a set of projections for accurate embedding. In spite of active research efforts, existing solutions suffer from diminishing marginal efficiency and high quantization errors as more code bits are used. To reduce both the quantization error and the diminishing efficiency, we propose a novel binary code embedding scheme, Quadra-Embedding, which assigns two bits to each projection to define four quantization regions, together with a binary code distance function tailored to our method. Our method is directly applicable to most binary code embedding methods. Our scheme, combined with four state-of-the-art embedding methods, has been evaluated and achieves meaningful accuracy improvements in most experimental configurations. | Quadra-embedding: Binary code embedding with low quantization error |
S1077314214000952 | Two images of a scene consisting of multiple flat surfaces are related by a collection of homography matrices. Practitioners typically estimate these homographies separately, thereby violating inherent inter-homography constraints that arise naturally out of the rigid geometry of the scene. We demonstrate that, through a suitable choice of parametrisation, multiple homographies can be jointly estimated so as to satisfy all inter-homography constraints. Unlike the cost functions used previously for solving this problem, our cost function does not correspond to fitting one set of homography matrices to another set of homography matrices. Instead, we utilise the Sampson distance for homography matrix estimation and operate directly on image data points. By using the Sampson distance and working directly on data points, we expedite the application of a vast amount of knowledge that already exists for Sampson-distance-based single homography or fundamental matrix estimation. The estimation framework reported in this paper establishes a new baseline for joint multiple homography estimation and at the same time raises intriguing new research questions. The work may be of interest to a broad range of researchers who require the estimation of homography matrices with uncalibrated cameras as part of their solution. | Sampson distance based joint estimation of multiple homographies with uncalibrated cameras
S1077314214000964 | Under the popular Markov random field (MRF) model, low-level vision problems are usually formulated by prior and likelihood models. In recent years, the priors have been formulated from high-order cliques and have demonstrated their robustness in many problems. However, the likelihoods have remained zeroth-order clique potentials. This zeroth-order clique assumption causes inaccurate solutions and gives rise to an undesirable fattening effect, especially when window-based matching costs are employed. In this paper, we investigate high-order likelihood modeling for the stereo matching problem, which advocates measuring the dissimilarity between the whole reference image and the warped non-reference image. If the dissimilarity measure is evaluated between filtered stereo images, the matching cost can be modeled as high-order clique potentials. When linear filters and the nonparametric census filter are used, it is shown that the high-order clique potentials can be reduced to pairwise energy functions. Consequently, a global optimization is possible by employing an efficient graph cuts algorithm. Experimental results show that the proposed high-order likelihood models produce significantly better results than the conventional zeroth-order models qualitatively as well as quantitatively. | Stereo reconstruction using high-order likelihoods
S1077314214000976 | Multiphase active contour based models are useful in identifying multiple regions with spatial consistency but varying characteristics such as the mean intensities of regions. Segmenting brain magnetic resonance images (MRIs) using a multiphase approach is useful to differentiate white and gray matter tissue for anatomical, functional and disease studies. Multiphase active contour methods are superior to other approaches due to their topological flexibility, accurate boundaries, robustness to image variations and adaptive energy functionals. Globally convex methods are furthermore initialization independent. We extend the relaxed globally convex Chan and Vese two-phase piecewise constant energy minimization formulation of Chan et al. (2006) [1] to the multiphase domain and prove the existence of a global minimizer in a specific space which is one of the novel contributions of the paper. An efficient dual minimization implementation of our binary partitioning function model accurately describes disjoint regions using stable segmentations by avoiding local minima solutions. Experimental results indicate that the proposed approach provides consistently better accuracy than other related multiphase active contour algorithms using four different error metrics (Dice, Rand Index, Global Consistency Error and Variation of Information) even under severe noise, intensity inhomogeneities, and partial volume effects in MRI imagery. | Fast and globally convex multiphase active contours for brain MRI segmentation |
S1077314214000988 | This paper presents a local 3D descriptor for surface matching dubbed SHOT. Our proposal stems from a taxonomy of existing methods which highlights two major approaches, referred to as Signatures and Histograms, inherently emphasizing descriptiveness and robustness respectively. We formulate a comprehensive proposal which encompasses a repeatable local reference frame as well as a 3D descriptor, the latter featuring a hybrid structure between Signatures and Histograms so as to aim at a more favorable balance between descriptive power and robustness. A distinctive trait of our method concerns the seamless integration of multiple cues within the descriptor to improve distinctiveness, which is particularly relevant nowadays due to the increasing availability of affordable RGB-D sensors which can gather both depth and color information. A thorough experimental evaluation based on datasets acquired with different types of sensors, including a novel RGB-D dataset, confirms that SHOT outperforms state-of-the-art local descriptors in experiments addressing descriptor matching for object recognition, 3D reconstruction and shape retrieval. | SHOT: Unique signatures of histograms for surface and texture description
S107731421400099X | This paper presents a multiview model of object categories, generally applicable to virtually any type of image features, and methods to efficiently perform, in a unified manner, detection, localization and continuous pose estimation in novel scenes. We represent appearance as distributions of low-level, fine-grained image features. Multiview models encode the appearance of objects at discrete viewpoints, and, in addition, how these viewpoints deform into one another as the viewpoint continuously varies (as detected from optical flow between training examples). Using a measure of similarity between an arbitrary test image and such a model at chosen viewpoints, we perform all tasks mentioned above with a common method. We leverage the simplicity of low-level image features, such as points extracted along edges, or coarse-scale gradients extracted densely over the images, by building probabilistic templates, i.e. distributions of features, learned from one or several training examples. We efficiently handle these distributions with probabilistic techniques such as kernel density estimation, Monte Carlo integration and importance sampling. We provide an extensive evaluation on a wide variety of benchmark datasets. We demonstrate performance on the “ETHZ Shape” dataset, with single (hand-drawn) and multiple training examples, well above baseline methods, on par with a number of more task-specific methods. We obtain remarkable performance on the recognition of more complex objects, notably the cars of the “3D Object” dataset of Savarese et al., with detection rates of 92.5% and an accuracy in pose estimation of 91%. We perform better than the state-of-the-art on continuous pose estimation with the “rotating cars” dataset of Ozuysal et al. We also demonstrate particular capabilities with a novel dataset featuring non-textured objects of undistinctive shapes, the pose of which can only be determined from shading, captured here by coarse scale intensity gradients. | Multiview feature distributions for object detection and continuous pose estimation
S1077314214001003 | In many geometry processing applications, the estimation of differential geometric quantities such as curvature or normal vector field is an essential step. In this paper, we investigate a new class of estimators on digital shape boundaries based on integral invariants (Pottmann et al., 2007) [39]. More precisely, we provide both proofs of multigrid convergence of principal curvature estimators and a complete experimental evaluation of their performances. | Multigrid convergent principal curvature estimators in digital geometry |
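For intuition, the classical 2D integral invariant estimates curvature from the area of the shape inside a small disk centered on a boundary point. A minimal sketch on a binary digital mask follows; it is illustrative only, and the paper's estimators are more careful about how the ball radius must scale with grid resolution to obtain multigrid convergence.

```python
import numpy as np

def ii_curvature(mask, x, y, r):
    """Integral-invariant curvature at boundary pixel (x, y) of a binary
    mask, from the shape area A_r inside a disk of radius r:
        A_r = (pi/2) r^2 - (kappa/3) r^3 + O(r^4)
    =>  kappa ~ (3 / r^3) * ((pi/2) r^2 - A_r)."""
    ys, xs = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    disk = (xs - x) ** 2 + (ys - y) ** 2 <= r ** 2
    area = np.logical_and(mask, disk).sum()   # digital area estimate
    return (3.0 / r ** 3) * (np.pi * r ** 2 / 2.0 - area)
```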
S1077314214001015 | Statistical shape models (SSMs) are a well-established tool in medical image analysis. The most challenging part of SSM construction, which cannot be solved trivially in 3D, is the establishment of corresponding points, so-called landmarks. A popular approach for solving the correspondence problem is to minimize a groupwise objective function using the optimization by re-parameterization approach. To this end, several objective functions, optimization strategies and re-parameterization functions have been proposed. While previous evaluation studies focused mainly on the objective function, we provide a detailed evaluation of different correspondence methods, objective functions, re-parameterization, and optimization strategies. Moreover and contrary to previous works, we use distance measures that compare landmark shape vectors to the original input shapes, thus adequately accounting for correspondences which undersample certain regions of the input shapes. Additionally, we segment binary expert segmentations to benchmark SSMs constructed from different correspondences. This new evaluation technique overcomes limitations of the correspondence based evaluation and allows for directly quantifying the influence of the correspondence on the expected segmentation accuracy. From our evaluation results we identify pitfalls of the current approach and derive practical recommendations for implementing a groupwise optimization pipeline. | Using image segmentation for evaluating 3D statistical shape models built with groupwise correspondence optimization |
S1077314214001027 | In clinical practice, traditional X-ray radiography is widely used, and knowledge of landmarks and contours in anteroposterior (AP) pelvis X-rays is invaluable for computer aided diagnosis, hip surgery planning and image-guided interventions. This paper presents a fully automatic approach for landmark detection and shape segmentation of both pelvis and femur in conventional AP X-ray images. Our approach is based on the framework of landmark detection via Random Forest (RF) regression and shape regularization via hierarchical sparse shape composition. We propose a visual feature FL-HoG (Flexible-Level Histogram of Oriented Gradients) and a feature selection algorithm based on trace ratio optimization to improve the robustness and the efficacy of RF-based landmark detection. The landmark detection result is then used in a hierarchical sparse shape composition framework for shape regularization. Finally, the extracted shape contour is fine-tuned by a post-processing step based on low level image features. The experimental results demonstrate that our feature selection algorithm reduces the feature dimension by a factor of 40 and improves both training and test efficiency. Further experiments conducted on 436 clinical AP pelvis X-rays show that our approach achieves an average point-to-curve error of around 1.2 mm for the femur and 1.9 mm for the pelvis. | Fully automatic segmentation of AP pelvis X-rays via random forest regression with efficient feature selection and hierarchical sparse shape composition
S1077314214001039 | This paper presents radial distortion invariants and their application to lens evaluation under a single-optical-axis omnidirectional camera. Little work on geometric invariants of distorted images has been reported previously. We establish accurate geometric invariants from 2-dimensional/3-dimensional space points and their radially distorted image points. Based on the established invariants in a single image, we construct criterion functions and then design a feature vector for evaluating the camera lens, where the infinity norm of the feature vector is computed to indicate the tangent distortion amount. The evaluation is simple and convenient thanks to the feature vector being analytical and straightforward on image points and space points without any other computations. In addition, the evaluation is flexible since the invariants used make any coordinate system for measuring space or image points workable. Moreover, the constructed feature vector is independent of point ordering and resistant to noise. The established invariants have other potential applications such as camera calibration, image rectification, structure reconstruction, image matching, and object recognition. Extensive experiments, including on structure reconstruction, demonstrate the usefulness, higher accuracy, and higher stability of the present work. | Radial distortion invariants and lens evaluation under a single-optical-axis omnidirectional camera
S1077314214001040 | A method to decompose a 3D object into simple parts starting from its curve skeleton is described. Branches of the curve skeleton are classified as meaningful and non-meaningful by using the notion of the zone of influence of the points where skeleton branches meet. Meaningful branches are associated to subsets of the object, which are obtained by subtracting from the input object suitably expanded versions of the zones of influence, termed overlapping regions. A decision is then taken on whether the overlapping regions should be individual decomposition components, or should be assigned to properly selected adjacent object subsets. The resulting decomposition components are subdivided into parts characterized by simple shape through the polygonal approximation of the corresponding skeleton branches. | From skeleton branches to object parts
S107731421400112X | This paper describes a method of gait recognition by suppressing and using gait fluctuations. Inconsistent phasing between a matching pair of gait image sequences because of temporal fluctuations degrades the performance of gait recognition. We remove the temporal fluctuations by generating a phase-normalized gait image sequence with equal phase intervals. If inter-period gait fluctuations within a gait image sequence are repeatedly observed for the same subject, they can be regarded as a useful distinguishing gait feature. We extract phase fluctuations as temporal fluctuations as well as gait fluctuation image and trajectory fluctuations as spatial fluctuations. We combine them with the matching score using the phase-normalized image sequence as additional matching scores in the score-level fusion framework or as quality measures in the score-normalization framework. We evaluated the methods in experiments using large-scale publicly available databases and showed the effectiveness of the proposed methods. | Gait recognition by fluctuations |
S1077314214001131 | With systems for acquiring 3D surface data being evermore commonplace, it has become important to reliably extract specific shapes from the acquired data. In the presence of noise and occlusions, this can be done through the use of statistical shape models, which are learned from databases of clean examples of the shape in question. In this paper, we review, analyze and compare different statistical models: from those that analyze the variation in geometry globally to those that analyze the variation in geometry locally. We first review how different types of models have been used in the literature, then proceed to define the models and analyze them theoretically, in terms of both their statistical and computational aspects. We then perform extensive experimental comparison on the task of model fitting, and give intuition about which type of model is better for a few applications. Due to the wide availability of databases of high-quality data, we use the human face as the specific shape we wish to extract from corrupted data. | Review of statistical shape spaces for 3D data with comparative analysis for human faces |
S1077314214001179 | In binary tomography the goal is to reconstruct the inner structure of homogeneous objects from their projections. This is usually required from a low number of projections, which are also likely to be affected by noise and measurement errors. In general, the distorted and incomplete projection data holds insufficient information for the correct reconstruction of the original object. In this paper, we describe two methods for approximating the local uncertainty of the reconstructions, i.e., identifying how the information stored in the projections determine each part of the reconstructed image. These methods can measure the uncertainty of the reconstruction without any knowledge from the original object itself. Moreover, we provide a global uncertainty measure that can assess the information content of a projection set and predict the error to be expected in the reconstruction of a homogeneous object. We also give an experimental evaluation of our proposed methods, mention some of their possible applications, and describe how the uncertainty measure can be used to improve the performance of the DART reconstruction algorithm. | Local and global uncertainty in binary tomographic reconstruction |
S107731421400126X | In this paper, we present the reconstructed residual error, which evaluates the quality of a given segmentation of a reconstructed image in tomography. This novel evaluation method, which is independent of the methods that were used to reconstruct and segment the image, is applicable to segmentations that are based on the density of the scanned object. It provides a spatial map of the errors in the segmented image, based on the projection data. The reconstructed residual error is a reconstruction of the difference between the recorded data and the forward projection of that segmented image. The properties and applications of the algorithm are verified experimentally through simulations and experimental micro-CT data. The experiments show that the reconstructed residual error is close to the true error, that it can improve gray level estimates, and that it can help discriminate between different segmentations. | The reconstructed residual error: A novel segmentation evaluation measure for reconstructed images in tomography
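Since the measure is defined as the reconstruction of the difference between the recorded data and the forward projection of the segmented image, a minimal parallel-beam sketch can be written directly; scikit-image's radon/iradon stand in for whatever projector and reconstruction method are actually used in the paper.

```python
import numpy as np
from skimage.transform import radon, iradon

def reconstructed_residual_error(sinogram, segmented, theta):
    """Reconstruct the difference between the recorded projections and the
    forward projection of the segmented image: a spatial map of the
    segmentation error, independent of how the segmentation was produced."""
    residual = sinogram - radon(segmented.astype(float), theta=theta)
    return iradon(residual, theta=theta)
```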
S1077314214001271 | Visual estimation of head pose is desirable for computer vision applications such as face recognition, human computer interaction, and affective computing. However, accurate estimation of head pose in uncontrolled environments is still a grand challenge. This paper proposes a novel feature representation model for accurate pose estimation. In this model, a range image is divided into a set of simple slices containing abundant geometric cues that can be used to accurately describe the pose of a subject. This model provides a general framework for designing new features for head pose estimation: one first designs a feature model for describing a single slice, and a new set of features for describing range images is then generated by combining all slices. Due to the huge number of slices that can be generated from a single range image, even a simple slice description model can achieve robust performance. Guided by this model, two novel range image representation models, Local Slice Depth (LSD) and Local Slice Orientation (LSO), are designed. LSD can be used for coarse estimation of head poses, while LSO can achieve accurate results. Moreover, in order to evaluate the performance of the proposed representation model, an automatic head pose estimation method is implemented using a Kinect sensor. First, both color and range images captured by the Kinect sensor are used to localize and segment the facial region from the background. Second, two novel integral images, namely the slice depth integral image and the slice coordinates integral image, are proposed to achieve real-time feature extraction. Finally, random forests are used to learn a stable relationship between slice feature descriptors and head pose parameters. Experiments on both the low-quality depth data set Biwi and the high-quality depth data set ETH demonstrate state-of-the-art performance of our method. | Slice representation of range data for head pose estimation
S1077314214001283 | A relative pose and target model estimation framework using calibrated multicamera clusters is presented. It is able to accurately track up-to-date relative motion, including scale, between the camera cluster and the (free-moving) completely unknown target object or environment using only image measurements from a set of perspective cameras. The cameras within the cluster may be arranged in any configuration, even such that there is no spatial overlap in their fields-of-view. An analysis of the set of degenerate motions for a cluster composed of three cameras is performed. It is shown that including the third camera eliminates many of the previously known ambiguities for two-camera clusters. The estimator performance and the degeneracy analysis conclusions are confirmed in experiment with ground truth data collected from an optical motion capture system for the proposed three-camera cluster against other camera configurations suggested in the literature. | Scale recovery in multicamera cluster SLAM with non-overlapping fields of view |
S1077314214001295 | Computed tomography is a noninvasive technique for reconstructing an object from projection data. If the object consists of only a few materials, discrete tomography allows us to use prior knowledge of the gray values corresponding to these materials to improve the accuracy of the reconstruction. The Discrete Algebraic Reconstruction Technique (DART) is a reconstruction algorithm for discrete tomography. DART can result in accurate reconstructions, computed by iteratively refining the boundary of the object. However, this boundary update is not robust against noise and DART does not work well when confronted with high noise levels. In this paper we propose a modified DART algorithm, SDART, which imposes a set of soft constraints on the pixel values. The soft constraints allow noise to be spread across the whole image domain, proportional to these constraints, rather than across boundaries. The results of our numerical experiments show that SDART yields more accurate reconstructions than DART if the signal-to-noise ratio is low. | SDART: An algorithm for discrete tomography from noisy projections
S1077314214001301 | We present a system to track the positions of multiple persons in a scene from overlapping cameras. The distinguishing aspect of our method is a novel, two-step approach that jointly estimates person position and track assignment. The proposed approach keeps solving the assignment problem tractable, while taking into account how different assignments influence feature measurement. In a hypothesis generation stage, the similarity between a person at a particular position and an active track is based on a subset of cues (appearance, motion) that are guaranteed observable in the camera views. This allows for efficient computation of the K-best joint estimates for person position and track assignment under an approximation of the likelihood function. In a subsequent hypothesis verification stage, the known person positions associated with these K-best solutions are used to define a larger set of actually visible cues, which enables a re-ranking of the found assignments using the full likelihood function. We demonstrate that our system outperforms the state-of-the-art on four challenging multi-person datasets (indoor and outdoor), involving 3–5 overlapping cameras and up to 23 persons simultaneously. Two of these datasets are novel: we make the associated images and annotations public to facilitate benchmarking. | Joint multi-person detection and tracking from overlapping cameras |
S1077314214001313 | For very large datasets with more than a few classes, producing ground-truth data can represent a substantial, and potentially expensive, human effort. This is particularly evident when the datasets have been collected for a particular purpose, e.g. scientific inquiry, or by autonomous agents in novel and inaccessible environments. In these situations there is scope for the use of unsupervised approaches that can model collections of images and automatically summarise their content. To this end, we present novel hierarchical Bayesian models for image clustering, image segment clustering, and unsupervised scene understanding. The purpose of this investigation is to highlight and compare hierarchical structures for modelling context within images based on visual data alone. We also compare the unsupervised models with state-of-the-art supervised and weakly supervised models for image understanding. We show that some of the unsupervised models are competitive with the supervised and weakly supervised models on standard datasets. Finally, we demonstrate these unsupervised models working on a large dataset containing more than one hundred thousand images of the sea floor collected by a robot. | Hierarchical Bayesian models for unsupervised scene understanding |
S1077314214001325 | The shift from model-based approaches to data-driven ones is opening new frontiers in computer vision. Several tasks which required the development of sophisticated parametric models can now be solved through simple algorithms, by offloading the complexity of the task to the amount of available data. However, in order to develop data-driven approaches, it is necessary to have large annotated datasets. Unfortunately, manual labeling of large scale datasets is a complex, error-prone and tedious task, especially when dealing with noisy images or with fine-grained visual tasks. In this paper we present an automatic label propagation approach that transfers labels from a small set of manually labeled images to a large set of unlabeled items by means of nearest-neighbor search operating on HoG image descriptors. In particular, we introduce the concept of mutual local similarity between the labeled query image and its nearest neighbors as the condition to be verified for propagating labels. The performance evaluation, carried out on the COREL 5K dataset and on a dataset of 20 million underwater low-quality images, showed how big data combined with simple nonparametric approaches makes it possible to solve complex visual tasks effectively. | Nonparametric label propagation using mutual local similarity in nearest neighbors
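A minimal sketch of the mutual-similarity idea follows, with scikit-learn's nearest-neighbour search standing in for the paper's HoG-based retrieval; the exact form of the mutual local-similarity condition is an assumption.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def propagate_labels(hog_labeled, labels, hog_unlabeled, k=5):
    """Propagate a label to an unlabeled image only when it and its nearest
    labeled neighbour are mutually similar, i.e. each lies among the other's
    k nearest neighbours. Illustrative reading of the paper's condition."""
    nn_l = NearestNeighbors(n_neighbors=k).fit(hog_labeled)
    nn_u = NearestNeighbors(n_neighbors=k).fit(hog_unlabeled)
    _, idx_l = nn_l.kneighbors(hog_unlabeled)   # labeled NNs of each query
    _, idx_u = nn_u.kneighbors(hog_labeled)     # queries near each labeled item
    out = [None] * len(hog_unlabeled)
    for q, neighbors in enumerate(idx_l):
        best = neighbors[0]
        if q in idx_u[best]:                    # mutual similarity condition
            out[q] = labels[best]
    return out
```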
S1077314214001337 | Selecting correct matches from a set of tentative feature point correspondences plays a vital role in many tasks, such as structure from motion (SfM), wide baseline stereo and image search. In this paper, we propose an efficient and effective method for identifying correct matches from an initial batch of feature correspondences. The proposed method first obtains a subset of correct matches based on the assumption that the local geometric structure among a feature point and its nearest neighbors in an image is not easily affected by geometric or photometric transformations, and should thus also be observed in the matched images. For efficiency, we model this local geometric structure by a set of linear coefficients that reconstruct the point from its neighbors. After obtaining a portion of correct matches, we then provide two ways to accurately estimate the correctness of each match and to efficiently estimate the number of correct matches, respectively. The proposed method is evaluated on two applications: image matching and image re-ranking. Experimental results on several public datasets show that our method outperforms state-of-the-art techniques in terms of speed and accuracy. | Exploiting local linear geometric structure for identifying correct matches
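The local geometric structure is encoded by coefficients that linearly reconstruct a point from its neighbours, in the spirit of locally linear embedding. A minimal sketch, where the regularization is an illustrative choice:

```python
import numpy as np

def reconstruction_weights(p, neighbors, reg=1e-6):
    """Linear coefficients (summing to one) that reconstruct point p from
    the rows of `neighbors`. A match can then be screened by checking that
    the same weights approximately reconstruct the corresponding point
    from the corresponding neighbours in the other image."""
    Z = neighbors - p                           # shift neighbourhood to the origin
    G = Z @ Z.T                                 # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))     # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()
```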
S1077314214001349 | Background subtraction is a commonly used technique in computer vision for detecting objects. While there is an extensive literature regarding background subtraction, most of the existing methods assume that the camera is stationary. This assumption limits their applicability to moving camera scenarios. In this paper, we approach the background subtraction problem from a geometric perspective to overcome this limitation. In particular, we introduce a 2.5D background model that describes the scene in terms of both its appearance and geometry. Unlike previous methods, the proposed algorithm does not rely on certain camera motions or assumptions about the scene geometry. The scene is represented as a stack of parallel hypothetical planes each of which is associated with a homography transform. A pixel that belongs to a background scene consistently maps between the consecutive frames based on its transformation with respect to the “hypothetical plane” it lies on. This observation disambiguates moving objects from the background. Experiments show that the proposed method, when compared to the recent literature, can successfully detect moving objects in complex scenes and with significant camera motion. | Background subtraction for the moving camera: A geometric approach |
S1077314214001350 | Catadioptric systems consist of the combination of lenses and mirrors. Among them, central panoramic systems stand out because they provide a unique effective viewpoint, leading to the well-known unifying theory for central catadioptric systems. This paper considers catadioptric systems consisting of a conical mirror and a perspective camera. Although a system with a conical mirror does not possess a single projection point, it has some advantages: the cone is a very simple shape to produce, it offers higher resolution in the periphery, and it adds less optical distortion to the images. The contributions of this work are the model of this non-central system by means of projective mappings from a torus to a plane, the procedure to calibrate this system, and the definition of the conical fundamental matrix with a role similar to that of perspective cameras. Additionally, a procedure to compute the relative motion between two views from the conical fundamental matrix is presented. The proposal is illustrated with simulations and real experiments. | Unitary torus model for conical mirror based catadioptric system
S1077314214001362 | This work focuses on tracking objects being used by humans. These objects are often small, fast moving and heavily occluded by the user. Attempting to recover their 3D position and orientation over time is a challenging research problem. To make progress we appeal to the fact that these objects are often used in a consistent way. The body poses of different people using the same object tend to have similarities, and, when considered relative to those body poses, so do the respective object poses. Our intuition is that, in the context of recent advances in body-pose tracking from RGB-D data, robust object-pose tracking during human-object interactions should also be possible. We propose a combined generative and discriminative tracking framework able to follow gradual changes in object-pose over time but also able to re-initialise object-pose upon recognising distinctive body-poses. The framework is able to predict object-pose relative to a set of independent coordinate systems, each one centred upon a different part of the body. We conduct a quantitative investigation into which body parts serve as the best predictors of object-pose over the course of different interactions. We find that while object-translation should be predicted from nearby body parts, object-rotation can be more robustly predicted by using a much wider range of body parts. Our main contribution is to provide the first object-tracking system able to estimate 3D translation and orientation from RGB-D observations of human-object interactions. By tracking precise changes in object-pose, our method opens up the possibility of more detailed computational reasoning about human-object interactions and their outcomes. For example, in assistive living systems that go beyond just recognising the actions and objects involved in everyday tasks such as sweeping or drinking, to reasoning that a person has “missed sweeping under the chair” or “not drunk enough water today”. | Tracking object poses in the context of robust body pose estimates |
S1077314214001374 | With the widespread proliferation of computers, many human activities entail the use of automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, since they produced precision-recall results much better than several other well-known shape description methods. When compared to the Beam Angle Statistics (BAS) method, a shape description method that inspired their development, both the HTS and the HTSn methods presented inferior results regarding the precision-recall criterion, but superior results in the processing time and multiscale separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks when very large databases are used, which are very common nowadays. | HTS and HTSn: New shape descriptors based on Hough transform statistics
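A sketch of the general recipe follows: map the shape contour to Hough space and summarize the accumulator with simple per-angle statistics. The specific statistics below are illustrative assumptions, not the paper's exact HTS definition.

```python
import numpy as np
from skimage.transform import hough_line

def hts_descriptor(contour_img, n_angles=180):
    """Shape signature from statistics of the Hough accumulator of a binary
    contour image: per-angle vote mass and dispersion, L2-normalized.
    Linear in the number of contour pixels, as in the HTS family."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles, endpoint=False)
    H, _, _ = hough_line(contour_img, theta=theta)
    H = H.astype(float)
    mass = H.sum(axis=0)        # votes per angle
    spread = H.std(axis=0)      # dispersion per angle
    feat = np.concatenate([mass, spread])
    return feat / (np.linalg.norm(feat) + 1e-12)
```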
S1077314214001386 | In this paper, we present a methodology for refining the segmentation of human silhouettes in indoor videos acquired by fisheye cameras. This methodology is based on a fisheye camera model that employs a spherical optical element and central projection. The parameters of the camera model are determined only once (during calibration), using the correspondence of a number of user-defined landmarks, both in real world coordinates and on a captured video frame. Subsequently, each pixel of the video frame is inversely mapped to the direction of view in the real world and the relevant data are stored in look-up tables for fast utilization in real-time video processing. The proposed fisheye camera model enables the inference of possible real world positions and conditionally the height and width of a segmented cluster of pixels in the video frame. In this work we utilize the proposed calibrated camera model to achieve a simple geometric reasoning that corrects gaps and mistakes of the human figure segmentation, detects segmented human silhouettes inside and outside the room and rejects segmentation that corresponds to non-human activity. Unique labels are assigned to each refined silhouette, according to their estimated real world position and appearance and the trajectory of each silhouette in real world coordinates is estimated. Experimental results are presented for a number of video sequences, in which the number of false positive pixels (regarding human silhouette segmentation) is substantially reduced as a result of the application of the proposed geometry-based segmentation refinement. | Refinement of human silhouette segmentation in omni-directional indoor videos |
S1077314214001398 | Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of human body scans with clothing that fits relatively closely to the body. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2,18] as statistical model. | Estimation of human body shape and posture under clothing |
S1077314214001404 | This paper presents a representation of 3D facial motion sequences that allows performing statistical analysis of 3D face shapes in motion. The resulting statistical analysis is applied to automatically generate realistic facial animations and to recognize dynamic facial expressions. To perform statistical analysis of 3D facial shapes in motion over different subjects and different motion sequences, a large database of motion sequences needs to be brought in full correspondence. Existing algorithms that compute correspondences between 3D facial motion sequences either require manual input or suffer from instabilities caused by drift. For large databases, algorithms that require manual interaction are not practical. We propose an approach to robustly compute correspondences between a large set of facial motion sequences in a fully automatic way using a multilinear model as statistical prior. In order to register the motion sequences, a good initialization is needed. We obtain this initialization by introducing a landmark prediction method for 3D motion sequences based on Markov Random Fields. Using this motion sequence registration, we find a compact representation of each motion sequence consisting of one vector of coefficients for identity and a high dimensional curve for expression. Based on this representation, we synthesize new motion sequences and perform expression recognition. We show experimentally that the obtained registration is of high quality, where 56% of all vertices lie within 1 mm of the input data, and that our synthesized motion sequences look realistic. | 3D faces in motion: Fully automatic registration and statistical analysis
S1077314214001416 | Evaluating the performance of computer vision algorithms is classically done by reporting classification error or accuracy, if the problem at hand is the classification of an object in an image, the recognition of an activity in a video or the categorization and labeling of the image or video. If in addition the detection of an item in an image or a video, and/or its localization are required, frequently used metrics are Recall and Precision, as well as ROC curves. These metrics give quantitative performance values which are easy to understand and to interpret even by non-experts. However, an inherent problem is the dependency of quantitative performance measures on the quality constraints that we need to impose on the detection algorithm. In particular, an important quality parameter of these measures is the spatial or spatio-temporal overlap between a ground-truth item and a detected item, and this needs to be taken into account when interpreting the results. We propose a new performance metric addressing and unifying the qualitative and quantitative aspects of the performance measures. The performance of a detection and recognition algorithm is illustrated intuitively by performance graphs which present quantitative performance values, like Recall, Precision and F-Score, depending on quality constraints of the detection. In order to compare the performance of different computer vision algorithms, a representative single performance measure is computed from the graphs, by integrating out all quality parameters. The evaluation method can be applied to different types of activity detection and recognition algorithms. The performance metric has been tested on several activity recognition algorithms participating in the ICPR 2012 HARL competition. | Evaluation of video activity localizations integrating quality and quantity measurements
S1077314214001520 | The family of shortest isothetic paths (FSIP) between two grid points in a digital object A is defined to be the collection of all possible shortest isothetic paths that connect them. We propose here a fast algorithm to compute the FSIP between two given grid points inside a digital object, which is devoid of any hole. The proposed algorithm works with the object boundary as input and does not resort to analysis of the interior pixels. Given the digital object A with n boundary pixels, it first constructs the inner isothetic cover that tightly inscribes A in O((n/g) log(n/g)) time, where g is a positive integer that denotes the unit of the underlying square grid. Then, for any two points on the inner cover, it computes the FSIP in O(n/g) time, using certain combinatorial rules based on the characteristic properties of FSIP. We report experimental results that show the effectiveness of the algorithm and its further prospects in shape analysis, object decomposition, and in other related applications. | On the family of shortest isothetic paths in a digital object—An algorithm with applications
S1077314214001532 | Identification of illumination, the main step in colour constancy processing, is an important problem in imaging for digital images or video, forming a prerequisite for many computer vision applications. In this paper we present a new and effective physics-based colour constancy algorithm which makes use of a novel Log-Relative-Chromaticity planar constraint. We call the new feature the Zeta-image. We show that this new feature makes use of a novel application of the Kullback–Leibler Divergence, here applied to chromaticity values instead of probabilities. The new method requires no training data or tunable parameters. Moreover it is simple to implement and very fast. Our experimental results across datasets of real images show that the proposed method significantly outperforms other unsupervised methods while its estimation accuracy is comparable with more complex, supervised, methods. We also show that the new planar constraint can be used as a post-processing stage for any candidate colour constancy method in order to improve its accuracy. Its application in this paper demonstrates its utility, delivering state-of-the-art performance. The Zeta-image is a wholly new representation for understanding highlights in images, and we show as well that it can be used to identify and remove specularities. More generally, since the Zeta-image is intimately bound up with specularities, we show how specular content in the image can be manipulated, either decreasing or increasing highlights. | The Zeta-image, illuminant estimation, and specularity manipulation
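One reading of the "KL divergence applied to chromaticity values" idea is sketched below. This is an illustrative approximation, not the paper's exact Zeta-image formula.

```python
import numpy as np

def zeta_map(rgb, illum_rgb, eps=1e-6):
    """Per-pixel Kullback-Leibler-style divergence between pixel chromaticity
    and a candidate illuminant chromaticity, treating chromaticities like
    probability vectors as the abstract describes. Small values flag pixels
    (e.g. near specular highlights) consistent with the candidate illuminant."""
    chrom = rgb / (rgb.sum(axis=-1, keepdims=True) + eps)   # pixel chromaticity
    e = np.asarray(illum_rgb, float)
    e = e / e.sum()                                         # illuminant chromaticity
    return np.sum(chrom * np.log((chrom + eps) / (e + eps)), axis=-1)
```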
S1077314214001544 | The annotation of image and video data of large datasets is a fundamental task in multimedia information retrieval and computer vision applications. The aim of annotation tools is to relieve the user from the burden of the manual annotation as much as possible. To achieve this ideal goal, many different functionalities are required in order to make the annotation process as automatic as possible. Motivated by the limitations of existing tools, we have developed the iVAT: an interactive Video Annotation Tool. It supports manual, semi-automatic and automatic annotations through the interaction of the user with various detection algorithms. To the best of our knowledge, it is the first tool that integrates several computer vision algorithms working in an interactive and incremental learning framework. This makes the tool flexible and suitable to be used in different application domains. A quantitative and qualitative evaluation of the proposed tool on a challenging case study domain is presented and discussed. Results demonstrate that the use of the semi-automatic, as well as the automatic, modality drastically reduces the human effort while preserving the quality of the annotations. | An interactive tool for manual, semi-automatic and automatic video annotation
S1077314214001556 | We propose an Euclidean medial axis filtering method which generates subsets of the Euclidean medial axis in discrete grids, where the filtering rate is controlled by one parameter. The method is inspired by Miklos’, Giesen’s and Pauly’s scale axis method which preserves important features of an input object from a shape understanding point of view even if they are at different scales. There is an important difference between the axis produced by our method and the scale axis. Contrary to ours, the scale axis is not, in general, a subset of the Euclidean medial axis. It is even not necessarily a subset of the original shape. In addition, we propose a new method for the generation of a hierarchy of scale filtered Euclidean medial axes. We prove the correctness of the method. The methods and their properties are presented in 2D space but they can be easily extended to any dimension. Moreover, we propose a new methodology for the experimental comparison of medial axis filtering algorithms, based on five different quality criteria. This methodology allows one to compare algorithms independently of the meaning of their filtering parameter, which ensures a fair comparison. The results of this comparison with related, previously introduced methods are included and discussed. | Scale filtered Euclidean medial axis and its hierarchy
S1077314214001568 | This article presents a framework supporting rapid prototyping of multimodal applications, the creation and management of datasets and the quantitative evaluation of classification algorithms for the specific context of gesture recognition. A review of the available corpora for gesture recognition highlights their main features and characteristics. The central part of the article describes a novel method that facilitates the cumbersome task of corpora creation. The developed method supports automatic ground truthing of the data during the acquisition of subjects by enabling automatic labeling and temporal segmentation of gestures through scripted scenarios. The temporal errors generated by the proposed method are quantified, and their impact on the performance of recognition algorithms is evaluated and discussed. The proposed solution offers an efficient approach to reduce the time required to ground truth corpora for natural gestures in the context of close human–computer interaction. | Gesture recognition corpora and tools: A scripted ground truthing method
S107731421400157X | In the object recognition community, much effort has been spent on devising expressive object representations and powerful learning strategies for designing effective classifiers, capable of achieving high accuracy and generalization. In this scenario, the focus on the training sets has been historically weak; by and large, training sets have been generated with substantial human intervention, requiring considerable time. In this paper, we present a strategy for automatic training set generation. The strategy uses semantic knowledge coming from WordNet, coupled with the statistical power provided by Google Ngram, to select a set of meaningful text strings related to the text class-label (e.g., “cat”), that are subsequently fed into the Google Images search engine, producing sets of images with high training value. Focusing on the classes of different object recognition benchmarks (PASCAL VOC 2012, Caltech-256, ImageNet, GRAZ and OxfordPet), our approach collects novel training images, compared to the ones obtained by exploiting Google Images with the simple text class-label. In particular, we show that the gathered images are better able to capture the different visual facets of a concept, thus encoding in a more successful manner the intra-class variance. As a consequence, training standard classifiers with this data produces performances not too distant from those obtained from the classical hand-crafted training sets. In addition, our datasets generalize well and are stable, that is, they provide similar performances on diverse test datasets. This process does not require manual intervention and is completed in a few hours. | Semantically-driven automatic creation of training sets for object recognition
S1077314214001581 | We present an approach for determining the temporal consistency of Particle Filters in video tracking based on model validation of their uncertainty over sliding windows. The filter uncertainty is related to the consistency of the dispersion of the filter hypotheses in the state space. We learn an uncertainty model via a mixture of Gamma distributions whose optimal number of components is selected by modified information-based criteria. The time-accumulated model is estimated as the sequential convolution of the uncertainty model. Model validation is performed by verifying whether the output of the filter belongs to the convolution model through its approximated cumulative density function. Experimental results and comparisons show that the proposed approach improves both precision and recall of competitive approaches such as Gaussian-based online model extraction, bank of Kalman filters and empirical thresholding. We combine the proposed approach with a state-of-the-art online performance estimator for video tracking and show that it improves accuracy compared to the same estimator with manually tuned thresholds while reducing the overall computational cost. | Temporal validation of Particle Filters for video tracking
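The validation logic can be sketched as follows, with two simplifications loudly flagged: a single Gamma distribution in place of the learned Gamma mixture, and Monte Carlo summation in place of the sequential convolution.

```python
import numpy as np
from scipy import stats

def validate_uncertainty(history, window, alpha=0.05, n_mc=10000, seed=0):
    """Fit a Gamma model to past per-frame filter uncertainties, approximate
    the window-accumulated distribution by Monte Carlo (a stand-in for the
    paper's sequential convolution of a Gamma mixture), and test whether the
    latest accumulated uncertainty lies inside the (1 - alpha) band."""
    rng = np.random.default_rng(seed)
    a, loc, scale = stats.gamma.fit(history, floc=0)   # single-Gamma stand-in
    sums = stats.gamma.rvs(a, loc=loc, scale=scale,
                           size=(n_mc, window), random_state=rng).sum(axis=1)
    observed = np.sum(history[-window:])
    lo, hi = np.quantile(sums, [alpha / 2, 1 - alpha / 2])
    return lo <= observed <= hi                         # True: filter is consistent
```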
S1077314214001593 | This paper presents a system for crowdsourcing saliency interest points for 3D photo-textured maps rendered on smartphones and tablets. An app was created that is capable of interactively rendering 3D reconstructions gathered with an Autonomous Underwater Vehicle. Through hundreds of thousands of logged user interactions with the models we attempt to data-mine salient interest points. To this end we propose two models for calculating saliency from human interaction with the data. The first uses the view frustum of the camera to track the amount of time points are on screen. The second uses the velocity of the camera as an indicator of saliency and uses a Hidden Markov model to learn the classification of salient and non-salient points. To provide a comparison to existing techniques several traditional visual saliency approaches are applied to orthographic views of the models’ photo-texturing. The results of all approaches are validated with human attention ground truth gathered using a remote gaze-tracking system that recorded the locations of the person’s attention while exploring the models. | Discovering salient regions on 3D photo-textured maps: Crowdsourcing interaction data from multitouch smartphones and tablets |
S107731421400160X | We present a passive forensics method to distinguish photorealistic computer graphics (PRCG) from natural images (photographs). The goals of our work are to improve the detection accuracy and the robustness to content-preserving image manipulations. In the proposed method, homomorphic filtering is used to highlight the detail information of the image. We find that the texture changes differ between photographs and PRCG images under the same homomorphic filtering transformation, and we use difference matrices to describe these differences in texture change. We define a customized statistical feature, named texture similarity, and combine it with the statistical features extracted from the co-occurrence matrices of the difference matrices to construct the forensics features. We then develop a statistical model and use an SVM as classifier to distinguish PRCG from photographs. Experimental results show that the proposed method enjoys the following advantages: (1) It reaches higher detection accuracy while remaining robust to content-preserving manipulations such as JPEG compression, noise addition, histogram equalization, and filtering. (2) It has satisfactory generalization capability, remaining effective when the training samples and the testing samples come from different sources. | A statistical feature based approach to distinguish PRCG from photographs
S1077314214001611 | Existing crowd counting algorithms rely on holistic, local or histogram based features to capture crowd properties. Regression is then employed to estimate the crowd size. Insufficient testing across multiple datasets has made it difficult to compare and contrast different methodologies. This paper presents an evaluation across multiple datasets to compare holistic, local and histogram based methods, and to compare various image features and regression models. A K-fold cross validation protocol is followed to evaluate the performance across five public datasets: the UCSD, PETS 2009, Fudan, Mall and Grand Central datasets. Image features are categorised into five types: size, shape, edges, keypoints and textures. The regression models evaluated are: Gaussian process regression (GPR), linear regression, K nearest neighbours (KNN) and neural networks (NN). The results demonstrate that local features outperform equivalent holistic and histogram based features; that optimal performance is observed using all image features except textures; and that GPR outperforms linear, KNN and NN regression. | An evaluation of crowd counting methods, features and regression models
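The regression stage itself is standard; a minimal scikit-learn sketch of the best-performing configuration (GPR on frame-level features) is given below. The kernel is a common default, not necessarily the one used in the evaluation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.preprocessing import StandardScaler

def train_counter(features, counts):
    """Train a Gaussian process regressor mapping per-frame crowd features
    (size, shape, edge, keypoint, texture statistics) to crowd size.
    Returns a prediction function for new frames."""
    scaler = StandardScaler().fit(features)
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                   normalize_y=True)
    gpr.fit(scaler.transform(features), counts)
    return lambda X: gpr.predict(scaler.transform(X))
```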
S1077314214001805 | We propose a novel approach for online action recognition. The action is represented in a low-dimensional (15D) space using a covariance descriptor of shape and motion features – spatio-temporal coordinates and optical flow of pixels belonging to extracted silhouettes. We analyze the applicability of the descriptor for online scenarios where action classification is performed based on incomplete spatio-temporal volumes. In order to enable our online action classification algorithm to be applied in real time, we introduce two modifications, namely the incremental covariance update and the on demand nearest neighbor classification. In our experiments we use quality measures, such as latency, specially designed for the online scenario to report the algorithm’s performance. We evaluate the performance of our descriptor on standard, publicly available datasets for gesture recognition, namely the Cambridge-Gestures dataset and the ChaLearn One-Shot-Learning dataset and show that its performance is comparable to the state-of-the-art despite its relative simplicity. The evaluation on the UCF-101 action recognition dataset demonstrates that the descriptor is applicable in challenging unconstrained environments. | Online action recognition using covariance of shape and motion
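The incremental covariance update that makes such a descriptor cheap to maintain online can be sketched with the standard Welford-style recursion; the paper's exact update may differ in detail.

```python
import numpy as np

class IncrementalCovariance:
    """Online update of the mean and covariance of per-pixel feature
    vectors, so the descriptor can be refreshed frame by frame without
    recomputing over the whole spatio-temporal volume."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))   # sum of outer products of deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x - self.mean)

    def covariance(self):
        return self.M2 / max(self.n - 1, 1)
```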
S1077314214001817 | Covariance is a well-established characterisation of the output uncertainty for estimators dealing with noisy data. It is conventionally estimated via first-order forward propagation (FOP) of input covariance. However, since FOP employs a local linear approximation of the estimator, its reliability is compromised in the case of nonlinear transformations. An alternative method, scaled unscented transformation (SUT) is known to cope with such cases better. However, despite the nonlinear nature of many vision problems, its adoption remains limited. This paper investigates the application of SUT to common minimal geometry solvers, a class of algorithms at the core of many applications ranging from image stitching to film production and robot navigation. The contributions include an experimental comparison of SUT against FOP on synthetic and real data, and practical suggestions for adapting the original SUT to the geometry solvers. The experiments demonstrate the superiority of SUT to FOP as a covariance estimator, over a range of scene types and noise levels, on synthetic and real data. | Covariance estimation for minimal geometry solvers via scaled unscented transformation
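For reference, the scaled unscented transformation itself is compact. A minimal sketch with the standard sigma points and weights follows; the paper's contribution lies in adapting this to minimal geometry solvers, which is not reproduced here.

```python
import numpy as np

def sut_covariance(f, mu, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean mu and covariance P through a nonlinear map f:
    deterministically chosen sigma points are pushed through f and
    re-averaged with the standard SUT weights."""
    n = len(mu)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    sigma = np.vstack([mu, mu + S.T, mu - S.T])   # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigma])
    ym = wm @ Y
    d = Y - ym
    return ym, (wc[:, None] * d).T @ d            # transformed mean, covariance
```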
S1077314214001829 | In Structure From Motion (SFM), image features are matched in either an extended number of frames or only in pairs of consecutive frames. Traditionally, SFM filters have been applied using only one of the two matching paradigms, with the Long Range (LR) feature technique being more popular because features matched across multiple frames provide stronger constraints on structure and motion. Nevertheless, Frame-to-Frame (F2F) features possess the desirable property of being abundant because of the large similarity that exists between closely spaced frames. Although the use of such features has been limited mostly to the determination of inter-frame camera motion, we argue that significant improvements can be attained in online filter-based SFM by integrating the F2F features into filters that use LR features. The main contributions of this paper are twofold. First, it presents a new method that enables the incorporation of F2F information in any analytical filter in a fashion that requires minimal change to the existing filter. Our results show that by doing so, large increases in accuracy are achieved in both the structure and motion estimates. Second, thanks to mathematical simplifications we realize in the filter, we minimize the computational burden of F2F integration by two orders of magnitude, thereby enabling its real-time implementation. Experimental results on real and simulated data prove the success of the proposed approach. | Augmenting analytic SFM filters with frame-to-frame features
S1077314214001830 | Time-of-flight cameras provide depth information, which is complementary to the photometric appearance of the scene in ordinary images. It is desirable to merge the depth and colour information, in order to obtain a coherent scene representation. However, the individual cameras will have different viewpoints, resolutions and fields of view, which means that they must be mutually calibrated. This paper presents a geometric framework for the resulting multi-view and multi-modal calibration problem. It is shown that three-dimensional projective transformations can be used to align depth and parallax-based representations of the scene, with or without Euclidean reconstruction. A new evaluation procedure is also developed; this allows the reprojection error to be decomposed into calibration and sensor-dependent components. The complete approach is demonstrated on a network of three time-of-flight and six colour cameras. The applications of such a system, to a range of automatic scene-interpretation problems, are discussed. | Cross-calibration of time-of-flight and colour cameras |
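The core alignment operation here is the application of a 4x4 projective transformation to homogeneous 3-D points, e.g. to map a depth camera’s reconstruction into another camera’s frame. A minimal generic sketch — estimating the transformation itself is the subject of the paper:

```python
import numpy as np

def apply_projective(H, pts):
    """Map (N, 3) points through a 4x4 projective transformation H."""
    ph = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coords
    out = ph @ H.T
    return out[:, :3] / out[:, 3:4]                    # dehomogenize
```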
S1077314214001842 | This article presents a new method for analysing mosaics based on the mathematical principles of Symmetry Groups. The method has been developed to capture the structure present in patterns by extracting the objects that form them, their lattice, and their Wallpaper Group. The main novelty of this method resides in the creation of a higher level of knowledge based on objects, which makes it possible to classify the objects, to extract their main features (Point Group, principal axes, etc.), and to identify the relationships between them. In order to validate the method, several tests were carried out on a set of Islamic Geometric Patterns from different sources, for which the Wallpaper Group was successfully obtained in 85% of the cases. The method can be applied to any kind of pattern that presents a Wallpaper Group. Possible applications of this computational method include pattern classification, cataloguing of ceramic coatings, creating databases of decorative patterns, creating pattern designs, comparing patterns between different cultures, tile cataloguing, and so on. | A new method to analyse mosaics based on Symmetry Group theory applied to Islamic Geometric Patterns
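As a crude illustration of one ingredient, the rotational part of a Point Group can be probed by testing which crystallographic rotation orders (2, 3, 4 and 6 are the only ones a Wallpaper Group admits) leave a motif invariant. This simplification is ours, not the paper’s extraction procedure:

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_orders(motif, orders=(2, 3, 4, 6), tol=0.05):
    """Return the rotation orders under which the motif maps onto itself."""
    m = motif.astype(np.float64)
    m = (m - m.mean()) / (m.std() + 1e-8)
    h, w = m.shape
    yy, xx = np.ogrid[:h, :w]
    # compare only inside the inscribed disk to avoid corner artifacts
    disk = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= (min(h, w) / 2 - 1) ** 2
    found = []
    for n in orders:
        r = rotate(m, 360.0 / n, reshape=False, order=1)
        if np.mean((m - r)[disk] ** 2) < tol:
            found.append(n)
    return found
```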
S1077314214001854 | Multi-script identification helps in automatically selecting an appropriate OCR engine when a video contains several scripts; however, script identification in video frames is challenging because the low resolution and complex background of video often cause disconnections or the loss of text information. This paper presents a novel idea that integrates Gradient-Spatial-Features (GSpF) and Gradient-Structural-Features (GStF) at the block level, based on an error factor and the weights of the features, to identify six video scripts, namely Arabic, Chinese, English, Japanese, Korean and Tamil. Horizontal and vertical gradient values are first computed for each text block to increase the contrast of text pixels. The method then divides the horizontal and the vertical gradient blocks into two equal parts at the centroid in the horizontal direction. A histogram operation on each part selects dominant text pixels from the respective subparts of the horizontal and the vertical gradient blocks, which results in text components. After extracting GSpF and GStF from the text components, we propose to integrate the spatial and the structural features, based on end points, intersection points, junction points and the straightness of the skeleton of text components, in a novel way to identify the scripts. The method is evaluated on 970 video frames of six scripts, which involve font, font-size and contrast variations, and is compared with an existing method in terms of classification rate. Experimental results show that the proposed method achieves an 83.0% average classification rate for video script identification. The method is also evaluated on noisy images and scanned low-resolution documents, illustrating the robustness and the extensibility of the proposed Gradient-Spatial-Structural Features. | New Gradient-Spatial-Structural Features for video script identification
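A hedged sketch of the contrast-boosting step: horizontal and vertical gradients are combined and the strongest responses are kept as candidate text pixels. The quantile-based selection stands in for the paper’s histogram operation, and keep_ratio is our illustrative parameter:

```python
import cv2
import numpy as np

def dominant_text_pixels(block, keep_ratio=0.2):
    """Keep the strongest gradient responses in a text block as text pixels."""
    gx = cv2.Sobel(block, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(block, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    mag = cv2.magnitude(gx, gy)
    thresh = np.quantile(mag, 1.0 - keep_ratio)
    return (mag >= thresh).astype(np.uint8)
```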
S1077314214001866 | In multi-atlas based segmentation, a target image is segmented by registering multiple atlas images to this target image and propagating the corresponding atlas segmentations. These propagated segmentations are then combined into a single segmentation in a process called label fusion. Multi-atlas based segmentation allows fully automatic segmentation of image populations that exhibit a large variability in shape and image quality; fusing the results of multiple atlases makes the technique robust and reliable. Previously, we presented the SIMPLE method for label fusion and showed that it outperforms existing methods. However, the downside of this method is its computation time and the fact that it requires a large atlas set. This is not always a problem, but in some cases segmentation may be time-critical or large atlas sets may not be available. This paper presents a new label fusion method, a local version of the SIMPLE method, that has two advantages: when a large atlas set is available it improves the accuracy of label fusion, and when this is not the case it gives the same accuracy as the original SIMPLE method but with considerably fewer atlases. This is made possible by better utilizing the local information contained in propagated segmentations that would otherwise be discarded. Our method (semi-)automatically divides the propagated segmentations into multiple regions. A label fusion process can then be applied to each of these regions separately and the end result can be reconstructed from the partial results. We demonstrate that the number of atlases needed can be reduced to 20 without compromising segmentation quality. Our method is validated in an application to segmentation of the prostate, using an atlas set of 125 manually segmented images. | Improving label fusion in multi-atlas based segmentation by locally combining atlas selection and performance estimation
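For context, the simplest fusion rule that methods such as SIMPLE refine is per-voxel majority voting over the propagated atlas segmentations; a minimal sketch for binary labels:

```python
import numpy as np

def majority_vote(propagated):
    """propagated: (n_atlases, ...) binary label maps registered to the
    target image; returns the per-voxel majority label."""
    votes = propagated.sum(axis=0)
    return (votes > propagated.shape[0] / 2).astype(np.uint8)
```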
S1077314214001878 | In the context of category-level scene classification, the bag-of-visual-words model (BoVW) is widely used for image representation. This model is appearance based and does not contain any information regarding the arrangement of the visual words in the 2D image space. To overcome this problem, recent approaches try to capture information about either the absolute or the relative spatial location of visual words. In the first category, the so-called Spatial Pyramid Representation (SPR) is very popular thanks to its simplicity and good results. Alternatively, adding information about occurrences of relative spatial configurations of visual words was proven effective, but at the cost of higher computational complexity, specifically when relative distances and angles are taken into account. In this paper, we introduce a novel way to incorporate both distance and angle information into the BoVW representation. The novelty is, first, to provide a computationally efficient representation that adds relative spatial information between visual words and, second, to use a soft pairwise voting scheme based on the distance in the descriptor space. Experiments on the challenging MSRC-2, 15Scene, Caltech101, Caltech256 and Pascal VOC 2007 data sets demonstrate that our method outperforms or is competitive with concurrent approaches. We also show that it provides important complementary information to the spatial pyramid matching and can improve the overall performance. | Spatial histograms of soft pairwise similar patches to improve the bag-of-visual-words model
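A simplified sketch of a pairwise spatial histogram with soft voting in descriptor space. Unlike the paper, we bin only pairwise distances (omitting the angle bins), and the exponential soft-assignment kernel is our own choice:

```python
import numpy as np
from scipy.spatial.distance import cdist

def soft_pairwise_histogram(xy, desc, centers, dist_edges, sigma=0.5):
    """xy: (N, 2) keypoint positions; desc: (N, D) descriptors;
    centers: (K, D) visual-word centres; dist_edges: spatial bin edges."""
    soft = np.exp(-cdist(desc, centers) / sigma)   # soft word assignment
    soft /= soft.sum(axis=1, keepdims=True)
    K, nbins = centers.shape[0], len(dist_edges) - 1
    hist = np.zeros((K, K, nbins))
    d = cdist(xy, xy)                              # pairwise image distances
    for i in range(len(xy)):
        for j in range(i + 1, len(xy)):
            b = np.searchsorted(dist_edges, d[i, j]) - 1
            if 0 <= b < nbins:
                hist[:, :, b] += np.outer(soft[i], soft[j])
    return hist.ravel()
```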
S107731421400188X | With the advent of the digital camera, a popular image processing technique is high dynamic range (HDR) imaging, which aims to overcome the technological limitations of the irradiance sensor’s dynamic range. In this paper, we present a new method to combine low dynamic range (LDR) images for HDR processing. The method is based on the theory of evidence. Without prior knowledge of the sensor’s intrinsic parameters and without extra data, it locally maximizes the signal-to-noise ratio over the entire acquisition dynamic. In addition, our method is less sensitive to objects or people moving in the scene, which cause ghost-like artifacts with conventional methods. The technique requires that the camera be absolutely still between exposures, or else a translational alignment is needed. Simulation and experimental results are presented to demonstrate both the accuracy and the efficiency of our algorithm. | Evidence theory for high dynamic range reconstruction with linear digital cameras
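As a baseline for what the evidence-theoretic fusion improves upon, a conventional weighted-average HDR merge looks as follows; the hat weighting and the assumption of aligned, linear-response images in [0, 1] are ours for illustration:

```python
import numpy as np

def merge_ldr(images, exposure_times):
    """images: list of aligned float arrays in [0, 1] (linear response);
    exposure_times: corresponding exposure durations in seconds."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-range pixels most
        num += w * img / t                  # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```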
S1077314214001891 | The problem of image segmentation is formulated in terms of recursive partitioning of segments into subsegments by optimizing the proposed objective function via graph cuts. Our approach uses a special normalization of the objective function, which enables the production of a hierarchy of regular superpixels that adhere to image boundaries. To enforce compactness and visual homogeneity of segments, a regularization strategy is proposed. Experiments on the Berkeley dataset show that the proposed algorithm performs comparably to state-of-the-art superpixel methods. | A graph based approach to hierarchical image over-segmentation
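The recursive partitioning can be illustrated with the standard spectral relaxation of a normalized graph objective: split on the Fiedler vector, then recurse on each side to grow the hierarchy. A generic stand-in, not the paper’s exact objective or optimizer:

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def bipartition(W):
    """W: sparse symmetric affinity matrix over pixels or segments.
    Returns a boolean split derived from the Fiedler vector."""
    L = laplacian(W.astype(np.float64), normed=True)
    _, vecs = eigsh(L, k=2, which="SM")   # two smallest eigenpairs
    return vecs[:, 1] > 0                 # sign of the Fiedler vector
```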
S1077314214001908 | This article presents a new approach for constructing connected operators for image processing and analysis. It relies on an unsupervised hierarchical Markovian algorithm to classify the nodes of the traditional Max-Tree. This approach makes it possible to handle multivariate attributes naturally, in a robust and non-local way. The technique is demonstrated on several image analysis tasks – filtering, segmentation, and source detection – on astronomical and biomedical images. The obtained results show that the method is competitive despite its general formulation. This article also provides new insight into the field of hierarchical Markovian image processing, showing that morphological trees can advantageously replace traditional quadtrees. | Connected image processing with multivariate attributes: An unsupervised Markovian classification approach
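As an illustration of the underlying structure, scikit-image exposes the Max-Tree directly; per-node attributes such as area, accumulated as below, are the kind of (possibly multivariate) attribute vectors that a classifier over tree nodes would label:

```python
import numpy as np
from skimage.morphology import max_tree

def maxtree_area(image):
    """Area attribute of every Max-Tree node (one pixel per node)."""
    parent, order = max_tree(image, connectivity=2)
    area = np.ones(image.size, dtype=np.int64)
    par = parent.ravel()
    for p in order[::-1]:        # accumulate from the leaves to the root
        if par[p] != p:
            area[par[p]] += area[p]
    return area.reshape(image.shape)
```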
S107731421400191X | Classical image segmentation techniques in computer vision exploit visual cues such as image edges, lines, color and texture. Due to the complexity of real scenarios, the main challenge is achieving meaningful segmentation of the imaged scene since real objects have substantial discontinuities in these visual cues. In this paper, a new focus-based perceptual cue is introduced: the focus signal. The focus signal captures the variations of the focus level of every image pixel as a function of time and is directly related to the geometry of the scene. In a practical application, a sequence of images corresponding to an autofocus sequence is processed in order to infer geometric information of the imaged scene using the focus signal. This information is integrated with the segmentation obtained using classical cues, such as color and texture, in order to yield an improved scene segmentation. Experiments have been performed using different off-the-shelf cameras including a webcam, a compact digital photography camera and a surveillance camera. Obtained results using Dice’s similarity coefficient and the pixel labeling error show that a significant improvement in the final segmentation can be achieved by incorporating the information obtained from the focus signal in the segmentation process. | Focus-aided scene segmentation |
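One common way to realize a per-pixel focus level over an autofocus sequence is local Laplacian energy; the paper’s exact focus measure may differ, so treat this as an assumption-laden sketch:

```python
import cv2
import numpy as np

def focus_signal(frames, ksize=5):
    """frames: list of BGR frames from an autofocus sweep.
    Returns an (H, W, T) stack: focus level of each pixel over time."""
    levels = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32)
        lap = cv2.Laplacian(gray, cv2.CV_32F)
        levels.append(cv2.GaussianBlur(lap * lap, (ksize, ksize), 0))
    return np.stack(levels, axis=-1)
```

The index of the maximum along the last axis indicates which frame brought each pixel into focus, which is what ties the signal to scene geometry.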
S1077314214001921 | People detection in video surveillance environments is a task that has generated great interest. There are many approaches trying to solve the problem, either in controlled scenarios or in very specific surveillance applications. We address one of the main problems of people detection in video sequences: every state-of-the-art people detector must maintain a balance between the number of false detections and the number of missed pedestrians, and this compromise limits the global detection results. To relax this limitation and improve the detection results, we evaluate two different post-processing subtasks. Firstly, we propose the use of people-background segmentation as a filtering stage in people detection. Then, we evaluate the combination of different detection approaches in order to add robustness to the detection and therefore improve the detection results. Finally, we evaluate the successive application of both post-processing approaches. Experiments have been performed on two extensive datasets and with different state-of-the-art people detectors; the results show the benefits achieved using the proposed post-processing techniques. | Post-processing approaches for improving people detection performance
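The first post-processing subtask, people-background segmentation as a filter, can be sketched as discarding detections whose bounding boxes contain too little foreground; min_overlap is our illustrative threshold:

```python
import numpy as np

def filter_by_foreground(detections, fg_mask, min_overlap=0.3):
    """detections: list of (x, y, w, h) boxes; fg_mask: {0, 1} array
    from people-background segmentation."""
    kept = []
    for (x, y, w, h) in detections:
        roi = fg_mask[y:y + h, x:x + w]
        if roi.size and roi.mean() >= min_overlap:  # foreground fraction
            kept.append((x, y, w, h))
    return kept
```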
S1077314214001933 | For tracking systems consisting of multiple cameras with overlapping fields of view, homography-based approaches are widely adopted to significantly reduce occlusions among pedestrians by sharing information among multiple views. However, in these approaches, information in real-world coordinates is exploited only at a preliminary level. Therefore, in this paper, a multi-camera tracking system with integrated crowd simulation is proposed in order to explore how homography information can be made more useful. Two crowd simulators with different simulation strategies are used to investigate the influence of the simulation strategy on the final tracking performance. The performance is evaluated by the multiple object tracking precision and accuracy (MOTP and MOTA) metrics, for all camera views and for the results obtained in real-world coordinates. The experimental results demonstrate that crowd simulators boost the tracking performance significantly, especially for crowded scenes with higher density. In addition, a more realistic simulation strategy helps to further improve the overall tracking result. | Analysis-by-synthesis: Pedestrian tracking with crowd simulation models in a multi-camera video network
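The homography-based sharing of information rests on mapping each detection’s foot point into common ground-plane coordinates; a minimal sketch, assuming the image-to-ground homography H has already been calibrated:

```python
import numpy as np

def to_ground_plane(H, foot_px):
    """H: 3x3 image-to-ground homography; foot_px: (u, v) pixel position
    of a pedestrian's foot point. Returns ground-plane coordinates."""
    p = H @ np.array([foot_px[0], foot_px[1], 1.0])
    return p[:2] / p[2]
```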
S1077314214002069 | Clouds are a cue for estimating weak correspondences in outdoor cameras. These correspondences encode the uncertain spatio-temporal relationships between pixels, both within individual cameras and across networks of cameras. Using this generalized notion of correspondence, we present methods for estimating the geometry of an outdoor scene from: (1) a single calibrated camera, (2) a network of calibrated cameras, and (3) a collection of arbitrary, uncalibrated cameras. Our methods require neither camera motion nor overlapping fields of view, and use simple geometric constraints based on appearance changes caused by cloud shadows. We define these geometric constraints, describe new algorithms for estimating shape given videos from multiple partly cloudy days, and evaluate these algorithms on real and synthetic scenes. | Scene shape estimation from multiple partly cloudy days
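A weak correspondence of the kind described above can be scored, for instance, by the normalized correlation of two pixels’ intensity time series as cloud shadows sweep the scene; a minimal sketch under that assumption:

```python
import numpy as np

def temporal_correlation(series_a, series_b):
    """Correlation of two pixels' intensity time series (1-D arrays)."""
    a = (series_a - series_a.mean()) / (series_a.std() + 1e-8)
    b = (series_b - series_b.mean()) / (series_b.std() + 1e-8)
    return float(np.mean(a * b))
```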
S1077314214002070 | We study techniques for monitoring and understanding real-world human activities, in particular those of drivers, from distributed vision sensors. Real-time and early prediction of maneuvers is emphasized, specifically of overtake and brake events. Studying this particular domain is motivated by the fact that early knowledge of driver behavior, in concert with the dynamics of the vehicle and surrounding agents, can help to recognize dangerous situations. Furthermore, it can assist in developing effective warning and driver assistance systems. Multiple perspectives and modalities are captured and fused in order to achieve a comprehensive representation of the scene. Temporal activities are learned from a multi-camera head pose estimation module, hand and foot tracking, ego-vehicle parameters, lane and road geometry analysis, and surround vehicle trajectories. The system is evaluated on a challenging dataset of naturalistic driving in real-world settings. | On surveillance for safety critical events: In-vehicle video networks for predictive driver assistance systems