FileName | Abstract | Title |
---|---|---|
S1077314213000659 | This paper presents an improved Euclidean Ricci flow method for spherical parameterization. We subsequently develop a scale-space processing method built upon Ricci energy to extract robust surface features for accurate surface registration. Since our method is based on the proposed Euclidean Ricci flow, it inherits the properties of Ricci flow, such as conformality, robustness and intrinsic nature, facilitating efficient and effective surface mapping. Compared with other surface registration methods using curvature or sulcal patterns, our method demonstrates a significant improvement in surface registration. In addition, Ricci energy can capture local differences for surface analysis, as shown in the experiments and applications. | Ricci flow-based spherical parameterization and surface registration |
S1077314213000660 | Deformable registration is prone to errors when it involves large and complex deformations, since the procedure can easily end up in a local minimum. To reduce the number of local minima, and thus the risk of misalignment, regularization terms based on prior knowledge can be incorporated into registration. We propose a regularization term that is based on statistical knowledge of the deformations that are to be expected. A statistical model, trained on the shapes of a set of segmentations, is integrated as a penalty term in a free-form registration framework. For the evaluation of our approach, we perform inter-patient registration of MR images, which were acquired for planning of radiation therapy of cervical cancer. Manual delineations of structures such as the bladder and the clinical target volume are available. For both structures, leave-one-patient-out registration experiments were performed. The propagated atlas segmentations were compared to the manual target segmentations by Dice similarity and Hausdorff distance. Compared with registration without the use of statistical knowledge, the segmentations were significantly improved: by 0.1 in Dice similarity and by 8 mm in Hausdorff distance on average for both structures. | Free-form image registration regularized by a statistical shape model: application to organ segmentation in cervical MR |
S1077314213000684 | Cardiac magnetic resonance imaging (MRI) has been extensively used in the diagnosis of cardiovascular disease and its quantitative evaluation. Cardiac MRI techniques have been progressively improved, providing high-resolution anatomical and functional information. One of the key steps in the assessment of cardiovascular disease is the quantitative analysis of left ventricle (LV) contractile function; thus, accurate delineation of the LV boundary is of great interest for improving diagnostic performance. In this work, we present a novel algorithm for segmenting the LV in cardiac MRI that incorporates an implicit shape prior, without any training phase, using level sets in a variational framework. Segmentation of the LV remains a challenging problem due to its subtle boundary, occlusion, and inhomogeneity. To overcome these difficulties, shape prior knowledge of the anatomical constraints of the LV is integrated into a region-based segmentation framework. The shape prior is based on the anatomical shape similarity between the endocardium and the epicardium: the shape of the endocardium is assumed to be mutually similar, up to scaling, to the shape of the epicardium. An implicit shape representation using signed distance functions is introduced, and the discrepancy between the shapes is measured in a probabilistic way. Our shape constraint is imposed by mutual similarity of shapes, without a training phase that would require a collection of shapes for learning their statistical properties. The performance of the proposed method has been demonstrated on fifteen clinical datasets, showing its potential as a basis for the clinical diagnosis of cardiovascular disease. | Multiphase segmentation using an implicit dual shape prior: Application to detection of left ventricle in cardiac MRI |
S1077314213000702 | In this work we present an improvement to the popular Active Appearance Model (AAM) algorithm, which we call the Multiple-Levelset AAM (MLA). The MLA can simultaneously segment multiple objects, and makes use of multiple levelsets, rather than anatomical landmarks, to define the shapes. AAMs traditionally define the shape of each object using a set of anatomical landmarks; however, landmarks can be difficult to identify, and AAMs traditionally only allow for segmentation of a single object of interest. The MLA, a landmark-independent AAM, allows levelsets of multiple objects to be determined and coupled with image intensities. This gives the MLA the flexibility to simultaneously segment multiple objects of interest in a new image. In this work we apply the MLA to segment the prostate capsule, the prostate peripheral zone (PZ), and the prostate central gland (CG) from a set of 40 endorectal, T2-weighted MRI images. The MLA system we employ leverages a hierarchical segmentation framework, constructed to exploit domain-specific attributes by utilizing a given prostate segmentation to help drive the segmentations of the CG and PZ, which are embedded within the prostate. Our coupled MLA scheme yielded mean Dice accuracy values of 0.81, 0.79 and 0.68 for the prostate, CG, and PZ, respectively, using a leave-one-out cross-validation scheme over 40 patient studies. When only considering the midgland of the prostate, the mean DSC values were 0.89, 0.84, and 0.76 for the prostate, CG, and PZ, respectively. | Simultaneous segmentation of prostatic zones using Active Appearance Models with multiple coupled levelsets |
S1077314213000714 | The problem of extracting anatomical structures from medical images is both very important and difficult. In this paper we are motivated by a new paradigm in medical image segmentation, termed Citizen Science, which involves a volunteer effort from multiple, possibly non-expert, human participants. These contributors observe 2D images and generate their estimates of anatomical boundaries in the form of planar closed curves. The challenge, of course, is to combine these different estimates in a coherent fashion and to develop an overall estimate of the underlying structure. Treating these curves as random samples, we use statistical shape theory to generate joint inferences and analyze the data generated by the citizen scientists. The specific goals of this analysis are: (1) to find a robust estimate of the representative curve that provides an overall segmentation, (2) to quantify the level of agreement between segmentations, both globally (full contours) and locally (parts of contours), and (3) to automatically detect outliers and help reduce their influence in the estimation. We demonstrate these ideas using a number of artificial examples and real applications in medical imaging, and summarize their potential use in future scenarios. | Statistical analysis of manual segmentations of structures in medical images |
S1077314213000726 | Most existing approaches to structure from motion for deformable objects focus on non-incremental solutions using batch-type algorithms: all data is collected before shape and motion reconstruction take place. This methodology is inherently unsuitable for applications that require real-time learning. Ideally, an online system would be capable of incrementally learning and building accurate shapes using current measurement data and past reconstructed shapes, with estimation of 3D structure and camera position done online. Relying only on the measurements available up to the current moment remains a challenging problem. In this paper, a novel approach is proposed for recursive recovery of non-rigid structures from image sequences captured by a single camera. The main novelty of the proposed method is an adaptive algorithm for constructing shape constraints that impose stability on the online reconstructed shapes. The adaptively learned constraints have two aspects: constraints imposed on the basis shapes, the basic “building blocks” from which shapes are reconstructed, and constraints imposed on the mixing coefficients in the form of their probability distribution. Constraints are updated when the current model no longer adequately represents new shapes; this is achieved by means of Incremental Principal Component Analysis (IPCA). The proposed technique is also capable of handling missing data. Results are presented for motion-capture-based data of an articulated face and simple full-body human movement. | Recursive non-rigid structure from motion with online learned shape prior |
S1077314213000738 | Segmenting the right ventricle (RV) in magnetic resonance (MR) images is required for cardiac function assessment. The segmentation of the RV is a difficult task due to low contrast with surrounding tissues and high shape variability. To overcome these problems, we introduce a segmentation method based on a statistical shape model obtained with a principal component analysis (PCA) on a set of representative shapes of the RV. Shapes are not represented by a set of points, but by distance maps to their contour, relaxing the need for a costly landmark detection and matching process. A shape model is thus obtained by computing a PCA on the shape variations. This prior is registered onto the image via a very simple user interaction and then incorporated into the well-known graph cut framework in order to guide the segmentation. Our semi-automatic segmentation method has been applied on 248 MR images of a publicly available dataset (from MICCAI’12 Right Ventricle Segmentation Challenge). We show that encouraging results can be obtained for this challenging application. | Graph cut segmentation with a statistical shape model in cardiac MRI |
S107731421300074X | Organ shape plays an important role in clinical diagnosis, surgical planning and treatment evaluation. Shape modeling is a critical factor affecting the performance of deformable-model-based segmentation methods for organ shape extraction. In most existing works, shape modeling is completed in the original shape space, in the presence of outliers; in addition, the specificity of the patient is not taken into account. This paper proposes a novel target-oriented shape prior model to deal with these two problems in a unified framework. The proposed method measures the intrinsic similarity between the target shape and the training shapes on an embedded manifold using manifold learning techniques. With this approach, shapes in the training set can be selected according to their intrinsic similarity to the target image. With more accurate shape guidance, an optimized search is performed by a deformable model to minimize an energy functional for image segmentation, which is efficiently achieved by dynamic programming. Our method has been validated on 2D prostate localization and 3D prostate segmentation in MRI scans. Compared to other existing methods, our proposed method exhibits better performance in both studies. | Global structure constrained local shape prior estimation for medical image segmentation |
S1077314213000751 | In recent years, gradient vector flow (GVF) based algorithms have been successfully used to segment a variety of 2-D and 3-D imagery. However, due to the compromise between internal and external energy forces within the resulting partial differential equations, these methods may lead to biased segmentation results. In this paper, we propose MSGVF, a mean shift based GVF segmentation algorithm that can successfully locate the correct borders. MSGVF is developed so that, when the contour reaches equilibrium, the various forces resulting from the different energy terms are balanced. In addition, the smoothness constraint on image pixels is preserved so that over- or under-segmentation can be reduced. Experimental results on publicly accessible datasets of dermoscopic and optic disc images demonstrate that the proposed method effectively detects the borders of the objects of interest. | Mean shift based gradient vector flow for image segmentation |
S1077314213000763 | Precise segmentation and identification of thoracic vertebrae is important for many medical imaging applications, though it remains challenging due to the vertebrae's complex shape and varied neighboring structures. In this paper, a new method based on learned bone-structure edge detectors and a coarse-to-fine deformable surface model is proposed to segment and identify vertebrae in 3D CT thoracic images. In the training stage, a discriminative classifier for object-specific edge detection is trained using steerable features, and statistical shape models for the 12 thoracic vertebrae are also learned. For run-time testing, we design a new coarse-to-fine, two-stage segmentation strategy: subregions of a vertebra first deform together as a group; then vertebra mesh vertices in a smaller neighborhood move group-wise to progressively drive the deformable model towards edge response maps by optimizing a probability cost function. In this manner, the smoothness and topology of vertebra shapes are guaranteed. The algorithm achieves a reliable mean point-to-surface error of 0.95 ± 0.91 mm on 40 volumes. A vertebra identification scheme via mean surface mesh matching is also proposed: we achieve a success rate of 73.1% using a single vertebra, and over 95% for 8 or more vertebrae, which is comparable to or slightly better than the state of the art [5]. | Hierarchical segmentation and identification of thoracic vertebra using learning-based edge detection and coarse-to-fine deformable model |
S1077314213000775 | Heart disease is the leading cause of death in the modern world. Cardiac imaging is routinely applied for assessment and diagnosis of cardiac diseases, and computerized image analysis methods are now widely applied to cardiac segmentation and registration in order to extract the anatomy and contractile function of the heart. The vast number of recent papers on this topic points to the need for an up-to-date survey to summarize and classify the published literature. This paper presents a survey of shape modeling applications to cardiac image analysis from MRI, CT, echocardiography, PET, and SPECT, and aims to (1) introduce new methodologies in this field, (2) classify major contributions in image-based cardiac modeling, (3) provide a tutorial for beginners to initiate their own studies, and (4) introduce the major challenges of registration and segmentation and provide practical examples. The techniques surveyed include statistical models, deformable models/level sets, biophysical models, and non-rigid registration using basis functions. About 130 journal articles are categorized based on methodology, output, imaging system, modality, and validation. The advantages and disadvantages of the registration and validation techniques are discussed as appropriate in each section. | A survey of shape-based registration and segmentation techniques for cardiac images |
S1077314213000787 | Segmentation of the left ventricle (LV) is an active topic in cardiac magnetic resonance (MR) image analysis. In this paper, we present an automatic LV myocardial boundary segmentation method using the parametric active contour (snake) model. By convolving the gradient map of an image, a fast external force named gradient vector convolution (GVC) is derived for the snake model. A circle-based energy is incorporated into the GVC snake model to extract the endocardium. With this prior constraint, the snake contour can overcome unexpected local minima stemming from artifacts, papillary muscles, etc. After the endocardium is detected, the original edge map around and within the endocardium is set to zero. This modified edge map is used to generate a new GVC force field, which automatically pushes the snake contour directly to the epicardium, using the endocardium result as initialization. Meanwhile, a novel shape-similarity-based energy is proposed to prevent the snake contour from being trapped in faulty edges and to preserve weak boundaries. Both qualitative and quantitative evaluations on our dataset and a publicly available database (MICCAI 2009) demonstrate the good performance of our algorithm. | Segmentation of the left ventricle in cardiac cine MRI using a shape-constrained snake model |
S1077314213000799 | 3D anatomical shape atlas construction has been extensively studied in medical image analysis research, owing to its importance in model-based image segmentation, longitudinal studies, population-level statistical analysis, etc. Among the multiple steps of 3D shape atlas construction, establishing anatomical correspondences across subjects, i.e., surface registration, is probably the most critical but challenging one. The adaptive focus deformable model (AFDM) [1] was proposed to tackle this problem by exploiting cross-scale geometric characteristics of 3D anatomical surfaces. Although the effectiveness of AFDM has been proven in various studies, its performance is highly dependent on the quality of the 3D surface meshes, which often degrades over the iterations of deformable surface registration (the process of correspondence matching). In this paper, we propose a new framework for 3D anatomical shape atlas construction. Our method aims to robustly establish correspondences across different subjects and simultaneously generate high-quality surface meshes without removing shape details. Mathematically, a new energy term is embedded into the original energy function of AFDM to preserve surface mesh quality during deformable surface matching. More specifically, we employ the Laplacian representation to encode shape details and smoothness constraints. An expectation–maximization style algorithm is designed to optimize the multiple energy terms alternately until convergence. We demonstrate the performance of our method on a set of diverse applications, including a population of sparse cardiac MRI slices with 2D labels, 3D high-resolution CT cardiac images, and rodent brain MRIs with multiple structures. The constructed shape atlases exhibit good mesh quality and preserve fine shape details, and can further benefit other research topics such as segmentation and statistical analysis. | 3D anatomical shape atlas construction using mesh quality preserved deformable models |
S107731421300088X | In this paper, we address the problem of 2D–3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting the exact region properties of the object, which play a pivotal role in maintaining the track for region-based tracking methods. To this end, a dynamic choice of how to invoke the objective functional is made online, based on the degree of dependency between predictions and measurements of the system, in accordance with the degree of occlusion and the variation of the object's pose. This scheme provides the robustness to deal with occlusions by an obstacle with statistical properties different from those of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios. | Particle filters and occlusion handling for rigid 2D–3D pose tracking |
S1077314213000891 | The problem of dimensionality reduction is to map data from high dimensional spaces to low dimensional spaces. In the process of dimensionality reduction, the data structure, which is helpful to discover the latent semantics and simultaneously respect the intrinsic geometric structure, should be preserved. In this paper, to discover a low-dimensional embedding space with the nature of structure preservation and basis compactness, we propose a novel dimensionality reduction algorithm, called Structure Preserving Non-negative Matrix Factorization (SPNMF). In SPNMF, three kinds of constraints, namely local affinity, distant repulsion, and embedding basis redundancy elimination, are incorporated into the NMF framework. SPNMF is formulated as an optimization problem and solved by an effective iterative multiplicative update algorithm. The convergence of the proposed update solutions is proved. Extensive experiments on both synthetic data and six real world data sets demonstrate the encouraging performance of the proposed algorithm in comparison to the state-of-the-art algorithms, especially some related works based on NMF. Moreover, the convergence of the proposed updating rules is experimentally validated. | Structure preserving non-negative matrix factorization for dimensionality reduction |
S1077314213000908 | In this paper we present a real-time tracking algorithm that is able to deal with complex occlusions involving several moving objects simultaneously. The rationale is grounded on a suitable representation and exploitation of the recent history of each moving object being tracked. The object history is encoded using a state, and the transitions among states are described by a Finite State Automaton (FSA). In the presence of complex situations, tracking is properly resolved by making the FSAs of the involved objects interact with each other. In this way, tracking decisions are based not only on the information present in the current frame, but also on conditions that have been observed more stably over a longer time span. The object history can be used to reliably discern the occurrence of the most common problems affecting object detection, making this method particularly robust in complex scenarios. An experimental evaluation of the proposed approach has been carried out on two publicly available datasets, the ISSIA Soccer Dataset and the PETS 2010 database. | A real time algorithm for people tracking using contextual reasoning |
S107731421300091X | Object recognition systems constitute a deeply entrenched and omnipresent component of modern intelligent systems. Research on object recognition algorithms has led to advances in factory and office automation through the creation of optical character recognition systems, assembly-line industrial inspection systems, as well as chip defect identification systems. It has also led to significant advances in medical imaging, defence and biometrics. In this paper we discuss the evolution of computer-based object recognition systems over the last fifty years, and overview the successes and failures of proposed solutions to the problem. We survey the breadth of approaches adopted over the years in attempting to solve the problem, and highlight the important role that active and attentive approaches must play in any solution that bridges the semantic gap in the proposed object representations, while simultaneously leading to efficient learning and inference algorithms. From the earliest systems which dealt with the character recognition problem, to modern visually-guided agents that can purposively search entire rooms for objects, we argue that a common thread of all such systems is their fragility and their inability to generalize as well as the human visual system can. At the same time, however, we demonstrate that the performance of such systems in strictly controlled environments often vastly outperforms the capabilities of the human visual system. We conclude our survey by arguing that the next step in the evolution of object recognition algorithms will require radical and bold steps forward in terms of the object representations, as well as the learning and inference algorithms used. | 50 Years of object recognition: Directions forward |
S1077314213000921 | Recently there has been considerable interest in dynamic textures due to the explosive growth of multimedia databases. In addition, dynamic texture appears in a wide range of videos, which makes it very important for applications concerned with modeling physical phenomena. Thus, dynamic textures have emerged as a field of investigation that extends static, spatial textures to the spatio-temporal domain. In this paper, we propose a novel approach to dynamic texture segmentation based on automata theory and the k-means algorithm. In this approach, a feature vector is extracted for each pixel by applying deterministic partially self-avoiding walks on three orthogonal planes of the video. These feature vectors are then clustered by the well-known k-means algorithm. Although the k-means algorithm has shown interesting results, it only guarantees convergence to a local minimum, which affects the final segmentation result. To overcome this drawback, we compare six initialization methods for k-means. The experimental results demonstrate the effectiveness of our proposed approach compared to state-of-the-art segmentation methods. | Dynamic texture segmentation based on deterministic partially self-avoiding walks |
S1077314213000933 | In this paper, we explore how a wide field-of-view imaging system, consisting of a number of cameras in a network arranged to approximate a spherical eye, can reduce the complexity of estimating camera motion; a depth map of the imaged scene can then be reconstructed once the camera motion is known. We present a direct method to recover camera motion from video data that requires neither the establishment of feature correspondences nor the recovery of optical flow, but instead uses normal flow, which is directly observable. With a wide visual field, the inherent ambiguities between translation and rotation disappear. Several subsets of normal flow pairs and triplets can be utilized to constrain the directions of translation and rotation separately. The intersection of the solution spaces arising from normal flow pairs or triplets yields the estimate of the direction of motion. In addition, the larger number of normal flow measurements thus obtained can be used to combat local flow extraction errors. The rotational magnitude is recovered in a subsequent stage. This article details how motion recovery can be improved with the use of such an approximate spherical imaging system. Experimental results on synthetic and real image data are provided. The results show that the accuracy of motion estimation is comparable to that of state-of-the-art methods that require explicit feature correspondences or full optical flow, while our method has a much faster computational speed. | Determining shape and motion from non-overlapping multi-camera rig: A direct approach using normal flows |
S1077314213000945 | A stochastic structure for single- and multi-agent level set methods is investigated in this article in an attempt to overcome local optima problems in image segmentation. As in other global optimization methods that take advantage of random operators and multi-individual search algorithms, the best agent in the proposed algorithm plays the role of leader in order to enable the algorithm to find the global solution. To accomplish this, the procedure employs a set of stochastic partial differential equations (SPDEs), each of which evolves based on its own stochastic dynamics. The agents are then compelled to simultaneously converge to the best available topology. Moreover, the stochastic dynamics of each agent extends the stochastic level set approach by using a multi-source structure, where each source is a delta function centered on a point of the evolving front. While the computational cost of these methods is higher than that of the region-based level set method, the probability of finding the global solution is significantly increased. | Multi-agent stochastic level set method in image segmentation |
S1077314213000957 | This paper presents an adaptive spatial information-theoretic fuzzy clustering algorithm to improve the robustness of the conventional fuzzy c-means (FCM) clustering algorithms for image segmentation. This is achieved through the incorporation of information-theoretic framework into the FCM-type algorithms. By combining these two concepts and modifying the objective function of the FCM algorithm, we are able to solve the problems of sensitivity to noisy data and the lack of spatial information, and improve the image segmentation results. The experimental results have shown that this robust clustering algorithm is useful for MRI brain image segmentation and it yields better segmentation results when compared to the conventional FCM approach. | An adaptive spatial information-theoretic fuzzy clustering algorithm for image segmentation |
S1077314213000969 | Detecting objects, estimating their pose, and recovering their 3D shape are critical problems in many vision and robotics applications. This paper addresses these needs using a two-stage approach. In the first stage, we propose a new method called DEHV – Depth-Encoded Hough Voting. DEHV jointly detects objects, infers their categories, estimates their pose, and infers/decodes objects' depth maps from either a single image (when no depth maps are available in testing) or a single image augmented with a depth map (when this is available in testing). Inspired by the Hough voting scheme introduced in [1], DEHV incorporates depth information into the process of learning distributions of image features (patches) representing an object category. DEHV takes advantage of the interplay between the scale of each object patch in the image and its distance (depth) from the corresponding physical patch attached to the 3D object. Once the depth map is given, a full reconstruction is achieved in a second (3D modelling) stage, where modified or state-of-the-art 3D shape and texture completion techniques are used to recover the complete 3D model. Extensive quantitative and qualitative experimental analysis on existing datasets [2–4] and a newly proposed 3D table-top object category dataset shows that our DEHV scheme obtains competitive detection and pose estimation results. Finally, the quality of 3D modelling, in terms of both shape and texture completion, is evaluated on a 3D modelling dataset containing both indoor and outdoor object categories. We demonstrate that our overall algorithm can obtain convincing 3D shape reconstruction from just one single uncalibrated image. | Object detection, shape recovery, and 3D modelling by depth-encoded Hough voting |
S1077314213001033 | The gradient vector flow (GVF) active contour model shows good performance in concavity convergence and initialization insensitivity, yet it is susceptible to weak edges as well as deep and narrow concavities. This paper proposes a novel external force, called adaptive diffusion flow (ADF), with diffusion strategies adapted to the characteristics of an image region, within the parametric active contour framework for image segmentation. We exploit a harmonic hypersurface minimal functional as a substitute for the smoothness energy term in GVF, alleviating possible leakage. We make use of p(x)-harmonic maps, in which p(x) ranges from 1 to 2, so that the diffusion process of the flow field can be adjusted adaptively according to image characteristics. We also incorporate an infinity Laplacian functional into the ADF active contour model to drive the active contours into deep and narrow concave regions of objects. The experimental results demonstrate that the ADF active contour model possesses several good properties, including noise robustness, weak-edge preservation and concavity convergence. | Adaptive diffusion flow active contours for image segmentation |
S1077314213001045 | Topological properties are invariant and take priority over other features, playing an important role in cognition. This paper introduces a new attention selection model called TPA (topological properties-based attention), which makes use of topological properties and quaternions. In TPA, a Unit-linking PCNN (Pulse Coupled Neural Network) hole-filter expresses an important topological property, connectivity, in visual attention selection. Meanwhile, the phase spectrum of the quaternion Fourier transform of an image, or of a frame in a video, yields the spatio-temporal saliency map, which gives the result of attention selection. Adjusting the weight of a topological channel changes its influence. The experimental results show that TPA reflects real attention selection more accurately than PQFT (Phase spectrum of Quaternion Fourier Transform). | Attention selection using global topological properties based on pulse coupled neural network |
S1077314213001057 | This paper presents new methods to segment thin tree structures, which are present, for example, in microglia extensions and cardiac or neuronal blood vessels. Many authors have used minimal cost paths, or geodesics relative to a local weighting potential P, to find a vessel pathway between two end points. We utilize a set of such geodesic paths to find a tubular tree structure with minimal interaction. We introduce a new idea that we call geodesic voting or geodesic density. The approach consists of computing geodesics from a set of end points scattered in the image which flow toward a given source point. The target structure corresponds to image points with a high geodesic density. The “geodesic density” is defined at each pixel of the image as the number of geodesics that pass over this pixel. The potential P is defined in such a way that it takes low values along the tree structure; geodesics will therefore migrate toward this structure, yielding a high geodesic density. We further adapt these methods to segment complex tree structures in a noisy medium and apply them to segment microglia extensions from confocal microscope images, as well as vessels. | Geodesic voting for the automatic extraction of tree structures. Methods and applications |
S1077314213001069 | Structured-light systems (SLSs) are widely used in active stereo vision to perform 3D modelling of a surface of interest. We propose a flexible method to calibrate SLSs projecting point patterns. The method is flexible in two respects. First, the calibration is independent of the number of points and their spatial distribution inside the pattern. Second, no positioning device is required since the projector geometry is determined in the camera coordinate system based on unknown positions of the calibration board. The projector optical center is estimated together with the 3D rays originating from the projector using a numerical optimization procedure. We study the 3D point reconstruction accuracy for two SLSs involving a laser based projector and a pico-projector, respectively, and for three point patterns. We finally illustrate the potential of our active vision system for a medical endoscopy application where a 3D cartography of the inspected organ (a large field of view surface also including image textures) can be reconstructed from a video acquisition using the laser based SLS. | Flexible calibration of structured-light systems projecting point patterns |
S1077314213001070 | A binary iriscode is a very compact representation of an iris image. For a long time it was assumed that the iriscode did not contain enough information to allow for the reconstruction of the original iris. The present work proposes a novel probabilistic approach based on genetic algorithms to reconstruct iris images from binary templates and analyzes the similarity between the reconstructed synthetic iris image and the original one. The performance of the reconstruction technique is assessed by empirically estimating the probability of successfully matching the synthesized iris image against its true counterpart using a commercial matcher. The experimental results indicate that the reconstructed images look reasonably realistic. While a human expert may not be easily deceived by them, they can successfully deceive a commercial matcher. Furthermore, since the proposed methodology is able to synthesize multiple iris images from a single iriscode, it has other potential applications including privacy enhancement of iris-based systems. | Iris image reconstruction from binary templates: An efficient probabilistic approach based on genetic algorithms |
S1077314213001082 | This paper introduces a novel method for recovering light directions and camera parameters using a single sphere. Traditional methods for estimating light directions using spheres either assume that both the radius and the center of the sphere are known precisely, or depend on multiple calibrated views to recover these parameters. In this paper, it is shown that light directions can be uniquely determined from the specular highlights observed in a single view of a sphere, without knowing or recovering the exact radius and center of the sphere. Furthermore, given multiple views of the sphere, the focal length and the relative positions and orientations of the cameras can be determined using the recovered sphere and light directions. Closed-form solutions for the estimation of light directions and camera poses are presented, and an optimization procedure for the estimation of the focal length is introduced. Experimental results on synthetic and real data demonstrate both the accuracy and the robustness of the proposed method. | Camera and light calibration from reflections on a sphere |
S1077314213001094 | This paper presents an appearance-based method for estimating head direction that automatically adapts to individual scenes. Appearance-based estimation methods usually require a ground-truth dataset taken from a scene that is similar to test video sequences. However, it is almost impossible to acquire many manually labeled head images for each scene. We introduce an approach that automatically aggregates labeled head images by inferring head direction labels from walking direction. Furthermore, in order to deal with large variations that occur in head appearance even within the same scene, we introduce an approach that segments a scene into multiple regions according to the similarity of head appearances. Experimental results demonstrate that our proposed method achieved higher accuracy in head direction estimation than conventional approaches that use a scene-independent generic dataset. | Head direction estimation from low resolution images with scene adaptation |
S1077314213001239 | This paper presents an approach for detecting suspicious events in videos by using only the video itself as the training samples for valid behaviors. These salient events are obtained in real-time by detecting anomalous spatio-temporal regions in a densely sampled video. The method codes a video as a compact set of spatio-temporal volumes, while considering the uncertainty in the codebook construction. The spatio-temporal compositions of video volumes are modeled using a probabilistic framework, which calculates their likelihood of being normal in the video. This approach can be considered as an extension of the Bag of Video words (BOV) approaches, which represent a video as an order-less distribution of video volumes. The proposed method imposes spatial and temporal constraints on the video volumes so that an inference mechanism can estimate the probability density functions of their arrangements. Anomalous events are assumed to be video arrangements with very low frequency of occurrence. The algorithm is very fast and does not employ background subtraction, motion estimation or tracking. It is also robust to spatial and temporal scale changes, as well as some deformations. Experiments were performed on four video datasets of abnormal activities in both crowded and non-crowded scenes and under difficult illumination conditions. The proposed method outperformed all other approaches based on BOV that do not account for contextual information. | An on-line, real-time learning method for detecting anomalies in videos using spatio-temporal compositions |
S1077314213001240 | Moving vehicle detection and classification using multimodal data is a challenging task in data collection, audio-visual alignment, data labeling and feature selection under uncontrolled environments with occlusions, motion blur, varying image resolutions and perspective distortions. In this work, we propose an effective multimodal temporal panorama (MTP) approach for moving vehicle detection and classification using a novel long-range audio-visual sensing system. A new audio-visual vehicle (AVV) dataset is created, which features automatic vehicle detection and audio-visual alignment, accurate vehicle extraction and reconstruction, and efficient data labeling. In particular, vehicles' visual images are reconstructed once detected in order to remove most occlusions, motion blur, and variations of perspective views. Multimodal audio-visual features are extracted, including global geometric features (aspect ratios, profiles), local structure features (HOGs), as well as various audio features (MFCCs, etc.). Using SVMs with radial basis kernels, the effectiveness of integrating these multimodal features is thoroughly and systematically studied. The concept of the MTP need not be limited to visual, motion and audio modalities; it could also be applicable to other sensing modalities that acquire data in the temporal domain. | A multimodal temporal panorama approach for moving vehicle detection, reconstruction and classification |
S1077314213001252 | Fast registration making use of implicit polynomial (IP) models is helpful for real-time pose estimation from a single clinical free-hand ultrasound (US) image, because it offers robustness against image noise, fast registration without requiring correspondences, and fast IP coefficient transformation. However, it can suffer from a lack of accuracy or registration failure. In this paper, we present a novel registration method based on a coarse-to-fine IP representation. The approach starts with a high-speed and reliable registration using a coarse (low-degree) IP model and stops when the desired accuracy is achieved by a fine (high-degree) IP model. Compared with previous IP-to-point based methods, our contributions are: (i) maintaining efficiency without requiring pair-wise correspondences, (ii) enhancing robustness, and (iii) improving accuracy. The experimental results demonstrate the good performance of our registration method and its capability to overcome the limitations of unconstrained freehand ultrasound data, resulting in fast, robust and accurate registration. | A coarse-to-fine IP-driven registration for pose estimation from single ultrasound image |
S1077314213001264 | In this paper, we propose a novel stereo method for registering foreground objects in a pair of thermal and visible videos of close-range scenes. In our stereo matching, we use Local Self-Similarity (LSS) as the similarity metric between thermal and visible images. In order to accurately assign disparities to depth discontinuities and occluded Regions Of Interest (ROIs), we integrate color and motion cues as soft constraints in an energy minimization framework. The optimal disparity map is approximated for image ROIs using a Belief Propagation (BP) algorithm. We tested our registration method on several challenging close-range indoor video frames of multiple people at different depths, with different clothing and different poses. We show that our global optimization algorithm significantly outperforms the existing state-of-the-art method, especially for disparity assignment of occluded people at different depths in close-range surveillance scenes and for relatively large camera baselines. | A LSS-based registration of stereo thermal–visible videos of multiple people using belief propagation |
S1077314213001276 | In this paper, we present a new framework for three-dimensional (3D) reconstruction of multiple rigid objects from dynamic scenes. Conventional 3D reconstruction from multiple views is applicable to static scenes, in which the configuration of objects is fixed while the images are taken. In our framework, we aim to reconstruct 3D models of multiple objects in a more general setting, where the configuration of the objects varies among views. We solve this problem by object-centered decomposition of the dynamic scenes using an unsupervised co-recognition approach. Unlike conventional motion segmentation algorithms that require a small-motion assumption between consecutive views, the co-recognition method provides reliable, accurate correspondences of the same object among unordered and wide-baseline views. In order to segment each object region, we make use of the sparse 3D points obtained from structure from motion: these points are reliable and serve as automatic seed points for a seeded-segmentation algorithm. Experiments on various real and challenging image sequences demonstrate the effectiveness of our approach, especially in the presence of abrupt independent motions of objects. | Multi-object reconstruction from dynamic scenes: An object-centered approach |
S1077314213001288 | In this paper, we present a method to recover the parameters governing the reflection of light from a surface making use of a single hyperspectral image. To do this, we view the image radiance as a combination of specular and diffuse reflection components and present a cost functional which can be used for purposes of iterative least squares optimisation. This optimisation process is quite general in nature and can be applied to a number of reflectance models widely used in the computer vision and graphics communities. We elaborate on the use of these models in our optimisation process and provide a variant of the Beckmann–Kirchhoff model which incorporates the Fresnel reflection term. We show results on synthetic images and illustrate how the recovered photometric parameters can be employed for skin recognition in real world imagery, where our estimated albedo yields a classification rate of 95.09±4.26% as compared to an alternative, whose classification rate is of 90.94±6.12%. We also show quantitative results on the estimation of the index of refraction, where our method delivers an average per-pixel angular error of 0.15°. This is a considerable improvement with respect to an alternative, which yields an error of 9.9°. | An optimisation approach to the recovery of reflection parameters from a single hyperspectral image |
S107731421300129X | We address the problem of predicting category labels for unlabeled videos in a large video dataset by using a ground-truth set of objectively labeled videos that we have created. Large video databases like YouTube require that a user uploading a new video assign to it a category label from a prescribed set of labels. Such category labeling is likely to be corrupted by the subjective biases of the uploader. Despite their noisy nature, these subjective labels are frequently used as a gold standard in algorithms for multimedia classification and retrieval. Our goal in this paper is NOT to propose yet another algorithm that predicts labels for unseen videos based on the subjective ground-truth. Rather, our goal is to demonstrate that video classification performance can be improved if, instead of using subjective labels, we first create an objectively labeled ground-truth set of videos and then train a classifier on this ground-truth to predict objective labels for the set of unlabeled videos. With regard to how we generate the objectively labeled ground-truth dataset, we base it on the notion that when a video is labeled by a panel of diverse individuals, the majority opinion rendered by the panel may be taken to be the objective opinion. In this manner, using judgments provided by multiple human annotators, we have collected objective labels for a ground-truth dataset consisting of 1000 randomly selected videos from the TinyVideos database, which contains roughly 52,000 videos from YouTube (courtesy of Karpenko and Aarabi [1]). Through a fourfold cross-validation experiment on the ground-truth set, we demonstrate that the objective labels have a superior consistency compared to the subjective labels when used for video classification. We show that this claim is valid for several different kinds of feature sets that one can use to compare videos and with two different types of classifiers that one can use for label prediction. Subsequently, we use the ground-truth dataset of 1000 videos to predict the objective category labels of the remaining 51,000 videos. We compare the objective labels thus determined with the subjective labels provided by the video uploaders and qualitatively argue for the more informative nature of the objective labels. | Using objective ground-truth labels created by multiple annotators for improved video classification: A comparative study |
S1077314213001306 | Uncooperative iris identification systems at a distance suffer from poor resolution of the acquired iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve the recognition performance. However, most existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values, rather than the actual features used for recognition. This paper thoroughly investigates transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain specific information from iris models, improved recognition performance compared to pixel domain super-resolution can be achieved. A framework for applying super-resolution to nonlinear features in the feature-domain is proposed. Based on this framework, a novel feature-domain super-resolution approach for the iris biometric employing 2D Gabor phase-quadrant features is proposed. The approach is shown to outperform its pixel domain counterpart, as well as other feature domain super-resolution approaches and fusion techniques. | Feature-domain super-resolution for iris recognition |
S1077314213001318 | We present a variational framework for naturally incorporating prior shape knowledge into the guidance of active contours for boundary extraction in images. This framework is especially suitable for images collected outside the visible spectrum, where boundary estimation is difficult due to low contrast, low resolution, and the presence of noise and clutter. Accordingly, we illustrate this approach using the segmentation of various objects in synthetic aperture sonar (SAS) images of underwater terrains. We use elastic shape analysis of planar curves, in which the shapes are considered as elements of a quotient space of an infinite-dimensional, non-linear Riemannian manifold. Using geodesic paths under the elastic Riemannian metric, one computes the sample mean and covariance of the training shapes in each class and derives statistical models for capturing class-specific shape variability. These models are then used as shape priors in a variational setting to obtain Bayesian estimates of the desired contours, as follows. In traditional active contour models, curves are driven towards the minimum of an energy composed of image and smoothing terms; we introduce an additional shape term based on shape models of the relevant shape classes. The minimization of this total energy, using iterated gradient-based updates of the curves, leads to an improved segmentation of object boundaries. This is demonstrated using a number of shape classes in two large SAS image datasets. | Elastic shape models for improving segmentation of object boundaries in synthetic aperture sonar images |
S107731421300132X | Saliency detection has been extensively studied in recent years, but traditional methods are mostly developed and evaluated on conventional RGB images, and little work has considered the incorporation of multi-spectral cues. Considering the success of including the near-infrared spectrum in applications such as face recognition and scene categorization, this paper presents a multi-spectral dataset and applies it to saliency detection. Experiments demonstrate that the incorporation of the near-infrared band is effective in the saliency detection procedure. We also test combinational models for integrating the visible and near-infrared bands. Results show that no single model is effective for every saliency detection method; models should be selected according to the specific method employed. | Multi-spectral dataset and its application in saliency detection |
S1077314213001331 | With the aim of elaborating a mobile application, accessible to anyone and with educational purposes, we present a method for tree species identification that relies on dedicated algorithms and explicit botany-inspired descriptors. Focusing on the analysis of leaves, we developed a working process to help recognize species, starting from a picture of a leaf in a complex natural background. A two-step active contour segmentation algorithm based on a polygonal leaf model processes the image to retrieve the contour of the leaf. Features we use afterwards are high-level geometrical descriptors that make a semantic interpretation possible, and prove to achieve better performance than more generic and statistical shape descriptors alone. We present the results, both in terms of segmentation and classification, considering a database of 50 European broad-leaved tree species, and an implementation of the system is available in the iPhone application Folia. | Understanding leaves in natural images – A model-based approach for tree species identification |
S1077314213001343 | In this paper, we present a comprehensive survey of Markov Random Fields (MRFs) in computer vision and image understanding, with respect to the modeling, the inference and the learning. While MRFs were introduced into the computer vision field about two decades ago, they started to become a ubiquitous tool for solving visual perception problems around the turn of the millennium following the emergence of efficient inference methods. During the past decade, a variety of MRF models as well as inference and learning methods have been developed for addressing numerous low, mid and high-level vision problems. While most of the literature concerns pairwise MRFs, in recent years we have also witnessed significant progress in higher-order MRFs, which substantially enhances the expressiveness of graph-based models and expands the domain of solvable problems. This survey provides a compact and informative summary of the major literature in this research topic. | Markov Random Field modeling, inference & learning in computer vision & image understanding: A survey |
S1077314213001355 | Facial expression analysis plays an important part in emotion detection. However, building an automatic and non-intrusive system to detect blended facial expressions is still a challenging problem, especially when the subject is unknown to the system. Here, we propose a method that adapts to the morphology of the subject and is based on a new invariant representation of facial expressions. In our system, an expression is defined by its position relative to 8 other expressions. As this mode of representation is relative, we show that the resulting expression space is person-independent. The 8 reference expressions are synthesized for each unknown subject from plausible distortions. Recognition tasks are performed in this space with a basic algorithm. The experiments were performed on 22 different blended expressions and on both known and unknown subjects. The recognition results on known subjects demonstrate that the representation is robust to the type of data (shape and/or texture information) and to the dimensionality of the expression space. The recognition results on 22 expressions of unknown subjects show that an expression space of dimension 4 is enough to outperform traditional methods based on active appearance models and to accurately describe an expression. | Invariant representation of facial expressions for blended expression recognition on unknown subjects |
S1077314213001367 | We propose a human motion tracking method that not only captures the motion of the skeleton model but also generates a sequence of surfaces using images acquired by multiple synchronized cameras. Our method extracts articulated postures with 42 degrees of freedom through a sequence of visual hulls. We seek a globally optimized solution for likelihood using local memorization of the “fitness” of each body segment. Our method efficiently avoids problems of local minima by using a mean combination and an articulated combination of particles selected according to the weights of the different body segments. The surface is produced by deforming the template and the details are recovered by fitting the deformed surface to 2D silhouette rims. The extracted posture and estimated surface are cooperatively refined by registering the corresponding body segments. In our experiments, the mean error between the samples of the deformed reference model and the target is about 2cm and the mean matching difference between the images projected by the estimated surfaces and the original images is about 6%. | Cooperative estimation of human motion and surfaces using multiview videos |
S1077314213001379 | The interest in automatic surveillance and monitoring systems has been growing over the last years due to increasing demands for security and law enforcement applications. Although automatic surveillance systems have reached a significant level of maturity with some practical success, surveillance still remains a challenging problem due to large variations in illumination conditions. Recognition based only on the visual spectrum remains limited in uncontrolled operating environments such as outdoor situations and low illumination conditions. In recent years, as a result of the development of low-cost infrared cameras, night vision systems have gained more and more interest, making infrared (IR) imagery a viable alternative to visible imaging in the search for a robust and practical identification system. Recently, some researchers have proposed the fusion of data recorded by an IR sensor and a visible camera in order to produce information otherwise not obtainable by viewing the sensor outputs separately. In this article, we propose the application of finite mixtures of multidimensional asymmetric generalized Gaussian distributions to different challenging tasks involving IR images. The advantage of the considered model is that it has the required flexibility to fit different shapes of observed non-Gaussian and asymmetric data. In particular, we present a highly efficient expectation–maximization (EM) algorithm, based on a minimum message length (MML) formulation, for the unsupervised learning of the proposed model’s parameters. In addition, we study its performance in two interesting applications, namely pedestrian detection and multiple target tracking. Furthermore, we examine whether the fusion of visual and thermal images can increase the overall performance of surveillance systems. | Finite asymmetric generalized Gaussian mixture models learning for infrared object detection
S1077314213001380 | This paper discusses the problem of precisely segmenting foreground objects in surveillance video when the moving foreground objects and the still background share similar colors. Motivated by studies in color constancy, the notion of color invariants is introduced to realize integrated segmentation in such color-similar situations. Color invariants, which are derived from a physical model, are used as image descriptors. A simple background subtraction method using the color invariants is then performed to examine the effectiveness of color invariants in color-similar situations. The experimental results demonstrate that the color-invariant-based method performs well in various situations of color similarity and is also robust to environmental illumination change. Moreover, the color-invariant-based method achieves higher accuracy and efficiency of background subtraction compared with other existing algorithms on practical real-time surveillance video of indoor environments. | A novel background subtraction method based on color invariants
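The paper derives its invariants from a physical reflection model. As a stand-in that shows the same pipeline shape, the sketch below uses the classic normalized-rgb chromaticity, which is only quasi-invariant to illumination intensity; the threshold and the form of the background model are likewise assumptions for the example.

```python
import numpy as np

def chromaticity(img, eps=1e-6):
    """Normalized rgb: a simple intensity-quasi-invariant color descriptor.
    img: (H, W, 3) float array. (A stand-in for the paper's physics-derived
    invariants.)"""
    return img / (img.sum(axis=2, keepdims=True) + eps)

def subtract_background(frame, bg_chroma, thresh=0.08):
    """Foreground mask from the L1 distance between invariant descriptors."""
    diff = np.abs(chromaticity(frame) - bg_chroma).sum(axis=2)
    return diff > thresh

# usage sketch: bg_chroma = chromaticity(empty_scene_frame)
#               mask = subtract_background(frame, bg_chroma)
```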
S1077314213001392 | Aligning shapes is essential in many computer vision problems and generalized Procrustes analysis (GPA) is one of the most popular algorithms to align shapes. However, if some of the shape data are missing, GPA cannot be applied. In this paper, we propose EM-GPA, which extends GPA to handle shapes with hidden (missing) variables by using the expectation-maximization (EM) algorithm. For example, 2D shapes can be considered as 3D shapes with missing depth information due to the projection of 3D shapes onto the image plane. For a set of 2D shapes, EM-GPA finds scales, rotations and 3D shapes along with their mean and covariance matrix for 3D shape modeling. A distinctive characteristic of EM-GPA is that it does not enforce any rank constraint often imposed in other work and instead uses GPA constraints to resolve the ambiguity in finding scales, rotations, and 3D shapes. The experimental results show that EM-GPA can recover depth information accurately even when the noise level is high and there are a large number of missing variables. By using the images from the FRGC database, we show that EM-GPA can successfully align 2D shapes by taking the missing information into consideration. We also demonstrate that the 3D mean shape and its covariance matrix are accurately estimated. As an application of EM-GPA, we construct a 2D+3D AAM (active appearance model) using the 3D shapes obtained by EM-GPA, and it gives a similar success rate in model fitting compared to the method using real 3D shapes. EM-GPA is not limited to the case of missing depth information, but it can be easily extended to more general cases. | EM-GPA: Generalized Procrustes analysis with hidden variables for 3D shape modeling
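A drastically simplified sketch of the alternation described above: treat the unknown depths as hidden variables, impute them from the current 3D mean (a stand-in E-step), then re-estimate scales, rotations and the mean by Procrustes alignment (M-step). The imputation rule and the scale normalization are our assumptions; the actual EM-GPA E-step is probabilistic and also estimates a covariance matrix.

```python
import numpy as np

def procrustes(X, M):
    """Scale s and rotation R minimizing ||s * X @ R - M||_F (X, M centered)."""
    U, S, Vt = np.linalg.svd(X.T @ M)
    R = U @ Vt
    s = S.sum() / np.trace(X.T @ X)
    return s, R

def em_gpa_sketch(shapes2d, n_iter=100):
    """shapes2d: list of (n_points, 2) arrays with point-to-point correspondence."""
    # lift each centered 2D shape to 3D; the z column is the hidden variable
    X = [np.column_stack([s - s.mean(axis=0), np.zeros(len(s))]) for s in shapes2d]
    mean = np.mean(X, axis=0)
    for _ in range(n_iter):
        for i, Xi in enumerate(X):
            s, R = procrustes(Xi, mean)
            A = s * Xi @ R                  # shape expressed in the mean's frame
            A[:, 2] = mean[:, 2]            # "E-step": impute the hidden depth
            B = (A @ R.T) / s               # back to the shape's own frame
            X[i] = np.column_stack([Xi[:, :2], B[:, 2]])  # keep observed x, y
        mean = np.mean(X, axis=0)           # "M-step": update the 3D mean shape
        mean /= np.linalg.norm(mean)        # fix the global scale ambiguity
    return mean, X
```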
S1077314213001409 | This paper proposes an accurate, rotation-invariant, and fast approach for the detection of facial features from thermal images. The proposed approach combines both appearance and geometric information to detect the facial features. A texture-based detector is applied using Haar features and the AdaBoost algorithm. Then the relation between these facial features is modeled using a complex Gaussian distribution, which is invariant to rotation. Experiments show that our proposed approach outperforms existing algorithms for facial feature detection in thermal images. The proposed approach’s performance is illustrated in a face recognition framework, which is based on extracting a local signature around facial features. Also, the paper presents a comparative study of different signature techniques with different facial image resolutions. The results of this comparative study suggest the minimum facial image resolution in thermal images that can be used for face recognition. The study also gives a guideline for choosing a good signature, which leads to the best recognition rate. | Face recognition in low resolution thermal images
S1077314213001525 | The majority of methods for the automatic surface reconstruction of an environment from an image sequence have two steps: Structure-from-Motion and dense stereo. From the computational standpoint, it would be interesting to avoid dense stereo and to generate a surface directly from the sparse cloud of 3D points and their visibility information provided by Structure-from-Motion. Previous attempts to solve this problem are currently very limited: the surface is non-manifold or has zero genus, and the experiments are done on small scenes or objects using a few dozen images. Our solution does not have these limitations. Furthermore, we experiment with hand-held or helmet-held catadioptric cameras moving in a city and generate 3D models such that the camera trajectory can be longer than one kilometer. | Manifold surface reconstruction of an environment from sparse Structure-from-Motion data
S1077314213001537 | This paper presents a three-phase gait recognition method that analyses the spatio-temporal shape and dynamic motion (STS-DM) characteristics of a human subject’s silhouettes to identify the subject in the presence of most of the challenging factors that affect existing gait recognition systems. In phase 1, phase-weighted magnitude spectra of the Fourier descriptor of the silhouette contours at ten phases of a gait period are used to analyse the spatio-temporal changes of the subject’s shape. A component-based Fourier descriptor based on anatomical studies of the human body is used to achieve robustness against shape variations caused by all common types of small carrying conditions with folded hands, at the subject’s back and in an upright position. In phase 2, a full-body shape and motion analysis is performed by fitting ellipses to contour segments of ten phases of a gait period and using histogram matching with the Bhattacharyya distance of the ellipse parameters as dissimilarity scores. In phase 3, dynamic time warping is used to analyse the angular rotation pattern of the subject’s leading knee, with consideration of arm swing over a gait period, to achieve identification that is invariant to walking speed, limited clothing variations, hair style changes and shadows under the feet. The match scores generated in the three phases are fused using weight-based score-level fusion for robust identification in the presence of missing and distorted frames, and occlusion in the scene. Experimental analyses on various publicly available data sets show that STS-DM outperforms several state-of-the-art gait recognition methods. | Gait recognition based on shape and motion analysis of silhouette contours
S1077314213001550 | Sparse coding represents a signal sparsely by using an overcomplete dictionary, and obtains promising performance in practical computer vision applications, especially for signal restoration tasks such as image denoising and image inpainting. In recent years, many discriminative sparse coding algorithms have been developed for classification problems, but they cannot naturally handle visual data represented by multiview features. In addition, existing sparse coding algorithms use the graph Laplacian to model the local geometry of the data distribution. It has been identified that Laplacian regularization biases the solution towards a constant function, which can lead to poor extrapolation power. In this paper, we present multiview Hessian discriminative sparse coding (mHDSC), which seamlessly integrates Hessian regularization with discriminative sparse coding for multiview learning problems. In particular, mHDSC exploits Hessian regularization to steer the solution so that it varies smoothly along geodesics in the manifold, and treats the label information as an additional view of features to incorporate discriminative power for image annotation. We conduct extensive experiments on the PASCAL VOC’07 dataset and demonstrate the effectiveness of mHDSC for image annotation. | Multiview Hessian discriminative sparse coding for image annotation
S1077314213001562 | We present a novel keyframe-based global localization method for markerless real-time camera tracking. Our system contains an offline module to select features from a group of reference images and an online module to match them to the input live video for quickly estimating the camera pose. The main contribution lies in constructing an optimal set of keyframes from the input reference images, which are required to approximately cover the entire space and at the same time to minimize the content redundancy among the selected frames. This strategy not only greatly saves computation, but also helps significantly reduce the number of repeated features. For a large-scale scene, it requires a significant effort to capture sufficient reference images and reconstruct the 3D environment. In order to alleviate the effort of offline preprocessing and enhance the tracking ability in larger scale scenes, we also propose an online reference map extension module, which can reconstruct new 3D features in real time and select online keyframes to extend the keyframe set. In addition, we develop a parallel-computing framework that employs both GPUs and multi-threading for speedup. Experimental results show that our method dramatically enhances the computing efficiency and eliminates the jittering artifacts in real-time camera tracking. | Efficient keyframe-based real-time camera tracking
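The keyframe criterion above (cover the space while minimizing content redundancy) has the flavor of a maximum-coverage problem, so one plausible, hypothetical reading is a greedy selection. In the sketch, `frame_features` (which 3D feature ids each reference image observes) and `coverage_target` are assumptions for the example, not quantities from the paper.

```python
def select_keyframes(frame_features, coverage_target=0.95):
    """Greedily pick reference frames until most features are covered.

    frame_features: list of sets; frame_features[i] holds the ids of the
    3D features visible in reference image i.
    """
    universe = set().union(*frame_features)
    covered, keyframes = set(), []
    while len(covered) < coverage_target * len(universe):
        # marginal gain = number of not-yet-covered features a frame adds;
        # maximizing it directly penalizes redundancy with frames chosen so far
        gains = [len(f - covered) for f in frame_features]
        best = max(range(len(frame_features)), key=lambda i: gains[i])
        if gains[best] == 0:
            break
        keyframes.append(best)
        covered |= frame_features[best]
    return keyframes
```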
S1077314213001574 | The ever-increasing Internet image collection densely samples real-world objects, scenes, etc., and is commonly accompanied by multiple kinds of metadata such as textual descriptions and user comments. Such image data has the potential to serve as a knowledge source for large-scale image applications. Facilitated by such publicly available and ever-increasing loosely annotated image data on the Internet, we propose a scalable data-driven solution for annotating and retrieving Web-scale image data. We extrapolate from large-scale loosely annotated images a compact and informative representation, namely ObjectPatchNet. Each vertex in ObjectPatchNet, called an ObjectPatchNode, is defined as a collection of discriminative image patches annotated with object category labels. The edge linking two ObjectPatchNodes models the co-occurrence relationship among different objects in the same image. Therefore, ObjectPatchNet models not only probabilistically labeled image patches, but also the contextual relationships between objects. It is well suited to the scalable image annotation task. Besides, we further take ObjectPatchNet as a visual vocabulary with semantic labels, and hence can easily develop inverted file indexing for efficient semantic image retrieval. ObjectPatchNet is tested on both large-scale image annotation and large-scale image retrieval applications. Experimental results show that ObjectPatchNet is both discriminative and efficient in these applications. | ObjectPatchNet: Towards scalable and semantic image annotation and retrieval
S1077314213001586 | In the real world, people tend to pay more attention to things they find noteworthy while ignoring others. This phenomenon is associated with top-down attention. Modeling this kind of attention has recently attracted much interest in computer vision due to a wide range of practical applications. The majority of existing models are based on eye-tracking or object detection. However, these methods may not apply in practical situations, because eye movement data cannot always be recorded and large-scale data sets may contain objects that detectors cannot handle. This paper proposes a Tag-Saliency model based on hierarchical image over-segmentation and auto-tagging, which can efficiently extract semantic information from large-scale visual media data. Experimental results on a very challenging data set show that the proposed Tag-Saliency model is able to locate truly salient regions with greater probability than its competitors. | Tag-Saliency: Combining bottom-up and top-down information for saliency detection
S1077314213001598 | Video sharing websites have recently become a tremendous video source, which is easily accessible without any cost. This has encouraged researchers in the action recognition field to construct action databases by exploiting Web sources. However, Web sources are generally too noisy to be used directly as a recognition database. Thus, building action databases from Web sources has required extensive human effort on manual selection of video parts related to specified actions. In this paper, we introduce a novel method to automatically extract video shots related to given action keywords from Web videos according to their metadata and visual features. First, we select relevant videos among tagged Web videos based on the relevance between their tags and the given keyword. After segmenting selected videos into shots, we rank these shots by exploiting their visual features in order to obtain shots of interest as top-ranked shots. In particular, we propose to adopt Web images and a human pose matching method in the shot ranking step and show that this application helps boost more relevant shots to the top. This unsupervised method only requires the provision of action keywords such as “surf wave” or “bake bread” at the beginning. We have made large-scale experiments on various kinds of human actions as well as non-human actions and obtained promising results. | Automatic extraction of relevant video shots of specific actions exploiting Web data
S1077314213001604 | Much research has focused on how to match textual queries with visual images and their surrounding texts or tags for Web image search. The returned results are often unsatisfactory due to their deviation from user intentions, particularly for queries with heterogeneous concepts (such as “apple”, “jaguar”) or general (non-specific) concepts (such as “landscape”, “hotel”). In this paper, we exploit social data from social media platforms to assist image search engines, aiming to improve the relevance between returned images and user intentions (i.e., social relevance). Facing the challenges of social data sparseness, the tradeoff between social relevance and visual relevance, and the complex social and visual factors, we propose a community-specific Social-Visual Ranking (SVR) algorithm to rerank the Web images returned by current image search engines. The SVR algorithm is implemented by PageRank over a hybrid image link graph, which is the combination of an image social-link graph and an image visual-link graph. By conducting extensive experiments, we demonstrate the importance of both visual factors and social factors, and the advantages of the social-visual ranking algorithm for Web image search. | Social-oriented visual image search
S1077314213001616 | This paper presents a new approach for tracking hand rotation and various grasping gestures through an infrared camera. Due to the complexity and ambiguity of an observed hand shape, it is difficult to simultaneously estimate hand configuration and orientation from a silhouette image of a grasping hand gesture. This paper proposes a dynamic shape model for hand grasping gestures using cylindrical manifold embedding to analyze variations of hand shape between two key hand poses under different hand configurations and simultaneous circular view changes caused by hand rotation. An arbitrary hand shape between two key hand poses from any view can be generated using a cylindrical manifold embedding point after learning nonlinear generative models from the embedding space to the corresponding observed hand shape. The cylindrical manifold embedding model is extended to various grasping gestures by decomposing multiple cylindrical manifold embeddings through grasping style analysis. Grasping hand gestures with simultaneous hand rotation are tracked using particle filters on the manifold space with grasping style estimation. Experimental results on synthetic and real data indicate that the proposed model can accurately track various grasping gestures with hand rotation. The proposed approach may be applied to advanced user interfaces in dark environments by using images beyond the visible spectrum. | Tracking hand rotation and various grasping gestures from an IR camera using extended cylindrical manifold embedding
S1077314213001719 | In many robust model fitting methods, obtaining promising hypotheses is critical to the fitting process. However, the sampling process unavoidably generates many irrelevant hypotheses, which can be an obstacle to accurate model fitting. In particular, mode-seeking-based fitting methods are very sensitive to the proportion of good/bad hypotheses when fitting multi-structure data. To improve hypothesis generation for mode-seeking-based fitting methods, we propose a novel sample-and-filter strategy to (1) identify and filter out bad hypotheses on-the-fly, and (2) use the remaining good hypotheses to guide the sampling to further expand the set of good hypotheses. The outcome is a small set of hypotheses with a high concentration of good hypotheses. Compared to other sampling methods, our method yields a significantly larger proportion of good hypotheses, which greatly improves the accuracy of mode-seeking-based fitting methods. | A simultaneous sample-and-filter strategy for robust multi-structure model fitting
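A loose sketch of part (1) of the strategy: score hypotheses as they are sampled and drop those that fall below the running support level. The guided re-sampling of part (2) is omitted, and the `fit`/`residual` callables, the median filter rule and the thresholds are all our assumptions for illustration.

```python
import numpy as np

def sample_and_filter(data, fit, residual, n_hyp=500, min_pts=2, thresh=0.1):
    """Keep only hypotheses whose inlier support matches the running median.

    data:     (N, d) array of points.
    fit:      callable mapping a minimal sample to a model.
    residual: callable mapping (model, data) to per-point residuals.
    """
    rng = np.random.default_rng(0)
    kept, scores = [], []
    for _ in range(n_hyp):
        idx = rng.choice(len(data), size=min_pts, replace=False)
        model = fit(data[idx])
        score = int((residual(model, data) < thresh).sum())   # inlier count
        # on-the-fly filter: discard hypotheses below the median support so far
        if not scores or score >= np.median(scores):
            kept.append((model, score))
        scores.append(score)
    return kept
```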
S1077314213001720 | In this paper, we propose a constrained optimization approach to improving both the robustness and accuracy of kernel tracking, which is appropriate for real-time video surveillance due to its low computational load. Typical tracking with histogram-wise matching provides robustness but insufficient accuracy, because it does not involve spatial information. On the other hand, tracking with pixel-wise matching achieves accurate performance but is not robust against deformation of the target object. To find the best compromise between robustness and accuracy, we combine histogram-wise matching and pixel-wise template matching via a constrained optimization problem. First, we propose a novel weight image representing both the probability of foreground and the degree of similarity between the template and a candidate target image. The weight image is used to formulate an objective function for the histogram-wise weight matching. Then the pixel-wise matching is formulated as a constrained optimization problem using the result of the histogram-wise weight matching. Consequently, the proposed approach optimizes pixel-wise template similarity (for accuracy) under the constraints of histogram-wise feature similarity (for robustness). Experimental results show the combined effects, and demonstrate that our method outperforms recent tracking algorithms in terms of robustness, accuracy, and computational cost. | Combining histogram-wise and pixel-wise matchings for kernel tracking through constrained optimization
S1077314213001732 | The registration of multiple 3D structures in order to obtain a complete, all-around representation of a scene is a long-studied subject. Even if the individual pairwise registrations are almost correct, concatenating them along a cycle usually produces an unsatisfactory result at the end of the process due to the accumulation of small errors. Obviously, the situation can be still worse if, in addition, we have incorrect pairwise correspondences between the views. In this paper, we embed the problem of global multiple-view registration into a Bayesian framework, by means of an Expectation–Maximization (EM) algorithm, where pairwise correspondences are treated as missing data and, therefore, inferred through a maximum a posteriori (MAP) process. The presented formulation simultaneously considers uncertainty on pairwise correspondences and noise, allowing a final result which outperforms, in terms of accuracy and robustness, other state-of-the-art algorithms. Experimental results show a reliability analysis of the presented algorithm with respect to the percentage of a priori incorrect correspondences and their consequent effect on the global registration estimation. This analysis compares current state-of-the-art global registration methods with our formulation, revealing that the introduction of a Bayesian formulation allows reaching configurations with a lower minimum of the global cost function. | Bayesian perspective for the registration of multiple 3D views
S1077314213001744 | With millions of users and billions of photos, web-scale face recognition is a challenging task that demands speed, accuracy, and scalability. Most current approaches do not address and do not scale well to Internet-sized scenarios such as tagging friends or finding celebrities. Focusing on web-scale face identification, we gather an 800,000 face dataset from the Facebook social network that models real-world situations where specific faces must be recognized and unknown identities rejected. We propose a novel Linearly Approximated Sparse Representation-based Classification (LASRC) algorithm that uses linear regression to perform sample selection for ℓ1-minimization, thus harnessing the speed of least-squares and the robustness of sparse solutions such as SRC. Our efficient LASRC algorithm achieves comparable performance to SRC with a 100–250 times speedup and exhibits similar recall to SVMs with much faster training. Extensive tests demonstrate our proposed approach is competitive on pair-matching verification tasks and outperforms current state-of-the-art algorithms on open-universe identification in uncontrolled, web-scale scenarios. | Face recognition for web-scale datasets |
S1077314213001756 | This paper presents a robust video stabilization method by solving a novel formulation for the camera motion estimation. We introduce spatio-temporal weighting on local patches in optimization formulation, which enables one-step direct estimation without outlier elimination adopted in most existing methods. The spatio-temporal weighting represents the reliability of a local region in estimation of camera motion. The weighting emphasizes regions which have the similar motion to the camera motion, such as backgrounds, and reduces the influence of unimportant regions, such as moving objects. In this paper, we develop a formula to determine the spatio-temporal weights considering the age, edges, saliency, and distribution information of local patches. The proposed scheme reduces the computational load by eliminating the integration part of local motions and decreases accumulation of fitting errors in the existing two-step estimation methods. Through numerical experiments on several unstable videos, we verify that the proposed method gives better performance in camera motion estimation and stabilization of jittering video sequences. | Spatio-temporal weighting in local patches for direct estimation of camera motion in video stabilization |
S1077314213001768 | We present a novel approach for robust localization of multiple people observed using a set of static cameras. We use this location information to generate a visualization of the virtual offside line in soccer games. To compute the position of the offside line, we need to localize players’ positions, and identify their team roles. We solve the problem of fusing corresponding players’ positional information by finding minimum weight K-length cycles in a complete K-partite graph. Each partite of the graph corresponds to one of the K cameras, whereas each node of a partite encodes the position and appearance of a player observed from a particular camera. To find the minimum weight cycles in this graph, we use a dynamic programming based approach that varies over a continuum from maximally to minimally greedy in terms of the number of graph-paths explored at each iteration. We present proofs for the efficiency and performance bounds of our algorithms. Finally, we demonstrate the robustness of our framework by testing it on 82,000 frames of soccer footage captured over eight different illumination conditions, play types, and team attire. Our framework runs in near-real time, and processes video from 3 full HD cameras in about 0.4s for each set of corresponding 3 frames. | A visualization framework for team sports captured using multiple static cameras |
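The fusion step here, minimum-weight K-length cycles in a complete K-partite graph, is concrete enough to sketch. Below is a minimal exact dynamic program for one fixed partite (camera) order, corresponding roughly to the less greedy end of the continuum the authors describe; the observation and weight representations are assumptions for the example.

```python
import numpy as np

def min_weight_cycle(parts, weight):
    """Minimum-weight cycle visiting one node per partite, for a fixed
    partite (camera) order.

    parts:  list of K lists, one per camera, of per-player observations.
    weight: callable giving the association cost between two observations
            (e.g. combining ground-plane distance and appearance).
    """
    best_cost, best_cycle = np.inf, None
    for start in parts[0]:
        # dp[j] = (cost, path) of the cheapest start -> parts[layer][j] path
        dp, prev = [(0.0, [start])], [start]
        for layer in parts[1:]:
            new_dp = []
            for v in layer:
                c, p = min(((dp[i][0] + weight(prev[i], v), dp[i][1])
                            for i in range(len(prev))), key=lambda t: t[0])
                new_dp.append((c, p + [v]))
            dp, prev = new_dp, list(layer)
        for i, u in enumerate(prev):        # close the cycle back to the start
            c = dp[i][0] + weight(u, start)
            if c < best_cost:
                best_cost, best_cycle = c, dp[i][1]
    return best_cost, best_cycle
```

Keeping only the single cheapest path at each layer instead of one per node would recover the maximally greedy variant of the search.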
S1077314213001835 | Local feature based object tracking approaches have been promising in solving tracking problems such as occlusions and illumination variations. However, existing approaches typically model feature variations using prototypes, and this discrete representation cannot capture the gradually changing nature of local appearance. In this paper, we propose to model each local feature as a feature manifold to characterize the smooth changing behavior of the feature descriptor. The manifold is constructed from a series of transformed images simulating possible variations of the feature being tracked. We propose to build a collection of linear subspaces which approximate the original manifold as a low-dimensional representation. This representation is used for object tracking. The object is localized by a feature-to-manifold matching process. Our tracking method can update the manifold status, add new feature manifolds and remove expiring ones adaptively according to object appearance. We show both qualitatively and quantitatively that this representation significantly improves the tracking performance under occlusions and appearance variations on a standard tracking dataset. | Object tracking using learned feature manifolds
S1077314213001847 | Tracking objects in videos by the mean shift algorithm with color-weighted histograms has received much attention in recent years. However, the stability of the weights in mean shift still needs to be improved, especially in low-contrast scenes with complex motions. This paper presents a new type of color cue, which produces stable weights for mean shift tracking and can be computed pixel by pixel efficiently. The proposed color cue employs global tracking techniques to overcome the illustrated drawbacks of the mean shift algorithm. It represents a target candidate at a larger scale than that of the target model so that the model is much more precise than the candidate. We illustrate that the weights obtained this way are more reliable under various scenes. To further suppress surrounding clutter, we establish a new spatial context model so that the optimization results are a set of weights which can be computed pixel by pixel. The proposed color cue is called CIG since it computes the weights based on spatial Context Information and Global tracking skills. Experimental results on various tracking videos show that the weight images produced by CIG have higher stability and precision than those of current methods, especially in low-contrast scenes with complex motions. | Visual object tracking using spatial Context Information and Global tracking skills
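For reference, the weight image this abstract sets out to stabilize is, in classic mean shift tracking, the per-pixel ratio w(x) = sqrt(q_u / p_u) between the model and candidate histograms. The sketch below shows that baseline (not the CIG cue itself); the bin count and the assumption of grayscale intensities in [0, 1] are ours.

```python
import numpy as np

def mean_shift_weights(patch, target_hist, n_bins=16):
    """Classic per-pixel mean shift weights w(x) = sqrt(q_u / p_u).

    patch:       (H, W) grayscale candidate region with values in [0, 1].
    target_hist: length-n_bins model histogram q, normalized to sum to 1.
    Returns an (H, W) weight image of the kind the CIG cue stabilizes.
    """
    bins = np.clip((patch * n_bins).astype(int), 0, n_bins - 1)
    cand_hist = np.bincount(bins.ravel(), minlength=n_bins).astype(float)
    cand_hist /= cand_hist.sum()
    ratio = np.sqrt(target_hist / np.maximum(cand_hist, 1e-12))
    return ratio[bins]                      # look up each pixel's bin weight
```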
S1077314213001859 | We propose a new joint view-identity manifold (JVIM) for multi-view and multi-target shape modeling that is well-suited for automated target tracking and recognition (ATR) in infrared imagery. As a shape generative model, JVIM features a novel manifold structure that imposes a conditional dependency between the two shape-related factors, view and identity, in a unified latent space, which is embedded with one view-independent identity manifold and infinite identity-dependent view manifolds. A modified local linear Gaussian process latent variable model (LL-GPLVM) is proposed for JVIM learning where a stochastic gradient descent method is used to improve the learning efficiency. We also develop a local inference technique to speed up JVIM-based shape interpolation. Due to its probabilistic and continuous nature, JVIM provides effective shape synthesis and supports robust ATR inference for both known and unknown target types under arbitrary views. Experiments on both synthetic data and the SENSIAC infrared ATR database demonstrate the advantages of the proposed method over several existing techniques both qualitatively and quantitatively. | Joint view-identity manifold for infrared target tracking and recognition |
S1077314213001860 | Standard wildfire smoke detection systems detect fires using remote cameras located at observation posts. Images from the cameras are analyzed using standard computer vision techniques, and human intervention is required only in situations in which the system raises an alarm. The number of alarms depends largely on manually set detection sensitivity parameters. One of the primary drawbacks of this approach is the false alarm rate, which impairs the usability of the system. In this paper, we present a novel approach using GIS and augmented reality to include the spatial and fire risk data of the observed scene. This information is used to improve the reliability of the existing systems through automatic parameter adjustment. For evaluation, three smoke detection methods were improved using this approach and compared to the standard versions. The results demonstrated significant improvement in different smoke detection aspects, including detection range, rate of correct detections and decrease in the false alarm rate. | Adaptive estimation of visual smoke detection parameters based on spatial data and fire risk index |
S1077314213001914 | We describe SnooperText, an original detector for textual information embedded in photos of building façades (such as names of stores, products and services) that we developed for the iTowns urban geographic information project. SnooperText locates candidate characters by using toggle-mapping image segmentation and character/non-character classification based on shape descriptors. The candidate characters are then grouped to form either candidate words or candidate text lines. These candidate regions are then validated by a text/non-text classifier using a HOG-based descriptor specifically tuned to single-line text regions. These operations are applied at multiple image scales in order to suppress irrelevant detail in character shapes and to avoid the use of overly large kernels in the segmentation. We show that SnooperText outperforms other published state-of-the-art text detection algorithms on standard image benchmarks. We also describe two metrics to evaluate the end-to-end performance of text extraction systems, and show that the use of SnooperText as a pre-filter significantly improves the performance of a general-purpose OCR algorithm when applied to photos of urban scenes. | SnooperText: A text detection system for automatic indexing of urban scenes |
S1077314213001926 | Semantic image segmentation, which aims to decompose an image into semantically consistent regions, is of fundamental importance in a wide variety of computer vision tasks, such as scene understanding, robot navigation and image retrieval. Most existing works address it as a structured prediction problem, combining contextual information with low-level cues based on conditional random fields (CRFs), which are often learned by heuristic search based on maximum likelihood estimation. In this paper, we use a maximum-margin based structural support vector machine (S-SVM) model to combine multiple levels of cues to attenuate the ambiguity of appearance similarity, and propose a novel multi-class ranking based global constraint to confine the object classes to be considered when labeling regions within an image. Compared with existing global cues, our method strikes a better balance between expressive power for heterogeneous regions and the efficiency of searching the exponential space of possible label combinations. We then introduce inter-class co-occurrence statistics as pairwise constraints and combine them with the predictions from local and global cues in the S-SVM framework. This enables the joint inference of labelings within an image for better consistency. We evaluate our algorithm on two challenging datasets widely used for semantic segmentation evaluation: the MSRC-21 dataset and the Stanford Background dataset. Experimental results show that we obtain highly competitive performance compared with state-of-the-art methods, despite our model being much simpler and more efficient. | Efficient semantic image segmentation with multi-class ranking prior
S1077314213001938 | This paper presents a novel algorithm for medial surface extraction that is based on the density-corrected Hamiltonian analysis of Torsello and Hancock [1]. In order to cope with the exponential growth of the number of voxels, we compute a first coarse discretization of the mesh, which is iteratively refined until a desired resolution is achieved. The refinement criterion relies on the analysis of the momentum field, where only the voxels with a suitable value of the divergence are exploded to a lower level of the hierarchy. In order to compensate for the discretization errors incurred at the coarser levels, a dilation procedure is added at the end of each iteration. Finally, we design a simple alignment procedure to correct the displacement of the extracted skeleton with respect to the true underlying medial surface. We evaluate the proposed approach with an extensive series of qualitative and quantitative experiments. | Coarse-to-fine skeleton extraction for high resolution 3D meshes
S107731421300194X | This article presents a unified framework for detecting, segmenting and tracking unknown objects in everyday scenes, allowing for inspection of object hypotheses during interaction over time. A heterogeneous scene representation is proposed, with background regions modeled as combinations of planar surfaces and uniform clutter, and foreground objects as 3D ellipsoids. Recent energy minimization methods based on loopy belief propagation, tree-reweighted message passing and graph cuts are studied for the purpose of multi-object segmentation and benchmarked in terms of segmentation quality, as well as computational speed and how easily the methods can be adapted for parallel processing. One conclusion is that the choice of energy minimization method is less important than the way scenes are modeled. Proximities are more valuable for segmentation than similarity in colors, while the benefit of 3D information is limited. It is also shown through practical experiments that, with implementations on GPUs, multi-object segmentation and tracking using state-of-the-art MRF inference methods is feasible, despite the computational costs typically associated with such methods. | Detecting, segmenting and tracking unknown objects using multi-label MRF inference
S1077314213001999 | Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focusses on designing a specific feature detector that is fast, accurate and robust. In this paper the ‘Chess-board Extraction by Subtraction and Summation’ (ChESS) feature detector, designed to exclusively respond to chess-board vertices, is presented. The method proposed is robust against noise, poor lighting and poor contrast, requires no prior knowledge of the extent of the chess-board pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration, as well as in structured light 3D reconstruction. Evidence is presented showing its superior robustness, accuracy, and efficiency in comparison to other commonly used detectors, including Harris & Stephens and SUSAN, both under simulation and in experimental 3D reconstruction of flat plate and cylindrical objects. | ChESS – Quick and robust detection of chess-board features |
S1077314213002002 | Despite many alternatives for the feature tracking problem, the iterative least-squares solution of the optical flow constraint has been the most popular approach used by many in the field. This paper builds on these earlier efforts to enhance feature tracking methods by introducing a view-geometric constraint into the tracking problem. In contrast to alternative geometry-based methods, the proposed approach provides a closed-form solution to optical flow estimation from image appearance and view geometry constraints. In particular, we use invariants in the projective coordinates generated from tracked features, which results in a new optical flow equation. This treatment provides persistent tracking of features even when they are occluded. At the end of each tracking loop, the quality of the tracked features is judged using both appearance similarity and geometric consistency. Our experiments demonstrate robust tracking performance even when the features are occluded or undergo appearance changes due to projective deformation of the template. | Persistent tracking of static scene features using geometry
S1077314213002014 | A substantial number of local feature extraction and description methodologies have been proposed for image recognition. However, these algorithms do not exhibit adequate performance with regard to repeatability, accuracy, and time consumption under both affine transformation and monotonic intensity change. In this paper, we propose a new descriptor, named Resistant to Affine Transformation and Monotonic Intensity Change (RATMIC). Unlike traditional descriptors, we utilize an adaptive division strategy and intensity order to construct the new descriptor, which is genuinely resistant to affine transformation and monotonic intensity change. Extensive experiments demonstrate the effectiveness and efficiency of the new descriptor compared to existing state-of-the-art descriptors. | A new descriptor resistant to affine transformation and monotonic intensity change
S1077314213002026 | Macrofeatures are mid-level features that jointly encode a set of low-level features in a neighborhood. We propose a macrofeature layout selection technique to improve localization performance in an object detection task. Our method employs line, triangle, and pyramid layouts, which are composed of several local blocks represented by the Histograms of Oriented Gradients (HOGs) features in a multi-scale feature pyramid. Such macrofeature layouts are integrated into a boosting framework for object detection, where the best layout is selected to build a weak classifier in a greedy manner at each iteration. The proposed algorithm is applied to pedestrian detection and implemented using GPU. Our pedestrian detection algorithm performs better in terms of detection and localization accuracy with great efficiency when compared to several state-of-the-art techniques in public datasets. | Macrofeature layout selection for pedestrian localization and its acceleration using GPU |
S1077314213002038 | In this paper, we introduce for the first time the notion of directed hypergraphs in image processing and particularly image segmentation. We give a formulation of a random walk in a directed hypergraph that serves as the basis of a semi-supervised image segmentation procedure, configured as a machine learning problem where a few sample pixels are used to estimate the labels of the unlabeled ones. A directed hypergraph model is proposed to represent the image content, and the directed random walk formulation allows us to compute a transition matrix that can be exploited in a simple iterative semi-supervised segmentation process. Experiments on the Microsoft GrabCut dataset achieve results that demonstrate the relevance of introducing directionality in hypergraphs for computer vision problems. | Random walks in directed hypergraphs and application to semi-supervised image segmentation
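The pipeline just described (build a transition matrix from a directed hypergraph, then run an iterative semi-supervised propagation) can be sketched as follows. The two-stage walk, picking an out-hyperedge by weight and then a head vertex uniformly, and the propagation update are our assumptions about one reasonable instantiation, not the paper's exact formulation.

```python
import numpy as np

def transition_matrix(n, hyperedges):
    """Transition matrix for a two-stage directed hypergraph walk: from
    vertex u, pick an out-hyperedge with probability proportional to its
    weight, then a head vertex uniformly.

    hyperedges: list of (tails, heads, w) with tails/heads as vertex-id lists.
    """
    P = np.zeros((n, n))
    out_w = np.zeros(n)
    for tails, heads, w in hyperedges:
        for u in tails:
            out_w[u] += w
    for tails, heads, w in hyperedges:
        for u in tails:
            for v in heads:
                P[u, v] += (w / out_w[u]) / len(heads)
    return P

def propagate_labels(P, seeds, n_classes, alpha=0.9, n_iter=200):
    """Iterative semi-supervised propagation: F <- alpha * P @ F + (1 - alpha) * Y."""
    Y = np.zeros((P.shape[0], n_classes))
    for v, c in seeds.items():              # seeds: {pixel index: class}
        Y[v, c] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * P @ F + (1 - alpha) * Y
    return F.argmax(axis=1)                  # hard labels for unlabeled pixels
```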
S107731421300204X | We present TouchCut, a robust and efficient algorithm for segmenting images and video sequences with minimal user interaction. Our algorithm requires only a single finger touch to identify the object of interest in the image or the first frame of video. Our approach is based on a level set framework, with an appearance model fusing edge, region texture and geometric information sampled locally around the touched point. We first present our image segmentation solution, then extend this framework to progressive (per-frame) video segmentation, encouraging temporal coherence by incorporating motion estimation and a shape prior learned from previous frames. This new approach to visual object cut-out provides a practical solution for image and video segmentation on compact touch-screen devices, facilitating spatially localized media manipulation. We describe such a case study, enabling users to selectively stylize video objects to create a hand-painted effect. We demonstrate the advantages of TouchCut by quantitative comparison against the state of the art, both in terms of accuracy and run-time performance. | TouchCut: Fast image and video segmentation using single-touch interaction
S1077314213002051 | The Mumford–Shah segmentation model is an energy model widely applied in computer vision. Many attempts have been made to minimize the energy of the model. We focus on two recently proposed methods for solving multi-phase segmentation: the graph cuts method by Bae and Tai (2009) [16] and the Monte Carlo method by Watanabe et al. (2011) [21]. We compare the convergence of solutions, the values of the obtained energy, the computational time, etc. Finally, we propose a hybrid method combining the advantages of the Monte Carlo and graph cuts approaches. The hybrid method can find the global minimum energy solution efficiently without sensitivity to the initial guess. | Comparison of multi-label graph cuts method and Monte Carlo simulation with block-spin transformation for the piecewise constant Mumford–Shah segmentation model
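Both methods compared here minimize the same piecewise-constant Mumford–Shah objective: a per-region data term plus a boundary-length penalty. The sketch below only evaluates that energy (with length approximated by 4-neighbour label changes, our choice of discretization) and gives the closed-form mean update that, roughly speaking, both approaches alternate with their respective label updates.

```python
import numpy as np

def pc_ms_energy(img, labels, means, lam):
    """Piecewise-constant Mumford-Shah energy:
    sum_i integral over region R_i of (I - c_i)^2, plus lam * |boundary|,
    with boundary length counted as 4-neighbour label disagreements."""
    data = sum(((img - c) ** 2)[labels == i].sum() for i, c in enumerate(means))
    length = ((labels[1:, :] != labels[:-1, :]).sum()
              + (labels[:, 1:] != labels[:, :-1]).sum())
    return data + lam * length

def update_means(img, labels, n_labels):
    """Given fixed labels, the optimal constants c_i are the region means."""
    return [img[labels == i].mean() if (labels == i).any() else 0.0
            for i in range(n_labels)]
```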
S1077314213002063 | In this study, we propose a novel approach to facial expression recognition that capitalizes on the anatomical structure of the human face. We model the human face with a high-polygon wireframe model that embeds all major muscles. Influence regions of facial muscles are estimated through a semi-automatic customization process. These regions are projected to the image plane to determine feature points. The relative displacement of each feature point between two image frames is treated as evidence of muscular activity. Feature point displacements are projected back to 3D space to estimate the new coordinates of the wireframe vertices. The muscular activities that would produce the estimated deformation are solved for through a least squares algorithm. We demonstrate the representative power of muscle-force based features on three classifiers: NB, SVM and AdaBoost. The ability to extract the muscle forces that compose a facial expression will enable the detection of subtle expressions, the replication of an expression on animated characters, and the exploration of psychologically unknown mechanisms of facial expressions. | Facial expression recognition based on anatomy
S1077314213002075 | Visual tracking is a challenging problem, as the appearance of an object may change due to viewpoint variations, illumination changes, and occlusion. The object may also leave the field of view (FOV) and then reappear. In order to track and reacquire an unknown object with limited labeling data, we propose to learn these changes online and incrementally build a model that encodes all appearance variations while tracking. To address this semi-supervised learning problem, we propose a co-training framework with a cascade particle filter to label incoming data continuously and to update hybrid generative and discriminative models online. Each of the layers in the cascade contains one or more generative or discriminative appearance models. The cascade organization of the particle filter enables the efficient evaluation of multiple appearance models with different computational costs, and thus improves the speed of the tracker. The proposed online framework provides temporally local tracking that adapts to appearance changes. Moreover, it provides an object-specific detection ability that allows the object to be reacquired after total occlusion. Extensive experiments demonstrate that, under challenging situations, our method has strong reacquisition ability and robustness to distracters in cluttered backgrounds. We also provide quantitative comparisons to other state-of-the-art trackers. | Co-trained generative and discriminative trackers with cascade particle filter
S107731421300221X | Two of the main ingredients of topological persistence for shape comparison are persistence diagrams and the matching distance. Persistence diagrams are signatures capturing meaningful properties of shapes, while the matching distance can be used to stably compare them. From the application viewpoint, one drawback of these tools is the computational cost of evaluating the matching distance. In this paper we introduce a new framework for matching distance estimation: it preserves the reliability of the entire approach in comparing shapes while drastically reducing the computational cost. Theoretical results are supported by experiments on 3D models. | Comparing shapes through multi-scale approximations of the matching distance
S1077314213002221 | In this paper we present a methodology of classifying hepatic (liver) lesions using multidimensional persistent homology, the matching metric (also called the bottleneck distance), and a support vector machine. We present our classification results on a dataset of 132 lesions that have been outlined and annotated by radiologists. We find that topological features are useful in the classification of hepatic lesions. We also find that two-dimensional persistent homology outperforms one-dimensional persistent homology in this application. | Classification of hepatic lesions using the matching metric |
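A minimal sketch of the classification pipeline just described, under two assumptions: persistence diagrams are available as lists of (birth, death) pairs, and GUDHI's `bottleneck_distance` serves as the matching metric. Turning distances into a Gaussian-style kernel, as below, is a common heuristic rather than the paper's construction, and such a kernel is not guaranteed to be positive definite.

```python
import numpy as np
from gudhi import bottleneck_distance      # matching (bottleneck) metric
from sklearn.svm import SVC

def classify_diagrams(diagrams, labels, sigma=1.0):
    """diagrams: list of persistence diagrams, each a list of (birth, death)
    pairs; labels: per-lesion class labels. Trains a precomputed-kernel SVM."""
    n = len(diagrams)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = bottleneck_distance(diagrams[i], diagrams[j])
    K = np.exp(-D ** 2 / (2.0 * sigma ** 2))   # distance -> similarity kernel
    clf = SVC(kernel='precomputed').fit(K, labels)
    return clf, D
```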
S1077314213002233 | Space or voxel carving (Broadhurst et al., 2001; Culbertson et al., 1999; Kutulakos and Seitz, 2000; Seitz et al., 2006) is a technique for creating a three-dimensional reconstruction of an object from a series of two-dimensional images captured from cameras placed around the object at different viewing angles. However, little work has been done to date on evaluating the quality of space carving results. This paper extends the work reported in Gutierrez et al. (2012), where the application of persistent homology was initially proposed as a tool for providing a topological analysis of the carving process along the sequence of 3D reconstructions with an increasing number of cameras. We now give a more extensive treatment by: (1) developing the formal framework by which persistent homology can be applied in this context; (2) computing persistent homology of the 3D reconstructions of 66 new frames, including different poses, resolutions and camera orders; (3) studying what information about stability, topological correctness and the influence of camera orders on the carving performance can be drawn from the computed barcodes. | Topological evaluation of volume reconstructions by voxel carving
S1077314213002245 | We present a robust background model for object detection and its performance evaluation using the database of the Background Models Challenge (BMC). Background models should detect foreground objects robustly against background changes, such as “illumination changes” and “dynamic changes”. In this paper, we propose two types of spatiotemporal background modeling frameworks that can adapt to illumination and dynamic changes in the background. Spatial information can be used to absorb the effects of illumination changes because they affect not only a target pixel but also its neighboring pixels. Additionally, temporal information is useful in handling the dynamic changes, which are observed repeatedly. To establish the spatiotemporal background model, our frameworks model an illumination invariant feature and a similarity of intensity changes among a set of pixels according to statistical models, respectively. Experimental results obtained for the BMC database show that our models can detect foreground objects robustly against background changes. | Object detection based on spatiotemporal background models |
S1077314213002269 | We propose the 3dSOBS+ algorithm, a newly designed approach for moving object detection based on a neural background model automatically generated by a self-organizing method. The algorithm is able to accurately handle scenes containing moving backgrounds, gradual illumination variations, and shadows cast by moving objects, and is robust against false detections for different types of videos taken with stationary cameras. Experimental results and comparisons conducted on the Background Models Challenge benchmark dataset demonstrate the improvements achieved by the proposed algorithm, that compares well with the state-of-the-art methods. | The 3dSOBS+ algorithm for moving object detection |
S1077314213002270 | A learning-based framework for action representation and recognition relying on the description of an action by time series of optical flow motion features is presented. In the learning step, the motion curves representing each action are clustered using Gaussian mixture modeling (GMM). In the recognition step, the optical flow curves of a probe sequence are also clustered using a GMM, then each probe sequence is projected onto the training space and the probe curves are matched to the learned curves using a non-metric similarity function based on the longest common subsequence, which is robust to noise and provides an intuitive notion of similarity between curves. Alignment between the mean curves is performed using canonical time warping. Finally, the probe sequence is categorized to the learned action with the maximum similarity using a nearest neighbor classification scheme. We also present a variant of the method where the length of the time series is reduced by dimensionality reduction in both training and test phases, in order to smooth out the outliers, which are common in these types of sequences. Experimental results on the KTH, UCF Sports and UCF YouTube action databases demonstrate the effectiveness of the proposed method. | Matching mixtures of curves for human action recognition
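The longest-common-subsequence similarity relied on above is standard and short enough to sketch. The matching radius `eps` and warping window `delta` below are free parameters of that classic definition, not values from the paper.

```python
import numpy as np

def lcss_similarity(P, Q, eps, delta):
    """Normalized longest-common-subsequence similarity between curves.

    P, Q:  (n, d) and (m, d) arrays of curve samples.
    eps:   matching radius; samples closer than eps are considered equal.
    delta: time-warping window limiting how far indices may drift.
    """
    n, m = len(P), len(Q)
    D = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(i - j) <= delta and np.linalg.norm(P[i - 1] - Q[j - 1]) < eps:
                D[i, j] = D[i - 1, j - 1] + 1       # samples match: extend LCSS
            else:
                D[i, j] = max(D[i - 1, j], D[i, j - 1])
    return D[n, m] / min(n, m)                       # similarity in [0, 1]
```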
S1077314213002282 | Sketch-based 3D shape retrieval has become an important research topic in content-based 3D object retrieval. To foster this research area, we organized two Shape Retrieval Contest (SHREC) tracks on this topic in 2012 and 2013, based on a small-scale and a large-scale benchmark, respectively. Six and five (nine in total) distinct sketch-based 3D shape retrieval methods competed with each other in these two contests, respectively. To measure and compare the performance of the top participating and other existing promising sketch-based 3D shape retrieval methods, and to solicit state-of-the-art approaches, we perform a more comprehensive comparison of the fifteen best retrieval methods (four top participating algorithms and eleven additional state-of-the-art methods) by completing the evaluation of each method on both benchmarks. The benchmarks, results, and evaluation tools for the two tracks are publicly available on our websites [1,2]. | A comparison of methods for sketch-based 3D shape retrieval
S1077314213002294 | Foreground detection is the first step in video surveillance systems for detecting moving objects. Recent research on subspace estimation by sparse representation and rank minimization provides a nice framework for separating moving objects from the background. Robust Principal Component Analysis (RPCA) solved via Principal Component Pursuit decomposes a data matrix A into two components such that A = L + S, where L is a low-rank matrix and S is a sparse noise matrix. The background sequence is then modeled by a low-rank subspace that can gradually change over time, while the moving foreground objects constitute the correlated sparse outliers. To date, many efforts have been made to develop Principal Component Pursuit (PCP) methods with reduced computational cost that perform visually well in foreground detection. However, no current algorithm seems to emerge as able to simultaneously address all the key challenges that accompany real-world videos. This is due, in part, to the absence of a rigorous quantitative evaluation on synthetic and realistic large-scale datasets with accurate ground truth providing a balanced coverage of the range of challenges present in the real world. In this context, this work aims to initiate a rigorous and comprehensive review of RPCA-PCP based methods for testing and ranking existing algorithms for foreground detection. To this end, we first review the recent developments in the field of RPCA solved via Principal Component Pursuit. Furthermore, we investigate how these methods are solved and whether incremental algorithms and real-time implementations can be achieved for foreground detection. Finally, experimental results on the Background Models Challenge (BMC) dataset, which contains different synthetic and real datasets, show the comparative performance of these recent methods. | Robust PCA via Principal Component Pursuit: A review for a comparative evaluation in video surveillance
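For orientation, the optimization these PCP methods solve is min ||L||_* + lambda * ||S||_1 subject to A = L + S. Below is a minimal sketch of the widely used inexact augmented Lagrangian scheme; the defaults lambda = 1/sqrt(max(m, n)) and mu follow common choices in the RPCA literature, and the fixed iteration count stands in for a proper convergence test.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (shrink(s, tau)[:, None] * Vt)

def rpca_pcp(A, lam=None, mu=None, n_iter=100):
    """Inexact ALM for  min ||L||_* + lam * ||S||_1  s.t.  A = L + S."""
    m, n = A.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(A).sum()
    S = np.zeros_like(A)
    Y = np.zeros_like(A)                    # Lagrange multiplier
    for _ in range(n_iter):
        L = svd_shrink(A - S + Y / mu, 1.0 / mu)
        S = shrink(A - L + Y / mu, lam / mu)
        Y = Y + mu * (A - L - S)
    return L, S
```

For background subtraction, each column of A is a vectorized frame; L recovers the background and the support of S marks the moving foreground.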
S1077314213002300 | Accurate video-based ball tracking in team sports is important for automated game analysis, and has proven very difficult because the ball is often occluded by the players. In this paper, we propose a novel approach to addressing this issue by formulating the tracking in terms of deciding which player, if any, is in possession of the ball at any given time. This is very different from standard approaches that first attempt to track the ball and only then to assign possession. We will show that our method substantially increases performance when applied to long basketball and soccer sequences. | Take your eyes off the ball: Improving ball-tracking by focusing on team play |
S1077314213002312 | We propose a set of atomic modeling operators for simplifying and refining cell complexes in arbitrary dimensions. Such operators either preserve the homology of the cell complex, or they modify it in a controlled way. We show that such operators form a minimally complete basis for updating cell complexes, and we compare them with various operators previously proposed in the literature. Based on the new operators, we define a hierarchical model for cell complexes, which we call a Hierarchical Cell Complex (HCC), and we discuss its properties. An HCC implicitly encodes a virtually continuous set of complexes obtained from the original complex through the application of our operators. Then, we describe the implementation of a version of the HCC based on the subset of the proposed modeling operators which preserve homology. We apply the homology-preserving HCC to enhance the efficiency of extracting homology generators at different resolutions. To this end, we propose an algorithm which computes homology generators on the coarsest representation of the original complex and uses the hierarchical model to propagate them to complexes at any intermediate resolution, and we prove its correctness. Finally, we present experimental results showing the efficiency and effectiveness of the proposed approach. | Topological modifications and hierarchical representation of cell complexes in arbitrary dimensions
S1077314213002324 | Based on local spline embedding (LSE) and the maximum margin criterion (MMC), two orthogonal locally discriminant spline embedding techniques (OLDSE-I and OLDSE-II) are proposed for plant leaf recognition in this paper. With OLDSE-I or OLDSE-II, plant leaf images are mapped into a leaf subspace for analysis, which can detect the essential leaf manifold structure. Unlike principal component analysis (PCA) and linear discriminant analysis (LDA), which can only deal with flat Euclidean structures of the plant leaf space, OLDSE-I and OLDSE-II not only inherit the advantages of local spline embedding (LSE), but also make full use of class information to improve discriminant power by introducing translation and rescaling models. The proposed OLDSE-I and OLDSE-II methods are applied to plant leaf recognition and are evaluated on the ICL-PlantLeaf and Swedish plant leaf image databases. The numerical results show that, compared with MMC, LDA, SLPP, and LDSE, the proposed OLDSE-I and OLDSE-II methods achieve a higher recognition rate. | Orthogonal locally discriminant spline embedding for plant leaf recognition
S1077314213002336 | We present a new approach to image indexing and retrieval, which integrates appearance with global image geometry in the indexing process, while enjoying robustness against viewpoint change, photometric variations, occlusion, and background clutter. We exploit shape parameters of local features to estimate image alignment via a single correspondence. Then, for each feature, we construct a sparse spatial map of all remaining features, encoding their normalized position and appearance, typically vector-quantized to a visual word. An image is represented by a collection of such feature maps, and RANSAC-like matching is reduced to a number of set intersections. The required index space is still quadratic in the number of features. To make it linear, we propose a novel feature selection model tailored to our feature map representation, replacing our earlier hashing approach. The resulting index space is comparable to the baseline bag-of-words, scaling up to one million images while outperforming the state of the art on three publicly available datasets. To our knowledge, this is the first geometry indexing method to dispense with spatial verification at this scale, bringing query times down to milliseconds. | Towards large-scale geometry indexing by feature selection
S1077314213002348 | Background modeling is a well-known approach to detecting moving objects in video sequences. In recent years, background modeling methods that adopt spatial and texture information have been developed for dealing with complex scenarios. However, none of the investigated approaches have been tested under extreme conditions, such as the underwater domain, where effects compromising the video quality negatively affect the performance of the background modeling process. In order to overcome such difficulties, more significant features and more robust methods must be found. In this paper, we present a kernel density estimation method which models background and foreground by exploiting textons to describe textures within small and low-contrast regions. Comparison with other texture descriptors, namely local binary pattern (LBP) and scale-invariant local ternary pattern (SILTP), showed improved performance. Moreover, a quantitative and qualitative performance evaluation carried out on three standard datasets exhibiting very complex conditions revealed that our method outperforms state-of-the-art methods that use different features and modeling techniques and, most importantly, that it is able to generalize over different scenarios and targets. | A texton-based kernel density estimation approach for background modeling under extreme conditions
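The core of such an approach is a per-pixel kernel density estimate over recent observations. The sketch below, assuming NumPy, uses raw intensities as the per-pixel feature purely for illustration; the method described above operates on texton responses instead, and the bandwidth and threshold values are placeholders.

```python
import numpy as np

def kde_foreground(history, frame, bandwidth=15.0, threshold=1e-4):
    """Per-pixel Gaussian kernel density estimate over recent samples.

    history: (N, H, W) array of recent per-pixel features; the paper
             uses texton responses, plain intensities stand in here.
    frame:   (H, W) current frame.
    Returns a boolean mask that is True where the background likelihood
    is low, i.e. where the pixel is classified as foreground.
    """
    diffs = frame[None, :, :].astype(float) - history.astype(float)
    kernels = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    density = kernels.mean(axis=0) / (bandwidth * np.sqrt(2.0 * np.pi))
    return density < threshold
```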
S107731421300235X | We present a novel method that evaluates the geometric consistency of putative point matches in weakly calibrated settings, i.e. when the epipolar geometry but not the camera calibration is known, using only the point coordinates as information. The main idea behind our approach is the fact that each point correspondence in our data belongs to one of two classes (inlier/outlier). The classification of each point match relies on the histogram of a quantity representing the difference between cross ratios derived from a construction involving 6-tuples of point matches. Neither constraints nor scenario-dependent parameters/thresholds are needed. Even for few candidate point matches, the ensemble of 6-tuples containing each of them turns out to provide statistically reliable histograms that discriminate between inliers and outliers. In fact, in most cases a random sampling among this population is sufficient. Nevertheless, the accuracy of the method is positively correlated with its sampling density, leading to a trade-off between accuracy and computational complexity. Theoretical analysis and experiments are given that show the consistent performance of the proposed classification method when applied to inlier/outlier discrimination. The achieved accuracy is favourably evaluated against established methods that employ only geometric information, i.e. those relying on the Sampson, the algebraic, and the symmetric epipolar distances. Finally, we also present an application of our scheme in uncalibrated stereo inside a RANSAC framework and compare it to the same methods as above. | A method for the evaluation of projective geometric consistency in weakly calibrated stereo with application to point matching
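The projective invariant underlying such a method is the cross ratio of four collinear points. A minimal sketch of its computation follows (NumPy assumed); the full method builds histograms of differences between cross ratios derived from 6-tuples of correspondences, which is not reproduced here.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (A,B;C,D) of four collinear points, a projective invariant.

    Points are 2D NumPy arrays assumed to be (approximately) collinear
    and pairwise distinct; positions are measured along the common line.
    """
    direction = (d - a) / np.linalg.norm(d - a)

    def pos(p):
        # Scalar coordinate of p along the line through a with unit direction.
        return float(np.dot(p - a, direction))

    ta, tb, tc, td = pos(a), pos(b), pos(c), pos(d)
    return ((tc - ta) * (td - tb)) / ((tc - tb) * (td - ta))
```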
S1077314213002361 | Background subtraction (BS) is a crucial step in many computer vision systems, as it is first applied to detect moving objects within a video stream. Many algorithms have been designed to segment the foreground objects from the background of a sequence. In this article, we propose to use the BMC (Background Models Challenge) dataset to compare the 29 methods implemented in the BGSLibrary. From this large set of BS methods, we have conducted a thorough experimental analysis to evaluate both their robustness and their practical performance in terms of processor/memory requirements. | A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos
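As a practical starting point, the two background subtractors that ship with stock OpenCV can be run side by side as below; the BGSLibrary evaluated in this review wraps many more models, and the video path here is a placeholder.

```python
import cv2

# Two classical models built into stock OpenCV (API names per OpenCV);
# "sequence.avi" is a placeholder path.
subtractors = {
    "MOG2": cv2.createBackgroundSubtractorMOG2(detectShadows=True),
    "KNN": cv2.createBackgroundSubtractorKNN(detectShadows=True),
}

cap = cv2.VideoCapture("sequence.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for name, bgs in subtractors.items():
        mask = bgs.apply(frame)   # 0 = background, 255 = foreground, 127 = shadow
        cv2.imshow(name, mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```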
S1077314213002373 | In this paper we address an important issue in human–robot interaction, that of accurately deriving pointing information from a corresponding gesture. Based on the fact that in most applications it is the pointed object rather than the actual pointing direction which is important, we formulate a novel approach which takes into account prior information about the location of possible pointed targets. To decide about the pointed object, the proposed approach uses the Dempster–Shafer theory of evidence to fuse information from two different input streams: head pose, estimated by visually tracking the off-plane rotations of the face, and hand pointing orientation. Detailed experimental results are presented that validate the effectiveness of the method in realistic application setups. | Visual estimation of pointed targets for robot guidance via fusion of face pose and hand orientation |
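At the heart of the fusion step is Dempster's rule of combination. A generic sketch follows; the mass assignments over candidate targets in the usage example are invented for illustration and are not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment.

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Returns the combined mass function after normalizing out conflict.
    """
    combined = {}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2   # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: the two sources are incompatible")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Hypothetical evidence over candidate pointed targets "A" and "B":
# one mass function from head pose, one from hand orientation.
head = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
hand = {frozenset({"A"}): 0.5, frozenset({"B"}): 0.3, frozenset({"A", "B"}): 0.2}
print(dempster_combine(head, hand))
```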
S1077314213002385 | A model based on the concept of topological suspension is constructed with the purpose of testing and comparing different shape similarity measures in computer vision and graphics. This model gives an automatic way to produce interesting shapes of arbitrarily high dimension as quality tests of algorithms that have been used in low dimensions, but are now intended for comparing multidimensional data sets. The analysis of the matching distance method is provided for one- and two-parameter measuring functions on closed curves and surfaces, whose suspension is defined, respectively, on surfaces in $\mathbb{R}^3$ and 3D objects in $\mathbb{R}^4$. Perspectives for applying this model to other shape descriptors used for digital images are pointed out. | Suspension models for testing shape similarity methods
S1077314213002397 | In this paper we present a novel template-based approach for fast object detection. In particular, we investigate the use of Dominant Orientation Templates (DOT), a binary template representation introduced by Hinterstoisser et al., as a means for fast detection of objects, even textureless ones. During training, we learn a binary mask for each template that allows us to remove background clutter while at the same time including relevant context information. These mask templates then serve as weak classifiers in an Adaboost framework. We demonstrate our method on the detection of shape-oriented object classes as well as multiview vehicle detection. We obtain a fast yet highly accurate method for category-level detection that compares favorably to other more complicated yet much slower approaches. We further show how to efficiently transfer meta-data using the most similar activated templates. Finally, we propose an optimization scheme for the detection of specific objects using our proposed masks trained with an SVM, resulting in a performance gain of up to 17% over the DOT method without sacrificing testing speed, while training can run in real time. | Boosting masked dominant orientation templates for efficient object detection
S1077314213002403 | This paper presents an efficient, accurate, and robust template-based visual tracker. In this method, the target is represented by two heterogeneous and adaptive Gaussian-based templates which can model both short- and long-term changes in the target appearance. The proposed localization algorithm features an interactive multi-start optimization process that takes into account generic transformations using a combination of sampling- and gradient-based techniques in a unified probabilistic framework. Both the short- and long-term templates are used simultaneously to find the best location of the target. This approach further increases both the efficiency and accuracy of the proposed tracker. The contributions of the proposed tracking method include: (1) a flexible multi-model target representation which can, in general, accurately and robustly handle challenging situations such as significant appearance and shape changes; (2) a robust template updating algorithm where a combination of the tracking time step, a forgetting factor, and an uncertainty margin is used to update the mean and variance of the Gaussian functions; and (3) an efficient and interactive multi-start optimization which can improve the accuracy, robustness, and efficiency of the target localization by searching in parallel over different time-varying templates. Several challenging and publicly available videos have been used to both demonstrate and quantify the superiority of the proposed tracking method in comparison with other state-of-the-art trackers. | Efficient and robust multi-template tracking using multi-start interactive hybrid search
S1077314213002415 | Statistical shape from shading under general light conditions can be thought of as a parameter-fitting problem for a bilinear model. Here, the parameters are personal attributes and light conditions. Parameters of a bilinear model are usually estimated using the alternating least squares method with a computational complexity of $O((n_s + n_\phi)^2 n_p)$ per iteration, where $n_s$, $n_\phi$, and $n_p$ are the dimensions of the light conditions, personal attributes, and face image features, respectively. In this paper, we propose an alternative algorithm with a computational complexity of $O(n_s n_\phi)$ per iteration. Only the initial step requires a computational complexity of $O(n_s n_\phi n_p)$. This can be accomplished by reformulating the problem as a linear least squares problem with a search space limited to the set of rank-one matrices. The rank-one condition is relaxed to obtain a possibly full-rank matrix. The algorithm then finds the best rank-one approximation of that matrix. By the Eckart–Young theorem, the best approximation is the outer product of the left and right singular vectors corresponding to the largest singular value. Since only this pair of singular vectors is needed, it is better to use the power iteration method, which has a computational complexity of $O(n_s n_\phi)$ per iteration, than to calculate the full singular value decomposition. The proposed method provides accurate reconstruction results and takes approximately 45 ms on a PC, which is adequate for real-time applications. | Real-time facial shape recovery from a single image under general, unknown lighting by rank relaxation
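The final step described above, extracting the best rank-one approximation by power iteration rather than a full SVD, can be sketched as follows (NumPy assumed; the iteration count, tolerance, and random seed are illustrative):

```python
import numpy as np

def best_rank_one(M, n_iter=100, tol=1e-10):
    """Best rank-one approximation of M without a full SVD.

    Power iteration on M^T M converges to the leading right singular
    vector v1; by the Eckart-Young theorem, the best rank-one
    approximation is then sigma_1 * u1 * v1^T.
    """
    rng = np.random.default_rng(0)
    v = rng.standard_normal(M.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = M.T @ (M @ v)          # one power-iteration step on M^T M
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            v = w
            break
        v = w
    u = M @ v                      # unnormalized left singular vector
    sigma = np.linalg.norm(u)      # leading singular value
    return sigma * np.outer(u / sigma, v)
```

Each iteration costs two matrix-vector products, which matches the per-iteration complexity claim when M has the dimensions of the relaxed parameter matrix.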
S1077314213002427 | Near-duplicate image search in very large Web databases has been a hot topic in recent years. In traditional methods, the Bag-of-Visual-Words (BoVW) model and the inverted index structure are very widely adopted. Despite their simplicity, efficiency, and scalability, these algorithms depend highly on the accurate matching of local features. However, there are many factors in real applications that limit the descriptive power of low-level features, and therefore cause the search results to suffer from unsatisfactory precision and recall. To overcome these shortcomings, it is reasonable to re-rank the initial search results using post-processing approaches such as spatial verification, query expansion, and diffusion-based algorithms. In this paper, we investigate the re-ranking problem from a graph-based perspective. We construct the ImageWeb, a sparse graph consisting of all the images in the database, in which two images are connected if and only if one is ranked among the top of the other's initial search result. Based on the ImageWeb, we use HITS, a query-dependent algorithm, to re-rank the images according to the affinity values. We verify that it is possible to discover the nature of image relationships for search result refinement without using any handcrafted methods such as spatial verification. We also consider some tradeoff strategies to intuitively guide the selection of search parameters. Experiments are conducted on large-scale image datasets with more than one million images. Our algorithm achieves state-of-the-art search performance with very fast speed at the online stage. | Fast and accurate near-duplicate image search with affinity propagation on the ImageWeb
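A generic sketch of HITS on such a graph follows (NumPy assumed); in the paper the algorithm is query-dependent, i.e. run on the subgraph induced by a query's initial result, whereas this sketch runs on a full adjacency matrix for simplicity.

```python
import numpy as np

def hits_authority(adj, n_iter=50):
    """HITS on a directed graph; returns the authority score per node.

    adj[i, j] = 1 if image j appears among the top initial search
    results of image i, following the ImageWeb construction above.
    """
    n = adj.shape[0]
    hub = np.ones(n)
    auth = np.ones(n)
    for _ in range(n_iter):
        auth = adj.T @ hub          # authority: sum of incoming hub scores
        auth /= np.linalg.norm(auth)
        hub = adj @ auth            # hub: sum of pointed-to authority scores
        hub /= np.linalg.norm(hub)
    return auth
```

Re-ranking then amounts to sorting a query's candidate images by their authority scores.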