FileName | Abstract | Title |
---|---|---|
S0198971516300023 | Recent advances in public sector open data and online mapping software are opening up new possibilities for interactive mapping in research applications. Increasingly there are opportunities to develop advanced interactive platforms with exploratory and analytical functionality. This paper reviews tools and workflows for the production of online research mapping platforms, alongside a classification of the interactive functionality that can be achieved. A series of mapping case studies from government, academia and research institutes are reviewed. The conclusions are that online cartography's technical hurdles are falling due to open data releases, open source software and cloud services innovations. The data exploration functionality of these new tools is powerful and complements the emerging fields of big data and open GIS. International data perspectives are also increasingly feasible. Analytical functionality for web mapping is currently less developed, but promising examples can be seen in areas such as urban analytics. For more presentational research communication applications, there has been progress in story-driven mapping drawing on data journalism approaches that are capable of connecting with very large audiences. | Online interactive thematic mapping: Applications and techniques for socio-economic research |
S0198971516300047 | Geographical masking is the conventional solution to protect the privacy of individuals involved in confidential spatial point datasets. The masking process displaces confidential locations to protect individual privacy while maintaining a fine level of spatial resolution. The adaptive form of this process aims to further minimize the displacement error by taking into account the underlying population density. We describe an alternative adaptive geomasking method, referred to as Adaptive Areal Elimination (AAE). AAE creates areas of a minimum K-anonymity, and then the original points are either randomly perturbed within the areas or aggregated to the median centers of the areas. In addition to the masked points, the K-anonymized areas can be safely disclosed as well without increasing the risk of re-identification. Using a burglary dataset from Vienna, AAE is compared with an existing adaptive geographical mask, the donut mask. The masking methods are evaluated for preserving a predefined K-anonymity and the spatial characteristics of the original points. The spatial characteristics are assessed with four measures of spatial error: displaced distance, correlation coefficient of density surfaces, hotspots' divergence, and clusters' specificity. Masked points from the point aggregation of AAE have the highest spatial error in all the measures but the displaced distance. In contrast, masked points from the donut mask are displaced the least, preserve the original spatial clusters better, and have the highest clusters' specificity and correlation coefficient of density surfaces. However, when the donut mask is adapted to achieve an actual K-anonymity, the random perturbation of AAE introduces less spatial error than the donut mask for all the measures of spatial error. | Adaptive areal elimination (AAE): A transparent way of disclosing protected spatial datasets |
S0198971516300084 | Flooding is a widely occurring natural hazard that noticeably damages property, people, and the environment. In the context of climate change, the integration of spatial planning with flood-risk management has gained prominence as an approach to mitigating the risks of flooding. The absence of easy access to integrated, high-quality information, and of the technologies and tools to use that information, is among the factors that impede this integration. Limited research has been conducted to develop a framework and to investigate the role of information and technologies in this integration. This study draws primarily on European experiences and literature and identifies three dimensions of the integration of spatial planning with flood-risk management: territorial, policy, and institutional. To facilitate integration, and in accord with these three dimensions, a Spatially Integrated Policy Infrastructure (SIPI) is conceptualised that encompasses data and information, decision support and analysis tools, and access tools and protocols. This study presents the connections between SIPI elements and integration dimensions, which is important for a better understanding of the roles of geographic information and technologies in integration. The conceptual framework of SIPI will govern further development and evaluation of SIPI. | Integrating spatial planning and flood risk management: A new conceptual framework for the spatially integrated policy infrastructure |
S0198971516300102 | Characterizing urban landscapes is important given the present and future projections of global population that favor urban growth. The definition of “urban” on a thematic map has proven to be problematic since urban areas are heterogeneous in terms of land use and land cover. Further, certain urban classes are inherently imprecise due to the difficulty in integrating various social and environmental inputs into a precise definition. Social components often include demographic patterns, transportation, building type and density while ecological components include soils, elevation, hydrology, climate, vegetation and tree cover. In this paper, we adopt a coupled human and natural system (CHANS) integrated scientific framework for characterizing urban landscapes. We implement the framework by adopting a fuzzy sets concept of “urban characterization” since fuzzy sets relate to classes of object with imprecise boundaries in which membership is a matter of degree. For dynamic mapping applications, user-defined classification schemes involving rules combining different social and ecological inputs can lead to a degree of quantification in class labeling varying from “highly urban” to “least urban”. A socio-economic perspective of urban may include threshold values for population and road network density while a more ecological perspective of urban may utilize the ratio of natural versus built area and percent forest cover. Threshold values are defined to derive the fuzzy rules of membership, in each case, and various combinations of rules offer a greater flexibility to characterize the many facets of the urban landscape. We illustrate the flexibility and utility of this fuzzy inference approach called the Fuzzy Urban Index for the Boston Metro region with five inputs and eighteen rules. The resulting classification map shows levels of fuzzy membership ranging from highly urban to least urban or rural in the Boston study region. We validate our approach using two experts assessing accuracy of the resulting fuzzy urban map. We discuss how our approach can be applied in other urban contexts with newly emerging descriptors of urban sustainability, urban ecology and urban metabolism. | Characterizing urban landscapes using fuzzy sets |
S0198971516300394 | Social media data are increasingly perceived as alternative sources to public attitude surveys because of the volume of available data that are time-stamped and (sometimes) precisely located. Such data can be mined to provide planners, marketers and researchers with useful information about activities and opinions across time and space. However, in their raw form, textual data are still difficult to analyse coherently and Twitter streams pose particular interpretive challenges because they are restricted to just 140 characters. This paper explores the use of an unsupervised learning algorithm to classify geo-tagged Tweets from Inner London recorded during typical weekdays throughout 2013 into a small number of groups, following extensive text cleaning techniques. Our classification identifies 20 distinctive and interpretable topic groupings, which represent key types of Tweets, from describing activities or informal conversations between users, to the use of check-in applets. Our motivation is to use the classification to demonstrate how the nature of the content posted on Twitter varies according to the characteristics of places and users. Topics and attitudes expressed through Tweets are found to vary substantially across Inner London, and by time of day. Some observed variations in behaviour on Twitter can be attributed to the inferred demographic and socio-economic characteristics of users, but place and local activities can also exert a considerable influence. Overall, the classification was found to provide a valuable framework for investigating the content and coverage of Twitter usage across Inner London. | The geography of Twitter topics in London |
S0262885613000346 | Engineers have proposed many watermark mechanisms for protecting the content of digital media from unauthorized use. The visible watermark scheme indicates the copyright of digital media posted over the Internet by embedding an inconspicuous but recognizable pattern into the media. However, the embedding process often results in serious distortion of the protected image. Since the strength of the watermark in conventional methods mainly depends on the features of the protected media, this may lead to unsatisfactory transparency of watermarked images. This paper proposes a removable solution for the visible watermark mechanism. By adopting the subsampling technique, the method proposes a contrast-adaptive strategy to solve this problem. This method can also guarantee the essentials of general visible watermark schemes. Experimental results show that the proposed method outperforms related works in terms of preserving the quality of the restored image. | Contrast-Adaptive Removable Visible Watermarking (CARVW) mechanism |
S0262885613000358 | We propose a novel symmetry-driven Bayesian framework that incorporates structural shape into a conventional geometrical shape descriptor for image indexing and retrieval. We use rotation and reflection symmetries for structural shape description. Symmetry detection on each shape image provides a qualitative and a quantitative categorization of the types and degrees of symmetry. The posterior shape similarity enhances the shape matching performance based on the symmetry structural discrimination. Experimental results show statistically significant improvement in retrieval accuracy over state-of-the-art methods on the MPEG-7 data set. | Symmetry-driven shape description for image retrieval |
S0262885613000462 | Human faces encode plenty of useful information. Recent studies in psychology and human perception have found that facial features have relations to human weight or body mass index (BMI). These studies focus on finding the correlations between facial features and the BMI. Motivated by the recent psychology studies, we develop a computational method to predict the BMI from face images automatically. We formulate the BMI prediction from facial features as a machine vision problem, and evaluate our approach on a large database with more than 14,500 face images. A promising result has been obtained, which demonstrates the feasibility of developing a computational system for BMI prediction from face images at a large scale. | A computational approach to body mass index prediction from face images |
S0262885613000474 | In this paper, an efficient method for text-independent writer identification using a codebook method is proposed. The method uses the occurrence histogram of the shapes in a codebook to create a feature vector for each specific manuscript. For cursive handwriting, a wide variety of different shapes exist in the connected components obtained from the handwriting. Small fragments of connected components are used to avoid complex patterns. Two efficient methods for extracting codes from contours are introduced. One method uses the actual pixel coordinates of contour fragments while the other uses a linear piece-wise approximation based on segment angles and lengths. To evaluate the methods, writer identification is conducted on two English and three Farsi handwriting databases. Both methods show promising performance, with the second method performing better than the first. | Offline text-independent writer identification using codebook and efficient code extraction methods |
S0262885613000590 | In this paper, we introduce a novel framework for low-level image processing and analysis. First, we process images with very simple, difference-based filter functions. Second, we fit the 2-parameter Weibull distribution to the filtered output. This maps each image to the 2D Weibull manifold. Third, we exploit the information geometry of this manifold and solve low-level image processing tasks as minimisation problems on point sets. As a proof-of-concept example, we examine the image autofocusing task. We propose appropriate cost functions together with a simple implicitly-constrained manifold optimisation algorithm and show that our framework compares very favourably against common autofocus methods from the literature. In particular, our approach exhibits the best overall performance in terms of combined speed and accuracy. | The Weibull manifold in low-level image processing: An application to automatic image focusing |
S0262885613000607 | Human Nonverbal Communication Computing aims to investigate how people exploit nonverbal aspects of their communication to coordinate their activities and social relationships. Nonverbal behavior plays important roles in message production and processing, relational communication, social interaction and networks, deception and impression management, and emotional expression. This is a fundamental yet challenging research topic. To effectively analyze Nonverbal Communication Computing, motion analysis methods have been widely investigated and employed. In this paper, we introduce the concept and applications of Nonverbal Communication Computing and also review some of the motion analysis methods employed in this area. They include face tracking, expression recognition, body reconstruction, and group activity analysis. In addition, we also discuss some open problems and the future directions of this area. | A review of motion analysis methods for human Nonverbal Communication Computing |
S0262885613000644 | In this paper, we tackle the problem of gait recognition based on the model-free approach. Numerous methods exist; they all lead to high-dimensional feature spaces. To address the problem of high-dimensional feature spaces, we propose the use of the Random Forest algorithm to rank features' importance. In order to efficiently search throughout subspaces, we apply a backward feature elimination search strategy. Our first experiments are carried out under unknown covariate conditions. The results suggest that the selected features increase the correct classification rate (CCR) of different existing classification methods. Secondary experiments are performed under unknown covariate conditions and viewpoints. Inspired by the locations of the features selected in our first experiments, we propose a simple mask. Experimental results demonstrate that the proposed mask gives satisfactory results for all angles of the probe and consequently is not view specific. We also show that our mask performs well when an uncooperative experimental setup is considered, as compared to state-of-the-art methods. As a consequence, we propose a panoramic gait recognition framework for unknown covariate conditions. Our results suggest that panoramic gait recognition can be performed under unknown covariate conditions. Our approach can greatly reduce the complexity of the classification problem while achieving fair correct classification rates when gait is captured under unknown conditions. | Feature subset selection applied to model-free gait recognition |
S0262885613000656 | In the spirit of recent work on contextual recognition and estimation, we present a method for estimating the pose of human hands, employing information about the shape of the object in the hand. Despite the fact that most applications of human hand tracking involve grasping and manipulation of objects, the majority of methods in the literature assume a free hand, isolated from the surrounding environment. Occlusion of the hand by grasped objects does in fact often pose a severe challenge to the estimation of hand pose. In the presented method, object occlusion is not only compensated for, it contributes to the pose estimation in a contextual fashion; this without an explicit model of object shape. Our hand tracking method is non-parametric, performing a nearest neighbor search in a large database (.. entries) of hand poses with and without grasped objects. The system, which operates in real time, is robust to self-occlusions, object occlusions and segmentation errors, and provides full hand pose reconstruction from monocular video. Temporal consistency in hand pose is taken into account, without explicitly tracking the hand in the high-dimensional pose space. Experiments show the non-parametric method to outperform other state-of-the-art regression methods, while operating at a significantly lower computational cost than comparable model-based hand tracking methods. | Non-parametric hand pose estimation with object context |
S0262885613000760 | Conventional particle filtering-based visual ego-motion estimation, or visual odometry, often suffers from large local linearization errors in the case of abrupt camera motion. The main contribution of this paper is to present a novel particle filtering-based visual ego-motion estimation algorithm that is especially robust to abrupt camera motion. The robustness to abrupt camera motion is achieved by multi-layered importance sampling via particle swarm optimization (PSO), which iteratively moves particles to higher likelihood regions without local linearization of the measurement equation. Furthermore, we make the proposed visual ego-motion estimation algorithm run in real time by reformulating the conventional vector space PSO algorithm in consideration of the geometry of the special Euclidean group SE(3), which is a Lie group representing the space of 3-D camera poses. The performance of our proposed algorithm is experimentally evaluated and compared with the local linearization and unscented particle filter-based visual ego-motion estimation algorithms on both simulated and real data sets. | Geometric particle swarm optimization for robust visual ego-motion estimation via particle filtering |
S0262885613000772 | This paper presents a novel skeleton pruning approach based on a 2D empirical mode like decomposition (EMD-like). The EMD algorithm can decompose any nonlinear and non-stationary data into a number of intrinsic mode functions (IMFs). When the object contour is decomposed by the empirical mode like decomposition, the IMFs of the object provide a workspace with very good properties for obtaining the object's skeleton. The theoretical properties and the performed experiments demonstrate that the obtained skeletons match hand-labeled skeletons provided by human subjects. Even in the presence of significant noise and shape variations, cuts and tears, the resulting skeletons have the same topology as the original skeletons. In particular, unlike many existing skeleton pruning methods, the proposed approach produces no spurious branches and, moreover, does not displace the skeleton points, which are all centers of maximal disks. | Empirical mode decomposition on skeletonization pruning |
S0262885613000784 | Automatic facial landmarking is a crucial prerequisite of many applications dedicated to face analysis. In this paper we describe a two-step method. In the first step, each landmark position in the image is predicted independently. To achieve fast and accurate localizations, we implement detectors based on a two-stage classifier and we use multiple kernel learning algorithms to combine multi-scale features. In the second step, to increase the robustness of the system, we introduce spatial constraints between landmarks. To this end, parameters of a deformable shape model are optimized using the first step outputs through a Gauss–Newton algorithm. Extensive experiments have been carried out on different databases (PIE, LFPW, Cohn-Kanade, FacePix and BioID), assessing the accuracy and the robustness of the proposed approach. They show that the proposed algorithm is not significantly affected by small rotations, facial expressions or natural occlusions and compares favorably with current state-of-the-art landmarking systems. | Multi-Kernel Appearance Model |
S0262885613000887 | In this paper the problem of human ear recognition in the Mid-wave infrared (MWIR) spectrum is studied in order to illustrate the advantages and limitations of the ear-based biometrics that can operate in day and night time environments. The main contributions of this work are two-fold: First, a dual-band database is assembled that consists of visible (baseline) and mid-wave IR left and right profile face images. Profile face images were collected using a high definition mid-wave IR camera that is capable of acquiring thermal imprints of human skin. Second, a fully automated, thermal imaging based, ear recognition system is proposed that is designed and developed to perform real-time human identification. The proposed system tests several feature extraction methods, namely: (i) intensity-based such as independent component analysis (ICA), principal component analysis (PCA), and linear discriminant analysis (LDA); (ii) shape-based such as scale invariant feature transform (SIFT); as well as (iii) texture-based such as local binary patterns (LBP), and local ternary patterns (LTP). Experimental results suggest that LTP (followed by LBP) yields the best performance (Rank1 = 80.68%) on manually segmented ears and (Rank1 = 68.18%) on ear images that are automatically detected and segmented. By fusing the matching scores obtained by LBP and LTP, the identification performance increases by about 5%. Although these results are promising, the outcomes of our study suggest that the design and development of automated ear-based recognition systems that can operate efficiently in the lower part of the passive IR spectrum are very challenging tasks. | On ear-based human identification in the mid-wave infrared spectrum |
S0262885613000899 | Current image matting methods based on color sampling use color to distinguish between foreground and background pixels. However, they fail when the corresponding color distributions overlap. Other methods that define correlation between neighboring pixels based on color aim to propagate the opacity parameter α from known pixels to unknown pixels. However, strong edges of textured regions may block the propagation of α. In this paper, a new matting strategy is proposed that delivers an accurate matte by considering texture as a feature that can complement color even if the foreground and background color distributions overlap and the image is a complex one with highly textured regions. The texture feature is extracted in such a way as to increase distinction between foreground and background regions. An objective function containing color and texture components is optimized to find the best foreground and background pair among a set of candidate pairs. The effectiveness of the proposed method is compared quantitatively as well as qualitatively with other matting methods by evaluating their results on a benchmark dataset and a set of complex images. The evaluations show that the proposed method performs best among state-of-the-art matting methods. | Using texture to complement color in image matting |
S0262885613000905 | In answer to the growing demand for machine vision applications in the latest generation of electronic devices endowed with camera platforms, several moving object detection strategies have been proposed in recent years. Among them, spatio-temporal based non-parametric methods have recently drawn the attention of many researchers. These methods, by combining a background model and a foreground model, achieve high-quality detections in sequences recorded with non-completely static cameras and in scenarios containing complex backgrounds. However, since they have very high associated memory and computational costs, they apply some simplifications in the background modeling process, therefore decreasing the quality of the modeling. Here, we propose a novel background modeling that is applicable to any spatio-temporal non-parametric moving object detection strategy. Through an efficient and robust method to dynamically estimate the bandwidth of the kernels used in the modeling, both the usability and the quality of previous approaches are improved. Furthermore, by adding a novel mechanism to selectively update the background model, the number of misdetections is significantly reduced, achieving an additional quality improvement. Empirical studies on a wide variety of video sequences demonstrate that the proposed background modeling significantly improves the quality of previous strategies while maintaining the computational requirements of the detection process. | Improved background modeling for real-time spatio-temporal non-parametric moving object detection strategies |
S0262885613000917 | We present a new method for multi-agent activity analysis and recognition that uses low level motion features and exploits the inherent structure and recurrence of motion present in multi-agent activity scenarios. Our representation is inspired by the need to circumvent the difficult problem of tracking in multi-agent scenarios and the observation that for many visual multi-agent recognition tasks, the spatiotemporal description of events irrespective of agent identity is sufficient for activity classification. We begin by learning generative models describing motion induced by individual actors or groups, which are considered to be agents. These models are Gaussian mixture distributions learned by linking clusters of optical flow to obtain contiguous regions of locally coherent motion. These possibly overlapping regions or segments, known as motion patterns are then used to analyze a scene by estimating their spatial and temporal relationships. The geometric transformations between two patterns are obtained by iteratively warping one pattern onto another, whereas the temporal relationships are obtained from their relative times of occurrence within videos. These motion segments and their spatio-temporal relationships are represented as a graph, where the nodes are the statistical distributions, and the edges have geometric transformations between motion patterns transformed to Lie space, as their attributes. Two activity instances are then compared by estimating the cost of attributed inexact graph matching. We demonstrate the application of our framework in the analysis of American football plays, a typical multi-agent activity. The performance analysis of our method shows that it is feasible and easily generalizable. | Multi-agent event recognition by preservation of spatiotemporal relationships between probabilistic models |
S0262885613000929 | Motion segmentation refers to the problem of separating the objects in a video sequence according to their motion. It is a fundamental problem of computer vision, since various systems focusing on the analysis of dynamic scenes include motion segmentation algorithms. In this paper we present a novel approach, where a video shot is temporally divided in successive and overlapping windows and motion segmentation is performed on each window respectively. This attribute renders the algorithm suitable even for long video sequences. In the last stage of the algorithm the segmentation results for every window are aggregated into a final segmentation. The presented algorithm can handle effectively asynchronous trajectories on each window even when they have no temporal intersection. The evaluation of the proposed algorithm on the Berkeley motion segmentation benchmark demonstrates its scalability and accuracy compared to the state of the art. | Motion-based segmentation of objects using overlapping temporal windows |
S0262885613001017 | In this article, a novel technique for fixation prediction and saccade generation is introduced. The proposed model simulates saccadic eye movement to incorporate the underlying eye movement mechanism into saliency estimation. To this end, a simple salience measure is introduced. Afterwards, we derive a system model for saccade generation and apply it in a stochastic filtering framework. The proposed model dynamically makes a saccade toward the next predicted fixation and produces saliency maps. Evaluation of the proposed model is carried out in terms of saccade generation performance and saliency estimation. The saccade generation evaluation reveals that the proposed model outperforms inhibition of return. Experiments also indicate that integrating the eye movement mechanism into saliency estimation boosts the results. Finally, comparison with several saliency models shows that the proposed model performs well. | Stochastic bottom–up fixation prediction and saccade generation |
S0262885613001029 | Tracking vehicles using a network of cameras with non-overlapping views is a challenging problem of great importance in traffic surveillance. One of the main challenges is accurate vehicle matching across the cameras. Even if the cameras have similar views on vehicles, vehicle matching remains a difficult task due to changes of their appearance between observations, and inaccurate detections and occlusions, which often occur in real scenarios. To be executed on smart cameras, the matching also has to be efficient in terms of the data and computation required. To address these challenges we present a low complexity method for vehicle matching robust against appearance changes and inaccuracies in vehicle detection. We efficiently represent vehicle appearances using signature vectors composed of Radon transform like projections of the vehicle images and compare them in a coarse-to-fine fashion using a simple combination of 1-D correlations. To deal with appearance changes we include multiple observations in each vehicle appearance model. These observations are automatically collected along the vehicle trajectory. The proposed signature vectors can be calculated in low-complexity smart cameras, by a simple scan-line algorithm of the camera software itself, and transmitted to the other smart cameras or to the central server. Extensive experiments based on real traffic surveillance videos recorded in a tunnel validate our approach. | Vehicle matching in smart camera networks using image projection profiles at multiple instances |
S0262885613001030 | This paper proposes a weighted scheme for elastic graph matching hand posture recognition. Visual features scattered on the elastic graph are assigned corresponding weights according to their relative ability to discriminate between gestures. The weights' values are determined using adaptive boosting. A dictionary representing the variability of each gesture class is expressed in the form of a bunch graph. The positions of the nodes in the bunch graph are determined using three techniques: manually, semi-automatically, and automatically. Experimental results show that the semi-automatic annotation method is efficient and accurate in terms of three performance measures: assignment cost, accuracy, and transformation error. In terms of recognition accuracy, our results show that the hierarchical weighting on features has more significant discriminative power than the classic method (uniform weighting). The weighted elastic graph matching (WEGM) approach was used to classify a lexicon of ten hand postures, and it was found that the poses were recognized with a recognition accuracy of 97.08% on average. Using the weighted scheme, computing cycles can be decreased by computing the features only for those nodes whose weight is relatively high and ignoring the remaining nodes. It was found that only 30% of the nodes need to be computed to obtain a recognition accuracy of over 90%. | Recognizing hand gestures using the weighted elastic graph matching (WEGM) method |
S0262885613001042 | We propose a scheme for comparing local neighborhoods (windows) of image points, to estimate optical flow using discrete optimization. The proposed approach is based on using large correlation windows with adaptive support-weights. We present three new types of weighting constraints derived from image gradient, color statistics and occlusion information. The first type provides gradient structure constraints that favor flow consistency across strong image gradients. The second type imposes perceptual color constraints that reinforce relationships among pixels in a window according to their color statistics. The third type yields occlusion constraints that reject pixels that are seen in one window but not in the other. All these constraints contribute to suppressing the effect of cluttered background, which is unavoidably included in large correlation windows. Experimental results demonstrate that each of the proposed constraints appreciably elevates the quality of the estimations, and that they jointly yield results that compare favorably to current techniques, especially on object boundaries. | Adaptive large window correlation for optical flow estimation with discrete optimization |
S0262885613001066 | Cheap, ubiquitous, high-resolution digital cameras have led to opportunities that demand camera-based text understanding, such as wearable computing or assistive technology. Perspective distortion is one of the main challenges for text recognition in camera captured images since the camera may often not have a fronto-parallel view of the text. We present a method for perspective recovery of text in natural scenes, where text can appear as isolated words, short sentences or small paragraphs (as found on posters, billboards, shop and street signs etc.). It relies on the geometry of the characters themselves to estimate a rectifying homography for every line of text, irrespective of the view of the text over a large range of orientations. The horizontal perspective foreshortening is corrected by fitting two lines to the top and bottom of the text, while the vertical perspective foreshortening and shearing are estimated by performing a linear regression on the shear variation of the individual characters within the text line. The proposed method is efficient and fast. We present comparative results with improved recognition accuracy against the current state-of-the-art. | Fast perspective recovery of text in natural scenes |
S0262885613001078 | This paper addresses the general problem of robust parametric model estimation from data that has both an unknown (and possibly majority) fraction of outliers as well as an unknown scale of measurement noise. We focus on computer vision applications from image correspondences, such as camera resectioning, estimation of the fundamental matrix or relative pose for 3D reconstruction, and estimation of 2D homographies for image registration and motion segmentation, although there are many other applications. In practice, these methods typically rely on a predefined inlier threshold because automatic scale detection is usually too unreliable or too slow. We propose a new method for robust estimation with automatic scale detection that is faster, more precise and more robust than previous alternatives, and show that it can be practically applied to these problems. | Efficient and robust model fitting with unknown noise scale |
S0262885613001091 | Shape-from-focus (SFF) is a passive technique widely used in image processing for obtaining depth-maps. This technique is attractive since it only requires a single monocular camera with focus control, thus avoiding correspondence problems typically found in stereo, as well as more expensive capturing devices. However, one of its main drawbacks is its poor performance when the change in the focus level is difficult to detect. Most research in SFF has focused on improving the accuracy of the depth estimation. Less attention has been paid to the problem of providing quality measures in order to predict the performance of SFF without prior knowledge of the recovered scene. This paper proposes a reliability measure aimed at assessing the quality of the depth-map obtained using SFF. The proposed reliability measure (the R-measure) analyzes the shape of the focus measure function and estimates the likelihood of obtaining an accurate depth estimation without any previous knowledge of the recovered scene. The proposed R-measure is then applied for determining the image regions where SFF will not perform correctly in order to discard them. Experiments with both synthetic and real scenes are presented. | Reliability measure for shape-from-focus |
S0262885613001108 | Range imaging sensors such as Kinect and time-of-flight cameras can produce aligned depth and color images in real time. However, the depth maps captured by such sensors contain numerous invalid regions and suffer from heavy noise. These defects more or less influence the use of depth information in practical applications. In order to enhance the depth maps, this paper proposes a new inpainting approach based on the fast marching method (FMM). We extend the inpainting model and the propagation strategy of FMM to incorporate color information for depth inpainting. An edge-preserving guided filter is further applied for noise reduction. To validate our algorithm, we perform experiments on both Kinect data and the Middlebury dataset which, respectively, provide qualitative and quantitative results. Meanwhile, we also compare it to the original FMM and two other state-of-the-art depth enhancement methods. Experimental results show that our method performs better than the local methods in terms of both visual and metric qualities, and it achieves visually comparable results to the time-consuming global method. | Guided depth enhancement via a fast marching method |
S0262885613001121 | We present a novel method for on-line, joint object tracking and segmentation in a monocular video captured by a possibly moving camera. Our goal is to integrate tracking and fine segmentation of a single, previously unseen, potentially non-rigid object of unconstrained appearance, given its segmentation in the first frame of an image sequence as the only prior information. To this end, we tightly couple an existing kernel-based object tracking method with Random Walker-based image segmentation. Bayesian inference mediates between tracking and segmentation, enabling effective data fusion of pixel-wise spatial and color visual cues. The fine segmentation of an object at a certain frame provides tracking with reliable initialization for the next frame, closing the loop between the two building blocks of the proposed framework. The effectiveness of the proposed methodology is evaluated experimentally by comparing it to a large collection of state of the art tracking and video-based object segmentation methods on the basis of a data set consisting of several challenging image sequences for which ground truth data is available. | Integrating tracking with fine object segmentation |
S0262885613001273 | Wildfire smoke detection is particularly important for early warning systems, because smoke usually rises before flames arise. Therefore, this paper presents an automatic wildfire smoke detection method using computer vision and pattern recognition techniques. First, candidate blocks are identified using key-frame differences and nonparametric smoke color models to detect smoke-colored moving objects. Subsequently, three-dimensional spatiotemporal volumes are built by combining the candidate blocks in the current key-frame with the corresponding blocks in previous frames. A histogram of oriented gradients (HOG) is extracted as a spatial feature, and a histogram of oriented optical flow (HOOF) is extracted as a temporal feature based on the fact that the direction of smoke diffusion is upward owing to thermal convection. From the spatiotemporal features of the training data, a visual codebook and a bag-of-features (BoF) histogram are generated using our proposed weighting scheme. For smoke verification, a random forest classifier is built during the training phase using the BoF histogram. The random forest with the BoF histogram increases detection accuracy compared with related methods and allows smoke detection to be carried out in near real time. | Spatiotemporal bag-of-features for early wildfire smoke detection |
S0262885613001285 | One of the greatest challenges in working on image segmentation algorithms is a comprehensive measure to evaluate their accuracy. Although some measures exist for this task, they can consider only one aspect of segmentation in the evaluation process. The performance of evaluation measures can be improved using a combination of single measures. However, a combination of single measures does not always lead to an appropriate criterion, and besides its effectiveness, the efficiency of the new measure should be considered. In this paper, a new combined evaluation measure based on genetic programming (GP) is proposed. Because of the nature of evolutionary approaches, the proposed approach allows nonlinear and linear combinations of other single evaluation measures and can search within many different combinations of basic operators to find a good enough one. We also propose a new fitness function that enables GP to search the search space effectively and efficiently. To test the method, the Berkeley and Weizmann datasets are used in several different experiments. Experimental results demonstrate that the GP-based approach is suitable for effective combination of single evaluation measures. | A new evaluation measure for color image segmentation based on genetic programming approach |
S0262885613001297 | Intensity inhomogeneity often appears in medical images, such as X-ray tomography and magnetic resonance (MR) images, due to technical limitations or artifacts introduced by the object being imaged. It is difficult to segment such images by traditional level set based segmentation models. In this paper, we propose a new level set method integrating local and global intensity information adaptively to segment inhomogeneous images. The local image information is associated with the intensity difference between the average of local intensity distribution and the original image, which can significantly increase the contrast between foreground and background. Thus, the images with intensity inhomogeneity can be efficiently segmented. What is more, to avoid the re-initialization of the level set function and shorten the computational time, a simple and fast level set evolution formulation is used in the numerical implementation. Experimental results on synthetic images as well as real medical images are shown in the paper to demonstrate the efficiency and robustness of the proposed method. | A new level set method for inhomogeneous image segmentation |
S0262885613001303 | There are many “machine vision” models of the visual saliency mechanism, which controls the process of selecting and allocating attention to the most “prominent” locations in the scene and helps humans interact with the visual environment efficiently (Itti and Koch, 2001; Gao et al., 2000). It is important to know which models perform the best in mimicking the saliency mechanism of the human visual system. There are several metrics to compare saliency models; however, results from different metrics vary widely in evaluating models. In this paper, a procedure is proposed for evaluating metrics for comparing saliency maps using a database of human fixations on approximately 1000 images. This procedure is then employed to identify the best metric. This best metric is then used to evaluate ten published bottom-up saliency models. An optimized level of blurriness and center-bias is found for each visual saliency model. Performance of the models is also analyzed on a dataset of 54 synthetic images. | Selection of a best metric and evaluation of bottom-up visual saliency models |
S0262885613001315 | This paper presents a novel approach for action recognition, localization and video matching based on a hierarchical codebook model of local spatio-temporal video volumes. Given a single example of an activity as a query video, the proposed method finds similar videos to the query in a target video dataset. The method is based on the bag of video words (BOV) representation and does not require prior knowledge about actions, background subtraction, motion estimation or tracking. It is also robust to spatial and temporal scale changes, as well as some deformations. The hierarchical algorithm codes a video as a compact set of spatio-temporal volumes, while considering their spatio-temporal compositions in order to account for spatial and temporal contextual information. This hierarchy is achieved by first constructing a codebook of spatio-temporal video volumes. Then a large contextual volume containing many spatio-temporal volumes (ensemble of volumes) is considered. These ensembles are used to construct a probabilistic model of video volumes and their spatio-temporal compositions. The algorithm was applied to three available video datasets for action recognition with different complexities (KTH, Weizmann, and MSR II) and the results were superior to other approaches, especially in the case of a single training example and cross-dataset action recognition. | Human activity recognition in videos using a single example |
S0262885613001327 | Building facade detection is an important problem in computer vision, with applications in mobile robotics and semantic scene understanding. In particular, mobile platform localization and guidance in urban environments can be enabled with accurate models of the various building facades in a scene. Toward that end, we present a system for detection, segmentation, and parameter estimation of building facades in stereo imagery. The proposed method incorporates multilevel appearance and disparity features in a binary discriminative model, and generates a set of candidate planes by sampling and clustering points from the image with Random Sample Consensus (RANSAC), using local normal estimates derived from Principal Component Analysis (PCA) to inform the planar models. These two models are incorporated into a two-layer Markov Random Field (MRF): an appearance- and disparity-based discriminative classifier at the mid-level, and a geometric model to segment the building pixels into facades at the high-level. By using object-specific stereo features, our discriminative classifier is able to achieve substantially higher accuracy than standard boosting or modeling with only appearance-based features. Furthermore, the results of our MRF classification indicate a strong improvement in accuracy for the binary building detection problem and the labeled planar surface models provide a good approximation to the ground truth planes. | Building facade detection, segmentation, and parameter estimation for mobile robot stereo vision |
S0262885613001339 | Text contained in scene images provides the semantic context of the images. For that reason, robust extraction of text regions is essential for successful scene text understanding. However, separating text pixels from scene images still remains as a challenging issue because of uncontrolled lighting conditions and complex backgrounds. In this paper, we propose a two-stage conditional random field (TCRF) approach to robustly extract text regions from the scene images. The proposed approach models the spatial and hierarchical structures of the scene text, and it finds text regions based on the scene text model. In the first stage, the system generates multiple character proposals for the given image by using multiple image segmentations and a local CRF model. In the second stage, the system selectively integrates the generated character proposals to determine proper character regions by using a holistic CRF model. Through the TCRF approach, we cast the scene text separation problem as a probabilistic labeling problem, which yields the optimal label configuration of pixels that maximizes the conditional probability of the given image. Experimental results indicate that our framework exhibits good performance in the case of the public databases. | Integrating multiple character proposals for robust scene text extraction |
S0262885613001431 | A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many data sets are constrained to small scale scenes or planar scenes that are not challenging to new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high resolution video, computing accurate sparse 3D reconstruction, video frame culling and down sampling, and test case selection. The evaluation process consists of applying a test 2-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process or any part thereof against data compatible with a realistic application. A collection of example data sets and evaluations is included to demonstrate the range of applications of the proposed system. | Evaluation of two-view geometry methods with automatic ground-truth generation |
S0262885613001443 | This paper presents an improved multiple instance learning (MIL) tracker representing the target with Distribution Fields (DFs) and building a weighted-geometric-mean MIL classifier. Firstly, we adopt the DF layer as the feature instead of the traditional Haar-like one to model the target, thanks to the DF specificity and the landscape smoothness. Secondly, we integrate sample importance into the weighted-geometric-mean MIL model and derive an online approach to maximize the bag likelihood by the AnyBoost gradient framework to select the most discriminative layers. Because the target model consists of selected discriminative layers, our tracker is more robust while needing fewer features than trackers based on traditional Haar-like features or the original DFs. The experimental results show that our tracker outperforms five state-of-the-art trackers on several challenging video sequences. | Visual tracking based on Distribution Fields and online weighted multiple instance learning |
S0262885613001455 | Topological Active Nets are promising parametric deformable models that integrate features of region-based and boundary-based segmentation techniques. Problems associated with the complexity of the model, however, have limited their utility. This paper introduces an extension of the model, defining a new behavior for changing its topology, as well as a novel external force definition and a new local search optimization procedure. In particular, we propose a new automatic pre-processing phase, a new external energy term based on the Extended Vector Field Convolution, node movement constraints to avoid crossing links, and different procedures to perform link cuts and hole detection. Moreover, the new local search procedure also incorporates heuristics to correct the position of possibly misplaced nodes. The proposal has been tested on 18 synthetic images which present different segmentation difficulties along with 3 real medical images. Its performance has been compared with that of the original Topological Active Net optimization approach along with both state-of-the-art parametric and geometric active contours: two snakes (based on Gradient Vector Flow and Vector Field Convolution), and two level sets (Chan and Vese, and Geodesic Active Contour). Our new method outperforms all the others for the given image sets, in terms of segmentation accuracy measured by using four standard segmentation metrics. | Extended Topological Active Nets |
S0262885613001467 | Many recent image retrieval methods are based on the “bag-of-words” (BoW) model with some additional spatial consistency checking. This paper proposes a more accurate similarity measurement that takes into account the spatial layout of visual words in an offline manner. The similarity measurement is embedded in the standard pipeline of the BoW model, and improves two features of the model: i) latent visual words are added to a query based on spatial co-occurrence, to improve query recall; and ii) weights of reliable visual words are increased to improve the precision. The combination of these methods leads to a more accurate measurement of image similarity. This is similar in concept to the combination of query expansion and spatial verification, but does not require query-time processing, which is too expensive to apply to the full list of ranked results. Experimental results demonstrate the effectiveness of our proposed method on three public datasets. | Spatially aware feature selection and weighting for object retrieval |
S0262885613001479 | In this paper we present a comparative study of two approaches for road traffic density estimation. The first approach uses microscopic parameters, which are extracted using both motion detection and tracking methods from a video sequence; the second approach uses macroscopic parameters, which are directly estimated by analyzing the global motion in the video scene. The extracted parameters are applied to three classifiers, the K-nearest neighbor (KNN) classifier, the learning vector quantization (LVQ) classifier and the support vector machine (SVM) classifier, in order to classify the road traffic into three categories: light, medium and heavy. The methods are compared based on their robustness in classifying different road traffic states. The goal of this study is to propose an algorithm for road traffic density estimation with high precision. | Road traffic density estimation using microscopic and macroscopic parameters |
S0262885613001480 | Discriminative human pose estimation is the problem of inferring the 3D articulated pose of a human directly from an image feature. This is a challenging problem due to the highly non-linear and multi-modal mapping from the image feature space to the pose space. To address this problem, we propose a model employing a mixture of Gaussian processes where each Gaussian process models a local region of the pose space. By employing the models in this way we are able to overcome the limitations of Gaussian processes applied to human pose estimation: their O(N³) time complexity and their uni-modal predictive distribution. Our model is able to give a multi-modal predictive distribution where each mode is represented by a different Gaussian process prediction. A logistic regression model is used to give a prior over each expert prediction in a similar fashion to previous mixture of expert models. We show that this technique outperforms existing state of the art regression techniques on human pose estimation data sets for ballet dancing, sign language and the HumanEva data set. | Mixtures of Gaussian process models for human pose estimation |
S0262885613001492 | This paper examines the issue of face, speaker and bi-modal authentication in mobile environments when there is significant condition mismatch. We introduce this mismatch by enrolling client models on high quality biometric samples obtained on a laptop computer and authenticating them on lower quality biometric samples acquired with a mobile phone. To perform these experiments we develop three novel authentication protocols for the large publicly available MOBIO database. We evaluate state-of-the-art face, speaker and bi-modal authentication techniques and show that inter-session variability modelling using Gaussian mixture models provides a consistently robust system for face, speaker and bi-modal authentication. It is also shown that multi-algorithm fusion provides a consistent performance improvement for face, speaker and bi-modal authentication. Using this bi-modal multi-algorithm system we derive a state-of-the-art authentication system that obtains a half total error rate of 6.3% and 1.9% for Female and Male trials, respectively. | Bi-modal biometric authentication on mobile phones in challenging conditions |
S0262885613001509 | Recent research places increasing emphasis on analyzing multiple features to improve face recognition (FR) performance. One popular scheme is to extend the sparse representation based classification framework with various sparse constraints. Although these methods jointly study multiple features through the constraints, they still process each feature individually and thus overlook possible high-level relationships among different features. It is reasonable to assume that the low-level features of facial images, such as edge information and the smoothed/low-frequency image, can be fused into a more compact and more discriminative representation based on the latent high-level relationship. FR on the fused features is anticipated to produce better performance than FR on the original features, since they provide more favorable properties. Focusing on this, we propose two different strategies which start from fusing multiple features and then exploit the dictionary learning (DL) framework for better FR performance. The first strategy is a simple and efficient two-step model, which learns a fusion matrix from training face images to fuse multiple features and then learns class-specific dictionaries based on the fused features. The second is a more effective but more computationally demanding model that learns the fusion matrix and the class-specific dictionaries simultaneously within an iterative optimization procedure. In addition, the second model separates the shared common components from the class-specific dictionaries to enhance the discriminative power of the dictionaries. The proposed strategies, which integrate the multi-feature fusion process and the dictionary learning framework for FR, realize the following goals: (1) exploiting multiple features of face images for better FR performance; (2) learning a fusion matrix to merge the features into a more compact and more discriminative representation; (3) learning class-specific dictionaries with consideration of the common patterns for better classification performance. We perform a series of experiments on publicly available databases to evaluate our methods, and the experimental results demonstrate the effectiveness of the proposed models. | Integration of multi-feature fusion and dictionary learning for face recognition
S0262885613001510 | In this paper, we propose a visual tracking algorithm by incorporating the appearance information gathered from two collaborative feature sets and exploiting its geometric structures. A structured visual dictionary (SVD) can be learned from both appearance and geometric structure, thereby enhancing its discriminative strength between the foreground object and the background. Experimental results show that the proposed tracking algorithm using SVD (SVDTrack) performs favorably against the state-of-the-art methods. | Learning structured visual dictionary for object tracking |
S0262885613001522 | This paper proposes a method for keyword spotting in off-line Chinese handwritten documents using a contextual word model, which measures the similarity between the query word and every candidate word in the document by combining a character classifier with geometric and linguistic contexts. The geometric context model characterizes single-character likelihood and between-character relationships. The linguistic model utilizes the dependency of the word on the externally adjacent characters. The combining weights are optimized on training documents. Experiments on a large handwriting database, CASIA-HWDB, demonstrate the effectiveness of the proposed method and justify the benefits of the geometric and linguistic contexts. Compared to transcription-based text search, the proposed method provides a higher recall rate, and for spotting four-character words it provides both higher precision and higher recall. | Keyword spotting in unconstrained handwritten Chinese documents using contextual word model
S0262885613001534 | The estimation of camera orientation from image lines under the anthropic (man-made) environment assumption is a well-known problem, but traditional methods to solve it depend on line extraction, a relatively complex procedure that is also incompatible with distorted images. We propose Corisco, a monocular orientation estimation method based on edgels instead of lines. Edgels are points sampled from image edges together with their tangential directions, extracted in Corisco using a grid mask. The estimation aligns the measured edgel directions with the directions predicted from the orientation, using a known camera model. Corisco uses the M-estimation technique to define an objective function that is optimized by two algorithms in sequence: RANSAC, which gives robustness and flexibility to Corisco, and FilterSQP, which performs a continuous optimization to refine the initial estimate, using closed formulas for the function derivatives. Corisco is the first edgel-based method able to analyze images with any camera model, and it also allows a compromise between speed and accuracy, so that its performance can be tuned according to the application requirements. Our experiments demonstrate the effectiveness of Corisco with various camera models, and its performance surpasses similar edgel-based methods. The mean orientation error was below 2° for execution times above 8 s on a conventional computer, rising above 3° for execution times under 2 s. | Corisco: Robust edgel-based orientation estimation for generic camera models
S0262885613001546 | Using image hierarchies for visual categorization has been shown to have a number of important benefits. Doing so enables a significant gain in efficiency (e.g., logarithmic with the number of categories [16,12]) or the construction of a more meaningful distance metric for image classification [17]. A critical question, however, still remains controversial: would structuring data in a hierarchical sense also help classification accuracy? In this paper we address this question and show that the hierarchical structure of a database can indeed be successfully used to enhance classification accuracy using a sparse approximation framework. We propose a new formulation for sparse approximation where the goal is to discover the sparsest path within the hierarchical data structure that best represents the query object. Extensive quantitative and qualitative experimental evaluation on a number of branches of the Imagenet database [7] as well as on the Caltech-256 [12] demonstrates our theoretical claims and shows that our approach produces better hierarchical categorization results than competing techniques. | Hierarchical classification of images by sparse approximation
S0262885613001637 | Methods designed for tracking in dense crowds typically employ prior knowledge to make this difficult problem tractable. In this paper, we show that it is possible to handle this problem, without any priors, by utilizing the visual and contextual information already available in such scenes. We propose a novel tracking method tailored to dense crowds which provides an alternative and complementary approach to methods that require modeling of crowd flow and, simultaneously, is less likely to fail in the case of dynamic crowd flows and anomalies by relying minimally on previous frames. Our method begins with the automatic identification of prominent individuals in the crowd that are easy to track. Then, we use Neighborhood Motion Concurrence to model the behavior of individuals in a dense crowd, which predicts the position of an individual based on the motion of its neighbors. When the individual moves with the crowd flow, we use Neighborhood Motion Concurrence to predict motion, while leveraging five-frame instantaneous flow in the case of dynamically changing flows and anomalies. All these aspects are then embedded in a framework which imposes a hierarchy on the order in which the positions of individuals are updated. Experiments on a number of sequences show that the proposed solution can track individuals in dense crowds without requiring any pre-processing, making it a suitable online tracking algorithm for dense crowds. | Tracking in dense crowds using prominence and neighborhood motion concurrence
S0262885613001649 | This paper deals with model-based pose estimation (or camera localization). We propose a direct approach that takes into account the image as a whole. For this, we consider a similarity measure, the mutual information. Mutual information is a measure of the quantity of information shared by two signals (or two images in our case). Exploiting this measure allows our method to deal with different image modalities (real and synthetic). Furthermore, it handles occlusions and illumination changes. Results with synthetic (benchmark) and real image sequences, with static or mobile camera, demonstrate the robustness of the method and its ability to produce stable and precise pose estimations. | Direct model based visual tracking and pose estimation using mutual information |
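The mutual information term at the heart of such an approach can be estimated from the joint gray-level histogram of the two images; a minimal sketch (the bin count is an arbitrary choice here):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information of two equally sized grayscale images,
    estimated from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0                          # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In a tracking loop, the pose estimate would be the one maximizing this measure between the current image and a rendering of the model; only the MI ingredient is shown here.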
S0262885613001650 | Multipath interference of light is the cause of important errors in Time of Flight (ToF) depth estimation. This paper proposes an algorithm that removes multipath distortion from a single depth map obtained by a ToF camera. Our approach does not require information about the scene, apart from ToF measurements. The method is based on fitting ToF measurements with a radiometric model. Model inputs are depth values free from multipath interference whereas model outputs consist of synthesized ToF measurements. We propose an iterative optimization algorithm that obtains model parameters that best reproduce ToF measurements, recovering the depth of the scene without distortion. We show results with both synthetic and real scenes captured by commercial ToF sensors. In all cases, our algorithm accurately corrects the multipath distortion, obtaining depth maps that are very close to ground truth data. | Modeling and correction of multipath interference in time of flight cameras |
S0262885613001662 | This paper presents a thorough study of gender classification methodologies evaluated on neutral, expressive and partially occluded faces, used in all possible arrangements of training and testing roles. A comprehensive comparison of two representation approaches (global and local), three types of features (grey levels, PCA and LBP), three classifiers (1-NN, PCA+LDA and SVM) and two performance measures (CCR and d′) is provided over single- and cross-database experiments. The experiments revealed some interesting findings, which were supported by three non-parametric statistical tests: when training and test sets contain different types of faces, local models using the 1-NN rule outperform global approaches, even those using SVM classifiers; however, with the same type of faces, even if the acquisition conditions are diverse, the statistical tests could not reject the null hypothesis of equal performance of global SVMs and local 1-NNs. | Face gender classification: A statistical study when neutral and distorted faces are combined for training and testing purposes
S0262885613001741 | Since 2005, human and computer performance has been systematically compared as part of face recognition competitions, with results being reported for both still and video imagery. The key results from these competitions are reviewed. To analyze performance across studies, the cross-modal performance analysis (CMPA) framework is introduced. The CMPA framework is applied to experiments that were part of a face recognition competition. The analysis shows that for matching frontal faces in still images, algorithms are consistently superior to humans. For video and difficult still face pairs, humans are superior. Finally, based on the CMPA framework and a face performance index, we outline a challenge problem for developing algorithms that are superior to humans for the general face recognition problem. | Comparison of human and computer performance across face recognition experiments
S0262885613001753 | The analysis of regular texture images is cast in a model comparison framework. Texel lattice hypotheses are used to define statistical models which are compared in terms of their ability to explain the images. This approach is used to estimate lattice geometry from patterns that exhibit translational symmetry (regular textures). It is also used to determine whether images consist of such regular textures. A method based on this approach is described in which lattice hypotheses are generated using analysis of peaks in the image autocorrelation function, statistical models are based on Gaussian or Gaussian mixture clusters, and model comparison is performed using the marginal likelihood as approximated by the Bayes Information Criterion (BIC). Experiments on public domain images and a commercial textile image archive demonstrate substantially improved accuracy compared to several alternative methods. | Lattice estimation from images of patterns that exhibit translational symmetry |
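A minimal sketch of BIC-based model comparison using scikit-learn mixtures; in the paper each candidate model would be induced by a texel-lattice hypothesis, whereas here plain GMMs of different sizes stand in for those hypotheses.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_model_by_bic(data, candidate_ks=(1, 2, 4, 8)):
    """Pick the mixture size whose model best explains `data` under BIC.

    `data` is an (n_samples, n_features) array; `candidate_ks` are the
    competing model complexities (hypothetical stand-ins for lattice models).
    """
    best_k, best_bic = None, np.inf
    for k in candidate_ks:
        gmm = GaussianMixture(n_components=k, random_state=0).fit(data)
        bic = gmm.bic(data)   # lower BIC = better fit after complexity penalty
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k, best_bic
```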
S0262885613001765 | Despite the successes of the last two decades, state-of-the-art face detectors still have problems dealing with images in the wild due to large appearance variations. Instead of leaving appearance variations directly to statistical learning algorithms, we propose a hierarchical part based structural model to capture them explicitly. The model enables part subtype options to handle local appearance variations, such as a closed or open mouth, and part deformation to capture global appearance variations, such as pose and expression. In detection, a candidate window is fitted to the structural model to infer the part locations and part subtypes, and the detection score is then computed based on the fitted configuration. In this way, the influence of appearance variation is reduced. Besides the face model, we exploit the co-occurrence between face and body, which helps to handle large variations, such as heavy occlusions, to further boost face detection performance. We present a phrase based representation for body detection, and propose a structural context model to jointly encode the outputs of the face detector and body detector. Benefiting from the rich structural face and body information, as well as the discriminative structural learning algorithm, our method achieves state-of-the-art performance on FDDB, AFW and a self-annotated dataset, under wide comparisons with commercial and academic methods. | Face detection by structural models
S0262885613001777 | Identical twins pose a great challenge to face recognition due to the high similarity of their appearances. Motivated by the psychological findings that facial motion contains identity signatures, and by the observation that twins may look alike but behave differently, we develop a talking profile that uses the identity signatures in facial motion to distinguish between identical twins. The talking profile for a subject is defined as a collection of multiple types of usual face motions from video. Given two talking profiles, we compute the similarities of the same type of face motion in both profiles and then perform the classification based on those similarities. To compute the similarity of each type of face motion, we give higher weights to more abnormal motions, which are assumed to carry more identity signature information. Our approach, named the Exceptional Motion Reporting Model (EMRM), is independent of appearance, and can handle realistic facial motion in human subjects, with no restrictions on motion speed or video frame rate. We first conduct our experiments on a video database containing 39 pairs of twins. The experimental results demonstrate that identical twins can be distinguished better by talking profiles than by a traditional appearance based approach. Moreover, we collected a non-twin YouTube dataset with 99 subjects. The results on this dataset verified that the talking profile is a potential biometric. We further conducted an experiment to test the robustness of the talking profile over time, collecting videos from 10 subjects that span years or even decades of their lives. The results indicated the robustness of the talking profile to the aging process. | A talking profile to distinguish identical twins
S0262885613001789 | This paper proposes an unsupervised variational segmentation approach for color–texture images. To improve the description ability, the compact multi-scale structure tensor, total variation flow, and color information are integrated to extract color–texture information. Since heterogeneous image objects and nonlinear variations exist in color–texture images, it is not appropriate to use a single constant or multiple constants in the Chan and Vese (CV) model to describe each phase [1,2]. Therefore, a multiphase successive active contour model (MSACM) based on the multivariable Gaussian distribution is presented to describe each phase. Since the geodesic active contour (GAC) has a stronger ability to capture boundaries, we incorporate the GAC into the MSACM to inherit the advantages of both edge-based and region-based models and to enhance the detection of concave edges. Although the multiphase optimization of our proposed MSACM is an NP-hard problem, we can solve it discretely and approximately by a multilayer graph method. In addition, to segment color–texture images automatically, an adaptive iteration convergence criterion is designed by incorporating the local Kullback–Leibler distance and the global phase label, so that the convergence of the segmentation process can be controlled. Compared to state-of-the-art unsupervised segmentation methods on a substantial set of color–texture images, our approach achieves significantly better performance in capturing homogeneous regions and smooth boundaries, and in accuracy. | Unsupervised multiphase color–texture image segmentation based on variational formulation and multilayer graph
S0262885613001790 | The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science. | Nonverbal social withdrawal in depression: Evidence from manual and automatic analyses |
S0262885613001807 | Facial expression recognition (FER) systems must ultimately work on real data in uncontrolled environments although most research studies have been conducted on lab-based data with posed or evoked facial expressions obtained in pre-set laboratory environments. It is very difficult to obtain data in real-world situations because privacy laws prevent unauthorized capture and use of video from events such as funerals, birthday parties, marriages etc. It is a challenge to acquire such data on a scale large enough for benchmarking algorithms. Although video obtained from TV or movies or postings on the World Wide Web may also contain ‘acted’ emotions and facial expressions, they may be more ‘realistic’ than lab-based data currently used by most researchers. Or is it? One way of testing this is to compare feature distributions and FER performance. This paper describes a database that has been collected from television broadcasts and the World Wide Web containing a range of environmental and facial variations expected in real conditions and uses it to answer this question. A fully automatic system that uses a fusion based approach for FER on such data is introduced for performance evaluation. Performance improvements arising from the fusion of point-based texture and geometry features, and the robustness to image scale variations are experimentally evaluated on this image and video dataset. Differences in FER performance between lab-based and realistic data, between different feature sets, and between different train-test data splits are investigated. | Facial expression recognition experiments with data from television broadcasts and the World Wide Web |
S0262885613001819 | In this paper, we present an unsupervised distance learning approach for improving the effectiveness of image retrieval tasks. We propose a Reciprocal kNN Graph algorithm that considers the relationships among ranked lists in the context of a k-reciprocal neighborhood. The similarity is propagated among neighbors considering the geometry of the dataset manifold. The proposed method can be used both for re-ranking and rank aggregation tasks. Unlike traditional diffusion process methods, which require matrix multiplication operations, our algorithm takes only a subset of ranked lists as input, presenting linear complexity in terms of computational and storage requirements. We conducted a large evaluation protocol involving shape, color, and texture descriptors, various datasets, and comparisons with other post-processing approaches. The re-ranking and rank aggregation algorithms yield better results in terms of effectiveness performance than various state-of-the-art algorithms recently proposed in the literature, achieving bull's eye and MAP scores of 100% on the well-known MPEG-7 shape dataset. | Unsupervised manifold learning using Reciprocal kNN Graphs in image re-ranking and rank aggregation tasks |
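A sketch of the k-reciprocal-neighborhood ingredient, assuming a precomputed distance matrix with zero diagonal; the full algorithm additionally propagates similarity along the resulting graph and iterates over the updated ranked lists.

```python
import numpy as np

def reciprocal_knn_graph(dist, k=5):
    """Boolean adjacency R where R[i, j] is True iff i and j are k-reciprocal
    neighbors under `dist` (assumes dist[i, i] == 0 so self ranks first)."""
    order = np.argsort(dist, axis=1)
    knn = np.zeros_like(dist, dtype=bool)
    rows = np.arange(dist.shape[0])[:, None]
    knn[rows, order[:, 1:k + 1]] = True   # skip column 0, the element itself
    return knn & knn.T                    # keep only mutual (reciprocal) edges
```

In a re-ranking pass, distances between items connected in this graph would be contracted before ranked lists are recomputed; that update rule is a design choice of the full method, not shown here.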
S0262885613001820 | We propose a measure of the information gained through biometric matching systems. Firstly, we discuss how information about the identity of a person is derived from biometric samples through a biometric system, and define the “biometric system entropy” or BSE based on mutual information. We present several theoretical properties and interpretations of the BSE, and show how to design a biometric system which maximizes the BSE. Then we prove that the BSE can be approximated asymptotically by the relative entropy D(f_G(x) ∥ f_I(x)), where f_G(x) and f_I(x) are the probability mass functions of matching scores between samples from the same individual (genuine) and among the population (impostor), respectively. We also discuss how to evaluate the BSE of a biometric system and show an experimental evaluation of the BSE of face, fingerprint and multimodal biometric systems. | A measure of information gained through biometric systems
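The relative-entropy approximation of the BSE can be estimated directly from genuine and impostor score histograms; a minimal sketch (the bin count and the smoothing constant are arbitrary choices):

```python
import numpy as np

def approximate_bse(genuine_scores, impostor_scores, bins=50, eps=1e-9):
    """Approximate the BSE by the relative entropy D(f_G || f_I), estimated
    from histograms of genuine and impostor matching scores (in bits)."""
    scores = np.concatenate([genuine_scores, impostor_scores])
    edges = np.linspace(scores.min(), scores.max(), bins + 1)
    f_g, _ = np.histogram(genuine_scores, bins=edges)
    f_i, _ = np.histogram(impostor_scores, bins=edges)
    f_g = f_g.astype(float) + eps   # smoothing avoids log(0) and division by zero
    f_i = f_i.astype(float) + eps
    f_g /= f_g.sum()
    f_i /= f_i.sum()
    return float(np.sum(f_g * np.log2(f_g / f_i)))
```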
S0262885614000031 | In this paper, we address how to calibrate a fixed multi-camera system and simultaneously achieve a Euclidean reconstruction from a set of segments. It is well known that only a projective reconstruction can be achieved without any prior information. Here, known segment lengths are exploited to upgrade the projective reconstruction to a Euclidean reconstruction and simultaneously calibrate the intrinsic and extrinsic camera parameters. First, a DLT (Direct Linear Transformation)-like algorithm for the Euclidean upgrading from segment lengths is derived in a simple way. Although the intermediate results in the DLT-like algorithm are essentially equivalent to the quadric of segments (QoS), the DLT-like algorithm is of higher accuracy than the existing linear algorithms derived from the QoS because of a more accurate way of extracting the plane at infinity from the intermediate results. Then, to further improve the accuracy of the Euclidean upgrading, two weighted DLT-like algorithms are presented by weighting the linear constraint equations in the original DLT-like algorithm. Finally, using the results of these linear algorithms as initial values, a new weighted nonlinear algorithm for Euclidean upgrading is explored to recover the Euclidean structure more accurately. Extensive experimental results on both synthetic data and real image data demonstrate the effectiveness of our proposed algorithms in Euclidean upgrading and multi-camera calibration. | Euclidean upgrading from segment lengths: DLT-like algorithm and its variants
S0262885614000043 | Confronted with the explosive growth of web images, web image annotation has become a critical research issue for image search and indexing. Sparse feature selection plays an important role in improving the efficiency and performance of web image annotation. Meanwhile, it is beneficial to develop an effective mechanism to leverage unlabeled training data for large-scale web image annotation. In this paper we propose a novel sparse feature selection framework for web image annotation, namely sparse Feature Selection based on Graph Laplacian (FSLG). FSLG applies the l2,1/2-matrix norm in the sparse feature selection algorithm to select the most sparse and discriminative features. Additionally, graph-Laplacian-based semi-supervised learning is used to exploit both labeled and unlabeled data to enhance annotation performance. An efficient iterative algorithm is designed to optimize the objective function. Extensive experiments on two web image datasets are performed and the results illustrate that our method is promising for large-scale web image annotation. | Sparse feature selection based on graph Laplacian for web image annotation
S0262885614000055 | Mobile devices, namely phones and tablets, have long gone “smart”. Their growing use is both a cause and an effect of their technological advancement. Among other consequences, their increasing ability to store and exchange sensitive information has spurred interest in exploiting their vulnerabilities, and a corresponding need to protect users and their data through secure protocols for access and identification on mobile platforms. Face and iris recognition are especially attractive, since they are sufficiently reliable and only require the camera that normally equips such devices. By contrast, the alternative use of fingerprints requires a dedicated sensor. Moreover, some kinds of biometrics lend themselves to uses that go beyond security. Ambient intelligence services bound to the recognition of a user, as well as social applications, such as automatic photo tagging on social networks, can especially exploit face recognition. This paper describes FIRME (Face and Iris Recognition for Mobile Engagement), a biometric application based on multimodal recognition of face and iris, which is designed to be embedded in mobile devices. Both the design and the implementation of FIRME rely on a modular architecture, whose workflow includes separate and replaceable packages. The starting package handles image acquisition. From this point, different branches perform detection, segmentation, feature extraction, and matching for face and iris separately. For the face, an antispoofing step is also performed after segmentation. Finally, results from the two branches are fused. In order to address security-critical applications as well, FIRME can perform continuous reidentification and best sample selection. To further address the possibly limited resources of mobile devices, all algorithms are optimized to be computationally light. | FIRME: Face and Iris Recognition for Mobile Engagement
S0262885614000067 | The human visual system (HVS) is quite adept at swiftly detecting objects of interest in complex visual scenes. Simulating the human visual system to detect visually salient regions of an image has been one of the active topics in computer vision. Inspired by the random-sampling-based bagging ensemble learning method, an ensemble dictionary learning (EDL) framework for saliency detection is proposed in this paper. Instead of learning a universal dictionary, which requires a large number of training samples to be collected from natural images, multiple over-complete dictionaries are independently learned with a small portion of randomly selected samples from the input image itself, resulting in more flexible multiple sparse representations for each image patch. To boost the distinctness of salient patches from background regions, we present a reconstruction-residual-based method for dictionary atom reduction. Meanwhile, the multiple probabilistic saliency responses obtained for each patch are combined from a probabilistic perspective to achieve better predictive performance on salient regions. Experimental results on several open test datasets and on natural images demonstrate that the proposed EDL for saliency detection is highly competitive with existing state-of-the-art algorithms. | Ensemble dictionary learning for saliency detection
S0262885614000134 | Dense disparity maps are required by many 3D applications. In this paper, a novel stereo matching algorithm is presented. The main contributions of this work are three-fold. Firstly, a new cost-volume filtering method is proposed, guided by a novel concept named “two-level local adaptation”. Secondly, a novel post-processing method is proposed to handle both occlusions and textureless regions. Thirdly, a parallel algorithm is proposed to efficiently calculate an integral image on the GPU, accelerating the whole cost-volume filtering process. The overall stereo matching algorithm generates state-of-the-art results. At the time of submission, it ranked 10th among about 152 algorithms on the Middlebury stereo evaluation benchmark, and first among all local methods. Implemented on an NVIDIA Tesla C2050 GPU, the entire algorithm achieves over 30 million disparity estimates per second (MDE/s). | Fast stereo matching using adaptive guided filtering
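For reference, the summed-area table that the paper's GPU kernel computes in parallel can be written sequentially in a few lines; this CPU sketch shows the data structure and the O(1) box-sum lookup, not the parallel algorithm.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row / left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Box filters built this way have a cost independent of the window size, which is what makes cost-volume filtering at many disparity levels tractable.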
S0262885614000146 | In this paper, we propose an album-oriented face-recognition model that exploits the album structure for face recognition in online social networks. Albums, usually associated with pictures of a small group of people at a certain event or occasion, provide vital information that can be used to effectively reduce the possible list of candidate labels. We show how this intuition can be formalized into a model that expresses a prior on how albums tend to have many pictures of a small number of people. We also show how it can be extended to include other information available in a social network. Using two real-world datasets independently drawn from Facebook, we show that this model is broadly applicable and can significantly improve recognition rates. | Exploring album structure for face recognition in online social networks |
S0262885614000158 | This paper proposes a new method for self-calibrating a set of stationary non-rotating zooming cameras. This is a realistic configuration, usually encountered in surveillance systems, in which each zooming camera is physically attached to a static structure (wall, ceiling, robot, or tripod). In particular, a linear, yet effective method to recover the affine structure of the observed scene from two or more such stationary zooming cameras is presented. The proposed method relies solely on point correspondences across images, and no knowledge about the scene is required. Our method exploits the mostly translational displacement of the so-called principal plane of each zooming camera to estimate the location of the plane at infinity. The principal plane of a camera, at any given setting of its zoom, is encoded in its corresponding perspective projection matrix, from which it can be easily extracted. As a displacement of the principal plane of a camera under the effect of zooming allows the identification of a pair of parallel planes, each zooming camera can be used to locate a line on the plane at infinity. Hence, two or more such zooming cameras in general positions allow an estimate of the plane at infinity to be obtained, making it possible, under the assumption of zero skew and/or known aspect ratio, to linearly calculate the camera parameters. Finally, the parameters of the cameras and the coordinates of the plane at infinity are refined through a nonlinear least-squares optimization procedure. The results of our extensive experiments using both simulated and real data are also reported in this paper. | Self-calibration of stationary non-rotating zooming cameras
S0262885614000171 | When estimating human gaze directions from captured eye appearances, most existing methods assume a fixed head pose because head motion changes eye appearance greatly and makes the estimation inaccurate. To handle this difficult problem, in this paper, we propose a novel method that performs accurate gaze estimation without restricting the user's head motion. The key idea is to decompose the original free-head motion problem into subproblems, including an initial fixed head pose problem and subsequent compensations to correct the initial estimation biases. For the initial estimation, automatic image rectification and joint alignment with gaze estimation are introduced. Then compensations are done by either learning-based regression or geometric-based calculation. The merit of using such a compensation strategy is that the training requirement to allow head motion is not significantly increased; only capturing a 5-s video clip is required. Experiments are conducted, and the results show that our method achieves an average accuracy of around 3° by using only a single camera. | Learning gaze biases with head motion for head pose-free gaze estimation |
S0262885614000183 | This paper focuses on activity recognition when multiple views are available. In the literature, this is often performed using two different approaches. In the first one, the systems build a 3D reconstruction and match that. However, there are practical disadvantages to this methodology since a sufficient number of overlapping views is needed to reconstruct, and one must calibrate the cameras. A simpler alternative is to match the frames individually. This offers significant advantages in the system architecture (e.g., it is easy to incorporate new features and camera dropouts can be tolerated). In this paper, the second approach is employed and a novel fusion method is proposed. Our fusion method collects the activity labels over frames and cameras, and then fuses activity judgments as the sequence label. It is shown that there is no performance penalty when a straightforward weighted voting scheme is used. In particular, when there are enough overlapping views to generate a volumetric reconstruction, our recognition performance is comparable with that produced by volumetric reconstructions. However, if the overlapping views are not adequate, the performance degrades fairly gracefully, even in cases where test and training views do not overlap. | Recognizing activities in multiple views with fusion of frame judgments |
S0262885614000195 | This paper presents an on-line adaptive metric to estimate the similarity between the target representation model and the new image received at every time instant. The similarity measure, also known as the observation likelihood, plays a crucial role in the accuracy and robustness of visual tracking. In this work, an L2-norm is adaptively weighted at every matching step to calculate the similarity between the target model and image descriptors. A histogram-based classifier is learned on-line to categorize the matching errors into three classes, namely i) image noise, ii) significant appearance changes, and iii) outliers. A robust weight is assigned to each matching error based on the class label. Therefore, the proposed similarity measure is able to reject outliers and adapt to the target model by discriminating appearance changes from undesired outliers. The experimental results show the superiority of the proposed method with respect to accuracy and robustness in the presence of severe and long-term occlusion and image noise, in comparison with commonly used robust regressors. | Adaptive on-line similarity measure for direct visual tracking
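A minimal sketch of the class-dependent weighting idea: each matching error receives a robust weight according to its class. Here the per-error labels are given as input, whereas in the paper they come from the online histogram-based classifier.

```python
import numpy as np

def weighted_l2_similarity(model_desc, image_desc, labels, class_weights):
    """Weighted L2 between descriptors. `labels[i]` classifies the i-th
    matching error as 0: image noise, 1: appearance change, 2: outlier
    (given here; predicted online by a histogram-based classifier in the paper)."""
    residuals = np.asarray(model_desc, float) - np.asarray(image_desc, float)
    w = np.asarray(class_weights)[np.asarray(labels)]  # per-error robust weight
    return float(np.sum(w * residuals ** 2))

# Example weights: trust noise fully, partially adapt to appearance changes,
# reject outliers entirely (the actual values are a design choice):
# weighted_l2_similarity(m, i, labels, class_weights=[1.0, 0.5, 0.0])
```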
S0262885614000262 | Person re-identification is a fundamental task in automated video surveillance and has been an area of intense research in the past few years. Given an image/video of a person taken from one camera, re-identification is the process of identifying the person from images/videos taken from a different camera. Re-identification is indispensable in establishing consistent labeling across multiple cameras or even within the same camera to re-establish disconnected or lost tracks. Apart from surveillance it has applications in robotics, multimedia and forensics. Person re-identification is a difficult problem because of the visual ambiguity and spatiotemporal uncertainty in a person's appearance across different cameras. These difficulties are often compounded by low resolution images or poor quality video feeds with large amounts of unrelated information in them that does not aid re-identification. The spatial or temporal conditions to constrain the problem are hard to capture. However, the problem has received significant attention from the computer vision research community due to its wide applicability and utility. In this paper, we explore the problem of person re-identification and discuss the current solutions. Open issues and challenges of the problem are highlighted with a discussion on potential directions for further research. | A survey of approaches and trends in person re-identification
S0262885614000274 | 3D shape descriptors have been used widely in the field of 3D object retrieval. However, the performance of object retrieval greatly depends on the shape descriptor used. The aim of this study is to review and compare common 3D shape descriptors proposed in the 3D object retrieval literature for object recognition and classification, based on Kinect-like depth images obtained from an RGB-D object dataset. In this paper, we introduce (1) inter-class and (2) intra-class evaluations in order to study the feasibility of such descriptors for object recognition. Based on these evaluations, the local spin image outperforms the rest in discriminating between classes when several depth images from one instance per class are used in the inter-class evaluation. This might be due to the fairly consistent local shape properties of such images and to the proposed local similarity measurement, which manages to exploit the local descriptor. However, the shape distribution performs best in the intra-class evaluation (which involves several instances per class), possibly because the global shape changes little across different instances of a class. These results constitute a useful feasibility analysis of 3D shape descriptors for object recognition that can potentially be used with Kinect-like sensors. | 3D shape descriptor for object recognition based on Kinect-like depth image
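As an illustration of the simplest descriptor compared above, here is a sketch of the classic D2 shape distribution (a histogram of distances between random point pairs), computed over a point cloud back-projected from a depth image; the pair count and bin count are arbitrary choices.

```python
import numpy as np

def d2_shape_distribution(points, n_pairs=10000, bins=64, seed=0):
    """D2 shape distribution: normalized histogram of Euclidean distances
    between randomly sampled point pairs. `points` is an (N, 3) array."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0, d.max()), density=True)
    return hist
```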
S0262885614000286 | Object tracking quality usually depends on video scene conditions (e.g. illumination, density of objects, object occlusion level). In order to overcome this limitation, this article presents a new control approach to adapt the object tracking process to scene condition variations. More precisely, this approach learns how to tune the tracker parameters to cope with tracking context variations. The tracking context, or context, of a video sequence is defined as a set of six features: density of mobile objects, their occlusion level, their contrast with regard to the surrounding background, their contrast variance, their 2D area and their 2D area variance. In an offline phase, training video sequences are classified by clustering their contextual features. Each context cluster is then associated with satisfactory tracking parameters. In the online control phase, once a context change is detected, the tracking parameters are tuned using the learned values. The approach has been evaluated with three different tracking algorithms and on long, complex video datasets. This article makes two significant contributions: (1) a classification method of video sequences for learning tracking parameters offline and (2) a new method for tuning tracking parameters online using the tracking context. | Online parameter tuning for object tracking algorithms
S0262885614000298 | In this paper, a statistical approach to static texture description is developed, which combines a local pattern coding strategy with a robust global descriptor to achieve highly discriminative power, invariance to photometric transformation and strong robustness against geometric changes. Built upon the local binary patterns that are encoded at multiple scales, a statistical descriptor, called pattern fractal spectrum, characterizes the self-similar behavior of the local pattern distributions by calculating fractal dimension on each type of pattern. Compared with other fractal-based approaches, the proposed descriptor is compact, highly distinctive and computationally efficient. We applied the descriptor to texture classification. Our method has demonstrated excellent performance in comparison with state-of-the-art approaches on four challenging benchmark datasets. | A distinct and compact texture descriptor |
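A sketch of the box-counting step that could assign a fractal dimension to the binary map of pixels carrying one local pattern type; this is a generic simplification of the pattern fractal spectrum, with arbitrary box sizes.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Fractal dimension of a binary mask (e.g. pixels labeled with one LBP
    pattern type) via box counting: slope of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        h, w = mask.shape[0] // s * s, mask.shape[1] // s * s  # crop to multiples of s
        grid = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(max(grid.sum(), 1))   # number of occupied s-by-s boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

Computing one such dimension per pattern type (and per scale) would yield a vector descriptor in the spirit of the pattern fractal spectrum described above.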
S0262885614000304 | We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition, which divides recognition into two stages, our method performs recognition of simple and complex actions in a unified way. This is achieved by encoding simple action HMMs within the stochastic grammar that models complex actions. This unified approach enables a more effective influence of the higher activity layers on the recognition of simple actions, which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks along a set of zones which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious and threatening behaviour in a parking lot. Experiments show an improvement of ~30% in the recognition of both high-level scenarios and their constituent simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method experiences only a limited decrease in performance when moderate amounts of noise are added. | A unified approach to the recognition of complex actions from sequences of zone-crossings
S0262885614000316 | In this paper we present a robust and lightweight method for the automatic fitting of deformable 3D face models to facial images. Popular fitting techniques such as those based on statistical models of shape and appearance require a training stage based on a set of facial images and their corresponding facial landmarks, which have to be manually labeled. Therefore, new images in which to fit the model cannot differ too much in shape and appearance (including illumination variation, facial hair, wrinkles, etc.) from those used for training. By contrast, our approach can fit a generic face model in two steps: (1) the detection of facial features based on local image gradient analysis and (2) the backprojection of a deformable 3D face model through the optimization of its deformation parameters. The proposed approach retains the advantages of both learning-free and learning-based approaches. Thus, we can estimate the position, orientation, shape and actions of faces, and initialize user-specific face tracking approaches, such as Online Appearance Models (OAMs), which have been shown to be more robust than generic user tracking approaches. Experimental results show that our method outperforms other fitting alternatives under challenging illumination conditions and with a computational cost that allows its implementation on devices with low hardware specifications, such as smartphones and tablets. Our proposed approach lends itself nicely to many frameworks addressing semantic inference in face images and videos. | Efficient generic face model fitting to images and videos
S0262885614000444 | Dictionary learning plays a crucial role in sparse representation based image classification. In this paper, we propose a novel approach to learn a discriminative dictionary with low-rank regularization on the dictionary. Specifically, we apply the Fisher discriminant function to the coding coefficients to make the dictionary more discerning, that is, to obtain a small ratio of the within-class scatter to the between-class scatter. In practice, noisy information in the training samples will undermine the discriminative ability of the dictionary. Inspired by recent advances in low-rank matrix recovery theory, we apply low-rank regularization on the dictionary to tackle this problem. The iterative projection method (IPM) and the inexact augmented Lagrange multiplier (ALM) algorithm are adopted to solve our objective function. The proposed discriminative dictionary learning with low-rank regularization (D²L²R²) approach is evaluated on four face and digit image datasets in comparison with existing representative dictionary learning and classification algorithms. The experimental results demonstrate the superiority of our approach. | Learning low-rank and discriminative dictionary for image classification
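The Fisher term applied to the coding coefficients can be written compactly; a minimal sketch returning tr(S_W) − tr(S_B), which the objective would drive to be small (the full model adds the low-rank dictionary regularizer and the reconstruction term).

```python
import numpy as np

def fisher_term(codes, labels):
    """Fisher regularizer on coding coefficients: returns tr(S_W) - tr(S_B),
    small when classes are tight (S_W) and well separated (S_B)."""
    mu = codes.mean(axis=0)
    sw = sb = 0.0
    for c in np.unique(labels):
        xc = codes[labels == c]
        mc = xc.mean(axis=0)
        sw += ((xc - mc) ** 2).sum()             # within-class scatter trace
        sb += len(xc) * ((mc - mu) ** 2).sum()   # between-class scatter trace
    return sw - sb
```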
S0262885614000456 | Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through ‘concept frames’ to ‘concept segments’ and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our algorithm and relates them to Action Units that have been associated with pain expression. We conclude the paper by demonstrating that MS-MIL yields a significant improvement on another spontaneous facial expression dataset, the FEEDTUM dataset. | Classification and weakly supervised pain localization using multiple segment representation |
S0262885614000468 | Changes in eyebrow configuration, in conjunction with other facial expressions and head gestures, are used to signal essential grammatical information in signed languages. This paper proposes an automatic recognition system for non-manual grammatical markers in American Sign Language (ASL) based on a multi-scale, spatio-temporal analysis of head pose and facial expressions. The analysis takes account of gestural components of these markers, such as raised or lowered eyebrows and different types of periodic head movements. To advance the state of the art in non-manual grammatical marker recognition, we propose a novel multi-scale learning approach that exploits spatio-temporally low-level and high-level facial features. Low-level features are based on information about facial geometry and appearance, as well as head pose, and are obtained through accurate 3D deformable model-based face tracking. High-level features are based on the identification of gestural events, of varying duration, that constitute the components of linguistic non-manual markers. Specifically, we recognize events such as raised and lowered eyebrows, head nods, and head shakes. We also partition these events into temporal phases. We separate the anticipatory transitional movement (the onset) from the linguistically significant portion of the event, and we further separate the core of the event from the transitional movement that occurs as the articulators return to the neutral position towards the end of the event (the offset). This partitioning is essential for the temporally accurate localization of the grammatical markers, which could not be achieved at this level of precision with previous computer vision methods. In addition, we analyze and use the motion patterns of these non-manual events. Those patterns, together with the information about the type of event and its temporal phases, are defined as the high-level features. Using this multi-scale, spatio-temporal combination of low- and high-level features, we employ learning methods for accurate recognition of non-manual grammatical markers in ASL sentences. | Non-manual grammatical marker recognition based on multi-scale, spatio-temporal analysis of head pose and facial expressions |
S0262885614000481 | Hair segmentation is challenging due to diverse appearance, irregular region boundaries and the influence of complex backgrounds. To deal with this problem, we propose a novel data-driven method, named Isomorphic Manifold Inference (IMI). The IMI method treats the coarse probability map and the binary segmentation map as a couple of isomorphic manifolds and tries to learn hair-specific priors from manually labeled training images. For an input image, the method first calculates a coarse probability map. Then it exploits regression techniques to obtain the relationship between the coarse probability map of the test image and those of the training images. Finally, this relationship, i.e., a coefficient set, is transferred to the binary segmentation maps and a soft segmentation of the test image is achieved by a linear combination of those binary maps. Further, we employ this soft segmentation as a shape cue and integrate it with color and texture cues into a unified segmentation framework. A better segmentation is achieved by Graph Cuts optimization. Extensive experiments are conducted to validate the effectiveness of the IMI method, compare the contributions of different cues and investigate the generalization of the IMI method. The results strongly support our method. | Data-driven hair segmentation with isomorphic manifold inference
S0262885614000493 | Concurrently obtaining an accurate, robust and fast global registration of multiple 3D scans is still an open issue for modern 3D modeling pipelines, especially when high metric precision as well as easy usage of high-end devices (structured-light or laser scanners) are required. Various solutions have been proposed (heuristic, iterative and/or closed form) which involve some compromise between the above contrasting requirements. Our purpose here, compared to existing reference solutions, is to go a step further in this perspective by presenting a new technique able to provide improved alignment performance, even on large datasets (both in terms of number of views and/or point density) of range images. Building on the ‘Optimization-on-a-Manifold’ (OOM) approach, originally proposed by Krishnan et al., we propose a set of methodological and computational upgrades that have an operative impact on accuracy, robustness and computational performance compared to the original solution. In particular, while still based on an unconstrained error minimization over the manifold of rotations, instead of relying on a static set of point correspondences, our algorithm updates the optimization iterations with a dynamically modified set of correspondences in a computationally effective way, leading to substantial improvements in registration accuracy and convergence trend. Other proposed improvements are directed at substantially reducing the computational load without sacrificing alignment performance. Stress tests with increasing view misalignment allowed us to assess the convergence robustness of the proposed solution. Finally, we demonstrate that for very large datasets a further computational speedup can be achieved through a hybrid (local heuristic followed by global optimization) registration approach. | Global registration of large collections of range images with an improved Optimization-on-a-Manifold approach
S0262885614000511 | Text-based image retrieval may perform poorly due to irrelevant and/or incomplete text surrounding the images in web pages. In such situations, the visual content of the images can be leveraged to improve the image ranking performance. In this paper, we look into this problem of image re-ranking and propose a system that automatically constructs multiple candidate “multi-instance bags (MI-bags)”, which are likely to contain relevant images. These automatically constructed bags are then utilized by ensembles of Multiple Instance Learning (MIL) classifiers, and the images are re-ranked according to the final classification responses. Our method is unsupervised in the sense that the only input to the system is the text query itself, without any user feedback or annotation. The experimental results demonstrate that constructing multiple instance bags based on the retrieval order and utilizing ensembles of MIL classifiers greatly enhance the retrieval performance, achieving results on par with or better than the state-of-the-art. | Ensemble of multiple instance classifiers for image re-ranking
S0262885614000523 | Robust high-dimensional data processing has witnessed exciting development in recent years. Theoretical results have shown that it is possible, using convex programming, to optimize a data fit to a low-rank component plus a sparse outlier component. This problem is also known as robust PCA, and it has found application in many areas of computer vision. In image and video processing and face recognition, the opportunity to process massive image databases is emerging as people upload photo and video data online in unprecedented volumes. However, data quality and consistency are not controlled in any way, and the massiveness of the data poses a serious computational challenge. In this paper we present t-GRASTA, or “Transformed GRASTA (Grassmannian robust adaptive subspace tracking algorithm)”. t-GRASTA iteratively performs incremental gradient descent constrained to the Grassmann manifold of subspaces in order to simultaneously estimate three components of a decomposition of a collection of images: a low-rank subspace, a sparse part of occlusions and foreground objects, and a transformation such as a rotation or translation of the image. We show that t-GRASTA is 4× faster than state-of-the-art algorithms, has half the memory requirement, and can achieve alignment for face images as well as jittered camera surveillance images. | Iterative Grassmannian optimization for robust image alignment
S0262885614000614 | Finding regions of interest (ROIs) is a fundamentally important problem in the area of computer vision and image processing. Previous studies addressing this issue have mainly focused on investigating chromatic cues to characterize visually salient image regions, while less attention has been devoted to monochromatic cues. The purpose of this paper is to study monochromatic cues, which have the potential to complement chromatic cues, for the detection of ROIs in an image. This paper first presents a taxonomy of existing ROI detection approaches using monochromatic cues, ranging from well-known algorithms to the most recently published techniques. We then propose a novel monochromatic cue for ROI detection. Finally, a comparative evaluation is conducted on large-scale, challenging test sets of real-world natural scenes. Experimental results demonstrate that the use of our proposed monochromatic cue yields a more accurate identification of ROIs. This paper serves as a benchmark for future research on this particular topic and a stepping stone for developers and practitioners interested in adopting monochromatic cues in ROI detection systems and methodologies. | A novel monochromatic cue for detecting regions of visual interest
S0262885614000626 | Avoiding the use of complicated pre-processing steps such as accurate face and body part segmentation or image normalization, this paper proposes a novel face/person image representation which can properly handle background and illumination variations. Denoted gBiCov, this representation relies on the combination of Biologically Inspired Features (BIF) and Covariance descriptors [1]. More precisely, gBiCov is obtained by computing and encoding the difference between BIF features at different scales. The distance between two persons can then be efficiently measured by computing the Euclidean distance of their signatures, avoiding the time-consuming operations on the Riemannian manifold that the use of Covariance descriptors would otherwise require. In addition, the recently proposed KISSME framework [2] is adopted to learn a metric adapted to the representation. To show the effectiveness of gBiCov, experiments are conducted on three person re-identification tasks (VIPeR, i-LIDS and ETHZ) and one face verification task (LFW), on which competitive results are obtained. As an example, the matching rate at rank 1 on the VIPeR dataset is 31.11%, improving the best previously published result by more than 10 points. | Covariance descriptor based on bio-inspired features for person re-identification and face verification
S0262885614000638 | This article discusses motion analysis based on dense optical flow fields for a new generation of robotic moving systems with real-time constraints. It focuses on a surveillance scenario where a specially designed autonomous mobile robot uses a monocular camera for perceiving motion in the environment. Computational resources and processing time are two of the most critical aspects in robotics; therefore, two non-parametric techniques are proposed, namely, Hybrid Hierarchical Optical Flow Segmentation and Hybrid Density-Based Optical Flow Segmentation. Both methods are able to extract the moving objects by performing two consecutive operations: refining and collecting. During the refining phase, the flow field is decomposed into a set of clusters based on descriptive motion properties. These properties are used in the collecting stage by a hierarchical or density-based scheme to merge the set of clusters that represent different motion models. In addition, a model selection method is introduced. This novel method analyzes the flow field and estimates the number of distinct moving objects using a Bayesian formulation. The research evaluates the performance achieved by the methods in a realistic surveillance situation. The experiments conducted proved that the proposed methods extract reliable motion information in real time and without using specialized computers. Moreover, they are less computationally demanding than other recent methods and are therefore suitable for most robotic or surveillance applications. | Unsupervised flow-based motion analysis for an autonomous moving system
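A hedged sketch of density-based motion segmentation on a dense flow field, in the spirit of the Hybrid Density-Based variant (the paper's refining/collecting stages and Bayesian model selection are richer). The frame file names, motion threshold, flow scaling and DBSCAN parameters are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # assumed input frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

mag = np.linalg.norm(flow, axis=2)
ys, xs = np.nonzero(mag > 1.0)            # keep clearly moving pixels only
# Cluster jointly on position and (scaled) motion so that nearby pixels
# sharing a similar motion model fall into the same moving-object cluster.
feats = np.column_stack([xs, ys, 8.0 * flow[ys, xs, 0], 8.0 * flow[ys, xs, 1]])
labels = DBSCAN(eps=10.0, min_samples=40).fit_predict(feats)
print("moving objects found:", len(set(labels)) - (1 if -1 in labels else 0))
```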
S0262885614000651 | Human action recognition has many real-world applications, such as natural user interfaces, virtual reality, intelligent surveillance, and gaming. However, it is still a very challenging problem. In action recognition using visible-light videos, spatiotemporal interest point (STIP) based features are widely used with good performance. Recently, with the advance of depth imaging technology, a new modality has appeared for human action recognition. It is important to assess the performance and usefulness of the STIP features for action analysis on this new modality of 3D depth maps. In this paper, we evaluate spatiotemporal interest point (STIP) based features for depth-based action recognition. Different interest point detectors and descriptors are combined to form various STIP features. The bag-of-words representation and SVM classifiers are used for action learning. Our comprehensive evaluation is conducted on four challenging 3D depth databases. Further, we use two schemes to refine the STIP features: one detects the interest points in RGB videos and applies them to the aligned depth sequences, while the other uses the human skeleton to remove irrelevant interest points. These refinements give us a deeper understanding of the STIP features on 3D depth data. Finally, we investigate a fusion of the best STIP features with the prevalent skeleton features, presenting a complementary use of the STIP features for action recognition on 3D data. The fusion approach gives significantly higher accuracies than many state-of-the-art results. | Evaluating spatiotemporal interest point features for depth-based action recognition
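A minimal bag-of-words sketch of the evaluation pipeline described above (codebook over STIP descriptors, per-video histograms, SVM). The random "descriptors" stand in for real STIP features, and the codebook size, kernel choice and train/test split are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bow_histogram(descriptors, codebook):
    """L1-normalized histogram of visual-word assignments for one video."""
    words = codebook.predict(descriptors)
    h = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return h / max(h.sum(), 1.0)

rng = np.random.default_rng(0)
videos = [rng.normal(loc=i % 2, size=(rng.integers(50, 120), 16))
          for i in range(40)]                       # fake STIP descriptors
labels = np.array([i % 2 for i in range(40)])       # two fake action classes

codebook = KMeans(n_clusters=64, n_init=3, random_state=0).fit(np.vstack(videos))
X = np.array([bow_histogram(v, codebook) for v in videos])
clf = SVC(kernel="rbf").fit(X[:30], labels[:30])
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```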
S0262885614000754 | We present a novel approach for the estimation of a person's overall body orientation, 3D shape and texture, from overlapping cameras. A distinguishing aspect of our approach is the use of spherical harmonics for 3D shape and texture representation; it offers a compact, low-dimensional representation which elegantly copes with rotation estimation. The estimation process alternates between the estimation of texture, orientation and shape. Texture is estimated by sampling image intensities with the predicted 3D shape (i.e. torso and head) and the predicted orientation from the last time step. Orientation (i.e. rotation around the torso major axis) is estimated by minimizing the difference between a learned texture model in a canonical orientation and the current texture estimate. The newly estimated orientation allows the 3D shape estimate to be updated, taking into account the new 3D shape measurement obtained by volume carving. We investigate various components of our approach in experiments on synthetic and real-world data. We show that, on data involving persons, our proposed method has a lower orientation estimation error than other methods that use fixed 3D shape models. | Coupled person orientation estimation and appearance modeling using spherical harmonics
S0262885614000766 | Most current tracking approaches utilize only one type of feature to represent the target and learn its appearance model using only the current frame or a few recent ones. A single type of feature might not represent the target well. Moreover, an appearance model learned from the current frame or a few recent ones is intolerant of abrupt appearance changes over short time intervals. These two factors can cause tracking failure. To overcome these two limitations, in this paper, we apply Augmented Kernel Matrix (AKM) classification to combine two complementary features, pixel intensity and LBP (Local Binary Pattern) features, to enrich the target's representation. Meanwhile, we employ AKM clustering to group the tracking results into a few aspects. Representative patches are then selected and added to the training set to learn the appearance model. This makes the appearance model cover more aspects of the target's appearance and become more robust to abrupt appearance changes. Experiments compared with several state-of-the-art methods on challenging sequences demonstrate the effectiveness and robustness of the proposed algorithm. | Robust visual tracking via augmented kernel SVM
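A hedged sketch of fusing intensity and LBP features at the kernel level. For brevity this averages the two per-feature RBF kernels rather than building the paper's Augmented Kernel Matrix, which stacks them into one larger block matrix; the fusion intuition is the same. Feature dimensions, kernel weights and the random data are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def fused_kernel(A_int, B_int, A_lbp, B_lbp):
    # Equal-weight average of the two modality kernels (an assumption).
    return 0.5 * rbf_kernel(A_int, B_int) + 0.5 * rbf_kernel(A_lbp, B_lbp)

rng = np.random.default_rng(0)
Xi = rng.normal(size=(60, 100))     # fake pixel-intensity features
Xl = rng.normal(size=(60, 59))      # fake LBP histograms
y = rng.integers(0, 2, size=60)

K_train = fused_kernel(Xi, Xi, Xl, Xl)
clf = SVC(kernel="precomputed").fit(K_train, y)
# At test time, the kernel is computed between test and training samples.
scores = clf.decision_function(fused_kernel(Xi[:5], Xi, Xl[:5], Xl))
```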
S0262885614000778 | Recently, Universum data, which does not belong to any class of the training data, has been applied for training better classifiers. In this paper, we propose a novel boosting algorithm, called UAdaBoost, that can improve the classification performance of AdaBoost with Universum data. UAdaBoost chooses a function by minimizing the loss for labeled data and Universum data. The cost function is minimized by a greedy, stagewise, functional gradient procedure. Each training stage of UAdaBoost is fast and efficient. The standard AdaBoost weights labeled samples during training iterations, while UAdaBoost gives an explicit weighting scheme for Universum samples as well. In addition, this paper describes the practical conditions for the effectiveness of Universum learning. These conditions are based on the analysis of the distribution of ensemble predictions over training samples. Experiments on handwritten digit classification and gender classification problems are presented. As our experimental results show, the proposed method can obtain superior performance over standard AdaBoost by selecting proper Universum data. | Exploiting Universum data in AdaBoost using gradient descent
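A hedged sketch of boosting with Universum data. The paper derives its Universum weighting from a functional-gradient view of a combined loss; the simpler trick below, borrowed from the Universum-SVM literature, adds each Universum sample twice with contradictory labels and reduced weight, which likewise pushes the ensemble margin toward zero on Universum points. The stump learner, round count and `u_weight` are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def universum_adaboost(X, y, X_u, T=50, u_weight=0.2):
    """y in {-1, +1}; X_u holds Universum samples from neither class."""
    Xa = np.vstack([X, X_u, X_u])
    ya = np.concatenate([y, np.ones(len(X_u)), -np.ones(len(X_u))])
    w = np.concatenate([np.ones(len(X)), u_weight * np.ones(2 * len(X_u))])
    w /= w.sum()
    stumps, alphas = [], []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1).fit(Xa, ya, sample_weight=w)
        pred = stump.predict(Xa)
        err = np.clip(w[pred != ya].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * ya * pred)     # standard AdaBoost reweighting
        w /= w.sum()
        stumps.append(stump); alphas.append(alpha)
    return lambda Z: np.sign(sum(a * s.predict(Z) for a, s in zip(alphas, stumps)))
```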
S0262885614000791 | Human age, gender and ethnicity are valuable demographic characteristics. They are also important soft biometric traits useful for human identification or verification. We present a framework that can estimate the three traits jointly, dealing implicitly with their mutual influence. Under this joint estimation framework, we explore different methods for the simultaneous estimation of age, gender, and ethnicity. Canonical correlation analysis (CCA) based methods and partial least squares (PLS) models are explored under our joint estimation framework. Both linear and nonlinear methods are investigated to measure the performance. We also validate some extensions of these methods, such as the least squares formulations of the CCA methods. We found a consistent ranking of these methods under our joint estimation framework. More importantly, we found that the CCA based methods can derive an extremely low dimensionality when estimating age, gender and ethnicity. An analysis of this property is given based on rank theory. The experiments are conducted on a very large database containing more than 55,000 face images. | A framework for joint estimation of age, gender and ethnicity on a large database
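A hedged sketch of the joint-estimation idea: project face features and the stacked (age, gender, ethnicity) label vector into a shared, very low-dimensional CCA space, then regress the three traits from it, consistent with the observation that CCA needs only a few dimensions when the label space is three-dimensional. The synthetic data and the ridge regressor are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 500
Y = np.column_stack([rng.uniform(0, 80, n),          # age
                     rng.integers(0, 2, n),           # gender
                     rng.integers(0, 4, n)])          # ethnicity code
# Fake "face features" correlated with the three traits plus noise.
X = Y @ rng.normal(size=(3, 200)) + rng.normal(scale=0.5, size=(n, 200))

cca = CCA(n_components=3).fit(X[:400], Y[:400])       # 3D joint subspace
Ztr, Zte = cca.transform(X[:400]), cca.transform(X[400:])
reg = Ridge().fit(Ztr, Y[:400])                       # regress all traits jointly
age_mae = np.abs(reg.predict(Zte)[:, 0] - Y[400:, 0]).mean()
print("age MAE from a 3D projection:", age_mae)
```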
S0262885614000808 | We introduce a new computational phonetic modeling framework for sign language (SL) recognition. It is based on dynamic–static statistical subunits and provides sequentiality in an unsupervised manner, without prior linguistic information. Subunit “sequentiality” refers to the decomposition of signs into two types of parts, varying and non-varying, that are sequentially stacked across time. Our approach is inspired by the Movement–Hold SL linguistic model that refers to such sequences. First, we segment signs into intra-sign primitives and classify each segment as dynamic or static, i.e., movements and non-movements. These segments are then clustered appropriately to construct a set of dynamic and static subunits. The dynamic/static discrimination allows us to employ different visual features for clustering the dynamic or static segments. Sequences of the generated subunits are used as sign pronunciations in a data-driven lexicon. Based on this lexicon and the corresponding segmentation, each subunit is statistically represented and trained on multimodal sign data as a hidden Markov model. In the proposed approach, dynamic/static sequentiality is incorporated in an unsupervised manner. Further, handshape information is integrated in a parallel hidden Markov modeling scheme. The novel sign language modeling scheme is evaluated in recognition experiments on data from three corpora and two sign languages: Boston University American SL, which is employed pre-segmented at the sign level, Greek SL Lemmas, and American SL Large Vocabulary Dictionary, including both signer-dependent and unseen-signer testing. Results show consistent improvements when compared with other approaches, demonstrating the importance of dynamic/static structure in sub-sign phonetic modeling. | Dynamic–static unsupervised sequentiality, statistical subunits and lexicon for sign language recognition
S0262885614000821 | The present work builds a bio-cryptographic system that combines a transformed minutiae-pairwise feature with a fuzzy vault hardened by a user-generated password. The fingerprint fuzzy vault is based on a new minutiae-pairwise structure, which prevents the disclosure of the original fingerprint features, while the secret binary vault code is generated according to the fingerprint fuzzy vault result. The authentication process involves two stages: fuzzy vault matching and secret vault code validation. Our minutiae-pairwise transformation produces different templates, thus resolving the problem of cross-matching attacks on fingerprint fuzzy vaults. Moreover, the original fingerprint template cannot be recreated because it is protected by the key generated from the user password. In addition, the proposed bio-cryptographic system ensures an acceptable security level for user authentication. | Password hardened fuzzy vault for fingerprint authentication system
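A toy sketch of the fuzzy-vault primitive underlying such systems: a secret encoded as polynomial coefficients over a small prime field is "locked" by evaluating the polynomial at genuine feature points and hiding them among chaff; enough matching points at query time allow Lagrange interpolation to recover the secret. The prime, point values and sizes are assumptions, and the paper's password hardening and minutiae-pairwise transformation are not modeled here.

```python
import random

P = 7919  # small prime defining the field GF(P) (assumption)

def _poly_eval(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def lock(secret_coeffs, genuine, n_chaff=50, seed=0):
    random.seed(seed)
    vault = [(x, _poly_eval(secret_coeffs, x)) for x in genuine]
    used = set(genuine)
    while len(vault) < len(genuine) + n_chaff:
        x, y = random.randrange(1, P), random.randrange(P)
        if x not in used and y != _poly_eval(secret_coeffs, x):
            vault.append((x, y))        # chaff point lying off the polynomial
            used.add(x)
    random.shuffle(vault)
    return vault

def unlock(vault, query, degree):
    pts = [(x, y) for x, y in vault if x in set(query)][: degree + 1]
    if len(pts) < degree + 1:
        return None                      # not enough genuine matches
    # Lagrange interpolation over GF(P) recovers the coefficient vector.
    coeffs = [0] * (degree + 1)
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if i == j:
                continue
            nxt = [0] * (len(basis) + 1)
            for k, a in enumerate(basis):        # basis *= (x - xj)
                nxt[k] = (nxt[k] - xj * a) % P
                nxt[k + 1] = (nxt[k + 1] + a) % P
            basis = nxt
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P    # Fermat modular inverse
        for k, a in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * a) % P
    return coeffs

secret = [1234, 56, 78]                  # degree-2 polynomial = the secret
vault = lock(secret, genuine=[11, 22, 33, 44, 55])
print(unlock(vault, query=[11, 33, 55], degree=2) == secret)
```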
S0262885614000833 | Automatically focusing on and seeing an occluded moving object in a cluttered and complex scene is a significant and challenging task for many computer vision applications. In this paper, we present a novel synthetic aperture imaging approach to solve this problem. The unique characteristics of this work include the following: (1) To the best of our knowledge, this work is the first to simultaneously solve the camera array auto-focusing and occluded moving object imaging problems. (2) A unified framework is designed to achieve seamless interaction between the focusing and imaging modules. (3) In the focusing module, a local and global constraint-based optimization algorithm is presented to dynamically estimate the focus plane of the moving object. (4) In the imaging module, a novel visibility-analysis-based active synthetic aperture imaging approach is proposed to remove the occluder and significantly improve the quality of occluded object imaging. An active camera array system has been set up and evaluated in challenging indoor and outdoor scenes. Extensive experimental results with qualitative and quantitative analyses demonstrate the superiority of the proposed approach compared with state-of-the-art approaches. | Simultaneous active camera array focus plane estimation and occluded moving object imaging
S0262885614000845 | A method to obtain accurate hand gesture classification and fingertip localization from depth images is proposed. The Oriented Radial Distribution feature is utilized, exploiting its ability to globally describe hand poses but also to locally detect fingertip positions. Hence, hand gesture and fingertip locations are characterized with a single feature calculation. We propose to divide the difficult problem of locating fingertips into two more tractable problems, by taking advantage of the hand gesture as an auxiliary variable. Along with the method, we present ColorTip, a dataset for hand gesture recognition and fingertip classification using depth data. ColorTip contains sequences where actors wear a glove with colored fingertips, allowing automatic annotation. The proposed method is evaluated against recent works on several datasets, achieving promising results in both gesture classification and fingertip localization. | Real-time fingertip localization conditioned on hand gesture classification
S0262885614000857 | This paper introduces four classes of rotation-invariant orthogonal moments by generalizing four existing moments that use harmonic functions in their radial kernels. Members of these classes share beneficial properties for image representation and pattern recognition, such as orthogonality and rotation invariance. The kernel sets of these generic harmonic function-based moments are complete in the Hilbert space of square-integrable continuous complex-valued functions. Due to their similar definitions, the computation of these kernels maintains the simplicity and numerical stability of existing harmonic function-based moments. In addition, each member of one of these classes has distinctive properties that depend on the value of a parameter, making it more suitable for particular applications. Comparisons with existing orthogonal moments defined based on Jacobi polynomials and eigenfunctions have been carried out, and experimental results show the effectiveness of these classes of moments in terms of representation capability and discrimination power. | Generic polar harmonic transforms for invariant image representation
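As a hedged illustration of the construction such classes generalize (notation assumed; the PCET-style radial kernel is shown as one example), a harmonic-kernel moment over the unit disk and the rotation property that makes its magnitude invariant:

```latex
% Illustrative definition of a generic harmonic-kernel moment of an image f
% over the unit disk, and why |M_nm| is rotation-invariant.
\begin{align*}
  M_{nm} &= \frac{1}{\pi}\int_{0}^{2\pi}\!\!\int_{0}^{1}
            R_n(r)\,e^{-\mathrm{i}m\theta}\,f(r,\theta)\,r\,\mathrm{d}r\,\mathrm{d}\theta,
  \qquad \text{e.g. } R_n(r)=e^{-\mathrm{i}2\pi n r^{2}},\\
  f_\alpha(r,\theta) &= f(r,\theta-\alpha)
  \;\Longrightarrow\;
  M_{nm}^{(\alpha)} = e^{-\mathrm{i}m\alpha}\,M_{nm},
  \qquad\text{hence } \bigl|M_{nm}^{(\alpha)}\bigr| = \bigl|M_{nm}\bigr|.
\end{align*}
```

The second line follows by substituting the rotated angle into the integral: the rotation factors out of the angular exponential, so it changes only the phase of the moment, not its magnitude.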
S0262885614000924 | We introduce a robust framework for learning and fusing orientation appearance models based on both texture and depth information for rigid object tracking. Our framework fuses data obtained from a standard visual camera and dense depth maps obtained by low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the data representation: angles. More specifically, our framework combines image gradient orientations, as extracted from intensity images, with the directions of surface normals computed from dense depth fields. We propose to capture the correlations between the obtained orientation appearance models using a fusion approach motivated by the original Active Appearance Models (AAMs). To incorporate these features in a learning framework, we use a robust kernel based on the Euler representation of angles, which does not require off-line training and can be efficiently implemented online. The robustness of learning from orientation appearance models is demonstrated both theoretically and experimentally in this work. This kernel enables us to cope with gross measurement errors and missing data, as well as other typical problems such as illumination changes and occlusions. By combining the proposed models with a particle filter, the framework was used for performing 2D plus 3D rigid object tracking, achieving robust performance in very difficult tracking scenarios including extreme pose variations. | Online learning and fusion of orientation appearance models for robust rigid object tracking
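A minimal sketch of the angle-based representation: gradient orientations are mapped onto the unit circle (the Euler representation e^{i*theta}) and two images are compared by the mean cosine of their orientation differences. Uncorrelated regions (occlusions, outliers) produce roughly uniform orientation differences whose cosines average out to zero, which is the source of the robustness; the Sobel-based orientation map and the test image are assumptions.

```python
import numpy as np
from scipy import ndimage

def orientation_map(img):
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    return np.arctan2(gy, gx)

def orientation_similarity(img_a, img_b):
    z_a = np.exp(1j * orientation_map(img_a)).ravel()   # Euler representation
    z_b = np.exp(1j * orientation_map(img_b)).ravel()
    # vdot conjugates its first argument, so this is mean cos(theta_b - theta_a).
    return np.real(np.vdot(z_a, z_b)) / z_a.size

# Identical images score ~1; an image against pure noise scores ~0.
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64)).cumsum(axis=0)          # smooth-ish test image
print(orientation_similarity(img, img),
      orientation_similarity(img, rng.normal(size=(64, 64))))
```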
S0262885614000936 | This paper presents a matching strategy to improve the discriminative power of histogram-based keypoint descriptors by constraining the range of allowable dominant orientations according to the context of the scene under observation. This can be done when the descriptor uses a circular grid and quantized orientation steps, by computing or providing a global reference orientation based on the feature matches. The proposed matching strategy is compared with the standard approaches used with the SIFT and GLOH descriptors and the recent rotation-invariant MROGH and LIOP descriptors. A new evaluation protocol based on an approximated overlap error is presented to provide an effective analysis in the case of non-planar scenes, thus extending the current state-of-the-art results. | Keypoint descriptor matching with context-based orientation estimation
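A hedged sketch of the core idea with OpenCV SIFT: estimate a global reference rotation from the mode of keypoint-angle differences over putative matches, then keep only matches consistent with it. The image file names, histogram bin width and tolerance are illustrative assumptions, and this reproduces only the orientation-consistency filtering, not the paper's full matching strategy or evaluation protocol.

```python
import cv2
import numpy as np

img1 = cv2.imread("scene_a.jpg", cv2.IMREAD_GRAYSCALE)  # assumed inputs
img2 = cv2.imread("scene_b.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).match(des1, des2)

# Relative orientation of each putative match, in [0, 360) degrees.
diffs = np.array([(kp2[m.trainIdx].angle - kp1[m.queryIdx].angle) % 360.0
                  for m in matches])
hist, edges = np.histogram(diffs, bins=36, range=(0.0, 360.0))
ref = edges[np.argmax(hist)] + 5.0      # mode of a 10-degree histogram

# Keep matches whose relative orientation agrees with the global reference.
tol = 20.0
consistent = [m for m, d in zip(matches, diffs)
              if min((d - ref) % 360.0, (ref - d) % 360.0) <= tol]
print(len(consistent), "of", len(matches), "matches kept")
```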