Columns: FileName, Abstract, Title
S1077314215001800
A novel algorithm for wide-baseline matching called MODS—matching on demand with view synthesis—is presented. The MODS algorithm is experimentally shown to solve a broader range of wide-baseline problems than the state of the art while being nearly as fast as standard matchers on simple problems. The apparent robustness vs. speed trade-off is finessed by the use of progressively more time-consuming feature detectors and by on-demand generation of synthesized images that is performed until a reliable estimate of geometry is obtained. We introduce an improved method for tentative correspondence selection, applicable both with and without view synthesis. A modification of the standard first to second nearest distance rule increases the number of correct matches by 5–20% at no additional computational cost. Performance of the MODS algorithm is evaluated on several standard publicly available datasets, and on a new set of geometrically challenging wide baseline problems that is made public together with the ground truth. Experiments show that the MODS outperforms the state-of-the-art in robustness and speed. Moreover, MODS performs well on other classes of difficult two-view problems like matching of images from different modalities, with wide temporal baseline or with significant lighting changes.
MODS: Fast and robust method for two-view matching
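The modified first-to-second nearest distance rule mentioned in the MODS abstract builds on Lowe's standard ratio test for tentative correspondence selection. Below is a minimal NumPy sketch of that baseline test only; the function name, the 0.8 threshold and the brute-force distance computation are illustrative assumptions, and the MODS-specific modification of the rule is not reproduced here.

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Standard first-to-second nearest-distance (Lowe ratio) test.

    desc1, desc2: (N1, D) and (N2, D) arrays of feature descriptors.
    Returns a list of (i, j) tentative correspondences.
    Note: MODS modifies how the second neighbour is chosen; only the
    baseline rule it builds on is shown here.
    """
    # Brute-force pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        order = np.argsort(row)
        first, second = row[order[0]], row[order[1]]
        if first < ratio * second:      # keep only clearly unambiguous matches
            matches.append((i, int(order[0])))
    return matches
```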
S1077314215001836
In many pattern recognition and computer vision problems, it is often necessary to compare multiple sets of elements that are completely or partially overlapping and possibly corrupted by noise. Finding a correspondence between elements from the different sets is one of the crucial tasks that several computer vision, robotics or image registration methods have to cope with. The aim of this paper is to find a consensus correspondence between two sets of points, given several initial correspondences between these two sets. We present three different methods: iterative, voting and agglomerative. If the noise randomly affects the original data, we expect that a process using the deduced consensus correspondence obtains better results than any individual correspondence. The different correspondences between two sets of points are obtained through different feature extractors or matching algorithms. Experimental validation shows the runtime and accuracy for the three methodologies. The agglomerative method obtains the highest accuracy compared to the other consensus methods and also the individual ones, while obtaining an acceptable runtime.
Consensus of multiple correspondences between sets of elements
S1077314215001848
Scene parsing is the task of labeling every pixel in an image with its semantic category. We present CollageParsing, a nonparametric scene parsing algorithm that performs label transfer by matching content-adaptive windows. Content-adaptive windows provide a higher level of perceptual organization than superpixels, and unlike superpixels are designed to preserve entire objects instead of fragmenting them. Performing label transfer using content-adaptive windows enables the construction of a more effective Markov random field unary potential than previous approaches. On a standard benchmark consisting of outdoor scenes from the LabelMe database, CollageParsing obtains state-of-the-art performance with 15–19% higher average per-class accuracy than recent nonparametric scene parsing algorithms.
Scene parsing by nonparametric label transfer of content-adaptive windows
S107731421500185X
With the rapidly increasing demands from surveillance and security industries, crowd behaviour analysis has become one of the hotly pursued video event detection frontiers within the computer vision arena in recent years. This research has investigated innovative crowd behaviour detection approaches based on statistical crowd features extracted from video footage. In this paper, a new crowd video anomaly detection algorithm has been developed based on analysing the extracted spatio-temporal textures. The algorithm has been designed for real-time applications by deploying low-level statistical features and alleviating complicated machine learning and recognition processes. In the experiments, the system has been proven to be a valid solution for detecting anomalous behaviours without strong assumptions on the nature of crowds, for example, subjects and density. The developed prototype shows improved adaptability and efficiency against chosen benchmark systems.
Spatio-temporal texture modelling for real-time crowd anomaly detection
S1077314215001873
This paper presents a novel method for separating reflection components in a single image based on the dichromatic reflection model. Our method is based on a modified version of sparse non-negative matrix factorization (NMF). It simultaneously performs the estimation of diffuse colors and the separation of reflection components through optimization. Our method does not use a spatial prior such as smoothness of colors on the object surface, which is in contrast with recent methods attempting to use such priors to improve separation accuracy. Experimental results show that as compared with these recent methods that use priors, our method is more accurate and robust. For example, it can better deal with difficult cases such as the case where a diffuse color is close to the illumination color.
Separation of reflection components by sparse non-negative matrix factorization
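The reflection separation above relies on a modified sparse non-negative matrix factorization. As background, the sketch below shows a generic multiplicative-update NMF with an L1 penalty on the coefficients; stacking the RGB values of all pixels as the columns of V is one common setup, but the paper's dichromatic-model-specific modification is not reproduced and all names here are illustrative.

```python
import numpy as np

def sparse_nmf(V, rank, sparsity=0.1, n_iter=200, eps=1e-9):
    """Generic sparse NMF: a non-negative V is approximated by W @ H using
    Lee-Seung multiplicative updates with an L1 penalty on H.

    For reflection separation, V could hold pixel colours as columns (3 x N),
    W candidate diffuse/illumination colours and H per-pixel coefficients.
    """
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + eps)   # sparsity-penalised update
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```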
S1077314215001885
This paper addresses the problem of 3D face shape approximation from occluding contours, i.e., the boundaries between the facial region and the background. To this end, a linear regression process that models the relationship between a set of 2D occluding contours and a set of 3D vertices is applied onto the corresponding training sets using Partial Least Squares. The result of this step is a regression matrix which is capable of estimating new 3D face point clouds from the out-of-training 2D Cartesian pixel positions of the selected contours. Our approach benefits from the highly correlated spaces spanned by the 3D vertices around the occluding boundaries of a face and their corresponding 2D pixel projections. As a result, the proposed method resembles dense surface shape recovery from missing data. Our technique is evaluated over four scenarios designed to investigate both the influence of the contours included in the training set and the considered number of contours. Qualitative and quantitative experiments demonstrate that using contours outperforms the state of the art on the database used in this article. Even using a limited number of contours provides a useful approximation to the 3D face surface.
Statistical 3D face shape estimation from occluding contours
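The regression step described in this abstract maps 2D occluding-contour coordinates to 3D vertex positions with Partial Least Squares. A hedged scikit-learn sketch is given below; the array shapes, the random placeholder data and the choice of 30 latent components are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Placeholder training data: each row stacks the 2D pixel coordinates of the
# selected occluding contours (X) and the corresponding 3D vertex positions (Y).
n_train, n_contour_pts, n_vertices = 200, 150, 5000
X_train = np.random.rand(n_train, n_contour_pts * 2)    # flattened (u, v) contour points
Y_train = np.random.rand(n_train, n_vertices * 3)       # flattened (x, y, z) vertices

# Fit the PLS regression relating contour space to 3D shape space.
pls = PLSRegression(n_components=30)
pls.fit(X_train, Y_train)

# Estimate a new 3D face point cloud from out-of-training contour pixels.
X_new = np.random.rand(1, n_contour_pts * 2)
Y_pred = pls.predict(X_new).reshape(n_vertices, 3)
```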
S1077314215001976
In this paper, we present a method for real-time pose estimation of rigid objects in heavily cluttered environments. At its core, the method relies on the template matching method proposed by Hinterstoisser et al., which is used to generate pose hypotheses. We improved the method by introducing a compensation for bias toward simple shapes and by changing the way modalities such as edges and surface normals are combined. Additionally, we incorporated surface normals obtained with photometric stereo that can produce a dense normal field at a very high level of detail. An iterative algorithm was employed to select the best pose hypotheses among the possible candidates provided by template matching. An evaluation of the pose estimation reliability and a comparison with the current state-of-the-art was performed on several synthetic and several real datasets. The results indicate that the proposed improvements to the similarity measure and the incorporation of surface normals obtained with photometric stereo significantly improve the pose estimation reliability.
Real-time pose estimation of rigid objects in heavily cluttered environments
S1077314215001988
In this paper we propose a method for human action recognition based on a string kernel framework. An action is represented as a string, where each symbol composing it is associated to an aclet, that is, an atomic unit of the action encoding a feature vector extracted from raw data. In this way, measuring similarities between actions leads to designing a similarity measure between strings. We propose to define this string’s similarity using the global alignment kernel framework. In this context, the similarity between two aclets is computed by a novel soft evaluation method based on an enhanced Gaussian kernel. The main advantage of the proposed approach lies in its ability to effectively deal with actions of different lengths or different temporal scales as well as with noise introduced during the feature extraction step. The proposed method has been tested over three publicly available datasets, namely the MIVIA, the CAD and the MHAD, and the obtained results, compared with several state of the art approaches, confirm the effectiveness and the applicability of our system in real environments, where inexperienced operators can easily configure it.
Action recognition by using kernels on aclets sequences
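The string similarity used above is the global alignment kernel evaluated over per-aclet feature vectors. A small sketch of the standard global alignment kernel recursion with a plain Gaussian local kernel follows; the paper's enhanced soft evaluation of aclet similarity is not reproduced, and the function and parameter names are assumptions.

```python
import numpy as np

def global_alignment_kernel(X, Y, sigma=1.0):
    """Global alignment kernel between two aclet sequences.

    X: (n, d) and Y: (m, d), one feature vector per aclet. The local kernel
    here is a plain RBF; the enhanced Gaussian kernel of the paper differs.
    """
    n, m = len(X), len(Y)
    # Local Gaussian kernel between every pair of aclets.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    # Dynamic program summing the kernel product over all monotonic alignments.
    G = np.zeros((n + 1, m + 1))
    G[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = k[i - 1, j - 1] * (G[i - 1, j] + G[i, j - 1] + G[i - 1, j - 1])
    return G[n, m]
```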
S107731421500199X
The interdigital palm region represents about 30% of the palm area and is inherently acquired during palmprint imaging; nevertheless, it has not yet attracted any noticeable attention in biometrics research. This paper investigates the ridge pattern characteristics of the interdigital palm region for its usage in biometric identification. An anatomical study of the interdigital area is initially carried out, leading to the establishment of five categories according to the distribution of the singularities and three regions of interest for biometrics. With the identified regions, our study analyzes the matching performance of the interdigital palm biometrics and its combination with the conventional palmprint matching approaches and presents comparative experimental results using four competing feature extraction methods. This study has been carried out with two publicly available databases. The first one consists of 2,080 images of 416 subjects acquired with a touchless low-cost imaging device focused on acquiring the interdigital palm area. The second database is the publicly available BiosecurID hand database which consists of 3,200 images from 400 users. The experimental results presented in this paper suggest that features from the interdigital palm region can be used to achieve competitive performance as well as offer significant improvements for conventional palmprint recognition.
Interdigital palm region for biometric identification
S1077314215002003
Feature matching is an important step for many computer vision applications. This paper introduces the development of a new feature descriptor, called SYnthetic BAsis (SYBA), for feature point description and matching. SYBA is built on the basis of the compressed sensing theory that uses synthetic basis functions to encode or reconstruct a signal. It is a compact and efficient binary descriptor that performs a number of similarity tests between a feature image region and a selected number of synthetic basis images and uses their similarity test results as the feature descriptors. SYBA is compared with four well-known binary descriptors using three benchmarking datasets as well as a newly created dataset that was designed specifically for a more thorough statistical T-test. SYBA is less computationally complex and produces better feature matching results than other binary descriptors. It is hardware-friendly and suitable for embedded vision applications.
An efficient feature descriptor based on synthetic basis functions and uniqueness matching strategy
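SYBA is described as running similarity tests between a binarized feature region and a set of synthetic basis images and using the test results as the descriptor. The sketch below is one illustrative reading of that idea with random binary basis images and a median-threshold binarization; it is not the published SYBA construction, and every constant in it is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH = 30                                   # assumed feature-region size
N_BASIS = 36                                 # assumed number of synthetic basis images
BASIS = rng.integers(0, 2, size=(N_BASIS, PATCH, PATCH), dtype=np.uint8)

def synthetic_basis_descriptor(patch):
    """Describe a feature region by its similarity to each synthetic basis image.

    patch: (PATCH, PATCH) grayscale region around a feature point.
    Returns an integer vector of per-basis similarity counts (illustrative only).
    """
    binary = (patch > np.median(patch)).astype(np.uint8)     # binarize the region
    # Similarity test: count pixels where the region and a basis image are both 1.
    return np.array([int((binary & b).sum()) for b in BASIS])

def descriptor_distance(d1, d2):
    """L1 distance between two similarity-count descriptors, used for matching."""
    return int(np.abs(d1 - d2).sum())
```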
S1077314215002015
With the recent explosion in the use of video surveillance in security, social and industrial applications, it is highly desired to develop “smart” cameras which are capable of not only supporting high-efficiency surveillance video coding but also facilitating some content analysis tasks such as moving object detection. Usually, background modeling is one of the fundamental pre-processing steps in many surveillance video coding and analysis tasks. Among various background models, the Gaussian Mixture Model (GMM) is considered one of the best parametric modeling methods for both video coding and analysis tasks. However, a number of floating-point calculations and division operations largely limit its application in hardware implementations (e.g., FPGA, SOC). To address this problem, this paper proposes a fixed-point Gaussian Mixture Model (fGMM), which can be used in the hardware implementation of the analysis-friendly surveillance video codec in smart cameras. In this paper, we first mathematically derive a fixed-point formulation of GMMs by introducing several integer variables to replace the corresponding float ones in GMM so as to eliminate the floating-point calculations, and then present a division simulation algorithm and an approximate calculation to replace the division operations. Extensive experiments on the PKU-SVD-A dataset show that fGMM can achieve comparable performance to the floating-point GMM on both surveillance video coding and object detection tasks, and remarkably outperforms several state-of-the-art methods. We also implemented fGMM in FPGA. The result shows that the FPGA implementation of our fGMM can process HD videos in real-time, requiring just 140 MHz user logic and 622 MHz DDR3 memory with a 64-bit data bus.
Fixed-point Gaussian Mixture Model for analysis-friendly surveillance video coding
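The fGMM abstract replaces floating-point GMM updates with integer arithmetic and removes divisions. The toy sketch below only shows how a single running-mean update μ ← μ + α(x − μ) can be carried out with integers and bit shifts when α is a power-of-two reciprocal; it is an assumption-laden illustration of the fixed-point idea, not the paper's full fGMM (weights, variances and the division simulation are omitted).

```python
import numpy as np

SHIFT = 10            # fixed-point scale: values stored as value * 2**SHIFT
ALPHA_SHIFT = 5       # learning rate alpha = 2**-ALPHA_SHIFT (assumed power of two)

def init_mean(first_frame):
    """Store the background mean in fixed point (int32), scaled by 2**SHIFT."""
    return first_frame.astype(np.int32) << SHIFT

def update_mean(mean_fp, frame):
    """mu <- mu + alpha * (x - mu), using only integer adds and shifts."""
    x_fp = frame.astype(np.int32) << SHIFT
    return mean_fp + ((x_fp - mean_fp) >> ALPHA_SHIFT)

def foreground_mask(mean_fp, frame, threshold=30):
    """Flag pixels far from the running mean as foreground."""
    diff = np.abs((frame.astype(np.int32) << SHIFT) - mean_fp) >> SHIFT
    return diff > threshold
```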
S1077314215002027
We propose a new image representation for texture categorization and facial analysis, relying on the use of higher-order local differential statistics as features. It has been recently shown that small local pixel pattern distributions can be highly discriminative while being extremely efficient to compute, which is in contrast to the models based on the global structure of images. Motivated by such works, we propose to use higher-order statistics of local non-binarized pixel patterns for the image description. The proposed model does not require either (i) user specified quantization of the space (of pixel patterns) or (ii) any heuristics for discarding low occupancy volumes of the space. We propose to use a data driven soft quantization of the space, with parametric mixture models, combined with higher-order statistics, based on Fisher scores. We demonstrate that this leads to a more expressive representation which, when combined with discriminatively learned classifiers and metrics, achieves state-of-the-art performance on challenging texture and facial analysis datasets, in low complexity setup. Further, it is complementary to higher complexity features and when combined with them improves performance.
Local Higher-Order Statistics (LHS) describing images with statistics of local non-binarized pixel patterns
S1077314215002039
Recent algorithms for exemplar-based single image super-resolution have shown impressive results, mainly due to well-chosen priors and recently also due to more accurate blur kernels. Some methods exploit clustering of patches, local gradients or some context information. However, to the best of our knowledge, there is no literature studying the benefits of using semantic information at the image level. By semantic information we mean image segments with corresponding categorical labels. In this paper we investigate the use of semantic information in conjunction with A+, a state-of-the-art super-resolution method. We conduct experiments on large standard datasets of natural images with semantic annotations, and discuss the benefits vs. the drawbacks of using semantic information. Experimental results show that our semantic driven super-resolution can significantly improve over the original settings.
Semantic super-resolution: When and where is it useful?
S1077314215002040
Information on whether a musician in a large symphonic orchestra plays her instrument at a given time stamp or not is valuable for a wide variety of applications aiming at mimicking and enriching the classical music concert experience on modern multimedia platforms. In this work, we propose a novel method for generating playing/non-playing labels per musician over time by efficiently and effectively combining an automatic analysis of the video recording of a symphonic concert and human annotation. In this way, we address the inherent deficiencies of traditional audio-only approaches in the case of large ensembles, as well as those of standard human action recognition methods based on visual models. The potential of our approach is demonstrated on two representative concert videos (about 7 hours of content) using a synchronized symbolic music score as ground truth. In order to identify the open challenges and the limitations of the proposed method, we carry out a detailed investigation of how different modules of the system affect the overall performance.
On detecting the playing/non-playing activity of musicians in symphonic music videos
S1077314215002052
Automatically detecting events in crowded scenes is a challenging task in Computer Vision. A number of offline approaches have been proposed for solving the problem of crowd behavior detection, however the offline assumption limits their application in real-world video surveillance systems. In this paper, we propose an online and real-time method for detecting events in crowded video sequences. The proposed approach is based on the combination of visual feature extraction and image segmentation and it works without the need of a training phase. A quantitative experimental evaluation has been carried out on multiple publicly available video sequences, containing data from various crowd scenarios and different types of events, to demonstrate the effectiveness of the approach.
Online real-time crowd behavior detection in video sequences
S1077314215002076
Detecting groups is becoming of relevant interest as an important step for scene (and especially activity) understanding. Differently from what is commonly assumed in the computer vision community, different types of groups do exist, and among these, standing conversational groups (a.k.a. F-formations) play an important role. An F-formation is a common type of people aggregation occurring when two or more persons sustain a social interaction, such as a chat at a cocktail party. Indeed, detecting and subsequently classifying such an interaction in images or videos is of considerable importance in many applicative contexts, like surveillance, social signal processing, social robotics or activity classification, to name a few. This paper presents a principled method to approach this problem, grounded upon the socio-psychological concept of an F-formation. More specifically, a game-theoretic framework is proposed, aimed at modeling the spatial structure characterizing F-formations. In other words, since F-formations are subject to geometrical constraints on how humans have to be mutually located and oriented, the proposed solution is able to account for these constraints while also statistically modeling the uncertainty associated with the position and orientation of the engaged persons. Moreover, taking advantage of video data, it is also able to integrate temporal information over multiple frames utilizing recent notions from multi-payoff evolutionary game theory. The experiments have been performed on several benchmark datasets, consistently showing the superiority of the proposed approach over the state of the art, and its robustness under severe noise conditions.
Detecting conversational groups in images and sequences: A robust game-theoretic approach
S1077314215002088
This work presents a statistical recognition approach performing large vocabulary continuous sign language recognition across different signers. Automatic sign language recognition is currently evolving from artificial lab-generated data to ‘real-life’ data. To the best of our knowledge, this is the first time system design on a large data set with true focus on real-life applicability is thoroughly presented. Our contributions are in five areas, namely tracking, features, signer dependency, visual modelling and language modelling. We experimentally show the importance of tracking for sign language recognition with respect to the hands and facial landmarks. We further contribute by explicitly enumerating the impact of multimodal sign language features describing hand shape, hand position and movement, inter-hand-relation and detailed facial parameters, as well as temporal derivatives. In terms of visual modelling we evaluate non-gesture-models, length modelling and universal transition models. Signer-dependency is tackled with CMLLR adaptation and we further improve the recognition by employing class language models. We evaluate on two publicly available large vocabulary databases representing lab-data (SIGNUM database: 25 signers, 455 sign vocabulary, 19k sentences) and unconstrained ‘real-life’ sign language (RWTH-PHOENIX-Weather database: 9 signers, 1081 sign vocabulary, 7k sentences) and achieve up to 10.0%/16.4% and respectively up to 34.3%/53.0% word error rate for single signer/multi-signer setups. Finally, this work aims at providing a starting point to newcomers into the field.
Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers
S1077314215002118
Human action recognition is an important and challenging task due to intra-class variation and the complexity of actions caused by diverse styles and durations of the performed actions. Previous works mostly concentrate on either depth or RGB data to build an understanding of the shape and movement cues in videos but fail to simultaneously utilize rich information in both channels. In this paper we study the problem of RGB-D action recognition from both RGB and depth sequences using kernel descriptors. Kernel descriptors provide a unified and elegant framework to turn pixel-level attributes into descriptive information about the performed actions in video. We show how using simple kernel descriptors over pixel attributes in video sequences achieves great success compared to more complex state-of-the-art methods. Following the success of kernel descriptors (Bo, et al., 2010) on the object recognition task, we put forward the claim that using 3D kernel descriptors could be an effective way to project the low-level features on 3D patches into a powerful structure which can effectively describe the scene. We build our system upon the 3D Gradient kernel descriptor and construct a hierarchical framework by employing efficient match kernel (EMK) (Bo, and Sminchisescu, 2009) and hierarchical kernel descriptors (HKD) as higher levels to abstract the mid-level features for classification. Through extensive experiments we demonstrate that the proposed approach achieves superior performance on four standard RGB-D sequence benchmarks.
Learning hierarchical 3D kernel descriptors for RGB-D action recognition
S107731421500212X
In this paper a framework is proposed to localize both Farsi/Arabic and Latin scene texts with different sizes, fonts and orientations. First, candidate text regions are extracted via an MSER detector enhanced by weighted median filtering to adapt to low-resolution texts. At the same time, based on a fuzzy inference system (FIS), the input image is classified as either a focused scene text image or an incidental scene text image, in which the image does not focus on the text content. For the focused scene text images the non-text candidates are filtered via an FIS. On the other hand, for the incidental scene text images, apart from the FIS, an extra filtering algorithm based on low rank matrix recovery is proposed. Finally, a new approach based on the clustering, minimum area rectangle and Radon transform techniques is proposed to create the single arbitrarily oriented text lines from the remaining text regions. To evaluate the proposed algorithm, we created a collection of natural images containing both Farsi/Arabic and Latin texts. Compared with the state-of-the-art methods, the proposed method achieves the best performance on our dataset and the Epshtein dataset and competitive performance on the ICDAR dataset.
Localizing scene texts by fuzzy inference systems and low rank matrix recovery model
S1077314215002143
Although significant improvements have been made in visual tracking in recent years, tracking arbitrary objects is still a challenging problem. In this paper, we present a weighted part model tracker that can efficiently handle partial occlusion and appearance change. Firstly, the object appearance is modeled by a mixture of deformable part models with a graph structure. Secondly, through modeling the temporal evolution of each part with a mixture of Gaussian distributions, we present a temporal weighted model to dynamically adjust the importance of each part by measuring the fitness to the historical temporal distributions in the tracking process. Moreover, the temporal weighted models are used to control the sample selections for the update of part models, which makes different parts update differently due to partial occlusion or drastic appearance change. Finally, the weighted part models are solved by structural learning to locate the object. Experimental results show the superiority of the proposed approach.
Learning weighted part models for object tracking
S1077314215002155
This paper proposes a novel framework for Relevance Feedback based on the Fisher Kernel (FK). Specifically, we train a Gaussian Mixture Model (GMM) on the top retrieval results (without supervision) and use this to create a FK representation, which is therefore specialized in modelling the most relevant examples. We use the FK representation to explicitly capture temporal variation in video via frame-based features taken at different time intervals. While the GMM is being trained, a user selects from the top examples those which he is looking for. This feedback is used to train a Support Vector Machine on the FK representation, which is then applied to re-rank the top retrieved results. We show that our approach outperforms other state-of-the-art relevance feedback methods. Experiments were carried out on the Blip10000, UCF50, UCF101 and ADL standard datasets using a broad range of multi-modal content descriptors (visual, audio, and text).
Fisher Kernel Temporal Variation-based Relevance Feedback for video retrieval
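The relevance feedback pipeline above trains a GMM on the top retrieval results, represents each video with a Fisher-kernel encoding and learns an SVM from the user's selections to re-rank. A compact scikit-learn sketch is shown below with a simplified Fisher vector (gradients with respect to the means only); the data, labels and dimensions are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fisher_vector_means(frames, gmm):
    """Simplified Fisher vector: responsibility-weighted, variance-normalised
    residuals to the GMM means (the gradient w.r.t. the means only)."""
    q = gmm.predict_proba(frames)                        # (T, K) responsibilities
    diff = frames[:, None, :] - gmm.means_[None, :, :]   # (T, K, D) residuals
    fv = (q[:, :, None] * diff / np.sqrt(gmm.covariances_)[None]).mean(axis=0)
    return fv.ravel()

# Placeholder frame-level features for the top-50 retrieved videos.
top_videos = [np.random.rand(120, 64) for _ in range(50)]

# 1) Train the GMM without supervision on frames of the top results.
gmm = GaussianMixture(n_components=8, covariance_type='diag').fit(np.vstack(top_videos))

# 2) Build a Fisher representation of each top video.
fvs = np.array([fisher_vector_means(v, gmm) for v in top_videos])

# 3) Hypothetical user feedback: the first 10 results are marked relevant.
labels = (np.arange(len(top_videos)) < 10).astype(int)

# 4) Train an SVM on the Fisher vectors and re-rank by its decision value.
svm = SVC(kernel='linear').fit(fvs, labels)
reranked = np.argsort(-svm.decision_function(fvs))
```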
S1077314215002167
We present a novel framework for tracking multiple objects imaged from one or more static cameras, where the problems of object detection and data association are expressed by a single objective function. Particularly, we combine a sparsity-driven detector with the network-flow data association technique. The framework follows the Lagrange dual decomposition strategy, taking advantage of the often complementary nature of the two subproblems. Our coupling formulation avoids the problem of error propagation from which traditional “detection-tracking approaches” to multiple object tracking suffer. We also eschew common heuristics such as “non-maximum suppression” of hypotheses by modeling the joint image likelihood as opposed to applying independent likelihood assumptions. Our coupling algorithm is guaranteed to converge and can resolve the ambiguities in track maintenance due to frequent occlusion and indistinguishable appearance between objects. Furthermore, our method does not have severe scalability issues but can process hundreds of frames at the same time. Our experiments involve challenging, notably distinct datasets and demonstrate that our method can achieve results comparable to or better than those of state-of-the-art approaches.
Global optimization for coupled detection and data association in multiple object tracking
S1077314215002179
This paper presents a novel double-layer sparse representation (DLSR) approach for improving both the reconstructive and discriminative capabilities of unsupervised dictionary learning. In supervised/unsupervised discriminative dictionary learning, classical approaches usually develop a discriminative term for learning multiple sub-dictionaries, each of which corresponds to one class of training image patches. As such, the image patches for different classes can be discriminated by the coefficients of sparse representation with respect to different sub-dictionaries. However, in the unsupervised scenario, some of the training patches for learning the sub-dictionaries of different clusters are related to more than one cluster. Thus, we propose a DLSR formulation in this paper to impose the first-layer sparsity on the coefficients and the second-layer sparsity on the clusters for each training patch, embedding both the reconstructive (via the first layer) and discriminative (via the second layer) capabilities in the learned dictionary. To address the proposed DLSR formulation, a simple yet effective algorithm, called DLSR-OMP, is developed on the basis of the conventional OMP algorithm. Finally, the experiments verify that our approach can improve the reconstruction and clustering performance of dictionaries learned by conventional approaches. More importantly, the experimental results on texture segmentation show that our approach outperforms other state-of-the-art discriminative dictionary learning approaches in the clustering task.
A novel double-layer sparse representation approach for unsupervised dictionary learning
S1077314215002180
This paper focuses on recognizing human interaction relative to human emotion, and addresses the problem of interaction feature representation. We propose a two-layer feature description structure that exploits the representation of spatio-temporal motion features and context features hierarchically. On the lower layer, the local features for motion and interactive context are extracted respectively. We first characterize the local spatio-temporal trajectories as the motion features. Instead of hand-crafted features, a new hierarchical spatio-temporal trajectory coding model is presented to learn and represent the local spatio-temporal trajectories. To further exploit the spatial and temporal relationships in the interactive activities, we then propose an interactive context descriptor, which extracts the local interactive contours from frames. These contours implicitly incorporate the contextual spatial and temporal information. On the higher layer, semi-global features are represented based on the local features encoded on the lower layer, and a spatio-temporal segment clustering method is designed for feature extraction on this layer. This method takes the spatial relationship and temporal order of local features into account and creates the mid-level motion features and mid-level context features. Experiments on three challenging action datasets in video, including HMDB51, Hollywood2 and UT-Interaction, are conducted. The results demonstrate the efficacy of the proposed structure, and validate the effectiveness of the proposed context descriptor.
Affective interaction recognition using spatio-temporal features and context
S1077314215002192
Beyond recognizing actions of individuals, activity group localization in videos aims to localize groups of persons in spatiotemporal spaces and recognize what activity the group performs. In this paper, we propose a latent graph model to simultaneously address the problem of multi-target tracking, group discovery and activity recognition. Our key insight is to exploit the contextual relations among people. We present them as a latent relational graph, which hierarchically encodes the association potentials between tracklets, intra-group interactions, correlations, and inter-group compatibilities. Our model is capable of propagating multiple evidences among different layers of the latent graph. Particularly, associated tracklets assist accurate group discovery, activity recognition can benefit from knowing the whole structured groups, and the group and activity information in turn provides strong cues for establishing coherent associations between tracklets. Experiments on five datasets demonstrate that our model achieves both significant improvements in activity group localization and competitive performance on activity recognition.
Localizing activity groups in videos
S1077314215002209
We present a review on the current state of publicly available datasets within the human action recognition community; highlighting the revival of pose based methods and recent progress of understanding person–person interaction modeling. We categorize datasets regarding several key properties for usage as a benchmark dataset; including the number of class labels, ground truths provided, and application domain they occupy. We also consider the level of abstraction of each dataset; grouping those that present actions, interactions and higher level semantic activities. The survey identifies key appearance and pose based datasets, noting a tendency for simplistic, emphasized, or scripted action classes that are often readily definable by a stable collection of sub-action gestures. There is a clear lack of datasets that provide closely related actions, those that are not implicitly identified via a series of poses and gestures, but rather a dynamic set of interactions. We therefore propose a novel dataset that represents complex conversational interactions between two individuals via 3D pose. Eight pairwise interactions describing seven separate conversation-based scenarios were collected using two Kinect depth sensors. The intention is to provide events that are constructed from numerous primitive actions, interactions and motions, over a period of time; providing a set of subtle action classes that are more representative of the real world, and a challenge to currently developed recognition methodologies. We believe this is among the first datasets devoted to conversational interaction classification using 3D pose features and the attributed papers show this task is indeed possible. The full dataset is made publicly available to the research community at [1].
From pose to activity: Surveying datasets and introducing CONVERSE
S1077314215002325
Traditional approaches for video classification treat the entire video clip as one data instance. They extract visual features from video frames which are then quantized (e.g., K-means) and pooled (e.g., average pooling) to produce a single feature vector. Such holistic representations of videos are further used as inputs of a classifier. Despite its efficiency, such a global and aggregate feature representation unavoidably brings in redundant and noisy information from background and unrelated video frames that sometimes overwhelms targeted visual patterns. In addition, temporal correlations between consecutive video frames, which may be the key indicator of an action or event, are ignored in both training and testing. To this end, we propose Weakly Supervised Sequence Modeling (WSSM), a novel framework that combines multiple-instance learning (MIL) and a Conditional Random Field (CRF) model seamlessly. Our model takes each entire video as a bag and one video segment as an instance. In our framework, the salient local patterns for different video categories are explored by MIL, and intrinsic temporal dependencies between instances are explicitly exploited using the powerful chain CRF model. In the training stage, we design a novel conditional likelihood formulation which only requires annotation at the video level. Such a likelihood can be maximized using an alternating optimization method. The training algorithm is guaranteed to converge and is very efficient. In the testing stage, videos are classified by the learned CRF model. The proposed WSSM algorithm outperforms other MIL-based approaches in both accuracy and efficiency on synthetic data and realistic videos for gesture and action classification.
Video Classification via Weakly Supervised Sequence Modeling
S1077314215002337
The spiralling increase of video data has rendered the automated localization and recognition of activities an essential step for video content understanding. In this work, we introduce novel algorithms for detecting human activities in the spatial domain via a binary activity detection mask, the Motion Boundary Activity Area (MBAA), and in the time domain by a new approach, Statistical Sequential Boundary Detection (SSBD). MBAAs are estimated by analyzing the motion vectors using the Kurtosis metric, while dense trajectories are extracted and described using a low level HOGHOF descriptor and high level Fisher representation scheme, modeling a Support Vector Data Description (SVDD) hypersphere. SSBD is then realized by applying Sequential Change Detection with the Cumulative Sum (CUSUM) algorithm on the distances between Fisher data descriptors and the corresponding reference SVDD hyperspheres for rapid detection of changes in the activity pattern. Activities in the resulting video subsequences are then classified using a multi-class SVM model, leading to state-of-the-art results. Our experiments with benchmark and real world data demonstrate that our technique is successful in reducing the computational cost and also in improving activity detection rates.
Activity detection using Sequential Statistical Boundary Detection (SSBD)
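SSBD applies CUSUM sequential change detection to the stream of distances between the Fisher descriptors and the reference SVDD hyperspheres. A generic one-sided CUSUM sketch on such a distance stream follows; the baseline estimate, drift and threshold values are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def cusum_change_point(distances, drift=0.05, threshold=5.0):
    """One-sided CUSUM over a stream of distances to a reference model.

    distances: per-segment distance of the Fisher descriptor to the reference
    SVDD hypersphere (larger means less like the current activity).
    Returns the index at which a change is declared, or None.
    """
    baseline = float(np.mean(distances[:10]))     # assumed in-activity reference level
    s = 0.0
    for t, d in enumerate(distances):
        s = max(0.0, s + (d - baseline - drift))  # accumulate positive deviations only
        if s > threshold:
            return t                              # activity boundary detected here
    return None
```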
S1077314215002349
In the last years, the number of surveillance cameras placed in public locations has increased vastly and, as a consequence, a huge amount of visual data is generated every minute. In general, this data is analyzed manually, a challenging task which is labor intensive and prone to errors. Therefore, automatic approaches must be employed to enable the processing of the data, so that human operators only need to reason about selected portions. Computer vision approaches focused on solving problems in the domain of visual surveillance have been developed, aiming at finding accurate and efficient solutions. The main goal of such systems is to analyze the scene focusing on the detection and recognition of suspicious activities performed by humans in the scene, so that the security staff can pay closer attention to these preselected activities. However, these systems are rarely tackled in a scalable manner. Before developing a full surveillance system, several problems have to be solved, and they are usually solved individually. In a real surveillance scenario, however, these problems have to be solved in sequence considering only videos as the input. With that in mind, this work proposes a framework for scalable video analysis called the Smart Surveillance Framework (SSF) to allow researchers to implement their solutions to the surveillance problems as a sequence of processing modules that communicate through a shared memory.
A scalable and flexible framework for smart video surveillance
S1077314215002362
LP relaxation based message passing and flow-based algorithms are two of the popular techniques for performing MAP inference in graphical models. Generic Cuts (GC) (Arora et al., 2015) combines the two approaches to generalize the traditional max-flow min-cut based algorithms for binary models with higher order clique potentials. The algorithm has been shown to be significantly faster than the state of the art algorithms. The time and memory complexities of Generic Cuts are linear in the number of constraints, which in turn is exponential in the clique size. This limits the applicability of the approach to small cliques only. In this paper, we propose a lazy version of Generic Cuts exploiting the property that in most of such inference problems a large fraction of the constraints are never used during the course of minimization. We start with a small set of constraints (called the active constraints) which are expected to play a role during the minimization process. GC is then run with this reduced set allowing it to be efficient in time and memory. The set of active constraints is adaptively learnt over multiple iterations while guaranteeing convergence to the optimum for submodular clique potentials. Our experiments show that the number of constraints required by the algorithm is typically less than 3% of the total number of constraints. Experiments on computer vision datasets show that our approach can significantly outperform the state of the art both in terms of time and memory and is scalable to clique sizes that could not be handled by existing approaches.
Lazy Generic Cuts
S1077314215002374
In video-surveillance, violent event detection is of utmost interest. Although action recognition has been well studied in computer vision, the literature on violence detection in video is far sparser, even more so for surveillance applications. As aggressive events are difficult to define due to their variability and often need high-level interpretation, we decided to first try to characterize what is frequently present in video with violent human behaviors, at a low level: jerky and unstructured motion. Thus, a novel problem-specific Rotation-Invariant feature modeling MOtion Coherence (RIMOC) was proposed, in order to capture its structure and discriminate the unstructured motions. It is based on the eigenvalues obtained from the second-order statistics of the Histograms of Optical Flow vectors from consecutive temporal instants, locally and densely computed, and further embedded into a spherical Riemannian manifold. The proposed RIMOC feature is used to learn statistical models of normal coherent motions in a weakly supervised manner. A multi-scale scheme applied on an inference-based method allows the events with erratic motion to be detected in space and time, as good candidates of aggressive events. We experimentally show that the proposed method produces results comparable to a state-of-the-art supervised approach, with added simplicity in training and computation. Thanks to the compactness of the feature, real-time computation is achieved in the learning as well as the detection phase. Extensive experimental tests on more than 18 h of video are provided in different in-lab and real contexts, such as railway cars equipped with on-board cameras.
RIMOC, a feature to discriminate unstructured motions: Application to violence detection for video-surveillance
S1077314215002453
Currently, Markov–Gibbs random field (MGRF) image models which include high-order interactions are almost always built by modelling responses of a stack of local linear filters. Actual interaction structure is specified implicitly by the filter coefficients. In contrast, we learn an explicit high-order MGRF structure by considering the learning process in terms of general exponential family distributions nested over base models, so that potentials added later can build on previous ones. We relatively rapidly add new features by skipping over the costly optimisation of parameters. We introduce the use of local binary patterns as features in MGRF texture models, and generalise them by learning offsets to the surrounding pixels. These prove effective as high-order features, and are fast to compute. Several schemes for selecting high-order features by composition or search of a small subclass are compared. Additionally we present a simple modification of the maximum likelihood as a texture modelling-specific objective function which aims to improve generalisation by local windowing of statistics. The proposed method was experimentally evaluated by learning high-order MGRF models for a broad selection of complex textures and then performing texture synthesis, and succeeded on much of the continuum from stochastic through irregularly structured to near-regular textures. Learning interaction structure is very beneficial for textures with large-scale structure, although those with complex irregular structure still provide difficulties. The texture models were also quantitatively evaluated on two tasks and found to be competitive with other works: grading of synthesised textures by a panel of observers; and comparison against several recent MGRF models by evaluation on a constrained inpainting task.
Texture modelling with nested high-order Markov–Gibbs random fields
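The texture models above use local binary patterns, generalised by learning the offsets to the surrounding pixels, as high-order features. A plain NumPy sketch of the basic 8-neighbour LBP code with configurable offsets is given below; the learned offsets and the MGRF potentials of the paper are not reproduced.

```python
import numpy as np

# Classic 8-neighbour offsets; the paper learns such offsets rather than fixing them.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_codes(image, offsets=OFFSETS):
    """Local binary pattern code for every interior pixel of a grayscale image."""
    img = image.astype(np.int32)
    h, w = img.shape
    pad = max(max(abs(dy), abs(dx)) for dy, dx in offsets)
    centre = img[pad:h - pad, pad:w - pad]
    codes = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[pad + dy:h - pad + dy, pad + dx:w - pad + dx]
        codes |= (neighbour >= centre).astype(np.int32) << bit
    return codes

def lbp_histogram(image, n_bins=256):
    """Normalised histogram of LBP codes, usable as a texture statistic."""
    hist, _ = np.histogram(lbp_codes(image), bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()
```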
S1077314215002465
High-quality light field photography has been one of the most difficult challenges in computational photography. Conventional methods either sacrifice resolution, use multiple devices, or require multiple images to be captured. Combining coded image acquisition and compressive reconstruction is one of the most promising directions to overcome limitations of conventional light field cameras. We present a new approach to compressive light field photography that exploits a joint tensor low-rank and sparse prior (LRSP) on natural light fields. As opposed to recently proposed light field dictionaries, our method does not require a computationally expensive learning stage but rather models the redundancies of high dimensional visual signals using a tensor low-rank prior. This is not only computationally more efficient but also more flexible in that the proposed techniques are easily applicable to a wide range of different imaging systems, camera parameters, and also scene types.
Tensor low-rank and sparse light field photography
S1077314215002489
Learning human activity models from streaming videos should be a continuous process as new activities arrive over time. However, recent approaches for human activity recognition are usually batch methods, which assume that all the training instances are labeled and present in advance. Among such methods, the exploitation of the inter-relationship between the various objects in the scene (termed as context) has proved extremely promising. Many state-of-the-art approaches learn human activity models continuously but do not exploit the contextual information. In this paper, we propose a novel framework that continuously learns both the appearance and the context models of complex human activities from streaming videos. We automatically construct a conditional random field (CRF) graphical model to encode the mutual contextual information among the activities and the related object attributes. In order to reduce the amount of manual labeling of the incoming instances, we exploit active learning to select the most informative training instances with respect to both the appearance and the context models to incrementally update these models. Rigorous experiments on four challenging datasets demonstrate that our framework outperforms state-of-the-art approaches with a significantly smaller amount of manually labeled data.
Incremental learning of human activity models from videos
S1077314215002490
Multicuts enable a convenient representation of discrete graphical models for unsupervised and supervised image segmentation, in the case of local energy functions that exhibit symmetries. The basic Potts model and natural extensions thereof to higher-order models provide a prominent class of such objectives that cover a broad range of segmentation problems relevant to image analysis and computer vision. We exhibit a way to systematically take into account such higher-order terms for computational inference. Furthermore, we present results of a comprehensive and competitive numerical evaluation of a variety of dedicated cutting-plane algorithms. Our approach enables the globally optimal evaluation of a significant subset of these models, without compromising runtime. Polynomially solvable relaxations are studied as well, along with advanced rounding schemes for post-processing.
Higher-order segmentation via multicuts
S1077314215002507
The proliferation of video data makes it imperative to develop automatic approaches that semantically analyze and summarize the ever-growing massive visual data. As opposed to existing approaches built on still images, we propose an algorithm that detects the recurring primary object and learns cohort object proposals over space-time in video. Our core contribution is a graph transduction process that exploits both appearance cues learned from rudimentary detections of object-like regions, and the intrinsic structures within video data. By exploiting the fact that rudimentary detections of recurring objects in video, despite appearance variation and the sporadic nature of detections, collectively describe the primary object, we are able to learn a holistic model given a small set of object-like regions. This prior knowledge of the recurring primary object can be propagated to the rest of the video to generate a diverse set of object proposals in all frames, incorporating both spatial and temporal cues. This set of rich descriptions underpins a robust object segmentation method against changes in appearance, shape and occlusion in natural videos. We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods.
Primary object discovery and segmentation in videos via graph-based transductive inference
S1077314215002556
In the past, the huge and profitable interaction between Pattern Recognition and biology/bioinformatics was mainly unidirectional, namely targeted at applying PR tools and ideas to analyse biological data. In this paper we investigate an alternative approach, which exploits bioinformatics solutions to solve PR problems: in particular, we address the 2D shape classification problem using classical biological sequence analysis approaches – for which a vast amount of tools and solutions have been developed and improved in more than 40 years of research. First, we highlight the similarities between 2D shapes and biological sequences, then we propose three methods to encode a shape as a biological sequence. Given the encoding, we can employ standard biological sequence analysis tools to derive a similarity, which can be exploited in a nearest neighbor framework. Classification results, obtained on 5 standard datasets, confirm the potential of the proposed unconventional interaction between PR and bioinformatics. Moreover, we provide some evidence of how it is possible to exploit other bioinformatics concepts and tools to interpret data and results, confirming the flexibility of the proposed framework.
A bioinformatics approach to 2D shape classification
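The key step above is encoding a 2D shape as a sequence so that standard sequence-comparison tools can be reused. One natural, hedged illustration is a chain-code style encoding of the contour into a small symbolic alphabet, after which any alignment or edit-distance routine can compare shapes; the paper's three specific encodings and its biological aligners are not reproduced here.

```python
import numpy as np

SYMBOLS = "ACDEFGHI"   # eight symbols, one per quantised contour direction (illustrative)

def contour_to_sequence(contour):
    """Encode a closed 2D contour, given as an (N, 2) array of points, as a
    string of quantised edge directions (a chain-code-like sequence)."""
    deltas = np.diff(np.vstack([contour, contour[:1]]), axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])            # direction of each edge
    bins = ((angles + np.pi) / (2 * np.pi) * 8).astype(int) % 8
    return "".join(SYMBOLS[b] for b in bins)

def edit_distance(a, b):
    """Plain Levenshtein distance as a stand-in for a biological sequence aligner."""
    dp = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return int(dp[-1])
```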
S107731421500257X
In this paper, we cast multi-target tracking as a dense subgraph discovering problem on the undirected relation graph of all given target hypotheses. We aim to extract multiple clusters (dense subgraphs), in which each cluster contains a set of hypotheses of one particular target. In the presence of occlusion or similar moving targets or when there is no reliable evidence for the target’s presence, each target trajectory is expected to be fragmented into multiple tracklets. The proposed tracking framework can efficiently link such fragmented target trajectories to build a longer trajectory specifying the true states of the target. In particular, a discriminative scheme is devised via learning the targets’ appearance models. Moreover, the smoothness characteristic of the target trajectory is utilised by suggesting a smoothness tracklet affinity model to increase the power of the proposed tracker to produce persistent target trajectories revealing different targets’ moving paths. The performance of the proposed approach has been extensively evaluated on challenging public datasets and also in the context of team sports (e.g. soccer, AFL), where team players tend to exhibit quick and unpredictable movements. Systematic experimental results conducted on a large set of sequences show that the proposed approach performs better than the state-of-the-art trackers, in particular, when dealing with occlusion and fragmented target trajectory.
Efficient multi-target tracking via discovering dense subgraphs
S1077314215002581
This work converts the surveillance video to a temporal domain image called a temporal profile that is scrollable and scalable for quick searching of long surveillance video by human operators. Such a profile is sampled with linear pixel lines located at critical locations in the video frames. It provides precise time stamps for target passing events through those locations in the field of view, shows target shapes for identification, and facilitates target search in long videos. In this paper, we first study the projection and shape properties of dynamic scenes in the temporal profile so as to set sampling lines. Then, we design methods to capture target motion and preserve target shapes for target recognition in the temporal profile. It also provides uniform resolution for large crowds passing through, which makes it effective for target counting and flow measurement. We also align multiple sampling lines to visualize the spatial information missed in a single line temporal profile. Finally, we achieve real time adaptive background removal and robust target extraction to ensure long-term surveillance. Compared to the original video or the shortened video, this temporal profile reduces the data by one dimension while keeping the majority of the information for further video investigation. As an intermediate indexing image, the profile image can be transmitted over a network much faster than video for online video search tasks by multiple operators. Because the temporal profile can abstract passing targets with efficient computation, an even more compact digest of the surveillance video can be created.
Temporal mapping of surveillance video for indexing and summarization
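The temporal profile described above is essentially one pixel line resampled from every frame and stacked over time. A minimal OpenCV/NumPy sketch is shown below; the video path and column position are placeholders, and the adaptive background removal and multi-line alignment of the paper are omitted.

```python
import cv2
import numpy as np

def temporal_profile(video_path, column=320):
    """Stack one vertical pixel line per frame into a scrollable profile image.

    In practice the sampling line would sit at a critical location such as a
    doorway or corridor cross-section; targets crossing it leave a visible trace.
    """
    cap = cv2.VideoCapture(video_path)
    lines = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        lines.append(frame[:, column, :])      # one (H, 3) pixel line per frame
    cap.release()
    # Rows = image height, columns = time.
    return np.stack(lines, axis=1) if lines else None

profile = temporal_profile("surveillance.avi")   # placeholder file name
if profile is not None:
    cv2.imwrite("temporal_profile.png", profile)
```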
S1077314215002611
Independent mobility involves a number of challenges for people with visual impairment or blindness. In particular, in many countries the majority of traffic lights are still not equipped with acoustic signals. Recognizing traffic lights through the analysis of images acquired by a mobile device camera is a viable solution already explored in the scientific literature. However, there is a major issue: the recognition techniques should be robust under different illumination conditions. This contribution addresses the above problem with an effective solution: besides image processing and recognition, it proposes a robust setup for image capture that makes it possible to acquire clearly visible traffic light images regardless of daylight variability due to time and weather. The proposed recognition technique that adopts this approach is reliable (full precision and high recall), robust (works in different illumination conditions) and efficient (it can run several times a second on commercial smartphones). The experimental evaluation conducted with visually impaired subjects shows that the technique is also practical in supporting road crossing.
Robust traffic lights detection on mobile devices for pedestrians with visual impairment
S1077314215002623
A real-world object surface often consists of multiple materials. Recognizing surface materials is important because it significantly benefits understanding the quality and functionality of the object. However, identifying multiple materials on a surface from a single photograph is very challenging because different materials are often interwoven and hard to segment for separate identification. To address this problem, we present a multi-label learning framework for identifying multiple materials of a real-world object surface without a segmentation for each of them. We find that there are potential correlations between materials and that these correlations are related to the object category. For example, the surface of a monitor likely consists of plastic and glass rather than wood or stone. It motivates us to learn the correlations of material labels locally on each semantic object cluster. To this end, samples are semantically grouped according to their object categories. For each group of samples, we employ a Directed Acyclic Graph (DAG) to encode the conditional dependencies of material labels. These object-specific DAGs are then used for assisting the inference of surface materials. The key enabler of the proposed method is that object recognition provides a semantic cue for material recognition by formulating an object-specific DAG learning. We test our method on the ALOT database and show consistent improvements over the state of the art.
Learning object-specific DAGs for multi-label material recognition
S1077314215002635
In this paper, we present a photometric stereo algorithm for estimating surface height. We follow recent work that uses photometric ratios to obtain a linear formulation relating surface gradients and image intensity. Using smoothed finite difference approximations for the surface gradient, we are able to express surface height recovery as a linear least squares problem that is large but sparse. In order to make the method practically useful, we combine it with a model-based approach that excludes observations which deviate from the assumptions made by the image formation model. Despite its simplicity, we show that our algorithm provides surface height estimates of a high quality even for objects with highly non-Lambertian appearance. We evaluate the method on both synthetic images with ground truth and challenging real images that contain strong specular reflections and cast shadows.
Height from photometric ratio with model-based light source selection
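The abstract above reduces height recovery to a large but sparse linear least-squares problem over finite-difference constraints on the surface gradient. As a rough illustration of that final step only (not the authors' implementation, and with the photometric-ratio stage and light-source selection assumed to have already produced gradient estimates), the sketch below integrates a gradient field into a height map with scipy's sparse solver; the anchoring choice and all names are illustrative.

```python
# Minimal sketch: recover a height map z from gradient fields (p, q) by
# least-squares integration with forward finite differences. The explicit
# Python loops are for clarity only and are practical for small images.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

def integrate_gradients(p, q):
    """p = dz/dx, q = dz/dy, both HxW arrays; returns a height map z (HxW)."""
    H, W = p.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)

    rows, cols, vals, b = [], [], [], []
    eq = 0
    # dz/dx ~ z[y, x+1] - z[y, x] = p[y, x]
    for y in range(H):
        for x in range(W - 1):
            rows += [eq, eq]; cols += [idx[y, x + 1], idx[y, x]]; vals += [1.0, -1.0]
            b.append(p[y, x]); eq += 1
    # dz/dy ~ z[y+1, x] - z[y, x] = q[y, x]
    for y in range(H - 1):
        for x in range(W):
            rows += [eq, eq]; cols += [idx[y + 1, x], idx[y, x]]; vals += [1.0, -1.0]
            b.append(q[y, x]); eq += 1
    # Anchor one pixel to remove the unknown constant height offset.
    rows.append(eq); cols.append(0); vals.append(1.0); b.append(0.0); eq += 1

    A = sparse.coo_matrix((vals, (rows, cols)), shape=(eq, n)).tocsr()
    z = lsqr(A, np.asarray(b))[0]
    return z.reshape(H, W)
```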
S1077314215002647
Recognising human actions in real-time can provide users with a natural user interface (NUI) enabling a range of innovative and immersive applications. A NUI application should not restrict users’ movements; it should allow users to transition between actions in quick succession, which we term compound actions. However, the majority of action recognition researchers have focused on individual actions, so their approaches are limited to recognising single actions or multiple actions that are temporally separated. This paper proposes a novel online action recognition method for fast detection of compound actions. A key contribution is our hierarchical body model that can be automatically configured to detect actions based on the low level body parts that are the most discriminative for a particular action. Another key contribution is a transfer learning strategy to allow the tasks of action segmentation and whole body modelling to be performed on a related but simpler dataset, combined with automatic hierarchical body model adaption on a more complex target dataset. Experimental results on a challenging and realistic dataset show an improvement in action recognition performance of 16% due to the introduction of our hierarchical transfer learning. The proposed algorithm is fast with an average latency of just 2 frames (66 ms) and outperforms state-of-the-art action recognition algorithms that are capable of fast online action recognition.
Hierarchical transfer learning for online recognition of compound actions
S1077314215002660
Jointly handling large displacements, motion details and occlusions remains an open issue for reliable computation of optical flow in a video sequence. We propose a two-step aggregation paradigm to address this problem. The idea is to supply local motion candidates at every pixel in a first step, and then to combine them to determine the global optical flow field in a second step. We exploit local parametric estimations combined with patch correspondences and we experimentally demonstrate that they are sufficient to produce highly accurate motion candidates. The aggregation step is designed as the discrete optimization of a global regularized energy. The occlusion map is estimated jointly with the flow field throughout the two steps. We propose a generic exemplar-based approach for occlusion filling with motion vectors. We achieve state-of-the-art results on the MPI Sintel benchmark, with particularly significant improvements in the case of large displacements and occlusions.
Aggregation of local parametric candidates with exemplar-based occlusion handling for optical flow
S1077314215002672
Total variation (TV) methods using the l1-norm are efficient approaches for optical flow determination. This contribution presents a multi-resolution TV-l1 approach using a data term based on neighborhood descriptors and a weighted non-local regularizer. The proposed algorithm is robust to illumination changes. The benchmarking of the proposed algorithm is done with three reference databases (Middlebury, KITTI and MPI Sintel). On these databases, the proposed approach exhibits an optimal compromise between robustness, accuracy and computation speed. Numerous tests performed both on complicated data of the reference databases and on challenging endoscopic images acquired under three different modalities demonstrate the robustness and accuracy of the method against the presence of large or small displacements, weak texture information, varying illumination conditions and modality changes.
Illumination invariant optical flow using neighborhood descriptors
S1077314215002696
We present an algorithm for graph based saliency computation that utilizes the underlying dense subgraphs in finding visually salient regions in an image. To compute the salient regions, the model first obtains a saliency map using random walks on a Markov chain. Next, k-dense subgraphs are detected to further enhance the salient regions in the image. Dense subgraphs convey more information about local graph structure than simple centrality measures. To generate the Markov chain, intensity and color features of the image, in addition to region compactness, are used. For evaluating the proposed model, we perform extensive experiments on benchmark image data sets. The proposed method performs comparably to well-known algorithms in salient region detection.
A dense subgraph based algorithm for compact salient image region detection
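As a rough sketch of the first stage described in the abstract above (saliency from the equilibrium distribution of a random walk over a region graph), the code below builds a dissimilarity-weighted Markov chain over precomputed regions and power-iterates to its stationary distribution. The k-dense-subgraph refinement is omitted, and the feature choices (mean color, normalized centers, sigma_pos) are assumptions rather than the paper's exact affinities.

```python
# Minimal sketch: per-region saliency from the stationary distribution of a
# random walk on a fully connected graph whose edges reflect color dissimilarity.
import numpy as np

def random_walk_saliency(region_colors, region_centers, sigma_pos=0.25):
    """region_colors: (N, 3) mean colors; region_centers: (N, 2) positions in [0, 1]."""
    N = len(region_colors)
    # Dissimilarity-weighted edges, attenuated by spatial distance.
    color_diff = np.linalg.norm(region_colors[:, None] - region_colors[None, :], axis=-1)
    pos_dist = np.linalg.norm(region_centers[:, None] - region_centers[None, :], axis=-1)
    W = color_diff * np.exp(-pos_dist**2 / (2 * sigma_pos**2))
    np.fill_diagonal(W, 0.0)

    # Row-normalize to obtain the Markov transition matrix.
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

    # Stationary distribution via power iteration; salient regions accumulate mass.
    pi = np.full(N, 1.0 / N)
    for _ in range(200):
        pi_next = pi @ P
        if np.abs(pi_next - pi).sum() < 1e-10:
            break
        pi = pi_next
    return pi / pi.max()
```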
S1077314215002702
Facial micro-expressions are important and prevalent as they reveal humans' actual emotions. In particular, automated micro-expression analysis, as a substitute for human observation, has recently begun to gain attention. However, largely unsolved problems in detecting micro-expressions for subsequent analysis need to be addressed first, such as subtle head movements and unconstrained lighting conditions. To face these challenges, we propose a probabilistic framework to temporally detect spontaneous micro-expression clips from a video sequence (micro-expression spotting) in this paper. In the probabilistic framework, a random walk model is presented to calculate the probability of individual frames containing micro-expressions. An Adaboost model is utilized to estimate the initial probability for each frame, and the correlation between frames is incorporated into the random walk model. The active shape model and Procrustes analysis, which are robust to head movement and lighting variation, are used to describe the geometric shape of the human face. The geometric deformation is then modeled and used for Adaboost training. Through experiments on two spontaneous micro-expression datasets, we verify the effectiveness of our proposed micro-expression spotting approach.
Spontaneous micro-expression spotting via geometric deformation modeling
S1077314215002714
Removing the influence of occlusion on the depth estimation for light field images has always been a difficult problem, especially for highly noisy and aliased images captured by plenoptic cameras. In this paper, a spinning parallelogram operator (SPO) is integrated into a depth estimation framework to solve these problems. Utilizing the regions divided by the operator in an Epipolar Plane Image (EPI), the lines that indicate depth information are located by maximizing the distribution distances of the regions. Unlike traditional multi-view stereo matching methods, the distance measure is able to keep the correct depth information even if they are occluded or noisy. We further choose the relatively reliable information among the rich structures in the light field to reduce the influences of occlusion and ambiguity. The discrete labeling problem is then solved by a filter-based algorithm to quickly approximate the optimal solution. The major advantage of the proposed method is that it is insensitive to occlusion, noise, and aliasing, and has no requirement on depth range or angular resolution. It therefore can be used in various light field images, especially in plenoptic camera images. Experimental results demonstrate that the proposed method outperforms state-of-the-art depth estimation methods on light field images, including both real world images and synthetic images, especially near occlusion boundaries.
Robust depth estimation for light field via spinning parallelogram operator
S1077314215002726
Vision tasks are complicated by the nonuniform apparent motion associated with dynamic cameras in complex 3D environments. We present a framework for light field cameras that simplifies dynamic-camera problems, allowing stationary-camera approaches to be applied. No depth estimation or scene modelling is required – apparent motion is disregarded by exploiting the scene geometry implicitly encoded by the light field. We demonstrate the strength of this framework by applying it to change detection from a moving camera, arriving at the surprising and useful result that change detection can be carried out with a closed-form solution. Its constant runtime, low computational requirements, predictable behaviour, and ease of parallel implementation in hardware including FPGA and GPU make this solution desirable in embedded applications, e.g. robotics. We show qualitative and quantitative results for imagery captured using two generations of Lytro camera, with the proposed method generally outperforming both naive pixel-based methods and, for a commonly-occurring class of scene, state-of-the-art structure from motion methods. We quantify the tradeoffs between tolerance to camera motion and sensitivity to change, and the impact of coherent, widespread scene changes. Finally, we discuss generalization of the proposed framework beyond change detection, allowing classically still-camera-only methods to be applied in moving-camera scenarios.
Simple change detection from mobile light field cameras
S107731421500274X
Monocular plenoptic cameras are slightly modified, off-the-shelf cameras that have novel capabilities as they allow for truly passive, high-resolution range sensing through a single camera lens. Commercial plenoptic cameras, however, are presently delivering range data in non-metric units, which is a barrier to novel applications e.g. in the realm of robotics. In this work we revisit the calibration of focused plenoptic cameras and bring forward a novel approach that leverages traditional methods for camera calibration in order to deskill the calibration procedure and to increase accuracy. First, we detach the estimation of parameters related to either brightness images or depth data. Second, we present novel initialization methods for the parameters of the thin lens camera model—the only information required for calibration is now the size of the pixel element and the geometry of the calibration plate. The accuracy of the calibration results corroborates our belief that monocular plenoptic imaging is a disruptive technology that is capable of conquering new markets as well as traditional imaging domains.
Stepwise calibration of focused plenoptic cameras
S1077314216000138
Smart environments and monitoring systems are popular research areas nowadays due to their potential to enhance the quality of life. Applications such as human behavior analysis and workspace ergonomics monitoring are automated, thereby improving the well-being of individuals at minimal running cost. The central problem of smart environments is to understand what the user is doing in order to provide the appropriate support. While it was difficult to obtain full-body movement information in the past, depth camera based motion sensing technology such as Kinect has made it possible to obtain 3D posture without complex setup. This has spurred a large number of research projects to apply Kinect in smart environments. The common bottleneck of this research is the high number of errors in the detected joint positions, which can result in inaccurate analysis and false alarms. In this paper, we propose a framework that accurately classifies the nature of the 3D postures obtained by Kinect using a max-margin classifier. Different from previous work in the area, we integrate the information about the reliability of the tracked joints in order to enhance the accuracy and robustness of our framework. As a result, apart from generally classifying activities in different movement contexts, our proposed method can classify the subtle differences between correctly performed and incorrectly performed movements in the same context. We demonstrate how our framework can be applied to evaluate the user’s posture and identify the postures that may result in musculoskeletal disorders. Such a system can be used in workplaces such as offices and factories to reduce the risk of injury. Experimental results have shown that our method consistently outperforms existing algorithms in both activity classification and posture healthiness classification. Due to the low cost and the easy deployment process of depth camera based motion sensors, our framework can be applied widely in homes and offices to facilitate smart environments.
Improving posture classification accuracy for depth sensor-based human activity monitoring in smart environments
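A minimal sketch of the central idea in the entry above, under the assumption that per-joint tracking confidences are available from the depth sensor: each joint's coordinates are down-weighted by its reliability before a max-margin (linear SVM) posture classifier is trained. The feature layout, the weighting scheme and the synthetic data are illustrative, not the paper's model.

```python
# Minimal sketch: scale each joint's 3D coordinates by a tracking-reliability
# weight, then train a max-margin posture classifier on the flattened features.
import numpy as np
from sklearn.svm import LinearSVC

def reliability_weighted_features(joints, confidences):
    """joints: (N, J, 3) joint positions; confidences: (N, J) in [0, 1]."""
    weighted = joints * confidences[..., None]      # down-weight unreliable joints
    return weighted.reshape(len(joints), -1)        # flatten to (N, J*3)

# Hypothetical training data: N frames, J tracked joints, integer posture labels y.
rng = np.random.default_rng(0)
N, J = 500, 20
joints = rng.normal(size=(N, J, 3))
confidences = rng.uniform(0.3, 1.0, size=(N, J))
y = rng.integers(0, 4, size=N)

X = reliability_weighted_features(joints, confidences)
clf = LinearSVC(C=1.0).fit(X, y)                    # max-margin posture classifier
print("training accuracy:", clf.score(X, y))
```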
S107731421600014X
Recognizing scene text is a challenging problem, even more so than the recognition of scanned documents. This problem has gained significant attention from the computer vision community in recent years, and several methods based on energy minimization frameworks and deep learning approaches have been proposed. In this work, we focus on the energy minimization framework and propose a model that exploits both bottom-up and top-down cues for recognizing cropped words extracted from street images. The bottom-up cues are derived from individual character detections from an image. We build a conditional random field model on these detections to jointly model the strength of the detections and the interactions between them. These interactions are top-down cues obtained from a lexicon-based prior, i.e., language statistics. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We evaluate our proposed algorithm extensively on a number of cropped scene text benchmark datasets, namely Street View Text, ICDAR 2003, 2011 and 2013 datasets, and IIIT 5K-word, and show better performance than comparable methods. We perform a rigorous analysis of all the steps in our approach and analyze the results. We also show that state-of-the-art convolutional neural network features can be integrated in our framework to further improve the recognition performance.
Enhancing energy minimization framework for scene text recognition with top-down cues
S1077314216000151
In this paper we propose a strategy for semi-supervised image classification that leverages unsupervised representation learning and co-training. The strategy, called CURL after co-training and unsupervised representation learning, iteratively builds two classifiers on two different views of the data. The two views correspond to different representations learned from both labeled and unlabeled data and differ in the fusion scheme used to combine the image features. To assess the performance of our proposal, we conducted several experiments on widely used data sets for scene and object recognition. We considered three scenarios (inductive, transductive and self-taught learning) that differ in the strategy followed to exploit the unlabeled data. As image features we considered a combination of GIST, PHOG, and LBP as well as features extracted from a Convolutional Neural Network. Moreover, two embodiments of CURL are investigated: one using Ensemble Projection as unsupervised representation learning coupled with Logistic Regression, and one based on LapSVM. The results show that CURL clearly outperforms other supervised and semi-supervised learning methods in the state of the art.
CURL: Image Classification using co-training and Unsupervised Representation Learning
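The following is a generic co-training loop in the spirit of the entry above, not the exact CURL procedure: two logistic-regression classifiers, one per feature view, are trained on the labeled pool, and in each round the most confident predictions on unlabeled samples are pseudo-labeled and added to the shared training set. The view construction, the confidence criterion and the per-round budget are assumptions.

```python
# Generic co-training sketch: two views of the same samples, two classifiers
# that pseudo-label confident unlabeled samples for the shared training pool.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pick_confident(clf, X, candidates, k):
    """Return (indices, pseudo-labels) of the k most confident candidates under clf."""
    proba = clf.predict_proba(X[candidates])
    top = np.argsort(-proba.max(axis=1))[:k]
    return candidates[top], clf.classes_[proba[top].argmax(axis=1)]

def co_train(Xl_v1, Xl_v2, y, Xu_v1, Xu_v2, rounds=5, per_round=20):
    """Xl_*: labeled data in views 1/2; y: labels; Xu_*: unlabeled data in views 1/2."""
    X1, X2, yl = Xl_v1.copy(), Xl_v2.copy(), y.copy()
    unlabeled = np.arange(len(Xu_v1))
    clf1 = LogisticRegression(max_iter=1000)
    clf2 = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf1.fit(X1, yl)
        clf2.fit(X2, yl)
        if len(unlabeled) == 0:
            break
        # Each view pseudo-labels its own most confident unlabeled samples.
        idx1, lab1 = pick_confident(clf1, Xu_v1, unlabeled, per_round)
        remaining = np.setdiff1d(unlabeled, idx1)
        if len(remaining) > 0:
            idx2, lab2 = pick_confident(clf2, Xu_v2, remaining, per_round)
        else:
            idx2, lab2 = remaining, np.array([], dtype=yl.dtype)
        picked = np.concatenate([idx1, idx2])
        pseudo = np.concatenate([lab1, lab2])
        # Pseudo-labeled samples join the labeled pool of both views.
        X1 = np.vstack([X1, Xu_v1[picked]])
        X2 = np.vstack([X2, Xu_v2[picked]])
        yl = np.concatenate([yl, pseudo])
        unlabeled = np.setdiff1d(unlabeled, picked)
    return clf1, clf2
```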
S1077314216000175
Scene parsing, using both images and range data, is one of the key problems in computer vision and robotics. In this paper, a street scene parsing scheme that takes advantage of images from perspective cameras and range data from LiDAR is presented. First, pre-processing on the image set is performed and the corresponding point cloud is segmented according to semantics and transformed into an image pose. A graph matching approach is introduced into our parsing framework, in order to identify similar sub-regions from training and test images in terms of both local appearance and spatial structure. By using the sub-graphs inherited from training images, as well as the cues obtained from point clouds, this approach can effectively interpret the street scene via a guided MRF inference. Experimental results show a promising performance of our approach.
Scene parsing using graph matching on street-view data
S1077314216000187
Shape matching and retrieval have been some of the fundamental topics in computer vision. Object shape is a meaningful and informative cue in object recognition, where an effective shape descriptor plays an important role. To capture the invariant features of both local shape details and visual parts, we propose a novel invariant multi-scale descriptor for shape matching and retrieval. In this work, we define three types of invariants to capture the shape features from different aspects. Each type of the invariants is used in multiple scales from a local range to a semi-global part. An adaptive discrete contour evolution method is also proposed to extract the salient feature points of a shape contour for compact representation. Shape matching is performed using the dynamic programming algorithm. The proposed method is invariant to rotation, scale variation, intra-class variation, articulated deformation and partial occlusion. Our method is robust to noise as well. To validate the invariance and robustness of our proposed method, we perform experiments on multiple benchmark datasets, including MPEG-7, Kimia and articulated shape datasets. The competitive results indicate the effectiveness of our proposed method for shape matching and retrieval.
Invariant multi-scale descriptor for shape representation, matching and retrieval
S1077314216000448
Line triangulation, as a classical problem in computer vision, is to determine the 3D coordinates of a line based on its 2D image projections from more than two views of cameras. Classical approaches for line triangulation are based on algebraic errors, which do not have any geometrical meaning. In addition, an effective metric to evaluate 3D errors of line triangulation is not available in the literature. In this paper, a comprehensive study of line triangulation is conducted using geometric cost functions. Compared to the algebraic error based approaches, a geometric error based algorithm is more meaningful, and thus, yields better estimation results. The main contributions of this study include: (i) it is proved that the optimal solution to minimizing the geometric errors can be transformed into finding the real roots of algebraic equations; (ii) an effective iterative algorithm, ITEg, is proposed to minimize the geometric errors; and (iii) an in-depth comparative evaluation of three metrics in 3D line space, the Euclidean metric, the orthogonal metric, and the quasi-Riemannian metric, is carried out. Extensive experiments on synthetic data and real images are carried out to validate and demonstrate the effectiveness of the proposed algorithms.
Triangulation and metric of lines based on geometric error
S1077314216000485
Large appearance changes in visual tracking affect the tracking performance severely. To address this challenge, in this paper we develop an effective appearance model with highly discriminative features. We propose an online Fisher discrimination boosting feature selection mechanism, which selects features that reduce the within-class scatter while enlarging the between-class scatter, thereby enhancing the discriminative capability between the target and background. Moreover, we utilize a particle filtering framework for visual tracking, in which the weights of candidate particles take into account the context information around the particles, thereby enhancing the robustness of tracking. In order to increase efficiency, a coarse-to-fine search strategy is exploited to efficiently and accurately locate the target. Extensive experiments on the CVPR2013 tracking benchmark demonstrate the competitive performance of our algorithm over other representative algorithms in terms of accuracy and robustness.
Robust object tracking by online Fisher discrimination boosting feature selection
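A minimal sketch of the selection criterion named above, with the online boosting and particle-filter machinery omitted: each candidate feature is scored by the ratio of between-class scatter to within-class scatter computed from target and background samples, and the top-scoring features are kept. The scatter estimates and the number of selected features are illustrative.

```python
# Minimal sketch: rank candidate features by a per-feature Fisher ratio
# (between-class scatter over within-class scatter) and keep the best ones.
import numpy as np

def fisher_scores(X_target, X_background, eps=1e-12):
    """X_target: (Nt, D) feature responses on target samples; X_background: (Nb, D)."""
    mu_t, mu_b = X_target.mean(axis=0), X_background.mean(axis=0)
    var_t, var_b = X_target.var(axis=0), X_background.var(axis=0)
    between = (mu_t - mu_b) ** 2          # between-class scatter per feature
    within = var_t + var_b                # within-class scatter per feature
    return between / (within + eps)

def select_features(X_target, X_background, k=50):
    scores = fisher_scores(X_target, X_background)
    return np.argsort(-scores)[:k]        # indices of the k most discriminative features
```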
S1077314216000497
The existing cosegmentation methods focus on exploiting inter-image information to extract a common object from a single image group. Observing that in many practical scenarios there often exist multiple image groups with distinct characteristics but related to the same common object, in this paper we propose a multi-group image cosegmentation framework, which not only discovers inter-image information within each image group, but also transfers inter-group information among different image groups so as to produce more accurate object priors. Particularly, the multi-group cosegmentation task is formulated as an energy minimization problem, where we employ the Markov random field (MRF) segmentation model and the dense correspondence model in the model design and adapt the Expectation-Maximization (EM) algorithm to solve the optimization. We apply the proposed framework on three practical scenarios including image complexity based cosegmentation, multiple training group cosegmentation and multiple noise image group cosegmentation. Experimental results on four benchmark datasets demonstrate that the proposed multi-group image cosegmentation framework is able to discover more accurate object priors and outperform state-of-the-art single-group image cosegmentation methods.
Cosegmentation of multiple image groups
S1077314216000503
This paper introduces a fast algorithm for randomized computation of a low-rank Dynamic Mode Decomposition (DMD) of a matrix. Here we consider this matrix to represent the development of a spatial grid through time e.g. data from a static video source. DMD was originally introduced in the fluid mechanics community, but is also suitable for motion detection in video streams and its use for background subtraction has received little previous investigation. In this study we present a comprehensive evaluation of background subtraction, using the randomized DMD and compare the results with leading robust principal component analysis algorithms. The results are convincing and show the random DMD is an efficient and powerful approach for background modeling, allowing processing of high resolution videos in real-time. Supplementary materials include implementations of the algorithms in Python.
Randomized low-rank Dynamic Mode Decomposition for motion detection
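Since the entry above mentions Python implementations in its supplementary materials, a compact Python sketch of the core idea may be useful: a randomized low-rank DMD of the video matrix (frames as columns) whose near-unit-eigenvalue modes reconstruct the background, with the foreground obtained as the residual. This is a simplified reading of the method, not the released code; the rank, the eigenvalue threshold and the amplitude fit are assumptions.

```python
# Sketch of background modeling with a randomized low-rank DMD: frames are the
# columns of X, background modes are those with eigenvalues near 1 (slowly
# varying content), and the foreground is the residual |X - background|.
import numpy as np
from sklearn.utils.extmath import randomized_svd

def dmd_background(X, rank=10, eig_tol=1e-2):
    """X: (n_pixels, n_frames) video matrix; returns a background estimate of the same shape."""
    X1, X2 = X[:, :-1], X[:, 1:]

    # Randomized rank-r SVD of the snapshot matrix X1.
    U, S, Vt = randomized_svd(X1, n_components=rank, random_state=0)
    # Low-rank approximation of the propagator A in X2 ~ A X1, and its eigenpairs.
    Atilde = U.T @ X2 @ Vt.T @ np.diag(1.0 / S)
    eigvals, W = np.linalg.eig(Atilde)
    eigvals, W = eigvals.astype(complex), W.astype(complex)
    Phi = X2 @ Vt.T @ np.diag(1.0 / S) @ W           # DMD modes, (n_pixels, rank)

    # Background modes: eigenvalues near 1, i.e. near-zero temporal frequency.
    bg = np.abs(np.log(eigvals)) < eig_tol
    b = np.linalg.lstsq(Phi, X[:, 0].astype(complex), rcond=None)[0]  # mode amplitudes

    t = np.arange(X.shape[1])
    dynamics = b[bg, None] * eigvals[bg, None] ** t[None, :]
    return (Phi[:, bg] @ dynamics).real              # background; foreground = |X - background|
```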
S1077314216000515
The performance of depth reconstruction in binocular stereo relies on how adequate the predefined baseline for a target scene is. Wide-baseline stereo is capable of discriminating depth better than the narrow-baseline stereo, but it often suffers from spatial artifacts. Narrow-baseline stereo can provide a more elaborate depth map with fewer artifacts, while its depth resolution tends to be biased or coarse due to the short disparity. In this paper, we propose a novel optical design of heterogeneous stereo fusion on a binocular imaging system with a refractive medium, where the binocular stereo part operates as wide-baseline stereo, and the refractive stereo module works as narrow-baseline stereo. We then introduce a stereo fusion workflow that combines the refractive and binocular stereo algorithms to estimate fine depth information through this fusion design. In addition, we propose an efficient calibration method for refractive stereo. The quantitative and qualitative results validate the performance of our stereo fusion system in measuring depth in comparison with homogeneous stereo approaches.
Stereo fusion: Combining refractive and binocular disparity
S1077314216000527
We propose a system for analyzing the structure of a web page based on purely visual information, rather than on implementation details. This is advantageous because regardless of the complexity of the underlying implementation, the web page is designed to be easily interpreted visually. Our method produces a hierarchical segmentation reflecting the visual structure of the rendered page. This rich information about the presentation of the web page can be used by other systems which produce alternate presentations more suitable for users with visual or cognitive disabilities.
Purely vision-based segmentation of web pages for assistive technology
S1077314216000540
Extracting moving objects from a video sequence and estimating the background of each individual image are fundamental issues in many practical applications such as visual surveillance, intelligent vehicle navigation, and traffic monitoring. Recently, some methods have been proposed to detect moving objects in a video via low-rank approximation and sparse outliers where the background is modeled with the computed low-rank component of the video and the foreground objects are detected as the sparse outliers in the low-rank approximation. Many of these existing methods work in a batch manner, preventing them from being applied in real time and long duration tasks. To address this issue, some online methods have been proposed; however, existing online methods fail to provide satisfactory results under challenging conditions such as dynamic background scene and noisy environments. In this paper, we present an online sequential framework, namely contiguous outliers representation via online low-rank approximation (COROLA), to detect moving objects and learn the background model at the same time. We also show that our model can detect moving objects with a moving camera. Our experimental evaluation uses simulated data and real public datasets to demonstrate the superior performance of COROLA to the existing batch and online methods in terms of both accuracy and efficiency.
COROLA: A sequential solution to moving object detection using low-rank approximation
S1077314216000655
In the recent years, computer vision has been undergoing a period of great development, testified by the many successful applications that are currently available in a variety of industrial products. Yet, when we come to the most challenging and foundational problem of building autonomous agents capable of performing scene understanding in unrestricted videos, there is still a lot to be done. In this paper we focus on semantic labeling of video streams, in which a set of semantic classes must be predicted for each pixel of the video. We propose to attack the problem from bottom to top, by introducing Developmental Visual Agents (DVAs) as general purpose visual systems that can progressively acquire visual skills from video data and experience, by continuously interacting with the environment and following lifelong learning principles. DVAs gradually develop a hierarchy of architectural stages, from unsupervised feature extraction to the symbolic level, where supervisions are provided by external users, pixel-wise. Differently from classic machine learning algorithms applied to computer vision, which typically employ huge datasets of fully labeled images to perform recognition tasks, DVAs can exploit even a few supervisions per semantic category, by enforcing coherence constraints based on motion estimation. Experiments on different vision tasks, performed on a variety of heterogeneous visual worlds, confirm the great potential of the proposed approach.
Semantic video labeling by developmental visual agents
S1077314216000692
Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from limited resolution and dynamic range of induced visual percepts. This can make navigating complex environments difficult for users. We introduce semantic labeling as a technique to improve navigation outcomes for prosthetic vision users. We produce a novel egocentric vision dataset to demonstrate how semantic labeling can be applied to this problem. We also improve the speed of semantic labeling with sparse computation of unary potentials, enabling its use in real-time wearable assistive devices. We use simulated prosthetic vision to demonstrate the results of our technique. Our approach allows a prosthetic vision system to selectively highlight specific classes of objects in the user’s field of view, improving the user’s situational awareness.
Semantic labeling for prosthetic vision
S1077314216000710
We address the task of estimating large-scale land surface conditions using overhead aerial (macro-level) images and street view (micro-level) images. These two types of images are captured from orthogonal viewpoints and have different resolutions, thus conveying very different types of information that can be used in a complementary way. Moreover, their integration is necessary to enable an accurate understanding of changes in natural phenomena over massive city-scale landscapes. The key technical challenge is devising a method to integrate these two disparate types of image data in an effective manner, to leverage the wide coverage capabilities of macro-level images and detailed resolution of micro-level images. The strategy proposed in this work uses macro-level imaging to learn the extent to which the land condition corresponds between land regions that share similar visual characteristics (e.g., mountains, streets, buildings, rivers), whereas micro-level images are used to acquire high resolution statistics of land conditions (e.g., the amount of debris on the ground). By combining macro- and micro-level information about regional correspondences and surface conditions, our proposed method is capable of generating detailed estimates of land surface conditions over an entire city.
Hybrid macro–micro visual analysis for city-scale state estimation
S1077314216000722
Simple linear iterative clustering (SLIC) that partitions an image into multiple homogeneous regions, superpixels, has been widely used as a preprocessing step in various image processing and computer vision applications due to its outstanding performance in terms of speed and accuracy. However, determining a segment that each pixel belongs to still requires tedious, iterative computation, which hinders real-time execution of SLIC. In this paper, we propose an accelerated SLIC superpixel segmentation algorithm where the number of candidate segments for each pixel is reduced effectively by exploiting high spatial redundancy within natural images. Because all candidate segments should be inspected in order to choose the best one, candidate reduction significantly improves computational efficiency. Various characteristics of the proposed acceleration algorithm are investigated. The experimental results confirmed that the proposed superpixel segmentation algorithm runs up to about five times as fast as SLIC while producing almost the same superpixel segmentation performance, sometimes better than SLIC, with respect to under-segmentation error and boundary recall.
Subsampling-based acceleration of simple linear iterative clustering for superpixel segmentation
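A rough single-pass sketch of the candidate-reduction idea described above, assuming cluster centers are already available (e.g. from grid initialization or a previous SLIC iteration): only a subsampled grid of pixels performs the full 2S-window search over centers, and every other pixel chooses among the labels already assigned to its subsampled neighbors. The step size, the distance weighting and the float CIELAB input convention are assumptions, and the center-update step of SLIC is omitted.

```python
# Minimal sketch of one SLIC assignment pass with candidate reduction.
import numpy as np

def slic_distance(lab, xy, centers_lab, centers_xy, S, m=10.0):
    """Combined color + spatial SLIC distance to a set of candidate centers."""
    d_lab = np.linalg.norm(lab - centers_lab, axis=-1)
    d_xy = np.linalg.norm(xy - centers_xy, axis=-1)
    return d_lab + (m / S) * d_xy

def assign_with_candidate_reduction(image_lab, centers_lab, centers_xy, S, step=2):
    """image_lab: (H, W, 3) float CIELAB image; centers_xy given as (y, x) rows."""
    H, W, _ = image_lab.shape
    labels = -np.ones((H, W), dtype=int)

    # 1) Full local search (all centers within a 2S window), only on a subsampled grid.
    for y in range(0, H, step):
        for x in range(0, W, step):
            near = np.where((np.abs(centers_xy[:, 0] - y) <= 2 * S) &
                            (np.abs(centers_xy[:, 1] - x) <= 2 * S))[0]
            if len(near) == 0:
                continue
            d = slic_distance(image_lab[y, x], np.array([y, x]),
                              centers_lab[near], centers_xy[near], S)
            labels[y, x] = near[np.argmin(d)]

    # 2) Remaining pixels: candidates are only the labels of nearby subsampled pixels.
    for y in range(H):
        for x in range(W):
            if labels[y, x] >= 0:
                continue
            y0, x0 = (y // step) * step, (x // step) * step
            cand = {labels[min(yy, H - 1), min(xx, W - 1)]
                    for yy in (y0, y0 + step) for xx in (x0, x0 + step)}
            cand = np.array([c for c in cand if c >= 0])
            if len(cand) == 0:
                continue
            d = slic_distance(image_lab[y, x], np.array([y, x]),
                              centers_lab[cand], centers_xy[cand], S)
            labels[y, x] = cand[np.argmin(d)]
    return labels
```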
S1077314216000746
In murky water, the light interaction with the medium particles results in a complex image formation model that is hard to use effectively with a shape estimation framework like Photometric Stereo. All previous approaches have resorted to necessary model simplifications that were, however, applied arbitrarily, without describing how their validity can be estimated in an unknown underwater situation. In this work, we evaluate the effectiveness of such simplified models and we show that this varies strongly with the imaging conditions. For this reason, we propose a novel framework that can predict the effectiveness of a photometric model when the scene is unknown. To achieve this we use a dynamic lighting framework where a robotic platform is able to probe the scene with varying light positions, and the respective change in estimated surface normals serves as a faithful proxy of the true reconstruction error. This creates important benefits over traditional Photometric Stereo frameworks, as our system can adapt some critical factors to an underwater scenario, such as the camera-scene distance and the light position or the photometric model, in order to minimize the reconstruction error. Our work is evaluated through both numerical simulations and real experiments for different distances, underwater visibilities and light source baselines.
Model effectiveness prediction and system adaptation for photometric stereo in murky water
S1077314216000758
In this paper, we present a multi-modal perception based framework to realize a non-intrusive domestic assistive robotic system. It is non-intrusive in that it only starts interaction with a user when it detects the user’s intention to do so. All the robot’s actions are based on multi-modal perceptions which include user detection based on RGB-D data, user’s intention-for-interaction detection with RGB-D and audio data, and communication via user distance mediated speech recognition. The utilization of multi-modal cues in different parts of the robotic activity paves the way to successful robotic runs (94% success rate). Each presented perceptual component is systematically evaluated using appropriate dataset and evaluation metrics. Finally the complete system is fully integrated on the PR2 robotic platform and validated through system sanity check runs and user studies with the help of 17 volunteer elderly participants.
A multi-modal perception based assistive robotic system for the elderly
S107731421600076X
Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures such as waving or nodding which are recognized using a Dynamic Time Warping approach based on gesture specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may refer to. When the pointed object is unclear for the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill allows the robot to pick up an object on behalf of the user, who may have difficulty doing so unaided. The overall system — which is composed of a NAO robot, a Wifibot robot, a Kinect v2 sensor and two laptops — is first evaluated in a structured lab setup. Then, a broad set of user tests has been completed, which allows correct performance to be assessed in terms of recognition rates, easiness of use and response times.
A real-time Human-Robot Interaction system based on gestures for assistive scenarios
S1077314216000771
Detection of running behavior, a specific anomaly relative to common walking, plays a critical role in practical surveillance systems. However, only a few works focus on this particular field, and the lack of a consistent benchmark of reasonable size limits persuasive evaluation and comparison. In this paper, for the first time, we propose a standard benchmark database for human running detection, with diverse scenes and ground truth, and introduce several criteria for performance evaluation. In addition, a baseline running detection algorithm is presented and extensively evaluated on the proposed benchmark qualitatively and quantitatively. The main purpose of this paper is to lay the foundation for further research in the human running detection domain, by making experimental evaluation more standardized and easily accessible. All the benchmark videos with ground truth and source code will be made publicly available online.
Human running detection: Benchmark and baseline
S1077314216300017
Fully-automated segmentation algorithms offer fast, objective, and reproducible results for large data collections. However, these techniques cannot handle tasks that require contextual knowledge not readily available in the images alone. Thus, the supervision of an expert is necessary. We present a generative model for image segmentation, based on Bayesian inference. Not only does our approach support an intuitive and convenient user interaction subject to the bottom-up constraints introduced by the image intensities, it also circumvents the main limitations of a human observer—3D visualization and modality fusion. The user’s “dialogue” with the segmentation algorithm, via several mouse clicks in regions of disagreement, is formulated as a continuous probability map that represents the user’s certainty as to whether the current segmentation should be modified. Considering this probability map as the voxel-wise Bernoulli priors on the image labels allows spatial encoding of the user-provided input. The method is exemplified for the segmentation of cerebral hemorrhages (CH) in human brain CT scans; ventricles in degenerative mouse brain MRIs, and tumors in multi-modal human brain MRIs and is shown to outperform three interactive, state-of-the-art segmentation methods in terms of accuracy, efficiency and user workload.
Probabilistic model for 3D interactive segmentation
S1077314216300029
Quantification of the thigh inter-muscular adipose tissue (IMAT) plays a critical role in various medical data analysis tasks, e.g., the analysis of physical performance or the diagnosis of knee osteoarthritis. Several techniques have been proposed to perform automated thigh tissue quantification. However, none of them has provided an effective method to track the fascia lata, which is an important anatomical trait for distinguishing between subcutaneous adipose tissue (SAT) and IMAT in the thigh. As a result, the estimates of IMAT may not be accurate due to the unclear appearance cues and complicated anatomical or pathological characteristics of the fascia lata. Thus, prior tissue information, e.g., intensity, orientation and scale, becomes critical to infer the fascia lata location from magnetic resonance (MR) images. In this paper, we propose a novel detection-driven and sparsity-constrained deformable model to obtain accurate fascia lata labeling. The model’s deformation is driven by the detected control points on the fascia lata through a discriminative detector in a narrow-band fashion. By using sparsity-constrained optimization, the deformation is solved while suppressing errors and outliers. The proposed approach has been evaluated on a set of 3D MR thigh volumes. In a comparison with the state-of-the-art framework, our approach produces superior performance.
A detection-driven and sparsity-constrained deformable model for fascia lata labeling and thigh inter-muscular adipose quantification
S1077314216300030
Toothbrushing training is a complex task that is not fun for the child, the parents or the dental staff. Parents and hygienists often report that they are frustrated by poor responses to the training, and in most cases children go home and resume incorrect brushing habits, if any. In this paper we present a novel approach where the tooth brushing procedure can become a fun and enjoyable task for kids using a cheap toothbrush accessory and a tablet or a smartphone. The main idea is to apply a simple and cheap 3D colored target at the end of the toothbrush and to track and analyze its motion, imparted by the child. In particular, from the tablet camera it is possible to track both the toothbrush target and the child’s facial parts in order to estimate the brushed dental side. The proposed approach has been tested on seven children, showing good results in both propensity and accuracy after a 20-day period.
Toothbrush motion analysis to help children learn proper tooth brushing
S1077314216300042
We present a fast and online human-robot interaction approach that progressively learns multiple object classifiers using scanty human supervision. Given an input video stream recorded during the human-robot interaction, the user just needs to annotate a small fraction of frames to compute object specific classifiers based on random ferns which share the same features. The resulting methodology is fast (in a few seconds, complex object appearances can be learned), versatile (it can be applied to unconstrained scenarios), scalable (real experiments show we can model up to 30 different object classes), and minimizes the amount of human intervention by leveraging the uncertainty measures associated to each classifier. We thoroughly validate the approach on synthetic data and on real sequences acquired with a mobile platform in indoor and outdoor scenarios containing a multitude of different objects. We show that with little human assistance, we are able to build object classifiers robust to viewpoint changes, partial occlusions, varying lighting and cluttered backgrounds.
Interactive multiple object learning with scanty human supervision
S1077314216300054
A new eye blink detection algorithm is proposed. Motion vectors obtained by the Gunnar–Farneback tracker in the eye region are analyzed using a state machine for each eye. The normalized average motion vector, together with its standard deviation and a time constraint, is the input to the state machine. Motion vectors are normalized by the intraocular distance to achieve invariance to the eye region size. The proposed method outperforms related work on the majority of available datasets. We extend the methodology for evaluating eye blink detection algorithms so that it is not affected by the algorithms used for face and eye detection. We also introduce a new challenging dataset, Researcher’s night, which contains more than 100 unique individuals with 1849 annotated eye blinks. It is currently the largest dataset available.
Eye blink detection based on motion vectors analysis
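A minimal sketch of the detection logic described above: the average vertical motion in the eye region, normalized by the intraocular distance, drives a small per-eye state machine with a time constraint. The thresholds, state names and sign convention are illustrative, and the dense flow itself (e.g. Farneback) is assumed to be computed elsewhere.

```python
# Minimal per-eye state machine: normalized average vertical motion triggers
# "closing"; a subsequent upward motion within the time constraint counts as a blink.
import numpy as np

class BlinkStateMachine:
    def __init__(self, thresh=0.04, max_closed_frames=15):
        self.state = "open"
        self.closed_frames = 0
        self.thresh = thresh
        self.max_closed_frames = max_closed_frames

    def update(self, flow_eye, intraocular_dist):
        """flow_eye: (H, W, 2) dense flow in the eye ROI. Returns True when a blink completes."""
        # Normalized average vertical motion (positive assumed downward, i.e. eyelid closing).
        v = flow_eye[..., 1].mean() / intraocular_dist
        blink = False
        if self.state == "open" and v > self.thresh:
            self.state, self.closed_frames = "closing", 0
        elif self.state == "closing":
            self.closed_frames += 1
            if v < -self.thresh:                      # upward motion: eyelid re-opening
                self.state, blink = "open", True
            elif self.closed_frames > self.max_closed_frames:
                self.state = "open"                   # time constraint violated, discard
        return blink
```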
S1077314216300066
This paper presents a multiple-sensor, 3D vision-based, autonomous wheelchair-mounted robotic manipulator (WMRM). Two 3D sensors were employed: one for object recognition, and the other for recognizing body parts (face and hands). The goal is to recognize everyday items and automatically interact with them in an assistive fashion. For example, when a cereal box is recognized, it is grasped, poured into a bowl, and brought to the user. Daily objects (e.g. bowl and hat) were automatically detected and classified using a three-step procedure: (1) remove the background based on 3D information and find the point cloud of each object; (2) extract feature vectors for each segmented object from its 3D point cloud and its color image; and (3) classify the feature vectors as objects by applying a nonlinear support vector machine (SVM). To retrieve specific objects, three user interface methods were adopted: voice-based, gesture-based, and hybrid commands. The presented system was tested using two common activities of daily living: feeding and dressing. The results revealed that an accuracy of 98.96% was achieved for a dataset with twelve daily objects. The experimental results indicated that hybrid (gesture and speech) interaction outperforms any single-modality interaction.
Enhanced control of a wheelchair-mounted robotic manipulator using 3-D vision and multimodal interaction
S1077314216300078
The identification of visual cues in facial images has been widely explored in the broad area of computer vision. However, theoretical analyses are often not transformed into widespread assistive Human-Computer Interaction (HCI) systems, due to factors such as inconsistent robustness, low efficiency, large computational expense or strong dependence on complex hardware. We present a novel gender recognition algorithm, a modular eye centre localisation approach and a gaze gesture recognition method, aiming to increase the intelligence, adaptability and interactivity of HCI systems by combining demographic data (gender) and behavioural data (gaze) to enable development of a range of real-world assistive-technology applications. The gender recognition algorithm utilises Fisher Vectors as facial features, which are encoded from low-level local features in facial images. We experimented with four types of low-level features: greyscale values, Local Binary Patterns (LBP), LBP histograms and Scale Invariant Feature Transform (SIFT). The corresponding Fisher Vectors were classified using a linear Support Vector Machine. The algorithm has been tested on the FERET database, the LFW database and the FRGCv2 database, yielding 97.7%, 92.5% and 96.7% accuracy respectively. The eye centre localisation algorithm has a modular approach, following a coarse-to-fine, global-to-regional scheme and utilising isophote and gradient features. A Selective Oriented Gradient filter has been specifically designed to detect and remove strong gradients from eyebrows, eye corners and self-shadows (which sabotage most eye centre localisation methods). The trajectories of the eye centres are then defined as gaze gestures for active HCI. The eye centre localisation algorithm has been compared with 10 other state-of-the-art algorithms with similar functionality and has outperformed them in terms of accuracy while maintaining excellent real-time performance. The above methods have been employed to develop a data recovery system that can be used to implement advanced assistive technology tools. The high accuracy, reliability and real-time performance achieved for attention monitoring, gaze gesture control and recovery of demographic data can enable the advanced human-robot interaction that is needed for developing systems that can provide assistance with everyday actions, thereby improving the quality of life for the elderly and/or disabled.
Gender and gaze gesture recognition for human-computer interaction
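A simplified sketch of the gender pipeline described above, assuming local descriptors per face are already extracted: a diagonal-covariance GMM yields Fisher vectors (gradients with respect to the means only, a common simplification of the full encoding), which are classified with a linear SVM. The component count and normalizations follow common practice rather than the paper's exact settings.

```python
# Minimal Fisher-vector encoding (mean gradients only) plus a linear SVM.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fit_gmm(all_descriptors, n_components=64):
    """all_descriptors: (N, D) local features pooled from all training faces."""
    return GaussianMixture(n_components=n_components, covariance_type="diag",
                           random_state=0).fit(all_descriptors)

def fisher_vector(descriptors, gmm):
    """descriptors: (N, D) local features of one face; returns a (K*D,) Fisher vector."""
    gamma = gmm.predict_proba(descriptors)                       # (N, K) posteriors
    diff = descriptors[:, None, :] - gmm.means_[None, :, :]      # (N, K, D)
    grad_mu = (gamma[..., None] * diff / np.sqrt(gmm.covariances_)[None]).sum(axis=0)
    fv = (grad_mu / (len(descriptors) * np.sqrt(gmm.weights_)[:, None])).ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                       # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)                     # L2 normalization

# Usage sketch (faces_descriptors: list of (Ni, D) arrays; labels: 0/1 gender labels):
# gmm = fit_gmm(np.vstack(faces_descriptors))
# X = np.array([fisher_vector(d, gmm) for d in faces_descriptors])
# clf = LinearSVC().fit(X, labels)
```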
S107731421630008X
Background estimation in video consists of extracting a foreground-free image from a set of training frames. Moving and stationary objects may affect the background visibility, thus invalidating the assumption, made in much of the related literature, that the background is the temporally dominant data. In this paper, we present a temporal-spatial block-level approach for background estimation in video to cope with moving and stationary objects. First, a Temporal Analysis module obtains a compact representation of the training data by motion filtering and dimensionality reduction. Then, a threshold-free hierarchical clustering determines a set of candidates to represent the background for each spatial location (block). Second, a Spatial Analysis module iteratively reconstructs the background using these candidates. For each spatial location, multiple reconstruction hypotheses (paths) are explored to obtain its neighboring locations by enforcing inter-block similarities and intra-block homogeneity constraints in terms of color discontinuity, color dissimilarity and variability. The experimental results show that the proposed approach outperforms the related state-of-the-art on challenging video sequences in the presence of moving and stationary objects.
Rejection based multipath reconstruction for background estimation in video sequences with stationary objects
S1077314216300091
Video-based action recognition is one of the important and challenging problems in computer vision research. The bag of visual words (BoVW) model with local features has been very popular for a long time and has obtained state-of-the-art performance on several realistic datasets, such as the HMDB51, UCF50, and UCF101. BoVW is a general pipeline to construct a global representation from local features, which is mainly composed of five steps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook generation, (iv) feature encoding, and (v) pooling and normalization. Although many efforts have been made in each step independently in different scenarios, their effects on action recognition are still unknown. Meanwhile, video data exhibits different views of visual patterns, such as static appearance and motion dynamics. Multiple descriptors are usually extracted to represent these different views. Fusing these descriptors is crucial for boosting the final performance of an action recognition system. This paper aims to provide a comprehensive study of all steps in BoVW and different fusion methods, and uncover some good practices to produce a state-of-the-art action recognition system. Specifically, we explore two kinds of local features, ten kinds of encoding methods, eight kinds of pooling and normalization strategies, and three kinds of fusion methods. We conclude that every step is crucial for contributing to the final recognition rate and an improper choice in one of the steps may counteract the performance improvement of other steps. Furthermore, based on our comprehensive study, we propose a simple yet effective representation, called hybrid supervector, by exploring the complementarity of different BoVW frameworks with improved dense trajectories. Using this representation, we obtain impressive results on the three challenging datasets: HMDB51 (61.9%), UCF50 (92.3%), and UCF101 (87.9%).
Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice
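A minimal sketch of the five-step pipeline listed above, taking the simplest option at each step: local descriptors are assumed given (step i), PCA whitening serves as pre-processing (ii), k-means builds the codebook (iii), hard assignment performs the encoding (iv), and sum pooling with L2 normalization produces the global vector (v). Parameter values are illustrative; the paper compares many richer choices for each step.

```python
# Minimal BoVW pipeline: PCA whitening, k-means codebook, hard assignment,
# sum pooling and L2 normalization over one video's local descriptors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def train_bovw(train_descriptors, n_words=256, pca_dim=64):
    """train_descriptors: (N, D) local descriptors pooled from the training videos."""
    pca = PCA(n_components=pca_dim, whiten=True).fit(train_descriptors)   # (ii) pre-processing
    codebook = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(
        pca.transform(train_descriptors))                                  # (iii) codebook
    return pca, codebook

def encode_video(descriptors, pca, codebook):
    """descriptors: (M, D) local descriptors of one video; returns a global BoVW vector."""
    words = codebook.predict(pca.transform(descriptors))                   # (iv) hard assignment
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float) # (v) sum pooling
    return hist / (np.linalg.norm(hist) + 1e-12)                           # L2 normalization
```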
S1077314216300224
In recent years, personal health monitoring systems have been gaining popularity, both as a result of the pull from the general population, keen to improve well-being and early detection of possibly serious health conditions, and the push from industry, eager to translate the current significant progress in computer vision and machine learning into commercial products. One such system is the Wize Mirror, built as a result of the FP7 funded SEMEOTICONS (SEMEiotic Oriented Technology for Individuals CardiOmetabolic risk self-assessmeNt and Self-monitoring) project. The project aims to translate the semeiotic code of the human face into computational descriptors and measures, automatically extracted from videos, multispectral images, and 3D scans of the face. The multisensory platform developed as a result of that project, in the form of a smart mirror, looks for signs related to cardio-metabolic risks. The goal is to enable users to self-monitor their well-being status over time and improve their life-style via tailored user guidance. This paper focuses on the part of that system that utilises computer vision and machine learning techniques to perform 3D morphological analysis of the face and recognition of psycho-somatic status, both linked with cardio-metabolic risks. The paper describes the concepts, methods and developed implementations, and reports results obtained on both real and synthetic datasets.
Wize Mirror - a smart, multisensory cardio-metabolic risk monitoring system
S1077314216300236
Extracting 3D information from a moving camera is traditionally based on interest point detection and matching. This is especially challenging in urban indoor- and outdoor environments, where the number of distinctive interest points is naturally limited. While common Structure-from-Motion (SfM) approaches usually manage to obtain the correct camera poses, the number of accurate 3D points is very small due to the low number of matchable features. Subsequent Multi-view Stereo approaches may help to overcome this problem, but suffer from a high computational complexity. We propose a novel approach for the task of 3D scene abstraction, which uses straight line segments as underlying features. We use purely geometric constraints to match 2D line segments from different images, and formulate the reconstruction procedure as a graph-clustering problem. We show that our method generates accurate 3D models with low computational costs, which makes it especially useful for large-scale urban datasets.
Efficient 3D scene abstraction using line segments
S1077314216300248
In this paper, a novel wearable RGB-D camera based indoor navigation system for the visually impaired is presented. The system guides the visually impaired user from one location to another without a prior map or GPS information. Accurate real-time egomotion estimation, mapping, and path planning in the presence of obstacles are essential for such a system. We perform real-time 6-DOF egomotion estimation from a head-mounted RGB-D camera using sparse visual features, dense point clouds, and the ground plane to reduce drift. The system also builds a 2D probabilistic occupancy grid map for efficient traversability analysis, which is the basis for dynamic path planning and obstacle avoidance. The system can store and reload maps generated while traveling and continually expand the coverage area of navigation. Next, the shortest path between the start location and the destination is generated. The system generates a safe and efficient waypoint based on the traversability analysis result and the shortest path, and updates the waypoint as the user moves. Appropriate cues are generated and delivered to a tactile feedback system to guide the visually impaired user to the waypoint. The proposed wearable system prototype is composed of multiple modules including a head-mounted RGB-D camera, a standard laptop that runs the navigation software, a smartphone user interface, and a haptic feedback vest. The proposed system achieves real-time navigation performance at 28.6 Hz on average on a laptop, and helps visually impaired users extend the range of their activities and improve their orientation and mobility performance in cluttered environments. We have evaluated the performance of the proposed system in mapping and localization with blindfolded and visually impaired subjects. The mobility experiment results show that navigation in indoor environments with the proposed system avoids collisions successfully and improves the mobility performance of the user compared to conventional and state-of-the-art mobility aid devices.
RGB-D camera based wearable navigation system for the visually impaired
S107731421630025X
Common to much work on land-cover classification in multispectral imagery is the use of single satellite images for training the classifiers for the different land types. Unfortunately, more often than not, decision boundaries derived in this manner do not extrapolate well from one image to another. This happens for several reasons, most having to do with the fact that different satellite images correspond to different view angles on the earth’s surface, different sun angles, different seasons, and so on. In this paper, we get around these limitations of the current state-of-the-art by first proposing a new integrated representation for all of the images, overlapping and non-overlapping, that cover a large geographic ROI (Region of Interest). In addition to helping understand the data variability in the images, this representation also makes it possible to create the ground truth that can be used for ROI-based wide-area learning of the classifiers. We use this integrated representation in a new Bayesian framework for data classification that is characterized by: (1) learning of the decision boundaries from a sampling of all the satellite data available for an entire geographic ROI; (2) probabilistic modeling of within-class and between-class variations, as opposed to the more traditional probabilistic modeling of the “feature vectors” extracted from the measurement data; and (3) using variance-based ML (maximum-likelihood) and MAP (maximum a posteriori) classifiers whose decision boundary calculations incorporate all of the multi-view data for a geographic point if that point is selected for learning and testing. We show results with the new classification framework for an ROI in Chile whose size is roughly 10,000 square kilometers. This ROI is covered by 189 satellite images with varying degrees of overlap. We compare the classification performance of the proposed ROI-based framework with the results obtained by extrapolating the decision boundaries learned from a single image to the entire ROI. Using a 10-fold cross-validation test, we demonstrate significant increases in the classification accuracy for five of the six land-cover classes. In addition, we show that our variance based Bayesian classifier outperforms a traditional Support Vector Machine (SVM) based approach to classification for four out of six classes.
A variance-based Bayesian framework for improving Land-Cover classification through wide-area learning from large geographic regions
S1077314216300261
In this paper we propose a complete pipeline for medical image modality classification focused on the application of discrete Bayesian network classifiers. Modality refers to the categorization of biomedical images from the literature according to a previously defined set of image types, such as X-ray, graph or gene sequence. We describe an extensive pipeline starting with feature extraction from images, data combination, pre-processing and a range of different classification techniques and models. We study the expressive power of several image descriptors along with supervised discretization and feature selection to show the performance of discrete Bayesian networks compared to the usual deterministic classifiers used in image classification. We perform an exhaustive experimentation by using the ImageCLEFmed 2013 collection. This problem presents a high number of classes so we propose several hierarchical approaches. In a first set of experiments we evaluate a wide range of parameters for our pipeline along with several classification models. Finally, we perform a comparison by setting up the competition environment between our selected approaches and the best ones of the original competition. Results show that the Bayesian Network classifiers obtain very competitive results. Furthermore, the proposed approach is stable and it can be applied to other problems that present inherent hierarchical structures of classes.
Medical image modality classification using discrete Bayesian networks
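A toy sketch of one stage of such a pipeline: continuous image descriptors are discretized and fed to a discrete naive Bayes classifier, the simplest Bayesian-network structure. The random data, bin count and unsupervised quantile discretization are placeholders; the paper itself uses supervised discretization and richer network structures.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))        # placeholder visual descriptors for 200 images
y = rng.integers(0, 4, size=200)      # 4 hypothetical modality classes

# Discretize each descriptor dimension into ordinal bins, then fit a
# discrete naive Bayes model with Laplace smoothing over the bins.
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
Xd = disc.fit_transform(X).astype(int)
clf = CategoricalNB(alpha=1.0).fit(Xd, y)
print(clf.predict(Xd[:5]))
```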
S1077314216300273
In this work, we aim to segment and detect water in videos. Water detection is beneficial for applications such as video search, outdoor surveillance, and systems such as unmanned ground vehicles and unmanned aerial vehicles. This specific problem, however, has received less attention than general texture recognition. Here, we analyze several motion properties of water. First, we describe a video pre-processing step to increase invariance against water reflections and water colours. Second, we investigate the temporal and spatial properties of water and derive corresponding local descriptors. The descriptors are used to locally classify the presence of water, and a binary water detection mask is generated through spatio-temporal Markov Random Field regularization of the local classifications. Third, we introduce the Video Water Database, containing several hours of water and non-water videos, to validate our algorithm. Experimental evaluation on the Video Water Database and the DynTex database indicates the effectiveness of the proposed algorithm, which outperforms multiple algorithms for dynamic texture recognition and material recognition.
Water detection through spatio-temporal invariant descriptors
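The descriptor below is only a crude stand-in for the temporal features discussed above: it summarises the quasi-regular flicker of a spatial patch over time as a normalised histogram of temporal FFT magnitudes. The patch layout, bin count and normalisation are assumptions for illustration, not the descriptors proposed in the paper. Such local features would then be classified per patch and the resulting water/non-water decisions regularised with a spatio-temporal MRF, as the abstract describes.

```python
import numpy as np

def temporal_descriptor(patch_stack, n_bins=8):
    """patch_stack: (T, H, W) grayscale intensities of one spatial patch over time.

    Returns an L1-normalised histogram of temporal FFT magnitudes; the DC
    component is dropped so the feature is invariant to mean brightness.
    """
    signal = patch_stack.reshape(patch_stack.shape[0], -1).mean(axis=1)
    mag = np.abs(np.fft.rfft(signal - signal.mean()))[1:]
    hist, _ = np.histogram(np.arange(mag.size), bins=n_bins, weights=mag)
    return hist / (hist.sum() + 1e-9)
```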
S1077314216300285
Variational inference techniques are powerful methods for learning probabilistic models and provide significant advantages over maximum likelihood (ML) or maximum a posteriori (MAP) approaches. Nevertheless they have not yet been fully exploited for image processing applications. In this paper we present a variational Bayes (VB) approach for image segmentation. We aim to show that VB provides a framework for generalising existing segmentation algorithms that rely on an expectation–maximisation formulation, while increasing their robustness and computational stability. We also show how optimal model complexity can be automatically determined in a variational setting, as opposed to ML frameworks which are intrinsically prone to overfitting. Finally, we demonstrate how suitable intensity priors, that can be used in combination with the presented algorithm, can be learned from large imaging data sets by adopting an empirical Bayes approach.
Variational inference for medical image segmentation
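A minimal sketch of the automatic model-complexity selection that a variational treatment offers, using scikit-learn's variational Gaussian mixture on 1D intensities. The synthetic intensities, component count and concentration prior are illustrative assumptions and not the algorithm presented in the paper. Components that the data do not support receive near-zero weights, so the effective number of tissue classes is inferred rather than fixed in advance.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Placeholder "image": intensities drawn from two tissue classes plus noise.
rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(30, 5, 4000), rng.normal(80, 8, 6000)])
X = intensities.reshape(-1, 1)

# Variational GMM with a deliberately generous number of components; the
# Dirichlet prior drives the weights of unneeded components towards zero,
# which is how VB determines model complexity automatically.
vb = BayesianGaussianMixture(
    n_components=6,
    weight_concentration_prior=1e-2,
    max_iter=500,
    random_state=0,
).fit(X)

labels = vb.predict(X)                 # per-voxel segmentation labels
print(np.round(vb.weights_, 3))        # most components get negligible weight
```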
S1077314216300297
Recently introduced cost-effective depth sensors coupled with real-time skeleton extraction algorithms have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent the human skeleton. In this paper, we introduce and evaluate a new family of skeletal representations for human action recognition, which we refer to as R3DG features. The proposed representations explicitly model the 3D geometric relationships between various body parts using rigid body transformations, i.e., rotations and translations in 3D space. Using the proposed skeletal representations, human actions are modeled as curves in R3DG feature spaces. Finally, we perform action recognition by classifying these curves using a combination of dynamic time warping, Fourier temporal pyramid representation and support vector machines. Experimental results on five benchmark action datasets show that the proposed representations perform better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.
R3DG features: Relative 3D geometry-based skeletal representations for human action recognition
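As an illustration of the alignment step named in the abstract, here is a minimal dynamic time warping distance between two per-frame feature sequences. In the paper the sequences would be R3DG features (rigid-body transformations mapped to a suitable vector representation); the plain Euclidean frame distance used below is an assumption for simplicity.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two feature sequences.

    seq_a: (Ta, d) and seq_b: (Tb, d) arrays, e.g. per-frame skeletal features.
    """
    Ta, Tb = len(seq_a), len(seq_b)
    cost = np.full((Ta + 1, Tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[Ta, Tb]
```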
S1077314216300327
During the last decades, photogrammetric computer vision systems have become well established in scientific and commercial applications. Recent developments in image-based 3D reconstruction systems have resulted in an easy way of creating realistic, visually appealing and accurate 3D models. We present a fully automated processing pipeline for metric and geo-accurate 3D reconstructions of complex geometries, supported by an online feedback method for user guidance during image acquisition. Our approach is suited for seamlessly matching and integrating images with different scales, from different viewpoints (aerial and terrestrial), and from different cameras into a single reconstruction. We evaluate our approach on different datasets for applications in mining, archaeology and urban environments, and thus demonstrate the flexibility and high accuracy of our approach. Our evaluation includes accuracy-related analyses investigating camera self-calibration, georegistration and camera network configuration.
Evaluations on multi-scale camera networks for precise and geo-accurate reconstructions from aerial and terrestrial images with user guidance
S1077314216300339
Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models that were primarily developed for explaining biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer vision task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this end, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for the much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.
Bio-inspired computer vision: Towards a synergistic approach of artificial and biological vision
S1077314216300352
Dynamic textures (DTs) are moving sequences of natural scenes with some form of temporal regularity, such as boiling water or a flag fluttering in the wind. The motion causes continuous changes in the geometry of dynamic textures, making it difficult to apply traditional vision algorithms to recognize this class of textures. This paper proposes a scheme for modeling and classifying dynamic textures using a local image descriptor that efficiently encodes texture information in the space-time domain. The proposed descriptor extends the well-known spatial texture descriptor, the local binary pattern (LBP), to the spatio-temporal domain in order to represent the DT by combining appearance features with motion. Although local binary patterns are used extensively in visual recognition applications due to their excellent performance and computational simplicity, they are sometimes unable to differentiate local structures properly because they rely on the center pixel as the threshold. In this paper, a new descriptor based on a global adaptive threshold is employed to compute the structure pattern of a local image patch, which differentiates various local image structures more effectively. The LBP pattern defines the spatial structure of a local image patch but gives no information about its contrast. We therefore use the Michelson contrast to compute the difference in luminance of the local texture and combine it with the local structure pattern computed using the proposed descriptor. Extensive experiments on dynamic texture databases (DynTex, DynTex++ and UCLA) demonstrate the efficiency of the proposed method.
A novel scheme based on local binary pattern for dynamic texture recognition
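A simplified sketch of the two ingredients named in the abstract: an 8-neighbour binary structure pattern thresholded against a global value rather than the centre pixel, and the Michelson contrast of a local patch. The neighbourhood layout, the choice of global threshold (e.g. the frame mean) and the spatial-only formulation are assumptions; the actual descriptor also encodes the temporal planes of the video.

```python
import numpy as np

def structure_pattern(img, threshold):
    """8-neighbour binary code per pixel, thresholded against a global value
    (e.g. the mean intensity of the frame) instead of the centre pixel."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbour >= threshold).astype(np.int32) << bit
    return codes

def michelson_contrast(patch):
    """Michelson contrast (Imax - Imin) / (Imax + Imin) of a local patch."""
    lo, hi = float(patch.min()), float(patch.max())
    return 0.0 if hi + lo == 0 else (hi - lo) / (hi + lo)
```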
S1084804515001654
Battery-powered unmanned aerial vehicle (UAV)-based video sensing systems are more cost- and energy-efficient than traditional aircraft-based systems. However, high-volume real-time sensing data is more vulnerable in unmanned systems than in manned systems. Meanwhile, the computation and energy resources in such systems are very limited, which restricts the use of complex encryption on video data. Therefore, how to achieve confidentiality of video data efficiently under limited resources needs to be addressed. Firstly, the resource constraints of video sensing systems and their development trends are studied. Secondly, an information-utility-value-oriented, resource-efficient encryption optimization model under resource constraints is given. Thirdly, based on this model, a video-compression-independent, speed-adjustable lightweight encryption scheme and an improved version of it are proposed. Fourthly, a DSP- and ARM-based embedded secure video sensing system is designed, and the proposed encryption scheme has been implemented on it. In addition, theoretical analyses based on information theory and experimental analyses of throughput show that the proposed encryption schemes can meet the real-time requirements of the system under tight resource constraints.
A resource-efficient multimedia encryption scheme for embedded video sensing system based on unmanned aircraft
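The sketch below is not the scheme proposed in the paper; it merely illustrates the general idea of speed-adjustable, selective encryption under tight resources: only a configurable fraction of each frame's payload is XOR-ed with a keystream, trading confidentiality strength against CPU and energy cost. The SHA-256 counter keystream is a stand-in chosen for brevity; a real system would use a vetted stream cipher such as AES-CTR or ChaCha20. Decryption applies the same operation with the same key and nonce.

```python
import hashlib
import os

def keystream(key, nonce, length):
    """Simple counter-mode keystream built from SHA-256 (illustration only)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def selective_encrypt(frame_payload, key, nonce, ratio=0.25):
    """Encrypt only the first `ratio` of the frame's payload, leaving the rest
    untouched -- a crude stand-in for a speed-adjustable lightweight scheme."""
    n = int(len(frame_payload) * ratio)
    ks = keystream(key, nonce, n)
    head = bytes(b ^ k for b, k in zip(frame_payload[:n], ks))
    return head + frame_payload[n:]

payload = os.urandom(4096)                       # placeholder video frame data
enc = selective_encrypt(payload, b"16-byte-secret!!", b"nonce123", ratio=0.5)
```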
S1093326313000958
We here present an improved version of AutoGrow (version 3.0), an evolutionary algorithm that works in conjunction with existing open-source software to automatically optimize candidate ligands for predicted binding affinity and other druglike properties. Though no substitute for the medicinal chemist, AutoGrow 3.0, unlike its predecessors, attempts to introduce some chemical intuition into the automated optimization process. AutoGrow 3.0 uses the rules of click chemistry to guide optimization, greatly enhancing synthesizability. Additionally, the program discards any growing ligand whose physical and chemical properties are not druglike. By carefully crafting chemically feasible druglike molecules, we hope that AutoGrow 3.0 will help supplement the chemist's efforts. To demonstrate the utility of the program, we use AutoGrow 3.0 to generate predicted inhibitors of three important drug targets: Trypanosoma brucei RNA editing ligase 1, peroxisome proliferator-activated receptor γ, and dihydrofolate reductase. In all cases, AutoGrow generates druglike molecules with high predicted binding affinities. AutoGrow 3.0 is available free of charge (http://autogrow.ucsd.edu) under the terms of the GNU General Public License and has been tested on Linux and Mac OS X.
AutoGrow 3.0: An improved algorithm for chemically tractable, semi-automated protein inhibitor design
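For readers unfamiliar with the overall grow-filter-select loop of such evolutionary tools, the skeleton below shows the generic idea only. The `mutate`, `score` and `is_druglike` callables are placeholders: in AutoGrow 3.0 they would correspond, respectively, to click-chemistry-based ligand modifications, predicted binding affinity, and druglikeness filters, none of which are implemented here.

```python
def evolve(population, score, mutate, is_druglike, generations=10, keep=20):
    """Generic evolutionary loop illustrating the grow-filter-select idea
    (not AutoGrow's actual operators): candidates are mutated, non-druglike
    offspring are discarded, and the best-scoring survivors are kept."""
    for _ in range(generations):
        offspring = [mutate(parent) for parent in population for _ in range(3)]
        offspring = [m for m in offspring if is_druglike(m)]
        population = sorted(population + offspring, key=score, reverse=True)[:keep]
    return population
```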
S1093326313002052
A variety of popular molecular dynamics (MD) simulation packages were independently developed in recent decades to reach diverse scientific goals. However, such uncoordinated development of software, force fields, and analysis tools for molecular simulations gave rise to an array of software formats and arbitrary conventions for the routine preparation and analysis of simulation input and output data. Different formats and/or parameter definitions are used at each stage of the modeling process even though they largely contain redundant information across alternative software tools. Such a Babel of languages, which cannot be easily and unambiguously translated into one another, poses one of the major technical obstacles to the preparation, translation, and comparison of molecular simulation data that users face on a daily basis. Here, we present the MDWiZ platform, a freely accessible online portal designed to aid the fast and reliable preparation and conversion of file formats, allowing researchers to reproduce or generate data from MD simulations using different setups, including force fields and models with different underlying potential forms. The general structure of MDWiZ is presented, the features of version 1.0 are detailed, and an extensive validation based on GROMACS-to-LAMMPS conversion is presented. We believe that MDWiZ will be of broad use to the molecular dynamics community. Such fast format and force field exchange for a given system allows tailoring the chosen system to a given computer platform and/or taking advantage of specific capabilities offered by different software engines.
MDWiZ: A platform for the automated translation of molecular dynamics simulations
S1093326314001570
In this review we give an overview of the field of computational enzymology. We start by describing the birth of the field, with emphasis on the work of the 2013 Chemistry Nobel Laureates. We then present key features of the state of the art in the field, showing what theory, accompanied by experiments, has taught us so far about enzymes. We also briefly describe computational methods, such as quantum mechanics-molecular mechanics approaches, reaction coordinate treatment, and free energy simulation approaches. We conclude by discussing open questions and challenges.
Challenges in computational studies of enzyme structure, function and dynamics
S1093326314002101
Cancer is a complex disease resulting from the uncontrolled proliferation of cell signaling events. Protein kinases have been identified as central molecules that participate overwhelmingly in oncogenic events, thus becoming key targets for anticancer drugs. A majority of studies converged on the idea that the ligand-binding pockets of kinases retain clues to the inhibiting abilities and cross-reacting tendencies of inhibitor drugs. Even though these ideas are critical for drug discovery, validating them experimentally is not only difficult, but in some cases infeasible. To overcome these limitations and to test these ideas at the molecular level, we present here the results of receptor-focused in-silico docking of nine marketed drugs to 19 different wild-type and mutated kinases chosen from a wide range of families. This investigation highlights the need for using relevant models to explain the correct inhibition trends, and the results are used to make predictions that might be able to influence future experiments. Our simulation studies correctly predict the primary targets for each drug studied in the majority of cases, and our results agree with existing findings. Our study shows that the conformations a given receptor acquires during kinase activation, and their micro-environment, define the ligand partners. Type II drugs display high compatibility and selectivity for DFG-out kinase conformations. On the other hand, Type I drugs are less selective and show binding preferences for both the open and closed forms of selected kinases. Using this receptor-focused approach, it is possible to capture the observed fold change in binding affinities between the wild-type and disease-centric mutations in ABL kinase for Imatinib and the second-generation ABL drugs. The effects of mutation are also investigated for two other systems, EGFR and B-Raf. Finally, by including pathway information in the design it is possible to model kinase inhibitors with potentially fewer side effects.
Can structural features of kinase receptors provide clues on selectivity and inhibition? A molecular modeling study
S1093326315300462
The biological function of the pleiotropic cytokine interleukin-10 (IL-10), which has an essential role in inflammatory processes, is known to be affected by glycosaminoglycans (GAGs). GAGs are essential constituents of the extracellular matrix with an important role in modulating the biological function of many proteins. The molecular mechanisms governing the IL-10–GAG interaction, however, remain unclear. In particular, detailed knowledge about GAG binding sites and the recognition mode on IL-10 is lacking, despite its importance for understanding the functional consequences of the IL-10–GAG interaction. In the present work, we report a GAG binding site on IL-10 identified by applying computational methods based on Coulomb potential calculations and specialized molecular dynamics simulations. The identified GAG binding site consists of several positively charged residues that are conserved among species. Exhaustive conformational space sampling of a series of GAG ligands binding to IL-10 led to the observation of two GAG binding modes in the predicted binding site, and to the identification of IL-10 residues R104, R106, R107, and K119 as being most important for molecular GAG recognition. In silico mutation as well as single-residue energy decomposition and detailed analysis of hydrogen-bonding behavior led to the conclusion that R107 is most essential and assumes a unique role in the IL-10–GAG interaction. This structural and dynamic characterization of GAG binding to IL-10 represents an important step towards further understanding the modulation of the biological function of IL-10.
Identification and characterization of a glycosaminoglycan binding site on interleukin-10 via molecular simulation methods
S109332631530067X
Humic substances are ubiquitous in the environment and have manifold functions. While their composition is well known, information on their chemical structure and three-dimensional conformation is scarce. Here we describe the Vienna Soil-Organic-Matter Modeler, an online tool to generate condensed-phase computer models of humic substances (http://somm.boku.ac.at). Many different models can be created that reflect the diversity in composition and conformations of the constituting molecules. To exemplify the modeler, 18 different models are generated based on two experimentally determined compositions, to explicitly study the effects of varying, for example, the number of water molecules in the models or the pH. Molecular dynamics simulations were performed on the models, which were subsequently analyzed in terms of structure, interactions and dynamics, linking macroscopic observables to the microscopic composition of the systems. We are convinced that this new tool opens the way for a wide range of in silico studies on soil organic matter.
Vienna Soil-Organic-Matter Modeler—Generating condensed-phase models of humic substances
S1361841513000510
Computed Tomographic (CT) colonography is a technique used for the detection of bowel cancer or potentially precancerous polyps. The procedure is performed routinely with the patient both prone and supine to differentiate fixed colonic pathology from mobile faecal residue. Matching corresponding locations is difficult and time consuming for radiologists due to colonic deformations that occur during patient repositioning. We propose a novel method to establish correspondence between the two acquisitions automatically. The problem is first simplified by detecting haustral folds using a graph cut method applied to a curvature-based metric computed on a surface mesh generated from a segmentation of the colonic lumen. A virtual camera is used to create a set of images that provide a metric for matching pairs of folds between the prone and supine acquisitions. Image patches are generated at the fold positions using depth map renderings of the endoluminal surface and optimised by performing a virtual camera registration over a restricted set of degrees of freedom. The intensity differences between image pairs, along with additional neighbourhood information that enforces geometric constraints over a 2D parameterisation of the 3D space, are used as unary and pairwise costs respectively, and included in a Markov Random Field (MRF) model to estimate the maximum a posteriori fold labelling assignment. The method achieved fold matching accuracies of 96.0% and 96.1% in patient cases with and without local colonic collapse. Moreover, it improved upon an existing surface-based registration algorithm by providing an initialisation. The set of landmark correspondences is used to non-rigidly transform a 2D source image derived from a conformal mapping process on the 3D endoluminal surface mesh. This achieves full surface correspondence between prone and supine views and can be further refined with an intensity-based registration, showing a statistically significant improvement (p < 0.001) and decreasing the mean error from 11.9 mm to 6.0 mm, measured at 1743 reference points from 17 CTC datasets.
Endoluminal surface registration for CT colonography using haustral fold matching
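As a much-simplified stand-in for the labelling problem described above, the snippet below finds a one-to-one prone/supine fold assignment that minimises a matrix of unary costs (random placeholders here for the image-patch intensity differences). The paper's MRF additionally encodes pairwise geometric consistency between neighbouring folds, which a plain assignment ignores.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Placeholder unary costs: intensity differences between rendered image patches
# at candidate prone/supine fold pairs (rows: prone folds, cols: supine folds).
rng = np.random.default_rng(2)
unary_cost = rng.random((40, 42))

# Simplified stand-in for the MAP labelling: a one-to-one assignment that
# minimises the total unary cost, ignoring the pairwise geometric terms.
prone_idx, supine_idx = linear_sum_assignment(unary_cost)
matches = list(zip(prone_idx, supine_idx))
```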
S1361841513001023
Angiographic methods can provide valuable information on vessel morphology and hemodynamics, but are often qualitative in nature, somewhat limiting their ability for comparison across arteries and subjects. In this work we present a method for quantifying absolute blood volume flow rates within large vessels using dynamic angiographic data. First, a kinetic model incorporating relative blood volume, bolus dispersion and signal attenuation is fitted to the data. A self-calibration method is also described for both 2D and 3D data sets to convert the relative blood volume parameter into absolute units. The parameter values are then used to simulate the signal arising from a very short bolus, in the absence of signal attenuation, which can be readily encompassed within a vessel mask of interest. The volume flow rate can then be determined by calculating the resultant blood volume within the vessel mask divided by the simulated bolus duration. This method is applied to non-contrast magnetic resonance imaging data from a flow phantom and to the cerebral arteries of healthy volunteers, acquired using a 2D vessel-encoded pseudocontinuous arterial spin labeling pulse sequence. This allows the quantitative flow contribution in downstream vessels to be determined from each major brain-feeding artery. Excellent agreement was found between the actual and estimated flow rates in the phantom, particularly below 4.5 ml/s, typical of the cerebral vasculature. Flow rates measured in healthy volunteers were generally consistent with values found in the literature. This method is likely to be of use in patients with a variety of cerebrovascular diseases, such as the assessment of collateral flow in patients with steno-occlusive disease or the evaluation of arteriovenous malformations.
A theoretical framework for quantifying blood volume flow rate from dynamic angiographic data and application to vessel-encoded arterial spin labeling MRI
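The final flow computation reduces to a simple ratio, sketched below under assumed variable names: the calibrated blood volume contained in the vessel mask for the simulated short bolus, divided by the bolus duration.

```python
import numpy as np

def flow_rate_ml_per_s(relative_volume_map, vessel_mask, voxel_volume_ml,
                       calibration, bolus_duration_s):
    """relative_volume_map: per-voxel relative blood volume from the kinetic fit.
    vessel_mask: boolean array of the same shape selecting the vessel of interest.
    calibration: self-calibration factor converting relative to absolute volume.
    Returns the volume flow rate in ml/s."""
    volume_ml = np.sum(relative_volume_map[vessel_mask]) * calibration * voxel_volume_ml
    return volume_ml / bolus_duration_s
```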
S1361841513001606
Motion correction in Dynamic Contrast Enhanced (DCE-) MRI is challenging because rapid intensity changes can compromise common (intensity-based) registration algorithms. In this study we introduce a novel registration technique based on robust principal component analysis (RPCA) that decomposes a given time series into a low-rank and a sparse component. This allows robust separation of the motion components that can be registered from the intensity variations that are left unchanged. This Robust Data Decomposition Registration (RDDR) is demonstrated on both simulated data and a wide range of clinical data. Robustness to different types of motion and breathing choices during acquisition is demonstrated for a variety of imaged organs including liver, small bowel and prostate. The analysis of clinically relevant regions of interest showed both a decrease in error (15–62% reduction following registration) in tissue time–intensity curves and improved areas under the curve (AUC60) at early enhancement.
Respiratory motion correction in dynamic MRI using robust data decomposition registration – Application to DCE-MRI
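For reference, a compact principal component pursuit solver (inexact augmented Lagrangian) of the kind commonly used for RPCA is sketched below: each column of D would be a vectorised frame of the dynamic series, L captures the smoothly varying content that can be registered, and S the sparse intensity changes. Parameter defaults follow the usual PCP recommendations and are not taken from the paper.

```python
import numpy as np

def rpca_pcp(D, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Robust PCA via principal component pursuit: decompose D into a low-rank
    part L and a sparse part S so that D is approximately L + S."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    Y = np.zeros_like(D)      # Lagrange multipliers
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Low-rank update: singular value soft-thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: element-wise soft-thresholding.
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (D - L - S)
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S
```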