abstract | authors | title | __index_level_0__
---|---|---|---|
Accurate and ambulatory measurement of blood pressure (BP) is essential for efficient diagnosis, management and prevention of cardiovascular diseases (CVDs). However, traditional cuff-based BP measurement methods provide only intermittent BP readings and can cause discomfort with the occlusive cuff. Although the pulse transit time (PTT) method is promising for cuffless and continuous BP measurement, its pervasive use is restricted by its limited accuracy and the requirement of placing sensors on multiple body sites. To tackle these issues, we propose a novel dual-modality arterial pulse monitoring system for continuous blood pressure measurement, which simultaneously records the pressure and photoplethysmography (PPG) signals of the radial artery. The obtained signals can be used to generate a pressure-volume curve, from which the elasticity index (EI) and viscosity index (VI) can be extracted. Experiments were carried out among 7 healthy subjects with their PPG, ECG, arterial pressure wave and reference BP collected to examine the effectiveness of the proposed indexes. The results of this study demonstrate that a linear regression model combining EI and VI has a significantly higher BP tracking correlation coefficient as compared to the PTT method. This suggests that the proposed system and method can potentially be used for convenient and continuous blood pressure estimation with higher accuracy. | ['Wenxuan Dai', 'Yuan-Ting Zhang', 'Jing Liu', 'Xiao-Rong Ding', 'Ni Zhao'] | Dual-modality arterial pulse monitoring system for continuous blood pressure measurement | 914,083 |
Error correction codes (ECCs) are commonly used in computer systems to protect information from errors. For example, single error correction (SEC) codes are frequently used for memory protection. Due to continuous technology scaling, soft errors on registers have become a major concern, and ECCs are required to protect them. Nevertheless, using an ECC increases delay, area and power consumption. For this reason, ECCs are traditionally designed with a focus on minimizing the number of redundant bits added. This is important in memories, as these bits are added to each word in the whole memory. However, it is less important in registers, where minimizing the encoding and decoding delay can be more valuable. This paper proposes a method to develop codes with 1-gate delay encoders and 4-gate delay decoders, independently of the word length. These codes have been designed to correct single errors only in data bits to reduce the overhead. | ['Luis-J. Saiz-Adalid', 'Pedro J. Gil', 'Joaquin Gracia-Moran', 'Daniel Gil-Tomas', 'J.-Carlos Baraza-Calvo'] | Ultrafast Single Error Correction Codes for Protecting Processor Registers | 609,811 |
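The abstract stops short of the codes themselves, so as a minimal, hedged illustration of single error correction in general, the sketch below implements the classic Hamming(7,4) code (all names are illustrative; the paper's ultrafast codes are optimized for gate delay rather than redundancy and are not reproduced here).

```python
# Minimal Hamming(7,4) single-error-correction sketch (illustrative only,
# not the 1-gate/4-gate codes proposed in the paper).

def hamming74_encode(d):
    """d: four data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit codeword with at most one flipped bit -> corrected data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the erroneous bit
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1          # correct the single error
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                           # inject a single-bit soft error
assert hamming74_decode(code) == word
```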
One of the primary effects of software abstraction has been to further the notion of computer programs as objects rather than moving programming closer to the problem being solved. Knowledge abstraction, however, allows software to take a significant step toward the problem domain. | ['Russ Abbott'] | Knowledge abstraction | 710,051 |
Feedback Tracking Control of Non-Markovian Quantum Systems | ['Shibei Xue', 'Michael R. Hush', 'Ian R. Petersen'] | Feedback Tracking Control of Non-Markovian Quantum Systems | 916,345 |
This paper introduces design guidelines for new technology that leverage our understanding of traditional interactions with bound paper in the form of books and notebooks. Existing, physical interactions with books have evolved over hundreds of years, providing a rich history that we can use to inform our design of new computing technologies. In this paper, we initially survey existing paper technology and summarize previous historical and anthropological analyses of people's interactions with bound paper. We then present our development of three design principles for personal and portable technologies based on these analyses. For each design guideline, we describe a design scenario illustrating these principles in action. | ['Daniela K. Rosner', 'Lora Oehlberg', 'Kimiko Ryokai'] | Studying paper use to inform the design of personal and portable technology | 324,830 |
This paper presents a new method for measuring the complexity of a software test. This provides a quantitative assessment of each test outcome. Testing effort may be focused on those functions that result in high fractional complexity. The complexity of a test provides a surrogate measure of software faults. The measurement technique is applied to an actual test of a software system. | ['John C. Munson', 'Gregory A. Hall'] | Dynamic program complexity and software testing | 318,934 |
Generalized Quasi-Likelihood (GQL) Inferences | ['Brajendra C. Sutradhar'] | Generalized Quasi-Likelihood (GQL) Inferences | 586,308 |
The linear-chain CRF is one of the most popular discriminative models for human action recognition, as it can achieve good prediction performance in temporal sequential labeling by capturing the one- or few-timestep interactions of the target states. However, existing CRF formulations have limited capabilities to capture deeper intermediate representations within the target states and higher order dependence between the given states, which are potentially useful and significant in the modeling of complex action recognition scenarios. To address these issues, we formulate a deep recursive and hierarchical conditional random fields (DR-HCRFs) model in an infinite-order dependencies framework. The DR-HCRFs model is able to capture richer contextual information in the target states, and infinite-order temporal dependencies between the given states. Moreover, we derive a mean-field-like approximation of the model marginal likelihood to efficiently facilitate model inference. The parameters of the model are learnt with the block-coordinate primal-dual Frank-Wolfe algorithm in a structured support vector machine framework. Experimental results on the CAD-120 benchmark dataset demonstrate that the proposed approach achieves high scalability and performs better than other state-of-the-art methods in terms of the evaluation criteria. | ['Tianliang Liu', 'Xincheng Wang', 'Xiubing Dai', 'Jiebo Luo'] | Deep recursive and hierarchical conditional random fields for human action recognition | 781,805 |
The analysis and design of complex systems, which very often are composed of several sub-systems, benefits from the use of distributed simulation techniques. Unfortunately, the development of distributed simulation systems requires significant expertise and considerable effort due to the inherent complexity of available standards, such as HLA. This paper introduces a model-driven approach to support the automated generation of HLA-based distributed simulations starting from system descriptions specified with SysML (Systems Modeling Language), the UML-based general purpose modeling language for systems engineering. The proposed approach is founded on the use of model transformation techniques and relies on standards introduced by the Model Driven Architecture (MDA). The method exploits several UML models that embody the details required to support two transformations that automatically map the source SysML model into an HLA-specific model and then use the latter to generate the Java/HLA source code. To this purpose, this paper also introduces two UML profiles, used to annotate UML diagrams both to represent HLA-based details and to support the automated generation of the HLA-based simulation code. | ['Paolo Bocciarelli', "Andrea D'Ambrogio", 'Gabriele Fabiani'] | A Model-driven Approach to Build HLA-based Distributed Simulations from SysML Models | 688,853 |
Urban road state identification refers to determining the operation status of the road network system, which plays an important role in urban road traffic management. By clustering time series of traffic flow, typical fluctuation pattern recognition algorithms can identify the operation states of the urban road network. As the detected traffic data contain vague and uncertain information, preprocessing is needed. An improved fuzzy c-means clustering (FCM) method is proposed in this paper. A case study based on an urban road section of Beijing City demonstrates the feasibility and effectiveness of the improved FCM algorithm. | ['Guangyu Zhu', 'Jianjun Chen', 'Peng Zhang'] | Fuzzy c-means clustering identification method of urban road traffic state | 606,046 |
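The abstract does not detail the proposed improvement, so the sketch below shows only the standard FCM iteration that any improved variant builds on: memberships are updated from distances to the current centers, and centers are recomputed as membership-weighted means (the function name and the fuzzifier value m=2 are assumptions).

```python
import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-10):
    """One standard fuzzy c-means iteration.
    X: (n, d) data points; centers: (c, d) current cluster centers.
    Returns (new_centers, memberships)."""
    # Distance from every point to every center, shape (n, c).
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    power = 2.0 / (m - 1.0)
    u = 1.0 / np.sum((dist[:, :, None] / dist[:, None, :]) ** power, axis=2)
    # Center update: c_k = sum_i u_ik^m x_i / sum_i u_ik^m.
    w = u ** m
    new_centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return new_centers, u
```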
Convex Combination of Three Affine Projections Adaptive Filters | ['Leonel Arevalo', 'José Antonio Apolinário', 'Marcello L. R. de Campos', 'R. Sampaio-Neto'] | Convex Combination of Three Affine Projections Adaptive Filters | 806,860 |
The issue of confidentiality and privacy in general databases has become increasingly prominent in recent years. A key element in preserving privacy and confidentiality of sensitive data is the ability to evaluate the extent of all potential disclosures for such data. This is a major challenge for all existing perturbation or transformation based approaches, as they conduct disclosure analysis on the perturbed or transformed data, which is often too large for such analysis given that many organizational databases contain a huge amount of data with a large number of categorical and numerical attributes. Instead of conducting disclosure analysis on perturbed or transformed data, our approach is to build an approximate statistical model first and analyze the various potential disclosures in terms of the parameters of the model built. As the learned model is the only means to generate data for release, all confidential information which snoopers can derive is contained in those parameters. | ['Xintao Wu', 'Songtao Guo', 'Yingjiu Li'] | Towards value disclosure analysis in modeling general databases | 195,794 |
Existing approaches for merging the results of parallel development activities are limited. These approaches can be characterised as state-based: only the initial and final states are considered. This paper introduces operation-based merging, which uses the operations that were performed during development. In many cases operation-based merging has advantages over state-based merging, because it automatically respects the data-type invariants of the objects, is extensible for arbitrary object types, provides better conflict detection and allows for better support for solving these conflicts. Several algorithms for conflict detection are described and compared. | ['Ernst Lippe', 'Norbert van Oosterom'] | Operation-based merging | 440,682 |
A STUDY OF PRIORITY COGNIZANCE IN CONFLICT RESOLUTION FOR FIRM REAL TIME DATABASE SYSTEMS | ['Anindya Datta', 'Igor R. Viguier', 'Sang Hyuk Son', 'Vijay Kumar'] | A STUDY OF PRIORITY COGNIZANCE IN CONFLICT RESOLUTION FOR FIRM REAL TIME DATABASE SYSTEMS | 408,058 |
We demonstrate how Dijkstra's algorithm for shortest path queries can be accelerated by using precomputed shortest path distances. Our approach allows a completely flexible tradeoff between query time and space consumption for precomputed distances. In particular, sublinear space is sufficient to give the search a strong “sense of direction”. We evaluate our approach experimentally using large, real-world road networks. | ['Jens Maue', 'Peter Sanders', 'Domagoj Matijevic'] | Goal-directed shortest-path queries using precomputed cluster distances | 141,305 |
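As a hedged sketch of the idea (data structures assumed, not the authors' implementation), the code below runs an A*-style search whose heuristic is the precomputed minimum distance between the cluster containing the current node and the cluster containing the target. Since the minimum inter-cluster distance can never exceed the true remaining distance, the heuristic is an admissible lower bound, which is what gives the search its "sense of direction".

```python
import heapq

def cluster_directed_search(graph, cluster_of, cluster_dist, src, dst):
    """graph: {u: [(v, weight), ...]}; cluster_of: node -> cluster id;
    cluster_dist[a][b]: precomputed minimum distance between clusters a and b
    (with cluster_dist[a][a] == 0), a lower bound on any node-to-node distance."""
    def h(u):
        return cluster_dist[cluster_of[u]][cluster_of[dst]]

    dist = {src: 0.0}
    pq = [(h(src), src)]
    while pq:
        f, u = heapq.heappop(pq)
        if u == dst:
            return dist[u]
        if f > dist[u] + h(u):        # stale entry: a shorter path was found later
            continue
        for v, w in graph[u]:
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return float("inf")
```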
Accurate detection of low contrast-to-noise ratio (CNR) blood oxygenation level dependent (BOLD) signals in functional magnetic resonance imaging (fMRI) data is important for presurgical planning and cognitive research. Robust detection is challenging in small regions of low CNR activation since the probability of detecting each individual voxel is low. We present a processing technique for improving the detection of localized low CNR BOLD signals in fMRI data. When applied to synthetic fMRI data, this blind estimation scheme significantly improves the probability of correctly detecting voxels in a small region of activation with a CNR between 0.5 and 1.0 compared to the standard general linear model approach. More activation is detected in expected (based on input stimulus) regions of experimental data after processing with the proposed technique. | ['Ian C. Atkinson', 'Farzad Kamalabadi', 'Douglas L. Jones', 'Keith R. Thulborn'] | Blind Estimation for Localized Low Contrast-to-Noise Ratio BOLD Signals | 451,723 |
Atypical or delayed emotion processing has been repeatedly reported in individuals with Autism Spectrum Disorder (ASD); the overwhelming research focus has been on examining the level of their impairments in perceiving emotions via (mostly static) facial expressions and body movement. While it is difficult for individuals with ASD to read others' emotions, it might be just as challenging for neuro-typical (NT) individuals to recognize the emotions of those with ASD. Instead of pursuing a deeper understanding down the path of the expressive abilities of individuals with ASD, in our on-going study we focus on the integration of a naturalistic multi-sensory environment (including a facial expression recognition module through Kinect V2's HD Face API) to help NT individuals "read" the emotions of children with ASD. In this short report, we focus on our system design and offer some early insights derived from initial testing on the potential of such an environment. | ['Tiffany Y. Tang'] | Helping Neuro-typical Individuals to "Read" the Emotion of Children with Autism Spectrum Disorder: an Internet-of-Things Approach | 843,153 |
DodOrg—A Self-adaptive Organic Many-core Architecture | ['Thomas Ebi', 'David Kramer', 'Christian Schuck', 'Alexander von Renteln', 'Jürgen Becker', 'Uwe Brinkschulte', 'Jörg Henkel', 'Wolfgang Karl'] | DodOrg—A Self-adaptive Organic Many-core Architecture | 596,090 |
Super-resolution imaging aims to overcome the inherent limitations of image acquisition by creating high-resolution images from their low-resolution counterparts. In this paper, a novel state-space approach is proposed to incorporate the temporal correlations among the low-resolution observations into the framework of Kalman filtering. The proposed approach exploits both the temporal correlations among the high-resolution images and the temporal correlations among the low-resolution images to improve the quality of the reconstructed high-resolution sequence. Experimental results show that the proposed framework is superior to bi-linear interpolation, bi-cubic spline interpolation and the conventional Kalman filter approach, due to the consideration of the temporal correlations among the low-resolution images. | ['Jing Tian', 'Kai-Kuang Ma'] | A new state-space approach for super-resolution image sequence reconstruction | 524,725 |
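For context, here is a sketch of the standard Kalman filter recursion that a state-space super-resolution approach of this kind builds on; the paper's specific state and observation models for low-resolution frames are not reproduced, so all names below are assumptions.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One standard Kalman filter recursion (predict, then update).
    x: state estimate (e.g., the high-resolution frame); P: its covariance;
    z: new observation (e.g., a low-resolution frame); F/H: state-transition
    and observation models; Q/R: process and measurement noise covariances."""
    # Predict: propagate the state through the temporal model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new observation.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```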
Accurate geometric calibration of a spaceborne sensor is critical to allowing the imaging system to produce high quality image products. This paper describes the post-launch calibration techniques developed and used to characterize the payloads on the five RapidEye satellites. It illustrates how ground-based post-launch calibration techniques can be used to mount a successful geometric calibration campaign consistent with a small-sat satellite mission. | ['Brian C. Robertson', 'Keith Dennis Richard Beckett', 'Chris Rampersad', 'Rony Putih'] | Quantitative geometric calibration & validation of the Rapideye constellation | 475,385 |
Head motion during real walking is complex: the basic translational path is obscured by head bobbing. Many VE applications would be improved if a bobbing-free path were available. This paper introduces a model that describes head position while walking in terms of a bobbing-free path and the head bobs. We introduce two methods to approximate the model from head-track data. | ['J. Wendt', 'Mary C. Whitton', 'David Adalsteinsson', 'Frederick P. Brooks'] | Reliable forward walking parameters from head-track data alone | 121,565 |
A real-time adaptive resource allocation algorithm considering the end user's Quality of Experience (QoE) in the context of video streaming service is presented in this work. An objective no-reference quality metric, namely Pause Intensity (PI), is used to control the priority of resource allocation to users during the scheduling process. An online adjustment has been introduced to adaptively set the scheduler's parameter and maintain a desired trade-off between fairness and efficiency. The correlation between the data rates (i.e., video code rates) demanded by users and the data rates allocated by the scheduler is taken into account as well. The final allocated rates are determined based on the channel status, the distribution of PI values among users, and the scheduling policy adopted. Furthermore, since the user's capability varies as environmental conditions change, the rate adaptation mechanism for video streaming is considered and its interaction with the scheduling process under the same PI metric is studied. The feasibility of implementing this algorithm is examined and the result is compared with the most common existing scheduling methods. | ['Mirghiasaldin Seyedebrahimi', 'Xiao-Hong Peng', 'Robert B. Harrison'] | Adaptive Resource Allocation for QoE-Aware Mobile Communication Networks | 344,846 |
User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often requires practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low costs, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach. | ['Aniket Kittur', 'Ed H. Chi', 'Bongwon Suh'] | Crowdsourcing user studies with Mechanical Turk | 335,857 |
This paper considers the noncooperative maximization of mutual information in the vector Gaussian interference channel in a fully distributed fashion via game theory. This problem has been widely studied in a number of works during the past decade for frequency-selective channels, and recently for the more general multiple-input multiple-output (MIMO) case, for which the state-of-the-art results are valid only for nonsingular square channel matrices. Surprisingly, these results do not hold true when the channel matrices are rectangular and/or rank deficient matrices. The goal of this paper is to provide a complete characterization of the MIMO game for arbitrary channel matrices, in terms of conditions guaranteeing both the uniqueness of the Nash equilibrium and the convergence of asynchronous distributed iterative waterfilling algorithms. Our analysis hinges on new technical intermediate results, such as a new expression for the MIMO waterfilling projection valid (also) for singular matrices, a mean-value theorem for complex matrix-valued functions, and a general contraction theorem for the multiuser MIMO waterfilling mapping valid for arbitrary channel matrices. The quite surprising result is that uniqueness/convergence conditions in the case of tall (possibly singular) channel matrices are more restrictive than those required in the case of (full rank) fat channel matrices. We also propose a modified game and algorithm with milder conditions for the uniqueness of the equilibrium and convergence, and virtually the same performance (in terms of Nash equilibria) of the original game. | ['Gesualdo Scutari', 'Daniel P. Palomar', 'Sergio Barbarossa'] | The MIMO Iterative Waterfilling Algorithm | 8,345 |
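Each user's best response in an iterative waterfilling algorithm is the classic waterfilling power allocation. The sketch below solves the single-user case over parallel scalar channels by bisection on the water level; in the multiuser game each user would recompute `inv_gains` from the noise-plus-interference it currently sees. The scalar-channel simplification and all names are assumptions, not the paper's MIMO formulation.

```python
import numpy as np

def waterfill(inv_gains, p_total, iters=60):
    """Waterfilling over parallel channels: p_k = max(0, mu - inv_gains[k]),
    with the water level mu chosen by bisection so that sum(p) == p_total.
    inv_gains[k] plays the role of noise(+interference) over channel gain."""
    lo, hi = 0.0, float(np.max(inv_gains)) + p_total
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - inv_gains)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv_gains)

p = waterfill(np.array([0.5, 1.0, 2.0]), p_total=3.0)   # best channel gets most power
```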
Due to the scarcity of available bandwidth resources and increasing demand for cellular communication services, the problem of channel assignment becomes increasingly important. To find optimal assignments, several algorithms have been proposed to minimize the number of required channels. However, in many situations the total number of available frequencies is given and fixed. A new cost model is therefore required for assigning channels in cellular networks with limited bandwidth. We analyze the cost of each assignment in view of the damage from blocked calls and interference from other frequencies. Furthermore, we formulate a new optimization problem for the fixed channel assignment problem by incorporating the limited bandwidth constraint into its cost model. To minimize the cost function, we adopt a genetic approach and propose an evolutionary algorithm. Experimental results show that the cost function does reflect the quality of different assignments and that our algorithm improves solution quality significantly. | ['Ming-Hui Jin', 'Hsiao-Kuang Wu', 'Jorng-Tzong Horng', 'Chai-Hsuan Tsai'] | An evolutionary approach to fixed channel assignment problems with limited bandwidth constraint | 405,957 |
A macro–micro composite precision positioning stage is mainly used in microelectronics manufacturing to achieve high velocity, high precision, and large-stroke positioning. The positioning accuracy and working efficiency of the stage are influenced by the inertial vibration caused by motion with high acceleration. This paper proposes an active vibration reduction (AVR) method employing a piezoelectric device for a designed macro–micro motion stage. The design model of the stage is established and its dynamic models are explored. The feasibility of the piezoelectric device as a vibration damper for the designed positioning stage is demonstrated through theoretical analyses, including natural frequency analysis and inertial vibration energy analysis. Furthermore, an optimal design of the stage with the AVR mechanism is established and then verified experimentally. The performance of the AVR method is examined and characterized through investigation of the differences in inertial vibration energy with and without the AVR, and the performance of the proposed method in terms of the vibration amplitude and positioning time is measured at different accelerations, velocities, and strokes. The theoretical and experimental analyses indicate the effectiveness of the proposed vibration reduction method, and this method could be employed in several applications that require vibration reduction. | ['Zhang Lh', 'Jian Gao', 'Xin Chen', 'Hui Tang', 'Yun Chen', 'Yunbo He', 'Zhijun Yang'] | A Rapid Vibration Reduction Method for Macro–Micro Composite Precision Positioning Stage | 868,449 |
Auctions are an important and common form of commerce today. A difficult aspect of auctions is that the bidder must be present at the site of the auction. This reduces the appeal of auction and restricts the number of people who would otherwise participate in it. An auction over an electronic network is therefore an attractive way of conducting business. The author proposes a protocol for electronic auctions. This protocol ensures: (a) anonymity of the customer, (b) security from passive attacks, active attacks, message corruption, and loss of messages, (c) customer privacy, and (d) atomicity (i.e., under all circumstances, the transaction is either completed or aborted). A logic is developed based on the semantics of BAN-style logic (M. Burrows et al., 1990). Using this logic, the properties of anonymity, security, privacy, and atomicity are proved for the proposed protocol. | ['Srividhya Subramanian'] | Design and verification of a secure electronic auction protocol | 113,686 |
In a recent article, knowledge modelling at the knowledge level was introduced for the task of moving object detection in image sequences. In this paper, the algorithmic lateral inhibition (ALI) method is applied to the generic dynamic and selective visual attention (DSVA) task with the objective of moving object detection, labelling and further tracking. The four basic subtasks of our DSVA proposal, namely feature extraction, feature integration, attention building and attention reinforcement, are described in detail by inferential CommonKADS schemes. It is shown that the ALI method, in its various forms (recurrent and non-recurrent; temporal, spatial and spatial-temporal), can be used effectively as a problem-solving method in most of the subtasks involved in the DSVA task. | ['María T. López', 'Antonio Fernández-Caballero', 'José Mira', 'Ana E. Delgado', 'Miguel Angel Fernández'] | Algorithmic lateral inhibition method in dynamic and selective visual attention task: Application to moving objects detection and labelling | 433,033 |
The systematic study of subcellular location patterns is very important for fully characterizing the human proteome. Nowadays, with the great advances in automated microscopic imaging, accurate bioimage-based classification methods to predict protein subcellular locations are highly desired. All existing models were constructed on the independent parallel hypothesis, where the cellular component classes are positioned independently in a multi-class classification engine. The important structural information of cellular compartments is missed. To deal with this problem and develop more accurate models, we proposed a novel cell structure-driven classifier construction approach (SC-PSorter) that employs prior biological structural information in the learning model. Specifically, the structural relationship among the cellular components is reflected by a new codeword matrix under the error correcting output coding framework. Then, we construct multiple SC-PSorter-based classifiers corresponding to the columns of the error correcting output coding codeword matrix using a multi-kernel support vector machine classification approach. Finally, we perform the classifier ensemble by combining those multiple SC-PSorter-based classifiers via majority voting. We evaluate our method on a collection of 1636 immunohistochemistry images from the Human Protein Atlas database. The experimental results show that our method achieves an overall accuracy of 89.0%, which is 6.4% higher than the state-of-the-art method. The dataset and code can be downloaded from https://github.com/shaoweinuaa/. Contact: [email protected]. Supplementary data are available at Bioinformatics online. | ['Wei Shao', 'Mingxia Liu', 'Daoqiang Zhang'] | Human Cell Structure-driven Model Construction for Predicting Protein Subcellular Location from Biological Images | 325,308 |
For a graph G and an order a on V(G), we define a greedy defining set as a subset S of V(G) with an assignment of colors to vertices in S, such that the pre-coloring can be extended to a χ(G)-coloring of G by the greedy coloring of (G, a). A greedy defining set of a χ(G)-coloring C of G is a greedy defining set which results in the coloring C (by the greedy procedure). We denote the size of a greedy defining set of C with minimum cardinality by GDN(G, a, C). In this paper we show that the problem of determining GDN(G, a, C), for an instance (G, a, C), is an NP-complete problem. | ['Manouchehr Zaker'] | Greedy defining sets of graphs | 562,138 |
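For concreteness, here is a minimal sketch of the greedy procedure the definition refers to: vertices are colored along the order, each uncolored vertex taking the smallest color absent from its already-colored neighbors, starting from the pre-colored defining set (the data structures are assumptions).

```python
def greedy_extend(adj, order, precolored):
    """Extend a pre-coloring greedily along the given vertex order.
    adj: {v: set of neighbors}; order: list of vertices of G;
    precolored: {v: color} on the defining set S."""
    color = dict(precolored)
    for v in order:
        if v in color:
            continue                     # vertices of S keep their assigned color
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:                 # smallest color not used by neighbors
            c += 1
        color[v] = c
    return color
```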
This paper deals with the class of robotic assemblies where position uncertainty far exceeds assembly clearance, and visual assistance is not available to resolve the uncertainty. Under this scenario, we can implement a localization strategy that resolves the uncertainty using a pre-acquired map of all possible peg-hole contact configurations. Prior to assembly, the strategy explores the contact configuration space (C-space) by sequentially bringing the peg (under different configurations) into contact with the stationary (fixtured) hole and matching the contact configurations thus recorded with the map. The different peg configurations can be actively chosen to maximize uncertainty-reduction. However, with a sampled map of the contact C-space, discretization errors are introduced, and implementing deterministic matching (at the fine-grained level necessary for assembly) would soon become prohibitively expensive in terms of computation. Additionally, with global initial uncertainty, multiple solutions abound in our localization problem. In this paper, we introduce a particle filter implementation which can not only handle the discretization errors in map-matching, but also track multiple solutions simultaneously. The particle filter implementation was validated on computer simulations of round and square peg-in-hole assemblies, before testing it on corresponding actual robotic assemblies. The implementation was highly successful on both the assemblies, reducing the uncertainty by more than 95 % and making it easy for a previously-devised compliant strategy to achieve assembly. Results from the simulations and actual assemblies are reported. We also present a comparison of these results (using random localization: peg moves selected randomly) with preliminary results from assemblies using active localization. | ['Siddharth R. Chhatpar', 'Michael S. Branicky'] | Particle filtering for localization in robotic assemblies with position uncertainty | 112,650 |
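Below is a minimal generic particle-filter step of the kind described, assuming particles encode candidate peg-hole offsets and a user-supplied likelihood scores how well a hypothesis explains a recorded contact configuration against the pre-acquired map; this is a sketch of the technique, not the authors' code.

```python
import numpy as np

def particle_filter_step(particles, weights, move, contact_obs, likelihood):
    """particles: (N, d) candidate peg-hole offsets; weights: (N,) summing to 1;
    move: control applied to every hypothesis; likelihood(p, obs) compares a
    hypothesis against the recorded contact configuration."""
    N = len(particles)
    # Predict: apply the motion and add diffusion noise.
    particles = particles + move + np.random.normal(0.0, 1e-4, particles.shape)
    # Weight: score each hypothesis against the new contact observation.
    weights = weights * np.array([likelihood(p, contact_obs) for p in particles])
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses; multiple solutions
    # survive as distinct particle clusters until evidence removes them.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = np.random.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights
```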
The authors consider the end-point position control problem of a one-link flexible manipulator under wide spectrum of operating conditions. Uncertain stiffness variation may arise in practice and is treated as the uncertainty. A robust control scheme is designed for the proposed control problem; the scheme is capable of achieving better accuracy for the manipulator without the complete knowledge of uncertainty. The only required information of the uncertainty is its possible bound. | ['Victor V. Korolov', 'Y. H. Chen'] | Robust control of a flexible manipulator arm | 207,175 |
Application of Artificial Neural Networks for Inflow Estimation of Yuvacık Dam Catchment Area | ['Bahattin Yanık', 'Melih İnal', 'Erhan Butun'] | Application of Artificial Neural Networks for Inflow Estimation of Yuvacık Dam Catchment Area | 635,678 |
This study uses a p-timed Petri net to represent the initial scheduling of a project: places are tasks and each place has an assigned initial cost and initial execution time. Risk management is applied to project scheduling in order to identify and mitigate risks, where uncertain variables are modelled as stochastic variables. The algorithm determines the set of mitigation actions that reduce the risk exposure stated in the chance constraints of the proposed optimization problem. This work shows how this problem can be modelled as a stochastic optimization problem requiring that constraints should hold with a probability exceeding α. | ['Ascension Zafra-Cabeza', 'Miguel A. Ridao', 'Eduardo F. Camacho'] | Chance constrained project scheduling under risk | 79,666 |
We study the classical problem of noisy constrained capacity in the case of the binary symmetric channel (BSC), namely, the capacity of a BSC whose inputs are sequences chosen from a constrained set. Motivated by a result of Ordentlich and Weissman in [28], we derive an asymptotic formula (when the noise parameter is small) for the entropy rate of a hidden Markov chain, observed when a Markov chain passes through a BSC. Using this result we establish an asymptotic formula for the capacity of a BSC with input process supported on an irreducible finite type constraint, as the noise parameter tends to zero. 1. Introduction and Background. Let $X, Y$ be discrete random variables with alphabets $\mathcal{X}, \mathcal{Y}$ and joint probability mass function $p_{X,Y}(x,y) \triangleq P(X = x, Y = y)$, $x \in \mathcal{X}, y \in \mathcal{Y}$ (for notational simplicity, we will write $p(x,y)$ rather than $p_{X,Y}(x,y)$, and similarly $p(x), p(y)$ rather than $p_X(x), p_Y(y)$, respectively, when it is clear from the context). The entropy $H(X)$ of the discrete random variable $X$, which measures the level of uncertainty of $X$, is defined as (in this paper log is taken to mean the natural logarithm) | ['Guangyue Han', 'Brian Marcus'] | Asymptotics of Input-Constrained Binary Symmetric Channel Capacity | 349,744 |
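The text above breaks off at the entropy definition; the standard definition it is introducing (with the natural logarithm, as stated) is

$$H(X) = -\sum_{x \in \mathcal{X}} p(x)\,\ln p(x).$$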
Improving gender equity in computing programmes: some suggestions for increasing female participation and retention rates | ['G. Joy Teague', 'Valerie A. Clarke'] | Improving gender equity in computing programmes: some suggestions for increasing female participation and retention rates | 246,120 |
We introduce a new multiagent negotiation algorithm for large and complex domains, called NB3. It applies Branch & Bound to search for good offers to propose. To analyze its performance we present a new problem called the Negotiating Salesmen Problem. We have conducted some experiments with NB3 from which we conclude that it manages to decrease the traveling cost of the agents significantly, that it outperforms random search and that it scales well with the complexity of the problem. | ['Dave de Jonge', 'Carles Sierra'] | Branch and Bound for negotiations in large agreement spaces | 756,558 |
Background: The emergence of next-generation RNA sequencing (RNA-Seq) provides tremendous opportunities for researchers to analyze alternative splicing on a genome-wide scale. However, accurate detection of intron retention (IR) events from RNA-Seq data has remained an unresolved challenge in next-generation sequencing (NGS) studies. | ['Yang Bai', 'Shufan Ji', 'Yadong Wang'] | IRcall and IRclassifier: two methods for flexible detection of intron retention events from RNA-Seq data | 274,984 |
Virtual and augmented reality are becoming the new medium that transcends the way we interact with virtual content, paving the way for many immersive and interactive forms of applications. The main purpose of my research is to create a seamless combination of physiological sensing with virtual reality to provide users with a new layer of input modality or a form of implicit feedback. To achieve this, my research focuses on novel augmented reality (AR) and virtual reality (VR) based applications for a multi-user, multi-view, multi-modal system augmented by physiological sensing methods towards increased public and social acceptance. | ['Yun Suen Pai'] | Physiological Signal-Driven Virtual Reality in Social Spaces | 916,609 |
Computation outsourcing is an integral part of cloud computing. It enables end-users to outsource their computational tasks to the cloud and utilize the shared cloud resources in a pay-per-use manner. However, once the tasks are outsourced, the end-users will lose control of their data, which may result in severe security issues especially when the data is sensitive. To address this problem, secure outsourcing mechanisms have been proposed to ensure security of the end-users' outsourced data. In this paper, we investigate outsourcing of general computational problems which constitute the mathematical basics for problems emerged from various fields such as engineering and finance. To be specific, we propose affine mapping based schemes for the problem transformation and outsourcing so that the cloud is unable to learn any key information from the transformed problem. Meanwhile, the overhead for the transformation is limited to an acceptable level compared to the computational savings introduced by the outsourcing itself. Furthermore, we develop cost-aware schemes to balance the trade-offs between end-users' various security demands and computational overhead. We also propose a verification scheme to ensure that the end-users will always receive a valid solution from the cloud. Our extensive complexity and security analysis show that our proposed Cost-Aware Secure Outsourcing (CASO) scheme is both practical and effective. | ['Kai Zhou', 'Jian Ren'] | CASO: Cost-Aware Secure Outsourcing of General Computational Problems | 592,227 |
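For the special case of solving a linear system, the generic affine-mapping idea can be illustrated in a few lines: the client hides the problem behind secret random invertible matrices, the cloud solves the disguised instance, and the client undoes the mapping and verifies the answer. This is a hedged toy sketch of the general idea only; the paper's CASO scheme covers broader problem classes and adds cost-aware tuning.

```python
import numpy as np

def disguise_solve(A, b, cloud_solve):
    """Toy affine-mapping outsourcing of Ax = b.
    The cloud only ever sees (P A Q, P b); the client recovers x = Q y."""
    n = len(b)
    rng = np.random.default_rng()             # secret key material stays local
    P = rng.standard_normal((n, n))           # random Gaussian matrices are
    Q = rng.standard_normal((n, n))           # invertible with probability 1
    y = cloud_solve(P @ A @ Q, P @ b)         # cloud solves (PAQ) y = Pb
    x = Q @ y
    # Verification: accept only if the recovered solution satisfies Ax = b.
    assert np.allclose(A @ x, b, atol=1e-6)
    return x

x = disguise_solve(np.eye(3) * 2.0, np.ones(3), np.linalg.solve)   # x == 0.5
```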
Colonoscopy is an endoscopic technique that allows a physician to inspect the inside of the human colon and to perform - if deemed necessary - at the same time a number of diagnostic and therapeutic operations. In order to see the inside of the colon, a video signal of the internal mucosa of the colon is generated by a tiny video camera at the tip of the endoscope and displayed on a monitor for real-time analysis by the physician. We have captured and stored these videos in digital format and call these colonoscopy videos. Based on new algorithms for instrument detection and shot segmentation, we introduce new spatio-temporal analysis techniques to automatically identify an operation shot - a segment of visual data in a colonoscopy video that corresponds to a diagnostic or therapeutic operation. Our experiments on real colonoscopy videos demonstrate the effectiveness of the proposed approach. The proposed techniques and software are useful for 1) postprocedure review for causes of complications due to diagnostic or therapeutic operations; 2) establishment of an effective content-based retrieval system to facilitate endoscopic research and education; 3) development of a systematic approach to assess and improve the procedural skills of endoscopists. | ['Yu Cao', 'Danyu Liu', 'Wallapak Tavanapong', 'J. Wong', 'Jung Hwan Oh', 'P.C. de Groen'] | Computer-Aided Detection of Diagnostic and Therapeutic Operations in Colonoscopy Videos | 356,448 |
Increasing design complexity eventually leads to a design process that is distributed over several companies. This is already found in the automotive industry but SoC design appears to move in the same direction. Design processes for complex systems are iterative, but iteration hardly reaches beyond company borders. Iterations require availability of preliminary design data and estimations, but due to cost and liability issues suppliers often hesitate to provide such preliminary data. Moreover, companies are rarely able to judge the accuracy and precision of externally estimated data. So, the systems integrator experiences increased design risk. Particular mechanisms are needed to ensure, that the integrated system will meet the overall requirements even if part of the early estimations are wrong or imprecise. Based on work in supply chain management, we propose an inter-company design process that is based on formal techniques from real-time systems engineering and so called flexible quantity contracts. In this process, formal techniques control design risk and flexible contracts regulate cooperation and cost distribution. The process effectively delays the design freeze point beyond the contract conclusion to enable design iterations. We explain the process and give an example. | ['Judita Kruse', 'Clive Thomsen', 'Rolf Ernst', 'Thomas Volling', 'Thomas Spengler'] | Introducing Flexible Quantity Contracts into Distributed SoC and Embedded System Design Processes | 424,965 |
This paper describes a new approach for modeling joints in an articulated 3D body model for tracking of the configuration of a human body. The used model consists of a set of rigid generalized cylinders. The joints between the cylinders are modeled as artificial point correspondences within the ICP (iterative closest point) tracking algorithm, which results in a set of forces and torques maintaining the model constraints. It is shown that different joint types with different degrees of freedom can be modeled with this approach. Experiments show the functionality and robustness of the presented model | ['Steffen Knoop', 'Stefan Vacek', 'Rüdiger Dillmann'] | Modeling joint constraints for an articulated 3D human body model with artificial correspondences in ICP | 926,623 |
We treat the problem of movement prediction as a classification task. We assume the existence of a (gradually populated and/or trained) knowledge base and try to compare the movement pattern of a certain object with stored information in order to predict its future location. We introduce a novel distance metric function based on weighted spatial and velocity context used for location prediction. The proposed distance metric is compared with other distance metrics in the literature on real traffic data and reveals its superiority. | ['Theodoros Anagnostopoulos', 'Christos Anagnostopoulos', 'Stathes Hadjiefthymiades', 'Arkady B. Zaslavsky'] | On-Line Location Prediction Exploiting Spatial and Velocity Context | 421,468 |
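A hedged sketch of such a metric: positional mismatch is combined with velocity mismatch (obtained by finite differences), with a weight per context. The weights and the exact combination below are assumptions for illustration; the paper defines its own weighted spatial and velocity distance.

```python
import numpy as np

def trajectory_distance(traj_a, traj_b, w_spatial=0.5, w_velocity=0.5):
    """Weighted distance between two equal-length movement patterns.
    traj_*: (T, 2) arrays of positions sampled at a fixed rate."""
    d_pos = np.linalg.norm(traj_a - traj_b, axis=1).mean()     # spatial context
    v_a, v_b = np.diff(traj_a, axis=0), np.diff(traj_b, axis=0)
    d_vel = np.linalg.norm(v_a - v_b, axis=1).mean()           # velocity context
    return w_spatial * d_pos + w_velocity * d_vel
```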
Machine learning can differentiate venom toxins from other proteins having non-toxic physiological functions | ['Ranko Gacesa', 'David Barlow', 'Paul F. Long'] | Machine learning can differentiate venom toxins from other proteins having non-toxic physiological functions | 904,020 |
This note studies the performance of control systems subject to average data-rate limits. We focus on a situation where a noisy LTI system has been designed assuming transparent feedback and, due to implementation constraints, a source coding scheme (with unity signal transfer function) has to be deployed in the feedback path. For this situation and by focusing on a specific source coding scheme, we give a closed-form upper bound on the minimal average data-rate that allows one to attain a given performance level. Instrumental to our main result is the explicit solution of a related signal-to-noise ratio minimization problem, subject to a closed loop performance constraint. | ['Eduardo I. Silva', 'Milan S. Derpich', 'Jan Østergaard'] | An Achievable Data-Rate Region Subject to a Stationary Performance Constraint for LTI Plants | 326,416 |
Since the crash of the dot.coms, investors have gotten a lot more careful with where they place their money. Now more than ever it becomes really important for venture capitalists (VCs) to monitor the state of the startups market and continually update their investment strategy to suit the rapidly changing market conditions. This paper presents three new visualization metaphors (Spiral Map, TimeTicker, and Double Histogram) for monitoring the startups market. While we are focusing on the VC domain, the visual metaphors developed are general and can be easily applied to other domains. | ['Mei C. Chuah'] | Demystifying venture capital investing | 268,389 |
A novel graph clustering algorithm (kNAS) is proposed for overlapping community detection in large graphs, combining topological and attribute similarity to partition the large graph into m clusters having high intracluster and low intercluster similarity. The core nodes in the graph are identified using the Local Outlier Factor. Structural similarity is based on grouping nodes in the neighbourhood of the core node, and attribute similarity is achieved using a similarity score. An objective function is defined for faster convergence of the clustering algorithm. Density and the Tanimoto coefficient are the validation measures used to assess the effectiveness and quality of the proposed algorithm against existing algorithms. A simple and novel approach to identify clusters based on structural and attribute similarity in a graph network is proposed, which is a fundamental task in community detection. We identify the dense nodes using the Local Outlier Factor (LOF) approach, which measures the degree of outlierness and forms the basic intuition for generating the initial core nodes of the clusters. Structural similarity is identified using the k-neighbourhood, and attribute similarity is estimated through a similarity score among the nodes in the group of structural clusters. An objective function is defined to obtain quick convergence in the proposed algorithm. Through extensive experiments on a dataset (DBLP) with varying sizes, we demonstrate the effectiveness and efficiency of our proposed k-Neighbourhood Attribute Structural (kNAS) algorithm over state-of-the-art methods which attempt to partition the graph based on structural and attribute similarity in the field of community detection. Additionally, we find the qualitative and quantitative benefit of combining both similarities in a graph. | ['M. Parimala Boobalan', 'Daphne Lopez', 'Xiao Zhi Gao'] | Graph clustering using k-Neighbourhood Attribute Structural similarity | 816,322 |
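As a small illustration of the LOF-based core-selection step, the sketch below ranks nodes by their Local Outlier Factor and keeps the least outlying (densest) ones as initial cluster cores. It assumes nodes have already been given a numeric embedding, and uses scikit-learn's LocalOutlierFactor for convenience; the paper's own computation is not reproduced.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def pick_core_nodes(embeddings, n_neighbors=20, n_cores=10):
    """embeddings: (n, d) numeric representation of the graph's nodes.
    Returns indices of the n_cores densest nodes."""
    lof = LocalOutlierFactor(n_neighbors=n_neighbors)
    lof.fit(embeddings)
    # negative_outlier_factor_ is near -1 for inliers and much smaller for
    # outliers, so descending order puts the densest candidates first.
    dense_first = np.argsort(-lof.negative_outlier_factor_)
    return dense_first[:n_cores]
```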
Tightly-Coupled Model Aided Visual-Inertial Fusion for Quadrotor Micro Air Vehicles | ['Dinuka M. W. Abeywardena', 'Gamini Dissanayake'] | Tightly-Coupled Model Aided Visual-Inertial Fusion for Quadrotor Micro Air Vehicles | 132,796 |
Completely Independent Spanning Trees (CISTs) are a very useful construct in a computer network. They find applications in many important network functions, especially in reliable broadcasting, i.e., guaranteeing broadcasting operation in the presence of faulty nodes. Determining the existence of two CISTs in an arbitrary network is an NP-hard problem. Therefore most research on CISTs to date has concerned networks of specific structures. In this paper, we propose an algorithm to construct two CISTs in the crossed cube, a prominent, widely studied variant of the well-known hypercube. The construction algorithm will be presented, and its correctness proved. Based on that, the existence of two CISTs in a special Bijective Connection network based on the crossed cube is also discussed. | ['Baolei Cheng', 'Dajin Wang', 'Jianxi Fan'] | Constructing completely independent spanning trees in crossed cubes | 969,562 |
We develop an information-theoretic framework that explores the identifiability of top-K ranked items. The goal of the problem considered herein is to recover a consistent ordering that emphasizes the top-K ranked items, based on partially revealed preferences. Under the Bradley-Terry-Luce model that postulates a set of latent preference scores underlying all items and the odds of paired comparisons depend only on the relative scores of the items involved, we characterize the fundamental limits (up to some constant gap) on the amount of information required for reliably identifying the top-K ranked items. Here we introduce an information-theoretic notion of reliable ranking, meaning that the probability of the estimated ranking being inconsistent with the ground truth can be made arbitrarily close to zero. We single out one significant measure that plays a crucial role in determining the limits: the separation measure that quantifies the gap of preference scores between the Kth and (K + 1)th ranked items. We show that the minimum sample complexity required for reliable top-K ranking scales inversely with the separation measure. | ['Yuxin Chen', 'Changho Suh'] | Top-K ranking: An information-theoretic perspective | 588,940 |
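Under the BTL model, the odds of each paired comparison depend only on the latent scores of the two items involved:

$$P(\text{item } i \text{ beats item } j) = \frac{w_i}{w_i + w_j}.$$

One natural normalization of the gap between the Kth and (K+1)th ranked items is $\Delta_K = (w_{(K)} - w_{(K+1)})/w_{\max}$ (an assumption here; the paper's exact definition is not reproduced), and the minimum sample complexity grows as this separation shrinks.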
This paper explores existing theories, frameworks and models for handling collective user experience in the context of Distributed Interactive Multimedia Environments (DIME) and more specifically Augmented Sport applications. Besides discussing previous experimental work in the domain of Augmented Sport, we introduce Future Media Internet (FMI) technologies in relation with Mixed Reality (MR) platforms, user experience (UX), Quality of Service (QoS) and Quality of Experience (QoE) within 3D Tele-Immersive Environments that are part of the broad DIME domain. Finally, we present the 3D LIVE project QoS-UX-QoE approach and model that will be applied in experiments along three use cases (Skiing, Jogging and Golfing) to anticipate potential user adoption. | ['Marc Pallot', 'Remy Eynard', 'Benjamin Poussard', 'Olivier Christmann', 'Simon Richir'] | Augmented sport: exploring collective user experience | 528,516 |
Exact Graph Edit Distance Computation Using a Binary Linear Program | ['Julien Lerouge', 'Zeina Abu-Aisheh', 'Romain Raveaux', 'Pierre Héroux', 'Sébastien Adam'] | Exact Graph Edit Distance Computation Using a Binary Linear Program | 934,849 |
The search for a useful explanatory model based on a Bayesian Network (BN) now has a long and successful history. However, when the dependence structure between the variables of the problem is asymmetric then this cannot be captured by the BN. The Chain Event Graph (CEG) provides a richer class of models which incorporates these types of dependence structures as well as retaining the property that conclusions can be easily read back to the client. We demonstrate on a real health study how the CEG leads us to promising higher scoring models and further enables us to make more refined conclusions than can be made from the BN. Further we show how these graphs can express causal hypotheses about possible interventions that could be enforced. It is demonstrated on a real health study how the CEG can refine an initial BN analysis. Equivalent scoring methods allow for direct comparison between models. The CEG leads to higher scoring models from which more refined conclusions are made. | ['Lorna M. Barclay', 'Jane L. Hutton', 'Jim Q. Smith'] | Refining a Bayesian Network using a Chain Event Graph | 28,254 |
Internet of Things (IoT) is a global network of physical and virtual ‘things’ connected to the internet. Each object has a unique ID which is used for identification. IoT is an emerging technology which will change the way we interact with devices. In future almost every electronic device will be a smart device which can compute and communicate with hand-held and other infrastructure devices. As most of the devices may be battery operated and have little processing power, security and privacy are major issues in IoT. Authentication, identification and device heterogeneity are the major security and privacy concerns in IoT. Major challenges include integration, scalability, ethics, communication mechanisms, business models and surveillance. This paper focuses on the major issues related to the security and privacy of IoT. | ['Aqeel ur Rehman', 'Sadiq ur Rehman', 'Iqbal Uddin Khan', 'Muzaffar Moiz', 'Sarmad Hasan'] | Security and Privacy Issues in IoT | 966,960 |
In this study, alternative methods for studying legibility of text while walking with a mobile phone were examined. Normal reading and pseudo-text search were used as visual tasks in four walking conditions. Visual performance and subjective evaluation of task difficulty were used as measures of text legibility. According to the results, visual performance suffers from increasing walking speed, and the effects are greater on reading velocity for pseudo-text search. Subjects also use more homogenous strategies when reading compared to pseudo-text search, and therefore it is concluded that reading is a more useful measure of legibility. Subjective measures are found to be more sensitive to small variations in legibility than objective measures, and give additional information about task demands. Hence, without both objective and subjective measurements important information about legibility in different conditions and with different tasks will be lost. | ['Terhi Mustonen', 'Maria Olkkonen', 'Jukka Häkkinen'] | Examining mobile phone text legibility while walking | 310,913 |
Information is a strategic company resource, but there is no consensus in the literature regarding the set of dimensions to be considered when measuring the quality of the information. Most measures of information quality depend on user perception. Using multiple correlation analysis, we obtain a model that allows us to explain how information quality dimensions influence information consumers’ overall feeling of being well informed. A set of dimensions that any measure of information quality should at least include is proposed. This exploratory study reports the results of a research survey among managers of companies committed to quality management within the framework of a Total Quality Management (TQM) model, which is an information-intensive management model. | ['Marta Zárraga-Rodríguez', 'M. Jesús Álvarez'] | Experience: Information Dimensions Affecting Employees’ Perceptions Towards Being Well Informed | 616,865 |
Unlabeled data and other marginals. | ['Mark Hasegawa-Johnson', 'Jui-Ting Huang', 'Xiaodan Zhuang'] | Unlabeled data and other marginals. | 785,382 |
In recent years, video forensics has become an important issue. Video inter-frame forgery detection is a significant branch of forensics. In this paper, a new algorithm based on the consistency of velocity field is proposed to detect video inter-frame forgery (i.e., consecutive frame deletion and consecutive frame duplication). The generalized extreme studentized deviate (ESD) test is applied to identify the forgery types and locate the manipulated positions in forged videos. Experiments show the effectiveness of our algorithm. | ['Yuxing Wu', 'Xinghao Jiang', 'Tanfeng Sun', 'Wang Wq'] | Exposing video inter-frame forgery based on velocity field consistency | 254,004 |
Ontologies have primarily been promoted to facilitate inter agent communication and knowledge reuse. There has not been as much emphasis on using ontologies to improve system quality. As a result, typically, ontology development, and verification and validation are treated as different stages in the life cycle. However, this paper argues that ontology design should include emergent knowledge that previously might only have been considered or generated at the time of verification and validation. Emergent knowledge differs from the existent knowledge that is typically included in ontologies. Existent variable knowledge is knowledge about variables that derives from the model of the variable being used, e.g., how is a conceptual variable measured. Emergent variable knowledge is knowledge about variables that emerges after variables have been named, e.g., cardinality, which variables interact with each other and how, e.g., independent and dependent variables. Much emergent knowledge can be used for verification and validation. Including emergent knowledge can facilitate the use of ontologies to design and build systems with fewer anomalies prior to testing. | ["Daniel E. O'Leary"] | Functional ontology artifacts: Existent and emergent knowledge | 508,461 |
This paper describes an inexpensive kit which is designed to allow users to get acquainted with modern instrumentation without the necessity to get to a laboratory where the instruments are available. The kit takes advantage of any low-cost audio card installed into a personal computer and is based on an open-source code, which interacts with the audio card, and on a simple test board, which allows the audio card to be calibrated. The kit is the base of the Instrumentation Training Project at the Politecnico di Torino, which is composed of a set of guided experiments based on the capabilities of the training kit. | ['Alessio Carullo', 'Marco Parvis', 'Alberto Vallan'] | An audio card-based kit for educational purposes | 551,472 |
Social media platforms, such as Twitter, offer a rich source of real-time information about real-world events, particularly during mass emergencies. Sifting valuable information from social media provides useful insight into time-critical situations for emergency officers to understand the impact of hazards and act on emergency responses in a timely manner. This work focuses on analyzing Twitter messages generated during natural disasters, and shows how natural language processing and data mining techniques can be utilized to extract situation awareness information from Twitter. We present key relevant approaches that we have investigated including burst detection, tweet filtering and classification, online clustering, and geotagging. | ['Jie Yin', 'Sarvnaz Karimi', 'Andrew Lampert', 'Mark A. Cameron', 'Bella Robinson', 'Robert Power'] | Using Social Media to Enhance Emergency Situation Awareness: Extended Abstract | 680,169 |
A deductive database audit trail | ['David L. Sallach'] | A deductive database audit trail | 398,636 |
The safe and economic exploitation of remote manipulation techniques is dependent upon accurate, responsive handling abilities. This usually means the use of teleoperation. However, the construction and control of effective general purpose end-effectors remains complex, and tactile data collection by, and feedback from, these devices is at best primitive. This work studies the development of an advanced instrumented finger with multimodal tactile sensations ranging from contact pressure/force, to hardness, texture, temperature, slip, surface profile/shape, and thermal conductivity. This is integrated with a portable gloved tactile feedback unit providing the operator with directly stimulated feedback of tactile data (tele-taction) on the pressure, vibrational and thermal effects of the handling operation. | ['Darwin G. Caldwell', 'Clarence Gosney'] | Enhanced tactile feedback (tele-taction) using a multi-functional sensory system | 242,096 |
The performance of embodied multi-agent systems depends, in addition to the agent architectures of the employed agents, on their physical characteristics (e.g., sensory range, speed, etc.) and group properties (e.g., number of agents, types of agents, etc.). Consequently, it is difficult to evaluate the performance of a multi-agent system based on the performance of an agent architecture alone, even in homogeneous teams. In this paper, we propose a method for analyzing the performance of multi-agent systems based on the notion of "performance-cost-tradeoff," which attempts to determine the relations among different cost-dimensions by performing a performance sampling of these dimensions and comparing them relative to their associated costs. Specifically, we investigate the performance-cost tradeoffs of four candidate architectures for a multi-agent territory exploration task in which a group of agents is required to visit a set of checkpoints randomly placed in an environment in the shortest time possible. Performance tradeoffs between three dimensions (sensory range, group size, and prediction) are then used to illustrate the cost-benefit analyses performed to determine the best agent configurations for different practical settings. | ['Paul W. Schermerhorn', 'Matthias Scheutz'] | Social, Physical, and Computational Tradeoffs in Collaborative Multi-agent Territory Exploration Tasks | 423,183 |
In this paper, we describe the implementation of an optimization suite (OS) to facilitate the scheduling of radio advertisements for one of the largest media companies in the United States. Advertisements are scheduled adhering to complex criteria from the advertisers with the objective of maximizing revenue for the company. Advertisers offer two types of flexibility for demand fulfillment: market flexibility provides an opportunity to shift demand across demographics, and time flexibility allows demand to be shifted across the broadcasting time horizon. The scale of inventories, fair and equitable distribution, flexibilities, and other complex criteria from the advertisers necessitated the development of a sophisticated OS to generate rosters for the placement of advertisements. The OS uses optimization models and four heuristic procedures to generate an advertisement placement roster for each station. The company has integrated the OS into its information systems to seamlessly incorporate optimization into its decision-making process. | ['Saravanan Venkatachalam', 'Fion Wong', 'Emrah Uyar', 'Stan Ward', 'Amit Aggarwal'] | Media Company Uses Analytics to Schedule Radio Advertisement Spots | 590,838 |
Multiobjective evolutionary clustering algorithms are based on the optimization of several objective functions that guide the search following a cycle based on evolutionary algorithms. Their capabilities allow them to find better solutions than conventional clustering algorithms when more than one criterion is necessary to obtain understandable patterns from the data. However, these kinds of techniques are expensive in terms of computational time and memory usage, and specific strategies are required to ensure their successful scalability when facing large-scale data sets. This work proposes the application of a data subset approach for scaling up multiobjective clustering algorithms and also analyzes the impact of three stratification methods. The experiments show that the proposed data subset approach improves the performance of multiobjective evolutionary clustering algorithms without considerably penalizing the accuracy of the final clustering solution. | ['Alvaro Garcia-Piquer', 'Jaume Bacardit', 'Albert Fornells', 'Elisabet Golobardes'] | Scaling-up multiobjective evolutionary clustering algorithms using stratification | 951,993 |
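A minimal sketch of the data subset idea: draw a stratified subsample that preserves the proportion of each stratum before running the expensive multiobjective clustering. The stratum assignment, sampling fraction, and helper name `stratified_subset` are illustrative assumptions, not the paper's exact procedure.

```python
import random

def stratified_subset(points, strata, fraction=0.1, seed=42):
    """Draw a subset that preserves the proportion of each stratum.

    `points` is a list of feature vectors and `strata` assigns each
    point to a stratum (e.g., a cell of a coarse pre-clustering).
    """
    rng = random.Random(seed)
    by_stratum = {}
    for p, s in zip(points, strata):
        by_stratum.setdefault(s, []).append(p)
    subset = []
    for s, members in by_stratum.items():
        # keep at least one representative per stratum
        k = max(1, round(fraction * len(members)))
        subset.extend(rng.sample(members, k))
    return subset
```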
In this work, some sufficient conditions are obtained for the oscillation of all solutions of even-order neutral differential equations with variable coefficients and delays. Our results improve and generalize known results. In particular, the results are new even when n = 2. | ['Quanxin Zhang', 'Jurang Yan'] | Oscillation behavior of even order neutral differential equations with variable coefficients | 678,884 |
This paper presents the software architecture and middleware for a distributed smart camera (DSC) system that performs multi-target tracking. Middleware is used to abstract away important operations from the details of network operation. The paper describes the tracking algorithms and their middleware needs, the software architecture of the tracker, and the lessons learned for the next generation of tracker. | ['Senem Velipasalar', 'Wayne H. Wolf'] | Lessons from a distributed peer-to-peer smart tracker | 562,496 |
A nuclear export signal (NES) is a protein localization signal involved in the binding of cargo proteins to the nuclear export receptor, and it thus helps regulate the localization of cellular proteins. Consensus sequences of NES have been used to detect NES from protein sequences, but suffer from poor predictive power. Several recent works have proposed using biochemical properties of experimentally verified NES to refine NES candidates. Those methods can achieve high prediction rates, but their execution time becomes unacceptable for large-scale NES searching if too many properties are involved. In this work, we developed a novel computational approach, named NES-REBS, to search for NES in protein sequences, where biochemical properties of experimentally verified NES, including secondary structure and surface accessibility, are utilized to refine NES candidates obtained by matching popular consensus sequences. We test our method by searching for 262 experimentally verified NES in 221 NES-containing protein sequences. NES-REBS runs in 2–3 min and performs well, achieving a precision rate of 47.2% and a sensitivity of 54.6%. | ['Tingfang Wu', 'Xun Wang', 'Zheng Zhang', 'Faming Gong', 'Tao Song', 'Zhihua Chen', 'Pan Zhang', 'Yang Zhao'] | NES-REBS: A novel nuclear export signal prediction method using regular expressions and biochemical properties | 702,842 |
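As an illustration of the consensus-matching stage, the sketch below scans a sequence with one widely cited NES consensus pattern (Phi-X(2,3)-Phi-X(2,3)-Phi-X-Phi with Phi in {L, I, V, F, M}). The actual regular expressions and the refinement by secondary structure and surface accessibility used in NES-REBS are not specified in the abstract, so this pattern and the helper name `nes_candidates` are assumptions for illustration only.

```python
import re

# One widely cited NES consensus; NES-REBS uses its own set of
# "popular consensus sequences", which may differ from this one.
NES_CONSENSUS = re.compile(r"[LIVFM].{2,3}[LIVFM].{2,3}[LIVFM].[LIVFM]")

def nes_candidates(sequence):
    """Return (start, end, match) tuples for candidate NES sites."""
    return [(m.start(), m.end(), m.group())
            for m in NES_CONSENSUS.finditer(sequence.upper())]

print(nes_candidates("MAELQRLVDELSAKLDQLTLM"))  # [(10, 20, 'LSAKLDQLTL')]
```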
In advancing our prior work on a unified theory for pseudospectral (PS) optimal control, we present the mathematical foundations for spectral collocation over arbitrary grids. The computational framework is not based on any particular choice of quadrature nodes associated with orthogonal polynomials. Because our framework applies to non-Gaussian grids, a number of hidden properties are uncovered. A key result of this paper is the discovery of the dual connections between PS and Galerkin approximations. Inspired by Polak's pioneering work on consistent approximation theory, we analyze the dual consistency of PS discretization. This analysis reveals the hidden relationship between Galerkin and pseudospectral optimal control methods while uncovering some finer points on covector mapping theorems. The new theory is used to demonstrate via a numerical example that a PS method can be surprisingly robust to grid selection. For example, even when 60% of the grid points are chosen to be uniform--the worst possible selection from a pseudospectral perspective--a PS method can still produce satisfactory results. Consequently, it may be possible to choose non-Gaussian grid points to support different resolutions over the same grid. | ['Qi Gong', 'I.M. Ross', 'Fariba Fahroo'] | Spectral and Pseudospectral Optimal Control Over Arbitrary Grids | 653,623 |
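Spectral collocation over arbitrary nodes can be made concrete with a barycentric differentiation matrix, which is well defined for non-Gaussian (e.g., partly uniform) grids. This is a standard construction sketching the machinery, not the paper's full optimal-control discretization; the mixed grid in the demo is an assumption for illustration.

```python
import numpy as np

def diff_matrix(x):
    """Spectral differentiation matrix for arbitrary distinct nodes x.

    Built from barycentric weights of the Lagrange interpolant, so it
    applies to non-Gaussian (e.g., partly uniform) grids as well.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)
    w = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -D[i].sum()  # rows of D annihilate constants
    return D

# differentiate f(x) = x**3 exactly on a mixed Chebyshev/uniform grid
x = np.sort(np.concatenate([np.cos(np.linspace(0, np.pi, 6)),
                            np.linspace(-0.9, 0.9, 4)]))
print(np.max(np.abs(diff_matrix(x) @ x**3 - 3 * x**2)))  # ~0 up to rounding
```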
Geographically replicating popular objects in the Internet speeds up content distribution at the cost of keeping the replicas consistent and up-to-date. The overall effectiveness of replication can be measured by the total communication cost consisting of client accesses and consistency management, both of which depend on the locations of the replicas. This paper investigates the problem of placing replicas under the widely used TTL-based consistency scheme. A polynomial-time algorithm is proposed to compute the optimal placement of a given number of replicas in a network. The new replica placement scheme is compared, using real Internet topologies and Web traces, against two existing approaches which do not consider consistency management or assume an invalidation-based consistency scheme. The factors affecting their performance are identified and discussed. | ['Xueyan Tang', 'Huicheng Chi', 'Samuel T. Chanson'] | Optimal Replica Placement under TTL-Based Consistency | 341,267 |
We present an aggressive interprocedural analysis for inferring value equalities which are independent of the concrete interpretation of the operator symbols. These equalities, called Herbrand equalities, are therefore an ideal basis for truly machine-independent optimizations as they hold on every machine. Besides a general correctness theorem, covering arbitrary call-by-value parameters and local and global variables, we also obtain two new completeness results: one by constraining the analysis problem to Herbrand constants, and one by allowing side-effect-free functions only. Thus if we miss a constant/equality in these two scenarios, then there exists a separating interpretation of the operator symbols. | ['Markus Müller-Olm', 'Helmut Seidl', 'Bernhard Steffen'] | Interprocedural herbrand equalities | 472,055 |
A neuro-fuzzy classifier (NFC) of sleep-wake states and stages has been developed for healthy infants of ages 6 mo and onward. The NFC takes five input patterns previously identified on 20-s epochs from polysomnographic recordings and assigns them to one out of five possible classes: Wakefulness, REM-Sleep, Non-REM Sleep Stage 1, Stage 2, and Stage 3-4. The definite criterion for a sleep state or stage to be established is a duration of at least 1 min. The data set consisted of a total of 14 continuous recordings of naturally occurring naps (average duration: 143±39 min), corresponding to a total of 6021 epochs. They were divided into a training, a validation and a test set with 7, 2, and 5 recordings, respectively. During supervised training, the system determined the fuzzy concepts associated with the inputs and the rules required for performing the classification, extracting knowledge from the training set and pruning nonrelevant rules. Results on an independent test set achieved 83.9±0.4% of expert agreement. The fuzzy rules obtained from the training examples without a priori information showed a high level of coincidence with the crisp rules stated by the experts, which are based on internationally accepted criteria. These results show that the NFC can be a valuable tool for implementing an automated sleep-wake classification system. | ['Claudio M. Held', 'Jaime E. Heiss', 'Pablo A. Estévez', 'Claudio A. Perez', 'Marcelo Garrido', 'Cecilia Algarín', 'Patricio Peirano'] | Extracting Fuzzy Rules From Polysomnographic Recordings for Infant Sleep Classification | 422,336 |
Detection of Large Segmentation Errors with Score Predictive Model | ['Martin Matura', 'Jindřich Matoušek'] | Detection of Large Segmentation Errors with Score Predictive Model | 641,554 |
In this paper, we propose a simple and efficient mechanism to increase the throughput of an adaptive router in Network-on-Chip (NoC). One of the most serious disadvantages of fully adaptive wormhole routers is their performance degradation due to the routing decision time. The key idea to overcome this shortcoming is the use of different clocks in a head flit and body flits, because the body flits can be forwarded immediately and the FIFO usually operates faster than the route decision logic in an adaptive router. The major contributions of this paper are: 1) a proposal of a simple and efficient mechanism to improve the performance of fully adaptive wormhole routers, 2) a quantitative evaluation of the proposed mechanism showing that it can support higher throughput than a conventional one, and 3) an evaluation of the hardware overhead for the proposed router. In summary, the proposed clock boosting mechanism enhances the throughput of the original adaptive router by increasing the accepted load and decreasing the average latency in the region of effective bandwidth. | ['Nader Bagherzadeh', 'Seung Eun Lee'] | Increasing the throughput of an adaptive router in network-on-chip (NoC) | 263,751 |
Exploiting Sequential Influence for Personalized Location-Based Recommendation Systems | ['Jia-Dong Zhang', 'Chi-Yin Chow'] | Exploiting Sequential Influence for Personalized Location-Based Recommendation Systems | 815,954 |
Services Personalization Approach for a Collaborative Care Ecosystem | ['Thais Andrea Baldissera', 'Luis M. Camarinha-Matos'] | Services Personalization Approach for a Collaborative Care Ecosystem | 888,558 |
We present a code transformation for concurrent data structures, which increases their scalability without sacrificing correctness. Our transformation takes lock-based code and replaces some of the locking steps therein with optimistic synchronization in order to reduce contention. The main idea is to have each operation perform an optimistic traversal of the data structure as long as no shared memory locations are updated, and then proceed with pessimistic code. The transformed code inherits essential properties of the original one, including linearizability, serializability, and deadlock freedom. Our work complements existing pessimistic transformations that make sequential code thread-safe by adding locks. In essence, we provide a way to optimize such transformations by reducing synchronization bottlenecks (for example, locking the root of a tree). The resulting code scales well and significantly outperforms pessimistic approaches. We further compare our synthesized code to state-of-the-art data structures implemented by experts. We find that its performance is comparable to that achieved by the custom-tailored implementations. Our work thus shows the promise that automated approaches bear for overcoming the difficulty involved in manually hand-crafting concurrent data structures. | ['Maya Arbel', 'Guy Golan-Gueta', 'Eshcar Hillel', 'Idit Keidar'] | Towards Automatic Lock Removal for Scalable Synchronization | 600,019 |
The Role of Knowledge Keepers in an Artificial Primitive Human Society: An Agent-Based Approach | ['Marzieh Jahanbazi', 'Christopher Frantz', 'Maryam Purvis', 'Martin K. Purvis'] | The Role of Knowledge Keepers in an Artificial Primitive Human Society: An Agent-Based Approach | 858,870 |
Changes in Cognitive Processes upon Learning Mini-Shogi. | ['Takeshi Ito', 'Daisuke Takano', 'Xiaohong Wan', 'Keiji Tanaka'] | Changes in Cognitive Processes upon Learning Mini-Shogi. | 800,615 |
This paper explores and compares the nature of nonlinear filtering techniques for mobile robot pose estimation. Three nonlinear filters are implemented: the extended Kalman filter (EKF), the unscented Kalman filter (UKF) and the particle filter (PF). The comparison criteria are the magnitude of the pose-estimation error, the computational time, and the robustness of each filter to noise. The filters are applied to two applications: the pose estimation of a two-wheeled robot on an experimental platform and the pose estimation of a three-wheeled robot in a simulated environment. In both the experimental and the simulated platforms, the robots move along a nonlinear trajectory such as a circular arc or a spiral. The performance of their pose estimation is compared and analysed in this paper. | ['Zongwen Xue', 'Howard M. Schwartz'] | A comparison of mobile robot pose estimation using nonlinear filters: simulation and experimental results | 708,858 |
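For concreteness, one EKF prediction step for a unicycle-type pose model is sketched below; the motion model, time step, and noise covariance are illustrative assumptions, and the paper's filters also include the measurement update.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """One EKF prediction step for a unicycle pose x = [px, py, theta].

    v, w are the commanded linear and angular velocities; Q is the
    process-noise covariance. A minimal sketch, not the paper's exact model.
    """
    px, py, th = x
    # nonlinear motion model
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

x, P = np.zeros(3), np.eye(3) * 0.01
print(ekf_predict(x, P, v=0.5, w=0.1, dt=0.1, Q=np.eye(3) * 1e-4))
```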
A Reverse Approach to Named Entity Extraction and Linking in Microposts. | ['Kara Greenfield', 'Rajmonda S. Caceres', 'Michael Coury', 'Kelly Geyer', 'Youngjune Gwon', 'Jason Matterer', 'Alyssa Mensch', 'Cem Safak Sahin', 'Olga Simek'] | A Reverse Approach to Named Entity Extraction and Linking in Microposts. | 983,717 |
This research study analyses the impact of semiconductor (t_osc) and dielectric (t_ox) thicknesses on top contact (TC) and bottom contact (BC) organic thin film transistors (OTFTs) using Atlas 2-D numerical device simulation. The thickness of each layer is varied from 20 to 150 nm. Parameters such as electric field, charge carrier distribution and trap density are analysed from a device physics point of view with variations in the organic semiconductor layer and dielectric thicknesses. A decrease of 22% in the TC to BC current ratio is observed for the maximum increase in t_osc, whereas it remains almost constant at unity with variations in t_ox. Furthermore, the maximum mobility for TC is achieved at a t_osc of 20 nm and reduces monotonically with further increase in thickness because of the lowering of average charge. However, its highest value is obtained at 60 nm for the BC structure and declines with positive or negative change in t_osc. Besides this, the threshold voltage (V_t) shows a reduction of 50% for both structures on scaling down t_ox from 150 to 20 nm. Furthermore, the ON to OFF current ratio is found to be more dependent on t_osc as compared with t_ox. This is because of a dominant impact of t_osc reduction on the OFF current as compared with the impact of t_ox reduction on the ON current. Additionally, a decrease in contact resistance (R_C) is observed in the TC structure for a thicker active layer while operating at lower V_gs. However, at high gate voltage, t_osc maps to the access resistance, which results in higher R_C values. | ['Brijesh Kumar', 'Brajesh Kumar Kaushik', 'Yuvraj Singh Negi'] | Analysis of electrical parameters of organic thin film transistors based on thickness variation in semiconducting and dielectric layers | 131,653 |
OFDM is one of the promising modulation candidates for a fourth generation broadband mobile communication system because of its robustness against intersymbol interference (ISI). The adaptive modulation scheme is also an efficient way to increase the transmission rate by changing the channel modulation scheme according to the estimated channel state information. Since its implementation depends on the channel environment of the system and the control period set using feedback information, this paper presents an evaluation of the effects of various modulation scheme combinations, target BER, Doppler frequency, and various adaptation intervals as the control period on the performance of adaptive OFDM. We also propose a predicted feedback information scheme which increases the adaptation interval using predicted power estimation in order to reduce the transmission time of feedback information from receiver to transmitter. Computer simulation results show that the case with the BPSK, QPSK and 16QAM modulation combination at target BER 10^-2 achieves a 2 Mbit/s improvement over other combination cases at high Doppler frequency. On the other hand, at target BER 10^-3, the case with the BPSK, QPSK, 8PSK and 16QAM modulation combination achieves a 3 Mbit/s improvement compared to the case of target BER 10^-2. It is also shown that the predicted feedback information scheme effectively reduces the transmission time of feedback information from the receiver to the transmitter. | ['Chang Jun Ahn', 'Iwao Sasase'] | The effects of modulation combination, target BER, Doppler frequency, and adaptation interval on the performance of adaptive OFDM in broadband mobile channel | 197,717 |
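The adaptive modulation idea can be sketched as a threshold rule: pick the highest-rate scheme whose estimated SNR requirement is met. The threshold values below are placeholders, not the switching levels derived in the paper for a given target BER.

```python
# Illustrative SNR thresholds (dB); the actual switching levels depend
# on the channel model and the target BER used in the paper.
SCHEMES = [("BPSK", 1, 6.0), ("QPSK", 2, 9.0),
           ("8PSK", 3, 14.0), ("16QAM", 4, 17.0)]

def select_modulation(snr_db):
    """Pick the highest-rate scheme whose SNR threshold is met."""
    chosen = None
    for name, bits_per_symbol, threshold in SCHEMES:
        if snr_db >= threshold:
            chosen = (name, bits_per_symbol)
    return chosen  # None means the channel is too poor to transmit

print(select_modulation(15.2))  # ('8PSK', 3)
```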
This work presents the electronic design of an intelligent device that includes a monitoring system for automatic movements of a robotic hospital bed based on posture classification and identification. This feature was developed in response to the needs defined by the application of a diagnostic identification methodology. This method was successfully applied in a public Mexican hospital, where the issue identified was the mobility of elderly people and physically challenged individuals. The movement of these patients can be performed routinely or sporadically during their stay in a hospital. For patients who require a routine application of this action, the system includes an intelligent monitoring system. This system allows medical experts to program the movements of the robotic bed considering the posture of the patients and the time in bed. This paper shows the hardware and software design of the electronic system and the physical results. | ['Eduardo Vázquez-Santacruz', 'Cuauhtémoc Morales-Cruz', 'Mariano Gamboa-Zúñiga'] | Electronic System of an Intelligent Machine: the Case of an Assistive Bed Device | 841,389 |
The widespread use of BPMN to describe business processes is highlighting the need to integrate the description of the operational flow with domain specific information. In most cases the domain knowledge is already represented in domain ontologies or can be derived from the existing documentation. To preserve the simplicity of the original BPMN model specifications, our approach is to integrate the semantic and BPMN information through a separate view that can be semiautomatically built relying on the capabilities of the SyBeL modeling language. This unified view can be used to produce different artifacts to support the implementation phases of the | ['Giuseppe Della Penna', 'Roberto del Sordo', 'Benedetto Intrigila', 'Nicolò Mezzopera', 'Maria Teresa Pazienza'] | A Lightweight Formalism for the Integration of BPMN Models with Domain Ontologies | 687,160 |
Many computing systems today are heterogeneous in that they consist of a mix of different types of processing units (e.g., CPUs, GPUs). Each of these processing units has different execution capabilities and energy consumption characteristics. Job mapping and scheduling play a crucial role in such systems as they strongly affect the overall system performance, energy consumption, peak power and peak temperature. Allocating resources (e.g., core scaling, thread allocation) is another challenge since different sets of resources exhibit different behavior in terms of performance and energy consumption. Many studies have been conducted on job scheduling with an eye on performance improvement; however, few of them take into account both performance and energy. We thus propose our novel Performance, Energy and Thermal aware Resource Allocator and Scheduler (PETRAS), which combines job mapping, core scaling, and thread allocation into one scheduler. Since job mapping and scheduling are known to be NP-hard problems, we apply an evolutionary algorithm called a Genetic Algorithm (GA) to find an efficient job schedule in terms of execution time and energy consumption, under peak power and peak temperature constraints. Experiments conducted on an actual system equipped with a multicore CPU and a GPU show that PETRAS finds efficient schedules in terms of execution time and energy consumption. Compared to a performance-based GA and other schedulers, on average, the PETRAS scheduler can achieve up to a 4.7x speedup and an energy saving of up to 195%. | ['Shouq Alsubaihi', 'Jean-Luc Gaudiot'] | PETRAS: Performance, Energy and Thermal Aware Resource Allocation and Scheduling for Heterogeneous Systems | 997,928 |
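A tiny GA over job-to-unit mappings illustrates the search component. The population size, operators, and the `fitness` callback (which would combine time and energy and penalize peak-power/temperature violations) are assumptions for illustration, not PETRAS's actual configuration.

```python
import random

def ga_schedule(jobs, units, fitness, pop=30, gens=100, pmut=0.1, seed=1):
    """Tiny GA over job-to-unit mappings (a sketch, not PETRAS itself).

    A chromosome assigns each job an index into `units` (e.g., CPU
    cores or the GPU). `fitness` scores a mapping (lower is better),
    e.g., weighted time + energy plus peak-power/temperature penalties.
    Assumes at least two jobs and a population size of at least four.
    """
    rng = random.Random(seed)
    n = len(jobs)
    popu = [[rng.randrange(len(units)) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fitness)                       # elitist selection
        survivors = popu[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < pmut:                  # point mutation
                child[rng.randrange(n)] = rng.randrange(len(units))
            children.append(child)
        popu = survivors + children
    return min(popu, key=fitness)
```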
In wireless communications research, much of the literature assumes that every node knows all of its neighbor nodes. To this end, neighbor discovery research has been conducted, but it still has room for improvement in terms of discovery delay. Furthermore, prior work has overlooked energy efficiency, which is a critical factor in wireless devices and appliances. For better performance with respect to discovery delay and energy efficiency, we propose a novel p-persistent-based neighbor discovery protocol and devise a simple and lightweight algorithm estimating the number of neighbor nodes to support the proposed protocol. Our protocol requires a lower delay and a smaller number of messages for the discovery process than existing protocols. For extensive performance evaluation, we adopted additional comparison targets from other research areas within the same context. | ['Kyunghwi Kim', 'Heejun Roh', 'Wonjun Lee', 'Sinjae Lee', 'Ding Zhu Du'] | PND: a p‐persistent neighbor discovery protocol in wireless networks | 338,301 |
Material balance equations describe the dynamics of the species in open reaction systems and contain information regarding reaction topology, kinetics and operation mode. For reaction systems, the state variables (the numbers of moles, or concentrations) have recently been transformed into decoupled reaction variants (extents of reaction) and reaction invariants (extents of flow) (Amrhein et al., AIChE Journal, 2010). This paper analyses the conditions under which an open homogeneous reaction system is cooperative in the extents domain. It is shown that the dynamics of the extents of flow exhibit cooperative behavior, and we provide the conditions under which the dynamics of the extents of reaction do so as well. Our results provide physical insight into the cooperative and competitive nature of the underlying reaction system in the presence of material exchange with the surroundings (i.e., inlet and outlet flows). The results of the article are demonstrated via examples. | ['Nirav Bhatt', 'Sriniketh Srinivasan'] | On Cooperative Behavior of Open Homogeneous Chemical Reaction Systems in the Extent Domain | 640,702 |
The retrieval of plant biophysical and biochemical properties from high spectral resolution data represents an active area of research within the remote sensing field. Scientific studies in this area are usually supported by computational simulations of light attenuation processes within foliar tissues. In heterogeneous organic materials, like plant leaves, sieve and detour effects can affect these processes and ultimately change the light gradients within these tissues and their spectral signatures. Although these effects have been extensively examined for applications involving the interactions of visible radiation with plant leaves, little is known about their role in the infrared domain. In this paper, we describe the procedural basis for their incorporation in the modeling of infrared-radiation transport (in the range of 750-2500 nm) within plant leaves. We also assess their impact on the predictability of simulation solutions relating the directionality of the incident radiation and the internal arrangement of the tissues to changes in foliar spectral signatures in this domain. Our investigation is grounded in comparisons of the modeled results with quantitative and qualitative data reported in the literature. | ['Gladimir V. G. Baranoski', 'Denise Eng'] | An Investigation on Sieve and Detour Effects Affecting the Interaction of Collimated and Diffuse Infrared Radiation (750 to 2500 nm) With Plant Leaves | 274,538 |
Leading experts debate how virtualization and clouds impact network service architectures. | ['Mache Creeger'] | Moving to the Edge: An ACM CTO Roundtable on Network Virtualization | 284,278 |
The behaviour of a given system may be forecast using two general methodologies. The first depends upon knowledge of the laws that govern a particular phenomenon. When this knowledge is expressed in terms of a precise set of equations, which, in principle can be solved, then, providing that the initial conditions are specified, the future behaviour of the system may be predicted. However, in cases of systems belonging to behavioural science and economics, for example, the rules governing the behaviour of the system are not readily available. A second, less powerful method, involves the discovery of empirical regularities in observations of the system. As emphasised by Refenes, Azema-Barac, Chen and Karoussos (1993), such regularities are often masked by noise, whilst phenomena that seem random, without apparent periodicities, remain recurrent in a generic sense. As with any technology that is readily available, those companies that are using neural networks successfully are probably remaining silent so as to maintain their competitive advantage. As for the less than silent ones, it is doubtful whether they have discovered the advantages that neural networks may offer. This research, using a backpropagation neural network methodology, proposes to establish whether using neural networks to predict company failure is more successful than using established methodologies. | ['R.J. Van Eyden', 'P.W.C. De Wit', 'John Arron'] | Predicting company failure-a comparison between neural networks and established statistical techniques by applying the McNemar test | 306,777 |
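The McNemar test used for the comparison is easy to state in code: it needs only the two discordant counts from paired classifications. A minimal sketch with illustrative counts (the counts below are assumptions, not the study's data):

```python
def mcnemar_statistic(b, c):
    """McNemar chi-square with continuity correction.

    b = cases the neural network classified correctly and the
    statistical model got wrong; c = the reverse. Compare the
    statistic to the chi-square critical value with 1 degree of
    freedom (3.84 at the 5% significance level).
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

print(mcnemar_statistic(18, 6))  # ~5.04 > 3.84, so reject at the 5% level
```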
Continuous space word representations extracted from neural network language models have been used effectively for natural language processing, but until recently it was not clear whether the spatial relationships of such representations were interpretable. Mikolov et al. (2013) show that these representations do capture syntactic and semantic regularities. Here, we push the interpretation of continuous space word representations further by demonstrating that vector offsets can be used to derive adjectival scales (e.g., okay < good < excellent). We evaluate the scales on the indirect answers to yes/no questions corpus (de Marneffe et al., 2010). We obtain 72.8% accuracy, which outperforms previous results (~60%) on this corpus and highlights the quality of the scales extracted, providing further support that the continuous space word representations are meaningful. | ['Joo-Kyung Kim', 'Marie-Catherine de Marneffe'] | Deriving Adjectival Scales from Continuous Space Word Representations | 616,784 |
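The vector-offset derivation of a scale can be sketched by projecting word vectors onto the direction between two anchor adjectives. The toy 2-d embeddings and the anchor choice below are assumptions for illustration, not the paper's trained representations or full pipeline.

```python
import numpy as np

def scale_order(embeddings, anchors, words):
    """Order adjectives along the offset between two anchor words.

    Projects each word vector onto the (hi - lo) direction, the same
    vector-offset idea used for word analogies.
    """
    lo, hi = anchors
    axis = embeddings[hi] - embeddings[lo]
    axis /= np.linalg.norm(axis)
    return sorted(words, key=lambda w: float(embeddings[w] @ axis))

# toy 2-d embeddings for illustration only
emb = {"bad": np.array([-1.0, 0.2]), "okay": np.array([0.1, 0.0]),
       "good": np.array([0.9, 0.1]), "excellent": np.array([1.8, 0.3])}
print(scale_order(emb, ("bad", "excellent"),
                  ["good", "okay", "excellent", "bad"]))
# ['bad', 'okay', 'good', 'excellent']
```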
Graph matching has a wide spectrum of real-world applications and is in general known to be NP-hard. In many vision tasks, one realistic problem is finding the global node mappings across a batch of corrupted weighted graphs. This paper is an attempt to connect graph matching, especially multi-graph matching, to the matrix decomposition model and its relevant off-the-shelf convex optimization algorithms. Our method aims to extract the common inliers and their synchronized permutations from disordered weighted graphs in the presence of deformation and outliers. Under the proposed framework, several variants can be derived in the hope of accommodating other types of noise. Experimental results on both synthetic data and real images empirically show that the proposed paradigm exhibits several interesting behaviors and in many cases performs competitively with the state of the art. | ['Junchi Yan', 'Hongteng Xu', 'Hongyuan Zha', 'Xiaokang Yang', 'Huanxi Liu', 'Stephen M. Chu'] | A Matrix Decomposition Perspective to Multiple Graph Matching | 576,908 |
This article discusses a control architecture for autonomous sailboat navigation and also presents a sailboat prototype built for experimental validation of the proposed architecture. The main goal is to allow long-endurance autonomous missions, such as ocean monitoring. As the system's propulsion relies on wind forces instead of motors, sailing techniques are introduced and discussed, including the needed sensors, actuators and control laws. Mathematical modeling of the sailboat, as well as control strategies developed using PID and fuzzy controllers to control the sail and the rudder, are also presented. Furthermore, we present a study of the hardware architecture that enables the system's overall performance to be increased. The sailboat's movement can be planned through predetermined geographical waypoints provided by a base station. Simulated and experimental results, including tests performed on a lake, are presented to validate the control architecture. Underwater robotics can also rely on such a platform, using it as a base vessel where autonomous charging of unmanned vehicles could be performed, or as a relay surface station for transmitting data. | ['Davi dos Reis Santos', 'Andouglas Silva Junior', 'Alvaro Negreiros', 'João Paulo Simões Vilas Bôas', 'Justo Morales Álvarez', 'Andre P. D. Araujo', 'Rafael Vidal Aroca', 'Luiz M. G. Gonçalves'] | Design and Implementation of a Control System for a Sailboat Robot | 626,239 |
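A minimal PID loop of the kind used for rudder (heading) control is sketched below; the gains and saturation limit are illustrative assumptions, not the tuned values from the prototype.

```python
class PID:
    """Minimal PID controller, e.g., mapping heading error to rudder angle.

    Gains are illustrative; the paper tunes its own PID and fuzzy
    controllers for the sail and the rudder.
    """
    def __init__(self, kp, ki, kd, limit):
        self.kp, self.ki, self.kd, self.limit = kp, ki, kd, limit
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(-self.limit, min(self.limit, u))  # saturate the rudder

rudder = PID(kp=1.2, ki=0.05, kd=0.4, limit=0.6)  # angles in radians
print(rudder.step(error=0.3, dt=0.1))  # ~0.36
```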
On the Spanning Ratio of Constrained Yao-Graphs. | ['André van Renssen'] | On the Spanning Ratio of Constrained Yao-Graphs. | 739,756 |
In many critical systems domains, test suite adequacy is currently measured using structural coverage metrics over the source code. Of particular interest is the modified condition/decision coverage (MC/DC) criterion required for, e.g., critical avionics systems. In previous investigations we have found that the efficacy of such test suites is highly dependent on the structure of the program under test and the choice of variables monitored by the oracle. MC/DC adequate tests would frequently exercise faulty code, but the effects of the faults would not propagate to the monitored oracle variables. In this report, we combine the MC/DC coverage metric with a notion of observability that helps ensure that the result of a fault encountered when covering a structural obligation propagates to a monitored variable; we term this new coverage criterion Observable MC/DC (OMC/DC). We hypothesize this path requirement will make structural coverage metrics 1.) more effective at revealing faults, 2.) more robust to changes in program structure, and 3.) more robust to the choice of variables monitored. We assess the efficacy and sensitivity to program structure of OMC/DC as compared to masking MC/DC using four subsystems from the civil avionics domain and the control logic of a microwave. We have found that test suites satisfying OMC/DC are significantly more effective than test suites satisfying MC/DC, revealing up to 88% more faults, and are less sensitive to program structure and the choice of monitored variables. | ['Michael W. Whalen', 'Gregory Gay', 'Dongjiang You', 'Mats Per Erik Heimdahl', 'Matt Staats'] | Observable modified Condition/Decision coverage | 348,666 |
In the past, several authors have expressed concerns over the poor congestion control in mobile wireless ad-hoc networks using the traditional reference layer model. Many solutions were proposed to handle growing traffic and congestion in the network using link layer information. Existing solutions have shown difficulties in dealing with congestion under varying packet drops. Moreover, ensuring the superior performance of congestion control schemes with the traditional reference layer model is a challenging issue, due to quick topology changes, dynamic wireless channel characteristics, link-layer contentions, etc. In this paper, we propose an effective cross-layer adaptive transmission method to handle congestion in mobile wireless ad-hoc networks adequately. Simulation results exemplify the usefulness of the proposed method in handling congestion, yielding better results compared to existing approaches. | ['Varun Sharma', 'Mahesh Kumar'] | Adaptive congestion control scheme in mobile ad-hoc networks | 906,826 |
This paper investigates a novel multi-target tracking algorithm for jointly estimating the number of multiple targets and their states from noisy measurements in the presence of data association uncertainty, target birth, clutter and missed detections. The probability hypothesis density (PHD) filter is a popular multi-target Bayes filter, but the standard PHD filter assumes that the target birth intensity is known or homogeneous, which usually results in inefficiency or false tracks in a cluttered scene. To address this weakness, an iterative random sample consensus (I-RANSAC) algorithm with a sliding window is proposed to incrementally estimate the target birth intensity from uncertain measurements at each scan. More importantly, I-RANSAC is combined with the PHD filter: the PHD filter eliminates clutter and noise and discriminates between survival and birth-target-originated measurements, and the birth-target-originated measurements are then employed by I-RANSAC to update the birth intensity used as input to the PHD filter. Experimental results show that the proposed algorithm can improve the number and state estimation of targets even in scenarios with intersections, occlusions, and birth targets born at arbitrary positions. | ['Jingjing Wu', 'Ke Li', 'Qiuju Zhang', 'Wei An', 'Yi Jiang', 'Xueliang Ping', 'Peng Chen'] | Iterative RANSAC based adaptive birth intensity estimation in GM-PHD filter for multi-target tracking | 877,238 |