abstract (string, lengths 8–10.1k) | authors (string, lengths 9–1.96k) | title (string, lengths 6–367) | __index_level_0__ (int64, 13–1,000k)
---|---|---|---|
Compact laser illumination system for endoscopic interventions. | ['B. Blase'] | Compact laser illumination system for endoscopic interventions. | 680,392 |
Beyond Presence: How Holistic Experience Drives Training and Education. | ['Dustin B. Chertoff', 'Sae Schatz'] | Beyond Presence: How Holistic Experience Drives Training and Education. | 804,109 |
Assistive Technologies may greatly contribute to giving autonomy and independence to individuals with physical limitations. Electric wheelchairs are examples of those assistive technologies and are nowadays becoming more intelligent due to the use of technology that provides safer assisted driving. Usually, the user controls the electric wheelchair with a conventional analog joystick. However, this implies the need for an appropriate methodology to map the position of the joystick handle, in a Cartesian coordinate system, to the intended velocities of the wheelchair wheels. This mapping is very important since it determines the response behavior of the wheelchair to the user's manual control. This paper describes the implementation of several joystick mappings in an intelligent wheelchair (IW) prototype. Experiments were performed in a realistic simulator with cerebral palsy users with distinct driving abilities. Each user tested 6 different joystick control mapping methods, and for each user the usability and preference order were measured. The results achieved show that a linear mapping, with appropriate parameters, between the joystick's coordinates and the wheelchair wheel speeds is preferred by the majority of the users. | ['Brígida Mónica Faria', 'Luís Miguel Ferreira', 'Luís Paulo Reis', 'Nuno Lau', 'Marcelo Petry'] | Intelligent Wheelchair Manual Control Methods: A Usability Study by Cerebral Palsy Patients | 453,165 |
This paper describes the implementation and evaluation of a program that uses active recruiting and peer-led team learning to try to increase the participation and success of women and minority students in undergraduate computer science. These strategies were applied at eight universities starting in the fall of 2004. There have been some impressive results: We succeeded in attracting under-represented students who would not otherwise have taken a CS course. Evaluation shows that participation in our program significantly improves retention rates and grades, especially for women. Students in the program, as well as the students who served as peer leaders, are uniformly enthusiastic about their experience. | ['Susan Horwitz', 'Susan H. Rodger', 'Maureen Biggers', 'David Binkley', 'C. Kolin Frantz', 'Dawn M. Gundermann', 'Susanne E. Hambrusch', 'Steven Huss-Lederman', 'Ethan V. Munson', 'Barbara G. Ryder', 'Monica Sweat'] | Using peer-led team learning to increase participation and success of under-represented groups in introductory computer science | 581,267 |
In this paper, we propose an integrated data mining system for patient monitoring with applications to asthma care. In this system, two data mining methods, named PBD and PBC, are designed for predicting asthma attacks. The main methodology is to extract the significant information of asthma attacks and build classifiers by using users' daily bio-signal records and environmental data. Meanwhile, helpful medical information and suggestions supported by doctors are applied. In this way, the proposed system can predict the chances of asthma attacks and provide patients with proper medical instructions or health messages. The experimental evaluation results show that the proposed mechanism is effective and reliable in asthma attack prediction. | ['Vincent S. Tseng', 'Chao-Hui Lee', 'J. Chia-Yu Chen'] | An Integrated Data Mining System for Patient Monitoring with Applications on Asthma Care | 478,578 |
Optimal resource management for smart grid powered multi-input multi-output (MIMO) systems is of great importance for future green wireless communications. A novel framework is put forth to account for the stochastic renewable energy sources (RES), dynamic energy prices, as well as random wireless channels. Based on practical models, the resource allocation task is formulated as an optimization problem that aims at maximizing the weighted sum-rate of the MIMO broadcast channels. A two-way transaction mechanism and storage units are introduced to accommodate the RES variability. In addition to system operating constraints, a budget threshold is imposed on the worst-case energy transaction cost due to the possibly adversarial nature. Capitalizing on the uplink-downlink duality and the Lagrangian relaxation-based subgradient method, an efficient algorithm is developed to obtain the optimal strategy. Generalizations to the setups of time-varying channels and ON–OFF transmissions are also discussed. Numerical results are provided to corroborate the merits of the novel approaches. | ['Shuyan Hu', 'Yu Zhang', 'Xin Wang', 'Georgios B. Giannakis'] | Weighted Sum-Rate Maximization for MIMO Downlink Systems Powered by Renewables | 725,903 |
The main objective of this thesis is the development and exploitation of techniques to generate geometric samples for the purpose of image segmentation. A sampling-based approach provides a number of benefits over existing optimization-based methods such as robustness to noise and model error, characterization of segmentation uncertainty, natural handling of multi-modal distributions, and incorporation of partial segmentation information. This is important for applications which suffer from, e.g., low signal-to-noise ratio (SNR) or ill-posedness. We create a curve sampling algorithm using the Metropolis-Hastings Markov chain Monte Carlo (MCMC) framework. With this method, samples from a target distribution π (which can be evaluated but not sampled from directly) are generated by creating a Markov chain whose stationary distribution is π and sampling many times from a proposal distribution q. We define a proposal distribution using random Gaussian curve perturbations, and show how to ensure detailed balance and ergodicity of the chain so that iterates of the Markov chain asymptotically converge to samples from π. We visualize the resulting samples using techniques such as confidence bounds and principal modes of variation and demonstrate the algorithm on examples such as prostate magnetic resonance (MR) images, brain MR images, and geological structure estimation using surface gravity observations. We generalize our basic curve sampling framework to perform conditional simulation: a portion of the solution space is specified, and the remainder is sampled conditioned on that information. For difficult segmentation problems which are currently done manually by human experts, reliable semi-automatic segmentation approaches can significantly reduce the amount of time and effort expended on a problem. We also extend our framework to 3D by creating a hybrid 2D/3D Markov chain surface model. For this approach, the nodes on the chain represent entire curves on parallel planes, and the slices combine to form a complete surface. Interaction among the curves is described by an undirected Markov chain, and we describe methods to sample from this model using both local Metropolis-Hastings methods and the embedded hidden Markov model (HMM) algorithm. | ['Alan S. Willsky', 'Ayres Fan'] | Curve sampling and geometric conditional simulation | 510,557 |
The Networked Interactive Media in Schools (NIMIS) project is an EU funded project based in three countries, which has designed and evaluated a classroom of the future for infants. The Computer Based Learning Unit at Leeds University, in conjunction with teachers and children, has developed an application, ‘T'rrific Tales’, which allows children of age 5-6 years to co-construct a multi-frame cartoon to help storywriting with the help of an empathic agent. In this paper we discuss the factors that promote creativity and how the design of T'rrific Tales, used in the NIMIS classroom, is intended to help children be creative, and we include the positive early results of our analysis of the stories told by the children. Context: This paper considers issues involved in the software development in one area of a European project, which is developing a “classroom of the future” for primary children situated in three European schools, one each in Portugal, Germany and England. The NIMIS project (Networked Interactive Media in Schools) has several interwoven aims, technological, cognitive, and social, embedded in its conception. Central to the project’s aims is the smooth interaction between human and electronic communication, the digital complementing and facilitating the human. The project team envisages the marrying of co-operative technologies with intelligent ones such as “anthropomorphic” agents and uses high technology interfaces (a large 50 inch touch screen and Wacom PL-300 tablets) in real classrooms, with three different applications designed to encourage literacy and creative writing. Children are able to share and jointly create multimedia stories, exchange ideas, text, pictures and sound. This paper looks at software being developed in the English school with which children can create cartoon-style stories using pictures, sound and text with the help of an agent. The paper is an extended version of one published on CD ROM as the proceedings of ISSEI 2000, Bergen, Norway (Brna & Cooper, 2000). | ['Bridget Cooper', 'Paul Brna'] | Fostering Cartoon-Style Creativity with Sensitive Agent Support in Tomorrow's Classroom | 131,642 |
The efficiency of a communication network depends not only on its control protocols, but also on its topology. We propose a distributed topology management algorithm that constructs and maintains a backbone topology based on a minimal dominating set (MDS) of the network. According to this algorithm, each node determines the membership in the MDS for itself and its one-hop neighbors based on two-hop neighbor information that is disseminated among neighboring nodes. The algorithm then ensures that the members of the MDS are connected into a connected dominating set (CDS), which can be used to form the backbone infrastructure of the communication network for such purposes as routing. The correctness of the algorithm is proven, and the efficiency is compared with other topology management heuristics using simulations. Our algorithm shows better behavior and higher stability in ad hoc networks than prior algorithms. | ['Lichun Bao', 'J. J. Garcia-Luna-Aceves'] | Topology management in ad hoc networks | 496,261 |
Block ciphers sensitive to Groebner Basis Attacks. | ['Johannes A. Buchmann', 'Andrei Pychkine', 'Ralf-Philipp Weinmann'] | Block ciphers sensitive to Groebner Basis Attacks. | 741,536 |
This paper proposes a novel automatic video diagnosing method. The purpose of this study is to detect attacks added to an authorized video and identify the attack category. The video is embedded in advance with crypto-watermarks using a dual-domain quaternary watermarking algorithm. The crypto-watermarks, which are generated using visual cryptography, have different capabilities against various attacks. We extract the watermarks from the suspected video, measure the bit-error-rate between the extracted and specified crypto-watermarks, and then analyze the bit-error-rates to determine what kind of attack was added to the video. The experimental results demonstrate that the proposed method not only can identify a single attack, but can also identify composite attacks and detect corrupted frames. Even if a video is not embedded with crypto-watermarks, we can differentiate it from the authorized videos. | ['Yi-Chong Zeng', 'Soo-Chang Pei'] | Automatic video diagnosing method using embedded crypto-watermarks | 270,965 |
This paper deals with superposition modulation for a three-node system (two sources, one destination) in a cooperative ad hoc configuration. An information-theoretic achievable capacity region is given for both amplify-and-forward (AF) and decode-and-forward (DF) superposition schemes, and it is shown that it is dependent on the frame length. To simplify the analysis of the superposition modulation, an approximation of the achievable capacity region is proposed. This approximation uses only two well-defined time slots with appropriate capacity constraints and efficiently approaches the true capacity region for large frame lengths. Another issue that is discussed in this paper is the definition of the optimal superposition weight factor. In contrast with previously reported studies, which were based on simulation results, a theoretical framework that provides the optimal weight factor is also investigated. The proposed algorithm uses simplified outage probabilities of the system model, and the result only depends on the required spectral efficiency. Analytical results and simulation studies show the gains of the proposed schemes. | ['Ioannis Krikidis'] | Analysis and Optimization Issues for Superposition Modulation in Cooperative Networks | 277,786 |
Dynamic Programming (DP)-based stereo matching consists of three major parts: matching cost computation (M.C.C.), minimum cost accumulation (M.C.A.), and disparity optimization (D.O.). This paper presents two implementation architectures: array-based and memory-based. The array-based implementation is a systolic-like design consisting of regularly connected processing elements (PEs). The memory-based design replaces most of the PEs by memory units in order to reduce area cost. Both architectures adopt the concept of double buffer designs in order to process contiguous images. Experimental results show that the proposed design can achieve real-time processing speed at reasonable area cost. | ['Shen-Fu Hsiao', 'Wen-Ling Wang', 'Po-Sheng Wu'] | VLSI implementations of stereo matching using Dynamic Programming | 335,502 |
This paper aims to provide ways to enhance the overall performance of crowdfunding platforms by improving the success prospects of projects post-launch. Pledge behavior at the initial stages of project launch is a key indicator of project success, so this work identifies projects to be promoted on the basis of their pledge behavior at this crucial phase. The time series of pledge amounts is analyzed to understand the dynamics of funding patterns and to predict a project's chances of successful funding. Statistical analysis was performed on two different datasets of projects launched on the crowdfunding platform Kickstarter. The results obtained provide a better understanding of the funding patterns of successful and unsuccessful projects. On the basis of behavior patterns, projects are classified as overfunded, funded, potential and low potential. To classify a project, the Euclidean distance of the target project from the median funding pattern of the different categories is used to find the closest category to which the project belongs. This process is effective and less expensive in terms of computation. | ['Jaya Gera', 'Harmeet Kaur'] | Dynamics of Pledge Behavior of Crowdfunded Projects | 910,036 |
Embedding a sequence of virtual networks (VNs) into a given physical network substrate to accommodate as many VN requests as possible is known to be NP-hard. This paper presents a new approach to studying this problem. In particular, we devise a topology-aware measure on node resources based on random walks and use it to rank a node's resources and topological attributes. We then devise a greedy algorithm that matches nodes in the VN to nodes in the substrate network according to node ranks. In most situations there exist multiple embedding solutions, and so we want to find the best embedding that increases the possibility of accepting future VN requests and optimizes the revenue for the provider of the substrate network. We present an integer linear programming formulation for this optimization problem when path splitting is not allowed. We then devise a fast-convergent discrete Particle Swarm Optimization algorithm to approximate this problem. Extensive simulation results show that our algorithms produce near optimal solutions and significantly outperform existing algorithms in terms of the ratio of the long-term average revenue over the VN request acceptance. | ['Xiang Cheng', 'Sen Su', 'Zhongbao Zhang', 'Kai Shuang', 'Fangchun Yang', 'Yan Luo', 'Jie Wang'] | Virtual network embedding through topology awareness and optimization | 47,280 |
Efficient Cross User Client Side Data Deduplication in Hadoop. | ['Priteshkumar Prajapati', 'Parth Shah', 'Amit Ganatra', 'Sandipkumar Patel'] | Efficient Cross User Client Side Data Deduplication in Hadoop. | 996,628 |
In a data-driven economy that struggles to cope with the volume and diversity of information, data quality assessment has become a necessary precursor to data analytics. Real-world data often contains inconsistencies, conflicts and errors. Such dirty data increases processing costs and has a negative impact on analytics. Assessing the quality of a dataset is especially important when a party is considering acquisition of data held by an untrusted entity. In this scenario, it is necessary to consider privacy risks of the stakeholders. This paper examines challenges in privacy-preserving data quality assessment. A two-party scenario is considered, consisting of a client that wishes to test data quality and a server that holds the dataset. Privacy-preserving protocols are presented for testing important data quality metrics: completeness, consistency, uniqueness, timeliness and validity. For semi-honest parties, the protocols ensure that the client does not discover any information about the data other than the value of the quality metric. The server does not discover the parameters of the client's query, the specific attributes being tested and the computed value of the data quality metric. The proposed protocols employ additively homomorphic encryption in conjunction with condensed data representations such as counting hash tables and histograms, serving as efficient alternatives to solutions based on private set intersection. | ['Julien Freudiger', 'Shantanu Rane', 'Alejandro E. Brito', 'Ersin Uzun'] | Privacy Preserving Data Quality Assessment for High-Fidelity Data Sharing | 399,179 |
Engineering and Education for the Future. | ['Edward A. Lee', 'David G. Messerschmitt'] | Engineering and Education for the Future. | 769,273 |
Since wireless sensor actor networks (WSANs) interact with critical physical environments, one of the important issues in WSANs is real-time considerations. Existing WSANs suffer from the lack of a real-time task allocation in support of real-time communication and coordination. In this paper we present a two-level task allocation mechanism. We first break end-to-end periodic tasks into real-time jobs, and then use appropriate algorithms for sensing tasks and acting tasks. To formally state our approach, we propose a model for WSANs using graph transformation systems. Using this formalism we analyze the correctness of our algorithms. We show that the proposed algorithms guarantee that the tasks complete their activities before their deadlines expire. To show the efficiency of our algorithms we have simulated the model. Simulation results showed an improvement of 65 percent in deadline hit ratio when comparing our approach to the FIFO algorithm. | ['Hossein Momeni', 'Mohsen Sharifi', 'Saeed Sedighian'] | A New Approach to Task Allocation in Wireless Sensor Actor Networks | 465,672 |
This paper presents the framework of PT(CP)-resolution which is the unification of PT-resolution and conditional proof. PT-resolution is resolution with partial intersection and truncation. The paper analyses the necessity, feasibility and also limitations of this unification and formulates PT(CP)-resolution. PT(CP)-resolution is regarded as an improved version of PT-resolution. It resumes its derivation when the derivation is terminated before the query is answered. | ['Faye F. Liu'] | PT(CP)-resolution | 155,391 |
In modern symmetrical chip multiprocessor (CMP) architectures, problems in cache coherence, context switch overheads and the serialized code bottleneck are major causes of excessive computing power dissipation in the application of the simultaneous multithreading (SMT) technique. This research models and manages the above-mentioned problems based on user application usage patterns identified on a mobile computing platform. A novel scheduler has been developed to realize power management schemes based on the Linux kernel (version 3.0.1) and deployed in Android 4.0 ICS. The scheduler monitors multiple system performance metrics and predicts power dissipation based on the historical user application usage values as well as the content of the scheduler run queue. The length of the time slices and the variables of process control blocks are adjusted to optimize power dissipation according to the prediction. The proposed scheduler module has achieved a power dissipation reduction of 13 to 24% in a GEM5 simulated environment. | ['Hou Zhao Qi Rex', 'Jong Ching Chuen', 'Andreas Herkersdorf'] | Linux apps-usage-driven power dissipation-aware scheduler | 879,995 |
Automatic verification of real-time systems using epsilon. | ['Jens Chr. Godskesen', 'Kim Guldstrand Larsen', 'Arne Skou'] | Automatic verification of real-time systems using epsilon. | 972,693 |
This paper is a study of a new subdivision scheme - Extended Loop - proposed within the MPEG-4 multimedia compression standard by Superscape. Extended Loop is a scheme designed for subdividing arbitrary polygons and is a hybrid between Loop for triangular meshes and Catmull-Clark for quad meshes. Therefore, we compare the visual appearance and complexity of Extended Loop with these traditional schemes. This paper shows that besides its superior flexibility, the scheme exhibits performances comparable to Loop and Catmull-Clark. | ['Nicolaas Tack', 'Gauthier Lafruit', 'Rudy Lauwereins'] | Visual and complexity analysis of the Extended Loop subdivision scheme | 114,330 |
This study explores the competing influences of different types of board interlocks on diffusion of a strategic initiative among a population of firms. We examine a broad social network of interlocking directors in U.S. firms over a period of 17 years and consider the likelihood that these firms will adopt a strategy of expansion into China. Results show that ties to adopters that unsuccessfully implement this strategy have a nearly equal and opposing effect on the likelihood of adoption as do ties to those that successfully implement the strategy. Ties to those that do not implement the strategy also have a suppressive effect on the likelihood of adoption. Furthermore, we examine a firm's position in the core-periphery structure of the interlocking directorate, finding that ties to adopters closer to the network core positively affect the likelihood of adoption. We discuss the implications of our study for social network analysis, governance, and internationalization research. | ['Brian L. Connelly', 'Jonathan L. Johnson', 'Laszlo Tihanyi', 'Alan E. Ellstrand'] | More Than Adopters: Competing Influences in the Interlocking Directorate | 230,390 |
Parallel computing systems provide hardware redundancy that helps to achieve low cost fault-tolerance, by duplicating the task into more than a single processor, and comparing the states of the processors at checkpoints. This paper suggests a novel technique, based on a Markov reward model (MRM), for analyzing the performance of checkpointing schemes with task duplication. We show how this technique can be used to derive the average execution time of a task and other important parameters related to the performance of checkpointing schemes. Our analytical results match well the values we obtained using a simulation program. We compare the average task execution time and total work of four checkpointing schemes, and show that generally increasing the number of processors reduces the average execution time, but increases the total work done by the processors. However, in cases where there is a big difference between the time it takes to perform different operations, those results can change. | ['Avi Ziv', 'Jehoshua Bruck'] | Analysis of checkpointing schemes for multiprocessor systems | 72,661 |
Compressed sensing (CS) breaks the limit of Nyquist sampling rate and provides a new method for information sampling. Compressed video sensing (CVS) introduces CS into video codec and decreases the burden of encoding. For three-dimensional video data, the cube-based CVS scheme is an intuitive method in that CS measurements can span the entire spatial and temporal extent of a video sequence. Unfortunately, the measuring of multiple frames in video cubes simultaneously requires complex calculation and expensive spatial costs, which is largely considered impractical to implement in a real device. In this paper, a novel compressed video sensing scheme that is exactly suitable for wireless multimedia sensor networks is proposed by changing the method of processing multiple frames in video cubes. At the encoder, sampling rate redistribution (SRR) algorithm increases the measurements contained in the first and last frames so that they can assist to reconstruct intermediate frames. At the decoder, all measurements are scrambled by global measurement scrambling (GMS) algorithm to make them similar to measurements of the global CS acquisition. The experimental results show that the proposed scheme effectively improves the decoding performance on the premise of realizable hardware devices. | ['Yonghong Kuo', 'Yatian Gao', 'Xin Zhang', 'Jian Chen'] | A new multiple frames decoding and frame wise measurement for compressed video sensing | 659,680 |
Analyses of survey results from a random sample of women bloggers (N = 298) show three motivations drive women to use social media – information, engagement, and recreation. The recreation motivation outweighs the other two motivations in predicting frequency of social media use. However, when differences between Facebook, Twitter, and other social media were considered, results show women bloggers turn to social media in general for recreation, but to Facebook for engagement and to Twitter for information. Findings also show that psychological needs for affiliation and self-disclosure are related to the engagement motivation, and self-disclosure is associated with the information motivation. The results are discussed in relation to need theory. | ['Gina Masullo Chen'] | Why do women bloggers use social media? Recreation and information motivations outweigh engagement motivations | 242,846 |
This paper deals with two subjects. First, we will show how the support vector machine (SVM) regression problem can be solved as the maximum a posteriori prediction in the Bayesian framework. The second part describes an approximation technique that is useful in performing calculations for SVMs based on the mean field algorithm, which was originally proposed in the statistical physics of disordered systems. One advantage is that it handles posterior averages for Gaussian processes which are not analytically tractable. | ['Junbin Gao', 'Steve R. Gunn', 'Chris J. Harris'] | Mean Field Method for the Support Vector Machine Regression | 279,013 |
Compression and SSDs: Where and How? | ['Aviad Zuck', 'Sivan Toledo', 'Dmitry Sotnikov', 'Danny Harnik'] | Compression and SSDs: Where and How? | 608,670 |
In industrial organisations, software products are quite large and complex, consisting of a number of classes. Thus, it is not possible to test all the products with a finite number of resources. Hence, it would be beneficial if we could predict in advance some of the attributes associated with the classes such as change proneness, defect proneness, maintenance effort, etc. In this paper, we have dealt with one of the quality attributes, i.e., change proneness. Changes in the software are unavoidable and thus, early prediction of change proneness will help the developers to focus the limited resources on the classes which are predicted to be change-prone. We have conducted a systematic review which evaluates all the available important studies relevant to the area of change proneness. This will help us to identify gaps in the current technology and discuss possible new directions of research in the areas related to change proneness. | ['Ruchika Malhotra', 'Ankita Jain Bansal'] | Software change prediction: a literature review | 937,638 |
Pupils' collaboration around a large display. | ['Rosa Lanzilotti', 'Carmelo Ardito', 'Maria Francesca Costabile', 'Antonella De Angeli', 'Giuseppe Desolda'] | Pupils' collaboration around a large display. | 705,033 |
A number of approaches combine the principles and technologies of Linked Data and RESTful services. Services and APIs are thus enriched by, and contribute to, the Web of Data. These resource-centric approaches, referred to as Linked APIs, focus on flexibility and the integration capabilities of Linked Data. We use our experience in teaching students on how to use Linked APIs to identify the existing challenges in the area. Additionally we introduce the LAPIS catalogue, a directory for Linked APIs as basis for the research to address the identified challenges. | ['Steffen Stadtmüller', 'Sebastian Speiser', 'Andreas Harth'] | Future Challenges for Linked APIs | 759,674 |
This paper is about localising across extreme lighting and weather conditions. We depart from the traditional point-feature-based approach as matching under dramatic appearance changes is a brittle and hard thing. Point feature detectors are fixed and rigid procedures which pass over an image examining small, low-level structure such as corners or blobs. They apply the same criteria applied all images of all places. This paper takes a contrary view and asks what is possible if instead we learn a bespoke detector for every place. Our localisation task then turns into curating a large bank of spatially indexed detectors and we show that this yields vastly superior performance in terms of robustness in exchange for a reduced but tolerable metric precision. We present an unsupervised system that produces broad-region detectors for distinctive visual elements, called scene signatures, which can be associated across almost all appearance changes. We show, using 21km of data collected over a period of 3 months, that our system is capable of producing metric localisation estimates from night-to-day or summer-to-winter conditions. | ['Colin McManus', 'Ben Upcroft', 'Paul Newmann'] | Scene signatures : localised and point-less features for localisation | 941,066 |
A software for automatic CTG analysis which could be a useful clinical tool is shown. The software provides important classical and nonlinear parameters. Simulated and real CTG traces were analysed to test software reliability. The software showed good performance; in fact, all simulated events were detected. On real CTG: sensitivity equal to 93%, positive predictive value 82%, accuracy 77%. Despite the widespread use of cardiotocography in foetal monitoring, the evaluation of foetal status suffers from considerable inter- and intra-observer variability. In order to overcome the main limitations of visual cardiotocographic assessment, computerised methods to analyse cardiotocographic recordings have been recently developed. In this study, a new software for automated analysis of foetal heart rate is presented. It allows an automatic procedure for measuring the most relevant parameters derivable from cardiotocographic traces. Simulated and real cardiotocographic traces were analysed to test software reliability. In artificial traces, we simulated a set number of events (accelerations, decelerations and contractions) to be recognised. In the case of real signals, instead, results of the computerised analysis were compared with the visual assessment performed by 18 expert clinicians, and three performance indexes were computed to gain information about the performance of the proposed software. The software showed preliminary performance we judged satisfactory, in that the results completely matched the requirements, as proved by tests on artificial signals in which all simulated events were detected by the software. Performance indexes computed in comparison with obstetricians' evaluations are, on the contrary, not so satisfactory; in fact they led to the following values of the statistical parameters: sensitivity equal to 93%, positive predictive value equal to 82% and accuracy equal to 77%. Very probably this arises from the high variability of trace annotation carried out by clinicians. | ['Maria Romano', 'Paolo Bifulco', 'Mariano Ruffo', 'Giovanni Improta', 'Fabrizio Clemente', 'Mario Cesarelli'] | Software for computerised analysis of cardiotocographic traces | 570,742 |
This article describes a case in which decisions are made by two biopharmaceutical firms in pursuit of FDA approval of a drug to treat idiopathic pulmonary fibrosis (IPF). The case contains information on each firm’s estimates of costs, revenues and likelihood of success, as well as average values for these available from the scientific literature. The case provides an opportunity to apply decision analysis in the form of decision trees to various decision problems and to perform sensitivity analysis. It can be used as an introduction or as an application of decision trees after an introduction. The students are first exposed to one firm’s simple decision based on expert opinion, which is then modified by the inclusion of data. The advantage of expressing information in a tree diagram becomes apparent. The tree diagram is then examined to expose hedging strategies, one of which introduces the second larger firm as a potential licensee. The second firm presents its own view of the decision process based on its own expertise, thus allowing for a rich discussion of sensitivity analysis. Students are to evaluate the first firm’s approach to decision making and whether the second firm should be a licensee or not. Teaching Note: Interested instructors, please see the Instructor Materials page for access to the restricted materials. To maintain the integrity and usefulness of cases published in ITE, unapproved distribution of the case teaching notes and other restricted materials to any other party is prohibited. | ['David P. Kopcso', 'Howard Simon', 'Annie Gao'] | Case Article—Idiopathic Pulmonary Fibrosis | 717,568 |
By analyzing the level of perceived risk in the domain of e-business, the interaction initiating agent can determine beforehand whether or not it will achieve its desired outcomes and the associated consequences to it in interacting with the other agent. In our previous work, we have proposed a methodology by which the initiating agent ascertains the numeric level of perceived risk in forming an interaction. In this paper, we propose a methodology by which the initiating agent of the interaction determines the semantic level of perceived risk, for it to be utilized while making an informed interaction-based decision with an agent. | ['Omar Khadeer Hussain', 'Elizabeth Chang', 'Tharam S. Dillon', 'Farookh Khadeer Hussain'] | Ascertaining the Semantic and Linguistic Level of Perceived Risk in e-Business Interactions | 89,792 |
In this monograph we survey results from a newly emerging line of research that targets algorithm analysis in the physical interference model. In the main part of our monograph we focus on wireless scheduling: given a set of communication requests, arbitrarily distributed in space, how can these requests be scheduled efficiently? We study the difficulty of this problem and we examine algorithms for wireless scheduling with provable performance guarantees. Moreover, we present a few results for related problems and give additional context. | ['Olga Goussevskaia', 'Yvonne Anne Pignolet', 'Roger Wattenhofer'] | Efficiency of Wireless Networks: Approximation Algorithms for the Physical Interference Model | 234,554 |
We describe an iterative method for solving absolute value equations. The result gives a sufficient condition for unique solvability of these equations for arbitrary right-hand sides. This sufficient condition is compared with the one by Mangasarian and Meyer. | ['Jiri Rohn', 'Vahideh Hooshyarbakhsh', 'Raena Farhadsefat'] | An iterative method for solving absolute value equations and sufficient conditions for unique solvability | 101,659 |
The state of the art of software development has changed considerably from the folkloric approaches of the 1950s and 60s. But has the state of the practice kept up? A commonly held (rather cynical) view is that the great revolutions associated with the names of Dijkstra, Wirth, Mills, Hoare, Parnas, Myers and others might as well not have happened for all the effect they had on the practice of the average developer. During the years 1984 through 1987, the authors conducted a series of performance benchmarking exercises to allow individuals and organizations to evaluate their relative productivity. The emphasis of the benchmarks was on speed of program construction and low defect rate. A side-effect of the exercise was that nearly 400 programmers wrote the same program (they all wrote to the same specification) and sent in listings of these programs along with their questionnaires, time logs, and test results. This afforded an opportunity to assess design and coding practice of a wide sample of developers. | ['Tom DeMarco', 'Tim Lister'] | Software Development: State Of The Art Vs. State Of The Practice | 295,810 |
Kronos is a signal-processing programming language based on the principles of semifunctional reactive systems. It is aimed at efficient signal processing at the elementary level, and built to scale towards higher-level tasks by utilizing the powerful programming paradigms of “metaprogramming” and reactive multirate systems. The Kronos language features expressive source code as well as a streamlined, efficient runtime. The programming model presented is adaptable for both sample-stream and event processing, offering a cleanly functional programming paradigm for a wide range of musical signal-processing problems, exemplified herein by a selection and discussion of code examples. | ['Vesa Norilo'] | Kronos: A Declarative Metaprogramming Language for Digital Signal Processing | 594,464 |
We present first results on the comparison of transactional commit protocols for mobile context (2PC, UCM and CO2PC). | ['Christophe Bobineau', 'Cyril Labbé', 'Claudia Roncancio', 'Patricia Serrano-Alvarado'] | Transaction commit protocols for mobile environment: a first study | 397,716 |
This paper presents a method for volumetric reconstruction from helical computerized tomography (H-CT) data which are collected with a fan beam source. An interpretation of the H-CT data in terms of the axial computerized tomography (A-CT) data is provided. This analysis indicates that the H-CT data for positive and negative detector angles can be combined to form periodically nonuniform hexagonal samples of the A-CT data. A Fourier-based method to reconstruct the A-CT data from this form of data coverage is presented. The target function is then reconstructed using the conventional fan beam computed tomography algorithms for A-CT. | ['Ariel Rischal', 'Susan S. Young', 'Mehrdad Soumekh'] | A reconstruction method for helical computed tomography | 433,089 |
The paper is concerned with non-fragile control design via an output feedback controller for uncertain fuzzy systems. In control design for physical systems, there is a chance that an actuator malfunction happens and an exact value of the control input may not be applied. Hence, controller gain variations as well as uncertainty in the system parameters should be considered in the control design. For an uncertain fuzzy system, a design method for a non-fragile output feedback controller is proposed by introducing a new class of non-parallel distributed compensators (non-PDCs) where the integrals of the membership functions are involved. A non-PDC is a generalized controller of PDC, which is a traditional controller for fuzzy systems. A non-PDC non-fragile output feedback controller for uncertain fuzzy systems is obtained from new fuzzy multiple Lyapunov functions, and its control design conditions are given in terms of a set of linear matrix inequalities (LMIs), which are easily numerically solvable. The descriptor system approach, which leads to relaxation in controller design conditions, is also employed. Finally, a numerical example is given to illustrate our nonlinear control design and to show its effectiveness over other existing results. | ['Jun Yoneyama', 'Kenta Hoshino'] | A novel non-fragile output feedback controller design for uncertain Takagi-Sugeno fuzzy systems | 940,380 |
Objective: We describe and evaluate an automated software tool for nerve-fiber detection and quantification in corneal confocal microscopy (CCM) images, combining sensitive nerve-fiber detection with morphological descriptors. Method: We have evaluated the tool for quantification of Diabetic Sensorimotor Polyneuropathy (DSPN) using both new and previously published morphological features. The evaluation used 888 images from 176 subjects (84 controls and 92 patients with type 1 diabetes). The patient group was further subdivided into those with ($n = 63$) and without ($n = 29$) DSPN. Results: We achieve improved nerve-fiber detection over previous results (91.7% sensitivity and specificity in identifying nerve-fiber pixels). Automatic quantification of nerve morphology shows a high correlation with previously reported, manually measured, features. Receiver Operating Characteristic (ROC) analysis of both manual and automatic measurement regimes resulted in similar results in distinguishing patients with DSPN from those without: AUC of about 0.77 and 72% sensitivity-specificity at the equal error rate point. Conclusion: Automated quantification of corneal nerves in CCM images provides a sensitive tool for identification of DSPN. Its performance is equivalent to manual quantification, while improving speed and repeatability. Significance: CCM is a novel in vivo imaging modality that has the potential to be a noninvasive and objective image biomarker for peripheral neuropathy. Automatic quantification of nerve morphology is a major step forward in the early diagnosis and assessment of progression, and, in particular, for use in clinical trials to establish therapeutic benefit in diabetic and other peripheral neuropathies. | ['Xin Chen', 'Jim Graham', 'Mohammad A. Dabbah', 'Ioannis N. Petropoulos', 'Mitra Tavakoli', 'Rayaz A. Malik'] | An Automatic Tool for Quantification of Nerve Fibers in Corneal Confocal Microscopy Images | 814,752 |
Estimates of predicate selectivities by database query optimizers often differ significantly from those actually encountered during query execution, leading to poor plan choices and inflated response times. In this paper, we investigate mitigating this problem by replacing selectivity error-sensitive plan choices with alternative plans that provide robust performance. Our approach is based on the recent observation that even the complex and dense "plan diagrams" associated with industrial-strength optimizers can be efficiently reduced to "anorexic" equivalents featuring only a few plans, without materially impacting query processing quality. Extensive experimentation with a rich set of TPC-H and TPC-DS-based query templates in a variety of database environments indicate that plan diagram reduction typically retains plans that are substantially resistant to selectivity errors on the base relations. However, it can sometimes also be severely counter-productive, with the replacements performing much worse. We address this problem through a generalized mathematical characterization of plan cost behavior over the parameter space, which lends itself to efficient criteria of when it is safe to reduce. Our strategies are fully non-invasive and have been implemented in the Picasso optimizer visualization tool. | ['D Harish', 'Pooja N. Darera', 'Jayant R. Haritsa'] | Identifying robust plans through plan diagram reduction | 75,529 |
Arquitectura para la Clasificación y Composición de Servicios Web. | ['Ismael Navas Delgado', 'Maria del Mar Rojano-Muñoz', 'José Francisco Aldana Montes'] | Arquitectura para la Clasificación y Composición de Servicios Web. | 765,491 |
Security and Communication Networks, Early View (Online Version of Record, published before inclusion in an issue). | ['Avleen Kaur Malhi', 'Shalini Batra'] | Genetic-based framework for prevention of masquerade and DDoS attacks in vehicular ad-hoc networks | 852,174 |
Behavioral Dynamics of a Collision Avoidance Task: How Asymmetry Stabilizes Performance. | ['Brian A. Eiler', 'Rachel W. Kallen', 'Steven J. Harrison', 'Elliot Saltzman', 'R. C. Schmidt', 'Mike Richardson'] | Behavioral Dynamics of a Collision Avoidance Task: How Asymmetry Stabilizes Performance. | 755,516 |
In this paper, we propose two different types of cooperative transmission protocols, referred to as spatial multiplexing with receive diversity (SMRD), that can support a full rate, i.e., one symbol transmission per each time slot. We show that the BER performance can be significantly improved with a proper design of SMRD under the AF (amplify-and-forward) mode of relaying, when there is no interference among all symbols transmitted in the same time slot. Furthermore, we consider a more practical implementation of SMRD with successive interference cancellation (SIC) to deal with co-channel interference. Our simulation result shows that the proposed transmission protocol achieves 6 dB gain over the case of noncooperation (direct transmission) without any bandwidth expansion. | ['Hyun Seok Ryu', 'Chung Gu Kang', 'Dong Seung Kwon'] | Transmission Protocol for Cooperative MIMO with Full Rate: Design and Analysis | 118,491 |
Decoupling Rendering and Display Using Fast Depth Image Based Rendering on the GPU | ['Julian Meder', 'Beat D. Brüderlin'] | Decoupling Rendering and Display Using Fast Depth Image Based Rendering on the GPU | 866,423 |
An automatic defect detection system for the micro lens of a cellular phone was studied. We present a defect detection method using a LoG filter and multiple-level thresholds. It can find general defects such as scratches and even stain defects that are difficult to detect because of the low contrast between a defect and its background in the lens images. We experimented using 50 spherical lens images with defects. The experimental results showed an average recall and precision of about 92% and 77%, respectively, in the case of sigma = 2.5. | ['Jaeyoung Yi', 'K. M. Kim', 'Sung-Hwan Jung'] | Lens Inspection System Using LoG and Multi-level Thresholds | 74,049 |
Underwater wireless sensor networks (UWSNs) are a promising technology to provide oceanographers with environmental data in real time. Suitable network topologies to monitor estuaries are formed by strings coming together to a sink node. This network may be understood as an oriented graph. A number of MAC techniques can be used in UWSNs, but Spatial-TDMA is preferred for fixed networks. In this paper, a scheduling procedure to obtain the optimal fair frame is presented, under ideal conditions of synchronization and transmission errors. The main objective is to find the theoretical maximum throughput by overlapping the transmissions of the nodes while keeping a balanced received data rate from each sensor, regardless of its location in the network. The procedure searches for all cliques of the compatibility matrix of the network graph and solves a Multiple-Vector Bin Packing (MVBP) problem. This work addresses the optimization problem and provides analytical and numerical results for both the minimum frame length and the maximum achievable throughput. | ['Miguel-Angel Luque-Nieto', 'José Miguel Moreno-Roldán', 'Javier Poncela', 'Pablo Otero'] | Optimal Fair Scheduling in S-TDMA Sensor Networks for Monitoring River Plumes | 657,573 |
This paper presents an incremental optical encoder based position and velocity measurements VLSI chip with a serial peripheral interface (SPI). It combines period and frequency counting to provide velocity estimates with good dynamic behavior over a wide speed range. By sensing the velocity of the encoder, it preserves the computational power of a supervisory microcontroller, and subsequently enhances the performance of the total system. Furthermore, multiple copies of the velocity encoder can assess the behavior of multiple motors in parallel. It is compact with lower power consumption compared to traditional FPGA implementations. Although designed for use in the control unit of a medical robot with 34 axes and tight space and power constraints, it can be readily used in other applications. It is implemented in a 2P3M 0.5 µm CMOS process and consumes 4.82 mW of power with an active area of 0.45 mm². | ['Ndubuisi Ekekwe', 'Ralph Etienne-Cummings', 'Peter Kazanzides'] | Incremental Encoder Based Position and Velocity Measurements VLSI Chip with Serial Peripheral Interface | 517,923 |
A recent work proposed to simplify fat-trees with adaptive routing by means of a load-balancing deterministic routing algorithm. The resultant network has performance figures comparable to the more complex adaptive routing fat-trees when packets need to be delivered in order. In a second work by the same authors published in IEEE CAL, they propose to simplify the fat-tree to a unidirectional multistage interconnection network (UMIN), using the same load-balancing deterministic routing algorithm. They show that comparable performance figures are achieved with much lower network complexity. In this comment we show that the proposed load-balancing deterministic routing is in fact the routing scheme used by the butterfly network. Moreover we show that the properties of the simplified UMIN network proposed by them are intrinsic to the standard butterfly and other existing UMINs. | ['Elisardo Antelo'] | A Comment on "Beyond Fat-tree: Unidirectional Load-Balanced Multistage Interconnection Network" | 33,297 |
3-Iodothyronamine (T1AM) is a novel relative of thyroid hormone that plays a role in critical body regulatory processes such as glucose metabolism, thermal regulation, and heart beating. This paper was aimed at characterizing the time dynamics of T1AM and its catabolite 3-iodothyroacetic acid (TA1) in different biological scales with linear time-invariant models. Culture medium samples coming from culture of H9c2 murine cells and perfusion liquid samples from perfused rat heart were collected after the injection of a T1AM bolus. T1AM and TA1 concentrations in the samples were assayed with high-performance liquid chromatography coupled to tandem mass spectrometry. Kinetic constants relative to T1AM transport and conversion were estimated with the weighted least-squares method. We found that these constants can be related with an allometric power law depending on mass, with a negative exponent of -0.27 ± 0.19, implying that the velocity of conversion and internalization of T1AM decreases with increasing system mass. | ['Gianni Orsi', 'Sabina Frascarelli', 'Riccardo Zucchi', 'Giovanni Vozzi'] | LTI Models for 3-Iodothyronamine Time Dynamics: A Multiscale View | 127,041 |
Our goal is to show tight bounds on the running time of algorithms for scheduling and packing problems. To prove lower bounds, we investigate implications of the exponential time hypothesis on such algorithms. For exact algorithms we consider the dependence of the running time on the number $n$ of items (for packing) or jobs (for scheduling). We prove a lower bound of $2^{o(n)} \times \|I\|^{O(n)}$, where $\|I\|$ denotes the encoding length of the instance, for several of these problems, including SubsetSum, Knapsack, BinPacking, $\langle P2\|C_{\max}\rangle$, and $\langle P2\|\sum w_j C_j\rangle$. We also develop an algorithmic framework that is able to solve a large number of scheduling and packing problems in time $2^{o(n)} \times \|I\|^{O(n)}$. Finally, we consider approximation schemes. We show that there is no polynomial time approximation scheme for MultipleKnapsack (MKS) and 2d-Knapsack with running time $2^{o(1/\epsilon)} \times \|I\|^{...}$ | ['Klaus Jansen', 'Felix Land', 'Kati Land'] | Bounding the Running Time of Algorithms for Scheduling and Packing Problems | 641,819 |
The subjective quality achieved by most audio codecs, including MPEG-4 AAC, depends strongly on the algorithms used for encoder parameter selection. As a practical measure in conventional encoders, the overall encoding procedure is usually divided into a sequence of smaller problems that are solved heuristically. In this paper, we formulate the MPEG-4 AAC encoding problem as a multidimensional optimization procedure and present simulation results indicating performance gains relative to conventional approaches. | ['Claus Bauer', 'Matt Fellers', 'Grant Allen Davidson'] | Multidimensional Optimization of MPEG-4 AAC Encoding | 106,606 |
This paper focuses on a network design for cooperative active safety systems (CASS). A Medium Access Control (MAC) protocol design is proposed for a vehicle to send safety message to other vehicles. A quality of service (QoS) model for safety messages is developed that is consistent with the active safety systems literature. The QoS target involves having each message being received with high probability within its specified lifetime by each vehicle within its specified range. The protocol design is based on the rapid re-broadcast of each message multiple times within its lifetime. Six different design variations are proposed. Equations are derived and a simulation tool is developed for assessing the performance of the designs. Design performance is dependent on the number of re-broadcasts, power, modulation, coding, and vehicular traffic volumes. The evaluation focuses on vehicles that are driving on a 4- to 8-lane freeway using 802.11a radio running over a 20 MHz channel in the DSRC spectrum. Results indicate that under certain assumptions regarding the loss probability tolerated by safety applications, the design is able to transport safety messages in vehicular ad-hoc networks. | ['Qing Xu', 'Tony Mak', 'Jeff Ko', 'Raja Sengupta'] | Medium Access Control Protocol Design for Vehicle-Vehicle Safety Messages | 551,967 |
This paper proposes efficient step response model implementation strategies that lead to accurate control and high computational performance in an embedded Model Predictive Control (MPC) scheme. Different implementations of the step response prediction model are examined, and inherent properties that directly affect control performance in the presence of disturbances are discussed. Model errors that are inconsistent with bias updates (i.e. the model of unknown disturbances commonly used in step response MPC) are identified, and it is shown that the bias updates may worsen the effect of the errors in some cases. Particular attention is paid to the robustness of the prediction models to small truncation errors and errors in the input or measured disturbance history. Several implementation aspects that are crucial for embedded targets with limited resources are discussed. The findings are illustrated by simple simulation examples and an industrial case-study involving hardware-in-the-loop simulation of a subsea compact separation process. | ['D. K. M. Kufoalor', 'Lars Imsland', 'Tor Arne Johansen'] | Efficient implementation of step response models for embedded Model Predictive Control | 699,122 |
This paper first analyzes the formal correspondence between relational databases and OWL ontologies, and then proposes a technique for automatic ontology construction based on relational databases. The main steps and key links are discussed in detail, and an ontology generator named OWLFROMDB is implemented, which can automatically convert a relational database to an OWL ontology and offers a much cheaper alternative than building a new one from scratch. Finally, an experimental case is given to verify the effectiveness of the tool. | ['Chen He-ping', 'He Lu', 'Chen Bin'] | Research and Implementation of Ontology Automatic Construction Based on Relational Database | 508,689 |
The Internet has created new opportunities for peer-to-peer (P2P) social lending platforms, which have the potential to transform the way microfinance institutions raise and allocate funds used for poverty reduction. Depending upon where decision-making rights are allocated, there is the potential for identification bias whereby lenders may be motivated to give to specific projects with which they have an affinity without regard to whether it represents a sound financial investment. Using data collected from Kiva, we present empirical evidence that distant upstream lenders do not have adequate information about local business and loan conditions to make sound microfinance funding decisions, but instead make decisions based on identification biases. Furthermore, more information provided on the P2P lending site about the prospective loan does not improve the lender’s information about the loan conditions, but rather exacerbates the identification bias effect. | ['Frederick J. Riggins', 'David M. Weber'] | Information asymmetries and identification bias in P2P social microlending | 975,739 |
Semi-automatic Hand Annotation of Egocentric Recordings | ['Stijn De Beugher', 'Geert Brône', 'Toon Goedemé'] | Semi-automatic Hand Annotation of Egocentric Recordings | 765,063 |
A generalisation of the Dirac-delta function and its family of derivatives recently proposed as a means of introducing impulses on the complex plane in Laplace and z transform domains is shown to extend the applications of Bilateral Laplace and z transforms. Transforms of two-sided signals and sequences are made possible by extending the domain of distributions to cover generalized functions of complex variables. The domains of Bilateral Laplace and z transforms are shown to extend to two-sided exponentials and fast-rising functions, which, without such generalized impulses, have no transform. Applications include generalized forms of the sampling theorem, a new type of spatial convolution on the s and z planes and solutions of differential and difference equations with two-sided infinite duration forcing functions and sequences. | ['Michael J. Corinthios'] | Extending Laplace and z transform domains | 93,375 |
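For orientation, a classical bilateral Laplace computation shows the strip-of-convergence mechanics that the generalized impulses extend (a worked example of ours, not taken from the paper; the growing-exponential case noted in the comment is precisely where the classical approach fails and the paper's machinery takes over):

```latex
% Classical bilateral Laplace transform of a two-sided decaying exponential,
% valid on the strip -a < Re(s) < a (for a > 0):
\[
\mathcal{L}\{e^{-a|t|}\}(s)
  = \int_{-\infty}^{0} e^{at}e^{-st}\,dt + \int_{0}^{\infty} e^{-at}e^{-st}\,dt
  = \frac{1}{a-s} + \frac{1}{a+s}
  = \frac{2a}{a^{2}-s^{2}}.
\]
% For a growing two-sided exponential e^{+a|t|} the two half-line integrals
% converge on disjoint half-planes, so the classical transform does not exist;
% impulses on the complex plane are what give such signals a transform.
```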
A hierarchical wavelet transform coding structure which satisfies the requirements of a high definition video cassette recorder (HD-VCR) for studio application in wireless environments is proposed. All of the coefficients in the transformed image are adaptively quantized and hierarchically converted into a single stream of binary symbols using the monotone decreasing property. The resulting bit stream goes through an adaptive arithmetic coder, and two-level control of the bit rate is performed so as to maintain a constant target bit rate. The proposed algorithm can be efficiently implemented with much reduced computing time for real-world applications. Simulations on a test sequence of images verify that at least 4 dB of PSNR improvement can be achieved over other DCT-based schemes at a constant target bit rate, while avoiding the blocking artifacts that such schemes cannot escape. | ['Hyun Meen Jung', 'Yongkyu Kim', 'Seunghyeon Rhee', 'Hung-Yeop Sung', 'Kyu Tae Park'] | HD-VCR codec for studio application using quadtree structured binary symbols in wavelet transform domain | 8,176 |
The effective use of (human) resources and the delivery of a mature product assume the application of a software development life cycle (SDLC) management system that actively supports the development process. A well-adjusted SDLC system is a prerequisite in safety-critical development, such as the development of medical devices; otherwise, the extensive documentation needed to prove the correct and safe operation of the equipment cannot be produced. The aim of this paper is to provide structured and mostly quantified criteria for companies that plan to establish a new application life cycle management system or improve an existing one. Answering the questions behind these criteria helps in making an objective and optimal choice among the different systems. Finally, this paper prepares a case study in which the application of these criteria will be demonstrated in practice. | ['Jozsef Klespitz', 'Miklos Biro', 'Levente Kovács'] | Evaluation criteria for application life cycle management systems in practice | 861,031 |
This paper proposes the TMO model-based monitoring structure (TMS) for monitoring TMO model-based real-time systems. The monitoring infrastructure configured through TMS is managed by the middleware layer, allowing for automatic monitoring and ease of deployment. In addition, since TMS is designed to utilize proven distributed capabilities enabled by the TMO model, it allows for stable, distributed monitoring of TMO systems. As the experimental results indicate, the TMS instrumentation overhead on the execution of middleware threads and TMO methods does not exceed 1 ms, meaning that TMS has little or no effect on the operation of the middleware and TMO methods. As such, TMS is a suitable structure for monitoring TMO systems in a stable manner. | ['Yoon Seok Jeong', 'Tae Wan Kim', 'Chun-Hyon Chang'] | Modeling of a monitoring scheme for TMO model-based real-time systems | 233,588 |
In this paper, the problem of performance-driven circuit partitioning is considered. The parameters taken into consideration to measure performance are power and interconnection resource constraints. An algorithm is presented that builds clusters in a bottom-up manner while decomposing clusters for cutsize and delay minimization as well as for power consumption and resource constraints. A top-down partitioning method is then applied based on a probability function. | ['Ling Wang', 'Henry Selvaraj'] | Performance driven circuit clustering and partitioning | 370,482 |
In this paper, we introduce an improved Greedy Randomized Adaptive Search Procedure (GRASP) based heuristic for the multi-product multi-vehicle inventory routing problem (MMIRP). The inventory routing problem, which combines the vehicle-routing problem with inventory control decisions, is one of the most important problems in the field of combinatorial optimization. To deal with the MMIRP, we develop a GRASP-based heuristic (GBH). Each GBH iteration consists of two sequential phases; the first phase is a greedy randomized procedure in which the best tradeoff between inventory holding cost and routing cost is sought. Then, in the second phase, as the local search of the GRASP, we use the Tabu search (TS) meta-heuristic to improve the solution found in the first phase. The two GBH phases are repeated until some stopping criterion is met. Our proposed method is evaluated on two benchmark data sets and successfully compared with two state-of-the-art algorithms. | ['Oualid Guemri', 'Abdelghani Bekrar', 'Bouziane Beldjilali', 'Damien Trentesaux'] | GRASP-based heuristic algorithm for the multi-product multi-vehicle inventory routing problem | 714,111 |
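A generic GRASP skeleton makes the two-phase structure concrete. The sketch below is not the paper's GBH: the toy objective and the plain swap-based local search stand in for the MMIRP model and the Tabu search phase, and the parameters alpha, k, and iters are arbitrary.

```python
import random

# Generic GRASP: greedy randomized construction via a restricted candidate
# list (RCL), then local search, repeated; the best solution found is kept.
def grasp(candidates, cost, k, alpha=0.3, iters=100, seed=0):
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        # Phase 1: greedy randomized construction
        sol, pool = [], list(candidates)
        while len(sol) < k:
            pool.sort(key=cost)
            rcl = pool[: max(1, int(alpha * len(pool)))]  # restricted candidate list
            pick = rng.choice(rcl)
            sol.append(pick)
            pool.remove(pick)
        # Phase 2: local search via improving single-element swaps with the pool
        improved = True
        while improved:
            improved = False
            for i in range(len(sol)):
                for c in list(pool):
                    if cost(c) < cost(sol[i]):
                        pool.remove(c)
                        pool.append(sol[i])
                        sol[i] = c
                        improved = True
                        break
        total = sum(cost(x) for x in sol)
        if total < best_cost:
            best, best_cost = sol[:], total
    return best, best_cost

print(grasp(range(20), cost=lambda x: (x - 7) ** 2, k=3))
```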
We study the relationship between chatter on social media and observed actions concerning charitable donation. One hypothesis is that a fraction of those who act will also tweet about it, implying a linear relation. However, if contagion is present, we expect superlinear scaling. We consider two scenarios: donations in response to a natural disaster, and regular donations. We empirically validate the model using two location-paired sets of social media and donation data, corresponding to the two scenarios. Results show a quadratic relation between chatter and action in the emergency response case. In the case of regular donations, we observe a near-linear relation. Additionally, regular donations can be explained by demographic factors, while for a disaster response social media is a much better predictor of action. A contagion model is used to predict the near-quadratic scaling for the disaster response case. This suggests that diffusion is present in the emergency response case, while regular charity does not spread via the social network. Understanding the scaling behavior that relates social media chatter to physical actions is an important step in estimating the extent of a response and for determining social media strategies to affect the response. | ['Rostyslav Korolov', 'Justin Peabody', 'Allen Lavoie', 'Sanmay Das', 'Malik Magdon-Ismail', 'William A. Wallace'] | Predicting charitable donations using social media | 813,269 |
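The linear-versus-superlinear question reduces to estimating a scaling exponent. A minimal sketch (synthetic data, ours) fits actions ≈ C · chatter^β by least squares in log-log space, where β ≈ 2 signals the contagion-like regime and β ≈ 1 the fractional-reporting regime.

```python
import numpy as np

# Estimate the scaling exponent beta in  actions ~ C * chatter**beta
# by ordinary least squares in log-log space; data here are synthetic.
rng = np.random.default_rng(0)
chatter = rng.uniform(10, 1000, 200)
actions = 0.05 * chatter**2 * rng.lognormal(0, 0.2, 200)  # quadratic ground truth

beta, logC = np.polyfit(np.log(chatter), np.log(actions), 1)
print(f"estimated exponent beta = {beta:.2f}")  # near 2: contagion-like superlinearity
```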
The regulator problem with robustness is solved for systems modelled by rational transfer matrices. A topology for possibly unstable plants is presented. A necessary and sufficient condition is derived for the existence of a proper controller which provides internal stability and output regulation throughout an open neighbourhood of the plant. A characterization of all such controllers is determined. | ['Bruce A. Francis', 'M. Vidyasagar'] | Brief paper: Algebraic and topological aspects of the regulator problem for lumped linear systems | 819,885 |
In this paper, we propose a flexible architecture that performs the computation of the discrete wavelet transform, requiring a small memory space and capable of operating at a high sampling rate. The architecture employs two filtering blocks to compute the transform and one buffer to store the intermediate results. Each filtering block has two processing units that operate independently in parallel using a two-phase scheduling. An efficient scheme for the synchronization of the data flow among the three blocks is provided in order to minimize the buffer size and increase the speed of operation. Verilog and HSPICE simulation results are presented to show that the proposed architecture is more efficient for the computation of a fully decomposed discrete wavelet transform with high-tap filters than some other existing architectures in terms of area and speed of operation. | ['Chengjun Zhang', 'Chunyan Wang', 'M.O. Ahmad'] | An efficient buffer-based architecture for on-line computation of 1-D discrete wavelet transform | 333,907 |
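The core computation each filtering block performs is a two-channel analysis stage. The numpy sketch below uses Haar filters for brevity (the paper targets high-tap filters) and cascades on the approximation band; it illustrates the data flow, not the proposed buffer scheduling.

```python
import numpy as np

# One analysis stage of a two-channel filter bank; cascading on the
# approximation band yields the multi-level 1-D DWT.
h = np.array([1.0, 1.0]) / np.sqrt(2)   # lowpass (approximation) filter
g = np.array([1.0, -1.0]) / np.sqrt(2)  # highpass (detail) filter

def dwt_stage(x):
    a = np.convolve(x, h)[1::2]         # filter, then downsample by 2
    d = np.convolve(x, g)[1::2]
    return a, d

x = np.arange(8, dtype=float)
a, d = dwt_stage(x)
a2, d2 = dwt_stage(a)                   # second level on the approximation band
print(a, d, a2, d2)
```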
Styryl dyes are fluorescent, lipophilic cations that have been used as specific labeling probes of mitochondria in living cells. For specific applications such as epifluorescence microscopy or flow cytometry, it is often desirable to synthesize fluorescent derivatives with optimized excitation, emission, and localization properties. Here, we present a chemoinformatic strategy suitable for multiparameter analysis of a combinatorial library of styryl molecules supertargeted to mitochondria. The strategy is based on a simple additive model relating the spectral and subcellular localization characteristics of styryl compounds to the two chemical building blocks that are used to synthesize the molecules. Using a cross-validation approach, the additive model predicts with a high degree of confidence the subcellular localization and spectral properties of the styryl product, from numerical scores that are independently associated with the individual building blocks of the molecule. The fit of the data indicates that more complex, nonadditive interactions between the two building blocks play a minor role in determining the molecule’s optical or biological properties. Moreover, the observed additive relationship allows mechanistic inferences to be made regarding the structure-property relationship observed for this particular class of molecules. It points to testable, mechanistic hypotheses about how chemical structure, fluorescence, and localization properties are related. | ['Kerby Shedden', 'Julie Brumer', 'Young Tae Chang', 'Gustavo R. Rosania'] | Chemoinformatic analysis of a supertargeted combinatorial library of styryl molecules. | 577,845 |
This paper presents a low power pulsed UWB receiver sampling below Nyquist rate which can accomodate time-varying data rate and quality-of-service requirements for applications communicating via UWB. The performance of pulse amplitude and pulse position modulations is assessed in AWGN and dense multipath environments using the standard IEEE 802.15.3a channel models. The proposed subsampling receiver provides an attractive digital alternative to the classical approach based on analog correlations, and can reach data rates above 100 Mb/s. | ['Yves Vanderperren', 'Wim Dehaene', 'Geert Leus'] | A Flexible Low Power Subsampling UWB Receiver Based on Line Spectrum Estimation Methods | 317,618 |
An increasing number of processor architectures support scratch-pad memory - software managed on-chip memory. Scratch-pad memory provides low latency data storage, like on-chip caches, but under explicit software control. The simple design and predictable nature of scratch-pad memories has seen them incorporated into a number of embedded and real-time system processors. They are also employed by multi-core architectures to isolate processor core local data and act as low latency inter-core shared memory. Managing scratch-pad memory by hand is time consuming, error prone and potentially wasteful; tools that automatically manage this memory are essential for its use by general purpose software. While there has been promising work in compile time allocation of scratch-pad memory, there will always be applications which require run-time allocation. Modern dynamic memory management techniques are too heavy-weight for scratch-pad management. This paper presents the Scratch-Pad Memory Allocator, a light-weight memory management algorithm, specifically designed to manage small on-chip memories. This algorithm uses a variety of techniques to reduce its memory footprint while still remaining effective, including: representing memory both as fixed-sized blocks and variable-sized regions within these blocks; coding of memory state in bitmap structures; and exploiting the layout of adjacent regions to dispense with boundary tags for split and coalesce operations. We compare the performance of this allocator against Doug Lea's malloc implementation for the management of core-local and inter-core shared scratch-pad memories under real-world memory traces. This algorithm manages small memories efficiently and scales well under load when multiple competing cores access shared memory. | ['Ross McIlroy', 'Peter Dickman', 'Joseph S. Sventek'] | Efficient dynamic heap allocation of scratch-pad memory | 364,036 |
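To make the bitmap idea concrete, here is a toy fixed-block allocator in Python. It shows only the bitmap-coded block layer; the paper's allocator additionally carves variable-sized regions within blocks and exploits adjacent-region layout instead of boundary tags. All sizes and names are ours.

```python
# Minimal bitmap-based fixed-block allocator sketch: one bit of state per
# block, first-fit scan on allocation, single bit clear on free.
class BitmapAllocator:
    def __init__(self, mem_size, block_size):
        self.block_size = block_size
        self.nblocks = mem_size // block_size
        self.bitmap = 0                      # bit i set means block i is in use

    def alloc(self):
        for i in range(self.nblocks):
            if not (self.bitmap >> i) & 1:   # first-fit scan of the bitmap
                self.bitmap |= 1 << i
                return i * self.block_size   # byte offset into the scratch-pad
        return None                          # out of memory

    def free(self, offset):
        self.bitmap &= ~(1 << (offset // self.block_size))

spm = BitmapAllocator(mem_size=4096, block_size=256)
a, b = spm.alloc(), spm.alloc()
spm.free(a)
print(b, spm.alloc())                        # the freed first block is reused
```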
This paper presents a preliminary system architecture of integrating OMNeT++ into the mosaik co-simulation framework. This will enable realistic simulation of communication network protocols and services for smart grid scenarios and on the other side, further development of communication protocols for smart grid applications. Thus, by integrating OMNeT++ and mosaik, both communities will be able to leverage each other's sophisticated simulation models and expertise. The main challenges identified are the external management of the OMNeT++ simulation kernel and performance issues when federating various simulators, including OMNeT++, into the mosaik framework. The purpose of this paper is to bring these challenges up and to gather relevant experience and expertise from the OMNeT++ community. We especially encourage collaboration among all OMNeT++ developers and users. | ['Jens Dede', 'Koojana Kuladinithi', 'Anna Förster', 'Okko Nannen', 'Sebastian Lehnhoff'] | OMNeT++ and mosaik: Enabling Simulation of Smart Grid Communications | 594,708 |
Distances and scores are widely used to measure similarity between collections of information, such as preference profiles, belief sets, judgment sets, argument labelings, etc. Defining a function that quantifies the similarity between information sets of logically interrelated information is non-trivial, as witnessed by the shortage of such quantifiers in the literature. We propose a similarity measure for judgment sets that is "sensitive" to logic dependencies among the judgments. | ['Marija Slavkovik', 'Thomas Ågotnes'] | A judgment set similarity measure based on prime implicants | 583,577 |
We consider secure resource allocations for orthogonal frequency division multiple access (OFDMA) two-way relay wireless sensor networks (WSNs). The joint problem of subcarrier (SC) assignment, SC pairing and power allocations, is formulated under scenarios of using and not using cooperative jamming (CJ) to maximize the secrecy sum rate subject to limited power budget at the relay station (RS) and orthogonal SC allocation policies. The optimization problems are shown to be mixed integer programming and nonconvex. For the scenario without CJ, we propose an asymptotically optimal algorithm based on the dual decomposition method and a suboptimal algorithm with lower complexity. For the scenario with CJ, the resulting optimization problem is nonconvex, and we propose a heuristic algorithm based on alternating optimization. Finally, the proposed schemes are evaluated by simulations and compared with the existing schemes. | ['Haijun Zhang', 'Hong Xing', 'Julian Cheng', 'Arumugam Nallanathan', 'Victor C.M. Leung'] | Secure Resource Allocation for OFDMA Two-Way Relay Wireless Sensor Networks Without and With Cooperative Jamming | 634,804 |
A novel algorithm is introduced for learning fuzzy measures for Choquet integral-based information fusion. The new algorithm goes beyond previously published MCE-based approaches. It has the advantage that it is applicable to general measures, as opposed to only the Sugeno class of measures. In addition, the monotonicity constraints are handled easily with minimal time or storage requirements. Learning the fuzzy measure is framed as a maximum a posteriori (MAP) parameter learning problem. In order to maintain the constraints, this MAP problem is solved with a Gibbs sampler using an expectation maximization (EM) framework. For these reasons, the new algorithm is referred to as the MAP-EM MCE logistic LASSO algorithm. Results are given on synthetic and real data sets, the latter obtained from a landmine detection problem. Average reductions in false alarms of about 25% are achieved on the landmine detection problem and probabilities of detection in the interesting and meaningful range of 85%-95%. | ['Andres Mendez-Vazquez', 'Paul D. Gader'] | Maximum A Posteriori EM MCE Logistic LASSO for learning fuzzy measures | 227,788 |
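For context, the fusion operator whose measure is being learned is the discrete Choquet integral, which is short enough to write out. The sketch below evaluates it for a hand-made (monotone) fuzzy measure over three sources; learning such a measure from data is the paper's contribution and is not shown here.

```python
# Discrete Choquet integral of inputs x w.r.t. a fuzzy measure g: a set
# function with g(empty)=0, g(full)=1, monotone under set inclusion.
def choquet(x, g):
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])  # sort inputs ascending
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        subset = frozenset(order[k:])             # sources with value >= x[i]
        total += (x[i] - prev) * g[subset]
        prev = x[i]
    return total

# A made-up monotone measure over three sources (values are illustrative).
g = {frozenset({0, 1, 2}): 1.0, frozenset({1, 2}): 0.7, frozenset({0, 2}): 0.6,
     frozenset({0, 1}): 0.5, frozenset({0}): 0.3, frozenset({1}): 0.4,
     frozenset({2}): 0.2, frozenset(): 0.0}
print(choquet([0.2, 0.9, 0.5], g))                # 0.2*1.0 + 0.3*0.7 + 0.4*0.4
```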
During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from thermal information alone is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both the thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step, while the second step processes patches in an image. Experiments show that the proposed two-step method gives satisfying results and outperforms comparative methods in both quality and recognition evaluations. | ['Brahmastro Kresnaraman', 'Daisuke Deguchi', 'Tomokazu Takahashi', 'Yoshito Mekada', 'Ichiro Ide', 'Hiroshi Murase'] | Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum | 710,350 |
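A toy version of the CCA mapping step can be put together with scikit-learn, assuming sklearn is acceptable; random vectors stand in for paired thermal/visible patch features, and n_components is arbitrary.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Learn the correlated subspace between paired thermal and visible feature
# vectors, then map thermal -> visible. Synthetic data with a shared latent
# factor stands in for real paired training patches.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 5))
thermal = latent @ rng.normal(size=(5, 64)) + 0.1 * rng.normal(size=(500, 64))
visible = latent @ rng.normal(size=(5, 64)) + 0.1 * rng.normal(size=(500, 64))

cca = CCA(n_components=5).fit(thermal, visible)
recon = cca.predict(thermal[:1])   # reconstructed visible-spectrum vector
print(recon.shape)                 # (1, 64)
```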
It is now the case that well-performing flash LIDAR focal plane array devices are commercially available. Such devices give us the ability to measure and record frame-registered 3D point cloud sequences at video frame rates. For many 3D computer vision applications this allows the processes of structure from motion or multi-view stereo reconstruction to be circumvented. This allows us to construct simpler, more efficient, and more robust 3D computer vision systems. This is a particular advantage for ground-based vision tasks which necessitate real-time or near real-time operation. The goal of this work is to introduce several important considerations for dealing with commercial 3D Flash LIDAR data and to describe useful strategies for noise filtering, structural segmentation, and meshing of ground-based data. With marginal refinement effort, the results of this work are directly applicable to many ground-based computer vision tasks. | ['Donald J. Natale', 'Matthew S. Baran', 'Richard L. Tutwiler'] | Point cloud processing strategies for noise filtering, structural segmentation, and meshing of ground-based 3D Flash LIDAR images | 926,221 |
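One of the simplest noise-filtering strategies applicable to such frame-registered point clouds is statistical outlier removal. The sketch below (scipy-based, with generic k and threshold parameters of our choosing) is representative of this class of filters, not the paper's exact pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

# Statistical outlier removal: drop points whose mean distance to their k
# nearest neighbors is unusually large relative to the cloud as a whole.
def remove_outliers(points, k=8, std_ratio=2.0):
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)      # +1 because the nearest hit is self
    mean_d = dists[:, 1:].mean(axis=1)          # mean distance to the k neighbors
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

pts = np.random.default_rng(0).normal(size=(1000, 3))
pts = np.vstack([pts, [[15.0, 15.0, 15.0]]])    # inject one gross outlier
print(len(remove_outliers(pts)))                # the outlier is dropped
```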
We address the problem of power estimation at the register-transfer level (RTL). At this level, the circuit is described in terms of a set of interconnected memory elements and combinational modules of different degrees of complexity. We propose a bottom-up approach to create a simplified high-level model of the block behavior for power estimation, which is described by a symbolic local polynomial. We use an efficient gate-level modeling based on the polynomial simulation method and ZBDDs. We present a set of experimental results that show a large improvement in performance and robustness when compared to previous approaches. | ['Ricardo S. Ferreira', 'Anne-Marie Trullemans', 'José C. Costa', 'José C. Monteiro'] | Probabilistic bottom-up RTL power estimation | 465,403 |
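In miniature, the macromodeling step amounts to fitting a local polynomial that maps input statistics to measured power. The sketch below (synthetic gate-level samples, a degree-2 form, and a single activity variable, all our choices) shows the kind of fit that would then be evaluated symbolically at the RT level.

```python
import numpy as np

# Fit a local polynomial from input switching activity to gate-level power
# samples; the data and the degree-2 form are illustrative stand-ins.
activity = np.linspace(0.05, 0.5, 20)                      # input switching activity
power = 3.0 * activity + 10.0 * activity**2 \
        + 0.01 * np.random.default_rng(0).normal(size=20)  # "gate-level" samples

coeffs = np.polyfit(activity, power, deg=2)                # symbolic local polynomial
print(np.poly1d(coeffs))                                   # cheap to evaluate at RTL
```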
We address the problem of designing optimal schemes for the generation of secure cryptographic keys from continuous noisy data. We argue that, contrary to the discrete case, a universal fuzzy extractor does not exist. This implies that in the continuous case, key extraction schemes have to be designed for particular probability distributions. We extend the known definitions of the correctness and security properties of fuzzy extractors. Our definitions apply to continuous as well as discrete variables. We propose a generic construction for fuzzy extractors from noisy continuous sources, using independent partitions. The extra freedom in the choice of discretization, which does not exist in the discrete case, is advantageously used to give the extracted key a uniform distribution. We analyze the privacy properties of the scheme and the error probabilities in a one-dimensional toy model with simplified noise. Finally, we study the security implications of incomplete knowledge of the source’s probability distribution P. We derive a bound on the min-entropy of the extracted key under the worst-case assumption, where the attacker knows P exactly. | ['Evgeny Verbitskiy', 'Pim Tuyls', 'Chibuzo Obi', 'Berry Schoenmakers', 'Boris Skoric'] | Key Extraction From General Nondiscrete Signals | 855,904 |
"The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in co-operation." -Tim Berners-Lee, James Hendler, Ora Lassila, The Semantic Web, Scientific American, May 2001. | ['Dieter Fensel'] | Semantic web services: a communication infrastructure for eWork and eCommerce | 509,379 |
In this letter, a novel approach that utilizes the spectrum information (i.e., images) provided in a modern light detection and ranging (LiDAR) sensor is proposed for the registration of multistation LiDAR data sets. First, the conjugate points in the images collected at varied LiDAR stations are extracted through the Speeded-Up Robust Features (SURF) technique. Then, by applying the image-object space mapping technique, the 3-D coordinates of the conjugate image points can be obtained. Those identified 3-D conjugate points are then fed into a registration model so that the transformation parameters can be immediately solved using an efficient noniterative solution to linear transformations. Based on numerical results from a case study, it has been demonstrated that, by implementing the proposed approach, a fully automatic and reliable registration of multistation LiDAR point clouds can be achieved without the need for any human intervention. | ['Jen-Yu Han', 'Nei-Hao Perng', 'Huang-Jie Chen'] | LiDAR Point Cloud Registration by Image Detection Technique | 278,990 |
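Once 3-D conjugate points are in hand, the station-to-station transform can be recovered in closed form. As a stand-in for the paper's noniterative solver, the sketch below uses the classic SVD-based (Kabsch) solution for the rigid case.

```python
import numpy as np

# Closed-form rigid alignment of matched 3-D point sets (Kabsch/Procrustes).
def rigid_align(P, Q):
    """Return R, t minimizing ||R @ p + t - q|| over matched rows of P, Q (Nx3)."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

P = np.random.default_rng(0).normal(size=(10, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])   # rotate and translate the cloud
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true), np.round(t, 6))  # recovers the true transform
```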
The problem of grasping is widely studied in the robotics community. This project focuses on the identification of object graspable features using images and object structural information. The primary aim is the creation of a framework in which the information gathered by the vision system can be integrated with automatically generated knowledge, modelled by means of fuzzy description logics. | ['Nicola Vitucci'] | Autonomous object manipulation: a semantic-driven approach | 668,906 |
In this paper, we provide a new axiomatization of the event-model-based Dynamic Epistemic Logic, based on the completeness proof method proposed in [Wang and Cao, 2013]. This axiomatization does not use any of the standard reduction axioms, but naturally captures the essence of the update product. We demonstrate the use of our new axiomatization and the corresponding proof techniques by three sets of results: characterization theorems of the update operations, representation theorems of the DEL-generatable epistemic temporal structures given a fixed event model, and a complete axiomatization of DEL on models with protocols. | ['Yanjing Wang', 'Guillaume Aucher'] | An alternative axiomatization of DEL and its applications | 586,090 |
An intervention on a variable removes the influences that usually have a causal effect on that variable. Gates [1] are a general-purpose graphical modelling notation for representing such context-specific independencies in the structure of a graphical model. We extend d-separation to cover gated graphical models and show that it subsumes do calculus [2] when gates are used to represent interventions. We also show how standard message passing inference algorithms, such as belief propagation, can be applied to the gated graph. This demonstrates that causal reasoning can be performed by probabilistic inference alone. | ['John Winn'] | Causality with Gates | 91,179 |
Sufficient conditions are established for the controllability of quasi-linear delay systems with nonlinear perturbations. These conditions are obtained by solving a system of nonlinear integral equations with the aid of Schauder's fixed-point principle. | ['Krishnan Balachandran'] | Nonlinear perturbations of quasi-linear delay control systems. | 769,865 |
MEseum: Personalized Experience with Narrative Visualization for Museum Visitors | ['Ali Arya', 'Jesse Gerroir', 'Efetobore Mike-Ifeta', 'Andres A. Navarro-Newball', 'Edmund Prakash'] | MEseum: Personalized Experience with Narrative Visualization for Museum Visitors | 847,047 |
In this paper, we present a formalisation of a subset of the unifying theories of programming (UTP). In UTP, the alphabetised relational calculus is used to describe and relate different programming paradigms, including functional, imperative, logic, and parallel programming. We develop a verification framework for UTP; we give a formal semantics to an imperative programming language, and use our definitions to create a deep embedding of the language in Z. We use ProofPowerZ, a theorem prover for Z to provide mechanised support for reasoning about programs in the unifying theory. | ['Gift Nuka', 'Jim Woodcock'] | Mechanising a Unifying Theory | 881,975 |
Redundant number systems have been widely used in the design of fast arithmetic circuits. The Signed-Digit (SD), or more generally the High-Radix SD (HRSD), number system is one of the most important redundant number systems. HRSD additions are used as basic operations in many arithmetic functions; hence, improving the characteristics of these additions will improve the performance of almost all arithmetic modules. Several HRSD adders have been introduced in the literature. In this paper, a new maximally redundant HRSD adder is proposed and compared to some of the most efficient HRSD adders previously published. The proposed adder is fabricated using a standard TSMC 65nm CMOS technology at a 1 V supply voltage and consumes 2.5% less power than the best previously published HRSD design. These implementations are also synthesized with an FPGA flow on a Xilinx Virtex2. The experimental results show 5% and 6% decreases in area and delay, respectively. | ['Somayeh Timarchi', 'Keivan Navi', 'Omid Kavehei'] | Maximally Redundant High-Radix Signed-Digit Adder: New Algorithm and Implementation | 168,420 |
In classification tasks, the class-modular strategy has been widely used and has outperformed the classical strategy for pattern classification in many applications. However, in some modular architectures, such as one-against-all in support vector machine classifiers, the training data for one class may heavily outnumber the other classes. In this challenging situation, the trained classifier will accurately classify the majority class but marginalize the minority class. As a result, the True Negatives rate (TNr) will be very high while the True Positives rate (TPr) will be low. The main goal of this work is to improve the TPr without much sacrifice in the TNr. In this paper, we propose oversampling the minority class using polynomial fitting functions. Four new approaches are proposed: star topology, bus topology, polynomial curve topology, and mesh topology. The star and mesh topology approaches led to the best performance. | ['Sami Gazzah', 'N.E. Ben Amara'] | New Oversampling Approaches Based on Polynomial Fitting for Imbalanced Data Sets | 60,385 |
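A stripped-down version of the star-topology idea is easy to sketch: connect each minority sample to the minority centroid and synthesize new points along the path. The sketch below uses a degree-1 (linear) path and arbitrary parameters; the paper fits higher-order polynomial functions and explores three further topologies.

```python
import numpy as np

# "Star topology" oversampling in miniature: new minority samples are drawn
# along the segment joining each minority point to the minority centroid.
def star_oversample(X_min, n_new, seed=0):
    rng = np.random.default_rng(seed)
    centroid = X_min.mean(axis=0)
    synth = []
    for _ in range(n_new):
        x = X_min[rng.integers(len(X_min))]  # random minority sample
        lam = rng.uniform(0.1, 0.9)          # position along the segment
        synth.append(centroid + lam * (x - centroid))
    return np.array(synth)

X_min = np.random.default_rng(1).normal(size=(20, 2))
print(star_oversample(X_min, 5))
```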
In this paper, a new representation model using the cycles present in the topological structure of molecules is proposed. By extracting all cycles of a molecule, its topological structure can be represented by means of a weighted, colored, undirected graph named a "cycle graph", where the nodes represent the cycles in the molecule and the edges the common nodes among those cycles. In this paper, the capacity of the cycle graph for the extraction of topological descriptors that provide appropriate measures of the complexity, cyclicity, and symmetry of cyclical systems is presented. | ['Gonzalo Cerruela García', 'Irene Luque Ruiz', 'Miguel Ángel Gómez-Nieto'] | Representation of the molecular topology of cyclical structures by means of cycle graphs. 1. Extraction of topological properties. | 575,367 |
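Constructing a cycle graph is straightforward with networkx, as the sketch below shows for the two fused six-rings of naphthalene. One simplification to note: we take a cycle basis, whereas the paper extracts all cycles of the molecule.

```python
import networkx as nx
from itertools import combinations

# Nodes of the "cycle graph" are cycles of the molecular graph; an edge links
# two cycles that share atoms, weighted by how many atoms they share.
mol = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0),   # ring A
                (5, 6), (6, 7), (7, 8), (8, 9), (9, 4)])          # fused ring B
cycles = nx.cycle_basis(mol)           # a cycle basis stands in for all cycles

cg = nx.Graph()
cg.add_nodes_from(range(len(cycles)))
for i, j in combinations(range(len(cycles)), 2):
    shared = set(cycles[i]) & set(cycles[j])
    if shared:
        cg.add_edge(i, j, weight=len(shared))

print(cycles)
print(cg.edges(data=True))             # the fused rings yield one weighted edge
```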
The U. S. Department of Defense deploys civilians to Afghanistan and Iraq to provide engineering, logistical, financial, and security support to ministerial-level officials and organizations. Those civilians are prepared for deployment in a program that includes cultural familiarization training through participation in live vignettes that recreate meetings and encounters with foreign government officials, military and police personnel, and private citizens. In those vignettes the officials are portrayed by human role-players who enact their roles based on scripts and their own personal knowledge of the culture and roles they are depicting. In the Congressionally-mandated study reported on in this paper, emerging modeling and simulation technologies in the areas of virtual environments, natural language processing, and artificial intelligence are being examined to determine the feasibility of replacing the human role-players in this training with virtual characters, or avatars, in a virtual environment. A decomposition of an interaction between a trainee and a role-player avatar reveals that several technologies must operate at a high level of effectiveness to satisfy the training requirements. | ['Mikel D. Petty', 'Walter S. Barge'] | Emerging modeling and simulation technologies needed to implement cultural familiarization training in virtual environments (WIP) | 614,199 |
Most static algorithms that schedule parallel programs represented by macro dataflow graphs are sequential. This paper discusses the essential issues pertaining to parallelization of static scheduling and presents two efficient parallel scheduling algorithms. The proposed algorithms have been implemented on an Intel Paragon machine and their performances have been evaluated. These algorithms produce high-quality scheduling and are much faster than existing sequential and parallel algorithms. | ['Min-You Wu', 'Wei Shu'] | On parallelization of static scheduling algorithms | 236,936 |
This paper presents a new state-of-the-art for document image classification and retrieval, using features learned by deep convolutional neural networks (CNNs). In object and scene analysis, deep neural nets are capable of learning a hierarchical chain of abstraction from pixel inputs to concise and descriptive representations. The current work explores this capacity in the realm of document analysis, and confirms that this representation strategy is superior to a variety of popular handcrafted alternatives. Extensive experiments show that (i) features extracted from CNNs are robust to compression, (ii) CNNs trained on non-document images transfer well to document analysis tasks, and (iii) enforcing region-specific feature-learning is unnecessary given sufficient training data. This work also makes available a new labelled subset of the IIT-CDIP collection, containing 400,000 document images across 16 categories. | ['Adam W. Harley', 'Alex Ufkes', 'Konstantinos G. Derpanis'] | Evaluation of deep convolutional nets for document image classification and retrieval | 128,161 |
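Feature extraction of this kind takes only a few lines with torchvision (assuming a recent version with the weights-enum API). The sketch below reflects our model and layer choice, with a placeholder image path: it pulls the 4096-d penultimate-layer activations of an ImageNet-pretrained VGG-16, the transfer setting whose effectiveness finding (ii) reports.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Off-the-shelf CNN features for a document image; "page.png" is a placeholder.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
model.classifier = model.classifier[:-1]   # drop the final class layer: 4096-d output
model.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

with torch.no_grad():
    x = prep(Image.open("page.png").convert("RGB")).unsqueeze(0)
    feat = model(x)                        # feed to an SVM / k-NN retriever
print(feat.shape)                          # torch.Size([1, 4096])
```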
Several half-duplex decode-and-forward two-way relaying protocols that efficiently exploit quantized channel state information (CSI) at the transmitters (CSIT) are investigated. Adapting the number of channel uses for each relaying phase and the transmit power based on limited CSIT is shown to result in a significant improvement in the diversity-multiplexing tradeoff (DMT). With CSI feedback from relay to sources, allocating the number of channel uses is sufficient to match the performance of power allocation. However, power control is instrumental to efficiently exploit CSI feedback from sources to relay. | ['Tung T. Kim', 'H. Vincent Poor'] | On the DMT of bidirectional relaying with limited feedback | 522,824 |
Evaluating bias in retrieval systems for recall oriented documents retrieval. | ['Sanam Noor', 'Shariq Bashir'] | Evaluating bias in retrieval systems for recall oriented documents retrieval. | 809,996 |
We have determined the capacity and information efficiency of an associative net configured in a brain-like way with partial connectivity and noisy input cues. Recall theory was used to calculate the capacity when pattern recall is achieved using a winners-take-all strategy. Transforming the dendritic sum according to input activity and unit usage can greatly increase the capacity of the associative net under these conditions. For moderately sparse patterns, maximum information efficiency is achieved with very low connectivity levels (≤ 10%). This corresponds to the level of connectivity commonly seen in the brain and invites speculation that the brain is connected in the most information efficient way. | ['Bruce P. Graham', 'David Willshaw'] | Capacity and Information Efficiency of a Brain-like Associative Net | 286,544 |
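A tiny Willshaw-style associative net shows the recall mechanics. In the sketch below the sizes, sparsity, and noise level are arbitrary, and the transform of the dendritic sum by input activity and unit usage mentioned above is reduced to a single normalization.

```python
import numpy as np

# Binary associative net: clipped Hebbian storage, winners-take-all recall.
rng = np.random.default_rng(0)
n_in, n_out, k = 100, 100, 5                 # k active units per pattern

def make(n, k):
    v = np.zeros(n, int)
    v[rng.choice(n, k, replace=False)] = 1
    return v

pairs = [(make(n_in, k), make(n_out, k)) for _ in range(30)]
W = np.zeros((n_out, n_in), int)
for x, y in pairs:
    W |= np.outer(y, x)                      # clipped (0/1) Hebbian storage

x, y = pairs[0]
cue = x.copy()
cue[np.flatnonzero(x)[:2]] = 0               # noisy cue: drop 2 of 5 active bits
sums = (W @ cue) / cue.sum()                 # dendritic sums, activity-normalized
recalled = np.zeros(n_out, int)
recalled[np.argsort(sums)[-k:]] = 1          # winners-take-all: keep the top k
print((recalled & y).sum(), "of", k, "correct")
```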
Camera trapping is used by conservation biologists to study snow leopards. In this research, we introduce techniques that find motion in camera trap images. Images are grouped into sets and a common background image is computed for each set. The background and superpixel-based features are then used to segment each image into objects that correspond to motion. The proposed methods are robust to changes in illumination due to time of day or the presence of camera flash. | ['Agnieszka C. Miguel', 'Sara Beery', 'Erica Flores', 'Loren Klemesrud', 'Rana Bayrakcismith'] | Finding areas of motion in camera trap images | 880,234 |
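The grouping-and-background step can be sketched in a few lines of numpy: compute a per-pixel median over the image set and threshold the difference. Frames here are synthetic grayscale stand-ins, and the paper goes on to refine the mask with superpixel-based features, which we omit.

```python
import numpy as np

# Median background model over a set of frames, then a difference threshold.
rng = np.random.default_rng(0)
frames = rng.normal(100, 2, size=(10, 120, 160))   # static scene plus sensor noise
frames[7, 40:60, 50:80] += 50                      # an "animal" in frame 7

background = np.median(frames, axis=0)             # robust per-pixel background
diff = np.abs(frames[7] - background)
mask = diff > 10                                   # motion / foreground pixels
print(mask.sum(), "foreground pixels")
```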