abstract (string, 0–11.1k chars) | authors (string, 9–1.96k chars) | title (string, 4–353 chars) | __index_level_0__ (int64, 3–1,000k)
---|---|---|---
Combinatorial testing has been widely utilized in testing software, e.g., the Siemens Suite. This paper aims to investigate why combinatorial testing works on the Siemens Suite. Experiments are designed to obtain the MFS (minimal failure-causing schemas) for the Siemens Suite, which has been used as a benchmark to evaluate the effectiveness of many testing techniques. The lower bound of the fault-detecting probability of a t-way combinatorial test suite for each program is calculated by analyzing the strengths and the number of MFS for each faulty version. The computational results can explain the effectiveness of combinatorial testing on the Siemens Suite. | ['Chiya Xu', 'Yuanchao Qi', 'Ziyuan Wang', 'Weifeng Zhang'] | Analyzing Minimal Failure-Causing Schemas in Siemens Suite | 847,155 |
| ['Ruiwen Chen', 'Valentine Kabanets', 'Nitin Saurabh'] | An Improved Deterministic #SAT Algorithm for Small De Morgan Formulas. | 744,162 |
Many energy constrained devices such as cell-phones and wearable medical devices utilize sparse or bursty signals — signals characterized by relatively short periods of high activity. Traditional Nyquist-Rate converters are an inefficient tool for converting sparse signals, as a great deal of energy is wasted converting portions of the signal with relatively low information content. Asynchronous sampling methods attempt to circumvent this inefficiency by sampling based upon the characteristics of the signal itself. However, many asynchronous sampling solutions struggle with a robustness/resolution trade-off. In this paper, we present a new paradigm in which a flexible analog frontend is paired with an asynchronous successive approximation data converter and an asynchronous time-to-digital converter. The result of this paradigm is that the system can be adapted to individual applications, allowing specific data points to be targeted and avoiding data conversion inefficiency. The system will be demonstrated along with the example application of measuring the QRS-complex within an ECG waveform. The demonstrated system, fabricated in standard 0.5μm and 0.35μm processes, produces a minimal number of voltage/time pairs required for studying the QRS complex while consuming 5.96μW of static power. | ['Brandon M. Kelly', 'David W. Graham Lane'] | An asynchronous ADC with reconfigurable analog pre-processing | 863,015 |
| ['Marco Conti', 'Franca Delmastro', 'Giovanni Turi'] | Peer-to-Peer Computing in Mobile Ad Hoc Networks. | 746,661 |
We describe a first principles based integrated modeling environment to study urban socio-communication networks which represent not just the physical cellular communication network, but also urban populations carrying digital devices interacting with the cellular network. The modeling environment is designed specifically to understand spectrum demand and dynamic cellular network traffic. One of its key features is its ability to support individual-based models at highly resolved spatial and temporal scales. We have instantiated the modeling environment by developing detailed models of population mobility, device ownership, calling patterns and call network. By composing these models using an appropriate in-built workflow, we obtain an integrated model that represents a dynamic socio-communication network for an entire urban region. In contrast with earlier papers that typically use proprietary data, these models use open source and commercial data sets. The dynamic model represents, for a normative day, every individual in an entire region, with detailed demographics, a minute-by-minute schedule of each person's activities, the locations where these activities take place, and the calling behavior of every individual. As an illustration of the applicability of the modeling environment, we have developed such a dynamic model for Portland, Oregon, comprising approximately 1.6 million individuals. We highlight the unique features of the models and the modeling environment by describing three realistic case studies. | ['Richard J. Beckman', 'Karthik Channakeshava', 'Fei Huang', 'Junwhan Kim', 'Achla Marathe', 'Madhav V. Marathe', 'Guanhong Pei', 'Sudip Saha', 'Anil Kumar S. Vullikanti'] | Integrated Multi-Network Modeling Environment for Spectrum Management | 225,204 |
Nowadays, the evolution of mobile devices has made the demand for information search grow rapidly. Many applications have been developed for recognition tasks. In this paper, we present a new and efficient visual search system for finding similar images in a large database. We first propose a compact, discriminative image representation called the Locality Preserving Vector, which explicitly exploits the neighborhood structure of the data and attains high retrieval accuracy in a low-dimensional space. We then integrate topic modeling into the visual search system to extract topic-related and image-specific information. This information enables images that likely contain the same objects to be ranked with higher similarity. The experiments show that our approach provides competitive accuracy with very low memory cost. | ['Nguyen Anh Tu', 'Young-Koo Lee'] | Locality Preserving Vector and Image-Specific Topic Model for Visual Recognition | 807,738 |
We describe a low-power VLSI wake-up detector for use in an acoustic surveillance sensor network. The detection criterion is based on the degree of low-frequency periodicity in the acoustic signal. To this end, we have developed a periodicity estimation algorithm that maps particularly well to a low-power VLSI implementation. The time-domain algorithm is based on the "bumpiness" of the autocorrelation of one-bit version of the signal. We discuss the relationship of this algorithm to the maximum-likelihood estimator for periodicity. We then describe a full-custom CMOS ASIC that implements this algorithm. This ASIC is fully functional and its core consumes 835 nano-Watts. The ASIC was integrated into an acoustic enclosure and tested outdoors on synthesized sounds. This unit was also deployed in a three-node sensor network and tested on ground-based vehicles. | ['David H. Goldberg', 'Andreas G. Andreou', 'Pedro Julián', 'Philippe O. Pouliquen', 'Laurence G. Riddle', 'Rich Rosasco'] | A wake-up detector for an acoustic surveillance sensor network: algorithm and VLSI implementation | 87,991 |
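The periodicity criterion described in the abstract above can be sketched in a few lines of Python. This is a minimal illustration of the one-bit autocorrelation idea; the "bumpiness" measure (sum of absolute first differences of the autocorrelation) and the median-based quantizer are illustrative choices, not the paper's exact maximum-likelihood formulation.

```python
import numpy as np

def one_bit_periodicity(signal, max_lag=256):
    """Periodicity score from the autocorrelation of a one-bit signal."""
    bits = np.sign(signal - np.median(signal))          # crude 1-bit quantizer
    acf = np.array([1.0 if lag == 0 else np.mean(bits[:-lag] * bits[lag:])
                    for lag in range(max_lag)])
    # "Bumpiness": how strongly the autocorrelation oscillates; periodic
    # signals produce large, regular bumps at multiples of the period.
    return np.sum(np.abs(np.diff(acf)))

t = np.arange(0, 1, 1 / 4000)                           # 1 s at 4 kHz
tone = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)
noise = np.random.randn(t.size)
print(one_bit_periodicity(tone) > one_bit_periodicity(noise))  # typically True
```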
Vision location methods have been widely used in the motion estimation of unmanned aerial vehicles (UAVs). The noise of the vision location result is usually modeled as white Gaussian noise so that this result can be utilized as the observation vector in the Kalman filter to estimate the motion of the vehicle. Since the noise of the vision location result is affected by external environments, the variance of the noise is uncertain. However, in previous research the variance is usually set to a fixed empirical value, which lowers the accuracy of the motion estimation. In this paper, a novel adaptive noise variance identification (ANVI) method is proposed, which utilizes the special kinematic property of the UAV for frequency analysis and adaptively identifies the variance of the noise. Then, the adaptively identified variance is used in the Kalman filter for accurate motion estimation. The performance of the proposed method is assessed by simulations and field experiments on a quadrotor system. The results illustrate the effectiveness of the method. | ['Fan Zhou', 'Wei Zheng', 'Zengfu Wang'] | Adaptive Noise Variance Identification in Vision-aided Motion Estimation for UAVs | 758,675 |
RSA is a very popular public-key based cryptosystem. The security of RSA relies on the difficulty of large integer factorization. The General Number Field Sieve (GNFS) is an algorithm for factoring very large numbers, especially integers over 110 digits. It is the asymptotically fastest known factoring algorithm. In this paper, we have successfully implemented the parallel General Number Field Sieve (GNFS) algorithm and integrated it with a new method called the Block Wiedemann algorithm to solve the large and sparse linear system over GF(2) generated by the GNFS algorithm. Detailed parallel experimental results on a SUN cluster are presented as well. | ['Na Guo', 'Laurence T. Yang', 'Man Lin', 'John Quinn'] | A Parallel GNFS Integrated with the Block Wiedemann's Algorithm for Integer Factorization | 96,142 |
Originally developed to connect processors and memories in multicomputers, prior research and design of interconnection networks have focused largely on performance. As these networks get deployed in a wide range of new applications, where power is becoming a key design constraint, we need to seriously consider power efficiency in designing interconnection networks. As the demand for network bandwidth increases, communication links, already a significant consumer of power now, will take up an ever larger portion of total system power budget. In this paper we motivate the use of dynamic voltage scaling (DVS) for links, where the frequency and voltage of links are dynamically adjusted to minimize power consumption. We propose a history-based DVS policy that judiciously adjusts link frequencies and voltages based on past utilization. Our approach realizes up to 6.3× power savings (4.6× on average). This is accompanied by a moderate impact on performance (15.2% increase in average latency before network saturation and 2.5% reduction in throughput.) To the best of our knowledge, this is the first study that targets dynamic power optimization of interconnection networks. | ['Li Shang', 'Li-Shiuan Peh', 'Niraj K. Jha'] | Dynamic voltage scaling with links for power optimization of interconnection networks | 85,751 |
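A history-based DVS policy of the kind described above can be sketched as follows; the utilization thresholds and the frequency/voltage table are hypothetical values for illustration, not the operating points studied in the paper.

```python
# Hypothetical frequency (GHz) -> voltage (V) operating points for a link.
DVS_TABLE = [(0.25, 0.8), (0.5, 0.9), (1.0, 1.1), (2.0, 1.3)]

def next_operating_point(level, past_utilizations, up=0.7, down=0.3):
    """Pick the next (freq, volt) level from recent link utilization history."""
    avg = sum(past_utilizations) / len(past_utilizations)
    if avg > up and level < len(DVS_TABLE) - 1:
        level += 1          # link is busy: raise frequency/voltage
    elif avg < down and level > 0:
        level -= 1          # link is idle: lower frequency/voltage to save power
    return level, DVS_TABLE[level]

level = 2
level, (f, v) = next_operating_point(level, [0.1, 0.2, 0.15])
print(f, v)   # steps down to (0.5, 0.9)
```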
This paper describes a method for labelling structural parts of a musical piece. Existing methods for the analysis of piece structure often name the parts with musically meaningless tags, e.g., "p1", "p2", "p3". Given a sequence of these tags as an input, the proposed system assigns musically more meaningful labels to these; e.g., given the input "p1, p2, p3, p2, p3" the system might produce "intro, verse, chorus, verse, chorus". The label assignment is chosen by scoring the resulting label sequences with Markov models. Both traditional and variable-order Markov models are evaluated for the sequence modelling. Search over the label permutations is done with N-best variant of token passing algorithm. The proposed method is evaluated with leave-one-out cross-validations on two large manually annotated data sets of popular music. The results show that Markov models perform well in the desired task. | ['Jouni Paulus', 'Anssi Klapuri'] | Labelling the Structural Parts of a Music Piece with Markov Models | 458,844 |
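The label-assignment step above can be illustrated with a first-order Markov model scoring candidate label sequences; the transition probabilities below are made up for the example, and exhaustive comparison of candidates stands in for the paper's N-best token passing search.

```python
import math

# Hypothetical bigram transition probabilities learned from annotated pieces.
TRANS = {("intro", "verse"): 0.6, ("verse", "chorus"): 0.7,
         ("chorus", "verse"): 0.5, ("chorus", "outro"): 0.2}
START = {"intro": 0.8, "verse": 0.2}

def score(labels, floor=1e-6):
    """Log-probability of a label sequence under a first-order Markov model."""
    logp = math.log(START.get(labels[0], floor))
    for prev, cur in zip(labels, labels[1:]):
        logp += math.log(TRANS.get((prev, cur), floor))
    return logp

# Given tags p1..p3, compare two candidate labelings and keep the best one.
candidates = [["intro", "verse", "chorus", "verse", "chorus"],
              ["intro", "chorus", "verse", "chorus", "verse"]]
print(max(candidates, key=score))
```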
Question answering research has only recently started to spread from short factoid questions to more complex ones. One significant challenge is the evaluation: manual evaluation is a difficult, time-consuming process and not applicable within efficient development of systems. Automatic evaluation requires a corpus of questions and answers, a definition of what is a correct answer, and a way to compare the correct answers to automatic answers produced by a system. For this purpose we present a Wikipedia-based corpus of Why-questions and corresponding answers and articles. The corpus was built by a novel method: paid participants were contacted through a Web interface, a procedure which allowed dynamic, fast and inexpensive development of data collection methods. Each question in the corpus has several corresponding, partly overlapping answers, which is an asset when estimating the correctness of answers. In addition, the corpus contains information related to the corpus collection process. We believe this additional information can be used to post-process the data, and to develop an automatic approval system for further data collection projects conducted in a similar manner. | ['Joanna Mrozinski', 'Edward W. D. Whittaker', 'Sadaoki Furui'] | Collecting a Why-Question Corpus for Development and Evaluation of an Automatic QA-System | 385,593 |
Pilot symbol assisted modulation (PSAM) has previously been shown to give good performance in flat fading, noise and cochannel interference. The present paper analyzes its performance in ISI due to frequency selective fading, and provides a similar analysis of differential detection for comparison. The paper also introduces a method for performing the formidable average over transmitted data patterns simply and with an analytical result. PSAM is shown to be sensitive to RMS delay spread, though it always gives better performance than differential detection. | ['James K. Cavers'] | Pilot symbol assisted modulation and differential detection in fading and delay spread | 231,284 |
| ['Ana Paula Cláudio', 'Maria Beatriz Carmo', 'Augusta Gaspar', 'Renato Teixeira'] | Using Expressive and Talkative Virtual Characters in Social Anxiety Disorder Treatment | 808,818 |
This article describes the construction and validation of a three-dimensional model of the human CCR5 receptor using a homology-based approach starting from the X-ray structure of the bovine rhodopsin receptor. The reliability of the model is assessed through molecular dynamics and docking simulations using both natural agonists and a synthetic antagonist. Some important structural and functional features of the receptor cavity and the extracellular loops are identified, in agreement with data available from site-directed mutagenesis. The results of this study help to explain the structural basis for the recognition, activation, and inhibition processes of CCR5 and may provide fresh insights for the design of HIV-1 entry blockers. | ['Alessandra Fano', 'David W. Ritchie', 'Antonio Carrieri'] | Modeling the structural basis of human CCR5 chemokine receptor function : From homology model building and molecular dynamics validation to agonist and antagonist docking | 589,325 |
A virtual environment for interactive molecular dynamics simulation has been designed and implemented at the Fraunhofer Institute for Computer Graphics. Different kinds of virtual reality devices are used in the environment for immersive display and interaction with the molecular system. A parallel computer is used to simulate the physical and chemical properties of the molecular system dynamically. A high-speed network exchanges data between the simulation program and the modeling program. The molecular dynamics simulation virtual environment provides scientists with a powerful tool to immersively study the world of molecules. The dynamic interaction between an AIDS antiviral drug and the reverse transcriptase enzyme is illustrated in the paper. | ['Zhuming Ai', 'Torsten Fröhlich'] | Molecular Dynamics Simulation in Virtual Environments | 44,072 |
This paper details tester-based optical and electrical diagnostic system and techniques that aim at diagnosing various types of problems that exist in today's VLSI chips, especially during initial bring-up stage. The versatility of the electrical test creates flexible test controls while optical diagnostic tools, such as emission-based systems, provide a deep understanding of what is going on inside the chip. Tightly integrating both methods produces a powerful diagnostic system and it also opens a door for creating a series of new diagnostic techniques for resolving new families of problems as illustrated in this paper with several examples. | ['Peilin Song', 'Franco Stellari'] | Tester-based optical and electrical diagnostic system and techniques | 35,306 |
The impact of information technology on business operations is widely recognized and its role in the emergence of new business models is well-known. In order to leverage the benefits of IT-supported business processes the security of the underlying information systems must be managed. Various so-called best-practice models and information security standards have positioned themselves as generic solutions for a broad range of risks. In this paper we inspect the metamodel of the information security standard ISO 27001 and describe its application for a set of generalized phases in information security management. We conclude with a demonstration of its practicality by providing an example of how such a metamodel can be applied, before discussing potential future research. | ['Danijel Milicevic', 'Matthias Goeken'] | Application of models in information security management | 120,937 |
Foreground segmentation enables dynamic reconstruction of the moving objects in static scenes. After KinectFusion proposed a novel method that constructs the foreground from the Iterative Closest Point (ICP) outliers, numerous studies proposed filtration methods to reduce outlier noise. To this end, the relationship between outliers and the foreground is investigated, and a method to efficiently extract the foreground from outliers is proposed. The foreground is found to be directly connected to ICP distance outliers rather than the angle and distance outliers that have been used in past research. Quantitative results show that the proposed method outperforms prevalent foreground extraction methods, and attains an average increase of 11.8% in foreground quality. Moreover, real-time speed of 50 fps is achieved without heavy graph-based refinements, such as GrabCut. The proposed depth features surpass current 3D GrabCut, which only uses RGB-N. | ['Hamdi Sahloul', 'H. Jorge D. Figueroa', 'Shouhei Shirafuji', 'Jun Ota'] | Foreground segmentation with efficient selection from ICP outliers in 3D scene | 651,689 |
We present some improvements in the procedure for calculating power spectra of signals based on finite state descriptions and constant block size. In addition to simplified calculations, our results provide some insight into the form of the closed expressions and to the relation between the spectra and other properties of the codes. | ['Jørn Justesen'] | Calculation of power spectra for block coded signals | 204,644 |
| ['Frank Hannig', 'Andreas Herkersdorf'] | Introduction to the Special Issue on Testing, prototyping, and debugging of multi-core architectures | 597,631 |
In recent years the promotion and incorporation of computer-assisted learning courseware has been a feature of many Geography departments in higher education in the UK. There is little disagreement that this development needs to be thoroughly evaluated to ensure quality and effectiveness. However there has been a lack of rigorous evaluation in practice. A detailed illuminative evaluation of 120 Geography students using focus group interviews and an attitude survey reveals that CAL packages remain unpopular with most learners. This can be attributed to the content and presentation of packages but it is also suggested that contexts of use and perhaps staff disinterest are explanatory factors. Some gender-based and age-based attitude differences are noted. This type of evaluation is of greater use to curriculum developers than objective-led approaches. | ['Greg Spellman'] | Evaluation of CAL in higher education Geography | 156,435 |
Topic Modeling has been a useful tool for finding abstract topics (which are collections of words) governing a collection of documents. Each document is then expressed as a collection of generated topics. The most basic topic model is Latent Dirichlet Allocation (LDA). In this paper, we have developed a Gibbs sampling algorithm for Hierarchical Latent Dirichlet Allocation (HLDA) by incorporating time into our topic model. We call our model Hierarchical Latent Dirichlet Allocation with Topic Over Time (HLDA-TOT). We find topics for a collection of songs from the period 1990 to 2010. The dataset we used is taken from the Million Song Dataset (MSD) and consists of a collection of 1000 songs. We have used the Gibbs sampling algorithm for inference in both HLDA and HLDA-TOT. Our experimental results compare the performance of HLDA and HLDA-TOT, and show that HLDA-TOT performs better in terms of 1) the number of topics generated for different depths, 2) the number of empty topics generated for different depths, and 3) the held-out log likelihood for different depths. | ['Nishma Laitonjam', 'Vineet Padmanabhan', 'Arun K. Pujari', 'Rajendra Prasad Lal'] | Topic Modelling for Songs | 695,937 |
Workflows often operate in volatile environments in which the component services' QoS changes frequently. Optimally adapting to these changes becomes an important problem that must be addressed by the Web service composition and execution (WSCE) system being utilized. We adopt the A-WSCE framework that utilizes a three-stage approach for composing and executing Web workflows. The A-WSCE framework offers a way to adapt by defining multiple workflows and switching among them in case of component failure or changes in the QoS parameters. However, the A-WSCE framework suffers from the limitations imposed by a simple strategy of periodically checking the QoS offerings of randomly picked providers in order to decide whether the current workflow is optimal. To address these limitations, we associate the value of changed information (VOC) with each workflow and utilize the VOC to update which workflow to execute. We empirically demonstrate the improved performance of the workflows selected using the new approach in comparison to the original framework. | ['Girish Chafle', 'Prashant Doshi', 'John Harney', 'Sumit Mittal', 'Biplav Srivastava'] | Improved Adaptation of Web Service Compositions Using Value of Changed Information | 59,018 |
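The VOC-driven switching idea above can be sketched as follows; the utility scores (standing in for aggregated QoS) and the switching cost are hypothetical, and the paper's actual VOC computation is more involved than this difference of utilities.

```python
def value_of_changed_information(current_utility, best_alternative_utility):
    """VOC: expected gain from switching to the best alternative workflow."""
    return best_alternative_utility - current_utility

def choose_workflow(workflows, utilities, current, switch_cost=0.05):
    """Switch only when the VOC of the best alternative exceeds the cost."""
    best = max(workflows, key=lambda w: utilities[w])
    voc = value_of_changed_information(utilities[current], utilities[best])
    return best if voc > switch_cost else current

# Utilities are aggregated QoS scores for each pre-defined workflow.
utilities = {"wf_a": 0.72, "wf_b": 0.80, "wf_c": 0.65}
print(choose_workflow(list(utilities), utilities, current="wf_a"))  # "wf_b"
```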
A new conservative algorithm for both parallel and sequential simulation of networks is described. The technique is motivated by the construction of a high performance simulator for ATM networks. It permits very fast execution of models of ATM systems, both sequentially and in parallel. A simple analysis of the performance of the system is made. Initial performance results from parallel and sequential implementations are presented and compared with comparable results from an optimistic Time Warp-based simulator. It is shown that the conservative simulator performs well when the "density" of messages in the simulated system is high, a condition which is likely to hold in many interesting ATM scenarios. | ['John G. Cleary', 'Jya-Jang Tsai'] | Conservative parallel simulation of ATM networks | 446,190 |
| ['Antonio Moreno Ortiz', 'Victor Raskin', 'Sergei Nirenburg'] | New Developments in Ontological Semantics. | 744,832 |
Exploration and analysis of alternatives is one of the main activities in requirements engineering, both in early and in late requirements phases. While i* and i*-derived modeling notations provide facilities for capturing certain types of variability, domain properties (and other external influences) and their effects on i* models cannot be easily modeled. In this paper, we propose to explore how our previous work on context-dependent goal models can be extended to support i*. Here, we examine how i* modeling can benefit from both monitorable (i.e., defined through real-world phenomena) and non-monitorable (e.g., viewpoints, versions, etc.) contexts defined using our context framework. 1 Introduction i* is an agent-oriented modeling framework that centers on the notions of intelligent actor and intentional dependency. The Strategic Dependency (SD) model of i* focuses on representing the relevant actors in the organization together with their intentional dependencies, while the Strategic Rationale (SR) model captures the rationale behind the processes in organizations from the point of view of participating actors. Variability in requirements and in design has been identified as crucial for developing future software systems (4,5). Moreover, flexible, robust, adaptive, mobile and pervasive applications are expected to account for the properties of (as well as to adapt to changes in) their environments. Thus, modeling variability in the system environment and its effects on requirements and on other types of models is a highly desirable feature of a modeling framework. However, i* does not support capturing of how domain variations affect its diagrams. This leads to two situations. First, an oversimplification of the diagrams through the assumption of domain uniformity with the hope of producing an i* model that is adequate for most instances of a problem. Second, the production of multiple i* models to accommodate all domain variations. The former case leads to models that fail to account for the richness of domain variations, while the latter introduces serious model management problems due to the need to oversee large numbers of models as well as to capture their relationships. In this paper, we adapt the ideas from (3) to the i* modeling framework and propose an approach that uses contexts to structure domain variability and to concisely represent and analyze the variations in i* models resulting from this domain variability as well as from other external factors such as viewpoints, etc. | ['Alexei Lapouchnian', 'John Mylopoulos'] | Capturing Contextual Variability in i* Models | 664,350 |
A 900MHz frequency synthesizer is presented in this article. The purpose of the proposed architecture is to minimize lock time in Phase-Locked Loops (PLLs). The basic idea behind this topology is using a larger loop bandwidth and gain during the frequency switching transition and shifting gradually the loop bandwidth to the normal value after the PLL is locked. The structure has been simulated by HSPICE software in a TSMC 0.18um technology at the supply voltage of 1.8V. | ['Mehdi Ghasemzadeh', 'Sina Mahdavi', 'Abolfazl Zokaei', 'Khayrollah Hadidi'] | A new adaptive PLL to reduce the lock time in 0.18µm technology | 860,389 |
This contribution presents a machine vision system capable of revealing, detecting and characterizing defects on non-plane transparent surfaces. Because both transparent and opaque defects can be found on this kind of surface, special lighting conditions are required. Therefore, the cornerstone of this machine vision system is the innovative lighting system developed. Thanks to this, the defect segmentation is straightforward and has a very low computational burden, allowing real-time inspection. To aid in the conception of the imaging conditions, the lighting system is completely described and also compared with other commercial lighting systems. In addition, for the defect segmentation, a new adaptive threshold selection algorithm is proposed. Finally, the system performance is assessed by conducting a series of tests using a commercial model of headlamp lens. | ['S. Satorres Martinez', 'J. Gomez Ortega', 'J. Gamez Garcia', 'A. Sanchez Garcia'] | A machine vision system for defect characterization on transparent parts with non-plane surfaces | 244,469 |
Discovering potentially useful and previously unknown information or knowledge from heterogeneous web contents, such as "list all laptop prices from Walmart and Staples between 2013 and 2015 including make, type, screen size, CPU power, year of make", would require the difficult tasks of finding the schemas of web documents from different web pages, performing web content data integration, and building their virtual or physical data warehouse integration before web content extraction and mining from the database. Wrappers that extract target information from web pages can be manual, semi-supervised or automatic systems. Automatic systems such as the WebOMiner system use data extraction techniques based on parsing the web page HTML source code into a document object model (DOM) tree, then traversing the DOM for pattern discovery. Limitations of these existing systems include the use of complicated matching techniques, such as tree matching and finite state automata, and not yielding accurate results for complex queries such as historical and derived ones. This paper proposes building the WebOMiner S system, which uses web structure and content mining approaches on the DOM-tree HTML code to simplify, and make more easily extendable, the web data extraction process of the WebOMiner system. The WebOMiner system is based on non-deterministic finite state automata (NFA) to recognize and extract different web data types (e.g., text, images, links, and lists). The proposed WebOMiner S replaces the NFA of the WebOMiner with a frequent structure finder algorithm, which uses regular expression matching in the Java XPath parser and its methods (such as compile() and evaluate()) to dynamically discover the most frequent structure (the most frequently repeated blocks in the HTML code, represented as tags) in the DOM tree. This approach eliminates the need for any supervised training or updating the wrapper for each new B2C web page, making the approach simpler, more easily extendable and automated. | ['Christie I. Ezeife', 'Bindu Peravali'] | Comparative Mining of B2C Web Sites by Discovering Web Database Schemas | 886,530 |
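A Python stand-in for the frequent structure finder idea (the paper implements it with the Java XPath parser): count the child-tag signature of every DOM node and report the most repeated one, which on a B2C product page is typically the product block. The toy page below is an assumption for illustration.

```python
from collections import Counter
from xml.etree import ElementTree

def frequent_structure(html):
    """Find the most frequently repeated child-tag signature in a DOM tree."""
    root = ElementTree.fromstring(html)
    signatures = Counter()
    for node in root.iter():
        sig = tuple(child.tag for child in node)   # shape of this block
        if sig:
            signatures[sig] += 1
    return signatures.most_common(1)[0]

page = """<html><body>
  <div><img/><span/><b/></div>
  <div><img/><span/><b/></div>
  <div><img/><span/><b/></div>
  <ul><li/><li/></ul>
</body></html>"""
print(frequent_structure(page))   # (('img', 'span', 'b'), 3) -> product blocks
```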
| ['Afonso Celso Medina', 'Luis G. Nardin', 'Newton Narciso Pereira', 'Rui Carlos Botter', 'Jaime Simão Sichman'] | A Distributed Simulation Model of the Maritime Logistics in an Iron Ore Supply Chain Management | 785,446 |
Automatic detection and recognition of ships in satellite images is very important and has a wide array of applications. This paper concentrates on optical satellite sensors, which provide an important approach for ship monitoring. A graph-based fore/background segmentation scheme is used to extract ship candidates from optical satellite image chips after the detection step, from coarse to fine. Shadows on the ship are extracted in a CFAR scheme. Because all the parameters in the graph-based algorithms and CFAR are adaptively determined by the algorithms, no parameter tuning problem exists in our method. Experiments based on measured optical satellite images show that our method achieves a good balance between computation speed and ship extraction accuracy. | ['Feng Chen', 'Wenxian Yu', 'Xingzhao Liu', 'Kaizhi Wang', 'Lin Gong', 'Wentao Lv'] | Graph-based ship extraction scheme for optical satellite image | 44,618 |
The Papoulis Generalized Sampling Expansion (GSE) can be combined with Petersen-Middleton lattice sampling to obtain a GSE for n-dimensional bandlimited functions. Such expansions will be discussed for nonperiodic/periodic hybrids. The periodic and nonperiodic cases are incorporated as special cases. For a restrictive special case, a Parseval-like formula is proved for n-dimensional bandlimited functions sampled on periodic, nonuniform sample sets. | ['Steven H. Izen'] | Generalized sampling expansion on lattices | 517,282 |
| ['Lisette Espín-Noboa', 'Florian Lemmerich', 'Markus Strohmaier', 'Philipp Singer'] | A Hypotheses-driven Bayesian Approach for Understanding Edge Formation in Attributed Multigraphs | 948,687 |
| ['Jochen Sprickerhof', 'Andreas Nüchter', 'Kai Lingemann', 'Joachim Hertzberg'] | An Explicit Loop Closing Technique for 6D SLAM. | 764,442 |
This paper presents InDico, an approach for the automated analysis of business processes against confidentiality requirements. InDico is motivated by the fact that in spite of the correct deployment of access control mechanisms, information leaks in automated business processes can persist due to erroneous process design. InDico employs a meta-model based on Petri nets to formalize and analyze business processes, thereby enabling the identification of leaks caused by a flawed process design. | ['Rafael Accorsi', 'Claus Wonnemann'] | InDico: information flow analysis of business processes for confidentiality requirements | 387,443 |
Object-oriented database systems (OODBMS) offer powerful modeling concepts as required by advanced application domains like CAD/CAM/CAE or office automation. Typical applications have to handle large and complex structured objects which frequently change their value and their structure. As the structure is described in the schema of the database, support for schema evolution is a highly required feature. Therefore, a set of schema update primitives must be provided which can be used to perform the required changes, even in the presence of populated databases and running applications. In this paper, we use the versioning approach to schema evolution to support schema updates as a complex design task. The presented propagation mechanism is based on conversion functions that map objects between different types and can be used to support schema evolution and schema integration. | ['Sven-Eric Lautemann'] | A propagation mechanism for populated schema versions | 302,358 |
Designers of complex SoCs have to face the issue of tuning their design to achieve low power consumption without compromising performance. A set of complementary techniques at hardware level are able to reduce power consumption but most of these techniques impact system performance and behavior. At register transfer level, low power design flows are available. Unfortunately, equivalent design flows at transactional level are missing. In this paper we describe how a power/clock intent could be described at transactional level using a separation of concerns process and how the transactional simulation code merging functional and power behaviors can be generated automatically using a model-driven engineering approach. | ['Hend Affes', 'Amal Ben Ameur', 'Michel Auguin', 'François Verdier', 'Calypso Barnes'] | An ESL framework for low power architecture design space exploration | 952,969 |
We present a real-time rendering algorithm that generates soft shadows of dynamic scenes using a single light sample. As a depth-map algorithm it can handle arbitrary shadowed surfaces. The shadow-casting surfaces, however, should satisfy a few geometric properties to prevent artifacts. Our algorithm is based on a bivariate attenuation function, whose result modulates the intensity of a light causing shadows. The first argument specifies the distance of the occluding point to the shadowed point; the second argument measures how deep the shadowed point is inside the shadow. The attenuation function can be implemented using dependent texture accesses; the complete implementation of the algorithm can be accelerated by today’s graphics hardware. We outline the implementation, and discuss details of artifact prevention and filtering. | ['Florian Kirsch', 'Jürgen Döllner'] | Real-Time Soft Shadows Using a Single Light Sample | 402,999 |
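The bivariate attenuation function above can be illustrated with a small sketch; the linear falloffs and constants below are illustrative assumptions, whereas the paper samples its attenuation function into a texture for hardware-accelerated dependent texture accesses.

```python
def attenuation(occluder_distance, penetration_depth,
                max_distance=10.0, penumbra_width=0.5):
    """Bivariate light attenuation: 0 = fully lit, 1 = fully shadowed.

    Distant occluders cast softer shadows (wider penumbrae), and points
    deeper inside the shadow are darker.  The linear falloffs here are
    illustrative stand-ins for the paper's texture-sampled function.
    """
    softness = min(occluder_distance / max_distance, 1.0)   # 0 hard .. 1 soft
    width = penumbra_width * (0.1 + softness)
    return min(max(penetration_depth / width, 0.0), 1.0)    # clamp to [0, 1]

print(attenuation(1.0, 0.05))   # near occluder: nearly hard shadow edge
print(attenuation(9.0, 0.05))   # far occluder: softer, brighter penumbra
```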
Open source electrophysiology (ephys) recording systems have several advantages over commercial systems such as customization and affordability enabling more researchers to conduct ephys experiments. Notable open source ephys systems include Open-Ephys, NeuroRighter and more recently Willow, all of which have high channel count (64+), scalability, and advanced software to develop on top of. However, little work has been done to build an open source ephys system that is clinic compatible, particularly in the operating room where acute human electrocorticography (ECoG) research is performed. We developed an affordable (< $10,000) and open system for research purposes that features power isolation for patient safety, compact and water resistant enclosures and 256 recording channels sampled up to 20ksam/sec, 16-bit. The system was validated by recording ECoG with a high density, thin film device for an acute, awake craniotomy study at UC San Diego, Thornton Hospital Operating Room. | ['John Hermiz', 'Nick Rogers', 'Erik Kaestner', 'Mehran Ganji', 'Dan Cleary', 'Joseph Snider', 'David Barba', 'Shadi A. Dayeh', 'Eric Halgren', 'Vikash Gilja'] | A clinic compatible, open source electrophysiology system | 907,668 |
Cryptography has been used for secure communication since ancient days to provide confidentiality, integrity and availability of information. Public key cryptography is a classification of cryptography having a pair of keys for encryption and decryption. Public key cryptography provides security and authentication using several algorithms. The Rivest–Shamir–Adleman (RSA) algorithm has been prominent since its inception and is widely used. Several modified schemes were introduced to increase security in the RSA algorithm, involving additional complexity. In this paper, we introduce a generalized algorithm over RSA that is advanced, adaptable and scalable in the number of primes used. Our algorithm uses 2k prime numbers with secure key generation involving additional complexity, making it computationally infeasible to determine the decryption key. A user can use 4, 8, 16, 32, …, 2k prime numbers for generating public and private components securely. In our algorithm, the public key and private key components are generated by making use of N, where N is a function of 2k prime numbers. When an attacker obtains the public key component n out of {E, n} by using factorization techniques such as the general number field sieve or elliptic curve factorization, he or she can only obtain the two initial prime numbers, because n is a product of the first two prime numbers. However, finding the remaining prime numbers is computationally infeasible, as no relevant information is available to the attacker. Hence, it is difficult for the attacker to determine the private key component D out of {D, n} knowing the public key component {E, n}. Thus, it is practically impossible to break our system using a brute force attack. | ['Auqib Hamid Lone', 'Aqeel Khalique'] | Generalized RSA using 2k prime numbers with secure key generation | 898,338 |
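A toy sketch of key generation over 2k primes, assuming Python 3.8+ for the modular inverse. For simplicity it encrypts modulo the full product N, and the tiny hard-coded primes are purely illustrative, so this is not the paper's exact scheme.

```python
from math import prod

def keygen(primes, e=65537):
    """Toy key generation over 2k primes (k = len(primes) // 2)."""
    assert len(primes) % 2 == 0
    N = prod(primes)                       # function of all 2k primes
    phi = prod(p - 1 for p in primes)
    d = pow(e, -1, phi)                    # modular inverse, Python 3.8+
    n = primes[0] * primes[1]              # published component per the paper
    return e, d, n, N

# Tiny illustrative primes; a real deployment needs large random primes.
e, d, n, N = keygen([101, 103, 107, 109])  # k = 2, i.e., 4 primes
c = pow(42, e, N)                          # encrypt (here: modulo N)
print(pow(c, d, N))                        # 42 -- decryption recovers m
# Factoring n = 101 * 103 reveals only the first two primes of N.
```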
Given a 2-edge-connected, real weighted graph G with n vertices and m edges, the 2-edge-connectivity augmentation problem is that of finding a minimum weight set of edges of G to be added to a spanning subgraph H of G to make it 2-edge-connected. While the general problem is NP-hard and 2-approximable, in this paper we prove that it becomes polynomial time solvable if H is a depth-first search tree of G. More precisely, we provide an efficient algorithm for solving this special case which runs in O(M · α(M,n)) time, where α is the classic inverse of Ackermann's function and M = m · α(m,n). This algorithm has two main consequences: first, it provides a faster 2-approximation algorithm for the general 2-edge-connectivity augmentation problem; second, it solves in O(m · α(m,n)) time the problem of restoring, by means of a minimum weight set of replacement edges, the 2-edge-connectivity of a 2-edge-connected communication network undergoing a link failure. | ['Anna Galluccio', 'Guido Proietti'] | Polynomial Time Algorithms for 2-Edge-Connectivity Augmentation Problems | 46,280 |
Estimating the strength of dependency between two variables is fundamental for exploratory analysis and many other applications in data mining. For example: non-linear dependencies between two continuous variables can be explored with the Maximal Information Coefficient (MIC); and categorical variables that are dependent to the target class are selected using Gini gain in random forests. Nonetheless, because dependency measures are estimated on finite samples, the interpretability of their quantification and the accuracy when ranking dependencies become challenging. Dependency estimates are not equal to 0 when variables are independent, cannot be compared if computed on different sample size, and they are inflated by chance on variables with more categories. In this paper, we propose a framework to adjust dependency measure estimates on finite samples. Our adjustments, which are simple and applicable to any dependency measure, are helpful in improving interpretability when quantifying dependency and in improving accuracy on the task of ranking dependencies. In particular, we demonstrate that our approach enhances the interpretability of MIC when used as a proxy for the amount of noise between variables, and to gain accuracy when ranking variables during the splitting procedure in random forests. | ['Simone Romano', 'Nguyen Xuan Vinh', 'James M. Bailey', 'Karin Verspoor'] | A Framework to Adjust Dependency Measure Estimates for Chance | 600,923 |
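The general recipe for adjusting a dependency estimate for chance can be sketched with a permutation baseline; the paper derives analytical adjustments, so the Monte Carlo version below (and the assumption that the measure's maximum is 1) is only an illustration of the idea.

```python
import numpy as np

def adjusted_dependency(measure, x, y, n_perm=200, seed=0):
    """Adjust a dependency estimate for chance via a permutation baseline.

    adjusted = (est - E0) / (max - E0), where E0 is the measure's mean
    under the null obtained by permuting y.  Assumes the measure's
    maximum is 1; analytical adjustments replace the permutation loop.
    """
    rng = np.random.default_rng(seed)
    est = measure(x, y)
    null = [measure(x, rng.permutation(y)) for _ in range(n_perm)]
    e0 = float(np.mean(null))
    return (est - e0) / (1.0 - e0)

corr2 = lambda a, b: np.corrcoef(a, b)[0, 1] ** 2   # squared correlation
x = np.linspace(0, 1, 50)
y = x + 0.1 * np.random.default_rng(1).normal(size=50)
print(adjusted_dependency(corr2, x, y))    # close to the raw estimate
print(adjusted_dependency(corr2, x, np.random.default_rng(2).normal(size=50)))
# near 0 on independent data, where the raw estimate is inflated by chance
```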
This paper presents Multipath-ChaMeLeon (M-CML) as an update of the existing ChaMeLeon (CML) routing protocol. CML is a hybrid and adaptive protocol designed for Mobile Ad-Hoc Networks (MANETs), supporting emergency communications. M-CML adopts the attributes of the proactive Optimized Link State Routing (OLSR) protocol and extends it so as to implement a multipath routing approach based on the Expected Transmission Count (ETX). The paper substantiates the efficiency of the protocol through a simulation scenario within a MANET using the NS-3 simulator. The acquired results indicate that the M-CML routing approach combined with an intelligent link metric such as the ETX reduces the effects of link instabilities and enhances the network performance in terms of resiliency and scalability. | ['Alexandros Ladas', 'Nikolaos Pavlatos', 'Nuwan Weerasinghe', 'Christos Politis'] | Multipath routing approach to enhance resiliency and scalability in ad-hoc networks | 844,421 |
Introductory engineering courses within large universities often have annual enrollments which can reach up to a thousand students. It is very challenging to achieve differentiated instruction in classrooms with class sizes and student diversity of such great magnitude. Professors can only assess whether students have mastered a concept by using multiple choice questions, while detailed homework assignments, such as planar truss diagrams, are rarely assigned because professors and teaching assistants would be too overburdened with grading to return assignments with valuable feedback in a timely manner. In this paper, we introduce Mechanix, a sketch-based deployed tutoring system for engineering students enrolled in statics courses. Our system not only allows students to enter planar truss and free body diagrams into the system just as they would with pencil and paper, but our system checks the student's work against a hand-drawn answer entered by the instructor, and then returns immediate and detailed feedback to the student. Students are allowed to correct any errors in their work and resubmit until the entire content is correct and thus all of the objectives are learned. Since Mechanix facilitates the grading and feedback processes, instructors are now able to assign free response questions, increasing teacher's knowledge of student comprehension. Furthermore, the iterative correction process allows students to learn during a test, rather than simply displaying memorized information. | ['Stephanie Valentine', 'Francisco Vides', 'George Lucchese', 'David Turner', 'Hong-hoe Kim', 'Wenzhe Li', 'Julie Linsey', 'Tracy Hammond'] | Mechanix: a sketch-based tutoring system for statics courses | 787,783 |
Service-based technology is becoming widespread, and many service oriented platforms are available with different characteristics and constraints, making them incompatible. Consequently, designing, developing and executing an application using services running on different platforms is a challenging task if not impossible. A first challenge is to provide an ideal and virtual SOC platform; ideal because it subsumes the actual real SOA platforms, virtual because it delegates the execution to these real platforms. But since “ideal” depends on the domain; the second challenge is to provide extensibility mechanisms allowing to define what ideal means in a given domain and to provide the engineering support for the design and development of service-based application dedicated to such domains. The paper describes how we addressed these challenges, and to what extent they have been met. | ['Jacky Estublier', 'Eric Simon'] | Universal and Extensible Service-Oriented Platform Feasibility and Experience: The Service Abstract Machine | 394,755 |
There are examples of robotic systems in which autonomous mobile robots self-assemble into larger connected entities. However, existing systems display little or no autonomous control over the shape of the connected entity thus formed. We describe a novel distributed mechanism that allows autonomous mobile robots to self-assemble into pre-specified patterns. Global patterns are 'grown' using locally applicable rules and local visual perception only. In this study, we focus on the low-level navigation and directional self-assembly part of the pattern formation process. We analyse the precision of this mechanism on real robots. | ['Anders Lyhne Christensen', "Rehan O'Grady", 'Marco Dorigo'] | A mechanism to self-assemble patterns with autonomous robots | 263,686 |
In recent years many tone mapping operators (TMOs) have been presented in order to display high dynamic range images (HDRI) on typical display devices. TMOs compress the luminance range while trying to maintain contrast. The inverse of tone mapping, inverse tone mapping, expands a low dynamic range image (LDRI) into an HDRI. HDRIs contain a broader range of physical values that can be perceived by the human visual system. We propose a new framework that approximates a solution to this problem. Our framework uses importance sampling of light sources to find the areas considered to be of high luminance and subsequently applies density estimation to generate an expand map in order to extend the range in the high luminance areas using an inverse tone mapping operator. The majority of today’s media is stored in the low dynamic range. Inverse tone mapping operators (iTMOs) could thus potentially revive all of this content for use in high dynamic range display and image based lighting (IBL). Moreover, we show another application that benefits quick capture of HDRIs for use in IBL. | ['Francesco Banterle', 'Patrick Ledda', 'Kurt Debattista', 'Alan Chalmers', 'Marina Bloj'] | A framework for inverse tone mapping | 92,129 |
Improvements in sanitary conditions and changes in lifestyle have raised people's health awareness. With the progress of medicine and the promotion of medical and health care, the concept of preventive healthcare has become widespread. People have begun to care about the quality of the environment where they stay. Environmental comfort indices include temperature, relative humidity, illumination, noise and carbon dioxide. Among these, we monitor and record the values of temperature, relative humidity and carbon dioxide. In this paper, we built an environment quality monitoring system, which can adjust the indoor air quality and monitor the concentration of harmful gases like formaldehyde, volatile organic compounds and carbon monoxide. If the environmental comfort value is not up to standard, the related equipment will be turned on. The system will alert if the content of harmful gases exceeds the standard. We hope that, based on these real-time data, the proposed system can help people make right and timely decisions, and act on time to maintain a beneficial environment in the monitored area. | ['Chao-Tung Yang', 'Jung-Chun Liu', 'Yun-Ting Wang', 'Chia-Cheng Wu', 'Fang-Yie Leu'] | Implementation of an environmental quality and harmful gases monitoring system | 910,902 |
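A minimal sketch of the monitoring logic described above; the gas limits and device names are hypothetical, since the paper's thresholds come from indoor air quality standards.

```python
# Hypothetical comfort/safety limits; real standards define the exact values.
LIMITS = {"co2_ppm": 1000, "formaldehyde_ppm": 0.08,
          "voc_ppm": 0.5, "co_ppm": 9}

def check_environment(readings, limits=LIMITS):
    """Return devices to actuate and alerts for out-of-range readings."""
    actions, alerts = [], []
    if readings.get("co2_ppm", 0) > limits["co2_ppm"]:
        actions.append("ventilation_on")       # adjust indoor air quality
    for gas in ("formaldehyde_ppm", "voc_ppm", "co_ppm"):
        if readings.get(gas, 0) > limits[gas]:
            alerts.append(f"{gas} exceeds limit")  # harmful gas alarm
    return actions, alerts

print(check_environment({"co2_ppm": 1350, "co_ppm": 12}))
# (['ventilation_on'], ['co_ppm exceeds limit'])
```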
Digital currencies represent a new method for exchange -- a payment method with no physical form, made real by the Internet. This new type of currency was created to ease online transactions and to provide greater convenience in making payments. However, a critical component of a monetary system is the people who use it. Acknowledging this, we present results of our interview study (N=20) with two groups of participants (users and non-users) about how they perceive the most popular digital currency, Bitcoin. Our results reveal: non-users mistakenly believe they are incapable of using Bitcoin, users are not well-versed in how the protocol functions, they have misconceptions about the privacy of transactions, and that Bitcoin satisfies properties of ideal payment systems as defined by our participants. Our results illustrate Bitcoin's tradeoffs, its uses, and barriers to entry. | ['Xianyi Gao', 'Gradeigh D. Clark', 'Janne Lindqvist'] | Of Two Minds, Multiple Addresses, and One Ledger: Characterizing Opinions, Knowledge, and Perceptions of Bitcoin Across Users and Non-Users | 794,988 |
Ordinary differential equations are arguably the most popular and useful mathematical tool for describing physical and biological processes in the real world. Often, these physical and biological processes are observed with errors, in which case the most natural way to model such data is via regression where the mean function is defined by an ordinary differential equation believed to provide an understanding of the underlying process. These regression based dynamical models are called differential equation models. Parameter inference from differential equation models poses computational challenges mainly due to the fact that analytic solutions to most differential equations are not available. In this paper, we propose an approximation method for obtaining the posterior distribution of parameters in differential equation models. The approximation is done in two steps. In the first step, the solution of a differential equation is approximated by the general one-step method, which is a class of numerical methods for ordinary differential equations including the Euler and the Runge-Kutta procedures; in the second step, nuisance parameters are marginalized using Laplace approximation. The proposed Laplace approximated posterior gives a computationally fast alternative to the full Bayesian computational scheme (such as Markov chain Monte Carlo) and produces more accurate and stable estimators than the popular smoothing methods (called collocation methods) based on frequentist procedures. For a theoretical support of the proposed method, we prove that the Laplace approximated posterior converges to the actual posterior under certain conditions and analyze the relation between the order of numerical error and its Laplace approximation. The proposed method is tested on simulated data sets and compared with the other existing methods. | ['Sarat C. Dass', 'Jaeyong Lee', 'Kyoungjae Lee', 'Jonghun Park'] | Laplace based approximate posterior inference for differential equation models | 691,051 |
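The first approximation step can be illustrated with the simplest one-step method (Euler) plugged into a Gaussian log-likelihood; the exponential-decay ODE, the flat prior, and the grid search below are illustrative choices, and the paper's Laplace approximation of nuisance parameters is omitted.

```python
import numpy as np

def euler_solve(f, y0, ts, theta):
    """General one-step (Euler) approximation of the ODE solution y'(t) = f."""
    ys = [float(y0)]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        ys.append(ys[-1] + (t1 - t0) * f(t0, ys[-1], theta))
    return np.array(ys)

def log_posterior(theta, ts, data, sigma=0.1):
    """Unnormalized log-posterior for a regression whose mean is an ODE.

    Flat prior assumed; the paper's Laplace marginalization of nuisance
    parameters is omitted in this sketch.
    """
    mean = euler_solve(lambda t, y, th: -th * y, 1.0, ts, theta)  # y' = -θy
    return -0.5 * np.sum((data - mean) ** 2) / sigma ** 2

ts = np.linspace(0, 2, 21)
data = np.exp(-1.5 * ts) + 0.05 * np.random.default_rng(0).normal(size=ts.size)
grid = np.linspace(0.5, 3.0, 26)
print(grid[np.argmax([log_posterior(th, ts, data) for th in grid])])
# near 1.5 (Euler bias may shift the maximizer slightly)
```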
While the T-S fuzzy model is widely used in fuzzy control systems, the modelling error, which will be referred to as a lumped disturbance, should be taken into account in order to improve the control performance. This paper presents a design of a disturbance observer as well as a state observer, and their existence conditions. Also, an example is provided to demonstrate the effectiveness of the proposed approaches. | ['Hugang Han', 'Xiaodong Zhang', 'Dongyin Wu'] | Observers for a class of T-S fuzzy models with disturbance | 937,689 |
This publication provides an overview and discusses some challenges of surface tension directed fluidic self-assembly of semiconductor chips which are transported in a liquid medium. The discussion is limited to surface tension directed self-assembly where the capture, alignment, and electrical connection process is driven by the surface free energy of molten solder bumps where the authors have made a contribution. The general context is to develop a massively parallel and scalable assembly process to overcome some of the limitations of current robotic pick and place and serial wire bonding concepts. The following parts will be discussed: (2) Single-step assembly of LED arrays containing a repetition of a single component type; (3) Multi-step assembly of more than one component type adding a sequence and geometrical shape confinement to the basic concept to build more complex structures; demonstrators contain (3.1) self-packaging surface mount devices, and (3.2) multi-chip assemblies with unique angular orientation. Subsequently, measures are discussed (4) to enable the assembly of microscopic chips (10 μm–1 mm); a different transport method is introduced; demonstrators include the assembly of photovoltaic modules containing microscopic silicon tiles. Finally, (5) the extension to enable large area assembly is presented; a first reel-to-reel assembly machine is realized; the machine is applied to the field of solid state lighting and the emerging field of stretchable electronics which requires the assembly and electrical connection of semiconductor devices over exceedingly large area substrates. | ['Shantonu Biswas', 'Mahsa Mozafari', 'Thomas Stauden', 'Heiko O. Jacobs'] | Surface Tension Directed Fluidic Self-Assembly of Semiconductor Chips across Length Scales and Material Boundaries | 694,912 |
| ['Zhen Yang', 'Chuanzeng Liang', 'Jian Wang', 'Jinmei Lai'] | A new automatic method for testing interconnect resources in FPGAs based on general routing matrix | 549,613 |
| ['Solomon Gebreyohannes', 'Tadilo Endeshaw Bogale', 'William W. Edmonson', 'Lakemariam Yohannes Worku'] | Systems Engineering Education for East Africa | 953,823 |
Agent states and transitions between states are important abstractions in agent-based social simulation (ABSS). Although it is common to develop ad hoc implementations of state-based and transition-based agent behaviors, "best practice" software engineering processes provide transparent and formally grounded design notations that translate directly into working implementations. Statecharts are a software engineering design methodology and an explicit visual and logical representation of the states of system components and the transitions between those states. Used in ABSS, they can clarify a model's logic and allow for efficient software engineering of complex state-based models. In addition to agent state and behavioral logic representation, visual statecharts can also be useful for monitoring agent status during a simulation, quickly conveying the underlying dynamics of complex models as a simulation evolves over time. Visual approaches include drag-and-drop editing capabilities for constructing state-based models of agent behaviors and conditions for agent state transitions. Repast Simphony is a widely used, open source, and freely accessible agent-based modeling toolkit. While it is possible for Repast Simphony users to create their own implementations of state-based agent behaviors and even create dynamic agent state visualizations, the effort involved in doing so is usually prohibitive. The new statecharts framework in Repast Simphony, a subset of Harel's statecharts, introduces software engineering practices through the use of statecharts that directly translate visual representations of agent states and behaviors into software implementations. By integrating an agent statecharts framework into Repast Simphony, we have made it easier for users at all levels to take advantage of this important modeling paradigm. Through the visual programming that statecharts afford, users can effectively create the software underlying agents and agent-based models. This paper describes the development and use of the free and open source Repast Simphony statecharts capability for developing ABSS models. | ['Jonathan Ozik', 'Nicholson T. Collier', 'Todd E. Combs', 'Charles M. Macal', 'Michael J. North'] | Repast Simphony Statecharts | 510,132 |
This paper introduces a new approach to acoustic-phonetic modelling, the hidden dynamic model (HDM), which explicitly accounts for the coarticulation and transitions between neighbouring phones. Inspired by the fact that speech is really produced by an underlying dynamic system, the HDM consists of a single vector target per phone in a hidden dynamic space in which speech trajectories are produced by a simple dynamic system. The hidden space is mapped to the surface acoustic representation via a non-linear mapping in the form of a multilayer perceptron (MLP). Algorithms are presented for training of all the parameters (target vectors and MLP weights) from segmented and labelled acoustic observations alone, with no special initialisation. The model captures the dynamic structure of speech, and appears to aid a speech recognition task based on the SwitchBoard corpus. | ['Hywel B. Richards', 'John S. Bridle'] | The HDM: a segmental hidden dynamic model of coarticulation | 8,541 |
This paper presents a novel framework for recognition of Ethiopic characters using structural and syntactic techniques. Graphically complex characters are represented by the spatial relationships of less complex primitives which form a unique set of patterns for each character. The spatial relationship is represented by a special tree structure which is also used to generate string patterns of primitives. Recognition is then achieved by matching the generated string pattern against each pattern in the alphabet knowledge-base built for this purpose. The recognition system tolerates variations on the parameters of characters like font type, size and style. Direction field tensor is used as a tool to extract structural features. | ['Yaregal Assabie', 'Josef Bigun'] | Multifont size-resilient recognition system for Ethiopic script | 408,833 |
Energy bandstructure in silicon nanowires with [100] crystal orientation is calculated using the Tight-Binding (TB) model with a supercell approach. Numerical methods are designed according to the physical model to reach optimal computational performance. Computation results show that the bandstructures of silicon nanowires deviate from that of bulk silicon. The efficiency and accuracy of the TB algorithm are analysed and compared to those of its counterpart, Density Functional Theory (DFT). Test examples show that TB delivers good accuracy while being far superior to DFT in terms of computational cost. | ['Ximeng Guan', 'Zhiping Yu'] | Fast algorithm for bandstructure calculation in silicon nanowires using supercell approach | 528,445
['Thomas Leimkühler', 'Petr Kellnhofer', 'Tobias Ritschel', 'Karol Myszkowski', 'Hans-Peter Seidel'] | Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion | 973,170 |
A controller for a powered transfemoral prosthesis is presented, which can coordinate power delivery at the knee with the motion of the crankshaft on a bicycle. The controller continuously estimates the lengths of a four-bar linkage model through the application of a recursive least squares algorithm. The link lengths are used to estimate the angle of the bicycle crankshaft. With this measure, the delivery of knee torque is coordinated with the motion of the user. The controller is implemented on a prosthesis prototype and assessed on a transfemoral amputee subject (N = 1). The subject exhibited a bilateral work asymmetry of 83.5% when cycling with his daily-use prosthesis in a free-swing mode. When 60-N·m peak assistance was provided by the powered prosthesis, the bilateral work asymmetry was reduced to 11.4%. The subject's metabolic energy rate was measured for speed- and power-matched cycling while the powered prosthesis provided zero assistance (i.e., turned off and providing its back-drive resistance) or 30-N·m peak assistance. The subject's metabolic energy rate decreased by 16.5% when receiving powered assistance relative to the zero-assistance condition. | ['Brian E. Lawson', 'Elissa D. Ledoux', 'Michael Goldfarb'] | A Robotic Lower Limb Prosthesis for Efficient Bicycling | 975,911
['Antonio Alexandre Moura Costa', 'Reudismam Rolim de Sousa', 'Felipe Barbosa Araújo Ramos', 'Gustavo Soares', 'Hyggo Oliveira de Almeida', 'Angelo Perkusich'] | A Collaborative Method to Reduce the Running Time and Accelerate the k-Nearest Neighbors Search. | 673,999 |
This paper presents a number of new views and techniques claimed to be very important for the problem of face recognition in video (FRiV). First, a clear differentiation is made between photographic facial data and video-acquired facial data as being two different modalities: one providing hard biometrics, the other providing softer biometrics. Second, faces which have a resolution of at least 12 pixels between the eyes are shown to be recognizable by computers just as they are by humans. As a way to deal with the low resolution and quality of each individual video frame, the paper proposes to use the neuro-associative principle employed by the human brain, according to which both memorization and recognition of data are done based on a flow of frames rather than on one frame: synaptic plasticity provides a way to memorize from a sequence, while collective decision making over time is very suitable for recognition of a sequence. As a benchmark for FRiV approaches, the paper introduces the IIT-NRC video-based database of faces, which consists of pairs of low-resolution video clips of unconstrained facial motions. The recognition rate of over 95% that we achieve on this database, together with the results obtained on real-time annotation of people on TV, allows us to believe that the proposed framework brings us closer to the ultimate benchmark for FRiV approaches, which is "if you are able to recognize a person, so should the computer". | ['Dmitry O. Gorodnichy'] | Video-based framework for face recognition in video | 123,618
In this paper, a recursive smoothing spline approach for contour reconstruction is studied and evaluated. Periodic smoothing splines are used by a robot to approximate the contour of encountered obstacles in the environment. The splines are generated through minimizing a cost function subject to constraints imposed by a linear control system and accuracy is improved iteratively using a recursive spline algorithm. The filtering effect of the smoothing splines allows for usage of noisy sensor data and the method is robust to odometry drift. The algorithm is extensively evaluated in simulations for various contours and in experiments using a SICK laser scanner mounted on a PowerBot from ActivMedia Robotics. | ['Maja Karasalo', 'Giacomo Piccolo', 'Danica Kragic', 'Xiaoming Hu'] | Contour reconstruction using recursive smoothing splines - Algorithms and experimental validation | 134,654 |
['Zbigniew R. Bogdanowicz'] | Antidirected Hamilton cycles in the Cartesian product of directed cycles. | 764,500 |
In this paper, we propose two beacon-less kNN query processing methods for reducing traffic and maintaining high accuracy of the query result in mobile ad hoc networks (MANETs). In these methods, the query-issuing node first forwards a kNN query using geo-routing to the nearest node from the point specified by the query (query point). Then, the nearest node from the query point forwards the query to other nodes close to the query point, and each node receiving the query replies with the information on itself. In this process, we adopt two different approaches: the Explosion (EXP) method and the Spiral (SPI) method. In the EXP method, the nearest node from the query point floods the query to nodes within a specific circular region, and each node receiving the query replies with information on itself. In the SPI method, the nearest node from the query point forwards the query to other nodes in a spiral manner, and the node that collects a satisfactory kNN result transmits the result to the query-issuing node. Experimental results show that our proposed methods reduce traffic and achieve high accuracy of the query result, in comparison with existing methods. | ['Yuka Komai', 'Yuya Sasaki', 'Takahiro Hara', 'Shojiro Nishio'] | kNN Query Processing Methods in Mobile Ad Hoc Networks | 534,897
In earlier work, we showed that the one-sided communication model found in PGAS languages (such as UPC) offers significant advantages in communication efficiency by decoupling data transfer from processor synchronization. We explore the use of the PGAS model on IBM BlueGene/P, an architecture that combines low-power, quad-core processors with extreme scalability. We demonstrate that the PGAS model, using a new port of the Berkeley UPC compiler and GASNet one-sided communication layer, outperforms two-sided (MPI) communication in both microbenchmarks and a case study of the communication-limited benchmark, NAS FT. We scale the benchmark up to 16,384 cores of the BlueGene/P and demonstrate that UPC consistently outperforms MPI by as much as 66% for some processor configurations and an average of 32%. In addition, the results demonstrate the scalability of the PGAS model and the Berkeley implementation of UPC, the viability of using it on machines with multicore nodes, and the effectiveness of the BG/P communication layer for supporting one-sided communication and PGAS languages. | ['Rajesh Nishtala', 'Paul Hargrove', 'Dan Bonachea', 'Katherine A. Yelick'] | Scaling communication-intensive applications on BlueGene/P using one-sided communication and overlap | 75,414 |
Large-scale data centers leverage virtualization technology to achieve excellent resource utilization, scalability, and high availability. Ideally, the performance of an application running inside a virtual machine (VM) shall be independent of co-located applications and VMs that share the physical machine. However, adverse interference effects exist and are especially severe for data-intensive applications in such virtualized environments. In this work, we present TRACON, a novel Task and Resource Allocation CONtrol framework that mitigates the interference effects from concurrent data-intensive applications and greatly improves the overall system performance. TRACON utilizes modeling and control techniques from statistical machine learning and consists of three major components: the interference prediction model that infers application performance from resource consumption observed from different VMs, the interference-aware scheduler that is designed to utilize the model for effective resource management, and the task and resource monitor that collects application characteristics at the runtime for model adaption. We implement and validate TRACON with a variety of cloud applications. The evaluation results show that TRACON can achieve up to 25 percent improvement on application throughput on virtualized servers. | ['Ron Chi-Lung Chiang', 'H. Howie Huang'] | TRACON: Interference-Aware Schedulingfor Data-Intensive Applicationsin Virtualized Environments | 214,047 |
At WSC '00, one of the authors (Crosbie) suggested that the development and publication of a model curriculum for MS programs in modeling and simulation would facilitate the development of such programs. This paper presents a first draft of a model curriculum developed by a small group at the McLeod Institute of Simulation Sciences at California State University, Chico. The aim of the draft is to stimulate further discussion in the M&S community with the goal of arriving at a generally acceptable outline that can serve as a guideline for new programs. | ['Roy E. Crosbie', 'John Zenor', 'Ralph C. Hilzer'] | More on a model curriculum for modeling and simulation | 290,283
The notion of m-stable sets was introduced in Peris and Subiza (2013) for abstract decision problems. Since it may lack internal stability and fail to discriminate alternatives in cyclic circumstances, we alter this notion, which leads to an alternative solution called w-stable set. Subsequently, we characterize w-stable set and compare it with other solutions in the literature. In addition, we propose a selection procedure to filter out more desirable w-stable sets. | ['Weibin Han', 'Adrian Van Deemen'] | On the solution of w-stable sets | 906,119 |
The object of study in this paper is the electromagnetic field scattered by a forest canopy. The purpose of this work is the experimental study of the space-time and frequency correlations of the field amplitude arising from wave interaction with the forest canopy. Over a fairly broad frequency range, from 0.1 to 1.0 GHz, field autocorrelation functions were obtained in both the frequency and time domains, as well as field cross-correlation functions, with the receivers separated along the wave propagation direction and across it, in both the horizontal and vertical directions. On the basis of the measured data, the major mechanisms of wave scattering by the forest canopy were analyzed in both the time and frequency domains. Three main partial waves, namely the direct, lateral, and ground-reflected waves, were identified from the registered interference pattern. The results obtained are of great importance for developing data processing algorithms for remote sensing of forest canopies. | ['V. L. Mironov', 'Eugene D. Telpukhovsky', 'Vladimir Yakubov', 'S.N. Novik', 'Andrew V. Klokov'] | Space-temporal and frequency-polarization variations in the electromagnetic wave interacting with the forest canopy | 466,376
Soft errors are changes in logic state of a circuit/system resulting from the latching of single-event transients (transient voltage fluctuations at a logic node or SETs) caused by high-energy particle strikes or electrical noise. Due to technology scaling and reduced supply voltages, they are expected to increase by several orders of magnitude in logic circuits. Soft-error rate (SER) reduction can be achieved by using both spatial and temporal redundancy techniques. In this paper, we present a slack redistribution technique, applicable to pipelined circuits, to enhance SER reduction obtainable from time-redundancy based techniques. | ['S. Krishnamohan', 'Nihar R. Mahapatra'] | Slack redistribution in pipelined circuits for enhanced soft-error rate reduction | 360,267 |
['Christopher Haine', 'Olivier Aumage', 'Enguerrand Petit', 'Denis Barthou'] | Exploring and Evaluating Array Layout Restructuring for SIMDization | 624,369 |
['Hans Leiß'] | Towards Kleene Algebra with Recursion | 312,528
Motivation: Secondary structures are key descriptors of a protein fold and its topology. In recent years, they facilitated intensive computational tasks for finding structural homologues, fold prediction and protein design. Their popularity stems from an appealing regularity in patterns of geometry and chemistry. However, the definition of secondary structures is of subjective nature. An unsupervised de-novo discovery of these structures would shed light on their nature, and improve the way we use these structures in algorithms of structural bioinformatics. Methods: We developed a new method for unsupervised partitioning of undirected graphs, based on patterns of small recurring network motifs. Our input was the network of all H-bonds and covalent interactions of protein backbones. This method can be also used for other biological and non-biological networks. Results: In a fully unsupervised manner, and without assuming any explicit prior knowledge, we were able to rediscover the existence of conventional α-helices, parallel β-sheets, anti-parallel sheets and loops, as well as various non-conventional hybrid structures. The relation between connectivity and crystallographic temperature factors establishes the existence of novel secondary structures. Contact: [email protected]; [email protected] | ['Barak Raveh', 'Ofer Rahat', 'Ronen Basri', 'Gideon Schreiber'] | Rediscovering secondary structures as network motifs---an unsupervised learning approach | 398,259
Previous studies of domain feature model validation have been concerned only with the rationality of the feature model and have ignored its correctness. To address this, a sample-system-based approach to correctness validation is developed in this paper. Concepts and terms are proposed to facilitate the process of establishing a map between the domain feature model and the object models of sample systems. To improve the reliability of the domain feature model, rules based on this map are presented; they help ensure the correctness of the domain feature model in the following aspects: the integrity and correctness of information representation, and the correctness of variability analysis. | ['Guanzhong Yang', 'Ting Deng'] | Sample-system-based domain feature model validation | 916,175
['Evgenii Kotelnikov', 'Laura Kovács', 'Martin Suda', 'Andrei Voronkov'] | A Clausal Normal Form Translation for FOOL | 955,169
Accurate estimation of delay is a major challenge in the current nanometer regime using the Non-Linear Delay Model (NLDM), due to issues such as parametric variation and nonlinear capacitance values. It demands that a large number of simulations be performed to obtain accurate delay values. To partly address this issue, designers have started using the Effective Current Source Model (ECSM), which stores certain predefined Threshold Crossing Points (TCPs) of the output voltage waveform with respect to different input transition time (T_R) values and load capacitances (C_l). In this work, we propose an analytical timing model relating the 10%-90% TCPs to the C_l and T_R values. We also derive the relationship between the cell size and the model coefficients, as well as the region of validity of the model in (T_R, C_l) space and its relationship with cell size. The proposed model is in good agreement with HSPICE simulations, with a maximum relative error of 2.5%, and we verified it under technology scaling. We use this model and the derived relationships to reduce the number of simulations in ECSM library characterization. | ['Baljit Kaur', 'Sandeep Miryala', 'S. K. Manhas', 'Bulusu Anand'] | An efficient method for ECSM characterization of CMOS inverter in nanometer range technologies | 380,210
Identification of distinct clusters of documents in text collections has traditionally been addressed by making the assumption that the data instances can only be represented by homogeneous and uniform features. Many real-world data, on the other hand, comprise multiple types of heterogeneous interrelated components, such as web pages and hyperlinks, or online scientific publications, their authors, and publication venues, to name a few. In this paper, we present K-SVMeans, a clustering algorithm for multi-type interrelated datasets that integrates the well-known K-Means clustering with the highly popular Support Vector Machines. The experimental results on authorship analysis of two real-world web-based datasets show that K-SVMeans can successfully discover topical clusters of documents and achieve better clustering solutions than homogeneous data clustering. | ['Levent Bolelli', 'Seyda Ertekin', 'Ding Zhou', 'C. Lee Giles'] | K-SVMeans: A Hybrid Clustering Algorithm for Multi-Type Interrelated Datasets | 350,662
The software engineering discipline has been quite successful in creating various development models for constructing software systems. It has not, however, been as successful in creating models for later lifecycle processes, one of which is retirement. In this paper, we elicit a retirement process model and compare it to the current standard retirement process models. Our goal is to evaluate current retirement process standards and provide feedback for their extension. The elicitation was carried out within one Nordic financial company. | ['Mira Kajko-Mattsson', 'Ralf Fredriksson', 'Anna Hauzenberger'] | Eliciting a Retirement Process Model: Case Study 1 | 500,731
A capacitively coupled interdigital-gated HEMT structure was used to investigate the uniformity of the electric field distribution along the structure. The structure was designed and simulated using the commercial electromagnetic Sonnet Suites software, and its return loss characteristics were analyzed and evaluated. The simulated admittance characteristics of the dc-connected and capacitively coupled structures are compared in order to evaluate electromagnetic wave propagation. The structure maintains a uniform electric field in the channel when a dc bias is applied to the interdigital gate, which modulates the potential in the channel. | ['Z.F. Mohd Ahir', 'A. Zulkifli', 'A.M. Hashim'] | Modeling and Characterization of Capacitively Coupled Interdigital-Gated HEMT Plasma Device for Terahertz Wave Amplification | 119,371
The optimization problem we study here consists in finding an optimal cable routing to connect a given number of offshore turbines to one (or more) offshore substation(s). Different constraints have to be respected, such as cable capacity, cable prices, crossing restrictions, limits on connections to the substation(s), and the possible presence of obstacles in the site. To solve this large-scale optimization problem we use a matheuristic approach, that is, a hybridization of mathematical programming techniques and heuristics. First, a Mixed-Integer Linear Programming (MILP) model is defined. The MILP model is able to solve smaller instances to optimality, but for large wind parks it fails to find even a feasible solution. Therefore we investigate various matheuristics to handle this situation: the heuristics are used to decrease the number of variables in the optimization model by fixing some of them at each iteration. We propose and compare three different fixing strategies: “random fixing”, “distance based fixing” and “sector fixing”. Each of the three matheuristics has been tuned to find a proper trade-off between neighborhood size and solution time. Finally, we compare the solutions from the matheuristic framework with solutions from the initial MILP model on a number of real-world instances, demonstrating the effectiveness of our approach when optimizing the inter-array cable routing of big wind parks. | ['Martina Fischetti', 'David Pisinger'] | Inter-array cable routing optimization for big wind parks with obstacles | 978,664
The nearest neighbor (NN) classifier suffers from high time complexity when classifying a test instance since the need of searching the whole training set. Prototype generation is a widely used approach to reduce the classification time, which generates a small set of prototypes to classify a test instance instead of using the whole training set. In this paper, particle swarm optimization is applied to prototype generation and two novel methods for improving the classification performance are presented: 1) a fitness function named error rank and 2) the multiobjective (MO) optimization strategy. Error rank is proposed to enhance the generation ability of the NN classifier, which takes the ranks of misclassified instances into consideration when designing the fitness function. The MO optimization strategy pursues the performance on multiple subsets of data simultaneously, in order to keep the classifier from overfitting the training set. Experimental results over 31 UCI data sets and 59 additional data sets show that the proposed algorithm outperforms nearly 30 existing prototype generation algorithms. | ['Weiwei Hu', 'Ying Tan'] | Prototype Generation Using Multiobjective Particle Swarm Optimization for Nearest Neighbor Classification | 809,751 |
Probabilistic matrix factorization (PMF) is a powerful method for modeling data associated with pairwise relationships, finding use in collaborative filtering, computational biology, and document analysis, among other areas. In many domains, there are additional covariates that can assist in prediction. For example, when modeling movie ratings, we might know when the rating occurred, where the user lives, or what actors appear in the movie. It is difficult, however, to incorporate this side information into the PMF model. We propose a framework for incorporating side information by coupling together multiple PMF problems via Gaussian process priors. We replace scalar latent features with functions that vary over the covariate space. The GP priors on these functions require them to vary smoothly and share information. We apply this new method to predict the scores of professional basketball games, where side information about the venue and date of the game is relevant for the outcome. | ['Ryan P. Adams', 'George E. Dahl', 'Iain Murray'] | Incorporating Side Information in Probabilistic Matrix Factorization with Gaussian Processes | 34,916
This paper is based on an extensive study performed on a large software suite over more than a decade. From the experience gained in this study we created a first draft of a method combining an extended robustness analysis (RA) method with the future-oriented method of technology forecasting (TF). In this method, TF provides information about the system's future evolution to the RA, which then generates the software design. The RA and TF methods form a feedback loop, which results in a more reusable and robust software design. The purpose of the RATF method is to predict the evolutionary path of the software system, thus preparing, for example, for functionality that will be needed in future generations, i.e. utilizing the power of prediction to implement the basis of tomorrow's functions today. | ['Göran Calås', 'Andreas Boklund', 'Stefan Mankefors-Christiernin'] | A First Draft of RATF: A Method Combining Robustness Analysis and Technology Forecasting | 436,789
A nonlinear predictive approach has been employed in MPEG (Moving Picture Experts Group) video transmission in order to improve the rate control performance of the video encoder. A nonlinear prediction and quantisation technique has been applied to the video rate control which employs a transmission buffer for constant bit rate video transmission. A radial basis function (RBF) network has been adopted as a video rate estimator to predict the rate value of a picture in advance of encoding. The quantiser control surfaces based on nonlinear equations, which map both estimated and current buffer occupancies to a suitable quantisation step size, have also been used to achieve quicker responses to dramatic video rate variation. This scheme aims to adequately accommodate non-stationary video in the limited capacity of the buffer. Performance has been evaluated in comparison to the MPEG2 Test Model 5 (TM5) in terms of the buffer occupancy and picture quality. | ['Yoo-Sok Saw', 'Peter Grant', 'John Hannah', 'Bernie Mulgrew'] | Nonlinear predictive rate control for constant bit rate MPEG video coders | 281,409 |
['Markus Zanker', 'Francesco Ricci', 'Dietmar Jannach', 'Loren G. Terveen'] | Editorial: Measuring the impact of personalization and recommendation on user behaviour | 570,264 |
['Jing Lv', 'Dongdai Lin'] | L-P States of RC4 Stream Cipher. | 799,930 |
We propose a simulated annealing based zero-skew clock net construction algorithm that works in any routing space, from Manhattan to Euclidean, with the added flexibility of optimizing either the wire length or the propagation delay. We first devise an O(log n) tree grafting perturbation function to construct a zero-skew clock tree under the Elmore delay model. This tree grafting scheme is able to explore the entire solution space asymptotically. A Gauss-Seidel iteration procedure is then applied to optimize the Steiner point positions. Experimental results have shown that our algorithm can achieve substantial delay reduction and encouraging wire length minimization compared to previous works. | ['Nan-Chi Chou', 'Chung-Kuan Cheng'] | On general zero-skew clock net construction | 176,784
UMTS is evolving toward a future wireless all-IP network. In this paper we present how it supports realtime IP multimedia services, as these services are expected to drive the adoption of wireless all-IP networks. The scheme of real time streaming video is one of the newcomers in wireless data communication and in particular in UMTS, raising a number of novel requirements in both telecommunication and data communication systems. This scheme applies when the mobile user is experiencing real time video content. In this work we focus on the design and implementation of a rate and loss control mechanism for monitoring the UMTS network state and estimating the appropriate rate of the streaming video data. | ['Antonios G. Alexiou', 'Christos Bouras'] | Rate and loss control for video transmission over UMTS using real-time protocols | 180,065 |
['Kosuke Nakajima', 'Yuichi Itoh', 'Yusuke Hayashi', 'Kazuaki Ikeda', 'Kazuyuki Fujita', 'Takao Onoye'] | Emoballoon - A Balloon-Shaped Interface Recognizing Social Touch Interactions. | 802,613 |
We present a perceptually-tuned multiscale image segmentation algorithm that is based on spatially adaptive color and texture features. The proposed algorithm extends a previously proposed approach to include multiple texture scales. The determination of the multiscale texture features is based on perceptual considerations. We also examine the perceptual tuning of the algorithm and how it is affected by the presence of different texture scales. The multiscale extension is necessary for segmenting higher resolution images and is particularly effective in segmenting objects shown in different perspectives. The performance of the proposed algorithm is demonstrated in the domain of photographic images. | ['Junqing Chen', 'Thrasyvoulos N. Pappas', 'Aleksandra Mojsilovic', 'Bernice E. Rogowitz'] | Perceptually-tuned multiscale color-texture segmentation | 544,004 |
The interpretation of temporal expressions in text is an important constituent task for many practical natural language processing tasks, including question-answering, information extraction and text summarisation. Although temporal expressions have long been studied in the research literature, it is only more recently, with the impetus provided by exercises like the ACE Program, that attention has been directed to broad-coverage, implemented systems. In this paper, we describe our approach to intermediate semantic representations in the interpretation of temporal expressions. | ['Pawel P. Mazur', 'Robert Dale'] | An Intermediate Representation for the Interpretation of Temporal Expressions | 372,912 |
['Paul Scerri'] | Team Oriented Plans and Robot Swarms. | 803,861 |
We present the results of a pilot study that investigates if and how people judge the trustworthiness of a robot during social Human-Robot Interaction (sHRI). Current research in sHRI has observed that people tend to interact with robots socially. However, results from neuroscience suggest people use different cognitive mechanisms when interacting with robots than they do with humans, leading to a debate about whether people truly perceive robots as social entities. Our paper focuses on one aspect of this debate, by examining trustworthiness between people and robots using behavioral economics' ‘Trust Game’ scenario. Our pilot study replicates a trust game scenario, where a person invests money with a robot trustee in hopes they will receive a larger sum (trusting the robot to give more back), then gets a chance to invest once more. Our qualitative analysis of investing behavior and interviews with participants suggest that people may follow a human-robot (h-r) trust model that is quite similar to the human-human trust model. Our results also suggest a possible resolution to the sHRI and neuroscience debate: people try to interact socially with robots, but due to a lack of common social cues, they draw from social experience, or create new experiences by actively exploring the robot behavior. | ['Roberta Cabral Mota', 'Daniel J. Rea', 'Anna Le Tran', 'James Everett Young', 'Ehud Sharlin', 'Mario Costa Sousa'] | Playing the ‘trust game’ with robots: Social strategies and experiences | 935,255
In many applications of wireless sensor networks, location is very important information. Conventional location information comes from manual setting or GPS devices. However, manual location setting requires a huge cost in human time, while GPS requires expensive devices of large size and cannot operate in indoor environments. Neither approach is applicable to the localization task of wireless sensor networks. In this paper, an accurate and efficient localization algorithm based on multidimensional scaling (MDS) is proposed for a hierarchical network environment. Through localized computation of multidimensional scaling within each cluster, the computation overhead is distributed across clusters. MDS-based localization requires the estimation of multihop distances; by restricting this estimation to within a cluster, the proposed algorithm achieves better accuracy and can operate in non-convex network environments. Experimental results reveal that the proposed HMDS localization algorithm outperforms the MDS-MAP algorithm in terms of accuracy. | ['Gwo-Jong Yu', 'Shao-Chun Wang'] | A Hierarchical MDS-based Localization Algorithm for Wireless Sensor Networks | 171,042
Determining the number of sources is a practical issue that has to be addressed in applications of underdetermined blind source separation (UBSS). This paper proposes a noise-robust UBSS algorithm for highly overlapped speech sources in the short-time Fourier transform (STFT) domain. The proposed algorithm first estimates the unknown number of sources in the time-frequency domain; the original sources are then recovered by a separation method that exploits both sparseness and temporal structure. To mitigate the effect of noise on the detection of auto-source TF points, we propose a method that effectively detects the auto-term locations of the sources using principal component analysis (PCA) of the STFTs of the noisy mixtures. | ['Zhe Wang', 'Guoan Bi'] | A time-frequency preprocessing method for blind source separation of speech signal with temporal structure | 727,660
['Aravind Sesagiri Raamkumar', 'Schubert Foo', 'Natalie Pang'] | Survey on inadequate and omitted citations in manuscripts: a precursory study in identification of tasks for a literature review and manuscript writing assistive system | 968,640 |
['Ivan Madjarov'] | Responsive Course Design - An Adaptive Approach to Designing Responsive m-Learning | 819,578 |
In this paper, we introduce two indoor Wireless Local Area Network (WLAN) positioning methods using augmented sparse recovery algorithms. These schemes render a sparse user's position vector, and in parallel, minimize the distance between the online measurement and radio map. The overall localization scheme for both methods consists of three steps: 1) coarse localization, obtained from comparing the online measurements with clustered radio map. A novel graph-based method is proposed to cluster the offline fingerprints. In the online phase, a Region Of Interest (ROI) is selected within which we search for the user's location; 2) Access Point (AP) selection; and 3) fine localization through the novel sparse recovery algorithms. Since the online measurements are subject to inordinate measurement readings, called outliers, the sparse recovery methods are modified in order to jointly estimate the outliers and user's position vector. The outlier detection procedure identifies the APs whose readings are either not available or erroneous. The proposed localization methods have been tested with Received Signal Strength (RSS) measurements in a typical office environment and the results show that they can localize the user with significantly high accuracy and resolution which is superior to the results from competing WLAN fingerprinting localization methods. | ['Ali Khalajmehrabadi', 'Nikolaos Gatsis', 'Daniel Pack', 'David Akopian'] | A Joint Indoor WLAN Localization and Outlier Detection Scheme Using LASSO and Elastic-Net Optimization Techniques | 913,529 |